WorldWideScience

Sample records for camera based positron

  1. 21 CFR 892.1110 - Positron camera.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Positron camera. 892.1110 Section 892.1110 Food... DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A positron camera is a device intended to image the distribution of positron-emitting radionuclides in the body...

  2. Gamma camera based Positron Emission Tomography: a study of the viability of quantification; Tomografia por emissão de pósitrons com sistemas PET/SPECT: um estudo da viabilidade de quantificação

    Energy Technology Data Exchange (ETDEWEB)

    Pozzo, Lorena

    2005-07-01

    Positron Emission Tomography (PET) is a Nuclear Medicine imaging modality for diagnostic purposes. Pharmaceuticals labeled with positron emitters are used, and images that represent in vivo biochemical processes within tissues can be obtained. The positron/electron annihilation photons are detected in coincidence, and this information is used for object reconstruction. Presently, two types of systems are available for this imaging modality: dedicated systems and those based on gamma camera technology. In this work, we utilized PET/SPECT systems, which also allow for traditional Nuclear Medicine studies based on single-photon emitters. There are inherent difficulties which affect quantification of activity and other indices. They are related to the Poisson nature of radioactivity, to radiation interactions with the patient body and detector, to noise due to the statistical nature of these interactions and of all the detection processes, and to the patient acquisition protocols. Corrections are described in the literature, and not all of them are implemented by the manufacturers: scatter, attenuation, randoms, decay, dead time, spatial resolution, and others related to the properties of each piece of equipment. The goal of this work was to assess the methods adopted by two manufacturers, as well as the influence of some technical characteristics of PET/SPECT systems on the estimation of the standardized uptake value (SUV). Data from a set of phantoms were collected in 3D mode by one camera and in 2D mode by the other. We concluded that quantification is viable in PET/SPECT systems, including the estimation of SUVs. This is only possible if, apart from the above-mentioned corrections, the camera is well tuned and coefficients for sensitivity normalization and partial volume corrections are applied. We also verified that the shapes of the sources used for obtaining these factors play a role in the final results and should be dealt with carefully in clinical quantification. 
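    The SUV discussed in this record is a simple ratio of measured activity concentration to injected dose per body weight. As an illustrative sketch only (the function name and example values are ours, not the thesis's), a decay-corrected SUV can be computed as:

```python
import math

def suv(roi_kbq_per_ml, injected_mbq, weight_kg, minutes_elapsed, half_life_min=109.8):
    """Standardized uptake value: ROI activity concentration divided by the
    decay-corrected injected dose per body weight (1 g/mL assumed for tissue).
    The F-18 half-life of ~109.8 min is the only physical constant used."""
    dose_kbq = injected_mbq * 1000.0 * math.exp(-math.log(2) * minutes_elapsed / half_life_min)
    return roi_kbq_per_ml / (dose_kbq / (weight_kg * 1000.0))
```

A uniform phantom whose concentration equals the injected dose per phantom volume should read SUV close to 1, which is one of the consistency checks such quantification studies rely on.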
Finally, the choice of the region

  3. Initial performance studies of a wearable brain positron emission tomography camera based on autonomous thin-film digital Geiger avalanche photodiode arrays.

    Science.gov (United States)

    Schmidtlein, Charles R; Turner, James N; Thompson, Michael O; Mandal, Krishna C; Häggström, Ida; Zhang, Jiahan; Humm, John L; Feiglin, David H; Krol, Andrzej

    2017-01-01

    Using analytical and Monte Carlo modeling, we explored the performance of a lightweight wearable helmet-shaped brain positron emission tomography (PET), or BET, camera based on thin-film digital Geiger avalanche photodiode arrays with lutetium-yttrium oxyorthosilicate (LYSO) or [Formula: see text] scintillators for imaging in vivo human brain function of freely moving and acting subjects. We investigated spherical-cap BET and cylindrical brain PET (CYL) geometries with 250-mm diameter. We also considered a clinical whole-body (WB) LYSO PET/CT scanner. The simulated energy resolutions were 10.8% (LYSO) and 3.3% ([Formula: see text]), and the coincidence window was set at 2 ns. The brain was simulated as a water sphere of uniform F-18 activity with a radius of 100 mm. We found that BET achieved [Formula: see text] better noise equivalent count (NEC) performance relative to the CYL and [Formula: see text] relative to the WB. For 10-mm-thick [Formula: see text] equivalent-mass systems, LYSO (7 mm thick) had [Formula: see text] higher NEC than [Formula: see text]. We found that [Formula: see text] scintillator crystals achieved [Formula: see text] full-width-half-maximum spatial resolution without parallax errors. Additionally, our simulations showed that LYSO generally outperformed [Formula: see text] for NEC unless the timing resolution for [Formula: see text] was considerably smaller than that presently used for LYSO, i.e., well below 300 ps.
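    The noise equivalent count rate used to compare these geometries is a standard PET figure of merit; a minimal sketch (the generic formula, not the authors' code) is:

```python
def nec(trues, scatters, randoms, k=2.0):
    """Noise equivalent count rate, NEC = T^2 / (T + S + k*R).
    k = 2 corresponds to delayed-window randoms subtraction,
    k = 1 to a noiseless randoms estimate."""
    return trues ** 2 / (trues + scatters + k * randoms)
```

NEC equals the true-count rate of an ideal system that would give the same image signal-to-noise ratio, which is why it is the natural yardstick for comparing the BET, CYL and WB geometries.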

  4. Performance characteristics of the novel PETRRA positron camera

    CERN Document Server

    Ott, R J; Erlandsson, K; Reader, A; Duxbury, D; Bateman, J; Stephenson, R; Spill, E

    2002-01-01

    The PETRRA positron camera consists of two 60 cm × 40 cm annihilation photon detectors mounted on a rotating gantry. Each detector contains large BaF₂ scintillators interfaced to large-area multiwire proportional chambers filled with a photo-sensitive vapour (tetrakis(dimethylamino)ethylene). The spatial resolution of the camera has been measured as 6.5 ± 1.0 mm FWHM throughout the sensitive field-of-view (FoV), the timing resolution is between 7 and 10 ns FWHM, and the detection efficiency for annihilation photons is approximately 30% per detector. The count-rates obtained, from a 20 cm diameter by 11 cm long water-filled phantom containing 90 MBq of ¹⁸F, were approximately 1.25×10⁶ singles and approximately 1.1×10⁵ cps raw coincidences, limited only by the read-out system dead-time of approximately 4 μs. The count-rate performance, sensitivity and large FoV make the camera ideal for whole-body imaging in oncology.

  5. Imaging performance of a multiwire proportional-chamber positron camera

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Mendez, V.; Del Guerra, A.; Nelson, W.R.; Tam, K.C.

    1982-08-01

    A new, fully three-dimensional positron camera design is presented, made of six multiwire proportional chamber modules arranged to form the lateral surface of a hexagonal prism. A true coincidence rate of 56,000 c/s is expected, with an equal accidental rate, for a 400 μCi activity uniformly distributed in an approximately 3 L water phantom. A detailed Monte Carlo program has been used to investigate the dependence of the spatial resolution on the geometrical and physical parameters. A spatial resolution of 4.8 mm FWHM has been obtained for a ¹⁸F point-like source in a 10 cm radius water phantom. The main properties of the limited-angle reconstruction algorithms are described in relation to the proposed detector geometry.

  6. The review of myocardial positron emission computed tomography and positron imaging by gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Ohtake, Tohru [Tokyo Univ. (Japan). Faculty of Medicine]

    1998-04-01

    To measure myocardial blood flow, nitrogen-13 ammonia, oxygen-15 water, rubidium-82 and other tracers are used; each has its merits and drawbacks. By measuring myocardial coronary flow reserve, a decrease of flow reserve during dipyridamole was observed in patients with hypercholesterolemia or diabetes mellitus without significant coronary stenosis, demonstrating the possibility of early detection of atherosclerosis. As to myocardial metabolism, glucose metabolism is measured with fluorine-18 fluorodeoxyglucose (FDG), which is considered useful for the evaluation of myocardial viability. We are using FDG to evaluate insulin resistance during insulin clamp in patients with diabetes mellitus by measuring the glucose utilization rate of myocardium and skeletal muscle. Free fatty acid (FFA) metabolism has been measured with ¹¹C-palmitate, but absolute quantification had not been performed; recently a method for absolute quantification was reported, as was the new radiopharmaceutical ¹⁸F-FTHA. Oxygen metabolism has been estimated with ¹¹C-acetate, and myocardial viability and cardiac efficiency have been evaluated from oxygen metabolism. As to receptors and sympathetic nerve endings, cardiac insufficiency and cardiac transplantation have been evaluated. Imaging of positron-emitting radiopharmaceuticals with a gamma camera has also been performed; the collimator method is clinically useful for cardiac viability imaging. (author). 54 refs.

  7. Positron source based on neutron capture

    Science.gov (United States)

    Hugenschmidt, C.; Kögel, G.; Repper, R.; Schreckenbach, K.; Sperr, P.; Triftshäuser, W.

    2003-10-01

    A positron beam based on absorption of high-energy prompt γ-rays from thermal neutron capture in ¹¹³Cd was installed at a neutron guide of the High Flux Reactor at the ILL in Grenoble. The positron source consists of platinum foils acting as γ→e⁺e⁻ converter and positron moderator. After acceleration to 3 keV by electrical lenses, the positrons were magnetically guided through the beamline. Measurements were performed for various source geometries. The beam diameter and moderation characteristics such as positron work function, moderation efficiency and degradation were determined as well. The results lead to an optimised design of the in-pile positron source which will be implemented at the Munich research reactor FRM-II.

  8. A Positron CT Camera System Using Multiwire Proportional Chambers as Detectors

    Science.gov (United States)

    1989-05-18

    This article reports on a positron computerized tomography camera system using multiwire proportional chambers (MWPC) as detectors. The system is composed of two high-density MWPC gamma-ray detectors, an electronic readout system and a computer for data processing. MWPC-based positron emission computed tomography is also used directly in solid-state physics, to measure Fermi surfaces as well as to determine the...

  9. The development of high-efficiency cathode converters for a multiwire proportional chamber positron camera.

    Science.gov (United States)

    Marsden, P K; Bateman, J E; Ott, R J; Leach, M O

    1986-01-01

    A high-efficiency cathode converter for 511-keV photons has been developed for incorporation into a multiwire proportional chamber (MWPC) positron camera. The converter consists of a honeycomb pattern produced in a 1-mm-thick lead sheet to leave lead walls with a thickness of approximately 60 μm. The converter also serves as the cathode of an MWPC, the gap between the converter and the anode wire plane being 2.5 mm. This small gap results in a high secondary electron extraction efficiency without the need for additional drift voltages. Measurements of the efficiencies of a plane converter and of two types of structured converters in a single-section MWPC are described and the efficiency is found to increase in proportion to the converter surface area. This result justifies the use of a simple theoretical model whereby an extrapolation to the efficiency of a detector consisting of a stack of 20 MWPC sections, each section having two converters, is made. The efficiency of this proposed system is calculated to be 17% for 511-keV photons.
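    The extrapolation from a single section to a 20-section stack can be sketched with an independent-interaction model (our simplification for illustration; the paper's model is anchored to measured single-section efficiencies):

```python
def stack_efficiency(p_single, n_converters):
    """Probability that a 511 keV photon converts somewhere in a stack of
    n independent converters, each with interaction probability p_single.
    Ignores attenuation differences between front and back sections."""
    return 1.0 - (1.0 - p_single) ** n_converters

# 20 MWPC sections with two converters each -> 40 converters; under this
# model, a per-converter efficiency near 0.47% reproduces the quoted ~17%.
```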

  10. First platinum moderated positron beam based on neutron capture

    Science.gov (United States)

    Hugenschmidt, C.; Kögel, G.; Repper, R.; Schreckenbach, K.; Sperr, P.; Triftshäuser, W.

    2002-12-01

    A positron beam based on absorption of high-energy prompt γ-rays from thermal neutron capture in ¹¹³Cd was installed at a neutron guide of the high-flux reactor at the ILL in Grenoble. Measurements were performed for various source geometries, dependent on converter mass, moderator surface and extraction voltages. The results lead to an optimised design of the in-pile positron source which will be implemented at the Munich research reactor FRM-II. The positron source consists of platinum foils acting as γ→e⁺e⁻ converter and positron moderator. Due to the negative positron work function, moderation in heated platinum leads to emission of monoenergetic positrons. The positron work function of polycrystalline platinum was determined to be 1.95(5) eV. After acceleration to several keV by four electrical lenses, the beam was magnetically guided in a solenoid field of 7.5 mT leading to a NaI detector in order to detect the 511 keV γ-radiation of the annihilating positrons. The positron beam, with a diameter of less than 20 mm, yielded an intensity of 3.1×10⁴ moderated positrons per second. The total moderation efficiency of the positron source was about ε = 1.06(16)×10⁻⁴. Within the first 20 h of operation a degradation of the moderation efficiency of 30% was observed. An annealing procedure at 873 K in air recovers the platinum moderator.
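    The quoted beam intensity and moderation efficiency together imply the fast-positron production rate in the converter, assuming the efficiency is defined as moderated output over fast positrons produced (our reading of the record, shown here as a worked check):

```python
beam_rate = 3.1e4        # moderated positrons per second (quoted in the record)
efficiency = 1.06e-4     # total moderation efficiency (quoted in the record)
primary_rate = beam_rate / efficiency  # implied fast-positron rate, ~2.9e8 per second
```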

  11. Movement-based Interaction in Camera Spaces

    DEFF Research Database (Denmark)

    Eriksson, Eva; Riisgaard Hansen, Thomas; Lykke-Olesen, Andreas

    2006-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking, and we show how these three concepts can be used to analyse other camera tracking applications.

  12. Undulator-based production of polarized positrons

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, G. [Tel-Aviv Univ. (Israel); Barley, J. [Cornell Univ., Ithaca, NY (United States); Batygin, Y. [SLAC, Menlo Park, CA (US)] (and others)

    2009-05-15

    Full exploitation of the physics potential of a future International Linear Collider will require the use of polarized electron and positron beams. Experiment E166 at the Stanford Linear Accelerator Center (SLAC) has demonstrated a scheme in which an electron beam passes through a helical undulator to generate photons (whose first-harmonic spectrum extended to 7.9 MeV) with circular polarization, which are then converted in a thin target to generate longitudinally polarized positrons and electrons. The experiment was carried out with a one-meter-long, 400-period, pulsed helical undulator in the Final Focus Test Beam (FFTB) operated at 46.6 GeV. Measurements of the positron polarization have been performed at five positron energies from 4.5 to 7.5 MeV. In addition, the electron polarization has been determined at 6.7 MeV, and the effect of operating the undulator with a ferrofluid was also investigated. To compare the measurements with expectations, detailed simulations were made with an upgraded version of GEANT4 that includes the dominant polarization-dependent interactions of electrons, positrons, and photons with matter. The measurements agree with calculations, corresponding to 80% polarization for positrons near 6 MeV and 90% for electrons near 7 MeV. (orig.)

  13. Dynamic positron computed tomography of the heart with a high sensitivity positron camera and nitrogen-13 ammonia

    Energy Technology Data Exchange (ETDEWEB)

    Tamaki, N.; Senda, M.; Yonekura, Y.; Saji, H.; Kodama, S.; Konishi, Y.; Ban, T.; Kambara, H.; Kawai, C.; Torizuka, K.

    1985-06-01

    Dynamic positron computed tomography (PCT) of the heart was performed with a high-sensitivity, whole-body multislice PCT device and [¹³N]ammonia. A serial 15-sec dynamic study immediately after i.v. [¹³N]ammonia injection showed the blood pool of the ventricular cavities in the first scan and myocardial images from the third scan in normal cases. In patients with myocardial infarction and mitral valve disease, tracer washout from the lung and the myocardial peak time tended to be longer, suggesting the presence of pulmonary congestion. PCT delineated tracer retention in the dorsal part of the lung. A serial 5-min late dynamic study in nine cases showed a gradual increase in myocardial activity for 30 min in all normal segments and 42% of infarct segments, while less than a 13% activity increase was observed in 50% of infarct segments. Thus, serial dynamic PCT with [¹³N]ammonia, assessing tracer kinetics in the heart and lung, is a valuable adjunct to static myocardial perfusion imaging for the evaluation of various cardiac disorders.

  14. First platinum moderated positron beam based on neutron capture

    CERN Document Server

    Hugenschmidt, C; Repper, R; Schreckenbach, K; Sperr, P; Triftshaeuser, W

    2002-01-01

    A positron beam based on absorption of high-energy prompt gamma-rays from thermal neutron capture in ¹¹³Cd was installed at a neutron guide of the high-flux reactor at the ILL in Grenoble. Measurements were performed for various source geometries, dependent on converter mass, moderator surface and extraction voltages. The results lead to an optimised design of the in-pile positron source which will be implemented at the Munich research reactor FRM-II. The positron source consists of platinum foils acting as γ→e⁺e⁻ converter and positron moderator. Due to the negative positron work function, moderation in heated platinum leads to emission of monoenergetic positrons. The positron work function of polycrystalline platinum was determined to be 1.95(5) eV. After acceleration to several keV by four electrical lenses the beam was magnetically guided in a solenoid field of 7.5 mT leading to a NaI detector in order to detect the 511 keV gamma-radiation of the annihilating positrons. The posi...

  15. Positron annihilation spectroscopy applied to silicon-based materials

    CERN Document Server

    Taylor, J W

    2000-01-01

    deposition on silicon substrates has been examined. The systematic correlations observed between the nitrogen content of the films and both the fitted Doppler parameters and the positron diffusion lengths are discussed in detail. Profiling measurements of silicon nitride films deposited on silicon substrates and subsequently implanted with silicon ions at a range of fluences were also performed. For higher implantation doses, damage was seen to extend beyond the film layers and into the silicon substrates. Subsequent annealing of two of the samples was seen to have a significant influence on the nature of the films. Positron annihilation spectroscopy, in conjunction with a variable-energy positron beam, has been employed to probe non-destructively the surface and near-surface regions of a selection of technologically important silicon-based samples. By measuring the Doppler broadening of the 511 keV annihilation lineshape, information on the positrons' microenvironment prior to annihilation may be obtained. T...

  16. Positron energy distributions from a hybrid positron source based on channeling radiation

    Energy Technology Data Exchange (ETDEWEB)

    Azadegan, B.; Mahdipour, A. [Hakim Sabzevari University, P.O. Box 397, Sabzevar (Iran, Islamic Republic of); Dabagov, S.B. [INFN LNF, Via E. Fermi 40, 00044 Frascati (RM) (Italy); RAS P.N. Lebedev Physical Institute and NRNU MEPhI, Moscow (Russian Federation); Wagner, W., E-mail: w.wagner@hzdr.de [HZDR Dresden, P.O. Box 510119, 01314 Dresden (Germany)

    2013-08-15

    A hybrid positron source, based on the generation of channeling radiation by relativistic electrons channeled along different crystallographic planes and axes of a tungsten single crystal and the subsequent conversion of the radiation into e⁺e⁻ pairs in an amorphous tungsten target, is described. The photon spectra of channeling radiation are calculated using the Doyle–Turner approximation for the continuum potentials and classical equations of motion for channeled particles to obtain their trajectories, velocities and accelerations. The spectral-angular distributions of channeling radiation are found by applying classical electrodynamics. Finally, the conversion of radiation into e⁺e⁻ pairs and the energy distributions of positrons are simulated using the GEANT4 package.

  17. Camera Based Navigation System with Augmented Reality

    Directory of Open Access Journals (Sweden)

    M. Marcu

    2012-06-01

    Nowadays smart mobile devices have enough processing power, memory, storage and always-connected wireless communication bandwidth to make them suitable for any type of application. Augmented reality (AR) proposes a new type of application that tries to enhance the real world by superimposing or combining virtual objects or computer-generated information with it. In this paper we present a camera-based navigation system with augmented reality integration. The proposed system works as follows: the user points the camera of the smartphone towards a point of interest, like a building or any other place, and the application searches for relevant information about that specific place and superimposes the data over the video feed on the display. When the user moves the camera away, changing its orientation, the data changes as well, in real time, with the proper information about the place that is now in the camera view.

  18. Potential advantages of a cesium fluoride scintillator for a time-of-flight positron camera.

    Science.gov (United States)

    Allemand, R; Gresset, C; Vacher, J

    1980-02-01

    In order to improve the quality of positron tomographic imaging, a time-of-flight technique combined with a classical reconstruction method has been investigated. The decay time of NaI(Tl) and bismuth germanate (BGO) scintillators is too long for this application, and the efficiency of plastic scintillators is too low. Cesium fluoride appears to be a very promising detector material. This paper presents preliminary results obtained with a time-of-flight technique using CsF scintillators. The expected advantages were realized.
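    The payoff of time-of-flight detection is that the photon arrival-time difference localizes the annihilation along the line of response; a minimal sketch of the relation (generic physics, not the authors' implementation):

```python
C_MM_PER_NS = 299.792458  # speed of light in mm/ns

def tof_offset_mm(delta_t_ns):
    """Offset of the annihilation point from the midpoint of the line of
    response: an arrival-time difference dt corresponds to c*dt/2."""
    return C_MM_PER_NS * delta_t_ns / 2.0
```

A 1 ns difference already corresponds to about 150 mm, so the sub-nanosecond timing a fast scintillator like CsF permits is what makes the localization useful for reconstruction.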

  19. Simulation of positron energy distributions from a hybrid positron source based on channeling radiation

    Energy Technology Data Exchange (ETDEWEB)

    Azadegan, B., E-mail: azadegan@hsu.ac.ir; Mahdipour, A., E-mail: Ali.mahdipour88@yahoo.com

    2013-12-01

    Positron energy distributions of a non-conventional positron source, based on the generation of channeling radiation by relativistic electrons channeled along different crystallographic planes and axes of Si, C, Ge and W single crystals and the subsequent conversion of the radiation into e⁺e⁻ pairs in an amorphous tungsten target, have been simulated. The photon spectra of channeling radiation were calculated using the Doyle–Turner approximation for the continuum potentials of the crystallographic planes and axes considered. The classical equations of motion for channeled electrons have been solved in order to obtain the particle trajectories, velocities and accelerations. Applying classical electrodynamics, the spectral-angular distributions of channeling radiation are calculated and their dependence on the incidence angle of the electrons is investigated. The calculation of channeling radiation was carried out using our own Mathematica codes, whereas the conversion of radiation into e⁺e⁻ pairs and the energy distributions of the positrons were simulated by means of the GEANT4 package.

  20. Vasomotor assessment by camera-based photoplethysmography

    Directory of Open Access Journals (Sweden)

    Trumpp Alexander

    2016-09-01

    Camera-based photoplethysmography (cbPPG) is a novel technique that allows the contactless acquisition of cardio-respiratory signals. Previous work on cbPPG has most often focused on heart rate extraction. This contribution is directed at the assessment of vasomotor activity by means of cameras. In an experimental study, we show that vasodilation and vasoconstriction both lead to significant changes in cbPPG signals. Our findings underline the potential of cbPPG to monitor vasomotor function in real-life applications.
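    The raw cbPPG observable is typically the spatial mean of the green channel over a skin region in each frame; a toy sketch (nested lists stand in for camera frames here; a real pipeline would use NumPy arrays and a calibrated ROI):

```python
def cbppg_signal(frames, roi):
    """Spatially averaged green-channel time series over a rectangular
    skin ROI; the blood-volume pulse and slower vasomotor changes both
    modulate this mean intensity."""
    x0, y0, x1, y1 = roi
    series = []
    for frame in frames:  # frame[y][x] = [r, g, b]
        vals = [frame[y][x][1] for y in range(y0, y1) for x in range(x0, x1)]
        series.append(sum(vals) / len(vals))
    return series
```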

  21. Model-based scattering correction in time-of-flight cameras.

    Science.gov (United States)

    Schäfer, Henrik; Lenzen, Frank; Garbe, Christoph S

    2014-12-01

    In-camera light scattering is a systematic error of time-of-flight depth cameras that significantly reduces the accuracy of these systems. A completely new model is presented, based on raw-data calibration and only one additional intrinsic camera parameter. It is shown that the approach effectively removes the errors caused by in-camera light scattering.

  22. Brightness enhancement of a linac-based intense positron beam for total-reflection high-energy positron diffraction (TRHEPD)

    Science.gov (United States)

    Maekawa, Masaki; Wada, Ken; Fukaya, Yuki; Kawasuso, Atsuo; Mochizuki, Izumi; Shidara, Tetsuo; Hyodo, Toshio

    2014-06-01

    The brightness of a linac-based intense positron beam was enhanced for total-reflection high-energy positron diffraction (TRHEPD) measurements. The beam, initially guided by a magnetic field, was released into a non-magnetic region and subjected to transmission-type remoderation. "TRHEPD" is a new name for reflection high-energy positron diffraction (RHEPD), a technique for determining topmost- and near-surface atomic configurations; the total reflection of the positron beam from a solid surface is its unique advantage. The present system provides a final beam of almost the same quality as the previous 22Na-based positron beam [A. Kawasuso et al., Rev. Sci. Instrum. 75, 4585 (2004)] but with much increased flux, i.e., almost the same emittance but much higher brightness. It gave a ~60 times intensified diffraction pattern from a Si(111)-(7 × 7) reconstructed surface compared to the previous result. The improved signal-to-noise ratio in the obtained pattern, due to the intensified beam, allowed observation of clear fractional-order spots in the higher Laue zones, which had not been observed previously.

  23. Positron lifetime study in dilute electron-irradiated lead-based alloys

    Energy Technology Data Exchange (ETDEWEB)

    Moya, G. [Lab. de Physique des Materiaux, 13 Marseille (France); Li, X.H. [D.R.F.M., S.P.2.M., M.P., C.E.N.G., 38 Grenoble (France); Menai, A. [Lab. de Physique des Materiaux, 13 Marseille (France); Kherraz, M. [Lab. de Physique des Materiaux, 13 Marseille (France); Amenzou, H. [Lab. de Physique des Materiaux, 13 Marseille (France); Bernardini, J. [Lab. de Metallurgie, 13 Marseille (France); Moser, P. [D.R.F.M., S.P.2.M., M.P., C.E.N.G., 38 Grenoble (France)

    1995-06-01

    The recovery of defects in two dilute solute lead-based alloys (Pb-Au, Pb-Cd) has been followed by positron lifetime measurements after a 3 MeV electron irradiation at 20 K. Two distinct isochronal annealing stages, the first centred at about 150 K and the other around 275 K, are observed identically in both pure Pb and the dilute alloys, but the vacancy clustering over the second stage seen in lead and Pb-Au is completely suppressed in the Pb-Cd alloy. The results are discussed in terms of a strong interaction between the cadmium atoms and vacancies, in agreement with the probable presence of atomic excitons. (orig.)

  24. Operator-based homogeneous coordinates: application in camera document scanning

    Science.gov (United States)

    Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.

    2017-07-01

    An operator-based approach for the study of homogeneous coordinates and projective geometry is proposed. First, some basic geometrical concepts and properties of the operators are investigated in the one- and two-dimensional cases. Then, the pinhole camera model is derived, and a simple method for homography estimation and camera calibration is explained. The usefulness of the analyzed theoretical framework is exemplified by addressing the perspective-correction problem in a camera document scanning application. Several experimental results are provided for illustrative purposes. The proposed approach is expected to provide practical insights for inexperienced students of camera calibration, computer vision, and optical metrology, among others.
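    The homogeneous-coordinate machinery behind such perspective correction reduces, for points, to a 3x3 matrix product followed by dehomogenization; a small sketch (our own minimal example, not the paper's operator formalism):

```python
def apply_homography(H, pt):
    """Map a 2D point through a 3x3 homography: lift (x, y) to the
    homogeneous vector (x, y, 1), multiply by H, divide by the last
    coordinate."""
    x, y = pt
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (u / w, v / w)
```

Because of the division by w, H and any nonzero scalar multiple of H describe the same mapping; this scale ambiguity is exactly what homography estimation has to fix by a normalization choice.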

  25. Video-based point cloud generation using multiple action cameras

    Directory of Open Access Journals (Sweden)

    T. Teo

    2015-05-01

    Due to the development of action cameras, the use of video technology for collecting geo-spatial data has become an important trend. The objective of this study is to compare the image mode and video mode of multiple action cameras for 3D point cloud generation. Frame images are acquired from discrete camera stations, while videos are taken along continuous trajectories. The proposed method includes five major parts: (1) camera calibration, (2) video conversion and alignment, (3) orientation modelling, (4) dense matching, and (5) evaluation. As action cameras usually have a large field of view in wide viewing mode, camera calibration plays an important role in correcting the effect of lens distortion before image matching. Once the cameras had been calibrated, the authors used them to take video in an indoor environment. The videos were further converted into multiple frame images based on the frame rates. To overcome time-synchronization issues between videos from different viewpoints, an additional timer app was used to determine the time-shift factor between cameras for time alignment. A structure from motion (SfM) technique was utilized to obtain the image orientations. Then, the semi-global matching (SGM) algorithm was adopted to obtain dense 3D point clouds. The preliminary results indicated that the 3D points from 4K video are similar to those from 12 MP images, but the data acquisition performance of 4K video is more efficient than that of 12 MP digital images.
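    The time-alignment step described above amounts to converting a master-video timestamp plus the measured shift into a frame index in the second video; a minimal sketch (function name and values are ours, for illustration):

```python
def aligned_frame_index(t_master_s, shift_s, fps):
    """Frame index in a second camera's video corresponding to time t in
    the master video, given the time shift measured with the timer app."""
    return round((t_master_s + shift_s) * fps)
```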

  26. Infrared camera based on a curved retina.

    Science.gov (United States)

    Dumas, Delphine; Fendler, Manuel; Berger, Frédéric; Cloix, Baptiste; Pornin, Cyrille; Baier, Nicolas; Druart, Guillaume; Primot, Jérôme; le Coarer, Etienne

    2012-02-15

    The design of miniature and light cameras requires an optical design breakthrough to achieve good optical performance. Solutions inspired by animals' eyes are the most promising. The curvature of the retina offers several advantages, such as uniform intensity and no field curvature, but this feature is rarely exploited. The work presented here is a solution for spherically bending monolithic IR detectors. Compared to state-of-the-art methods, a higher fill factor is obtained and the device fabrication process is not modified. We made an IR eye camera with a single lens and a curved IR bolometer. Captured images are well resolved and have good contrast, and the modulation transfer function shows better quality when compared with planar systems.

  27. A graph-based bundle adjustment for INS-camera calibration

    Directory of Open Access Journals (Sweden)

    D. Bender

    2013-08-01

    In this paper, we present a graph-based approach for performing the system calibration of a sensor suite containing a fixed mounted camera and an inertial navigation system. The aim of the presented work is to obtain accurate direct georeferencing of camera images collected with small unmanned aerial systems. A prerequisite for using the pose measurements from the inertial navigation system as the exterior orientation of the camera is knowledge of the static offsets between these devices. Furthermore, the intrinsic parameters of the camera obtained in a laboratory tend to deviate slightly from their values during flights. This motivates an in-flight calibration of the intrinsic camera parameters in addition to the mounting offsets between the two devices. The optimization of these values can be done by introducing them as parameters into a bundle adjustment process. We show how to solve this by exploiting a graph optimization framework designed for the least-squares optimization of general error functions.

  28. A graph-based bundle adjustment for INS-camera calibration

    Science.gov (United States)

    Bender, D.; Schikora, M.; Sturm, J.; Cremers, D.

    2013-08-01

    In this paper, we present a graph-based approach for performing the system calibration of a sensor suite containing a fixed-mounted camera and an inertial navigation system. The aim of the presented work is to obtain accurate direct georeferencing of camera images collected with small unmanned aerial systems. A prerequisite for using the pose measurements from the inertial navigation system as the exterior orientation of the camera is knowledge of the static offsets between these devices. Furthermore, the intrinsic parameters of the camera obtained in a laboratory tend to deviate slightly from their values during flights. This motivates an in-flight calibration of the intrinsic camera parameters in addition to the mounting offsets between the two devices. These values can be optimized by introducing them as parameters into a bundle adjustment process. We show how to solve this by exploiting a graph optimization framework designed for the least-squares optimization of general error functions.
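    The graph optimization framework named in the abstract reduces each calibration parameter to an unknown in a nonlinear least-squares problem. As a minimal, hedged sketch (not the authors' implementation), the Gauss-Newton iteration below fits two parameters of a toy exponential error model; the residual and Jacobian play the same roles as the error functions and their derivatives in a graph optimizer:

```python
import numpy as np

# Toy model y = a * exp(b * x); 'a' and 'b' stand in for calibration
# parameters, and the residual is the "error function" being minimized.
def residual(theta, x, y):
    a, b = theta
    return y - a * np.exp(b * x)

def jacobian(theta, x):
    a, b = theta
    e = np.exp(b * x)
    return np.column_stack([-e, -a * x * e])  # d(residual)/d(a, b)

# Synthetic, noise-free observations from ground truth a=2.0, b=0.5.
x = np.linspace(0.0, 5.0, 20)
y = 2.0 * np.exp(0.5 * x)

theta = np.array([1.8, 0.45])            # initial guess near the optimum
for _ in range(30):                      # Gauss-Newton iterations
    r = residual(theta, x, y)
    J = jacobian(theta, x)
    theta = theta + np.linalg.solve(J.T @ J, -J.T @ r)

print(theta)  # converges to (2.0, 0.5)
```

A production graph optimizer adds robust kernels, sparse solvers, and many coupled error terms, but the normal-equation step is the same.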

  9. Global Calibration of Multiple Cameras Based on Sphere Targets

    Directory of Open Access Journals (Sweden)

    Junhua Sun

    2016-01-01

    Full Text Available Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method is simple to operate and flexible, especially for onsite multiple cameras without a common field of view.
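    The paper reconstructs sphere centers from their projections; as a simplified, hedged stand-in for that step, the snippet below recovers a sphere's center and radius from surface points by linearizing the sphere equation (synthetic data, not the paper's projection model):

```python
import numpy as np

# For a point p on a sphere with center c and radius r:
#   |p|^2 = 2 p . c + k,   where k = r^2 - |c|^2,
# which is linear in the unknowns (c, k), so ordinary linear least
# squares recovers the center from >= 4 non-coplanar surface points.
def fit_sphere(points):
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c, k = sol[:3], sol[3]
    return c, np.sqrt(k + c @ c)

# Synthetic surface samples of a sphere centered at (1, 2, 3), radius 2.
center, radius = np.array([1.0, 2.0, 3.0]), 2.0
dirs = np.array([[1.0, 0, 0], [-1, 0, 0], [0, 1, 0],
                 [0, -1, 0], [0, 0, 1], [0, 0, -1]])
c_est, r_est = fit_sphere(center + radius * dirs)
print(c_est, r_est)  # ~[1. 2. 3.], ~2.0
```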

  10. Benchmarking the Optical Resolving Power of Uav Based Camera Systems

    Science.gov (United States)

    Meißner, H.; Cramer, M.; Piltz, B.

    2017-08-01

    UAV based imaging and 3D object point generation is an established technology. Some UAV users try to address (very) high-accuracy applications, e.g. inspection or monitoring scenarios. In order to guarantee such level of detail and accuracy, high-resolving imaging systems are mandatory. Furthermore, image quality considerably impacts photogrammetric processing, as the tie point transfer, mandatory for forming the block geometry, fully relies on the radiometric quality of images. Thus, empirical testing of radiometric camera performance is an important issue, in addition to the standard (geometric) calibration that is normally the primary focus. Within this paper the resolving power of ten different camera/lens installations has been investigated. The selected systems represent different camera classes, like DSLRs, system cameras, larger format cameras and proprietary systems. As the systems have been tested in well-controlled laboratory conditions and objective quality measures have been derived, individual performance can be compared directly, thus representing a first benchmark on the radiometric performance of UAV cameras. The results have shown that not only the selection of an appropriate lens and camera body has an impact; in addition, the image pre-processing, i.e. the use of a specific debayering method, significantly influences the final resolving power.

  11. A method of camera calibration based on image processing

    Science.gov (United States)

    Duan, Jin; Kong, Chuiliu; Zhang, Dan; Jing, Wenbo

    2008-03-01

    According to the principles of optical measurement, an effective and simple method to measure the distortion of a CCD camera and lens is presented in this paper. The method is based on computer active vision and digital image processing technology. The radial distortion of the camera lens is considered in the method, while camera parameters such as the pixel interval and the focal length are calibrated. An optoelectronic theodolite is used in our experimental system: the light spot from the theodolite is imaged by the CCD camera, and as the theodolite rotates through an angle, the position of the light spot changes without any rotation of the camera. All view reference points in the image are worked out by computing the angle between each actual point and the optical center, where the distortion can be ignored. The error correction parameters are computed, and then the camera parameters are calibrated. A sub-pixel subdivision method is used to improve the point detection precision. The experimental results show that our method is effective, simple and practical.

  12. A cooperative control algorithm for camera based observational systems.

    Energy Technology Data Exchange (ETDEWEB)

    Young, Joseph G.

    2012-01-01

    Over the last several years, there has been considerable growth in camera based observation systems for a variety of safety, scientific, and recreational applications. In order to improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras. Quite often, there are economic or physical restrictions that prevent us from adding additional cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. In order to accomplish this goal, we present a new cooperative control algorithm for a camera based observational system. Specifically, we present a receding horizon control where we model the underlying optimal control problem as a mixed integer linear program. The benefit of this design is that we can coordinate the actions between each camera while simultaneously respecting its kinematics. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but we use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to intelligently observe the entire region of interest in an effective and thorough manner.
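    The paper's receding-horizon controller solves a mixed integer linear program with a real solver; as a hedged, solver-free stand-in, the brute-force search below illustrates the coordination the MILP encodes, assigning each camera a distinct object so the total visibility score is maximized (all names and scores are hypothetical):

```python
from itertools import permutations

# Hypothetical visibility scores: scores[camera][object], higher = better view.
scores = {
    "cam0": {"obj0": 3, "obj1": 5, "obj2": 1},
    "cam1": {"obj0": 4, "obj1": 2, "obj2": 6},
}
cameras = sorted(scores)
objects = sorted(scores["cam0"])

# Each camera tracks exactly one object and no object is tracked twice:
# enumerate every assignment and keep the best total score.  A MILP
# solver explores the same space implicitly and scales far better.
best_total, best_assign = max(
    (sum(scores[c][o] for c, o in zip(cameras, assign)), assign)
    for assign in permutations(objects, len(cameras))
)
print(best_total, dict(zip(cameras, best_assign)))  # 11, cam0->obj1, cam1->obj2
```

In the paper this objective is further weighted by Kalman-filter uncertainty so that rarely observed outliers are periodically revisited.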

  13. A mathematical model for camera calibration based on straight lines

    Directory of Open Access Journals (Sweden)

    Antonio M. G. Tommaselli

    2005-12-01

    Full Text Available In order to facilitate the automation of the camera calibration process, a mathematical model using straight lines was developed, which is based on the equivalent planes mathematical model. Parameter estimation for the developed model is achieved by the Least Squares Method with Conditions and Observations. The same method of adjustment was used to implement camera calibration with bundles, which is based on points. Experiments using simulated and real data have shown that the developed model based on straight lines gives results comparable to the conventional method with points. Details concerning the mathematical development of the model and experiments with simulated and real data are presented, and the results of both methods of camera calibration, with straight lines and with points, are compared.

  14. Camera-based measurement of respiratory rates is reliable.

    Science.gov (United States)

    Becker, Christoph; Achermann, Stefan; Rocque, Mukul; Kirenko, Ihor; Schlack, Andreas; Dreher-Hummel, Thomas; Zumbrunn, Thomas; Bingisser, Roland; Nickel, Christian H

    2017-06-01

    Respiratory rate (RR) is one of the most important vital signs used to detect whether a patient is in critical condition. It is part of many risk scores and its measurement is essential for triage of patients in emergency departments. It is often not recorded as measurement is cumbersome and time-consuming. We intended to evaluate the accuracy of camera-based measurements as an alternative measurement to the current practice of manual counting. We monitored the RR of healthy male volunteers with a camera-based prototype application and simultaneously by manual counting and by capnography, which was considered the gold standard. The four assessors were mutually blinded. We simulated normoventilation, hypoventilation and hyperventilation as well as deep, normal and superficial breathing depths to assess potential clinical settings. The volunteers were assessed while being undressed, wearing a T-shirt or a winter coat. In total, 20 volunteers were included. The results of camera-based measurements of RRs and capnography were in close agreement throughout all clothing styles and respiratory patterns (Pearson's correlation coefficient, r=0.90-1.00, except for one scenario, in which the volunteer breathed slowly dressed in a winter coat r=0.84). In the winter-coat scenarios, the camera-based prototype application was superior to human counters. In our pilot study, we found that camera-based measurements delivered accurate and reliable results. Future studies need to show that camera-based measurements are a secure alternative for measuring RRs in clinical settings as well.

  15. Movement-based interaction in camera spaces: a conceptual framework

    DEFF Research Database (Denmark)

    Eriksson, Eva; Hansen, Thomas Riisgaard; Lykke-Olesen, Andreas

    2007-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion and to describe our three main concepts space,...

  16. Design of microcontroller based system for automation of streak camera.

    Science.gov (United States)

    Joshi, M J; Upadhyay, J; Deshpande, P P; Sharma, M L; Navathe, C P

    2010-08-01

    A microcontroller based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8 bit MCS family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for the various electrodes of the tube are generated using dc-to-dc converters. A high voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LabVIEW based graphical user interface enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.
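    The sweep speed of such a ramp generator follows directly from the integrator relation. The sketch below models the step-plus-integrator chain as an ideal integrator, a simplification of the paper's circuit; the component values are illustrative, not those of the actual streak camera:

```python
# Ideal integrator driven by a voltage step: dVout/dt = Vstep / (R * C).
# Values below are illustrative, not the streak camera's actual parts.
R, C = 1.0e3, 100e-9        # 1 kOhm, 100 nF
V_STEP = 5.0                # input step amplitude (V)

slope = V_STEP / (R * C)    # ramp slope in V/s; changing C changes the sweep

# Forward-Euler simulation of the deflection ramp over 10 microseconds.
dt, t_end = 1e-8, 10e-6
v, t = 0.0, 0.0
while t < t_end - 0.5 * dt:
    v += slope * dt
    t += dt

print(slope, v)  # 50000 V/s, ~0.5 V after 10 us
```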

  17. CAMERA-BASED SOFTWARE IN REHABILITATION/THERAPY INTERVENTION

    Directory of Open Access Journals (Sweden)

    Anthony Lewis Brooks

    2014-06-01

    Full Text Available Use of an affordable, easily adaptable, ‘non-specific camera-based software’ that is rarely used in the field of rehabilitation is reported in a study with 91 participants over the duration of six workshop sessions. ‘Non-specific camera-based software’ refers to software that is not dependent on specific hardware. Adaptable means that human tracking and created artefact interaction in the camera field of view is relatively easily changed as one desires via a user-friendly GUI. The significance of having both available for contemporary intervention is argued. Conclusions are that the mature, robust, and accessible software EyeCon is a potent and significant user-friendly tool in the field of rehabilitation/therapy and warrants wider exploration.

  18. CAMERA-BASED SOFTWARE IN REHABILITATION/THERAPY INTERVENTION

    DEFF Research Database (Denmark)

    Brooks, Anthony Lewis

    2014-01-01

    Use of an affordable, easily adaptable, ‘non-specific camera-based software’ that is rarely used in the field of rehabilitation is reported in a study with 91 participants over the duration of six workshop sessions. ‘Non-specific camera-based software’ refers to software that is not dependent on specific hardware. Adaptable means that human tracking and created artefact interaction in the camera field of view is relatively easily changed as one desires via a user-friendly GUI. The significance of having both available for contemporary intervention is argued. Conclusions are that the mature, robust, and accessible software EyeCon is a potent and significant tool in the field of rehabilitation/therapy and warrants wider exploration.

  19. CAMERA-BASED SOFTWARE IN REHABILITATION/THERAPY INTERVENTION (extended)

    DEFF Research Database (Denmark)

    Brooks, Anthony Lewis

    2014-01-01

    Use of an affordable, easily adaptable, ‘non-specific camera-based software’ that is rarely used in the field of rehabilitation is reported in a study with 91 participants over the duration of six workshop sessions. ‘Non-specific camera-based software’ refers to software that is not dependent on specific hardware. Adaptable means that human tracking and created artefact interaction in the camera field of view is relatively easily changed as one desires via a user-friendly GUI. The significance of having both available for contemporary intervention is argued. Conclusions are that the mature, robust, and accessible software EyeCon is a potent and significant user-friendly tool in the field of rehabilitation/therapy and warrants wider exploration.

  20. a Symmetry Based Study of Positron Annihilation Spectra

    Science.gov (United States)

    Adam, Gh.; Adam, S.

    We describe a method for off-line analysis of spectra measured by two-dimensional angular correlation of annihilation radiation (2D-ACAR) positron spectroscopy. The method takes into account, at all its stages, two salient data features: the piecewise constant discretization of the 2D physical momentum distribution into square pixels, performed by the setup, and the occurrence of a characteristic 2D projected symmetry of the positron-electron pair momentum distribution. Several validating criteria are derived which secure significantly increased reliability of the output. The method is tested on 2D-ACAR spectra measured on RBa2Cu3O7-δ (R123; R = Y, Dy) single crystals. It resolves ridge Fermi surfaces (FS) up to 3rd Umklapp components in both kinds of R123 spectra. Moreover, on a c-axis-projected Y123 spectrum, measured at 300 K, it resolves a small but clear signature of the pillbox FS at the S point of the first Brillouin zone as well.

  1. Lights, Camera, Project-Based Learning!

    Science.gov (United States)

    Cox, Dannon G.; Meaney, Karen S.

    2018-01-01

    A physical education instructor incorporates a teaching method known as project-based learning (PBL) in his physical education curriculum. Utilizing video-production equipment to imitate the production of a television show, sixth-grade students attending a charter school invited college students to share their stories about physical activity and…

  2. Activity-based costing evaluation of a [(18)F]-fludeoxyglucose positron emission tomography study.

    Science.gov (United States)

    Krug, Bruno; Van Zanten, Annie; Pirson, Anne-Sophie; Crott, Ralph; Borght, Thierry Vander

    2009-10-01

    The aim of the study is to use the activity-based costing approach to gain better insight into the actual cost structure of a positron emission tomography procedure (FDG-PET) by defining the constituting components and by simulating the impact of possible resource or practice changes. The cost data were obtained from the hospital administration, personnel and vendor interviews as well as from structured questionnaires. A process map separates the process into 16 patient- and non-patient-related activities, to which the detailed cost data are related. One-way sensitivity analysis shows to which degree of uncertainty the different parameters affect the individual cost and evaluates the impact of possible resource or practice changes, such as the acquisition of a hybrid PET/CT device, the patient throughput or the sales price of a 370 MBq (18)F-FDG patient dose. The PET centre spends 73% of its time on clinical activities, and the resting time after injection of the tracer (42%) is the single largest departmental cost element. The tracer cost and the operational time have the most influence on cost per procedure. The analysis shows a total cost per FDG-PET ranging from 859 Euro for a BGO PET camera to 1142 Euro for a 16-slice PET-CT system, with the resource costs distributed in decreasing order: materials (44%), equipment (24%), wage (16%), hospital overhead (10%) and space (6%). The cost of FDG-PET is mainly influenced by the cost of the radiopharmaceutical. Therefore, the latter rather than the operational time should be reduced in order to improve its cost-effectiveness.
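    The mechanics of activity-based costing amount to summing resource costs per procedure and re-running the sum under changed assumptions. The sketch below uses euro amounts chosen only to reproduce the resource shares quoted in the abstract; they are not the study's raw data:

```python
# Activity-based costing sketch for one FDG-PET procedure.
# Amounts are illustrative, scaled to match the abstract's shares
# (materials 44%, equipment 24%, wage 16%, overhead 10%, space 6%).
resource_costs = {
    "materials (tracer, consumables)": 440.0,
    "equipment": 240.0,
    "wage": 160.0,
    "hospital overhead": 100.0,
    "space": 60.0,
}

total = sum(resource_costs.values())
shares = {k: v / total for k, v in resource_costs.items()}

# One-way sensitivity: a 10% cheaper tracer dose lowers the materials
# line and therefore the total cost per procedure.
cheaper = dict(resource_costs)
cheaper["materials (tracer, consumables)"] *= 0.9
print(total, sum(cheaper.values()))  # 1000.0 956.0
```

This mirrors the abstract's conclusion: because materials dominate, a cut in radiopharmaceutical cost moves the total far more than an equal relative cut in wage or space.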

  3. Range Camera Self-Calibration Based on Integrated Bundle Adjustment via Joint Setup with a 2D Digital Camera

    Directory of Open Access Journals (Sweden)

    Mehran Sattari

    2011-09-01

    Full Text Available Time-of-flight cameras, based on Photonic Mixer Device (PMD) technology, are capable of measuring distances to objects at high frame rates; however, the measured ranges and the intensity data contain systematic errors that need to be corrected. In this paper, a new integrated range camera self-calibration method via joint setup with a digital (RGB) camera is presented. This method can simultaneously estimate the systematic range error parameters as well as the interior and external orientation parameters of the camera. The calibration approach is based on photogrammetric bundle adjustment of observation equations originating from collinearity condition and a range errors model. Addition of a digital camera to the calibration process overcomes the limitations of small field of view and low pixel resolution of the range camera. The tests are performed on a dataset captured by a PMD[vision]-O3 camera from a multi-resolution test field of high contrast targets. An average improvement of 83% in RMS of range error and 72% in RMS of coordinate residual, over that achieved with basic calibration, was realized in an independent accuracy assessment. Our proposed calibration method also achieved 25% and 36% improvement on RMS of range error and coordinate residual, respectively, over that obtained by integrated calibration of the single PMD camera.

  4. Range camera self-calibration based on integrated bundle adjustment via joint setup with a 2D digital camera.

    Science.gov (United States)

    Shahbazi, Mozhdeh; Homayouni, Saeid; Saadatseresht, Mohammad; Sattari, Mehran

    2011-01-01

    Time-of-flight cameras, based on photonic mixer device (PMD) technology, are capable of measuring distances to objects at high frame rates; however, the measured ranges and the intensity data contain systematic errors that need to be corrected. In this paper, a new integrated range camera self-calibration method via joint setup with a digital (RGB) camera is presented. This method can simultaneously estimate the systematic range error parameters as well as the interior and external orientation parameters of the camera. The calibration approach is based on photogrammetric bundle adjustment of observation equations originating from collinearity condition and a range errors model. Addition of a digital camera to the calibration process overcomes the limitations of small field of view and low pixel resolution of the range camera. The tests are performed on a dataset captured by a PMD[vision]-O3 camera from a multi-resolution test field of high contrast targets. An average improvement of 83% in RMS of range error and 72% in RMS of coordinate residual, over that achieved with basic calibration, was realized in an independent accuracy assessment. Our proposed calibration method also achieved 25% and 36% improvement on RMS of range error and coordinate residual, respectively, over that obtained by integrated calibration of the single PMD camera.
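    The essence of a range-error model is estimating a systematic correction against known reference distances and subtracting it. The sketch below is a hedged, simplified stand-in for the paper's model: it assumes the error is a low-order polynomial of the measured range, with synthetic coefficients rather than a real PMD camera's behaviour:

```python
import numpy as np

# Synthetic calibration data: known reference distances plus an
# injected linear systematic error (offset + scale), noise-free.
reference = np.linspace(0.5, 7.5, 30)              # true distances (m)
measured = reference + (0.05 + 0.01 * reference)   # systematically biased

# Fit the error as a degree-1 polynomial of the measured range ...
coeffs = np.polyfit(measured, measured - reference, deg=1)
# ... then apply the correction to the measurements.
corrected = measured - np.polyval(coeffs, measured)

print(np.abs(corrected - reference).max())  # ~0 for this noise-free data
```

The paper estimates such error parameters jointly with the interior and exterior orientation inside a bundle adjustment rather than in this isolated step.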

  5. Development of mini linac-based positron source and an efficient positronium convertor for positively charged antihydrogen production

    Science.gov (United States)

    Muranaka, T.; Debu, P.; Dupré, P.; Liszkay, L.; Mansoulie, B.; Pérez, P.; Rey, J. M.; Ruiz, N.; Sacquin, Y.; Crivelli, P.; Gendotti, U.; Rubbia, A.

    2010-04-01

    We have installed in Saclay a facility for an intense positron source in November 2008. It is based on a compact 5.5 MeV electron linac connected to a reaction chamber with a tungsten target inside to produce positrons via pair production. The expected production rate for fast positrons is 5·10^11 per second. The study of moderation of fast positrons and the construction of a slow positron trap are underway. In parallel, we have investigated an efficient positron-positronium convertor using porous silica materials. These studies are parts of a project to produce positively charged antihydrogen ions aiming to demonstrate the feasibility of a free fall antigravity measurement of neutral antihydrogen.

  6. Development of mini linac-based positron source and an efficient positronium convertor for positively charged antihydrogen production

    Energy Technology Data Exchange (ETDEWEB)

    Muranaka, T; Debu, P; Dupre, P; Liszkay, L; Mansoulie, B; Perez, P; Rey, J M; Ruiz, N; Sacquin, Y [Irfu, CEA-Saclay, F-91191 Gif-sur-Yvette Cedex (France); Crivelli, P; Gendotti, U; Rubbia, A, E-mail: tomoko.muranaka@cea.f [Institut fuer Teilchenphysik, ETHZ, CH-8093 Zuerich (Switzerland)

    2010-04-01

    We have installed in Saclay a facility for an intense positron source in November 2008. It is based on a compact 5.5 MeV electron linac connected to a reaction chamber with a tungsten target inside to produce positrons via pair production. The expected production rate for fast positrons is 5·10^11 per second. The study of moderation of fast positrons and the construction of a slow positron trap are underway. In parallel, we have investigated an efficient positron-positronium convertor using porous silica materials. These studies are parts of a project to produce positively charged antihydrogen ions aiming to demonstrate the feasibility of a free fall antigravity measurement of neutral antihydrogen.

  7. Analysis of unstructured video based on camera motion

    Science.gov (United States)

    Abdollahian, Golnaz; Delp, Edward J.

    2007-01-01

    Although considerable work has been done on the management of "structured" video, such as movies, sports, and television programs that have known scene structures, "unstructured" video analysis is still a challenging problem due to its unrestricted nature. The purpose of this paper is to address issues in the analysis of unstructured video, in particular video shot by a typical unprofessional user (i.e., home video). We describe how one can make use of camera motion information for unstructured video analysis. A new concept, "camera viewing direction," is introduced as the building block of home video analysis. Motion displacement vectors are employed to temporally segment the video based on this concept. We then find the correspondence between the camera behavior and the subjective importance of the information in each segment, and describe how different patterns in the camera motion can indicate levels of interest in a particular object or scene. By extracting these patterns, the most representative frames, keyframes, for the scenes are determined and aggregated to summarize the video sequence.
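    Temporal segmentation by "camera viewing direction" can be sketched as classifying per-frame global displacement vectors into coarse direction labels and cutting the video where the label changes. This is a hedged illustration with synthetic vectors and hypothetical label names, not the paper's exact classifier:

```python
from itertools import groupby
from math import hypot

# Classify one global displacement vector into a coarse viewing-direction
# label; the threshold and label set are illustrative assumptions.
def label(dx, dy, still_thresh=1.0):
    if hypot(dx, dy) < still_thresh:
        return "still"
    if abs(dx) >= abs(dy):
        return "pan-right" if dx > 0 else "pan-left"
    return "tilt-up" if dy > 0 else "tilt-down"

# Synthetic per-frame displacements: pan right, pan left, then hold.
displacements = [(5, 0)] * 3 + [(-4, 0)] * 2 + [(0, 0)] * 2
labels = [label(dx, dy) for dx, dy in displacements]

# Consecutive frames with the same label form one temporal segment.
segments = [(k, len(list(g))) for k, g in groupby(labels)]
print(segments)  # [('pan-right', 3), ('pan-left', 2), ('still', 2)]
```

Keyframes would then be selected per segment, e.g. from "still" segments where the camera lingers on an object of interest.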

  8. Activity-based costing evaluation of a [F-18]-fludeoxyglucose positron emission tomography study

    NARCIS (Netherlands)

    Krug, Bruno; Van Zanten, Annie; Pirson, Anne-Sophie; Crott, Ralph; Vander Borght, Thierry

    2009-01-01

    Objective: The aim of the study is to use the activity-based costing approach to give a better insight in the actual cost structure of a positron emission tomography procedure (FDG-PET) by defining the constituting components and by simulating the impact of possible resource or practice changes.

  9. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    Directory of Open Access Journals (Sweden)

    Sergio Orts-Escolano

    2014-04-01

    Full Text Available In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events in video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and analysis and monitoring of movement. These features allow the construction of a robust representation of the environment and the interpretation of the behavior of mobile agents in the scene. It is also necessary to integrate the vision module into a global system that operates in a complex environment by receiving images from multiple acquisition devices at video frequency. To offer relevant information to higher-level systems and to monitor and make decisions in real time, it must fulfill a set of requirements, such as time constraints, high availability, robustness, high processing speed and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.

  10. Goal-oriented rectification of camera-based document images.

    Science.gov (United States)

    Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J

    2011-04-01

    Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions, aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low-cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface onto the plane is guided only by the appearance of the textual content in the document image, using a transformation which does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied at the word level, aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology, using a consistent evaluation methodology that combines OCR accuracy with a newly introduced measure based on a semi-automatic procedure.

  11. A color based rangefinder for an omnidirectional camera

    NARCIS (Netherlands)

    Nguyen, Q.; Visser, A.; Balakirsky, S.; Carpin, S.; Lewis, M.

    2009-01-01

    This paper proposes a method to use an omnidirectional camera as a rangefinder by using color detection. The omnicam rangefinder has been tested in USARSim for its accuracy and for its practical use in building maps of the environment. The results of the tests show that an omnidirectional camera can

  12. An autonomous sensor module based on a legacy CCTV camera

    Science.gov (United States)

    Kent, P. J.; Faulkner, D. A. A.; Marshall, G. F.

    2016-10-01

    A UK MoD funded programme into autonomous sensor arrays (SAPIENT) has been developing new, highly capable sensor modules together with a scalable modular architecture for control and communication. As part of this system there is a desire to also utilise existing legacy sensors. This paper reports on the development of a SAPIENT-compliant sensor module using a legacy Closed-Circuit Television (CCTV) pan-tilt-zoom (PTZ) camera. The PTZ camera sensor provides three modes of operation. In the first mode, the camera is automatically slewed to acquire imagery of a specified scene area, e.g. to provide "eyes-on" confirmation for a human operator or for forensic purposes. In the second mode, the camera is directed to monitor an area of interest, with the zoom level automatically optimized for human detection at the appropriate range. Open source algorithms (using OpenCV) are used to automatically detect pedestrians; their real-world positions are estimated and communicated back to the SAPIENT central fusion system. In the third mode of operation a "follow" mode is implemented, where the camera maintains the detected person within the camera field-of-view without requiring an end-user to directly control the camera with a joystick.

  13. Camera Coverage Estimation Based on Multistage Grid Subdivision

    Directory of Open Access Journals (Sweden)

    Meizhen Wang

    2017-04-01

    Full Text Available Visual coverage is one of the most important quality indexes for depicting the usability of an individual camera or camera network. It is the basis for camera network deployment, placement, coverage-enhancement, planning, etc. Precision and efficiency are critical influences on applications, especially those involving several cameras. This paper proposes a new method to efficiently estimate camera coverage. First, the geographic area that is covered by the camera and its minimum bounding rectangle (MBR), without considering obstacles, is computed using the camera parameters. Second, the MBR is divided into grids using the initial grid size. The status of the four corners of each grid is estimated by a line of sight (LOS) algorithm: if the camera, considering obstacles, covers a corner, the status is represented by 1, otherwise by 0. Consequently, the status of a grid can be represented by a code that is a combination of 0s and 1s. If the code is not homogeneous (not four 0s or four 1s), the grid is divided into four sub-grids, until the sub-grids reach a specified maximum level or their codes are homogeneous. Finally, total camera coverage is estimated according to the size and status of all grids. Experimental results illustrate that the proposed method's accuracy approaches that of dividing the coverage area into the smallest grids at the maximum level, while its efficiency is close to that of dividing the coverage area only into the initial grids; it thus balances efficiency and accuracy. The initial grid size and the maximum level are two critical influences on the proposed method, which can be determined by weighing efficiency against accuracy.
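    The multistage subdivision described above can be sketched as a quadtree recursion: homogeneous cells are scored wholesale, mixed cells are split until a maximum level, and leaf cells are scored by their corner fraction. This is a hedged simplification in which a disc stands in for the camera's LOS-visible region:

```python
import math

def covered(x, y):
    # Stand-in visibility test: the "covered" region is a unit disc.
    # In the paper this would be the LOS test against obstacles.
    return x * x + y * y <= 1.0

def estimate(x0, y0, size, level, max_level, min_level=3):
    corners = [covered(x0, y0), covered(x0 + size, y0),
               covered(x0, y0 + size), covered(x0 + size, y0 + size)]
    inside = sum(corners)
    # Trust a homogeneous code (four 0s or four 1s) only past min_level,
    # so a coarse cell cannot hide the whole region between its corners.
    if level >= min_level and inside in (0, 4):
        return size * size if inside == 4 else 0.0
    if level == max_level:
        return size * size * inside / 4.0   # leaf: corner fraction
    h = size / 2.0
    return sum(estimate(xs, ys, h, level + 1, max_level)
               for xs in (x0, x0 + h) for ys in (y0, y0 + h))

area = estimate(-1.0, -1.0, 2.0, 0, 8)
print(area)  # close to pi, the disc's true area
```

Only boundary cells are ever refined, which is exactly the accuracy/efficiency trade-off the abstract describes.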

  14. A subwavelength resolution microwave/6.3 GHz camera based on a metamaterial absorber

    Science.gov (United States)

    Xie, Yunsong; Fan, Xin; Chen, Yunpeng; Wilson, Jeffrey D.; Simons, Rainee N.; Xiao, John Q.

    2017-01-01

    The design, fabrication and characterization of a novel metamaterial absorber based camera with subwavelength spatial resolution are investigated. The proposed camera features a simple and lightweight design, easy portability, low cost, high resolution and sensitivity, and minimal image interference or distortion to the original field distribution. The imaging capability of the proposed camera was characterized in both the near field and far field ranges. The experimental and simulated near field images both reveal that the camera produces qualitatively accurate images with negligible distortion to the original field distribution. The far field demonstration was done by coupling the designed camera with a microwave convex lens. The far field results further demonstrate that the camera can capture quantitatively accurate electromagnetic wave distributions at the diffraction limit. The proposed camera can be used in applications such as non-destructive imaging and beam direction tracing.

  15. A subwavelength resolution microwave/6.3 GHz camera based on a metamaterial absorber.

    Science.gov (United States)

    Xie, Yunsong; Fan, Xin; Chen, Yunpeng; Wilson, Jeffrey D; Simons, Rainee N; Xiao, John Q

    2017-01-10

    The design, fabrication and characterization of a novel metamaterial-absorber-based camera with subwavelength spatial resolution are investigated. The proposed camera features a simple and lightweight design, easy portability, low cost, high resolution and sensitivity, and minimal image interference or distortion of the original field distribution. The imaging capability of the proposed camera was characterized in both the near-field and far-field ranges. The experimental and simulated near-field images both reveal that the camera produces qualitatively accurate images with negligible distortion of the original field distribution. The far-field demonstration was done by coupling the designed camera with a microwave convex lens. The far-field results further demonstrate that the camera can capture quantitatively accurate electromagnetic wave distributions at the diffraction limit. The proposed camera can be used in applications such as non-destructive imaging and beam-direction tracing.

  16. Camera calibration method of binocular stereo vision based on OpenCV

    Science.gov (United States)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the precision and efficiency of the process. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of camera lens radial distortion and decentering distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
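
    The camera model referred to above combines a pinhole projection with radial and decentering (tangential) distortion. A minimal NumPy sketch of that model (the same Brown–Conrady form OpenCV uses; all parameter values below are illustrative, not calibrated values):

    ```python
    import numpy as np

    def distort(xn, yn, k1, k2, p1, p2):
        """Brown-Conrady distortion on normalized image coordinates:
        radial terms (k1, k2) plus decentering/tangential terms (p1, p2)."""
        r2 = xn * xn + yn * yn
        radial = 1 + k1 * r2 + k2 * r2 * r2
        xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn * xn)
        yd = yn * radial + p1 * (r2 + 2 * yn * yn) + 2 * p2 * xn * yn
        return xd, yd

    def project(p_cam, fx, fy, cx, cy, dist):
        """Pinhole projection of a 3-D point in camera coordinates to pixels,
        with lens distortion applied in normalized coordinates."""
        xn, yn = p_cam[0] / p_cam[2], p_cam[1] / p_cam[2]
        xd, yd = distort(xn, yn, *dist)
        return fx * xd + cx, fy * yd + cy

    # Illustrative intrinsics and distortion coefficients.
    u, v = project(np.array([0.1, -0.05, 2.0]), 800, 800, 320, 240,
                   (-0.2, 0.05, 1e-3, -1e-3))
    ```

    Calibration then amounts to fitting `fx, fy, cx, cy` and the distortion coefficients so that projected checkerboard corners match their detected pixel positions.
    
    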

  17. NIR spectrophotometric system based on a conventional CCD camera

    Science.gov (United States)

    Vilaseca, Meritxell; Pujol, Jaume; Arjona, Montserrat

    2003-05-01

    The near infrared (NIR) spectral region is useful in many applications, including agriculture, the food and chemical industries, and textile and medical applications. In this region, spectral reflectance measurements are currently made with conventional spectrophotometers. These instruments are expensive since they use a diffraction grating to obtain monochromatic light. In this work, we present a multispectral-imaging-based technique for obtaining the reflectance spectra of samples in the NIR region (800 - 1000 nm), using a small number of measurements taken through different channels of a conventional CCD camera. We used methods based on Wiener estimation, non-linear methods and principal component analysis (PCA) to reconstruct the spectral reflectance. We also analyzed, by numerical simulation, the number and shape of the filters that need to be used in order to obtain good spectral reconstructions. We obtained the reflectance spectra of a set of 30 spectral curves using a minimum of 2 and a maximum of 6 filters under the influence of two different halogen lamps with color temperatures Tc1 = 2852 K and Tc2 = 3371 K. The results show that using between three and five filters with a large spectral bandwidth (FWHM = 60 nm), the reconstructed spectral reflectance of the samples was very similar to the original spectrum. The small spectral reconstruction errors show the potential of this method for reconstructing spectral reflectances in the NIR range.
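
    The reconstruction step can be illustrated with a training-based linear (Wiener-style) estimator: simulated camera responses through a handful of broadband channels are mapped back to full spectra. Everything below is synthetic stand-in data, not the study's measurements; only the channel shape (Gaussian, FWHM = 60 nm) follows the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    wl = np.linspace(800, 1000, 41)               # NIR grid, 5 nm steps

    def smooth_spectrum():
        # Hypothetical smooth reflectance: 5 random knots, linearly interpolated.
        c = rng.uniform(0.2, 0.8, 5)
        return np.interp(wl, np.linspace(800, 1000, 5), c)

    train = np.stack([smooth_spectrum() for _ in range(200)])     # (200, 41)

    # Five Gaussian channels with FWHM = 60 nm (centers are illustrative).
    centers = [820, 860, 900, 940, 980]
    sigma = 60 / 2.355
    F = np.stack([np.exp(-0.5 * ((wl - c) / sigma) ** 2) for c in centers])

    # Wiener-style estimator: linear map W minimizing |train - (train F^T) W|^2.
    A = train @ F.T                               # camera responses, (200, 5)
    W, *_ = np.linalg.lstsq(A, train, rcond=None) # (5, 41)

    test_r = smooth_spectrum()
    recon = (test_r @ F.T) @ W                    # reconstructed spectrum
    rmse = float(np.sqrt(np.mean((recon - test_r) ** 2)))
    ```

    With broadband, overlapping channels the linear map is well conditioned, which is consistent with the paper's finding that a few wide filters suffice.
    
    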

  18. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer.

    Science.gov (United States)

    Shen, Bailey Y; Mukai, Shizuo

    2017-01-01

    Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white-light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white light. Results. The prototype camera measured 133 mm × 91 mm × 45 mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient.

  19. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer

    Directory of Open Access Journals (Sweden)

    Bailey Y. Shen

    2017-01-01

    Full Text Available Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white-light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white light. Results. The prototype camera measured 133 mm × 91 mm × 45 mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient.

  20. Photometric stereo-based single time-of-flight camera.

    Science.gov (United States)

    Kim, Sun Kwon; Kang, Byongmin; Heo, Jingu; Jung, Seung-Won; Choi, Ouk

    2014-01-01

    We present a method to enhance the depth quality of a time-of-flight (ToF) camera without additional devices or hardware modifications. By controlling the turn-off patterns of the LEDs of the camera, we obtain depth and normal maps simultaneously. Sixteen subphase images are acquired with variations in the gate-pulse timing and light emission pattern of the camera. The subphase images allow us to obtain a normal map, which is combined with the depth map for improved depth details. These details typically cannot be captured by conventional ToF cameras. With the proposed method, the average absolute difference between the measured and laser-scanned depth maps decreased from 4.57 to 3.77 mm.
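
    The depth-plus-normal idea rests on classical Lambertian photometric stereo: given intensities under several known illumination directions, the surface normal (scaled by albedo) is recovered by least squares. A self-contained toy version for a single pixel; the four light directions below are illustrative stand-ins for the camera's LED turn-off patterns, not its actual geometry:

    ```python
    import numpy as np

    # Four hypothetical illumination directions (unit vectors).
    L = np.array([[0.5, 0.0, 1.0], [-0.5, 0.0, 1.0],
                  [0.0, 0.5, 1.0], [0.0, -0.5, 1.0]])
    L /= np.linalg.norm(L, axis=1, keepdims=True)

    true_n = np.array([0.2, -0.1, 1.0])
    true_n /= np.linalg.norm(true_n)
    albedo = 0.7

    # Lambertian image formation: I = albedo * max(L . n, 0).
    I = albedo * np.clip(L @ true_n, 0, None)

    # Photometric stereo: least-squares recovery of g = albedo * n.
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    rho = np.linalg.norm(g)                  # recovered albedo
    n = g / rho                              # recovered unit normal
    ```

    Per-pixel normals obtained this way carry the fine surface detail that a smooth ToF depth map misses, which is why fusing the two improves depth quality.
    
    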

  1. A hemispherical electronic eye camera based on compressible silicon optoelectronics.

    Science.gov (United States)

    Ko, Heung Cho; Stoykovich, Mark P; Song, Jizhou; Malyarchuk, Viktor; Choi, Won Mook; Yu, Chang-Jae; Geddes, Joseph B; Xiao, Jianliang; Wang, Shuodao; Huang, Yonggang; Rogers, John A

    2008-08-07

    The human eye is a remarkable imaging device, with many attractive design features. Prominent among these is a hemispherical detector geometry, similar to that found in many other biological systems, that enables a wide field of view and low aberrations with simple, few-component imaging optics. This type of configuration is extremely difficult to achieve using established optoelectronics technologies, owing to the intrinsically planar nature of the patterning, deposition, etching, materials growth and doping methods that exist for fabricating such systems. Here we report strategies that avoid these limitations, and implement them to yield high-performance, hemispherical electronic eye cameras based on single-crystalline silicon. The approach uses wafer-scale optoelectronics formed in unusual, two-dimensionally compressible configurations and elastomeric transfer elements capable of transforming the planar layouts in which the systems are initially fabricated into hemispherical geometries for their final implementation. In a general sense, these methods, taken together with our theoretical analyses of their associated mechanics, provide practical routes for integrating well-developed planar device technologies onto the surfaces of complex curvilinear objects, suitable for diverse applications that cannot be addressed by conventional means.

  2. A bionic camera-based polarization navigation sensor.

    Science.gov (United States)

    Wang, Daobin; Liang, Huawei; Zhu, Hui; Zhang, Shuai

    2014-07-21

    Navigation and positioning technology is closely related to our routine life activities, from travel to aerospace. Recently it has been found that Cataglyphis (a kind of desert ant) is able to detect the polarization direction of skylight and navigate according to this information. This paper presents a real-time bionic camera-based polarization navigation sensor. This sensor has two work modes: one is a single-point measurement mode and the other is a multi-point measurement mode. An indoor calibration experiment of the sensor has been done under a beam of standard polarized light. The experiment results show that after noise reduction the accuracy of the sensor can reach up to 0.3256°. It is also compared with GPS and INS (Inertial Navigation System) in the single-point measurement mode through an outdoor experiment. Through time compensation and location compensation, the sensor can be a useful alternative to GPS and INS. In addition, the sensor also can measure the polarization distribution pattern when it works in multi-point measurement mode.
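
    The core quantity such a sensor measures, the direction of linear polarization, can be recovered from pixel intensities behind analyzers at three orientations via the linear Stokes parameters. A sketch with synthetic partially polarized light; the angle, degree of polarization, and intensity below are made-up values, not the paper's calibration data:

    ```python
    import numpy as np

    def polarization_angle(i0, i45, i90):
        """Linear polarization angle (degrees) from intensities measured
        behind analyzers at 0, 45 and 90 degrees (Stokes-parameter method)."""
        s0 = i0 + i90          # total intensity
        s1 = i0 - i90          # 0/90 preference
        s2 = 2 * i45 - s0      # 45/135 preference
        return 0.5 * np.degrees(np.arctan2(s2, s1))

    def measured(theta_deg, phi_deg=30.0, dop=0.6, i_tot=1.0):
        # Malus-law intensity behind an analyzer at theta_deg for light
        # polarized at phi_deg with degree of polarization dop.
        th = np.radians(theta_deg - phi_deg)
        return 0.5 * i_tot * (1 + dop * np.cos(2 * th))

    phi = polarization_angle(measured(0), measured(45), measured(90))
    ```

    A camera-based sensor evaluates this per pixel (or per region), which is what enables the multi-point measurement mode described in the abstract.
    
    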

  3. Study of CT-based positron range correction in high resolution 3D PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Cal-Gonzalez, J., E-mail: jacobo@nuclear.fis.ucm.es [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Herraiz, J.L. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Espana, S. [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Vicente, E. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Instituto de Estructura de la Materia, Consejo Superior de Investigaciones Cientificas (CSIC), Madrid (Spain); Herranz, E. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain); Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital General Universitario Gregorio Maranon, Madrid (Spain); Vaquero, J.J. [Dpto. de Bioingenieria e Ingenieria Espacial, Universidad Carlos III, Madrid (Spain); Udias, J.M. [Grupo de Fisica Nuclear, Dpto. Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid (Spain)

    2011-08-21

    Positron range limits the spatial resolution of PET images and has a different effect for different isotopes and positron propagation materials. Therefore it is important to consider it during image reconstruction, in order to obtain optimal image quality. Positron range distributions for the most common isotopes used in PET in different materials were computed using Monte Carlo simulations with PeneloPET. The range profiles were introduced into the 3D OSEM image reconstruction software FIRST and employed to blur the image either in the forward projection only or in both the forward and backward projections. The blurring introduced takes into account the different materials in which the positron propagates; information on these materials may be obtained, for instance, from a segmentation of a CT image. The results of introducing positron blurring in both the forward and backward projection operations were compared to using it only during forward projection. Further, the effect of different shapes of the positron range profile on the quality of the reconstructed images with positron range correction was studied. For high positron energy isotopes, the reconstructed images show significant improvement in spatial resolution when positron range is taken into account during reconstruction, compared to reconstructions without positron range modeling.
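
    The reconstruction-side idea, a material-dependent positron-range blur inside the forward (and optionally backward) step of an iterative algorithm, can be sketched in 1-D. This is a toy: the system matrix is reduced to the identity so the forward model is just the blur, and the Gaussian widths and two-material map are invented for illustration, not PeneloPET-derived profiles:

    ```python
    import numpy as np

    def gaussian_kernel(sigma, radius=8):
        x = np.arange(-radius, radius + 1)
        k = np.exp(-0.5 * (x / sigma) ** 2)
        return k / k.sum()

    def range_blur(activity, material, sigmas):
        # Blur each voxel with the kernel of the material it sits in,
        # one kernel per tissue class (e.g. from a segmented CT).
        out = np.zeros_like(activity, dtype=float)
        for m, sigma in sigmas.items():
            masked = np.where(material == m, activity, 0.0)
            out += np.convolve(masked, gaussian_kernel(sigma), mode="same")
        return out

    true = np.zeros(64); true[30:34] = 1.0
    material = np.zeros(64, dtype=int); material[32:] = 1   # two media
    sigmas = {0: 0.8, 1: 2.5}            # wider positron range in medium 1

    meas = range_blur(true, material, sigmas)               # "acquired" data

    # MLEM-style update with the range blur in both forward and backward steps.
    x = np.ones(64)
    for _ in range(50):
        ratio = meas / np.maximum(range_blur(x, material, sigmas), 1e-12)
        x *= range_blur(ratio, material, sigmas)
    ```

    Using the symmetric blur again in the backward step only approximates the true adjoint when the kernel varies spatially, which is one reason the paper compares forward-only against forward-and-backward blurring.
    
    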

  4. A subwavelength resolution microwave/6.3 GHz camera based on a metamaterial absorber

    OpenAIRE

    Yunsong Xie; Xin Fan; Yunpeng Chen; Jeffrey D. Wilson; Rainee N. Simons; John Q. Xiao

    2017-01-01

    The design, fabrication and characterization of a novel metamaterial absorber based camera with subwavelength spatial resolution are investigated. The proposed camera is featured with simple and lightweight design, easy portability, low cost, high resolution and sensitivity, and minimal image interference or distortion to the original field distribution. The imaging capability of the proposed camera was characterized in both near field and far field ranges. The experimental and simulated near...

  5. A real-time camera calibration system based on OpenCV

    Science.gov (United States)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time OpenCV-based camera calibration system, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration with higher precision than MATLAB, requires no manual intervention, and can be widely used in various computer vision systems.

  6. A luminescence imaging system based on a CCD camera

    DEFF Research Database (Denmark)

    Duller, G.A.T.; Bøtter-Jensen, L.; Markey, B.G.

    1997-01-01

    described here has a maximum spatial resolution of 17 µm; though this may be varied under software control to alter the signal-to-noise ratio. The camera has been mounted on a Risø automated TL/OSL reader, and both the reader and the CCD are under computer control. In the near-u.v. and blue part

  7. Oscillator based analog to digital converters applied for charge based radiation detectors in positron emission tomography

    OpenAIRE

    Völker, M.

    2014-01-01

    This thesis presents the development of a readout strategy and a front-end for radiation detectors especially adapted for positron emission tomography. The developed front-end is optimized for the implementation in modern CMOS technologies. On one hand, most of the signal processing is transferred into the digital domain to benefit from the high digital integration density. On the other hand, the circuits have to be robust against cross-talk and power supply noise. Low-power design methods ar...

  8. Camera-based single-molecule FRET detection with improved time resolution

    NARCIS (Netherlands)

    Farooq, S.; Hohlbein, J.C.

    2015-01-01

    The achievable time resolution of camera-based single-molecule detection is often limited by the frame rate of the camera. Especially in experiments utilizing single-molecule Förster resonance energy transfer (smFRET) to probe conformational dynamics of biomolecules, increasing the frame rate by

  9. Handheld Longwave Infrared Camera Based on Highly-Sensitive Quantum Well Infrared Photodetectors Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a compact handheld longwave infrared camera based on quantum well infrared photodetector (QWIP) focal plane array (FPA) technology. Based on...

  10. Vector-Based Ground Surface and Object Representation Using Cameras

    Science.gov (United States)

    2009-12-01

    LFSS Wing setting was the same as daytime. Not just lack of illumination, but also other artificial lighting sources from other traveling cars or streetlights … camera problem is that data are affected by changes in illumination from one time to another, which results in an inconstant color source. This is the … illumination conditions. CHAPTER 3 PATH FINDER SMART SENSOR. Introduction. The Path Finder Smart Sensor (PFSS) is a perception element and

  11. FASTICA based denoising for single sensor Digital Cameras images

    OpenAIRE

    Shawetangi kala; Raj Kumar Sahu

    2012-01-01

    Digital color cameras use a single sensor equipped with a color filter array (CFA) to capture scenes in color. Since each sensor cell can record only one color value, the other two missing components at each position need to be interpolated. The color interpolation process is usually called color demosaicking (CDM). The quality of demosaicked images is degraded due to the sensor noise introduced during the image acquisition process. Many advanced denoising algorithms, which are designed for ...
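
    The color demosaicking (CDM) step described above can be sketched with the simplest interpolator: bilinear averaging of the known CFA samples of each channel. The RGGB layout and the flat test scene below are illustrative; real CDM (and the denoising the paper targets) is considerably more sophisticated:

    ```python
    import numpy as np

    def mosaic_rggb(rgb):
        # Sample an RGB image through an RGGB color filter array (CFA):
        # each sensor cell records a single color component.
        h, w, _ = rgb.shape
        cfa = np.zeros((h, w))
        masks = np.zeros((h, w, 3), bool)
        masks[0::2, 0::2, 0] = True               # R
        masks[0::2, 1::2, 1] = True               # G
        masks[1::2, 0::2, 1] = True               # G
        masks[1::2, 1::2, 2] = True               # B
        for c in range(3):
            cfa[masks[..., c]] = rgb[..., c][masks[..., c]]
        return cfa, masks

    def weighted_3x3(a):
        # 3x3 bilinear kernel [[.25,.5,.25],[.5,1,.5],[.25,.5,.25]] via shifts.
        p = np.pad(a, 1)
        return (p[1:-1, 1:-1]
                + 0.5 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
                + 0.25 * (p[:-2, :-2] + p[:-2, 2:] + p[2:, :-2] + p[2:, 2:]))

    def demosaic(cfa, masks):
        # Bilinear demosaicking: for each channel, a weighted average of the
        # known CFA samples around each pixel, normalized by the mask weights.
        out = np.zeros(masks.shape)
        for c in range(3):
            m = masks[..., c].astype(float)
            out[..., c] = weighted_3x3(cfa * m) / weighted_3x3(m)
        return out

    # Flat test scene: bilinear CDM recovers a constant color exactly.
    rgb = np.ones((8, 8, 3)) * np.array([0.8, 0.4, 0.2])
    cfa, masks = mosaic_rggb(rgb)
    rec = demosaic(cfa, masks)
    ```

    Sensor noise enters before this interpolation, which is why demosaicking smears it across channels and motivates denoising schemes designed for CFA data.
    
    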

  12. On camera-based smoke and gas leakage detection

    Energy Technology Data Exchange (ETDEWEB)

    Nyboe, Hans Olav

    1999-07-01

    Gas detectors are found in almost every part of industry and in many homes as well. An offshore oil or gas platform may host several hundred gas detectors. The ability of the common point and open path gas detectors to detect leakages depends on their location relative to the location of a gas cloud. This thesis describes the development of a passive volume gas detector, that is, one that will detect a leakage anywhere in the area monitored. After consideration of several detection techniques it was decided to use an ordinary monochrome camera as the sensor. Because a gas leakage may perturb the index of refraction, parts of the background appear to be displaced from their true positions, and it is necessary to develop algorithms that can deal with small differences between images. The thesis develops two such algorithms. Many image regions can be defined and several feature values can be computed for each region. The values of the features depend on the pattern in the image regions. The classes studied in this work are: reference, gas, smoke and human activity. Tests show that observations belonging to these classes can be classified with fairly high accuracy. The features in the feature set were chosen and developed for this particular application. Basically, the features measure the magnitude of pixel differences, the size of detected phenomena and image distortion. Interesting results from many experiments are presented. Most importantly, the experiments show that apparent motion caused by a gas leakage or heat convection can be detected by means of a monochrome camera. Small leakages of methane can be detected at a range of about four metres. Other gases, such as butane, whose densities differ more from the density of air than that of methane does, can be detected further from the camera. Gas leakages large enough to cause condensation have been detected at a camera distance of 20 metres. 59 refs., 42 figs., 13 tabs.
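
    The pixel-difference features at the heart of such a detector can be illustrated with per-block mean absolute differences between a reference frame and the current frame. The scene data, block size, and threshold below are invented for the sketch; the thesis' actual feature set and classifier are richer:

    ```python
    import numpy as np

    def frame_features(ref, cur, block=8):
        """Per-block mean absolute difference between a reference frame and
        the current frame: a simple magnitude-of-pixel-difference feature."""
        diff = np.abs(cur.astype(float) - ref.astype(float))
        h, w = diff.shape
        hb, wb = h // block, w // block
        return (diff[:hb * block, :wb * block]
                .reshape(hb, block, wb, block).mean(axis=(1, 3)))

    def detect(ref, cur, thresh=4.0):
        feats = frame_features(ref, cur)
        return feats.max() > thresh, feats

    rng = np.random.default_rng(1)
    ref = rng.integers(80, 120, (64, 64))

    # Quiet scene: only small sensor noise on the reference background.
    quiet = ref + rng.integers(-2, 3, ref.shape)
    # "Leak": a patch whose background appears displaced, simulated here by
    # locally shifting the reference pixels (a stand-in for refraction).
    leak = ref.copy()
    leak[20:36, 20:36] = np.roll(ref[20:36, 20:36], 2, axis=1)

    quiet_alarm, _ = detect(ref, quiet)
    leak_alarm, _ = detect(ref, leak)
    ```

    Thresholding block statistics rather than single pixels is what keeps sensor noise below the alarm level while a spatially coherent displacement rises above it.
    
    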

  13. Positron depth profiling of the structural and electronic structure transformations of hydrogenated Mg-based thin films

    NARCIS (Netherlands)

    Eijt, S.W.H.; Kind, R.; Singh, S.; Schut, H.; Legerstee, W.J.; Hendrikx, R.W.A.; Svetchnikov, V.L.; Westerwaal, R.J.; Dam, B.

    2009-01-01

    We report positron depth-profiling studies on the hydrogen sorption behavior and phase evolution of Mg-based thin films. We show that the main changes in the depth profiles resulting from the hydrogenation to the respective metal hydrides are related to a clear broadening in the observed electron

  14. Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task

    Directory of Open Access Journals (Sweden)

    Nicholas T. Bott

    2017-06-01

    Full Text Available Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive “window on the brain,” and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and the 3 FPS built-in web camera at each of the three visits (r = 0.88–0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81–0.88). There were strong relationships on VPC mean novelty preference score between the 10, 5, and 3 FPS training sets (r = 0.88–0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high frame rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as fixation points

  15. Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task.

    Science.gov (United States)

    Bott, Nicholas T; Lange, Alex; Rentz, Dorene; Buffalo, Elizabeth; Clopton, Paul; Zola, Stuart

    2017-01-01

    Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive "window on the brain," and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and 3 FPS built-in web camera at each of the three visits (r = 0.88-0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81-0.88). There were strong relationships on VPC mean novelty preference score between 10, 5, and 3 FPS training sets (r = 0.88-0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high frame rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as fixation points, built
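
    The two statistics used to validate the web-camera scoring, kappa (cf. Siegel and Castellan) for inter-rater agreement and the Pearson correlation between camera systems, are straightforward to compute directly. The rating and score vectors below are made-up stand-ins, not study data:

    ```python
    import numpy as np

    def cohens_kappa(r1, r2):
        """Agreement between two categorical raters, corrected for the
        agreement expected by chance from each rater's marginal frequencies."""
        r1, r2 = np.asarray(r1), np.asarray(r2)
        cats = np.union1d(r1, r2)
        po = np.mean(r1 == r2)                                    # observed
        pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)  # chance
        return (po - pe) / (1 - pe)

    def pearson_r(x, y):
        x, y = np.asarray(x, float), np.asarray(y, float)
        xm, ym = x - x.mean(), y - y.mean()
        return float(xm @ ym / np.sqrt((xm @ xm) * (ym @ ym)))

    # Hypothetical novelty-preference calls by two human scorers.
    scorer_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
    scorer_b = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
    kappa = cohens_kappa(scorer_a, scorer_b)

    # Hypothetical mean novelty preference scores: 60 FPS vs 3 FPS system.
    np60 = [0.61, 0.55, 0.70, 0.48, 0.66, 0.59]
    np03 = [0.63, 0.52, 0.71, 0.50, 0.64, 0.61]
    r = pearson_r(np60, np03)
    ```

    Here 9 of 10 ratings agree against a chance agreement of 0.54, so kappa lands well above the raw agreement would suggest on its own.
    
    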

  16. Fluorine 18 FDG coincidence positron emission tomography using dual-head gamma camera in the follow-up of patient with head and neck cancers

    Energy Technology Data Exchange (ETDEWEB)

    Pai, M. S.; Park, C. H.; Koh, J. H.; Suh, J. H.; Joh, C. W.; Yoon, S. N.; Kim, S.; Hwang, K. H. [College of Medicine, Ajou Univ., Suwon (Korea, Republic of)

    1999-07-01

    Metabolic imaging with F-18-FDG has diagnostic potential to detect residual malignancy as well as lymph node involvement after or during treatment, but it is not widely available because of the high cost of PET operation. Coincidence PET (CoDe PET) using a gamma camera has been developed as an alternative way to use F-18-FDG. The purpose was to evaluate the clinical usefulness of F-18-FDG CoDe PET using a gamma camera in differentiating residual/recurrent disease from post-therapy changes in patients with head and neck cancer. 55 F-18-FDG CoDe PET studies in 32 patients (age: 25-79, mean: 50 ± 13; M/F: 23/9) after therapy for various head and neck cancers were performed (11 undifferentiated carcinoma, 10 squamous cell carcinoma, 9 malignant lymphoma, 1 adenoid cystic cancer, 1 Ewing sarcoma). All patients fasted for 6-12 hours and were injected with 3-10 mCi of F-18-FDG 1 hour before imaging. Images were obtained for 30 min (3 min per rotation) with a 20% photopeak window and a 20% Compton scatter window, and reconstructed after filtering with a METS filter. Attenuation correction was not done. Any visually detectable FDG uptake in the head and neck, except physiologic uptake, was considered positive. All findings were validated either by biopsy or by clinical follow-up and compared with the corresponding CT/MRI findings. Ten of eleven cases with residual disease and 41 of 44 cases which remained relapse free were correctly identified by CoDe PET. CoDe PET correctly assessed nine more relapse-free cases than CT/MRI, yielding a specificity of 93%. FDG CoDe PET was especially helpful in patients with residual abnormalities noted on radiological imaging. F-18-FDG CoDe PET is a useful method for follow-up after initial therapy in patients with head and neck cancers.

  17. Minicyclotron-based technology for the production of positron-emitting labelled radiopharmaceuticals

    Energy Technology Data Exchange (ETDEWEB)

    Barrio, J.R.; Bida, G.; Satyamurthy, N.; Padgett, H.C.; MacDonald, N.S.; Phelps, M.E.

    1983-01-01

    The use of short-lived positron emitters such as carbon 11, fluorine 18, nitrogen 13, and oxygen 15, together with positron-emission tomography (PET) for probing the dynamics of physiological and biochemical processes in the normal and diseased states in man is presently an active area of research. One of the pivotal elements for the continued growth and success of PET is the routine delivery of the desired positron emitting labelled compounds. To date, the cyclotron remains the accelerator of choice for production of medically useful radionuclides. The development of the technology to bring the use of cyclotrons to a clinical setting is discussed. (ACR)

  18. FIR Detectors/Cameras Based on GaN and Si Field-Effect Devices Project

    Data.gov (United States)

    National Aeronautics and Space Administration — SETI proposes to develop GaN and Si based multicolor FIR/THz cameras with detector elements and readout, signal processing electronics integrated on a single chip....

  19. Spectrally-Tunable Infrared Camera Based on Highly-Sensitive Quantum Well Infrared Photodetectors Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a SPECTRALLY-TUNABLE INFRARED CAMERA based on quantum well infrared photodetector (QWIP) focal plane array (FPA) technology. This will build on...

  20. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    Science.gov (United States)

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using double-cameras attached by a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is the key factor of the geometrical quality of complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After an on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper subtly uses the concept of the big virtual camera to obtain a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining the geometric accuracy.

  1. Person re-identification across aerial and ground-based cameras by deep feature fusion

    Science.gov (United States)

    Schumann, Arne; Metzler, Jürgen

    2017-05-01

    Person re-identification is the task of correctly matching visual appearances of the same person in image or video data while distinguishing appearances of different persons. The traditional setup for re-identification is a network of fixed cameras. However, in recent years mobile aerial cameras mounted on unmanned aerial vehicles (UAV) have become increasingly useful for security and surveillance tasks. Aerial data has many characteristics different from typical camera network data. Thus, re-identification approaches designed for a camera network scenario can be expected to suffer a drop in accuracy when applied to aerial data. In this work, we investigate the suitability of features, which were shown to give robust results for re-identification in camera networks, for the task of re-identifying persons between a camera network and a mobile aerial camera. Specifically, we apply hand-crafted region covariance features and features extracted by convolutional neural networks which were learned on separate data. We evaluate their suitability for this new and as yet unexplored scenario. We investigate common fusion methods to combine the hand-crafted and learned features and propose our own deep fusion approach which is already applied during training of the deep network. We evaluate features and fusion methods on our own dataset. The dataset consists of fourteen people moving through a scene recorded by four fixed ground-based cameras and one mobile camera mounted on a small UAV. We discuss strengths and weaknesses of the features in the new scenario and show that our fusion approach successfully leverages the strengths of each feature and outperforms all single features significantly.
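
    The hand-crafted descriptor mentioned, region covariance, is simply the covariance matrix of per-pixel features over an image region. A toy sketch with synthetic grayscale "appearance" patches; the five-feature set is a common minimal choice, and the Frobenius comparison is a simplification (the usual metric on these SPD matrices is Riemannian):

    ```python
    import numpy as np

    def region_covariance(img, y0, y1, x0, x1):
        """Covariance of per-pixel features [x, y, I, |dI/dy|, |dI/dx|]
        over the region img[y0:y1, x0:x1]."""
        gy, gx = np.gradient(img.astype(float))
        ys, xs = np.mgrid[y0:y1, x0:x1]
        F = np.stack([xs.ravel(), ys.ravel(),
                      img[y0:y1, x0:x1].ravel().astype(float),
                      np.abs(gy[y0:y1, x0:x1]).ravel(),
                      np.abs(gx[y0:y1, x0:x1]).ravel()])
        return np.cov(F)

    def cov_distance(c1, c2):
        # Frobenius distance for simplicity; a Riemannian SPD metric
        # is the standard choice in the literature.
        return float(np.linalg.norm(c1 - c2))

    rng = np.random.default_rng(3)
    person = rng.normal(100, 20, (32, 16))          # one person's patch
    same_view = person + rng.normal(0, 2, person.shape)  # same person, new frame
    other = rng.normal(120, 35, (32, 16))           # different appearance stats

    c_ref = region_covariance(person, 0, 32, 0, 16)
    d_same = cov_distance(c_ref, region_covariance(same_view, 0, 32, 0, 16))
    d_other = cov_distance(c_ref, region_covariance(other, 0, 32, 0, 16))
    ```

    Because the descriptor captures second-order appearance statistics rather than raw pixels, it tolerates small per-frame perturbations while separating patches with different statistics, which is part of why it transfers reasonably to new viewpoints.
    
    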

  2. Magnetic resonance-based motion correction for positron emission tomography imaging.

    Science.gov (United States)

    Ouyang, Jinsong; Li, Quanzheng; El Fakhri, Georges

    2013-01-01

    Positron emission tomography (PET) image quality is limited by patient motion. Emission data are blurred owing to cardiac and/or respiratory motion. Although spatial resolution is 4 mm for standard clinical whole-body PET scanners, the effective resolution can be as low as 1 cm owing to motion. Additionally, the deformation of attenuation medium causes image artifacts. Previously, gating has been used to "freeze" the motion, but led to significantly increased noise level. Simultaneous PET/magnetic resonance (MR) modality offers a new way to perform PET motion correction. MR can be used to measure 3-dimensional motion fields, which can then be incorporated into the iterative PET reconstruction to obtain motion-corrected PET images. In this report, we present MR imaging techniques to acquire dynamic images, a nonrigid image registration algorithm to extract motion fields from acquired MR images, and a PET reconstruction algorithm with motion correction. We also present results from both phantom and in vivo animal PET/MR studies. We demonstrate that MR-based PET motion correction using simultaneous PET/MR improves image quality and lesion detectability compared with gating and no motion correction. Copyright © 2013 Elsevier Inc. All rights reserved.
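
    The core idea above, using all gated data after warping it to a common reference, can be illustrated with a 1-D toy model. The displacements below stand in for the MR-derived motion fields, and this simple warp-and-average is only a sketch of the full motion-incorporated iterative reconstruction.

```python
import numpy as np

def motion_corrected_average(gates, displacements):
    """Toy 1-D illustration of motion-corrected reconstruction: each
    gated frame observed the object shifted by a known displacement.
    Every frame is warped back to the reference position and all frames
    are averaged, so all counts contribute (lower noise than a single
    gate) without the blur of an ungated average."""
    x = np.arange(len(gates[0]), dtype=float)
    # frame(x) = obj(x - d), so the inverse warp is frame evaluated at x + d
    corrected = [np.interp(x + d, x, frame)
                 for frame, d in zip(gates, displacements)]
    return np.mean(corrected, axis=0)
```

    In practice the motion fields are nonrigid and 3-dimensional, and they are applied inside the iterative reconstruction rather than to reconstructed images post hoc.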

  3. FPGA-Based Front-End Electronics for Positron Emission Tomography.

    Science.gov (United States)

    Haselman, Michael; Dewitt, Don; McDougald, Wendy; Lewellen, Thomas K; Miyaoka, Robert; Hauck, Scott

    2009-02-22

    Modern Field Programmable Gate Arrays (FPGAs) are capable of performing complex discrete signal processing algorithms with clock rates above 100MHz. This, combined with FPGAs' low expense, ease of use, and selected dedicated hardware, makes them an ideal technology for a data acquisition system for positron emission tomography (PET) scanners. Our laboratory is producing a high-resolution, small-animal PET scanner that utilizes FPGAs as the core of the front-end electronics. For this next generation scanner, functions that are typically performed in dedicated circuits, or offline, are being migrated to the FPGA. This will not only simplify the electronics, but the features of modern FPGAs can be utilized to add significant signal processing power to produce higher resolution images. In this paper, two such processes, sub-clock rate pulse timing and event localization, will be discussed in detail. We show that timing performed in the FPGA can achieve a resolution that is suitable for small-animal scanners, and will outperform the analog version given a low enough sampling period for the ADC. We will also show that the position of events in the scanner can be determined in real time using a statistics-based positioning algorithm.
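
    The sub-clock pulse timing mentioned above can be illustrated with a minimal sketch: given ADC samples of a pulse's rising edge, the threshold-crossing time is located between two samples by linear interpolation. The actual FPGA implementation is more sophisticated; the function below is an illustrative toy, not the authors' algorithm.

```python
import numpy as np

def threshold_crossing_time(samples, threshold, dt):
    """Estimate the sub-sample time at which a digitized pulse first
    crosses `threshold`, by linear interpolation between the two ADC
    samples that bracket the crossing. `dt` is the sampling period."""
    samples = np.asarray(samples, dtype=float)
    above = np.nonzero(samples >= threshold)[0]
    if len(above) == 0 or above[0] == 0:
        raise ValueError("no rising-edge crossing found")
    i = above[0]                          # first sample at/above threshold
    y0, y1 = samples[i - 1], samples[i]
    frac = (threshold - y0) / (y1 - y0)   # fractional position in [0, 1)
    return (i - 1 + frac) * dt

# A linear ramp sampled every 10 ns crosses 0.5 halfway between samples.
t = threshold_crossing_time([0.0, 0.2, 0.4, 0.6, 0.8], 0.5, 10e-9)
```

    The interpolation is why a coarse ADC clock can still yield timing far finer than the sampling period, provided the pulse edge spans several samples.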

  4. PETALO, a new concept for a Positron Emission TOF Apparatus based on Liquid xenOn

    CERN Document Server

    Benlloch-Rodriguez, J M

    2016-01-01

    This master thesis presents a new type of Positron Emission TOF Apparatus using Liquid xenOn (PETALO). The detector is based on the Liquid Xenon Scintillating Cell (LXSC). The cell is a box filled with liquid xenon (LXe) whose transverse dimensions are chosen to optimize packing and whose thickness is optimized to contain a large fraction of the incoming photons. The entry and exit faces of the box (relative to the incoming gammas direction) are instrumented with large silicon photomultipliers (SiPMs), coated with a wavelength shifter, tetraphenyl butadiene (TPB). The non-instrumented faces are covered by reflecting Teflon coated with TPB. In this thesis we show that the LXSC can display an energy resolution of 5% FWHM, much better than that of conventional solid scintillators such as LSO/LYSO. The LXSC can measure the interaction point of the incoming photon with a resolution in the three coordinates of 1 mm. The very fast scintillation time of LXe (2 ns) and the availability of suitable sensors and electronic...

  5. Laser-based terahertz-field-driven streak camera for the temporal characterization of ultrashort processes

    Energy Technology Data Exchange (ETDEWEB)

    Schuette, Bernd

    2011-09-15

    In this work, a novel laser-based terahertz-field-driven streak camera is presented. It allows for a pulse length characterization of femtosecond (fs) extreme ultraviolet (XUV) pulses by a cross-correlation with terahertz (THz) pulses generated with a Ti:sapphire laser. The XUV pulses are emitted by a source of high-order harmonic generation (HHG) in which an intense near-infrared (NIR) fs laser pulse is focused into a gaseous medium. The design and characterization of a high-intensity THz source needed for the streak camera is also part of this thesis. The source is based on optical rectification of the same NIR laser pulse in a lithium niobate crystal. For this purpose, the pulse front of the NIR beam is tilted via a diffraction grating to achieve velocity matching between NIR and THz beams within the crystal. For the temporal characterization of the XUV pulses, both HHG and THz beams are focused onto a gas target. The harmonic radiation creates photoelectron wavepackets which are then accelerated by the THz field depending on its phase at the time of ionization. This principle is adopted from a conventional streak camera and is now widely used in attosecond metrology. The streak camera presented here is an advancement of a terahertz-field-driven streak camera implemented at the Free Electron Laser in Hamburg (FLASH). The advantages of the laser-based streak camera lie in its compactness, cost efficiency and accessibility, while providing the same good quality of measurements as obtained at FLASH. In addition, its flexibility allows for a systematic investigation of streaked Auger spectra, which is presented in this thesis. With its fs time resolution, the terahertz-field-driven streak camera thereby bridges the gap between attosecond and conventional streak cameras. (orig.)

  6. Conceptual design of a slow positron source based on a magnetic trap

    CERN Document Server

    Volosov, V I; Mezentsev, N A

    2001-01-01

    A unique 10.3 T superconducting wiggler was designed and manufactured at BINP SB RAS. The installation of this wiggler in the SPring-8 storage ring provides a possibility to generate a high-intensity beam of photons (SR) with energy above 1 MeV (Ando et al., J. Synchrotron Radiat. 5 (1998) 360). Conversion of photons to positrons on high-Z material (tungsten) targets creates an integrated positron flux of more than 10^13 particles per second. The energy spectrum of the positrons has a maximum at 0.5 MeV and a half-width of about 1 MeV (Plokhoi et al., Jpn. J. Appl. Phys. 38 (1999) 604). The traditional methods of positron moderation have an efficiency ε = N_s/N_f of 10^-4 (metallic moderators) to 10^-2 (solid rare gas moderators) (Mills and Gullikson, Appl. Phys. Lett. 49 (1986) 1121). The high flux of primary positrons restricts the choice to a tungsten moderator that has ε ≈ 10^-4 only (Schultz, Nucl. Instr. and Meth. B 30 (1988) 94). The aim of our pr...

  7. MEDIUM FORMAT CAMERA EVALUATION BASED ON THE LATEST PHASE ONE TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    T. Tölg

    2016-06-01

    In early 2016, Phase One Industrial launched a new high resolution camera with a 100 MP CMOS sensor. CCD sensors excel at ISOs up to 200, but in lower light conditions, exposure time must be increased and Forward Motion Compensation (FMC) has to be employed to avoid smearing the images. The CMOS sensor has an ISO range of up to 6400, which enables short exposures instead of using FMC. This paper aims to evaluate the strengths of each of the sensor types based on real missions over a test field in Speyer, Germany, used for airborne camera calibration. The test field area has about 30 Ground Control Points (GCPs), which enable a perfect scenario for a proper geometric evaluation of the cameras. The test field includes both a Siemens star and scale bars to show any blurring caused by forward motion. The result of the comparison showed that both cameras offer high accuracy photogrammetric results with post processing, including triangulation, calibration, orthophoto and DEM generation. The forward motion effect can be compensated by a fast shutter speed and a higher ISO range of the CMOS-based camera. The results showed no significant differences between cameras.

  8. Medium Format Camera Evaluation Based on the Latest Phase One Technology

    Science.gov (United States)

    Tölg, T.; Kemper, G.; Kalinski, D.

    2016-06-01

    In early 2016, Phase One Industrial launched a new high resolution camera with a 100 MP CMOS sensor. CCD sensors excel at ISOs up to 200, but in lower light conditions, exposure time must be increased and Forward Motion Compensation (FMC) has to be employed to avoid smearing the images. The CMOS sensor has an ISO range of up to 6400, which enables short exposures instead of using FMC. This paper aims to evaluate the strengths of each of the sensor types based on real missions over a test field in Speyer, Germany, used for airborne camera calibration. The test field area has about 30 Ground Control Points (GCPs), which enable a perfect scenario for a proper geometric evaluation of the cameras. The test field includes both a Siemens star and scale bars to show any blurring caused by forward motion. The result of the comparison showed that both cameras offer high accuracy photogrammetric results with post processing, including triangulation, calibration, orthophoto and DEM generation. The forward motion effect can be compensated by a fast shutter speed and a higher ISO range of the CMOS-based camera. The results showed no significant differences between cameras.
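
    The trade-off between FMC and a faster shutter comes down to simple arithmetic: forward-motion smear in pixels is ground speed times exposure time divided by the ground sample distance (GSD). The numbers below are illustrative, not taken from the paper.

```python
def forward_motion_blur_px(ground_speed_mps, exposure_s, gsd_m):
    """Image smear, in pixels, caused by aircraft forward motion during
    the exposure: smear = v * t_exp / GSD."""
    return ground_speed_mps * exposure_s / gsd_m

# At 60 m/s ground speed and 5 cm GSD, a 1/125 s exposure smears ~9.6 px
# (FMC territory), while 1/2000 s at a higher ISO keeps it at ~0.6 px.
slow = forward_motion_blur_px(60.0, 1 / 125, 0.05)
fast = forward_motion_blur_px(60.0, 1 / 2000, 0.05)
```

    This is why a CMOS sensor usable at ISO 6400 can trade exposure time against gain and dispense with mechanical FMC.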

  9. Development of an angled Si-PM-based detector unit for positron emission mammography (PEM) system

    Energy Technology Data Exchange (ETDEWEB)

    Nakanishi, Kouhei, E-mail: nakanishi.kouhei@c.mbox.nagoya-u.ac.jp; Yamamoto, Seiichi

    2016-11-21

    Positron emission mammography (PEM) systems have higher sensitivity than clinical whole body PET systems because they have a smaller ring diameter. However, the spatial resolution of PEM systems is not high enough to detect early stage breast cancer. To solve this problem, we developed a silicon photomultiplier (Si-PM) based detector unit for the development of a PEM system. Since a Si-PM channel is small, Si-PMs can resolve small scintillator pixels to improve the spatial resolution. Also, Si-PM based detectors have inherently high timing resolution and are able to reduce the random coincidence events by reducing the time window. We used 1.5×1.9×15 mm LGSO scintillation pixels and arranged them in an 8×24 matrix to form scintillator blocks. Four scintillator blocks were optically coupled to Si-PM arrays with an angled light guide to form a detector unit. Since the light guide has angles of 5.625°, we can arrange 64 scintillator blocks in a nearly circular shape (a regular 64-sided polygon) using 16 detector units. We clearly resolved the pixels of the scintillator blocks in a 2-dimensional position histogram, where the averages of the peak-to-valley ratios (P/Vs) were 3.7±0.3 and 5.7±0.8 in the transverse and axial directions, respectively. The average energy resolution was 14.2±2.1% full-width at half-maximum (FWHM). By including the temperature-dependent gain control electronics, the photo-peak channel shifts were kept within ±1.5% over the temperature range from 23 °C to 28 °C. Given these results, in addition to the potentially high timing performance of Si-PM based detectors, our developed detector unit is promising for the development of a high-resolution PEM system.

  10. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    Science.gov (United States)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  11. An Airborne Multispectral Imaging System Based on Two Consumer-Grade Cameras for Agricultural Remote Sensing

    Directory of Open Access Journals (Sweden)

    Chenghai Yang

    2014-06-01

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One camera captures normal color images, while the other is modified to obtain near-infrared (NIR) images. The color camera is also equipped with a GPS receiver to allow geotagged images. A remote control is used to trigger both cameras simultaneously. Images are stored in 14-bit RAW and 8-bit JPEG files in CompactFlash cards. The second-order transformation was used to align the color and NIR images to achieve subpixel alignment in four-band images. The imaging system was tested under various flight and land cover conditions and optimal camera settings were determined for airborne image acquisition. Images were captured at altitudes of 305–3050 m (1000–10,000 ft) and pixel sizes of 0.1–1.0 m were achieved. Four practical application examples are presented to illustrate how the imaging system was used to estimate cotton canopy cover, detect cotton root rot, and map henbit and giant reed infestations. Preliminary analysis of example images has shown that this system has potential for crop condition assessment, pest detection, and other agricultural applications.
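
    A second-order (quadratic) polynomial transformation like the one used for band alignment can be fit to matched control points by least squares. This is a generic sketch of the technique, not the authors' processing chain; the function names are illustrative.

```python
import numpy as np

BASIS = lambda x, y: np.column_stack(
    [np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_second_order_transform(src, dst):
    """Fit a 2-D second-order polynomial mapping (x, y) -> (x', y')
    from matched control points by least squares. Returns one
    coefficient vector per output coordinate over the basis
    [1, x, y, x*y, x^2, y^2]."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = BASIS(src[:, 0], src[:, 1])
    cx, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return cx, cy

def apply_transform(cx, cy, pts):
    """Map points through the fitted polynomial transform."""
    pts = np.asarray(pts, float)
    A = BASIS(pts[:, 0], pts[:, 1])
    return np.column_stack([A @ cx, A @ cy])
```

    With at least six well-distributed control points the quadratic terms can absorb lens distortion differences between the two cameras, which is what makes subpixel band-to-band registration feasible.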

  12. Automated ethernet-based test setup for long wave infrared camera analysis and algorithm evaluation

    Science.gov (United States)

    Edeler, Torsten; Ohliger, Kevin; Lawrenz, Sönke; Hussmann, Stephan

    2009-06-01

    In this paper we consider a new way for automated camera calibration and specification. The proposed setup is optimized for working with uncooled long wave infrared (thermal) cameras, although the concept itself is not restricted to those cameras. Every component of the setup, such as the black body source, climate chamber, remote power switch, and the camera itself, is connected to a network via Ethernet, and a Windows XP workstation controls all components by use of the Tcl script language. Beside the job of communicating with the components, the script tool is also capable of running Matlab code via the Matlab kernel. Data exchange during the measurement is possible and offers a variety of advantages, from a drastic reduction of the amount of data to an enormous speedup of the measuring procedure due to data analysis during measurement. A parameter-based software framework is presented to create generic test cases, where modification of the test scenario does not require any programming skills. In the second part of the paper the measurement results of a self-developed GigE Vision thermal camera are presented and correction algorithms, providing high quality image output, are shown. These algorithms are fully implemented in the FPGA of the camera to provide real-time processing while maintaining GigE Vision as the standard transmission protocol serving as an interface to arbitrary software tools. Artefacts taken into account are spatial noise, defective pixels and offset drift due to self-heating after power on.
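
    One of the standard corrections such a black-body setup can calibrate for uncooled LWIR sensors is a two-point non-uniformity correction (NUC), with per-pixel gain and offset derived from flat-field frames against references at two known temperatures. The sketch below is a generic illustration of the technique, not the camera's FPGA implementation.

```python
import numpy as np

def two_point_nuc(raw, cold_ref, hot_ref, t_cold, t_hot):
    """Two-point non-uniformity correction: per-pixel gain and offset
    are derived from two flat-field frames taken against black-body
    references at temperatures t_cold and t_hot, then applied to map
    raw counts to a uniform temperature-linear response."""
    gain = (t_hot - t_cold) / (hot_ref - cold_ref)   # per-pixel gain
    offset = t_cold - gain * cold_ref                # per-pixel offset
    return gain * raw + offset
```

    Because each pixel's linear response is inverted individually, fixed-pattern (spatial) noise cancels exactly for a sensor that is linear between the two reference temperatures; offset drift from self-heating requires periodic re-referencing on top of this.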

  13. Simulation-based camera navigation training in laparoscopy-a randomized trial.

    Science.gov (United States)

    Nilsson, Cecilia; Sorensen, Jette Led; Konge, Lars; Westen, Mikkel; Stadeager, Morten; Ottesen, Bent; Bjerrum, Flemming

    2017-05-01

    Inexperienced operating assistants are often tasked with the important role of handling camera navigation during laparoscopic surgery. Incorrect handling can lead to poor visualization, increased operating time, and frustration for the operating surgeon; all of these can compromise patient safety. The objectives of this trial were to examine how to train laparoscopic camera navigation and to explore the transfer of skills to the operating room. A randomized, single-center superiority trial with three groups: The first group practiced simulation-based camera navigation tasks (camera group), the second group practiced performing a simulation-based cholecystectomy (procedure group), and the third group received no training (control group). Participants were surgical novices without prior laparoscopic experience. The primary outcome was assessment of camera navigation skills during a laparoscopic cholecystectomy. The secondary outcome was technical skills after training, using a previously developed model for testing camera navigational skills. The exploratory outcome measured participants' motivation toward the task as an operating assistant. Thirty-six participants were randomized. No significant difference was found in the primary outcome between the three groups (p = 0.279). The secondary outcome showed no significant difference between the intervention groups, with total times of 167 s (95% CI, 118-217) and 194 s (95% CI, 152-236) for the camera group and the procedure group, respectively (p = 0.369). Both intervention groups were significantly faster than the control group, 307 s (95% CI, 202-412), p = 0.018 and p = 0.045, respectively. On the exploratory outcome, the control group had a higher score on two dimensions: interest/enjoyment (p = 0.030) and perceived choice (p = 0.033). Simulation-based training improves the technical skills required for camera navigation, regardless of whether participants practice camera navigation or the procedure itself. Transfer to the

  14. Effect of motion artifact on digital camera based heart rate measurement.

    Science.gov (United States)

    Hassan, M A; Malik, A S; Saad, N; Fofi, D; Meriaudeau, F

    2017-07-01

    Remote health monitoring is an emerging field in biomedical technology. Digital camera based heart rate measurement is a recent development which would make remote health monitoring reliable and sustainable in the future. This paper presents an investigation of the effect of motion artifacts on digital camera-based heart rate measurement. The paper discusses in detail the principles and effects of motion artifacts on photoplethysmography signals. An experiment is conducted using the publicly available MAHNOB-HCI database. We have investigated the effects of static scenarios, scenarios involving rigid motion and scenarios involving non-rigid motion. The experiment was tested on state-of-the-art digital camera based heart rate measurement methods. The results showed the effectiveness of the methods and provided a direction for overcoming/minimizing the effect of motion artifacts in future research.
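
    A minimal sketch of camera-based heart rate estimation, under the usual assumption that the pulse modulates the mean green-channel intensity of skin pixels, detrends the per-frame signal and picks the dominant FFT peak in the physiological band. This is a baseline illustration, not one of the evaluated state-of-the-art methods, and it is exactly the step that motion artifacts corrupt.

```python
import numpy as np

def heart_rate_bpm(green_means, fps):
    """Estimate heart rate from the mean green-channel value of each
    video frame (camera-based photoplethysmography). Remove the mean,
    take the FFT, and pick the dominant frequency in the 0.7-4 Hz
    (42-240 bpm) physiological band."""
    x = np.asarray(green_means, float)
    x = x - x.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```

    Rigid and non-rigid motion inject broadband energy into this spectrum, which is why the peak-picking step fails without artifact suppression.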

  15. Design of a Polarised Positron Source Based on Laser Compton Scattering

    CERN Document Server

    Araki, S; Honda, Y; Kurihara, Y; Kuriki, M; Okugi, T; Omori, T; Taniguchi, T; Terunuma, N; Urakawa, J; Artru, X; Chevallier, M; Strakhovenko, V M; Bulyak, E; Gladkikh, P; Mönig, K; Chehab, R; Variola, A; Zomer, F; Guiducci, S; Raimondi, Pantaleo; Zimmermann, Frank; Sakaue, K; Hirose, T; Washio, M; Sasao, N; Yokoyama, H; Fukuda, M; Hirano, K; Takano, M; Takahashi, T; Sato, H; Tsunemi, A; Gao, J; Soskov, V

    2005-01-01

    We describe a scheme for producing polarised positrons at the ILC from polarised X-rays created by Compton scattering of a few-GeV electron beam off a CO2 or YAG laser. This scheme is very energy efficient, using high-finesse laser cavities in conjunction with an electron storage ring.

  16. MO-AB-206-02: Testing Gamma Cameras Based On TG177 WG Report

    Energy Technology Data Exchange (ETDEWEB)

    Halama, J. [Loyola Univ. Medical Center (United States)

    2016-06-15

    This education session will cover the physics and operation principles of gamma cameras and PET scanners. The first talk will focus on PET imaging. An overview of the principles of PET imaging will be provided, including positron decay physics and the transition from 2D to 3D imaging. More recent advances in hardware and software will be discussed, such as time-of-flight imaging and improvements in reconstruction algorithms that provide for options such as depth-of-interaction corrections. Quantitative applications of PET will be discussed, as well as the requirements for doing accurate quantitation. Relevant performance tests will also be described. Learning Objectives: Be able to describe basic physics principles of PET and operation of PET scanners. Learn about recent advances in PET scanner hardware technology. Be able to describe advances in reconstruction techniques and improvements. Be able to list relevant performance tests. The second talk will focus on gamma cameras. The Nuclear Medicine subcommittee has charged a task group (TG177) to develop a report on the current state of physics testing of gamma cameras, SPECT, and SPECT/CT systems. The report makes recommendations for performance tests to be done for routine quality assurance, annual physics testing, and acceptance tests, and identifies those needed to satisfy the ACR accreditation program and The Joint Commission imaging standards. The report is also intended to be used as a manual with detailed instructions on how to perform tests under widely varying conditions. Learning Objectives: At the end of the presentation members of the audience will: Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of gamma cameras for planar imaging. Be familiar with the tests recommended for routine quality assurance, annual physics testing, and acceptance tests of SPECT systems. Be familiar with the tests of a SPECT/CT system that include the CT images

  17. Optimization of light field display-camera configuration based on display properties in spectral domain.

    Science.gov (United States)

    Bregović, Robert; Kovács, Péter Tamás; Gotchev, Atanas

    2016-02-08

    The visualization capability of a light field display is uniquely determined by its angular and spatial resolution referred to as display passband. In this paper we use a multidimensional sampling model for describing the display-camera channel. Based on the model, for a given display passband, we propose a methodology for determining the optimal distribution of ray generators in a projection-based light field display. We also discuss the required camera setup that can provide data with the necessary amount of details for such display that maximizes the visual quality and minimizes the amount of data.

  18. A calibration method based on virtual large planar target for cameras with large FOV

    Science.gov (United States)

    Yu, Lei; Han, Yangyang; Nie, Hong; Ou, Qiaofeng; Xiong, Bangshu

    2018-02-01

    In order to obtain high precision in camera calibration, a target should be large enough to cover the whole field of view (FOV). For cameras with a large FOV, using a small target will seriously reduce the precision of calibration. However, using a large target causes many difficulties in making, carrying and employing it. In order to solve this problem, a calibration method based on the virtual large planar target (VLPT), which is virtually constructed from multiple small targets (STs), is proposed for cameras with large FOV. In the VLPT-based calibration method, first, the positions and directions of the STs are changed several times to obtain a number of calibration images. Secondly, the VLPT of each calibration image is created by finding the virtual points corresponding to the feature points of the STs. Finally, the intrinsic and extrinsic parameters of the camera are calculated by using the VLPTs. Experimental results show that the proposed method can not only achieve calibration precision similar to that obtained with a large target, but also has good stability over the whole measurement area. Thus, the difficulties of accurately calibrating cameras with a large FOV can be effectively tackled by the proposed method, with good operability.

  19. Home Camera-Based Fall Detection System for the Elderly

    Directory of Open Access Journals (Sweden)

    Koldo de Miguel

    2017-12-01

    Falls are the leading cause of injury and death in elderly individuals. Unfortunately, fall detectors are typically based on wearable devices, and the elderly often forget to wear them. In addition, fall detectors based on artificial vision are not yet available on the market. In this paper, we present a new low-cost fall detector for smart homes based on artificial vision algorithms. Our detector combines several algorithms (background subtraction, Kalman filtering and optical flow) as input to a machine learning algorithm with high detection accuracy. Tests conducted on over 50 different fall videos have shown a detection ratio of greater than 96%.

  20. Home Camera-Based Fall Detection System for the Elderly.

    Science.gov (United States)

    de Miguel, Koldo; Brunete, Alberto; Hernando, Miguel; Gambao, Ernesto

    2017-12-09

    Falls are the leading cause of injury and death in elderly individuals. Unfortunately, fall detectors are typically based on wearable devices, and the elderly often forget to wear them. In addition, fall detectors based on artificial vision are not yet available on the market. In this paper, we present a new low-cost fall detector for smart homes based on artificial vision algorithms. Our detector combines several algorithms (background subtraction, Kalman filtering and optical flow) as input to a machine learning algorithm with high detection accuracy. Tests conducted on over 50 different fall videos have shown a detection ratio of greater than 96%.
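
    The kind of features such vision-based detectors feed to a classifier can be sketched from binary foreground masks (e.g. the output of background subtraction): a standing person yields a tall bounding box, while a fall produces a wide box whose top edge drops quickly. The thresholds and the simple rule below are toy stand-ins for the paper's trained machine learning classifier.

```python
import numpy as np

def fall_features(masks):
    """Per-frame features from binary foreground masks: bounding-box
    aspect ratio (width/height) and the frame-to-frame drop of the box
    top edge (in rows; image row indices grow downward)."""
    feats, prev_top = [], None
    for m in masks:
        ys, xs = np.nonzero(m)
        top = ys.min()
        aspect = (xs.max() - xs.min() + 1) / (ys.max() - ys.min() + 1)
        drop = 0 if prev_top is None else top - prev_top
        prev_top = top
        feats.append((aspect, drop))
    return feats

def is_fall(feats, aspect_thr=1.5, drop_thr=5):
    """Toy rule standing in for a trained classifier: a wide silhouette
    whose top edge dropped sharply in the same frame."""
    return any(a > aspect_thr and d > drop_thr for a, d in feats)
```

    A real system would add temporal features (e.g. optical flow magnitude, Kalman-filtered trajectories) and learn the decision boundary from labeled fall videos rather than hand-set thresholds.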

  1. Noninvasive particle sizing using camera-based diffuse reflectance spectroscopy

    DEFF Research Database (Denmark)

    Abildgaard, Otto Højager Attermann; Frisvad, Jeppe Revall; Falster, Viggo

    2016-01-01

    Diffuse reflectance measurements are useful for noninvasive inspection of optical properties such as reduced scattering and absorption coefficients. Spectroscopic analysis of these optical properties can be used for particle sizing. Systems based on optical fiber probes are commonly employed...

  2. High-resolution Compton cameras based on Si/CdTe double-sided strip detectors

    Science.gov (United States)

    Odaka, Hirokazu; Ichinohe, Yuto; Takeda, Shin'ichiro; Fukuyama, Taro; Hagino, Koichi; Saito, Shinya; Sato, Tamotsu; Sato, Goro; Watanabe, Shin; Kokubun, Motohide; Takahashi, Tadayuki; Yamaguchi, Mitsutaka; Tanaka, Takaaki; Tajima, Hiroyasu; Nakazawa, Kazuhiro; Fukazawa, Yasushi

    2012-12-01

    We have developed a new Compton camera based on silicon (Si) and cadmium telluride (CdTe) semiconductor double-sided strip detectors (DSDs). The camera consists of a 500-μm-thick Si-DSD and four layers of 750-μm-thick CdTe-DSDs, all of which have a common electrode configuration segmented into 128 strips on each side with pitches of 250 μm. In order to realize high angular resolution and to reduce the size of the detector system, a stack of DSDs with short stack pitches of 4 mm is utilized to make the camera. Taking advantage of the excellent energy and position resolutions of the semiconductor devices, the camera achieves high angular resolutions of 4.5° at 356 keV and 3.5° at 662 keV. To obtain such high resolutions together with an acceptable detection efficiency, we demonstrate data reduction methods including energy calibration using the Compton scattering continuum and depth sensing in the CdTe-DSDs. We also discuss the imaging capability of the camera and show simultaneous multi-energy imaging.
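
    Event reconstruction in any Compton camera rests on Compton kinematics: from the energy e1 deposited in the scatterer (here Si) and e2 in the absorber (CdTe), the source direction is constrained to a cone of half-angle theta with cos(theta) = 1 - m_e c^2 (1/e2 - 1/(e1 + e2)). A minimal sketch of that relation (not the authors' full reconstruction):

```python
import math

M_E_C2 = 511.0  # electron rest energy in keV

def cone_angle_deg(e1_kev, e2_kev):
    """Compton cone half-angle for an event depositing e1 in the
    scatterer and e2 in the absorber:
    cos(theta) = 1 - m_e c^2 * (1/e2 - 1/(e1 + e2))."""
    cos_t = 1.0 - M_E_C2 * (1.0 / e2_kev - 1.0 / (e1_kev + e2_kev))
    if not -1.0 <= cos_t <= 1.0:
        raise ValueError("kinematically forbidden energy pair")
    return math.degrees(math.acos(cos_t))
```

    The angular resolution quoted in the abstract is driven largely by how precisely e1 and e2 are measured, which is why the excellent energy resolution of Si and CdTe strip detectors matters.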

  3. Volumetric Diffuse Optical Tomography for Small Animals Using a CCD-Camera-Based Imaging System

    Directory of Open Access Journals (Sweden)

    Zi-Jing Lin

    2012-01-01

    We report the feasibility of three-dimensional (3D volumetric diffuse optical tomography for small animal imaging by using a CCD-camera-based imaging system with a newly developed depth compensation algorithm (DCA. Our computer simulations and laboratory phantom studies have demonstrated that the combination of a CCD camera and DCA can significantly improve the accuracy in depth localization and lead to reconstruction of 3D volumetric images. This approach may present great interests for noninvasive 3D localization of an anomaly hidden in tissue, such as a tumor or a stroke lesion, for preclinical small animal models.

  4. Mach-zehnder based optical marker/comb generator for streak camera calibration

    Science.gov (United States)

    Miller, Edward Kirk

    2015-03-03

    This disclosure is directed to a method and apparatus for generating marker and comb indicia in an optical environment using a Mach-Zehnder (M-Z) modulator. High speed recording devices are configured to record image or other data defining a high speed event. To calibrate and establish a time reference, markers (timing pulses) or a comb (a constant-frequency train of optical pulses) are imaged on a streak camera for accurate time-based calibration and time referencing. The system includes a camera, an optic signal generator which provides an optic signal to an M-Z modulator, and biasing and modulation signal generators configured to provide input to the M-Z modulator. An optical reference signal is provided to the M-Z modulator. The M-Z modulator modulates the reference signal to a higher frequency optical signal which is output through a fiber-coupled link to the streak camera.
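
    The pulse shaping behind such a comb generator follows from the Mach-Zehnder intensity transfer function T = cos^2(pi (V_drive + V_bias) / (2 V_pi)): biased at the transmission null and driven with an RF sine, the modulator emits a pulse at each drive extremum, i.e. a comb at twice the drive frequency. A hedged sketch (parameter names and values are illustrative, not from the disclosure):

```python
import numpy as np

def mz_transmission(v_drive, v_pi, v_bias):
    """Intensity transmission of a Mach-Zehnder modulator:
    T = cos^2(pi/2 * (V_drive + V_bias) / V_pi).
    With V_bias = V_pi (the null point) and a sinusoidal drive of
    amplitude V_pi, a CW optical input is carved into a train of short
    pulses at twice the drive frequency."""
    phase = 0.5 * np.pi * (v_drive + v_bias) / v_pi
    return np.cos(phase) ** 2
```

    Sweeping the drive through one RF period shows two transmission maxima, which is the frequency-doubled comb the streak camera sees.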

  5. Omnidirectional stereo vision sensor based on single camera and catoptric system.

    Science.gov (United States)

    Zhou, Fuqiang; Chai, Xinghua; Chen, Xin; Song, Ya

    2016-09-01

    An omnidirectional stereo vision sensor based on a single camera and a catoptric system is proposed. As the crucial components, one camera and two pyramid mirrors are used for imaging. Omnidirectional measurement towards different directions in the horizontal field can be performed by four pairs of virtual cameras, with perfect synchronism and improved compactness. Moreover, perspective projection invariance is ensured in the imaging process, which avoids the imaging distortion introduced by curved mirrors. In this paper, the structure model of the sensor was established and a sensor prototype was designed. The influences of the structural parameters on the field of view and the measurement accuracy were also discussed. In addition, real experiments and analyses were performed to evaluate the performance of the proposed sensor in the measurement application. The results proved the feasibility of the sensor and exhibited considerable accuracy in 3D coordinate reconstruction.

  6. Person re-identification using height-based gait in colour depth camera

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.

    2013-01-01

    We address the problem of person re-identification in colour-depth camera using the height temporal information of people. Our proposed gait-based feature corresponds to the frequency response of the height temporal information. We demonstrate that the discriminative periodic motion associated with

  7. Cost Effective Paper-Based Colorimetric Microfluidic Devices and Mobile Phone Camera Readers for the Classroom

    Science.gov (United States)

    Koesdjojo, Myra T.; Pengpumkiat, Sumate; Wu, Yuanyuan; Boonloed, Anukul; Huynh, Daniel; Remcho, Thomas P.; Remcho, Vincent T.

    2015-01-01

    We have developed a simple and direct method to fabricate paper-based microfluidic devices that can be used for a wide range of colorimetric assay applications. With these devices, assays can be performed within minutes to allow for quantitative colorimetric analysis by use of a widely accessible iPhone camera and an RGB color reader application…

  8. Smart Camera Based on Embedded HW/SW Coprocessor

    Directory of Open Access Journals (Sweden)

    Dubois Julien

    2008-01-01

    Full Text Available This paper describes an image acquisition and processing system based on a new coprocessor architecture designed for CMOS sensor imaging. The system exploits the full potential of CMOS selective-access imaging technology, because the coprocessor unit is integrated into the image acquisition loop. The acquisition and coprocessing architecture is compatible with the majority of CMOS sensors. It enables the dynamic selection of a wide variety of acquisition modes as well as the reconfiguration and implementation of high-performance image preprocessing algorithms (calibration, filtering, denoising, binarization, pattern recognition). Furthermore, processing and data transfer from the CMOS sensor to the processor can be operated simultaneously to increase achievable performance. The coprocessor architecture has been designed so as to obtain a unit that can be configured on the fly, in terms of the type and number of chained processing stages (up to 8 successive predefined preprocessing stages), during the image acquisition process, as defined by the user according to each specific application requirement. Examples of acquisition and processing performances are reported and compared to classical image acquisition systems based on standard modular PC platforms. The experimental results show a considerable increase in the achievable performance.

  9. Smart Camera Based on Embedded HW/SW Coprocessor

    Directory of Open Access Journals (Sweden)

    David Mauvilet

    2009-01-01

    Full Text Available This paper describes an image acquisition and processing system based on a new coprocessor architecture designed for CMOS sensor imaging. The system exploits the full potential of CMOS selective-access imaging technology, because the coprocessor unit is integrated into the image acquisition loop. The acquisition and coprocessing architecture is compatible with the majority of CMOS sensors. It enables the dynamic selection of a wide variety of acquisition modes as well as the reconfiguration and implementation of high-performance image preprocessing algorithms (calibration, filtering, denoising, binarization, pattern recognition). Furthermore, processing and data transfer from the CMOS sensor to the processor can be operated simultaneously to increase achievable performance. The coprocessor architecture has been designed so as to obtain a unit that can be configured on the fly, in terms of the type and number of chained processing stages (up to 8 successive predefined preprocessing stages), during the image acquisition process, as defined by the user according to each specific application requirement. Examples of acquisition and processing performances are reported and compared to classical image acquisition systems based on standard modular PC platforms. The experimental results show a considerable increase in the achievable performance.

  10. Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors

    Directory of Open Access Journals (Sweden)

    Jong Hyun Kim

    2017-05-01

    Full Text Available Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. The existing research using visible light cameras has mainly focused on methods of human detection for daytime hours when there is outside light, but human detection during nighttime hours when there is no outside light is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras or thermal cameras have been used. However, in the case of NIR illuminators, there are limitations in terms of the illumination angle and distance. There are also difficulties because the illuminator power must be adaptively adjusted depending on whether the object is close or far away. In the case of thermal cameras, their cost is still high, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but this has focused on objects at a short distance in an indoor environment or the use of video-based methods to capture multiple images and process them, which causes problems related to the increase in the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night on a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk night-time human detection database (DNHD-DB1) and two open databases (Korea advanced institute of science and technology (KAIST) and computer vision center (CVC) databases), as well as high-accuracy human detection in a variety of environments, show that the method has excellent performance compared to existing methods.

  11. Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors.

    Science.gov (United States)

    Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung

    2017-05-08

    Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. The existing research using visible light cameras has mainly focused on methods of human detection for daytime hours when there is outside light, but human detection during nighttime hours when there is no outside light is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras or thermal cameras have been used. However, in the case of NIR illuminators, there are limitations in terms of the illumination angle and distance. There are also difficulties because the illuminator power must be adaptively adjusted depending on whether the object is close or far away. In the case of thermal cameras, their cost is still high, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but this has focused on objects at a short distance in an indoor environment or the use of video-based methods to capture multiple images and process them, which causes problems related to the increase in the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night on a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk night-time human detection database (DNHD-DB1) and two open databases (Korea advanced institute of science and technology (KAIST) and computer vision center (CVC) databases), as well as high-accuracy human detection in a variety of environments, show that the method has excellent performance compared to existing methods.

  12. Camera correlation focus: an image-based focusing technique

    Science.gov (United States)

    Reynolds, Greg; Hammond, Mike; Binns, Lewis A.

    2005-05-01

    Determining the focal position of an overlay target with respect to an objective lens is an important prerequisite of overlay metrology. At best, an out-of-focus image will provide less than optimal information for metrology; focal depth for a high-NA imaging system at the required magnification is of the order of 5 microns. In most cases poor focus will lead to poor measurement performance. In some cases, being out of focus will cause apparent contrast reversal and similar effects, due to optical wavelengths (i.e. about half a micron) being used; this can cause measurement failure on some algorithms. In the very worst case, being out of focus can cause pattern recognition to fail completely, leading to a missed measurement. Previous systems to date have had one of two forms. In the first, a scan through focus is performed, selecting the optimal position using a direct, image-based focus metric, such as the high-frequency component of a Fourier transform. This always gives an optimal or near-optimal focus position, even under wide process variation, but can be time consuming, requiring a relatively large number of images to be captured for each site visited. It also requires the optimal position to be included in the range of the scan; if initial uncertainty is large, then the focus scan needs to be longer, taking even more time. The second approach is to monitor some property which has a known relationship to focus. This is often calibrated with respect to a scan through focus. On subsequent measurements the output of this secondary system is taken as a focus position. This second system may be completely separate from the imaging system; the primary requirement is only that it is coupled to the imaging system. These systems are generally fast; only one measurement per site is required, and they are typically designed so that only limited image / signal processing is required. However, such techniques are less precise or accurate than performing a scan through

  13. Development of a compact scintillator-based high-resolution Compton camera for molecular imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kishimoto, A., E-mail: daphne3h-aya@ruri.waseda.jp [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Kataoka, J.; Koide, A.; Sueoka, K.; Iwamoto, Y.; Taya, T. [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Ohsuka, S. [Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu, Shizuoka (Japan)

    2017-02-11

    The Compton camera, which images gamma-ray distributions utilizing the kinematics of Compton scattering, is a promising detector capable of imaging across a wide range of energies. In this study, we aim to construct a small-animal molecular imaging system covering a wide energy range by using the Compton camera. We developed a compact medical Compton camera based on a Ce-doped Gd{sub 3}Al{sub 2}Ga{sub 3}O{sub 12} (Ce:GAGG) scintillator and a multi-pixel photon counter (MPPC). Basic performance tests confirmed that, for 662 keV, the typical energy resolution was 7.4% (FWHM) and the angular resolution was 4.5° (FWHM). We then used the medical Compton camera to conduct imaging experiments based on a 3-D imaging reconstruction algorithm using the multi-angle data acquisition method. The results confirmed that, for a {sup 137}Cs point source at a distance of 4 cm, the image had a spatial resolution of 3.1 mm (FWHM). Furthermore, we succeeded in producing a 3-D multi-color image of different simultaneous energy sources ({sup 22}Na [511 keV], {sup 137}Cs [662 keV], and {sup 54}Mn [834 keV]).
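
Compton cameras such as the one above infer the source direction from the scattering angle implied by the two energy deposits. A minimal sketch of that kinematic step (illustrative only, not the authors' reconstruction code; it assumes the scattered photon is fully absorbed in the absorber):

```python
import math

M_E_C2 = 511.0  # electron rest energy, keV


def compton_angle_deg(e_scatter_kev, e_absorb_kev):
    """Scattering angle from the energies deposited in the scatterer
    and the absorber.

    cos(theta) = 1 - m_e c^2 * (1/E' - 1/E0), where E0 is the incident
    energy (sum of the two deposits) and E' the scattered-photon energy.
    """
    e0 = e_scatter_kev + e_absorb_kev
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_absorb_kev - 1.0 / e0)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies inconsistent with Compton kinematics")
    return math.degrees(math.acos(cos_theta))


# A 662 keV (137Cs) photon depositing 200 keV in the scatterer:
angle = compton_angle_deg(200.0, 462.0)
```

Each event then constrains the source to a cone of this half-angle around the scatter axis; overlapping many cones yields the image.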

  14. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment

    OpenAIRE

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-01-01

    This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned air vehicle (UAV) during a landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing exp...

  15. Fuzzy System-Based Target Selection for a NIR Camera-Based Gaze Tracker

    Science.gov (United States)

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Park, Kang Ryoung

    2017-01-01

    Gaze-based interaction (GBI) techniques have been a popular subject of research in the last few decades. Among other applications, GBI can be used by persons with disabilities to perform everyday tasks, as a game interface, and can play a pivotal role in the human computer interface (HCI) field. While gaze tracking systems have shown high accuracy in GBI, detecting a user’s gaze for target selection is a challenging problem that needs to be considered while using a gaze detection system. Past research has used the blinking of the eyes for this purpose as well as dwell time-based methods, but these techniques are either inconvenient for the user or require a long time for target selection. Therefore, in this paper, we propose a method for fuzzy system-based target selection for near-infrared (NIR) camera-based gaze trackers. The results of experiments performed in addition to tests of the usability and on-screen keyboard use of the proposed method show that it is better than previous methods. PMID:28420114

  16. Fuzzy System-Based Target Selection for a NIR Camera-Based Gaze Tracker.

    Science.gov (United States)

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Park, Kang Ryoung

    2017-04-14

    Gaze-based interaction (GBI) techniques have been a popular subject of research in the last few decades. Among other applications, GBI can be used by persons with disabilities to perform everyday tasks, as a game interface, and can play a pivotal role in the human computer interface (HCI) field. While gaze tracking systems have shown high accuracy in GBI, detecting a user's gaze for target selection is a challenging problem that needs to be considered while using a gaze detection system. Past research has used the blinking of the eyes for this purpose as well as dwell time-based methods, but these techniques are either inconvenient for the user or require a long time for target selection. Therefore, in this paper, we propose a method for fuzzy system-based target selection for near-infrared (NIR) camera-based gaze trackers. The results of experiments performed in addition to tests of the usability and on-screen keyboard use of the proposed method show that it is better than previous methods.

  17. Self-Calibration Method Based on Surface Micromachining of Light Transceiver Focal Plane for Optical Camera

    Directory of Open Access Journals (Sweden)

    Jin Li

    2016-10-01

    Full Text Available In remote sensing photogrammetric applications, inner orientation parameter (IOP) calibration of the remote sensing camera is a prerequisite for determining image position. However, achieving such a calibration without temporal and spatial limitations remains a crucial but unresolved issue to date. The accuracy of the IOP calibration method of a remote sensing camera determines the performance of image positioning. In this paper, we propose a high-accuracy self-calibration method without temporal and spatial limitations for remote sensing cameras. Our method is based on an auto-collimating dichroic filter combined with a surface micromachining (SM) point-source focal plane. The proposed method can autonomously complete IOP calibration without the need for outside reference targets. The SM procedure is used to manufacture a light transceiver focal plane, which integrates point sources, a splitter, and a complementary metal oxide semiconductor sensor. A dichroic filter is used to fabricate an auto-collimation light reflection element. The dichroic filter, splitter, and SM point-source focal plane are integrated into a camera to perform an integrated self-calibration. Experimental measurements confirm the effectiveness and convenience of the proposed method. Moreover, the method can achieve micrometer-level precision and can satisfactorily complete real-time calibration without temporal or spatial limitations.

  18. A Low-Cost Smartphone Sensor-Based UV Camera for Volcanic SO2 Emission Measurements

    Directory of Open Access Journals (Sweden)

    Thomas Charles Wilkes

    2017-01-01

    Full Text Available Recently, we reported on the development of low-cost ultraviolet (UV) cameras, based on the modification of sensors designed for the smartphone market. These units are built around modified Raspberry Pi cameras (PiCams; ≈USD 25), and usable system sensitivity was demonstrated in the UVA and UVB spectral regions, of relevance to a number of application areas. Here, we report on the first deployment of PiCam devices in one such field: UV remote sensing of sulphur dioxide emissions from volcanoes; such data provide important insights into magmatic processes and are applied in hazard assessments. In particular, we report on field trials on Mt. Etna, where the utility of these devices in quantifying volcanic sulphur dioxide (SO2) emissions was validated. We furthermore performed side-by-side trials of these units against the scientific-grade cameras currently used in this application, finding that the two systems gave virtually identical flux time series outputs, and that the signal-to-noise characteristics of the PiCam units appeared to be more than adequate for volcanological applications. Given the low cost of these sensors, which allows two-filter SO2 camera systems to be assembled for ≈USD 500, they could be suitable for widespread dissemination in volcanic SO2 monitoring internationally.
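
Two-filter SO2 cameras of the kind mentioned above conventionally form an apparent absorbance from an SO2-absorbed band (filter A, roughly 310 nm) and an unabsorbed band (filter B, roughly 330 nm), each ratioed against a clear-sky background image. The sketch below illustrates that standard step; it is a generic illustration, not the authors' processing pipeline:

```python
import numpy as np


def apparent_absorbance(i_a, i_a0, i_b, i_b0):
    """Apparent absorbance for a two-filter SO2 camera.

    i_a, i_b:   plume-image intensities in the absorbed (A) and
                unabsorbed (B) filter bands
    i_a0, i_b0: clear-sky background intensities in the same bands

    Ratioing band A against band B removes broadband (aerosol)
    extinction, leaving a signal roughly proportional to the SO2
    column density after calibration (e.g. with gas cells)."""
    i_a, i_a0 = np.asarray(i_a, float), np.asarray(i_a0, float)
    i_b, i_b0 = np.asarray(i_b, float), np.asarray(i_b0, float)
    return -np.log(i_a / i_a0) + np.log(i_b / i_b0)


# 20% dimming in band A only -> positive SO2 signal
aa = apparent_absorbance(0.8, 1.0, 1.0, 1.0)
```

Integrating the calibrated column densities across a plume cross-section and multiplying by plume speed then yields the emission flux.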

  19. Positron emission mammography imaging

    Energy Technology Data Exchange (ETDEWEB)

    Moses, William W.

    2003-10-02

    This paper examines current trends in Positron Emission Mammography (PEM) instrumentation and the performance tradeoffs inherent in them. The most common geometry is a pair of parallel planes of detector modules. They subtend a larger solid angle around the breast than conventional PET cameras, and so have both higher efficiency and lower cost. Extensions to this geometry include encircling the breast, measuring the depth of interaction (DOI), and dual-modality imaging (PEM and x-ray mammography, as well as PEM and x-ray guided biopsy). The ultimate utility of PEM may not be decided by instrument performance, but by biological and medical factors, such as the patient to patient variation in radiotracer uptake or the as yet undetermined role of PEM in breast cancer diagnosis and treatment.

  20. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User’s Head Movement

    OpenAIRE

    Weiyuan Pan; Dongwook Jung; Hyo Sik Yoon; Dong Eun Lee; Rizwan Ali Naqvi; Kwan Woo Lee; Kang Ryoung Park

    2016-01-01

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Based on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can be different, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without gr...

  1. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement

    National Research Council Canada - National Science Library

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    .... Nevertheless, to our best knowledge, most previous researches implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens...

  2. RELATIVE PANORAMIC CAMERA POSITION ESTIMATION FOR IMAGE-BASED VIRTUAL REALITY NETWORKS IN INDOOR ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    M. Nakagawa

    2017-09-01

    Full Text Available Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated at lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite System (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  3. Fish-eye Camera Calibration Model Based on Vector Observations and Its Application

    Directory of Open Access Journals (Sweden)

    ZHAN Yinhu

    2016-03-01

    Full Text Available A fish-eye camera calibration model is presented, the basic observations of which consist of both the half angle of view and the azimuth. The Rodrigues matrix is introduced into the model, and three Rodrigues parameters instead of Euler angles are used to represent the elements of exterior orientation in order to simplify the expressions and calculations of the observation equations. The new model is compared with the existing models based on the half-angle-of-view constraint by processing actual star-map data, and the results indicate that the new model is superior in controlling the azimuth error, while slightly inferior in constraining the error of the half angle of view. It is advised that the radial distortion parameters should first be determined by the model based on the half-angle-of-view constraint, and the other camera parameters should then be calculated by the new model.
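
As background to the Rodrigues-matrix formulation above: three Rodrigues (Gibbs) parameters g = tan(θ/2)·axis define a rotation without Euler angles. The sketch below uses the common Cayley form of the Rodrigues matrix; the paper's exact parameterization and sign conventions may differ:

```python
import numpy as np


def rodrigues_to_matrix(g):
    """Rotation matrix from three Rodrigues (Gibbs) parameters
    g = tan(theta/2) * unit_axis, via R = I + 2/(1 + g.g) (S + S^2),
    where S is the skew-symmetric matrix of g."""
    g = np.asarray(g, dtype=float)
    S = np.array([[0.0, -g[2], g[1]],
                  [g[2], 0.0, -g[0]],
                  [-g[1], g[0], 0.0]])
    return np.eye(3) + 2.0 / (1.0 + g @ g) * (S + S @ S)


# tan(45 deg) = 1 about z: a 90-degree rotation about the z-axis
R = rodrigues_to_matrix([0.0, 0.0, 1.0])
```

Note that this parameterization is singular at θ = 180° (tan(θ/2) diverges), which is rarely a problem for camera exterior orientation.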

  4. Smart Camera System-on-Chip Architecture for Real-Time Brush Based Interactive Painting Systems

    OpenAIRE

    Claesen, Luc; Vandoren, Peter; VAN LAERHOVEN, Tom; Motten, Andy; Di Fiore, Fabian; Van Reeth, Frank; Liao, Jing; Yu, Jinhui

    2012-01-01

    Interactive virtual paint systems are very useful in editing all kinds of graphics artwork. Because of the digital tracking of strokes, interactive editing operations such as save, redo, resize, etc. are possible. The structure of the artwork generated can be used for animation in artwork cartoons. A novel System-on-Chip Smart Camera architecture is presented that can be used for tracking infrared-fiber-based brushes as well as real brushes in real time. A dedicated SoC hardware implementation ...

  5. The Feasibility of Performing Particle Tracking Based Flow Measurements with Acoustic Cameras

    Science.gov (United States)

    2017-08-01

    Abstract (ERDC/CHL SR-17-1): Modern science lacks the capability to quantify flow velocity fields in turbid … transparent fluid (so the camera can observe the light reflected by the particles). Acoustic-based flow measurement equipment used in the field (e.g

  6. Camera-Based Control for Industrial Robots Using OpenCV Libraries

    Science.gov (United States)

    Seidel, Patrick A.; Böhnke, Kay

    This paper describes a control system for industrial robots whose reactions are based on the analysis of images provided by a camera mounted on top of the robot. We show that such a control system can be designed and implemented with an open-source image processing library and cheap hardware. Using one specific robot as an example, we demonstrate the structure of a possible control algorithm running on a PC and its interaction with the robot.

  7. Fall Prevention Shoes Using Camera-Based Line-Laser Obstacle Detection System

    OpenAIRE

    Tzung-Han Lin; Chi-Yun Yang; Wen-Pin Shih

    2017-01-01

    Fall prevention is an important issue particularly for the elderly. This paper proposes a camera-based line-laser obstacle detection system to prevent falls in the indoor environment. When obstacles are detected, the system will emit alarm messages to catch the attention of the user. Because the elderly spend a lot of their time at home, the proposed line-laser obstacle detection system is designed mainly for indoor applications. Our obstacle detection system casts a laser line, which passes ...

  8. Obstacle Classification and 3D Measurement in Unstructured Environments Based on ToF Cameras

    Directory of Open Access Journals (Sweden)

    Hongshan Yu

    2014-06-01

    Full Text Available Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes from the scene irrelevant regions which do not affect the robot’s movement. In the second step, regions of interest are detected and clustered as possible obstacles using both the 3D information and the intensity image obtained by the ToF camera. Consequently, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on the terrain traversability and geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with the existing obstacle recognition methods, the new approach is more accurate and efficient.

  9. Evaluation of Compton gamma camera prototype based on pixelated CdTe detectors.

    Science.gov (United States)

    Calderón, Y; Chmeissani, M; Kolstein, M; De Lorenzo, G

    2014-06-01

    A proposed Compton camera prototype based on pixelated CdTe is simulated and evaluated in order to establish its feasibility and expected performance in real laboratory tests. The system is based on module units containing a 2×4 array of square CdTe detectors of 10×10 mm² area and 2 mm thickness. The detectors are pixelated and stacked, forming a 3D detector with voxel sizes of 2×1×2 mm³. The camera performance is simulated with the Geant4-based Architecture for Medicine-Oriented Simulations (GAMOS), and the Origin Ensemble (OE) algorithm is used for the image reconstruction. The simulation shows that the camera can operate with up to 10⁴ Bq source activities with equal efficiency and is completely saturated at 10⁹ Bq. The efficiency of the system is evaluated using a simulated ¹⁸F point source phantom in the center of the Field-of-View (FOV), achieving an intrinsic efficiency of 0.4 counts per second per kilobecquerel. The spatial resolution measured from the point spread function (PSF) shows a FWHM of 1.5 mm along the direction perpendicular to the scatterer, making it possible to distinguish two points at 3 mm separation with a peak-to-valley ratio of 8.
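
For context on the quoted intrinsic efficiency of 0.4 counts per second per kilobecquerel, the conversion to a per-decay detection fraction is simple unit arithmetic (illustrative only):

```python
def counts_per_decay(cps_per_kbq):
    """Convert counts/s per kBq to counts per decay.

    1 kBq = 1000 decays/s, so 0.4 cps/kBq means the camera records
    4e-4 counts for every decay in the source."""
    return cps_per_kbq / 1000.0


eff = counts_per_decay(0.4)
```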

  10. Cramer-Rao lower bound optimization of an EM-CCD-based scintillation gamma camera.

    Science.gov (United States)

    Korevaar, Marc A N; Goorden, Marlies C; Beekman, Freek J

    2013-04-21

    Scintillation gamma cameras based on low-noise electron multiplication (EM-)CCDs can reach high spatial resolutions. For further improvement of these gamma cameras, more insight is needed into how various parameters that characterize these devices influence their performance. Here, we use the Cramer-Rao lower bound (CRLB) to investigate the sensitivity of the energy and spatial resolution of an EM-CCD-based gamma camera to several parameters. The gamma camera setup consists of a 3 mm thick CsI(Tl) scintillator optically coupled by a fiber optic plate to the E2V CCD97 EM-CCD. For this setup, the position and energy of incoming gamma photons are determined with a maximum-likelihood detection algorithm. To serve as the basis for the CRLB calculations, accurate models for the depth-dependent scintillation light distribution are derived and combined with a previously validated statistical response model for the EM-CCD. The sensitivity of the lower bounds for energy and spatial resolution to the EM gain and the depth-of-interaction (DOI) are calculated and compared to experimentally obtained values. Furthermore, calculations of the influence of the number of detected optical photons and noise sources in the image area on the energy and spatial resolution are presented. Trends predicted by CRLB calculations agree with experiments, although experimental values for spatial and energy resolution are typically a factor of 1.5 above the calculated lower bounds. Calculations and experiments both show that an intermediate EM gain setting results in the best possible spatial or energy resolution and that the spatial resolution of the gamma camera degrades rapidly as a function of the DOI. Furthermore, calculations suggest that a large improvement in gamma camera performance is achieved by an increase in the number of detected photons or a reduction of noise in the image area. A large noise reduction, as is possible with a new generation of EM-CCD electronics, may improve the
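
The dependence on the number of detected optical photons noted above follows the usual statistical scaling: for an idealized Gaussian light spot sampled by N independent photons (no readout or multiplication noise), the CRLB on the centroid estimate is σ²/N, so the resolution bound improves as 1/√N. This is a toy sketch, far simpler than the paper's full EM-CCD response model:

```python
import math


def centroid_crlb_fwhm(sigma_mm, n_photons):
    """Lower bound (FWHM, mm) on achievable spatial resolution when
    estimating the centroid of a Gaussian light distribution of width
    sigma_mm from n_photons detected photons.

    The CRLB on the centroid variance is sigma^2 / N;
    FWHM = 2.3548 * sqrt(variance)."""
    return 2.3548 * sigma_mm / math.sqrt(n_photons)


# Quadrupling the detected photons halves the resolution bound
bound_100 = centroid_crlb_fwhm(1.0, 100)
bound_400 = centroid_crlb_fwhm(1.0, 400)
```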

  11. Optical character recognition of camera-captured images based on phase features

    Science.gov (United States)

    Diaz-Escobar, Julia; Kober, Vitaly

    2015-09-01

    Nowadays, most digital information is obtained using mobile devices, especially smartphones. In particular, this brings the opportunity for optical character recognition in camera-captured images. For this reason, many recognition applications have been developed recently, such as recognition of license plates, business cards, receipts, and street signs; document classification; augmented reality; language translation; and so on. Camera-captured images are usually affected by geometric distortions, nonuniform illumination, shadow, and noise, which make the recognition task difficult for existing systems. It is well known that the Fourier phase contains much important information regardless of the Fourier magnitude. So, in this work, we propose a phase-based recognition system exploiting phase-congruency features for illumination/scale invariance. The performance of the proposed system is tested in terms of misclassifications and false alarms with the help of computer simulation.
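
A well-known illustration of why the Fourier phase is so informative for camera-captured images is phase-only correlation, which registers two images from the phase of their cross-power spectrum and is insensitive to uniform illumination changes. This is a generic sketch of that idea, not the authors' phase-congruency method:

```python
import numpy as np


def phase_correlation_peak(a, b):
    """Translation of image a relative to b from the phase-only
    cross-power spectrum; the inverse FFT peaks at the shift."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12          # discard magnitude, keep phase
    corr = np.fft.ifft2(cross).real
    return np.unravel_index(np.argmax(corr), corr.shape)


img = np.zeros((32, 32))
img[8:12, 8:14] = 1.0                        # a small bright rectangle
shifted = np.roll(img, (3, 5), axis=(0, 1))  # translate by (3, 5)
dy, dx = phase_correlation_peak(shifted, img)
```

Because the magnitude is normalized away, scaling either image (a uniform brightness change) leaves the correlation peak unchanged.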

  12. A real-time MTFC algorithm of space remote-sensing camera based on FPGA

    Science.gov (United States)

    Zhao, Liting; Huang, Gang; Lin, Zhe

    2018-01-01

    A real-time MTFC algorithm for a space remote-sensing camera, based on an FPGA, was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera is running on-orbit. The image restoration algorithm adopts a modular design. The on-orbit MTF measurement module calculates the edge spread function, the line spread function, the ESF difference operation, the normalized MTF, and the MTFC parameters. The MTFC image filtering and noise suppression module implements the filtering algorithm and effectively suppresses noise. System Generator was used to design the image processing algorithms in order to simplify the design structure of the system and the redesign process. The image gray gradient, dot sharpness, edge contrast, and mid-to-high frequencies were enhanced. The SNR of the restored image decreased by less than 1 dB compared to the original image. The image restoration system can be widely used in various fields.
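
The ESF → LSF → MTF chain described above is the standard edge-based MTF computation: differentiate the edge spread function to obtain the line spread function, Fourier-transform it, and normalize. A minimal sketch (illustrative, not the FPGA implementation):

```python
import numpy as np


def mtf_from_esf(esf):
    """MTF from a sampled edge-spread function: differentiate to get
    the line-spread function, Fourier-transform, and normalize so that
    MTF(0) = 1."""
    lsf = np.diff(np.asarray(esf, dtype=float))   # ESF derivative = LSF
    lsf /= lsf.sum()
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]


# An ideal step edge gives a delta-like LSF, hence MTF = 1 everywhere
mtf = mtf_from_esf([0.0] * 16 + [1.0] * 16)
```

MTFC then boosts the spatial frequencies where the measured MTF has rolled off, which is why the companion noise-suppression stage is needed.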

  13. Improved camera calibration method based on perpendicularity compensation for binocular stereo vision measurement system.

    Science.gov (United States)

    Jia, Zhenyuan; Yang, Jinghao; Liu, Wei; Wang, Fuji; Liu, Yang; Wang, Lingli; Fan, Chaonan; Zhao, Kai

    2015-06-15

    High-precision calibration of binocular vision systems plays an important role in accurate dimensional measurement. In this paper, an improved camera calibration method is proposed. First, an accurate intrinsic-parameter calibration method based on active vision with perpendicularity compensation is developed. Compared to previous work, this method eliminates the effect of non-perpendicularity of the camera motion on calibration accuracy. The principal point, scale factors, and distortion factors are calculated independently, thereby eliminating the strong coupling between these parameters. Second, an accurate global optimization method requiring only 5 images is presented. Calibration experiments show that the accuracy of the method can reach 99.91%.
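The distortion factors mentioned above typically follow the Brown radial model, x_d = x(1 + k1·r² + k2·r⁴), on normalized image coordinates. A minimal sketch of applying and inverting that model; the coefficients are made-up values, and the fixed-point inversion is one common choice, not necessarily the authors':

```python
def distort(x, y, k1, k2):
    """Apply the radial (Brown) distortion model to normalized coordinates."""
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * f, y * f

def undistort(xd, yd, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration (a common choice)."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / f, yd / f
    return x, y

k1, k2 = -0.2, 0.05              # illustrative coefficients
xd, yd = distort(0.3, 0.4, k1, k2)
x, y = undistort(xd, yd, k1, k2)
print(round(x, 6), round(y, 6))  # -> 0.3 0.4
```

For moderate distortion the iteration is a contraction and converges in a handful of steps, which is why this inversion is popular in calibration pipelines.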

  14. Speed Control Based on ESO for the Pitching Axis of Satellite Cameras

    Directory of Open Access Journals (Sweden)

    BingYou Liu

    2016-01-01

    Full Text Available The pitching axis is the main axis of a satellite camera and is used to control the camera's pitch attitude. A control strategy based on an extended state observer (ESO) is designed to obtain a fast, highly accurate pitching-axis control system and to eliminate disturbances during pitch adjustment. First, a sufficient condition for the stabilization of the ESO is obtained by analyzing the steady-state error of the system under a step input; parameter tuning and disturbance compensation are performed by the ESO. Second, the ESO of the speed loop is designed from the speed equation of the pitching axis, with its parameters obtained by pole assignment. In the ESO, the original state variable observes the motor angular speed and the extended state variable observes the load torque, so external load disturbances of the control system are estimated in real time. Finally, simulations are performed for no-load starting, sudden external disturbances, and sudden load changes. The results show that the ESO-based control strategy has better stability, adaptability, and robustness than a PI control strategy.

  15. Probing Positron Gravitation at HERA

    Energy Technology Data Exchange (ETDEWEB)

    Gharibyan, Vahagn

    2015-07-15

    An equality of particle and antiparticle gravitational interactions holds in general relativity and is supported by indirect observations. Here I develop a method based on high-energy Compton scattering to measure the gravitational interaction of accelerated charged particles. Within that formalism, the Compton spectra measured at HERA rule out anti-gravity for the positron and hint at a 1.3(0.2)% weaker coupling of the positron to the gravitational field relative to the electron.

  16. Development of plenoptic infrared camera using low dimensional material based photodetectors

    Science.gov (United States)

    Chen, Liangliang

    Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns and are widely used in military and civilian applications. Conventional IR cameras based on bulk semiconductor materials suffer from low frame rate, low resolution, temperature dependence and high cost, while nanotechnology based on low-dimensional materials, notably the carbon nanotube (CNT), has made much progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems, addressing the sensitivity, speed and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing: not only for the fundamental understanding of CNT photoresponse processes, but also for the development of a novel infrared-sensitive material with unique optical and electrical features. In the proposed research, a sandwich-structured sensor was fabricated between two polymer layers. The polyimide substrate isolated the sensor from background noise, and the top parylene packaging blocked humid environmental factors. At the same time, the fabrication process was optimized by dielectrophoresis with real-time electrical monitoring and by multiple annealing steps to improve fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized with digital microscopy and a precise linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to make a nano-sensor IR camera feasible. To explore more of the infrared light field, we apply compressive sensing to light-field sampling, 3-D imaging and compressive video sensing. The redundancy of the whole light field, including angular images for the light field, binocular images for the 3-D camera and temporal information of video streams, is extracted and

  17. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras.

    Science.gov (United States)

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-03-16

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use the images of human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera, including a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among various available methods for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the extracted image features from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.

  18. Digital camera and smartphone as detectors in paper-based chemiluminometric genotyping of single nucleotide polymorphisms.

    Science.gov (United States)

    Spyrou, Elena M; Kalogianni, Despina P; Tragoulias, Sotirios S; Ioannou, Penelope C; Christopoulos, Theodore K

    2016-10-01

    Chemi(bio)luminometric assays have contributed greatly to various areas of nucleic acid analysis due to their simplicity and high detectability. In this work, we present the development of chemiluminometric genotyping methods in which (a) detection is performed by using either a conventional digital camera (at ambient temperature) or a smartphone and (b) a lateral flow assay configuration is employed for even higher simplicity and suitability for point-of-care or field testing. The genotyping of the C677T single nucleotide polymorphism (SNP) of the methylenetetrahydrofolate reductase (MTHFR) gene is chosen as a model. The interrogated DNA sequence is amplified by polymerase chain reaction (PCR) followed by a primer extension reaction. The reaction products are captured through hybridization on the sensing areas (spots) of the strip. Streptavidin-horseradish peroxidase conjugate is used as a reporter along with a chemiluminogenic substrate. Detection of the emerging chemiluminescence from the sensing areas of the strip is achieved by digital camera or smartphone. For this purpose, we constructed a 3D-printed smartphone attachment that houses inexpensive lenses and converts the smartphone into a portable chemiluminescence imager. The device enables spatial discrimination of the two alleles of a SNP in a single shot by imaging of the strip, thus avoiding the need for dual labeling. The method was applied successfully to the genotyping of real clinical samples. Graphical abstract Paper-based genotyping assays using digital camera and smartphone as detectors.

  19. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras

    Science.gov (United States)

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-01-01

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use the images of human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera, including a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among various available methods for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the extracted image features from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body. PMID:28300783

  20. Camera characterization using back-propagation artificial neural network based on Munsell system

    Science.gov (United States)

    Liu, Ye; Yu, Hongfei; Shi, Junsheng

    2008-02-01

    The camera output RGB signals do not directly correspond to the tristimulus values based on the CIE standard colorimetric observer; i.e., camera RGB is a device-dependent color space. To obtain accurate color information, we need to perform color characterization, which derives a transformation between camera RGB values and CIE XYZ values. In this paper we set up a back-propagation (BP) artificial neural network to realize the mapping from camera RGB to CIE XYZ. We used the Munsell Book of Color, with 1267 patches in total, as color samples. Each patch of the Munsell Book of Color was recorded by the camera to obtain its RGB values. The patches were photographed in a light booth with a dark surround, using 0/45 viewing/illuminating geometry and a D65 illuminant; the lighting on the reference target needs to be as uniform as possible. The BP network had 5 layers (3-10-10-10-3), a structure selected through our experiments. 1000 training samples were selected randomly from the 1267 samples, and the remaining 267 samples were used for testing. Experimental results show that the mean color difference between the reproduced and target colors is 0.5 CIELAB color-difference units, smaller than the largest acceptable color difference of 2 CIELAB units. The results are adequate for applications requiring more accurate color measurement, such as medical diagnostics, cosmetics production, color reproduction across different media, etc.
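A toy version of such a characterization network can be sketched in pure Python: a small back-propagation MLP trained to map RGB to XYZ. Lacking the Munsell data, the training pairs below are synthesized from the standard sRGB-to-XYZ (D65) linear matrix, and the 3-8-3 topology, learning rate, and epoch count are illustrative choices, not the paper's 3-10-10-10-3 setup:

```python
import math, random

SRGB_TO_XYZ = [[0.4124, 0.3576, 0.1805],   # standard D65 linear matrix,
               [0.2126, 0.7152, 0.0722],   # used here only to synthesize
               [0.0193, 0.1192, 0.9505]]   # stand-in training data

def make_samples(n=40, seed=1):
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        rgb = [rng.random() for _ in range(3)]
        xyz = [sum(m * c for m, c in zip(row, rgb)) for row in SRGB_TO_XYZ]
        samples.append((rgb, xyz))
    return samples

def train(samples, hidden=8, lr=0.3, epochs=1500, seed=0):
    """One-hidden-layer BP network: sigmoid hidden units, linear output."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [[rng.uniform(-0.5, 0.5) for _ in range(hidden)] for _ in range(3)]
    b2 = [0.0] * 3
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    for _ in range(epochs):
        for rgb, xyz in samples:
            h = [sig(sum(w * c for w, c in zip(w1[j], rgb)) + b1[j])
                 for j in range(hidden)]
            out = [sum(w * hj for w, hj in zip(w2[k], h)) + b2[k] for k in range(3)]
            d = [o - t for o, t in zip(out, xyz)]            # output-layer error
            gh = [sum(d[k] * w2[k][j] for k in range(3)) * h[j] * (1.0 - h[j])
                  for j in range(hidden)]                    # hidden-layer error
            for k in range(3):                               # update output layer
                for j in range(hidden):
                    w2[k][j] -= lr * d[k] * h[j]
                b2[k] -= lr * d[k]
            for j in range(hidden):                          # update hidden layer
                for i in range(3):
                    w1[j][i] -= lr * gh[j] * rgb[i]
                b1[j] -= lr * gh[j]
    def predict(rgb):
        h = [sig(sum(w * c for w, c in zip(w1[j], rgb)) + b1[j])
             for j in range(hidden)]
        return [sum(w * hj for w, hj in zip(w2[k], h)) + b2[k] for k in range(3)]
    return predict

samples = make_samples()
predict = train(samples)
rgb0, xyz0 = samples[0]
print([round(abs(p - t), 3) for p, t in zip(predict(rgb0), xyz0)])  # small errors
```

On real Munsell patches the target mapping is nonlinear (sensor response, illuminant), which is what justifies a multi-layer network over a plain 3x3 matrix fit.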

  1. A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology

    Science.gov (United States)

    Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi

    2015-01-01

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for TSCs, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of TSCs, such as edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional MDCS (MADC II) and the proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS will further improve the accuracy of downstream photogrammetric products. PMID:25835187

  2. Camera on Vessel: A Camera-Based System to Measure Change in Water Volume in a Drinking Glass

    Directory of Open Access Journals (Sweden)

    Idowu Ayoola

    2015-09-01

    Full Text Available A major problem in chronic health care is patients’ “compliance” with new lifestyle changes, medical prescriptions, recommendations, or restrictions. Heart-failure and hemodialysis patients are usually placed on fluid restrictions due to their hemodynamic status. A holistic approach to managing fluid imbalance will incorporate the monitoring of salt-water intake, body-fluid retention, and fluid excretion in order to provide effective intervention at an early stage. Such an approach creates a need for a smart device that can monitor the drinking activities of the patient. This paper employs an empirical approach to infer the real water level in a conically shaped glass and the volume difference due to changes in water level. The method uses a low-resolution miniaturized camera, controlled by an Arduino microcontroller, to obtain images, which are processed in MATLAB. Conventional segmentation techniques (such as a Sobel filter to obtain a binary image) are applied to extract the level gradient, and an ellipsoidal fitting helps to estimate the size of the cup. The fitting (using a least-squares criterion) between the derived pixel measurements and the real measurements shows a low covariance between the estimated measurement and the mean. Comparing the estimated results with ground truth produced a variation of 3% from the mean.
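Once the cone geometry and water level are known, the level-to-volume step reduces to integrating the circular cross-section along the height. A sketch with hypothetical glass dimensions (the paper estimates the cone parameters from the ellipse fit rather than assuming them):

```python
import math

def volume_to_level(level, r_bottom, r_top, height):
    """Volume (cm^3, for lengths in cm) below the water level in a conical glass.

    The inner radius varies linearly with height:
        r(z) = r_bottom + a*z,  a = (r_top - r_bottom) / height.
    Integrating pi*r(z)^2 dz from 0 to level gives the closed form below.
    """
    a = (r_top - r_bottom) / height
    return math.pi * (r_bottom ** 2 * level
                      + r_bottom * a * level ** 2
                      + a ** 2 * level ** 3 / 3.0)

def sip_volume(level_before, level_after, r_bottom, r_top, height):
    """Volume drunk between two camera-derived level readings."""
    return (volume_to_level(level_before, r_bottom, r_top, height)
            - volume_to_level(level_after, r_bottom, r_top, height))

# sanity checks: cylinder and full cone reduce to the textbook formulas
print(round(volume_to_level(10.0, 3.0, 3.0, 10.0), 3))   # pi*r^2*h  = 282.743
print(round(volume_to_level(12.0, 0.0, 3.0, 12.0), 3))   # pi*r^2*h/3 = 113.097
```

The same closed form also shows why equal level drops near the rim of a widening glass correspond to larger sips than drops near the base.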

  3. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration

    National Research Council Canada - National Science Library

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    .... This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object...

  4. A clinical gamma camera-based pinhole collimated system for high resolution small animal SPECT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Mejia, J.; Galvis-Alonso, O.Y., E-mail: mejia_famerp@yahoo.com.b [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Dept. de Biologia Molecular; Castro, A.A. de; Simoes, M.V. [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Dept. de Clinica Medica; Leite, J.P. [Universidade de Sao Paulo (FMRP/USP), Ribeirao Preto, SP (Brazil). Fac. de Medicina. Dept. de Neurociencias e Ciencias do Comportamento; Braga, J. [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil). Div. de Astrofisica

    2010-11-15

    The main objective of the present study was to upgrade a clinical gamma camera to obtain high resolution tomographic images of small animal organs. The system is based on a clinical gamma camera to which we have adapted a special-purpose pinhole collimator and a device for positioning and rotating the target based on a computer-controlled step motor. We developed a software tool to reconstruct the target's three-dimensional distribution of emission from a set of planar projections, based on the maximum likelihood algorithm. We present details on the hardware and software implementation. We imaged phantoms and heart and kidneys of rats. When using pinhole collimators, the spatial resolution and sensitivity of the imaging system depend on parameters such as the detector-to-collimator and detector-to-target distances and pinhole diameter. In this study, we reached an object voxel size of 0.6 mm and spatial resolution better than 2.4 and 1.7 mm full width at half maximum when 1.5- and 1.0-mm diameter pinholes were used, respectively. Appropriate sensitivity to study the target of interest was attained in both cases. Additionally, we show that as few as 12 projections are sufficient to attain good quality reconstructions, a result that implies a significant reduction of acquisition time and opens the possibility for radiotracer dynamic studies. In conclusion, a high resolution single photon emission computed tomography (SPECT) system was developed using a commercial clinical gamma camera, allowing the acquisition of detailed volumetric images of small animal organs. This type of system has important implications for research areas such as Cardiology, Neurology or Oncology. (author)
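The stated dependence of resolution and sensitivity on pinhole geometry follows the standard first-order pinhole-collimator relations found in nuclear-medicine physics texts; the sketch below uses those textbook formulas with made-up distances, not the authors' exact setup:

```python
import math

def pinhole_performance(d_eff_mm, intrinsic_mm, det_to_pinhole_mm, pinhole_to_obj_mm):
    """First-order pinhole SPECT figures of merit.

    magnification   m  = l / b
    geometric res.  Rg = d_eff * (l + b) / l
    system res.     Rs = sqrt(Rg^2 + (Ri / m)^2)
    where l = pinhole-to-detector and b = pinhole-to-object distance,
    d_eff = effective pinhole diameter, Ri = intrinsic detector resolution.
    """
    l, b = det_to_pinhole_mm, pinhole_to_obj_mm
    m = l / b                                   # minification/magnification
    r_geo = d_eff_mm * (l + b) / l              # geometric (pinhole) resolution
    r_sys = math.sqrt(r_geo ** 2 + (intrinsic_mm / m) ** 2)
    return m, r_sys

# illustrative numbers: 1 mm pinhole, 3.5 mm intrinsic resolution,
# detector 200 mm behind the pinhole, object 50 mm in front of it
m, r_sys = pinhole_performance(1.0, 3.5, 200.0, 50.0)
print(m, round(r_sys, 2))   # -> 4.0 1.53
```

The example shows the effect the abstract exploits: with a 4x magnification, the clinical camera's coarse intrinsic resolution contributes less than 1 mm at the object, so sub-2-mm small-animal imaging becomes possible.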

  5. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera

    Science.gov (United States)

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-01-01

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments. PMID:26404284
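The optimization step above scores correspondences by Mahalanobis rather than Euclidean distance, so residuals along uncertain directions of a feature's covariance are penalized less. A minimal 2-D sketch with illustrative covariances:

```python
import math

def mahalanobis_2d(dx, dy, cov):
    """Mahalanobis distance of a residual (dx, dy) under a 2x2 covariance."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    # quadratic form [dx dy] * cov^-1 * [dx dy]^T, via the closed-form 2x2 inverse
    q = (d * dx * dx - (b + c) * dx * dy + a * dy * dy) / det
    return math.sqrt(q)

# with the identity covariance it reduces to the Euclidean distance
print(mahalanobis_2d(3.0, 4.0, [[1.0, 0.0], [0.0, 1.0]]))   # -> 5.0
# the same residual along a high-variance (uncertain) axis counts for less
print(mahalanobis_2d(4.0, 0.0, [[16.0, 0.0], [0.0, 1.0]]))  # -> 1.0
```

Summing these squared distances over all 3D-to-2D matches gives exactly the kind of uncertainty-weighted cost the pose refinement minimizes.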

  6. New Stereo Vision Digital Camera System for Simultaneous Measurement of Cloud Base Height and Atmospheric Visibility

    Science.gov (United States)

    Janeiro, F. M.; Carretas, F.; Palma, N.; Ramos, P. M.; Wagner, F.

    2013-12-01

    Clouds play an important role in many aspects of everyday life. They affect both the local weather and the global climate and are an important parameter in climate change studies. Cloud parameters are also important for weather prediction models which make use of actual measurements. It is thus important to have low-cost instrumentation that can be deployed in the field to measure those parameters. Such instruments should also be automated and robust, since they may be deployed in remote places and be subject to adverse weather conditions. Although clouds are very important in environmental systems, they are also an essential component of airplane safety when visual flight rules (VFR) are enforced, such as in most small aerodromes where it is not economically viable to install instruments for assisted flying. Under VFR there are strict limits on the height of the cloud base, cloud cover and atmospheric visibility that ensure the safety of the pilots and planes. Although there are instruments on the market to measure those parameters, their relatively high cost makes them unavailable in many local aerodromes. In this work we present a new prototype which has been recently developed and deployed in a local aerodrome as a proof of concept. It is composed of two digital cameras that capture photographs of the sky and allow the measurement of the cloud height from the parallax effect. The new development consists of a new geometry which allows the simultaneous measurement of cloud base height, wind speed at cloud base height and atmospheric visibility, which was not previously possible with only two cameras. The new orientation of the cameras comes at the cost of a more complex geometry for measuring the cloud base height. The atmospheric visibility is calculated from the Lambert-Beer law after measuring the contrast between a set of dark objects and the background sky. The prototype includes the latest hardware developments that
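In the simplest configuration the parallax measurement follows the usual stereo relation h = B·f/d: two parallel upward-looking cameras separated by baseline B see a cloud feature displaced by disparity d on the sensor. A sketch with illustrative numbers (the deployed prototype uses a more complex tilted geometry):

```python
def cloud_base_height_m(baseline_m, focal_mm, pixel_um, disparity_px):
    """Cloud base height from stereo parallax, parallel upward-looking cameras."""
    disparity_mm = disparity_px * pixel_um / 1000.0   # pixels -> mm on the sensor
    return baseline_m * focal_mm / disparity_mm

# 20 m baseline, 8 mm lens, 4 um pixels, 10 px measured disparity
h = cloud_base_height_m(20.0, 8.0, 4.0, 10.0)
print(round(h, 1))   # -> 4000.0 (metres)
```

The inverse dependence on disparity also explains the instrument's limits: high clouds produce sub-pixel disparities unless the baseline or focal length is increased.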

  7. Comparison of the temperature accuracy between smart phone based and high-end thermal cameras using a temperature gradient phantom

    Science.gov (United States)

    Klaessens, John H.; van der Veen, Albert; Verdaasdonk, Rudolf M.

    2017-03-01

    Recently, low-cost smartphone-based thermal cameras are being considered for use in a clinical setting for monitoring physiological temperature responses such as body temperature change, local inflammation, perfusion changes or (burn) wound healing. These thermal cameras contain uncooled micro-bolometers with an internal calibration check and have a temperature resolution of 0.1 degree. For clinical applications a fast quality measurement before use is required (absolute temperature check) and quality control (stability, repeatability, absolute temperature, absolute temperature differences) should be performed regularly. Therefore, a calibrated temperature phantom has been developed, based on thermistor heating at both ends of a black-coated metal strip, to create a controllable temperature gradient from room temperature (26 °C) up to 100 °C. The absolute temperatures on the strip are determined with 5 software-controlled PT-1000 sensors using lookup tables. In this study 3 FLIR-ONE cameras and one high-end camera were checked with this temperature phantom. The results show relatively good agreement between both the low-cost and high-end cameras and the phantom temperature gradient, with temperature differences of 1 degree up to 6 degrees between the cameras and the phantom. The measurements were repeated for absolute temperature and for temperature stability over the sensor area. Both low-cost and high-end thermal cameras measured relative temperature changes with high accuracy and absolute temperatures with constant deviations. Low-cost smartphone-based thermal cameras can be a good alternative to high-end thermal cameras for routine clinical measurements, as appropriate to the research question, provided regular calibration checks are performed for quality control.

  8. [Measuring human arm motion parameters based on high-speed camera].

    Science.gov (United States)

    Zhao, Dongbin; Zhang, Wenzeng; Sun, Zhenguo; Chen, Qiang

    2002-01-01

    A sensing method based on a high-speed camera is proposed in this paper to recognize human arm motion. A sensing system for human arm motion was established, and a fast image processing algorithm was developed to accurately extract marker positions in the image. Angle parameter results were further improved using the instantaneous joint center principle. The resulting human motion information can serve as a reference for research in medicine, sports science, bionics, and other fields. The sensing method can also be applied to other areas of human motion recognition.

  9. Transmission positron microscopes

    Energy Technology Data Exchange (ETDEWEB)

    Doyama, Masao [Teikyo University of Science and Technology, Uenohara, Yamanashi 409-0193 (Japan)]. E-mail: doyama@ntu.ac.jp; Kogure, Yoshiaki [Teikyo University of Science and Technology, Uenohara, Yamanashi 409-0193 (Japan); Inoue, Miyoshi [Teikyo University of Science and Technology, Uenohara, Yamanashi 409-0193 (Japan); Kurihara, Toshikazu [Institute of Materials Structure Science (IMSS), High Energy Accelerator, Research Organization (KEK), Ohno 1-1, Tsukuba, Ibaraki 305-0801 (Japan); Yoshiie, Toshimasa [Reactor Research Institute, Kyoto University, Noda, Kumatori, Osaka 590-0451 (Japan); Oshima, Ryuichiro [Research Institute for Advanced Science and Technology, Osaka Prefecture University (Japan); Matsuya, Miyuki [Electron Optics Laboratory (JEOL) Ltd., Musashino 3-1-2, Akishima 196-0021 (Japan)

    2006-02-28

    Immediate and near-future plans for transmission positron microscopes being built at KEK, Tsukuba, Japan, are described. The characteristic feature of this project is the remodeling of a commercial electron microscope into a positron microscope: a point source of electrons kept at a high negative voltage is replaced by a point source of positrons kept at a high positive voltage. The positional resolution of transmission positron microscopes should theoretically be the same as that of electron microscopes. Positron microscopes that rely on positron trapping always have positional ambiguity due to the diffusion of positrons.

  10. Hyperspectral Longwave Infrared Focal Plane Array and Camera Based on Quantum Well Infrared Photodetectors Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a hyperspectral camera imaging in a large number of sharp hyperspectral bands in the thermal infrared. The camera is particularly suitable for...

  11. Hyperspectral Longwave Infrared Focal Plane Array and Camera Based on Quantum Well Infrared Photodetectors Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a hyperspectral focal plane array and camera imaging in a large number of sharp hyperspectral bands in the thermal infrared. The camera is...

  12. Gamma camera based method for 131I capsule counting: an alternate method to Uptake probe method.

    Science.gov (United States)

    Menon, Biju K; Uday, Awasare S; Singh, Baghel N

    2017-11-10

    The main objective of this study was to check the validity of using a gamma camera as an alternative to the thyroid uptake probe for counting 25 uCi (0.925 MBq) and 50 uCi (1.85 MBq) 131I capsules before administration to thyroid patients. Methods: 10 sets each of 25 uCi (0.925 MBq) and 50 uCi (1.85 MBq) 131I capsules received from the Board of Radiation and Isotope Technology, Department of Atomic Energy, India (BRIT, DAE) were counted individually using a thyroid uptake probe for 10 seconds, following the institutional protocol. The capsules of each set, placed with an 8 cm gap between them, were also scanned by a scintillation gamma camera for 100 seconds. Capsules having counts within the range of mean ± 2 standard deviations (SD) were accepted for patient administration. After analysing both data sets, the correlation coefficient between the two methods was evaluated. Results: Scanned images were analysed by drawing identical ROIs around each set of 25 uCi (0.925 MBq) and 50 uCi (1.85 MBq) 131I capsules. Capsules with counts within 2 SD of the mean were accepted for patient administration. A good correlation coefficient (r > 0.95) was observed between the two sets of counts. Conclusion: The gamma camera based 131I capsule counting method is an easy and time-saving alternative to probe-based capsule counting, since a whole set of capsules can be scanned in a single acquisition. It provides uniformity information for a batch of 131I capsules and avoids the time-consuming counting of individual capsules with the thyroid uptake probe. Copyright © 2017 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
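The mean ± 2 SD acceptance rule applied to both counting methods is straightforward to express in code; the counts below are made-up numbers, not the study's data:

```python
import math

def acceptance_flags(counts, k=2.0):
    """Flag each capsule count as acceptable if within mean +/- k*SD (sample SD)."""
    n = len(counts)
    mean = sum(counts) / n
    sd = math.sqrt(sum((c - mean) ** 2 for c in counts) / (n - 1))
    return [abs(c - mean) <= k * sd for c in counts]

counts = [100, 102, 98, 101, 99, 150]          # one anomalous capsule
print(acceptance_flags(counts))   # -> [True, True, True, True, True, False]
```

In the gamma-camera workflow the same test is applied to the ROI counts of a whole set imaged in one acquisition, which is where the time saving over capsule-by-capsule probe counting comes from.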

  13. Development of NEMA-based Software for Gamma Camera Quality Control

    OpenAIRE

    Rova, Andrew; Celler, Anna; Hamarneh, Ghassan

    2007-01-01

    We have developed a cross-platform software application that implements all of the basic standardized nuclear medicine scintillation camera quality control analyses, thus serving as an independent complement to camera manufacturers’ software. Our application allows direct comparison of data and statistics from different cameras through its ability to uniformly analyze a range of file types. The program has been tested using multiple gamma cameras, and its results agree with comparable analysi...

  14. The Camera-Based Assessment Survey System (C-BASS): A towed camera platform for reef fish abundance surveys and benthic habitat characterization in the Gulf of Mexico

    Science.gov (United States)

    Lembke, Chad; Grasty, Sarah; Silverman, Alex; Broadbent, Heather; Butcher, Steven; Murawski, Steven

    2017-12-01

    An ongoing challenge for fisheries management is to provide cost-effective and timely estimates of habitat-stratified fish densities. Traditional approaches use modified commercial fishing gear (such as trawls and baited hooks) that have biases in species selectivity and may also be inappropriate for deployment in some habitat types. Underwater visual and optical approaches offer the promise of more precise and less biased assessments of relative fish abundance, as well as direct estimates of absolute fish abundance. A number of video-based approaches have been developed, and the technology for data acquisition, calibration, and synthesis has been advancing rapidly. Beginning in 2012, our group of engineers and researchers at the University of South Florida has been working towards the goal of completing large-scale, video-based surveys in the eastern Gulf of Mexico. This paper discusses design considerations and development of a towed camera system for collection of video-based data on commercially and recreationally important reef fishes and benthic habitat on the West Florida Shelf. Factors considered during development included the potential habitat types to be assessed, sea-floor bathymetry, vessel support requirements, personnel requirements, and cost-effectiveness of system components. This region-specific effort has resulted in a towed platform called the Camera-Based Assessment Survey System, or C-BASS, which has proven capable of surveying tens of kilometers of video transects per day and of producing cost-effective population estimates of reef fishes with coincident benthic habitat classification.

  15. Positron emission tomography (PET) guided glioblastoma targeting by a fullerene-based nanoplatform with fast renal clearance.

    Science.gov (United States)

    Peng, Yayun; Yang, Dongzhi; Lu, Weifei; Hu, Xiongwei; Hong, Hao; Cai, Ting

    2017-10-01

    Various carbonaceous nanomaterials, including fullerene, carbon nanotubes, graphene, and carbon dots, have attracted increasing attention during past decades for their potential applications in biological imaging and therapy. In this study, we have developed a fullerene-based tumor-targeted positron emission tomography (PET) imaging probe. Water-soluble functionalized C60 conjugates were radio-labeled with 64Cu and modified with cyclo (Arg-Gly-Asp) peptides (cRGD) for targeting of integrin αvβ3 in glioblastoma. The specificity of fluorescein-labeled C60 conjugates against cellular integrin αvβ3 was evaluated in U87MG (integrin αvβ3 positive) and MCF-7 cells (integrin αvβ3 negative) by confocal fluorescence microscopy and flow cytometry. Our results indicated that cRGD-conjugated C60 derivatives showed better cellular internalization compared with C60 derivatives without the cRGD attachment. Moreover, an interesting intra-nuclear transport of cRGD-conjugated C60 derivatives was observed in U87MG cells. In vivo serial PET studies showed preferential accumulation of cRGD-conjugated C60 derivatives in U87MG tumors. In addition, the pharmacokinetic profiles of these fullerene-based nanoparticles conjugated with cRGD and 1,4,7-triazacyclononane-1,4,7-triacetic acid (NOTA) fit well with the three-compartment model. The renal clearance of C60-based nanoparticles is remarkably fast, which makes this material very promising for safer cancer theranostic applications. Safety is one of the major concerns for nanomedicine, and nanomaterials with fast clearance profiles are highly desirable. Fullerene is a distinct type of zero-dimensional carbon nanomaterial with ultrasmall size, uniform dispersity, and versatile reactivity. Here we have developed a fullerene-based tumor-targeted positron emission tomography imaging probe using water-soluble functionalized C60 conjugates radio-labeled with 64Cu and modified with cyclo (Arg-Gly-Asp) peptides (cRGD) for

  16. Comparison of Positron Emission Tomography Quantification Using Magnetic Resonance- and Computed Tomography-Based Attenuation Correction in Physiological Tissues and Lesions: A Whole-Body Positron Emission Tomography/Magnetic Resonance Study in 66 Patients.

    Science.gov (United States)

    Seith, Ferdinand; Gatidis, Sergios; Schmidt, Holger; Bezrukov, Ilja; la Fougère, Christian; Nikolaou, Konstantin; Pfannenberg, Christina; Schwenzer, Nina

    2016-01-01

    Attenuation correction (AC) in fully integrated positron emission tomography (PET)/magnetic resonance (MR) systems plays a key role for the quantification of tracer uptake. The aim of this prospective study was to assess the accuracy of standardized uptake value (SUV) quantification using MR-based AC in direct comparison with computed tomography (CT)-based AC of the same PET data set on a large patient population. Sixty-six patients (22 female; mean [SD], 61 [11] years) were examined subsequently by means of combined PET/CT and PET/MR (11C-choline, 18F-FDG, or 68Ga-DOTATATE). Positron emission tomography images from PET/MR examinations were corrected with MR-derived AC based on tissue segmentation (PET(MR)). The same PET data were corrected using CT-based attenuation maps (μ-maps) derived from PET/CT after nonrigid registration of the CT to the MR-based μ-map (PET(MRCT)). Positron emission tomography SUVs were quantified placing regions of interest or volumes of interest in 6 different body regions as well as PET-avid lesions, respectively. The relative differences of quantitative PET values when using MR-based AC versus CT-based AC varied depending on the organs and body regions assessed. In detail, the mean (SD) relative differences of PET SUVs were as follows: -7.8% (11.5%), blood pool; -3.6% (5.8%), spleen; -4.4% (5.6%)/-4.1% (6.2%), liver; -0.6% (5.0%), muscle; -1.3% (6.3%), fat; -40.0% (18.7%), bone; 1.6% (4.4%), liver lesions; -6.2% (6.8%), bone lesions; and -1.9% (6.2%), soft tissue lesions. In 10 liver lesions, distinct overestimations greater than 5% were found (up to 10%). In addition, overestimations were found in 2 bone lesions and 1 soft tissue lesion adjacent to the lung (up to 28.0%). Results obtained using different PET tracers show that MR-based AC is accurate in most tissue types, with SUV deviations generally of less than 10%. In bone, however, underestimations can be pronounced, potentially leading to inaccurate SUV quantifications.
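    The SUV and the MR-versus-CT deviation compared above are simple ratios; a minimal sketch (with made-up numbers, not the study's data) is:

```python
def suv(activity_bq_ml, injected_dose_bq, body_weight_g):
    """Standardized uptake value: tissue activity normalized to injected dose per gram."""
    return activity_bq_ml / (injected_dose_bq / body_weight_g)

def relative_difference(suv_mr, suv_ct):
    """Percent deviation of MR-based AC relative to CT-based AC (the study's metric)."""
    return 100.0 * (suv_mr - suv_ct) / suv_ct

# Illustrative bone ROI where MR-based AC underestimates uptake
print(round(relative_difference(1.2, 2.0), 1))  # -40.0
```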

  17. Analytically based photon scatter modeling for a multipinhole cardiac SPECT camera.

    Science.gov (United States)

    Pourmoghaddas, Amir; Wells, R Glenn

    2016-11-01

    Dedicated cardiac SPECT scanners have improved performance over standard gamma cameras allowing reductions in acquisition times and/or injected activity. One approach to improving performance has been to use pinhole collimators, but this can cause position-dependent variations in attenuation, sensitivity, and spatial resolution. CT attenuation correction (AC) and an accurate system model can compensate for many of these effects; however, scatter correction (SC) remains an outstanding issue. In addition, in cameras using cadmium-zinc-telluride-based detectors, a large portion of unscattered photons is detected with reduced energy (low-energy tail). Consequently, application of energy-based SC approaches in these cameras leads to a higher increase in noise than with standard cameras due to the subtraction of true counts detected in the low-energy tail. Model-based approaches with parallel-hole collimator systems accurately calculate scatter based on the physics of photon interactions in the patient and camera and generate lower-noise estimates of scatter than energy-based SC. In this study, the accuracy of a model-based SC method was assessed using physical phantom studies on the GE-Discovery NM530c and its performance was compared to a dual energy window (DEW)-SC method. The analytical photon distribution (APD) method was used to calculate the distribution of probabilities that emitted photons will scatter in the surrounding scattering medium and be subsequently detected. APD scatter calculations for (99m)Tc-SPECT (140 ± 14 keV) were validated with point-source measurements and 15 anthropomorphic cardiac-torso phantom experiments and varying levels of extra-cardiac activity causing scatter inside the heart. The activity inserted into the myocardial compartment of the phantom was first measured using a dose calibrator. CT images were acquired on an Infinia Hawkeye (GE Healthcare) SPECT/CT and coregistered with emission data for AC. For comparison, DEW scatter

  18. Ventilation/Perfusion Positron Emission Tomography--Based Assessment of Radiation Injury to Lung.

    Science.gov (United States)

    Siva, Shankar; Hardcastle, Nicholas; Kron, Tomas; Bressel, Mathias; Callahan, Jason; MacManus, Michael P; Shaw, Mark; Plumridge, Nikki; Hicks, Rodney J; Steinfort, Daniel; Ball, David L; Hofman, Michael S

    2015-10-01

    To investigate (68)Ga-ventilation/perfusion (V/Q) positron emission tomography (PET)/computed tomography (CT) as a novel imaging modality for assessment of perfusion, ventilation, and lung density changes in the context of radiation therapy (RT). In a prospective clinical trial, 20 patients underwent 4-dimensional (4D)-V/Q PET/CT before, midway through, and 3 months after definitive lung RT. Eligible patients were prescribed 60 Gy in 30 fractions with or without concurrent chemotherapy. Functional images were registered to the RT planning 4D-CT, and isodose volumes were averaged into 10-Gy bins. Within each dose bin, relative loss in standardized uptake value (SUV) was recorded for ventilation and perfusion, and loss in air-filled fraction was recorded to assess RT-induced lung fibrosis. A dose-effect relationship was described using both linear and 2-parameter logistic fit models, and goodness of fit was assessed with the Akaike Information Criterion (AIC). A total of 179 imaging datasets were available for analysis (1 scan was unrecoverable). An almost perfectly linear negative dose-response relationship was observed for perfusion and air-filled fraction (r(2)=0.99), with the linear model providing the better fit as evaluated by AIC. Perfusion, ventilation, and the air-filled fraction decreased 0.75 ± 0.03%, 0.71 ± 0.06%, and 0.49 ± 0.02%/Gy, respectively. Within high-dose regions, higher baseline perfusion SUV was associated with greater rate of loss. At 50 Gy and 60 Gy, the rate of loss was 1.35% (P=.07) and 1.73% (P=.05) per SUV, respectively. Of 8/20 patients with peritumoral reperfusion/reventilation during treatment, 7/8 did not sustain this effect after treatment. Radiation-induced regional lung functional deficits occur in a dose-dependent manner and can be estimated by simple linear models with 4D-V/Q PET/CT imaging. These findings may inform future studies of functional lung avoidance using V/Q PET/CT. Crown Copyright © 2015. Published by Elsevier Inc. All rights reserved.
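    The linear dose-effect fit and AIC comparison described above can be sketched as follows, on synthetic dose-bin data (the ~0.75%/Gy slope comes from the abstract; the noise level and the constant comparison model are assumptions):

```python
import numpy as np

def aic(y, y_pred, k):
    """Akaike Information Criterion for a least-squares fit with k free parameters."""
    n = len(y)
    rss = float(np.sum((np.asarray(y) - np.asarray(y_pred)) ** 2))
    return n * np.log(rss / n) + 2 * k

# Synthetic dose bins (Gy) and relative perfusion loss (%), ~0.75%/Gy as reported
dose = np.arange(5, 65, 10).astype(float)
loss = 0.75 * dose + np.random.default_rng(0).normal(0, 0.5, dose.size)

slope, intercept = np.polyfit(dose, loss, 1)
aic_lin = aic(loss, slope * dose + intercept, k=2)
aic_const = aic(loss, np.full_like(loss, loss.mean()), k=1)
print(aic_lin < aic_const)  # the linear model fits better
```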

  19. Performance evaluation of a hand-held, semiconductor (CdZnTe)-based gamma camera

    CERN Document Server

    Abe, A; Lee, J; Oka, T; Shizukuishi, K; Kikuchi, T; Inoue, T; Jimbo, M; Ryuo, H; Bickel, C

    2003-01-01

    We have designed and developed a small field of view gamma camera, the eZ SCOPE, based on use of a CdZnTe semiconductor. This device utilises proprietary signal processing technology and an interface to a computer-based imaging system. The purpose of this study was to evaluate the performance of the eZ scope in comparison with currently employed gamma camera technology. The detector is a single wafer of 5-mm-thick CdZnTe that is divided into a 16 x 16 array (256 pixels). The sensitive area of the detector is a square of dimension 3.2 cm. Two parallel-hole collimators are provided with the system and have a matching (256 hole) pattern to the CdZnTe detector array: a low-energy, high-resolution parallel-hole (LEHR) collimator fabricated of lead and a low-energy, high-sensitivity parallel-hole (LEHS) collimator fabricated of tungsten. Performance measurements and the data analysis were done according to the procedures of the NEMA standard. We also studied the long-term stability of the system with continuous use...

  20. Camera phone-based quantitative analysis of C-reactive protein ELISA.

    Science.gov (United States)

    McGeough, Cathy M; O'Driscoll, Stephen

    2013-10-01

    We demonstrate the use of a camera phone as a low-cost optical detector for quantitative analysis of a high-sensitivity C-reactive protein (hs-CRP) enzyme-linked immunosorbent assay (ELISA). The camera phone was used to acquire images of the ELISA carried out in a conventional 96 well plate. Colorimetric analysis of the images was used to determine a standard curve that exhibited excellent agreement with a fitted 4-parameter logistic model (R²=0.998). The limit of detection (LOD) for this approach was determined to be 0.026 ± 0.002 μg/ml (1.035 ± 0.079 μM) CRP. Furthermore, these results were found to be in very close agreement with measurements obtained for the same assay using a laboratory-based instrument. These findings indicate the basic technology to enable low-cost quantitative home-based monitoring of an important clinical biomarker of inflammatory disease may already be present in the patient's home.
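    The 4-parameter logistic model used for the standard curve, and the back-calculation of concentration from a colorimetric reading, can be sketched as below; the parameter values are illustrative, not those fitted in the study:

```python
def four_pl(x, a, b, c, d):
    """4-parameter logistic: a = lower asymptote, d = upper, c = inflection (EC50), b = slope."""
    return d + (a - d) / (1 + (x / c) ** b)

def inverse_4pl(y, a, b, c, d):
    """Back-calculate concentration from a measured colorimetric response."""
    return c * ((a - d) / (y - d) - 1) ** (1 / b)

# Round trip with illustrative (not published) parameters
params = (0.05, 1.2, 0.5, 2.0)
y = four_pl(0.026, *params)
print(round(inverse_4pl(y, *params), 3))  # 0.026
```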

  1. Extraction of character areas from digital camera based color document images and OCR system

    Science.gov (United States)

    Chung, Y. K.; Chi, S. Y.; Bae, K. S.; Kim, K. K.; Jang, D.; Kim, K. C.; Choi, Y. W.

    2005-09-01

    When document images are obtained from digital cameras, many imaging problems must be solved to extract characters from the images reliably. Variation in illumination intensity strongly affects color values. A simply colored document image can be converted to a monochrome image by a traditional method and then binarized, but this approach is not robust to illumination changes because of the sensitivity of colors to such variation; for narrowly distributed colors, the conversion does not work well. Furthermore, when more than two colors are present, it is not easy to determine which color belongs to the characters and which to the background. This paper discusses an extraction method for colored document images that uses a color processing algorithm based on the characteristics of color features. Variation of intensities and the color distribution are used to classify character areas and background areas. A document image is segmented into several color groups, and similar color groups are merged; in the final step, only two color groups remain, one for the characters and one for the background. The extracted character areas are then entered into an optical character recognition system. This method solves a color problem that arises in traditional scanner-based OCR systems. The paper also describes the OCR system for character conversion of colored document images. Our method works on colored document images taken with cellular phones and digital cameras in the real world.

  2. Fall Prevention Shoes Using Camera-Based Line-Laser Obstacle Detection System

    Directory of Open Access Journals (Sweden)

    Tzung-Han Lin

    2017-01-01

    Full Text Available Fall prevention is an important issue particularly for the elderly. This paper proposes a camera-based line-laser obstacle detection system to prevent falls in the indoor environment. When obstacles are detected, the system will emit alarm messages to catch the attention of the user. Because the elderly spend a lot of their time at home, the proposed line-laser obstacle detection system is designed mainly for indoor applications. Our obstacle detection system casts a laser line, which passes through a horizontal plane and has a specific height to the ground. A camera, whose optical axis has a specific inclined angle to the plane, will observe the laser pattern to obtain the potential obstacles. Based on this configuration, the distance between the obstacles and the system can be further determined by a perspective transformation called homography. After conducting the experiments, critical parameters of the algorithms can be determined, and the detected obstacles can be classified into different levels of danger, causing the system to send different alarm messages.

  3. Fall Prevention Shoes Using Camera-Based Line-Laser Obstacle Detection System.

    Science.gov (United States)

    Lin, Tzung-Han; Yang, Chi-Yun; Shih, Wen-Pin

    2017-01-01

    Fall prevention is an important issue particularly for the elderly. This paper proposes a camera-based line-laser obstacle detection system to prevent falls in the indoor environment. When obstacles are detected, the system will emit alarm messages to catch the attention of the user. Because the elderly spend a lot of their time at home, the proposed line-laser obstacle detection system is designed mainly for indoor applications. Our obstacle detection system casts a laser line, which passes through a horizontal plane and has a specific height to the ground. A camera, whose optical axis has a specific inclined angle to the plane, will observe the laser pattern to obtain the potential obstacles. Based on this configuration, the distance between the obstacles and the system can be further determined by a perspective transformation called homography. After conducting the experiments, critical parameters of the algorithms can be determined, and the detected obstacles can be classified into different levels of danger, causing the system to send different alarm messages.
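    The homography step described above maps a detected laser pixel to ground-plane coordinates; a toy example follows, where the matrix is a pure scale chosen for illustration, not a calibration of the paper's camera setup:

```python
import numpy as np

def apply_homography(H, pixel):
    """Map an image point to ground-plane coordinates via a 3x3 homography."""
    x, y, w = H @ np.array([pixel[0], pixel[1], 1.0])
    return float(x / w), float(y / w)

# Illustrative homography: a pure scale of 100 px per metre on the ground plane
H = np.diag([0.01, 0.01, 1.0])
print(apply_homography(H, (250, 100)))  # (2.5, 1.0) metres
```

A calibrated system would estimate H from at least four known point correspondences between the image and the floor plane.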

  4. Camera based low-cost system to monitor hydrological parameters in small catchments

    Science.gov (United States)

    Eltner, Anette; Sardemann, Hannes; Kröhnert, Melanie; Schwalbe, Ellen

    2017-04-01

    Gauging stations to measure hydrological parameters in small catchments are usually installed at only a few selected locations. Thus, extreme events that can evolve rapidly, particularly in small catchments (especially in mountainous areas), potentially causing severe damage, are insufficiently documented, which eventually leads to difficulties in modeling and forecasting these events. A conceptual approach using a low-cost, camera-based alternative is introduced to measure water level, flow velocity and changing river cross sections. Synchronized cameras are used for 3D reconstruction of the water surface, enabling the location of flow velocity vectors measured in video sequences. Furthermore, water levels are measured automatically using an image-based approach originally developed for smartphone applications. Additional integration of a thermal sensor can increase the speed and reliability of the water level extraction. Finally, the reconstruction of the water surface as well as the surrounding topography allows for the detection of changing morphology. The introduced approach can help to increase the density of monitoring of hydrological parameters in (remote) small catchments and subsequently might be used as a warning system for extreme events.

  5. Assessment of S Values in Stylized and Voxel-Based Rat Models for Positron-Emitting Radionuclides

    NARCIS (Netherlands)

    Xie, Tianwu; Zaidi, Habib

    2013-01-01

    Positron emission tomography (PET) is a powerful tool in small animal research, enabling noninvasive quantitative imaging of biochemical processes in living subjects. However, the dosimetric characteristics of small animal PET imaging are usually overlooked, although the radiation dose may be

  6. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User’s Head Movement

    Science.gov (United States)

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research has implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest. PMID:27589768

  7. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User’s Head Movement

    Directory of Open Access Journals (Sweden)

    Weiyuan Pan

    2016-08-01

    Full Text Available Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research has implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest.
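    The viewing-angle/DOF trade-off discussed above follows from standard thin-lens optics; a sketch using the usual hyperfocal-distance formulas (the focal length, f-number and circle of confusion below are assumptions, not the study's measured values):

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.015):
    """Near/far limits of acceptable focus via the hyperfocal approximation.
    coc_mm (circle of confusion) depends on the sensor; 0.015 mm is an assumption."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm  # hyperfocal distance
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = subject_mm * (h - focal_mm) / (h - subject_mm) if subject_mm < h else float("inf")
    return near, far

# Hypothetical 12 mm lens at f/2 focused on a user 0.7 m away
near, far = depth_of_field(focal_mm=12, f_number=2.0, subject_mm=700)
print(near < 700 < far)  # the user's eye region stays within focus
```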

  8. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.

    Science.gov (United States)

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-08-30

    This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned air vehicle (UAV) during a landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flight demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for UAV automatic accurate landing in Global Position System (GPS)-denied environments.

  9. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment

    Directory of Open Access Journals (Sweden)

    Tao Yang

    2016-08-01

    Full Text Available This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned air vehicle (UAV) during a landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flight demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for UAV automatic accurate landing in Global Position System (GPS)-denied environments.

  10. Early sinkhole detection using a drone-based thermal camera and image processing

    Science.gov (United States)

    Lee, Eun Ju; Shin, Sang Young; Ko, Byoung Chul; Chang, Chunho

    2016-09-01

    Accurate advance detection of the sinkholes that are now occurring more frequently is an important way of preventing human fatalities and property damage. Unlike naturally occurring sinkholes, human-induced ones in urban areas are typically due to groundwater disturbances and leaks of water and sewage caused by large-scale construction. Although many sinkhole detection methods have been developed, it is still difficult to predict sinkholes that occur at depth. In addition, conventional methods are inappropriate for scanning a large area because of their high cost. Therefore, this paper uses a drone combined with a thermal far-infrared (FIR) camera to detect potential sinkholes over a large area based on computer vision and pattern classification techniques. To make a standard dataset, we dug eight holes of depths 0.5-2 m in increments of 0.5 m and with a maximum width of 1 m. We filmed these using the drone-based FIR camera at a height of 50 m. We first detect candidate regions by analysing cold spots in the thermal images, based on the fact that a sinkhole typically has a lower thermal energy than its background. Then, these regions are classified into sinkhole and non-sinkhole classes using a pattern classifier. In this study, we ensemble the classification results based on a light convolutional neural network (CNN) and those based on a Boosted Random Forest (BRF) with handcrafted features. We apply the proposed ensemble method successfully to sinkhole data of various sizes and depths in different environments, and show that the CNN ensemble and the BRF one with handcrafted features are better at detecting sinkholes than other classifiers or a standalone CNN.
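    The cold-spot candidate step can be approximated by simple statistical thresholding; the k-sigma rule and the synthetic 50x50 scene below are illustrative stand-ins, not the paper's actual detector:

```python
import numpy as np

def cold_spot_candidates(thermal, k=2.0):
    """Flag pixels at least k standard deviations colder than the scene mean."""
    mu, sigma = thermal.mean(), thermal.std()
    return thermal < (mu - k * sigma)

scene = np.full((50, 50), 20.0)   # background at 20 degrees C
scene[20:25, 20:25] = 12.0        # cooler sinkhole-like region
mask = cold_spot_candidates(scene)
print(mask[22, 22], mask[0, 0])  # True False
```

In the paper these candidate regions are then passed to the CNN/BRF ensemble for classification.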

  11. The cooling control system for focal plane assembly of astronomical satellite camera based on TEC

    Science.gov (United States)

    He, Yuqing; Du, Yunfei; Gao, Wei; Li, Baopeng; Fan, Xuewu; Yang, Wengang

    2017-02-01

    The dark current noise existing in the CCD of an astronomical observation camera seriously degrades its working performance; reducing the working temperature of the CCD can suppress the influence of dark current effectively. By analyzing the relationship between the CCD chip and the dark current noise, the optimum working temperature of the red-band CCD focal plane is identified as -75 °C. According to this refrigeration temperature, a cooling control system for the focal plane based on a thermoelectric cooler (TEC) was designed; the system is required to achieve high-precision temperature control of the target. In the cooling control system, an 80C32 microcontroller is used as the core processor. An advanced PID control algorithm is adopted to control the temperature of the top end of the TEC, while the bottom end of the TEC is set to a constant value according to the target temperature to assist the upper TEC in controlling the temperature. The experimental results show that the cooling system satisfies the requirements of the focal plane of the astronomical observation camera: it reaches the working temperature of -75 °C with an accuracy of ±2 °C.
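    A discrete PID loop of the kind described, driving a toy plant toward the -75 °C setpoint, might look as follows; the gains and the first-order plant model are assumptions, not the system's tuned values:

```python
class PID:
    """Minimal discrete PID controller, a sketch of the loop driving the upper TEC."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd, self.setpoint = kp, ki, kd, setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured, dt):
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.5, ki=0.05, kd=0.1, setpoint=-75.0)
temp = 20.0
for _ in range(300):
    power = pid.update(temp, dt=1.0)
    temp += 0.1 * power  # crude first-order plant response (assumption)
print(abs(temp + 75.0) < 2.0)  # settles within the stated ±2 °C spec
```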

  12. Real-time 3D measurement based on structured light illumination considering camera lens distortion

    Science.gov (United States)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, ShiLing

    2014-12-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. In traditional 3-D measurement systems, where processing time is not a key factor, camera lens distortion correction is performed directly. However, for time-critical high-speed applications, the time-consuming correction algorithm cannot be performed directly during the real-time process. To cope with this issue, here we present a novel high-speed real-time 3-D coordinate measuring technique based on fringe projection that takes camera lens distortion into account. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction, and a lookup table (LUT) method is introduced for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates is reduced by one order of magnitude, and the accuracy of the out-of-plane coordinate tripled, after the distortions are eliminated. Moreover, owing to the merit of the LUT, the 3-D reconstruction can be achieved at 92.34 frames per second.
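    The pre-computed pixel-mapping idea can be sketched as a lookup table built once from a distortion model and then applied per frame by cheap indexing; the single-coefficient radial model and nearest-neighbour remap below are simplifications, not the paper's calibration:

```python
import numpy as np

def build_undistort_lut(h, w, k1):
    """Precompute source coordinates for each corrected pixel (radial model, assumption)."""
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    cx, cy = w / 2.0, h / 2.0
    x, y = (xx - cx) / cx, (yy - cy) / cy   # normalized coordinates
    r2 = x * x + y * y
    map_x = (x * (1 + k1 * r2)) * cx + cx   # distorted source column
    map_y = (y * (1 + k1 * r2)) * cy + cy   # distorted source row
    return map_x, map_y

def remap_nearest(img, map_x, map_y):
    """Real-time correction is then a cheap table lookup per pixel."""
    xi = np.clip(np.round(map_x).astype(int), 0, img.shape[1] - 1)
    yi = np.clip(np.round(map_y).astype(int), 0, img.shape[0] - 1)
    return img[yi, xi]

img = np.arange(100.0).reshape(10, 10)
mx, my = build_undistort_lut(10, 10, k1=0.0)  # zero distortion: identity mapping
print(np.array_equal(remap_nearest(img, mx, my), img))  # True
```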

  13. Parkinson's disease assessment based on gait analysis using an innovative RGB-D camera system.

    Science.gov (United States)

    Rocha, Ana Patrícia; Choupina, Hugo; Fernandes, José Maria; Rosas, Maria José; Vaz, Rui; Silva Cunha, João Paulo

    2014-01-01

    Movement-related diseases, such as Parkinson's disease (PD), progressively affect motor function, often leading to severe motor impairment and dramatic loss of the patients' quality of life. Human motion analysis techniques can be very useful to support clinical assessment of this type of disease. In this contribution, we present an RGB-D camera (Microsoft Kinect) system and its evaluation for PD assessment. Based on skeleton data extracted from the gait of three PD patients treated with deep brain stimulation and three control subjects, several gait parameters were computed and analyzed, with the aim of discriminating between non-PD and PD subjects, as well as between two PD states (stimulator ON and OFF). We verified that among the several quantitative gait parameters, the variance of the center shoulder velocity presented the highest discriminative power to distinguish between non-PD, PD ON and PD OFF states (p = 0.004). Furthermore, we have shown that our low-cost portable system can be easily mounted in any hospital environment for evaluating patients' gait. These results demonstrate the potential of using an RGB-D camera as a PD assessment tool.
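    The discriminative parameter, the variance of a joint's velocity, is straightforward to compute from skeleton trajectories; the gait samples below are synthetic, not patient data:

```python
import numpy as np

def joint_speed_variance(positions, dt):
    """Variance of a joint's frame-to-frame speed from 3D skeleton positions,
    a sketch of the center-shoulder-velocity metric reported in the abstract."""
    p = np.asarray(positions, dtype=float)
    speeds = np.linalg.norm(np.diff(p, axis=0), axis=1) / dt
    return float(np.var(speeds))

steady = [(0.0, 1.5, 0.01 * i) for i in range(10)]                        # smooth gait
jerky = [(0.0, 1.5, 0.01 * i + (0.02 if i % 2 else 0.0)) for i in range(10)]  # irregular gait
print(joint_speed_variance(steady, dt=1 / 30) < joint_speed_variance(jerky, dt=1 / 30))  # True
```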

  14. A pixellated gamma-camera based on CdTe detectors clinical interests and performances

    CERN Document Server

    Chambron, J; Eclancher, B; Scheiber, C; Siffert, P; Hage-Ali, M; Regal, R; Kazandjian, A; Prat, V; Thomas, S; Warren, S; Matz, R; Jahnke, A; Karman, M; Pszota, A; Németh, L

    2000-01-01

    A mobile gamma camera dedicated to nuclear cardiology, based on a 15 cm x 15 cm detection matrix of 2304 CdTe detector elements, 2.83 mm x 2.83 mm x 2 mm, has been developed with European Community support by academic and industrial research centres. The intrinsic properties of the semiconductor crystals - low ionisation energy, high energy resolution, high attenuation coefficient - are potentially attractive for improving gamma-camera performance. But their use as gamma detectors for medical imaging at high resolution requires production of high-grade materials and large quantities of sophisticated read-out electronics. The decision was taken to use CdTe rather than CdZnTe, because the manufacturer (Eurorad, France) has long experience in producing high-grade materials, with good homogeneity and stability, and whose transport properties, characterised by the mobility-lifetime product, are at least 5 times greater than those of CdZnTe. The detector matrix is divided in 9 square units, each unit is composed ...

  15. Performance evaluation of a hand-held, semiconductor (CdZnTe)-based gamma camera.

    Science.gov (United States)

    Abe, Aya; Takahashi, Nobukazu; Lee, Jin; Oka, Takashi; Shizukuishi, Kazuya; Kikuchi, Tatsuya; Inoue, Tomio; Jimbo, Masao; Ryuo, Hideki; Bickel, Chris

    2003-06-01

    We have designed and developed a small field of view gamma camera, the eZ SCOPE, based on use of a CdZnTe semiconductor. This device utilises proprietary signal processing technology and an interface to a computer-based imaging system. The purpose of this study was to evaluate the performance of the eZ SCOPE in comparison with currently employed gamma camera technology. The detector is a single wafer of 5-mm-thick CdZnTe that is divided into a 16 x 16 array (256 pixels). The sensitive area of the detector is a square of dimension 3.2 cm. Two parallel-hole collimators are provided with the system and have a matching (256 hole) pattern to the CdZnTe detector array: a low-energy, high-resolution parallel-hole (LEHR) collimator fabricated of lead and a low-energy, high-sensitivity parallel-hole (LEHS) collimator fabricated of tungsten. Performance measurements and the data analysis were done according to the procedures of the NEMA standard. We also studied the long-term stability of the system with continuous use and variations in ambient temperature. Results were as follows. INTRINSIC ENERGY RESOLUTION: 8.6% FWHM at 141 keV. LINEARITY: There was excellent linearity between the observed photopeaks and the known gamma ray energies for the given isotopes. INTRINSIC SYSTEM UNIFORMITY: For the central field of view, the integral uniformity and the differential uniformity were, respectively, 1.6% and 1.3% with the LEHR collimator and 1.9% and 1.2% with the LEHS collimator. SYSTEM SPATIAL RESOLUTION: The FWHM measurements made at the surface of the collimator were 2.2 mm (LEHR) and 2.9 mm (LEHS). CONTRAST TEST: The average S/N ratios (i.e. counts in the irradiated pixel divided by counts in the surrounding pixels) for the inner ring pixels (8)/outer ring pixels (16) using the LEHS collimator and LEHR collimator were 3.2%/0.2% and 3.7%/0.3%, respectively. COUNT RATE CHARACTERISTICS: We could not determine the maximum count rate and the 20% loss count rate from these data because ...
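    The NEMA uniformity figures quoted above are conventionally defined as (max - min)/(max + min) over the field of view (integral) and over small sliding windows (differential); a sketch on a synthetic flood image:

```python
import numpy as np

def integral_uniformity(counts):
    """NEMA integral uniformity: (max - min) / (max + min) * 100 over the field of view."""
    c = np.asarray(counts, dtype=float)
    return 100.0 * (c.max() - c.min()) / (c.max() + c.min())

def differential_uniformity(counts, window=5):
    """Worst-case uniformity over sliding windows along rows and columns."""
    c = np.asarray(counts, dtype=float)
    worst = 0.0
    for arr in (c, c.T):
        for row in arr:
            for i in range(len(row) - window + 1):
                seg = row[i:i + window]
                worst = max(worst, 100.0 * (seg.max() - seg.min()) / (seg.max() + seg.min()))
    return worst

# Synthetic 16 x 16 flood image with one slightly hot pixel
flood = np.full((16, 16), 1000.0)
flood[8, 8] = 1032.0
print(round(integral_uniformity(flood), 2))  # 1.57
```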

  16. Improving the segmentation for weed recognition applications based on standard RGB cameras using optical filters

    DEFF Research Database (Denmark)

    Stigaard Laursen, Morten; Jørgensen, Rasmus Nyholm; Midtiby, Henrik

    ... diseases, weeds and fungus. A common method for interpretation is based on the leaf shape. However, in order to reliably achieve a good description of the shape, a good segmentation is required. The excess green index is one of the most common methods for green vegetation segmentation within agriculture. This method utilizes the fact that most vegetation reflects more green light than blue and red. As silicon-based image sensors are also sensitive to near-infrared light, a typical RGB camera will have a filter in place to block the near-infrared light. When using excess green, the ideal filter would be a sinc ... for green vegetation segmentation, we are able to attain a significantly improved segmentation under controlled illumination.
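    The excess green index mentioned above is typically computed on chromaticity-normalized channels as ExG = 2g - r - b; a minimal sketch:

```python
import numpy as np

def excess_green(rgb):
    """Excess green index ExG = 2g - r - b on chromaticity-normalized channels."""
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0  # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    return 2 * g - r - b

pixels = np.array([[[30, 120, 40]], [[90, 85, 95]]])  # leaf-like vs soil-like pixel
exg = excess_green(pixels)
print(exg[0, 0] > 0.2 and exg[1, 0] < 0.1)  # vegetation separates from background
```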

  17. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Full Text Available Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single-pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, which are also time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.
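
    The block-match step can be made concrete with a small sketch. This is a generic exhaustive-search block matcher; the block size, search range and SAD criterion are illustrative choices, not taken from the paper:

```python
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """Exhaustive block-matching motion estimation: for each block of the
    current frame, find the displacement into the reference frame that
    minimises the sum of absolute differences (SAD)."""
    h, w = ref.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            patch = cur[by:by + block, bx:bx + block].astype(int)
            best_sad, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = ref[y:y + block, x:x + block].astype(int)
                        sad = int(np.abs(cand - patch).sum())
                        if best_sad is None or sad < best_sad:
                            best_sad, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors
```

    For a frame that is a pure shift of the reference, the recovered vectors point back to the shifted content; real CS recovery would apply such vectors across multiple frames.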

  18. Approach to Hand Tracking and Gesture Recognition Based on Depth-Sensing Cameras and EMG Monitoring

    Directory of Open Access Journals (Sweden)

    Ondrej

    2014-06-01

    Full Text Available In this paper, a new approach for hand tracking and gesture recognition based on the Leap Motion device and surface electromyography (SEMG) is presented. The system processes depth image information together with the electrical activity produced by skeletal muscles of the forearm. The purpose of this combination is to enhance the gesture recognition rate. We first analyse conventional approaches to hand tracking and gesture recognition and summarise the results of previous research. We then give a brief overview of depth-sensing cameras, with a focus on the Leap Motion device, whose accuracy of finger recognition we test. The vision-SEMG-based system is potentially applicable to many areas of human-computer interaction.

  19. First experience with THE AUTOLAP™ SYSTEM: an image-based robotic camera steering device.

    Science.gov (United States)

    Wijsman, Paul J M; Broeders, Ivo A M J; Brenkman, Hylke J; Szold, Amir; Forgione, Antonello; Schreuder, Henk W R; Consten, Esther C J; Draaisma, Werner A; Verheijen, Paul M; Ruurda, Jelle P; Kaufman, Yuval

    2017-11-03

    Robotic camera holders for endoscopic surgery have been available for 20 years, but market penetration is low. Current camera holders are controlled by voice, joystick, eyeball tracking, or head movements; this type of steering has proven successful, but excessive disturbance of the surgical workflow has blocked widespread introduction. The AutoLap™ system (MST, Israel) uses a radically different steering concept based on image analysis, which may improve acceptance through smooth, interactive, and fast steering. These two studies were conducted to prove safe and efficient performance of the core technology. A total of 66 laparoscopic procedures were performed with the AutoLap™ by nine experienced surgeons in two multi-center studies: 41 cholecystectomies, 13 fundoplications including hiatal hernia repair, 4 endometriosis surgeries, 2 inguinal hernia repairs, and 6 (bilateral) salpingo-oophorectomies. The use of the AutoLap™ system was evaluated in terms of safety, image stability, setup and procedural time, accuracy of image-based movements, and user satisfaction. Surgical procedures were completed with the AutoLap™ system in 64 cases (97%). The mean overall setup time of the AutoLap™ system was 4 min (04:08 ± 0.10). Procedure times were not prolonged by use of the system when compared to literature averages. The reported user satisfaction was 3.85 and 3.96 on a scale of 1 to 5 in the two studies. More than 90% of the image-based movements were accurate. No system-related adverse events were recorded while using the system. Safe and efficient use of the core technology of the AutoLap™ system was demonstrated with high image stability and good surgeon satisfaction. The results support further clinical studies that will focus on usability, improved ergonomics and additional image-based features.

  20. KEK-IMSS Slow Positron Facility

    Energy Technology Data Exchange (ETDEWEB)

    Hyodo, T; Wada, K; Yagishita, A; Kosuge, T; Saito, Y; Kurihara, T; Kikuchi, T; Shirakawa, A; Sanami, T; Ikeda, M; Ohsawa, S; Kakihara, K; Shidara, T, E-mail: toshio.hyodo@kek.jp [High Energy Accelerator Research Organization (KEK) 1-1 Oho, Tsukuba, Ibaraki, 305-0801 (Japan)

    2011-12-01

    The Slow Positron Facility at the Institute of Material Structure Science (IMSS) of the High Energy Accelerator Research Organization (KEK) is a user-dedicated facility with an energy-tunable (0.1 - 35 keV) slow positron beam produced by a dedicated 55 MeV linac. The present beam line branches have been used for positronium time-of-flight (Ps-TOF) measurements, the transmission positron microscope (TPM) and the photo-detachment of Ps negative ions (Ps{sup -}). During the year 2010, a reflection high-energy positron diffraction (RHEPD) measurement station is to be installed. The slow positron generator (converter/moderator) system will be modified to obtain a higher slow positron intensity, and a new user-friendly beam line power-supply control and vacuum monitoring system is being developed. Another plan for this year is the transfer of a {sup 22}Na-based slow positron beam from RIKEN. This machine will be used for continuous slow positron beam applications and for the orientation training of those interested in beginning research with a slow positron beam.

  1. KEK-IMSS Slow Positron Facility

    Science.gov (United States)

    Hyodo, T.; Wada, K.; Yagishita, A.; Kosuge, T.; Saito, Y.; Kurihara, T.; Kikuchi, T.; Shirakawa, A.; Sanami, T.; Ikeda, M.; Ohsawa, S.; Kakihara, K.; Shidara, T.

    2011-12-01

    The Slow Positron Facility at the Institute of Material Structure Science (IMSS) of the High Energy Accelerator Research Organization (KEK) is a user-dedicated facility with an energy-tunable (0.1 - 35 keV) slow positron beam produced by a dedicated 55 MeV linac. The present beam line branches have been used for positronium time-of-flight (Ps-TOF) measurements, the transmission positron microscope (TPM) and the photo-detachment of Ps negative ions (Ps-). During the year 2010, a reflection high-energy positron diffraction (RHEPD) measurement station is to be installed. The slow positron generator (converter/moderator) system will be modified to obtain a higher slow positron intensity, and a new user-friendly beam line power-supply control and vacuum monitoring system is being developed. Another plan for this year is the transfer of a 22Na-based slow positron beam from RIKEN. This machine will be used for continuous slow positron beam applications and for the orientation training of those interested in beginning research with a slow positron beam.

  2. Positron Interaction in Polymers

    Science.gov (United States)

    Bas, Corine; Albérola, N. Dominique; Barthe, Marie-France; de Baerdemaeker, Jérémie; Dauwe, Charles

    A series of dense copolyimide membranes was characterized using positron annihilation spectroscopy. Positron annihilation lifetime spectroscopy performed on film with a classical positron source gives information on the fraction of positronium formed and on the hole size within the film. The Doppler broadening spectra (DBS) of the gamma annihilation rays, coupled with a variable-energy positron beam, allow microstructural analysis as a function of film depth. Experimental data were also linked to the chemical structure of the polyimides. It was found that the presence of fluorine atoms strongly affects the positron annihilation process and especially the DBS responses.

  3. Traffic camera system development

    Science.gov (United States)

    Hori, Toshi

    1997-04-01

    The intelligent transportation system has generated a strong need for the development of intelligent camera systems to meet the requirements of sophisticated applications, such as electronic toll collection (ETC), traffic violation detection and automatic parking lot control. In order to achieve the highest levels of accuracy in detection, these cameras must have high-speed electronic shutters, high resolution, high frame rate, and communication capabilities. A progressive scan interline transfer CCD camera, with its high-speed electronic shutter and resolution capabilities, provides the basic functions to meet the requirements of a traffic camera system. Unlike most industrial video imaging applications, traffic cameras must deal with harsh environmental conditions and an extremely wide range of light. Optical character recognition is a critical function of a modern traffic camera system, with detection and accuracy heavily dependent on the camera function. In order to operate under demanding conditions, communication and functional optimization is implemented to control the cameras from a roadside computer. The camera operates with a shutter speed faster than 1/2000 sec to capture highway traffic both day and night. Consequently, camera gain, pedestal level, shutter speed and gamma functions are controlled by a look-up table containing various parameters based on environmental conditions, particularly lighting. Lighting conditions are studied carefully to focus only on the critical license plate surface. A unique light sensor permits accurate reading under a variety of conditions, such as a sunny day, evening, twilight, or storms. These camera systems are being deployed successfully in major ETC projects throughout the world.
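
    The look-up-table control described in this record can be sketched as follows; the condition names, lux thresholds and parameter values are purely illustrative, not those of the deployed system:

```python
# Hypothetical lighting-condition lookup table in the spirit of the
# roadside-computer control described above; all values are illustrative.
CAMERA_LUT = {
    "sunny":    {"shutter_s": 1 / 4000, "gain_db": 0,  "pedestal": 10, "gamma": 0.45},
    "twilight": {"shutter_s": 1 / 2000, "gain_db": 6,  "pedestal": 15, "gamma": 0.45},
    "night":    {"shutter_s": 1 / 2000, "gain_db": 18, "pedestal": 20, "gamma": 1.0},
}

def camera_settings(lux):
    """Map a light-sensor reading (lux) to one parameter set from the LUT."""
    if lux > 10000:
        return CAMERA_LUT["sunny"]
    if lux > 10:
        return CAMERA_LUT["twilight"]
    return CAMERA_LUT["night"]
```

    A real controller would additionally smooth transitions between conditions to avoid oscillating settings at the thresholds.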

  4. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras

    Science.gov (United States)

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, access control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results with various feature extraction and fusion methods show that our approach is efficient for gender recognition, based on a comparison of recognition rates with conventional systems. PMID:26828487

  5. Handbook of camera monitor systems the automotive mirror-replacement technology based on ISO 16505

    CERN Document Server

    2016-01-01

    This handbook offers a comprehensive overview of Camera Monitor Systems (CMS), ranging from the ISO 16505-based development aspects to practical realization concepts. It offers readers a wide-ranging discussion of the science and technology of CMS as well as the human-interface factors of such systems. In addition, it serves as a single reference source with contributions from leading international CMS professionals and academic researchers. In combination with the latest version of UN Regulation No. 46, the normative framework of ISO 16505 permits CMS to replace mandatory rearview mirrors in series production vehicles. The handbook includes scientific and technical background information to further readers’ understanding of both of these regulatory and normative texts. It is a key reference in the field of automotive CMS for system designers, members of standardization and regulation committees, engineers, students and researchers.

  6. Development of a DSP-based real-time position calculation circuit for a beta camera

    CERN Document Server

    Yamamoto, S; Kanno, I

    2000-01-01

    A digital signal processor (DSP)-based position calculation circuit was developed and tested for a beta camera. The previous position calculation circuit, which employed flash analog-to-digital (A-D) converters for A-D conversion and ratio calculation, produced significant line artifacts in the image due to the differential non-linearity of the A-D converters. The new position calculation circuit uses four A-D converters for A-D conversion of the analog signals from the position-sensitive photomultiplier tube (PSPMT). The DSP reads the A-D signals and calculates the ratios X{sub a}/(X{sub a}+X{sub b}) and Y{sub a}/(Y{sub a}+Y{sub b}) on an event-by-event basis. The DSP also magnifies the image to fit the useful field of view (FOV) and rejects events outside the FOV. The line artifacts in the image were almost eliminated.
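
    The event-by-event ratio calculation performed by the DSP can be sketched in a few lines; the magnification factor and FOV bounds below are illustrative placeholders, not values from the paper:

```python
def anger_position(xa, xb, ya, yb, mag=1.0, fov=(0.1, 0.9)):
    """Per-event ratio calculation as done in software by the DSP:
    normalised coordinates Xa/(Xa+Xb) and Ya/(Ya+Yb), optionally
    magnified about the centre, with events outside the useful FOV
    rejected (returned as None)."""
    sx, sy = xa + xb, ya + yb
    if sx == 0 or sy == 0:
        return None  # no signal on one axis: reject the event
    x = 0.5 + (xa / sx - 0.5) * mag
    y = 0.5 + (ya / sy - 0.5) * mag
    lo, hi = fov
    if not (lo <= x <= hi and lo <= y <= hi):
        return None  # event outside the useful field of view
    return x, y
```

    Doing this division in software, rather than with flash A-D converter ratio hardware, is what removes the differential-non-linearity line artifacts described above.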

  7. Ontological Representation of Light Wave Camera Data to Support Vision-Based AmI

    Directory of Open Access Journals (Sweden)

    José Manuel Molina

    2012-09-01

    Full Text Available Recent advances in technologies for capturing video data have opened a vast amount of new application areas in visual sensor networks. Among them, the incorporation of light wave cameras in Ambient Intelligence (AmI) environments provides more accurate tracking capabilities for activity recognition. Although the performance of tracking algorithms has quickly improved, the symbolic models used to represent the resulting knowledge have not yet been adapted to smart environments. This representational gap prevents taking advantage of the semantic quality of the information provided by new sensors. This paper advocates the introduction of a part-based representational level in cognitive-based systems in order to accurately represent the knowledge delivered by these novel sensors. The paper also reviews the theoretical and practical issues in part-whole relationships, proposing a specific taxonomy for computer vision approaches. General part-based patterns for the human body and transitive part-based representation and inference are incorporated into a previous ontology-based framework to enhance scene interpretation in the area of video-based AmI. The advantages and new features of the model are demonstrated in a Social Signal Processing (SSP) application for the elaboration of live market research.

  8. Development of a non-delay line constant fraction discriminator based on the Padé approximant for time-of-flight positron emission tomography scanners

    Science.gov (United States)

    Kim, S. Y.; Ko, G. B.; Kwon, S. I.; Lee, J. S.

    2015-01-01

    In positron emission tomography, the constant fraction discriminator (CFD) circuit is used to acquire accurate arrival times for the annihilation photons with minimum sensitivity to time walk. As the number of readout channels increases, it becomes difficult to use conventional CFDs because of the large amount of space required for the delay line part of the circuit. To make the CFD compact, flexible, and easily controllable, a non-delay-line CFD based on the Padé approximant is proposed. The non-delay-line CFD developed in this study is shown to have timing performance that is similar to that of a conventional delay-line-based CFD in terms of the coincidence resolving time of a fast photomultiplier tube detector. This CFD can easily be applied to various positron emission tomography system designs that contain high-density detectors with multi-channel structures.
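
    The CFD principle behind the circuit can be illustrated with a small digital sketch. Note that this sketch uses a plain sample delay; the paper's contribution is precisely to replace that delay line with a Padé-approximant filter, which is not reproduced here:

```python
import numpy as np

def cfd_timing(s, frac=0.3, delay=4):
    """Classic digital CFD: form the bipolar signal
    b[n] = frac*s[n] - s[n-delay] and locate its zero crossing,
    which gives an arrival time independent of pulse amplitude
    (i.e. free of time walk)."""
    s = np.asarray(s, dtype=float)
    b = frac * s
    b[delay:] -= s[:-delay]
    # find the first positive-to-negative crossing after the delay
    for n in range(delay, len(b) - 1):
        if b[n] > 0 and b[n + 1] <= 0:
            return n + b[n] / (b[n] - b[n + 1])  # linear interpolation
    return None
```

    Because the bipolar signal scales linearly with the pulse, the interpolated crossing is identical for small and large pulses of the same shape, which is the defining property of a CFD.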

  9. Applications of a streak-camera-based imager with simultaneous high space and time resolution

    Science.gov (United States)

    Klick, David I.; Knight, Frederick K.

    1993-01-01

    A high-speed imaging device has been built that is capable of recording several hundred images over a time span of 25 to 400 ns. The imager is based on a streak camera, which provides both spatial and temporal resolution. The system's current angular resolution is 16 X 16 pixels, with a time resolution of 250 ps. It was initially employed to provide 3-D images of objects, in conjunction with a short-pulse (approximately 100 ps) laser. For the 3-D (angle-angle-range) laser radar, the 250 ps time resolution corresponds to a range resolution of 4 cm. In the 3-D system, light from a short-pulse laser (a frequency-doubled, Q-switched, mode-locked Nd:YAG laser operating at a wavelength of 532 nm) flood-illuminates a target of linear dimension approximately 1 m. The returning light from the target is imaged, and the image is dissected by a 16 X 16 array of optical fibers. At the other end of the fiber-optic image converter, the 256 fibers form a vertical line array, which is input to the slit of a streak camera. The streak camera sweeps the input line across the output phosphor screen so that horizontal position is directly proportional to time. The resulting 2-D image (fiber location vs. time) at the phosphor is read by an intensified (SIT) vidicon TV tube, and the image is digitized and stored. A computer subsequently decodes the image, unscrambling the linear pixels into an angle-angle image at each time or range bin. We are left with a series of snapshots, each one depicting the portion of the target surface in a given range bin. The pictures can be combined to form a 3-D realization of the target. Continuous recording of many images over a short time span is of use in imaging other transient phenomena. These applications share a need for multiple images from a nonrepeatable transient event of time duration on the order of nanoseconds. Applications discussed for the imager include (1) pulsed laser beam diagnostics -- measuring laser beam spatial and temporal structure, (2

  10. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography.

    Science.gov (United States)

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-07

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contours. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak-to-background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contours to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications in radiation oncology.
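
    The core idea, one accuracy-predicting tree per segmentation method and then selection of the method with the highest predicted DSC, can be sketched with depth-1 trees; the real model uses deeper trees and three tumour features:

```python
import numpy as np

def fit_stump(X, y):
    """Depth-1 regression tree (decision stump): choose the (feature,
    threshold) split minimising the summed squared error; each leaf
    predicts the mean DSC of its training cases."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    best = None
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f])[:-1]:
            left, right = y[X[:, f] <= thr], y[X[:, f] > thr]
            sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, f, thr, left.mean(), right.mean())
    _, f, thr, lo, hi = best
    return lambda x: lo if x[f] <= thr else hi

def select_method(trees, features):
    """ATLAAS-style selection: return the name of the segmentation
    method whose tree predicts the highest DSC for this tumour."""
    return max(trees, key=lambda m: trees[m](features))
```

    With a tree trained per method on cases with known true contours, the selector can pick a method for a new scan where the true contour is unknown.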

  11. Method for validating cloud mask obtained from satellite measurements using ground-based sky camera.

    Science.gov (United States)

    Letu, Husi; Nagao, Takashi M; Nakajima, Takashi Y; Matsumae, Yoshiaki

    2014-11-01

    Error propagation in Earth's atmospheric, oceanic, and land surface parameters of satellite products caused by misclassification in the cloud mask is a critical issue for improving the accuracy of satellite products. Characterizing the accuracy of the cloud mask is therefore important for investigating its influence on satellite products. In this study, we propose a method for validating cloud masks derived from multiwavelength satellite data using ground-based sky camera (GSC) data. First, a cloud cover algorithm for GSC data was developed using a sky index and a bright index. Then, cloud masks derived from Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data by two cloud-screening algorithms (MOD35 and CLAUDIA) were validated using the GSC cloud mask. The results indicate that MOD35 is likely to classify ambiguous pixels as "cloudy," whereas CLAUDIA is likely to classify them as "clear." Furthermore, the influence of error propagation caused by misclassification in the MOD35 and CLAUDIA cloud masks on MODIS-derived reflectance, brightness temperature, and normalized difference vegetation index (NDVI) in clear and cloudy pixels was investigated using the sky camera data. The influence of error propagation by the MOD35 cloud mask on MODIS-derived monthly mean reflectance, brightness temperature, and NDVI for clear pixels is significantly smaller than that of the CLAUDIA cloud mask, while the influence of error propagation by the CLAUDIA cloud mask on MODIS-derived monthly mean cloud products for cloudy pixels is significantly smaller than that of the MOD35 cloud mask.
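
    A toy version of a GSC cloud-cover algorithm might look like the following; the sky-index definition (B - R)/(B + R) and both thresholds are common illustrative choices, not the values used in the paper:

```python
import numpy as np

def cloud_mask_gsc(rgb, sky_thr=0.12, bright_thr=230):
    """Per-pixel cloud mask for a ground-based sky camera image.
    The sky index (B - R)/(B + R) is high for clear blue sky and low
    for grey/white cloud; very bright pixels (e.g. near the sun) are
    also flagged as cloudy via a simple bright index."""
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float)
    sky_index = (b - r) / np.maximum(b + r, 1.0)  # guard against 0/0
    bright = rgb.mean(axis=-1)  # bright index: mean of R, G, B
    return (sky_index < sky_thr) | (bright > bright_thr)

def cloud_cover(mask):
    """Fractional cloud cover: cloudy pixels over total pixels."""
    return float(mask.mean())
```

    The resulting boolean mask can then be compared pixel-wise (after projection) against a satellite cloud mask such as MOD35 or CLAUDIA.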

  12. Maximum-likelihood scintillation detection for EM-CCD based gamma cameras

    Energy Technology Data Exchange (ETDEWEB)

    Korevaar, Marc A N; Goorden, Marlies C; Heemskerk, Jan W T; Beekman, Freek J, E-mail: M.A.N.Korevaar@TUDelft.nl [Department of Radiation, Radionuclides and Reactors, Section of Radiation Detection and Medical Imaging, Applied Sciences, Delft University of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)

    2011-08-07

    Gamma cameras based on charge-coupled devices (CCDs) coupled to continuous scintillation crystals can combine good detection efficiency with high spatial resolution with the aid of advanced scintillation detection algorithms. A previously developed analytical multi-scale algorithm (MSA) models the depth-dependent light distribution but does not take statistics into account. Here we present and validate a novel statistical maximum-likelihood algorithm (MLA) that combines a realistic light distribution model with an experimentally validated statistical model. The MLA was tested for an electron-multiplying CCD optically coupled to CsI(Tl) scintillators of different thicknesses. For {sup 99m}Tc imaging, the spatial resolution (for perpendicular and oblique incidence), energy resolution and signal-to-background counts ratio (SBR) obtained with the MLA were compared with those of the MSA. Compared to the MSA, the MLA improves the energy resolution by more than a factor of 1.6 and the SBR is enhanced by more than a factor of 1.3. For oblique incidence (approximately 45{sup o}), the depth-of-interaction corrected spatial resolution is improved by a factor of at least 1.1, while for perpendicular incidence the MLA resolution does not differ significantly from the MSA result for the tested scintillator thicknesses. For the thickest scintillator (3 mm, interaction probability 66% at 141 keV) a spatial resolution (perpendicular incidence) of 147 {mu}m full width at half maximum (FWHM) was obtained with an energy resolution of 35.2% FWHM. These results of the MLA were achieved without prior calibration of scintillations, as is needed for many statistical scintillation detection algorithms. We conclude that the MLA significantly improves gamma camera performance compared to the MSA.
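
    The maximum-likelihood detection principle can be sketched as a search over candidate interaction positions maximising a Poisson log-likelihood of the observed pixel counts; the light-distribution model below is a simple stand-in for the paper's depth-dependent model:

```python
import numpy as np

def ml_position(counts, model, positions):
    """Maximum-likelihood scintillation detection sketch: choose the
    candidate interaction position whose expected light distribution
    best explains the observed pixel counts under Poisson statistics.
    `model(pos)` returns the expected counts per pixel; both the model
    and the candidate grid are illustrative."""
    best, best_ll = None, -np.inf
    for pos in positions:
        mu = np.maximum(model(pos), 1e-12)  # avoid log(0)
        ll = np.sum(counts * np.log(mu) - mu)  # Poisson log-likelihood
        if ll > best_ll:
            best, best_ll = pos, ll
    return best
```

    For noise-free counts generated by the model itself, the likelihood is maximised exactly at the true position; with Poisson noise the estimate remains statistically efficient, which is the advantage over the purely analytical MSA.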

  13. An Iterative Distortion Compensation Algorithm for Camera Calibration Based on Phase Target.

    Science.gov (United States)

    Xu, Yongjia; Gao, Feng; Ren, Hongyu; Zhang, Zonghua; Jiang, Xiangqian

    2017-05-23

    Camera distortion is a critical factor affecting the accuracy of camera calibration. A conventional calibration approach cannot satisfy the requirements of a measurement system demanding high calibration accuracy, due to inaccurate distortion compensation. This paper presents a novel camera calibration method with an iterative distortion compensation algorithm. The initial parameters of the camera are calibrated by full-field camera pixels and the corresponding points on a phase target. An iterative algorithm is proposed to compensate for the distortion. A 2D fitting and interpolation method is also developed to enhance the accuracy of the phase target. Unlike conventional calibration methods, the proposed method does not rely on a mathematical distortion model, and it remains stable and effective under complex distortion conditions. Both simulation work and experimental results show that the proposed calibration method is more than twice as accurate as the conventional calibration method.
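
    Iterative distortion compensation in general can be illustrated with the classic fixed-point inversion of a radial distortion model; this shows the iterative idea only, not the paper's model-free phase-target algorithm:

```python
def undistort_point(xd, yd, k1, k2=0.0, iters=20):
    """Fixed-point iteration inverting the radial distortion model
    x_d = x_u * (1 + k1*r^2 + k2*r^4): start from the distorted point
    and repeatedly divide by the distortion factor evaluated at the
    current estimate of the undistorted point."""
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / f, yd / f
    return xu, yu
```

    For typical small distortion coefficients the iteration contracts quickly, so a handful of iterations recovers the undistorted coordinates to high precision.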

  14. Instrumentation optimization for positron emission mammography

    Energy Technology Data Exchange (ETDEWEB)

    Moses, William W.; Qi, Jinyi

    2003-06-05

    The past several years have seen designs for PET cameras optimized to image the breast, commonly known as Positron Emission Mammography or PEM cameras. The guiding principle behind PEM instrumentation is that a camera whose field of view is restricted to a single breast has higher performance and lower cost than a conventional PET camera. The most common geometry is a pair of parallel planes of detector modules, although geometries that encircle the breast have also been proposed. The ability of the detector modules to measure the depth of interaction (DOI) is also a relevant feature. This paper finds that while both the additional solid angle coverage afforded by encircling the breast and the decreased blurring afforded by the DOI measurement improve performance, the ability to measure DOI is more important than the ability to encircle the breast.

  15. Fundamental limits of positron emission mammography

    Energy Technology Data Exchange (ETDEWEB)

    Moses, William W.; Qi, Jinyi

    2001-06-01

    We explore the causes of performance limitations in positron emission mammography cameras. We compare two basic camera geometries containing the same volume of 511 keV photon detectors, one with a parallel plane geometry and the other with a rectangular geometry. We find that both geometries have similar performance for the phantom imaged (in Monte Carlo simulation), even though the solid angle coverage of the rectangular camera is about 50 percent higher than that of the parallel plane camera. The reconstruction algorithm used significantly affects the resulting image; iterative methods significantly outperform the commonly used focal plane tomography. Finally, the characteristics of the tumor itself, specifically the absolute amount of radiotracer taken up by the tumor, will significantly affect the imaging performance.

  16. Fast time-of-flight camera based surface registration for radiotherapy patient positioning.

    Science.gov (United States)

    Placht, Simon; Stancanello, Joseph; Schaller, Christian; Balda, Michael; Angelopoulou, Elli

    2012-01-01

    This work introduces a rigid registration framework for patient positioning in radiotherapy, based on real-time surface acquisition by a time-of-flight (ToF) camera. Dynamic properties of the system are also investigated for future gating/tracking strategies. A novel preregistration algorithm, based on translation- and rotation-invariant features representing surface structures, was developed. Using these features, corresponding three-dimensional points were computed in order to determine initial registration parameters. These parameters became a robust input to an accelerated version of the iterative closest point (ICP) algorithm for fine-tuning of the registration result. Distance calibration and Kalman filtering were used to compensate for ToF-camera-dependent noise. Additionally, the advantage of using the feature-based preregistration over an "ICP only" strategy was evaluated, as well as the robustness of the rigid-transformation-based method to deformation. The proposed surface registration method was validated using phantom data. A mean target registration error (TRE) for translations and rotations of 1.62 ± 1.08 mm and 0.07° ± 0.05°, respectively, was achieved. There was a temporal delay of about 65 ms in the registration output, which can be seen as negligible considering the dynamics of biological systems. Feature-based preregistration allowed for accurate and robust registrations even at very large initial displacements. Deformations affected the accuracy of the results, necessitating particular care in cases of deformed surfaces. The proposed solution is able to solve surface registration problems with an accuracy suitable for radiotherapy cases where external surfaces offer primary or complementary information for patient positioning. The system shows promising dynamic properties for its use in gating/tracking applications. The overall system is competitive with commonly-used surface registration technologies. Its main benefit is the
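
    The inner step of each ICP iteration, the least-squares rigid transform for a set of point correspondences, can be sketched with the standard Kabsch/SVD method; the paper's feature-based preregistration and accelerated ICP variant are not reproduced here:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (Kabsch/SVD) mapping point set P
    onto Q given known correspondences (rows are 3-D points), i.e.
    Q_i ~ R @ P_i + t. This is the closed-form inner step that ICP
    repeats after re-estimating correspondences."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

    Full ICP alternates this closed-form fit with nearest-neighbour correspondence search; a good preregistration, as in this record, keeps that search from falling into a wrong local minimum.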

  17. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    Energy Technology Data Exchange (ETDEWEB)

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L [UT MD Anderson Cancer Center, Houston, TX (United States); UT Graduate School of Biomedical Sciences, Houston, TX (United States); Yang, J; Beadle, B [UT MD Anderson Cancer Center, Houston, TX (United States)

    2014-06-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
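
    The two-camera epipolar geometry used for frame-to-frame tracking can be sketched via the essential matrix E = [t]x R, which corresponding normalised image points must satisfy; the construction below is the textbook relation, not the authors' full tracking pipeline:

```python
import numpy as np

def essential_matrix(R, t):
    """E = [t]_x R links corresponding normalised image points between
    two camera poses through the epipolar constraint x2^T E x1 = 0,
    the relation used to recover frame-to-frame rotation and
    translation of a moving camera."""
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])  # cross-product matrix of t
    return tx @ R

def epipolar_residual(E, x1, x2):
    """Scalar epipolar residual for homogeneous normalised points;
    it is zero (up to noise) for a true correspondence."""
    return float(x2 @ E @ x1)
```

    In practice E is estimated from tracked point correspondences (e.g. the five-point or eight-point algorithm) and then decomposed into R and t, after which triangulation recovers the surrounding structure.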

  18. Towards the development of a SiPM-based camera for the Cherenkov Telescope Array

    Directory of Open Access Journals (Sweden)

    Ambrosi G.

    2017-01-01

    Full Text Available The Italian National Institute for Nuclear Physics (INFN) is involved in the development of a prototype for a camera based on Silicon Photomultipliers (SiPMs) for the Cherenkov Telescope Array (CTA), a new generation of telescopes for ground-based gamma-ray astronomy. In this framework, an R&D program within the ‘Progetto Premiale TElescopi CHErenkov made in Italy (TECHE.it)’ for the development of SiPMs suitable for Cherenkov light detection in the Near-Ultraviolet (NUV) has been carried out. The developed device is a NUV High-Density (NUV-HD) SiPM based on a micro cell of 30 μm × 30 μm and an area of 6 mm × 6 mm, produced by Fondazione Bruno Kessler (FBK). A full characterization of the single NUV-HD SiPM will be presented. A matrix of 8 × 8 single NUV-HD SiPMs will be part of the focal plane of the Schwarzschild-Couder Telescope prototype (pSCT) for CTA. An update on recent tests on the detectors arranged in this matrix configuration and on the front-end electronics will be given.

  19. Monte Carlo-based evaluation of S-values in mouse models for positron-emitting radionuclides

    NARCIS (Netherlands)

    Xie, Tianwu; Zaidi, Habib

    2013-01-01

    In addition to being a powerful clinical tool, positron emission tomography (PET) is also used in small laboratory animal research to visualize and track certain molecular processes associated with diseases such as cancer, heart disease and neurological disorders in living small animal models of human disease.

  20. Temperament, character and serotonin activity in the human brain: a positron emission tomography study based on a general population cohort.

    Science.gov (United States)

    Tuominen, L; Salo, J; Hirvonen, J; Någren, K; Laine, P; Melartin, T; Isometsä, E; Viikari, J; Cloninger, C R; Raitakari, O; Hietala, J; Keltikangas-Järvinen, L

    2013-04-01

    The psychobiological model of personality by Cloninger and colleagues originally hypothesized that interindividual variability in the temperament dimension 'harm avoidance' (HA) is explained by differences in the activity of the brain serotonin system. We assessed brain serotonin transporter (5-HTT) density in vivo with positron emission tomography (PET) in healthy individuals with high or low HA scores using an 'oversampling' study design. Method: Subjects consistently in either upper or lower quartiles for the HA trait were selected from a population-based cohort in Finland (n = 2075) with pre-existing Temperament and Character Inventory (TCI) scores. A total of 22 subjects free of psychiatric and somatic disorders were included in the matched high- and low-HA groups. The main outcome measure was regional 5-HTT binding potential (BPND) in high- and low-HA groups estimated with PET and [11C]N,N-dimethyl-2-(2-amino-4-methylphenylthio)benzylamine ([11C]MADAM). In secondary analyses, 5-HTT BPND was correlated with other TCI dimensions. 5-HTT BPND did not differ between high- and low-HA groups in the midbrain or any other brain region. This result remained the same even after adjusting for other relevant TCI dimensions. Higher 5-HTT BPND in the raphe nucleus predicted higher scores in 'self-directedness'. This study does not support an association between the temperament dimension HA and serotonin transporter density in healthy subjects. However, we found a link between high serotonin transporter density and high 'self-directedness' (ability to adapt and control one's behaviour to fit situations in accord with chosen goals and values). We suggest that biological factors are more important in explaining variability in character than previously thought.

  1. Study of material properties important for an optical property modulation-based radiation detection method for positron emission tomography

    Science.gov (United States)

    Tao, Li; Daghighian, Henry M.; Levin, Craig S.

    2017-01-01

    We compare the performance of two detector materials, cadmium telluride (CdTe) and bismuth silicon oxide (BSO), for an optical property modulation-based radiation detection method for positron emission tomography (PET), which is a potential new direction to dramatically improve the annihilation photon pair coincidence time resolution. We have shown that the induced current flow in the detector crystal resulting from ionizing radiation determines the strength of the optical modulation signal. A larger resistivity is favorable for reducing the dark current (noise) in the detector crystal, and thus the higher-resistivity BSO crystal has a lower (50% lower on average) noise level than CdTe. The CdTe and BSO crystals can achieve the same sensitivity under laser diode illumination at the same crystal bias voltage, while the BSO crystal is not as sensitive to 511-keV photons as the CdTe crystal under the same crystal bias voltage. The amplitude of the modulation signal induced by 511-keV photons in the BSO crystal is around 30% of that induced in the CdTe crystal under the same bias condition. In addition, we have found that the optical modulation strength increases linearly with crystal bias voltage before saturation. The modulation signal with CdTe tends to saturate at bias voltages higher than 1500 V due to its lower resistivity (and thus larger dark current), while the modulation signal strength with BSO still increases beyond 3500 V. Further increasing the bias voltage for BSO could potentially further enhance the modulation strength and thus the sensitivity. PMID:28180132

  2. Positron annihilation spectroscopy on a beam of positrons at the LEPTA facility

    Science.gov (United States)

    Ahmanova, E. V.; Eseev, M. K.; Kobets, A. G.; Meshkov, I. N.; Orlov, O. S.; Sidorin, A. A.; Siemek, K.; Horodek, P.

    2017-01-01

    This paper presents results and prospects of sample-surface studies using the Doppler method of positron annihilation spectroscopy (PAS) with the monochromatic positron beam of the LEPTA facility. The method, which is highly sensitive to defects such as vacancies and dislocations, allows the surface and near-surface layers of a sample to be scanned to a depth of several micrometers via the Doppler broadening of the annihilation line. The prospects of developing a PAS method based on measuring the positron lifetime in samples irradiated by the ordered positron flow from the injector of the LEPTA accelerator complex at JINR are also discussed.

  3. Comparison of a new laser beam wound camera and a digital photoplanimetry-based method for wound measurement in horses.

    Science.gov (United States)

    Van Hecke, L L; De Mil, T A; Haspeslagh, M; Chiers, K; Martens, A M

    2015-03-01

    The aim of this study was to compare the accuracy, precision, inter- and intra-operator reliability of a new laser beam (LB) wound camera and a digital photoplanimetry-based (DPB) method for measuring the dimensions of equine wounds. Forty-one wounds were created on equine cadavers. The area, circumference, maximum depth and volume of each wound were measured four times with both techniques by two operators. A silicone cast was made of each wound and served as the reference standard to measure the wound dimensions. The DPB method had a higher accuracy and precision in determining the wound volume compared with the LB camera, which had a higher accuracy in determining the wound area and maximum depth and better precision in determining the area and circumference. The LB camera also had a significantly higher overall inter-operator reliability for measuring the wound area, circumference and volume. In contrast, the DPB method had poor intra-operator reliability for the wound circumference. The LB camera was more user-friendly than the DPB method. The LB wound camera is recommended as the better objective method to assess the dimensions of wounds in horses, despite its poorer performance for the measurement of wound volume. However, if the wound measurements are performed by one operator on cadavers or animals under general anaesthesia, the DPB method is a less expensive and valid alternative.

  4. Kinetic model-based factor analysis of dynamic sequences for 82-rubidium cardiac positron emission tomography.

    Science.gov (United States)

    Klein, R; Beanlands, R S; Wassenaar, R W; Thorn, S L; Lamoureux, M; DaSilva, J N; Adler, A; deKemp, R A

    2010-08-01

    Factor analysis has been pursued as a means to decompose dynamic cardiac PET images into different tissue types based on their unique temporal signatures, in order to improve quantification of physiological function. In this work, the authors present a novel kinetic model-based (MB) method that includes physiological models of factor relationships within the decomposition process. The physiological accuracy of MB-decomposed (82)Rb cardiac PET images is evaluated using simulated and experimental data. Precision of myocardial blood flow (MBF) measurement is also evaluated. A gamma-variate model was used to describe the transport of (82)Rb in arterial blood from the right to the left ventricle, and a one-compartment model to describe the exchange between blood and myocardium. Simulations of canine and rat heart imaging were performed to evaluate parameter estimation errors. Arterial blood sampling in rats and (11)CO blood pool imaging in dogs were used to evaluate factor and structure accuracy. Variable infusion duration studies in canines were used to evaluate MB structure and global MBF reproducibility. All results were compared to a previously published minimal structure overlap (MSO) method. Canine heart simulations demonstrated that MB has a lower root-mean-square error (RMSE) than MSO for both factor (0.2% vs 0.5%) and structure (3.0% vs 4.7%) estimates, with a similar advantage in the rat simulations (structures: 3.0% vs 6.7%). Both methods produced equivalent structures compared to a (11)CO blood pool image in dogs (8.5% vs 8.8%, p = 0.23). Myocardial structures were more reproducible with MB than with MSO (RMSE = 3.9% vs 6.2%), as were blood structures (RMSE = 4.9% vs 5.6%, p = 0.006). Finally, MBF values tended to be more reproducible with MB than with MSO (CV = 10% vs 18%, p = 0.16). The execution time of MB was, on average, 2.4 times shorter than that of MSO. Kinetic model-based factor analysis can be used to provide physiologically accurate decomposition of (82)Rb dynamic PET images, and may improve the precision of MBF quantification.
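
    As a rough sketch of the two model components named above (a gamma-variate arterial input and a one-compartment blood-tissue exchange), with illustrative parameter values rather than those fitted in the paper:

```python
import numpy as np

def gamma_variate(t, t0=5.0, alpha=2.0, beta=4.0, A=1.0):
    """Gamma-variate bolus model for the arterial input function.
    t0, alpha, beta, A are illustrative, not values from the study."""
    s = np.clip(t - t0, 0.0, None)
    return A * s**alpha * np.exp(-s / beta)

def one_compartment(t, Ca, K1=0.6, k2=0.15):
    """Tissue TAC = K1 * exp(-k2 t) convolved with the blood input Ca."""
    dt = t[1] - t[0]
    h = K1 * np.exp(-k2 * t)
    return np.convolve(Ca, h)[: len(t)] * dt

t = np.arange(0.0, 120.0, 0.5)   # seconds
Ca = gamma_variate(t)            # blood factor
Ct = one_compartment(t, Ca)      # myocardium factor
# The tissue curve peaks later and decays more slowly than the blood curve.
print(t[np.argmax(Ca)], t[np.argmax(Ct)])
```

    In a factor-analysis setting, curves of this form constrain the decomposition so that the recovered factors remain physiologically plausible.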

  5. GPU-based View Interpolation for Smooth Camera Transitions in Soccer

    OpenAIRE

    GOORTS, Patrik; ROGMANS, Sammy; Bekaert, Philippe

    2013-01-01

    We present a system capable of synthesizing free viewpoint video for smooth camera transitions in soccer scenes. The broadcaster can choose any camera viewpoint between the real, fixed cameras. This way, the action can be followed across the field in a smooth manner, a frozen image or a replay can be viewed from multiple angles, and the broadcast image can be transitioned smoothly from one side of the field to the other to avoid orientation-related confusion of the viewers. We use a ...

  6. Pedestrian Detection Based on Adaptive Selection of Visible Light or Far-Infrared Light Camera Image by Fuzzy Inference System and Convolutional Neural Network-Based Verification.

    Science.gov (United States)

    Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung

    2017-07-08

    A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to the varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by inputting both the visible light and the FIR camera images into the CNN as the input. This, however, takes a longer time to process, and makes the system structure more complex as the CNN needs to process both camera images. This research adaptively selects a more appropriate candidate between two pedestrian images from visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors using visible light and FIR cameras. The results showed that the proposed method performs better than the previously reported methods.
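
    A minimal sketch of the adaptive-selection idea, using fuzzy memberships over a scene-brightness cue and a thermal-contrast cue; the membership shapes, rules, and thresholds below are invented for illustration and are not the paper's FIS:

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def select_camera(visible_brightness, fir_contrast):
    """Return 'visible' or 'fir' from two normalized cues in [0, 1].

    Rule sketch (illustrative): dark scenes favour the FIR camera;
    weak thermal contrast (background near body temperature) favours
    the visible light camera.
    """
    dark = tri(visible_brightness, -0.01, 0.0, 0.5)
    bright = tri(visible_brightness, 0.3, 1.0, 1.01)
    low_thermal = tri(fir_contrast, -0.01, 0.0, 0.5)
    high_thermal = tri(fir_contrast, 0.3, 1.0, 1.01)
    score_fir = max(min(dark, high_thermal), dark * 0.5)
    score_vis = max(min(bright, low_thermal), bright * 0.5)
    return 'fir' if score_fir > score_vis else 'visible'

print(select_camera(0.9, 0.1))   # bright scene, weak thermal contrast -> 'visible'
print(select_camera(0.05, 0.8))  # night scene, strong thermal contrast -> 'fir'
```

    Whichever candidate wins would then be passed to the CNN for verification, as described above.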

  7. Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera

    Directory of Open Access Journals (Sweden)

    Thomas C. Wilkes

    2016-10-01

    Full Text Available Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements.

  8. Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera.

    Science.gov (United States)

    Wilkes, Thomas C; McGonigle, Andrew J S; Pering, Tom D; Taggart, Angus J; White, Benjamin S; Bryant, Robert G; Willmott, Jon R

    2016-10-06

    Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements.

  9. New camera-based microswitch technology to monitor small head and mouth responses of children with multiple disabilities.

    Science.gov (United States)

    Lancioni, Giulio E; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N; O'Reilly, Mark F; Green, Vanessa A; Furniss, Fred

    2014-06-01

    Assessing a new camera-based microswitch technology, which did not require the use of color marks on the participants' face. Two children with extensive multiple disabilities participated. The responses selected for them consisted of small, lateral head movements and mouth closing or opening. The intervention was carried out according to a multiple probe design across responses. The technology involved a computer with a CPU using a 2-GHz clock, a USB video camera with a 16-mm lens, a USB cable connecting the camera and the computer, and a special software program written in ISO C++ language. The new technology was satisfactorily used with both children. Large increases in their responding were observed during the intervention periods (i.e. when the responses were followed by preferred stimulation). The new technology may be an important resource for persons with multiple disabilities and minimal motor behavior.

  10. Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor

    Science.gov (United States)

    Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung

    2017-01-01

    Unmanned aerial vehicles (UAVs), which are commonly known as drones, have proved to be useful not only on the battlefields where manned flight is considered too risky or difficult, but also for everyday purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers during the navigation and control loop which allows for smart GPS features of drone navigation. However, there are problems if the drones operate in heterogeneous areas with no GPS signal, so it is important to perform research into the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on the visible light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments. PMID:28867775

  11. A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera

    Science.gov (United States)

    Takizawa, Hotaka; Orita, Kazunori; Aoyagi, Mayumi; Ezaki, Nobuo; Mizuno, Shinji

    2017-01-01

    The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling their memories related to important locations, called spots, that they visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale invariant feature transform. The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users do not need any special devices or systems except smartphones and do not need to remember the spots where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the global positioning system. The proposed system has been evaluated by two experiments: image matching tests and a user study. The experimental results suggested the effectiveness of the system to help visually impaired individuals, including blind individuals, recall information about regularly-visited spots. PMID:28165403
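
    The core of SIFT-based spot-to-spot matching is nearest-neighbour descriptor matching with Lowe's ratio test, which can be sketched on synthetic descriptors (this is not the authors' implementation):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Match rows of desc_a to rows of desc_b, keeping only matches whose
    nearest neighbour is clearly better than the second nearest
    (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches

rng = np.random.default_rng(0)
stored = rng.normal(size=(20, 128))       # descriptors saved at the spot
# Query descriptors: three stored ones, slightly perturbed (re-observed spot).
query = stored[[3, 7, 11]] + rng.normal(scale=0.01, size=(3, 128))
print(ratio_test_matches(query, stored))  # [(0, 3), (1, 7), (2, 11)]
```

    If enough descriptors from the current camera frame pass the ratio test against a stored spot, the system can declare the spot recognized and play back the associated voice memo.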

  12. New Lower-Limb Gait Asymmetry Indices Based on a Depth Camera

    Directory of Open Access Journals (Sweden)

    Edouard Auvinet

    2015-02-01

    Full Text Available Background: Various asymmetry indices have been proposed to compare the spatiotemporal, kinematic and kinetic parameters of lower limbs during the gait cycle. However, these indices rely on gait measurement systems that are costly and generally require manual examination, calibration procedures and the precise placement of sensors/markers on the body of the patient. Methods: To overcome these issues, this paper proposes a new asymmetry index, which uses an inexpensive, easy-to-use and markerless depth camera (Microsoft Kinect™) output. This asymmetry index directly uses depth images provided by the Kinect™ without requiring joint localization. It is based on the longitudinal spatial difference between lower-limb movements during the gait cycle. To evaluate the relevance of this index, fifteen healthy subjects were tested on a treadmill walking normally and then via an artificially-induced gait asymmetry with a thick sole placed under one shoe. The gait movement was simultaneously recorded using a Kinect™ placed in front of the subject and a motion capture system. Results: The proposed longitudinal index distinguished asymmetrical gait (p < 0.001), while other symmetry indices based on spatiotemporal gait parameters failed using such Kinect™ skeleton measurements. Moreover, the correlation coefficient between this index measured by Kinect™ and the ground truth of this index measured by motion capture is 0.968. Conclusion: This gait asymmetry index measured with a Kinect™ is low cost, easy to use and is a promising development for clinical gait analysis.
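
    The proposed index compares lower-limb depth signals shifted by half a gait cycle; a toy version on synthetic per-frame limb depths (assumed inputs, not Kinect™ data) might look like:

```python
import numpy as np

def longitudinal_asymmetry(depth_left, depth_right):
    """Asymmetry index from per-frame mean depth of each lower limb.

    depth_left/right: 1D arrays of mean limb depth over one gait cycle.
    A symmetric gait has right ≈ left shifted by half a cycle, so the two
    signals are compared after a half-cycle shift.
    """
    half = len(depth_left) // 2
    shifted = np.roll(depth_right, half)
    return float(np.mean(np.abs(depth_left - shifted)))

t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
sym_left = np.sin(t)                    # left-limb depth over one cycle
sym_right = np.sin(t + np.pi)           # right limb, half a cycle out of phase
asym_right = np.sin(t + np.pi) + 0.3    # one limb systematically lags
print(longitudinal_asymmetry(sym_left, sym_right))   # ≈ 0 (symmetric gait)
print(longitudinal_asymmetry(sym_left, asym_right))  # ≈ 0.3 (asymmetric gait)
```

    A healthy gait yields an index near zero, while a thick sole under one shoe (as in the experiment above) shifts one limb's depth profile and inflates the index.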

  13. Remote Marker-Based Tracking for UAV Landing Using Visible-Light Camera Sensor.

    Science.gov (United States)

    Nguyen, Phong Ha; Kim, Ki Wan; Lee, Young Won; Park, Kang Ryoung

    2017-08-30

    Unmanned aerial vehicles (UAVs), which are commonly known as drones, have proved to be useful not only on the battlefields where manned flight is considered too risky or difficult, but also for everyday purposes such as surveillance, monitoring, rescue, unmanned cargo, aerial video, and photography. More advanced drones make use of global positioning system (GPS) receivers during the navigation and control loop which allows for smart GPS features of drone navigation. However, there are problems if the drones operate in heterogeneous areas with no GPS signal, so it is important to perform research into the development of UAVs with autonomous navigation and landing guidance using computer vision. In this research, we determined how to safely land a drone in the absence of GPS signals using our remote marker-based tracking algorithm based on the visible light camera sensor. The proposed method uses a unique marker designed as a tracking target during landing procedures. Experimental results show that our method significantly outperforms state-of-the-art object trackers in terms of both accuracy and processing time, and we performed tests on an embedded system in various environments.

  14. Design of video surveillance and tracking system based on attitude and heading reference system and PTZ camera

    Science.gov (United States)

    Yang, Jian; Xie, Xiaofang; Wang, Yan

    2017-04-01

    Based on the AHRS (Attitude and Heading Reference System) and a PTZ (Pan/Tilt/Zoom) camera, we designed a video monitoring and tracking system. The overall structure of the system and the software design are given. Key technologies such as serial port communication and head attitude tracking are introduced, and code for the key parts is given.

  15. Camera-based microswitch technology to monitor mouth, eyebrow, and eyelid responses of children with profound multiple disabilities

    NARCIS (Netherlands)

    Lancioni, G.E.; Bellini, D.; Oliva, D.; Singh, N.N.; O'Reilly, M.F.; Sigafoos, J.; Lang, R.B.; Didden, H.C.M.

    2011-01-01

    A camera-based microswitch technology was recently used to successfully monitor small eyelid and mouth responses of two adults with profound multiple disabilities (Lancioni et al., Res Dev Disab 31:1509-1514, 2010a). This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on the participants' face.

  16. Camera-Based Microswitch Technology for Eyelid and Mouth Responses of Persons with Profound Multiple Disabilities: Two Case Studies

    Science.gov (United States)

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff

    2010-01-01

    These two studies assessed camera-based microswitch technology for eyelid and mouth responses of two persons with profound multiple disabilities and minimal motor behavior. This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on the participants' face but only small color marks.

  17. Fabrication and Characterization of 640x486 GaAs Based Quantum Well Infrared Photodetector (QWIP) Snapshot Camera

    Science.gov (United States)

    Gunapala, S. D.; Bandara, S. V.; Liu, J. K.; Hong, W.; Sundaram, M.; Carralejo, R.; Shott, C. A.; Maker, P. D.; Miller, R. E.

    1997-01-01

    In this paper, we discuss the development of a very sensitive long-wavelength infrared (LWIR) camera based on a GaAs/AlGaAs QWIP focal plane array (FPA) and its performance in terms of quantum efficiency, NEΔT, uniformity, and operability.

  18. Camera-Based Microswitch Technology to Monitor Mouth, Eyebrow, and Eyelid Responses of Children with Profound Multiple Disabilities

    Science.gov (United States)

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Lang, Russell; Didden, Robert

    2011-01-01

    A camera-based microswitch technology was recently used to successfully monitor small eyelid and mouth responses of two adults with profound multiple disabilities (Lancioni et al., Res Dev Disab 31:1509-1514, 2010a). This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on the participants' face but only small color marks.

  19. Two Persons with Multiple Disabilities Use Camera-Based Microswitch Technology to Control Stimulation with Small Mouth and Eyelid Responses

    Science.gov (United States)

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Lang, Russell

    2012-01-01

    Background: A camera-based microswitch technology was recently developed to monitor small facial responses of persons with multiple disabilities and allow those responses to control environmental stimulation. This study assessed such a technology with 2 new participants using slight variations of previous responses. Method: The technology involved…

  20. Automated Ground-based Time-lapse Camera Monitoring of West Greenland ice sheet outlet Glaciers: Challenges and Solutions

    Science.gov (United States)

    Ahn, Y.; Box, J. E.; Balog, J.; Lewinter, A.

    2008-12-01

    Monitoring Greenland outlet glaciers using remotely sensed data has drawn great attention in the earth science community for decades, and time series analysis of sensor data has provided important information on the variability of glacier flow by detecting speed and thickness changes, tracking features and acquiring model input. Thanks to advancements in commercial digital camera technology and increased solid-state storage, we activated automatic ground-based time-lapse camera stations with high spatial/temporal resolution at west Greenland outlet glaciers and collected one-hour-interval data continuously for more than one year at some, but not all, sites. We believe that important information on ice dynamics is contained in these data and that terrestrial mono-/stereo-photogrammetry can provide the theoretical and practical fundamentals for data processing, along with digital image processing techniques. Time-lapse images over these periods in west Greenland show various phenomena. Problematic are rain, snow, fog, shadows, freezing of water on the camera enclosure window, image over-exposure, camera motion, sensor platform drift, fox chewing of instrument cables, and the pecking of the plastic window by ravens. Other problems include feature identification, camera orientation, image registration, feature matching in image pairs, and feature tracking. Another obstacle is that non-metric digital cameras contain large distortions that must be compensated for precise photogrammetric use. Further, a massive number of images needs to be processed in a way that is sufficiently computationally efficient. We meet these challenges by 1) identifying problems in possible photogrammetric processes, 2) categorizing them based on feasibility, and 3) clarifying limitations and alternatives, while emphasizing displacement computation and analyzing regional/temporal variability. We experiment with mono- and stereo-photogrammetric techniques with the aid of automatic correlation matching for efficiently handling the enormous image volume.
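
    The displacement computation by automatic correlation matching mentioned above can be sketched as brute-force normalized cross-correlation on a synthetic image pair (illustrative only, not the authors' pipeline):

```python
import numpy as np

def ncc_match(template, image):
    """Find the (row, col) of `template` in `image` by brute-force
    normalized cross-correlation (NCC)."""
    th, tw = template.shape
    tpl = (template - template.mean()) / template.std()
    best_score, best_rc = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            win = image[r:r + th, c:c + tw]
            w = (win - win.mean()) / (win.std() + 1e-12)
            score = float(np.mean(tpl * w))
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc

rng = np.random.default_rng(1)
frame0 = rng.random((40, 40))                  # image at time t0
template = frame0[10:18, 14:22]                # feature chip on the glacier
frame1 = np.roll(frame0, (2, 3), axis=(0, 1))  # scene shifted by (2, 3) pixels
r, c = ncc_match(template, frame1)
print(r - 10, c - 14)                          # recovered displacement
```

    Tracking many such chips between consecutive time-lapse frames yields a displacement field, which the camera geometry then converts into glacier flow speeds; production codes use FFT-based correlation for efficiency.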

  1. Target Volume Delineation in Dynamic Positron Emission Tomography Based on Time Activity Curve Differences

    Science.gov (United States)

    Teymurazyan, Artur

    Tumor volume delineation plays a critical role in radiation treatment planning and simulation, since inaccurately defined treatment volumes may lead to the overdosing of normal surrounding structures and potentially missing the cancerous tissue. However, the imaging modality almost exclusively used to determine tumor volumes, X-ray Computed Tomography (CT), does not readily exhibit a distinction between cancerous and normal tissue. It has been shown that CT data augmented with PET can improve radiation treatment plans by providing functional information not available otherwise. Presently, static PET scans account for the majority of procedures performed in clinical practice. In the radiation therapy (RT) setting, these scans are visually inspected by a radiation oncologist for the purpose of tumor volume delineation. This approach, however, often results in significant interobserver variability when comparing contours drawn by different experts on the same PET/CT data sets. For this reason, a search for more objective contouring approaches is underway. The major drawback of conventional tumor delineation in static PET images is the fact that two neighboring voxels of the same intensity can exhibit markedly different overall dynamics. Therefore, equal-intensity voxels in a static analysis of a PET image may be falsely classified as belonging to the same tissue. Dynamic PET allows the evaluation of image data in the temporal domain, which often describes specific biochemical properties of the imaged tissues. Analysis of dynamic PET data can be used to improve classification of the imaged volume into cancerous and normal tissue. In this thesis, we present a novel tumor volume delineation approach (Single Seed Region Growing algorithm in 4D (dynamic) PET, or SSRG/4D-PET) in dynamic PET based on TAC (Time Activity Curve) differences. A partially-supervised approach is pursued in order to allow an expert reader to utilize the information available from other imaging modalities.
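
    A minimal sketch of the seeded region-growing idea on TAC similarity, here using Pearson correlation on a synthetic 2D+time phantom (the actual SSRG/4D-PET acceptance criterion may differ):

```python
import numpy as np
from collections import deque

def ssrg_4d(img4d, seed, threshold=0.9):
    """Grow a region from `seed` in a (T, X, Y) dynamic image, accepting
    neighbours whose time-activity curve (TAC) correlates with the seed TAC."""
    T, nx, ny = img4d.shape
    seed_tac = img4d[:, seed[0], seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if 0 <= nxt[0] < nx and 0 <= nxt[1] < ny and nxt not in region:
                tac = img4d[:, nxt[0], nxt[1]]
                if np.corrcoef(seed_tac, tac)[0, 1] > threshold:
                    region.add(nxt)
                    queue.append(nxt)
    return region

# Synthetic phantom: a 4x4 "tumour" with rising uptake inside a washout background.
t = np.linspace(0, 1, 10)
img = np.tile(np.exp(-3 * t)[:, None, None], (1, 12, 12))  # background washout
img[:, 4:8, 4:8] = t[:, None, None] ** 2 + 0.05            # tumour uptake
print(len(ssrg_4d(img, (5, 5))))                           # 16 voxels recovered
```

    Note that a static threshold on the final frame could not separate tumour from background voxels of equal intensity, whereas the TAC criterion recovers exactly the 4x4 uptake region from a single seed.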

  2. A gait analysis method based on a depth camera for fall prevention.

    Science.gov (United States)

    Dubois, Amandine; Charpillet, Francois

    2014-01-01

    This paper proposes a markerless system whose purpose is to help prevent falls of elderly people at home. To track human movements, the Microsoft Kinect camera is used, which acquires an RGB image and a depth image at the same time. Several articles show that the analysis of certain gait parameters can support fall-risk assessment. We developed a system which extracts three gait parameters (the length and duration of steps and the speed of the gait) by tracking the center of mass of the person. To check the validity of our system, the accuracy of the gait parameters obtained with the camera is evaluated. In an experiment, eleven subjects walked on an actimetric carpet, perpendicular to the camera, which filmed the scene. The three gait parameters obtained from the carpet are compared with those from the camera. In this study, four situations were tested to evaluate the robustness of our model: the subjects walked normally, with small steps, wearing a skirt, and in front of the camera. The results showed that the system is accurate when a single camera is fixed perpendicular to the walking direction. Thus we believe that the presented method is accurate enough to be used in real fall prevention applications.
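
    The three gait parameters can be sketched from a centre-of-mass track as follows (synthetic trajectory; the step-detection rule is an assumption for illustration, not the paper's method):

```python
import numpy as np

def gait_parameters(times, forward, vertical):
    """Estimate mean gait speed and step duration from a centre-of-mass track.

    Steps are counted at the local minima of the vertical CoM oscillation
    (assumed here: one minimum per step).
    """
    speed = (forward[-1] - forward[0]) / (times[-1] - times[0])
    minima = [i for i in range(1, len(vertical) - 1)
              if vertical[i] < vertical[i - 1] and vertical[i] <= vertical[i + 1]]
    step = float(np.mean(np.diff(times[minima]))) if len(minima) > 1 else None
    return speed, step

times = np.linspace(0.0, 10.0, 501)                      # 50 Hz for 10 s
forward = 1.2 * times                                    # walking at 1.2 m/s
vertical = 0.9 - 0.02 * np.cos(2 * np.pi * 2.0 * times)  # 2 steps per second
speed, step = gait_parameters(times, forward, vertical)
print(round(speed, 2), round(step, 2))  # 1.2 0.5
```

    Step length then follows as speed × step duration (0.6 m in this toy example), which is the third parameter the system reports.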

  3. piscope - A Python based software package for the analysis of volcanic SO2 emissions using UV SO2 cameras

    Science.gov (United States)

    Gliss, Jonas; Stebel, Kerstin; Kylling, Arve; Solvejg Dinger, Anna; Sihler, Holger; Sudbø, Aasmund

    2017-04-01

    UV SO2 cameras have become a common method for monitoring SO2 emission rates from volcanoes. Scattered solar UV radiation is measured in two wavelength windows, typically around 310 nm and 330 nm (distinct / weak SO2 absorption) using interference filters. The data analysis comprises the retrieval of plume background intensities (to calculate plume optical densities), the camera calibration (to convert optical densities into SO2 column densities) and the retrieval of gas velocities within the plume as well as the retrieval of plume distances. SO2 emission rates are then typically retrieved along a projected plume cross section, for instance a straight line perpendicular to the plume propagation direction. Today, for most of the required analysis steps, several alternatives exist due to ongoing developments and improvements related to the measurement technique. We present piscope, a cross-platform, open source software toolbox for the analysis of UV SO2 camera data. The code is written in the Python programming language and emerged from the idea of a common analysis platform incorporating a selection of the most prevalent methods found in the literature. piscope includes several routines for plume background retrievals, routines for cell and DOAS based camera calibration including two individual methods to identify the DOAS field of view (shape and position) within the camera images. Gas velocities can be retrieved either based on an optical flow analysis or using signal cross correlation. A correction for signal dilution (due to atmospheric scattering) can be performed based on topographic features in the images. The latter requires distance retrievals to the topographic features used for the correction. These distances can be retrieved automatically on a per-pixel basis using intersections of individual pixel viewing directions with the local topography. The main features of piscope are presented based on a dataset recorded at Mt. Etna, Italy, in September 2015.
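The emission-rate step described above (integrating SO2 column densities times plume speed along a projected cross section) reduces to a simple discrete sum. A sketch under assumed units (column densities in kg/m², velocities in m/s, pixel length in m), not piscope's actual API:

```python
import numpy as np

def so2_emission_rate(column_densities, velocities, pixel_length):
    """Discrete emission rate along a plume cross section:
    rate = sum_i SCD_i * v_perp_i * dl, in kg/s for the assumed units."""
    scd = np.asarray(column_densities, dtype=float)
    v = np.asarray(velocities, dtype=float)
    return float(np.sum(scd * v * pixel_length))
```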

  4. Evaluation of a CdTe semiconductor based compact gamma camera for sentinel lymph node imaging

    Energy Technology Data Exchange (ETDEWEB)

    Russo, Paolo; Curion, Assunta S.; Mettivier, Giovanni; Esposito, Michela; Aurilio, Michela; Caraco, Corradina; Aloj, Luigi; Lastoria, Secondo [Dipartimento di Scienze Fisiche, Universita di Napoli Federico II, I-80126 Napoli (Italy) and Istituto Nazionale di Fisica Nucleare, Sezione di Napoli, I-80126 Napoli (Italy); Medicina Nucleare, Istituto Nazionale per lo Studio e la Cura dei Tumori, Fondazione G. Pascale, I-80131 Napoli (Italy)

    2011-03-15

    Purpose: The authors assembled a prototype compact gamma-ray imaging probe (MediPROBE) for sentinel lymph node (SLN) localization. This probe is based on a semiconductor pixel detector. Its basic performance was assessed in the laboratory and clinically in comparison with a conventional gamma camera. Methods: The room-temperature CdTe pixel detector (1 mm thick) has 256×256 square pixels arranged with a 55 µm pitch (sensitive area 14.08×14.08 mm²), coupled pixel-by-pixel via bump-bonding to the Medipix2 photon-counting readout CMOS integrated circuit. The imaging probe is equipped with a set of three interchangeable knife-edge pinhole collimators (0.94, 1.2, or 2.1 mm effective diameter at 140 keV) and its focal distance can be regulated in order to set a given field of view (FOV). A typical FOV of 70 mm at 50 mm skin-to-collimator distance corresponds to a minification factor of 1:5. The detector is operated at a single low-energy threshold of about 20 keV. Results: For 99mTc, at 50 mm distance, a background-subtracted sensitivity of 6.5×10⁻³ cps/kBq and a system spatial resolution of 5.5 mm FWHM were obtained for the 0.94 mm pinhole; corresponding values for the 2.1 mm pinhole were 3.3×10⁻² cps/kBq and 12.6 mm. The dark count rate was 0.71 cps. Clinical images in three patients with melanoma indicate detection of the SLNs with acquisition times between 60 and 410 s with an injected activity of 26 MBq 99mTc and prior localization with standard gamma camera lymphoscintigraphy. Conclusions: The laboratory performance of this imaging probe is limited by the pinhole collimator performance and the necessity of working in minification due to the limited detector size. However, in clinical operative conditions, the CdTe imaging probe was effective in detecting SLNs with adequate resolution and an acceptable sensitivity.
Sensitivity is expected to improve with the future availability of a larger CdTe detector permitting operation at shorter

  5. POTENTIAL OF UAV-BASED LASER SCANNER AND MULTISPECTRAL CAMERA DATA IN BUILDING INSPECTION

    Directory of Open Access Journals (Sweden)

    D. Mader

    2016-06-01

    Full Text Available Conventional building inspection of bridges, dams or large constructions in general is rather time-consuming and often expensive due to traffic closures and the need for special heavy vehicles such as under-bridge inspection units or other large lifting platforms. In view of this, an unmanned aerial vehicle (UAV) can be more reliable and efficient as well as less expensive and simpler to operate. The potential of UAVs as an assisting tool in building inspections is obvious. Furthermore, light-weight special sensors such as infrared and thermal cameras as well as laser scanners are available and well suited for use on unmanned aircraft systems. Such a flexible low-cost system is realized in the ADFEX project with the goal of time-efficient object exploration, monitoring and damage detection. For this purpose, a fleet of UAVs, equipped with several sensors for navigation, obstacle avoidance and 3D object-data acquisition, has been developed and constructed. This contribution deals with the potential of UAV-based data in building inspection. Therefore, an overview of the ADFEX project, sensor specifications and the requirements of building inspections in general is given. On the basis of results achieved in practical studies, the applicability and potential of the UAV system in building inspection will be presented and discussed.

  6. New Lower-Limb Gait Asymmetry Indices Based on a Depth Camera

    Science.gov (United States)

    Auvinet, Edouard; Multon, Franck; Meunier, Jean

    2015-01-01

    Background: Various asymmetry indices have been proposed to compare the spatiotemporal, kinematic and kinetic parameters of the lower limbs during the gait cycle. However, these indices rely on gait measurement systems that are costly and generally require manual examination, calibration procedures and the precise placement of sensors/markers on the body of the patient. Methods: To overcome these issues, this paper proposes a new asymmetry index, which uses the output of an inexpensive, easy-to-use and markerless depth camera (Microsoft Kinect™). This asymmetry index directly uses depth images provided by the Kinect™ without requiring joint localization. It is based on the longitudinal spatial difference between lower-limb movements during the gait cycle. To evaluate the relevance of this index, fifteen healthy subjects were tested on a treadmill, walking normally and then with an artificially-induced gait asymmetry created by a thick sole placed under one shoe. The gait movement was simultaneously recorded using a Kinect™ placed in front of the subject and a motion capture system. Results: The proposed longitudinal index distinguished asymmetrical gait, whereas indices based on classical gait parameters failed when computed from such Kinect™ skeleton measurements. Moreover, the correlation coefficient between this index measured by the Kinect™ and the ground truth of this index measured by motion capture is 0.968. Conclusion: This gait asymmetry index measured with a Kinect™ is low cost, easy to use and a promising development for clinical gait analysis. PMID:25719863
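The exact definition of the longitudinal index is in the paper, not in this abstract; a simplified, hypothetical variant that captures the idea (comparing the longitudinal depth of the two lower-limb image halves and averaging over the gait cycle) could be sketched as:

```python
import numpy as np

def longitudinal_asymmetry_index(depth_frames):
    """Hypothetical simplification of the longitudinal index: for each
    depth frame of the lower limbs, compare the mean depth of the left and
    right image halves, then average the absolute difference over the cycle."""
    diffs = []
    for frame in depth_frames:
        h, w = frame.shape
        left = frame[:, : w // 2]
        right = frame[:, w // 2 :]
        diffs.append(abs(float(np.mean(left)) - float(np.mean(right))))
    return float(np.mean(diffs))
```

A symmetric gait yields an index near zero; induced asymmetry (e.g. the thick sole) raises it.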

  7. PCA-based spatially adaptive denoising of CFA images for single-sensor digital cameras.

    Science.gov (United States)

    Zheng, Lei; Lukac, Rastislav; Wu, Xiaolin; Zhang, David

    2009-04-01

    Single-sensor digital color cameras use a process called color demosaicking to produce full color images from the data captured by a color filter array (CFA). The quality of demosaicked images is degraded by the sensor noise introduced during the image acquisition process. The conventional solution to combating CFA sensor noise is demosaicking first, followed by a separate denoising processing. This strategy generates many noise-caused color artifacts in the demosaicking process, which are hard to remove in the denoising process. Few denoising schemes that work directly on the CFA images have been presented because of the difficulties arising from the red, green and blue interlaced mosaic pattern, yet a well-designed "denoising first and demosaicking later" scheme can have advantages such as fewer noise-caused color artifacts and cost-effective implementation. This paper presents a principal component analysis (PCA)-based spatially-adaptive denoising algorithm, which works directly on the CFA data using a supporting window to analyze the local image statistics. By exploiting the spatial and spectral correlations existing in the CFA image, the proposed method can effectively suppress noise while preserving color edges and details. Experiments using both simulated and real CFA images indicate that the proposed scheme outperforms many existing approaches, including sophisticated demosaicking and denoising schemes, in terms of both objective measurement and visual evaluation.
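The paper's full algorithm (CFA-aware supporting windows, joint spatial-spectral statistics) is more involved; the PCA shrinkage at its core can be sketched generically for a stack of patch vectors with known noise variance (a textbook PCA/Wiener sketch, not the authors' exact method):

```python
import numpy as np

def pca_denoise_patches(patches, noise_var):
    """Denoise patch vectors (n_patches x dim): project onto the principal
    axes of the patch covariance, attenuate each component with a
    Wiener-like factor (signal variance / total variance), project back."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    cov = centered.T @ centered / len(patches)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    coeffs = centered @ eigvecs                     # PCA coefficients
    signal_var = np.maximum(eigvals - noise_var, 0.0)
    shrink = signal_var / np.maximum(eigvals, 1e-12)
    return (coeffs * shrink) @ eigvecs.T + mean
```

Components dominated by noise (eigenvalue close to the noise variance) are suppressed; strong signal directions pass nearly unchanged, which is what preserves edges.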

  8. Design of motion adjusting system for space camera based on ultrasonic motor

    Science.gov (United States)

    Xu, Kai; Jin, Guang; Gu, Song; Yan, Yong; Sun, Zhiyuan

    2011-08-01

    The drift angle is the transverse intersection angle of the image-motion vector of a space camera; adjusting it reduces the degradation of image quality. An ultrasonic motor (USM) is a new type of actuator driven by ultrasonic waves excited in piezoelectric ceramics, with many advantages over conventional electromagnetic motors. In this paper, improvements to the control system of the drift-adjusting mechanism are presented. The drift-adjusting system was designed around the T-60 ultrasonic motor and is composed of the drift-adjusting mechanical frame, the ultrasonic motor, the motor driver, a photoelectric encoder and the drift-adjusting controller. A TMS320F28335 DSP was adopted as the computation and control processor, the photoelectric encoder serves as the sensor of the closed-loop position system, and a voltage driving circuit was designed as the generator of the ultrasonic wave. A mathematical model of the drive circuit of the T-60 ultrasonic motor was built using Matlab modules. To verify the validity of the drift-adjusting system, a disturbance source was introduced and simulation analysis was performed. The motor-drive control of the drift-adjusting system was designed with an improved PID controller. The drift-angle adjusting system offers advantages such as small volume, simple configuration, high position-control precision, fine repeatability, a self-locking property and low power consumption. The results show that the system accomplishes the drift-angle adjusting task well.
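The improved PID law used in the paper is not reproduced in the abstract; a plain discrete PID position loop of the kind such a drift-angle controller builds on can be sketched as follows (gains and time step are illustrative, not from the paper):

```python
class PID:
    """Discrete PID controller for a position loop (illustrative gains)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt               # accumulate I term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Driving a simple integrator plant (position increases with the commanded rate) toward a drift-angle setpoint shows the loop converging.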

  9. Safety impacts of red light cameras at signalized intersections based on cellular automata models.

    Science.gov (United States)

    Chai, C; Wong, Y D; Lum, K M

    2015-01-01

    This study applies a simulation technique to evaluate the hypothesis that red light cameras (RLCs) exert important effects on accident risks. Conflict occurrences are generated by simulation and compared at intersections with and without RLCs to assess the impact of RLCs on several conflict types under various traffic conditions. Conflict occurrences are generated through simulating vehicular interactions based on an improved cellular automata (CA) model. The CA model is calibrated and validated against field observations at approaches with and without RLCs. Simulation experiments are conducted for RLC and non-RLC intersections with different geometric layouts and traffic demands to generate conflict occurrences that are analyzed to evaluate the hypothesis that RLCs exert important effects on road safety. The comparison of simulated conflict occurrences shows favorable safety impacts of RLCs on crossing conflicts and unfavorable impacts for rear-end conflicts during red/amber phases. Corroborative results are found from broad analysis of accident occurrence. RLCs are found to have a mixed effect on accident risk at signalized intersections: crossing collisions are reduced, whereas rear-end collisions may increase. The specially developed CA model is found to be a feasible safety assessment tool.
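The paper's improved CA model is not specified in the abstract; the classic Nagel-Schreckenberg automaton such traffic models build on, with its four update rules (accelerate, brake to the gap, random slowdown, move), can be sketched as:

```python
import random

def nasch_step(positions, velocities, road_length, v_max, p_slow, rng):
    """One synchronous update of the Nagel-Schreckenberg model on a
    circular road. positions/velocities are per-vehicle lists."""
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    new_v = velocities[:]
    for k, i in enumerate(order):
        ahead = order[(k + 1) % n]                       # next vehicle downstream
        gap = (positions[ahead] - positions[i] - 1) % road_length
        new_v[i] = min(velocities[i] + 1, v_max)         # 1. accelerate
        new_v[i] = min(new_v[i], gap)                    # 2. brake to keep gap
        if new_v[i] > 0 and rng.random() < p_slow:       # 3. random slowdown
            new_v[i] -= 1
    new_pos = [(positions[i] + new_v[i]) % road_length for i in range(n)]
    return new_pos, new_v                                # 4. move
```

A signalized approach is typically modeled by inserting a stationary "virtual vehicle" at the stop line during the red phase.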

  10. Efficient Smart CMOS Camera Based on FPGAs Oriented to Embedded Image Processing

    Directory of Open Access Journals (Sweden)

    Ignacio Bravo

    2011-02-01

    Full Text Available This article describes an image processing system based on an intelligent ad-hoc camera, whose two principal elements are a high speed 1.2 megapixel Complementary Metal Oxide Semiconductor (CMOS) sensor and a Field Programmable Gate Array (FPGA). The latter is used to control the various sensor parameter configurations and, where desired, to receive and process the images captured by the CMOS sensor. The flexibility and versatility offered by the new FPGA families makes it possible to incorporate microprocessors into these reconfigurable devices, and these are normally used for highly sequential tasks unsuitable for parallelization in hardware. For the present study, we used a Xilinx XC4VFX12 FPGA, which contains an internal Power PC (PPC) microprocessor. In turn, this contains a standalone system which manages the FPGA image processing hardware and endows the system with multiple software options for processing the images captured by the CMOS sensor. The system also incorporates an Ethernet channel for sending processed and unprocessed images from the FPGA to a remote node. Consequently, it is possible to visualize and configure system operation and captured and/or processed images remotely.

  11. Practical Stabilization of Uncertain Nonholonomic Mobile Robots Based on Visual Servoing Model with Uncalibrated Camera Parameters

    Directory of Open Access Journals (Sweden)

    Hua Chen

    2013-01-01

    Full Text Available The practical stabilization problem is addressed for a class of uncertain nonholonomic mobile robots with uncalibrated visual parameters. Based on the visual servoing kinematic model, a new switching controller is presented in the presence of parametric uncertainties associated with the camera system. In comparison with existing methods, the new design method is used directly to control the original system without any state or input transformation, which is effective for avoiding singularity. Under the proposed control law, it is rigorously proved that all the states of the closed-loop system can be stabilized to a prescribed arbitrarily small neighborhood of the zero equilibrium point. Furthermore, this switching control technique can be applied to solve the practical stabilization problem for a kind of mobile robot with uncertain parameters (and angle measurement disturbance) which appeared in the literature, such as Morin et al. (1998), Hespanha et al. (1999), Jiang (2000), and Hong et al. (2005). Finally, the simulation results show the effectiveness of the proposed controller design approach.

  12. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System.

    Science.gov (United States)

    Furferi, Rocco; Governi, Lapo; Volpe, Yary; Carfagni, Monica

    2016-08-31

    One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. The carding process is the one devoted to transforming a "fuzzy mass" of tufted fibers into a regular mass of untwisted fibers, named "tow". During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, thus leading to yarns whose color differs from the one used for reference. To solve this issue, the main aim of this work is to provide a system able to perform a spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real-time.

  13. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System

    Directory of Open Access Journals (Sweden)

    Rocco Furferi

    2016-08-01

    Full Text Available One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. The carding process is the one devoted to transforming a “fuzzy mass” of tufted fibers into a regular mass of untwisted fibers, named “tow”. During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, thus leading to yarns whose color differs from the one used for reference. To solve this issue, the main aim of this work is to provide a system able to perform a spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real-time.
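Assessing color correspondence typically comes down to a color-difference metric; a common choice is the CIE76 ΔE between two CIELAB triples (the abstract does not state which metric the authors actually use):

```python
def delta_e_cielab(lab1, lab2):
    """CIE76 colour difference between two CIELAB (L*, a*, b*) triples:
    the Euclidean distance in Lab space."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5
```

A ΔE below roughly 1 is commonly treated as imperceptible; acceptance thresholds in textile quality control are product-specific.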

  14. Positron annihilation microprobe

    Energy Technology Data Exchange (ETDEWEB)

    Canter, K.F. [Brandeis Univ., Waltham, MA (United States)

    1997-03-01

    Advances in positron annihilation microprobe development are reviewed. The present resolution achievable is 3 µm. The ultimate resolution is expected to be 0.1 µm, which will enable the positron microprobe to be a valuable tool in the development of 0.1 µm scale electronic devices in the future. (author)

  15. SPM: Scanning positron microscope

    Directory of Open Access Journals (Sweden)

    Marcel Dickmann

    2015-08-01

    Full Text Available The Munich scanning positron microscope, operated by the Universität der Bundeswehr München and the Technische Universität München, located at NEPOMUC, permits positron lifetime measurements with a lateral resolution in the µm range and within an energy range of 1–20 keV.

  16. Energy relations of positron-electron pairs emitted from surfaces.

    Science.gov (United States)

    Brandt, I S; Wei, Z; Schumann, F O; Kirschner, J

    2014-09-05

    The impact of a primary positron onto a surface may lead to the emission of a correlated positron-electron pair. By means of a lab-based positron beam we studied this pair emission from various surfaces. We analyzed the energy spectra in a symmetric emission geometry. We found that the available energy is shared in an unequal manner among the partners. On average the positron carries a larger fraction of the available energy. The unequal energy sharing is a consequence of positron and electron being distinguishable particles. We provide a model which explains the experimental findings.

  17. MMW/THz imaging using upconversion to visible, based on glow discharge detector array and CCD camera

    Science.gov (United States)

    Aharon, Avihai; Rozban, Daniel; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, Natan S.

    2017-10-01

    An inexpensive upconverting MMW/THz imaging method is suggested here. The method is based on a glow discharge detector (GDD) and a silicon photodiode or a simple CCD/CMOS camera. The GDD was previously found to be an excellent room-temperature MMW radiation detector when its electrical current is measured. The GDD is very inexpensive and is advantageous due to its wide dynamic range, broad spectral range, room temperature operation, immunity to high power radiation, and more. An upconversion method is demonstrated here which is based on measuring the visual light emitted by the GDD rather than its electrical current. The experimental setup simulates a setup composed of a GDD array, an MMW source, and a basic CCD/CMOS camera. The visual light emitted by the GDD array is directed to the CCD/CMOS camera and the change in the GDD light is measured using image processing algorithms. The combination of a CMOS camera and GDD focal plane arrays can yield a faster, more sensitive, and very inexpensive MMW/THz camera, eliminating the complexity of the electronic circuits and the internal electronic noise of the GDD. Furthermore, three-dimensional imaging systems based on scanning prohibit real-time operation; this is easily and economically solved using a GDD array, which enables the acquisition of distance and magnitude information from all the GDD pixels in the array simultaneously. The 3D image can be obtained using methods such as frequency-modulated continuous wave (FMCW) direct chirp modulation and time-of-flight (TOF) measurement.
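For the FMCW ranging mentioned at the end, the target distance follows from the measured beat frequency, the sweep time and the sweep bandwidth. A sketch of the standard relation (generic FMCW, not specific to the GDD setup):

```python
def fmcw_range(beat_frequency_hz, sweep_time_s, bandwidth_hz, c=3.0e8):
    """Target distance for a linear FMCW chirp:
    R = c * f_beat * T_sweep / (2 * B)."""
    return c * beat_frequency_hz * sweep_time_s / (2.0 * bandwidth_hz)
```

For example, a 1 GHz chirp swept over 1 ms producing a 200 kHz beat corresponds to a 30 m target.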

  18. Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera

    Science.gov (United States)

    Dziri, Aziz; Duranton, Marc; Chapuis, Roland

    2016-07-01

    Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.
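The tracking pipeline itself is not detailed in the abstract; a common building block for this kind of fixed-camera multi-object tracker is greedy IoU-based association between existing tracks and new detections, sketched here as an illustration (not the authors' exact pipeline):

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def associate(tracks, detections, iou_min=0.3):
    """Greedily match tracks to detections by descending IoU; each track
    and detection is used at most once."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)), reverse=True)
    matched, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score >= iou_min and ti not in used_t and di not in used_d:
            matched.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matched
```

Unmatched detections spawn new tracks; unmatched tracks are kept alive for a few frames to bridge occlusions.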

  19. Synthetic neutron camera and spectrometer in JET based on AFSI-ASCOT simulations

    Science.gov (United States)

    Sirén, P.; Varje, J.; Weisen, H.; Koskela, T.; JET contributors

    2017-09-01

    The ASCOT Fusion Source Integrator (AFSI) has been used to calculate neutron production rates and spectra corresponding to the JET 19-channel neutron camera (KN3) and the time-of-flight spectrometer (TOFOR) as ideal diagnostics, without detector-related effects. AFSI calculates fusion product distributions in 4D, based on Monte Carlo integration from arbitrary reactant distribution functions. The distribution functions were calculated by the ASCOT Monte Carlo particle orbit following code for thermal, NBI and ICRH particle reactions. Fusion cross-sections were defined based on the Bosch-Hale model and both DD and DT reactions have been included. Neutrons generated by AFSI-ASCOT simulations have already been applied as a neutron source of the Serpent neutron transport code in ITER studies. Additionally, AFSI has been selected to be a main tool as the fusion product generator in the complete analysis calculation chain: ASCOT - AFSI - SERPENT (neutron and gamma transport Monte Carlo code) - APROS (system and power plant modelling code), which encompasses the plasma as an energy source, heat deposition in plant structures as well as cooling and balance-of-plant in DEMO applications and other reactor relevant analyses. This conference paper presents the first results and validation of the AFSI DD fusion model for different auxiliary heating scenarios (NBI, ICRH) with very different fast particle distribution functions. Both calculated quantities (production rates and spectra) have been compared with experimental data from KN3 and synthetic spectrometer data from ControlRoom code. No unexplained differences have been observed. In future work, AFSI will be extended for synthetic gamma diagnostics and additionally, AFSI will be used as part of the neutron transport calculation chain to model real diagnostics instead of ideal synthetic diagnostics for quantitative benchmarking.

  20. Human Detection Based on the Generation of a Background Image by Using a Far-Infrared Light Camera

    Directory of Open Access Journals (Sweden)

    Eun Som Jeon

    2015-03-01

    Full Text Available The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance and monitoring systems. However, performance enhancement of human detection based on visible light cameras is limited because of factors such as nonuniform illumination, shadows and low external light in the evening and at night. Consequently, human detection based on thermal (far-infrared light) cameras has been considered as an alternative. However, its performance is influenced by factors such as the low image resolution, low contrast and large noise of thermal images. It is also affected by the high temperature of backgrounds during the day. To solve these problems, we propose a new method for detecting human areas in thermal camera images. Compared to previous works, the proposed research is novel in the following four aspects. First, one background image is generated by median and average filtering, and additional filtering procedures based on maximum gray level, size filtering and region erasing are applied to remove the human areas from the background image. Second, candidate human regions in the input image are located by combining the pixel and edge difference images between the input and background images. The thresholds for the difference images are adaptively determined based on the brightness of the generated background image. Noise components are removed by component labeling, a morphological operation and size filtering. Third, detected areas that may have more than two human regions are merged or separated based on the information in the horizontal and vertical histograms of the detected area. This procedure is adaptively operated based on the brightness of the generated background image.
Fourth, a further procedure for the separation and removal of the candidate human regions is performed based on the size and ratio of the height to width information of the candidate regions considering the camera viewing direction

  1. Human detection based on the generation of a background image by using a far-infrared light camera.

    Science.gov (United States)

    Jeon, Eun Som; Choi, Jong-Suk; Lee, Ji Hoon; Shin, Kwang Yong; Kim, Yeong Gon; Le, Toan Thanh; Park, Kang Ryoung

    2015-03-19

    The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance and monitoring systems. However, performance enhancement of human detection based on visible light cameras is limited because of factors such as nonuniform illumination, shadows and low external light in the evening and at night. Consequently, human detection based on thermal (far-infrared light) cameras has been considered as an alternative. However, its performance is influenced by factors such as the low image resolution, low contrast and large noise of thermal images. It is also affected by the high temperature of backgrounds during the day. To solve these problems, we propose a new method for detecting human areas in thermal camera images. Compared to previous works, the proposed research is novel in the following four aspects. First, one background image is generated by median and average filtering, and additional filtering procedures based on maximum gray level, size filtering and region erasing are applied to remove the human areas from the background image. Second, candidate human regions in the input image are located by combining the pixel and edge difference images between the input and background images. The thresholds for the difference images are adaptively determined based on the brightness of the generated background image. Noise components are removed by component labeling, a morphological operation and size filtering. Third, detected areas that may have more than two human regions are merged or separated based on the information in the horizontal and vertical histograms of the detected area. This procedure is adaptively operated based on the brightness of the generated background image. Fourth, a further procedure for the separation and removal of the candidate human regions is performed based on the size and ratio of the height to width information of the candidate regions considering the camera viewing direction and perspective
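The first two steps above (background generation by median and average filtering, then a difference image with a threshold adapted to the background brightness) can be sketched as follows; the 3x3 box filter and the brightness-proportional threshold are illustrative simplifications of the paper's procedure:

```python
import numpy as np

def generate_background(frames):
    """Background from a stack of thermal frames: pixel-wise median,
    then a 3x3 averaging (box) filter to smooth residual noise."""
    med = np.median(np.stack(frames), axis=0)
    padded = np.pad(med, 1, mode="edge")
    smoothed = np.zeros_like(med)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            smoothed += padded[1 + dy : 1 + dy + med.shape[0],
                               1 + dx : 1 + dx + med.shape[1]]
    return smoothed / 9.0

def candidate_human_mask(frame, background, k=0.2):
    """Pixel-difference mask with a threshold proportional to the mean
    background brightness (brighter daytime backgrounds -> higher threshold)."""
    threshold = k * float(np.mean(background))
    return np.abs(frame.astype(float) - background) > threshold
```

The subsequent labeling, merging/splitting via histograms and aspect-ratio filtering operate on this binary mask.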

  2. The positron beam at the Stuttgart pelletron accelerator and its applications to β⁺-γ positron lifetime measurements

    Science.gov (United States)

    Bauer, W.; Maier, K.; Major, J.; Schaefer, H.-E.; Seeger, A.; Carstanjen, H.-D.; Decker, W.; Diehl, J.; Stoll, H.

    1987-08-01

    A slow-positron source has been installed in the terminal of an electrostatic 6.5 MeV accelerator and provides a monoenergetic positron beam in the few-MeV range. It will be used to operate a “fast” positron lifetime spectrometer based on β⁺-γ coincidences. The properties of the beam, the expected performance of the spectrometer, its advantages over conventional γ-γ lifetime measurements, a number of intended applications, as well as recent positron-electron scattering experiments and plans for positron channelling and channelling-radiation studies are outlined.

  3. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    Science.gov (United States)

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as a trait and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.
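The tailored 0-1 knapsack used for classifier selection is not spelled out in the abstract; the generic 0-1 knapsack dynamic program it builds on, here selecting base classifiers by accuracy value under an integer cost budget (the cost standing in for a diversity constraint), is:

```python
def knapsack_select(accuracies, costs, budget):
    """0-1 knapsack DP: choose classifiers maximizing total accuracy value
    subject to sum(costs) <= budget (integer costs). Returns (value, indices)."""
    n = len(accuracies)
    best = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]                      # skip item i-1
            if costs[i - 1] <= b:
                cand = best[i - 1][b - costs[i - 1]] + accuracies[i - 1]
                if cand > best[i][b]:                        # take item i-1
                    best[i][b] = cand
    # backtrack the chosen indices
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= costs[i - 1]
    return best[n][budget], sorted(chosen)
```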

  4. A New Approach for Combining Time-of-Flight and RGB Cameras Based on Depth-Dependent Planar Projective Transformations.

    Science.gov (United States)

    Salinas, Carlota; Fernández, Roemi; Montes, Héctor; Armada, Manuel

    2015-09-23

    Image registration for sensor fusion is a valuable technique to acquire 3D and colour information for a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback for combining sensors that are not able to deliver common features. The combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on the extrinsic parameter computation of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for sensor registration with non-common features that avoids the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed within sets of ground control points of 104 images. Since the distance from the control points to the ToF camera is known, the working distance of each element in the Hlut is estimated. Finally, two series of experimental tests have been carried out in order to validate the capabilities of the proposed method.
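
    The lookup-table idea described above can be sketched in a few lines: given a pixel's measured ToF depth, pick the Hlut entry whose working distance is nearest and warp the pixel with that homography. This is a minimal sketch of the lookup step only; the toy homographies and distances below are hypothetical, not values from the paper.

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) through a 3x3 homography given as nested lists."""
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w

def register_pixel(hlut, depth, x, y):
    """Select the Hlut entry whose working distance is closest to the
    measured ToF depth, then warp the pixel with that homography.
    hlut: list of (working_distance_m, H) pairs."""
    d, H = min(hlut, key=lambda entry: abs(entry[0] - depth))
    return apply_homography(H, x, y)

# Toy LUT: identity at 1.0 m, a pure pixel shift at 2.0 m (hypothetical).
hlut = [(1.0, [[1, 0, 0], [0, 1, 0], [0, 0, 1]]),
        (2.0, [[1, 0, 10], [0, 1, 5], [0, 0, 1]])]
```

    A pixel at depth 1.9 m is warped with the 2.0 m homography, avoiding the resolution loss of projecting all colour through a single extrinsic transform.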

  5. A New Approach for Combining Time-of-Flight and RGB Cameras Based on Depth-Dependent Planar Projective Transformations

    Directory of Open Access Journals (Sweden)

    Carlota Salinas

    2015-09-01

    Full Text Available Image registration for sensor fusion is a valuable technique to acquire 3D and colour information for a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback for combining sensors that are not able to deliver common features. The combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on the extrinsic parameter computation of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for sensor registration with non-common features that avoids the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed within sets of ground control points of 104 images. Since the distance from the control points to the ToF camera is known, the working distance of each element in the Hlut is estimated. Finally, two series of experimental tests have been carried out in order to validate the capabilities of the proposed method.

  6. Infrared camera based thermometry for quality assurance of superficial hyperthermia applicators.

    Science.gov (United States)

    Müller, Johannes; Hartmann, Josefin; Bert, Christoph

    2016-04-07

    The purpose of this work was to provide a feasible and easy-to-apply phantom-based quality assurance (QA) procedure for superficial hyperthermia (SHT) applicators by means of infrared (IR) thermography. The VarioCAM hr head (InfraTec, Dresden, Germany) was used to investigate the SA-812, SA-510 and SA-308 applicators (all: Pyrexar Medical, Salt Lake City, UT, USA). Probe referencing and thermal equilibrium procedures were applied to determine the emissivity of the muscle-equivalent agar phantom. Firstly, the disturbing potential of thermal conduction on the temperature distribution inside the phantom was analyzed through measurements after various heating times (5-50 min). Next, the influence of the temperature of the water bolus between the SA-812 applicator and the phantom's surface was evaluated by varying its temperature. The results are presented in terms of characteristic values (extremal temperatures, percentiles and effective field sizes (EFS)) and temperature-area histograms (TAH). Lastly, spiral antenna applicators were compared by the introduced characteristics. The emissivity of the phantom was found to be ε = 0.91 ± 0.03; the results of both methods coincided. The influence of thermal conduction with regard to heating time was smaller than expected; the EFS of the SA-812 applicator had a size of (68.6 ± 6.7) cm², with averaged group variances of ±3.0 cm². The TAHs show that the influence of the water bolus is mostly limited to shallow depths, making the IR camera a very useful tool in SHT technical QA.
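
    The characteristic values above can be illustrated with a short sketch, assuming the common SHT definitions: a temperature-area histogram tabulates the heated area at or above each temperature level, and the effective field size is the area reaching at least 50% of the maximum temperature rise. The function names and the 50% criterion are assumptions for illustration, not details from this record.

```python
def temperature_area_histogram(temps, pixel_area_cm2, levels):
    """Temperature-area histogram: for each temperature level, the
    area (cm^2) covered by pixels at or above that level.
    temps: 2D grid (list of rows) of temperatures or temperature rises."""
    flat = [t for row in temps for t in row]
    return [(lv, sum(1 for t in flat if t >= lv) * pixel_area_cm2)
            for lv in levels]

def effective_field_size(temp_rise, pixel_area_cm2):
    """Effective field size: area heated to at least 50% of the maximum
    temperature rise (a common definition in SHT applicator QA)."""
    flat = [t for row in temp_rise for t in row]
    half_max = 0.5 * max(flat)
    return sum(1 for t in flat if t >= half_max) * pixel_area_cm2
```

    Applied to a calibrated IR image of the phantom surface, these two quantities summarize an applicator's heating pattern in a single curve and a single number.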

  7. A SPECT Scanner for Rodent Imaging Based on Small-Area Gamma Cameras

    Science.gov (United States)

    Lage, Eduardo; Villena, José L.; Tapias, Gustavo; Martinez, Naira P.; Soto-Montenegro, Maria L.; Abella, Mónica; Sisniega, Alejandro; Pino, Francisco; Ros, Domènec; Pavia, Javier; Desco, Manuel; Vaquero, Juan J.

    2010-10-01

    We developed a cost-effective SPECT scanner prototype (rSPECT) for in vivo imaging of rodents based on small-area gamma cameras. Each detector consists of a position-sensitive photomultiplier tube (PS-PMT) coupled to a 30 × 30 NaI(Tl) scintillator array, with electronics attached to the PS-PMT sockets for adapting the detector signals to an in-house developed data acquisition system. The detector components are enclosed in a lead-shielded case with a receptacle to insert the collimators. System performance was assessed using 99mTc for a high-resolution parallel-hole collimator and for a 0.75-mm pinhole collimator with a 60° aperture angle and a 42-mm collimator length. The energy resolution is about 10.7% of the photopeak energy. The overall system sensitivity is about 3 cps/μCi/detector, and planar spatial resolution ranges from 2.4 mm at 1 cm source-to-collimator distance to 4.1 mm at 4.5 cm with parallel-hole collimators. With pinhole collimators, planar spatial resolution ranges from 1.2 mm at 1 cm source-to-collimator distance to 2.4 mm at 4.5 cm; sensitivity at these distances ranges from 2.8 to 0.5 cps/μCi/detector. Tomographic hot-rod phantom images are presented together with images of bone, myocardium and brain of living rodents to demonstrate the feasibility of preclinical small-animal studies with the rSPECT.

  8. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, a rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
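
    The coordinate transformation and sub-pixel interpolation mentioned above can be sketched under the usual assumption that a retina-like sensor has a log-polar layout (rings growing exponentially in radius, sectors dividing the circle uniformly); bilinear interpolation then resamples a rectangular image at the resulting non-integer positions. The layout parameters below are illustrative, not the actual sensor's geometry.

```python
import math

def retina_to_cartesian(ring, sector, n_sectors, r0, growth):
    """Center of a retina-like pixel in Cartesian coordinates, assuming a
    log-polar layout: radius r0 * growth**ring, uniform angular sectors."""
    r = r0 * growth ** ring
    theta = 2 * math.pi * sector / n_sectors
    return r * math.cos(theta), r * math.sin(theta)

def bilinear(img, x, y):
    """Sub-pixel sample of a row-major grayscale image (list of rows)
    by bilinear interpolation of the four surrounding pixels."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    dx, dy = x - x0, y - y0
    p = lambda i, j: img[j][i]
    return (p(x0, y0) * (1 - dx) * (1 - dy) + p(x0 + 1, y0) * dx * (1 - dy) +
            p(x0, y0 + 1) * (1 - dx) * dy + p(x0 + 1, y0 + 1) * dx * dy)
```

    Mapping each retina-like pixel center into the rectangular image and sampling with `bilinear` is one plausible way to display both cameras' data in a common frame.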

  9. Depolarization in the ILC Linac-To-Ring Positron beamline

    Energy Technology Data Exchange (ETDEWEB)

    Kovalenko, Valentyn; Ushakov, Andriy [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Moortgat-Pick, Gudrid [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Riemann, Sabine [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2012-02-15

    To achieve the physics goals of future Linear Colliders, it is important that electron and positron beams are polarized. The positron source planned for the International Linear Collider (ILC) is based on a helical undulator system and can deliver a polarised beam with |P(e+)| ≥ 60%. To ensure that no significant polarization is lost during the transport of the electron and positron beams from the source to the interaction region, spin tracking has to be included in all transport elements which can contribute to a loss of polarization: the positron source, the damping ring, the spin rotators, the main linac and the beam delivery system. In particular, the dynamics of the polarized positron beam needs to be investigated. The results of positron spin tracking and a depolarization study at the Positron-Linac-To-Ring (PLTR) beamline are presented. (orig.)

  10. Simulation-based camera navigation training in laparoscopy-a randomized trial

    DEFF Research Database (Denmark)

    Nilsson, Cecilia; Sørensen, Jette Led; Konge, Lars

    2017-01-01

    BACKGROUND: Inexperienced operating assistants are often tasked with the important role of handling camera navigation during laparoscopic surgery. Incorrect handling can lead to poor visualization, increased operating time, and frustration for the operating surgeon, all of which can compromise patient...

  11. Single camera spectral domain polarization-sensitive optical coherence tomography based on orthogonal channels by time divided detection

    Science.gov (United States)

    He, Youwu; Li, Zhifang; Zhang, Ying; Li, Hui

    2017-11-01

    We demonstrate a simple polarization-sensitive spectral-domain optical coherence tomography implementation using a single line-scan camera based on time-divided detection. Two light shutters were placed in the dual-assembly reference arm to provide time-divided detection of the orthogonal vertical and horizontal polarized light. The relative reflectivity and retardance information were obtained by recombining the two orthogonal polarization images. This system can be employed to implement high-speed polarization-sensitive OCT imaging.
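
    The recombination of the two orthogonal channels can be illustrated with the standard PS-OCT relations: reflectivity from the sum of the squared channel amplitudes, retardance from the arctangent of their ratio. The record does not state the exact formulas used in this system, so this is a hedged sketch of the conventional approach.

```python
import math

def reflectivity_and_retardance(a_h, a_v):
    """Combine the amplitudes of the horizontal and vertical polarization
    channels (here: the two time-divided detections) into relative
    reflectivity and single-pass retardance, using the standard PS-OCT
    relations R ~ A_H^2 + A_V^2 and delta = arctan(A_V / A_H)."""
    reflectivity = a_h ** 2 + a_v ** 2
    retardance = math.atan2(a_v, a_h)  # radians, in [0, pi/2] for a_h, a_v >= 0
    return reflectivity, retardance
```

    Equal amplitudes in both channels, for instance, correspond to a 45° (π/4) retardance.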

  12. Observing low-level stratiform clouds and determining its base height at night by sky camera measurements

    Science.gov (United States)

    Kolláth, Kornél; Kolláth, Zoltán

    2017-04-01

    The amount and base height of low-level clouds are critical parameters in aviation meteorology. New techniques which can extend the geographic coverage and characterization of cloudiness could be beneficial. In recent years, sky camera systems have become more and more popular as a meteorological observation tool. Recent commercial digital cameras with increasingly sensitive sensors provide cheap opportunities for luminance measurements of the night sky. We introduce a new observation method for determining cloud base height analogous to the triangulation principle of the searchlight ceilometer. We show that light pollution (the upward component of artificial lights) can be used passively as a cloud ceiling projector in various environments. The method was tested over a one-year period from one observation site in central Budapest, allowing comparison with the Budapest airport cloud observation data. In the case of homogeneous stratus cloud sheets, we found that the base height could be estimated with reasonable accuracy via the illumination of the clouds by the stronger ornamental lights in the city. Case studies with different local light pollution characteristics (e.g. smaller settlements, different observation distances) will be presented. Limitations of the method will be discussed. The main problem to be addressed is how nighttime sky camera data can be assimilated into the other routine meteorological observations of low-level clouds available at night.
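
    The searchlight-ceilometer triangulation the method is analogous to reduces to one line of trigonometry: with a known horizontal baseline to the light source and the measured elevation angle of the illuminated spot on the cloud, the base height follows from the tangent. A minimal sketch:

```python
import math

def cloud_base_height(baseline_m, elevation_deg):
    """Searchlight-ceilometer triangulation: a ground light source a known
    horizontal distance away illuminates a spot on the cloud base; the
    spot's elevation angle seen by the camera gives h = d * tan(alpha)."""
    return baseline_m * math.tan(math.radians(elevation_deg))

# A light 2 km away whose cloud spot appears at 45 degrees elevation
# implies a cloud base near 2000 m.
```

    In the passive variant described above, a city light of known position plays the role of the searchlight.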

  13. Efficient Cryosolid Positron Moderators

    Science.gov (United States)

    2012-08-01

    [Record text garbled in source; only fragments of a reference list and an acronym glossary survive. Recoverable content: the positron (e+) is the antimatter counterpart to the electron; references include A.P. Mills, Jr. and E.M. Gullikson, Appl. Phys. Lett. 49, 1121 (1986); glossary entries include FTIR (Fourier transform IR), HEDM (high energy density matter) and HPGe; approved for public release, distribution unlimited, 96ABW-2012-0348.]

  14. Comparison of Target- and Mutual Information Based Calibration of Terrestrial Laser Scanner and Digital Camera for Deformation Monitoring

    Science.gov (United States)

    Omidalizarandi, M.; Neumann, I.

    2015-12-01

    In the current state of the art, geodetic deformation analysis of natural and artificial objects (e.g. dams, bridges, ...) is ongoing research in both static and kinematic mode and has received considerable interest from researchers and geodetic engineers. In this work, to increase the accuracy of geodetic deformation analysis, a terrestrial laser scanner (TLS; here the Zoller+Fröhlich IMAGER 5006) and a high-resolution digital camera (Nikon D750) are integrated to benefit complementarily from each other. In order to optimally combine the acquired data of the hybrid sensor system, a highly accurate estimation of the extrinsic calibration parameters between the TLS and the digital camera is a vital preliminary step. The calibration of the hybrid sensor system can thus be separated into three single calibrations: calibration of the camera, calibration of the TLS, and extrinsic calibration between TLS and digital camera. In this research, we focus on highly accurate estimation of the extrinsic parameters between the fused sensors, applying both target-based and targetless (mutual information) methods. In target-based calibration, different types of observations (image coordinates, TLS measurements and laser tracker measurements for validation) are utilized, and variance component estimation is applied to assign adequate weights to the observations. Space resection bundle adjustment based on the collinearity equations is solved using the Gauss-Markov and Gauss-Helmert models. Statistical tests are performed to discard outliers and large residuals in the adjustment procedure. Finally, the two approaches are compared, their advantages and disadvantages are investigated, and numerical results are presented and discussed.

  15. A possible role for silicon microstrip detectors in nuclear medicine Compton imaging of positron emitters

    CERN Document Server

    Scannavini, M G; Royle, G J; Cullum, I; Raymond, M; Hall, G; Iles, G

    2002-01-01

    Collimation of gamma-rays based on Compton scatter could in principle provide high resolution and high sensitivity, thus becoming an advantageous method for the imaging of radioisotopes of clinical interest. A small laboratory prototype of a Compton camera is being constructed in order to initiate studies aimed at assessing the feasibility of Compton imaging of positron emitters. The design of the camera is based on the use of a silicon collimator consisting of a stack of double-sided, AC-coupled microstrip detectors (area 6×6 cm², 500 μm thickness, 128 channels/side). Two APV6 chips are employed for signal readout on opposite planes of each detector. This work presents the first results on the noise performance of the silicon strip detectors. Measurements of the electrical characteristics of the detector are also reported. On the basis of the measured noise, an angular resolution of approximately 5° is predicted for the Compton collimator.
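
    A Compton camera constrains each event to a cone whose opening angle follows from the energy deposited in the scatterer via the Compton formula; the sketch below shows that one relation only (it is not the prototype's actual reconstruction code).

```python
import math

ME_C2_KEV = 511.0  # electron rest energy in keV

def compton_cone_angle(e_incident_kev, e_deposited_kev):
    """Scattering angle (radians) of the Compton cone from the energy
    deposited in the silicon scatterer:
        cos(theta) = 1 - m_e c^2 * (1/E' - 1/E),  E' = E - E_dep,
    where E is the incident photon energy and E' the scattered energy."""
    e_scattered = e_incident_kev - e_deposited_kev
    cos_theta = 1.0 - ME_C2_KEV * (1.0 / e_scattered - 1.0 / e_incident_kev)
    return math.acos(cos_theta)

# For a 511 keV annihilation photon depositing one third of its energy,
# the cone opening angle is 60 degrees.
```

    The intersection of many such cones localizes the positron-emitter source; the detectors' energy noise directly limits the achievable angular resolution, as the abstract notes.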

  16. Mars Observer Camera

    OpenAIRE

    Malin, M. C.; Danielson, G. E.; Ingersoll, A. P.; Masursky, H.; J. Veverka(Massachusetts Institute of Technology, Cambridge, U.S.A.); Ravine, M. A.; Soulanille, T.A.

    1992-01-01

    The Mars Observer camera (MOC) is a three-component system (one narrow-angle and two wide-angle cameras) designed to take high spatial resolution pictures of the surface of Mars and to obtain lower spatial resolution, synoptic coverage of the planet's surface and atmosphere. The cameras are based on the “push broom” technique; that is, they do not take “frames” but rather build pictures, one line at a time, as the spacecraft moves around the planet in its orbit. MOC is primarily a telescope f...

  17. Clinical application of in vivo treatment delivery verification based on PET/CT imaging of positron activity induced at high energy photon therapy

    Science.gov (United States)

    Janek Strååt, Sara; Andreassen, Björn; Jonsson, Cathrine; Noz, Marilyn E.; Maguire, Gerald Q., Jr.; Näfstadius, Peder; Näslund, Ingemar; Schoenahl, Frederic; Brahme, Anders

    2013-08-01

    The purpose of this study was to investigate in vivo verification of radiation treatment with high energy photon beams using PET/CT to image the induced positron activity. The measurements of the positron activation induced in a preoperative rectal cancer patient and a prostate cancer patient following 50 MV photon treatments are presented. Total doses of 5 and 8 Gy, respectively, were delivered to the tumors. Imaging was performed with a 64-slice PET/CT scanner for 30 min, starting 7 min after the end of the treatment. The CT volume from the PET/CT and the treatment planning CT were coregistered by matching anatomical reference points in the patient. The treatment delivery was imaged in vivo based on the distribution of the induced positron emitters produced by photonuclear reactions in tissue, mapped onto the associated dose distribution of the treatment plan. The results showed that the spatial distribution of induced activity in both patients agreed well with the delivered beam portals of the treatment plans in the entrance subcutaneous fat regions, but less so in blood- and oxygen-rich soft tissues. For the preoperative rectal cancer patient, however, a (2 ± 0.5) cm misalignment was observed in the cranial-caudal direction of the patient between the induced activity distribution and the treatment plan, indicating a patient setup error for the beam. No misalignment of this kind was seen in the prostate cancer patient. However, due to an error during the fast patient setup in the PET/CT scanner, a slight mis-positioning of the patient in the PET/CT was observed in all three planes, resulting in a deformed activity distribution compared to the treatment plan. The present study indicates that the positron emitters induced by high energy photon beams can be measured quite accurately using PET imaging of subcutaneous fat, allowing portal verification of the delivered treatment beams. Measurement of the induced activity in the patient 7 min after receiving 5 Gy involved count rates which were about

  18. Design of Belief Propagation Based on FPGA for the Multistereo CAFADIS Camera

    Directory of Open Access Journals (Sweden)

    José Manuel Rodríguez-Ramos

    2010-10-01

    Full Text Available In this paper we describe a fast, specialized hardware implementation of the belief propagation algorithm for the CAFADIS camera, a new plenoptic sensor patented by the University of La Laguna. This camera captures the lightfield of the scene and can be used to find out at which depth each pixel is in focus. The algorithm has been designed for FPGA devices using VHDL. We propose a parallel and pipelined architecture to implement the algorithm without external memory. Although the BRAM resources of the device increase considerably, we can meet real-time constraints by using extremely high-performance signal processing through parallelism and by accessing several memories simultaneously. The quantized results with 16-bit precision show that performance is very close to that of the original Matlab implementation of the algorithm.
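
    For readers unfamiliar with belief propagation, the core per-iteration operation such a pipeline parallelizes is the message update. Below is a minimal min-sum sketch for a per-pixel label (e.g. depth) cost volume, with a truncated-linear smoothness term as one common choice; the details are generic assumptions, not taken from the paper's FPGA design.

```python
def bp_message(data_cost, incoming, smooth_cost):
    """One min-sum belief-propagation message from a pixel to a neighbor:
    for each candidate label d_r at the receiver, the minimum over the
    sender's labels d_s of (data cost + sum of messages from the sender's
    other neighbors + pairwise smoothness cost).
    data_cost, incoming: per-label costs at the sending pixel
    smooth_cost(d_s, d_r): pairwise penalty between labels."""
    n = len(data_cost)
    return [min(data_cost[ds] + incoming[ds] + smooth_cost(ds, dr)
                for ds in range(n))
            for dr in range(n)]

# Truncated-linear smoothness, a common choice in BP depth estimation.
smooth = lambda a, b: min(abs(a - b), 2)
```

    Iterating this update in all four directions and summing the final messages with the data cost gives each pixel's belief, whose minimizing label is the estimated depth; on the FPGA, many such updates run concurrently against on-chip BRAM.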

  19. Design of a fast multi-hit position sensitive detector based on a CCD camera

    CERN Document Server

    Renaud, L; Da Costa, G; Deconihout, B

    2002-01-01

    A new position sensitive detector has been designed for time-of-flight mass spectrometry. It combines a double micro-channel plate stage with a phosphor screen, the conductive coating of which is divided into an array of strip-like-shaped anodes. Time-of-flight signals are measured on the strip array with a 0.5 ns resolution, while a CCD camera records light-spots generated by ion impacts on the phosphor screen. With this particular imaging device, it is possible to accurately assign time-of-flight to positions recorded by the camera. This paper describes the main features of this new position sensitive detector and results obtained with a three-dimensional atom probe are presented.

  20. Color correction for projected image on colored-screen based on a camera

    Science.gov (United States)

    Kim, Dae-Chul; Lee, Tae-Hyoung; Choi, Myong-Hui; Ha, Yeong-Ho

    2011-01-01

    Recently, the projector has become one of the most common display devices, not only for presentations in offices and classrooms but also for entertainment at home and in theaters. The use of mobile projectors extends these applications to meetings in the field and presentations on any spot. Accordingly, projection onto a white screen is not always guaranteed, causing some color distortion. Several algorithms have been suggested to correct the projected color on a lightly colored screen. These are limited by the use of measurement equipment that cannot always be brought along, and they lack accuracy because the transform matrix is obtained using a small number of patches. In this paper, a color correction method using a general still camera as convenient measurement equipment is proposed to match the colors between white and colored screens. A patch containing 9 ramps of each channel is first projected on the white and lightly colored screens, then captured by the camera. Next, digital values are obtained from the captured image for each ramp patch on both screens, yielding different values for the same patch. After that, we check which ramp patch on the colored screen has the same digital value as on the white screen, repeating this procedure for all ramp patches. The difference between corresponding ramp patches reveals the amount of color shift. A color correction matrix is then obtained by regression using the matched values. Unlike previous methods, the use of a general still camera allows measurement regardless of place. In addition, the two captured images of ramp patches on the white and colored screens give the color shift for 9 steps of each channel, enabling accurate construction of the transform matrix. The nonlinearity of the camera characteristics is also considered by using regression to construct the transform matrix. In experiments, the proposed method gives better color correction in both objective and subjective evaluation than previous methods.
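
    The regression step can be sketched per channel: fit, in the least-squares sense, a mapping from the digital values measured on the colored screen to those measured on the white screen. The paper builds a full transform matrix over all channels and models camera nonlinearity; the straight-line, single-channel version below is only a simplified illustration.

```python
def fit_channel_correction(measured, target):
    """Ordinary least-squares straight-line fit mapping one channel's
    digital values on the colored screen (measured) to those on the
    white screen (target). Returns (gain, offset) so that
    corrected = gain * measured + offset.
    This is a per-channel slice of the paper's full matrix regression."""
    n = len(measured)
    mx = sum(measured) / n
    my = sum(target) / n
    sxx = sum((x - mx) ** 2 for x in measured)
    sxy = sum((x - mx) * (y - my) for x, y in zip(measured, target))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

# Matched values from the 9-step ramp patches on both screens feed the fit.
```

    Using all 9 ramp steps per channel, rather than a handful of patches, is what gives the transform its accuracy in the paper's argument.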

  1. Split ring resonator based THz-driven electron streak camera featuring femtosecond resolution.

    Science.gov (United States)

    Fabiańska, Justyna; Kassier, Günther; Feurer, Thomas

    2014-07-10

    Through combined three-dimensional electromagnetic and particle tracking simulations we demonstrate a THz driven electron streak camera featuring a temporal resolution on the order of a femtosecond. The ultrafast streaking field is generated in a resonant THz sub-wavelength antenna which is illuminated by an intense single-cycle THz pulse. Since electron bunches and THz pulses are generated with parts of the same laser system, synchronization between the two is inherently guaranteed.

  2. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates.

    Directory of Open Access Journals (Sweden)

    Alexander Richard Braczkowski

    Full Text Available Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a 'control' and a 'treatment' survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except that a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or the temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p > 0.05). The numbers of photographic captures were also similar for the control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys (although estimates derived using non-spatial methods (7.28-9.28 leopards/100 km²) were considerably higher than estimates from spatially explicit methods (3.40-3.65 leopards/100 km²)). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that, at least in the context of leopard research in productive habitats, the use of lures is not warranted.

  3. Hardware-based smart camera for recovering high dynamic range video from multiple exposures

    Science.gov (United States)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2014-10-01

    In many applications such as video surveillance or defect detection, the perception of information related to a scene is limited in areas with strong contrasts. The high dynamic range (HDR) capture technique can deal with these limitations. The proposed method has the advantage of automatically selecting multiple exposure times to make outputs more visible than fixed exposure ones. A real-time hardware implementation of the HDR technique that shows more details both in dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capturing and HDR video processing from three exposures. What is new in our work is shown through the following points: HDR video capture through multiple exposure control, HDR memory management, HDR frame generation, and representation under a hardware context. Our camera achieves a real-time HDR video output at 60 fps at 1.3 megapixels and demonstrates the efficiency of our technique through an experimental result. Applications of this HDR smart camera include the movie industry, the mass-consumer market, military, automotive industry, and surveillance.
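
    A common way to merge multiple exposures into an HDR radiance estimate, of the kind such a pipeline implements, is a weighted average of per-exposure radiance estimates with a hat-shaped weight that discounts under- and over-exposed pixels (Debevec-style, assuming a linear sensor response). The exact weighting used by the camera described above is not given in this record; this is a generic sketch for one pixel.

```python
def merge_hdr(pixels_and_exposures):
    """Merge one pixel across several exposures into a relative radiance
    value: each exposure contributes value/exposure_time, weighted by a
    triangle ("hat") function that trusts mid-range pixel values and
    discounts under- and over-exposed ones.
    pixels_and_exposures: list of (pixel_value_0_255, exposure_time_s)."""
    def weight(z, z_min=0, z_max=255):
        mid = 0.5 * (z_min + z_max)
        return (z - z_min) if z <= mid else (z_max - z)
    num = den = 0.0
    for z, t in pixels_and_exposures:
        w = weight(z)
        num += w * (z / t)  # radiance estimate from this exposure
        den += w
    return num / den if den else 0.0
```

    In the hardware, the same arithmetic is applied per pixel across the three exposures, which is why the merge maps naturally onto a streaming FPGA/SoC pipeline.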

  4. Visual fatigue modeling for stereoscopic video shot based on camera motion

    Science.gov (United States)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort in stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and the comfortable zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on specific objects for static cameras and backgrounds. Relative motion should be considered for different camera conditions, which determine different factor coefficients and weights. A novel visual fatigue prediction model is presented, in contrast to the traditional one. The visual fatigue degree is predicted using the multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be obtained according to the proposed algorithm. Compared with conventional algorithms which ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.

  5. Towards Camera-LIDAR Fusion-Based Terrain Modelling for Planetary Surfaces: Review and Analysis

    Directory of Open Access Journals (Sweden)

    Affan Shaukat

    2016-11-01

    Full Text Available In recent decades, terrain modelling and reconstruction techniques have attracted increased research interest in precise short- and long-distance autonomous navigation, localisation and mapping within field robotics. One of the most challenging applications is autonomous planetary exploration using mobile robots. Rovers deployed to explore extraterrestrial surfaces are required to perceive and model the environment with little or no intervention from the ground station. To date, stereopsis represents the state-of-the-art method and can achieve short-distance planetary surface modelling. However, future space missions will require scene reconstruction at greater distance, fidelity and feature complexity, potentially using other sensors like Light Detection And Ranging (LIDAR). LIDAR has been extensively exploited for target detection, identification and depth estimation in terrestrial robotics, but is still under development to become a viable technology for space robotics. This paper first reviews current methods for scene reconstruction and terrain modelling using cameras in planetary robotics and LIDARs in terrestrial robotics; we then propose camera-LIDAR fusion as a feasible technique to overcome the limitations of either of these individual sensors for planetary exploration. A comprehensive analysis is presented to demonstrate the advantages of camera-LIDAR fusion in terms of range, fidelity, accuracy and computation.

  6. Infrared Camera

    Science.gov (United States)

    1997-01-01

    A sensitive infrared camera that observes the blazing plumes from the Space Shuttle or expendable rocket lift-offs is capable of scanning for fires, monitoring the environment and providing medical imaging. The hand-held camera uses highly sensitive arrays in infrared photodetectors known as quantum well infrared photo detectors (QWIPS). QWIPS were developed by the Jet Propulsion Laboratory's Center for Space Microelectronics Technology in partnership with Amber, a Raytheon company. In October 1996, QWIP detectors pointed out hot spots of the destructive fires speeding through Malibu, California. Night vision, early warning systems, navigation, flight control systems, weather monitoring, security and surveillance are among the duties for which the camera is suited. Medical applications are also expected.

  7. Individualized Positron Emission Tomography–Based Isotoxic Accelerated Radiation Therapy Is Cost-Effective Compared With Conventional Radiation Therapy: A Model-Based Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Bongers, Mathilda L., E-mail: ml.bongers@vumc.nl [Department of Epidemiology and Biostatistics, VU University Medical Center, Amsterdam (Netherlands); Coupé, Veerle M.H. [Department of Epidemiology and Biostatistics, VU University Medical Center, Amsterdam (Netherlands); De Ruysscher, Dirk [Radiation Oncology University Hospitals Leuven/KU Leuven, Leuven (Belgium); Department of Radiation Oncology, GROW Research Institute, Maastricht University Medical Center, Maastricht (Netherlands); Oberije, Cary; Lambin, Philippe [Department of Radiation Oncology, GROW Research Institute, Maastricht University Medical Center, Maastricht (Netherlands); Uyl-de Groot, Cornelia A. [Department of Epidemiology and Biostatistics, VU University Medical Center, Amsterdam (Netherlands); Institute for Medical Technology Assessment, Erasmus University Rotterdam, Rotterdam (Netherlands)

    2015-03-15

    Purpose: To evaluate long-term health effects, costs, and cost-effectiveness of positron emission tomography (PET)-based isotoxic accelerated radiation therapy treatment (PET-ART) compared with conventional fixed-dose CT-based radiation therapy treatment (CRT) in non-small cell lung cancer (NSCLC). Methods and Materials: Our analysis uses a validated decision model, based on data of 200 NSCLC patients with inoperable stage I-IIIB. Clinical outcomes, resource use, costs, and utilities were obtained from the Maastro Clinic and the literature. Primary model outcomes were the difference in life-years (LYs), quality-adjusted life-years (QALYs), costs, and the incremental cost-effectiveness and cost/utility ratio (ICER and ICUR) of PET-ART versus CRT. Model outcomes were obtained from averaging the predictions for 50,000 simulated patients. A probabilistic sensitivity analysis and scenario analyses were carried out. Results: The average incremental costs per patient of PET-ART were €569 (95% confidence interval [CI] €−5327-€6936) for 0.42 incremental LYs (95% CI 0.19-0.61) and 0.33 QALYs gained (95% CI 0.13-0.49). The base-case scenario resulted in an ICER of €1360 per LY gained and an ICUR of €1744 per QALY gained. The probabilistic analysis gave a 36% probability that PET-ART improves health outcomes at reduced costs and a 64% probability that PET-ART is more effective at slightly higher costs. Conclusion: On the basis of the available data, individualized PET-ART for NSCLC seems to be cost-effective compared with CRT.
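
    The ICER and ICUR quoted above are simply incremental cost divided by incremental effect. Recomputing them from the rounded values in the abstract approximately reproduces the reported figures; the small gaps come from rounding of the published inputs, since the model uses unrounded per-patient averages.

```python
def icer(delta_cost, delta_effect):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of effect (life-year for ICER, QALY for ICUR) of one strategy
    (here PET-ART) over a comparator (CRT)."""
    return delta_cost / delta_effect

# From the abstract's rounded values: 569 euros incremental cost,
# 0.42 LYs and 0.33 QALYs gained -> about 1355 euros/LY and 1724
# euros/QALY, close to the reported 1360 and 1744.
```
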

  8. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition.

    Science.gov (United States)

    Sun, Ryan; Bouchard, Matthew B; Hillman, Elizabeth M C

    2010-08-02

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software's framework and provide details to guide users with development of this and similar software.
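    When LEDs are cycled in a fixed order synchronized to frame acquisition, the recorded stack interleaves the wavelengths and must be split back out for analysis. The following sketch shows that de-interleaving step under the assumption that frame i is lit by LED i mod N; the function name `deinterleave` is ours and is not part of the SPLASSH package.

    ```python
    import numpy as np

    def deinterleave(frames, n_leds):
        """Split a time-interleaved multispectral stack into one stack per LED.

        frames: array of shape (n_frames, h, w), acquired while the LEDs
        cycle in a fixed order (frame i lit by LED i % n_leds).
        Returns a list of n_leds arrays, one per illumination wavelength.
        """
        return [frames[i::n_leds] for i in range(n_leds)]

    # 12 frames of 4x4 pixels with 3 LEDs cycling -> three 4-frame stacks
    stack = np.arange(12 * 16).reshape(12, 4, 4)
    per_led = deinterleave(stack, 3)
    ```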

  9. A new pnCCD-based color X-ray camera for fast spatial and energy-resolved measurements

    Energy Technology Data Exchange (ETDEWEB)

    Ordavo, I., E-mail: ivan.ordavo@pnsensor.de [PNSensor GmbH, Roemerstrasse 28, 80803 Muenchen (Germany); PNDetector GmbH, Emil-Nolde-Strasse 10, 81735 Muenchen (Germany); Ihle, S. [PNSensor GmbH, Roemerstrasse 28, 80803 Muenchen (Germany); Arkadiev, V. [Institut fuer angewandte Photonik e.V., Rudower Chaussee 29/31, 12489 Berlin (Germany); Scharf, O. [BAM Federal Institute for Materials Research and Testing, Richard-Willstaetter-Strasse 11, 12489 Berlin (Germany); Soltau, H. [PNSensor GmbH, Roemerstrasse 28, 80803 Muenchen (Germany); Bjeoumikhov, A.; Bjeoumikhova, S. [IFG - Institute for Scientific Instruments GmbH, Rudower Chaussee 29/31, 12489 Berlin (Germany); Buzanich, G. [BAM Federal Institute for Materials Research and Testing, Richard-Willstaetter-Strasse 11, 12489 Berlin (Germany); Gubzhokov, R.; Guenther, A. [IFG - Institute for Scientific Instruments GmbH, Rudower Chaussee 29/31, 12489 Berlin (Germany); Hartmann, R.; Holl, P. [PNSensor GmbH, Roemerstrasse 28, 80803 Muenchen (Germany); Kimmel, N. [Max-Planck-Institut fuer extraterrestrische Physik, Giessenbachstrasse, 85748 Garching (Germany); Max-Planck-Institut Halbleiterlabor, Otto-Hahn-Ring 6, 81739 Muenchen (Germany); Kuehbacher, M. [Helmholtz Centre Berlin for Materials and Energy, Hahn-Meitner-Platz 1, 14109 Berlin (Germany); Lang, M. [PNSensor GmbH, Roemerstrasse 28, 80803 Muenchen (Germany); Langhoff, N. [IFG - Institute for Scientific Instruments GmbH, Rudower Chaussee 29/31, 12489 Berlin (Germany); Liebel, A. [PNDetector GmbH, Emil-Nolde-Strasse 10, 81735 Muenchen (Germany); Radtke, M.; Reinholz, U.; Riesemeier, H. [BAM Federal Institute for Materials Research and Testing, Richard-Willstaetter-Strasse 11, 12489 Berlin (Germany); and others

    2011-10-21

    We present a new high-resolution X-ray imager based on a pnCCD detector and polycapillary optics. The properties of the pnCCD, such as high quantum efficiency, high energy resolution and radiation hardness, are maintained, while color-corrected polycapillary lenses direct the fluorescence photons from every spot on a sample to a corresponding pixel on the detector. The camera is sensitive to photons from 3 to 40 keV, with a quantum efficiency still of 30% at 20 keV. The pnCCD is operated in split-frame mode, allowing a high frame rate of 400 Hz with an energy resolution of 152 eV for Mn K{alpha} (5.9 keV) at 450 kcps. In single-photon counting (SPC) mode, the time, energy and position of every fluorescence photon are recorded for every frame. Dedicated software enables visualization of the element distributions in real time without post-processing of the data. A description of the key components, including the detector, the X-ray optics and the camera, is given. First experiments show the capability of the camera to perform fast full-field X-ray fluorescence (FF-XRF) element analysis. The imaging performance with a magnifying (3x) optic has also been successfully tested.
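    The SPC element-mapping step described above amounts to binning per-photon (position, energy) events into one count image per energy window. A minimal sketch, with hypothetical event and window data (the function name `element_maps` and the window values are ours, for illustration only):

    ```python
    import numpy as np

    def element_maps(events, windows, shape):
        """Bin single-photon-counting events into one image per element.

        events: iterable of (x, y, energy_keV) tuples extracted from SPC frames.
        windows: dict mapping element name -> (E_low, E_high) window in keV.
        Returns a dict of 2-D count images, one per element.
        """
        maps = {name: np.zeros(shape, dtype=np.int64) for name in windows}
        for x, y, e in events:
            for name, (lo, hi) in windows.items():
                if lo <= e < hi:
                    maps[name][y, x] += 1
        return maps

    # Hypothetical events: two Fe K-alpha photons (~6.4 keV), one Cu (~8.0 keV)
    evts = [(0, 0, 6.40), (0, 0, 6.45), (1, 1, 8.05)]
    wins = {"Fe": (6.2, 6.6), "Cu": (7.9, 8.2)}
    m = element_maps(evts, wins, (2, 2))
    ```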

  10. On-Line Detection of Defects on Fruit by Machinevision Systems Based on Three-Color-Cameras Systems

    Science.gov (United States)

    Xul, Qiaobao; Zou, Xiaobo; Zhao, Jiewen

    How to distinguish apple stem-ends and calyxes from defects remains a challenging problem due to the complexity of the process. It is known that the stem-end and the calyx cannot appear in the same image. Therefore, a method for identifying contaminated apples is developed in this article: if there are two or more doubtful blobs in an apple's image, the apple is contaminated. No complex imaging process or pattern recognition is needed, because the method only counts the blobs (including stem-ends and calyxes) in an apple's image. A machine vision system based on 3 color cameras is presented in this article for the on-line detection of external defects. In this system, the fruits placed on rollers rotate while moving, and each camera placed along the line grabs 3 images of each apple. After the apple is segmented from the black background by a multi-threshold method, defect segmentation and counting are performed on the apple's images. Good separation between normal and contaminated apples was obtained for the three-camera system (94.5%), compared to the one-camera system (63.3%) and the two-camera system (83.7%). The disadvantage of this method is that it cannot distinguish different defect types: defects such as bruising, scab, fungal growth, and disease are all treated the same.
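    The decision rule above reduces to counting connected blobs in a segmented binary image. The sketch below uses a small pure-Python connected-component count as a stand-in for the paper's blob-counting step (a real system would use a computer-vision library); the function names are ours.

    ```python
    import numpy as np

    def count_blobs(mask):
        """Count 4-connected components of True pixels via flood fill."""
        mask = np.asarray(mask, dtype=bool)
        seen = np.zeros_like(mask)
        h, w = mask.shape
        n = 0
        for i in range(h):
            for j in range(w):
                if mask[i, j] and not seen[i, j]:
                    n += 1
                    stack = [(i, j)]
                    seen[i, j] = True
                    while stack:
                        y, x = stack.pop()
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                                seen[ny, nx] = True
                                stack.append((ny, nx))
        return n

    def is_contaminated(mask):
        """The paper's rule: the stem-end and calyx never appear in the same
        view, so two or more blobs in one image imply a defect."""
        return count_blobs(mask) >= 2

    # Toy segmented image with two separate blobs -> flagged as contaminated
    mask = np.zeros((8, 8), dtype=bool)
    mask[1:3, 1:3] = True   # first blob (could be the stem-end)
    mask[5:7, 5:7] = True   # second blob (must then be a defect)
    ```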

  11. A deep learning based fusion of RGB camera information and magnetic localization information for endoscopic capsule robots.

    Science.gov (United States)

    Turan, Mehmet; Shabbir, Jahanzaib; Araujo, Helder; Konukoglu, Ender; Sitti, Metin

    2017-01-01

    A reliable, real-time localization functionality is crucial for actively controlled capsule endoscopy robots, which are an emerging, minimally invasive diagnostic and therapeutic technology for the gastrointestinal (GI) tract. In this study, we extend the success of deep learning approaches from various research fields to the problem of sensor fusion for endoscopic capsule robots. We propose a multi-sensor fusion based localization approach which combines endoscopic camera information and magnetic sensor based localization information. Results on a real pig stomach dataset show that our method achieves sub-millimeter precision for both translational and rotational movements.

  12. Performance of the Tachyon Time-of-Flight PET Camera.

    Science.gov (United States)

    Peng, Q; Choong, W-S; Vu, C; Huber, J S; Janecek, M; Wilson, D; Huesman, R H; Qi, Jinyi; Zhou, Jian; Moses, W W

    2015-02-01

    We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~ 550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 × 25 mm² side of 6.15 × 6.15 × 25 mm³ LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3.
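    The value of timing resolution in TOF PET is that it localizes the annihilation point along the line of response: the positional uncertainty is Δx = c·Δt/2. A small sketch (function name ours) applies this to the Tachyon's 314 ps figure:

    ```python
    C = 299_792_458.0  # speed of light, m/s

    def tof_position_fwhm(timing_fwhm_s):
        """Localization uncertainty along a line of response: dx = c * dt / 2."""
        return C * timing_fwhm_s / 2.0

    dx = tof_position_fwhm(314e-12)   # the Tachyon's coincidence resolution
    print(f"{dx * 100:.1f} cm")       # roughly 4.7 cm along the LOR
    ```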

  13. Applications of nucleoside-based molecular probes for the in vivo assessment of tumour biochemistry using positron emission tomography (PET

    Directory of Open Access Journals (Sweden)

    Leonard I. Wiebe

    2007-05-01

    Full Text Available Positron emission tomography (PET) is a non-invasive nuclear imaging technique. In PET, radiolabelled molecules decay by positron emission. The gamma rays resulting from positron annihilation are detected in coincidence and mapped to produce three dimensional images of radiotracer distribution in the body. Molecular imaging with PET refers to the use of positron-emitting biomolecules that are highly specific substrates for target enzymes, transport proteins or receptor proteins. Molecular imaging with PET produces spatial and temporal maps of the target-related processes. Molecular imaging is an important analytical tool in diagnostic medical imaging, therapy monitoring and the development of new drugs. Molecular imaging has its roots in molecular biology. Originally, molecular biology meant the biology of gene expression, but now molecular biology broadly encompasses the macromolecular biology and biochemistry of proteins, complex carbohydrates and nucleic acids. To date, molecular imaging has focused primarily on proteins, with emphasis on monoclonal antibodies and their derivative forms, small-molecule enzyme substrates and components of cell membranes, including transporters and transmembrane signalling elements. This overview provides an introduction to nucleosides, nucleotides and nucleic acids in the context of molecular imaging.

  14. Study of a new architecture of gamma cameras with Cd/ZnTe/CdTe semiconductors; Etude d'une nouvelle architecture de gamma camera a base de semi-conducteurs CdZnTe /CdTe

    Energy Technology Data Exchange (ETDEWEB)

    Guerin, L

    2007-11-15

    This thesis studies new semiconductor detectors for gamma cameras in order to improve image quality in nuclear medicine. Chapter 1 reviews the general principles of gamma imaging, describing the radiotracers, the detection chain and the acquisition modes of Anger gamma cameras. The physiological, physical and technological limits of the camera are then highlighted, to better identify the needs of future gamma cameras. Chapter 2 is dedicated to a bibliographical study. First, the semiconductors used in gamma imaging are presented, in particular CdTe and CdZnTe, distinguishing planar detectors from monolithic pixelated detectors. Second, the classic collimators of gamma cameras, most of them used in clinical routine, are described: their geometry, their characteristics, and their advantages and drawbacks. Chapter 3 presents a state of the art of the simulation codes dedicated to medical imaging and of the reconstruction methods used in gamma imaging; this introduces the simulation software and the reconstruction methods used in this thesis. Chapter 4 presents the new gamma camera architecture proposed in this work. It is structured in three parts. The first part justifies the use of CdZnTe semiconductor detectors, in particular monolithic pixelated detectors, by highlighting their advantages over scintillator-based detection modules. The second part presents CdZnTe-based gamma cameras (prototypes or commercial products) and their associated collimators, as well as the interest of combining CdZnTe detectors with classic collimators. Finally, the third part presents the HiSens architecture in detail. Chapter 5 describes the two simulation programs used in this thesis to estimate the performance of the HiSens

  15. CCD Camera

    Science.gov (United States)

    Roth, Roger R.

    1983-01-01

    A CCD camera capable of observing a moving object which has varying intensities of radiation eminating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  16. Development of a Compton camera for medical applications based on silicon strip and scintillation detectors

    Energy Technology Data Exchange (ETDEWEB)

    Krimmer, J., E-mail: j.krimmer@ipnl.in2p3.fr [Institut de Physique Nucléaire de Lyon, Université de Lyon, Université Lyon 1, CNRS/IN2P3 UMR 5822, 69622 Villeurbanne cedex (France); Ley, J.-L. [Institut de Physique Nucléaire de Lyon, Université de Lyon, Université Lyon 1, CNRS/IN2P3 UMR 5822, 69622 Villeurbanne cedex (France); Abellan, C.; Cachemiche, J.-P. [Aix-Marseille Université, CNRS/IN2P3, CPPM UMR 7346, 13288 Marseille (France); Caponetto, L.; Chen, X.; Dahoumane, M.; Dauvergne, D. [Institut de Physique Nucléaire de Lyon, Université de Lyon, Université Lyon 1, CNRS/IN2P3 UMR 5822, 69622 Villeurbanne cedex (France); Freud, N. [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA - Lyon, Université Lyon 1, Centre Léon Bérard (France); Joly, B.; Lambert, D.; Lestand, L. [Clermont Université, Université Blaise Pascal, CNRS/IN2P3, Laboratoire de Physique Corpusculaire, BP 10448, F-63000 Clermont-Ferrand (France); Létang, J.M. [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA - Lyon, Université Lyon 1, Centre Léon Bérard (France); Magne, M. [Clermont Université, Université Blaise Pascal, CNRS/IN2P3, Laboratoire de Physique Corpusculaire, BP 10448, F-63000 Clermont-Ferrand (France); and others

    2015-07-01

    A Compton camera is being developed for the purpose of ion-range monitoring during hadrontherapy via the detection of prompt-gamma rays. The system consists of a scintillating fiber beam tagging hodoscope, a stack of double sided silicon strip detectors (90×90×2 mm{sup 3}, 2×64 strips) as scatter detectors, as well as bismuth germanate (BGO) scintillation detectors (38×35×30 mm{sup 3}, 100 blocks) as absorbers. The individual components will be described, together with the status of their characterization.
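    Compton-camera reconstruction rests on the Compton kinematics: from the energy deposited in the scatterer and the energy absorbed downstream, the scattering angle follows from cos θ = 1 − mₑc²(1/E′ − 1/E₀). A minimal sketch of that relation (function name ours; assumes the scattered photon is fully absorbed in the BGO, so E′ equals the absorbed energy):

    ```python
    import math

    ME_C2 = 511.0  # electron rest energy, keV

    def compton_angle_deg(e_scatter_keV, e_absorb_keV):
        """Scattering angle from the two deposited energies via the Compton
        relation cos(theta) = 1 - me*c^2 * (1/E' - 1/E0), with E0 = E1 + E2
        (full absorption of the scattered photon assumed)."""
        e0 = e_scatter_keV + e_absorb_keV      # incident prompt-gamma energy
        cos_t = 1.0 - ME_C2 * (1.0 / e_absorb_keV - 1.0 / e0)
        return math.degrees(math.acos(cos_t))

    # Example event: 200 keV deposited in the silicon stack,
    # 1800 keV absorbed in the BGO -> a small forward-scattering angle
    angle = compton_angle_deg(200.0, 1800.0)
    ```

    Events for which the computed cosine falls outside [−1, 1] are kinematically impossible and would be rejected in practice.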

  17. Evaluation of Moving Object Detection Based on Various Input Noise Using Fixed Camera

    Science.gov (United States)

    Kiaee, N.; Hashemizadeh, E.; Zarrinpanjeh, N.

    2017-09-01

    Detecting and tracking objects in video has been a research area of interest in the fields of image processing and computer vision. This paper evaluates the performance of a novel object detection algorithm for video sequences, which helps establish the advantages of the method in use. The proposed framework compares the percentages of correct and wrong detections of this algorithm. The method was evaluated on data collected in the field of urban transport, including cars and pedestrians recorded with a fixed camera. The results show that the accuracy of the algorithm decreases as image resolution is reduced.

  18. COMPACT CdZnTe-BASED GAMMA CAMERA FOR PROSTATE CANCER IMAGING

    Energy Technology Data Exchange (ETDEWEB)

    CUI, Y.; LALL, T.; TSUI, B.; YU, J.; MAHLER, G.; BOLOTNIKOV, A.; VASKA, P.; DeGERONIMO, G.; O' CONNOR, P.; MEINKEN, G.; JOYAL, J.; BARRETT, J.; CAMARDA, G.; HOSSAIN, A.; KIM, K.H.; YANG, G.; POMPER, M.; CHO, S.; WEISMAN, K.; SEO, Y.; BABICH, J.; LaFRANCE, N.; AND JAMES, R.B.

    2011-10-23

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs, and potentially find cancer tissues at early stages, but their application in diagnosing prostate cancer has been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy-resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages.
The performance tests of this camera

  19. TARGETLESS CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    L. Barazzetti

    2012-09-01

    Full Text Available In photogrammetry a camera is considered calibrated if its interior orientation parameters are known. These encompass the principal distance, the principal point position and some Additional Parameters used to model possible systematic errors. The current state of the art for automated camera calibration relies on the use of coded targets to accurately determine the image correspondences. This paper presents a new methodology for the efficient and rigorous photogrammetric calibration of digital cameras which no longer requires the use of targets. A set of images depicting a scene with good texture is sufficient for the extraction of natural corresponding image points. These are automatically matched with feature-based approaches and robust estimation techniques. A subsequent photogrammetric bundle adjustment retrieves the unknown camera parameters and their theoretical accuracies. Examples, considerations and comparisons with real data from different case studies are illustrated to show the potential of the proposed methodology.

  20. Demonstration of First 9 Micron cutoff 640 x 486 GaAs Based Quantum Well Infrared PhotoDetector (QWIP) Snap-Shot Camera

    Science.gov (United States)

    Gunapala, S.; Bandara, S. V.; Liu, J. K.; Hong, W.; Sundaram, M.; Maker, P. D.; Muller, R. E.

    1997-01-01

    In this paper, we discuss the development of this very sensitive long-wavelength infrared (LWIR) camera based on a GaAs/AlGaAs QWIP focal plane array (FPA) and its performance in quantum efficiency, noise-equivalent temperature difference (NEΔT), uniformity, and operability.

  1. Neutron cameras for ITER

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P. [ITER San Diego Joint Work Site, La Jolla, CA (United States)]; and others

    1998-12-31

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from {sup 16}N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with {sup 16}N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  2. SU-F-T-235: Optical Scan Based Collision Avoidance Using Multiple Stereotactic Cameras During Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Cardan, R; Popple, R; Dobelbower, M; De Los Santos, J; Fiveash, J [The University of Alabama at Birmingham, Birmingham, AL (United States)

    2016-06-15

    Purpose: To demonstrate the ability to quickly generate an accurate collision avoidance map using multiple stereotactic cameras during simulation. Methods: Three Kinect stereotactic cameras were placed in the CT simulation room and optically calibrated to the DICOM isocenter. Immediately before scanning, the patient was optically imaged to generate a 3D polygon mesh, which was used to calculate the collision avoidance area using our previously developed framework. The mesh was visually compared to the CT scan body contour to ensure accurate coordinate alignment. To test the accuracy of the collision calculation, the patient and machine were physically maneuvered in the treatment room to calculated collision boundaries. Results: The optical scan and collision calculation took 38.0 seconds and 2.5 seconds to complete respectively. The collision prediction accuracy was determined using a receiver operating curve (ROC) analysis, where the true positive, true negative, false positive and false negative values were 837, 821, 43, and 79 points respectively. The ROC accuracy was 93.1% over the sampled collision space. Conclusion: We have demonstrated a framework which is fast and accurate for predicting collision avoidance for treatment which can be determined during the normal simulation process. Because of the speed, the system could be used to add a layer of safety with a negligible impact on the normal patient simulation experience. This information could be used during treatment planning to explore the feasible geometries when optimizing plans. Research supported by Varian Medical Systems.
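    The 93.1% figure above is the standard accuracy over the sampled collision space, (TP+TN)/(TP+TN+FP+FN); a one-function Python sketch (function name ours) reproduces it from the reported counts:

    ```python
    def accuracy(tp, tn, fp, fn):
        """Fraction of sampled points where the collision prediction was correct."""
        return (tp + tn) / (tp + tn + fp + fn)

    # Counts reported in the abstract: TP=837, TN=821, FP=43, FN=79
    acc = accuracy(837, 821, 43, 79)
    print(f"{acc:.1%}")   # prints "93.1%"
    ```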

  3. An automatic markerless registration method for neurosurgical robotics based on an optical camera.

    Science.gov (United States)

    Meng, Fanle; Zhai, Fangwen; Zeng, Bowei; Ding, Hui; Wang, Guangzhi

    2017-11-03

    Current markerless registration methods for neurosurgical robotics use the facial surface to match the robot space with the image space, and acquisition of the facial surface usually requires manual interaction and constrains the patient to a supine position. To overcome these drawbacks, we propose a registration method that is automatic and does not constrain patient position. An optical camera attached to the robot end effector captures images around the patient's head from multiple views. Then, high coverage of the head surface is reconstructed from the images through multi-view stereo vision. Since the acquired head surface point cloud contains color information, a specific mark that is manually drawn on the patient's head prior to the capture procedure can be extracted to automatically accomplish coarse registration rather than using facial anatomic landmarks. Then, fine registration is achieved by registering the high coverage of the head surface without relying solely on the facial region, thus eliminating patient position constraints. The head surface was acquired by the camera with good repeatability. The average target registration error of 8 different patient positions measured with targets inside a head phantom was [Formula: see text], while the mean surface registration error was [Formula: see text]. The method proposed in this paper achieves automatic markerless registration in multiple patient positions and guarantees registration accuracy inside the head. This method provides a new approach for establishing the spatial relationship between the image space and the robot space.

  4. Visual Odometry Based on Structural Matching of Local Invariant Features Using Stereo Camera Sensor

    Directory of Open Access Journals (Sweden)

    Antonio Bandera

    2011-07-01

    Full Text Available This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first one between sets of features associated to the images of the stereo pairs and the second one between sets of features associated to consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching sessions are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph which emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields.
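    The mutual-consistency idea can be illustrated with a small sketch: two matches are compatible if the distance between their points is preserved across frames (a rigid motion preserves inter-point distances), and the inlier set is the largest clique of pairwise-compatible matches. The code below uses a greedy clique approximation rather than the paper's exact maximum-weighted-clique search, and all names are ours.

    ```python
    import numpy as np

    def consistent_matches(pts_a, pts_b, matches, tol=0.1):
        """Keep the largest (greedily grown) set of matches whose pairwise
        distances agree in both frames.

        pts_a, pts_b: (N, 2) and (M, 2) point arrays from the two frames.
        matches: list of (i, j) candidate index pairs.
        """
        def compatible(m, n):
            (i1, j1), (i2, j2) = matches[m], matches[n]
            da = np.linalg.norm(pts_a[i1] - pts_a[i2])
            db = np.linalg.norm(pts_b[j1] - pts_b[j2])
            return abs(da - db) < tol

        # Consistency graph: edge (m, n) iff the two matches preserve geometry.
        adj = {m: {n for n in range(len(matches))
                   if n != m and compatible(m, n)} for m in range(len(matches))}
        # Greedy clique: visit matches by degree, add while mutually consistent.
        clique = []
        for m in sorted(adj, key=lambda k: len(adj[k]), reverse=True):
            if all(m in adj[c] for c in clique):
                clique.append(m)
        return [matches[m] for m in clique]

    # Three points shifted by (2, 3) plus one bad correspondence (index 3)
    pts_a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
    pts_b = np.array([[2.0, 3.0], [3.0, 3.0], [2.0, 4.0], [0.0, 0.0]])
    matches = [(0, 0), (1, 1), (2, 2), (3, 3)]
    inliers = consistent_matches(pts_a, pts_b, matches)
    ```

    On this toy input the outlier match (3, 3) is incompatible with every other match and is dropped, leaving the three true correspondences.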

  5. Patient positioning in radiotherapy based on surface imaging using time of flight cameras.

    Science.gov (United States)

    Gilles, M; Fayad, H; Miglierini, P; Clement, J F; Scheib, S; Cozzi, L; Bert, J; Boussion, N; Schick, U; Pradier, O; Visvikis, D

    2016-08-01

    To evaluate the patient positioning accuracy in radiotherapy using a stereo-time of flight (ToF)-camera system. A system using two ToF cameras was used to scan the surface of the patients in order to position them daily on the treatment couch. The obtained point clouds were registered to (a) detect translations applied to the table (intrafraction motion) and (b) predict the displacement to be applied in order to place the patient in its reference position (interfraction motion). The measurements provided by this system were compared to the actually applied translations. The authors analyzed 150 fractions including lung, pelvis/prostate, and head and neck cancer patients. The authors obtained small absolute errors for displacement detection: 0.8 ± 0.7, 0.8 ± 0.7, and 0.7 ± 0.6 mm along the vertical, longitudinal, and lateral axes, respectively, and 0.8 ± 0.7 mm for the total norm displacement. Lung cancer patients presented the largest errors, with respective means of 1.1 ± 0.9, 0.9 ± 0.9, and 0.8 ± 0.7 mm. The proposed stereo-ToF system allows for sufficient accuracy and faster patient repositioning in radiotherapy. Its capability to track the complete patient surface in real time could in the future allow not only accurate positioning but also real-time tracking of any intrafraction patient motion (translational, involuntary, and breathing-related).
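    The translation-detection step can be illustrated in a few lines: when the same surface region is seen in both scans, a pure couch translation appears directly as the difference of the point-cloud centroids. This is a deliberately minimal stand-in for the full surface registration used in the paper; the function name is ours.

    ```python
    import numpy as np

    def detect_translation(reference_cloud, daily_cloud):
        """Estimate a pure couch translation as the centroid difference
        between the daily scan and the reference scan (assumes both scans
        cover the same surface region; real registration is more robust)."""
        return daily_cloud.mean(axis=0) - reference_cloud.mean(axis=0)

    rng = np.random.default_rng(0)
    surface = rng.normal(size=(1000, 3))      # synthetic patient surface, mm
    shift = np.array([2.0, -1.0, 0.5])        # applied couch translation, mm
    estimated = detect_translation(surface, surface + shift)
    ```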

  6. Quality controls for gamma cameras and PET cameras: development of a free open-source ImageJ program

    Science.gov (United States)

    Carlier, Thomas; Ferrer, Ludovic; Berruchon, Jean B.; Cuissard, Regis; Martineau, Adeline; Loonis, Pierre; Couturier, Olivier

    2005-04-01

    Acquisition data and treatments for quality controls of gamma cameras and Positron Emission Tomography (PET) cameras are commonly performed with dedicated program packages, which run only on the manufacturers' computers and differ from each other depending on the camera company and program version. The aim of this work was to develop a free open-source program (written in the JAVA language) to analyze data for quality control of gamma cameras and PET cameras. The program is based on the free application software ImageJ and can be easily loaded on any computer operating system (OS), and thus on any type of computer in every nuclear medicine department. Based on standard quality-control parameters, this program includes (1) for gamma cameras: a rotation center control (extracted from the American Association of Physicists in Medicine, AAPM, norms) and two uniformity controls (extracted from the Institute of Physics and Engineering in Medicine, IPEM, and National Electrical Manufacturers Association, NEMA, norms); (2) for PET systems: three quality controls recently defined by the French Medical Physicist Society (SFPM), i.e. spatial resolution and uniformity in a reconstructed slice, and scatter fraction. The determination of spatial resolution (thanks to the Point Spread Function, PSF, acquisition) allows computation of the Modulation Transfer Function (MTF) for both camera modalities. All the control functions are included in a toolbox which is a free ImageJ plugin and can soon be downloaded from the Internet. In addition, the program offers the possibility to save the uniformity quality-control results in HTML format, and a warning can be set to automatically inform users of abnormal results. The architecture of the program allows users to easily add any other specific quality-control program.
Finally, this toolkit is an easy and robust tool to perform quality control on gamma cameras and PET cameras based on standard computation parameters, is free, and runs on
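    The MTF-from-PSF computation mentioned above is essentially one Fourier transform: the MTF is the normalized magnitude of the PSF's spectrum. A minimal numpy sketch (the function name `mtf_from_psf` is ours, not from the ImageJ plugin) for a 1-D PSF profile:

    ```python
    import numpy as np

    def mtf_from_psf(psf_line):
        """Modulation transfer function as the normalized magnitude of the
        Fourier transform of a 1-D PSF profile."""
        mtf = np.abs(np.fft.rfft(psf_line))
        return mtf / mtf[0]   # unity at zero spatial frequency

    # Gaussian PSF: its MTF is also Gaussian-shaped and falls off monotonically
    x = np.linspace(-10, 10, 201)
    psf = np.exp(-x**2 / (2 * 2.0**2))
    mtf = mtf_from_psf(psf)
    ```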

  7. Positron emission tomography

    CERN Document Server

    Paans, A M J

    2006-01-01

    Positron Emission Tomography (PET) is a method for measuring biochemical and physiological processes in vivo in a quantitative way by using radiopharmaceuticals labelled with positron emitting radionuclides such as 11C, 13N, 15O and 18F and by measuring the annihilation radiation using a coincidence technique. This includes also the measurement of the pharmacokinetics of labelled drugs and the measurement of the effects of drugs on metabolism. Also deviations of normal metabolism can be measured and insight into biological processes responsible for diseases can be obtained. At present the combined PET/CT scanner is the most frequently used scanner for whole-body scanning in the field of oncology.

  8. Macrocyclic diamide ligand systems: potential chelators for 64Cu- and 68Ga-based positron emission tomography imaging agents.

    Science.gov (United States)

    Barnard, Peter J; Holland, Jason P; Bayly, Simon R; Wadas, Thaddeus J; Anderson, Carolyn J; Dilworth, Jonathan R

    2009-08-03

    The N(4)-macrocyclic ligand 2,10-dioxo-1,4,8,11-tetraazabicyclo[11.4.0]1,12-heptadeca-1(12),14,16-triene H(2)L has been synthesized by the [1 + 1] condensation reaction between N,N'-bis(chloroacetyl)-1,2-phenylenediamine and 1,3-propylenediamine. The coordination chemistry of this ligand has been investigated with the metal ions Cu(II), Ni(II), Zn(II), and Ga(III) (complexes 1, 2, 3 and 4, respectively). H(2)L and its metal complexes have been fully characterized by NMR, UV/vis, electron paramagnetic resonance, and elemental analysis where appropriate. The four metal complexes 1-4 have been structurally characterized by X-ray crystallography, which confirmed that in all cases the amide nitrogen atoms are deprotonated and coordinated to the metal center. Complexes 3 and 4 are five-coordinate, with a water molecule and a chloride ion occupying the apical site, respectively. Cyclic voltammetric measurements on complex 1 show that it is oxidized reversibly with a half-wave potential E(1/2) = 0.47 V and reduced irreversibly at E(P) = -1.84 V. Density functional theory calculations reproduce the geometries of the four complexes. The one-electron reduction and oxidation potentials for 1 were calculated using two solvent models, DMF and H(2)O. The calculations indicated that the one-electron oxidation of 1 may involve removal of an electron from the ligand rather than from the metal center, producing a diradical. The diamide macrocycle is of interest for the development of new positron emission tomography (PET) and single photon emission computed tomography (SPECT) imaging agents, and a radiolabeled complex has been synthesized with the positron-emitting isotope (64)Cu. In vivo biodistribution studies of the (64)Cu-labeled complex, (64)Cu-1, in male Lewis rats showed that the activity is cleared rapidly from the blood within 1-2 h post-administration.

  9. Smart pixel camera based signal processing in an interferometric test station for massive parallel inspection of MEMS and MOEMS

    Science.gov (United States)

    Styk, Adam; Lambelet, Patrick; Røyset, Arne; Kujawińska, Małgorzata; Gastinger, Kay

    2010-09-01

    The paper presents the electro-optical design of an interferometric inspection system for massively parallel inspection of Micro(Opto)ElectroMechanical Systems (M(O)EMS). The basic idea is to adapt a micro-optical probing wafer to the M(O)EMS wafer under test. The probing wafer is exchangeable and contains a micro-optical interferometer array: a low-coherence interferometer (LCI) array based on a Mirau configuration and a laser interferometer (LI) array based on a Twyman-Green configuration. The interference signals generated in the micro-optical interferometers are used for M(O)EMS shape and deformation measurements by means of the LCI and for M(O)EMS vibration analysis (resonance frequency and spatial mode distribution) by means of the LI. A distributed array of 5×5 smart-pixel imagers detects the interferometric signals. The signal processing relies on the on-pixel processing capability of the smart-pixel camera array, which can be used for phase shifting, signal demodulation or envelope-maximum determination. Each micro-interferometer image is detected by a 140×146-pixel sub-array distributed in the imaging plane. The paper describes the architecture of the smart-pixel cameras and discusses their application to massively parallel electro-optical detection and data reduction. The full data-processing paths for the laser interferometer and the low-coherence interferometer are presented.
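The phase-shifting demodulation that the abstract assigns to the on-pixel processing can be illustrated with the classical four-step algorithm. This is a minimal sketch, not the camera firmware: it assumes four intensity frames with phase steps of π/2.

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Recover the wrapped phase from four interferograms shifted by pi/2.

    With I_k = A + B*cos(phi + k*pi/2) for k = 0..3:
        I3 - I1 = 2B*sin(phi),  I0 - I2 = 2B*cos(phi),
    so phi = atan2(I3 - I1, I0 - I2).
    """
    return np.arctan2(np.asarray(i3, float) - i1,
                      np.asarray(i0, float) - i2)

# Synthetic check on a known phase ramp (values are made up)
phi_true = np.linspace(-1.0, 1.0, 101)           # radians, inside (-pi, pi)
frames = [5.0 + 2.0 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_est = four_step_phase(*frames)
```

The recovered phase is wrapped to (−π, π]; shape or deformation maps then follow from phase unwrapping, which is outside this sketch.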

  10. Formation of the color image based on the vidicon TV camera

    Science.gov (United States)

    Iureva, Radda A.; Maltseva, Nadezhda K.; Dunaev, Vadim I.

    2016-09-01

    The main goal of nuclear safety is to protect people and the environment from the radiation arising during normal operation of nuclear installations or as a result of accidents at nuclear power plants (NPPs). The most important task in any activity aimed at maintaining an NPP is the constant preservation of the required level of safety and reliability. Periodic non-destructive testing during operation provides the most relevant criteria for the integrity of the components of the primary-circuit pressure boundary. The objective of this study is to develop a system for forming a colour image with a vidicon-based television camera used for non-destructive testing under the elevated-radiation conditions of NPPs.

  11. Performance Evaluation of a Microchannel Plate based X-ray Camera with a Reflecting Grid

    Science.gov (United States)

    Visco, A.; Drake, R. P.; Harding, E. C.; Rathore, G. K.

    2006-10-01

    Microchannel plates (MCPs) are used in a variety of imaging systems as a means of amplifying the incident radiation. Using a microchannel plate mount recently developed at the University of Michigan, the effects of a metal reflecting grid are explored. The reflecting grid creates a potential difference above the MCP input surface that forces ejected electrons back into the pores, which may increase the quantum efficiency of the camera. We investigate the changes in the pulse-height distribution, modulation transfer function, and quantum efficiency of MCPs caused by the introduction of the reflecting grid. Work supported by the Naval Research Laboratory and the National Nuclear Security Administration under the Stewardship Science Academic Alliances program through DOE Research Grants DE-FG52-03NA00064 and DE-FG53-2005-NA26014, and by Livermore National Laboratory.

  12. The findings of F-18 FDG camera-based coincidence PET in acute leukemia

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, S. N.; Joh, C. W.; Lee, M. H. [Ajou University School of Medicine, Suwon (Korea, Republic of)

    2002-07-01

    We evaluated the usefulness of F-18 FDG coincidence PET (CoDe-PET) using a dual-head gamma camera in the assessment of patients with acute leukemia. F-18 FDG CoDe-PET studies were performed in 8 patients with acute leukemia (6 ALL and 2 AML) before or after treatment. CoDe-PET was performed with a dual-head gamma camera equipped with 5/8-inch NaI(Tl) crystals. Image acquisition began 60 minutes after the injection of F-18 FDG in the fasting state. The whole trunk, from the cervical to the inguinal region, or a selected region was scanned. No attenuation correction was made, and images were reconstructed using filtered back-projection. CoDe-PET studies were evaluated visually. F-18 FDG images obtained in the 5 ALL patients studied before therapy depicted multiple lymph-node involvement and diffusely increased uptake in the axial skeleton, pelvis and femurs. F-18 FDG images obtained in the 2 AML patients studied after chemotherapy showed only diffusely increased uptake in the sternum, ribs, spine, pelvis and proximal femurs, which may reflect G-CSF stimulation in view of the drug history; bone-marrow histology, however, showed scattered blast cells suggesting incomplete remission in one patient and complete remission in the other. The F-18 FDG image obtained in 1 ALL patient after therapy showed no abnormal uptake. CoDe-PET with F-18 FDG in acute lymphoblastic leukemia thus demonstrated multiple lymph-node and bone-marrow involvement throughout the body. We therefore conclude that CoDe-PET with F-18 FDG is useful for evaluating disease extent in acute lymphoblastic leukemia, but its ability to assess treatment effectiveness during therapy is limited by reactive bone marrow.

  13. The Theory of Positrons

    Indian Academy of Sciences (India)

    Richard P Feynman. Resonance – Journal of Science Education, Classics, Volume 2, Issue 12, December 1997, pp 107. Permanent link: http://www.ias.ac.in/article/fulltext/reso/002/12/0107-0107

  14. Positron excitation of neon

    Science.gov (United States)

    Parcell, L. A.; Mceachran, R. P.; Stauffer, A. D.

    1990-01-01

    The differential and total cross sections for the excitation of the 3s ¹P₁ and 3p ¹P₁ states of neon by positron impact were calculated using a distorted-wave approximation. The results agree well with experimental conclusions.

  15. A Theoretical Model for Fast Evaluation of Position Linearity and Spatial Resolution in Gamma Cameras Based on Monolithic Scintillators

    Science.gov (United States)

    Galasso, Matteo; Fabbri, Andrea; Borrazzo, Cristian; Cencelli, Valentino Orsolini; Pani, Roberto

    2016-06-01

    In this work, we developed a model able to predict, within a few seconds, the response of a gamma camera based on a continuous scintillator in terms of linearity and spatial resolution over the whole field of view (FoV). This model will be useful during the design phase of a SPECT or PET detector for predicting and optimizing gamma-camera performance by varying the parameter values of its components (scintillator, light guides, and photodetector). Starting from a model of the scintillation-light distribution on the photodetector's sensitive surface, a theoretical analysis based on estimation theory is carried out to find analytical expressions for the bias and FWHM of four interaction-position estimation methods: the classical Center of Gravity method (Anger logic), an enhanced Center of Gravity method, a Mean Square Error fitting method, and the Maximum Likelihood Estimation method. Spatial resolution and depth-of-interaction (DOI) distribution effects are then evaluated by processing the biases and FWHMs at different DOIs. The model was compared with GEANT4 Monte Carlo simulations of four different detection systems. Our model's spatial-resolution prediction errors, in terms of percentage RMSDs with respect to the simulated spatial resolution, are lower than 13.2% over the whole FoV for three of the estimation methods. The computational time to calculate spatial resolutions with the model over the whole FoV is five orders of magnitude shorter than that of an equivalent standard Monte Carlo simulation.
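The classical Center of Gravity estimator (Anger logic) named above can be sketched as follows. This is a toy illustration on a synthetic light distribution; the detector size and spot parameters are made up, not taken from the paper.

```python
import numpy as np

def anger_centroid(light, x_pos, y_pos):
    """Classical Center of Gravity (Anger logic) interaction-position estimate.

    light        : 2-D array of photodetector pixel signals
    x_pos, y_pos : 1-D arrays of pixel coordinates along each axis
    """
    light = np.asarray(light, dtype=float)
    total = light.sum()
    x = (light.sum(axis=0) * x_pos).sum() / total   # weighted mean over columns
    y = (light.sum(axis=1) * y_pos).sum() / total   # weighted mean over rows
    return x, y

# Symmetric light spot centred on (0, 0) of a hypothetical 8x8 photodetector
coords = np.arange(8) - 3.5
xx, yy = np.meshgrid(coords, coords)
spot = np.exp(-(xx**2 + yy**2) / (2 * 1.5**2))
x_est, y_est = anger_centroid(spot, coords, coords)
```

Near the FoV edges the light distribution is truncated, which is exactly the bias the paper's model quantifies; the centroid then no longer coincides with the true interaction position.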

  16. Quantitative Fluorescence Assays Using a Self-Powered Paper-Based Microfluidic Device and a Camera-Equipped Cellular Phone.

    Science.gov (United States)

    Thom, Nicole K; Lewis, Gregory G; Yeung, Kimy; Phillips, Scott T

    2014-01-01

    Fluorescence assays often require specialized equipment and, therefore, are not easily implemented in resource-limited environments. Herein we describe a point-of-care assay strategy in which fluorescence in the visible region is used as a readout, while a camera-equipped cellular phone is used to capture the fluorescent response and quantify the assay. The fluorescence assay is made possible using a paper-based microfluidic device that contains an internal fluidic battery, a surface-mount LED, a 2-mm section of a clear straw as a cuvette, and an appropriately-designed small molecule reagent that transforms from weakly fluorescent to highly fluorescent when exposed to a specific enzyme biomarker. The resulting visible fluorescence is digitized by photographing the assay region using a camera-equipped cellular phone. The digital images are then quantified using image processing software to provide sensitive as well as quantitative results. In a model 30 min assay, the enzyme β-D-galactosidase was measured quantitatively down to 700 pM levels. This Communication describes the design of these types of assays in paper-based microfluidic devices and characterizes the key parameters that affect the sensitivity and reproducibility of the technique.
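The image-quantification step can be sketched generically: the phone photograph's green channel is averaged over the assay region. The function, region format and values below are illustrative, not from the paper.

```python
import numpy as np

def assay_intensity(rgb_image, region):
    """Mean green-channel intensity inside a rectangular assay region.

    rgb_image : H x W x 3 array from the phone camera
    region    : (row0, row1, col0, col1) slice bounds of the assay zone

    A real calibration would also subtract a blank region and convert
    intensity to analyte concentration via a standard curve.
    """
    r0, r1, c0, c1 = region
    green = np.asarray(rgb_image, dtype=float)[r0:r1, c0:c1, 1]
    return float(green.mean())

# Synthetic image: dark background with a fluorescent assay spot
img = np.zeros((64, 64, 3))
img[20:30, 20:30, 1] = 150.0      # green fluorescence in the assay zone
signal = assay_intensity(img, (20, 30, 20, 30))
background = assay_intensity(img, (40, 50, 40, 50))
```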

  17. Trend of digital camera and interchangeable zoom lenses with high ratio based on patent application over the past 10 years

    Science.gov (United States)

    Sensui, Takayuki

    2012-10-01

    Although digitalization has tripled the scale of the consumer-class camera market, extreme price reductions for fixed-lens cameras have reduced profitability. As a result, a number of manufacturers have entered the market for system DSCs, i.e. digital still cameras with interchangeable lenses, where large profit margins are possible, and many high-ratio zoom lenses with image-stabilization functions have been released. Quiet actuators are another indispensable component. A design whose performance degrades little under all types of error is preferred for a good balance of size, lens performance, and the ratio of good to sub-standard products. Sensitivity to decentering of the moving groups, such as that caused by tilting, is especially important. In addition, image-stabilization mechanisms actively shift lens groups, so the development of high-ratio zoom lenses with vibration reduction is confronted by the challenge of reduced performance due to decentering, making control of the decentering sensitivity between lens groups everything. While there are a number of ways to align lenses (axial alignment), shock resistance and the ability to withstand environmental conditions must also be considered. Naturally, it is very difficult, if not impossible, to make lenses smaller and achieve low decentering sensitivity at the same time. A 4-group zoom construction is beneficial for making lenses smaller, but its decentering sensitivity is greater; a 5-group zoom configuration makes smaller lenses more difficult, but it enables lower decentering sensitivities. At Nikon, the most advantageous construction is selected for each lens based on its specifications. The AF-S DX NIKKOR 18-200mm f/3.5-5.6G ED VR II and AF-S NIKKOR 28-300mm f/3.5-5.6G ED VR are excellent examples of this.

  18. Multimodal optical setup based on spectrometer and cameras combination for biological tissue characterization with spatially modulated illumination

    Science.gov (United States)

    Baruch, Daniel; Abookasis, David

    2017-04-01

    The application of optical techniques as tools for biomedical research has generated substantial interest because such methodologies can simultaneously measure biochemical and morphological parameters of tissue. Ongoing optimization of optical techniques may establish such tools as alternatives or complements to conventional methodologies. The approach shared by current optical techniques lies in the independent acquisition of the tissue's optical properties (i.e., the absorption and reduced scattering coefficients) from reflected or transmitted light. These optical parameters, in turn, provide detailed information on both the concentrations of clinically relevant chromophores and macroscopic structural variations in the tissue. We couple a noncontact optical setup with a simple analysis algorithm to obtain the absorption and scattering coefficients of biological samples under test. Technically, a portable pico-projector projects serial sinusoidal patterns at low and high spatial frequencies, while the reflected diffuse light is acquired simultaneously by a single spectrometer and by two separate CCD cameras fitted with different bandpass filters, one at a nonisosbestic and one at an isosbestic wavelength. This configuration fills the gaps in each component's capabilities, acquiring the optical properties of tissue at high spectral and spatial resolution. Experiments were performed on tissue-mimicking phantoms as well as on the hands of healthy human volunteers to quantify their optical properties as a proof of concept for the present technique. In a separate experiment, we derived the optical properties of the hand skin from the measured diffuse reflectance, based on a recently developed camera model. Additionally, tissue oxygen-saturation levels measured by the system agreed well with reference values. Taken together, the present results demonstrate the potential of this integrated setup for diagnostic and

  19. Positron annihilation in boron nitride

    Directory of Open Access Journals (Sweden)

    N.Amrane

    2006-01-01

    Electron and positron charge densities are calculated as a function of position in the unit cell for boron nitride. Wave functions are derived from pseudopotential band-structure calculations for electrons and from the independent particle model (IPM) for positrons. It is observed that the positron density is maximal in the open interstices and is excluded not only from the ion cores but also, to a considerable degree, from the valence bonds. Electron-positron momentum densities are calculated for the (001) and (110) planes. The results are used to analyse positron effects in BN.

  20. Research on detecting heterogeneous fibre from cotton based on linear CCD camera

    Science.gov (United States)

    Zhang, Xian-bin; Cao, Bing; Zhang, Xin-peng; Shi, Wei

    2009-07-01

    Heterogeneous fibres in cotton have a great impact on cotton textile production: they degrade product quality and thereby affect the economic benefit and market competitiveness of a company. Detecting and eliminating heterogeneous fibres is therefore particularly important for improving cotton processing, raising the quality of cotton textiles and reducing production costs, and the technology has favourable market value and development prospects. Optical detection systems are widely used for this purpose. In our system, a linear CCD camera scans the running cotton; the video signals are fed into a computer and processed according to grayscale differences, and if a heterogeneous fibre is found the computer commands a gas nozzle to eliminate it. In this paper we adopt a monochrome LED array as the new detection light source; its flicker, luminous-intensity stability, lumen depreciation and useful life are all superior to those of fluorescent light. We first analyse the reflection spectra of cotton and of various heterogeneous fibres, then select an appropriate light-source frequency, finally adopting a violet LED array as the new detection light source. The overall hardware structure and software design are also introduced in this paper.
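The grayscale-difference step described above can be sketched as a simple per-scan-line threshold test. The background level and threshold below are illustrative placeholders; in practice they would be calibrated against the LED-illuminated cotton flow.

```python
import numpy as np

def detect_foreign_fibre(scan_line, background_gray=200.0, threshold=40.0):
    """Flag pixels of a linear-CCD scan line whose grayscale deviates
    from the cotton background by more than a threshold.

    Returns the indices of suspect pixels; a hit would trigger the
    gas-nozzle ejection downstream.
    """
    line = np.asarray(scan_line, dtype=float)
    mask = np.abs(line - background_gray) > threshold
    return np.flatnonzero(mask)

# Simulated scan line: bright cotton with a dark foreign fibre at pixels 50-52
line = np.full(128, 200.0)
line[50:53] = 90.0
hits = detect_foreign_fibre(line)
```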

  1. Research on Deep Joints and Lode Extension Based on Digital Borehole Camera Technology

    Directory of Open Access Journals (Sweden)

    Han Zengqiang

    2015-09-01

    Structural characteristics of the rock and orebody in deep boreholes are obtained by digital borehole-camera technology. By investigating the joints and fissures in the Shapinggou molybdenum mine, the dominant orientations of joint fissures in the surrounding rock and in the orebody were statistically analysed. Applying metallogenic theory and geostatistics, the relationship between joint fissures and the lode's extension direction is explored. The results indicate that joints in the orebody of the ZK61 borehole have only one dominant orientation, SE126°∠68°, whereas the dominant orientations of joints in the surrounding rock were SE118°∠73°, SW225°∠70°, SE122°∠65° and NE79°∠63°. A preliminary conclusion is that the lode's extension direction is specific and is influenced by the joints of the surrounding rock. Results from the other boreholes generally agree well with those from ZK61, suggesting that the analysis reliably reflects the lode's extension properties and that the conclusion provides an important reference for deep ore prospecting.

  2. The design of visualization telemetry system based on camera module of the commercial smartphone

    Science.gov (United States)

    Wang, Chao; Ye, Zhao; Wu, Bin; Yin, Huan; Cao, Qipeng; Zhu, Jun

    2017-09-01

    Satellite telemetry provides vital indicators for estimating the performance of a satellite. The telemetry data, threshold ranges and variation tendencies collected over the whole operational life of a satellite can guide and inform subsequent satellite designs. The rotating parts of a satellite (e.g. solar arrays, antennas and oscillating mirrors) affect the collection of solar energy and other satellite functions. Visual telemetry (pictures, video) is captured to interpret the status of the satellite qualitatively in real time, as an important supplement for troubleshooting. Mature commercial off-the-shelf (COTS) products have obvious advantages in construction, electronics, interfaces and image processing; considering also weight, power consumption and cost, they can be used directly in our application or adapted through secondary development. In this paper, simulations of the solar-array radiation environment in orbit are presented, and a suitable camera module from a commercial smartphone is selected after precise calculation and a product-selection process. Given the advantages of COTS devices for solving both fundamental and complicated satellite problems, the proposed technique is innovative for future project implementation.

  3. Visible Watermarking Technique Based on Human Visual System for Single Sensor Digital Cameras

    Directory of Open Access Journals (Sweden)

    Hector Santoyo-Garcia

    2017-01-01

    In this paper we propose a visible watermarking algorithm in which a visible watermark is embedded into the Bayer Colour Filter Array (CFA) domain. The Bayer CFA is the most common raw-image representation for images captured by the single-sensor digital cameras found in almost all mobile devices. In the proposed scheme, the captured image is watermarked before it is compressed and stored in the storage system. This enforces the rightful ownership of the watermarked image, since no version of the image other than the watermarked one exists. We also take the Human Visual System (HVS) into consideration, so that the proposed technique provides the desired characteristics of a visible watermarking scheme: the embedded watermark is sufficiently perceptible yet not obtrusive in colour and grey-scale images. Unlike other Bayer-CFA-domain visible watermarking algorithms, in which only binary watermark patterns are supported, the proposed algorithm allows grey-scale and colour images as watermark patterns. It is suitable for advertisement purposes, such as digital libraries and e-commerce, besides copyright protection.

  4. Automatic Human Facial Expression Recognition Based on Integrated Classifier From Monocular Video with Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    An automatic recognition framework for human facial expressions from a monocular video with an uncalibrated camera is proposed. The expression characteristics are first acquired from a deformable template similar to a facial muscle distribution. After regularization, the time sequences of the trait changes in space-time over a complete expression are arranged line by line in a matrix. Next, the matrix dimensionality is reduced by neighborhood-preserving embedding, a manifold-learning method. Finally, the refined matrix containing the expression trait information is recognized by a classifier that integrates a hidden conditional random field (HCRF) and a support vector machine (SVM). In an experiment on the Cohn–Kanade database, the proposed method showed a higher recognition rate than the individual HCRF or SVM methods in direct recognition from two-dimensional face traits. Moreover, the proposed method was more robust than the typical Kotsia method, because it retains more of the structural characteristics of the data to be classified in space-time.

  5. Radiometric cross Calibration of Gaofen-1 WFV Cameras Using Landsat-8 OLI Images: A Simple Image-Based Method

    Directory of Open Access Journals (Sweden)

    Juan Li

    2016-05-01

    WFV (Wide Field of View) cameras on board the Gaofen-1 satellite ("gaofen" means high resolution) provide unparalleled global observations with both high spatial and high temporal resolution. However, the accuracy of their radiometric calibration has remained unknown. Using an improved cross-calibration method, the WFV cameras were re-calibrated with well-calibrated Landsat-8 OLI (Operational Land Imager) data as reference. An objective method was proposed to guarantee the homogeneity and sufficient dynamic coverage of the calibration sites and reference sensor. The USGS spectral library was used to match the most appropriate hyperspectral data, on the basis of which the spectral band differences between WFV and OLI were adjusted. The TOA (top-of-atmosphere) reflectance of the cross-calibrated WFV agreed very well with that of OLI, with mean differences between the two sensors of less than 5% over most of the reflectance range of the four spectral bands after accounting for the spectral band difference between the two sensors. Given the calibration error of 3% for the Landsat-8 OLI TOA reflectance, the uncertainty of the newly calibrated WFV should be within 8%. The newly generated calibration coefficients establish confidence in using Gaofen-1 WFV observations for further quantitative applications, and the simple cross-calibration method proposed here can easily be extended to other operational or planned satellite missions.
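At its core, an image-based cross-calibration of this kind is a linear regression of the target sensor's signal against the band-adjusted reference TOA reflectance over homogeneous sites. A minimal sketch with synthetic values (the numbers and variable names are made up, not from the paper):

```python
import numpy as np

def cross_calibrate(dn_wfv, rho_oli):
    """Least-squares gain/offset so that gain*DN + offset reproduces the
    reference TOA reflectance (after spectral band adjustment)."""
    a = np.vstack([np.asarray(dn_wfv, float),
                   np.ones(len(dn_wfv))]).T
    gain, offset = np.linalg.lstsq(a, np.asarray(rho_oli, float),
                                   rcond=None)[0]
    return gain, offset

# Synthetic homogeneous-site samples generated with a known calibration
dn = np.array([100.0, 200.0, 400.0, 800.0])      # WFV digital numbers
rho = 0.0004 * dn + 0.01                          # matching OLI reflectance
gain, offset = cross_calibrate(dn, rho)
```

In practice the regression is done per spectral band, and the site-selection and dynamic-range screening the abstract describes happen before this fitting step.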

  6. Simulation-based evaluation and optimization of a new CdZnTe gamma-camera architecture (HiSens).

    Science.gov (United States)

    Robert, Charlotte; Montémont, Guillaume; Rebuffel, Véronique; Buvat, Irène; Guérin, Lucie; Verger, Loïck

    2010-05-07

    A new gamma-camera architecture named HiSens is presented and evaluated. It consists of a parallel-hole collimator and a pixelated CdZnTe (CZT) detector associated with specific electronics for 3D localization and dedicated reconstruction algorithms. To gain in efficiency, a high-aperture collimator is used. The spatial resolution is preserved thanks to accurate 3D localization of the interactions inside the detector, based on fine sampling of the CZT detector and on depth-of-interaction information. The performance of this architecture is characterized using Monte Carlo simulations in both planar and tomographic modes. Detective quantum efficiency (DQE) computations are then used to optimize the collimator aperture. In planar mode, the simulations show that the fine CZT detector pixelization increases the system sensitivity by a factor of 2 compared to a standard Anger camera, without loss in spatial resolution. These results are then validated against experimental data. In SPECT, Monte Carlo simulations confirm the merits of the HiSens architecture observed in planar imaging.

  7. Simulation-based evaluation and optimization of a new CdZnTe gamma-camera architecture (HiSens)

    Energy Technology Data Exchange (ETDEWEB)

    Robert, Charlotte; Montemont, Guillaume; Rebuffel, Veronique; Guerin, Lucie; Verger, Loick [CEA, LETI, MINATEC, F38054 Grenoble (France); Buvat, Irene [IMNC-UMR 8165 CNRS, Universites Paris 7 et Paris 11, Bat 104, 91406 Orsay (France)

    2010-05-07

    A new gamma-camera architecture named HiSens is presented and evaluated. It consists of a parallel-hole collimator and a pixelated CdZnTe (CZT) detector associated with specific electronics for 3D localization and dedicated reconstruction algorithms. To gain in efficiency, a high-aperture collimator is used. The spatial resolution is preserved thanks to accurate 3D localization of the interactions inside the detector, based on fine sampling of the CZT detector and on depth-of-interaction information. The performance of this architecture is characterized using Monte Carlo simulations in both planar and tomographic modes. Detective quantum efficiency (DQE) computations are then used to optimize the collimator aperture. In planar mode, the simulations show that the fine CZT detector pixelization increases the system sensitivity by a factor of 2 compared to a standard Anger camera, without loss in spatial resolution. These results are then validated against experimental data. In SPECT, Monte Carlo simulations confirm the merits of the HiSens architecture observed in planar imaging.

  8. Vision-Based Cooperative Pose Estimation for Localization in Multi-Robot Systems Equipped with RGB-D Cameras

    Directory of Open Access Journals (Sweden)

    Xiaoqin Wang

    2014-12-01

    We present a new vision-based cooperative pose-estimation scheme for systems of mobile robots equipped with RGB-D cameras. We first model a multi-robot system as an edge-weighted graph. Then, based on this model and using real-time colour and depth data, robots with shared fields of view estimate their relative poses pairwise. The system does not require a single common view shared by all robots, and it works in 3D scenes without any specific calibration pattern or landmark. The proposed scheme distributes the working load evenly over the system, so it is scalable and the computing power of the participating robots is used efficiently. The performance and robustness were analysed on both synthetic and experimental data in different environments, over a range of system configurations with varying numbers of robots and poses.

  9. FPGA-Based HD Camera System for the Micropositioning of Biomedical Micro-Objects Using a Contactless Micro-Conveyor

    Directory of Open Access Journals (Sweden)

    Elmar Yusifli

    2017-03-01

    With recent advancements, contactless micro-object conveyors are becoming an essential part of the biomedical sector: they help avoid the infection and damage that can occur through external contact. In this context, a smart micro-conveyor is devised. It is a Field Programmable Gate Array (FPGA)-based system that employs a smart surface for conveyance, along with an OmniVision complementary metal-oxide-semiconductor (CMOS) HD camera for micro-object position detection and tracking. A specific FPGA-based hardware design and a Very High Speed Integrated Circuit Hardware Description Language (VHDL) implementation are realized without employing any Nios processor or System-on-a-Programmable-Chip (SOPC) builder-based Central Processing Unit (CPU) core, which keeps the system efficient in terms of resource utilization and power consumption. The micro-object position is captured with an embedded FPGA-based camera driver and communicated to the Image Processing, Decision Making and Command (IPDC) module. The IPDC is programmed in C++ and can run on a Personal Computer (PC) or any appropriate embedded system. The IPDC decisions are sent back to the FPGA, which pilots the smart surface accordingly. In this way, an automated closed-loop system conveys the micro-object towards a desired location. The system architecture and implementation principle are described, and its functionality is verified. The results confirm the proper functioning of the developed system and its outperformance of other solutions.

  10. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  11. Monitoring system for isolated limb perfusion based on a portable gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Orero, A.; Muxi, A.; Rubi, S.; Duch, J. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); Vidal-Sicart, S.; Pons, F. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); Inst. d' Investigacions Biomediques August Pi i Sunyer (IDIBAPS), Barcelona (Spain); Red Tematica de Investigacion Cooperativa en Cancer (RTICC), Barcelona (Spain); Roe, N. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); CIBER de Bioingenieria, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona (Spain); Rull, R. [Servei de Cirurgia, Hospital Clinic, Barcelona (Spain); Pavon, N. [Inst. de Fisica Corpuscular, CSIC - UV, Valencia (Spain); Pavia, J. [Servei de Medicina Nuclear, Hospital Clinic, Barcelona (Spain); Inst. d' Investigacions Biomediques August Pi i Sunyer (IDIBAPS), Barcelona (Spain); CIBER de Bioingenieria, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona (Spain)

    2009-07-01

    Background: The treatment of malignant melanoma (MM) or sarcomas on a limb using extremity perfusion with tumour necrosis factor (TNF-{alpha}) and melphalan can result in a high degree of systemic toxicity if there is any leakage from the isolated blood territory of the limb into the systemic vascular territory. Leakage is currently controlled by using radiotracers and heavy external probes in a procedure that requires continuous manual calculations. The aim of this work was to develop a light, easily transportable system to monitor limb perfusion leakage by controlling systemic blood pool radioactivity with a portable gamma camera adapted for intraoperative use as an external probe, and to initiate its application in the treatment of MM patients. Methods: A special collimator was built for maximal sensitivity. Software for acquisition and data processing in real time was developed. After testing the adequacy of the system, it was used to monitor limb perfusion leakage in 16 patients with malignant melanoma to be treated with perfusion of TNF-{alpha} and melphalan. Results: The field of view of the detector system was 13.8 cm, which is appropriate for the monitoring, since the area to be controlled was the precordial zone. The sensitivity of the system was 257 cps/MBq. When the percentage of leakage reaches 10%, the associated absolute error is {+-}1%. After a mean follow-up period of 12 months, no patients have shown any significant or lasting side-effects. Partial or complete remission of lesions was seen in 9 out of 16 patients (56%) after hyperthermic isolated limb perfusion (HILP) with TNF-{alpha} and melphalan. Conclusion: The detector system together with specially developed software provides a suitable automatic continuous monitoring system of any leakage that may occur during limb perfusion. This technique has been successfully implemented in patients for whom perfusion with TNF-{alpha} and melphalan has been indicated. (orig.)
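
The continuous leakage estimate such software performs can be illustrated in a deliberately simplified form (the clinical protocol also involves calibration doses and radioactive-decay correction, which are omitted here; names and the exact normalisation are assumptions, not the authors' implementation):

```python
# Simplified sketch of a precordial leakage estimate: the rise of the
# systemic (precordial) count rate above its pre-perfusion baseline,
# normalised by the calibrated rate expected if the whole perfusate
# activity leaked, gives the leakage percentage.

def leakage_percent(counts, baseline, full_leak_rate):
    """counts: current precordial count rate (cps)
    baseline: count rate before limb perfusion started (cps)
    full_leak_rate: calibrated rate corresponding to 100% leakage (cps)."""
    return 100.0 * (counts - baseline) / (full_leak_rate - baseline)
```

For example, a rise from a 50 cps baseline to 150 cps, with 1050 cps calibrated as full leakage, would read as 10% leakage.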

  12. Application of slow positrons to coating degradation

    Energy Technology Data Exchange (ETDEWEB)

    Cao, H.; Zhang, R.; Chen, H.M.; Mallon, P.; Huang, C.-M.; He, Y.; Sandreczki, T.C.; Jean, Y.C. E-mail: jeany@umkc.edu; Nielsen, B.; Friessnegg, T.; Suzuki, R.; Ohdaira, T

    2000-06-01

    Photodegradation of a polyurethane-based topcoat induced by accelerated UV irradiation is studied using Doppler broadened energy spectra (DBES) and positron annihilation lifetime (PAL) spectroscopies coupled with the slow positron technique. Significant and similar variations of the S-parameter and ortho-positronium intensity (I{sub 3}) in coatings are observed as functions of depth and of exposure time. The decrease of S is interpreted as a result of an increase of crosslink density and a reduction of free-volume and hole fraction during the degradation process.

  13. Solvated Positron Chemistry. II

    DEFF Research Database (Denmark)

    Mogensen, O. E.

    1979-01-01

    The reaction of the hydrated positron, eaq+, with Cl−, Br−, and I− ions in aqueous solutions was studied by means of positron … The measured angular correlation curves for [Cl−, e+], [Br−, e+], and [I−, e+] bound states were in good agreement with … Because of this agreement and the fact … reactive. The rate constants were 3.9 × 10¹⁰ M⁻¹ s⁻¹, 4.4 × 10¹⁰ M⁻¹ s⁻¹, and 6.3 × 10¹⁰ M⁻¹ s⁻¹ for Cl−, Br−, and I−, respectively, at low (approximately ≤0.03 M) X− concentrations. A 25% decrease in the rate constant caused by the addition of 1 M ethanol to the I− solutions was … The influence … in the Cl− case) at higher concentrations. This saturation and the high-concentration effects in the angular correlation results were interpreted as caused by rather complicated spur effects … It is proposed that spur electrons may pick off the positron from the [X−, e+] bound states with an efficiency which …

  14. Semi-digital hadronic calorimetry based on glass resistive plate chambers for e+e- linear collider experiments; Calorimetría hadrónica con lectura semidigital basada en cámara de planos resistivos de vidrio para experimentos en colisionadores lineales e+e-

    Energy Technology Data Exchange (ETDEWEB)

    Berenguer Antequera, J.

    2015-07-01

    Semi-digital hadronic calorimetry based on glass resistive plate chambers for e+e- linear collider experiments. Electron-positron linear colliders have been proposed as next-generation particle colliders to complement and extend the physics programme of the LHC (Large Hadron Collider) at CERN. Currently, two projects, the ILC (International Linear Collider) and CLIC (Compact LInear Collider), have been put forward by the international community for this purpose. The requirements for a detector at either linear collider are defined by the precision needed to fully exploit the physics potential of these colliders. In particular, one of the most important requirements is an excellent jet energy resolution. This can be achieved with the particle-flow concept, in which the overall detector performance for jet reconstruction is optimised by reconstructing each particle individually. For this reason, the calorimeter system has to have unprecedented granularity, fulfilling the task of shower separation and providing excellent jet energy resolution and background separation. (Author)

  15. Time-Based Readout of a Silicon Photomultiplier (SiPM) for Time of Flight Positron Emission Tomography (TOF-PET)

    CERN Document Server

    Powolny, F; Brunner, S E; Hillemanns, H; Meyer, T; Garutti, E; Williams, M C S; Auffray, E; Shen, W; Goettlich, M; Jarron, P; Schultz-Coulon, H C

    2011-01-01

    Time of flight (TOF) measurements in positron emission tomography (PET) are very challenging in terms of timing performance, and should ideally achieve less than 100 ps FWHM precision. We present a time-based differential technique to read out silicon photomultipliers (SiPMs) which has less than 20 ps FWHM electronic jitter. The novel readout is a fast front-end circuit (NINO) based on a first-stage differential current-mode amplifier with 20 Ω input resistance. The amplifier inputs are therefore connected differentially to the SiPM's anode and cathode ports. The leading edge of the output signal provides the time information, while the trailing edge provides the energy information. Based on a Monte Carlo photon-generation model, HSPICE simulations were run with a 3 x 3 mm(2) SiPM model, read out with a differential current amplifier. The results of these simulations are presented here and compared with experimental data obtained with a 3 x 3 x 15 mm(3) LSO crystal coupled to a SiPM. The measured time coi...
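
The 100 ps target is easier to appreciate through the basic TOF relation: the arrival-time difference of the two 511 keV annihilation photons localises the annihilation point along the line of response (LOR).

```python
# dx = c * dt / 2: offset of the annihilation point from the LOR midpoint
# given the photon arrival-time difference dt. A 100 ps FWHM coincidence
# precision thus corresponds to roughly 1.5 cm FWHM localisation.

C = 299_792_458.0  # speed of light, m/s

def tof_position_offset(dt_seconds):
    """Offset of the annihilation point from the LOR midpoint (metres)."""
    return C * dt_seconds / 2.0
```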

  16. Sensor for In-Motion Continuous 3D Shape Measurement Based on Dual Line-Scan Cameras

    Directory of Open Access Journals (Sweden)

    Bo Sun

    2016-11-01

    The acquisition of three-dimensional surface data plays an increasingly important role in the industrial sector. Numerous 3D shape measurement techniques have been developed. However, there are still limitations and challenges in fast measurement of large-scale objects or high-speed moving objects. The innovative line scan technology opens up new potentialities owing to the ultra-high resolution and line rate. To this end, a sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras is presented. In this paper, the principle and structure of the sensor are investigated. The image matching strategy is addressed and the matching error is analyzed. The sensor has been verified by experiments and high-quality results are obtained.
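
As a rough illustration of the triangulation that underlies any such dual-camera arrangement, per matched scan-line point the generic stereo relation applies (generic relation with hypothetical example values, not the paper's specific sensor model or calibration):

```python
# Depth from stereo disparity: Z = f * B / d, for focal length f (in pixels),
# baseline B between the two cameras (in metres) and matched disparity d
# (in pixels). The matching error analysed in the paper propagates into Z
# through d.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth (metres) of a point matched between the two line-scan images."""
    return f_px * baseline_m / disparity_px
```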

  17. A method for evaluating image quality of monochrome and color displays based on luminance by use of a commercially available color digital camera

    Energy Technology Data Exchange (ETDEWEB)

    Tokurei, Shogo, E-mail: shogo.tokurei@gmail.com, E-mail: junjim@med.kyushu-u.ac.jp [Department of Health Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, Fukuoka 812-8582, Japan and Department of Radiology, Yamaguchi University Hospital, 1-1-1 Minamikogushi, Ube, Yamaguchi 755-8505 (Japan); Morishita, Junji, E-mail: shogo.tokurei@gmail.com, E-mail: junjim@med.kyushu-u.ac.jp [Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka, Fukuoka 812-8582 (Japan)

    2015-08-15

    Purpose: The aim of this study is to propose a method for the quantitative evaluation of image quality of both monochrome and color liquid-crystal displays (LCDs) using a commercially available color digital camera. Methods: The intensities of the unprocessed red (R), green (G), and blue (B) signals of a camera vary depending on the spectral sensitivity of the image sensor used in the camera. For consistent evaluation of image quality for both monochrome and color LCDs, the unprocessed RGB signals of the camera were converted into gray scale signals that corresponded to the luminance of the LCD. Gray scale signals for the monochrome LCD were evaluated by using only the green channel signals of the camera. For the color LCD, the RGB signals of the camera were converted into gray scale signals by employing weighting factors (WFs) for each RGB channel. A line image displayed on the color LCD was simulated on the monochrome LCD by using a software application for subpixel driving in order to verify the WF-based conversion method. Furthermore, the results obtained by different types of commercially available color cameras and a photometric camera were compared to examine the consistency of the authors’ method. Finally, image quality for both the monochrome and color LCDs was assessed by measuring modulation transfer functions (MTFs) and Wiener spectra (WS). Results: The authors’ results demonstrated that the proposed method for calibrating the spectral sensitivity of the camera resulted in a consistent and reliable evaluation of the luminance of monochrome and color LCDs. The MTFs and WS showed different characteristics for the two LCD types owing to difference in the subpixel structure. The MTF in the vertical direction of the color LCD was superior to that of the monochrome LCD, although the WS in the vertical direction of the color LCD was inferior to that of the monochrome LCD as a result of luminance fluctuations in RGB subpixels. Conclusions: The authors
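
The weighted RGB-to-gray conversion at the heart of the method can be sketched as follows. The paper derives the weighting factors from a luminance calibration of the specific camera/LCD pair; the default values below are illustrative placeholders, not the authors' calibrated factors:

```python
# Combine unprocessed RGB channel signals of the camera into a single
# luminance-like gray-scale signal using per-channel weighting factors (WFs).
# Default WFs are placeholders; in the method they come from calibration.

def rgb_to_gray(r, g, b, wf=(0.2, 0.7, 0.1)):
    """Weighted gray-scale signal from unprocessed R, G, B camera signals."""
    wr, wg, wb = wf
    return wr * r + wg * g + wb * b
```

For a monochrome LCD the abstract states that only the green channel is used, which corresponds to wf = (0.0, 1.0, 0.0).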

  18. A method for evaluating image quality of monochrome and color displays based on luminance by use of a commercially available color digital camera.

    Science.gov (United States)

    Tokurei, Shogo; Morishita, Junji

    2015-08-01

    The aim of this study is to propose a method for the quantitative evaluation of image quality of both monochrome and color liquid-crystal displays (LCDs) using a commercially available color digital camera. The intensities of the unprocessed red (R), green (G), and blue (B) signals of a camera vary depending on the spectral sensitivity of the image sensor used in the camera. For consistent evaluation of image quality for both monochrome and color LCDs, the unprocessed RGB signals of the camera were converted into gray scale signals that corresponded to the luminance of the LCD. Gray scale signals for the monochrome LCD were evaluated by using only the green channel signals of the camera. For the color LCD, the RGB signals of the camera were converted into gray scale signals by employing weighting factors (WFs) for each RGB channel. A line image displayed on the color LCD was simulated on the monochrome LCD by using a software application for subpixel driving in order to verify the WF-based conversion method. Furthermore, the results obtained by different types of commercially available color cameras and a photometric camera were compared to examine the consistency of the authors' method. Finally, image quality for both the monochrome and color LCDs was assessed by measuring modulation transfer functions (MTFs) and Wiener spectra (WS). The authors' results demonstrated that the proposed method for calibrating the spectral sensitivity of the camera resulted in a consistent and reliable evaluation of the luminance of monochrome and color LCDs. The MTFs and WS showed different characteristics for the two LCD types owing to difference in the subpixel structure. The MTF in the vertical direction of the color LCD was superior to that of the monochrome LCD, although the WS in the vertical direction of the color LCD was inferior to that of the monochrome LCD as a result of luminance fluctuations in RGB subpixels. The authors' method based on the use of a commercially

  19. Positron studies of polymeric coatings

    Energy Technology Data Exchange (ETDEWEB)

    Jean, Y.C. E-mail: jeany@umkc.edu; Mallon, P.E.; Zhang, R.; Chen Hongmin; Li Ying; Zhang Junjie; Wu, Y.C.; Sandreczki, T.C.; Suzuki, R.; Ohdaira, T.; Gu, X.; Nguyen, T

    2003-11-01

    In complicated coating systems, positrons have shown sensitivity in detecting the early stage of deterioration due to weathering, especially in probing a specific location or depth of coatings from the surface through interfaces and the bulk. Existing extensive experimental positron data show that positron annihilation signals respond quantitatively to the deterioration process due to weathering. Now it is possible to detect the very early stage of coating deterioration at the atomic and molecular scale by using positrons, typically in days as compared to years by conventional methods. This paper summarizes recent positron studies in polymeric coatings. Correlations between positron data and a variety of chemical, physical and engineering data from ESR, AFM, cross-link density, gloss, and cyclic loading are presented.

  20. Development of a Positron Source for JLab at the IAC

    Energy Technology Data Exchange (ETDEWEB)

    Forest, Tony [Idaho State Univ., Pocatello, ID (United States)

    2013-10-12

    We report on the research performed towards the development of a positron source for Jefferson Lab's (JLab) Continuous Electron Beam Accelerator Facility (CEBAF) in Newport News, VA. The first year of work was used to benchmark the predictions of our current simulation with positron production efficiency measurements at the IAC. The second year used the benchmarked simulation to design a beam line configuration which optimized positron production efficiency while minimizing radioactive waste, as well as to design and construct a positron converter target. The final year quantified the performance of the positron source. This joint research and development project brought together the experiences of both electron accelerator facilities. Our intention is to use the project as a springboard towards developing a program of accelerator-based research and education which will train students to meet the needs of both facilities as well as provide a pool of trained scientists.

  1. Quantum resonances in reflection of relativistic electrons and positrons

    Energy Technology Data Exchange (ETDEWEB)

    Eykhorn, Yu.L.; Korotchenko, K.B. [National Research Tomsk Polytechnic University, 30, Lenin Avenue, Tomsk 634050 (Russian Federation); Pivovarov, Yu.L. [National Research Tomsk Polytechnic University, 30, Lenin Avenue, Tomsk 634050 (Russian Federation); Tomsk State University, 36, Lenin Avenue, Tomsk 634050 (Russian Federation); Takabayashi, Y. [SAGA Light Source, 8-7 Yayoigaoka, Tosu, Saga 841-0005 (Japan)

    2015-07-15

    Calculations based on the use of a realistic potential of the system of crystallographic planes confirm earlier results on the existence of resonances in reflection of relativistic electrons and positrons by the crystal surface, if the crystallographic planes are parallel to the surface. The physical reason of the predicted phenomena, similar to the band structure of transverse energy levels, is connected with the Bloch form of the wave functions of electrons (positrons) near the crystallographic planes, which appears both in the case of planar channeling of relativistic electrons (positrons) and in reflection by a crystal surface. Calculations show that positions of maxima in reflection of relativistic electrons and positrons by a crystal surface specifically depend on the angle of incidence with respect to the crystal surface and the relativistic factor of the electrons/positrons. These maxima form Darwin tables similar to those in ultra-cold neutron diffraction.

  2. Formation of a high intensity low energy positron string

    Science.gov (United States)

    Donets, E. D.; Donets, E. E.; Syresin, E. M.; Itahashi, T.; Dubinov, A. E.

    2004-05-01

    The possibility of a high intensity low energy positron beam production is discussed. The proposed Positron String Trap (PST) is based on the principles and technology of the Electron String Ion Source (ESIS) developed in JINR during the last decade. A linear version of ESIS has been used successfully for the production of intense highly charged ion beams of various elements. Now the Tubular Electron String Ion Source (TESIS) concept is under study, and this opens really new promising possibilities in physics and technology. In this report, we discuss the application of the tubular-type trap for the storage of positrons cooled to the cryogenic temperature of 0.05 meV. It is intended that the positron flux at the energy of 1-5 eV, produced by the external source, is injected into the Tubular Positron Trap, which has a similar construction to the TESIS. Then the low energy positrons are captured in the PST Penning trap and are cooled down because of their synchrotron radiation in the strong (5-10 T) applied magnetic field. It is expected that the proposed PST should permit storing up to 5×10⁹ positrons and cooling them to cryogenic temperature. The accumulated cooled positrons can be used further for various physics applications, for example, antihydrogen production.

  3. Application of positrons to the study of thin technological films

    CERN Document Server

    Nathwani, M

    2001-01-01

    Positron Doppler broadening experiments using variable-energy positron beams with positron implantation energy ranges 0-25 keV and 0-30 keV, respectively, have been performed on a selection of thin technological films. By measuring the spectrum of the 511 keV annihilation gamma-ray photopeak, the profile of the Doppler broadening of the photopeak, due to the motion of the annihilating positron-electron pair, can be analysed. Varying the incident positron energy enables the positron to probe a sample at different depths, which makes it possible to study samples by analysing the Doppler broadening of the photopeak as a function of positron depth. The Doppler broadening experiments on gallium nitride films with different crystallographic orientations revealed distortions in the Doppler broadened profile at low energies. The distortions were identified to be a consequence of significant para-positronium annihilation taking place near the sample surface. A parameter based on the proportion of positrons trapped at an...

  4. Applications of positron depth profiling

    Energy Technology Data Exchange (ETDEWEB)

    Hakvoort, R.A.

    1993-12-23

    In this thesis some contributions of the positron-depth profiling technique to materials science have been described. The following studies were carried out: positron-annihilation measurements on neon-implanted steel; void creation in silicon by helium implantation; density of vacancy-type defects present in amorphous silicon prepared by ion implantation; measurements of other types of amorphous silicon; epitaxial cobalt disilicide prepared by cobalt outdiffusion; positron-annihilation experiments on low-pressure CVD silicon-nitride films. (orig./MM).

  5. Laser Created Relativistic Positron Jets

    Energy Technology Data Exchange (ETDEWEB)

    Chen, H; Wilks, S C; Meyerhofer, D D; Bonlie, J; Chen, C D; Chen, S N; Courtois, C; Elberson, L; Gregori, G; Kruer, W; Landoas, O; Mithen, J; Murphy, C; Nilson, P; Price, D; Scheider, M; Shepherd, R; Stoeckl, C; Tabak, M; Tommasini, R; Beiersdorder, P

    2009-10-08

    Electron-positron jets with MeV temperature are thought to be present in a wide variety of astrophysical phenomena such as active galaxies, quasars, gamma ray bursts and black holes. They have now been created in the laboratory in a controlled fashion by irradiating a gold target with an intense picosecond duration laser pulse. About 10{sup 11} MeV positrons are emitted from the rear surface of the target in a 15 to 22-degree cone for a duration comparable to the laser pulse. These positron jets are quasi-monoenergetic (E/{delta}E {approx} 5) with peak energies controllable from 3-19 MeV. They have temperatures from 1-4 MeV in the beam frame in both the longitudinal and transverse directions. Positron production has been studied extensively in recent decades at low energies (sub-MeV) in areas related to surface science, positron emission tomography, basic antimatter science such as antihydrogen experiments, Bose-Einstein condensed positronium, and basic plasma physics. However, the experimental tools to produce very high temperature positrons and high-flux positron jets needed to simulate astrophysical positron conditions have so far been absent. The MeV temperature jets of positrons and electrons produced in our experiments offer a first step to evaluate the physics models used to explain some of the most energetic phenomena in the universe.

  6. Positron Emission Tomography (PET)

    Energy Technology Data Exchange (ETDEWEB)

    Welch, M.J.

    1990-01-01

    Positron emission tomography (PET) assesses biochemical processes in the living subject, producing images of function rather than form. Using PET, physicians are able to obtain not the anatomical information provided by other medical imaging techniques, but pictures of physiological activity. In metaphoric terms, traditional imaging methods supply a map of the body's roadways, its anatomy; PET shows the traffic along those paths, its biochemistry. This document discusses the principles of PET, the radiopharmaceuticals in PET, PET research, clinical applications of PET, the cost of PET, training of individuals for PET, the role of the United States Department of Energy in PET, and the future of PET. 22 figs.

  7. Positron Emission Tomography with Three-Dimensional Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Erlandsson, K.

    1996-10-01

    The development of two different low-cost scanners for positron emission tomography (PET) based on 3D acquisition is presented. The first scanner consists of two rotating scintillation cameras, and produces quantitative images which have been shown to be clinically useful. The second one is a system with two opposed sets of detectors, based on the limited-angle tomography principle, dedicated to mammographic studies. The development of low-cost PET scanners can increase the clinical impact of PET, which is an expensive modality, only available at a few centres world-wide and mainly used as a research tool. A 3D reconstruction method was developed that utilizes all the available data. The size of the data-sets is considerably reduced using the single-slice rebinning approximation. The 3D reconstruction is divided into 1D axial deconvolution and 2D transaxial reconstruction, which makes it relatively fast. This method was developed for the rotating scanner, but was also implemented for multi-ring scanners with and without inter-plane septa. An iterative 3D reconstruction method was developed for the limited-angle scanner, based on the new concept of `mobile pixels`, which reduces the finite pixel errors and leads to an improved signal to noise ratio. 100 refs.
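
The single-slice rebinning approximation mentioned above can be sketched as follows (an illustrative toy version that only counts events per slice; real scanners rebin full oblique sinograms into 2D sinograms):

```python
# Single-slice rebinning (SSRB): an oblique coincidence recorded between
# detector rings i and j is assigned to the direct slice halfway between
# them, collapsing the 3D data set into a stack of 2D data sets.

def ssrb_slice(ring_i, ring_j):
    """Axial slice position (in ring units) assigned to an oblique LOR."""
    return (ring_i + ring_j) / 2.0

def rebin(events, n_rings):
    """events: iterable of (ring_i, ring_j) pairs -> event counts per slice.
    Slices are spaced at half-ring intervals, giving 2*n_rings - 1 slices."""
    slices = [0] * (2 * n_rings - 1)
    for i, j in events:
        slices[int(2 * ssrb_slice(i, j))] += 1
    return slices
```

The approximation is accurate near the scanner axis and degrades with the axial obliqueness of the LOR, which is why it suits scanners with modest axial acceptance.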

  8. Non-invasive seedingless measurements of the flame transfer function using high-speed camera-based laser vibrometry

    Science.gov (United States)

    Gürtler, Johannes; Greiffenhagen, Felix; Woisetschläger, Jakob; Haufe, Daniel; Czarske, Jürgen

    2017-06-01

    The characterization of modern jet engines or stationary gas turbines running with lean combustion by means of swirl-stabilized flames necessitates seedingless optical field measurements of the flame transfer function, i.e. the ratio of the fluctuating heat release rate inside the flame volume, the instationary flow velocity at the combustor outlet and the time average of both quantities. For this reason, a high-speed camera-based laser interferometric vibrometer is proposed for spatio-temporally resolved measurements of the flame transfer function inside a swirl-stabilized technically premixed flame. Each pixel provides line-of-sight measurements of the heat release rate due to the linear coupling to fluctuations of the refractive index along the laser beam, which are based on density fluctuations inside the flame volume. Additionally, field measurements of the instationary flow velocity are possible due to correlation of simultaneously measured pixel signals and the known distance between the measurement positions. Thus, the new system enables the spatially resolved detection of the flame transfer function and instationary flow behavior with a single measurement for the first time. The presented setup offers single-pixel resolution with measurement rates up to 40 kHz at a maximum image resolution of 256 px × 128 px. Based on a comparison with reference measurements using a standard pointwise laser interferometric vibrometer, the new system is validated and a discussion of the measurement uncertainty is presented. Finally, the measurement of refractive index fluctuations inside a flame volume is demonstrated.

  9. Single bunch longitudinal measurements at the Cornell Electron-Positron Storage Ring

    Directory of Open Access Journals (Sweden)

    R. Holtzapple

    2000-03-01

    Measurements of the beam's bunch length in the Cornell Electron-Positron Storage Ring (CESR) have been made using a streak camera. The streak camera uses visible synchrotron radiation produced by the beam to measure its longitudinal distribution. A description of CESR, the experimental setup, the streak camera used, and systematic errors and analysis techniques of the streak camera are described in this paper. The dependence of the bunch distribution on the current and accelerating rf voltage for a single bunch in CESR was measured and compared with a theoretical model of CESR. The CESR vacuum chamber impedance is determined from the measured bunch distributions and is presented in this paper.

  10. On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation

    Science.gov (United States)

    2015-03-01


  11. Software development and its description for Geoid determination based on Spherical-Cap-Harmonics Modelling using digital-zenith camera and gravimetric measurements hybrid data

    Science.gov (United States)

    Morozova, K.; Jaeger, R.; Balodis, J.; Kaminskis, J.

    2017-10-01

    Over several years the Institute of Geodesy and Geoinformatics (GGI) was engaged in the design and development of a digital zenith camera. At the moment the camera developments are finished and tests by field measurements are done. In order to check these data and to use them for geoid model determination, DFHRS (Digital Finite element Height Reference Surface (HRS)) v4.3 software is used. It is based on parametric modelling of the HRS as a continuous polynomial surface. The HRS, providing the local geoid height N, is a necessary geodetic infrastructure for a GNSS-based determination of physical heights H from ellipsoidal GNSS heights h, by H=h-N. The research and this publication deal with the inclusion of the data of observed vertical deflections from the digital zenith camera into the mathematical model of the DFHRS approach and software v4.3. A first target was to test out and validate the mathematical model and software, using additionally real data of the above mentioned zenith camera observations of deflections of the vertical. A second concern of the research was to analyze the results and the improvement of the Latvian quasi-geoid computation compared to the previous version of the HRS computed without zenith-camera-based deflections of the vertical. The further development of the mathematical model and software concerns the use of spherical cap harmonics as the designed carrier function for DFHRS v.5. It enables, in the sense of the strict integrated geodesy approach, holding also for geodetic network adjustment, both a full gravity field and a geoid and quasi-geoid determination. In addition, it allows the inclusion of gravimetric measurements, together with deflections of the vertical from digital zenith cameras, and all other types of observations. The theoretical description of the updated version of the DFHRS software and methods are discussed in this publication.
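
The GNSS-levelling relation H = h - N quoted above, as a one-line helper:

```python
# Physical height H from the ellipsoidal GNSS height h and the geoid /
# quasi-geoid height N delivered by the height reference surface (HRS).

def physical_height(h_ellipsoidal, n_geoid):
    """H = h - N, all values in metres."""
    return h_ellipsoidal - n_geoid
```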

  12. Implementation of an image acquisition and processing system based on FlexRIO, CameraLink and areaDetector

    Energy Technology Data Exchange (ETDEWEB)

    Esquembri, S.; Ruiz, M. [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Barrera, E., E-mail: eduardo.barrera@upm.es [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Sanz, D.; Bustos, A. [Instrumentation and Applied Acoustic Research Group, Technical University of Madrid (UPM), Madrid (Spain); Castro, R.; Vega, J. [National Fusion Laboratory, CIEMAT, Madrid (Spain)

    2016-11-15

    Highlights: • The system presented acquires and processes images from any CameraLink-compliant camera. • The frame grabber, implemented with FlexRIO technology, has image time-stamping and preprocessing capabilities. • The system is integrated into EPICS using areaDetector for flexible configuration of the image acquisition and processing chain. • It is fully compatible with the architecture of the ITER Fast Controllers. - Abstract: Image processing systems are commonly used in current physics experiments, such as nuclear fusion experiments. These experiments usually require multiple cameras with different resolutions, framerates and, frequently, different software drivers. The integration of heterogeneous types of cameras without a unified hardware and software interface increases the complexity of the acquisition system. This paper presents the implementation of a distributed image acquisition and processing system for CameraLink cameras. This system implements a camera frame grabber using Field Programmable Gate Arrays (FPGAs), a reconfigurable hardware platform that allows for image acquisition and real-time preprocessing. The frame grabber is integrated into the Experimental Physics and Industrial Control System (EPICS) using the areaDetector EPICS software module, which offers a common interface shared among tens of cameras to configure the image acquisition and process these images in a distributed control system. The use of areaDetector also allows the image processing to be parallelized and concatenated using: multiple computers; areaDetector plugins; and the areaDetector standard type for data, NDArrays. The architecture developed is fully compatible with ITER Fast Controllers, and the entire system has been validated using a camera hardware simulator that streams videos from fusion experiment databases.

  13. Time-of-flight-assisted Kinect camera-based people detection for intuitive human robot cooperation in the surgical operating room.

    Science.gov (United States)

    Beyl, Tim; Nicolai, Philip; Comparetti, Mirko D; Raczkowsky, Jörg; De Momi, Elena; Wörn, Heinz

    2016-07-01

Scene supervision is a major tool for making medical robots safer and more intuitive. This paper shows an approach to efficiently use 3D cameras within the surgical operating room to enable safe human-robot interaction and action perception. Additionally, the presented approach aims to make 3D camera-based scene supervision more reliable and accurate. A camera system composed of multiple Kinect and time-of-flight cameras has been designed, implemented and calibrated. Calibration, object detection and people tracking methods have been designed and evaluated. The camera system shows a good registration accuracy of 0.05 m. The tracking of humans is reliable and accurate and has been evaluated in an experimental setup using operating clothing. The robot detection shows an error of around 0.04 m. The robustness and accuracy of the approach allow for integration into modern operating rooms. The data output can be used directly for situation and workflow detection as well as collision avoidance.

  14. A fall prediction methodology for elderly based on a depth camera.

    Science.gov (United States)

    Alazrai, Rami; Mowafi, Yaser; Hamad, Eyad

    2015-01-01

With the aging of the population, efficient tracking of elderly activities of daily living (ADLs) has gained interest. Advances in assistive computing and sensor technologies have made it possible to support elderly people through real-time acquisition and monitoring for emergency and medical care. In an earlier study, we proposed an anatomical-plane-based human activity representation for elderly fall detection, namely, the motion-pose geometric descriptor (MPGD). In this paper, we present a prediction framework that utilizes the MPGD to construct an accumulated-histogram-based representation of an ongoing human activity. The accumulated histograms of MPGDs are then used to train a set of support-vector-machine classifiers with probabilistic outputs to predict falls in an ongoing human activity. Evaluation results of the proposed framework, using real case scenarios, demonstrate the efficacy of the framework in accurately predicting elderly falls.

  15. Accelerometer and Camera-Based Strategy for Improved Human Fall Detection

    KAUST Repository

    Zerrouki, Nabil

    2016-10-29

In this paper, we address the problem of detecting human falls using anomaly detection. Detection and classification of falls are based on accelerometric data and variations in human silhouette shape. First, we use the exponentially weighted moving average (EWMA) monitoring scheme to detect a potential fall in the accelerometric data. The EWMA scheme also identifies features that correspond to a particular type of fall, allowing us to classify falls; only features corresponding to detected falls were used in the classification phase. Using a subset of the original data to design classification models minimizes training time and simplifies the models. Based on the features corresponding to detected falls, we used the support vector machine (SVM) algorithm to distinguish between true falls and fall-like events. We applied this strategy to the publicly available fall detection databases from the University of Rzeszów. Results indicated that our strategy accurately detected and classified fall events, suggesting its potential application to early alert mechanisms in fall situations and its capability to classify detected falls. Comparison of the classification results of the EWMA-based SVM classifier with those achieved using three commonly used machine learning classifiers (neural network, K-nearest neighbor and naïve Bayes) showed our model to be superior.
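The EWMA monitoring step described above can be sketched as follows (a minimal illustration, not the authors' implementation; the smoothing factor `lam`, limit width `L`, and the in-control statistics are assumed tuning parameters):

```python
def ewma_alarm(samples, lam=0.2, L=3.0, mu0=0.0, sigma0=1.0):
    """Return indices where the EWMA statistic leaves its control limits.

    samples     : monitored signal (e.g. acceleration magnitude)
    lam         : EWMA smoothing factor, 0 < lam <= 1
    L           : control-limit width in standard deviations
    mu0, sigma0 : in-control mean and standard deviation of the signal
    """
    # asymptotic control-limit half-width of an EWMA chart
    limit = L * sigma0 * (lam / (2.0 - lam)) ** 0.5
    z = mu0
    alarms = []
    for i, x in enumerate(samples):
        z = lam * x + (1.0 - lam) * z   # exponentially weighted average
        if abs(z - mu0) > limit:
            alarms.append(i)            # potential fall at this sample
    return alarms
```

Samples flagged this way would then feed the SVM stage that separates true falls from fall-like events.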

  16. Camera-based ratiometric fluorescence transduction of nucleic acid hybridization with reagentless signal amplification on a paper-based platform using immobilized quantum dots as donors.

    Science.gov (United States)

    Noor, M Omair; Krull, Ulrich J

    2014-10-21

    Paper-based diagnostic assays are gaining increasing popularity for their potential application in resource-limited settings and for point-of-care screening. Achievement of high sensitivity with precision and accuracy can be challenging when using paper substrates. Herein, we implement the red-green-blue color palette of a digital camera for quantitative ratiometric transduction of nucleic acid hybridization on a paper-based platform using immobilized quantum dots (QDs) as donors in fluorescence resonance energy transfer (FRET). A nonenzymatic and reagentless means of signal enhancement for QD-FRET assays on paper substrates is based on the use of dry paper substrates for data acquisition. This approach offered at least a 10-fold higher assay sensitivity and at least a 10-fold lower limit of detection (LOD) as compared to hydrated paper substrates. The surface of paper was modified with imidazole groups to assemble a transduction interface that consisted of immobilized QD-probe oligonucleotide conjugates. Green-emitting QDs (gQDs) served as donors with Cy3 as an acceptor. A hybridization event that brought the Cy3 acceptor dye in close proximity to the surface of immobilized gQDs was responsible for a FRET-sensitized emission from the acceptor dye, which served as an analytical signal. A hand-held UV lamp was used as an excitation source and ratiometric analysis using an iPad camera was possible by a relative intensity analysis of the red (Cy3 photoluminescence (PL)) and green (gQD PL) color channels of the digital camera. For digital imaging using an iPad camera, the LOD of the assay in a sandwich format was 450 fmol with a dynamic range spanning 2 orders of magnitude, while an epifluorescence microscope detection platform offered a LOD of 30 fmol and a dynamic range spanning 3 orders of magnitude. The selectivity of the hybridization assay was demonstrated by detection of a single nucleotide polymorphism at a contrast ratio of 60:1. This work provides an
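The ratiometric transduction amounts to comparing the red (Cy3) and green (gQD) channels of the camera image over one assay spot; a minimal sketch of that computation, with hypothetical pixel values, could look like:

```python
def ratiometric_signal(pixels):
    """R/G intensity ratio over the pixels of one assay spot.

    pixels: iterable of (R, G, B) tuples from the digital camera image.
    Hybridization brings Cy3 close to the gQD donors, so FRET-sensitized
    red emission rises relative to the green donor emission and the
    ratio grows with target concentration.
    """
    pixels = list(pixels)
    mean_red = sum(p[0] for p in pixels) / len(pixels)
    mean_green = sum(p[1] for p in pixels) / len(pixels)
    return mean_red / mean_green
```

Being a ratio of two channels of the same exposure, the read-out is insensitive to overall illumination level, which is what makes it robust on paper substrates.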

  17. Cyclotrons and positron emitting radiopharmaceuticals

    Energy Technology Data Exchange (ETDEWEB)

    Wolf, A.P.; Fowler, J.S.

    1984-01-01

    The state of the art of Positron Emission Tomography (PET) technology as related to cyclotron use and radiopharmaceutical production is reviewed. The paper discusses available small cyclotrons, the positron emitters which can be produced and the yields possible, target design, and radiopharmaceutical development and application. 97 refs., 12 tabs. (ACR)

  18. Atomic collisions involving pulsed positrons

    DEFF Research Database (Denmark)

    Merrison, J. P.; Bluhme, H.; Field, D.

    2000-01-01

instantaneous intensities be achieved with in-beam accumulation, but more importantly many orders of magnitude improvement in energy and spatial resolution can be achieved using positron cooling. Atomic collisions can be studied on a new energy scale with unprecedented precision and control. The use...... of accelerators for producing intense positron pulses will be discussed in the context of atomic physics experiments....

  19. A cooled CCD camera-based protocol provides an effective solution for in vitro monitoring of luciferase.

    Science.gov (United States)

    Afshari, Amirali; Uhde-Stone, Claudia; Lu, Biao

    2015-03-13

The luciferase assay has become an increasingly important technique for monitoring a wide range of biological processes. However, the mainstay protocols require a luminometer to acquire and process the data, limiting their application to specialized research labs. To overcome this limitation, we have developed an alternative protocol that utilizes a commonly available cooled charge-coupled device (CCCD) instead of a luminometer for data acquisition and processing. By measuring activities of different luciferases, we characterized their substrate specificity, assay linearity, signal-to-noise levels, and fold-changes via CCCD. Next, we defined the assay parameters that are critical for appropriate use of the CCCD for different luciferases. To demonstrate the usefulness in cultured mammalian cells, we conducted a case study to examine NFκB gene activation in response to inflammatory signals in human embryonic kidney cells (HEK293 cells). We found that data collected by the CCCD camera were equivalent to those acquired by a luminometer, thus validating the assay protocol. In comparison, the CCCD-based protocol is readily amenable to live-cell and high-throughput applications, offering fast simultaneous data acquisition and visual and quantitative data presentation. In conclusion, the CCCD-based protocol provides a useful alternative for monitoring luciferase reporters. The wide availability of CCCDs will enable more researchers to use luciferases to monitor and quantify biological processes. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. TEQUILA: NIR camera/spectrograph based on a Rockwell 1024x1024 HgCdTe FPA

    Science.gov (United States)

    Ruiz, Elfego; Sohn, Erika; Cruz-Gonzales, Irene; Salas, Luis; Parraga, Antonio; Perez, Manuel; Torres, Roberto; Cobos Duenas, Francisco J.; Gonzalez, Gaston; Langarica, Rosalia; Tejada, Carlos; Sanchez, Beatriz; Iriarte, Arturo; Valdez, J.; Gutierrez, Leonel; Lazo, Francisco; Angeles, Fernando

    1998-08-01

We describe the configuration and operation modes of the IR camera/spectrograph TEQUILA, based on a 1024 x 1024 HgCdTe FPA. The optical system will allow three possible modes of operation: direct imaging, low- and medium-resolution spectroscopy, and polarimetry. The basic system is being designed to consist of the following: 1) An LN2 dewar that houses the FPA together with the preamplifiers and a 24-position filter cylinder. 2) Control and readout electronics based on DSP modules linked to a workstation through fiber optics. 3) An opto-mechanical assembly cooled to -30 degrees that provides efficient operation of the instrument in its various modes. 4) A control module for the moving parts of the instrument. The opto-mechanical assembly will have the necessary provision to install a scanning Fabry-Perot interferometer and an adaptive optics correction system. The final image acquisition and control of the whole instrument is carried out on a workstation to provide the observer with a friendly environment. The system will operate at the 2.1 m telescope at the Observatorio Astronomico Nacional in San Pedro Martir, B.C. (Mexico), and is intended to be a first-light instrument for the new 7.8 m Mexican IR-Optical Telescope.

  1. Fuzzy logic-based approach to wavelet denoising of 3D images produced by time-of-flight cameras.

    Science.gov (United States)

    Jovanov, Ljubomir; Pižurica, Aleksandra; Philips, Wilfried

    2010-10-25

In this paper we present a new denoising method for the depth images of a 3D imaging sensor based on the time-of-flight principle. We propose novel ways to use luminance-like information produced by a time-of-flight camera along with the depth images. Firstly, we propose a wavelet-based method for estimating the noise level in depth images using luminance information. The underlying idea is that luminance carries information about the power of the optical signal reflected from the scene and is hence related to the signal-to-noise ratio for every pixel within the depth image. In this way, we can efficiently solve the difficult problem of estimating the non-stationary noise within the depth images. Secondly, we use luminance information to better restore object boundaries masked with noise in the depth images. Information from the luminance images is introduced into the estimation formula through the use of fuzzy membership functions. In particular, we take the correlation between the measured depth and luminance into account, and the fact that edges (object boundaries) present in the depth image are likely to occur in the luminance image as well. The results on real 3D images show a significant improvement over the state-of-the-art in the field.
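The key idea, that luminance tracks the reflected optical power and hence the per-pixel signal-to-noise ratio, can be illustrated with a toy model (the inverse-square-root form and the constant `k` are assumptions for illustration, not the paper's wavelet-domain estimator):

```python
import math

def depth_noise_sigma(luminance, k=50.0, eps=1e-6):
    """Toy per-pixel depth-noise estimate for a time-of-flight camera.

    Brighter pixels receive more optical power, so their depth values
    are assumed less noisy: sigma ~ k / sqrt(luminance).
    k is a hypothetical camera-dependent calibration constant.
    """
    return k / math.sqrt(max(luminance, eps))
```

A denoiser can then shrink wavelet coefficients more aggressively in dark regions, where this estimate is large, which is the spirit of the luminance-guided approach.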

  2. A Study of Deep CNN-Based Classification of Open and Closed Eyes Using a Visible Light Camera Sensor

    Directory of Open Access Journals (Sweden)

    Ki Wan Kim

    2017-06-01

The necessity for the classification of open and closed eyes is increasing in various fields, including analysis of eye fatigue in 3D TVs, analysis of the psychological states of test subjects, and eye status tracking-based driver drowsiness detection. Previous studies have used various methods to distinguish between open and closed eyes, such as classifiers based on the features obtained from image binarization, edge operators, or texture analysis. However, when it comes to eye images with different lighting conditions and resolutions, it can be difficult to find an optimal threshold for image binarization or optimal filters for edge and texture extraction. In order to address this issue, we propose a method to classify open and closed eye images with different conditions, acquired by a visible light camera, using a deep residual convolutional neural network. After conducting performance analysis on both self-collected and open databases, we have determined that the classification accuracy of the proposed method is superior to that of existing methods.

  3. Positron Emission Tomography of the Heart

    Science.gov (United States)

    Schelbert, H. R.; Phelps, M. E.; Kuhl, D. E.

    1979-01-01

    Positron emission computed tomography (PCT) represents an important new tool for the noninvasive evaluation and, more importantly, quantification of myocardial performance. Most currently available techniques permit assessment of only one aspect of cardiac function, i.e., myocardial perfusion by gamma scintillation camera imaging with Thallium-201 or left ventricular function by echocardiography or radionuclide angiocardiography. With PCT it may become possible to study all three major segments of myocardial performance, i.e., regional blood flow, mechanical function and, most importantly, myocardial metabolism. Each of these segments can either be evaluated separately or in combination. This report briefly describes the principles and technological advantages of the imaging device, reviews currently available radioactive tracers and how they can be employed for the assessment of flow, function and metabolism; and, lastly, discusses possible applications of PCT for the study of cardiac physiology or its potential role in the diagnosis of cardiac disease.

  4. Full-color stereoscopic single-pixel camera based on DMD technology

    Science.gov (United States)

    Salvador-Balaguer, Eva; Clemente, Pere; Tajahuerce, Enrique; Pla, Filiberto; Lancis, Jesús

    2017-02-01

Imaging systems based on microstructured illumination and single-pixel detection offer several advantages over conventional imaging techniques. They are an effective method for imaging through scattering media, even in the dynamic case; they work efficiently under low light levels; and the simplicity of the detector makes it easy to design imaging systems that work outside the visible spectrum and acquire multidimensional information. In particular, several approaches have been proposed to record 3D information. The technique is based on sampling the object with a sequence of microstructured light patterns codified onto a programmable spatial light modulator while the light intensity is measured with a single-pixel detector. The image is retrieved computationally from the photocurrent fluctuations provided by the detector. In this contribution we describe an optical system able to produce full-color stereoscopic images using few and simple optoelectronic components. In our setup we use an off-the-shelf digital light projector (DLP) based on a digital micromirror device (DMD) to generate the light patterns. To capture the color of the scene we take advantage of the codification procedure used by the DLP for color video projection. To record stereoscopic views we use a 90° beam splitter and two mirrors, allowing us to project the patterns from two different viewpoints. By using a single monochromatic photodiode we obtain a pair of color images that can be used as input to a 3-D display. To reduce the time needed to project the patterns we use a compressive sampling algorithm. Experimental results are shown.
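The computational retrieval from photocurrent fluctuations can be sketched with a basic differential (ghost-imaging style) correlation, leaving out the compressive-sampling step; the pattern set and scene below are hypothetical:

```python
def single_pixel_reconstruct(patterns, measurements):
    """Correlation-based single-pixel image reconstruction (sketch).

    patterns     : list of flattened binary illumination patterns
    measurements : photodiode reading recorded for each pattern
    Each pattern is weighted by its measurement's deviation from the
    mean, so pixels belonging to bright scene regions accumulate weight.
    """
    n = len(measurements)
    mean_s = sum(measurements) / n
    image = [0.0] * len(patterns[0])
    for pattern, s in zip(patterns, measurements):
        weight = s - mean_s
        for i, p in enumerate(pattern):
            image[i] += weight * p / n
    return image
```

In practice the patterns come from the DMD and a compressive-sampling solver replaces this plain correlation to cut the number of projected patterns.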

  5. Performance of compact ICU (intensified camera unit) with autogating based on video signal

    Science.gov (United States)

    de Groot, Arjan; Linotte, Peter; van Veen, Django; de Witte, Martijn; Laurent, Nicolas; Hiddema, Arend; Lalkens, Fred; van Spijker, Jan

    2007-10-01

    High quality night vision digital video is nowadays required for many observation, surveillance and targeting applications, including several of the current soldier modernization programs. We present the performance increase that is obtained when combining a state-of-the-art image intensifier with a low power consumption CMOS image sensor. Based on the content of the video signal, the gating and gain of the image intensifier are optimized for best SNR. The options of the interface with a separate laser in the application for range gated imaging are discussed.

  6. TestDose: A nuclear medicine software based on Monte Carlo modeling for generating gamma camera acquisitions and dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, Marie-Paule, E-mail: marie-paule.garcia@univ-brest.fr; Villoing, Daphnée [UMR 1037 INSERM/UPS, CRCT, 133 Route de Narbonne, 31062 Toulouse (France); McKay, Erin [St George Hospital, Gray Street, Kogarah, New South Wales 2217 (Australia); Ferrer, Ludovic [ICO René Gauducheau, Boulevard Jacques Monod, St Herblain 44805 (France); Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila [European Institute of Oncology, Via Ripamonti 435, Milano 20141 (Italy); Bardiès, Manuel [UMR 1037 INSERM/UPS, CRCT, 133 Route de Narbonne, Toulouse 31062 (France)

    2015-12-15

    computation performed on the ICRP 110 model is also presented. Conclusions: The proposed platform offers a generic framework to implement any scintigraphic imaging protocols and voxel/organ-based dosimetry computation. Thanks to the modular nature of TestDose, other imaging modalities could be supported in the future such as positron emission tomography.

  7. PSD Camera Based Position and Posture Control of Redundant Robot Considering Contact Motion

    Science.gov (United States)

    Oda, Naoki; Kotani, Kentaro

The paper describes a position and posture controller design for a redundant robot manipulator based on the absolute position measured by an external PSD vision sensor. The redundancy provides the potential capability to avoid obstacles while continuing given end-effector jobs when a middle link of the manipulator is in contact. Under contact motion, the deformation due to joint torsion, obtained by comparing the internal and external position sensors, is actively suppressed by an internal/external position hybrid controller. The selection matrix of the hybrid loop is given as a function of the deformation, and the detected deformation is also utilized in the compliant motion controller for passive obstacle avoidance. The validity of the proposed method is verified by several experimental results on a 3-link planar redundant manipulator.
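The internal/external blending can be illustrated with a one-dimensional stand-in for the paper's selection matrix (the deformation scale `d0` and the linear weighting are assumptions for illustration, not the authors' control law):

```python
def hybrid_position_feedback(x_internal, x_external, d0=0.005):
    """Blend joint-encoder and PSD-camera position feedback (1-D sketch).

    The deformation estimate is the disagreement between the two
    sensors; as it grows, the selection weight shifts toward the
    drift-free external measurement. d0 is a hypothetical deformation
    scale in metres. Returns the blended position and the deformation.
    """
    deformation = x_external - x_internal
    s = min(1.0, abs(deformation) / d0)        # selection weight in [0, 1]
    blended = (1.0 - s) * x_internal + s * x_external
    return blended, deformation
```

With no joint torsion the two sensors agree and the internal encoder dominates; under contact, the detected deformation pushes the loop toward the external PSD measurement.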

  8. Radiometric cross Calibration of Gaofen-1 WFV Cameras Using Landsat-8 OLI Images: A Simple Image-Based Method

    OpenAIRE

    Juan Li; Lian Feng; Xiaoping Pang; Weishu Gong; Xi Zhao

    2016-01-01

    WFV (Wide Field of View) cameras on-board Gaofen-1 satellite (gaofen means high resolution) provide unparalleled global observations with both high spatial and high temporal resolutions. However, the accuracy of the radiometric calibration remains unknown. Using an improved cross calibration method, the WFV cameras were re-calibrated with well-calibrated Landsat-8 OLI (Operational Land Imager) data as reference. An objective method was proposed to guarantee the homogeneity and sufficient dyna...

  9. Positron Emission Tomography/Computed Tomography Imaging of Residual Skull Base Chordoma Before Radiotherapy Using Fluoromisonidazole and Fluorodeoxyglucose: Potential Consequences for Dose Painting

    Energy Technology Data Exchange (ETDEWEB)

    Mammar, Hamid, E-mail: hamid.mammar@unice.fr [Radiation Oncology Department, Antoine Lacassagne Center, Nice (France); CNRS-UMR 6543, Institute of Developmental Biology and Cancer, University of Nice Sophia Antipolis, Nice (France); Kerrou, Khaldoun; Nataf, Valerie [Department of Nuclear Medicine and Radiopharmacy, Tenon Hospital, and University Pierre et Marie Curie, Paris (France); Pontvert, Dominique [Proton Therapy Center of Orsay, Curie Institute, Paris (France); Clemenceau, Stephane [Department of Neurosurgery, Pitie-Salpetriere Hospital, Paris (France); Lot, Guillaume [Department of Neurosurgery, Adolph De Rothschild Foundation, Paris (France); George, Bernard [Department of Neurosurgery, Lariboisiere Hospital, Paris (France); Polivka, Marc [Department of Pathology, Lariboisiere Hospital, Paris (France); Mokhtari, Karima [Department of Pathology, Pitie-Salpetriere Hospital, Paris (France); Ferrand, Regis; Feuvret, Loiec; Habrand, Jean-louis [Proton Therapy Center of Orsay, Curie Institute, Paris (France); Pouyssegur, Jacques; Mazure, Nathalie [CNRS-UMR 6543, Institute of Developmental Biology and Cancer, University of Nice Sophia Antipolis, Nice (France); Talbot, Jean-Noeel [Department of Nuclear Medicine and Radiopharmacy, Tenon Hospital, and University Pierre et Marie Curie, Paris (France)

    2012-11-01

Purpose: To detect the presence of hypoxic tissue, which is known to increase the radioresistant phenotype, by its uptake of fluoromisonidazole (18F) (FMISO) using hybrid positron emission tomography/computed tomography (PET/CT) imaging, and to compare it with the glucose-avid tumor tissue imaged with fluorodeoxyglucose (18F) (FDG), in residual postsurgical skull base chordoma scheduled for radiotherapy. Patients and Methods: Seven patients with incompletely resected skull base chordomas were planned for high-dose radiotherapy (dose ≥70 Gy). All 7 patients underwent FDG and FMISO PET/CT. Images were analyzed qualitatively by visual examination and semiquantitatively by computing the ratio of the maximal standardized uptake value (SUVmax) of the tumor and cerebellum (T/C R), with delineation of lesions on conventional imaging. Results: Of the eight lesion sites imaged with FDG PET/CT, only one was visible, whereas seven of nine lesions were visible on FMISO PET/CT. The median SUVmax in the tumor area was 2.8 g/mL (minimum 2.1; maximum 3.5) for FDG and 0.83 g/mL (minimum 0.3; maximum 1.2) for FMISO. The T/C R values ranged between 0.30 and 0.63 for FDG (median, 0.41) and between 0.75 and 2.20 for FMISO (median, 1.59). An FMISO T/C R >1 in six lesions suggested the presence of hypoxic tissue. There was no correlation between FMISO and FDG uptake in individual chordomas (r = 0.18, p = 0.7). Conclusion: FMISO PET/CT enables imaging of the hypoxic component in residual chordomas. In the future, it could help to better define boosted volumes for irradiation and to overcome the radioresistance of these lesions. No relationship was found between hypoxia and glucose metabolism in these tumors after initial surgery.
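The semiquantitative read-out reduces to a simple ratio and threshold; using the study's reported median values as example inputs:

```python
def t_c_ratio(suvmax_tumor, suvmax_cerebellum):
    """Tumor-to-cerebellum SUVmax ratio (T/C R)."""
    return suvmax_tumor / suvmax_cerebellum

def suggests_hypoxia(fmiso_t_c_ratio, threshold=1.0):
    """An FMISO T/C ratio above 1 is read as a sign of hypoxic tissue."""
    return fmiso_t_c_ratio > threshold
```

With the medians quoted above, the FMISO ratio of 1.59 clears the threshold while the FDG median of 0.41 does not, consistent with the lack of correlation between the two tracers.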

  10. Heads up and camera down: a vision-based tracking modality for mobile mixed reality.

    Science.gov (United States)

    DiVerdi, Stephen; Höllerer, Tobias

    2008-01-01

    Anywhere Augmentation pursues the goal of lowering the initial investment of time and money necessary to participate in mixed reality work, bridging the gap between researchers in the field and regular computer users. Our paper contributes to this goal by introducing the GroundCam, a cheap tracking modality with no significant setup necessary. By itself, the GroundCam provides high frequency, high resolution relative position information similar to an inertial navigation system, but with significantly less drift. We present the design and implementation of the GroundCam, analyze the impact of several design and run-time factors on tracking accuracy, and consider the implications of extending our GroundCam to different hardware configurations. Motivated by the performance analysis, we developed a hybrid tracker that couples the GroundCam with a wide area tracking modality via a complementary Kalman filter, resulting in a powerful base for indoor and outdoor mobile mixed reality work. To conclude, the performance of the hybrid tracker and its utility within mixed reality applications is discussed.
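The hybrid idea of fusing a drift-prone high-rate relative tracker with a drift-free wide-area one can be shown with a scalar complementary-filter step (a toy stand-in for the paper's complementary Kalman filter; `alpha` is an assumed blending constant):

```python
def complementary_fuse(prev_est, rel_delta, abs_pos, alpha=0.98):
    """One scalar position-fusion step.

    prev_est  : previous fused position estimate
    rel_delta : displacement since the last step from the relative
                tracker (GroundCam-like: high rate, slowly drifting)
    abs_pos   : current fix from the wide-area tracker (drift-free,
                but lower rate and noisier)
    """
    predicted = prev_est + rel_delta                    # dead-reckoned update
    return alpha * predicted + (1.0 - alpha) * abs_pos  # pull toward the fix
```

High-frequency motion passes through the relative term while the absolute fix slowly bleeds in, bounding the accumulated drift.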

  11. Optimization-based non-cooperative spacecraft pose estimation using stereo cameras during proximity operations.

    Science.gov (United States)

    Zhang, Limin; Zhu, Feng; Hao, Yingming; Pan, Wang

    2017-05-20

Pose estimation for spacecraft is widely recognized as an important technology for space applications. Many space missions require an accurate relative pose between the chaser and the target spacecraft. Stereo vision is a common means of estimating the pose of non-cooperative targets during proximity operations. However, the uncertainty of stereo-vision measurement is still an outstanding issue that needs to be solved. Using the binocular structure and the geometric structure of the object, we present a robust pose estimation method for non-cooperative spacecraft. Because the solar panel provides strict geometric constraints, our approach takes its corner points as features. After stereo matching, an optimization-based method is proposed to estimate the relative pose between the two spacecraft. Simulation results show that our method improves the precision and robustness of pose estimation, with a maximum 3D localization error of less than 5% and a relative rotation angle error of less than 1°. Our laboratory experiments further validate the method.
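The stereo-vision measurement underlying such systems rests on pinhole triangulation of matched features; as a minimal reminder of that geometry (the numbers in the test are hypothetical):

```python
def triangulate_depth(focal_px, baseline_m, disparity_px):
    """Depth of a matched feature from a rectified stereo pair: Z = f * B / d.

    focal_px     : focal length expressed in pixels
    baseline_m   : distance between the two camera centres, in metres
    disparity_px : horizontal pixel shift of the feature between views
    """
    return focal_px * baseline_m / disparity_px
```

Because depth error grows quadratically with range for a fixed baseline, the corner features are then refined jointly in an optimization over the full pose rather than triangulated independently.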

  12. Positron Emission Tomography Based Elucidation of the Enhanced Permeability and Retention Effect in Dogs with Cancer Using Copper-64 Liposomes

    DEFF Research Database (Denmark)

    Hansen, Anders Elias; Petersen, Anncatrine Luisa; Henriksen, Jonas Rosager

    2015-01-01

    included carcinomas displayed high uptake levels of liposomes, whereas one of four sarcomas displayed signs of liposome retention. We conclude that nanocarrier-radiotracers could be important in identifying cancer patients that will benefit from nanocarrier-based therapeutics in clinical practice....

  13. Characterization of a transmission positron/positronium converter for antihydrogen production

    Science.gov (United States)

    Aghion, S.; Amsler, C.; Ariga, T.; Bonomi, G.; Brusa, R. S.; Caccia, M.; Caravita, R.; Castelli, F.; Cerchiari, G.; Comparat, D.; Consolati, G.; Demetrio, A.; Di Noto, L.; Doser, M.; Ereditato, A.; Evans, C.; Ferragut, R.; Fesel, J.; Fontana, A.; Gerber, S.; Giammarchi, M.; Gligorova, A.; Guatieri, F.; Haider, S.; Hinterberger, A.; Holmestad, H.; Kellerbauer, A.; Krasnický, D.; Lagomarsino, V.; Lansonneur, P.; Lebrun, P.; Malbrunot, C.; Mariazzi, S.; Matveev, V.; Mazzotta, Z.; Müller, S. R.; Nebbia, G.; Nedelec, P.; Oberthaler, M.; Pacifico, N.; Pagano, D.; Penasa, L.; Petracek, V.; Povolo, L.; Prelz, F.; Prevedelli, M.; Ravelli, L.; Resch, L.; Rienäcker, B.; Robert, J.; Røhne, O. M.; Rotondi, A.; Sacerdoti, M.; Sandaker, H.; Santoro, R.; Scampoli, P.; Simon, M.; Smestad, L.; Sorrentino, F.; Testera, G.; Tietje, I. C.; Widmann, E.; Yzombard, P.; Zimmer, C.; Zmeskal, J.; Zurlo, N.; Andersen, S. L.; Chevallier, J.; Uggerhøj, U. I.; Lyckegaard, F.

    2017-09-01

In this work we present a characterization study of forward emission from a thin, meso-structured silica positron/positronium (Ps) converter following positron implantation, in view of possible antihydrogen production. The target consisted of a ∼1 μm thick ultraporous silica film e-gun evaporated onto a 20 nm carbon foil. The Ps formation and emission were studied via Single Shot Positron Annihilation Lifetime Spectroscopy measurements after implantation of pulses with 3-4 × 10^7 positrons and 10 ns temporal width. The forward emission of implanted positrons and secondary electrons was investigated with a micro-channel plate - phosphor screen assembly, connected either to a CCD camera for imaging of the impinging particles, or to a fast photomultiplier tube to extract information about their time of flight. The maximum Ps formation fraction was estimated to be ∼10%. At least 10% of the positrons implanted with an energy of 3.3 keV are forward-emitted with a scattering angle smaller than 50° and a maximum kinetic energy of 1.2 keV. At least 0.1-0.2 secondary electrons per implanted positron were also found to be forward-emitted with a kinetic energy of a few eV. The possible application of this kind of positron/positronium converter for antihydrogen production is discussed.
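The time-of-flight read-out converts flight time over a known distance into kinetic energy; a sketch of that non-relativistic conversion (the 0.1 m drift length in the test is a hypothetical value, not the experiment's geometry):

```python
ELECTRON_MASS_KG = 9.109e-31      # positron mass equals electron mass
EV_PER_JOULE = 1.0 / 1.602e-19

def kinetic_energy_ev(distance_m, tof_s, mass_kg=ELECTRON_MASS_KG):
    """Kinetic energy (eV) of a particle from its time of flight.

    Valid in the non-relativistic regime, which holds for the keV-scale
    positrons discussed here (v/c of a few percent).
    """
    v = distance_m / tof_s        # mean speed over the drift length
    return 0.5 * mass_kg * v * v * EV_PER_JOULE
```

For example, a positron covering 0.1 m in 5 ns comes out at roughly 1.1 keV, the same energy scale as the forward-emitted positrons reported above.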

  14. The DRAGO gamma camera.

    Science.gov (United States)

    Fiorini, C; Gola, A; Peloso, R; Longoni, A; Lechner, P; Soltau, H; Strüder, L; Ottobrini, L; Martelli, C; Lui, R; Madaschi, L; Belloli, S

    2010-04-01

    In this work, we present the results of the experimental characterization of the DRAGO (DRift detector Array-based Gamma camera for Oncology), a detection system developed for high-spatial resolution gamma-ray imaging. This camera is based on a monolithic array of 77 silicon drift detectors (SDDs), with a total active area of 6.7 cm(2), coupled to a single 5-mm-thick CsI(Tl) scintillator crystal. The use of an array of SDDs provides a high quantum efficiency for the detection of the scintillation light together with a very low electronics noise. A very compact detection module based on the use of integrated readout circuits was developed. The performances achieved in gamma-ray imaging using this camera are reported here. When imaging a 0.2 mm collimated (57)Co source (122 keV) over different points of the active area, a spatial resolution ranging from 0.25 to 0.5 mm was measured. The depth-of-interaction capability of the detector, thanks to the use of a Maximum Likelihood reconstruction algorithm, was also investigated by imaging a collimated beam tilted to an angle of 45 degrees with respect to the scintillator surface. Finally, the imager was characterized with in vivo measurements on mice, in a real preclinical environment.
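A maximum-likelihood position estimate of the kind used for the depth-of-interaction study can be sketched as a grid search over a calibrated light-response model (the response function and candidate grid in the test are hypothetical, not the DRAGO calibration):

```python
import math

def ml_position(counts, response, candidates):
    """Pick the candidate position maximizing the Poisson log-likelihood.

    counts     : photons observed in each detector of the SDD array
    response   : response(pos, d) -> expected counts in detector d for a
                 scintillation event at pos (calibrated light-response model)
    candidates : candidate interaction positions to search over
    """
    best, best_ll = None, -math.inf
    for pos in candidates:
        ll = 0.0
        for d, n in enumerate(counts):
            mu = max(response(pos, d), 1e-12)   # guard against log(0)
            ll += n * math.log(mu) - mu         # Poisson log-likelihood term
        if ll > best_ll:
            best, best_ll = pos, ll
    return best
```

Because the likelihood uses the full light distribution across the array rather than a centroid, the same machinery can also resolve depth of interaction when the response model depends on it.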

  15. Matching the Best Viewing Angle in Depth Cameras for Biomass Estimation Based on Poplar Seedling Geometry

    Directory of Open Access Journals (Sweden)

    Dionisio Andújar

    2015-06-01

In energy crops for biomass production a proper plant structure is important to optimize wood yields. A precise crop characterization in early stages may contribute to the choice of proper cropping techniques. This study assesses the potential of the Microsoft Kinect for Windows v.1 sensor to determine the best viewing angle of the sensor to estimate the plant biomass based on poplar seedling geometry. Kinect Fusion algorithms were used to generate a 3D point cloud from the depth video stream. The sensor was mounted in different positions facing the tree in order to obtain depth (RGB-D) images from different angles. Individuals of two different ages, i.e., one month and one year old, were scanned. Four different viewing angles were compared: top view (0°), 45° downwards view, front view (90°) and ground upwards view (-45°). The ground truth used to validate the sensor readings consisted of a destructive sampling in which the height, leaf area and biomass (dry weight basis) were measured in each individual plant. The depth image models agreed well with the 45°, 90° and -45° measurements in one-year poplar trees. Good correlations (0.88 to 0.92) between dry biomass and the area measured with the Kinect were found. In addition, plant height was accurately estimated with an error of a few centimeters. The comparison between different viewing angles revealed that top views showed poorer results due to the fact that the top leaves occluded the rest of the tree. However, the other views led to good results. Conversely, small poplars showed better correlations with actual parameters from the top view (0°). Therefore, although the Microsoft Kinect for Windows v.1 sensor provides good opportunities for biomass estimation, the viewing angle must be chosen taking into account the developmental stage of the crop and the desired parameters. The results of this study indicate that Kinect is a promising tool for a rapid canopy characterization, i.e., for estimating crop
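The validation against destructive sampling boils down to correlating Kinect-derived geometry with measured dry biomass; a plain Pearson correlation is the kind of check behind the 0.88-0.92 figures (the data in the test are illustrative, not the study's measurements):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equally long series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5
```

A coefficient near 1 means the Kinect-measured area is an almost linear proxy for dry biomass, so a single calibration line suffices for prediction.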

  16. Cloud detection and movement estimation based on sky camera images using neural networks and the Lucas-Kanade method

    Science.gov (United States)

    Tuominen, Pekko; Tuononen, Minttu

    2017-06-01

    One of the key elements in short-term solar forecasting is the detection of clouds and their movement. This paper discusses a new method for extracting cloud cover and cloud movement information from ground-based camera images using neural networks and the Lucas-Kanade method. Two novel features of the algorithm are that it performs well both inside and outside of the circumsolar region, i.e. the vicinity of the sun, and that it is capable of determining a threefold sun state. More precisely, the sun state can be detected to be either clear, partly covered by clouds or overcast. This is possible due to the absence of a shadow band in the imaging system. Visual validation showed that the new algorithm performed well in detecting clouds of varying color and contrast in situations regarded as difficult for commonly used thresholding methods. Cloud motion fields were computed from two consecutive sky images by solving the optical flow problem with the fast-to-compute Lucas-Kanade method. A local filtering scheme developed in this study was used to remove noisy motion vectors, and it is shown that this filtering technique results in a motion field with locally nearly uniform directions and smooth global changes in direction trends. Thin, transparent clouds still pose a challenge for detection and leave room for future improvements of the algorithm.
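    The Lucas-Kanade step described above reduces, for a single window, to a 2×2 least-squares problem. A minimal numpy-only sketch (the function name and the synthetic drifting-blob scene are ours, not the paper's implementation):

```python
import numpy as np

def lucas_kanade_window(prev, curr):
    """Estimate one (u, v) motion vector for a whole image window by
    solving the Lucas-Kanade least-squares system A [u, v]^T = b."""
    Iy, Ix = np.gradient(prev.astype(float))       # d/drow (y), d/dcol (x)
    It = curr.astype(float) - prev.astype(float)   # temporal difference
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)                   # (u, v) in pixels/frame

# Synthetic "cloud": a Gaussian blob drifting 0.5 px to the right.
yy, xx = np.mgrid[0:64, 0:64]
blob = lambda cx: np.exp(-((xx - cx) ** 2 + (yy - 32.0) ** 2) / 50.0)
u, v = lucas_kanade_window(blob(30.0), blob(30.5))
```

    In a full implementation this solve is repeated per window over the image, and the resulting vector field is then cleaned by a local filter such as the one the paper develops.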

  17. Conjugate observation of auroral finger-like structures by ground-based all-sky cameras and THEMIS satellites

    Science.gov (United States)

    Nishi, Katsuki; Shiokawa, Kazuo; Frühauff, Dennis

    2017-07-01

    In this study, we analyze the first conjugate observation of auroral finger-like structures using ground-based all-sky cameras and the Time History of Events and Macroscale Interactions during Substorms (THEMIS) satellites, and investigate the physical processes that cause auroral fragmentation into patches. Two events are reported: one is a conjugate event, and the other is a nearly conjugate event. The conjugate event was observed at Narsarsuaq (magnetic latitude: 65.3°N), Greenland, at 0720-0820 UT (0506-0606 LT) on 17 February 2012. Analysis of the event revealed the following observational facts: (1) the variation of parallel electron energy fluxes observed by THEMIS-E corresponds to the auroral intensity variation, (2) plasma pressure and magnetic pressure fluctuate in antiphase with time scales of 5-20 min, and (3) the perpendicular ion velocity is very small (less than 50 km/s). In the latter event, observed at Gakona, Alaska, on 2 February 2008, the THEMIS-D satellite passed across higher latitudes of the finger-like structures. The data from THEMIS-D also showed the antiphase fluctuation between plasma pressure and magnetic pressure and the small perpendicular ion velocity. From these observations, we suggest that the finger-like structures are caused by a pressure-driven instability in the balance of plasma and magnetic pressures in the magnetosphere.

  18. Positrons, Positronium, Positron and Positronium Complexes in Crystal. Features of Their Properties in Phonon Atmosphere

    Directory of Open Access Journals (Sweden)

    Eugene P. Prokopev

    2012-10-01

    Full Text Available Using the example of ionic crystals, the article shows that polarization of the crystal framework by the oppositely charged polarons of the positronium atom (Ps) changes the positronium binding energy and also leads to a renormalization of the electron and positron effective masses. This interaction of the electron and positron of the positronium atom with optical phonons produces an additional repulsive interaction on top of the Coulomb attraction. Furthermore, positronium atoms with both large and small radii can exist in the phonon atmosphere of the crystal.

  19. Optimization of attenuation correction for positron emission tomography studies of thorax and pelvis using count-based transmission scans.

    Science.gov (United States)

    Boellaard, R; van Lingen, A; van Balen, S C M; Lammertsma, A A

    2004-02-21

    The quality of thorax and pelvis transmission scans and therefore of attenuation correction in PET depends on patient thickness and transmission rod source strength. The purpose of the present study was to assess the feasibility of using count-based transmission scans, thereby guaranteeing more consistent image quality and more precise quantification than with fixed transmission scan duration. First, the relation between noise equivalent counts (NEC) of 10 min calibration transmission scans and rod source activity was determined over a period of 1.5 years. Second, the relation between transmission scan counts and uniform phantom diameter was studied numerically, determining the relative contribution of counts from lines of response passing through the phantom as compared with the total number of counts. Finally, the relation between patient weight and transmission scan duration was determined for 35 patients, who were scanned at the level of thorax or pelvis. After installation of new rod sources, the NEC of transmission scans first increased slightly (5%) with decreasing rod source activity and after 3 months decreased with a rate of 2-3% per month. The numerical simulation showed that the number of transmission scan counts from lines of response passing through the phantom increased with phantom diameter up to 7 cm. For phantoms larger than 7 cm, the number of these counts decreased at approximately the same rate as the total number of transmission scan counts. Patient data confirmed that the total number of transmission scan counts decreased with increasing patient weight with about 0.5% kg(-1). It can be concluded that count-based transmission scans compensate for radioactive decay of the rod sources. With count-based transmission scans, rod sources can be used for up to 1.5 years at the cost of a 50% increased transmission scan duration. 
For phantoms with diameters of more than 7 cm and for patients scanned at the level of thorax or pelvis, use of count-based
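    The decay trade-off reported above can be sketched numerically. A rough model using only figures from the abstract: the NEC is roughly flat for the first three months, then declines at 2.5% per month (the midpoint of the reported 2-3% range), so a count-based protocol must stretch the scan time by the reciprocal of the remaining count rate:

```python
def duration_factor(months, flat_months=3.0, decline=0.025):
    """Multiplier on transmission scan time needed to reach a fixed
    count target, relative to fresh rod sources."""
    if months <= flat_months:
        return 1.0
    relative_rate = (1.0 - decline) ** (months - flat_months)
    return 1.0 / relative_rate

factor_at_18_months = duration_factor(18)   # rods used for 1.5 years
```

    With these assumptions the factor at 18 months comes out near 1.5, consistent with the roughly 50% increase in scan duration quoted in the conclusion.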

  20. Making Relativistic Positrons Using Ultra-Intense Short Pulse Lasers

    Energy Technology Data Exchange (ETDEWEB)

    Chen, H; Wilks, S; Bonlie, J; Chen, C; Chen, S; Cone, K; Elberson, L; Gregori, G; Liang, E; Price, D; Van Maren, R; Meyerhofer, D D; Mithen, J; Murphy, C V; Myatt, J; Schneider, M; Shepherd, R; Stafford, D; Tommasini, R; Beiersdorfer, P

    2009-08-24

    This paper describes a new positron source produced using ultra-intense short-pulse lasers. Although the idea had been studied in theory since as early as the 1970s, the use of lasers as a valuable new positron source was not demonstrated experimentally until recent years, when petawatt-class short-pulse lasers were developed. In 2008 and 2009, in a series of experiments performed at Lawrence Livermore National Laboratory, a large number of positrons were observed after shooting a millimeter-thick solid gold target. Up to 2 x 10^10 positrons per steradian ejected out the back of ~mm-thick gold targets were detected. The targets were illuminated with short (~1 ps) ultra-intense (~1 x 10^20 W/cm^2) laser pulses. These positrons are produced predominantly by the Bethe-Heitler process and have an effective temperature of 2-4 MeV, with the distribution peaking at 4-7 MeV. The angular distribution of the positrons is anisotropic. For a wide range of applications, this new laser-based positron source, with its unique characteristics, may complement the existing sources using radioactive isotopes and accelerators.

  1. An automated normative-based fluorodeoxyglucose positron emission tomography image-analysis procedure to aid Alzheimer disease diagnosis using statistical parametric mapping and interactive image display

    Science.gov (United States)

    Chen, Kewei; Ge, Xiaolin; Yao, Li; Bandy, Dan; Alexander, Gene E.; Prouty, Anita; Burns, Christine; Zhao, Xiaojie; Wen, Xiaotong; Korn, Ronald; Lawson, Michael; Reiman, Eric M.

    2006-03-01

    Having approved fluorodeoxyglucose positron emission tomography (FDG PET) for the diagnosis of Alzheimer's disease (AD) in some patients, the Centers for Medicare and Medicaid Services suggested the need to develop and test analysis techniques to optimize diagnostic accuracy. We developed an automated computer package comparing an individual's FDG PET image to those of a group of normal volunteers. The normal control group includes FDG-PET images from 82 cognitively normal subjects, 61.89+/-5.67 years of age, who were characterized demographically, clinically, neuropsychologically, and by their apolipoprotein E genotype (known to be associated with a differential risk for AD). In addition, AD-affected brain regions, functionally defined based on a previous study (Alexander et al., Am J Psychiatr, 2002), were also incorporated. Our computer package permits the user to optionally select control subjects, matching the individual patient for gender, age, and educational level. It is fully streamlined to require minimal user intervention. With one mouse click, the program runs automatically, normalizing the individual patient image, setting up a design matrix for comparing the single subject to a group of normal controls, performing the statistics, calculating the glucose reduction overlap index of the patient with the AD-affected brain regions, and displaying the findings in reference to the AD regions. In conclusion, the package automatically contrasts a single patient to a normal subject database using sound statistical procedures. With further validation, this computer package could be a valuable tool to assist physicians in decision making and communicating findings with patients and patient families.
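    The core of such a single-subject-versus-normative comparison can be sketched with synthetic data. The sizes (82 controls) and the z = -1.64 (p < 0.05) cutoff are illustrative; the "overlap index" shown here (the fraction of AD-region voxels where the patient is significantly low) is one plausible reading of the abstract, not the package's documented definition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normative database: 82 control "images" of 1000 voxels.
controls = rng.normal(loc=100.0, scale=5.0, size=(82, 1000))
mu = controls.mean(axis=0)
sigma = controls.std(axis=0, ddof=1)

# Synthetic patient with reduced uptake in a mock "AD region".
ad_region = np.zeros(1000, dtype=bool)
ad_region[:100] = True
patient = rng.normal(100.0, 5.0, 1000)
patient[ad_region] -= 20.0                 # hypometabolism, about -4 SD

z_map = (patient - mu) / sigma             # voxelwise z-score map

# Fraction of AD-region voxels where the patient falls below z = -1.64.
overlap_index = float(np.mean(z_map[ad_region] < -1.64))
```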

  2. The NEAT Camera Project

    Science.gov (United States)

    Newburn, Ray L., Jr.

    1995-01-01

    The NEAT (Near Earth Asteroid Tracking) camera system consists of a camera head with a 6.3 cm square 4096 x 4096 pixel CCD, fast electronics, and a Sun Sparc 20 data and control computer with dual CPUs, 256 Mbytes of memory, and 36 Gbytes of hard disk. The system was designed for optimum use with an Air Force GEODSS (Ground-based Electro-Optical Deep Space Surveillance) telescope. The GEODSS telescopes have 1 m f/2.15 objectives of the Ritchey-Chrétien type, designed originally for satellite tracking. Installation of NEAT began July 25 at the Air Force Facility on Haleakala, a 3000 m peak on Maui in Hawaii.

  3. Measuring cues for stand-off deception detection based on full-body nonverbal features in body-worn cameras

    Science.gov (United States)

    Bouma, Henri; Burghouts, Gertjan; den Hollander, Richard; van der Zee, Sophie; Baan, Jan; ten Hove, Johan-Martijn; van Diepen, Sjaak; van den Haak, Paul; van Rest, Jeroen

    2016-10-01

    Deception detection is valuable in the security domain to distinguish truth from lies. It is desirable in many security applications, such as suspect and witness interviews and airport passenger screening. Interviewers are constantly trying to assess the credibility of a statement, usually based on intuition without objective technical support. However, psychological research has shown that humans can hardly perform better than random guessing. Deception detection is a multi-disciplinary research area with an interest from different fields, such as psychology and computer science. In the last decade, several developments have helped to improve the accuracy of lie detection (e.g., with a concealed information test, increasing the cognitive load, or measurements with motion capture suits) and relevant cues have been discovered (e.g., eye blinking or fiddling with the fingers). With an increasing presence of mobile phones and bodycams in society, a mobile, stand-off, automatic deception detection methodology based on various cues from the whole body would create new application opportunities. In this paper, we study the feasibility of measuring these visual cues automatically on different parts of the body, laying the groundwork for stand-off deception detection in more flexible and mobile deployable sensors, such as body-worn cameras. We give an extensive overview of recent developments in two communities: in the behavioral-science community the developments that improve deception detection with special attention to the observed relevant non-verbal cues, and in the computer-vision community the recent methods that are able to measure these cues. The cues are extracted from several body parts: the eyes, the mouth, the head, and the full-body pose. We performed an experiment using several state-of-the-art video-content-analysis (VCA) techniques to assess the quality of robustly measuring these visual cues.

  4. Analytical Study of the Effect of the System Geometry on Photon Sensitivity and Depth of Interaction of Positron Emission Mammography

    Directory of Open Access Journals (Sweden)

    Pablo Aguiar

    2012-01-01

    Full Text Available Positron emission mammography (PEM) cameras are novel dedicated PET systems optimized to image the breast. For these cameras it is essential to achieve an optimum trade-off between sensitivity and spatial resolution, and therefore the main challenge for the novel cameras is to improve the sensitivity without degrading the spatial resolution. We carry out an analytical study of the effect of different detector geometries on the photon sensitivity and on the angle of incidence of the detected photons, which is related to the DOI (depth of interaction) effect and therefore to the intrinsic spatial resolution. To this end, dual-head detectors were compared to box and different polygon-detector configurations. Our results showed that higher sensitivity and uniformity were found for box and polygon-detector configurations compared to dual-head cameras. Thus, the optimal configuration in terms of sensitivity is a PEM scanner based on a polygon of twelve (dodecagon) or more detectors. We have shown that this configuration is clearly superior to dual-head detectors and slightly superior to box, octagon, and hexagon detectors. Nevertheless, DOI effects are larger for this configuration than for dual-head and box scanners, and therefore an accurate compensation for this effect is required.
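    The geometric intuition behind the sensitivity comparison can be illustrated with a deliberately simplified 2-D toy model (ours, not the paper's analytical formulation): a central point source emits back-to-back photons at a uniform azimuth, and a coincidence is recorded only when both photons strike detector material.

```python
import numpy as np

def coincidence_sensitivity(arcs_deg, n=200_000, seed=3):
    """2-D toy model of PEM geometric sensitivity for a central source.
    `arcs_deg` lists (start, end) angular spans of detector heads, in
    degrees; returns the Monte Carlo coincidence fraction."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 360.0, n)

    def hit(angle):
        angle = angle % 360.0
        return np.any([(angle >= s) & (angle < e) for s, e in arcs_deg],
                      axis=0)

    # Back-to-back photons: both theta and theta + 180° must hit.
    return float(np.mean(hit(theta) & hit(theta + 180.0)))

dual_head = coincidence_sensitivity([(45, 135), (225, 315)])  # two 90° heads
full_ring = coincidence_sensitivity([(0, 360)])               # closed polygon
```

    In this toy geometry a closed ring (the dodecagon limit) detects every in-plane pair, while two opposed 90° heads detect only half of them, illustrating why closed configurations win on sensitivity even before DOI effects are considered.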

  5. Applications and advances of positron beam spectroscopy: appendix a

    Energy Technology Data Exchange (ETDEWEB)

    Howell, R. H., LLNL

    1997-11-05

    Over 50 scientists from DOE-DP, DOE-ER, the national laboratories, academia and industry attended a workshop held on November 5-7, 1997 at Lawrence Livermore National Laboratory, jointly sponsored by the DOE Division of Materials Science, the Materials Research Institute at LLNL and the University of California President's Office. Workshop participants were charged to address two questions: is there a need for a national center for materials analysis using positron techniques, and can the capabilities at Lawrence Livermore National Laboratory serve this need. To demonstrate the need for a national center, the workshop participants discussed the technical advantages enabled by high positron currents and advanced measurement techniques, the role that these techniques will play in materials analysis, and the demand for the data. There were general discussions led by review talks on positron analysis techniques and their applications to problems in semiconductors, polymers and composites, metals and engineering materials, surface analysis and advanced techniques. These were followed by focus sessions on positron analysis opportunities in these same areas. Livermore now leads the world in materials analysis capabilities by positrons, due to developments in response to the demands of science-based stockpile stewardship. There was a detailed discussion of the LLNL capabilities and a tour of the facilities. The Livermore facilities now include the world's highest-current beam of keV positrons, a scanning pulsed positron microprobe under development capable of three-dimensional maps of defect size and concentration, an MeV positron beam for defect analysis of large samples, and electron momentum spectroscopy by positrons. This document is a supplement to the written summary report. It contains a complete schedule, a list of attendees and the viewgraphs for the presentations in the review and focus sessions.

  6. Study on low-energy positron polarimetry

    Indian Academy of Sciences (India)

    A polarised positron source has been proposed for the design of the international linear collider (ILC). In order to optimise the positron beam, a measurement of its degree of polarisation close to the positron creation point is desired. In this contribution, methods for determining the positron polarisation at low energies are ...

  7. Measuring dispersed spot of positioning CMOS camera from star image quantitative interpretation based on a bivariate-error least squares curve fitting algorithm

    Science.gov (United States)

    Bu, Fan; Qiu, Yuehong; Yao, Dalei; Yan, Xingtao

    2017-02-01

    For a positioning CMOS camera, we put forward a system that can quantitatively measure the dispersed-spot parameters and the degree of energy concentration of a given optical system. Based on this method, the detection capability of the positioning CMOS camera can be verified. The measuring setup contains some key instruments, such as a 550 mm collimator, a 0.2 mm star point, a turntable and a positioning CMOS camera. Firstly, the definition of the dispersed-spot parameters is introduced. Then, the steps for measuring them are listed. The energy center of the dispersed spot is calculated using a centroid algorithm, and a bivariate-error least-squares Gaussian fitting method is then presented to fit the dispersed-spot energy distribution curve. Finally, the connected region shaped by the energy contour of the defocused spot is analyzed. The diameter of the region containing 80% of the total energy of the defocused spot and the percentage of energy falling within the central 3×3 pixel area are both calculated. The experimental results show that 80% of the total energy of the defocused spot is concentrated within an inner-circle diameter of 15 μm, and the percentage within the central 3×3 pixel area can reach 80% or higher. Therefore, the method meets the needs of imaging quality control for the optical systems in positioning CMOS cameras.
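    The centroid and encircled-energy computations described above can be sketched as follows. This is a minimal numpy illustration on a synthetic Gaussian spot (the function name and test values are ours); the paper's bivariate-error Gaussian fitting step is omitted:

```python
import numpy as np

def spot_metrics(img, frac=0.8):
    """Energy centroid and encircled-energy diameter of a dispersed spot."""
    img = img.astype(float)
    total = img.sum()
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    cx = (xx * img).sum() / total              # centroid, x (columns)
    cy = (yy * img).sum() / total              # centroid, y (rows)
    r = np.hypot(xx - cx, yy - cy).ravel()     # pixel radii from centroid
    order = np.argsort(r)
    cum = np.cumsum(img.ravel()[order]) / total
    r_frac = r[order][np.searchsorted(cum, frac)]
    return (cx, cy), 2.0 * r_frac              # centre, frac-energy diameter

# Synthetic Gaussian spot, sigma = 4 px, centred at (24.3, 20.7).
yy, xx = np.mgrid[0:48, 0:48]
spot = np.exp(-((xx - 24.3) ** 2 + (yy - 20.7) ** 2) / (2 * 4.0 ** 2))
(cx, cy), d80 = spot_metrics(spot)
```

    For an ideal 2-D Gaussian, the 80%-energy diameter is 2σ·sqrt(2·ln 5) ≈ 3.59σ, so the computed d80 should be close to 14.4 px here.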

  8. Change detection and characterization of volcanic activity using ground based low-light and near infrared cameras to monitor incandescence and thermal signatures

    Science.gov (United States)

    Harrild, Martin; Webley, Peter; Dehn, Jonathan

    2015-04-01

    Knowledge and understanding of precursory events and thermal signatures are vital for monitoring volcanogenic processes, as activity can often range from low level lava effusion to large explosive eruptions, easily capable of ejecting ash up to aircraft cruise altitudes. Using ground based remote sensing techniques to monitor and detect this activity is essential, but often the required equipment and maintenance is expensive. Our investigation explores the use of low-light cameras to image volcanic activity in the visible to near infrared (NIR) portion of the electromagnetic spectrum. These cameras are ideal for monitoring as they are cheap, consume little power, are easily replaced and can provide near real-time data. We focus here on the early detection of volcanic activity, using automated scripts, that capture streaming online webcam imagery and evaluate image pixel brightness values to determine relative changes and flag increases in activity. The script is written in Python, an open source programming language, to reduce the overall cost to potential consumers and increase the application of these tools across the volcanological community. In addition, by performing laboratory tests to determine the spectral response of these cameras, a direct comparison of collocated low-light and thermal infrared cameras has allowed approximate eruption temperatures and effusion rates to be determined from pixel brightness. The results of a field campaign in June, 2013 to Stromboli volcano, Italy, are also presented here. Future field campaigns to Latin America will include collaborations with INSIVUMEH in Guatemala, to apply our techniques to Fuego and Santiaguito volcanoes.
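    The brightness-flagging logic of such an automated script can be sketched in a few lines. This is a simplified illustration, assuming a rolling mean-plus-n-sigma baseline (the function name, window size and threshold are ours, not the authors' Python implementation):

```python
import numpy as np

def flag_activity(frames, window=10, n_sigma=3.0):
    """Return indices of frames whose mean brightness exceeds a rolling
    baseline (mean + n_sigma * std of the previous `window` frames)."""
    means = np.array([np.mean(f) for f in frames])
    flagged = []
    for i in range(window, len(means)):
        base = means[i - window:i]
        if means[i] > base.mean() + n_sigma * base.std(ddof=1):
            flagged.append(i)
    return flagged

# Synthetic webcam stream: dim noise, then a sudden incandescent glow.
rng = np.random.default_rng(1)
frames = [rng.normal(20.0, 1.0, (32, 32)) for _ in range(30)]
for f in frames[25:]:
    f += 40.0                        # brightness step at frame 25
alerts = flag_activity(frames)
```

    A production version would work on streamed webcam imagery rather than an in-memory list, but the relative-change idea is the same.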

  9. New algorithm for intraocular lens power calculations after myopic laser in situ keratomileusis based on rotating Scheimpflug camera data.

    Science.gov (United States)

    Potvin, Richard; Hill, Warren

    2015-02-01

    To develop an algorithm to calculate intraocular lens (IOL) power for eyes with previous laser in situ keratomileusis (LASIK) for myopia based on data from a rotating Scheimpflug camera and to compare calculations with those of current formulas. East Valley Ophthalmology, Mesa, Arizona, USA. Observational case series. Relevant IOL calculation and postoperative refractive data were obtained for eyes of patients who had previous myopic LASIK and subsequent cataract surgery. Initial screening and correlation analysis identified Pentacam Scheimpflug keratometry (K) values appropriate for use in calculating a "best K" for IOL power calculations in these eyes. Error analysis identified other eye measures to improve results. Final results were compared with results from 9 other calculation methods available on the American Society of Cataract and Refractive Surgery (ASCRS) web site. The study obtained data from 101 eyes of 77 patients. More than 200 Scheimpflug K-formula combinations were evaluated for each eye. The true net power in the 4.0 mm zone centered on the corneal apex provided the best adjusted K reading for IOL power calculation in the Shammas no-history formula. The final formula had good outcomes, with 34%, 66%, and 91% of eyes being within ±0.25 diopter (D), ±0.50 D, and ±1.00 D of the refractive target, respectively. These results compare favorably to the best formulas on the ASCRS web site. The no-history formula derived using the Scheimpflug device's true net power in the 4.0 mm zone centered on the corneal apex appears to be an accurate method for determining IOL power after LASIK for myopia. Corroboration with additional data sets is suggested. Neither author has a financial or proprietary interest in any material or method mentioned. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  10. Positron annihilation in superconductive metals

    Energy Technology Data Exchange (ETDEWEB)

    Dekhtjar, I.J.

    1969-03-10

    A correlation is shown between the parameters of superconductive metals and those of positron annihilation. Particular attention is paid to the density of states obtained from the electron specific heat.

  11. Development and testing of a positron accumulator for antihydrogen production

    CERN Document Server

    Collier, M; Meshkov, O I; Van der Werf, D P; Charlton, M

    1999-01-01

    A positron accumulator based on a modified Penning-Malmberg trap has been constructed and has undergone preliminary testing prior to being shipped to CERN in Geneva, where it will be part of an experiment to synthesize low-energy antihydrogen. It utilises nitrogen buffer gas to cool and trap a continuous beam of positrons emanating from a /sup 22/Na radioactive source. A solid neon moderator slows the positrons from the source down to epithermal energies of a few eV before they are injected into the trap. It is estimated that around 10/sup 8/ positrons can be trapped and cooled to ambient temperature within 5 minutes in this scheme using a 10 mCi source. (8 refs).
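    The quoted figures imply an overall source-to-trap efficiency that can be checked with a back-of-the-envelope calculation. Only the activity, fill time and trapped number come from the abstract; the interpretation of the resulting ratio as moderator times trapping efficiency is our assumption:

```python
# Back-of-the-envelope check of the quoted trapping performance.
CURIE = 3.7e10                        # decays per second per curie
source_rate = 10e-3 * CURIE           # 10 mCi 22Na source -> 3.7e8 e+/s
fill_time = 5 * 60                    # 5 minutes, in seconds
emitted = source_rate * fill_time     # ~1.1e11 positrons emitted
trapped = 1e8                         # accumulated, per the abstract
overall_efficiency = trapped / emitted   # ~9e-4 source-to-trap
```

    An overall efficiency of order 10^-3 is plausible for a solid-neon moderator (of order 1% slow-positron yield) combined with a buffer-gas trapping stage that captures a further fraction of the slow beam.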

  12. Test bed for real-time image acquisition and processing systems based on FlexRIO, CameraLink, and EPICS

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, E., E-mail: eduardo.barrera@upm.es [Grupo de Investigación en Instrumentación y Acústica Aplicada, Universidad Politécnica de Madrid (UPM) (Spain); Ruiz, M.; Sanz, D. [Grupo de Investigación en Instrumentación y Acústica Aplicada, Universidad Politécnica de Madrid (UPM) (Spain); Vega, J.; Castro, R. [Asociación EURATOM/CIEMAT para Fusión, Madrid (Spain); Juárez, E.; Salvador, R. [Centro de Investigación en Tecnologías Software y Sistemas Multimedia para la Sostenibilidad, Universidad Politécnica de Madrid (UPM) (Spain)

    2014-05-15

    Highlights: • The test bed allows for the validation of real-time image processing techniques. • Offers FPGA (FlexRIO) image processing that does not require CPU intervention. • Is fully compatible with the architecture of the ITER Fast Controllers. • Provides flexibility and easy integration in distributed experiments based on EPICS. - Abstract: Image diagnostics are becoming standard in nuclear fusion. At present, images are typically analyzed off-line. However, real-time processing is occasionally required (for instance, hot-spot detection or pattern recognition tasks), which will be the objective for the next generation of fusion devices. In this paper, a test bed for image generation, acquisition, and real-time processing is presented. The proposed solution is built using a Camera Link simulator, a Camera Link frame-grabber, a PXIe chassis, and offers software interface with EPICS. The Camera Link simulator (PCIe card PCIe8 DVa C-Link from Engineering Design Team) generates simulated image data (for example, from video-movies stored in fusion databases) using a Camera Link interface to mimic the frame sequences produced with diagnostic cameras. The Camera Link frame-grabber (FlexRIO Solution from National Instruments) includes a field programmable gate array (FPGA) for image acquisition using a Camera Link interface; the FPGA allows for the codification of ad-hoc image processing algorithms using LabVIEW/FPGA software. The frame grabber is integrated in a PXIe chassis with system architecture similar to that of the ITER Fast Controllers, and the frame grabber provides a software interface with EPICS to program all of its functionalities, capture the images, and perform the required image processing. The use of these four elements allows for the implementation of a test bed system that permits the development and validation of real-time image processing techniques in an architecture that is fully compatible with that of the ITER Fast Controllers.

  13. The influence of electron multiplication and internal X-ray fluorescence on the performance of a scintillator-based gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Hall, David J., E-mail: d.j.hall@open.ac.uk [e2v centre for electronic imaging, The Open University, Walton Hall, Milton Keynes MK7 6AA (United Kingdom); Holland, Andrew; Soman, Matthew [e2v centre for electronic imaging, The Open University, Walton Hall, Milton Keynes MK7 6AA (United Kingdom)

    2012-06-21

    When considering the 'standard' gamma-camera, one might picture an array of photo-multiplier tubes or a similar array of small-area detectors. This array of imaging detectors would be attached to a corresponding array of scintillator modules (or a solid layer of scintillator) in order to give a high detection efficiency in the energy region of interest, usually 8-140 keV. Over recent years, developments of gamma-cameras capable of achieving much higher spatial resolutions have led to a new range of systems based on Charge-Coupled Devices with some form of signal multiplication between the scintillator and the CCD in order for one to distinguish the light output from the scintillator above the CCD noise. The use of an Electron-Multiplying Charge-Coupled Device (EM-CCD) incorporates the gain process within the CCD through a form of 'impact ionisation', however, the gain process introduces an 'excess noise factor' due to the probabilistic nature of impact ionisation and this additional noise consequently has an impact on the spatial and spectral resolution of the detector. Internal fluorescence in the scintillator, producing K-shell X-ray fluorescence photons that can be detected alongside the incident gamma-rays, also has a major impact on the imaging capabilities of gamma-cameras. This impact varies dramatically from the low spatial resolution to high spatial resolution camera system. Through a process of simulation and experimental testing focussed on the high spatial resolution (EM-CCD based) variant, the factors affecting the performance of gamma-camera systems are discussed and the results lead to important conclusions to be considered for the development of future systems. This paper presents a study into the influence of the EM-CCD gain process and the internal X-ray fluorescence in the scintillator on the performance of scintillator-based gamma cameras (CCD-based or otherwise), making use of Monte Carlo simulations to demonstrate
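    The "excess noise factor" of the EM-CCD gain register mentioned above arises because each multiplication stage is probabilistic. A small Monte Carlo sketch (stage count and per-stage probability are illustrative values, not those of a specific device):

```python
import numpy as np

rng = np.random.default_rng(42)

# Monte Carlo of an EM register: 20 000 single photo-electron events,
# each passed through 100 multiplication stages where every electron
# creates a secondary with probability p = 0.05 (impact ionisation).
n = np.ones(20_000, dtype=np.int64)
for _ in range(100):
    n += rng.binomial(n, 0.05)

mean_gain = n.mean()                   # expected ~1.05**100, i.e. ~131
# Excess noise factor F^2 = <g^2> / <g>^2; branching-process theory
# gives F^2 = 1 + (1 - p)/(1 + p) * (1 - 1/G) ~ 1.9 here, tending to
# the well-known limit of 2 as p -> 0 at high gain.
excess_noise_sq = float((n.astype(float) ** 2).mean() / mean_gain ** 2)
```

    This near-doubling of the signal variance is exactly the effective halving of quantum efficiency that degrades the spectral resolution of EM-CCD based gamma cameras.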

  14. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    Directory of Open Access Journals (Sweden)

    J. W. Park

    2016-06-01

    Full Text Available Recently, aerial photography with an unmanned aerial vehicle (UAV) system has used the UAV and remote controls connected to a ground control system through a radio frequency (RF) modem operating at about 430 MHz. However, this existing method of using an RF modem has limitations in long-distance communication. The Smart Camera's LTE (long-term evolution), Bluetooth, and Wi-Fi were used to implement a UAV communication module system, with which close-range aerial photogrammetry was carried out using automatic shooting. The automatic shooting system consists of an image-capturing device for the drone in areas that need image capture, and software for loading and managing a smart camera. This system is composed of automatic shooting using the sensors of the smart camera, and shooting catalog management, which manages the captured images and their information. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using the Smart Camera as the payload for a photogrammetric UAV system. The open source tools used included Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  15. A Novel in Situ FPAR Measurement Method for Low Canopy Vegetation Based on a Digital Camera and Reference Panel

    Directory of Open Access Journals (Sweden)

    Quanjun Jiao

    2013-01-01

    Full Text Available The fraction of absorbed photosynthetically active radiation (FPAR) is a key parameter in describing the exchange of fluxes of energy, mass and momentum between the surface and atmosphere. In this study, we present a method to measure FPAR using a digital camera and a reference panel. A digital camera was used to capture color images of low canopy vegetation, which contained a reference panel in one corner of the field of view (FOV). The digital image was classified into photosynthetically active vegetation, ground litter, sunlit soil, shadowed soil, and the reference panel. The relative intensities of the incident photosynthetically active radiation (PAR), the scene-reflected PAR, the PAR absorbed by the exposed background and the PAR absorbed by the green vegetation-covered ground were derived from the digital camera image, and then FPAR was calculated. This method was validated on eight plots with four vegetation species against FPAR measured by a SunScan instrument. A linear correlation with a coefficient of determination (R2) of 0.942 and a mean absolute error (MAE) of 0.031 was observed between FPAR values derived from the digital camera and measurements using the SunScan instrument. The result suggests that the present method can be used to accurately measure the FPAR of low canopy vegetation.
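    The two validation metrics quoted above, R2 and MAE, are straightforward to compute. A minimal sketch with mock plot data (the numbers below are illustrative, not the study's measurements):

```python
import numpy as np

def r_squared(pred, ref):
    """Coefficient of determination of `pred` against `ref`."""
    ss_res = np.sum((ref - pred) ** 2)
    ss_tot = np.sum((ref - ref.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def mae(pred, ref):
    """Mean absolute error."""
    return float(np.mean(np.abs(pred - ref)))

# Mock validation data: camera-derived FPAR vs. SunScan FPAR on 8 plots.
sunscan = np.array([0.21, 0.35, 0.42, 0.55, 0.61, 0.70, 0.78, 0.85])
camera = sunscan + np.array([0.02, -0.03, 0.01, 0.04, -0.02, 0.03, -0.01, 0.02])
r2 = r_squared(camera, sunscan)
err = mae(camera, sunscan)
```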

  16. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    Science.gov (United States)

    Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.

    2016-06-01

    Recently, aerial photography with an unmanned aerial vehicle (UAV) system has used a UAV and remote control connected to the ground control system over a radio frequency (RF) modem in the 430 MHz band. However, this existing RF-modem method has limitations for long-distance communication. The Smart Camera's LTE (long-term evolution), Bluetooth, and Wi-Fi were used to implement a UAV communication module, with which close-range aerial photogrammetry with automatic shooting was carried out. The automatic shooting system consists of an image-capturing device for the drone, used in areas that need image capture, and software for loading and managing a smart camera. The system is composed of automatic shooting using the smart camera's sensors and a shooting catalog that manages the captured images and their information. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using the Smart Camera as the payload for a photogrammetric UAV system. The open-source tools used included Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  17. Cold-start capability in virtual-reality laparoscopic camera navigation: a base for tailored training in undergraduates.

    Science.gov (United States)

    Paschold, Markus; Niebisch, Stefan; Kronfeld, Kai; Herzer, Manfred; Lang, Hauke; Kneist, Werner

    2013-06-01

    Medical students frequently have to fill the role of camera operator in laparoscopic procedures. Published work concerning camera navigation skills, especially in medical students, is rare. Therefore, our purpose was to evaluate personal characteristics and abilities that may affect virtual-reality laparoscopic camera navigation (VR-LCN) performance in a large cohort of first-time virtual-reality laparoscopy users. First-time virtual-reality laparoscopy users (n = 488) were enrolled prospectively. The tasks included VR-LCN using a 0° and a 30° angled laparoscope separately. Scores were correlated with demographics and the students' self-assessments in univariate and multivariate analyses. Six variables were associated with better VR-LCN results in the univariate analysis. On multivariate analysis, only male gender (odds ratio 2.3, 95 % confidence interval 1.4-3.9; p = 0.002) and higher self-confidence to assist in a laparoscopic operation (odds ratio 1.7, 95 % confidence interval 1.1-2.6; p = 0.014) were identified as predictive factors for a better 30° angled VR-LCN performance. Our study indicates that medical students' self-confidence regarding their ability to navigate a camera in a laparoscopic procedure and male gender predict a better first-time VR-LCN performance. These findings may provide a basis for a tailored educational approach.

  18. Creating personalized memories from social events: community-based support for multi-camera recordings of school concerts

    NARCIS (Netherlands)

    R.L. Guimarães (Rodrigo); P.S. Cesar Garcia (Pablo Santiago); D.C.A. Bulterman (Dick); V. Zsombori; I. Kegel

    2011-01-01

    The wide availability of relatively high-quality cameras makes it easy for many users to capture video fragments of social events such as concerts, sports events or community gatherings. The wide availability of simple sharing tools makes it nearly as easy to upload individual fragments

  19. New generation of meteorology cameras

    Science.gov (United States)

    Janout, Petr; Blažek, Martin; Páta, Petr

    2017-12-01

    A new generation of the WILLIAM (WIde-field aLL-sky Image Analyzing Monitoring system) camera includes new features such as monitoring of rain and storm clouds during daytime observation. Development of this new generation of weather-monitoring cameras responds to the demand for monitoring sudden weather changes. The new WILLIAM cameras process acquired image data immediately, can issue a warning of sudden torrential rain, and send it to the user's cell phone and email. Actual weather conditions are determined from image data, and the results of the image processing are complemented by data from temperature, humidity, and atmospheric pressure sensors. In this paper, we present the architecture and image data processing algorithms of this monitoring camera, together with a spatially variant model of the imaging system's aberrations based on Zernike polynomials.

  20. Hyper thin 3D edge measurement of honeycomb core structures based on the triangular camera-projector layout & phase-based stereo matching.

    Science.gov (United States)

    Jiang, Hongzhi; Zhao, Huijie; Li, Xudong; Quan, Chenggen

    2016-03-07

    We propose a novel hyper thin 3D edge measurement technique to measure the profile of 3D outer envelope of honeycomb core structures. The width of the edges of the honeycomb core is less than 0.1 mm. We introduce a triangular layout design consisting of two cameras and one projector to measure hyper thin 3D edges and eliminate data interference from the walls. A phase-shifting algorithm and the multi-frequency heterodyne phase-unwrapping principle are applied for phase retrievals on edges. A new stereo matching method based on phase mapping and epipolar constraint is presented to solve correspondence searching on the edges and remove false matches resulting in 3D outliers. Experimental results demonstrate the effectiveness of the proposed method for measuring the 3D profile of honeycomb core structures.
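
    The phase-shifting step named above is a standard fringe-analysis building block; as a hedged sketch (the abstract does not give the authors' exact algorithm or number of shifts), an N-step phase retrieval over frames I_k = A + B·cos(φ + 2πk/N) looks like:

    ```python
    import numpy as np

    def phase_from_shifts(images):
        """Recover the wrapped phase from N equally phase-shifted fringe frames."""
        stack = np.asarray(images, dtype=float)
        n = stack.shape[0]
        # Phase shifts delta_k = 2*pi*k/N, broadcast over the image dimensions
        delta = (2.0 * np.pi * np.arange(n) / n).reshape((n,) + (1,) * (stack.ndim - 1))
        num = (stack * np.sin(delta)).sum(axis=0)
        den = (stack * np.cos(delta)).sum(axis=0)
        return np.arctan2(-num, den)   # wrapped phase in (-pi, pi]
    ```

    The wrapped result would then feed the multi-frequency heterodyne unwrapping stage described in the abstract.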

  1. The CLIC electron and positron polarized sources

    CERN Document Server

    Rinolfi, Louis; Bulyak, Eugene; Chehab, Robert; Dadoun, Olivier; Gai, Wei; Gladkikh, Peter; Kamitani, Takuya; Kuriki, Masao; Liu, Wanming; Maryuama, Takashi; Omori, Tsunehiko; Poelker, Matt; Sheppard, John; Urakawa, Junji; Variola, Alessandro; Vivoli, Alessandro; Yakimenko, Vitaly; Zhou, Feng; Zimmermann, Frank

    2010-01-01

    The CLIC polarized electron source is based on a DC gun where the photocathode is illuminated by a laser beam. Each micro-bunch has a charge of 6×10⁹ e−, a width of 100 ps and a repetition rate of 2 GHz. A peak current of 10 A in the micro-bunch is a challenge for the surface charge limit of the photocathode. Two options are feasible to generate the 2 GHz e− bunch train: 100 ps micro-bunches can be extracted from the photocathode either by a 2 GHz laser system or by generating a macro-bunch using a ~200 ns laser pulse and a subsequent RF bunching system to produce the appropriate micro-bunch structure. Recent results obtained by SLAC, for the latter case, are presented. The polarized positron source is based on a positron production scheme in which polarized photons are produced by a laser Compton scattering process. The resulting circularly-polarized gamma photons are sent onto a target, producing pairs of longitudinally polarized electrons and positrons. The Compton backscattering process occurs either...
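
    The quoted numbers are mutually consistent: 6×10⁹ electrons delivered in 100 ps correspond to roughly 10 A of peak current, as a quick back-of-the-envelope check shows.

    ```python
    # Consistency check of the quoted CLIC micro-bunch numbers.
    E_CHARGE = 1.602176634e-19      # elementary charge, C
    n_electrons = 6e9               # electrons per micro-bunch (from the abstract)
    bunch_width = 100e-12           # micro-bunch width, s (from the abstract)

    bunch_charge = n_electrons * E_CHARGE       # ~0.96 nC per micro-bunch
    peak_current = bunch_charge / bunch_width   # ~9.6 A, i.e. the quoted ~10 A
    ```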

  2. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. The task of tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for human tracking over camera networks are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed from two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed from the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on analyses of the current progress made toward human tracking techniques over camera networks.

  3. Event Processing for Modular Gamma Cameras with Tiled Multi-Anode Photomultiplier Tubes.

    Science.gov (United States)

    Salçın, Esen; Furenlid, Lars R

    2012-01-01

    Multi-anode photomultiplier tubes (MAPMTs) are good candidates as light sensors for a new generation of modular scintillation cameras for single-photon emission computed tomography (SPECT) and positron emission tomography (PET) applications. MAPMTs can provide improved intrinsic spatial resolution and estimation of the depth of interaction (DOI). However, the area of a single MAPMT module is small for a modular gamma camera, so we are designing read-out electronics that will allow multiple individual MAPMT modules to be optically coupled to a single monolithic scintillator crystal. To allow such flexibility, the read-out electronics, which we refer to as the event processor, must be compact and adaptable. In combining arrays of MAPMTs, which may each have 64 to 1024 anodes per unit, issues must be overcome with amplifying, digitizing, and recording potentially very large numbers of channels per gamma-ray event. In this study, we have investigated different event-processor strategies for gamma cameras with multiple MAPMTs that will employ maximum-likelihood (ML) methods for estimation of the 3D spatial location, deposited energy and time of occurrence of events. We simulated anode signals for hypothetical gamma-camera geometries based on models of the stochastic processes inherent in scintillation cameras. The comparison between different triggering and read-out schemes was carried out by quantifying the information content in the anode signals via the Fisher information matrix (FIM). We observed that a decline in spatial resolution at the edges of the individual MAPMTs could be improved by the inclusion of neighboring MAPMT anode signals for events near the tiling boundaries.
    Thus, in order to maintain spatial resolution uniformity throughout the modular camera face, we propose dividing an MAPMT's array of anode signals into regions so as to help determine when triggers from one MAPMT need to be passed to a neighboring MAPMT so that it can contribute anode information for events between
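
    The Fisher-information comparison described above can be sketched generically. Assuming independent Poisson anode counts with parameter-dependent means g_k(θ) (the abstract does not specify the signal model in detail), the FIM is F_ij = Σ_k (∂g_k/∂θ_i)(∂g_k/∂θ_j)/g_k, which a simple finite-difference implementation can approximate:

    ```python
    import numpy as np

    def fisher_information(mean_fn, theta, eps=1e-6):
        """FIM for independent Poisson signals with means mean_fn(theta):
        F_ij = sum_k (dg_k/dtheta_i)(dg_k/dtheta_j) / g_k (finite differences)."""
        theta = np.asarray(theta, dtype=float)
        g0 = np.asarray(mean_fn(theta), dtype=float)
        rows = []
        for i in range(theta.size):
            t = theta.copy()
            t[i] += eps
            rows.append((np.asarray(mean_fn(t), dtype=float) - g0) / eps)
        J = np.stack(rows)            # Jacobian, shape (n_params, n_anodes)
        return (J / g0) @ J.T         # inverting this gives the Cramer-Rao bound
    ```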

  4. Perceptual Color Characterization of Cameras

    Directory of Open Access Journals (Sweden)

    Javier Vazquez-Corral

    2014-12-01

    Full Text Available Color camera characterization, mapping outputs from the camera sensors to an independent color space such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatially based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measures.
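
    The least-squares baseline that the paper's perceptual optimization improves upon can be sketched as follows; the data layout (N corresponding RGB/XYZ triplets) is hypothetical:

    ```python
    import numpy as np

    def fit_characterization_matrix(rgb, xyz):
        """Fit the conventional 3x3 characterization matrix M such that
        xyz ~ M @ rgb per pixel, by minimizing ||rgb @ M.T - xyz||_F.
        rgb, xyz: (N, 3) arrays of corresponding sensor and XYZ triplets."""
        M_T, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
        return M_T.T   # (3, 3) matrix mapping RGB column vectors to XYZ
    ```

    The perceptual variant in the paper replaces this Frobenius-norm objective with ΔE, S-CIELAB or CID error minimized over spherically sampled candidate matrices.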

  5. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  6. Determination of the radiance of cylindrical light diffusers: design of a one-axis charge-coupled device camera-based goniometer setup

    Science.gov (United States)

    Pitzschke, Andreas; Bertholet, Jenny; Lovisa, Blaise; Zellweger, Matthieu; Wagnières, Georges

    2017-03-01

    A one-axis charge-coupled device camera-based goniometer setup was developed to measure the three-dimensional radiance profile (longitudinal, azimuthal, and polar) of cylindrical light diffusers in air and water. An algorithm was programmed to project the two-dimensional camera data onto the diffuser coordinates. The optical system was designed to achieve a spatial resolution on the diffuser surface in the submillimeter range. The detection threshold of the detector was well below the measured radiance values. The radiance profiles of an exemplary cylindrical diffuser measured in air showed local deviations in radiance below 10% at wavelengths of 635 and 671 nm. At 808 nm, the deviations in radiance became larger, up to 45%, most probably due to the manufacturing process of the diffuser. Radiance profiles measured in water were less Lambertian than in air due to the refractive index matching, which favors radial decoupling of photons from the optical fiber.

  7. The future of space imaging. Report of a community-based study of an advanced camera for the Hubble Space Telescope

    Science.gov (United States)

    Brown, Robert A. (Editor)

    1993-01-01

    The scientific and technical basis for an Advanced Camera (AC) for the Hubble Space Telescope (HST) is discussed. In March 1992, the NASA Program Scientist for HST invited the Space Telescope Science Institute to conduct a community-based study of an AC, which would be installed on a scheduled HST servicing mission in 1999. The study had three phases: a broad community survey of views on the candidate science programs and required performance of the AC, an analysis of technical issues relating to its implementation, and a panel of experts to formulate conclusions and prioritize recommendations. From the assessment of the imaging tasks astronomers have proposed for or desired from HST, we believe the most valuable 1999 instrument would be a camera with both near-ultraviolet/optical (NUVO) and far-ultraviolet (FUV) sensitivity, and with both wide-field and high-resolution options.

  8. The calibration of cellphone camera-based colorimetric sensor array and its application in the determination of glucose in urine.

    Science.gov (United States)

    Jia, Ming-Yan; Wu, Qiong-Shui; Li, Hui; Zhang, Yu; Guan, Ya-Feng; Feng, Liang

    2015-12-15

    In this work, a novel approach that can calibrate the colors obtained with a cellphone camera was proposed for a colorimetric sensor array. Variations in ambient light conditions, imaging positions and even cellphone brands could all be compensated for by taking the black and white backgrounds of the sensor array as references, thereby yielding accurate measurements. The proposed calibration approach was successfully applied to the detection of glucose in urine by a colorimetric sensor array. Snapshots of the glucose sensor array taken by a cellphone camera were calibrated by the proposed compensation method, and urine samples at different glucose concentrations were well discriminated with no confusion after a hierarchical clustering analysis. Copyright © 2015 Elsevier B.V. All rights reserved.
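
    A minimal sketch of the reference-patch idea (the exact compensation model used in the paper is not given in the abstract): rescale each channel so that the imaged black and white backgrounds map to fixed values, cancelling illumination and device-dependent gain and offset.

    ```python
    import numpy as np

    def calibrate(rgb, black_ref, white_ref):
        """rgb: (..., 3) measured colors; black_ref/white_ref: (3,) mean RGB
        of the black and white background patches in the same snapshot.
        Returns colors normalized so black -> 0 and white -> 1 per channel."""
        rgb = np.asarray(rgb, dtype=float)
        black = np.asarray(black_ref, dtype=float)
        white = np.asarray(white_ref, dtype=float)
        return np.clip((rgb - black) / (white - black), 0.0, 1.0)
    ```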

  9. Model-based correction for scatter and tailing effects in simultaneous 99mTc and 123I imaging for a CdZnTe cardiac SPECT camera.

    Science.gov (United States)

    Holstensson, M; Erlandsson, K; Poludniowski, G; Ben-Haim, S; Hutton, B F

    2015-04-21

    An advantage of semiconductor-based dedicated cardiac single photon emission computed tomography (SPECT) cameras when compared to conventional Anger cameras is superior energy resolution. This provides the potential for improved separation of the photopeaks in dual radionuclide imaging, such as combined use of (99m)Tc and (123)I. There is, however, the added complexity of tailing effects in the detectors that must be accounted for. In this paper we present a model-based correction algorithm which extracts the useful primary counts of (99m)Tc and (123)I from projection data. Equations describing the in-patient scatter and tailing effects in the detectors are iteratively solved for both radionuclides simultaneously using a maximum a posteriori probability algorithm with one-step-late evaluation. Energy window-dependent parameters for the equations describing in-patient scatter are estimated using Monte Carlo simulations. Parameters for the equations describing tailing effects are estimated using virtually scatter-free experimental measurements on a dedicated cardiac SPECT camera with CdZnTe detectors. When applied to a phantom study with both (99m)Tc and (123)I, results show that the estimated spatial distribution of events from (99m)Tc in the (99m)Tc photopeak energy window is very similar to that measured in a single (99m)Tc phantom study. The extracted images of primary events display increased cold lesion contrasts for both (99m)Tc and (123)I.

  10. Tests of a new CCD-camera based neutron radiography detector system at the reactor stations in Munich and Vienna

    Energy Technology Data Exchange (ETDEWEB)

    Lehmann, E.; Pleinert, H. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Schillinger, B. [Technische Univ. Muenchen (Germany); Koerner, S. [Atominstitut der Oesterreichischen Universitaeten, Vienna (Austria)

    1997-09-01

    The performance of the new neutron radiography detector designed at PSI with a cooled high sensitive CCD-camera was investigated under real neutronic conditions at three beam ports of two reactor stations. Different converter screens were applied for which the sensitivity and the modulation transfer function (MTF) could be obtained. The results are very encouraging concerning the utilization of this detector system as standard tool at the radiography stations at the spallation source SINQ. (author) 3 figs., 5 refs.

  11. Programming implementation of performance testing of low light level ICCD camera based on LabVIEW software

    Science.gov (United States)

    Ni, Li; Ye, Qiong; Qian, Yunsheng

    2016-10-01

    Low light level (LLL) imaging technology plays a major role at night and in other low-illumination conditions: through a variety of LLL image intensifiers and charge-coupled devices (CCDs), image information about a target is acquired, photoelectrically converted, enhanced, stored, and displayed. In order to comprehensively test parameters of an intensified charge-coupled device (ICCD) such as the signal-to-noise ratio (SNR) and dynamic range, this paper uses the Laboratory Virtual Instrument Engineering Workbench (LabVIEW) software for programming. Data acquisition is the core of the entire program; by function, it is divided into three parts: a) initializing the acquisition card; b) collecting and storing the useful data; c) closing the acquisition card. An NI PXIe-5122 analog acquisition card and a PXIe-1435 digital acquisition card were used to collect images from PAL cameras and Camera Link cameras, so the tests work with both the analog and the digital interface of an ICCD. Once the data are obtained, the camera's performance can be analyzed by computing the programmed parameters from the data. In the experimental tests, a half-moon target was used to measure the signal-to-noise ratio and dynamic-range parameters, and a uniformity target to measure uniformity. Meanwhile, in order to increase the practicality of the program, a database module was added. LabSQL is a free, multi-database, cross-platform database-access toolkit for LabVIEW. Using LabSQL, almost any type of database can be accessed, and a variety of queries and record operations can be performed. With only simple programming, database access can be achieved in LabVIEW.

  12. Implications of Articulating Machinery on Operator Line of Sight and Efficacy of Camera Based Proximity Detection Systems

    Directory of Open Access Journals (Sweden)

    Nicholas Schwabe

    2017-07-01

    Full Text Available The underground mining industry, and some above-ground operations, rely on the use of heavy equipment that articulates to navigate corners in the tight confines of tunnels. Poor line of sight (LOS) has been identified as a problem for safe operation of this machinery. Proximity detection systems, such as a video system designed to provide a 360-degree view around the machine, have been implemented to improve the LOS available to the operator. A four-camera system was modeled in a computer environment to assess LOS on a 3D CAD model of a typical articulated machine. When positioned without any articulation, the system is excellent at removing blind spots for a machine driving straight forward or backward in a straight tunnel. Further analysis reveals that when the machine articulates in a simulated corner section, some camera locations are no longer useful for improving LOS into the corner. In some cases, the operator has a superior view into the corner compared with the best available view from the camera. The work points to the need to integrate proximity detection systems at the design, build, and manufacture stage, and to consider proper policies and procedures that address the gains and limits of the systems prior to implementation.

  13. A new star tracker concept for satellite attitude determination based on a multi-purpose panoramic camera

    Science.gov (United States)

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele; Pernechele, Claudio; Dionisio, Cesare

    2017-11-01

    This paper presents an innovative algorithm developed for attitude determination of a space platform. The algorithm exploits images taken from a multi-purpose panoramic camera equipped with hyper-hemispheric lens and used as star tracker. The sensor architecture is also original since state-of-the-art star trackers accurately image as many stars as possible within a narrow- or medium-size field-of-view, while the considered sensor observes an extremely large portion of the celestial sphere but its observation capabilities are limited by the features of the optical system. The proposed original approach combines algorithmic concepts, like template matching and point cloud registration, inherited from the computer vision and robotic research fields, to carry out star identification. The final aim is to provide a robust and reliable initial attitude solution (lost-in-space mode), with a satisfactory accuracy level in view of the multi-purpose functionality of the sensor and considering its limitations in terms of resolution and sensitivity. Performance evaluation is carried out within a simulation environment in which the panoramic camera operation is realistically reproduced, including perturbations in the imaged star pattern. Results show that the presented algorithm is able to estimate attitude with accuracy better than 1° with a success rate around 98% evaluated by densely covering the entire space of the parameters representing the camera pointing in the inertial space.

  14. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gen...

  15. Digital Pinhole Camera

    Science.gov (United States)

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…

  16. Ratiometric fluorescence transduction by hybridization after isothermal amplification for determination of zeptomole quantities of oligonucleotide biomarkers with a paper-based platform and camera-based detection

    Energy Technology Data Exchange (ETDEWEB)

    Noor, M. Omair; Hrovat, David [Chemical Sensors Group, Department of Chemical and Physical Sciences, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, ON L5L 1C6 (Canada); Moazami-Goudarzi, Maryam [Department of Cell and Systems Biology, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, ON L5L 1C6 (Canada); Espie, George S. [Department of Cell and Systems Biology, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, ON L5L 1C6 (Canada); Department of Biology, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, ON L5L 1C6 (Canada); Krull, Ulrich J., E-mail: ulrich.krull@utoronto.ca [Chemical Sensors Group, Department of Chemical and Physical Sciences, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, ON L5L 1C6 (Canada)

    2015-07-23

    Highlights: • Solid-phase QD-FRET transduction of isothermal tHDA amplicons on paper substrates. • Ratiometric QD-FRET transduction improves assay precision and lowers the detection limit. • Zeptomole detection limit by an iPad camera after isothermal amplification. • Tunable assay sensitivity by immobilizing different amounts of QD–probe bioconjugates. - Abstract: Paper is a promising platform for the development of decentralized diagnostic assays owing to the low cost and ease of use of paper-based analytical devices (PADs). It can be challenging to detect on PADs very low concentrations of nucleic acid biomarkers of lengths as used in clinical assays. Herein we report the use of thermophilic helicase-dependent amplification (tHDA) in combination with a paper-based platform for fluorescence detection of probe-target hybridization. Paper substrates were patterned using wax printing. The cellulosic fibers were chemically derivatized with imidazole groups for the assembly of the transduction interface that consisted of immobilized quantum dot (QD)–probe oligonucleotide conjugates. Green-emitting QDs (gQDs) served as donors with Cy3 as the acceptor dye in a fluorescence resonance energy transfer (FRET)-based transduction method. After probe-target hybridization, a further hybridization event with a reporter sequence brought the Cy3 acceptor dye in close proximity to the surface of immobilized gQDs, triggering a FRET sensitized emission that served as an analytical signal. Ratiometric detection was evaluated using both an epifluorescence microscope and a low-cost iPad camera as detectors. Addition of the tHDA method for target amplification to produce sequences of ∼100 base length allowed for the detection of zmol quantities of nucleic acid targets using the two detection platforms. The ratiometric QD-FRET transduction method not only offered improved assay precision, but also lowered the limit of detection of the assay when compared with the non

  17. Progress toward magnetic confinement of a positron-electron plasma: nearly 100% positron injection efficiency into a dipole trap

    Science.gov (United States)

    Stoneking, Matthew

    2017-10-01

    The hydrogen atom provides the simplest system, and in some cases the most precise one, for comparing theory and experiment in atomic physics. The field of plasma physics lacks an experimental counterpart, but there are efforts underway to produce a magnetically confined positron-electron plasma that promises to represent the simplest plasma system. The mass symmetry of a positron-electron plasma makes it particularly tractable from a theoretical standpoint, and many theory papers have been published predicting modified wave and stability properties in these systems. Our approach is to utilize techniques from the non-neutral plasma community to trap and accumulate electrons and positrons prior to mixing them in a magnetic trap with good confinement properties. Ultimately we aim to use a levitated superconducting dipole configuration fueled by positrons from a reactor-based positron source and buffer-gas trap. To date we have conducted experiments to characterize and optimize the positron beam and test strategies for injecting positrons into the field of a supported permanent magnet by use of ExB drifts and tailored static and dynamic potentials applied to boundary electrodes and to the magnet itself. Nearly 100% injection efficiency has been achieved under certain conditions, and some fraction of the injected positrons are confined for as long as 400 ms. These results are promising for the next step in the project, which is to use an inductively energized high-Tc superconducting coil to produce the dipole field, initially in a supported configuration, but ultimately levitated using feedback stabilization. Work performed with the support of the German Research Foundation (DFG), JSPS KAKENHI, the NIFS Collaboration Research Program, and the UCSD Foundation.

  18. Positron interactions with water–total elastic, total inelastic, and elastic differential cross section measurements

    Energy Technology Data Exchange (ETDEWEB)

    Tattersall, Wade [Centre for Antimatter-Matter Studies, Research School of Physics and Engineering, The Australian National University, Canberra, ACT 0200 (Australia); Centre for Antimatter-Matter Studies, School of Engineering and Physical Sciences, James Cook University, Townsville, 4810 Queensland (Australia); Chiari, Luca [Centre for Antimatter-Matter Studies, School of Chemical and Physical Sciences, Flinders University, GPO Box 2100, Adelaide 5001, South Australia (Australia); Machacek, J. R.; Anderson, Emma; Sullivan, James P. [Centre for Antimatter-Matter Studies, Research School of Physics and Engineering, The Australian National University, Canberra, ACT 0200 (Australia); White, Ron D. [Centre for Antimatter-Matter Studies, School of Engineering and Physical Sciences, James Cook University, Townsville, 4810 Queensland (Australia); Brunger, M. J. [Centre for Antimatter-Matter Studies, School of Chemical and Physical Sciences, Flinders University, GPO Box 2100, Adelaide 5001, South Australia (Australia); Institute of Mathematical Sciences, University of Malaya, 50603 Kuala Lumpur (Malaysia); Buckman, Stephen J. [Centre for Antimatter-Matter Studies, Research School of Physics and Engineering, The Australian National University, Canberra, ACT 0200 (Australia); Institute of Mathematical Sciences, University of Malaya, 50603 Kuala Lumpur (Malaysia); Garcia, Gustavo [Instituto de Física Fundamental, Consejo Superior de Investigaciones Científicas (CSIC), Serrano 113-bis, E-28006 Madrid (Spain); Blanco, Francisco [Departamento de Física Atómica, Molecular y Nuclear, Universidad Complutense de Madrid, E-28040 Madrid (Spain)

    2014-01-28

    Utilising a high-resolution, trap-based positron beam, we have measured both elastic and inelastic scattering of positrons from water vapour. The measurements comprise differential elastic, total elastic, and total inelastic (not including positronium formation) absolute cross sections. The energy range investigated is from 1 eV to 60 eV. Comparison with theory is made with both R-Matrix and distorted wave calculations, and with our own application of the Independent Atom Model for positron interactions.

  19. NFC - Narrow Field Camera

    Science.gov (United States)

    Koukal, J.; Srba, J.; Gorková, S.

    2015-01-01

    We have been introducing a low-cost CCTV video system for faint meteor monitoring, and here we describe the first results from 5 months of two-station operations. Our system, called NFC (Narrow Field Camera), with a meteor limiting magnitude around +6.5 mag, allows research on the trajectories of less massive meteoroids within individual parent meteor showers and the sporadic background. At present, 4 stations (2 pairs with coordinated fields of view) of the NFC system are operated in the frame of CEMeNt (Central European Meteor Network). The heart of each NFC station is a sensitive CCTV camera Watec 902 H2 and a fast cinematographic lens Meopta Meostigmat 1/50 - 52.5 mm (50 mm focal length and fixed aperture f/1.0). In this paper we present the first results based on 1595 individual meteors, 368 of which were recorded from two stations simultaneously. This data set allows the first empirical verification of theoretical assumptions for the NFC system's capabilities (stellar and meteor magnitude limit, meteor apparent brightness distribution and accuracy of single-station measurements) and the first low-mass meteoroid trajectory calculations. Our experimental data clearly show the capabilities of the proposed system for low-mass meteor registration, and calculations based on NFC data lead to a significant refinement of the orbital elements for low-mass meteoroids.

  20. Channeling crystals for positron production

    Energy Technology Data Exchange (ETDEWEB)

    Decker, F.J.

    1991-05-01

    Particles traversing a single crystal at small angles to a crystal axis experience the collective scattering force of many crystal atoms. The enormous fields can trap the particles along an axis or plane, an effect called channeling. High-energy electrons are attracted by the positive nuclei and therefore produce strongly enhanced, so-called coherent bremsstrahlung and pair production. These effects could be used in a positron production target: a single tungsten crystal is oriented to the incident electron beam to within 1 mrad. At 28 GeV/c the effective radiation length is 0.9 mm, about one quarter of that of the amorphous material. The target can therefore be shorter, which yields a higher conversion coefficient and a lower emittance of the positron beam. This makes single crystals very interesting for positron production targets. 18 refs., 2 figs.

  1. Positron emitter labeled enzyme inhibitors

    Science.gov (United States)

    Fowler, J.S.; MacGregor, R.R.; Wolf, A.P.

    1987-05-22

    This invention involves a new strategy for imaging and mapping enzyme activity in the living human and animal body using positron emitter-labeled suicide enzyme inactivators or inhibitors which become covalently bound to the enzyme as a result of enzymatic catalysis. Two such suicide inactivators for monoamine oxidase have been labeled with carbon-11 and used to map the enzyme subtypes in the living human and animal body using PET. By using positron emission tomography to image the distribution of radioactivity produced by the body-penetrating radiation emitted by carbon-11, a map of functionally active monoamine oxidase activity is obtained. Clorgyline and L-deprenyl are suicide enzyme inhibitors and irreversibly inhibit monoamine oxidase. When these inhibitors are labeled with carbon-11 they provide selective probes for monoamine oxidase localization and reactivity in vivo using positron emission tomography. 2 figs.

  2. A novel optical apparatus for the study of rolling contact wear/fatigue based on a high-speed camera and multiple-source laser illumination.

    Science.gov (United States)

    Bodini, I; Sansoni, G; Lancini, M; Pasinetti, S; Docchio, F

    2016-08-01

    Rolling contact wear/fatigue tests on wheel/rail specimens are important for producing wheels and rails of new materials with improved lifetime and performance, able to operate in harsh environments and at high rolling speeds. This paper presents a novel non-invasive, all-optical system, based on a high-speed video camera and multiple laser illumination sources, which is able to continuously monitor the dynamics of the specimens used to test wheel and rail materials in a laboratory test bench. Measurements of the 3D macro-topography and of the angular position of the specimen are performed simultaneously, together with the acquisition of the surface micro-topography, at speeds up to 500 rpm, making use of a fast camera and image processing algorithms. Synthetic indexes for surface micro-topography classification are defined, the 3D macro-topography is measured with a standard uncertainty down to 0.019 mm, and the angular position is measured on a purposely developed analog encoder with a standard uncertainty of 2.9°. The very short camera exposure time yields blur-free images with excellent definition. The system is described with the aid of end-of-cycle specimens as well as of in-test specimens.

  4. Three-dimensional camera

    Science.gov (United States)

    Bothe, Thorsten; Gesierich, Achim; Legarda-Saenz, Ricardo; Jueptner, Werner P. O.

    2003-05-01

    Industrial and multimedia applications need cost-effective, compact and flexible 3D profiling instruments. In the talk we show the principle of, applications for, and results from a new miniaturized 3D profiling system for macroscopic scenes. The system uses a compact housing and is usable like a camera, with minimal stabilization such as a tripod. The system is based on the common fringe projection technique. Camera and projector are assembled with parallel optical axes having coplanar projection and imaging planes. The distance between their axes is comparable to the distance between human eyes, giving a complete system of 21x20x11 cm size and allowing high-gradient objects, such as the interior of tubes, to be measured. The fringe projector uses an LCD, which enables fast and flexible pattern projection. Camera and projector have a short focal length and a high system aperture as well as a large depth of focus. Thus, objects can be measured from a shorter distance compared to common systems (e.g. 1 m sized objects at 80 cm distance). Objects with diameters up to 4 m can be profiled because the set-up allows working with a completely opened aperture combined with bright lamps, giving a large amount of available light and a high signal-to-noise ratio. Normally a small basis has the disadvantage of reduced sensitivity. We investigated methods to compensate for the reduced sensitivity via the setup and enhanced evaluation methods. For measurement we use synthetic wavelengths. The developed algorithms are completely adaptable to the user's needs regarding speed and accuracy. The 3D camera is built from low-cost components, is robust and nearly handheld, and delivers insights also into difficult technical objects like tubes and inside volumes. Besides the realized high-resolution phase measurement, system calibration is an important task for usability. While calibrating with common photogrammetric models (which are typically used for actual fringe projection systems) problems were found that

  5. Image reconstruction for a Positron Emission Tomograph optimized for breast cancer imaging

    Energy Technology Data Exchange (ETDEWEB)

    Virador, Patrick R.G. [Univ. of California, Berkeley, CA (United States)

    2000-04-01

    The author performs image reconstruction for a novel Positron Emission Tomography camera that is optimized for breast cancer imaging. This work addresses for the first time the problem of fully-3D, tomographic reconstruction using a septa-less, stationary (i.e. no rotation or linear motion), rectangular camera whose Field of View (FOV) encompasses the entire volume enclosed by detector modules capable of measuring Depth of Interaction (DOI) information. The camera is rectangular in shape in order to accommodate breasts of varying sizes while allowing for soft compression of the breast during the scan. This non-standard geometry of the camera exacerbates two problems: (a) radial elongation due to crystal penetration and (b) reconstructing images from irregularly sampled data. Packing considerations also give rise to regions in projection space that are not sampled, which leads to missing information. The author presents new Fourier-method-based image reconstruction algorithms that incorporate DOI information and accommodate the irregular sampling of the camera in a consistent manner by defining lines of response (LORs) between the measured interaction points, instead of rebinning the events into predefined crystal-face LORs, which is the only other method proposed thus far to handle DOI information. The new procedures maximize the use of the increased sampling provided by the DOI while minimizing interpolation in the data. The new algorithms use fixed-width evenly spaced radial bins in order to take advantage of the speed of the Fast Fourier Transform (FFT), which necessitates the use of irregular angular sampling in order to minimize the number of unnormalizable Zero-Efficiency Bins (ZEBs). In order to address the persisting ZEBs and the issue of missing information originating from packing considerations, the algorithms (a) perform nearest-neighbor smoothing in 2D in the radial bins and (b) employ a semi-iterative procedure in order to estimate the unsampled data
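    As a generic illustration of how an event pair maps onto sinogram coordinates (our own 2D sketch with a hypothetical function name, not the author's algorithm), the line of response through the two measured interaction points can be parameterized by its signed radial offset s and orientation phi:

```python
import numpy as np

def lor_sinogram_coords(p1, p2):
    """Sinogram coordinates (s, phi) of the line of response through two
    measured interaction points p1, p2 (2D, with the depth of interaction
    already folded into the point positions).

    The LOR satisfies x*cos(phi) + y*sin(phi) = s with phi in [0, pi).
    """
    p1 = np.asarray(p1, dtype=float)
    p2 = np.asarray(p2, dtype=float)
    dx, dy = p2 - p1
    alpha = np.arctan2(dy, dx)            # direction of the line
    phi = (alpha + np.pi / 2.0) % np.pi   # orientation of its normal
    s = p1[0] * np.cos(phi) + p1[1] * np.sin(phi)  # signed radial offset
    return s, phi
```

    Binning s into fixed-width, evenly spaced radial bins is what makes the FFT applicable, at the price of the irregular angular sampling described above.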

  6. Novel targets for positron emission tomography (PET) radiopharmaceutical tracers for visualization of neuroinflammation

    Science.gov (United States)

    Shchepetkin, I.; Shvedova, M.; Anfinogenova, Y.; Litvak, M.; Atochin, D.

    2017-08-01

    Non-invasive molecular imaging techniques can enhance the diagnosis of neurological diseases and help achieve their successful treatment. Positron emission tomography (PET) imaging can identify activated microglia and provide detailed functional information based on molecular biology. This imaging modality is based on the detection of isotope-labeled tracers, which emit positrons. The review summarizes the development of various radiolabeled ligands for PET imaging of neuroinflammation.

  7. Comparison of Anger camera and BGO mosaic position-sensitive detectors for `Super ACAR`. Precision electron momentum densities via angular correlation of annihilation radiation

    Energy Technology Data Exchange (ETDEWEB)

    Mills, A.P. Jr. [Bell Labs. Murray Hill, NJ (United States); West, R.N.; Hyodo, Toshio

    1997-03-01

    We discuss the relative merits of Anger cameras and Bismuth Germanate mosaic counters for measuring the angular correlation of positron annihilation radiation at a facility such as the proposed Positron Factory at Takasaki. The two possibilities appear equally cost effective at this time. (author)

  8. Compact 3D camera

    Science.gov (United States)

    Bothe, Thorsten; Osten, Wolfgang; Gesierich, Achim; Jueptner, Werner P. O.

    2002-06-01

    A new, miniaturized fringe projection system is presented whose size and handling approximate those of common 2D cameras. The system is based on the fringe projection technique. A miniaturized fringe projector and camera are assembled into a housing of 21x20x11 cm size with a triangulation basis of 10 cm. The advantage of the small triangulation basis is the possibility to measure difficult objects with high gradients. Normally a small basis has the disadvantage of reduced sensitivity. We investigated methods to compensate for the reduced sensitivity via the setup and enhanced evaluation methods. Special hardware issues are a high-quality, bright light source (and components to handle the high luminous flux) as well as adapted optics to gain a large aperture angle and a focus scan unit to increase the usable measurement volume. Adaptable synthetic wavelengths and integration times were used to increase the measurement quality and allow robust measurements tuned to the desired speed and accuracy. Algorithms were developed to generate automatic focus positions to completely cover extended measurement volumes. Principles, setup, measurement examples and applications are shown.
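    The synthetic-wavelength idea can be illustrated with a minimal sketch (our own toy example, not the authors' code): two fringe periods lam1 < lam2 produce a much longer beat wavelength lam1*lam2/(lam2-lam1), whose coarse, unambiguous phase resolves the fringe order of the precise fine-wavelength measurement.

```python
import numpy as np

def two_wavelength_height(phi1, phi2, lam1, lam2):
    """Recover an absolute height from two wrapped fringe phases measured
    at fringe periods lam1 < lam2 (toy two-wavelength unwrapping)."""
    big_lam = lam1 * lam2 / (lam2 - lam1)        # synthetic (beat) wavelength
    phi_s = np.mod(phi1 - phi2, 2.0 * np.pi)     # coarse phase on big_lam
    h_coarse = big_lam * phi_s / (2.0 * np.pi)   # unambiguous but noisy
    h_fine = lam1 * phi1 / (2.0 * np.pi)         # precise but wrapped
    order = np.round((h_coarse - h_fine) / lam1) # fringe order of lam1
    return h_fine + order * lam1
```

    The coarse estimate only needs to be accurate to within half a fine period for the rounding step to pick the correct fringe order, which is why the scheme tolerates noise in the synthetic phase.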

  9. Camera self-calibration from translation by referring to a known camera.

    Science.gov (United States)

    Zhao, Bin; Hu, Zhaozheng

    2015-09-01

    This paper presents a novel linear method for camera self-calibration by referring to a known (or calibrated) camera. The method requires at least three images, with two images generated by the uncalibrated camera from pure translation and one image generated by the known reference camera. We first propose a method to compute the infinite homography from scene depths. Based on this, we use two images generated by translating the uncalibrated camera to recover scene depths, which are further utilized to linearly compute the infinite homography between an arbitrary uncalibrated image, and the image from the known camera. With the known camera as reference, the computed infinite homography is readily decomposed for camera calibration. The proposed self-calibration method has been tested with simulation and real image data. Experimental results demonstrate that the method is practical and accurate. This paper proposes using a "known reference camera" for camera calibration. The pure translation, as required in the method, is much more maneuverable, compared with some strict motions in the literature, such as pure rotation. The proposed self-calibration method has good potential for solving online camera calibration problems, which has important applications, especially for multicamera and zooming camera systems.
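    The decomposition step at the heart of such methods can be sketched as follows (our own illustration with assumed names, taking the infinite homography H and the reference intrinsics K_ref as already computed): since H equals K·R·inv(K_ref) up to scale, the product H·K_ref is an upper-triangular matrix times a rotation, so an RQ decomposition separates the two factors.

```python
import numpy as np
from scipy.linalg import rq

def intrinsics_from_infinite_homography(H, K_ref):
    """Recover the unknown intrinsics K and rotation R from an infinite
    homography H satisfying H ~ K @ R @ inv(K_ref) (up to scale)."""
    M = H @ K_ref                     # M ~ K @ R: upper-triangular * rotation
    K, R = rq(M)                      # RQ decomposition of M
    S = np.diag(np.sign(np.diag(K)))  # force a positive diagonal on K
    K, R = K @ S, S @ R
    K /= K[2, 2]                      # remove the unknown projective scale
    if np.linalg.det(R) < 0:          # absorb a possible negative scale
        R = -R
    return K, R
```

    The RQ factorization is unique once the triangular factor is constrained to a positive diagonal, which is what makes the recovery well defined.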

  10. Production and application of pulsed slow-positron beam using an electron LINAC

    Energy Technology Data Exchange (ETDEWEB)

    Yamazaki, Tetsuo; Suzuki, Ryoichi; Ohdaira, Toshiyuki; Mikado, Tomohisa [Electrotechnical Lab., Tsukuba, Ibaraki (Japan); Kobayashi, Yoshinori

    1997-03-01

    A slow-positron beam is quite useful for non-destructive material research. At the Electrotechnical Laboratory (ETL), an intense slow-positron beam line exploiting an electron linac has been constructed in order to carry out various experiments on material analysis. The beam line can generate pulsed positron beams of variable energy and variable pulse period. Many experiments have been carried out so far with the beam line. In this paper, the various capabilities of the intense pulsed positron beam are presented, based on the experience at the ETL, and the prospects for the future are discussed. (author)

  11. Implementation of an X ray image plate camera in characterisation and crystallisation studies of iron based alloys

    CERN Document Server

    Steer, W A

    2001-01-01

    Developed in the early 1980s, versatile X-ray storage phosphor screens have opened up new possibilities in diffraction instruments for crystallography. Originally adopted by high-pressure researchers using diamond-anvil cells and very small sample volumes, flat phosphor screens give a great advantage because of their high intrinsic sensitivity. But less demanding applications still stand to benefit from the increased throughput and enhanced count rates made possible by this technology. With this in mind, the Curved Image Plate (CIP) camera, a large-radius (350 mm and 185 mm) Debye-Scherrer instrument primarily designed for use with capillary-contained powder samples, has been devised. As a substantial part of this work, new software to pre-process the data, calibration procedures and modes of operation were developed to enable the full potential of the system to be realised. One particular application of the CIP camera is the comparative study of a large number of samples, for example as a function of heat treatment. Amorpho...

  12. An infrared range camera-based approach for three-dimensional locomotion tracking and pose reconstruction in a rodent.

    Science.gov (United States)

    Ou-Yang, Tai-Hsien; Tsai, Meng-Li; Yen, Chen-Tung; Lin, Ta-Te

    2011-09-30

    We herein introduce an automated three-dimensional (3D) locomotion tracking and pose reconstruction system for rodents with superior robustness, rapidity, reliability, resolution, simplicity, and cost. An off-the-shelf composite infrared (IR) range camera was adopted to grab high-resolution depth images (640×480×2048 pixels at 20 Hz) in our system for automated behavior analysis. Exploiting the inherent 3D structure of the depth images, we developed a compact algorithm to reconstruct locomotion and body behavior with high temporal and spatial resolution. Since the range camera operates in the IR spectrum, interference from the visible light spectrum did not affect the tracking performance. The accuracy of our system was 98.1±3.2%. We also validated the system, which yielded a strong correlation between automated and manual tracking. Meanwhile, the system replicates a detailed dynamic rat model in virtual space, which demonstrates the movements of the extremities of the body and locomotion in detail on varied terrain. Copyright © 2011 Elsevier B.V. All rights reserved.

  13. Studies of positron induced luminescence from polymers

    Energy Technology Data Exchange (ETDEWEB)

    Xu, J.; Hulett, L.D. Jr.; Lewis, T.A. [Oak Ridge National Lab., TN (United States); Tolk, N.H. [Vanderbilt Univ., Nashville, TN (United States). Dept. of Physics and Astronomy

    1994-06-01

    Light emission from polymers (anthracene dissolved in polystyrene) induced by low-energy positrons and electrons has been studied. Results indicate a clear difference between optical emissions under positron and electron bombardment. The positron-induced luminescence spectrum is believed to be generated by both collisional and annihilation processes.

  14. High Speed Digital Camera Technology Review

    Science.gov (United States)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  15. DHCAL with Minimal Absorber: Measurements with Positrons

    CERN Document Server

    Freund, B; Repond, J.; Schlereth, J.; Xia, L.; Dotti, A.; Grefe, C.; Ivantchenko, V.; Antequera, J.Berenguer; Calvo Alamillo, E.; Fouz, M.C.; Marin, J.; Puerta-Pelayo, J.; Verdugo, A.; Brianne, E.; Ebrahimi, A.; Gadow, K.; Göttlicher, P.; Günter, C.; Hartbrich, O.; Hermberg, B.; Irles, A.; Krivan, F.; Krüger, K.; Kvasnicka, J.; Lu, S.; Lutz, B.; Morgunov, V.; Provenza, A.; Reinecke, M.; Sefkow, F.; Schuwalow, S.; Tran, H.L.; Garutti, E.; Laurien, S.; Matysek, M.; Ramilli, M.; Schroeder, S.; Bilki, B.; Norbeck, E.; Northacker, D.; Onel, Y.; Cvach, J.; Gallus, P.; Havranek, M.; Janata, M.; Kovalcuk, M.; Kvasnicka, J.; Lednicky, D.; Marcisovsky, M.; Polak, I.; Popule, J.; Tomasek, L.; Tomasek, M.; Sicho, P.; Smolik, J.; Vrba, V.; Zalesak, J.; van Doren, B.; Wilson, G.W.; Kawagoe, K.; Hirai, H.; Sudo, Y.; Suehara, T.; Sumida, H.; Takada, S.; Tomita, T.; Yoshioka, T.; Bilokin, S.; Bonis, J.; Cornebise, P.; Pöschl, R.; Richard, F.; Thiebault, A.; Zerwas, D.; Hostachy, J.Y.; Morin, L.; Besson, D.; Chadeeva, M.; Danilov, M.; Markin, O.; Popova, E.; Gabriel, M.; Goecke, P.; Kiesling, C.; Kolk, N.van der; Simon, F.; Szalay, M.; Corriveau, F.; Blazey, G.C.; Dyshkant, A.; Francis, K.; Zutshi, V.; Kotera, K.; Ono, H.; Takeshita, T.; Ieki, S.; Kamiya, Y.; Ootani, W.; Shibata, N.; Jeans, D.; Komamiya, S.; Nakanishi, H.

    2016-01-01

    In special tests, the active layers of the CALICE Digital Hadron Calorimeter prototype, the DHCAL, were exposed to low energy particle beams, without being interleaved by absorber plates. The thickness of each layer corresponded approximately to 0.29 radiation lengths or 0.034 nuclear interaction lengths, defined mostly by the copper and steel skins of the detector cassettes. This paper reports on measurements performed with this device in the Fermilab test beam with positrons in the energy range of 1 to 10 GeV. The measurements are compared to simulations based on GEANT4 and a standalone program to emulate the detailed response of the active elements.

  16. Liquid Xenon Detectors for Positron Emission Tomography

    Science.gov (United States)

    Miceli, A.; Amaudruz, P.; Benard, F.; Bryman, D. A.; Kurchaninov, L.; Martin, J. P.; Muennich, A.; Retiere, F.; Ruth, T. J.; Sossi, V.; Stoessl, A. J.

    2011-09-01

    PET is a functional imaging technique based on the detection of annihilation photons following beta decays that produce positrons. In this paper, we present the concept of a new PET system for preclinical applications consisting of a ring of twelve time projection chambers filled with liquid xenon and viewed by avalanche photodiodes. Simultaneous measurement of ionization charge and scintillation light leads to a significant improvement in spatial resolution, image quality, and sensitivity. Simulated performance shows that an energy resolution of < 10% (FWHM) and a sensitivity of 15% are achievable. First tests with a prototype TPC indicate a position resolution of < 1 mm (FWHM).

  17. The VISTA infrared camera

    Science.gov (United States)

    Dalton, G. B.; Caldwell, M.; Ward, A. K.; Whalley, M. S.; Woodhouse, G.; Edeson, R. L.; Clark, P.; Beard, S. M.; Gallie, A. M.; Todd, S. P.; Strachan, J. M. D.; Bezawada, N. N.; Sutherland, W. J.; Emerson, J. P.

    2006-06-01

    We describe the integration and test phase of the construction of the VISTA Infrared Camera, a 64 Megapixel, 1.65 degree field of view 0.9-2.4 micron camera which will soon be operating at the cassegrain focus of the 4m VISTA telescope. The camera incorporates sixteen IR detectors and six CCD detectors which are used to provide autoguiding and wavefront sensing information to the VISTA telescope control system.

  18. Streak camera meeting summary

    Energy Technology Data Exchange (ETDEWEB)

    Dolan, Daniel H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bliss, David E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    Streak cameras are important for high-speed data acquisition in single event experiments, where the total recorded information (I) is shared between the number of measurements (M) and the number of samples (S). Topics of this meeting included: streak camera use at the national laboratories; current streak camera production; new tube developments and alternative technologies; and future planning. Each topic is summarized in the following sections.

  19. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  20. Airborne Network Camera Standard

    Science.gov (United States)

    2015-06-01

    Optical Systems Group Document 466-15, Airborne Network Camera Standard. Distribution A: approved for public release. The standard addresses airborne network camera systems, which have so far lacked a focus of standardization for interoperable command and control, storage, and data streaming.

  1. Simulation of the annihilation emission of galactic positrons; Modelisation de l'emission d'annihilation des positrons Galactiques

    Energy Technology Data Exchange (ETDEWEB)

    Gillard, W

    2008-01-15

    Positrons annihilate in the central region of our Galaxy. This has been known since the detection of a strong emission line centered at an energy of 511 keV in the direction of the Galactic center. This gamma-ray line is emitted during the annihilation of positrons with electrons from the interstellar medium. The spectrometer SPI, onboard the INTEGRAL observatory, performed spatial and spectral analyses of the positron annihilation emission. This thesis presents a study of the Galactic positron annihilation emission based on models of the different interactions undergone by positrons in the interstellar medium. The models rely on our present knowledge of the properties of the interstellar medium in the Galactic bulge, where most of the positrons annihilate, and of the physics of positrons (production, propagation and annihilation processes). In order to obtain constraints on the positron sources and the physical characteristics of the annihilation medium, we compared the results of the models to measurements provided by the SPI spectrometer. (author)

  2. Optimal configuration of a low-dose breast-specific gamma camera based on semiconductor CdZnTe pixelated detectors

    Science.gov (United States)

    Genocchi, B.; Pickford Scienti, O.; Darambara, DG

    2017-05-01

    Breast cancer is one of the most frequent tumours in women. During the '90s, the introduction of screening programmes allowed the detection of cancer before the palpable stage, reducing its mortality by up to 50%. About 50% of women aged between 30 and 50 years present dense breast parenchyma; this percentage decreases to 30% for women between 50 and 80 years. In these women, mammography has a sensitivity of around 30%, and small tumours are covered by the dense parenchyma and missed in the mammogram. Interestingly, breast-specific gamma cameras based on semiconductor CdZnTe detectors have been shown to be of great interest for early diagnosis. In fact, due to the high energy and spatial resolution and the high sensitivity of CdZnTe, molecular breast imaging has been shown to have a sensitivity of about 90% independently of the breast parenchyma. The aim of this work is to determine the optimal combination of detector pixel size, hole shape, and collimator material in a low-dose dual-head breast-specific gamma camera based on a CdZnTe pixelated detector at 140 keV, in order to achieve a high count rate and the best possible image spatial resolution. The optimal combination has been studied by modeling the system using the Monte Carlo code GATE. Six different pixel sizes from 0.85 mm to 1.6 mm, two hole shapes (hexagonal and square), and two collimator materials (lead and tungsten) were considered. It was demonstrated that the camera achieved higher count rates and a better signal-to-noise ratio when equipped with square holes and large pixels (> 1.3 mm). In these configurations the spatial resolution was worse than with small pixel sizes (< 1.3 mm), but remained under 3.6 mm in all cases.
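    The trade-off being optimised (larger holes and pixels raise the count rate but blur the image) can be illustrated with the textbook parallel-hole collimator formulas attributed to Anger; this is a generic sketch with assumed example dimensions, not the geometries simulated with GATE:

```python
def collimator_resolution_fwhm(d, l_eff, z):
    """Geometric resolution (FWHM) of a parallel-hole collimator:
    hole diameter d, effective hole length l_eff, source distance z
    (all in mm). Resolution degrades linearly with distance."""
    return d * (l_eff + z) / l_eff

def collimator_efficiency(d, t, l_eff, k=0.26):
    """Geometric efficiency (transmitted fraction of emitted photons);
    t is the septal thickness and k ~ 0.26 for hexagonal holes."""
    return (k * d * d / (l_eff * (d + t))) ** 2

# Widening the holes raises the count rate but degrades resolution.
for d in (0.85, 1.3, 1.6):
    r = collimator_resolution_fwhm(d, l_eff=24.0, z=50.0)
    g = collimator_efficiency(d, t=0.2, l_eff=24.0)
```

    Because efficiency grows roughly as the fourth power of the hole diameter while resolution only degrades linearly, count-rate-limited designs tend toward larger holes, as the square-hole large-pixel result above reflects.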

  3. PEGylation of HPMA-based block copolymers enhances tumor accumulation in vivo: a quantitative study using radiolabeling and positron emission tomography.

    Science.gov (United States)

    Allmeroth, Mareli; Moderegger, Dorothea; Gündel, Daniel; Buchholz, Hans-Georg; Mohr, Nicole; Koynov, Kaloian; Rösch, Frank; Thews, Oliver; Zentel, Rudolf

    2013-11-28

    This paper reports the body distribution of block copolymers (made by controlled radical polymerization) with N-(2-hydroxypropyl)methacrylamide (HPMA) as the hydrophilic block and lauryl methacrylate (LMA) as the hydrophobic block; they form micellar aggregates in aqueous solution. For this study the hydrophilic/hydrophobic balance was varied by incorporation of differing amounts of poly(ethylene glycol) (PEG) side chains into the hydrophilic block, while keeping the degree of polymerization of both blocks constant. PEGylation reduced the size of the micellar aggregates (Rh = 113 to 38 nm), reaching a minimum size at 7% PEG side chains. Polymers were labeled with the positron emitter (18)F, which enables their biodistribution pattern to be monitored for up to 4 h with high spatial resolution. These block copolymers were investigated in vivo in Sprague-Dawley rats bearing the Walker 256 mammary carcinoma. Organ/tumor uptake was quantified by ex vivo biodistribution as well as small-animal positron emission tomography (PET). All polymers showed renal clearance with time. Their uptake in liver and spleen decreased with the size of the aggregates. This makes the PEGylated polymers, which form smaller aggregates, attractive, as they show a higher blood pool concentration. Within the studied polymers, the block copolymer with 7% PEGylation exhibited the most favorable organ distribution pattern, showing the highest blood-circulation level as well as the lowest hepatic and splenic uptake. Most remarkably, the in vivo results revealed a continuous increase in tumor accumulation with PEGylation (independent of the blood pool concentration), starting from the lowest tumor uptake for the pure block copolymer to the highest enrichment with 11% PEG side chains. These findings emphasize the need for reliable (non-invasive) in vivo techniques revealing overall polymer distribution and helping to identify drug carrier systems for efficient therapy. © 2013.

  4. Invariant Observer-Based State Estimation for Micro-Aerial Vehicles in GPS-Denied Indoor Environments Using an RGB-D Camera and MEMS Inertial Sensors

    Directory of Open Access Journals (Sweden)

    Dachuan Li

    2015-04-01

    This paper presents a non-linear state observer-based integrated navigation scheme for estimating the attitude, position and velocity of micro aerial vehicles (MAVs) operating in GPS-denied indoor environments, using measurements from low-cost MEMS (micro-electro-mechanical systems) inertial sensors and an RGB-D camera. A robust RGB-D visual odometry (VO) approach was developed to estimate the MAV's relative motion by extracting and matching features captured by the RGB-D camera from the environment. The state observer of the RGB-D visual-aided inertial navigation was then designed based on the invariant observer theory for systems possessing symmetries. The motion estimates from the RGB-D VO were fused with inertial and magnetic measurements from the onboard MEMS sensors via the state observer, providing the MAV with accurate estimates of its full six-degree-of-freedom state. Implementation on a quadrotor MAV and indoor flight test results demonstrate that the resulting state observer is effective in estimating the MAV's states without relying on external navigation aids such as GPS. Its computational efficiency and simplicity of gain tuning make the proposed invariant observer-based navigation scheme appealing for actual MAV applications in indoor environments.
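    As a much-simplified illustration of the predict/correct structure of such a fusion scheme (a scalar constant-gain toy of our own, not the invariant observer derived in the paper; the gains l1 and l2 are arbitrary assumptions):

```python
def observer_step(p, v, a_meas, z_pos, dt, l1=0.8, l2=0.5):
    """One observer step along a single axis: predict position p and
    velocity v from the accelerometer reading a_meas (gravity-compensated),
    then correct both states with the visual-odometry position fix z_pos."""
    p_pred = p + v * dt + 0.5 * a_meas * dt * dt  # strapdown prediction
    v_pred = v + a_meas * dt
    e = z_pos - p_pred                            # innovation from the VO
    return p_pred + l1 * e, v_pred + l2 * e
```

    Iterating this step drives the estimate to the true trajectory; the actual scheme performs the analogous symmetry-preserving correction on the full six-degree-of-freedom state, including attitude.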

  5. Junocam: Juno's Outreach Camera

    Science.gov (United States)

    Hansen, C. J.; Caplinger, M. A.; Ingersoll, A.; Ravine, M. A.; Jensen, E.; Bolton, S.; Orton, G.

    2017-11-01

    Junocam is a wide-angle camera designed to capture the unique polar perspective of Jupiter offered by Juno's polar orbit. Junocam's four-color images include the best spatial resolution ever acquired of Jupiter's cloudtops. Junocam will look for convective clouds and lightning in thunderstorms and derive the heights of the clouds. Junocam will support Juno's radiometer experiment by identifying any unusual atmospheric conditions such as hotspots. Junocam is on the spacecraft explicitly to reach out to the public and share the excitement of space exploration. The public is an essential part of our virtual team: amateur astronomers will supply ground-based images for use in planning, the public will weigh in on which images to acquire, and the amateur image processing community will help process the data.

  6. Clinical usefulness of augmented reality using infrared camera based real-time feedback on gait function in cerebral palsy: a case study.

    Science.gov (United States)

    Lee, Byoung-Hee

    2016-04-01

    [Purpose] This study investigated the effects of real-time feedback using infrared camera recognition technology-based augmented reality in gait training for children with cerebral palsy. [Subjects] Two subjects with cerebral palsy were recruited. [Methods] In this study, augmented reality based real-time feedback training was conducted for the subjects in two 30-minute sessions per week for four weeks. Spatiotemporal gait parameters were used to measure the effect of augmented reality-based real-time feedback training. [Results] Velocity, cadence, bilateral step and stride length, and functional ambulation improved after the intervention in both cases. [Conclusion] Although additional follow-up studies of the augmented reality based real-time feedback training are required, the results of this study demonstrate that it improved the gait ability of two children with cerebral palsy. These findings suggest a variety of applications of conservative therapeutic methods which require future clinical trials.

  7. Fazendo 3d com uma camera so

    CERN Document Server

    Lunazzi, J J

    2010-01-01

    A simple system for making stereo photographs or videos, based on just two mirrors that split the image field, was created in 1989 and recently adapted to a digital camera setup.

  8. Omnidirectional Underwater Camera Design and Calibration

    Directory of Open Access Journals (Sweden)

    Josep Bosch

    2015-03-01

    Full Text Available This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure a complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach.

  9. Omnidirectional Underwater Camera Design and Calibration

    Science.gov (United States)

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-01-01

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure a complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707
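    The refraction that invalidates the pinhole model at a flat housing port can be sketched with Snell's law applied at the air/glass and glass/water interfaces. The refractive indices below are typical textbook values, not figures from the paper.

```python
import math

N_AIR, N_GLASS, N_WATER = 1.0, 1.49, 1.33  # typical indices (assumption)

def snell(theta_i, n1, n2):
    """Refracted angle at a flat interface; angles measured from the normal."""
    return math.asin(n1 * math.sin(theta_i) / n2)

def through_flat_port(theta_air):
    """Ray angle in water after crossing an air/glass/water flat port."""
    return snell(snell(theta_air, N_AIR, N_GLASS), N_GLASS, N_WATER)
```

    For a flat port the glass layer drops out of the end result (n_air·sin θ_air = n_water·sin θ_water), but every ray is still bent toward the normal, compressing the in-water field of view; this is why a ray-tracing FOV simulator is needed to verify hemisphere coverage.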

  10. Low Noise Camera for Suborbital Science Applications

    Science.gov (United States)

    Hyde, David; Robertson, Bryan; Holloway, Todd

    2015-01-01

    Low-cost, commercial off-the-shelf (COTS)-based science cameras are intended for lab use only and are not suitable for flight deployment as they are difficult to ruggedize and repackage into instruments. Also, COTS implementation may not be suitable since mission science objectives are tied to specific measurement requirements, and often require performance beyond that required by the commercial market. Custom camera development for each application is cost prohibitive for the International Space Station (ISS) or midrange science payloads due to nonrecurring expenses ($2,000 K) for ground-up camera electronics design. While each new science mission has a different suite of requirements for camera performance (detector noise, speed of image acquisition, charge-coupled device (CCD) size, operation temperature, packaging, etc.), the analog-to-digital conversion, power supply, and communications can be standardized to accommodate many different applications. The low noise camera for suborbital applications is a rugged standard camera platform that can accommodate a range of detector types and science requirements for use in inexpensive to midrange payloads supporting Earth science, solar physics, robotic vision, or astronomy experiments. Cameras developed on this platform have demonstrated the performance found in custom flight cameras at a price per camera more than an order of magnitude lower.

  11. Photogrammetry-Based Head Digitization for Rapid and Accurate Localization of EEG Electrodes and MEG Fiducial Markers Using a Single Digital SLR Camera.

    Science.gov (United States)

    Clausner, Tommy; Dalal, Sarang S; Crespo-García, Maité

    2017-01-01

    The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D. Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position.
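    Surface-matching the photo-based head model to the MRI reconstruction amounts to estimating a rigid transform between point sets. A planar (2D) least-squares version shows the idea; the toolbox's 3D matching would use the analogous SVD-based solution, and the point sets here are purely illustrative.

```python
import math

def rigid_align_2d(src, dst):
    """Least-squares rotation + translation mapping 2D points src onto dst
    (a planar stand-in for 3D rigid surface matching)."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (dx_, dy_) in zip(src, dst):
        ax, ay = sx - csx, sy - csy      # centered source point
        bx, by = dx_ - cdx, dy_ - cdy    # centered destination point
        num += ax * by - ay * bx         # cross terms -> sin(theta)
        den += ax * bx + ay * by         # dot terms   -> cos(theta)
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)       # translation that maps the
    ty = cdy - (s * csx + c * csy)       # rotated source centroid onto dst
    return theta, (tx, ty)
```

    The residual distance between transformed source points and their destinations plays the role of the coregistration error reported above (0.8 mm for the photogrammetric method).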

  12. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped significantly, opening up...

  13. Modeling and prototyping of a flux concentrator for positron capture.

    Energy Technology Data Exchange (ETDEWEB)

    Liu, W.; Gai, W.; Wang, H.; Wong, T.; High Energy Physics; IIT

    2008-10-01

    An adiabatic matching device (AMD) generates a tapered high-strength magnetic field to capture positrons emitted from a positron target to a downstream accelerating structure. The AMD is a key component of a positron source and represents a technical challenge. The International Linear Collider collaboration is proposing to employ a pulsed, normal-conducting flux concentrator to generate a 5 Tesla initial magnetic field. The flux-concentrator structure itself and the interactions between the flux concentrator and the external power supply circuits give rise to a nontrivial system. In this paper, we present a recently developed equivalent circuit model for a flux concentrator, along with the characteristics of a prototype fabricated for validating the model. Using the model, we can obtain the transient response of the pulsed magnetic field and the field profile. Calculations based on the model and the results of measurements made on the prototype are in good agreement.
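    The tapered on-axis field of an AMD is often parameterized as B(z) = B0/(1 + αz). A sketch using the 5 T initial field mentioned in the text; the taper constant α is an illustrative assumption, not a design value from the paper.

```python
def amd_field(z, b0=5.0, alpha=30.0):
    """On-axis AMD field (tesla) at distance z (metres) from the target,
    for the common taper B(z) = B0 / (1 + alpha*z). alpha in 1/m is an
    illustrative taper constant."""
    return b0 / (1.0 + alpha * z)
```

    With these numbers the field falls from 5 T at the target to 1.25 T at z = 0.1 m, the kind of slow (adiabatic) decrease that keeps the captured positrons matched to the downstream solenoid field.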

  14. Electron-Positron Accumulator (EPA)

    CERN Multimedia

    Photographic Service

    1986-01-01

    After acceleration in the low-current linac LIL-W, the electrons and positrons are accumulated in EPA to obtain a sufficient intensity and a suitable time-structure, before being passed on to the PS for further acceleration to 3.5 GeV. Electrons circulate from right to left, positrons in the other direction. Dipole bending magnets are red, focusing quadrupoles blue, sextupoles for chromaticity-control orange. The vertical tube at the left of the picture belongs to an optical transport system carrying the synchrotron radiation to detectors for beam size measurement. Construction of EPA was completed in spring 1986. LIL-W and EPA were conceived for an energy of 600 MeV, but operation was limited to 500 MeV.

  15. Volume-Based Parameters of {sup 18}F-Fluorodeoxyglucose Positron Emission Tomography/Computed Tomography Improve Disease Recurrence Prediction in Postmastectomy Breast Cancer Patients With 1 to 3 Positive Axillary Lymph Nodes

    Energy Technology Data Exchange (ETDEWEB)

    Nakajima, Naomi, E-mail: haruhi0321@gmail.com [Department of Radiation Oncology, National Hospital Organization Shikoku Cancer Center, Ehime (Japan); Department of Radiology, Ehime University, Ehime (Japan); Kataoka, Masaaki [Department of Radiation Oncology, National Hospital Organization Shikoku Cancer Center, Ehime (Japan); Sugawara, Yoshifumi [Department of Diagnostic Radiology, National Hospital Organization Shikoku Cancer Center, Ehime (Japan); Ochi, Takashi [Department of Radiology, Ehime University, Ehime (Japan); Kiyoto, Sachiko; Ohsumi, Shozo [Department of Breast Oncology, National Hospital Organization Shikoku Cancer Center, Ehime (Japan); Mochizuki, Teruhito [Department of Radiology, Ehime University, Ehime (Japan)

    2013-11-15

    Purpose: To determine whether volume-based parameters on pretreatment {sup 18}F-fluorodeoxyglucose positron emission tomography/computed tomography in breast cancer patients treated with mastectomy without adjuvant radiation therapy are predictive of recurrence. Methods and Materials: We retrospectively analyzed 93 patients with 1 to 3 positive axillary nodes after surgery, who were studied with {sup 18}F-fluorodeoxyglucose positron emission tomography/computed tomography for initial staging. We evaluated the relationship between positron emission tomography parameters, including the maximum standardized uptake value, metabolic tumor volume (MTV), and total lesion glycolysis (TLG), and clinical outcomes. Results: The median follow-up duration was 45 months. Recurrence was observed in 11 patients. Metabolic tumor volume and TLG were significantly related to tumor size, number of involved nodes, nodal ratio, nuclear grade, estrogen receptor (ER) status, and triple negativity (TN) (all P values were <.05). In receiver operating characteristic curve analysis, MTV and TLG showed better predictive performance than tumor size, ER status, or TN (area under the curve: 0.85, 0.86, 0.79, 0.74, and 0.74, respectively). On multivariate analysis, MTV was an independent prognostic factor of locoregional recurrence-free survival (hazard ratio 34.42, 95% confidence interval 3.94-882.71, P=.0008) and disease-free survival (DFS) (hazard ratio 13.92, 95% confidence interval 2.65-103.78, P=.0018). The 3-year DFS rate was 93.8% for the lower MTV group (<53.1; n=85) and 25.0% for the higher MTV group (≥53.1; n=8; P<.0001, log–rank test). The 3-year DFS rate for patients with both ER-positive status and MTV <53.1 was 98.2%; and for those with ER-negative status and MTV ≥53.1 it was 25.0% (P<.0001). Conclusions: Volume-based parameters improve recurrence prediction in postmastectomy breast cancer patients with 1 to 3 positive nodes. The addition of MTV to ER status or TN has
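    The volume-based parameters above can be computed from segmented voxel SUVs: MTV is the segmented metabolic volume and TLG is SUVmean × MTV. A sketch with a fixed-SUV threshold and voxel volume as illustrative assumptions (the study's actual segmentation method may differ).

```python
def mtv_tlg(suv_voxels, voxel_ml=0.1, threshold=2.5):
    """Metabolic tumor volume (mL) and total lesion glycolysis from a flat
    list of voxel SUVs. Threshold and voxel size are assumptions."""
    inside = [v for v in suv_voxels if v >= threshold]
    mtv = len(inside) * voxel_ml                       # segmented volume
    mean_suv = sum(inside) / len(inside) if inside else 0.0
    return mtv, mean_suv * mtv                         # TLG = SUVmean * MTV
```

    Unlike the maximum standardized uptake value, both quantities grow with lesion extent, which is one reason they can outperform single-voxel measures for recurrence prediction.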

  16. Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor

    Directory of Open Access Journals (Sweden)

    Toan Minh Hoang

    2017-10-01

    Full Text Available Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition processing is a difficult challenge. Therefore, we propose a method to overcome various illumination problems, particularly severe shadows, by using fuzzy system and line segment detector algorithms to obtain better results for detecting road lanes by a visible light camera sensor. Experimental results from three open databases, Caltech dataset, Santiago Lanes dataset (SLD), and Road Marking dataset, showed that our method outperformed conventional lane detection methods.

  17. Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor

    Science.gov (United States)

    Hoang, Toan Minh; Baek, Na Rae; Cho, Se Woon; Kim, Ki Wan; Park, Kang Ryoung

    2017-01-01

    Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition processing is a difficult challenge. Therefore, we propose a method to overcome various illumination problems, particularly severe shadows, by using fuzzy system and line segment detector algorithms to obtain better results for detecting road lanes by a visible light camera sensor. Experimental results from three open databases, Caltech dataset, Santiago Lanes dataset (SLD), and Road Marking dataset, showed that our method outperformed conventional lane detection methods. PMID:29143764

  18. Recognition of Damaged Arrow-Road Markings by Visible Light Camera Sensor Based on Convolutional Neural Network.

    Science.gov (United States)

    Vokhidov, Husan; Hong, Hyung Gil; Kang, Jin Kyu; Hoang, Toan Minh; Park, Kang Ryoung

    2016-12-16

    Automobile driver information as displayed on marked road signs indicates the state of the road, traffic conditions, proximity to schools, etc. These signs are important to ensure the safety of the driver and pedestrians. They are also important input to the automated advanced driver assistance system (ADAS), installed in many automobiles. Over time, the arrow-road markings may be eroded or otherwise damaged by automobile contact, making it difficult for the driver to correctly identify the marking. Failure to properly identify an arrow-road marker creates a dangerous situation that may result in traffic accidents or pedestrian injury. Very little research exists that studies the problem of automated identification of damaged arrow-road marking painted on the road. In this study, we propose a method that uses a convolutional neural network (CNN) to recognize six types of arrow-road markings, possibly damaged, by visible light camera sensor. Experimental results with six databases of Road marking dataset, KITTI dataset, Málaga dataset 2009, Málaga urban dataset, Naver street view dataset, and Road/Lane detection evaluation 2013 dataset, show that our method outperforms conventional methods.
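    The core operation of any such CNN is the discrete 2D convolution (implemented, as in most frameworks, as cross-correlation). A minimal 'valid'-mode sketch of that operation, not the paper's architecture:

```python
def conv2d_valid(img, kernel):
    """'Valid'-mode 2D convolution (cross-correlation, CNN convention)
    of a 2D list `img` with a 2D list `kernel`."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            # inner product of the kernel with the image patch at (i, j)
            row.append(sum(img[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out
```

    A full classifier stacks such filtered maps with nonlinearities and pooling, ending in a six-way output layer for the six marking types.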

  19. Recognition of Damaged Arrow-Road Markings by Visible Light Camera Sensor Based on Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Husan Vokhidov

    2016-12-01

    Full Text Available Automobile driver information as displayed on marked road signs indicates the state of the road, traffic conditions, proximity to schools, etc. These signs are important to ensure the safety of the driver and pedestrians. They are also important input to the automated advanced driver assistance system (ADAS), installed in many automobiles. Over time, the arrow-road markings may be eroded or otherwise damaged by automobile contact, making it difficult for the driver to correctly identify the marking. Failure to properly identify an arrow-road marker creates a dangerous situation that may result in traffic accidents or pedestrian injury. Very little research exists that studies the problem of automated identification of damaged arrow-road marking painted on the road. In this study, we propose a method that uses a convolutional neural network (CNN) to recognize six types of arrow-road markings, possibly damaged, by visible light camera sensor. Experimental results with six databases of Road marking dataset, KITTI dataset, Málaga dataset 2009, Málaga urban dataset, Naver street view dataset, and Road/Lane detection evaluation 2013 dataset, show that our method outperforms conventional methods.

  20. Road Lane Detection Robust to Shadows Based on a Fuzzy System Using a Visible Light Camera Sensor.

    Science.gov (United States)

    Hoang, Toan Minh; Baek, Na Rae; Cho, Se Woon; Kim, Ki Wan; Park, Kang Ryoung

    2017-10-28

    Recently, autonomous vehicles, particularly self-driving cars, have received significant attention owing to rapid advancements in sensor and computation technologies. In addition to traffic sign recognition, road lane detection is one of the most important factors used in lane departure warning systems and autonomous vehicles for maintaining the safety of semi-autonomous and fully autonomous systems. Unlike traffic signs, road lanes are easily damaged by both internal and external factors such as road quality, occlusion (traffic on the road), weather conditions, and illumination (shadows from objects such as cars, trees, and buildings). Obtaining clear road lane markings for recognition processing is a difficult challenge. Therefore, we propose a method to overcome various illumination problems, particularly severe shadows, by using fuzzy system and line segment detector algorithms to obtain better results for detecting road lanes by a visible light camera sensor. Experimental results from three open databases, Caltech dataset, Santiago Lanes dataset (SLD), and Road Marking dataset, showed that our method outperformed conventional lane detection methods.
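    A fuzzy system of the kind described combines membership functions with min/max rule aggregation to grade candidate line segments under varying illumination. A toy two-rule sketch; the rules, inputs, and membership shapes are illustrative assumptions, not the paper's design.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def lane_confidence(edge_strength, brightness):
    """Fuzzy confidence that a detected segment is a lane marking.
    Rule 1: strong edge AND bright stripe -> lane.
    Rule 2: strong edge alone (e.g., in shadow) -> weak lane."""
    strong = tri(edge_strength, 0.2, 1.0, 1.8)
    bright = tri(brightness, 0.2, 1.0, 1.8)
    return max(min(strong, bright), 0.5 * strong)  # max-of-rules aggregation
```

    The second rule is what keeps detection alive under severe shadows, where the brightness cue collapses but the line-segment-detector edge remains.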

  1. Placido disk-based topography versus high-resolution rotating Scheimpflug camera for corneal power measurements in keratoconic and post-LASIK eyes: reliability and agreement.

    Science.gov (United States)

    Penna, Rachele R; de Sanctis, Ugo; Catalano, Martina; Brusasco, Luca; Grignolo, Federico M

    2017-01-01

    To compare the repeatability/reproducibility of measurement by high-resolution Placido disk-based topography with that of a high-resolution rotating Scheimpflug camera and assess the agreement between the two instruments in measuring corneal power in eyes with keratoconus and post-laser in situ keratomileusis (LASIK). One eye each of 36 keratoconic patients and 20 subjects who had undergone LASIK was included in this prospective observational study. Two independent examiners worked in a random order to take three measurements of each eye with both instruments. Four parameters were measured on the anterior cornea: steep keratometry (Ks), flat keratometry (Kf), mean keratometry (Km), and astigmatism (Ks-Kf). Intra-examiner repeatability and inter-examiner reproducibility were evaluated by calculating the within-subject standard deviation (Sw), the coefficient of repeatability (R), the coefficient of variation (CoV), and the intraclass correlation coefficient (ICC). Agreement between instruments was tested with the Bland-Altman method by calculating the 95% limits of agreement (95% LoA). In keratoconic eyes, the intra-examiner and inter-examiner ICC were >0.95. As compared with measurement by high-resolution Placido disk-based topography, the intra-examiner R of the high-resolution rotating Scheimpflug camera was lower for Kf (0.32 vs 0.88), Ks (0.61 vs 0.88), and Km (0.32 vs 0.84) but higher for Ks-Kf (0.70 vs 0.57). Inter-examiner R values were lower for all parameters measured using the high-resolution rotating Scheimpflug camera. The 95% LoA were -1.28 to +0.55 for Kf, -1.36 to +0.99 for Ks, -1.08 to +0.50 for Km, and -1.11 to +1.48 for Ks-Kf. In the post-LASIK eyes, the intra-examiner and inter-examiner ICC were >0.87 for all parameters. The intra-examiner and inter-examiner R were lower for all parameters measured using the high-resolution rotating Scheimpflug camera. The intra-examiner R was 0.17 vs 0.88 for Kf, 0.21 vs 0.88 for Ks, 0.17 vs 0.86 for Km, and 0
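    The repeatability indices reported above follow standard definitions: Sw is the square root of the mean per-subject variance, the coefficient of repeatability is R ≈ 2.77·Sw (i.e., 1.96·√2·Sw), and CoV expresses Sw as a percentage of the grand mean. A sketch under those standard definitions:

```python
import math

def repeatability(measurements):
    """Sw, R = 2.77*Sw, and CoV (%) from repeated measurements, given as a
    list of per-subject measurement lists (>= 2 readings per subject)."""
    variances, all_vals = [], []
    for subj in measurements:
        m = sum(subj) / len(subj)
        variances.append(sum((x - m) ** 2 for x in subj) / (len(subj) - 1))
        all_vals.extend(subj)
    sw = math.sqrt(sum(variances) / len(variances))  # within-subject SD
    r = 2.77 * sw                                    # coefficient of repeatability
    cov = 100.0 * sw / (sum(all_vals) / len(all_vals))
    return sw, r, cov
```

    R has a direct clinical reading: the difference between two repeated measurements on the same eye is expected to stay below R for 95% of pairs.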

  2. Theranostic unimolecular micelles based on brush-shaped amphiphilic block copolymers for tumor-targeted drug delivery and positron emission tomography imaging.

    Science.gov (United States)

    Guo, Jintang; Hong, Hao; Chen, Guojun; Shi, Sixiang; Nayak, Tapas R; Theuer, Charles P; Barnhart, Todd E; Cai, Weibo; Gong, Shaoqin

    2014-12-24

    Brush-shaped amphiphilic block copolymers were conjugated with a monoclonal antibody against CD105 (i.e., TRC105) and a macrocyclic chelator for (64)Cu-labeling to generate multifunctional theranostic unimolecular micelles. The backbone of the brush-shaped amphiphilic block copolymer was poly(2-hydroxyethyl methacrylate) (PHEMA) and the side chains were poly(L-lactide)-poly(ethylene glycol) (PLLA-PEG). The doxorubicin (DOX)-loaded unimolecular micelles showed a pH-dependent drug release profile and a uniform size distribution. A significantly higher cellular uptake of TRC105-conjugated micelles was observed in CD105-positive human umbilical vein endothelial cells (HUVEC) than nontargeted micelles due to CD105-mediated endocytosis. In contrast, similar and extremely low cellular uptake of both targeted and nontargeted micelles was observed in MCF-7 human breast cancer cells (CD105-negative). The difference between the in vivo tumor accumulation of (64)Cu-labeled TRC105-conjugated micelles and that of nontargeted micelles was studied in 4T1 murine breast tumor-bearing mice, by serial positron emission tomography (PET) imaging and validated by biodistribution studies. These multifunctional unimolecular micelles offer pH-responsive drug release, noninvasive PET imaging capability, together with both passive and active tumor-targeting abilities, thus making them a desirable nanoplatform for cancer theranostics.

  3. 64Cu loaded liposomes as positron emission tomography imaging agents

    DEFF Research Database (Denmark)

    Petersen, Anncatrine Luisa; Binderup, Tina; Rasmussen, Palle

    2011-01-01

    We have developed a highly efficient method for utilizing liposomes as imaging agents for positron emission tomography (PET) giving high resolution images and allowing direct quantification of tissue distribution and blood clearance. Our approach is based on remote loading of a copper-radionuclid...

  4. Kitt Peak speckle camera.

    Science.gov (United States)

    Breckinridge, J B; McAlister, H A; Robinson, W G

    1979-04-01

    The speckle camera in regular use at Kitt Peak National Observatory since 1974 is described in detail. The design of the atmospheric dispersion compensation prisms, the use of film as a recording medium, the accuracy of double star measurements, and the next generation speckle camera are discussed. Photographs of double star speckle patterns with separations from 1.4 sec of arc to 4.7 sec of arc are shown to illustrate the quality of image formation with this camera, the effects of seeing on the patterns, and to illustrate the isoplanatic patch of the atmosphere.

  5. Camera as Cultural Critique

    DEFF Research Database (Denmark)

    Suhr, Christian

    2015-01-01

    What does the use of cameras entail for the production of cultural critique in anthropology? Visual anthropological analysis and cultural critique starts at the very moment a camera is brought into the field or existing visual images are engaged. The framing, distances, and interactions between...... researchers, cameras, and filmed subjects already inherently comprise analytical decisions. It is these ethnographic qualities inherent in audiovisual and photographic imagery that make it of particular value to a participatory anthropological enterprise that seeks to resist analytic closure and seeks instead...

  6. 99mTc-mercaptoacetyltriglycine camera-based measurement of renal clearance: should the result be normalized for body surface area?

    Science.gov (United States)

    Klingensmith, William C

    2013-12-01

    Testing the rate of creatinine clearance by measuring the level of creatinine in the blood and in a 24-h urine collection is a common method of evaluating renal function. The result is routinely normalized for body surface area (BSA). Alternatively, renal clearance can be measured by (99m)Tc-mercaptoacetyltriglycine (MAG3) renal imaging without the need for urine collection. Frequently, the (99m)Tc-MAG3 camera-based result is also normalized for BSA. I evaluated the need for BSA normalization of renal clearance measurements in (99m)Tc-MAG3 imaging studies from both a conceptual and a mathematic point of view. Both approaches involved analyzing the effect of patient size, that is, BSA, on the factors blood volume, renal blood flow, and amount of test substance present in the blood in the creatinine clearance method compared with the (99m)Tc-MAG3 camera-based method. Both the conceptual and the mathematic analyses were consistent with a significant difference between the creatinine and (99m)Tc-MAG3 approaches to measuring renal clearance. Larger patients have larger kidneys, greater renal blood flow, higher renal clearances, larger blood volumes, more muscle mass, and higher BSAs than smaller patients. However, the concentration of creatinine in the blood of patients of any size with normal renal function is similar because the amount of creatinine released into the blood varies with patient muscle mass, which varies with blood volume. Because normalization for BSA is needed for creatinine clearance, a single reference range can be used for all patients. In the case of measurement of renal clearance with (99m)Tc-MAG3 imaging (assuming a constant dose), the concentration of tracer in the blood will vary inversely with patient size because blood volume varies with patient size. Thus, as patient size increases, the blood concentration of tracer will go down and compensate for the increase in renal blood flow and renal clearance, and conversely. Consequently, the (99m
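    The BSA normalization at issue scales a measured clearance to the conventional 1.73 m² reference surface area; BSA itself is commonly estimated with the Du Bois formula. A sketch of that arithmetic (the formula constants are the standard Du Bois values, not figures from this paper):

```python
def bsa_dubois(height_cm, weight_kg):
    """Du Bois body surface area (m^2): 0.007184 * H^0.725 * W^0.425."""
    return 0.007184 * (height_cm ** 0.725) * (weight_kg ** 0.425)

def normalize_clearance(clearance_ml_min, height_cm, weight_kg):
    """Scale a measured clearance to the conventional 1.73 m^2 BSA."""
    return clearance_ml_min * 1.73 / bsa_dubois(height_cm, weight_kg)
```

    For a patient larger than the 1.73 m² reference, normalization scales the measured clearance down; the article's argument is that the camera-based MAG3 method already self-compensates for body size through blood-volume dilution of the tracer, so this extra scaling may be redundant.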

  7. Dynamic Camera Positioning and Reconfiguration for Multi-Camera Networks

    OpenAIRE

    Konda, Krishna Reddy

    2015-01-01

    The large availability of different types of cameras and lenses, together with the reduction in price of video sensors, has contributed to a widespread use of video surveillance systems, which have become a widely adopted tool to enforce security and safety, in detecting and preventing crimes and dangerous events. The possibility for personalization of such systems is generally very high, letting the user customize the sensing infrastructure, and deploying ad-hoc solutions based on the curren...

  8. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.
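    The integrity and authenticity guarantees described can be illustrated in software with a keyed MAC over each frame together with its sequence index, so frames cannot be forged, reordered, or dropped undetected. The prototype anchors its keys in Trusted Computing hardware; the plain HMAC below is a software stand-in for that, not the paper's implementation.

```python
import hmac
import hashlib

def sign_frame(key: bytes, frame: bytes, index: int) -> bytes:
    """MAC over the frame bytes plus an 8-byte big-endian sequence index."""
    msg = index.to_bytes(8, "big") + frame
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_frame(key: bytes, frame: bytes, index: int, tag: bytes) -> bool:
    """Constant-time check of a frame's authentication tag."""
    return hmac.compare_digest(sign_frame(key, frame, index), tag)
```

    Binding the index into the MAC is what turns per-frame integrity into stream integrity: replaying frame 7 in position 8 fails verification.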

  9. Solid State Replacement of Rotating Mirror Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Frank, A M; Bartolick, J M

    2006-08-25

    Rotating mirror cameras have been the mainstay of mega-frame per second imaging for decades. There is still no electronic camera that can match a film-based rotary mirror camera for the combination of frame count, speed, resolution and dynamic range. The rotary mirror cameras are predominantly used in the range of 0.1 to 100 micro-seconds per frame, for 25 to more than a hundred frames. Electron tube gated cameras dominate the sub-microsecond regime but are frame count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all solid state architecture, dubbed "In-situ Storage Image Sensor" or "ISIS", by Prof. Goji Etoh, has made its first appearance into the market and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluation of the presently available technologies and exploring the capabilities of the ISIS architecture. It is clear that, although there is presently no single-chip camera that can simultaneously match the rotary mirror cameras, the ISIS architecture has the potential to approach their performance.

  10. Imaging plates as position-sensitive detectors of positrons and gamma-rays

    CERN Document Server

    Doyama, M; Yoshiie, T; Hayashi, Y; Kiritani, M; Oikawa, T

    2000-01-01

    Imaging plates have been used as position-sensitive detectors for positrons. A photo-stimulated luminescent material based on BaFX:Eu²⁺ (X = Cl, Br, I) is used. A linear relation between the positron fluence and the output signal intensity read out by a 'PIXsysTEM II' (pixelized to 25 µm × 25 µm) is obtained, using ⁵⁸Co and ²²Na positron emitters. The linearity extends over six decades, from 10⁵ to 10¹¹ positrons/cm². The sensitivities to one gamma-ray photon relative to a positron are 0.011 and 3.4×10⁻³ for ⁶⁵Zn and ²²Na, respectively.

  11. Recent Advances in Electron and Positron Sources

    Energy Technology Data Exchange (ETDEWEB)

    Clendenin, James E

    2000-07-20

    Recent advances in electron and positron sources have resulted in new capabilities driven in most cases by the increasing demands of advanced accelerating systems. Electron sources for brighter beams and for high average-current beams are described. The status and remaining challenges for polarized electron beams are also discussed. For positron sources, recent activity in the development of polarized positron beams for future colliders is reviewed. Finally, a new proposal for combining laser cooling with beam polarization is presented.

  12. Positron Spectroscopy of Hydrothermally Grown Actinide Oxides

    Science.gov (United States)

    2014-03-27

    ...will be critical to the interactions of positrons with the matter. Other types of defects that would be likely to be present in these materials... a suite of techniques that depend on the interactions of positrons with normal matter in order to gain some information about the structure of a... test sample. All of these techniques depend on the property that the positron is the antimatter complement of the electron, and at meV energies...

  13. Positron imaging with multiwire proportional chamber-gamma converter hybrid detectors

    Energy Technology Data Exchange (ETDEWEB)

    Chu, D.Y.H.

    1976-09-01

    A large area positron camera was developed using multiwire proportional chambers as detectors and electromagnetic delay lines for coordinate readout. Honeycomb structured gamma converters made of lead are coupled to the chambers for efficient gamma detection and good spatial resolution. Two opposing detectors, each having a sensitive area of 48 cm x 48 cm, are operated in coincidence for the detection of annihilation gammas (511 keV) from positron emitters. Detection efficiency of 4.2 percent per detector and spatial resolution of 6 to 7 mm FWHM at the mid-plane were achieved. The present camera operates at a maximum count rate of 24 K counts/min, limited by accidental coincidences. The theory for the gamma converter is presented along with a review of the operation of the multiwire proportional chamber and delay line readout. Calculated gamma converter efficiencies are compared with the measured results using a prototype test chamber. The characteristics of the positron camera system are evaluated, and the performance is shown to be consistent with calculation.
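
    The accidental-coincidence limit mentioned in the abstract follows from the standard randoms estimate for two detectors in coincidence, R_acc = 2·τ·R₁·R₂. A minimal sketch; the singles rates and resolving time below are illustrative assumptions, not values from the record:

```python
def accidental_rate(r1_singles, r2_singles, tau):
    """Standard random-coincidence rate estimate for two detectors
    in coincidence: R_acc = 2 * tau * R1 * R2 (all rates in cps)."""
    return 2.0 * tau * r1_singles * r2_singles

# Hypothetical numbers: 50 kcps singles on each detector,
# 100 ns coincidence resolving time -> 500 random cps.
r_acc = accidental_rate(5e4, 5e4, 100e-9)
```

    The quadratic dependence on singles rates is what caps the usable activity: doubling the source rate quadruples the randoms.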

  14. Digital Low Frequency Radio Camera

    Science.gov (United States)

    Fullekrug, M.; Mezentsev, A.; Soula, S.; van der Velde, O.; Poupeney, J.; Sudre, C.; Gaffet, S.; Pincon, J.

    2012-04-01

    This contribution reports the design, realization and operation of a novel digital low frequency radio camera towards an exploration of the Earth's electromagnetic environment with particular emphasis on lightning discharges and subsequent atmospheric effects such as transient luminous events. The design of the digital low frequency radio camera is based on the idea of radio interferometry with a network of radio receivers which are separated by spatial baselines comparable to the wavelength of the observed radio waves, i.e., ~1-100 km which corresponds to a frequency range from ~3-300 kHz. The key parameter towards the realization of the radio interferometer is the frequency dependent slowness of the radio waves within the Earth's atmosphere with respect to the speed of light in vacuum. This slowness is measured with the radio interferometer by using well documented radio transmitters. The digital low frequency radio camera can be operated in different modes. In the imaging mode, still photographs show maps of the low frequency radio sky. In the video mode, movies show the dynamics of the low frequency radio sky. The exposure time of the photographs, the frame rate of the video, and the radio frequency of interest can be adjusted by the observer. Alternatively, the digital radio camera can be used in the monitoring mode, where a particular area of the sky is observed continuously. The first application of the digital low frequency radio camera is to characterize the electromagnetic energy emanating from sprite producing lightning discharges, but it is expected that it can also be used to identify and investigate numerous other radio sources of the Earth's electromagnetic environment.
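
    The slowness measurement described above amounts to estimating the arrival-time difference of a radio signal at two receivers separated by a known baseline. A minimal sketch of that step using cross-correlation; the pulse shape, noise level, and 50 km baseline are assumptions for illustration, not parameters from the record:

```python
import numpy as np

fs = 1.0e6              # sample rate, Hz
c = 2.99792458e8        # free-space speed of light, m/s
baseline = 50.0e3       # receiver separation, m (hypothetical)

t = np.arange(0.0, 2.0e-3, 1.0 / fs)                   # 2 ms record
pulse = np.exp(-0.5 * ((t - 1.0e-3) / 20.0e-6) ** 2)   # sferic-like pulse
delay_samples = int(round(fs * baseline / c))          # true delay, samples

rng = np.random.default_rng(0)
rx1 = pulse + 1e-3 * rng.standard_normal(t.size)
rx2 = np.roll(pulse, delay_samples) + 1e-3 * rng.standard_normal(t.size)

# Cross-correlate to find the arrival-time difference, then convert
# the lag into a slowness (seconds of delay per metre of baseline).
xcorr = np.correlate(rx2, rx1, mode="full")
lag = int(np.argmax(xcorr)) - (t.size - 1)             # delay in samples
slowness = (lag / fs) / baseline                       # s/m, ~1/c in vacuum
```

    In practice the slowness is frequency dependent and slightly larger than 1/c; calibrating it against known transmitters is what makes the interferometric imaging possible.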

  15. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, including a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  16. [Positron-emission tomography (PET)--basic considerations].

    Science.gov (United States)

    von Schulthess, G K; Westera, G; Schubiger, P A

    1993-08-24

    A PET installation is a technically complex system composed essentially of two parts. The first consists of isotope production and the synthesis of labeled biochemical compounds; the second of measuring the distribution of radioactivity in the body with the PET camera and generating the image data. The specific advantage of PET lies, on the one hand, in the use of positron emitters that are isotopes of elements ubiquitous in biologic matter, i.e. exact analogs of biomolecules can be produced and utilized, and, on the other hand, in the possibility of quantification. Theoretically there are no limits to the synthesis of radioactive compounds, and the method therefore permits virtually unlimited test designs. The short half-life of the employed isotopes is advantageous for radioprotection reasons, but the production of labeled compounds necessitates a cyclotron and a special laboratory for the handling of radioactive compounds, rendering the production of the test substances relatively expensive. Measurements take place in a PET camera with a large number of coincidence detectors. The best available cameras have a spatial resolution of 5 mm in all three axes with an axial field of view of about 15 cm. Evaluation of PET images is done qualitatively by superposition on anatomic images (CT, MRI) through image fusion. Quantitative determinations require elaborate computer modeling.
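
    Quantification in PET is commonly reported as a standardized uptake value (SUV): the tissue activity concentration normalized by injected dose per unit body weight. A minimal sketch; the numeric values in the example are hypothetical:

```python
def suv(tissue_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight-normalized standardized uptake value:
    SUV = C_tissue / (injected dose / body weight).
    Assumes tissue density ~1 g/mL so kBq/mL ~ kBq/g."""
    dose_kbq = injected_dose_mbq * 1000.0
    weight_g = body_weight_kg * 1000.0
    return tissue_kbq_per_ml / (dose_kbq / weight_g)

# Hypothetical study: 5 kBq/mL in tissue, 370 MBq injected, 70 kg patient.
suv_value = suv(5.0, 370.0, 70.0)   # ~0.95, i.e. near-average uptake
```

    An SUV of 1 corresponds to tracer spread uniformly over the body; clinically meaningful use additionally requires the decay, attenuation, and scatter corrections discussed elsewhere in this collection.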

  17. Use of Multiple-Angle Snow Camera (MASC) Observations as a Constraint on Radar-Based Retrievals of Snowfall Rate

    Science.gov (United States)

    Cooper, S.; Garrett, T. J.; Wood, N.; L'Ecuyer, T. S.

    2015-12-01

    We use a combination of Ka-band Zenith Radar (KaZR) and Multiple-Angle Snow Camera (MASC) observations at the ARM North Slope Alaska Climate Facility Site at Barrow to quantify snowfall. The optimal-estimation framework is used to combine information from the KaZR and MASC into a common retrieval scheme, where retrieved estimates of snowfall are compared to observations at a nearby NWS measurement site for evaluation. Modified from the operational CloudSat algorithm, the retrieval scheme returns estimates of the vertical profile of the exponential PSD slope parameter with a constant number density. These values, in turn, can be used to calculate the surface snow rate (liquid equivalent) given knowledge of snowflake microphysical properties and fallspeeds. We exploit scattering models for a variety of ice crystal shapes, including aggregates developed specifically from observations of snowfall properties at high latitudes, as well as more pristine crystal shapes involving sector plates, bullet rosettes, and hexagonal columns. As expected, initial retrievals suggest large differences (300% for some events) in estimated snowfall accumulations given the use of the different ice crystal assumptions. The complex problem of how we can more quantitatively link MASC snowflake images to specific radar scattering properties is an ongoing line of research. Here, however, we do quantify the use of MASC observations of fallspeed and PSD parameters as a constraint on our optimal-estimation retrieval approach. In terms of fallspeed, we find differences in estimated snowfall of nearly 50% arising from the use of MASC-observed fallspeeds relative to those derived from traditional fallspeed parameterizations. In terms of snowflake PSD, we find differences of nearly 25% arising from the use of MASC-observed slope parameters relative to those derived from field campaign observations of high-altitude snow events. Of course, these different sources of error conspire to make the estimate of snowfall...
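
    For an exponential PSD N(D) = N₀·exp(−ΛD) with power-law mass m(D) = a·D^b and fallspeed v(D) = c·D^d, the liquid-equivalent snowfall-rate integral has a closed form in the gamma function. A sketch under assumed aggregate parameters (illustrative, not taken from the study):

```python
import math

def snow_rate(n0, lam, a, b, c, d, rho_w=1000.0):
    """Liquid-equivalent snowfall rate (m/s) for an exponential PSD
    N(D) = n0 * exp(-lam * D) with mass m(D) = a*D**b (kg) and
    fallspeed v(D) = c*D**d (m/s), all in SI units:
        S = (n0 * a * c / rho_w) * Gamma(b + d + 1) / lam**(b + d + 1)."""
    return n0 * a * c * math.gamma(b + d + 1.0) / lam ** (b + d + 1.0) / rho_w

# Hypothetical aggregate parameters: m = 0.0121*D**1.9 kg, v = 11.7*D**0.41 m/s.
rate = snow_rate(n0=1.0e6, lam=2000.0, a=0.0121, b=1.9, c=11.7, d=0.41)
rate_mm_per_hr = rate * 1000.0 * 3600.0
```

    The strong sensitivity of S to the slope parameter Λ (a power near −3.3 here) is exactly why constraining Λ with MASC observations changes the retrieved accumulations so much.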

  18. Positron transport in the interstellar medium

    National Research Council Canada - National Science Library

    Jean, P; Gillard, W; Marcowith, A; Ferrière, K

    2009-01-01

    ...). This understanding is a key to determining whether the spatial distribution of the annihilation emission observed in our Galaxy reflects the spatial distribution of positron sources and, therefore, makes...

  19. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  20. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-03-01

    Full Text Available Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  1. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.
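
    The CNN feature-extraction stage referred to in these records can be illustrated at toy scale as convolution, ReLU, and max-pooling followed by flattening. This numpy sketch is a schematic stand-in with random kernels and a random "image", not the network architecture used in the paper:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation of an image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max-pooling over size x size blocks."""
    h, w = fmap.shape
    h2, w2 = h // size, w // size
    return fmap[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

def extract_features(img, kernels):
    """One conv + ReLU + max-pool stage, flattened into a feature vector."""
    maps = [np.maximum(conv2d(img, k), 0.0) for k in kernels]  # ReLU
    return np.concatenate([max_pool(m).ravel() for m in maps])

rng = np.random.default_rng(1)
image = rng.random((12, 12))                  # stand-in for a body image
kernels = [rng.standard_normal((3, 3)) for _ in range(4)]
features = extract_features(image, kernels)   # 4 maps of 5x5 -> length 100
```

    A real CNN stacks many such stages with learned kernels; the feature vector is then fed to a classifier, here for the male/female decision.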

  2. Ge Quantum Dot Infrared Imaging Camera Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Luna Innovations Incorporated proposes to develop a high performance Ge quantum dots-based infrared (IR) imaging camera on Si substrate. The high sensitivity, large...

  3. Estimation of Energy Balance Components over a Drip-Irrigated Olive Orchard Using Thermal and Multispectral Cameras Placed on a Helicopter-Based Unmanned Aerial Vehicle (UAV

    Directory of Open Access Journals (Sweden)

    Samuel Ortega-Farías

    2016-08-01

    Full Text Available A field experiment was carried out to implement a remote sensing energy balance (RSEB) algorithm for estimating the incoming solar radiation (Rsi), net radiation (Rn), sensible heat flux (H), soil heat flux (G) and latent heat flux (LE) over a drip-irrigated olive (cv. Arbequina) orchard located in the Pencahue Valley, Maule Region, Chile (35°25′S; 71°44′W; 90 m above sea level). For this study, a helicopter-based unmanned aerial vehicle (UAV) was equipped with multispectral and infrared thermal cameras to obtain simultaneously the normalized difference vegetation index (NDVI) and surface temperature (Tsurface) at very high resolution (6 cm × 6 cm). Meteorological variables and surface energy balance components were measured at the time of the UAV overpass (near solar noon). The performance of the RSEB algorithm was evaluated using measurements of H and LE obtained from an eddy correlation system. In addition, estimated values of Rsi and Rn were compared with ground-truth measurements from a four-way net radiometer, while those of G were compared with soil heat flux measurements based on flux plates. Results indicated that the RSEB algorithm estimated LE and H with errors of 7% and 5%, respectively. Values of the root mean squared error (RMSE) and mean absolute error (MAE) for LE were 50 and 43 W m−2, while those for H were 56 and 46 W m−2, respectively. Finally, the RSEB algorithm computed Rsi, Rn and G with errors of less than 5% and with values of RMSE and MAE less than 38 W m−2. Results demonstrated that multispectral and thermal cameras placed on a UAV could provide an excellent tool to evaluate the intra-orchard spatial variability of Rn, G, H, LE, NDVI and Tsurface over the tree canopy and soil surface between rows.
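
    The evaluation above rests on the surface energy balance closure LE = Rn − G − H and on RMSE/MAE error metrics. A minimal sketch with hypothetical flux values (not the study's data):

```python
import numpy as np

def latent_heat_residual(rn, g, h):
    """Latent heat flux as the energy-balance residual: LE = Rn - G - H (W m^-2)."""
    return rn - g - h

def rmse(est, obs):
    est, obs = np.asarray(est, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((est - obs) ** 2)))

def mae(est, obs):
    est, obs = np.asarray(est, float), np.asarray(obs, float)
    return float(np.mean(np.abs(est - obs)))

# Hypothetical midday fluxes (W m^-2) vs. eddy-covariance observations.
le_est = latent_heat_residual(rn=np.array([620.0, 580.0]),
                              g=np.array([70.0, 60.0]),
                              h=np.array([150.0, 140.0]))
le_obs = np.array([390.0, 370.0])
err_rmse, err_mae = rmse(le_est, le_obs), mae(le_est, le_obs)
```

    Treating LE as the residual is the usual RSEB closure assumption, so any error in Rn, G or H propagates directly into LE, which is why the paper reports RMSE and MAE for each term separately.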

  4. Graphic design of pinhole cameras

    Science.gov (United States)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.
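
    A common way to frame the pinhole-size optimization is to combine the geometric blur (proportional to d) and the diffraction blur (proportional to λf/d) in quadrature and minimize; this toy model is an assumption here, not necessarily the transfer-function construction of the paper, and it gives d_opt = sqrt(2.44·λ·f):

```python
import math

def blur_diameter(d, f, wavelength=550e-9):
    """Toy blur model for a pinhole of diameter d and focal length f:
    geometric blur ~ d, diffraction (Airy) blur ~ 2.44 * lambda * f / d,
    combined in quadrature. All lengths in metres."""
    geometric = d
    diffraction = 2.44 * wavelength * f / d
    return math.hypot(geometric, diffraction)

def optimal_pinhole(f, wavelength=550e-9):
    """Minimizing the quadrature blur gives d_opt = sqrt(2.44 * lambda * f)."""
    return math.sqrt(2.44 * wavelength * f)

f = 0.10                     # 100 mm focal length
d_opt = optimal_pinhole(f)   # ~0.37 mm for green light
```

    Different blur-combination conventions change the constant (values from about 1.56 to 3.66 appear in the literature), but all give d_opt proportional to sqrt(λf), the same trade-off the transfer-function analysis captures more rigorously.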

  5. Analysis of the experimental positron lifetime spectra by neural networks

    Directory of Open Access Journals (Sweden)

    Avdić Senada

    2003-01-01

    Full Text Available This paper deals with the analysis of experimental positron lifetime spectra in polymer materials by using various algorithms of neural networks. A method based on the use of artificial neural networks for unfolding the mean lifetime and intensity of the spectral components of simulated positron lifetime spectra was previously suggested and tested on simulated data [Pázsit et al., Applied Surface Science, 149 (1998), 97]. In this work, the applicability of the method to the analysis of experimental positron spectra has been verified in the case of spectra from polymer materials with three components. It has been demonstrated that the backpropagation neural network can determine the spectral parameters with high accuracy and perform the decomposition of lifetimes which differ by 10% or more. The backpropagation network has not been suitable for the identification of both the parameters and the number of spectral components. Therefore, a separate artificial neural network module has been designed to solve the classification problem. Module types based on self-organizing map and learning vector quantization algorithms have been tested. The learning vector quantization algorithm was found to have better performance and reliability. A complete artificial neural network analysis tool for positron lifetime spectra has been constructed to include a spectra classification module and parameter evaluation modules for spectra with a different number of components. In this way, both flexibility and high resolution can be achieved.
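
    Once candidate lifetimes are fixed, the component intensities of a multi-exponential lifetime spectrum follow from plain linear least squares; the neural networks in the paper address the harder joint problem (lifetimes, intensities, and the number of components). A sketch of the linear step on simulated three-component data, with all parameters hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0.0, 10.0, 0.05)          # time axis in ns

# Hypothetical 3-component spectrum: lifetimes (ns) and intensities.
taus_true = np.array([0.2, 0.45, 2.0])
ints_true = np.array([0.55, 0.35, 0.10])
spectrum = sum(I / tau * np.exp(-t / tau)
               for I, tau in zip(ints_true, taus_true))
spectrum += 0.001 * rng.standard_normal(t.size)   # stand-in for counting noise

# With candidate lifetimes held fixed, the intensities follow from
# linear least squares on the exponential basis functions.
basis = np.stack([np.exp(-t / tau) / tau for tau in taus_true], axis=1)
ints_fit, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
```

    The nonlinear part, estimating the lifetimes themselves when nearby components differ by only ~10%, is where the ill-conditioning lives, and is what motivates the network-based unfolding.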

  6. Acceptance of gamma camera Philips bright view based in the protocol nema 2001; Aceptacion de gammacamara philips brightview basada en el protocolo nema 2001

    Energy Technology Data Exchange (ETDEWEB)

    Ferrer Gracia, C.; Luquero Llopis, N.; Plaza Aparicio, R.; Huerga Cabrerizo, C.; Corredoira Silva, E.; Serrada Hierro, A.

    2013-07-01

    Recently, a new Philips Bright View X gamma camera was installed in the Nuclear Medicine Service. It is a variable-angle, dual-detector nuclear medicine gamma camera that can be configured for cardiac SPECT, non-circular SPECT, whole-body, dynamic planar, and single-detector acquisitions. (Author)

  7. A small-scale comparison of Iceland scallop size distributions obtained from a camera based autonomous underwater vehicle and dredge survey.

    Science.gov (United States)

    Singh, Warsha; Örnólfsdóttir, Erla B; Stefansson, Gunnar

    2014-01-01

    An approach is developed to estimate size of Iceland scallop shells from AUV photos. A small-scale camera based AUV survey of Iceland scallops was conducted at a defined site off West Iceland. Prior to height estimation of the identified shells, the distortions introduced by the vehicle orientation and the camera lens were corrected. The average AUV pitch and roll were 1.3 and 2.3 deg, which resulted in <2% error in ground distance, rendering these effects negligible. A quadratic polynomial model was identified for lens distortion correction. This model successfully predicted a theoretical grid from a frame photographed underwater, representing the inherent lens distortion. The predicted shell heights were scaled for the distance from the bottom at which the photos were taken. This approach was validated by height estimation of scallops of known sizes. An underestimation of approximately 0.5 cm was seen, which could be attributed to pixel error, where each pixel represented 0.24 x 0.27 cm. After correcting for this difference the estimated heights ranged from 3.8-9.3 cm. A comparison of the height-distribution from a small-scale dredge survey carried out in the vicinity showed non-overlapping peaks in size distribution, with scallops of a broader size range visible in the AUV survey. Further investigations are necessary to evaluate any underlying bias and to validate how representative these surveys are of the true population. The low resolution images made identification of smaller scallops difficult. Overall, the observations of very few small scallops in both surveys could be attributed to low recruitment levels in the recent years due to the known scallop parasite outbreak in the region.
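
    Scaling shell heights "for the distance from the bottom" is a pinhole-projection calculation: ground size = pixel count × pixel pitch × altitude / focal length. A sketch with hypothetical camera geometry chosen only to reproduce roughly the ~0.25 cm/pixel scale mentioned in the record:

```python
def ground_size_cm(n_pixels, altitude_m, pixel_pitch_um, focal_mm):
    """Pinhole-projection ground footprint of n_pixels seen from a camera
    at a given altitude: size = n * pitch * altitude / focal_length."""
    pitch_m = pixel_pitch_um * 1e-6
    focal_m = focal_mm * 1e-3
    return n_pixels * pitch_m * altitude_m / focal_m * 100.0   # metres -> cm

# Hypothetical geometry: 10 um pixels, 8 mm lens, 2 m altitude
# -> 0.25 cm of seafloor per pixel, close to the 0.24 x 0.27 cm quoted.
cm_per_px = ground_size_cm(1, altitude_m=2.0, pixel_pitch_um=10.0, focal_mm=8.0)
```

    The linear dependence on altitude is why uncorrected altimeter error translates directly into a proportional bias in estimated shell height.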

  8. A small-scale comparison of Iceland scallop size distributions obtained from a camera based autonomous underwater vehicle and dredge survey.

    Directory of Open Access Journals (Sweden)

    Warsha Singh

    Full Text Available An approach is developed to estimate size of Iceland scallop shells from AUV photos. A small-scale camera based AUV survey of Iceland scallops was conducted at a defined site off West Iceland. Prior to height estimation of the identified shells, the distortions introduced by the vehicle orientation and the camera lens were corrected. The average AUV pitch and roll were 1.3 and 2.3 deg, which resulted in <2% error in ground distance, rendering these effects negligible. A quadratic polynomial model was identified for lens distortion correction. This model successfully predicted a theoretical grid from a frame photographed underwater, representing the inherent lens distortion. The predicted shell heights were scaled for the distance from the bottom at which the photos were taken. This approach was validated by height estimation of scallops of known sizes. An underestimation of approximately 0.5 cm was seen, which could be attributed to pixel error, where each pixel represented 0.24 x 0.27 cm. After correcting for this difference the estimated heights ranged from 3.8-9.3 cm. A comparison of the height-distribution from a small-scale dredge survey carried out in the vicinity showed non-overlapping peaks in size distribution, with scallops of a broader size range visible in the AUV survey. Further investigations are necessary to evaluate any underlying bias and to validate how representative these surveys are of the true population. The low resolution images made identification of smaller scallops difficult. Overall, the observations of very few small scallops in both surveys could be attributed to low recruitment levels in the recent years due to the known scallop parasite outbreak in the region.

  9. Motion correction in thoracic positron emission tomography

    CERN Document Server

    Gigengack, Fabian; Dawood, Mohammad; Schäfers, Klaus P

    2015-01-01

    Respiratory and cardiac motion leads to image degradation in Positron Emission Tomography (PET), which impairs quantification. In this book, the authors present approaches to motion estimation and motion correction in thoracic PET. The approaches for motion estimation are based on dual gating and mass-preserving image registration (VAMPIRE) and mass-preserving optical flow (MPOF). With mass-preservation, image intensity modulations caused by highly non-rigid cardiac motion are accounted for. Within the image registration framework different data terms, different variants of regularization and parametric and non-parametric motion models are examined. Within the optical flow framework, different data terms and further non-quadratic penalization are also discussed. The approaches for motion correction particularly focus on pipelines in dual gated PET. A quantitative evaluation of the proposed approaches is performed on software phantom data with accompanied ground-truth motion information. Further, clinical appl...

  10. Source of slow polarized positrons using the brilliant gamma beam at ELI-NP. Converter design and simulations

    Science.gov (United States)

    Djourelov, Nikolay; Oprisa, Andreea; Leca, Victor

    2016-01-01

    Simulations of a slow positron (es+) source based on the interaction of a circularly polarized gamma beam with a W converter were performed. The aim of the study was to propose a converter geometry and to determine the expected slow positron beam intensity and spot size, as well as the degree of positron spin polarization. Monte Carlo simulations by means of GEANT4 were used to estimate the fast positron production and the moderation efficiency of the converter, which also works as a self-moderator. Finite element analysis by means of COMSOL Multiphysics was applied to calculate the fraction of moderated positrons extracted from the converter cells and the quality of the beam formation by focusing. Using the low-energy part of the gamma beam, for the proposed converter geometry and in the case of 100% circular polarization of the gammas, the degree of spin polarization of the slow positron beam is expected to be 33%.

  11. Ionisation of atomic hydrogen by positron impact

    Science.gov (United States)

    Spicher, Gottfried; Olsson, Bjorn; Raith, Wilhelm; Sinapius, Guenther; Sperber, Wolfgang

    1990-01-01

    With a crossed-beam apparatus, the relative impact-ionization cross section of atomic hydrogen by positron impact was measured. A layout of the scattering region is given. The first measurements of the ionization of atomic hydrogen by positron impact are also presented.

  12. Slow positron beam at the JINR, Dubna

    Directory of Open Access Journals (Sweden)

    Horodek Paweł

    2015-12-01

    Full Text Available The Low Energy Positron Toroidal Accumulator (LEPTA) at the Joint Institute for Nuclear Research (JINR), proposed for the generation of positronium in flight, has been adapted for positron annihilation spectroscopy (PAS). The positron injector generates a continuous slow positron beam with positron energies between 50 eV and 35 keV. The radioactive ²²Na isotope is used. In contrast to the commonly used tungsten foil, solid neon serves here as the moderator. It allows a beam intensity of about 10⁵ e⁺/s to be obtained, with an energy spectrum characterized by a full width at half maximum (FWHM) of 3.4 eV and a tail towards lower energies of about 30 eV. The paper covers the characteristics of the variable-energy positron beam at the LEPTA facility: its parameters, the principle of moderation, the scheme of the injector, and the transport of positrons into the sample chamber. The recent status of the project and its development in the field of PAS is discussed. As an example, the measurement of the positron diffusion length in pure iron is demonstrated.
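
    Depth profiling with a variable-energy beam, as in the diffusion-length measurement mentioned above, conventionally uses the Makhovian mean implantation depth z̄ = (A/ρ)·Eⁿ, with the widely used empirical parameters A ≈ 4.0 µg cm⁻² keV⁻¹·⁶ and n ≈ 1.6. A sketch for iron (the formula is standard; treat the exact constants as an assumption):

```python
def mean_implantation_depth_nm(energy_kev, density_g_cm3, A=4.0e-6, n=1.6):
    """Makhovian mean positron implantation depth z = (A / rho) * E**n,
    with A in g cm^-2 keV^-n and rho in g/cm^3; returns nanometres."""
    z_cm = (A / density_g_cm3) * energy_kev ** n   # (g/cm^2) / (g/cm^3) = cm
    return z_cm * 1.0e7                            # cm -> nm

# Iron, rho = 7.87 g/cm^3: a 10 keV positron stops ~200 nm deep on average.
depth_fe_10kev = mean_implantation_depth_nm(10.0, 7.87)
```

    Scanning the beam energy thus scans the mean probing depth; fitting the depth dependence of the annihilation parameters against a diffusion model is what yields the diffusion length.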

  13. Electron and Positron Stopping Powers of Materials

    Science.gov (United States)

    SRD 7 NIST Electron and Positron Stopping Powers of Materials (PC database for purchase)   The EPSTAR database provides rapid calculations of stopping powers (collisional, radiative, and total), CSDA ranges, radiation yields and density effect corrections for incident electrons or positrons with kinetic energies from 1 keV to 10 GeV, and for any chemically defined target material.
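
    The CSDA range provided by such databases is defined as R = ∫₀^E dE′/S(E′), the integral of the reciprocal total stopping power over kinetic energy. A sketch of the numerical integration with a toy stopping power (a made-up constant, not NIST tabulated data):

```python
def csda_range(energy_mev, stopping_power, n_steps=10000):
    """Continuous-slowing-down-approximation range:
    R = integral_0^E dE' / S(E'), by the midpoint rule (avoids E = 0).
    `stopping_power` maps kinetic energy (MeV) to S in MeV cm^2/g,
    so the result is in g/cm^2."""
    de = energy_mev / n_steps
    total = 0.0
    for i in range(n_steps):
        e_mid = de * (i + 0.5)
        total += de / stopping_power(e_mid)
    return total

# Toy stopping power: roughly constant ~2 MeV cm^2/g, in the right
# ballpark for relativistic electrons in light materials.
toy_s = lambda e: 2.0
r = csda_range(10.0, toy_s)   # 10 MeV / (2 MeV cm^2/g) = 5 g/cm^2
```

    With a real tabulated S(E), the same integral (interpolating between grid points) reproduces the CSDA ranges the database reports; dividing by the material density converts g/cm² to a path length in cm.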

  14. The development of a compact positron tomograph for prostate imaging

    Energy Technology Data Exchange (ETDEWEB)

    Huber, Jennifer S.; Qi, Jinyi; Derenzo, Stephen E.; Moses, William W.; Huesman, Ronald H.; Budinger, Thomas F.

    2002-12-17

    We give design details and expected image results of a compact positron tomograph designed for prostate imaging that centers a patient between a pair of external curved detector banks (ellipse: 45 cm minor, 70 cm major axis). The bottom bank is fixed below the patient bed, and the top bank moves upward for patient access and downward for maximum sensitivity. Each bank is composed of two rows (axially) of 20 CTI PET Systems HR+ block detectors, forming two arcs that can be tilted to minimize attenuation. Compared to a conventional PET system, our camera uses about one-quarter the number of detectors and has almost two times higher solid angle coverage for a central point source, because the detectors are close to the patient. The detectors are read out by modified CTI HRRT data acquisition electronics. The individual detectors are angled in the plane to point towards the prostate to minimize reso

  15. The VISTA IR camera

    Science.gov (United States)

    Dalton, Gavin B.; Caldwell, Martin; Ward, Kim; Whalley, Martin S.; Burke, Kevin; Lucas, John M.; Richards, Tony; Ferlet, Marc; Edeson, Ruben L.; Tye, Daniel; Shaughnessy, Bryan M.; Strachan, Mel; Atad-Ettedgui, Eli; Leclerc, Melanie R.; Gallie, Angus; Bezawada, Nagaraja N.; Clark, Paul; Bissonauth, Nirmal; Luke, Peter; Dipper, Nigel A.; Berry, Paul; Sutherland, Will; Emerson, Jim

    2004-09-01

    The VISTA IR Camera has now completed its detailed design phase and is on schedule for delivery to ESO's Cerro Paranal Observatory in 2006. The camera consists of 16 Raytheon VIRGO 2048x2048 HgCdTe arrays in a sparse focal plane sampling a 1.65 degree field of view. A 1.4m diameter filter wheel provides slots for 7 distinct science filters, each comprising 16 individual filter panes. The camera also provides autoguiding and curvature sensing information for the VISTA telescope, and relies on tight tolerancing to meet the demanding requirements of the f/1 telescope design. The VISTA IR camera is unusual in that it contains no cold pupil-stop, but rather relies on a series of nested cold baffles to constrain the light reaching the focal plane to the science beam. In this paper we present a complete overview of the status of the final IR Camera design, its interaction with the VISTA telescope, and a summary of the predicted performance of the system.

  16. Euclidean Position Estimation if Features on a Moving Object Using a Single Camera: A Lyapunov-Based Approach

    National Research Council Canada - National Science Library

    Chitrakaran, V. K; Dawson, D. M; Chen, J; Dixon, W. E

    2004-01-01

    .... No explicit model is used to describe the movement of the object. Homography-based techniques are used in the development of the object kinematics, while Lyapunov design methods are utilized in the synthesis of the adaptive estimator...

  17. Applications of slow positrons to cancer research: Search for selectivity of positron annihilation to skin cancer

    Energy Technology Data Exchange (ETDEWEB)

    Jean, Y.C. [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States)]. E-mail: jeany@umkc.edu; Li Ying [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States); Liu Gaung [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States); Chen, Hongmin [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States); Zhang Junjie [Department of Chemistry, University of Missouri-Kansas City, 205 Spenscer Chemistry Building, 5009 Rockhill Road, Kansas City, MO 64110 (United States); Gadzia, Joseph E. [Dermatology, Department of Internal Medicine, University of Kansas Medical Center, Kansas City, KS 66103 (United States); Kansas Medical Clinic, Topeka, KS 66614 (United States)

    2006-02-28

    Slow positrons and positron annihilation spectroscopy (PAS) have been applied to medical research in searching for positron annihilation selectivity to cancer cells. We report the results of positron lifetime and Doppler broadening energy spectroscopies in human skin samples with and without cancer as a function of positron incident energy (probing up to 8 μm depth) and found that positronium annihilates at a significantly lower rate and forms with a lower probability in samples having either basal cell carcinoma (BCC) or squamous cell carcinoma (SCC) than in normal skin. The significant selectivity of positron annihilation to skin cancer may open a new research area of developing positron annihilation spectroscopy as a novel medical tool to detect cancer formation externally and non-invasively at the early stages.

  18. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera that withstands a total dose of 10⁶-10⁸ rad was developed. To develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was carried out. A vidicon tube was chosen as the image sensor, and non-browning optics and a camera driving circuit were applied. The controller needed for the CCTV camera system (lens, light, and pan/tilt control) was designed around the concept of remote control. Two types of radiation-tolerant camera were fabricated, one for use in underwater environments and one for normal environments. (author)

  19. Uav Cameras: Overview and Geometric Calibration Benchmark

    Science.gov (United States)

    Cramer, M.; Przybilla, H.-J.; Zurhorst, A.

    2017-08-01

    Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes modified) cameras known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark to check selected UAV-based camera systems in well-defined, reproducible environments, and such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus here is on geometry, which is tightly linked to the process of geometric calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep a constant geometry and thus cannot be regarded as metric cameras. Still, some of the commercial systems are quite stable over time, as has been proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration; in such scenarios, close-to-metric UAV cameras may have advantages. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.

  20. Towards next generation 3D cameras

    Science.gov (United States)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real-world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (<100 microns resolution) scans in extremely demanding scenarios with low-cost components. Several of these cameras are making a practical impact in industrial automation, being adopted in robotic inspection and assembly systems.

  1. HIGH SPEED CAMERA

    Science.gov (United States)

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. The camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, and an image recording surface. The combination of the rotating mirror and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera having this short a resolution time is possible.

  2. A Smartphone Camera-Based Indoor Positioning Algorithm of Crowded Scenarios with the Assistance of Deep CNN.

    Science.gov (United States)

    Jiao, Jichao; Li, Fei; Deng, Zhongliang; Ma, Wenjing

    2017-03-28

    Considering installation cost and coverage, received signal strength indicator (RSSI)-based indoor positioning systems are widely used across the world. However, because of the interference with wireless signals caused by complex indoor environments, including crowded populations, indoor positioning performance cannot meet the demands of indoor location-based services. In this paper, we focus on increasing the accuracy of signal strength estimation by taking population density into account, which distinguishes our approach from other RSSI-based indoor positioning methods. We therefore propose a new wireless signal compensation model that considers population density, distance, and frequency. First, the number of individuals in an indoor crowded scenario is counted by our convolutional neural network (CNN)-based human detection approach. Then, the relationship between population density and signal attenuation is described by our model. Finally, we use the trilateral positioning principle to localize the pedestrian. According to simulations and tests in crowded scenarios, the proposed model increases the accuracy of signal strength estimation by a factor of 1.53 compared to a model that does not consider the human body. The resulting localization error is less than 1.37 m, which indicates that our algorithm improves indoor positioning performance and is superior to other RSSI models.
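    The pipeline described above (RSSI, a compensated path-loss model, then trilateration) can be sketched as follows. The crowd-attenuation term, default parameter values, and function names are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, n=2.5, crowd_loss_db=0.0):
    """Invert a log-distance path-loss model to estimate range in metres.

    crowd_loss_db is a hypothetical extra attenuation standing in for the
    paper's population-density compensation; all defaults are illustrative.
    """
    return 10.0 ** ((tx_power_dbm - crowd_loss_db - rssi_dbm) / (10.0 * n))

def trilaterate(anchors, distances):
    """Least-squares position fix from >= 3 anchor positions and ranges."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, d0 = anchors[0], d[0]
    # Linearize by subtracting the first range equation from the others:
    # 2 (a_i - a_0) . p = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2
    A = 2.0 * (anchors[1:] - x0)
    b = d0**2 - d[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

    Adding crowd attenuation shrinks the range inferred from a given RSSI, which is the qualitative effect the compensation model corrects for.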

  3. Wide angle pinhole camera

    Science.gov (United States)

    Franke, J. M.

    1978-01-01

    Hemispherical refracting element gives pinhole camera 180 degree field-of-view without compromising its simplicity and depth-of-field. Refracting element, located just behind pinhole, bends light coming in from sides so that it falls within image area of film. In contrast to earlier pinhole cameras that used water or other transparent fluids to widen field, this model is not subject to leakage and is easily loaded and unloaded with film. Moreover, by selecting glass with different indices of refraction, field at film plane can be widened or reduced.

  4. A Dataset for Camera Independent Color Constancy.

    Science.gov (United States)

    Aytekin, Caglar; Nikkanen, Jarno; Gabbouj, Moncef

    2017-10-17

    In this paper, we provide a novel dataset designed for camera independent color constancy research. Camera independence corresponds to the robustness of an algorithm's performance when run on images of the same scene taken by different cameras. Accordingly, the images in our database correspond to several lab and field scenes each of which is captured by three different cameras with minimal registration errors. The lab scenes are also captured under five different illuminations. The spectral responses of cameras and the spectral power distributions of the lab light sources are also provided, as they may prove beneficial for training future algorithms to achieve color constancy. For a fair evaluation of future methods, we provide guidelines for supervised methods with indicated training, validation and testing partitions. Accordingly, we evaluate two recently proposed convolutional neural network based color constancy algorithms as baselines for future research. As a side contribution, this dataset also includes images taken by a mobile camera with color shading corrected and uncorrected results. This allows research on the effect of color shading as well.

  5. Studies of Positron Generation from Ultraintense Laser-Matter Interactions

    Science.gov (United States)

    Williams, Gerald Jackson

    Laser-produced pair jets possess unique characteristics that offer great potential for their use in laboratory-astrophysics experiments to study energetic phenomena such as relativistic shock acceleration. High-flux, high-energy positron sources may also be used to study relativistic pair plasmas and serve as novel diagnostic tools for high energy density conditions. Copious amounts of positrons with MeV energies are produced by directly irradiating targets with ultraintense lasers, where relativistic electrons accelerated by the laser field drive positron-electron pair production. Alternatively, laser-wakefield-accelerated electrons can produce pairs by the same mechanisms inside a secondary converter target. This dissertation describes a series of novel experiments that investigate the characteristics and scaling of pair production from ultraintense lasers, designed to establish a robust platform for laboratory-based relativistic pair plasmas. Results include a simple power-law scaling to estimate the effective positron yield of elemental targets for any Maxwellian electron source, typical of direct laser-target interactions. To facilitate these measurements, a solenoid electromagnetic coil was constructed to focus emitted particles, increasing the effective collection angle of the detector and enabling the investigation of pair production from thin targets and low-Z materials. Laser wakefield electron sources were also explored as a compact, high-repetition-rate platform for the production of high-energy pairs, with potential applications to the creation of charge-neutral relativistic pair plasmas. Plasma accelerators can produce low-divergence electron beams with energies approaching a GeV at Hz repetition rates. It was found that, even for high-energy positrons, energy loss and scattering mechanisms in the target impose a fundamental limit on the divergence and energy spectrum of the emitted positrons. The potential future application of laser

  6. Toward a miniaturized fundus camera.

    Science.gov (United States)

    Gliss, Christine; Parel, Jean-Marie; Flynn, John T; Pratisto, Hans; Niederer, Peter

    2004-01-01

    Retinopathy of prematurity (ROP) describes a pathological development of the retina in prematurely born children. In order to prevent severe permanent damage to the eye and enable timely treatment, the fundus of the eye in such children has to be examined according to established procedures. For these examinations, our miniaturized fundus camera is intended to allow the acquisition of wide-angle digital pictures of the fundus for on-line or off-line diagnosis and documentation. We designed two prototypes of a miniaturized fundus camera, one with graded refractive index (GRIN)-based optics, the other with conventional optics. Two different modes of illumination were compared: transscleral and transpupillary. In both systems, the size and weight of the camera were minimized. The prototypes were tested on young rabbits. The experiments led to the conclusion that the combination of conventional optics with transpupillary illumination yields the best results in terms of overall image quality. (c) 2004 Society of Photo-Optical Instrumentation Engineers.

  7. Feature Learning Based Approach for Weed Classification Using High Resolution Aerial Images from a Digital Camera Mounted on a UAV

    Directory of Open Access Journals (Sweden)

    Calvin Hung

    2014-12-01

    Full Text Available The development of low-cost unmanned aerial vehicles (UAVs) and lightweight imaging sensors has resulted in significant interest in their use for remote sensing applications. While significant attention has been paid to the collection, calibration, registration and mosaicking of data collected from small UAVs, the interpretation of these data into semantically meaningful information can still be a laborious task. A standard data collection and classification work-flow requires significant manual effort for segment size tuning, feature selection and rule-based classifier design. In this paper, we propose an alternative learning-based approach using feature learning to minimise the manual effort required. We apply this system to the classification of invasive weed species. Small UAVs are suited to this application, as they can collect data at high spatial resolutions, which is essential for the classification of small or localised weed outbreaks. In this paper, we apply feature learning to generate a bank of image filters that allows for the extraction of features that discriminate between the weeds of interest and background objects. These features are pooled to summarise the image statistics and form the input to a texton-based linear classifier that classifies an image patch as weed or background. We evaluated our approach to weed classification on three weeds of significance in Australia: water hyacinth, tropical soda apple and serrated tussock. Our results showed that collecting images at 5–10 m resulted in the highest classifier accuracy, indicated by F1 scores of up to 94%.
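    The filter-bank, pooling, and linear-classifier stages described above can be sketched in a few lines. The random filters, the mean-of-absolute-response pooling, and the patch size below are stand-ins for the learned filters and texton statistics in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the learned filter bank: eight random 5x5 filters.
filters = rng.standard_normal((8, 5, 5))

def extract_features(patch):
    """Convolve a grayscale patch with each filter ('valid' region only) and
    mean-pool the absolute responses: one summary statistic per filter."""
    fh, fw = filters.shape[1:]
    h, w = patch.shape
    feats = []
    for f in filters:
        # naive valid-mode correlation, kept explicit for clarity
        resp = np.array([[np.sum(patch[i:i + fh, j:j + fw] * f)
                          for j in range(w - fw + 1)]
                         for i in range(h - fh + 1)])
        feats.append(np.mean(np.abs(resp)))
    return np.array(feats)

def classify(patch, weights, bias):
    """Linear weed / background decision on the pooled features."""
    return extract_features(patch) @ weights + bias > 0
```

    In the paper the weights would come from training on labelled patches; here they are simply inputs to the sketch.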

  8. Positron emission tomography basic sciences

    CERN Document Server

    Townsend, D W; Valk, P E; Maisey, M N

    2003-01-01

    Essential for students and science and medical graduates who want to understand the basic science of Positron Emission Tomography (PET), this book describes the physics, chemistry and technology behind the science of PET and gives an overview of its clinical uses and the imaging techniques it employs. In recent years, PET has moved from a high-end research imaging tool used by the highly specialized to an essential component of clinical evaluation, especially in cancer management. Previously the realm of scientists, PET is explained here through its instrumentation, radiochemistry, data acquisition and image formation, integration of structural and functional images, radiation dosimetry and protection, and applications in dedicated areas such as drug development, oncology, and gene expression imaging. The technologist, the science, engineering or chemistry graduate seeking further detailed information about PET, or the medical advanced trainee wishing to gain insight into the basic science of PET will find this book...

  9. Development of a novel handheld intra-operative laparoscopic Compton camera for 18F-Fluoro-2-deoxy-2-D-glucose-guided surgery

    Science.gov (United States)

    Nakamura, Y.; Shimazoe, K.; Takahashi, H.; Yoshimura, S.; Seto, Y.; Kato, S.; Takahashi, M.; Momose, T.

    2016-08-01

    As well as pre-operative roadmapping by 18F-Fluoro-2-deoxy-2-D-glucose (FDG) positron emission tomography, intra-operative localization of the tracer is important for identifying local margins for less-invasive surgery, especially FDG-guided surgery. The objective of this paper is to develop a laparoscopic Compton camera and system intended for intra-operative FDG imaging in accurate and less-invasive dissections. The laparoscopic Compton camera consists of four layers of a 12-pixel cross-shaped array of GFAG crystals (2 × 2 × 3 mm³) with through-silicon-via multi-pixel photon counters and dedicated individual readout electronics based on a dynamic time-over-threshold method. Experimental results yielded a spatial resolution of 4 mm (FWHM) at a 10 mm working distance and an absolute detection efficiency of 0.11 cps kBq⁻¹, corresponding to an intrinsic detection efficiency of ~0.18%. In an experiment using a NEMA-like well-shaped FDG phantom, a φ 5 × 10 mm cylindrical hot spot was clearly resolved even in the presence of a background distribution surrounding the Compton camera and the hot spot. We successfully obtained reconstructed images of a resected lymph node and primary tumor ex vivo after FDG administration to a patient with esophageal cancer. These performance characteristics indicate a new possibility of FDG-directed surgery using a Compton camera intra-operatively.

  10. The measurement of in vivo joint angles during a squat using a single camera markerless motion capture system as compared to a marker based system.

    Science.gov (United States)

    Schmitz, Anne; Ye, Mao; Boggess, Grant; Shapiro, Robert; Yang, Ruigang; Noehren, Brian

    2015-02-01

    Markerless motion capture may have the potential to make motion capture technology widely practical in the clinic. However, the ability of a single-camera markerless system to quantify clinically relevant lower extremity joint angles has not been studied in vivo. Therefore, the goal of this study was to compare in vivo joint angles calculated using a marker-based motion capture system and a Microsoft Kinect during a squat. Fifteen individuals participated in the study: 8 male, 7 female, height 1.702±0.089 m, mass 67.9±10.4 kg, age 24±4 years, BMI 23.4±2.2 kg/m². Marker trajectories and Kinect depth map data of the leg were collected while each subject performed a slow squat motion. Custom code was used to export virtual marker trajectories from the Kinect data. Each set of marker trajectories was used to calculate Cardan knee and hip angles. The patterns of motion were similar, with average absolute differences of 0.9° between the two systems. The peak angles calculated by the marker-based and Kinect systems were largely correlated (r>0.55). These results suggest that the Kinect data can be post processed in a way that makes it a feasible markerless motion capture system for use in the clinic. Copyright © 2015 Elsevier B.V. All rights reserved.
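    Both systems reduce marker (or virtual-marker) trajectories to segment-fixed rotation matrices and then to Cardan angles. A minimal sketch of that last step, assuming an x-y'-z'' rotation sequence (a common flexion / ab-adduction / rotation convention) and no gimbal lock; the function names are illustrative, not the study's code:

```python
import numpy as np

def rot_xyz(alpha, beta, gamma):
    """Build R = Rx(alpha) @ Ry(beta) @ Rz(gamma); angles in degrees."""
    a, b, g = np.radians([alpha, beta, gamma])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(a), -np.sin(a)],
                   [0, np.sin(a),  np.cos(a)]])
    Ry = np.array([[ np.cos(b), 0, np.sin(b)],
                   [0, 1, 0],
                   [-np.sin(b), 0, np.cos(b)]])
    Rz = np.array([[np.cos(g), -np.sin(g), 0],
                   [np.sin(g),  np.cos(g), 0],
                   [0, 0, 1]])
    return Rx @ Ry @ Rz

def cardan_xyz(R):
    """Recover the x-y'-z'' Cardan angles (degrees) from a rotation matrix,
    assuming no gimbal lock (|R[0, 2]| < 1)."""
    beta = np.arcsin(R[0, 2])               # rotation about y'
    alpha = np.arctan2(-R[1, 2], R[2, 2])   # rotation about x
    gamma = np.arctan2(-R[0, 1], R[0, 0])   # rotation about z''
    return np.degrees([alpha, beta, gamma])
```

    Round-tripping known angles through rot_xyz and cardan_xyz is a quick sanity check that the extraction matches the chosen sequence.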

  11. An Embedded Real-Time Red Peach Detection System Based on an OV7670 Camera, ARM Cortex-M4 Processor and 3D Look-Up Tables

    Science.gov (United States)

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-01-01

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second. PMID:23202040

  12. An embedded real-time red peach detection system based on an OV7670 camera, ARM cortex-M4 processor and 3D look-up tables.

    Science.gov (United States)

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-10-22

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.
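    The core trick above, per-pixel classification by a precomputed 3D look-up table over quantized RGB, can be sketched as follows. The "red-ish" linear rule and the 5-bit quantization are illustrative assumptions, not the paper's calibrated colour models:

```python
import numpy as np

BITS = 5                      # 32 bins per channel -> 32^3-entry table
STEP = 256 >> BITS            # width of one quantization bin

def build_lut(is_fruit):
    """Precompute a boolean 3D LUT over quantized RGB space.

    is_fruit(r, g, b) is any per-colour decision rule; here it stands in
    for the paper's linear colour models / fruit histograms."""
    n = 1 << BITS
    lut = np.zeros((n, n, n), dtype=bool)
    for ri in range(n):
        for gi in range(n):
            for bi in range(n):
                lut[ri, gi, bi] = is_fruit(ri * STEP, gi * STEP, bi * STEP)
    return lut

def classify_image(img, lut):
    """Per-pixel classification by a single table lookup (img: HxWx3 uint8)."""
    q = img >> (8 - BITS)
    return lut[q[..., 0], q[..., 1], q[..., 2]]

# Illustrative "red-ish" rule, NOT the paper's calibrated model.
def red_rule(r, g, b):
    return r > 1.3 * g + 20 and r > 1.3 * b + 20
```

    Precomputing the table moves all per-colour arithmetic offline, which is why the lookup fits the real-time budget of a Cortex-M4.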

  13. An Embedded Real-Time Red Peach Detection System Based on an OV7670 Camera, ARM Cortex-M4 Processor and 3D Look-Up Tables

    Directory of Open Access Journals (Sweden)

    Marcel Tresanchez

    2012-10-01

    Full Text Available This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.

  14. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity

    Directory of Open Access Journals (Sweden)

    Taekjun Oh

    2015-07-01

    Full Text Available Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, the conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relieved, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach.

  15. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot...

  16. Make a Pinhole Camera

    Science.gov (United States)

    Fisher, Diane K.; Novati, Alexander

    2009-01-01

    On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…

  17. The canopy camera

    Science.gov (United States)

    Harry E. Brown

    1962-01-01

    The canopy camera is a device of new design that takes wide-angle, overhead photographs of vegetation canopies, cloud cover, topographic horizons, and similar subjects. Since the entire hemisphere is photographed in a single exposure, the resulting photograph is circular, with the horizon forming the perimeter and the zenith the center. Photographs of this type provide...

  18. Positron experiments in a supported dipole trap

    Science.gov (United States)

    Danielson, J. R.; Saitoh, H.; Horn-Stanja, J.; Stenson, E. V.; Hergenhahn, U.; Nißl, S.; Pedersen, T. Sunn; Stoneking, M. R.; Singer, M.; Dickmann, M.; Hugenschmidt, C.; Schwekhard, L.; Surko, C. M.

    2017-10-01

    A new levitated dipole trap is being designed to experimentally study the unique physics of electron-positron pair plasmas. In parallel with the design process, a number of key questions have been investigated in a supported dipole trap. This includes the use of E × B drift injection, the manipulation of positron spatial distribution in the trap by external electrostatic potentials, and studies of the positron confinement time in a system with asymmetric perturbations. In particular, E × B drift injection has been shown to be a viable and robust means of injecting positrons from the NEPOMUC (NEutron-induced POsitron source MUniCh) beam line, across the separatrix, and into the confinement region of the dipole. Nearly 100% injection of the beam has been demonstrated for a large region of parameter space. Once in the trap, positrons can be moved deeper into the confinement region by means of either static or oscillating potentials applied strategically to the segmented outer wall of the trap. Finally, once the injection potentials are switched off, experiments have demonstrated a long-lived component of the trapped positrons lasting for hundreds of milliseconds.

  19. Detection of unknown primary head and neck tumors by positron emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Braams, J.W.; Roodenburg, J.L.N. [Groningen Univ. Hospital, Dept. of Oral and Maxillofacial Surgery, Groningen (Netherlands); Pruim, J.; Vaalburg, W.; Kole, A.C. [Groningen Univ. Hospital, PET center, Groningen (Netherlands); Vermey, A. [Groningen Univ. Hospital, Dept. of Surgical Oncology, Groningen (Netherlands); Nikkels, P.G.J. [Groningen Univ. Hospital, Dept. of Pathology, Groningen (Netherlands)

    1997-04-01

    The purpose of this study was to investigate the potential of positron emission tomography (PET) with ¹⁸F-labeled fluoro-2-deoxy-D-glucose (FDG) to detect the unknown primary tumors of cervical metastases. Thirteen patients with various histologic types of cervical metastases of unknown primary origin were studied. Patients received 185-370 MBq FDG intravenously and were scanned from 30 min after injection onward. Whole-body scans were made with a Siemens ECAT 951/31 PET camera. PET identified the primary tumor in four patients: plasmocytoma, squamous cell carcinoma of the oropharynx, squamous cell carcinoma of the larynx, and bronchial carcinoma, respectively. All known metastatic tumor sites were visualized. PET did not identify a primary tumor in one patient in whom a squamous cell carcinoma at the base of the tongue was found at a later stage. In the remaining eight patients, a primary lesion was never found. The follow-up ranged from 18 to 30 months. A previously unknown primary tumor can be identified with FDG-PET in approximately 30% of patients with cervical metastases. PET can reveal useful information that results in more appropriate treatment, and it can be of value in guiding endoscopic biopsies for histologic diagnosis. (au).

  20. Time-resolved optical imaging through turbid media using a fast data acquisition system based on a gated CCD camera

    Energy Technology Data Exchange (ETDEWEB)

    D' Andrea, Cosimo; Comelli, Daniela; Pifferi, Antonio; Torricelli, Alessandro; Valentini, Gianluca; Cubeddu, Rinaldo [INFM-Dipartimento di Fisica and IFN-CNR, Politecnico di Milano Piazza Leonardo da Vinci 32, I-20133 Milan (Italy)

    2003-07-21

    In this paper, we propose a novel approach for the acquisition of time-resolved data for optical tomography. A fast gated CCD has been used as a parallel detector to acquire in one shot the light intensity exiting a phantom within a very short time slice. By using a pulsed illumination and repeating the acquisition at different delays, the time behaviour of the diffused transmittance can be recorded very quickly. Scattering inclusions embedded in a 5 cm thick phantom have been revealed by fitting a set of 120 images, delayed 50 ps from one another, with a mathematical model based on the random walk theory. Moreover, absorption inclusions have been detected in time-gated images taken at suitable delays.