WorldWideScience

Sample records for faint object camera

  1. Hubble Space Telescope, Faint Object Camera

    Science.gov (United States)

    1981-01-01

This drawing illustrates the Hubble Space Telescope's (HST's) Faint Object Camera (FOC). The FOC reflects light down one of two optical pathways. The light enters a detector after passing through filters or through devices that can block out light from bright objects. Light from bright objects is blocked out to enable the FOC to see background images. The detector intensifies the image, then records it much like a television camera. For faint objects, images can be built up over long exposure times. The total image is translated into digital data, transmitted to Earth, and then reconstructed. The purpose of the HST, the most complex and sensitive optical telescope ever made, is to study the cosmos from a low-Earth orbit. By placing the telescope in space, astronomers are able to collect data free of the distorting effects of the Earth's atmosphere. The HST detects objects 25 times fainter than the dimmest objects seen from Earth and provides astronomers with an observable universe 250 times larger than that visible from ground-based telescopes, perhaps as far away as 14 billion light-years. The HST views galaxies, stars, planets, comets, possibly other solar systems, and even unusual phenomena such as quasars, with 10 times the clarity of ground-based telescopes. The HST was deployed from the Space Shuttle Discovery (STS-31 mission) into Earth orbit in April 1990. The Marshall Space Flight Center had responsibility for design, development, and construction of the HST. The Perkin-Elmer Corporation, in Danbury, Connecticut, developed the optical system and guidance sensors.

  2. A Study of Planetary Nebulae using the Faint Object Infrared Camera for the SOFIA Telescope

    Science.gov (United States)

    Davis, Jessica

    2012-01-01

A planetary nebula is formed following an intermediate-mass (1-8 solar mass) star's evolution off the main sequence; it undergoes a phase of mass loss whereby the stellar envelope is ejected and the core is converted into a white dwarf. Planetary nebulae often display complex morphologies such as waists or tori, rings, collimated jet-like outflows, and bipolar symmetry, but exactly how these features form is unclear. To study how the distribution of dust in the interstellar medium affects their morphology, we utilize the Faint Object InfraRed CAmera for the SOFIA Telescope (FORCAST) to obtain well-resolved images of four planetary nebulae--NGC 7027, NGC 6543, M2-9, and the Frosty Leo Nebula--at wavelengths where they radiate most of their energy. We retrieve mid-infrared images at wavelengths ranging from 6.3 to 37.1 microns for each of our targets. IDL (Interactive Data Language) is used to perform basic analysis. We select M2-9 to investigate further; analyzing cross sections of the southern lobe reveals a slight limb-brightening effect. Modeling the dust distribution within the lobes reveals that the thickness of the lobe walls is higher than anticipated, or that, rather than surrounding a vacuum, the walls surround a low-density region of tenuous dust. Further analysis of this and other planetary nebulae is needed before drawing more specific conclusions.
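The limb-brightening check on a lobe cross section can be sketched in a few lines of Python (a hypothetical helper, not the authors' IDL analysis): for a hollow shell seen in projection, the two edges of the cross section should be brighter than the center.

```python
def limb_brightening_ratio(cross_section):
    """Ratio of the mean brightness of the two edge peaks of a lobe
    cross section to the brightness at its center.  Values above 1
    suggest limb brightening (a shell seen in projection); values
    below 1 suggest a centrally filled lobe."""
    n = len(cross_section)
    center = cross_section[n // 2]
    half = n // 2
    left_peak = max(cross_section[:half])
    right_peak = max(cross_section[half + 1:])
    return 0.5 * (left_peak + right_peak) / center
```

Applied to a synthetic shell-like profile such as `[0, 5, 3, 5, 0]`, the ratio exceeds 1, while a centrally peaked profile gives a ratio below 1.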

  3. Preliminary analysis on faint luminous lightning events recorded by multiple high speed cameras

    Science.gov (United States)

    Alves, J.; Saraiva, A. V.; Pinto, O.; Campos, L. Z.; Antunes, L.; Luz, E. S.; Medeiros, C.; Buzato, T. S.

    2013-12-01

The objective of this work is the study of some faint luminous events produced by lightning flashes that were recorded simultaneously by multiple high-speed cameras during the previous RAMMER (Automated Multi-camera Network for Monitoring and Study of Lightning) campaigns. The RAMMER network is composed of three fixed cameras and one mobile color camera separated, on average, by distances of 13 kilometers. They were located in the Paraiba Valley (in the cities of São José dos Campos and Caçapava), SP, Brazil, arranged in a quadrilateral shape centered on the São José dos Campos region. This configuration allowed RAMMER to see a thunderstorm from different angles, registering the same lightning flashes simultaneously with multiple cameras. Each RAMMER sensor is composed of a triggering system and a Phantom high-speed camera version 9.1, set to operate at a frame rate of 2,500 frames per second with a Nikkor lens (model AF-S DX 18-55 mm 1:3.5-5.6 G in the stationary sensors, and a lens model AF-S ED 24 mm 1:1.4 in the mobile sensor). All videos were GPS (Global Positioning System) time-stamped. For this work we used a data set collected on four RAMMER manual operation days in the 2012 and 2013 campaigns. On Feb. 18th the data set is composed of 15 flashes recorded by two cameras and 4 flashes recorded by three cameras. On Feb. 19th a total of 5 flashes was registered by two cameras and 1 flash by three cameras. On Feb. 22nd we obtained 4 flashes registered by two cameras. Finally, on March 6th two cameras recorded 2 flashes. The analysis in this study proposes an evaluation methodology for faint luminous lightning events, such as continuing current. Problems in the temporal measurement of the continuing current can generate some imprecision during the optical analysis; therefore this work aims to evaluate the effects of distance on this parameter with this preliminary data set. In the cases that include the color camera we analyzed the RGB

  4. Faint Objects and How to Observe Them

    CERN Document Server

    Cudnik, Brian

    2013-01-01

    Astronomers' Observing Guides provide up-to-date information for amateur astronomers who want to know all about what it is they are observing. This is the basis of the first part of the book. The second part details observing techniques for practical astronomers, working with a range of different instruments. Faint Objects and How to Observe Them is for visual observers who want to "go deep" with their observing. It's a guide to some of the most distant, dim, and rarely observed objects in the sky, with background information on surveys and object lists -- some familiar and some not. Typically, amateur astronomers begin by looking at the brighter objects, and work their way "deeper" as their experience and skills improve. Faint Objects is about the faintest objects we can see with an amateur's telescope -- their physical nature, why they appear so dim, and how to track them down. By definition, these objects are hard to see! But moderate equipment (a decent telescope of at least 10-inch aperture) and the righ...

  5. A Tool for Optimizing Observation Planning for Faint Moving Objects

    Science.gov (United States)

    Arredondo, Anicia; Bosh, Amanda S.; Levine, Stephen

    2016-10-01

    Observations of small solar system bodies such as trans-Neptunian objects and Centaurs are vital for understanding the basic properties of these small members of our solar system. Because these objects are often very faint, large telescopes and long exposures may be necessary, which can result in crowded fields in which the target of interest may be blended with a field star. For accurate photometry and astrometry, observations must be planned to occur when the target is free of background stars; this restriction results in limited observing windows. We have created a tool that can be used to plan observations of faint moving objects. Features of the tool include estimates of best times to observe (when the object is not too near another object), a finder chart output, a list of possible astrometric and photometric reference stars, and an exposure time calculator. This work makes use of the USNOFS Image and Catalogue Archive operated by the United States Naval Observatory, Flagstaff Station (S.E. Levine and D.G. Monet 2000), the JPL Horizons online ephemeris service (Giorgini et al. 1996), the Minor Planet Center's MPChecker (http://cgi.minorplanetcenter.net/cgi-bin/checkmp.cgi), and source extraction software SExtractor (Bertin & Arnouts 1996). Support for this work was provided by NASA SSO grant NNX15AJ82G.
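The core of such a planning tool, the "best times to observe" check, can be sketched as follows. This is a simplified stand-in for the actual tool, with made-up function names and a fixed separation threshold: the target's ephemeris is compared against field-star positions, and only epochs where the target is far enough from every star are kept.

```python
import math

def angular_sep(ra1, dec1, ra2, dec2):
    """Angular separation in degrees between two sky positions given
    in degrees, using the haversine formula for stability at small
    separations."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    h = (math.sin((d2 - d1) / 2) ** 2
         + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(h)))

def clear_windows(ephemeris, stars, min_sep_arcsec=5.0):
    """Return the epochs at which the moving target is at least
    min_sep_arcsec away from every catalog star.

    ephemeris: list of (time, ra_deg, dec_deg) for the target
    stars:     list of (ra_deg, dec_deg) field-star positions
    """
    min_sep = min_sep_arcsec / 3600.0
    good = []
    for t, ra, dec in ephemeris:
        if all(angular_sep(ra, dec, sra, sdec) >= min_sep
               for sra, sdec in stars):
            good.append(t)
    return good
```

A real implementation would pull the ephemeris from JPL Horizons and the star list from a catalog query; here both are plain lists.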

  6. Europe's space camera unmasks a cosmic gamma-ray machine

    Science.gov (United States)

    1996-11-01

The new-found neutron star is the visible counterpart of a pulsating radio source, Pulsar 1055-52. It is a mere 20 kilometres wide. Although the neutron star is very hot, at about a million degrees C, very little of its radiant energy takes the form of visible light. It emits mainly gamma-rays, an extremely energetic form of radiation. By examining it at visible wavelengths, astronomers hope to figure out why Pulsar 1055-52 is the most efficient generator of gamma-rays known so far, anywhere in the Universe. The Faint Object Camera found Pulsar 1055-52 in near ultraviolet light at 3400 angstroms, a little shorter in wavelength than the violet light at the extremity of the human visual range. Roberto Mignani, Patrizia Caraveo and Giovanni Bignami of the Istituto di Fisica Cosmica in Milan, Italy, report its optical identification in a forthcoming issue of Astrophysical Journal Letters (1 January 1997). The formal name of the object is PSR 1055-52. Evading the glare of an adjacent star The Italian team had tried since 1988 to spot Pulsar 1055-52 with two of the most powerful ground-based optical telescopes in the Southern Hemisphere. These were the 3.6-metre Telescope and the 3.5-metre New Technology Telescope of the European Southern Observatory at La Silla, Chile. Unfortunately an ordinary star 100,000 times brighter lay in almost the same direction in the sky, separated from the neutron star by only a thousandth of a degree. The Earth's atmosphere defocused the star's light sufficiently to mask the glimmer from Pulsar 1055-52. The astronomers therefore needed an instrument in space. The Faint Object Camera offered the best precision and sensitivity to continue the hunt. Devised by European astronomers to complement the American wide field camera in the Hubble Space Telescope, the Faint Object Camera has a relatively narrow field of view. It intensifies the image of a faint object by repeatedly accelerating electrons from photo-electric films, so as to produce

  7. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming the effects of occlusions, where an object is in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time, in each stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
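A minimal sketch of the fusion step, assuming per-frame object IDs have already been registered across the two views: detections from time-correlated frames are merged, so an object occluded in one camera survives as long as the other camera still sees it. The helper below is illustrative, not the authors' implementation.

```python
def fuse_tracks(tracks_a, tracks_b):
    """Merge per-frame detections from two time-synchronized cameras.

    tracks_a, tracks_b: dicts mapping frame index -> set of object ids
    detected by that camera.  The fused result contains every object
    seen by at least one camera in each frame, so partial or full
    occlusion in a single view does not break a track.
    """
    frames = sorted(set(tracks_a) | set(tracks_b))
    return {f: tracks_a.get(f, set()) | tracks_b.get(f, set())
            for f in frames}
```

In frame 1 below, object 2 is occluded in camera A but visible in camera B, so it persists in the fused track set.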

  8. Faint H-alpha emission objects near the equatorial selected areas

    International Nuclear Information System (INIS)

    Robertson, T.H.; Jordan, T.M.

    1989-01-01

    An objective-prism survey of fields centered on the 24 Kapteyn Selected Areas along the celestial equator has resulted in the detection of 120 faint H-alpha emission-line objects. Diffuse objects and stars having molecular bands in their spectra are not included. Only 18 of these stars were identified in previous lists of emission-line objects. Identifications were found for an additional three stars. Images of these objects appear to be stellar on direct plates. The magnitude range for these stars is V = 10.1-19.00. Positions and V magnitudes of these objects are provided, as are identifications of objects which have been reported in other lists. Frequency distributions of the apparent magnitudes and Galactic latitudes of these emission-line objects are discussed, and finding charts are provided. 14 refs

  9. The Population of Optically Faint GEO Debris

    Science.gov (United States)

    Seitzer, Patrick; Barker, Ed; Buckalew, Brent; Burkhardt, Andrew; Cowardin, Heather; Frith, James; Gomez, Juan; Kaleida, Catherine; Lederer, Susan M.; Lee, Chris H.

    2016-01-01

The 6.5-m Magellan telescope 'Walter Baade' at the Las Campanas Observatory in Chile has been used for spot surveys of the GEO orbital regime to study the population of optically faint GEO debris. The goal is to estimate the size of the population of GEO debris at sizes much smaller than can be studied with 1-meter class telescopes. Despite the small field of view of the Magellan instrument (diameter 0.5-degree), a significant population of objects fainter than R = 19th magnitude has been found with angular rates consistent with circular orbits at GEO. We compare the size of this population with the numbers of GEO objects found at brighter magnitudes by smaller telescopes. The observed detections have a wide range of characteristics, starting with those appearing as short uniform streaks. But there are a substantial number of detections with variations in brightness (flashers) during the 5-second exposure. The duration of each of these flashes can be extremely brief: sometimes less than half a second. This is characteristic of a rapidly tumbling object with a quite variable projected size times albedo. If the albedo is of the order of 0.2, then the largest projected size of these objects is around 10 cm. The data in this paper were collected over the last several years using Magellan's IMACS camera in f/2 mode. The analysis shows the brightness bins for the observed GEO population as well as the periodicity of the flashers. All objects presented are correlated with the catalog; the focus of the paper is on the uncorrelated, optically faint objects. The goal of this project is to better characterize the faint debris population in GEO that access to a 6.5-m optical telescope at a superb site can provide.
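Detecting sub-second flashes of the kind described above amounts to thresholding a light curve. The following is a hedged sketch, not the authors' pipeline: it flags maximal runs of frames whose brightness exceeds the median plus a few standard deviations, and reports each flash's start time and duration.

```python
import statistics

def find_flashes(light_curve, dt, k=3.0):
    """Locate brief brightness flashes in a photometric time series.

    light_curve: brightness per frame; dt: seconds per frame.
    A flash is a maximal run of frames exceeding median + k * stdev.
    Returns a list of (start_time_s, duration_s) tuples.
    """
    thresh = (statistics.median(light_curve)
              + k * statistics.pstdev(light_curve))
    flashes, start = [], None
    for i, v in enumerate(light_curve):
        if v > thresh and start is None:
            start = i                       # flash begins
        elif v <= thresh and start is not None:
            flashes.append((start * dt, (i - start) * dt))
            start = None                    # flash ends
    if start is not None:                   # flash runs to end of series
        flashes.append((start * dt, (len(light_curve) - start) * dt))
    return flashes
```

With a 2,500 fps camera, `dt` would be 1/2500 s; the toy test below uses a coarser cadence for clarity.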

  10. Composition of faint comets

    International Nuclear Information System (INIS)

    Brown, L.W.

    1986-01-01

The study uses an emission-line differential imaging camera built by the Science Operations Branch. This instrument allows photometric data to be obtained over a large area of a comet in a large number of resolution elements. The detector is a 100x100 Reticon array which, with interchangeable optics, can give resolutions from 2'' to 30'' over a field of 1' to 15'. The camera, through its controlling computer, can simultaneously take images in on-line and continuum filters and, through computer subtraction and calibration, present a photometric image of the comet produced by only the emission of the molecule under study. Initial work has shown two significant problems. First, the auxiliary equipment of the telescope has not allowed the unambiguous location of faint comets, so that systematic observations could not be made; second, initial data have not shown much molecular emission from the faint comets which were located. Work last year on a software and hardware display system, and this year on additional guide motors on the 36-inch telescope, has allowed the differential camera to act as its own finder and guide scope. Comet IRAS was observed in C2 and CO+, as was an occultation by the comet of SAO029103. The periodic comet Giacobini-Zinner was also observed in C2

  11. Fainting

    Science.gov (United States)

    ... a medicine you’re taking. Alcohol, cocaine, and marijuana can also cause fainting. More serious causes of fainting include seizures and problems with the heart or with the blood vessels leading to the brain. How is fainting diagnosed? Your doctor will probably ...

  12. Analysis of Camera Parameters Value in Various Object Distances Calibration

    International Nuclear Information System (INIS)

    Yusoff, Ahmad Razali; Ariff, Mohd Farid Mohd; Idris, Khairulnizam M; Majid, Zulkepli; Setan, Halim; Chong, Albert K

    2014-01-01

In photogrammetric applications, good camera parameters are needed for mapping purposes, for example with an Unmanned Aerial Vehicle (UAV) equipped with a non-metric camera. Simple camera calibration is a common laboratory procedure for obtaining the camera parameter values. In aerial mapping, interior camera parameter values from close-range camera calibration are used to correct image errors. However, the causes and effects of the calibration steps used to obtain accurate mapping need to be analyzed. Therefore, this research contributes an analysis of camera parameters obtained with a portable calibration frame of 1.5 × 1 meter in size. Object distances of two, three, four, five, and six meters are the research focus. Results are analyzed to find the changes in image and camera parameter values. Hence, a camera's calibration parameters are considered to differ depending on the type of calibration parameters and the object distances.

  13. A framework for multi-object tracking over distributed wireless camera networks

    Science.gov (United States)

    Gau, Victor; Hwang, Jenq-Neng

    2010-07-01

In this paper, we propose a unified framework targeting two important issues in a distributed wireless camera network, i.e., object tracking and network communication, to achieve reliable multi-object tracking over distributed wireless camera networks. In the object tracking part, we propose a fully automated approach for tracking multiple objects across multiple cameras with overlapping and non-overlapping fields of view, without initial training. To effectively exchange tracking information among the distributed cameras, we propose an idle-probability-based broadcasting method, iPro, which adaptively adjusts the broadcast probability to improve broadcast effectiveness in a dense, saturated camera network. Experimental results for multi-object tracking demonstrate the promising performance of our approach on real video sequences for cameras with overlapping and non-overlapping views. The modeling and ns-2 simulation results show that iPro almost approaches the theoretical performance upper bound if cameras are within each other's transmission range. In more general scenarios, e.g., in case of hidden node problems, the simulation results show that iPro significantly outperforms standard IEEE 802.11, especially when the number of competing nodes increases.
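The benefit of adapting the broadcast probability to node density can be illustrated with a toy slotted-broadcast simulation. This illustrates the underlying idea, not the iPro algorithm itself: a slot succeeds only if exactly one of n nodes transmits, so the success rate peaks when the per-node probability is near 1/n and collapses when every node broadcasts too eagerly.

```python
import random

def broadcast_round(n_nodes, p, rng):
    """One slotted broadcast round: each node transmits with
    probability p; the slot succeeds iff exactly one node transmits
    (two or more transmissions collide, zero wastes the slot)."""
    return sum(rng.random() < p for _ in range(n_nodes)) == 1

def success_rate(n_nodes, p, rounds=20000, seed=1):
    """Empirical fraction of successful slots over many rounds,
    with a seeded generator for reproducibility."""
    rng = random.Random(seed)
    return sum(broadcast_round(n_nodes, p, rng)
               for _ in range(rounds)) / rounds
```

For 10 nodes, p = 0.1 yields a success rate near the analytic value 10 · 0.1 · 0.9⁹ ≈ 0.39, while p = 0.5 collapses to under 1 percent, which is why adapting p to the sensed channel load matters.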

  14. Studies of faint field galaxies

    International Nuclear Information System (INIS)

    Ellis, R.S.

    1983-01-01

    Although claims are often made that photometric surveys of faint field galaxies reveal evidence for evolution over recent epochs (z<0.6), it has not yet been possible to select a single evolutionary model from comparisons with the data. Magnitude counts are sensitive to evolution but the data is well-mixed in distance because of the width of the luminosity function (LF). Colours can narrow the possibilities but the effects of redshift and morphology can only be separated using many passbands. In this paper, the author highlights two ways in which one can make further progress in this important subject. First, he discusses results based on the AAT redshift survey which comprises 5 Schmidt fields to J = 16.7 i.e. well beyond local inhomogeneities. Secondly, the difficulties in resolving the many possibilities encountered with faint photometry could be resolved with redshifts. To obtain redshift distributions for faint samples is now feasible via multi-object spectroscopy. At intermediate magnitudes (J=20) such distributions test the faint end of the galaxy LF; at faint magnitudes (J=22) they offer a direct evolutionary test. (Auth.)

  15. Multi Camera Multi Object Tracking using Block Search over Epipolar Geometry

    Directory of Open Access Journals (Sweden)

    Saman Sargolzaei

    2000-01-01

We present a strategy for multi-object tracking in a multi-camera environment for surveillance and security applications, where tracking a multitude of subjects is of utmost importance in a crowded scene. Our technique assumes a partially overlapped multi-camera setup where cameras share a common view from different angles to assess the positions and activities of subjects under suspicion. To establish spatial correspondence between camera views we employ an epipolar geometry technique. We propose an overlapped block search method to find the pattern of interest (target) in new frames. A color pattern update scheme has been considered to further optimize the efficiency of the object tracking when the object pattern changes due to object motion in the fields of view of the cameras. Evaluation of our approach is presented with results on the PETS2007 dataset.
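The epipolar constraint that restricts the block search can be sketched directly: given a fundamental matrix F, the match for a point x in one view must lie near the line l' = F x in the other view, so only blocks whose centers fall close to that line need to be searched. The snippet below is a minimal sketch; the F used in the test is a hypothetical rectified-stereo matrix, not one from the paper.

```python
def epipolar_line(F, x):
    """Epipolar line l' = F @ x in the second view for a homogeneous
    point x; F is a 3x3 nested list, x a 3-vector."""
    return [sum(F[i][j] * x[j] for j in range(3)) for i in range(3)]

def point_line_distance(l, x):
    """Perpendicular image-plane distance from homogeneous point x
    to line l = (a, b, c); x is normalized by its third component."""
    a, b, c = l
    return abs(a * x[0] + b * x[1] + c * x[2]) / ((a * a + b * b) ** 0.5 * x[2])
```

A candidate block is accepted for pattern comparison only if `point_line_distance` is below a small pixel threshold, which prunes the search from the whole frame to a band around the epipolar line.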

  16. Long-Term Continuous Double Station Observation of Faint Meteor Showers

    Czech Academy of Sciences Publication Activity Database

    Vítek, S.; Páta, P.; Koten, Pavel; Fliegel, K.

    2016-01-01

    Roč. 16, č. 9 (2016), 1493/1-1493/10 ISSN 1424-8220 R&D Projects: GA ČR GA14-25251S Institutional support: RVO:67985815 Keywords : faint meteor shower * meteoroid * CCD camera Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 2.677, year: 2016

  17. Faint Object Detection in Multi-Epoch Observations via Catalog Data Fusion

    Energy Technology Data Exchange (ETDEWEB)

    Budavári, Tamás; Szalay, Alexander S. [Department of Physics and Astronomy, The Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Loredo, Thomas J. [Cornell Center for Astrophysics and Planetary Science, Cornell University, Ithaca, NY 14853 (United States)

    2017-03-20

    Astronomy in the time-domain era faces several new challenges. One of them is the efficient use of observations obtained at multiple epochs. The work presented here addresses faint object detection and describes an incremental strategy for separating real objects from artifacts in ongoing surveys. The idea is to produce low-threshold single-epoch catalogs and to accumulate information across epochs. This is in contrast to more conventional strategies based on co-added or stacked images. We adopt a Bayesian approach, addressing object detection by calculating the marginal likelihoods for hypotheses asserting that there is no object or one object in a small image patch containing at most one cataloged source at each epoch. The object-present hypothesis interprets the sources in a patch at different epochs as arising from a genuine object; the no-object hypothesis interprets candidate sources as spurious, arising from noise peaks. We study the detection probability for constant-flux objects in a Gaussian noise setting, comparing results based on single and stacked exposures to results based on a series of single-epoch catalog summaries. Our procedure amounts to generalized cross-matching: it is the product of a factor accounting for the matching of the estimated fluxes of the candidate sources and a factor accounting for the matching of their estimated directions. We find that probabilistic fusion of multi-epoch catalogs can detect sources with similar sensitivity and selectivity compared to stacking. The probabilistic cross-matching framework underlying our approach plays an important role in maintaining detection sensitivity and points toward generalizations that could accommodate variability and complex object structure.
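The flux-matching factor of this comparison can be illustrated with a simplified sketch. Instead of marginalizing over the unknown flux as the paper does, the snippet below plugs in the inverse-variance-weighted mean as a maximum-likelihood stand-in, and compares the object-present (one constant flux across epochs) and no-object (pure noise around zero) hypotheses; this is an assumption-laden approximation, not the paper's full Bayesian calculation.

```python
def log_bayes_factor_flux(fluxes, sigmas):
    """Approximate log likelihood ratio for the flux part of the
    object-present vs. no-object comparison across epochs, in a
    Gaussian-noise setting.

    Object-present: all epochs share one constant flux, estimated by
    the inverse-variance-weighted mean (an ML plug-in for the true
    marginalization over flux).  No-object: each epoch is pure noise
    centered on zero.  Large positive values favor a genuine object.
    """
    w = [1.0 / s ** 2 for s in sigmas]
    fhat = sum(wi * fi for wi, fi in zip(w, fluxes)) / sum(w)
    ll_obj = sum(-0.5 * ((f - fhat) / s) ** 2
                 for f, s in zip(fluxes, sigmas))
    ll_noise = sum(-0.5 * (f / s) ** 2
                   for f, s in zip(fluxes, sigmas))
    return ll_obj - ll_noise
```

Three consistent 5-sigma fluxes give a strongly positive ratio, while fluxes scattered around zero give no evidence either way, matching the intuition that accumulating low-threshold detections across epochs can substitute for stacking.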

  18. Faint Object Detection in Multi-Epoch Observations via Catalog Data Fusion

    International Nuclear Information System (INIS)

    Budavári, Tamás; Szalay, Alexander S.; Loredo, Thomas J.

    2017-01-01

    Astronomy in the time-domain era faces several new challenges. One of them is the efficient use of observations obtained at multiple epochs. The work presented here addresses faint object detection and describes an incremental strategy for separating real objects from artifacts in ongoing surveys. The idea is to produce low-threshold single-epoch catalogs and to accumulate information across epochs. This is in contrast to more conventional strategies based on co-added or stacked images. We adopt a Bayesian approach, addressing object detection by calculating the marginal likelihoods for hypotheses asserting that there is no object or one object in a small image patch containing at most one cataloged source at each epoch. The object-present hypothesis interprets the sources in a patch at different epochs as arising from a genuine object; the no-object hypothesis interprets candidate sources as spurious, arising from noise peaks. We study the detection probability for constant-flux objects in a Gaussian noise setting, comparing results based on single and stacked exposures to results based on a series of single-epoch catalog summaries. Our procedure amounts to generalized cross-matching: it is the product of a factor accounting for the matching of the estimated fluxes of the candidate sources and a factor accounting for the matching of their estimated directions. We find that probabilistic fusion of multi-epoch catalogs can detect sources with similar sensitivity and selectivity compared to stacking. The probabilistic cross-matching framework underlying our approach plays an important role in maintaining detection sensitivity and points toward generalizations that could accommodate variability and complex object structure.

  19. A fuzzy automated object classification by infrared laser camera

    Science.gov (United States)

    Kanazawa, Seigo; Taniguchi, Kazuhiko; Asari, Kazunari; Kuramoto, Kei; Kobashi, Syoji; Hata, Yutaka

    2011-06-01

Home security at night is very important, and a system that watches a person's movements is useful for security. This paper describes a system for classifying adults, children, and other objects from the distance distribution measured by an infrared laser camera. This camera radiates near-infrared waves and receives the reflected ones. Then, it converts the time of flight into a distance distribution. Our method consists of 4 steps. First, we do background subtraction and noise rejection on the distance distribution. Second, we do fuzzy clustering on the distance distribution and form several clusters. Third, we extract features such as the height, thickness, aspect ratio, and area ratio of each cluster. Then, we make fuzzy if-then rules from knowledge of adults, children, and other objects so as to classify each cluster as adult, child, or other object. Here, we made a fuzzy membership function for each feature. Finally, we classify each cluster as the class with the highest fuzzy degree among adult, child, and other object. In our experiment, we set up the camera in a room and tested three cases. The method successfully classified them in real-time processing.
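A single-feature sketch of the fuzzy classification step follows, using the height feature only and made-up triangular membership breakpoints; the paper combines several features through if-then rules.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function: 0 outside [a, c],
    rising linearly to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_height(height_m):
    """Assign a cluster to adult / child / other by the highest fuzzy
    membership degree of its height feature.  The breakpoints below
    are illustrative, not the paper's tuned values."""
    degrees = {
        "child": tri(height_m, 0.5, 1.0, 1.5),
        "adult": tri(height_m, 1.3, 1.7, 2.1),
        "other": tri(height_m, 0.0, 0.3, 0.8),
    }
    return max(degrees, key=degrees.get), degrees
```

A full version would compute one degree per feature per class and combine them (e.g., by minimum) before taking the class with the highest aggregate degree.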

  20. Automated Morphological Classification in Deep Hubble Space Telescope UBVI Fields: Rapidly and Passively Evolving Faint Galaxy Populations

    Science.gov (United States)

    Odewahn, Stephen C.; Windhorst, Rogier A.; Driver, Simon P.; Keel, William C.

    1996-11-01

    We analyze deep Hubble Space Telescope Wide Field Planetary Camera 2 (WFPC2) images in U, B, V, I using artificial neural network (ANN) classifiers, which are based on galaxy surface brightness and light profile (but not on color nor on scale length, rhl). The ANN distinguishes quite well between E/S0, Sabc, and Sd/Irr+M galaxies (M for merging systems) for BJ ~ 24 mag. The faint blue galaxy counts in the B band are dominated by Sd/Irr+M galaxies and can be explained by a moderately steep local luminosity function (LF) undergoing strong luminosity evolution. We suggest that these faint late-type objects (24 mag <~ BJ <~ 28 mag) are a combination of low-luminosity lower redshift dwarf galaxies, plus compact star-forming galaxies and merging systems at z ~= 1--3, possibly the building blocks of the luminous early-type galaxies seen today.

  1. Object Detection and Tracking-Based Camera Calibration for Normalized Human Height Estimation

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-01-01

This paper presents a normalized human height estimation algorithm using an uncalibrated camera. To estimate the normalized human height, the proposed algorithm detects a moving object and performs tracking-based automatic camera calibration. The proposed method consists of three steps: (i) moving human detection and tracking, (ii) automatic camera calibration, and (iii) human height estimation and error correction. The proposed method automatically calibrates the camera by detecting moving humans and estimates the human height using error correction. The proposed method can be applied to object-based video surveillance systems and digital forensics.
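Once the camera is calibrated, height estimation reduces to similar triangles on the ground plane. The sketch below makes simplifying assumptions the paper does not necessarily make: a pinhole camera with a horizontal optical axis at known height, and a person standing upright on the ground plane.

```python
def person_height(v_foot, v_head, f_px, cy, cam_height_m):
    """Estimate a standing person's height from one calibrated view.

    Image rows grow downward; v_foot and v_head are the image rows of
    the foot and head, f_px the focal length in pixels, cy the
    principal-point row, cam_height_m the camera height above the
    ground.  The foot row fixes the ground-plane distance Z, and the
    head row then gives the height by similar triangles.
    """
    Z = f_px * cam_height_m / (v_foot - cy)   # distance to the person
    return cam_height_m - Z * (v_head - cy) / f_px
```

For example, with f = 1000 px, principal row 500, and a camera 2 m high, a 1.8 m person at 5 m projects the foot to row 900 and the head to row 540, and the formula recovers 1.8 m.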

  2. The active blind spot camera: hard real-time recognition of moving objects from a moving camera

    OpenAIRE

    Van Beeck, Kristof; Goedemé, Toon; Tuytelaars, Tinne

    2014-01-01

This PhD research focuses on visual object recognition under specific demanding conditions. The object to be recognized as well as the camera move, and the time available for the recognition task is extremely short. This generic problem is applied here to a specific problem: the active blind spot camera. Statistics show that a large number of accidents with trucks are related to the so-called blind spot, the area around the vehicle in which vulnerable road users are hard to perceive by the truck d...

  3. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.

  4. Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera

    Science.gov (United States)

    Dziri, Aziz; Duranton, Marc; Chapuis, Roland

    2016-07-01

    Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.

  5. Expanded opportunities of THz passive camera for the detection of concealed objects

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Kuchik, Igor E.

    2013-10-01

    Among security problems, the detection of objects implanted into the human or animal body is an urgent one. At present the main tool for detecting such objects is X-ray imaging. However, X-rays are ionizing radiation and therefore cannot be used frequently. An alternative approach is passive THz imaging. In our opinion, a passive THz camera can help detect objects implanted into the human body under certain conditions. The physical basis for this possibility is the temperature trace left on the human skin by the difference in temperature between the object and the surrounding body. Modern passive THz cameras do not have enough temperature resolution to see this difference; we therefore use computer processing to enhance the camera's effective resolution for this application. After computer processing of images captured by the passive THz camera TS4, developed by ThruVision Systems Ltd., we can see a pronounced temperature trace on the skin from water drunk, or food eaten, by a person. Nevertheless, many difficulties remain before this problem is fully solved. We also illustrate the improvement in quality of images captured by commercially available passive THz cameras that computer processing can achieve. In some cases one can fully suppress noise in the image without loss of quality, and computer processing of THz images of objects concealed on the human body can improve them many times over. Consequently, the instrumental resolution of such a device can be increased without additional engineering effort.
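The specific processing applied to the TS4 images is not described in this abstract; as a hedged illustration of the general idea, even a simple 3x3 median filter can lower the noise floor of a thermal image and make a sub-noise temperature trace easier to threshold. The scene temperatures and noise level below are invented.

```python
# Illustration only: a 3x3 median filter (edge-replicated, pure NumPy)
# suppresses pixel noise in a synthetic thermal image, raising the
# effective temperature resolution.
import numpy as np

def median3x3(img):
    """3x3 median filter with edge replication."""
    padded = np.pad(img, 1, mode="edge")
    stack = [padded[r:r + img.shape[0], c:c + img.shape[1]]
             for r in range(3) for c in range(3)]
    return np.median(np.stack(stack), axis=0)

rng = np.random.default_rng(0)
img = np.full((32, 32), 300.0)       # uniform 300 K scene
img[16, 16] = 300.4                  # faint 0.4 K temperature trace
noisy = img + rng.normal(0, 0.5, img.shape)
print(noisy.std() > median3x3(noisy).std())  # True: noise is reduced
```

Real pipelines would use stronger, edge-preserving denoisers, but the principle, trading spatial redundancy for temperature resolution, is the same.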

  6. Collaborative real-time scheduling of multiple PTZ cameras for multiple object tracking in video surveillance

    Science.gov (United States)

    Liu, Yu-Che; Huang, Chung-Lin

    2013-03-01

    This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human subjects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) capturing imagery of each subject's face for biometric purposes, (2) obtaining optimal video quality of the human subjects, and (3) minimizing hand-off time. We define an objective function based on expected capture conditions such as camera-subject distance, pan and tilt angles at capture, face visibility, and others. This objective function serves to balance the number of captures per subject against the quality of those captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
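An objective function of the kind described can be sketched as a weighted sum over capture-condition terms. The terms, decay constants, and weights below are illustrative assumptions, not the values used in the paper.

```python
# Sketch of a capture-quality objective: higher scores favour close,
# near-frontal, face-visible views. Weights are invented for illustration.

def capture_score(distance_m, pan_deg, tilt_deg, face_visible,
                  w_dist=1.0, w_angle=0.5, w_face=2.0):
    dist_term = 1.0 / (1.0 + distance_m / 10.0)          # decays with range
    angle_term = 1.0 - (abs(pan_deg) + abs(tilt_deg)) / 180.0
    face_term = 1.0 if face_visible else 0.0
    return w_dist * dist_term + w_angle * angle_term + w_face * face_term

frontal = capture_score(5.0, 10, 5, True)
oblique = capture_score(5.0, 80, 30, False)
print(frontal > oblique)  # True: the frontal, face-visible view wins
```

A scheduler can then assign each PTZ camera to the subject/view pair with the highest score, re-evaluating as the trackers update.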

  7. Globular Clusters for Faint Galaxies

    Science.gov (United States)

    Kohler, Susanna

    2017-07-01

    The origin of ultra-diffuse galaxies (UDGs) has posed a long-standing mystery for astronomers. New observations of several of these faint giants with the Hubble Space Telescope are now lending support to one theory. Hubble images of Dragonfly 44 (top) and DFX1 (bottom); the right panels show the data with greater contrast and extended objects masked. [van Dokkum et al. 2017] UDGs, large, extremely faint spheroidal objects, were first discovered in the Virgo galaxy cluster roughly three decades ago. Modern telescope capabilities have led to many more discoveries of similar faint galaxies in recent years, suggesting that they are a much more common phenomenon than we originally thought. Despite the many observations, UDGs still pose a number of unanswered questions. Chief among them: what are UDGs? Why are these objects the size of normal galaxies, yet so dim? Two primary models explain UDGs: (1) UDGs were originally small galaxies, hence their low luminosity, and tidal interactions then puffed them up to the large size we observe today; (2) UDGs are effectively failed galaxies, which formed the same way as normal galaxies of their large size, but something truncated their star formation early, preventing them from gaining the brightness we would expect for galaxies of their size. Now a team of scientists led by Pieter van Dokkum (Yale University) has made some intriguing observations with Hubble that lend weight to one of these models. Globulars observed in 16 Coma-cluster UDGs by Hubble: the top right panel shows the galaxy identifications, and the top left panel shows the derived number of globular clusters in each galaxy. [van Dokkum et al. 2017] Van Dokkum and collaborators imaged two UDGs with Hubble: Dragonfly 44 and DFX1, both located in the Coma galaxy cluster. These faint galaxies are both smooth and elongated, with no obvious irregular features, spiral arms, star-forming regions, or other indications of tidal interactions.

  8. Fixed-focus camera objective for small remote sensing satellites

    Science.gov (United States)

    Topaz, Jeremy M.; Braun, Ofer; Freiman, Dov

    1993-09-01

    An athermalized objective has been designed for a compact, lightweight push-broom camera under development at El-Op Ltd. for use in small remote-sensing satellites. The high-performance objective has a fixed focus setting but maintains focus passively over the full range of temperatures encountered in small satellites. The lens is an F/5.0, 320 mm focal length Tessar type operating over the range 0.5-0.9 μm. It has a 16° field of view and accommodates various state-of-the-art silicon detector arrays. The design and performance of the objective are described in this paper.
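The quoted first-order parameters are self-consistent and can be checked directly: the F-number fixes the entrance-pupil diameter, and the full field of view fixes the focal-plane width (assuming a distortion-free mapping, which is an idealisation).

```python
# Quick check of the quoted optics: F/5.0 at 320 mm focal length gives a
# 64 mm aperture; a 16-degree full field spans about 90 mm at the focal
# plane (ideal, distortion-free lens assumed).
import math

focal_mm = 320.0
f_number = 5.0
fov_deg = 16.0

aperture_mm = focal_mm / f_number
image_width_mm = 2 * focal_mm * math.tan(math.radians(fov_deg / 2))
print(round(aperture_mm), round(image_width_mm))  # 64 90
```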

  9. Determination of feature generation methods for PTZ camera object tracking

    Science.gov (United States)

    Doyle, Daniel D.; Black, Jonathan T.

    2012-06-01

    Object detection and tracking using computer vision (CV) techniques have been widely applied to sensor fusion applications. Many papers continue to be written that speed up performance and increase the learning of artificially intelligent systems through improved algorithms, workload distribution, and information fusion. Military applications of real-time tracking systems are becoming more and more complex, with an ever-increasing need for fusion and CV techniques to actively track and control dynamic systems. Examples include the use of metrology systems for tracking and measuring micro air vehicles (MAVs) and autonomous navigation systems for controlling MAVs. This paper seeks to determine which tracking algorithms best track a moving object using a pan/tilt/zoom (PTZ) camera, applicable to both of the examples presented. The feature generation algorithms compared in this paper are the trained Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the Mixture of Gaussians (MoG) background subtraction method, the Lucas-Kanade optical flow method (2000), and the Farneback optical flow method (2003). The matching algorithm used for the trained feature generation algorithms is the Fast Library for Approximate Nearest Neighbors (FLANN). The BSD-licensed OpenCV library is used extensively to demonstrate the viability of each algorithm and its performance. Initial testing is performed on a sequence of images from a stationary camera; further testing is performed on sequences in which the PTZ camera moves in order to capture the moving object. Comparisons are made based upon accuracy, speed, and memory.
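FLANN accelerates the nearest-neighbour search over SIFT/SURF descriptors, but the usual acceptance rule is Lowe's ratio test. The sketch below shows that test with brute-force distances in NumPy (not FLANN's approximate index, and not the paper's code); the descriptors are synthetic.

```python
# Ratio-test descriptor matching: accept a match only when the best
# neighbour is clearly closer than the second best.
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Return (i, j) pairs: descriptor i of A matched to j of B."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    matches = []
    for i, row in enumerate(d):
        order = np.argsort(row)
        if row[order[0]] < ratio * row[order[1]]:
            matches.append((i, int(order[0])))
    return matches

rng = np.random.default_rng(1)
desc_b = rng.normal(size=(5, 8))
desc_a = desc_b[[2, 4]] + 0.01   # two slightly perturbed copies
print(ratio_test_matches(desc_a, desc_b))  # [(0, 2), (1, 4)]
```

In OpenCV the same logic runs over `cv2.FlannBasedMatcher.knnMatch(..., k=2)` results; the brute-force version above is only for clarity.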

  10. Near-infrared imaging survey of faint companions around young dwarfs in the Pleiades cluster

    International Nuclear Information System (INIS)

    Itoh, Yoichi; Funayama, Hitoshi; Hashiguchi, Toshio; Oasa, Yumiko; Hayashi, Masahiko; Fukagawa, Misato; Currie, Thayne

    2011-01-01

    We conducted a near-infrared imaging survey of 11 young dwarfs in the Pleiades cluster using the Subaru Telescope and the near-infrared coronagraph imager. We found ten faint point sources, as faint as 20 mag in the K band, around seven of the dwarfs. Comparison with the Spitzer archive images revealed that a pair of the faint sources around V 1171 Tau is very red at infrared wavelengths, indicating very low-mass young stellar objects. However, our follow-up proper-motion measurements implied that the central star and the faint sources do not share common proper motions, suggesting that they are not physically associated.

  11. Fainting

    Science.gov (United States)

    ... go to the ER. When Desiree asked her school nurse about it the next day, she said Desiree probably fainted because she stayed in the whirlpool too long or the temperature was set too high, affecting her blood pressure. ...

  12. Infrared-faint radio sources in the SERVS deep fields. Pinpointing AGNs at high redshift

    NARCIS (Netherlands)

    Maini, A.; Prandoni, I.; Norris, R. P.; Spitler, L. R.; Mignano, A.; Lacy, M.; Morganti, R.

    2016-01-01

    Context. Infrared-faint radio sources (IFRS) represent an unexpected class of objects which are relatively bright at radio wavelengths but unusually faint at infrared (IR) and optical wavelengths. A recent and extensive campaign on the radio-brightest IFRS (S_1.4 GHz ≳ 10 mJy) has provided evidence

  13. Exploring three faint source detections methods for aperture synthesis radio images

    Science.gov (United States)

    Peracaula, M.; Torrent, A.; Masias, M.; Lladó, X.; Freixenet, J.; Martí, J.; Sánchez-Sutil, J. R.; Muñoz-Arjonilla, A. J.; Paredes, J. M.

    2015-04-01

    Wide-field radio interferometric images often contain a large population of faint compact sources. Owing to their low signal-to-noise ratio, these objects can easily be missed by automated detection methods, which have classically been based on thresholding techniques after local noise estimation. The aim of this paper is to present and analyse the performance of several alternative or complementary techniques to thresholding. We compare three different algorithms for increasing the detection rate of faint objects. The first combines wavelet decomposition with local thresholding. The second is based on the structural behaviour of the neighbourhood of each pixel. The third uses local features extracted from a bank of filters and a boosting classifier to perform the detections. The methods' performance is evaluated using simulations and radio mosaics from the Giant Metrewave Radio Telescope and the Australia Telescope Compact Array. We show that the new methods detect faint sources in radio interferometric images better than well-known state-of-the-art methods such as SExtractor, SAD and DUCHAMP.
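The classical baseline these methods are compared against, thresholding after local noise estimation, can be sketched in a few lines: estimate the rms in tiles and flag pixels above k sigma. The tile size, k, and test image are invented for the example.

```python
# Baseline detector sketch: per-tile noise estimate, then a k-sigma cut.
import numpy as np

def detect(img, tile=16, k=5.0):
    """Boolean mask of pixels brighter than k times the local rms."""
    mask = np.zeros(img.shape, dtype=bool)
    for r in range(0, img.shape[0], tile):
        for c in range(0, img.shape[1], tile):
            patch = img[r:r + tile, c:c + tile]
            sigma = patch.std()
            mask[r:r + tile, c:c + tile] = patch > k * sigma
    return mask

rng = np.random.default_rng(2)
img = rng.normal(0, 1.0, (64, 64))
img[10, 12] += 20.0                     # one bright compact source
hits = np.argwhere(detect(img))
print((10, 12) in [tuple(h) for h in hits])  # True
```

A source only a few sigma above the noise would frequently slip below this cut, which is exactly the regime where the paper's wavelet- and learning-based detectors are designed to help.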

  14. Automatic Moving Object Segmentation for Freely Moving Cameras

    Directory of Open Access Journals (Sweden)

    Yanli Wan

    2014-01-01

    This paper proposes a new moving object segmentation algorithm for freely moving cameras, which are very common in outdoor surveillance systems, car-mounted surveillance systems, and robot navigation systems. A two-layer affine transformation model optimization method is proposed for camera motion compensation: the outer-layer iteration filters out non-background feature points, while the inner-layer iteration estimates a refined affine model based on the RANSAC method. The feature points are then classified into foreground and background according to the detected motion information. A geodesic-based graph cut algorithm is then employed to extract the moving foreground based on the classified features. Unlike existing methods based on global optimization or long-term feature point tracking, our algorithm operates on only two successive frames to segment the moving foreground, which makes it suitable for online video processing applications. The experimental results demonstrate both the high accuracy and the fast speed of our algorithm.
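The inner-layer step can be illustrated with a plain RANSAC affine fit over point correspondences: inliers of the dominant model are treated as background (camera motion), outliers as moving foreground. This is a generic sketch with synthetic points, not the paper's two-layer optimizer.

```python
# RANSAC affine fit: inliers = background (camera motion), outliers =
# independently moving foreground points.
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2x3 affine mapping src -> dst (as a 3x2 matrix)."""
    a = np.hstack([src, np.ones((len(src), 1))])
    m, _, _, _ = np.linalg.lstsq(a, dst, rcond=None)
    return m

def ransac_affine(src, dst, iters=200, tol=1.0, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        m = fit_affine(src[idx], dst[idx])
        pred = np.hstack([src, np.ones((len(src), 1))]) @ m
        inliers = np.linalg.norm(pred - dst, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

rng = np.random.default_rng(3)
src = rng.uniform(0, 100, (30, 2))
dst = src + np.array([5.0, -2.0])      # pure camera translation
dst[:5] += 40.0                        # five independently moving points
inliers = ransac_affine(src, dst)
print(inliers.sum())  # 25 background points; the rest are foreground
```

The paper's outer layer would repeat this while pruning suspect features, and the resulting labels seed the geodesic graph cut.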

  15. THE EXAMPLE OF USING THE XIAOMI CAMERAS IN INVENTORY OF MONUMENTAL OBJECTS - FIRST RESULTS

    Directory of Open Access Journals (Sweden)

    J. S. Markiewicz

    2017-11-01

    At present, digital documentation recorded in the form of raster or vector files is the obligatory way of inventorying historical objects. Photogrammetry is becoming more and more popular and is becoming the standard of documentation in many projects involving the recording of all possible spatial data on landscape, architecture, or even single objects. Low-cost sensors allow for the creation of reliable and accurate three-dimensional models of investigated objects. This paper presents the results of a comparison between the outcomes obtained using three sources of images: low-cost Xiaomi cameras, a full-frame camera (Canon 5D Mark II) and a medium-format camera (Hasselblad-Hd4). In order to check how the results obtained from the sensors differ, the following parameters were analysed: the accuracy of the orientation of the ground-level photos on the control and check points, the distribution of the distortion determined in the self-calibration process, the flatness of the walls, and the discrepancies between point clouds from the low-cost cameras and reference data. The results presented below are the outcome of co-operation between researchers from three institutions: the Systems Research Institute PAS, the Department of Geodesy and Cartography at the Warsaw University of Technology, and the National Museum in Warsaw.

  16. The Example of Using the Xiaomi Cameras in Inventory of Monumental Objects - First Results

    Science.gov (United States)

    Markiewicz, J. S.; Łapiński, S.; Bienkowski, R.; Kaliszewska, A.

    2017-11-01

    At present, digital documentation recorded in the form of raster or vector files is the obligatory way of inventorying historical objects. Photogrammetry is becoming more and more popular and is becoming the standard of documentation in many projects involving the recording of all possible spatial data on landscape, architecture, or even single objects. Low-cost sensors allow for the creation of reliable and accurate three-dimensional models of investigated objects. This paper presents the results of a comparison between the outcomes obtained using three sources of images: low-cost Xiaomi cameras, a full-frame camera (Canon 5D Mark II) and a medium-format camera (Hasselblad-Hd4). In order to check how the results obtained from the sensors differ, the following parameters were analysed: the accuracy of the orientation of the ground-level photos on the control and check points, the distribution of the distortion determined in the self-calibration process, the flatness of the walls, and the discrepancies between point clouds from the low-cost cameras and reference data. The results presented below are the outcome of co-operation between researchers from three institutions: the Systems Research Institute PAS, the Department of Geodesy and Cartography at the Warsaw University of Technology, and the National Museum in Warsaw.

  17. Detecting Target Objects by Natural Language Instructions Using an RGB-D Camera

    Directory of Open Access Journals (Sweden)

    Jiatong Bao

    2016-12-01

    Controlling robots by natural language (NL) is increasingly attracting attention for its versatility and convenience, and because it requires no extensive training of users. Grounding is a crucial challenge of this problem: enabling robots to understand NL instructions from humans. This paper mainly explores the object grounding problem and concretely studies how to detect target objects specified by NL instructions using an RGB-D camera in robotic manipulation applications. In particular, a simple yet robust vision algorithm is applied to segment objects of interest. With the metric information of all segmented objects, object attributes and relations between objects are extracted. The NL instructions, which incorporate multiple cues for object specification, are parsed into domain-specific annotations. The annotations from NL and the information extracted from the RGB-D camera are matched in a computational state estimation framework that searches all possible object grounding states. The final grounding is accomplished by selecting the states with the maximum probabilities. An RGB-D scene dataset associated with different groups of NL instructions, based on different cognition levels of the robot, was collected. Quantitative evaluations on the dataset illustrate the advantages of the proposed method. Experiments with NL-controlled object manipulation and NL-based task programming using a mobile manipulator show its effectiveness and practicability in robotic applications.
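The maximum-probability selection step can be illustrated with a toy scorer: each candidate object gets a product of per-cue probabilities for how well its extracted attributes match the parsed instruction. The attribute names and the 0.9/0.1 probabilities are invented for the example, not the paper's state-estimation model.

```python
# Toy grounding: pick the object maximising the product of cue
# match probabilities against the parsed instruction.

def ground(instruction, objects, p_match=0.9, p_miss=0.1):
    def score(obj):
        p = 1.0
        for cue, want in instruction.items():
            p *= p_match if obj.get(cue) == want else p_miss
        return p
    return max(objects, key=score)

objects = [
    {"id": "obj1", "color": "red", "shape": "cup", "on": "table"},
    {"id": "obj2", "color": "blue", "shape": "cup", "on": "table"},
]
target = ground({"color": "blue", "shape": "cup"}, objects)
print(target["id"])  # obj2
```

The real framework searches joint grounding states (including inter-object relations) rather than scoring objects independently, but the argmax-over-probabilities principle is the same.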

  18. Theoretical colours and isochrones for some Hubble Space Telescope colour systems. II

    Science.gov (United States)

    Paltoglou, G.; Bell, R. A.

    1991-01-01

    A grid of synthetic surface brightness magnitudes for 14 bandpasses of the Hubble Space Telescope Faint Object Camera is presented, as well as a grid of UBV, uvby, and Faint Object Camera surface brightness magnitudes derived from the Gunn-Stryker spectrophotometric atlas. The synthetic colors are used to examine the transformations between the ground-based Johnson UBV and Stromgren uvby systems and the Faint Object Camera UBV and uvby. Two new four-color systems, similar to the Stromgren system, are proposed for the determination of abundance, temperature, and surface gravity. The synthetic colors are also used to calculate color-magnitude isochrones from the list of theoretical tracks provided by VandenBerg and Bell (1990). It is shown that by using the appropriate filters it is possible to minimize the dependence of this color difference on metallicity. The effects of interstellar reddening on various Faint Object Camera colors are analyzed as well as the observational requirements for obtaining data of a given signal-to-noise for each of the 14 bandpasses.

  19. Hydra II: A Faint and Compact Milky Way Dwarf Galaxy Found in the Survey of the Magellanic Stellar History

    NARCIS (Netherlands)

    Martin, Nicolas F.; Nidever, David L.; Besla, Gurtina; Olsen, Knut; Walker, Alistair R.; Vivas, A. Katherina; Gruendl, Robert A.; Kaleida, Catherine C.; Muñoz, Ricardo R.; Blum, Robert D.; Saha, Abhijit; Conn, Blair C.; Bell, Eric F.; Chu, You-Hua; Cioni, Maria-Rosa L.; de Boer, Thomas J. L.; Gallart, Carme; Jin, Shoko; Kunder, Andrea; Majewski, Steven R.; Martinez-Delgado, David; Monachesi, Antonela; Monelli, Matteo; Monteagudo, Lara; Noël, Noelia E. D.; Olszewski, Edward W.; Stringfellow, Guy S.; van der Marel, Roeland P.; Zaritsky, Dennis

    We present the discovery of a new dwarf galaxy, Hydra II, found serendipitously within the data from the ongoing Survey of the Magellanic Stellar History conducted with the Dark Energy Camera on the Blanco 4 m Telescope. The new satellite is compact (r_h = 68 ± 11 pc) and faint (M_V = -4.8 ± 0.3),

  20. Photometric Variability in the Faint Sky Variability Survey

    NARCIS (Netherlands)

    Morales-Rueda, L.; Groot, P.J.; Augusteijn, T.; Nelemans, G.A.; Vreeswijk, P.M.; Besselaar, E.J.M. van den

    2005-01-01

    The Faint Sky Variability Survey (FSVS) is aimed at finding photometric and/or astrometric variable objects between 16th and 24th mag on time-scales between tens of minutes and years, with photometric precisions ranging from 3 millimag to 0.2 mag. An area of ~23 deg², located at mid and

  1. Syncope (Fainting)

    Science.gov (United States)

    ... for Heart.org CPR & ECC for Heart.org Shop for Heart.org Causes for Heart.org Advocate ... loss of consciousness usually related to insufficient blood flow to the brain. It’s also called fainting or " ...

  2. SPECKLE CAMERA OBSERVATIONS FOR THE NASA KEPLER MISSION FOLLOW-UP PROGRAM

    International Nuclear Information System (INIS)

    Howell, Steve B.; Everett, Mark E.; Sherry, William; Horch, Elliott; Ciardi, David R.

    2011-01-01

    We present the first results from a speckle imaging survey of stars classified as candidate exoplanet host stars discovered by the Kepler mission. We use speckle imaging to search for faint companions or closely aligned background stars that could contribute flux to the Kepler light curves of their brighter neighbors. Background stars are expected to contribute significantly to the pool of false positive candidate transiting exoplanets discovered by the Kepler mission, especially in the case that the faint neighbors are eclipsing binary stars. Here, we describe our Kepler follow-up observing program, the speckle imaging camera used, our data reduction, and astrometric and photometric performance. Kepler stars range from R = 8 to 16 and our observations attempt to provide background non-detection limits 5-6 mag fainter and binary separations of ∼0.05-2.0 arcsec. We present data describing the relative brightness, separation, and position angles for secondary sources, as well as relative plate limits for non-detection of faint nearby stars around each of 156 target stars. Faint neighbors were found near 10 of the stars.

  3. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged, and methods also exist to evaluate camera speed. However, speed or rapidity metrics of the mobile phone's camera system have not been combined with the quality metrics, even though camera speed has become an increasingly important performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market; the measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
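One generic way to fold heterogeneous quality and speed metrics into a single score, in the spirit of what the paper proposes, is to normalise each metric so that higher means better and take a weighted average. The metric names, values, and weights below are illustrative assumptions, not the paper's actual metric set.

```python
# Weighted combination of normalised camera metrics into one score.

def combined_score(metrics, weights):
    """metrics: {name: value in [0, 1], higher = better}."""
    total = sum(weights.values())
    return sum(weights[k] * metrics[k] for k in weights) / total

phone = {"sharpness": 0.8, "visual_noise": 0.9,   # 1.0 = noise-free
         "shot_to_shot": 0.6, "autofocus_speed": 0.7}
weights = {"sharpness": 2, "visual_noise": 2,
           "shot_to_shot": 1, "autofocus_speed": 1}
print(round(combined_score(phone, weights), 3))  # 0.783
```

The hard part in practice is the normalisation itself, e.g. mapping shot-to-shot time in milliseconds onto a bounded, comparable scale, which is where standardised measurement procedures matter.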

  4. The Properties of Faint Field Galaxies

    Science.gov (United States)

    Driver, Simon. P.

    1994-12-01

    One of the current drawbacks of charge-coupled devices (CCDs) is their restrictive field of view. The Hitchhiker CCD camera overcomes this limitation by operating in parallel with existing instrumentation and is able to cover a large area as well as large volumes. Hitchhiker is mounted on the 4.2 m William Herschel Telescope and has been operating for two years. The first use of the Hitchhiker data set has been to study the general properties of faint galaxies. The observed trend of how the differential numbers of galaxies vary with magnitude agrees extremely well with those of other groups and covers, for the first time, all four major optical bandpasses. This multi-band capability has also allowed the study of how the colors of galaxies change with magnitude and how the correlation of galaxies on the sky varies between the optical bandpasses. A dwarf-dominated model has been developed to explain these observations and challenges our knowledge of the space density of dwarf galaxies. The model demonstrates that a simple upward turn in the luminosity distribution of galaxies, similar to that observed in clusters, would remain undetected by the field surveys yet can explain many of the observations without recourse to non-passive galaxy evolution. The conclusion is that the field luminosity distribution is not constrained at faint absolute magnitudes; a combination of a high density of dwarf galaxies and mild evolution could explain all the observations. Continuing work with HST and the Medium Deep Survey team now reveals the morphological mix of galaxies down to m_I ~ 24.0. The results confirm that ellipticals and early-type spirals are well fitted by standard no-evolution models, whilst the late-type spirals can only be fitted by strong evolution and/or a significant turn-up in the local field luminosity function.

  5. Star formation rate and extinction in faint z ∼ 4 Lyman break galaxies

    Energy Technology Data Exchange (ETDEWEB)

    To, Chun-Hao; Wang, Wei-Hao [Institute of Astronomy and Astrophysics, Academia Sinica, No. 1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan (China); Owen, Frazer N. [National Radio Astronomy Observatory, P.O. Box 0, Socorro, NM 87801 (United States)

    2014-09-10

    We present a statistical detection of 1.5 GHz radio continuum emission from a sample of faint z ∼ 4 Lyman break galaxies (LBGs). To constrain their extinction and intrinsic star formation rate (SFR), we combine the latest ultradeep Very Large Array 1.5 GHz radio image and the Hubble Space Telescope Advanced Camera for Surveys (ACS) optical images in the GOODS-N. We select a large sample of 1771 z ∼ 4 LBGs from the ACS catalog using B_F435W-dropout color criteria. Our LBG samples have I_F775W ∼ 25-28 (AB), ∼0-3 mag fainter than M*_UV at z ∼ 4. In our stacked radio images, we find the LBGs to be point-like under our 2'' angular resolution. We measure their mean 1.5 GHz flux by stacking the measurements on the individual objects. We achieve a statistical detection of S_1.5 GHz = 0.210 ± 0.075 μJy at ∼3σ, for the first time on such a faint LBG population at z ∼ 4. The measurement takes into account the effects of source size and blending of multiple objects. The detection is visually confirmed by stacking the radio images of the LBGs, and the uncertainty is quantified with Monte Carlo simulations on the radio image. The stacked radio flux corresponds to an obscured SFR of 16.0 ± 5.7 M_⊙ yr^-1, and implies a rest-frame UV extinction correction factor of 3.8. This extinction correction is in excellent agreement with that derived from the observed UV continuum spectral slope, using the local calibration of Meurer et al. This result supports the use of the local calibration on high-redshift LBGs to derive the extinction correction and SFR, and also disfavors a steep reddening curve such as that of the Small Magellanic Cloud.
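The stacking idea is simple to demonstrate: each source is individually far below the noise, but the mean of many measurements is detected, and resampling gives the uncertainty, analogous to the paper's Monte Carlo estimate. The fluxes and noise level below are synthetic, not the survey data.

```python
# Stacking demo: 1771 sources, each well below the per-source noise;
# the mean is detected, with a bootstrap error bar.
import numpy as np

rng = np.random.default_rng(4)
true_flux, sigma, n = 0.2, 1.0, 1771   # uJy; per-source S/N = 0.2
fluxes = rng.normal(true_flux, sigma, n)

stack = fluxes.mean()
boot = np.array([rng.choice(fluxes, n).mean() for _ in range(1000)])
snr = stack / boot.std()
print(snr > 3)  # True: the stacked signal is significant
```

The expected stacked error is sigma/sqrt(n) ≈ 0.024, so a 0.2 signal should come out at high significance; the real measurement additionally had to model source size and blending, which simple mean stacking ignores.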

  6. Hubble Space Telescope: Faint object spectrograph instrument handbook. Version 1.1

    Science.gov (United States)

    Ford, Holland C. (Editor)

    1990-01-01

    The Faint Object Spectrograph (FOS) has undergone substantial rework since the 1985 FOS Instrument Handbook was published, and we are now more knowledgeable regarding the spacecraft and instrument operations requirements and constraints. The formal system for observation specification has also evolved considerably, as the GTO programs were defined in detail. This supplement to the FOS Instrument Handbook addresses the important aspects of these changes, to facilitate proper selection and specification of FOS observing programs. Since the Handbook was published, the FOS red detector has been replaced twice, first with the best available spare in 1985 (which proved to have a poor, and steadily degrading red response), and later with a newly developed Digicon, which exhibits a high, stable efficiency and a dark-count rate less than half that of its predecessors. Also, the FOS optical train was realigned in 1987-88 to eliminate considerable beam-vignetting losses, and the collimators were both removed and recoated for greater reflectivity. Following the optics and detector rework, the FOS was carefully recalibrated (although only ambient measurements were possible, so the far-UV characteristics could not be re-evaluated directly). The resulting efficiency curves, including improved estimates of the telescope throughput, are shown. A number of changes in the observing-mode specifications and addition of several optional parameters resulted as the Proposal Instructions were honed during the last year. Target-brightness limitations, which have only recently been formulated carefully, are described. Although these restrictions are very conservative, it is imperative that the detector safety be guarded closely, especially during the initial stages of flight operations. Restrictions on the use of the internal calibration lamps and aperture-illumination sources (TA LEDs), also resulting from detector safety considerations, are outlined. 
Finally, many changes have been made to

  7. Infrared-faint radio sources are at high redshifts. Spectroscopic redshift determination of infrared-faint radio sources using the Very Large Telescope

    Science.gov (United States)

    Herzog, A.; Middelberg, E.; Norris, R. P.; Sharp, R.; Spitler, L. R.; Parker, Q. A.

    2014-07-01

    Context. Infrared-faint radio sources (IFRS) are characterised by relatively high radio flux densities and associated faint or even absent infrared and optical counterparts. The resulting extremely high radio-to-infrared flux density ratios up to several thousands were previously known only for high-redshift radio galaxies (HzRGs), suggesting a link between the two classes of object. However, the optical and infrared faintness of IFRS makes their study difficult. Prior to this work, no redshift was known for any IFRS in the Australia Telescope Large Area Survey (ATLAS) fields which would help to put IFRS in the context of other classes of object, especially of HzRGs. Aims: This work aims at measuring the first redshifts of IFRS in the ATLAS fields. Furthermore, we test the hypothesis that IFRS are similar to HzRGs, that they are higher-redshift or dust-obscured versions of these massive galaxies. Methods: A sample of IFRS was spectroscopically observed using the Focal Reducer and Low Dispersion Spectrograph 2 (FORS2) at the Very Large Telescope (VLT). The data were calibrated based on the Image Reduction and Analysis Facility (IRAF) and redshifts extracted from the final spectra, where possible. This information was then used to calculate rest-frame luminosities, and to perform the first spectral energy distribution modelling of IFRS based on redshifts. Results: We found redshifts of 1.84, 2.13, and 2.76, for three IFRS, confirming the suggested high-redshift character of this class of object. These redshifts and the resulting luminosities show IFRS to be similar to HzRGs, supporting our hypothesis. We found further evidence that fainter IFRS are at even higher redshifts. 
Conclusions: Considering the similarities between IFRS and HzRGs substantiated in this work, the detection of IFRS, which have a significantly higher sky density than HzRGs, increases the number of active galactic nuclei in the early universe and adds to the problems of explaining the formation of

  8. Are the infrared-faint radio sources pulsars?

    Science.gov (United States)

    Cameron, A. D.; Keith, M.; Hobbs, G.; Norris, R. P.; Mao, M. Y.; Middelberg, E.

    2011-07-01

    Infrared-faint radio sources (IFRS) are objects which are strong at radio wavelengths but undetected in sensitive Spitzer observations at infrared wavelengths. Their nature is uncertain and most have not yet been associated with any known astrophysical object. One possibility is that they are radio pulsars. To test this hypothesis we undertook observations of 16 of these sources with the Parkes Radio Telescope. Our results limit the radio emission to a pulsed flux density of less than 0.21 mJy (assuming a 50 per cent duty cycle). This is well below the flux density of the IFRS. We therefore conclude that these IFRS are not radio pulsars.

  9. Chemical Abundance Measurements of Ultra-Faint Dwarf Galaxies Discovered by the Dark Energy Survey

    Science.gov (United States)

    Nagasawa, Daniel; Marshall, Jennifer L.; Simon, Joshua D.; Hansen, Terese; Li, Ting; Bernstein, Rebecca; Balbinot, Eduardo; Drlica-Wagner, Alex; Pace, Andrew; Strigari, Louis; Pellegrino, Craig; DePoy, Darren L.; Suntzeff, Nicholas; Bechtol, Keith; Dark Energy Survey

    2018-01-01

    We present chemical abundance analysis results derived from high-resolution spectroscopy of ultra-faint dwarfs discovered by the Dark Energy Survey. Ultra-faint dwarf galaxies preserve a fossil record of the chemical abundance patterns imprinted by the first stars in the Universe. High-resolution spectroscopic observations of member stars in several recently discovered Milky Way satellites reveal a range of abundance patterns among ultra-faint dwarfs suggesting that star formation processes in the early Universe were quite diverse. The chemical content provides a glimpse not only of the varied nucleosynthetic processes and chemical history of the dwarfs themselves, but also the environment in which they were formed. We present the chemical abundance analysis of these objects and discuss possible explanations for the observed abundance patterns.

  10. Contribution to the reconstruction of scenes made of cylindrical and polyhedral objects from sequences of images obtained by a moving camera

    International Nuclear Information System (INIS)

    Viala, Marc

    1992-01-01

    Environment perception is an important process that enables a robot to act in an unknown scene. Although many sensors exist to 'give sight', the camera plays a leading part. This thesis deals with the reconstruction of scenes made of cylindrical and polyhedral objects from sequences of images provided by a moving camera. Two methods are presented, both based on the evolution of the apparent contours of objects through a sequence. The first approach was developed under the assumption that the camera motion is known. Despite the good results obtained with this method, the specific conditions it requires limit its use. To avoid the need for an accurate estimate of camera motion, we introduce a second method that estimates the object parameters and the camera positions simultaneously. This approach needs only a rough knowledge of the camera displacements, supplied by the control system of the robotic platform on which the camera is mounted. The optimal integration of a priori information, as well as the dynamic nature of the state model to be estimated, led us to use the Kalman filter. Experiments conducted with synthetic and real images proved the reliability of these methods. A camera calibration set-up is also suggested to obtain the most accurate scene models from the reconstruction processes. (author) [fr

  11. Traffic intensity monitoring using multiple object detection with traffic surveillance cameras

    Science.gov (United States)

    Hamdan, H. G. Muhammad; Khalifah, O. O.

    2017-11-01

    Object detection and tracking is a field of research with many applications in the current generation, given the increasing number of cameras on the streets and the lower cost of the Internet of Things (IoT). In this paper, a traffic intensity monitoring system based on the macroscopic urban traffic model is proposed, using computer vision as its input source. The input is extracted from a traffic surveillance camera by a program running a neural network classifier that can identify and differentiate vehicle types. The neural network toolbox is trained with positive and negative samples to increase accuracy. The accuracy of the program is compared with other related work, and the trend of traffic intensity on a road is also calculated. Lastly, the limitations and future work are discussed.

  12. DEEP SPITZER OBSERVATIONS OF INFRARED-FAINT RADIO SOURCES: HIGH-REDSHIFT RADIO-LOUD ACTIVE GALACTIC NUCLEI?

    International Nuclear Information System (INIS)

    Norris, Ray P.; Mao, Minnie; Afonso, Jose; Cava, Antonio; Farrah, Duncan; Oliver, Seb; Huynh, Minh T.; Mauduit, Jean-Christophe; Surace, Jason; Ivison, R. J.; Jarvis, Matt; Lacy, Mark; Maraston, Claudia; Middelberg, Enno; Seymour, Nick

    2011-01-01

    Infrared-faint radio sources (IFRSs) are a rare class of objects which are relatively bright at radio wavelengths but very faint at infrared and optical wavelengths. Here we present sensitive near-infrared observations of a sample of these sources taken as part of the Spitzer Extragalactic Representative Volume Survey. Nearly all the IFRSs are undetected at a level of ∼1 μJy in these new deep observations, and even the detections are consistent with confusion with unrelated galaxies. A stacked image implies that the median flux density is S_3.6μm ∼ 0.2 μJy or less, giving extreme values of the radio-to-infrared flux density ratio. Comparison of these objects with known classes of object suggests that the majority are probably high-redshift radio-loud galaxies, possibly suffering from significant dust extinction.

  13. Origin of faint blue stars

    International Nuclear Information System (INIS)

    Tutukov, A.; Iungelson, L.

    1987-01-01

    The origin of field faint blue stars, which lie to the left of the main sequence in the HR diagram, is discussed. These include degenerate dwarfs and O and B subdwarfs. Degenerate dwarfs belong to two main populations, with helium and carbon-oxygen cores. The majority of the hot subdwarfs are most probably nondegenerate helium stars produced by mass exchange in close binaries of moderate mass (3-15 solar masses). The theoretical estimates of the numbers of faint blue stars of different types brighter than certain stellar magnitudes agree with star counts based on the Palomar Green Survey. 28 references

  14. Morphology and astrometry of Infrared-Faint Radio Sources

    Science.gov (United States)

    Middelberg, Enno; Norris, Ray; Randall, Kate; Mao, Minnie; Hales, Christopher

    2008-10-01

    Infrared-Faint Radio Sources, or IFRS, are an unexpected class of object discovered in the Australia Telescope Large Area Survey, ATLAS. They are compact 1.4GHz radio sources with no visible counterparts in co-located (relatively shallow) Spitzer infrared and optical images. We have detected two of these objects with VLBI, indicating the presence of an AGN. These observations and our ATLAS data indicate that IFRS are extended on scales of arcseconds, and we wish to image their morphologies to obtain clues about their nature. These observations will also help us to select optical counterparts from very deep, and hence crowded, optical images which we have proposed. With these data in hand, we will be able to compare IFRS to known object types and to apply for spectroscopy to obtain their redshifts.

  15. Faint nebulosities in the vicinity of the Magellanic H I Stream

    International Nuclear Information System (INIS)

    Johnson, P.G.; Meaburn, J.; Osman, A.M.I.

    1982-01-01

    Very deep Hα image tube photographs with a wide-field filter camera have been taken of the Magellanic H I Stream. A diffuse region of emission has been detected. Furthermore a mosaic of high contrast prints of IIIaJ survey plates taken with the SRC Schmidt, has been compiled over the same area. A complex region of faint, blue, filamentary nebulosity has been revealed. This appears to be reflection nebulosity either in the galactic plane or less probably, in the vicinity of the Large Magellanic Cloud. A deep Hα 1.2-m Schmidt photograph of these blue filaments reinforces the suggestion that they are reflection nebulae. The reflection and emission nebulosities in this vicinity have been compared to each other and the Magellanic H I Stream. The diffuse region of Hα emission is particularly well correlated with the Stream. (author)

  16. On the Nature of Ultra-faint Dwarf Galaxy Candidates. II. The Case of Cetus II

    Science.gov (United States)

    Conn, Blair C.; Jerjen, Helmut; Kim, Dongwon; Schirmer, Mischa

    2018-04-01

    We obtained deep Gemini GMOS-S g, r photometry of the ultra-faint dwarf galaxy candidate Cetus II with the aim of providing stronger constraints on its size, luminosity, and stellar population. Cetus II is an important object in the size–luminosity plane, as it occupies the transition zone between dwarf galaxies and star clusters. All known objects smaller than Cetus II (r_h ∼ 20 pc) are reported to be star clusters, while most larger objects are likely dwarf galaxies. We found a prominent excess of main-sequence stars in the color–magnitude diagram of Cetus II, best described by a single stellar population with an age of 11.2 Gyr, metallicity of [Fe/H] = ‑1.28 dex, and [α/Fe] = 0.0 dex, at a heliocentric distance of 26.3 ± 1.2 kpc. As well as being spatially located within the Sagittarius dwarf tidal stream, these properties are well matched to the Sagittarius galaxy's Population B stars. Interestingly, as with our recent findings on the ultra-faint dwarf galaxy candidate Tucana V, the stellar field in the direction of Cetus II shows no evidence of a concentrated overdensity despite tracing the main sequence for over six magnitudes. These results strongly support the picture that Cetus II is not an ultra-faint stellar system in the Milky Way halo, but is made up of stars from the Sagittarius tidal stream.

  17. Calibration of EFOSC2 Broadband Linear Imaging Polarimetry

    Science.gov (United States)

    Wiersema, K.; Higgins, A. B.; Covino, S.; Starling, R. L. C.

    2018-03-01

    The European Southern Observatory Faint Object Spectrograph and Camera v2 is one of the workhorse instruments on ESO's New Technology Telescope, and is one of the most popular instruments at La Silla observatory. It is mounted at a Nasmyth focus, and therefore exhibits strong, wavelength and pointing-direction-dependent instrumental polarisation. In this document, we describe our efforts to calibrate the broadband imaging polarimetry mode, and provide a calibration for broadband B, V, and R filters to a level that satisfies most use cases (i.e. polarimetric calibration uncertainty 0.1%). We make our calibration codes public. This calibration effort can be used to enhance the yield of future polarimetric programmes with the European Southern Observatory Faint Object Spectrograph and Camera v2, by allowing good calibration with a greatly reduced number of standard star observations. Similarly, our calibration model can be combined with archival calibration observations to post-process data taken in past years, to form the European Southern Observatory Faint Object Spectrograph and Camera v2 legacy archive with substantial scientific potential.
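Dual-beam imaging polarimetry of the kind EFOSC2 performs is commonly reduced by forming normalized flux differences of the ordinary and extraordinary beams at several half-wave plate (HWP) angles. The abstract above does not give the reduction equations, so the sketch below uses the standard dual-beam formulae (HWP angles 0°, 22.5°, 45°, 67.5°); the synthetic flux values are illustrative assumptions, not EFOSC2 data, and no instrumental-polarisation correction is included.

```python
import math

def stokes_from_dual_beam(f_ord, f_ext):
    """Standard dual-beam reduction: normalized flux differences F_i at
    half-wave plate angles [0, 22.5, 45, 67.5] deg give Stokes q and u."""
    F = [(o - e) / (o + e) for o, e in zip(f_ord, f_ext)]
    q = 0.5 * (F[0] - F[2])   # F rotates as q*cos(4a) + u*sin(4a)
    u = 0.5 * (F[1] - F[3])
    p = math.hypot(q, u)                          # linear polarisation degree
    theta = 0.5 * math.degrees(math.atan2(u, q))  # polarisation angle (deg)
    return q, u, p, theta

# Synthetic example (assumed numbers): a 2% polarised source at 30 deg.
p_true, theta_true = 0.02, 30.0
q_t = p_true * math.cos(math.radians(2 * theta_true))
u_t = p_true * math.sin(math.radians(2 * theta_true))
F_true = [q_t, u_t, -q_t, -u_t]           # expected normalized differences
total = 10000.0                           # total counts per HWP angle
f_ord = [total * (1 + F) / 2 for F in F_true]
f_ext = [total * (1 - F) / 2 for F in F_true]
q, u, p, theta = stokes_from_dual_beam(f_ord, f_ext)
```

In practice each F_i would be measured from aperture photometry of the two beam images, and the wavelength- and pointing-dependent instrumental polarisation described in the abstract would be subtracted from (q, u) before computing p and theta.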

  18. Hydra II: A Faint and Compact Milky Way Dwarf Galaxy Found in the Survey of the Magellanic Stellar History

    OpenAIRE

    Martin, NF; Nidever, DL; Besla, G; Olsen, K; Walker, AR; Vivas, AK; Gruendl, RA; Kaleida, CC; Muñoz, RR; Blum, RD; Saha, A; Conn, BC; Bell, EF; Chu, YH; Cioni, MRL

    2015-01-01

    © 2015. The American Astronomical Society. All rights reserved. We present the discovery of a new dwarf galaxy, Hydra II, found serendipitously within the data from the ongoing Survey of the Magellanic Stellar History conducted with the Dark Energy Camera on the Blanco 4 m Telescope. The new satellite is compact (r_h = 68 ± 11 pc) and faint (M_V = -4.8 ± 0.3), but well within the realm of dwarf galaxies. The stellar distribution of Hydra II in the color-magnitude diagram is well-described by a m...

  19. Optimization of a miniature short-wavelength infrared objective optics of a short-wavelength infrared to visible upconversion layer attached to a mobile-devices visible camera

    Science.gov (United States)

    Kadosh, Itai; Sarusi, Gabby

    2017-10-01

    The use of dual cameras in parallax to detect and create 3-D images in mobile devices has been increasing over the last few years. We propose a concept in which the second camera operates in the short-wavelength infrared (SWIR, 1300 to 1800 nm) and thus has night vision capability, while preserving most of the other advantages of dual cameras in terms of depth and 3-D capabilities. In order to maintain commonality of the two cameras, we propose to attach to one of the cameras a SWIR-to-visible upconversion layer that converts the SWIR image into a visible image. For this purpose, the fore optics (the objective lenses) should be redesigned for the SWIR spectral range and for the additional upconversion layer attached to the mobile device's visible-range camera sensor (the CMOS sensor). This paper presents such a SWIR objective optical design and optimization that fits mechanically to the visible objective design but with different lenses, in order to maintain commonality and as a proof of concept. Such a SWIR objective design is very challenging, since it requires mimicking the original visible mobile camera lenses' sizes and mechanical housing so that we can adhere to the visible optical and mechanical design. We present an in-depth feasibility study and the overall optical system performance of such a SWIR mobile-device camera fore-optics design.

  20. Faint Traces

    OpenAIRE

    Denyer, Frank

    2005-01-01

    CD of six compositions by Denyer played by The Barton Workshop (Amsterdam): ‘Out of the Shattered Shadows 1’; ‘Out of the Shattered Shadows 2’; ‘Faint Traces’; ‘Music for Two Performers’; ‘Play’; ‘Passages’. Liner notes by Bob Gilmore. \\ud \\ud Like ‘Fired City’ (2002), this is a portrait CD and comprises première recordings of six works. The three longest – one of which is the title track (2001) – are the most recent. All six works continue Denyer’s research into new acoustic instrumental sou...

  1. Photometry of faint blue stars

    International Nuclear Information System (INIS)

    Kilkenny, D.; Hill, P.W.; Brown, A.

    1977-01-01

    Photometry on the uvby system is given for 61 faint blue stars. The stars are classified by means of the Stromgren indices, using criteria described in a previous paper (Kilkenny and Hill (1975)). (author)

  2. Infrared Faint Radio Sources in the Extended Chandra Deep Field South

    Science.gov (United States)

    Huynh, Minh T.

    2009-01-01

    Infrared-Faint Radio Sources (IFRSs) are a class of radio objects found in the Australia Telescope Large Area Survey (ATLAS) which have no observable counterpart in the Spitzer Wide-area Infrared Extragalactic Survey (SWIRE). The extended Chandra Deep Field South now has even deeper Spitzer imaging (3.6 to 70 micron) from a number of Legacy surveys. We report the detections of two IFRS sources in IRAC images. The non-detection of two other IFRSs allows us to constrain the source type. Detailed modeling of the SED of these objects shows that they are consistent with high redshift AGN (z > 2).

  3. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras, consisting of a nadir camera and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern were determined by Pix4Dmapper and independently adjusted and analyzed with the program system BLUH. Dense matching by Pix4Dmapper provided 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the object points with many images are concentrated in the block centres, while the inclined images outside the block centres are satisfactorily but not very strongly connected. This leads to very high values of the Student test (T-test) for the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration by IGI, but there are still radial symmetric distortions for the inclined cameras as well, with a size exceeding 5 μm, even though they were described as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With exception of the angular affinity the systematic image errors for corresponding
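The radial symmetric distortion discussed in the abstract above is usually parameterized as a polynomial in the squared radial distance from the principal point (a Brown-style model). The sketch below is a generic illustration of how such a correction is applied and how a seemingly small coefficient produces corner displacements of several micrometres; the coefficient value is a made-up example, not an IGI calibration result.

```python
import math

def radial_distortion(x, y, k1, k2=0.0, xp=0.0, yp=0.0):
    """Apply Brown-style radial-symmetric distortion in the image plane (mm).
    k1, k2: radial coefficients; (xp, yp): principal point."""
    dx, dy = x - xp, y - yp
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2 + k2 * r2 * r2       # radial scale factor
    return xp + dx * scale, yp + dy * scale

# Illustrative (assumed) coefficient: k1 = 1e-5 mm^-2.
# At 10 mm off-axis this already shifts an image point by 10 micrometres,
# i.e. above the ~5 micrometre level mentioned in the abstract.
x_d, y_d = radial_distortion(6.0, 8.0, k1=1e-5)
shift_um = 1000.0 * math.hypot(x_d - 6.0, y_d - 8.0)  # displacement in um
```

A self-calibrating bundle adjustment such as the one described estimates k1 and k2 (plus tangential and affine terms) per sub-camera as additional parameters, which is why their significance can be judged with a T-test.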

  4. MEASURING THE UNDETECTABLE: PROPER MOTIONS AND PARALLAXES OF VERY FAINT SOURCES

    International Nuclear Information System (INIS)

    Lang, Dustin; Hogg, David W.; Jester, Sebastian; Rix, Hans-Walter

    2009-01-01

    The near future of astrophysics involves many large solid-angle, multi-epoch, multiband imaging surveys. These surveys will, at their faint limits, have data on a large number of sources that are too faint to be detected at any individual epoch. Here, we show that it is possible to measure in multi-epoch data not only the fluxes and positions, but also the parallaxes and proper motions of sources that are too faint to be detected at any individual epoch. The method involves fitting a model of a moving point source simultaneously to all imaging, taking account of the noise and point-spread function (PSF) in each image. By this method it is possible to measure the proper motion of a point source with an uncertainty close to the minimum possible uncertainty given the information in the data, which is limited by the PSF, the distribution of observation times (epochs), and the total signal-to-noise in the combined data. We demonstrate our technique on multi-epoch Sloan Digital Sky Survey (SDSS) imaging of the SDSS Southern Stripe (SDSSSS). We show that with our new technique we can use proper motions to distinguish very red brown dwarfs from very high-redshift quasars in these SDSS data, for objects that are inaccessible to traditional techniques, and with better fidelity than by multiband imaging alone. We rediscover all 10 known brown dwarfs in our sample and present nine new candidate brown dwarfs, identified on the basis of significant proper motion.
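The method described, fitting a single moving point-source model simultaneously to all epochs, can be illustrated with a toy simulation. The sketch below assumes a known Gaussian PSF and white pixel noise, and recovers a proper motion for a source whose per-epoch peak is well below any single-epoch detection threshold; all numbers are illustrative choices, not taken from the SDSS analysis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
ny = nx = 32
yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
sigma_psf, noise = 1.5, 1.0                # PSF width (px) and pixel noise
truth = [8.0, 15.0, 16.0, 0.06, -0.04]     # flux, x0, y0, mu_x, mu_y (px/epoch)
epochs = np.arange(50.0)

def model(params, t):
    """Gaussian point source at a linearly moving position x0+mu*t."""
    f, x0, y0, mx, my = params
    g = np.exp(-((xx - (x0 + mx * t))**2 + (yy - (y0 + my * t))**2)
               / (2.0 * sigma_psf**2))
    return f * g / (2.0 * np.pi * sigma_psf**2)

# Peak amplitude per epoch is ~0.57 counts against noise 1.0:
# the source is undetectable in any individual epoch.
images = [model(truth, t) + rng.normal(0.0, noise, (ny, nx)) for t in epochs]

def chi2(params):
    return sum(np.sum((img - model(params, t))**2)
               for t, img in zip(epochs, images)) / noise**2

# Simultaneous fit to all epochs; the initial simplex sets sensible scales
# for each parameter (flux, position, proper motion).
x0 = np.array([5.0, 14.0, 15.0, 0.0, 0.0])
steps = np.array([3.0, 1.5, 1.5, 0.05, 0.05])
simplex = np.vstack([x0] + [x0 + s * e for s, e in zip(steps, np.eye(5))])
fit = minimize(chi2, x0, method='Nelder-Mead',
               options={'initial_simplex': simplex, 'maxiter': 20000,
                        'maxfev': 20000, 'xatol': 1e-4, 'fatol': 1e-4})
flux, x_fit, y_fit, mu_x, mu_y = fit.x     # recovered parameters
```

The paper's full method additionally uses the per-image PSF and noise model and fits parallax as well; the chi-square above is the simplest version of the same forward-modelling idea.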

  5. Rapid objective measurement of gamma camera resolution using statistical moments.

    Science.gov (United States)

    Hander, T A; Lancaster, J L; Kopp, D T; Lasher, J C; Blumhardt, R; Fox, P T

    1997-02-01

    An easy and rapid method for the measurement of the intrinsic spatial resolution of a gamma camera was developed. The measurement is based on the first and second statistical moments of regions of interest (ROIs) applied to bar phantom images. This leads to an estimate of the modulation transfer function (MTF) and the full-width-at-half-maximum (FWHM) of a line spread function (LSF). Bar phantom images were acquired using four large field-of-view (LFOV) gamma cameras (Scintronix, Picker, Searle, Siemens). The following factors important for routine measurements of gamma camera resolution with this method were tested: ROI placement and shape, phantom orientation, spatial sampling, and procedural consistency. A 0.2% coefficient of variation (CV) between repeat measurements of MTF was observed for a circular ROI. CVs of less than 2% were observed for measured MTF values for bar orientations ranging from -10 degrees to +10 degrees with respect to the x and y axes of the camera acquisition matrix. A 256 x 256 matrix (1.6 mm pixel spacing) was judged sufficient for routine measurements, giving an estimate of the FWHM to within 0.1 mm of manufacturer-specified values (3% difference). Under simulated clinical conditions, the variation in measurements attributable to procedural effects yielded a CV of less than 2% in newer generation cameras. The moments method for determining MTF correlated well with a peak-valley method, with an average difference of 0.03 across the range of spatial frequencies tested (0.11-0.17 line pairs/mm, corresponding to 4.5-3.0 mm bars). When compared with the NEMA method for measuring intrinsic spatial resolution, the moments method was found to be within 4% of the expected FWHM.
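The core of the moments idea, estimating resolution from the second central moment of a measured profile, can be sketched briefly. Assuming a Gaussian-shaped LSF (an assumption of this sketch, not necessarily of the published method), FWHM = 2*sqrt(2 ln 2)*sigma, where sigma squared is the second central moment, and the corresponding MTF is exp(-2 pi^2 sigma^2 f^2). The sampling interval and LSF width below are illustrative.

```python
import numpy as np

dx = 0.1                     # sampling interval (mm), assumed
x = np.arange(-15.0, 15.0, dx)
sigma_true = 1.5             # true LSF width (mm), illustrative
lsf = np.exp(-x**2 / (2.0 * sigma_true**2))   # synthetic line spread function

def fwhm_from_moments(profile, x):
    """FWHM estimate from the first and second statistical moments,
    assuming a Gaussian-shaped LSF."""
    w = profile / profile.sum()
    mean = np.sum(w * x)                        # first moment: line position
    sigma = np.sqrt(np.sum(w * (x - mean)**2))  # sqrt of 2nd central moment
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma

def gaussian_mtf(f, fwhm):
    """MTF of a Gaussian LSF at spatial frequency f (line pairs/mm)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-2.0 * np.pi**2 * sigma**2 * f**2)

fwhm = fwhm_from_moments(lsf, x)      # ~3.53 mm for sigma_true = 1.5 mm
mtf_mid = gaussian_mtf(0.14, fwhm)    # MTF near the tested 0.11-0.17 lp/mm
```

In the published method the moments are taken over ROIs on bar phantom images rather than an isolated LSF, but the moment-to-MTF step rests on the same relation.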

  6. A dual-mask coronagraph for observing faint companions to binary stars

    NARCIS (Netherlands)

    Cady, E.; McElwain, M.; Kasdin, N.J.; Thalmann, C.

    2011-01-01

    Observations of binary stars for faint companions with conventional coronagraphic methods are challenging, as both targets will be bright enough to obscure any nearby faint companions if their scattered light is not suppressed. We propose coronagraphic examination of binary stars using an

  7. EoR Foregrounds: the Faint Extragalactic Radio Sky

    Science.gov (United States)

    Prandoni, Isabella

    2018-05-01

    A wealth of new data from upgraded and new radio interferometers are rapidly improving and transforming our understanding of the faint extra-galactic radio sky. Indeed, the mounting statistics at sub-mJy and μJy flux levels are finally allowing us to place stringent observational constraints on the faint radio population and on the modeling of its various components. In this paper I provide a brief overview of the latest results in areas that are potentially important for an accurate treatment of extra-galactic foregrounds in experiments designed to probe the Epoch of Reionization.

  8. Optical and near-infrared imaging of faint Gigahertz Peaked Spectrum sources

    NARCIS (Netherlands)

    Snellen, IAG; Schilizzi, RT; de Bruyn, AG; Miley, GK; Rottgering, HJA; McMahon, RG; Fournon, IP

    1998-01-01

    A sample of 47 faint Gigahertz Peaked Spectrum (GPS) radio sources selected from the Westerbork Northern Sky Survey (WENSS) has been imaged in the optical and near-infrared, resulting in an identification fraction of 87 per cent. The R - I and R - K colours of the faint optical counterparts are as

  9. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.

  10. FPGA-Based HD Camera System for the Micropositioning of Biomedical Micro-Objects Using a Contactless Micro-Conveyor

    Directory of Open Access Journals (Sweden)

    Elmar Yusifli

    2017-03-01

    With recent advancements, micro-object contactless conveyors are becoming an essential part of the biomedical sector. They help avoid any infection and damage that can occur due to external contact. In this context, a smart micro-conveyor is devised. It is a Field Programmable Gate Array (FPGA)-based system that employs a smart surface for conveyance along with an OmniVision complementary metal-oxide-semiconductor (CMOS) HD camera for micro-object position detection and tracking. A specific FPGA-based hardware design and VHSIC Hardware Description Language (VHDL) implementation are realized, without employing any Nios processor or System on a Programmable Chip (SOPC) builder based Central Processing Unit (CPU) core. This keeps the system efficient in terms of resource utilization and power consumption. The micro-object positioning status is captured with an embedded FPGA-based camera driver and communicated to the Image Processing, Decision Making and Command (IPDC) module. The IPDC is programmed in C++ and can run on a Personal Computer (PC) or on any appropriate embedded system. The IPDC decisions are sent back to the FPGA, which pilots the smart surface accordingly. In this way, an automated closed-loop system is employed to convey the micro-object towards a desired location. The devised system architecture and implementation principle are described, and the functionality is verified. Results have confirmed the proper functionality of the developed system, along with its outperformance compared to other solutions.

  11. World's fastest and most sensitive astronomical camera

    Science.gov (United States)

    2009-06-01

    The next generation of instruments for ground-based telescopes took a leap forward with the development of a new ultra-fast camera that can take 1500 finely exposed images per second even when observing extremely faint objects. The first 240x240 pixel images with the world's fastest high-precision faint-light camera were obtained through a collaborative effort between ESO and three French laboratories of the French Centre National de la Recherche Scientifique/Institut National des Sciences de l'Univers (CNRS/INSU). Cameras such as this are key components of the next generation of adaptive optics instruments of Europe's ground-based astronomy flagship facility, the ESO Very Large Telescope (VLT). "The performance of this breakthrough camera is without an equivalent anywhere in the world. The camera will enable great leaps forward in many areas of the study of the Universe," says Norbert Hubin, head of the Adaptive Optics department at ESO. OCam will be part of the second-generation VLT instrument SPHERE. To be installed in 2011, SPHERE will take images of giant exoplanets orbiting nearby stars. A fast camera such as this is an essential component of the modern adaptive optics instruments used on the largest ground-based telescopes. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights poets but frustrates astronomers, since it blurs the finest details of the images. Adaptive optics techniques overcome this major drawback, so that ground-based telescopes can produce images that are as sharp as if taken from space. Adaptive optics is based on real-time corrections computed from images obtained by a special camera working at very high speeds. Nowadays, this means many hundreds of times each second. The new generation instruments require these

  12. Prism-based single-camera system for stereo display

    Science.gov (United States)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principle of binocular vision we deduce the relationship between binocular viewing and a dual-camera system. We can thus establish the relationship between the prism single-camera system and binocular vision, and obtain the relative positions of prism, camera, and object that give the best stereo display. Finally, using the active shutter stereo glasses of NVIDIA Company, we realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various observation manners of the eyes. A stereo imaging system designed by the proposed method can faithfully restore the 3-D shape of the photographed object.

  13. VLBI observations of Infrared-Faint Radio Sources

    Science.gov (United States)

    Middelberg, Enno; Phillips, Chris; Norris, Ray; Tingay, Steven

    2006-10-01

    We propose to observe a small sample of radio sources from the ATLAS project (ATLAS = Australia Telescope Large Area Survey) with the LBA, to determine their compactness and map their structures. The sample consists of three radio sources with no counterpart in the co-located SWIRE survey (3.6 um to 160 um), carried out with the Spitzer Space Telescope. This rare class of sources, dubbed Infrared-Faint Radio Sources, or IFRS, is inconsistent with current galaxy evolution models. VLBI observations are an essential way to obtain further clues on what these objects are and why they are hidden from infrared observations: we will map their structure to test whether they resemble core-jet or double-lobed morphologies, and we will measure the flux densities on long baselines, to determine their compactness. Previous snapshot-style LBA observations of two other IFRS yielded no detections, hence we propose to use disk-based recording with 512 Mbps where possible, for highest sensitivity. With the observations proposed here, we will increase the number of VLBI-observed IFRS from two to five, soon allowing us to draw general conclusions about this intriguing new class of objects.

  14. The GOODS UV Legacy Fields: A Full Census of Faint Star-Forming Galaxies at z~0.5-2

    Science.gov (United States)

    Oesch, Pascal

    2014-10-01

    Deep HST imaging has shown that the overall star formation density and UV light density at z>3 is dominated by faint, blue galaxies. Remarkably, very little is known about the equivalent galaxy population at lower redshifts. Understanding how these galaxies evolve across the epoch of peak cosmic star-formation is key to a complete picture of galaxy evolution. While we and others have been making every effort to use existing UV imaging data, a large fraction of the prior data were taken without post-flash and are not photometric. We now propose to obtain a robust legacy dataset for a complete census of faint star-forming galaxies at z~0.5-2, akin to what is achieved at z>3, using the unique capabilities of the WFC3/UVIS camera to obtain very deep UV imaging to 27.5-28.0 mag over the CANDELS Deep fields in GOODS North and South. We directly sample the FUV at z>~0.5 and we make these prime legacy fields for JWST with unique and essential UV/blue HST coverage. Together with the exquisite ancillary multi-wavelength data at high spatial resolution from ACS and WFC3/IR our program will result in accurate photometric redshifts for very faint sources and will enable a wealth of research by the community. This includes tracing the evolution of the FUV luminosity function over the peak of the star formation rate density from z~3 down to z~0.5, measuring the physical properties of sub-L* galaxies, and characterizing resolved stellar populations to decipher the build-up of the Hubble sequence from sub-galactic clumps. The lack of a future UV space telescope makes the acquisition of such legacy data imperative for the JWST era and beyond.

  15. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required, resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.
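The decentralized handover idea, one tracker instance per object that migrates to whichever adjacent camera currently observes the object, can be illustrated with a toy simulation. The class names and one-dimensional fields of view below are illustrative assumptions; the paper's actual implementation uses CamShift and a mobile-agent system on embedded hardware.

```python
# Toy simulation of decentralized tracker handover between adjacent cameras.
# Camera/Tracker names and 1-D fields of view are illustrative, not from the paper.

class Camera:
    def __init__(self, cam_id, fov_range, neighbors):
        self.cam_id = cam_id
        self.fov_range = fov_range      # (min_x, max_x) covered by this camera
        self.neighbors = neighbors      # ids of adjacent cameras

    def observes(self, x):
        lo, hi = self.fov_range
        return lo <= x < hi


class Tracker:
    """One tracker instance per object; migrates to whichever camera sees it."""

    def __init__(self, obj_x, cameras):
        self.cameras = {c.cam_id: c for c in cameras}
        self.host = next(c.cam_id for c in cameras if c.observes(obj_x))

    def update(self, obj_x):
        cam = self.cameras[self.host]
        if cam.observes(obj_x):
            return self.host                    # object still in current FOV
        # Handover: query only adjacent cameras -- no central coordinator.
        for n in cam.neighbors:
            if self.cameras[n].observes(obj_x):
                self.host = n                   # "migrate" the tracking agent
                break
        return self.host


cams = [Camera(0, (0, 10), [1]), Camera(1, (10, 20), [0, 2]), Camera(2, (20, 30), [1])]
t = Tracker(5, cams)
hosts = [t.update(x) for x in (8, 12, 18, 25)]
print(hosts)  # [0, 1, 1, 2]
```

Because each handover decision consults only a camera's neighbors, the scheme scales with the number of adjacent-camera pairs rather than with the whole network.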

  16. Confirmation of Faint Dwarf Galaxies in the M81 Group

    Science.gov (United States)

    Chiboucas, Kristin; Jacobs, Bradley A.; Tully, R. Brent; Karachentsev, Igor D.

    2013-11-01

We have followed up on the results of a 65 deg2 CFHT/MegaCam imaging survey of the nearby M81 Group searching for faint and ultra-faint dwarf galaxies. The original survey turned up 22 faint candidate dwarf members. Based on two-color HST ACS/WFC and WFPC2 photometry, we now confirm 14 of these as dwarf galaxy members of the group. Distances and stellar population characteristics are discussed for each. To a completeness limit of M_r′ = -10, we find a galaxy luminosity function slope of -1.27 ± 0.04 for the M81 Group. In this region, there are now 36 M81 Group members known, including 4 blue compact dwarfs; 8 other late types including the interacting giants M81, NGC 3077, and M82; 19 early type dwarfs; and at least 5 potential tidal dwarf galaxies. We find that the dSph galaxies in M81 appear to lie in a flattened distribution, similar to that found for the Milky Way and M31. One of the newly discovered dSph galaxies has properties similar to the ultra-faint dwarfs being found in the Local Group with a size Re ~ 100 pc and total magnitude estimates M_r′ = -6.8 and M_I ~ -9.1.

  17. Automated analysis of objective-prism spectra

    International Nuclear Information System (INIS)

    Hewett, P.C.; Irwin, M.J.; Bunclark, P.; Bridgeland, M.T.; Kibblewhite, E.J.; Smith, M.G.

    1985-01-01

A fully automated system for the location, measurement and analysis of large numbers of low-resolution objective-prism spectra is described. The system is based on the APM facility at the University of Cambridge, and allows processing of objective-prism, grens or grism data. Particular emphasis is placed on techniques to obtain the maximum signal-to-noise ratio from the data, both in the initial spectral estimation procedure and for subsequent feature identification. Comparison of a high-quality visual catalogue of faint quasar candidates with an equivalent automated sample demonstrates the ability of the APM system to identify all the visually selected quasar candidates. In addition, a large population of new, faint (m_J ≈ 20) candidates is identified. (author)

  18. X-ray Counterparts of Infrared Faint Radio Sources

    Science.gov (United States)

    Schartel, Norbert

    2011-10-01

Infrared Faint Radio Sources (IFRS) are radio sources with extremely faint or even absent infrared emission in deep Spitzer surveys. Models of their spectral energy distributions, the ratios of radio to infrared flux densities and their steep radio spectra strongly suggest that IFRS are AGN at high redshifts (z ≳ 2). This hypothesis remains unconfirmed for IFRS, but if confirmed, the increased AGN numbers at these redshifts will account for the unresolved part of the X-ray background. The identification of X-ray counterparts of IFRS is considered the smoking gun for this hypothesis. We propose to observe 8 IFRS using 30 ks pointed observations. X-ray detections of IFRS with different ratios of radio-to-infrared flux will constrain the class-specific SED.

  19. CONFIRMATION OF FAINT DWARF GALAXIES IN THE M81 GROUP

    Energy Technology Data Exchange (ETDEWEB)

    Chiboucas, Kristin [Gemini Observatory, 670 North A' ohoku Pl, Hilo, HI 96720 (United States); Jacobs, Bradley A.; Tully, R. Brent [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96821 (United States); Karachentsev, Igor D., E-mail: kchibouc@gemini.edu, E-mail: bjacobs@ifa.hawaii.edu, E-mail: tully@ifa.hawaii.edu, E-mail: ikar@luna.sao.ru [Special Astrophysical Observatory (SAO), Russian Academy of Sciences, Nizhnij Arkhyz, Karachai-Cherkessian Republic 369167 (Russian Federation)

    2013-11-01

We have followed up on the results of a 65 deg² CFHT/MegaCam imaging survey of the nearby M81 Group searching for faint and ultra-faint dwarf galaxies. The original survey turned up 22 faint candidate dwarf members. Based on two-color HST ACS/WFC and WFPC2 photometry, we now confirm 14 of these as dwarf galaxy members of the group. Distances and stellar population characteristics are discussed for each. To a completeness limit of M_r′ = -10, we find a galaxy luminosity function slope of -1.27 ± 0.04 for the M81 Group. In this region, there are now 36 M81 Group members known, including 4 blue compact dwarfs; 8 other late types including the interacting giants M81, NGC 3077, and M82; 19 early type dwarfs; and at least 5 potential tidal dwarf galaxies. We find that the dSph galaxies in M81 appear to lie in a flattened distribution, similar to that found for the Milky Way and M31. One of the newly discovered dSph galaxies has properties similar to the ultra-faint dwarfs being found in the Local Group with a size R_e ∼ 100 pc and total magnitude estimates M_r′ = -6.8 and M_I ∼ -9.1.

  20. On the Dearth of Ultra-faint Extremely Metal-poor Galaxies

    Energy Technology Data Exchange (ETDEWEB)

    Sánchez Almeida, J.; Filho, M. E.; Vecchia, C. Dalla [Instituto Astrofísica de Canarias, E-38200 La Laguna, Tenerife (Spain); Skillman, E. D., E-mail: jos@iac.es [Minnesota Institute for Astrophysics, School of Physics and Astronomy, University of Minnesota, Minneapolis, MN (United States)

    2017-02-01

Local extremely metal-poor galaxies (XMPs) are of particular astrophysical interest since they allow us to look into physical processes characteristic of the early universe, from the assembly of galaxy disks to the formation of stars in conditions of low metallicity. Given the luminosity–metallicity relationship, all galaxies fainter than M_r ≃ −13 are expected to be XMPs. Therefore, XMPs should be common in galaxy surveys. However, they are not common, because several observational biases hamper their detection. This work compares the number of faint XMPs in the SDSS-DR7 spectroscopic survey with the expected number, given the known biases and the observed galaxy luminosity function (LF). The faint end of the LF is poorly constrained observationally, but it determines the expected number of XMPs. Surprisingly, the number of observed faint XMPs (∼10) is overpredicted by our calculation, unless the upturn in the faint end of the LF is not present in the model. The lack of an upturn can be naturally understood if most XMPs are central galaxies in their low-mass dark matter halos, which are highly depleted in baryons due to interaction with the cosmic ultraviolet background and to other physical processes. Our result also suggests that the upturn toward low luminosity of the observed galaxy LF is due to satellite galaxies.

  1. The first VLBI image of an infrared-faint radio source

    Science.gov (United States)

    Middelberg, E.; Norris, R. P.; Tingay, S.; Mao, M. Y.; Phillips, C. J.; Hotan, A. W.

    2008-11-01

Context: We investigate the joint evolution of active galactic nuclei and star formation in the Universe. Aims: In the 1.4 GHz survey with the Australia Telescope Compact Array of the Chandra Deep Field South and the European Large Area ISO Survey - S1 we have identified a class of objects which are strong in the radio but have no detectable infrared and optical counterparts. This class has been called Infrared-Faint Radio Sources, or IFRS. Of the 2002 sources catalogued, 53 have been classified as IFRS. It is not known what these objects are. Methods: To discriminate among the many possible explanations for the nature of these objects, we have observed four sources with the Australian Long Baseline Array. Results: We have detected and imaged one of the four sources observed. Assuming that the source is at a high redshift, we find its properties in agreement with properties of Compact Steep Spectrum sources. However, due to the lack of optical and infrared data the constraints are not particularly strong.

  2. Science, conservation, and camera traps

    Science.gov (United States)

    Nichols, James D.; Karanth, K. Ullas; O'Connell, Allan F.

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  3. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in one camera has a different shape in another camera, which is a critical issue for wide-range, real-time surveillance systems. To address this problem, this paper presents an object retrieval method that extracts the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables successful retrieval and tracking of a human object in the multiple-camera video surveillance system.
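The point of normalization is that a raw per-camera measurement (e.g. height in pixels) is not comparable across heterogeneous cameras, whereas a calibrated world-unit value is. A minimal sketch of this idea follows; the field names and per-camera scale factors are illustrative assumptions, not the paper's actual metadata schema.

```python
from dataclasses import dataclass

# Toy normalized-metadata store: each camera reports an object's height in
# pixels, and per-camera scene calibration converts it to meters so that
# queries work across cameras. Names and scale factors are illustrative only.

@dataclass
class Observation:
    camera_id: int
    height_px: float          # raw, camera-dependent measurement

def normalize(obs, px_per_meter):
    """Use each camera's calibration factor to express height in world units."""
    return obs.height_px / px_per_meter[obs.camera_id]

px_per_meter = {0: 120.0, 1: 60.0}     # per-camera calibration results
observations = [Observation(0, 210.0), Observation(1, 105.0)]

heights = [round(normalize(o, px_per_meter), 2) for o in observations]
print(heights)  # [1.75, 1.75] -- the same person seen by two different cameras
```

After normalization, a retrieval query such as "person about 1.75 m tall" matches the same object in both cameras even though the raw pixel heights differ by a factor of two.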

  4. Evidence for Infrared-faint Radio Sources as z > 1 Radio-loud Active Galactic Nuclei

    Science.gov (United States)

    Huynh, Minh T.; Norris, Ray P.; Siana, Brian; Middelberg, Enno

    2010-02-01

Infrared-Faint Radio Sources (IFRSs) are a class of radio objects found in the Australia Telescope Large Area Survey which have no observable mid-infrared counterpart in the Spitzer Wide-area Infrared Extragalactic (SWIRE) survey. The extended Chandra Deep Field South now has even deeper Spitzer imaging (3.6-70 μm) from a number of Legacy surveys. We report the detections of two IFRS sources in IRAC images. The non-detection of two other IFRSs allows us to constrain the source type. Detailed modeling of the spectral energy distribution of these objects shows that they are consistent with high-redshift (z ≳ 1) active galactic nuclei.

  5. Video camera use at nuclear power plants

    International Nuclear Information System (INIS)

    Estabrook, M.L.; Langan, M.O.; Owen, D.E.

    1990-08-01

A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations and to determine the equipment used and the benefits realized. Basic closed-circuit television (CCTV) camera systems are described and video camera operating principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposable cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use, mainly reduced radiation exposure and increased productivity, are discussed and quantified. 15 refs., 6 figs

  6. Falling-incident detection and throughput enhancement in a multi-camera video-surveillance system.

    Science.gov (United States)

    Shieh, Wann-Yun; Huang, Ju-Chin

    2012-09-01

For most elderly people, unpredictable falling incidents may occur at the corner of stairs or in a long corridor due to body frailty. If the rescue of a fallen elder, who may have fainted, is delayed, more serious injury may result. Traditional security or video surveillance systems require caregivers to monitor a centralized screen continuously, or require elders to wear sensors to detect falling incidents, which wastes considerable manpower or inconveniences the elders. In this paper, we propose an automatic falling-detection algorithm and implement it in a multi-camera video surveillance system. The algorithm uses each camera to fetch images from the regions to be monitored. It then uses a falling-pattern recognition algorithm to determine whether a falling incident has occurred; if so, the system sends short messages to designated contacts. The algorithm has been implemented on a DSP-based hardware acceleration board as a proof of functionality. Simulation results show that the accuracy of falling detection reaches at least 90% and that the throughput of a four-camera surveillance system can be improved by about 2.1 times. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
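The abstract does not specify the falling-pattern recognizer, so as an illustrative stand-in only, here is one common heuristic: a fall is flagged when a person's bounding box stays wider than it is tall for several consecutive frames. The threshold and frame count are made-up parameters.

```python
# Minimal falling-pattern heuristic on bounding-box aspect ratios (w/h).
# This is an illustrative stand-in, not the algorithm from the paper; the
# threshold and frames_required values are arbitrary assumptions.

def detect_fall(boxes, ratio_thresh=1.2, frames_required=3):
    """boxes: list of (width, height) per frame. Flags a fall when the box
    stays wider than tall (w/h > ratio_thresh) for frames_required frames."""
    streak = 0
    for w, h in boxes:
        streak = streak + 1 if w / h > ratio_thresh else 0
        if streak >= frames_required:
            return True
    return False

upright = [(40, 120)] * 10                    # standing person: tall, narrow box
falling = [(40, 120)] * 4 + [(90, 60)] * 5    # box flips to wide after the fall
print(detect_fall(upright), detect_fall(falling))  # False True
```

Requiring several consecutive wide frames is what keeps momentary detector noise (a single bad bounding box) from triggering a false alarm.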

  7. Space Telescope maintenance and refurbishment

    Science.gov (United States)

    Trucks, H. F.

    1983-01-01

    The Space Telescope (ST) represents a new concept regarding spaceborne astronomical observatories. Maintenance crews will be brought to the orbital worksite to make repairs and replace scientific instruments. For major overhauls the telescope can be temporarily returned to earth with the aid of the Shuttle. It will, thus, be possible to conduct astronomical studies with the ST for two decades or more. The five first-generation scientific instruments used with the ST include a wide field/planetary camera, a faint object camera, a faint object spectrograph, a high resolution spectrograph, and a high speed photometer. Attention is given to the optical telescope assembly, the support systems module, aspects of mission and science operations, unscheduled maintenance, contingency orbital maintenance, planned on-orbit maintenance, ground maintenance, ground refurbishment, and ground logistics.

  8. The Faint End of the Quasar Luminosity Function at z ~ 4

    Science.gov (United States)

    Glikman, Eilat; Bogosavljević, Milan; Djorgovski, S. G.; Stern, Daniel; Dey, Arjun; Jannuzi, Buell T.; Mahabal, Ashish

    2010-02-01

The evolution of the quasar luminosity function (QLF) is one of the basic cosmological measures providing insight into structure formation and mass assembly in the universe. We have conducted a spectroscopic survey to find faint quasars (-26.0 < M_1450 < -22.0) at z ~ 4. Fitting a single power law (Φ ∝ L^β) to our complete sample gives a faint-end slope β = -1.6 ± 0.2. If we consider our larger, but highly incomplete, sample going 1 mag fainter, we measure a steeper faint-end slope (β < -2). Combining our sample with brighter quasar samples, we fit a double power-law LF. Our best fit finds a bright-end slope, α = -2.4 ± 0.2, and faint-end slope, β = -2.3 ± 0.2, without a well-constrained break luminosity. This is effectively a single power law, with β = -2.7 ± 0.1. We use these results to place limits on the amount of ultraviolet radiation produced by quasars and find that quasars are able to ionize the intergalactic medium at these redshifts. The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
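The double power-law form commonly fitted in such surveys can be written down directly. The sketch below uses the abstract's best-fit slopes (α = -2.4, β = -2.3); the normalization Φ* and break magnitude M* are placeholder values, since the abstract reports the break as poorly constrained.

```python
# Double power-law quasar luminosity function in per-magnitude form,
#   Phi(M) = Phi* / (10^{0.4(alpha+1)(M-M*)} + 10^{0.4(beta+1)(M-M*)}),
# with the bright-end slope alpha dominating at M << M* and the faint-end
# slope beta at M >> M*. Phi* and M* below are placeholder values.

def qlf(M, phi_star=1e-6, M_star=-25.0, alpha=-2.4, beta=-2.3):
    bright = 10 ** (0.4 * (alpha + 1) * (M - M_star))
    faint = 10 ** (0.4 * (beta + 1) * (M - M_star))
    return phi_star / (bright + faint)

# At the break magnitude the two terms are equal, so Phi(M*) = Phi*/2:
assert abs(qlf(-25.0) - 0.5e-6) < 1e-18

# Fainter magnitudes (less negative M) have higher space densities:
print(qlf(-22.0) > qlf(-25.0))  # True
```

With α and β this close to each other, the two branches have nearly the same slope, which is why the abstract notes that the fit is effectively a single power law.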

  9. A study of faint radio sources near the North Galactic Pole

    International Nuclear Information System (INIS)

    Benn, C.R.

    1981-09-01

A large amount of observational data has been obtained on faint radio sources in a small area of sky near the North Galactic Pole (the 5C 12 area). This provides a new perspective (3 decades in flux density from the 3CR catalogue) on the physical properties and cosmological evolution of extragalactic radio sources. Chapter 1 introduces the problem and concludes that faint-object cosmology is best served by intensive investigation of sources in a small area of sky. An optimum area is chosen, at right ascension 12h 58m 43s and declination 35° 14′ 00″ (1950.0). Chapter 2 describes the 5C 12 radio survey (complete to 9 mJy apparent flux density at 408 MHz) conducted with the One Mile Telescope at Cambridge. Chapter 4 describes a 4.85 GHz survey to 20 mJy of the area, conducted at Effelsberg. In chapter 5, a program of optical identification for the sources is described, using deep (m_g = 22.5, m_y = 20.7) Schmidt plates taken at Hale Observatories. A statistical algorithm is developed to cope with the problems of optical confusion due to radio positional errors. Chapter 6 draws on data from the previous 4, and presents results concerning radio source counts, spectral index distributions, optical identifications and clustering. (author)

  10. Non-contact measurement of rotation angle with solo camera

    Science.gov (United States)

    Gan, Xiaochuan; Sun, Anbin; Ye, Xin; Ma, Liqun

    2015-02-01

To measure the rotation angle of an object about its axis, a non-contact rotation angle measurement method based on a single camera is proposed. The intrinsic parameters of the camera were calibrated using a chessboard, following planar calibration theory. The translation matrix and rotation matrix between the object coordinate frame and the camera coordinate frame were calculated from the relationship between the positions of the corners on the object and their coordinates in the image. The rotation angle between the measured object and the camera could then be resolved from the rotation matrix. A precise angle dividing table (PADT) was chosen as the reference to verify the angle measurement error of this method. Test results indicated that the rotation angle measurement error of this method did not exceed ±0.01°.
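The final step, extracting an angle from a rotation recovered from matched points, can be sketched in simplified form. The paper recovers the full 3D pose from chessboard corners and intrinsic calibration; the sketch below instead solves the planar (2D) case with the Kabsch / orthogonal-Procrustes method in NumPy, which is an assumption standing in for the full pose pipeline.

```python
import numpy as np

# Recover a planar rotation angle from matched 2D point sets via the
# Kabsch / orthogonal-Procrustes solution. Simplified stand-in: the paper
# solves the full 3D camera-to-object pose from calibrated chessboard corners.

def rotation_angle_deg(src, dst):
    src = src - src.mean(axis=0)          # remove translation
    dst = dst - dst.mean(axis=0)
    H = src.T @ dst                       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T    # best-fit rotation (no reflection)
    return np.degrees(np.arctan2(R[1, 0], R[0, 0]))

theta = np.radians(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
corners = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
angle = rotation_angle_deg(corners, corners @ R_true.T)
print(round(angle, 6))  # 30.0
```

The SVD-based solution is exact for noise-free correspondences and least-squares optimal when the corner detections are noisy, which is the regime a real measurement operates in.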

  11. The Evolution of the Faint End of the UV Luminosity Function during the Peak Epoch of Star Formation (1 < z < 3)

    Science.gov (United States)

    Alavi, Anahita; Siana, Brian; Richard, Johan; Rafelski, Marc; Jauzac, Mathilde; Limousin, Marceau; Freeman, William R.; Scarlata, Claudia; Robertson, Brant; Stark, Daniel P.; Teplitz, Harry I.; Desai, Vandana

    2016-11-01

We present a robust measurement of the rest-frame UV luminosity function (LF) and its evolution during the peak epoch of cosmic star formation at 1 < z < 3. We use our deep near-ultraviolet imaging from WFC3/UVIS on the Hubble Space Telescope and existing Advanced Camera for Surveys (ACS)/WFC and WFC3/IR imaging of three lensing galaxy clusters, Abell 2744 and MACS J0717 from the Hubble Frontier Field survey and Abell 1689. Combining deep UV imaging and high magnification from strong gravitational lensing, we use photometric redshifts to identify 780 ultra-faint galaxies with M_UV < -12.5 AB mag at 1 < z < 3. From these samples, we identified five new, faint, multiply imaged systems in A1689. We run a Monte Carlo simulation to estimate the completeness correction and effective volume for each cluster using the latest published lensing models. We compute the rest-frame UV LF and find best-fit faint-end slopes of α = -1.56 ± 0.04, α = -1.72 ± 0.04, and α = -1.94 ± 0.06 at 1.0 < z < 1.6, 1.6 < z < 2.2, and 2.2 < z < 3.0, respectively. Our results demonstrate that the UV LF becomes steeper from z ~ 1.3 to z ~ 2.6 with no sign of a turnover down to M_UV = -14 AB mag. We further derive the UV LFs using the Lyman break "dropout" selection and confirm the robustness of our conclusions against different selection methodologies. Because the sample sizes are so large and extend to such faint luminosities, the statistical uncertainties are quite small, and systematic uncertainties (due to the assumed size distribution, for example) likely dominate. If we restrict our analysis to galaxies and volumes above >50% completeness in order to minimize these systematics, we still find that the faint-end slope is steep and getting steeper with redshift, though with slightly shallower (less negative) values (α = -1.55 ± 0.06, -1.69 ± 0.07, and -1.79 ± 0.08 for z ~ 1.3, 1.9, and 2.6, respectively). Finally, we conclude that the faint star
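Galaxy UV LFs of this kind are conventionally described by a Schechter function, and the physical meaning of a steepening faint-end slope α is simply that faint galaxies become relatively more numerous. The sketch below uses the abstract's α values; the normalization φ* and characteristic magnitude M* are placeholder values, and the Schechter parameterization itself is an assumption (the abstract does not quote φ* or M*).

```python
import numpy as np

# Schechter luminosity function in absolute-magnitude form,
#   phi(M) = 0.4 ln(10) phi* 10^{-0.4(alpha+1)(M - M*)} exp(-10^{-0.4(M - M*)}).
# alpha values are from the abstract; phi* and M* are placeholder values.

def schechter(M, alpha, phi_star=1e-3, M_star=-20.0):
    x = 10 ** (-0.4 * (M - M_star))       # L / L*
    return 0.4 * np.log(10) * phi_star * x ** (alpha + 1) * np.exp(-x)

# A steeper (more negative) faint-end slope predicts more ultra-faint galaxies
# at fixed normalization:
faint_M = -14.0
n_z13 = schechter(faint_M, alpha=-1.56)   # z ~ 1.3
n_z26 = schechter(faint_M, alpha=-1.94)   # z ~ 2.6
print(n_z26 > n_z13)  # True
```

At M = -14, six magnitudes fainter than the assumed M*, the change of α from -1.56 to -1.94 raises the predicted number density by nearly an order of magnitude, which is why constraining α at these depths matters.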

  12. Detailed abundances in stars belonging to ultra-faint dwarf spheroidal galaxies

    OpenAIRE

    François, P.; Monaco, L.; Villanova, S.; Catelan, M.; Bonifacio, P.; Bellazzini, M.; Bidin, C. Moni; Marconi, G.; Geisler, D.; Sbordone, L.

    2012-01-01

We report preliminary results concerning the detailed chemical composition of metal-poor stars belonging to nearby ultra-faint dwarf galaxies (hereafter UfDSphs). The abundances have been determined from spectra obtained with X-Shooter, a high-efficiency spectrograph installed on one of the ESO VLT units. The sample of ultra-faint dwarf spheroidal stars shows abundance ratios slightly lower than those measured in field halo stars of the same metallicity. We did not find extreme abundances in...

  13. Observations of the Perseids 2012 using SPOSH cameras

    Science.gov (United States)

    Margonis, A.; Flohrer, J.; Christou, A.; Elgner, S.; Oberst, J.

    2012-09-01

The Perseids are one of the most prominent annual meteor showers, occurring every summer when the stream of dust particles originating from Halley-type comet 109P/Swift-Tuttle intersects the orbital path of the Earth. The dense core of this stream passes Earth's orbit on the 12th of August, producing the maximum number of meteors. The Technical University of Berlin (TUB) and the German Aerospace Center (DLR) organize observing campaigns every summer to monitor Perseid activity. The observations are carried out using the Smart Panoramic Optical Sensor Head (SPOSH) camera system. The SPOSH camera has been developed by DLR and Jena-Optronik GmbH under an ESA/ESTEC contract, and it is designed to image faint, short-lived phenomena on dark planetary hemispheres. The camera features a highly sensitive back-illuminated 1024x1024 CCD chip and a high dynamic range of 14 bits. The custom-made fish-eye lens offers a 120°x120° field-of-view (168° over the diagonal). Figure 1: A meteor captured simultaneously by the SPOSH cameras during the 2011 observing campaign in Greece. The horizon, including surrounding mountains, can be seen in the image corners as a result of the large FOV of the camera. The observations will be made on the Greek Peloponnese peninsula, monitoring the post-peak activity of the Perseids during a one-week period around the August New Moon (14th to 21st). Two SPOSH cameras will be deployed at two remote sites at high altitude for the triangulation of meteor trajectories captured at both stations simultaneously. The observations during this time interval will give us the possibility to study the poorly observed post-maximum branch of the Perseid stream and compare the results with datasets from previous campaigns which covered different periods of this long-lived meteor shower. The acquired data will be processed using dedicated software for meteor data reduction developed at TUB and DLR. Assuming a successful campaign, statistics, trajectories
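Two-station triangulation of a meteor point amounts to intersecting two viewing rays in 3D. A minimal geometric sketch follows, using the midpoint of the closest-approach segment between the rays; the station positions and directions are made-up numbers, and the campaign's actual reduction software additionally handles astrometric calibration of the fish-eye images.

```python
import numpy as np

# Triangulate a point from two stations as the midpoint of the closest-approach
# segment between the two viewing rays (p + s*d). Illustrative geometry only;
# real meteor reduction also requires astrometric calibration of each camera.

def triangulate(p1, d1, p2, d2):
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b                 # -> 0 when the rays are parallel
    s = (b * e - c * d) / denom           # parameter along ray 1
    t = (a * e - b * d) / denom           # parameter along ray 2
    return ((p1 + s * d1) + (p2 + t * d2)) / 2

target = np.array([10.0, 10.0, 90.0])     # "meteor" point, ~90 km altitude
s1, s2 = np.array([0.0, 0.0, 0.0]), np.array([50.0, 0.0, 0.0])
point = triangulate(s1, target - s1, s2, target - s2)
print(np.allclose(point, target))  # True
```

With noisy real directions the two rays never quite intersect, which is exactly why the closest-approach midpoint (rather than an exact intersection) is the standard estimate.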

  14. Digital airborne camera introduction and technology

    CERN Document Server

    Sandau, Rainer

    2014-01-01

The last decade has seen great innovations in airborne cameras. This book is the first ever written on the topic and describes all components of a digital airborne camera, ranging from the object to be imaged to the mass memory device.

  15. Hardware accelerator design for tracking in smart camera

    Science.gov (United States)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2011-10-01

Smart cameras are important components in video analysis. For video analysis, smart cameras need to detect interesting moving objects, track such objects from frame to frame, and perform analysis of object tracks in real time. Therefore, real-time tracking is a prominent requirement in smart cameras. A software implementation of a tracking algorithm on a general-purpose processor (such as a PowerPC) achieves low frame rates, far from real-time requirements. This paper presents a SIMD-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250x200-resolution grayscale video.

  16. THE SUBARU HIGH-z QUASAR SURVEY: DISCOVERY OF FAINT z ∼ 6 QUASARS

    International Nuclear Information System (INIS)

    Kashikawa, Nobunari; Furusawa, Hisanori; Niino, Yuu; Ishizaki, Yoshifumi; Onoue, Masafusa; Toshikawa, Jun; Ishikawa, Shogo; Willott, Chris J.; Im, Myungshin; Shimasaku, Kazuhiro; Ouchi, Masami; Hibon, Pascale

    2015-01-01

We present the discovery of one or two extremely faint z ∼ 6 quasars in 6.5 deg² utilizing a unique capability of the wide-field imaging of the Subaru/Suprime-Cam. The quasar selection was made in (i′ − z_B) and (z_B − z_R) colors, where z_B and z_R are bandpasses with central wavelengths of 8842 Å and 9841 Å, respectively. The color selection can effectively isolate quasars at z ∼ 6 from M/L/T dwarfs without the J-band photometry down to z_R < 24.0, which is 3.5 mag deeper than the Sloan Digital Sky Survey (SDSS). We have selected 17 promising quasar candidates. The follow-up spectroscopy for seven targets identified one apparent quasar at z = 6.156 with M_1450 = -23.10. We also identified one possible quasar at z = 6.041 with a faint continuum of M_1450 = -22.58 and a narrow Lyα emission with HWHM = 427 km s⁻¹, which cannot be distinguished from Lyman α emitters. We derive the quasar luminosity function at z ∼ 6 by combining our faint quasar sample with the bright quasar samples from SDSS and CFHQS. Including our data points invokes a higher number density in the faintest bin of the quasar luminosity function than the previous estimate employed. This suggests a steeper faint-end slope than at lower z, though it is yet uncertain given the small number of spectroscopically identified faint quasars, and several quasar candidates still remain to be diagnosed. The steepening of the quasar luminosity function at the faint end does increase the expected emission rate of ionizing photons; however, it only changes by a factor of approximately two to six. This was found to still be insufficient for the required photon budget of reionization at z ∼ 6.

  17. a Faint and Lonely Brown Dwarf in the Solar Vicinity

    Science.gov (United States)

    1997-04-01

Discovery of KELU-1 Promises New Insights into Strange Objects Brown Dwarfs are star-like objects which are too small to become real stars, yet too large to be real planets. Their mass is too small to ignite those nuclear processes which are responsible for the large energies and high temperatures of stars, but it is much larger than that of the planets we know in our solar system. Until now, very few Brown Dwarfs have been securely identified as such. Two are members of double-star systems, and a few more are located deep within the Pleiades star cluster. Now, however, Maria Teresa Ruiz of the Astronomy Department at Universidad de Chile (Santiago de Chile), using telescopes at the ESO La Silla observatory, has just discovered one that is all alone and apparently quite near to us. Contrary to the others which are influenced by other objects in their immediate surroundings, this new Brown Dwarf is unaffected and will thus be a perfect object for further investigations that may finally allow us to better understand these very interesting celestial bodies. It has been suggested that Brown Dwarfs may constitute a substantial part of the unseen dark matter in our Galaxy. This discovery may therefore also have important implications for this highly relevant research area. Searching for nearby faint stars The story of this discovery goes back to 1987 when Maria Teresa Ruiz decided to embark upon a long-term search (known as the Calan-ESO proper-motion survey) for another type of unusual object, the so-called White Dwarfs, i.e. highly evolved, small and rather faint stars. Although they have masses similar to that of the Sun, such stars are no larger than the Earth and are therefore extremely compact. They are particularly interesting, because they most probably represent the future end point of evolution of our Sun, some billions of years from now. 
For this project, the Chilean astronomer obtained large-field photographic exposures with the 1-m ESO Schmidt telescope at

  18. Galaxy modelling. II. Multi-wavelength faint counts from a semi-analytic model of galaxy formation

    Science.gov (United States)

    Devriendt, J. E. G.; Guiderdoni, B.

    2000-11-01

This paper predicts self-consistent faint galaxy counts from the UV to the submm wavelength range. The stardust spectral energy distributions described in Devriendt et al. (1999; Paper I) are embedded within the explicit cosmological framework of a simple semi-analytic model of galaxy formation and evolution. We begin with a description of the non-dissipative and dissipative collapses of primordial perturbations, and plug in standard recipes for star formation, stellar evolution and feedback. We also model the absorption of starlight by dust and its re-processing in the IR and submm. We then build a class of models which capture the luminosity budget of the universe through faint galaxy counts and redshift distributions in the whole wavelength range spanned by our spectra. In contrast with a rather stable behaviour in the optical and even in the far-IR, the submm counts are dramatically sensitive to variations in the cosmological parameters and changes in the star formation history. Faint submm counts are more easily accommodated within an open universe with a low value of Ω_0, or a flat universe with a non-zero cosmological constant. We confirm the suggestion of Guiderdoni et al. (1998) that matching the current multi-wavelength data requires a population of heavily-extinguished, massive galaxies with large star formation rates (~500 M_sun yr⁻¹) at intermediate and high redshift (z ≥ 1.5). Such a population of objects probably is the consequence of an increase of interaction and merging activity at high redshift, but a realistic quantitative description can only be obtained through more detailed modelling of such processes. 
In spite of its simplicity, it already provides fair fits of the current data of faint counts, and a physically motivated way of interpolating and extrapolating these data to other wavelengths and fainter flux

  19. Foreground effect on the J-factor estimation of ultra-faint dwarf spheroidal galaxies

    Science.gov (United States)

    Ichikawa, Koji; Horigome, Shun-ichi; Ishigaki, Miho N.; Matsumoto, Shigeki; Ibe, Masahiro; Sugai, Hajime; Hayashi, Kohei

    2018-05-01

    Dwarf spheroidal galaxies (dSphs) are promising targets for the gamma-ray dark matter (DM) search. In particular, the DM annihilation signal is expected to be strong in some of the recently discovered nearby ultra-faint dSphs, which potentially give stringent constraints on the O(1) TeV WIMP DM. However, various non-negligible systematic uncertainties complicate the estimation of the astrophysical factors relevant for the DM search in these objects. Among them, the effects of foreground stars particularly attract attention because the contamination is unavoidable even for future kinematical surveys. In this article, we assess the effects of foreground contamination on the astrophysical J-factor estimation by generating mock samples of stars in the four ultra-faint dSphs and using a model of future spectrographs. We investigate various data cuts to optimize the quality of the data and apply a likelihood analysis which takes member and foreground stellar distributions into account. We show that the foreground star contamination in the signal region (the region of interest) and its statistical uncertainty can be estimated by interpolating the foreground star distribution from the control region, where the foreground stars dominate over the member stars. Such regions can be secured in future spectroscopic observations utilizing a multiple-object spectrograph with a large field of view, e.g. the Prime Focus Spectrograph mounted on the Subaru Telescope. This estimation has several advantages: the data-driven estimate of the contamination makes the analysis of the astrophysical factor stable against a complicated foreground distribution, and the foreground contamination effect is consistently propagated into the likelihood analysis.
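The core of the data-driven contamination estimate can be sketched as a simple area scaling of Poisson counts from a foreground-dominated control region into the signal region. This is a deliberately minimal illustration (flat foreground density, Poisson errors only); the paper's actual analysis fits full member and foreground distributions in a likelihood, and the function below is hypothetical.

```python
import math

def estimate_foreground(n_control, area_control, area_signal):
    """Scale foreground star counts from a control region (foreground-
    dominated) into the signal region, assuming a spatially flat
    foreground surface density (an illustrative simplification)."""
    density = n_control / area_control                          # stars per unit area
    expected = density * area_signal                            # expected contaminants
    sigma = math.sqrt(n_control) / area_control * area_signal   # propagated Poisson error
    return expected, sigma

# e.g. 400 foreground stars over 4 deg^2 of control area,
# predicted into a 0.25 deg^2 signal region:
exp, sig = estimate_foreground(400, 4.0, 0.25)  # exp = 25.0, sig = 1.25
```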

  20. Selecting a digital camera for telemedicine.

    Science.gov (United States)

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  1. Wide-Field Imaging of Omega Centauri with the Advanced Camera for Surveys

    Science.gov (United States)

    Haggard, D.; Dorfman, J. L.; Cool, A. M.; Anderson, J.; Bailyn, C. D.; Edmonds, P. D.; Grindlay, J. E.

    2003-12-01

    We present initial results of a wide-field imaging study of the globular cluster Omega Cen (NGC 5139) using the Advanced Camera for Surveys (ACS). We have obtained a mosaic of 3x3 pointings of the cluster using the HST/ACS Wide Field Camera covering approximately 10' x 10', roughly out to the cluster's half-mass radius. Using F435W (B435), F625W (R625) and F658N (H-alpha) filters, we are searching for optical counterparts of Chandra X-ray sources and studying the cluster's stellar populations. Here we report the discovery of an optical counterpart to the X-ray source identified by Rutledge et al. (2002) as a possible quiescent neutron star on the basis of its X-ray spectrum. The star's magnitude and color (R625 = 24.4, B435-R625 = 1.5) place it more than 1.5 magnitudes to the blue side of the main sequence. Through the H-alpha filter it is about 1.3 magnitudes brighter than cluster stars of comparable R625 magnitude. The blue color and H-alpha excess suggest the presence of an accretion disk, implying that the neutron star is a member of a quiescent low-mass X-ray binary. The object's faint absolute magnitude (M625 ~ 10.6, M435 ~ 11.8) implies that the system contains an unusually weak disk and that the companion, if it is a main-sequence star, is of very low mass. This work is part of a larger ACS study and is supported by NASA grant GO-9442 from the Space Telescope Science Institute.
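The photometric selection described here (blueward of the main sequence, plus an H-alpha excess) can be illustrated with a toy cut. The ridge-line colour and both thresholds below are hypothetical placeholders, not the authors' calibrated values:

```python
def is_accretor_candidate(b435, r625, halpha, ridge_color,
                          blue_offset=1.5, halpha_excess=1.0):
    """Flag objects lying blueward of the main-sequence ridge line and
    showing an H-alpha excess, suggestive of an accretion disk.
    `ridge_color` and both thresholds are illustrative, not the
    authors' actual cuts."""
    color = b435 - r625
    blueward = (ridge_color - color) >= blue_offset   # >= 1.5 mag blueward
    excess = (r625 - halpha) >= halpha_excess         # bright through H-alpha
    return blueward and excess

# the reported counterpart: R625 = 24.4, B435-R625 = 1.5, ~1.3 mag
# H-alpha excess; with a hypothetical ridge colour of 3.0:
is_accretor_candidate(25.9, 24.4, 23.1, 3.0)  # True
```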

  2. Visual Positioning Indoors: Human Eyes vs. Smartphone Cameras.

    Science.gov (United States)

    Wu, Dewen; Chen, Ruizhi; Chen, Liang

    2017-11-16

    Artificial Intelligence (AI) technologies and their related applications are now developing at a rapid pace. Indoor positioning will be one of the core technologies that enable AI applications because people spend 80% of their time indoors. Humans can locate themselves relative to a visually well-defined object, e.g., a door, based on their visual observations. Can a smartphone camera do a similar job when it points at an object? In this paper, a visual positioning solution was developed based on a single image captured from a smartphone camera pointing at a well-defined object. The smartphone camera simulates the process of human eyes for the purpose of relative localization against a well-defined object. Extensive experiments were conducted with five types of smartphones in three different indoor settings: a meeting room, a library, and a reading room. Experimental results show that the average positioning accuracy of the solution based on five smartphone cameras is 30.6 cm, while that of the human-observed solution, with 300 samples from 10 different people, is 73.1 cm.
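The geometric core of single-image positioning against an object of known size can be sketched with the pinhole model. This is an illustrative simplification, not the paper's actual algorithm, and the function name is invented:

```python
def distance_from_single_image(focal_px, object_height_m, object_height_px):
    """Pinhole-model range to an object of known physical size from one
    image: Z = f * H / h, with f the focal length in pixels, H the real
    height in metres, and h the imaged height in pixels."""
    return focal_px * object_height_m / object_height_px

# a 2.0 m door imaged 500 px tall by a camera with f = 1000 px:
distance_from_single_image(1000, 2.0, 500)  # 4.0 m
```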

  3. Star/galaxy separation at faint magnitudes: Application to a simulated Dark Energy Survey

    Energy Technology Data Exchange (ETDEWEB)

    Soumagnac, M.T.; et al.

    2013-06-21

    We address the problem of separating stars from galaxies in future large photometric surveys. We focus our analysis on simulations of the Dark Energy Survey (DES). In the first part of the paper, we derive the science requirements on star/galaxy separation for measurement of the cosmological parameters with the Gravitational Weak Lensing and Large Scale Structure probes. These requirements are dictated by the need to control both the statistical and systematic errors on the cosmological parameters, and by Point Spread Function calibration. We formulate the requirements in terms of the completeness and purity provided by a given star/galaxy classifier. In order to achieve these requirements at faint magnitudes, we propose a new method for star/galaxy separation in the second part of the paper. We first use Principal Component Analysis to outline the correlations between the objects' parameters and extract the most relevant information from them. We then use the reduced set of parameters as input to an Artificial Neural Network. This multi-parameter approach improves upon purely morphometric classifiers (such as the classifier implemented in SExtractor), especially at faint magnitudes: it increases the purity by up to 20% for stars and by up to 12% for galaxies, at i-magnitudes fainter than 23.
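The PCA-then-network pipeline can be sketched end to end on synthetic data. Here a single logistic unit stands in for the paper's Artificial Neural Network, and the toy "catalogue" is invented for illustration (not DES simulations):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy catalogue: two correlated object parameters whose class depends
# on a 1-D latent variable, loosely mimicking a star/galaxy split
n = 400
latent = rng.normal(size=n)
X = np.c_[latent + 0.1*rng.normal(size=n),
          2*latent + 0.1*rng.normal(size=n)]
y = (latent > 0).astype(float)

# step 1: PCA -- project onto the leading principal component
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:1].T                      # reduced input, shape (n, 1)

# step 2: a single logistic unit stands in for the ANN
w, b = np.zeros(1), 0.0
for _ in range(500):                   # plain gradient descent
    p = 1.0/(1.0 + np.exp(-(Z @ w + b)))
    g = p - y
    w -= 0.1 * (Z.T @ g) / n
    b -= 0.1 * g.mean()

pred = (1.0/(1.0 + np.exp(-(Z @ w + b)))) > 0.5
accuracy = (pred == (y > 0.5)).mean()  # high, since the data are separable
```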

  4. Star/galaxy separation at faint magnitudes: application to a simulated Dark Energy Survey

    Energy Technology Data Exchange (ETDEWEB)

    Soumagnac, M. T.; Abdalla, F. B.; Lahav, O.; Kirk, D.; Sevilla, I.; Bertin, E.; Rowe, B. T. P.; Annis, J.; Busha, M. T.; Da Costa, L. N.; Frieman, J. A.; Gaztanaga, E.; Jarvis, M.; Lin, H.; Percival, W. J.; Santiago, B. X.; Sabiu, C. G.; Wechsler, R. H.; Wolz, L.; Yanny, B.

    2015-04-14

    We address the problem of separating stars from galaxies in future large photometric surveys. We focus our analysis on simulations of the Dark Energy Survey (DES). In the first part of the paper, we derive the science requirements on star/galaxy separation for measurement of the cosmological parameters with the gravitational weak lensing and large-scale structure probes. These requirements are dictated by the need to control both the statistical and systematic errors on the cosmological parameters, and by point spread function calibration. We formulate the requirements in terms of the completeness and purity provided by a given star/galaxy classifier. In order to achieve these requirements at faint magnitudes, we propose a new method for star/galaxy separation in the second part of the paper. We first use principal component analysis to outline the correlations between the objects' parameters and extract the most relevant information from them. We then use the reduced set of parameters as input to an Artificial Neural Network. This multiparameter approach improves upon purely morphometric classifiers (such as the classifier implemented in SExtractor), especially at faint magnitudes: it increases the purity by up to 20 per cent for stars and by up to 12 per cent for galaxies, at i-magnitudes fainter than 23.

  5. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single-lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more about the nature of plenoptic camera imaging, we present a wavefront analysis of plenoptic camera imaging from the standpoint of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated by scalar diffraction theory, and the depth estimation is redescribed based on physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations; the differences between the imaging analysis methods based on geometric optics and physical optics are also shown in simulations. (paper)
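Scalar-diffraction simulations of this kind rest on angular-spectrum propagation, which can be sketched with FFTs. This is a generic free-space propagator, not the authors' full plenoptic model; grid size, pitch and wavelength below are arbitrary:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex scalar field a distance z via the angular
    spectrum method: FFT, multiply by the free-space transfer function
    exp(i*kz*z), inverse FFT. Evanescent components are masked out."""
    m = field.shape[0]
    fx = np.fft.fftfreq(m, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength*FX)**2 - (wavelength*FY)**2
    kz = 2*np.pi/wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# a square aperture propagated 1 mm; the propagator is unitary, so total
# power is conserved (no evanescent content at this sampling)
f0 = np.zeros((64, 64), dtype=complex)
f0[28:36, 28:36] = 1.0
f1 = angular_spectrum_propagate(f0, 0.5e-6, 5e-6, 1e-3)
```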

  6. Application of Terrestrial Laser Scanner with an Integrated Thermal Camera in Non-Destructive Evaluation of Concrete Surface of Hydrotechnical Objects

    Science.gov (United States)

    Kaczmarek, Łukasz Dominik; Dobak, Paweł Józef; Kiełbasiński, Kamil

    2017-12-01

    The authors present possible applications of thermal data as an additional source of information on an object's behaviour during the technical assessment of the condition of a concrete surface. For the study, one of the most recent solutions introduced by the Zoller + Fröhlich company was used: the integration of a thermal camera with a terrestrial laser scanner. This solution enables the acquisition of geometric and spectral data on the surveyed object and also provides information on the surface temperature at selected points. A section of the dam's downstream concrete wall was selected as the subject of the study, for which a number of scans were carried out and a number of thermal images were taken at different times of the day. The obtained thermal data were confronted with the acquired spectral information for the specified points. This made it possible to carry out a broader analysis of the surface and an inspection of a revealed fissure. The thermal analysis of said fissure indicated that temperature changes within it are slower, which may affect the way the concrete works and may require further elaboration by the appropriate experts. Through the integration of a thermal camera with a terrestrial laser scanner, one can analyse temperature changes not only at discretely selected points but across the whole surface as well. Moreover, it is also possible to accurately determine the range and the area of the change affecting the surface. The authors note the limitations of the presented solution, such as the resolution of the thermal camera.
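Fusing temperature readings with scan geometry amounts to associating each 3-D point with the nearest thermal sample. The sketch below is a brute-force stand-in for the co-registration the integrated instrument performs internally; the function and data layout are hypothetical:

```python
def attach_temperatures(points, thermal_samples):
    """Assign each 3-D scan point the temperature of the nearest thermal
    sample; samples are (x, y, z, temp_C) tuples. Brute-force nearest
    neighbour, for illustration only."""
    fused = []
    for px, py, pz in points:
        best = min(thermal_samples,
                   key=lambda s: (s[0]-px)**2 + (s[1]-py)**2 + (s[2]-pz)**2)
        fused.append(((px, py, pz), best[3]))
    return fused

pts = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
samples = [(0.1, 0.0, 0.0, 12.5), (9.9, 0.0, 0.0, 14.0)]
attach_temperatures(pts, samples)  # each point gets its nearest temperature
```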

  7. Coaxial fundus camera for ophthalmology

    Science.gov (United States)

    de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.

    2015-09-01

    A fundus camera for ophthalmology is a high-definition device which must achieve low-light illumination of the human retina, high resolution at the retina, and reflection-free imaging. Those constraints make its optical design very sophisticated, but the most difficult requirements to comply with are the reflection-free illumination and the final alignment, due to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and a poor alignment makes the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic fundus camera. The illumination is performed by a LED ring, coaxial with the optical system and composed of IR and visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD + capture lens module is a CCTV camera with built-in autofocus and zoom, added to a 175 mm focal length doublet corrected for infinity, making the system easy to operate and very compact.
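Where the objective relays the coaxial LED ring onto the cornea is governed by the Gaussian thin-lens relation, which is easy to sketch. The numbers in the example are illustrative, not the camera's actual prescription:

```python
def thin_lens_image(f_mm, object_dist_mm):
    """Gaussian thin-lens relation 1/f = 1/d_o + 1/d_i: returns the
    image distance and lateral magnification for an object at
    `object_dist_mm` in front of a lens of focal length `f_mm`."""
    d_i = 1.0 / (1.0/f_mm - 1.0/object_dist_mm)
    mag = -d_i / object_dist_mm
    return d_i, mag

# an object 100 mm in front of a 50 mm lens images at 100 mm, inverted 1:1
thin_lens_image(50.0, 100.0)  # approximately (100.0, -1.0)
```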

  8. Planetcam: A Visible And Near Infrared Lucky-imaging Camera To Study Planetary Atmospheres And Solar System Objects

    Science.gov (United States)

    Sanchez-Lavega, Agustin; Rojas, J.; Hueso, R.; Perez-Hoyos, S.; de Bilbao, L.; Murga, G.; Ariño, J.; Mendikoa, I.

    2012-10-01

    PlanetCam is a two-channel fast-acquisition and low-noise camera designed for multispectral study of the atmospheres of the planets (Venus, Mars, Jupiter, Saturn, Uranus and Neptune) and the satellite Titan at high temporal and spatial resolutions, simultaneously in visible (0.4-1 μm) and NIR (1-2.5 μm) channels. This is accomplished by means of a dichroic beam splitter that separates the two beams, directing them onto two different detectors. Each detector has filter wheels corresponding to the characteristic absorption bands of each planetary atmosphere. Images are acquired and processed using the “lucky imaging” technique, in which several thousand images of the same object are obtained in a short time interval, coregistered, and ordered in terms of image quality to reconstruct a high-resolution, ideally diffraction-limited image of the object. Those images will also be calibrated in terms of intensity and absolute reflectivity. The camera will be tested at the 50.2 cm telescope of the Aula EspaZio Gela (Bilbao) and then commissioned at the 1.05 m telescope at Pic du Midi Observatory (France) and at the 1.23 m telescope at Calar Alto Observatory in Spain. Among the initially planned research targets are: (1) the vertical structure of the clouds and hazes in the planets and their scales of variability; (2) the meteorology, dynamics and global winds and their scales of variability in the planets. PlanetCam is also expected to perform studies of other Solar System and astrophysical objects. Acknowledgments: This work was supported by the Spanish MICIIN project AYA2009-10701 with FEDER funds, by Grupos Gobierno Vasco IT-464-07 and by Universidad País Vasco UPV/EHU through program UFI11/55.
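The "lucky imaging" step (rank frames by quality, keep the best, co-register, stack) can be sketched in a few lines. This bare-bones version uses peak brightness as the sharpness proxy and integer-pixel registration; real pipelines use more robust metrics and sub-pixel alignment:

```python
import numpy as np

def lucky_stack(frames, keep_frac=0.1):
    """Bare-bones lucky imaging: score each short exposure by a simple
    sharpness proxy (peak brightness), keep the best fraction, register
    the kept frames on their brightest pixel, and average them."""
    frames = np.asarray(frames, dtype=float)
    scores = frames.max(axis=(1, 2))
    n_keep = max(1, int(len(frames) * keep_frac))
    best = np.argsort(scores)[::-1][:n_keep]
    h, w = frames.shape[1:]
    stack = np.zeros((h, w))
    for i in best:
        y, x = np.unravel_index(frames[i].argmax(), (h, w))
        # shift the frame so its peak lands at the image centre
        stack += np.roll(np.roll(frames[i], h//2 - y, axis=0), w//2 - x, axis=1)
    return stack / n_keep
```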

  9. ToF camera ego-motion estimation

    CSIR Research Space (South Africa)

    Ratshidaho, T

    2012-10-01

    Full Text Available random errors do not have a mean value when measurements are repeated several times. They are handled by filtering. A jump edge filter is implemented in all the experiments undertaken. Jump edges occur when the transition between foreground objects... and the background objects is sudden but the camera transition is smooth. The application of the jump edge filter is shown in Figure 1 below. Figure 1: (a) bimodal scene used to test the jump edge filter and (b) the point cloud from the SR4000 ToF camera...
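A jump edge filter can be sketched in one dimension: drop range samples whose depth differs sharply from a neighbour. Real ToF filters typically work on the angle between neighbouring rays in 3-D; this simplified depth-difference version is illustrative only:

```python
def jump_edge_filter(depths, threshold=0.3):
    """Drop range readings at jump edges: samples whose depth differs
    from either neighbour by more than `threshold` metres (1-D sketch
    of the filter)."""
    keep = []
    for i, d in enumerate(depths):
        left = abs(d - depths[i-1]) if i > 0 else 0.0
        right = abs(d - depths[i+1]) if i < len(depths) - 1 else 0.0
        if max(left, right) <= threshold:
            keep.append(d)
    return keep

# a smooth foreground object (~1 m) in front of a wall (~3 m): the two
# samples straddling the sudden transition are removed as artefacts
jump_edge_filter([1.0, 1.02, 1.01, 3.0, 3.01, 3.0])  # [1.0, 1.02, 3.01, 3.0]
```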

  10. Using DSLR cameras in digital holography

    Science.gov (United States)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the two-dimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique; the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worthwhile to explore. DSLR cameras are a widespread, commercial, and available option that, in comparison with traditional scientific cameras, offer a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the object-replication problem reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented, and a theoretical explanation of the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera show the replication problem.
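The replication effect can be reproduced in one dimension: keeping only every second sample (as a single Bayer colour channel effectively does) multiplies the signal by a comb, which creates shifted copies of each spectral peak. This is a toy model of the mechanism, not the paper's full derivation:

```python
import numpy as np

# a band-limited 1-D stand-in for a hologram, sampled at full pixel pitch
n = 256
x = np.arange(n)
signal = np.cos(2*np.pi * 20 * x / n)
full = np.abs(np.fft.fft(signal))

# one colour channel of an RGB mosaic: same grid, every second sample missing
masked = signal * (x % 2 == 0)
sparse = np.abs(np.fft.fft(masked))

# half-pitch sampling replicates the +-20-cycle peaks, shifted by n/2
peaks_full = set(np.argsort(full)[-2:])      # {20, 236}
peaks_sparse = set(np.argsort(sparse)[-4:])  # {20, 236, 108, 148}
```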

  11. Camera network video summarization

    Science.gov (United States)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative-selection approach via joint embedding and capped l21-norm minimization. The objective function is two-fold. The first part captures the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second uses a capped l21-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
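The row-sparsity idea behind l21-based representative selection can be sketched with a simplified, *uncapped* objective and no embedding term: express the data as a combination of themselves, penalize row norms of the coefficient matrix, and read representatives off the surviving rows. This is a sketch of the general technique, not the paper's formulation:

```python
import numpy as np

def select_representatives(X, lam=0.1, iters=30, k=2, eps=1e-8):
    """Row-sparse self-expressive selection: approximately solve
        min_C ||X - X C||_F^2 + lam * sum_i ||C_(i,:)||_2
    by iteratively reweighted least squares, then return the k rows of C
    with the largest norms. Columns of X are data points."""
    n = X.shape[1]
    G = X.T @ X
    C = np.linalg.solve(G + lam*np.eye(n), G)        # ridge warm start
    for _ in range(iters):
        row_norms = np.linalg.norm(C, axis=1)
        D = np.diag(1.0 / (2.0*row_norms + eps))     # IRLS reweighting
        C = np.linalg.solve(G + lam*D, G)
    return np.argsort(np.linalg.norm(C, axis=1))[::-1][:k]
```

On two orthogonal toy clusters of columns, the two selected indices come out one per cluster, which is the behaviour the embedding-plus-sparsity objective is designed to encourage.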

  12. The radio spectral energy distribution of infrared-faint radio sources

    Science.gov (United States)

    Herzog, A.; Norris, R. P.; Middelberg, E.; Seymour, N.; Spitler, L. R.; Emonts, B. H. C.; Franzen, T. M. O.; Hunstead, R.; Intema, H. T.; Marvil, J.; Parker, Q. A.; Sirothia, S. K.; Hurley-Walker, N.; Bell, M.; Bernardi, G.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Callingham, J. R.; Deshpande, A. A.; Dwarakanath, K. S.; For, B.-Q.; Greenhill, L. J.; Hancock, P.; Hazelton, B. J.; Hindson, L.; Johnston-Hollitt, M.; Kapińska, A. D.; Kaplan, D. L.; Lenc, E.; Lonsdale, C. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morales, M. F.; Morgan, E.; Morgan, J.; Oberoi, D.; Offringa, A.; Ord, S. M.; Prabu, T.; Procopio, P.; Udaya Shankar, N.; Srivani, K. S.; Staveley-Smith, L.; Subrahmanyan, R.; Tingay, S. J.; Wayth, R. B.; Webster, R. L.; Williams, A.; Williams, C. L.; Wu, C.; Zheng, Q.; Bannister, K. W.; Chippendale, A. P.; Harvey-Smith, L.; Heywood, I.; Indermuehle, B.; Popping, A.; Sault, R. J.; Whiting, M. T.

    2016-10-01

    Context. Infrared-faint radio sources (IFRS) are a class of radio-loud (RL) active galactic nuclei (AGN) at high redshifts (z ≥ 1.7) that are characterised by their relative infrared faintness, resulting in enormous radio-to-infrared flux density ratios of up to several thousand. Aims: Because of their optical and infrared faintness, it is very challenging to study IFRS at these wavelengths. However, IFRS are relatively bright in the radio regime with 1.4 GHz flux densities of a few to a few tens of mJy. Therefore, the radio regime is the most promising wavelength regime in which to constrain their nature. We aim to test the hypothesis that IFRS are young AGN, particularly GHz peaked-spectrum (GPS) and compact steep-spectrum (CSS) sources that have a low frequency turnover. Methods: We use the rich radio data set available for the Australia Telescope Large Area Survey fields, covering the frequency range between 150 MHz and 34 GHz with up to 19 wavebands from different telescopes, and build radio spectral energy distributions (SEDs) for 34 IFRS. We then study the radio properties of this class of object with respect to turnover, spectral index, and behaviour towards higher frequencies. We also present the highest-frequency radio observations of an IFRS, observed with the Plateau de Bure Interferometer at 105 GHz, and model the multi-wavelength and radio-far-infrared SED of this source. Results: We find IFRS usually follow single power laws down to observed frequencies of around 150 MHz. Mostly, the radio SEDs are steep (α < …); IFRS show statistically significantly steeper radio SEDs than the broader RL AGN population. Our analysis reveals that the fractions of GPS and CSS sources in the population of IFRS are consistent with the fractions in the broader RL AGN population. We find that at least …% of IFRS contain young AGN, although the fraction might be significantly higher as suggested by the steep SEDs and the compact morphology of IFRS. The detailed multi…
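Fitting a single power law S ∝ ν^α to flux densities, as done here for the radio SEDs, reduces to a linear regression in log-log space. A minimal sketch (the example source and its 10 mJy normalization are invented):

```python
import math

def spectral_index(freqs_hz, fluxes):
    """Least-squares spectral index alpha for a single power law
    S proportional to nu^alpha, fitted as the ordinary regression
    slope in log-log space."""
    lx = [math.log10(f) for f in freqs_hz]
    ly = [math.log10(s) for s in fluxes]
    m = len(lx)
    mx, my = sum(lx)/m, sum(ly)/m
    num = sum((a - mx)*(b - my) for a, b in zip(lx, ly))
    den = sum((a - mx)**2 for a in lx)
    return num / den

# a hypothetical steep-spectrum source, S = 10 mJy * (nu / 1.4 GHz)^-0.9,
# sampled at four frequencies within the surveyed range:
freqs = [150e6, 1.4e9, 5.5e9, 34e9]
fluxes = [10.0 * (f/1.4e9)**-0.9 for f in freqs]
alpha = spectral_index(freqs, fluxes)  # close to -0.9
```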

  13. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites, to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue “Lamassu”. Lamassu is a human-headed winged bull over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense, ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling while preserving a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for external validation and scaling purposes. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718
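Network filtering of this kind can be sketched as a greedy set-cover-style reduction: keep discarding images as long as every object point remains visible in a minimum number of views. This is an illustrative stand-in for the paper's two filtering methods, not their actual algorithms:

```python
def reduce_network(coverage, min_views=3):
    """Greedy filtering of an imaging network: repeatedly drop an image
    whose removal still leaves every object point seen in at least
    `min_views` images. `coverage` maps image id -> set of point ids."""
    active = dict(coverage)
    all_pts = set().union(*coverage.values()) if coverage else set()
    changed = True
    while changed:
        changed = False
        # try weakest images (fewest covered points) first
        for img in sorted(active, key=lambda i: len(active[i])):
            remaining = {k: v for k, v in active.items() if k != img}
            counts = {}
            for pts in remaining.values():
                for p in pts:
                    counts[p] = counts.get(p, 0) + 1
            if all(counts.get(p, 0) >= min_views for p in all_pts):
                active = remaining
                changed = True
                break
    return sorted(active)

# five redundant images of the same two points reduce to the minimum of
# three needed for triple coverage
reduce_network({i: {1, 2} for i in range(5)}, min_views=3)
```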

  14. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites, to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu". Lamassu is a human-headed winged bull over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense, ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling while preserving a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for external validation and scaling purposes. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  15. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-07-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a +-2-deg deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables
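The geometric core of the kinematic approach (solving for PAN and TILT so the camera points at the manipulator) plus the deadband control can be sketched as follows. The paper chains full 4 x 4 homogeneous transforms from position sensors; this sketch assumes, for brevity, that the camera base frame is already axis-aligned with the world frame:

```python
import math

def pan_tilt_to_target(cam_pos, target_pos):
    """PAN/TILT angles (radians) that point a camera at `cam_pos`
    toward `target_pos`, both expressed in a common world frame."""
    dx = target_pos[0] - cam_pos[0]
    dy = target_pos[1] - cam_pos[1]
    dz = target_pos[2] - cam_pos[2]
    pan = math.atan2(dy, dx)
    tilt = math.atan2(dz, math.hypot(dx, dy))
    return pan, tilt

def deadband_command(error, deadband=math.radians(2)):
    """On/off (bang-bang) drive signal with a +-2 deg deadband, so the
    camera stops hunting and continuous movement (and the operator
    seasickness it causes) is avoided."""
    if abs(error) <= deadband:
        return 0
    return 1 if error > 0 else -1
```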

  16. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-04-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables

  17. Towards next generation 3D cameras

    Science.gov (United States)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), the presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in the widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that address these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover the shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed 3D shape in settings such as robotic inspection and assembly systems.
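The basic range equation behind continuous-wave time-of-flight cameras is compact enough to sketch: depth follows from the measured phase shift of the modulated light.

```python
import math

def tof_depth(phase_rad, mod_freq_hz, c=299_792_458.0):
    """Continuous-wave time-of-flight range from the measured phase
    shift of the modulated signal: d = c * phi / (4 * pi * f).
    The measurement is unambiguous only out to c / (2 f)."""
    return c * phase_rad / (4 * math.pi * mod_freq_hz)

# a half-cycle phase shift at 20 MHz modulation corresponds to ~3.75 m
tof_depth(math.pi, 20e6)
```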

  18. Performance Evaluation of Thermographic Cameras for Photogrammetric Measurements

    Science.gov (United States)

    Yastikli, N.; Guler, E.

    2013-05-01

    The aim of this research is the performance evaluation of thermographic cameras for possible use in photogrammetric documentation and in deformation analyses caused by moisture and insulation problems of historical and cultural heritage. To perform geometric calibration of the thermographic camera, a 3D test object was designed with 77 control points distributed at different depths. For the performance evaluation, a Flir A320 thermographic camera with 320 × 240 pixels and an 18 mm focal length lens was used. A Nikon D3X SLR digital camera with 6048 × 4032 pixels and a 20 mm focal length lens was used as a reference for comparison. The pixel size was 25 μm for the Flir A320 thermographic camera and 6 μm for the Nikon D3X SLR digital camera. Digital images of the 3D test object were recorded with the Flir A320 thermographic camera and the Nikon D3X SLR digital camera, and the image coordinates of the control points were measured. The geometric calibration parameters, including the focal length, position of the principal point, and radial and tangential distortions, were determined with introduced additional parameters in bundle block adjustments. The measurement of image coordinates and the bundle block adjustments with additional parameters were performed using the PHIDIAS digital photogrammetric system. The bundle block adjustment was repeated with the determined calibration parameters for both the Flir A320 thermographic camera and the Nikon D3X SLR digital camera. The obtained standard deviations of the measured image coordinates were 9.6 μm and 10.5 μm for the Flir A320 thermographic camera and 8.3 μm and 7.7 μm for the Nikon D3X SLR digital camera. The standard deviations obtained for the Flir A320 thermographic camera images thus reach almost the same accuracy level as the digital camera, despite a pixel size roughly four times larger. Based on the results of this research, the interior geometry of the thermographic camera and its lens distortion were modelled efficiently
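The radial part of the distortion model estimated as additional parameters in the bundle block adjustment can be sketched with the standard Brown polynomial (tangential terms omitted for brevity; the coefficients in the example are illustrative):

```python
def apply_radial_distortion(x, y, k1, k2):
    """Brown-model radial distortion of normalized image coordinates:
    scale the ideal point by 1 + k1*r^2 + k2*r^4."""
    r2 = x*x + y*y
    factor = 1 + k1*r2 + k2*r2*r2
    return x*factor, y*factor

# a point at unit radius with k1 = 0.1, k2 = 0.01 is pushed outward
apply_radial_distortion(1.0, 0.0, 0.1, 0.01)  # (1.11, 0.0)
```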

  19. CONFIRMATION OF THE COMPACTNESS OF A z = 1.91 QUIESCENT GALAXY WITH HUBBLE SPACE TELESCOPE'S WIDE FIELD CAMERA 3

    International Nuclear Information System (INIS)

    Szomoru, Daniel; Franx, Marijn; Bouwens, Rychard J.; Van Dokkum, Pieter G.; Trenti, Michele; Illingworth, Garth D.; Labbe, Ivo; Oesch, Pascal A.; Carollo, C. Marcella

    2010-01-01

    We present very deep Wide Field Camera 3 (WFC3) photometry of a massive, compact galaxy located in the Hubble Ultra Deep Field. This quiescent galaxy has a spectroscopic redshift z = 1.91 and has been identified as an extremely compact galaxy by Daddi et al. We use new H_F160W imaging data obtained with Hubble Space Telescope/WFC3 to measure the deconvolved surface brightness profile to H ∼ 28 mag arcsec^-2. We find that the surface brightness profile is well approximated by an n = 3.7 Sersic profile. Our deconvolved profile is constructed by a new technique which corrects the best-fit Sersic profile with the residual of the fit to the observed image. This allows for galaxy profiles which deviate from a Sersic profile. We determine the effective radius of this galaxy: r_e = 0.42 ± 0.14 kpc in the observed H_F160W band. We show that this result is robust to deviations from the Sersic model used in the fit. We test the sensitivity of our analysis to faint 'wings' in the profile using simulated galaxy images consisting of a bright compact component and a faint extended component. We find that due to the combination of the WFC3 imaging depth and our method's sensitivity to extended faint emission we can accurately trace the intrinsic surface brightness profile, and that we can therefore confidently rule out the existence of a faint extended envelope around the observed galaxy down to our surface brightness limit. These results confirm that the galaxy lies a factor ∼10 off from the local mass-size relation.

  20. A Search for Faint, Diffuse Halo Emission in Edge-On Galaxies with Spitzer/IRAC

    Science.gov (United States)

    Ashby, Matthew; Arendt, R. G.; Pipher, J. L.; Forrest, W. J.; Marengo, M.; Barmby, P.; Willner, S. P.; Stauffer, J. R.; Fazio, G. G.

    2006-12-01

    We present deep infrared mosaics of the nearby edge-on spiral galaxies NGC 891, 4244, 4565, and 5907. These data were acquired at 3.6, 4.5, 5.8, and 8.0 microns using the Infrared Array Camera aboard Spitzer as part of GTO program number 3. This effort is designed to detect the putative faint, diffuse emission from halos and thick disks of spiral galaxies in the near- to mid-infrared under the thermally stable, low-background conditions of space. These conditions in combination with the advantageous viewing angles presented by these well-known edge-on spirals provide arguably the best opportunity to characterize the halo/thick disk components of such galaxies in the infrared. In this contribution we describe our observations, data reduction techniques, corrections for artifacts in the data, and the modeling approach we applied to analyze this unique dataset. This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. Support for this work was provided by NASA through an award issued by JPL/Caltech.

  1. Temperature measurements on fast-rotating objects using a thermographic camera with an optomechanical image derotator

    Science.gov (United States)

    Altmann, Bettina; Pape, Christian; Reithmeier, Eduard

    2017-08-01

    Increasing requirements concerning the quality and lifetime of machine components in industrial and automotive applications require comprehensive investigations of the components in conditions close to the application. Irregularities in heating of mechanical parts reveal regions with increased loading of pressure, draft or friction. In the long run this leads to damage and total failure of the machine. Thermographic measurements of rotating objects, e.g., rolling bearings, brakes, and clutches provide an approach to investigate those defects. However, it is challenging to measure fast-rotating objects accurately. Currently one contact-free approach is performing stroboscopic measurements using an infrared sensor. The data acquisition is triggered so that the image is taken once per revolution. This leads to a huge loss of information on the majority of the movement and to motion blur. The objective of this research is showing the potential of using an optomechanical image derotator together with a thermographic camera. The derotator follows the rotation of the measurement object so that quasi-stationary thermal images during motion can be acquired by the infrared sensor. Unlike conventional derotators which use a glass prism to achieve this effect, the derotator within this work is equipped with a sophisticated reflector assembly. These reflectors are made of aluminum to transfer infrared radiation emitted by the rotating object. Because of the resulting stationary thermal image, the operation can be monitored continuously even for fast-rotating objects. The field of view can also be set to a small off-axis region of interest which then can be investigated with higher resolution or frame rate. To depict the potential of this approach, thermographic measurements on rolling bearings in different operating states are presented.

  2. A search for AGN activity in Infrared-Faint Radio Sources (IFRS)

    Science.gov (United States)

    Lenc, Emil; Middelberg, Enno; Norris, Ray; Mao, Minnie

    2010-04-01

    We propose to observe a large sample of radio sources from the ATLAS (Australia Telescope Large Area Survey) source catalogue with the LBA, to determine their compactness. The sample consists of 36 sources with no counterpart in the co-located SWIRE survey (3.6 um to 160 um), carried out with the Spitzer Space Telescope. This rare class of sources, dubbed Infrared-Faint Radio Sources (IFRS), is inconsistent with current galaxy evolution models. VLBI observations are an essential way to obtain further clues on what these objects are and why they are hidden from infrared observations. We will measure the flux densities on long baselines to determine their compactness. Only five IFRS have been previously targeted with VLBI observations (resulting in two detections). We propose using single baseline (Parkes-ATCA) eVLBI observations with the LBA at 1 Gbps to maximise sensitivity. With the observations proposed here we will increase the number of VLBI-observed IFRS from 5 to 36, allowing us to draw statistical conclusions about this intriguing new class of objects.

  3. Faint galaxies - Bounds on the epoch of galaxy formation and the cosmological deceleration parameter

    International Nuclear Information System (INIS)

    Yoshii, Yuzuru; Peterson, B.A.

    1991-01-01

    Models of galaxy luminosity evolution are used to interpret the observed color distributions, redshift distributions, and number counts of faint galaxies. It is found from the color distributions that the redshift corresponding to the epoch of galaxy formation must be greater than three, and that the number counts of faint galaxies, which are sensitive to the slope of the faint end of the luminosity function, are incompatible with q0 = 1/2 and indicate a smaller value. The models assume that the sequence of galaxy types is due to different star-formation rates, that the period of galaxy formation can be characterized by a single epoch, and that after formation, galaxies change in luminosity by star formation and stellar evolution, maintaining a constant comoving space density. 40 refs

  4. Children's exposure to alcohol marketing within supermarkets: An objective analysis using GPS technology and wearable cameras.

    Science.gov (United States)

    Chambers, T; Pearson, A L; Stanley, J; Smith, M; Barr, M; Ni Mhurchu, C; Signal, L

    2017-07-01

    Exposure to alcohol marketing within alcohol retailers has been associated with higher rates of childhood drinking, brand recognition, and marketing recall. This study aimed to objectively measure children's everyday exposure to alcohol marketing within supermarkets. Children aged 11-13 (n = 167) each wore a wearable camera and a GPS device for four consecutive days. Micro-spatial analyses were used to examine exposures within supermarkets. In alcohol-retailing supermarkets (n = 30), children encountered alcohol marketing on 85% of their visits (n = 78). Alcohol marketing was frequently placed near everyday goods (bread and milk) or the entrance/exit. Alcohol sales in supermarkets should be banned in order to protect children from alcohol marketing. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. PERFORMANCE EVALUATION OF THERMOGRAPHIC CAMERAS FOR PHOTOGRAMMETRIC MEASUREMENTS

    Directory of Open Access Journals (Sweden)

    N. Yastikli

    2013-05-01

    Full Text Available The aim of this research is the performance evaluation of thermographic cameras for possible use in photogrammetric documentation and in deformation analyses caused by moisture and insulation problems of historical and cultural heritage. To perform geometric calibration of the thermographic camera, a 3D test object was designed with 77 control points distributed at different depths. For the performance evaluation, a Flir A320 thermographic camera with 320 × 240 pixels and an 18 mm lens was used. A Nikon D3X SLR digital camera with 6048 × 4032 pixels and a 20 mm lens was used as the reference for comparison. The pixel size was 25 μm for the Flir A320 thermographic camera and 6 μm for the Nikon D3X SLR digital camera. Digital images of the 3D test object were recorded with the Flir A320 thermographic camera and the Nikon D3X SLR digital camera, and the image coordinates of the control points in the images were measured. The geometric calibration parameters, including the focal length, position of the principal point, and radial and tangential distortions, were determined with additional parameters introduced in the bundle block adjustment. The measurement of image coordinates and the bundle block adjustment with additional parameters were performed using the PHIDIAS digital photogrammetric system. The bundle block adjustment was repeated with the determined calibration parameters for both the Flir A320 thermographic camera and the Nikon D3X SLR digital camera. The obtained standard deviations of the measured image coordinates were 9.6 μm and 10.5 μm for the Flir A320 thermographic camera and 8.3 μm and 7.7 μm for the Nikon D3X SLR digital camera. The standard deviation of measured image points in the Flir A320 thermographic camera images thus reached almost the same accuracy level as the digital camera, despite a pixel size about four times larger. These results show that the interior geometry of the thermographic camera and its lens distortion were modelled efficiently.
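
The "additional parameters" estimated in such a bundle block adjustment commonly follow the Brown distortion model, with radial (k1, k2) and tangential (p1, p2) terms. A minimal sketch of that model, using purely illustrative coefficients rather than the calibrated values from this study:

```python
def apply_distortion(x, y, k1, k2, p1, p2):
    """Map ideal normalized image coordinates (x, y) to distorted ones
    with the Brown model: radial terms k1, k2 and tangential p1, p2."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

# With all coefficients zero the mapping is the identity; a nonzero k1
# displaces points radially (barrel or pincushion behaviour).
assert apply_distortion(0.1, -0.2, 0, 0, 0, 0) == (0.1, -0.2)
```

In a calibration these coefficients are solved jointly with the focal length and principal point by minimizing the reprojection residuals over the control points.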

  6. Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.

    Science.gov (United States)

    Donné, Simon; Goossens, Bart; Philips, Wilfried

    2017-08-23

    Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid; we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations for each of the snapshots to be known: the disparity of an object between images is related to both the distance of the camera to the object and the distance between the camera positions for both images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually for the rectification of epipolar plane images and quantitatively with its effect on the resulting depth estimation. Our proposed approach yields a valid alternative for sparse techniques, while still being executed in a reasonable time on a graphics card due to its highly parallelizable nature.
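
The disparity relation this method relies on is the standard rectified-stereo one: a camera translated by a baseline B along the line shifts a point at depth Z by d = f·B/Z pixels. A minimal sketch with hypothetical numbers:

```python
def disparity(focal_px, baseline_m, depth_m):
    """Pixel disparity of a point at depth `depth_m` between two views
    whose camera centres are separated by `baseline_m` along the
    trajectory: d = f * B / Z. The unknown camera spacing and the
    unknown depth enter the same product, which is why the snapshot
    locations must be estimated before depths can be recovered."""
    return focal_px * baseline_m / depth_m

# Doubling the camera spacing doubles the disparity, while doubling
# the depth halves it:
assert disparity(1000.0, 0.2, 5.0) == 2 * disparity(1000.0, 0.1, 5.0)
assert disparity(1000.0, 0.1, 10.0) == disparity(1000.0, 0.1, 5.0) / 2
```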

  7. Development and application of an automatic system for measuring the laser camera

    International Nuclear Information System (INIS)

    Feng Shuli; Peng Mingchen; Li Kuncheng

    2004-01-01

    Objective: To provide an automatic system for measuring the imaging quality of laser cameras. Methods: On a special imaging workstation (SGI 540), the procedure was written in the Matlab language. An automatic measurement and analysis system for laser camera imaging quality was developed according to the imaging quality measurement standard for laser cameras of the International Electrotechnical Commission (IEC). The measurement system used digital signal processing theory, was based on the characteristics of digital images, and performed the automatic measurement and analysis of the laser camera using the camera's affiliated sample pictures. Results: All imaging quality parameters of the laser camera, including the H-D and MTF curves, optical density at low, middle, and high resolution, various geometric distortions, maximum and minimum density, and the dynamic range of the gray scale, could be measured by this system. The system was applied to measure the laser cameras of 20 hospitals in Beijing. The measuring results showed that the system could provide objective and quantitative data, accurately evaluate the imaging quality of a laser camera, and correct results obtained by manual measurement based on the affiliated sample pictures. Conclusion: The automatic measuring system is an effective and objective tool for testing laser camera quality, and lays a foundation for future research

  8. The faint-end of galaxy luminosity functions at the Epoch of Reionization

    Science.gov (United States)

    Yue, B.; Castellano, M.; Ferrara, A.; Fontana, A.; Merlin, E.; Amorín, R.; Grazian, A.; Mármol-Queralto, E.; Michałowski, M. J.; Mortlock, A.; Paris, D.; Parsa, S.; Pilo, S.; Santini, P.; Di Criscienzo, M.

    2018-05-01

    During the Epoch of Reionization (EoR), feedback effects reduce the efficiency of star formation process in small halos or even fully quench it. The galaxy luminosity function (LF) may then turn over at the faint-end. We analyze the number counts of z > 5 galaxies observed in the fields of four Frontier Fields (FFs) clusters and obtain constraints on the LF faint-end: for the turn-over magnitude at z ∼ 6, M_UV^T ≳ -13.3; for the circular velocity threshold of the quenching of star formation, v_c^* ≲ 47 km s^-1. We have not yet found significant evidence of the presence of feedback effects suppressing the star formation in small galaxies.
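
One simple way to express such a turn-over is to damp a Schechter luminosity function faintward of a turn-over magnitude; the functional form and parameter values below are purely illustrative, not the constraint machinery used in the paper:

```python
import math

def schechter(M, phi_star=1e-3, M_star=-20.9, alpha=-1.9):
    """Schechter luminosity function per magnitude, phi(M)."""
    x = 10.0 ** (-0.4 * (M - M_star))
    return 0.4 * math.log(10.0) * phi_star * x ** (alpha + 1) * math.exp(-x)

def lf_with_turnover(M, M_turn=-13.3, beta=1.0, **kw):
    """Schechter LF suppressed faintward of a turn-over magnitude
    M_turn (set here to the z ~ 6 limit quoted above); the
    exponential-in-magnitude damping with slope beta is an
    illustrative choice of suppression term."""
    phi = schechter(M, **kw)
    if M > M_turn:  # fainter (numerically larger M) than the turn-over
        phi *= 10.0 ** (-beta * (M - M_turn))
    return phi
```

Faintward of M_turn the damped LF falls below the pure Schechter form; brighter than M_turn the two coincide.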

  9. X-ray-bright optically faint active galactic nuclei in the Subaru Hyper Suprime-Cam wide survey

    Science.gov (United States)

    Terashima, Yuichi; Suganuma, Makoto; Akiyama, Masayuki; Greene, Jenny E.; Kawaguchi, Toshihiro; Iwasawa, Kazushi; Nagao, Tohru; Noda, Hirofumi; Toba, Yoshiki; Ueda, Yoshihiro; Yamashita, Takuji

    2018-01-01

    We construct a sample of X-ray-bright optically faint active galactic nuclei by combining Subaru Hyper Suprime-Cam, XMM-Newton, and infrared source catalogs. Fifty-three X-ray sources satisfying an i-band magnitude fainter than 23.5 mag and X-ray counts with the EPIC-PN detector larger than 70 are selected from 9.1 deg^2, and their spectral energy distributions (SEDs) and X-ray spectra are analyzed. Forty-four objects with an X-ray to i-band flux ratio F_X/F_i > 10 are classified as extreme X-ray-to-optical flux sources. Spectral energy distributions of 48 among 53 are represented by templates of type 2 AGNs or star-forming galaxies and show the optical signature of stellar emission from host galaxies in the source rest frame. Infrared/optical SEDs indicate a significant contribution of emission from dust to the infrared fluxes, and that the central AGN is dust obscured. The photometric redshifts determined from the SEDs are in the range of 0.6-2.5. The X-ray spectra are fitted by an absorbed power-law model, and the intrinsic absorption column densities are modest (best-fit log N_H = 20.5-23.5, with N_H in cm^-2, in most cases). The absorption-corrected X-ray luminosities are in the range of 6 × 10^42 to 2 × 10^45 erg s^-1. Twenty objects are classified as type 2 quasars based on X-ray luminosity and N_H. The optical faintness is explained by a combination of redshifts (mostly z > 1.0), strong dust extinction, and in part a large ratio of dust/gas.
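
The F_X/F_i > 10 cut can be sketched as a flux-ratio test. Converting an i-band AB magnitude to a band flux requires an effective bandwidth; the 10^14 Hz value below is an assumption for illustration, not the conversion adopted in the paper:

```python
def i_band_flux(i_ab, dnu=1.0e14):
    """Approximate i-band flux in erg s^-1 cm^-2 from an AB magnitude:
    f_nu from the AB zero point (48.6), times an assumed effective
    bandwidth dnu in Hz (the 1e14 Hz default is illustrative only)."""
    f_nu = 10.0 ** (-0.4 * (i_ab + 48.6))  # erg s^-1 cm^-2 Hz^-1
    return f_nu * dnu

def is_extreme_xo(fx, i_ab, threshold=10.0):
    """Flag an 'extreme X-ray-to-optical flux' source, F_X/F_i > 10,
    the criterion that classifies 44 of the 53 objects in this sample."""
    return fx / i_band_flux(i_ab) > threshold

# An X-ray flux of 1e-14 erg s^-1 cm^-2 against a faint i = 24 mag
# counterpart passes the cut; the same flux against i = 18 does not.
```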

  10. DISCOVERY OF A FAINT QUASAR AT z ∼ 6 AND IMPLICATIONS FOR COSMIC REIONIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yongjung; Im, Myungshin; Jeon, Yiseul; Choi, Changsu; Hong, Jueun; Hyun, Minhee; Jun, Hyunsung David; Kim, Dohyeong; Kim, Duho; Kim, Jae-Woo; Lee, Seong-Kook; Taak, Yoon Chan; Yoon, Yongmin [Center for the Exploration of the Origin of the Universe (CEOU), Building 45, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 151-742 (Korea, Republic of); Kim, Minjin; Park, Won-Kee [Korea Astronomy and Space Science Institute, Daejeon 305-348 (Korea, Republic of); Karouzos, Marios [Astronomy Program, FPRD, Department of Physics and Astronomy, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 151-742 (Korea, Republic of); Kim, Ji Hoon [Subaru Telescope, National Astronomical Observatory of Japan, 650 North A’ohoku Place, Hilo, HI 96720 (United States); Pak, Soojong, E-mail: yjkim@astro.snu.ac.kr, E-mail: mim@astro.snu.ac.kr [School of Space Research and Institute of Natural Sciences, Kyung Hee University, 1732 Deogyeong-daero, Giheung-gu, Yongin-si, Gyeonggi-do 446-701 (Korea, Republic of)

    2015-11-10

    Recent studies suggest that faint active galactic nuclei may be responsible for the reionization of the universe. Confirmation of this scenario requires spectroscopic identification of faint quasars (M_1450 > −24 mag) at z ≳ 6, but only a very small number of such quasars have been spectroscopically identified so far. Here, we report the discovery of a faint quasar IMS J220417.92+011144.8 at z ∼ 6 in a 12.5 deg^2 region of the SA22 field of the Infrared Medium-deep Survey (IMS). The spectrum of the quasar shows a sharp break at ∼8443 Å, with emission lines redshifted to z = 5.944 ± 0.002 and rest-frame ultraviolet continuum magnitude M_1450 = −23.59 ± 0.10 AB mag. The discovery of IMS J220417.92+011144.8 is consistent with the expected number of quasars at z ∼ 6 estimated from quasar luminosity functions based on previous observations of spectroscopically identified low-luminosity quasars. This suggests that the number of M_1450 ∼ −23 mag quasars at z ∼ 6 may not be high enough to fully account for the reionization of the universe. In addition, our study demonstrates that faint quasars in the early universe can be identified effectively with a moderately wide and deep near-infrared survey such as the IMS.
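
The rest-frame magnitude M_1450 follows from the apparent magnitude, the luminosity distance, and a K-correction (taken here as −2.5 log10(1+z), i.e. a flat f_ν continuum). A sketch assuming an illustrative flat ΛCDM cosmology (H0 = 70, Ωm = 0.3), not necessarily the one adopted in the paper:

```python
import math

def luminosity_distance_mpc(z, h0=70.0, om=0.3, n=10000):
    """Luminosity distance in Mpc for a flat LCDM cosmology, via
    trapezoidal integration of dz/E(z); parameters are illustrative."""
    c = 299792.458  # speed of light, km/s
    dz = z / n
    integral = 0.0
    for i in range(n + 1):
        zi = i * dz
        ez = math.sqrt(om * (1 + zi) ** 3 + (1 - om))
        w = 0.5 if i in (0, n) else 1.0
        integral += w / ez * dz
    return (1 + z) * c / h0 * integral

def absolute_mag_1450(m_app, z):
    """Rest-frame UV magnitude M_1450 from an apparent AB magnitude,
    assuming a flat f_nu continuum (K-correction = -2.5 log10(1+z))."""
    dl_pc = luminosity_distance_mpc(z) * 1e6
    return m_app - 5 * math.log10(dl_pc / 10.0) + 2.5 * math.log10(1 + z)
```

At z = 5.944 an apparent magnitude near 23 maps to M_1450 near −23.6 under these assumptions, consistent in scale with the value quoted for the quasar above.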

  11. Infrared-faint radio sources in the SERVS deep fields. Pinpointing AGNs at high redshift

    Science.gov (United States)

    Maini, A.; Prandoni, I.; Norris, R. P.; Spitler, L. R.; Mignano, A.; Lacy, M.; Morganti, R.

    2016-12-01

    Context. Infrared-faint radio sources (IFRS) represent an unexpected class of objects which are relatively bright at radio wavelengths, but unusually faint at infrared (IR) and optical wavelengths. A recent and extensive campaign on the radio-brightest IFRSs (S_1.4 GHz ≳ 10 mJy) has provided evidence that most of them (if not all) contain an active galactic nucleus (AGN). Still uncertain is the nature of the radio-faintest IFRSs (S_1.4 GHz ≲ 1 mJy). Aims: The scope of this paper is to assess the nature of the radio-faintest IFRSs, testing their classification and improving the knowledge of their IR properties by making use of the most sensitive IR survey available so far: the Spitzer Extragalactic Representative Volume Survey (SERVS). We also explore how the criteria of IFRSs can be fine-tuned to pinpoint radio-loud AGNs at very high redshift (z > 4). Methods: We analysed a number of IFRS samples identified in SERVS fields, including a new sample (21 sources) extracted from the Lockman Hole. 3.6 and 4.5 μm IR counterparts of the 64 sources located in the SERVS fields were searched for and, when detected, their IR properties were studied. Results: We compared the radio/IR properties of the IR-detected IFRSs with those expected for a number of known classes of objects. We found that IR-detected IFRSs are mostly consistent with a mixture of high-redshift (z ≳ 3) radio-loud AGNs. The faintest ones (S_1.4 GHz ≲ 100 μJy), however, could also be associated with nearer (z ≲ 2) dust-enshrouded starburst galaxies. We also argue that, while IFRSs with radio-to-IR ratios >500 can very efficiently pinpoint radio-loud AGNs at redshift 2 < z < 4, lower radio-to-IR ratios (∼100-200) are expected for higher redshift radio-loud AGNs.
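
The radio-to-IR ratio criterion quoted above reduces to a simple flux-density comparison; the 500 threshold is the one stated in the abstract, everything else is illustrative:

```python
def radio_to_ir_ratio(s_radio, s_ir):
    """Ratio of 1.4 GHz radio to 3.6 um IR flux density (same units)."""
    return s_radio / s_ir

def pinpoints_radio_loud_agn(s_radio, s_ir, threshold=500.0):
    """Per the record, ratios > 500 efficiently select radio-loud AGNs
    at 2 < z < 4, while ~100-200 is expected at higher redshift."""
    return radio_to_ir_ratio(s_radio, s_ir) > threshold

# A 10 mJy radio source with a 10 uJy IR counterpart has ratio 1000
# and passes; a ratio-150 source does not.
assert pinpoints_radio_loud_agn(10000.0, 10.0)
assert not pinpoints_radio_loud_agn(1500.0, 10.0)
```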

  12. Computing camera heading: A study

    Science.gov (United States)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even in internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard. This is because rotations and translations can have similar effects on the images, and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computation trouble spots beforehand, and to design reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.

  13. Teaching the Thrill of Discovery: Student Exploration of Ultra-Faint Dwarf Galaxies with the NOAO Data Lab

    Science.gov (United States)

    Olsen, Knut; Walker, Constance E.; Smith, Blake; NOAO Data Lab Team

    2018-01-01

    We describe an activity aimed at teaching students how ultra-faint Milky Way dwarf galaxies are typically discovered: through filtering of optical photometric catalogs and cross-examination with deep images. The activity, which was developed as part of the Teen Astronomy Café program (https://teensciencecafe.org/cafes/az-teen-astronomy-cafe-tucson/), uses the NOAO Data Lab (http://datalab.noao.edu) and other professional-grade tools to lead high school students through exploration of the object catalog and images from the Survey of the Magellanic Stellar History (SMASH). The students are taught how to use images and color-magnitude diagrams to analyze and interpret stellar populations of increasing complexity, including those of star clusters and the Magellanic Clouds, and culminating with the discovery of the Hydra II ultra-faint dwarf galaxy. The tools and datasets presented allow the students to explore and discover other known stellar systems, as well as unknown candidate star clusters and dwarf galaxies. The ultimate goal of the activity is to give students insight into the methods of modern astronomical research and to allow them to participate in the thrill of discovery.
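
The catalog-filtering step the students perform can be sketched as a color-magnitude box cut followed by a search for a localized spatial overdensity; all column names, cut values, and coordinates below are illustrative, not the SMASH values used in the activity:

```python
def cmd_filter(stars, color_min=-0.2, color_max=0.6, mag_min=20.0, mag_max=24.5):
    """Keep catalog sources whose (g - r, g) values fall in a box
    roughly matching an old, metal-poor stellar population; the cut
    values here are illustrative placeholders."""
    return [s for s in stars
            if color_min <= s["g"] - s["r"] <= color_max
            and mag_min <= s["g"] <= mag_max]

def overdensity(stars, ra0, dec0, radius_deg=0.1):
    """Count filtered stars within a small aperture (small-angle, no
    cos(dec) correction); a candidate dwarf galaxy appears as a
    localized excess over the field density."""
    return sum(1 for s in stars
               if (s["ra"] - ra0) ** 2 + (s["dec"] - dec0) ** 2 <= radius_deg ** 2)

# Tiny illustrative catalog: two blue stars near a candidate centre
# and one red field star that the color cut removes.
catalog = [
    {"ra": 185.43, "dec": -31.99, "g": 22.0, "r": 21.8},
    {"ra": 185.43, "dec": -31.98, "g": 23.1, "r": 22.9},
    {"ra": 190.00, "dec": -30.00, "g": 21.0, "r": 19.5},
]
selected = cmd_filter(catalog)
```

Cross-examination with deep images, as in the activity, is then needed to reject artifacts and background galaxy clusters among the candidate overdensities.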

  14. Short timescale variability in the faint sky variability survey

    NARCIS (Netherlands)

    Morales-Rueda, L.; Groot, P.J.; Augusteijn, T.; Nelemans, G.A.; Vreeswijk, P.M.; Besselaar, E.J.M. van den

    2006-01-01

    We present the V-band variability analysis of the Faint Sky Variability Survey (FSVS). The FSVS combines colour and time variability information, from timescales of 24 minutes to tens of days, down to V = 24. We find that ∼1% of all point sources are variable along the main sequence reaching ∼3.5%

  15. Scintillation camera with second order resolution

    International Nuclear Information System (INIS)

    Muehllehner, G.

    1976-01-01

    A scintillation camera for use in radioisotope imaging to determine the concentration of radionuclides in a two-dimensional area is described in which means is provided for second order positional resolution. The phototubes, which normally provide only a single order of resolution, are modified to provide second order positional resolution of radiation within an object positioned for viewing by the scintillation camera. The phototubes are modified in that multiple anodes are provided to receive signals from the photocathode in a manner such that each anode is particularly responsive to photoemissions from a limited portion of the photocathode. Resolution of radioactive events appearing as an output of this scintillation camera is thereby improved

  16. Scintillation camera with second order resolution

    International Nuclear Information System (INIS)

    1975-01-01

    A scintillation camera is described for use in radioisotope imaging to determine the concentration of radionuclides in a two-dimensional area in which means is provided for second-order positional resolution. The phototubes which normally provide only a single order of resolution, are modified to provide second-order positional resolution of radiation within an object positioned for viewing by the scintillation camera. The phototubes are modified in that multiple anodes are provided to receive signals from the photocathode in a manner such that each anode is particularly responsive to photoemissions from a limited portion of the photocathode. Resolution of radioactive events appearing as an output of this scintillation camera is thereby improved

  17. MOVING OBJECTS IN THE HUBBLE ULTRA DEEP FIELD

    Energy Technology Data Exchange (ETDEWEB)

    Kilic, Mukremin; Gianninas, Alexandros [Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, 440 W. Brooks St., Norman, OK 73019 (United States); Von Hippel, Ted, E-mail: kilic@ou.edu, E-mail: alexg@nhn.ou.edu, E-mail: ted.vonhippel@erau.edu [Embry-Riddle Aeronautical University, 600 S. Clyde Morris Blvd., Daytona Beach, FL 32114 (United States)

    2013-09-01

    We identify proper motion objects in the Hubble Ultra Deep Field (UDF) using the optical data from the original UDF program in 2004 and the near-infrared data from the 128 orbit UDF 2012 campaign. There are 12 sources brighter than I = 27 mag that display >3σ significant proper motions. We do not find any proper motion objects fainter than this magnitude limit. Combining optical and near-infrared photometry, we model the spectral energy distribution of each point-source using stellar templates and state-of-the-art white dwarf models. For I ≤ 27 mag, we identify 23 stars with K0-M6 spectral types and two faint blue objects that are clearly old, thick disk white dwarfs. We measure a thick disk white dwarf space density of 0.1-1.7 × 10^-3 pc^-3 from these two objects. There are no halo white dwarfs in the UDF down to I = 27 mag. Combining the Hubble Deep Field North, South, and the UDF data, we do not see any evidence for dark matter in the form of faint halo white dwarfs, and the observed population of white dwarfs can be explained with the standard Galactic models.
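
The >3σ criterion can be sketched as a shift-over-uncertainty test between the two epochs; the plate scale and all numeric values below are illustrative, not the actual instrument parameters or measurement errors of the UDF campaigns:

```python
import math

def pm_significance(dx_pix, dy_pix, sigma_pix, dt_yr, scale_mas=30.0):
    """Proper motion and detection significance for a source measured
    at two epochs. dx_pix, dy_pix: centroid shift in pixels;
    sigma_pix: combined centroid uncertainty; scale_mas: detector
    scale in mas/pixel (illustrative value). Returns (mu in mas/yr,
    significance in sigma); treating total shift over a single
    uncertainty is a simplification of a proper per-axis test."""
    shift = math.hypot(dx_pix, dy_pix)
    mu = shift * scale_mas / dt_yr
    return mu, shift / sigma_pix

def is_proper_motion_object(dx, dy, sigma, dt, threshold=3.0):
    """Apply the >3-sigma criterion used for the 12 UDF sources."""
    _, sig = pm_significance(dx, dy, sigma, dt)
    return sig > threshold

# A 0.5 pixel shift with 0.1 pixel uncertainty over 8 yr is a 5-sigma
# detection; a 0.14 pixel shift is not significant.
mu, sig = pm_significance(0.3, 0.4, 0.1, 8.0)
```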

  18. BENCHMARKING THE OPTICAL RESOLVING POWER OF UAV BASED CAMERA SYSTEMS

    Directory of Open Access Journals (Sweden)

    H. Meißner

    2017-08-01

    Full Text Available UAV based imaging and 3D object point generation is an established technology. Some UAV users try to address (very high) accuracy applications, i.e. inspection or monitoring scenarios. In order to guarantee such level of detail and accuracy, high resolving imaging systems are mandatory. Furthermore, image quality considerably impacts photogrammetric processing, as the tie point transfer, mandatory for forming the block geometry, fully relies on the radiometric quality of images. Thus, empirical testing of radiometric camera performance is an important issue, in addition to standard (geometric) calibration, which normally is covered primarily. Within this paper the resolving power of ten different camera/lens installations has been investigated. Selected systems represent different camera classes, like DSLRs, system cameras, larger format cameras and proprietary systems. As the systems have been tested in well-controlled laboratory conditions and objective quality measures have been derived, individual performance can be compared directly, thus representing a first benchmark on radiometric performance of UAV cameras. The results have shown that not only the selection of appropriate lens and camera body has an impact; in addition, the image pre-processing, i.e. the use of a specific debayering method, significantly influences the final resolving power.

  19. Geometric database maintenance using CCTV cameras and overlay graphics

    Science.gov (United States)

    Oxenberg, Sheldon C.; Landell, B. Patrick; Kan, Edwin

    1988-01-01

    An interactive graphics system using closed circuit television (CCTV) cameras for remote verification and maintenance of a geometric world model database has been demonstrated in GE's telerobotics testbed. The database provides geometric models and locations of objects viewed by CCTV cameras and manipulated by telerobots. To update the database, an operator uses the interactive graphics system to superimpose a wireframe line drawing of an object with known dimensions on a live video scene containing that object. The methodology used is multipoint positioning to easily superimpose a wireframe graphic on the CCTV image of an object in the work scene. An enhanced version of GE's interactive graphics system will provide the object designation function for the operator control station of the Jet Propulsion Laboratory's telerobot demonstration system.

  20. Counts and colors of faint galaxies

    International Nuclear Information System (INIS)

    Kron, R.G.

    1980-01-01

    The color distribution of faint galaxies is an observational dimension which has not yet been fully exploited, despite the important constraints obtainable for galaxy evolution and cosmology. Number-magnitude counts alone contain very diluted information about the state of things because galaxies from a wide range in redshift contribute to the counts at each magnitude. The most-frequently-seen type of galaxy depends on the luminosity function and the relative proportions of galaxies of different spectral classes. The addition of color as a measured quantity can thus considerably sharpen the interpretation of galaxy counts since the apparent color depends on the redshift and rest-frame spectrum. (Auth.)

  1. 3D Reconstruction of an Underwater Archaeological Site: Comparison Between Low Cost Cameras

    Science.gov (United States)

    Capra, A.; Dubbini, M.; Bertacchini, E.; Castagnetti, C.; Mancini, F.

    2015-04-01

    The 3D reconstruction with metric content of a submerged area, where objects and structures of archaeological interest are found, can play an important role in research and study activities, and even in the digitization of cultural heritage. The 3D reconstruction of objects of interest to archaeologists constitutes a starting point for their classification and description in digital format, and for subsequent fruition by users after delivery through several media. The starting point is a metric survey of the site obtained with photogrammetric surveying and appropriate 3D restitution. The authors have been applying underwater photogrammetric techniques for several years using underwater digital cameras; in this paper, low cost (off-the-shelf) digital cameras are used. Results of tests made on submerged objects with three cameras are presented: Canon PowerShot G12, Intova Sport HD, and GoPro HERO 2. The experiments were aimed at evaluating the precision of the self-calibration procedures, essential for multimedia underwater photogrammetry, and at analyzing the quality of the 3D restitution. The precision obtained in the calibration and orientation procedures was assessed for the three cameras using a homogeneous set of control points. Data were processed with Agisoft PhotoScan. 3D models were then created, and the models derived from the different cameras were compared. The different potentialities of the cameras used are reported in the discussion section. The 3D restitution of objects and structures was integrated with the sea bottom morphology in order to achieve a comprehensive description of the site. A possible methodology for the survey and representation of submerged objects is therefore illustrated, considering both an automatic and a semi-automatic approach.

  2. The MVACS Robotic Arm Camera

    Science.gov (United States)

    Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.

    2001-08-01

    The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.

  3. A direct-view customer-oriented digital holographic camera

    Science.gov (United States)

    Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.

    2018-01-01

    In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as camera sensor and objective and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.

  4. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
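
The intra-camera geometry step can be illustrated with a simplified model (a sketch under strong assumptions: pinhole camera, negligible roll, flat ground plane, pedestrians of roughly known height; the function name and the 1.7 m default are illustrative, not taken from the paper). Under these assumptions the head row of a detection is a linear function of its foot row, so a straight-line fit over many detections recovers both the camera height and the horizon row:

```python
import numpy as np

def calibrate_from_pedestrians(y_foot, y_head, person_height=1.7):
    """Estimate camera height h and horizon row y0 from pedestrian detections.
    With people of height H on a flat ground plane (image rows grow downward):
        y_head = (1 - H/h) * y_foot + (H/h) * y0
    so a line fit over many (y_foot, y_head) pairs recovers h and y0."""
    slope, intercept = np.polyfit(y_foot, y_head, 1)
    ratio = 1.0 - slope            # this is H / h
    h = person_height / ratio
    y0 = intercept / ratio
    return h, y0
```

In the full method the horizon position would then feed the tilt-angle and focal-length estimates; this sketch stops at height and horizon.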

  5. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    Science.gov (United States)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

  6. Lyman continuum escape fraction of faint galaxies at z ∼ 3.3 in the CANDELS/GOODS-North, EGS, and COSMOS fields with LBC

    Science.gov (United States)

    Grazian, A.; Giallongo, E.; Paris, D.; Boutsia, K.; Dickinson, M.; Santini, P.; Windhorst, R. A.; Jansen, R. A.; Cohen, S. H.; Ashcraft, T. A.; Scarlata, C.; Rutkowski, M. J.; Vanzella, E.; Cusano, F.; Cristiani, S.; Giavalisco, M.; Ferguson, H. C.; Koekemoer, A.; Grogin, N. A.; Castellano, M.; Fiore, F.; Fontana, A.; Marchi, F.; Pedichini, F.; Pentericci, L.; Amorín, R.; Barro, G.; Bonchi, A.; Bongiorno, A.; Faber, S. M.; Fumana, M.; Galametz, A.; Guaita, L.; Kocevski, D. D.; Merlin, E.; Nonino, M.; O'Connell, R. W.; Pilo, S.; Ryan, R. E.; Sani, E.; Speziali, R.; Testa, V.; Weiner, B.; Yan, H.

    2017-06-01

    Context. The reionization of the Universe is one of the most important topics of present-day astrophysical research. The most plausible candidates for the reionization process are star-forming galaxies, which according to the predictions of the majority of the theoretical and semi-analytical models should dominate the H I ionizing background at z ≳ 3. Aims: We measure the Lyman continuum escape fraction, which is one of the key parameters used to compute the contribution of star-forming galaxies to the UV background. It provides the ratio between the photons produced at λ ≤ 912 Å rest-frame and those that are able to reach the inter-galactic medium, i.e. that are not absorbed by the neutral hydrogen or by the dust of the galaxy's inter-stellar medium. Methods: We used ultra-deep U-band imaging (U = 30.2 mag at 1σ) from the Large Binocular Camera at the Large Binocular Telescope (LBC/LBT) in the CANDELS/GOODS-North field and deep imaging in the COSMOS and EGS fields in order to estimate the Lyman continuum escape fraction of 69 star-forming galaxies with secure spectroscopic redshifts at 3.27 ≤ z ≤ 3.40 down to faint magnitude limits (L = 0.2L∗, or equivalently M1500 ∼ -19). The narrow redshift range implies that the LBC U-band filter exclusively samples the λ ≤ 912 Å rest-frame wavelengths. Results: We measured through stacks a stringent upper limit for the escape fraction of the bright population (L∗), while for the faint population (L = 0.2L∗) the limit to the escape fraction is ≲ 10%. We computed the contribution of star-forming galaxies to the observed UV background at z ∼ 3 and find that it is not sufficient to keep the Universe ionized at these redshifts unless their escape fraction increases significantly (≥ 10%) at low luminosities (M1500 ≥ -19). Conclusions: We compare our results on the Lyman continuum escape fraction of high-z galaxies with recent estimates in the literature, and discuss future prospects to shed light on the end of the Dark Ages. In the future, strong gravitational

  7. First results from the TOPSAT camera

    Science.gov (United States)

    Greenway, Paul; Tosh, Ian; Morris, Nigel; Burton, Gary; Cawley, Steve

    2017-11-01

    The TopSat camera is a low cost remote sensing imager capable of producing 2.5 metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. The instrument was designed and assembled at the Space Science & Technology Department of the CCLRC's Rutherford Appleton Laboratory (RAL) in the UK, and was launched on the 27th October 2005 from Plesetsk Cosmodrome in Northern Russia on a Kosmos-3M. The camera utilises an off-axis three mirror system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping the costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, which is a collaboration between four UK organisations; QinetiQ, Surrey Satellite Technology Ltd (SSTL), RAL and Infoterra. Its objective is to demonstrate provision of rapid response high resolution imagery to fixed and mobile ground stations using a low cost minisatellite. The paper "Development of the TopSat Camera" presented by RAL at the 5th ICSO in 2004 described the opto-mechanical design, assembly, alignment and environmental test methods implemented. Now that the spacecraft is in orbit and successfully acquiring images, this paper presents the first results from the camera and makes an initial assessment of the camera's in-orbit performance.

  8. A cooperative control algorithm for camera based observational systems.

    Energy Technology Data Exchange (ETDEWEB)

    Young, Joseph G.

    2012-01-01

    Over the last several years, there has been considerable growth in camera based observation systems for a variety of safety, scientific, and recreational applications. In order to improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras. Quite often, there are economic or physical restrictions that prevent us from adding additional cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. In order to accomplish this goal, we present a new cooperative control algorithm for a camera based observational system. Specifically, we present a receding horizon control where we model the underlying optimal control problem as a mixed integer linear program. The benefit of this design is that we can coordinate the actions between each camera while simultaneously respecting its kinematics. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but we use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to intelligently observe the entire region of interest in an effective and thorough manner.
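
A heavily simplified sketch of the idea, not the paper's mixed-integer linear program: predict each target forward with the Kalman filter's constant-velocity step, then assign cameras to targets so that the total uncertainty-discounted distance is minimized (brute force over permutations, assuming one camera per target and equal counts; all names are illustrative):

```python
import itertools
import numpy as np

def predict_positions(states, dt=1.0):
    """Constant-velocity prediction (the Kalman filter's predict step).
    State rows are [x, y, vx, vy]."""
    states = np.asarray(states, dtype=float)
    return states[:, :2] + dt * states[:, 2:]

def assign_cameras(cam_xy, target_states, uncertainty, dt=1.0):
    """Assign one camera per target by minimizing total weighted distance.
    Brute force over permutations stands in for the paper's MILP."""
    preds = predict_positions(target_states, dt)
    cams = np.asarray(cam_xy, dtype=float)
    dist = np.linalg.norm(cams[:, None, :] - preds[None, :, :], axis=2)
    # Discount distance for uncertain targets so outliers get observed sooner.
    cost = dist / (1.0 + np.asarray(uncertainty, dtype=float)[None, :])
    best = min(itertools.permutations(range(len(cams))),
               key=lambda p: sum(cost[c, t] for c, t in enumerate(p)))
    return {c: t for c, t in enumerate(best)}
```

The receding-horizon formulation would re-solve this assignment at every step over a look-ahead window, with camera kinematics as constraints.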

  9. Simulation-based camera navigation training in laparoscopy - a randomized trial

    DEFF Research Database (Denmark)

    Nilsson, Cecilia; Sørensen, Jette Led; Konge, Lars

    2017-01-01

    patient safety. The objectives of this trial were to examine how to train laparoscopic camera navigation and to explore the transfer of skills to the operating room. MATERIALS AND METHODS: A randomized, single-center superiority trial with three groups: The first group practiced simulation-based camera...... navigation tasks (camera group), the second group practiced performing a simulation-based cholecystectomy (procedure group), and the third group received no training (control group). Participants were surgical novices without prior laparoscopic experience. The primary outcome was assessment of camera.......033), had a higher score. CONCLUSIONS: Simulation-based training improves the technical skills required for camera navigation, regardless of practicing camera navigation or the procedure itself. Transfer to the clinical setting could, however, not be demonstrated. The control group demonstrated higher...

  10. Localization and Mapping Using a Non-Central Catadioptric Camera System

    Science.gov (United States)

    Khurana, M.; Armenakis, C.

    2018-05-01

    This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find their use in navigation and mapping of robotic platforms, owing to their wide field of view. Having a wider field of view, or rather a potential 360° field of view, allows the system to "see and move" more freely in the navigation space. A catadioptric camera system is a low cost system which consists of a mirror and a camera. Any perspective camera can be used. A platform was constructed in order to combine the mirror and a camera to build a catadioptric system. A calibration method was developed in order to obtain the relative position and orientation between the two components so that they can be considered as one monolithic system. The mathematical model for localizing the system was determined using conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the achieved location and mapping accuracies of the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points achieved decimetre-level accuracies.
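
The mapping step can be sketched as generic linear (DLT) triangulation, not the authors' exact mirror model: given projection matrices for two platform poses and a matched pair of image points, the object point is the null vector of a small homogeneous system:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one object point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: matched pixel coords (u, v)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous point is the right singular vector of the smallest
    # singular value; dehomogenize to get (X, Y, Z).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```
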

  11. Calibration of a dual-PTZ camera system for stereo vision

    Science.gov (United States)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2010-08-01

    In this paper, we propose a calibration process for the intrinsic and extrinsic parameters of dual-PTZ camera systems. The calibration is based on a complete definition of six coordinate systems fixed at the image planes, and the pan and tilt rotation axes of the cameras. Misalignments between estimated and ideal coordinates of image corners are formed into cost values to be solved by the Nelder-Mead simplex optimization method. Experimental results show that the system is able to obtain 3D coordinates of objects with a consistent accuracy of 1 mm when the distance between the dual-PTZ camera set and the objects are from 0.9 to 1.1 meters.
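
The optimization step might be sketched as follows, assuming, purely for illustration, that the only unknown is a 2-D offset between ideal and estimated image-corner coordinates (the actual calibration solves for full intrinsic and extrinsic parameters); SciPy's Nelder-Mead implementation stands in for the authors' solver:

```python
import numpy as np
from scipy.optimize import minimize

def corner_cost(params, observed, ideal):
    """Sum of squared misalignments between observed corners and ideal
    corners shifted by an unknown (dx, dy) offset."""
    dx, dy = params
    shifted = ideal + np.array([dx, dy])
    return np.sum((observed - shifted) ** 2)

# Synthetic example: four image corners displaced by a fixed offset.
ideal = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 80.0], [100.0, 80.0]])
observed = ideal + np.array([2.5, -1.0])

res = minimize(corner_cost, x0=[0.0, 0.0], args=(observed, ideal),
               method="Nelder-Mead")
# res.x converges to the true offset, approximately [2.5, -1.0]
```
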

  12. Observations of the Perseids 2013 using SPOSH cameras

    Science.gov (United States)

    Margonis, A.; Elgner, S.; Christou, A.; Oberst, J.; Flohrer, J.

    2013-09-01

    Earth is constantly bombarded by debris, most of which disintegrates in the upper atmosphere. The collision of a dust particle, having a mass of approximately 1 g or larger, with the Earth's atmosphere results in a visible streak of light in the night sky, called a meteor. Comets produce new meteoroids each time they come close to the Sun, due to sublimation processes. These fresh particles move around the Sun in orbits similar to that of their parent comet, forming meteoroid streams. For this reason, the intersection of Earth's orbital path with different comet streams gives rise to a number of meteor showers throughout the year. The Perseids are one of the most prominent annual meteor showers, occurring every summer and having their origin in the Halley-type comet 109P/Swift-Tuttle. The dense core of this stream passes Earth's orbit on the 12th of August, when more than 100 meteors per hour can be seen by a single observer under ideal conditions. The Technical University of Berlin (TUB) and the German Aerospace Center (DLR), together with the Armagh Observatory, organize meteor campaigns every summer to observe the activity of the Perseid meteor shower. The observations are carried out using the Smart Panoramic Optical Sensor Head (SPOSH) camera system [2], which has been developed by DLR and Jena-Optronik GmbH under an ESA/ESTEC contract. The camera was designed to image faint, short-lived phenomena on dark planetary hemispheres. The camera is equipped with a highly sensitive back-illuminated CCD chip with a resolution of 1024x1024 pixels. The custom-made fish-eye lens offers a 120°x120° field of view (168° over the diagonal), making it possible to monitor nearly the whole night sky (Fig. 1). This year the observations will take place between the 3rd and 10th of August to cover the meteor activity of the Perseids just before their maximum. The SPOSH cameras will be deployed at two remote sites located at high altitudes on the Greek Peloponnese peninsula. The baseline of ∼50km

  13. OPTIMAL CAMERA NETWORK DESIGN FOR 3D MODELING OF CULTURAL HERITAGE

    Directory of Open Access Journals (Sweden)

    B. S. Alsadik

    2012-07-01

    Full Text Available Digital cultural heritage documentation in 3D is subject to research and practical applications nowadays. Image-based modeling is a technique to create 3D models, which starts with the basic task of designing the camera network. This task is, however, quite crucial in practical applications because it needs thorough planning and a certain level of expertise and experience. Bearing in mind today's (mobile) computational power, we think that the optimal camera network should be designed in the field, thereby making preprocessing and planning dispensable. The optimal camera network is designed when certain accuracy demands are fulfilled with a reasonable effort, namely keeping the number of camera shots at a minimum. In this study, we report on the development of an automatic method to design the optimum camera network for a given object of interest, focusing currently on buildings and statues. Starting from a rough point cloud derived from a video stream of object images, the initial configuration of the camera network is designed, assuming a high-resolution state-of-the-art non-metric camera. To improve the image coverage and accuracy, we use a mathematical penalty method of optimization with constraints. From the experimental tests, we found that, after optimization, maximum coverage is attained besides a significant improvement in positional accuracy. Currently, we are working on a guiding system to ensure that the operator actually takes the desired images. Further steps will include a reliable and detailed modeling of the object applying sophisticated dense matching techniques.
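
The penalty idea can be sketched generically (a toy stand-in, not the authors' coverage objective): replace the constrained problem by a sequence of unconstrained ones whose quadratic penalty weight grows, so the iterates are pushed toward feasibility:

```python
import numpy as np
from scipy.optimize import minimize

def penalty_method(f, constraints, x0, mu=1.0, growth=10.0, rounds=6):
    """Sequential quadratic-penalty method: each constraint g(x) >= 0
    contributes mu * max(0, -g(x))^2, and mu grows between rounds."""
    x = np.asarray(x0, dtype=float)
    for _ in range(rounds):
        F = lambda z: f(z) + mu * sum(max(0.0, -g(z)) ** 2 for g in constraints)
        x = minimize(F, x, method="Nelder-Mead").x
        mu *= growth
    return x

# Toy stand-in for a camera-placement constraint: keep a camera as close to
# the origin as possible while staying at least 2 m from the object there,
# i.e. minimize ||x||^2 subject to ||x|| - 2 >= 0.
x_opt = penalty_method(lambda z: z @ z,
                       [lambda z: np.linalg.norm(z) - 2.0],
                       x0=[3.0, 0.0])
# ||x_opt|| converges to the constraint boundary, approximately 2
```
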

  14. ACT-Vision: active collaborative tracking for multiple PTZ cameras

    Science.gov (United States)

    Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet

    2009-04-01

    We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.

  15. High-speed holographic camera

    International Nuclear Information System (INIS)

    Novaro, Marc

    The high-speed holographic camera is a diagnostic instrument using holography as its information storage medium. It allows us to take 10 holograms of an object, with exposure times of 1.5 ns, separated in time by 1 or 2 ns. In order to obtain these results easily, no moving parts are used in the set-up [fr

  16. Efficient Multiclass Object Detection: Detecting Pedestrians and Bicyclists in a Truck’s Blind Spot Camera

    OpenAIRE

    Van Beeck, Kristof; Goedemé, Toon

    2015-01-01

    In this paper we propose an efficient detection and tracking framework targeting vulnerable road users in the blind spot camera images of a truck. Existing non-vision based safety solutions are not able to handle this problem completely. Therefore we aim to develop an active safety system, based solely on the vision input of the blind spot camera. This is far from trivial: vulnerable road users are a diverse class and consist of a wide variety of poses and appearances. Evidently we need to ac...

  17. Small Orbital Stereo Tracking Camera Technology Development

    Science.gov (United States)

    Gagliano, L.; Bryan, T.; MacLeod, T.

    On-Orbit Small Debris Tracking and Characterization is a technical gap in current National Space Situational Awareness that must be closed to safeguard orbital assets and crew, as small debris poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of the Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be known in order to design the proper level of MOD impact shielding and set appropriate mission design restrictions; the debris flux and size population also need to be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations of a secondary payload on a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate Orbital Debris (OD) monitoring in proximity to the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various sized objects versus ground RADAR tracking and small OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and on military targeting cameras, and by using twin cameras we can provide stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.
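
The "pixel to spot" ranging idea can be sketched generically (illustrative functions, not the AVGS flight algorithm): locate the sub-pixel, intensity-weighted centroid of the debris spot in each image of a rectified stereo pair, then convert the centroid disparity to range with Z = f·B/d:

```python
import numpy as np

def spot_centroid(img):
    """Sub-pixel, intensity-weighted centroid (x, y) of a bright spot."""
    ys, xs = np.indices(img.shape)
    w = img.astype(float)
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total

def stereo_range(x_left, x_right, baseline_m, focal_px):
    """Range from spot disparity in a rectified pair: Z = f * B / d,
    with f in pixels, B in metres, and d = x_left - x_right in pixels."""
    return focal_px * baseline_m / (x_left - x_right)
```
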

  18. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
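
The coordinate transformation and sub-pixel interpolation can be sketched under the common assumption that a retina-like sensor has a log-polar pixel layout (an illustration, not the actual sensor geometry): resample the (ring, sector) image onto a Cartesian grid with bilinear interpolation:

```python
import numpy as np

def bilinear(img, x, y):
    """Sub-pixel sample of img at real-valued (x, y) by bilinear interpolation."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x1] +
            (1 - dx) * dy * img[y1, x0] + dx * dy * img[y1, x1])

def logpolar_to_cartesian(polar, out_size, r_max):
    """Resample a (rings x sectors) retina-like image onto a Cartesian grid."""
    rings, sectors = polar.shape
    out = np.zeros((out_size, out_size))
    c = (out_size - 1) / 2.0
    for v in range(out_size):
        for u in range(out_size):
            r = np.hypot(u - c, v - c)
            if r == 0 or r > r_max:
                continue
            ring = (rings - 1) * np.log(r) / np.log(r_max)  # log-radial index
            if ring < 0:
                continue
            angle = np.arctan2(v - c, u - c) % (2 * np.pi)
            sector = angle / (2 * np.pi) * (sectors - 1)
            out[v, u] = bilinear(polar, sector, ring)
    return out
```
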

  19. Neutron imaging system based on a video camera

    International Nuclear Information System (INIS)

    Dinca, M.

    2004-01-01

    The non-destructive testing with cold, thermal, epithermal or fast neutrons is nowadays more and more useful because the world-wide level of industrial development requires considerably higher standards of quality of manufactured products and reliability of technological processes especially where any deviation from standards could result in large-scale catastrophic consequences or human loses. Thanks to their properties, easily obtained and very good discrimination of the materials that penetrate, the thermal neutrons are the most used probe. The methods involved for this technique have advanced from neutron radiography based on converter screens and radiological films to neutron radioscopy based on video cameras, that is, from static images to dynamic images. Many neutron radioscopy systems have been used in the past with various levels of success. The quality of an image depends on the quality of the neutron beam and the type of the neutron imaging system. For real time investigations there are involved tube type cameras, CCD cameras and recently CID cameras that capture the image from an appropriate scintillator through the agency of a mirror. The analog signal of the camera is then converted into digital signal by the signal processing technology included into the camera. The image acquisition card or frame grabber from a PC converts the digital signal into an image. The image is formatted and processed by image analysis software. The scanning position of the object is controlled by the computer that commands the electrical motors that move horizontally, vertically and rotate the table of the object. Based on this system, a lot of static image acquisitions, real time non-destructive investigations of dynamic processes and finally, tomographic investigations of the small objects are done in a short time. A system based on a CID camera is presented. Fundamental differences between CCD and CID cameras lie in their pixel readout structure and technique. CIDs

  20. PEOPLE REIDENTIFICATION IN A DISTRIBUTED CAMERA NETWORK

    Directory of Open Access Journals (Sweden)

    Icaro Oliveira de Oliveira

    2010-06-01

    Full Text Available This paper presents an approach to the object reidentification problem in a distributed camera network system. The reidentification or reacquisition problem consists essentially of the matching process between images acquired from different cameras. This work applies to an environment monitored by cameras. The application is important to modern security systems, in which identifying the presence of targets in the environment expands the capacity of security agents to act in real time and provides important parameters, such as localization, for each target. We used the targets' interest points and color as features for reidentification. Satisfactory results were obtained from real experiments on public video datasets and on synthetic images with noise.
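
A minimal sketch of the color part of such a matcher (the interest-point part is omitted; the function names and the histogram-intersection score are illustrative): compare per-channel color histograms of a query detection against a gallery of detections from other cameras:

```python
import numpy as np

def color_hist(img, bins=8):
    """Per-channel color histogram, L1-normalized, as an appearance signature."""
    h = np.concatenate([np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
                        for c in range(3)]).astype(float)
    return h / h.sum()

def reidentify(query, gallery):
    """Index of the gallery detection closest to the query under
    histogram intersection (higher score = better match)."""
    hq = color_hist(query)
    scores = [np.minimum(hq, color_hist(g)).sum() for g in gallery]
    return int(np.argmax(scores))
```
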

  1. Wired and Wireless Camera Triggering with Arduino

    Science.gov (United States)

    Kauhanen, H.; Rönnholm, P.

    2017-10-01

    Synchronous triggering is an important task that allows simultaneous data capture from multiple cameras. Accurate synchronization enables 3D measurements of moving objects or from a moving platform. In this paper, we describe one wired and four wireless variations of Arduino-based low-cost remote trigger systems designed to provide a synchronous trigger signal for industrial cameras. Our wireless systems utilize 315 MHz or 434 MHz frequencies with noise filtering capacitors. In order to validate the synchronization accuracy, we developed a prototype of a rotating trigger detection system (named RoTriDeS). This system is suitable to detect the triggering accuracy of global shutter cameras. As a result, the wired system indicated an 8.91 μs mean triggering time difference between two cameras. Corresponding mean values for the four wireless triggering systems varied between 7.92 and 9.42 μs. Presented values include both camera-based and trigger-based desynchronization. Arduino-based triggering systems appeared to be feasible, and they have the potential to be extended to more complicated triggering systems.

  2. Observations of faint comets at McDonald Observatory: 1978-1980

    Science.gov (United States)

    Barker, E. S.; Cochran, A. L.; Rybski, P. M.

    1981-01-01

    Modern observational techniques, developed for spectroscopy and photometry of faint galaxies and quasars, were successfully applied to faint comets on the 2.7 m telescope. The periodic comets Van Biesbrock, Ashbrook-Jackson, Schwassmann-Wachmann 1, Tempel 2, Encke, Forbes, Brooks 2, Stephan-Oterma and the new comets Bradfield (1979l), Bowell (1980b), and Chernis-Petrauskas (1980k) were observed. The comets ranged from 10th to 20th magnitude. For comets fainter than 19th magnitude, reflectance spectra at 100 Å resolution and area photometry were obtained. On comets of 17th or 18th magnitude, spectrometric scans (6 Å resolution) of the nucleus or inner coma region were made. On comets brighter than 16th magnitude, spatial spectrophotometric (6 Å resolution) studies of the inner and extended comae were done. An extensive spatial study of the comae of P/Encke and P/Stephan-Oterma, correlated with heliocentric distance, is underway. The observing process used is described and examples of the results obtained to date are discussed.

  3. Camera Based Navigation System with Augmented Reality

    Directory of Open Access Journals (Sweden)

    M. Marcu

    2012-06-01

    Full Text Available Nowadays smart mobile devices have enough processing power, memory, storage and always-connected wireless communication bandwidth to support any type of application. Augmented reality (AR) proposes a new type of application that tries to enhance the real world by superimposing or combining virtual objects or computer-generated information with it. In this paper we present a camera-based navigation system with augmented reality integration. The proposed system works as follows: the user points the camera of the smartphone towards a point of interest, like a building or any other place, and the application searches for relevant information about that specific place and superimposes the data over the video feed on the display. When the user moves the camera away, changing its orientation, the data changes as well, in real time, with the proper information about the place that is now in the camera view.

  4. Dynamical scene analysis with a moving camera: mobile targets detection system

    International Nuclear Information System (INIS)

    Hennebert, Christine

    1996-01-01

    This thesis work deals with the detection of moving objects in monocular image sequences acquired with a mobile camera. We propose a method able to detect small moving objects in visible or infrared images of real outdoor scenes. In order to detect objects of very low apparent motion, we consider an analysis over a large temporal interval. We have chosen to compensate for the dominant motion due to the camera displacement over several consecutive images in order to form a sub-sequence of images for which the camera seems virtually static. We have also developed a new approach allowing us to extract the different layers of a real scene in order to deal with cases where the 2D motion due to the camera displacement cannot be globally compensated for. To this end, we use a hierarchical model with two levels: a local merging step and a global one. Then, an appropriate temporal filtering is applied to the registered image sub-sequence to enhance signals corresponding to moving objects. The detection issue is stated as a labeling problem within a statistical regularization based on Markov random fields. Our method has been validated on numerous real image sequences depicting complex outdoor scenes. Finally, the feasibility of an integrated circuit for mobile object detection has been proved. This circuit could lead to an ASIC creation. (author) [fr]
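The compensation step described above can be sketched minimally in numpy, assuming (for illustration only) that the dominant motion is a known integer pixel translation; the actual system estimates the dominant 2D parametric motion from the images themselves:

```python
import numpy as np

def compensate_and_diff(prev, curr, dx, dy):
    """Warp curr back by the dominant camera translation (dx, dy) and
    difference it with prev: static background cancels, independently
    moving objects remain as residual peaks."""
    comp = np.roll(np.roll(curr, -dy, axis=0), -dx, axis=1)
    return np.abs(comp.astype(float) - prev.astype(float))

# Toy scene: a static structure shifted by camera motion, plus one mover.
bg = np.zeros((32, 32))
bg[8:12, 8:12] = 1.0                                 # static structure
prev = bg.copy()
curr = np.roll(np.roll(bg, 2, axis=0), 3, axis=1)    # camera moved by (3, 2)
curr[20, 20] = 5.0                                   # independent moving object

resid = compensate_and_diff(prev, curr, dx=3, dy=2)
print(resid.max(), np.unravel_index(resid.argmax(), resid.shape))
```

After compensation the background difference is exactly zero, so a temporal filter over such residuals accumulates evidence only where objects move independently of the camera.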

  5. About possibility of temperature trace observing on the human skin using commercially available IR camera

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Shestakov, Ivan L.; Blednov, Roman G.

    2016-09-01

    One of the urgent security problems is the detection of objects hidden inside the human body. Obviously, for safety reasons one cannot use X-rays for such object detection widely and often. Three years ago, we demonstrated the principal possibility of seeing a temperature trace, induced by eating food or drinking water, on the human body skin using a passive THz camera. However, this camera is very expensive. Therefore, in practice it would be very convenient if one could use an IR camera for this purpose. In contrast to the passive THz camera, the IR camera does not allow one to see an object under clothing if the image it produces is used directly. Of course, this is a big disadvantage for a security solution based on the IR camera. To overcome this disadvantage we develop a novel approach to the computer processing of IR camera images. It allows us to increase the temperature resolution of the IR camera as well as the effective sensitivity of the human eye. As a consequence, it becomes possible to see changes of human body temperature through clothing. We analyze IR images of a person who drinks water and eats chocolate. We follow the temperature trace on the human body skin caused by the temperature change inside the human body. Some experiments were made with measurements of a body temperature covered by a T-shirt. The results shown are very important for the detection of forbidden objects, concealed inside the human body, by non-destructive control without the use of X-rays.
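One generic way computer processing can raise the effective temperature resolution of an IR camera is temporal frame averaging, where uncorrelated sensor noise falls as 1/sqrt(N). The sketch below only illustrates that statistical effect; the 0.5 K per-frame noise figure and the frame count are assumptions, not the authors' actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
true_temp = 37.0
# Simulated IR camera: assumed per-frame noise of 0.5 K, 100 frames.
frames = true_temp + 0.5 * rng.standard_normal((100, 64, 64))

single = frames[0]                 # one raw frame
averaged = frames.mean(axis=0)     # temporal average of 100 frames

# Residual noise drops roughly tenfold (sqrt of 100).
print(round(float(single.std()), 2), round(float(averaged.std()), 2))
```

With a ten-fold noise reduction, sub-0.1 K skin-temperature variations that are invisible in a single frame become measurable.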

  6. Grasping objects from a user’s hand using time-of-flight camera data

    CSIR Research Space (South Africa)

    Govender, N

    2010-11-01

    Full Text Available A ToF camera emits an infrared pulse and measures the return phase change at every pixel to estimate depth over an image. We used a Mesa Imaging SR4000 which, if conditions are right, provides impressively accurate point cloud data with associated intensities...
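The depth estimate of a continuous-wave ToF camera comes from the phase shift between the emitted and returned modulated light. A hedged sketch of that relation, taking an SR4000-class modulation frequency of 30 MHz as an assumed value:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad, f_mod_hz):
    """Depth from the measured phase shift of continuous-wave modulated
    light; the factor of 4*pi accounts for the out-and-back path."""
    return C * phase_shift_rad / (4 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz):
    """Phase wraps at 2*pi, limiting the maximum unambiguous depth."""
    return C / (2 * f_mod_hz)

f = 30e6  # assumed SR4000-class modulation frequency
print(round(tof_depth(math.pi, f), 3))
print(round(unambiguous_range(f), 3))
```

The 2π phase wrap is why such cameras have a limited unambiguous range (about 5 m at 30 MHz), one of the "conditions" that must be right for good point clouds.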

  7. Construction of a frameless camera-based stereotactic neuronavigator.

    Science.gov (United States)

    Cornejo, A; Algorri, M E

    2004-01-01

    We built an infrared vision system to be used as the real time 3D motion sensor in a prototype low cost, high precision, frameless neuronavigator. The objective of the prototype is to develop accessible technology for increased availability of neuronavigation systems in research labs and small clinics and hospitals. We present our choice of technology including camera and IR emitter characteristics. We describe the methodology for setting up the 3D motion sensor, from the arrangement of the cameras and the IR emitters on surgical instruments, to triangulation equations from stereo camera pairs, high bandwidth computer communication with the cameras and real time image processing algorithms. We briefly cover the issues of camera calibration and characterization. Although our performance results do not yet fully meet the high precision, real time requirements of neuronavigation systems we describe the current improvements being made to the 3D motion sensor that will make it suitable for surgical applications.
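For rectified images, the triangulation from a stereo camera pair reduces to the classic disparity relation Z = f·B/d. A minimal sketch; the focal length and baseline below are illustrative values, not those of the prototype:

```python
def stereo_depth(f_px, baseline_m, x_left_px, x_right_px):
    """Depth of a point seen by a rectified stereo pair: Z = f * B / d,
    where d is the horizontal disparity in pixels."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return f_px * baseline_m / disparity

# Hypothetical rig: 800 px focal length, 12 cm camera separation.
z = stereo_depth(f_px=800, baseline_m=0.12, x_left_px=420, x_right_px=380)
print(round(z, 3))
```

The same relation also shows why depth error grows quadratically with distance: a one-pixel disparity error matters more as the disparity itself shrinks.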

  8. Gamma cameras - a method of evaluation

    International Nuclear Information System (INIS)

    Oates, L.; Bibbo, G.

    2000-01-01

    Full text: With the sophistication and longevity of the modern gamma camera it is not often that the need arises to evaluate a gamma camera for purchase. We have recently been placed in the position of retiring our two single headed cameras of some vintage and replacing them with a state of the art dual head variable angle gamma camera. The process used for the evaluation consisted of five parts: (1) Evaluation of the technical specification as expressed in the tender document; (2) A questionnaire adapted from the British Society of Nuclear Medicine; (3) Site visits to assess gantry configuration, movement, patient access and occupational health, welfare and safety considerations; (4) Evaluation of the processing systems offered; (5) Whole of life costing based on equally configured systems. The results of each part of the evaluation were expressed using a weighted matrix analysis with each of the criteria assessed being weighted in accordance with their importance to the provision of an effective nuclear medicine service for our centre and the particular importance to paediatric nuclear medicine. This analysis provided an objective assessment of each gamma camera system from which a purchase recommendation was made. Copyright (2000) The Australian and New Zealand Society of Nuclear Medicine Inc
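The weighted matrix analysis described above amounts to a weighted sum of criterion scores per candidate system; the criteria weights and scores below are hypothetical, not those of the actual tender evaluation:

```python
def weighted_score(scores, weights):
    """Weighted-matrix evaluation: each criterion score (0-10) is
    multiplied by its importance weight; highest total wins."""
    assert len(scores) == len(weights)
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical criteria: specs, questionnaire, site visit, processing, cost.
weights = [0.30, 0.15, 0.20, 0.20, 0.15]
camera_a = [8, 7, 9, 6, 5]
camera_b = [7, 8, 6, 9, 8]

score_a = weighted_score(camera_a, weights)
score_b = weighted_score(camera_b, weights)
print(score_a, score_b)
```

Weighting makes the trade-offs explicit: camera B wins here despite a weaker site-visit score because processing and whole-of-life cost carry more combined weight.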

  9. Comment on "Clouds and the Faint Young Sun Paradox" by Goldblatt and Zahnle (2011)

    Directory of Open Access Journals (Sweden)

    R. Rondanelli

    2012-03-01

    Full Text Available Goldblatt and Zahnle (2011) raise a number of issues related to the possibility that cirrus clouds can provide a solution to the faint young sun paradox. Here, we argue that: (1) climates having a lower than present mean surface temperature cannot be discarded as solutions to the faint young sun paradox, (2) the detrainment from deep convective clouds in the tropics is a well-established physical mechanism for the formation of high clouds that have a positive radiative forcing (even if the possible role of these clouds as a negative climate feedback remains controversial) and (3) even if some cloud properties are not mutually consistent with observations in radiative transfer parameterizations, the most relevant consistency (for the purpose of hypothesis testing) is with observations of the cloud radiative forcing. Therefore, we maintain that cirrus clouds, as observed in the current climate and covering a large region of the tropics, can provide a solution to the faint young sun paradox, or at least ease the amount of CO2 or other greenhouse substances needed to provide temperatures above freezing during the Archean.

  10. The Faint End of the Lyman Alpha Luminosity Function at 2 < z < 3.8

    Science.gov (United States)

    Devarakonda, Yaswant; Livermore, Rachael; Indahl, Briana; Wold, Isak; Davis, Dustin; Finkelstein, Steven

    2018-01-01

    Most current models predict that our universe is mostly composed of small, dim galaxies. Because these galaxies are so faint, it is very difficult to study them outside of our local universe. This is a particular problem for studying how these small galaxies evolved over their lifetimes. With the benefit of gravitational lensing, however, we are able to observe galaxies that are farther and fainter than ever before possible. In this study, we focus on Lyman-alpha emitting galaxies between redshifts of 2 and 3.8, so that we may study these galaxies during the epoch of peak star formation in the universe. We use the McDonald Observatory 2.7 m Harlan J. Smith Telescope with the VIRUS-P IFU spectrograph to observe several Hubble Frontier Field lensing clusters to spectroscopically discover faint galaxies over this redshift range. In addition to providing insight into the faint-end slope of the Lyman alpha luminosity function, the spectroscopic redshifts will allow us to better constrain the mass models of the foreground clusters, such as Abell 370, so that we may better understand lensing effects for this and future studies.

  11. REAL-TIME CAMERA GUIDANCE FOR 3D SCENE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    F. Schindler

    2012-07-01

    Full Text Available We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both the camera trajectory and the 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to accurate results compared to ground truth after few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots as well as online flight planning of unmanned aerial vehicles.

  12. Collimated trans-axial tomographic scintillation camera

    International Nuclear Information System (INIS)

    1980-01-01

    The objects of this invention are: first, to reduce the time required to obtain statistically significant data in trans-axial tomographic radioisotope scanning using a scintillation camera; secondly, to provide a scintillation camera system to increase the rate of acceptance of radioactive events contributing to the positional information obtainable from a known radiation source without sacrificing spatial resolution; and thirdly, to reduce the scanning time without loss of image clarity. The system described comprises a scintillation camera detector, means for moving this in orbit about a cranial-caudal axis relative to a patient, and a collimator having septa defining apertures such that gamma rays perpendicular to the axis are admitted with high spatial resolution, while those parallel to the axis are admitted with low resolution. The septa may be made of strips of lead. Detailed descriptions are given. (U.K.)

  13. Temperature resolution enhancing of commercially available THz passive cameras due to computer processing of images

    Science.gov (United States)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Kuchik, Igor E.

    2014-06-01

    As is well known, the application of passive THz cameras to security problems is very promising. Such a camera allows one to see a concealed object without contact with a person, and it poses no danger to the person. The efficiency of a passive THz camera depends on its temperature resolution. This characteristic determines the possibilities of detecting a concealed object: the minimal size of the object, the maximal detection distance, and the image detail. One probable way of enhancing image quality is computer processing of the image. Using computer processing of THz images of objects concealed on the human body, one may improve them many times. Consequently, the instrumental resolution of such a device may be increased without any additional engineering efforts. We demonstrate new possibilities for seeing clothing details that the raw images produced by THz cameras do not show. We achieve good image quality by applying various spatial filters, with the aim of demonstrating the independence of the processed images from the particular mathematical operations used. This result demonstrates the feasibility of seeing such objects. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China).
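A 3x3 median filter is a representative example of the spatial filters that can be applied to a noisy image; this numpy sketch (illustrative, not the authors' processing chain) shows how an isolated noise spike is removed while flat regions survive untouched:

```python
import numpy as np

def median3x3(img):
    """3x3 median filter; border pixels are left unchanged.
    Built from 9 shifted views of the image, so no explicit pixel loop."""
    h, w = img.shape
    stack = np.stack([img[di:di + h - 2, dj:dj + w - 2]
                      for di in range(3) for dj in range(3)])
    out = img.copy()
    out[1:-1, 1:-1] = np.median(stack, axis=0)
    return out

# A flat scene with one isolated noise spike.
img = np.full((8, 8), 10.0)
img[4, 4] = 255.0
den = median3x3(img)
print(img[4, 4], den[4, 4])
```

Unlike linear smoothing, the median suppresses the outlier completely without blurring the surrounding flat region, which is why rank filters are popular for this kind of cleanup.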

  14. Head-coupled remote stereoscopic camera system for telepresence applications

    Science.gov (United States)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.

  15. Camera Trajectory from Wide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to the structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for a reliable camera trajectory estimation, for instance, the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes the image feature matching very difficult (or impossible) and the camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, the image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, that links the radius of the image point r to the
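For reference, Mičušík's two-parameter model is commonly written as θ = a·r/(1 + b·r²), linking the image radius r to the angle θ between the incoming ray and the optical axis; the sketch below assumes that form and uses illustrative, uncalibrated parameter values:

```python
import math

def micusik_theta(r, a, b):
    """Two-parameter omnidirectional model (assumed form):
    theta = a*r / (1 + b*r**2), mapping image radius r (pixels) to the
    angle theta (radians) between the ray and the optical axis."""
    return a * r / (1 + b * r ** 2)

# For a 180-degree lens, the rim of the 1600 px image circle (r = 800 px)
# should map to ~90 degrees; with b = 0 that fixes a = pi / 1600.
a = math.pi / 1600
print(math.degrees(micusik_theta(800.0, a, 0.0)))
```

In a real calibration both a and b are estimated jointly with the extrinsics so the model fits the lens across the whole image circle, not just at the rim.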

  16. 3D RECONSTRUCTION OF AN UNDERWATER ARCHAELOGICAL SITE: COMPARISON BETWEEN LOW COST CAMERAS

    Directory of Open Access Journals (Sweden)

    A. Capra

    2015-04-01

    Full Text Available The 3D reconstruction with metric content of a submerged area, where objects and structures of archaeological interest are found, could play an important role in research and study activities and even in the digitization of cultural heritage. The reconstruction of 3D objects of interest to archaeologists constitutes a starting point for the classification and description of objects in digital format and for subsequent fruition by users through several media. The starting point is a metric evaluation of the site obtained with photogrammetric surveying and appropriate 3D restitution. The authors have been applying the underwater photogrammetric technique for several years using underwater digital cameras and, in this paper, low-cost (off-the-shelf) digital cameras. Results of tests made on submerged objects with three cameras are presented: a Canon PowerShot G12, an Intova Sport HD and a GoPro HERO 2. The experimentation had the goal of evaluating the precision of the self-calibration procedures, essential for multimedia underwater photogrammetry, and of analyzing the quality of the 3D restitution. The precision obtained in the calibration and orientation procedures was assessed using the three cameras and a homogeneous set of control points. Data were processed with Agisoft PhotoScan. Subsequently, 3D models were created and a comparison of the models derived from the different cameras was performed. The different potentials of the cameras used are reported in the discussion section. The 3D restitution of objects and structures was integrated with the morphology of the sea bottom in order to achieve a comprehensive description of the site. A possible methodology for the survey and representation of submerged objects is therefore illustrated, considering both an automatic and a semi-automatic approach.

  17. Homography-based multiple-camera person-tracking

    Science.gov (United States)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of
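The homography at the heart of such a tracker can be estimated from four or more corresponding point pairs (e.g. matched feet locations) with the Direct Linear Transform; a minimal numpy sketch on synthetic, non-degenerate correspondences:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography H with
    dst ~ H @ src from four point correspondences (SVD null vector)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_point(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Synthetic feet correspondences related by scale 2 and translation (2, 1).
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(2, 1), (4, 1), (4, 3), (2, 3)]
H = homography_dlt(src, dst)
print(np.round(map_point(H, (0.5, 0.5)), 6))
```

Once H is known, a target's ground-plane position in one camera can be mapped directly into the other view, which is exactly what gives both projections the same tracking label.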

  18. Application of Terrestrial Laser Scanner with an Integrated Thermal Camera in Non-Destructive Evaluation of Concrete Surface of Hydrotechnical Objects

    Directory of Open Access Journals (Sweden)

    Kowalska Maria

    2017-12-01

    Full Text Available The authors present possible applications of thermal data as an additional source of information on an object’s behaviour during the technical assessment of the condition of a concrete surface. For the study one of the most recent propositions introduced by Zoller + Fröhlich company was used, which is an integration of a thermal camera with a terrestrial laser scanner. This solution enables an acquisition of geometric and spectral data on the surveyed object and also provides information on the surface’s temperature in the selected points. A section of the dam’s downstream concrete wall was selected as the subject of the study for which a number of scans were carried out and a number of thermal images were taken at different times of the day. The obtained thermal data was confronted with the acquired spectral information for the specified points. This made it possible to carry out broader analysis of the surface and an inspection of the revealed fissure. The thermal analysis of said fissure indicated that the temperature changes within it are slower, which may affect the way the concrete works and may require further elaboration by the appropriate experts. Through the integration of a thermal camera with a terrestrial laser scanner one can not only analyse changes of temperature in the discretely selected points but on the whole surface as well. Moreover, it is also possible to accurately determine the range and the area of the change affecting the surface. The authors note the limitations of the presented solution like, inter alia, the resolution of the thermal camera.

  19. Bi-variate statistical attribute filtering : A tool for robust detection of faint objects

    NARCIS (Netherlands)

    Teeninga, Paul; Moschini, Ugo; Trager, Scott C.; Wilkinson, M.H.F.

    2013-01-01

    We present a new method for morphological connected attribute filtering for object detection in astronomical images. In this approach, a threshold is set on one attribute (power), based on its distribution due to noise, as a function of object area. The results show an order of magnitude higher

  20. An Overdensity of i-Dropouts among a Population of Excess Field Objects in the Virgo Cluster

    Science.gov (United States)

    Yan, Haojing; Hathi, Nimish P.; Windhorst, Rogier A.

    2008-03-01

    Using a set of deep imaging data obtained by the Advanced Camera for Surveys (ACS) on the Hubble Space Telescope (HST) shortly after its deployment, Yan, Windhorst, & Cohen found a large number of F775W-band dropouts (i-dropouts), which are consistent with being galaxies at z ≈ 6. The surface density of i-dropouts thus derived, however, is an order of magnitude higher than those subsequent studies found in other deep ACS fields, including the Hubble Ultra Deep Field (HUDF). Here we revisit this problem, using both existing and new data. We confirm that the large overdensity of i-dropouts does exist in this field, and that their optical-to-IR colors are similar to those in the HUDF. However, we have discovered that the i-dropout overdensity is accompanied by an even larger excess of faint field objects in this region and its vicinity. This large excess of field objects is most likely caused by the fact that we have resolved the faint diffuse light extending from an interacting galaxy pair in the Virgo Cluster, M60/NGC 4647, which lies several arcminutes away from the region where the excess is found. The i-dropouts in this field are within the magnitude range where this excess of field objects occurs, and their spatial distribution seems to follow the same gradient as the entire excess field population. This excess population is also red in color, and the red wing of its color distribution continuously extends to the regime where the i-dropouts reside. While we still cannot completely rule out the possibility that the overdensity of i-dropouts might be a genuine large-scale structure of galaxies at z ≈ 6, we prefer the interpretation that most of them are part of the excess stellar population related to M60/NGC 4647. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These

  1. The radio properties of infrared-faint radio sources

    Science.gov (United States)

    Middelberg, E.; Norris, R. P.; Hales, C. A.; Seymour, N.; Johnston-Hollitt, M.; Huynh, M. T.; Lenc, E.; Mao, M. Y.

    2011-02-01

    Context. Infrared-faint radio sources (IFRS) are objects that have flux densities of several mJy at 1.4 GHz, but that are invisible at 3.6 μm when using sensitive Spitzer observations with μJy sensitivities. Their nature is unclear and difficult to investigate since they are only visible in the radio. Aims: High-resolution radio images and comprehensive spectral coverage can yield constraints on the emission mechanisms of IFRS and can give hints to similarities with known objects. Methods: We imaged a sample of 17 IFRS at 4.8 GHz and 8.6 GHz with the Australia Telescope Compact Array to determine the structures on arcsecond scales. We added radio data from other observing projects and from the literature to obtain broad-band radio spectra. Results: We find that the sources in our sample are either resolved out at the higher frequencies or are compact at resolutions of a few arcsec, which implies that they are smaller than a typical galaxy. The spectra of IFRS are remarkably steep, with a median spectral index of -1.4 and a prominent lack of spectral indices larger than -0.7. We also find that, given the IR non-detections, the ratio of 1.4 GHz flux density to 3.6 μm flux density is very high, and this puts them into the same regime as high-redshift radio galaxies. Conclusions: The evidence that IFRS are predominantly high-redshift sources driven by active galactic nuclei (AGN) is strong, even though not all IFRS may be caused by the same phenomenon. Compared to the rare and painstakingly collected high-redshift radio galaxies, IFRS appear to be much more abundant, but less luminous, AGN-driven galaxies at similar cosmological distances.
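The spectral indices quoted follow the usual two-point definition α = log(S₂/S₁)/log(ν₂/ν₁) with S ∝ ν^α; a small sketch with illustrative flux densities chosen to land near the reported median of -1.4:

```python
import math

def spectral_index(s1_mjy, nu1_ghz, s2_mjy, nu2_ghz):
    """Radio spectral index alpha in the S ~ nu**alpha convention;
    steep (synchrotron-dominated) spectra give alpha well below zero."""
    return math.log(s2_mjy / s1_mjy) / math.log(nu2_ghz / nu1_ghz)

# Illustrative IFRS-like source: 8 mJy at 1.4 GHz falling to 1.4 mJy at 4.8 GHz.
alpha = spectral_index(8.0, 1.4, 1.4, 4.8)
print(round(alpha, 2))
```

A source this steep clears the paper's -0.7 cutoff comfortably, which is the property that makes IFRS resemble high-redshift radio galaxies.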

  2. An Improved Technique for the Photometry and Astrometry of Faint Companions

    Science.gov (United States)

    Burke, Daniel; Gladysz, Szymon; Roberts, Lewis; Devaney, Nicholas; Dainty, Chris

    2009-07-01

    We propose a new approach to differential astrometry and photometry of faint companions in adaptive optics images. It is based on a prewhitening matched filter, also referred to in the literature as the Hotelling observer. We focus on cases where the signal of the companion is located within the bright halo of the parent star. Using real adaptive optics data from the 3 m Shane telescope at the Lick Observatory, we compare the performance of the Hotelling algorithm with other estimation algorithms currently used for the same problem. The real single-star data are used to generate artificial binary objects with a range of magnitude ratios. In most cases, the Hotelling observer gives significantly lower astrometric and photometric errors. In the case of high Strehl ratio (SR) data (SR ≈ 0.5), the differential photometry of a binary star with Δm = 4.5 and a separation of 0.6″ is better than 0.1 mag, a factor of 2 lower than the other algorithms considered.
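The Hotelling observer amounts to a matched filter prewhitened by the noise covariance: the template is w = K⁻¹s, giving the test statistic t = sᵀK⁻¹g. A 1-D toy sketch (the signal shape, covariance model and amplitudes are all illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Known companion signal (a 1-D "PSF") and correlated halo-noise covariance K.
n = 16
s = np.exp(-0.5 * ((np.arange(n) - 8.0) / 1.5) ** 2)
K = np.array([[0.5 ** abs(i - j) for j in range(n)] for i in range(n)])

w = np.linalg.solve(K, s)          # prewhitening matched filter template

# One measurement without and one with the companion present.
noise = rng.multivariate_normal(np.zeros(n), K)
t_absent = w @ noise
t_present = w @ (3.0 * s + noise)

print(bool(t_present > t_absent))
```

Solving against K down-weights pixel combinations where the halo noise is strongly correlated, which is what lets the statistic pull a faint companion out of the bright stellar halo.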

  3. Parallel object-oriented data mining system

    Science.gov (United States)

    Kamath, Chandrika; Cantu-Paz, Erick

    2004-01-06

    A data mining system uncovers patterns, associations, anomalies and other statistically significant structures in data. Data files are read and displayed. Objects in the data files are identified. Relevant features for the objects are extracted. Patterns among the objects are recognized based upon the features. Data from the Faint Images of the Radio Sky at Twenty Centimeters (FIRST) sky survey, collected with the Very Large Array in New Mexico, were used to search for bent doubles, a special type of quasar (radio-emitting stellar object). The FIRST survey has generated more than 32,000 images of the sky to date. Each image is 7.1 megabytes, yielding more than 100 gigabytes of image data in the entire data set.

  4. Calibration of Low Cost RGB and NIR Uav Cameras

    Science.gov (United States)

    Fryskowska, A.; Kedzierski, M.; Grochala, A.; Braula, A.

    2016-06-01

    Non-metric digital cameras are being widely used for photogrammetric studies. The increase in resolution and quality of images obtained by non-metric cameras allows them to be used in low-cost UAV and terrestrial photogrammetry. Imagery acquired with non-metric cameras can be used in 3D modeling of objects or landscapes, reconstruction of historical sites, generation of digital terrain models (DTM), orthophotos, or in the assessment of accidents. Non-metric digital cameras are characterized by instability and unknown interior orientation parameters. Therefore, the use of these devices requires prior calibration. The calibration research was conducted using a non-metric camera, different calibration test fields and various software packages. The first part of the paper contains a brief theoretical introduction including basic definitions, such as the construction of non-metric cameras and a description of different optical distortions. The second part of the paper covers the camera calibration process and details of the calibration methods and models that have been used. The Sony NEX-5 camera calibration has been done using the following software: Image Master Calib, the Matlab Camera Calibrator application and Agisoft Lens. For the study, 2D test fields have been used. As part of the research, a comparative analysis of the results has been done.
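Among the interior orientation parameters such calibrations recover are the radial distortion coefficients; the standard Brown radial model can be sketched as follows (the coefficient values here are illustrative only, not calibrated ones):

```python
def distort(x, y, k1, k2):
    """Brown-model radial distortion of normalized image coordinates:
    both coordinates are scaled by 1 + k1*r^2 + k2*r^4, where r is the
    radial distance from the principal point."""
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# Barrel distortion (negative k1) pulls off-axis points toward the center.
xd, yd = distort(0.5, 0.0, k1=-0.2, k2=0.05)
print(xd, yd)
```

Calibration software estimates k1, k2 (often together with tangential terms) by minimizing reprojection error over the test-field points, then inverts this mapping to undistort images.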

  5. LAMOST CCD camera-control system based on RTS2

    Science.gov (United States)

    Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng

    2018-05-01

    The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and provides a referential solution for full RTS2 introduction to the LAMOST observatory control system.

  6. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.

    2011-01-01

    The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive-only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5 µm) or long-wave infrared (LWIR) radiation (8-12 µm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24 hour water and 12 hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.

  7. Automatic video segmentation employing object/camera modeling techniques

    NARCIS (Netherlands)

    Farin, D.S.

    2005-01-01

    Practically established video compression and storage techniques still process video sequences as rectangular images without further semantic structure. However, humans watching a video sequence immediately recognize acting objects as semantic units. This semantic object separation is currently not

  8. Holographic interferometry using a digital photo-camera

    International Nuclear Information System (INIS)

    Sekanina, H.; Hledik, S.

    2001-01-01

    The possibilities of running digital holographic interferometry using commonly available compact digital zoom photo-cameras are studied. The recently developed holographic setup, suitable especially for digital photo-cameras equipped with a non-detachable objective lens, is used. The method described enables a simple and straightforward way of both recording and reconstructing digital holographic interferograms. The feasibility of the new method is verified by digital reconstruction of the acquired interferograms, using a numerical code based on the fast Fourier transform. The experimental results obtained are presented and discussed. (authors)

  9. Camera calibration method of binocular stereo vision based on OpenCV

    Science.gov (United States)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the precision and efficiency of the process. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial and decentering lens distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners were also used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
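
The calibration errors this record mentions are conventionally reported as the RMS reprojection error: project the known checkerboard points through the estimated camera model and measure the residual against the detected corners. A self-contained NumPy sketch of that figure of merit (the pinhole model only; distortion is omitted for brevity, and the function names are illustrative rather than OpenCV's):

```python
import numpy as np

def project(points_3d, K, R, t):
    """Pinhole projection of N x 3 world points with intrinsic matrix K
    and pose (R, t); returns N x 2 pixel coordinates."""
    cam = points_3d @ R.T + t       # world -> camera frame
    uvw = cam @ K.T                 # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]

def rms_reprojection_error(K, R, t, points_3d, corners_2d):
    """RMS distance between reprojected model points and detected
    checkerboard corners, the usual calibration quality metric."""
    d = np.linalg.norm(project(points_3d, K, R, t) - corners_2d, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))
```

In OpenCV itself this number is what `cv2.calibrateCamera` returns as its first result after optimising K, the distortion coefficients, and the per-view poses.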

  10. Primordial black holes as dark matter: constraints from compact ultra-faint dwarfs

    Science.gov (United States)

    Zhu, Qirong; Vasiliev, Eugene; Li, Yuexing; Jing, Yipeng

    2018-05-01

    The ground-breaking detections of gravitational waves from black hole mergers by LIGO have rekindled interest in primordial black holes (PBHs) and the possibility of dark matter being composed of PBHs. It has been suggested that PBHs of tens of solar masses could serve as dark matter candidates. Recent analytical studies demonstrated that compact ultra-faint dwarf galaxies can serve as a sensitive test of the PBH dark matter hypothesis: since stars in such a halo-dominated system would be heated by the more massive PBHs, their present-day distribution can provide strong constraints on the PBH mass. In this study, we further explore this scenario with more detailed calculations, using a combination of dynamical simulations and Bayesian inference methods. The joint evolution of stars and PBH dark matter is followed with the Fokker-Planck code PHASEFLOW. We run a large suite of such simulations for different dark matter parameters, then use a Markov chain Monte Carlo approach to constrain the PBH properties with observations of ultra-faint galaxies. We find that two-body relaxation between the stars and PBHs drives up the stellar core size and increases the central stellar velocity dispersion. Using the observed half-light radius and velocity dispersion of stars in the compact ultra-faint dwarf galaxies as joint constraints, we infer that these dwarfs may have a cored dark matter halo with a central density in the range of 1-2 M⊙ pc⁻³, and that the PBHs may have a mass range of 2-14 M⊙ if they constitute all or a substantial fraction of the dark matter.

  11. Feature-based automatic color calibration for networked camera system

    Science.gov (United States)

    Yamamoto, Shoji; Taki, Keisuke; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2011-01-01

    In this paper, we have developed feature-based automatic color calibration using area-based detection and an adaptive nonlinear regression method. Simple chartless color matching is achieved by exploiting the image areas where the cameras' fields of view overlap. Accurate detection of a common object is achieved by area-based detection that combines MSER with SIFT. Adaptive color calibration using the color of the detected object is computed by a nonlinear regression method. The method can indicate how much an object's color contributes to the calibration, and this function drives an automatic selection notification for the user. Experimental results show that the accuracy of the calibration improves gradually. The method is suitable for practical multi-camera color calibration provided that enough samples are obtained.
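
The nonlinear regression step in this record maps the colors one camera records for the common object onto the colors another camera records for it. As a hedged stand-in for the paper's adaptive method, a per-channel polynomial least-squares fit already illustrates the idea (the exact regression model used in the paper is not specified here):

```python
import numpy as np

def fit_color_map(src_rgb, dst_rgb, degree=2):
    """Fit one polynomial per channel mapping camera A's RGB response
    (N x 3) onto camera B's (N x 3) for matched object pixels."""
    return [np.polyfit(src_rgb[:, c], dst_rgb[:, c], degree)
            for c in range(3)]

def apply_color_map(coeffs, rgb):
    """Apply the fitted per-channel polynomials to an N x 3 RGB array."""
    return np.stack([np.polyval(coeffs[c], rgb[:, c])
                     for c in range(3)], axis=1)
```

With more matched samples from detected common objects the fit covers more of the response curve, which is why the record reports gradually improving accuracy.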

  12. A practical block detector for a depth-encoding PET camera

    International Nuclear Information System (INIS)

    Rogers, J.G.; Moisan, C.; Hoskinson, E.M.; Andreaco, M.S.; Williams, C.W.; Nutt, R.

    1996-01-01

    The depth-of-interaction effect in block detectors degrades the image resolution in commercial PET cameras and impedes the natural evolution of smaller, less expensive cameras. A method for correcting the measured position of each detected gamma ray by measuring its depth-of-interaction was tested and found to recover 38% of the lost resolution at 7.5 cm radius in a tabletop, 50-cm-diameter camera. To obtain the desired depth sensitivity, standard commercial detectors were modified by a simple and practical process that is suitable for mass production of the detectors. The impact of the detector modifications on central image resolution and on the ability of the camera to correct for object scatter were also measured
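
The resolution loss this record corrects arises from parallax: a gamma ray entering a crystal obliquely is assigned to the crystal face, while it actually interacted at some depth, shifting its apparent transverse position. A toy geometric sketch of the correction the measured depth enables (a simplified illustration, not the actual correction algorithm of the paper):

```python
import math

def parallax_shift(depth_mm, incidence_deg):
    """Transverse mislocation of an event detected at the given depth
    of interaction when the gamma enters the crystal off-axis."""
    return depth_mm * math.tan(math.radians(incidence_deg))

def corrected_position(measured_mm, depth_mm, incidence_deg):
    """Shift the measured transverse position back toward the true
    line of response using the measured depth of interaction."""
    return measured_mm - parallax_shift(depth_mm, incidence_deg)
```

At normal incidence (sources near the camera axis) the shift vanishes, which is why the effect grows with radius and why the recovery quoted above is measured at 7.5 cm off-center.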

  13. A practical block detector for a depth encoding PET camera

    International Nuclear Information System (INIS)

    Rogers, J.G.; Moisan, C.; Hoskinson, E.M.

    1995-10-01

    The depth-of-interaction effect in block detectors degrades the image resolution in commercial PET cameras and impedes the natural evolution of smaller, less expensive cameras. A method for correcting the measured position of each detected gamma ray by measuring its depth-of-interaction was tested and found to recover 38% of the lost resolution in a table-top 50 cm diameter camera. To obtain the desired depth sensitivity, standard commercial detectors were modified by a simple and practical process, which is suitable for mass production of the detectors. The impact of the detector modifications on central image resolution and on the ability of the camera to correct for object scatter were also measured. (authors)

  14. Acceptance/operational test procedure 241-AN-107 Video Camera System

    International Nuclear Information System (INIS)

    Pedersen, L.T.

    1994-01-01

    This procedure will document the satisfactory operation of the 241-AN-107 Video Camera System. The camera assembly, including camera mast, pan-and-tilt unit, camera, and lights, will be installed in Tank 241-AN-107 to monitor activities during the Caustic Addition Project. The camera focus, zoom, and iris remote controls will be functionally tested. The resolution and color rendition of the camera will be verified using standard reference charts. The pan-and-tilt unit will be tested for required ranges of motion, and the camera lights will be functionally tested. The master control station equipment, including the monitor, VCRs, printer, character generator, and video micrometer will be set up and performance tested in accordance with original equipment manufacturer's specifications. The accuracy of the video micrometer to measure objects in the range of 0.25 inches to 67 inches will be verified. The gas drying distribution system will be tested to ensure that a drying gas can be flowed over the camera and lens in the event that condensation forms on these components. This test will be performed by attaching the gas input connector, located in the upper junction box, to a pressurized gas supply and verifying that the check valve, located in the camera housing, opens to exhaust the compressed gas. The 241-AN-107 camera system will also be tested to assure acceptable resolution of the camera imaging components utilizing the camera system lights.

  15. Herschel-PACS photometry of faint stars for sensitivity performance assessment and establishment of faint FIR primary photometric standards

    Science.gov (United States)

    Klaas, U.; Balog, Z.; Nielbock, M.; Müller, T. G.; Linz, H.; Kiss, Cs.

    2018-05-01

    Aims: Our aims are to determine flux densities and their photometric accuracy for a set of seventeen stars that range in flux from intermediately bright (≲2.5 Jy) to faint (≳5 mJy) in the far-infrared (FIR). We also aim to derive signal-to-noise dependence with flux and time, and compare the results with predictions from the Herschel exposure-time calculation tool. Methods: We obtain aperture photometry from Herschel-PACS high-pass-filtered scan maps and chop/nod observations of the faint stars. The issues of detection limits and sky confusion noise are addressed by comparison of the field-of-view at different wavelengths, by multi-aperture photometry, by special processing of the maps to preserve extended emission, and with the help of large-scale absolute sky brightness maps from AKARI. This photometry is compared with flux-density predictions based on photospheric models for these stars. We obtain a robust noise estimate by fitting the flux distribution per map pixel histogram for the area around the stars, scaling it for the applied aperture size and correcting for noise correlation. Results: For 15 stars we obtain reliable photometry in at least one PACS filter, and for 11 stars we achieve this in all three PACS filters (70, 100, 160 μm). Faintest fluxes, for which the photometry still has good quality, are about 10-20 mJy with scan map photometry. The photometry of seven stars is consistent with models or flux predictions for pure photospheric emission, making them good primary standard candidates. Two stars exhibit source-intrinsic far-infrared excess: β Gem (Pollux), being the host star of a confirmed Jupiter-size exoplanet, due to emission of an associated dust disk, and η Dra due to dust emission in a binary system with a K1 dwarf. The investigation of the 160 μm sky background and environment of four sources reveals significant sky confusion prohibiting the determination of an accurate stellar flux at this wavelength. As a good model
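
The robust noise estimate described in the methods, fitting the distribution of pixel values around each star and then scaling to the photometric aperture, can be sketched in NumPy. Estimating the Gaussian width from histogram moments stands in here for the paper's actual distribution fit, and the correlated-noise factor is left as a free parameter:

```python
import numpy as np

def map_noise_per_pixel(pixels, nbins=101):
    """Estimate the 1-sigma noise of a map from the histogram of its
    pixel values (a simplified stand-in for the per-pixel
    flux-distribution fit described in the paper)."""
    counts, edges = np.histogram(pixels, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mean = np.average(centers, weights=counts)
    var = np.average((centers - mean) ** 2, weights=counts)
    return float(np.sqrt(var))

def aperture_noise(sigma_pix, n_pix_in_aperture, corr_factor=1.0):
    """Scale pixel noise to an aperture of n_pix pixels; corr_factor > 1
    accounts for noise correlation introduced by map-making."""
    return sigma_pix * np.sqrt(n_pix_in_aperture) * corr_factor
```

For uncorrelated noise the aperture uncertainty grows as the square root of the number of pixels; the correlation correction mentioned in the record inflates this further.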

  16. Herbig-Haro objects and T Tauri nebulae

    International Nuclear Information System (INIS)

    Boehm, K.H.

    1975-01-01

    The empirical information about Herbig-Haro objects and T Tauri nebulae is summarized. We emphasize especially the importance of the spectroscopic and spectrophotometric data. Relative and (preliminary) absolute emission line fluxes are presented and discussed. We consider the radial velocity data and the detection of a faint blue continuum in Herbig-Haro objects as important from a theoretical point of view. The direct interpretation of the emission line spectra is simple and leads to values of the electron temperature, electron density, density inhomogeneities, filling factors, degree of ionization and chemical abundances. The relevant procedures are discussed in some detail. The possible role of the Herbig-Haro objects in the early phases of stellar evolution is discussed. (orig./BJ)

  17. Development of underwater camera using high-definition camera

    International Nuclear Information System (INIS)

    Tsuji, Kenji; Watanabe, Masato; Takashima, Masanobu; Kawamura, Shingo; Tanaka, Hiroyuki

    2012-01-01

    In order to reduce the time for core verification or visual inspection of BWR fuels, an underwater camera using a high-definition camera has been developed. As a result of this development, the underwater camera has two lights, dimensions of 370 x 400 x 328 mm, and a weight of 20.5 kg. Using the camera, about six spent-fuel IDs can be identified at a time from a distance of 1 to 1.5 m, and a 0.3 mmφ pin-hole can be recognized at a distance of 1.5 m with 20x zoom. Noise caused by radiation of less than 15 Gy/h does not affect the images. (author)

  18. Environmental Effects on Measurement Uncertainties of Time-of-Flight Cameras

    DEFF Research Database (Denmark)

    Gudmundsson, Sigurjon Arni; Aanæs, Henrik; Larsen, Rasmus

    2007-01-01

    In this paper the effect the environment has on the SwissRanger SR3000 Time-Of-Flight camera is investigated. The accuracy of this camera is highly affected by the scene it is pointed at: its reflective properties, color and gloss. The complexity of the scene also has considerable effects on the accuracy, to mention a few: the angle of the objects to the emitted light and the scattering effects of near objects. In this paper a general overview of known inaccuracy factors is given, followed by experiments illustrating the additional uncertainty factors. Specifically we give a better...

  19. Control system for several rotating mirror camera synchronization operation

    Science.gov (United States)

    Liu, Ningwen; Wu, Yunfeng; Tan, Xianxiang; Lai, Guoji

    1997-05-01

    This paper introduces a single-chip microcomputer control system for the synchronized operation of several rotating mirror high-speed cameras. The system consists of four parts: the microcomputer control unit (including the synchronization part, the precise measurement part and the time delay part), the shutter control unit, the motor driving unit and the high voltage pulse generator unit. The control system has been used to control the synchronized operation of GSI cameras (driven by a motor) and FJZ-250 rotating mirror cameras (driven by a gas-driven turbine). We have obtained films of the same object from different directions, at the same or different speeds.

  20. Identification and spectrophotometry of faint southern radio galaxies

    International Nuclear Information System (INIS)

    Spinrad, H.; Kron, R.G.; Hunstead, R.W.

    1980-01-01

    We have observed a mixed sample of southern radio sources, identified on the Palomar sky survey or on previous direct plates taken with medium-aperture reflectors. At CTIO we obtained a few deep 4 m photographs and SIT spectrophotometry for redshift and continuum-color measurement. Almost all our sources were faint galaxies; the largest redshift measured was for 3C 275, with z=0.480. The ultraviolet continuum of PKS 0400--643, a ''thermal'' galaxy with z=0.476, closely resembles that of 3C 295 and shows some color evolution in U--B compared to nearby giant ellipticals

  1. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Yu Lu

    2016-04-01

    Full Text Available A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the focusing effect on the uniform light from an integrating sphere. The linear range of the radiometric response, the non-linear response characteristics, the sensitivity, and the dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and used when blending images, so that the panoramas reflect the objective luminance more faithfully; this compensates for the limitation of stitching methods that make images look realistic only through smoothing. The dynamic range limitation of a single image sensor with a wide-angle lens can be resolved by using multiple cameras that cover a large field of view. The dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second.
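
Two of the corrections this record relies on, linearising each sensor's radiometric response and dividing out its vignetting pattern, are simple per-pixel operations once the calibration data exist. An illustrative NumPy sketch under the assumption of a lookup-table inverse response and a measured flat-field (not the paper's exact pipeline):

```python
import numpy as np

def linearise(raw, response_lut):
    """Map raw sensor counts back to relative luminance using a
    measured inverse-response lookup table (one entry per count)."""
    return response_lut[raw]

def vignetting_correction(image, flat):
    """Divide out the vignetting pattern measured from a uniformly lit
    target, normalised so the correction is unity at the flat's peak."""
    return image * (flat.max() / flat)
```

After both steps, pixel values from the seven sensors are on a common luminance scale, so overlapping regions can be blended by luminance rather than smoothed into agreement.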

  2. A Keck/DEIMOS spectroscopic survey of faint Galactic satellites: searching for the least massive dwarf galaxies

    Science.gov (United States)

    Martin, N. F.; Ibata, R. A.; Chapman, S. C.; Irwin, M.; Lewis, G. F.

    2007-09-01

    We present the results of a spectroscopic survey of the recently discovered faint Milky Way satellites Boötes, Ursa Major I, Ursa Major II and Willman 1 (Wil1). Using the DEep Imaging Multi-Object Spectrograph mounted on the Keck II telescope, we have obtained samples that contain from ~15 to ~85 probable members of these satellites for which we derive radial velocities precise to a few km s⁻¹ down to i ~ 21-22. About half of these stars are observed with a high enough signal-to-noise ratio to estimate their metallicity to within +/-0.2 dex. The characteristics of all the observed stars are made available, along with those of the Canes Venatici I dwarf galaxy that have been analysed in a companion paper. From this data set, we show that Ursa Major II is the only object that does not show a clear radial velocity peak. However, the measured systemic radial velocity (vr = 115 +/- 5 km s⁻¹) is in good agreement with simulations in which this object is the progenitor of the recently discovered Orphan Stream. The three other satellites show velocity dispersions that make them highly dark matter dominated systems (under the usual assumptions of symmetry and virial equilibrium). In particular, we show that despite its small size and faintness, the Wil1 object is not a globular cluster, given the metallicity scatter of its stars. We measure a systemic velocity of -12.3 +/- 2.3 km s⁻¹, which implies a mass-to-light ratio of ~700 and a total mass of ~5 × 10⁵ M⊙ for this satellite, making it the least massive satellite galaxy known to date. Such a low mass could mean that the 10⁷ M⊙ limit that had until now never been crossed for Milky Way and Andromeda satellite galaxies may only be an observational limit and that fainter, less massive systems exist within the Local Group. However, more modelling and an extended search for potential extratidal stars are required to rule out the possibility that these systems have been significantly heated by tidal interaction. The data presented herein

  3. Distributed Sensing and Processing for Multi-Camera Networks

    Science.gov (United States)

    Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.

    Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.

  4. Low-cost uncooled VOx infrared camera development

    Science.gov (United States)

    Li, Chuan; Han, C. J.; Skidmore, George D.; Cook, Grady; Kubala, Kenny; Bates, Robert; Temple, Dorota; Lannon, John; Hilton, Allan; Glukh, Konstantin; Hardy, Busbee

    2013-06-01

    The DRS Tamarisk® 320 camera, introduced in 2011, is a low-cost commercial camera based on the 17 µm pixel pitch 320×240 VOx microbolometer technology. A higher resolution 17 µm pixel pitch 640×480 Tamarisk®640 has also been developed and is now in production serving the commercial markets. Recently, under the DARPA-sponsored Low Cost Thermal Imager-Manufacturing (LCTI-M) program and an internal project, DRS is leading a team of industrial experts from FiveFocal, RTI International and MEMSCAP to develop a small form factor uncooled infrared camera for the military and commercial markets. The objective of the DARPA LCTI-M program is to develop a low-SWaP (size, weight, and power) camera that costs less than US $500 based on a 10,000 units per month production rate. To meet this challenge, DRS is developing several innovative technologies including a small pixel pitch 640×512 VOx uncooled detector, an advanced digital ROIC and low-power miniature camera electronics. In addition, DRS and its partners are developing innovative manufacturing processes to reduce production cycle time and costs, including wafer-scale optics and vacuum packaging manufacturing and a 3-dimensional integrated camera assembly. This paper provides an overview of the DRS Tamarisk® project and LCTI-M related uncooled technology development activities. Highlights of recent progress and challenges will also be discussed. It should be noted that BAE Systems and Raytheon Vision Systems are also participants in the DARPA LCTI-M program.

  5. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    In this article we present an automatic camera calibration algorithm using multiple trajectories in a multiple camera network with non-overlapping field-of-views (FOV). Visible trajectories within a camera FOV are assumed to be measured with respect to the camera local co-ordinate system.

  6. A Quality Evaluation of Single and Multiple Camera Calibration Approaches for an Indoor Multi Camera Tracking System

    Directory of Open Access Journals (Sweden)

    M. Adduci

    2014-06-01

    Full Text Available Human detection and tracking has been a prominent research area for several scientists around the globe. State of the art algorithms have been implemented, refined and accelerated to significantly improve the detection rate and eliminate false positives. While 2D approaches are well investigated, 3D human detection and tracking is still an unexplored research field. In both 2D/3D cases, introducing a multi camera system could vastly expand the accuracy and confidence of the tracking process. Within this work, a quality evaluation is performed on a multi RGB-D camera indoor tracking system to examine how camera calibration and pose can affect the quality of human tracks in the scene, independently of the detection and tracking approach used. After performing a calibration step on every Kinect sensor, state of the art single camera pose estimators were evaluated to check how well the poses are estimated using planar objects such as an ordinary chessboard. With this information, a bundle block adjustment and ICP were performed to verify the accuracy of the single pose estimators in a multi camera configuration system. Results have shown that single camera estimators provide high accuracy results of less than half a pixel, forcing the bundle to converge after very few iterations. In relation to ICP, relative information between cloud pairs is more or less preserved, giving a low fitting score between concatenated pairs. Finally, sensor calibration proved to be an essential step for achieving maximum accuracy in the generated point clouds, and therefore in the accuracy of the produced 3D trajectories from each sensor.

  7. Advances in top-down and bottom-up approaches to video-based camera tracking

    OpenAIRE

    Marimón Sanjuán, David

    2007-01-01

    Video-based camera tracking consists in trailing the three dimensional pose followed by a mobile camera using video as sole input. In order to estimate the pose of a camera with respect to a real scene, one or more three dimensional references are needed. Examples of such references are landmarks with known geometric shape, or objects for which a model is generated beforehand. By comparing what is seen by a camera with what is geometrically known from reality, it is possible to recover the po...

  8. Advances in top-down and bottom-up approaches to video-based camera tracking

    OpenAIRE

    Marimón Sanjuán, David; Ebrahimi, Touradj

    2008-01-01

    Video-based camera tracking consists in trailing the three dimensional pose followed by a mobile camera using video as sole input. In order to estimate the pose of a camera with respect to a real scene, one or more three dimensional references are needed. Examples of such references are landmarks with known geometric shape, or objects for which a model is generated beforehand. By comparing what is seen by a camera with what is geometrically known from reality, it is possible to recover the po...

  9. A multi-criteria approach to camera motion design for volume data animation.

    Science.gov (United States)

    Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, camera motion planning in computer graphics and virtual reality is frequently focused on collision-free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data the collision-free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.

  10. Image-scanning measurement using video dissection cameras

    International Nuclear Information System (INIS)

    Carson, J.S.

    1978-01-01

    A high-speed dimensional measuring system is described that can scan a thin-film network and determine whether there are conductor widths, resistor widths, or spaces not typical of the design for this product. The eye of the system is a conventional TV camera, although devices such as image dissector cameras or solid-state scanners may be used more often in the future. The analog signal from the TV camera is digitized for processing by the computer and is presented on the TV monitor to assist the operator in monitoring the system's operation. Movable stages are required when the field of view of the scanner is less than the size of the object. A minicomputer controls the movement of the stage and communicates with the digitizer to select the picture points that are to be processed. Communications with the system are maintained through a teletype or CRT terminal.

  11. Automated Meteor Detection by All-Sky Digital Camera Systems

    Science.gov (United States)

    Suk, Tomáš; Šimberová, Stanislava

    2017-12-01

    We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.
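
A common first stage for detecting meteor light traces in such time sequences is frame differencing against the preceding exposure with a robust threshold, so that stars and sky glow cancel and only transient bright pixels remain. A minimal NumPy sketch of that idea (illustrative only; the paper's actual detection pipeline is more elaborate):

```python
import numpy as np

def transient_mask(prev_frame, frame, k=5.0):
    """Flag pixels that brightened significantly relative to the
    previous frame. The noise scale is estimated robustly from the
    median absolute deviation of the difference image."""
    diff = frame.astype(float) - prev_frame.astype(float)
    sigma = np.median(np.abs(diff - np.median(diff))) * 1.4826 + 1e-9
    return diff > k * sigma
```

The flagged pixels from consecutive frames would then be grouped and tested for the elongated, moving geometry characteristic of a meteor trail.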

  12. Lessons Learned from Crime Caught on Camera

    DEFF Research Database (Denmark)

    Lindegaard, Marie Rosenkrantz; Bernasco, Wim

    2018-01-01

    Objectives: The widespread use of camera surveillance in public places offers criminologists the opportunity to systematically and unobtrusively observe crime, their main subject matter. The purpose of this essay is to inform the reader of current developments in research on crimes caught on camera.

  13. Sweating the small stuff: simulating dwarf galaxies, ultra-faint dwarf galaxies, and their own tiny satellites

    Science.gov (United States)

    Wheeler, Coral; Oñorbe, Jose; Bullock, James S.; Boylan-Kolchin, Michael; Elbert, Oliver D.; Garrison-Kimmel, Shea; Hopkins, Philip F.; Kereš, Dušan

    2015-10-01

    We present Feedback in Realistic Environment (FIRE)/GIZMO hydrodynamic zoom-in simulations of isolated dark matter haloes, two each at the mass of classical dwarf galaxies (Mvir ≃ 10¹⁰ M⊙) and ultra-faint galaxies (Mvir ≃ 10⁹ M⊙), and with two feedback implementations. The resulting central galaxies lie on an extrapolated abundance matching relation from M⋆ ≃ 10⁶ to 10⁴ M⊙ without a break. Every host is filled with subhaloes, many of which form stars. Each of our dwarfs with M⋆ ≃ 10⁶ M⊙ has 1-2 well-resolved satellites with M⋆ = 3-200 × 10³ M⊙. Even our isolated ultra-faint galaxies have star-forming subhaloes. If this is representative, dwarf galaxies throughout the Universe should commonly host tiny satellite galaxies of their own. We combine our results with the Exploring the Local Volume in Simulations (ELVIS) simulations to show that targeting ~50 kpc regions around nearby isolated dwarfs could increase the chances of discovering ultra-faint galaxies by ~35 per cent compared to random pointings, and specifically identify the region around the Phoenix dwarf galaxy as a good potential target. The well-resolved ultra-faint galaxies in our simulations (M⋆ ≃ 3-30 × 10³ M⊙) form within Mpeak ≃ 0.5-3 × 10⁹ M⊙ haloes. Each has a uniformly ancient stellar population (> 10 Gyr) owing to reionization-related quenching. More massive systems, in contrast, all have late-time star formation. Our results suggest that Mhalo ≃ 5 × 10⁹ M⊙ is a probable dividing line between haloes hosting reionization `fossils' and those hosting dwarfs that can continue to form stars in isolation after reionization.

  14. Three-dimensional cinematography with control object of unknown shape.

    Science.gov (United States)

    Dapena, J; Harman, E A; Miller, J A

    1982-01-01

    A technique for reconstruction of three-dimensional (3D) motion which involves a simple filming procedure but allows the deduction of coordinates in large object volumes was developed. Internal camera parameters are calculated from measurements of the film images of two calibrated crosses, while external camera parameters are calculated from the film images of points in a control object of unknown shape but with at least one known length. The control object, which encloses the volume in which the activity is to take place, is formed by a series of poles placed at unknown locations, each carrying two targets. From the internal and external camera parameters, and from the locations of the images of a point in the films of the two cameras, the 3D coordinates of the point can be calculated. Root mean square errors of the three coordinates of points in a large object volume (5 m x 5 m x 1.5 m) were 15 mm, 13 mm, 13 mm and 6 mm, and relative errors in lengths averaged 0.5%, 0.7% and 0.5%, respectively.
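    The reconstruction step, recovering a 3D point from its two film images once the camera parameters are known, can be illustrated with standard linear (DLT) triangulation. The projection matrices and image points below are toy values, not the paper's calibration-cross data.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two camera views.

    P1, P2 are 3x4 projection matrices; x1, x2 the point's image coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # null-space of A gives the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]             # de-homogenize

# Two toy cameras: one at the origin, one translated 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(np.round(triangulate(P1, P2, x1, x2), 6))  # recovers the toy point
```

For noise-free projections the linear solution is exact; with film-measurement noise it becomes a least-squares estimate, which is where the RMS errors quoted above come from.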

  15. Medium-sized aperture camera for Earth observation

    Science.gov (United States)

    Kim, Eugene D.; Choi, Young-Wan; Kang, Myung-Seok; Kim, Ee-Eul; Yang, Ho-Soon; Rasheed, Ad. Aziz Ad.; Arshad, Ahmad Sabirin

    2017-11-01

    Satrec Initiative and ATSB have been developing a medium-sized aperture camera (MAC) for an Earth observation payload on a small satellite. Developed as a push-broom type high-resolution camera, the camera has one panchromatic and four multispectral channels. The panchromatic channel has a 2.5 m and the multispectral channels a 5 m ground sampling distance at a nominal altitude of 685 km. The 300 mm-aperture Cassegrain telescope contains two aspheric mirrors and two spherical correction lenses. With a philosophy of building a simple and cost-effective camera, the mirrors incorporate no light-weighting, and the linear CCDs are mounted on a single PCB with no beam splitters. MAC is the main payload of RazakSAT to be launched in 2005. RazakSAT is a 180 kg satellite including MAC, designed to provide high-resolution imagery of 20 km swath width on a near equatorial orbit (NEqO). The mission objective is to demonstrate the capability of a high-resolution remote sensing satellite system on a near equatorial orbit. This paper describes the overview of the MAC and RazakSAT programmes, and presents the current development status of MAC, focusing on key optical aspects of the Qualification Model.
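    The relation between altitude and ground sampling distance (GSD) for a push-broom imager is a simple similar-triangles calculation. The pixel pitch and focal length below are illustrative assumptions chosen to reproduce the quoted 2.5 m panchromatic GSD; they are not RazakSAT specifications.

```python
def gsd(altitude_m, pixel_pitch_m, focal_length_m):
    """Ground sampling distance: ground footprint of one detector pixel."""
    return altitude_m * pixel_pitch_m / focal_length_m

altitude = 685e3        # nominal altitude from the abstract
pixel_pitch = 6.5e-6    # assumed CCD pixel pitch (hypothetical)
focal = 1.781           # assumed effective focal length in m (hypothetical)
print(round(gsd(altitude, pixel_pitch, focal), 3))  # → 2.5
```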

  16. Image quality testing of assembled IR camera modules

    Science.gov (United States)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR (8-12 μm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors with readout electronics are becoming more and more a mass market product. At the same time, steady improvements in sensor resolution in the higher priced markets raise the requirement for imaging performance of objectives and the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work done in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to the more traditional test methods like minimum resolvable temperature difference (MRTD) which give only a subjective overall test result. Parameters that can be measured are image quality via the modulation transfer function (MTF) for broadband or with various bandpass filters on- and off-axis and optical parameters like e.g. effective focal length (EFL) and distortion. If the camera module allows for refocusing the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, chief ray angle etc. can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high resolution sensors. Other important points that are discussed are the flexibility of the technology to test IR modules with different form factors, electrical interfaces and last but not least the suitability for fully automated measurements in mass production.
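    The MTF measurement mentioned above is, at its core, the Fourier magnitude of the system's line spread function. A minimal sketch of that idea (not the commercial tester's implementation): differentiate an edge spread function (ESF) to get the line spread function (LSF), then take its normalized Fourier magnitude. The Gaussian-blurred edge here is synthetic.

```python
import numpy as np
from math import erf

def mtf_from_edge(esf):
    """MTF estimate from a sampled edge profile (ESF -> LSF -> |FFT|)."""
    lsf = np.diff(esf)
    lsf = lsf / lsf.sum()                 # normalize LSF area to 1
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                   # MTF(0) = 1 by convention

# Synthetic edge blurred by a Gaussian PSF (sigma = 0.8 sample units).
x = np.linspace(-5, 5, 512)
esf = np.array([0.5 * (1 + erf(v / (np.sqrt(2) * 0.8))) for v in x])
mtf = mtf_from_edge(esf)
# MTF is 1 at zero frequency and falls off smoothly for a Gaussian blur.
print(round(float(mtf[0]), 3), float(mtf[5]) > float(mtf[20]))
```

Real slanted-edge testing adds sub-pixel ESF binning and noise handling, but the transform chain is the same.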

  17. Collimator trans-axial tomographic scintillation camera

    International Nuclear Information System (INIS)

    Jaszczak, Ronald J.

    1979-01-01

    An improved collimator is provided for a scintillation camera system that employs a detector head for transaxial tomographic scanning. One object of this invention is to significantly reduce the time required to obtain statistically significant data in radioisotope scanning using a scintillation camera. Another is to increase the rate of acceptance of radioactive events to contribute to the positional information obtainable from a radiation source of known strength without sacrificing spatial resolution. A further object is to reduce the necessary scanning time without degrading the images obtained. The collimator described has apertures defined by septa of different radiation transparency. The septa are aligned to provide greater radiation shielding from gamma radiation travelling within planes perpendicular to the cranial-caudal axis and less radiation shielding from gamma radiation travelling within other planes. Septa may also define apertures such that the collimator provides high spatial resolution of gamma rays travelling within planes perpendicular to the cranial-caudal axis and directed at the detector, and high radiation sensitivity to gamma radiation travelling within other planes and directed at the detector. (LL)

  18. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    Science.gov (United States)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  19. Applying image quality in cell phone cameras: lens distortion

    Science.gov (United States)

    Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje

    2009-01-01

    This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image is JPEG compressed and the cell-phone camera is set to 'auto' mode. As the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberration (LCA). The goal of this paper is to present the framework of this pilot project, starting with the definition of the individual attributes, up to their quantification in JNDs of quality, a requirement of the multivariate formalism; to this end, both objective and subjective evaluations were used. A major distinction of the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior; therefore a radial mapping/modeling cannot be used in this case.
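    For context, the classic radial (Brown) distortion model is the baseline that the paper argues is often insufficient for phone cameras. A sketch of that baseline, with illustrative coefficients (k1, k2 are not values from the study):

```python
import numpy as np

def radial_distort(xy, k1=-0.15, k2=0.02):
    """Map ideal normalized image coordinates to radially distorted ones.

    Classic two-term Brown model: x_d = x_u * (1 + k1*r^2 + k2*r^4).
    k1 < 0 gives barrel distortion (points pulled toward the center).
    """
    r2 = np.sum(xy ** 2, axis=-1, keepdims=True)
    return xy * (1 + k1 * r2 + k2 * r2 ** 2)

# The image center is unmoved; off-center points shift inward.
grid = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.9]])
print(np.round(radial_distort(grid), 4))
```

A purely radial model like this depends only on the distance from the optical center, which is exactly the assumption the paper says breaks down for cell-phone optics.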

  20. Finding Objects for Assisting Blind People.

    Science.gov (United States)

    Yi, Chucai; Flores, Roberto W; Chincha, Ricardo; Tian, Yingli

    2013-07-01

    Computer vision technology has been widely used for blind assistance, such as navigation and wayfinding. However, few camera-based systems are developed for helping blind or visually-impaired people to find daily necessities. In this paper, we propose a prototype system of blind-assistant object finding by camera-based network and matching-based recognition. We collect a dataset of daily necessities and apply Speeded-Up Robust Features (SURF) and Scale Invariant Feature Transform (SIFT) feature descriptors to perform object recognition. Experimental results demonstrate the effectiveness of our prototype system.
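    The matching-based recognition step boils down to nearest-neighbour descriptor matching with a ratio test. A self-contained sketch of that stage only; the descriptors here are random stand-ins, not real SIFT/SURF output from the paper's dataset.

```python
import numpy as np

def match_descriptors(query, database, ratio=0.75):
    """Nearest-neighbour matching with Lowe's ratio test.

    A query descriptor is matched only when its best database match is
    clearly closer than the second best, which suppresses ambiguous matches.
    """
    matches = []
    for i, d in enumerate(query):
        dists = np.linalg.norm(database - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, int(j)))
    return matches

rng = np.random.default_rng(0)
db = rng.normal(size=(50, 128))                              # 128-d like SIFT
query = db[[3, 17]] + rng.normal(scale=0.01, size=(2, 128))  # noisy copies
print(match_descriptors(query, db))  # → [(0, 3), (1, 17)]
```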

  1. The Faint End of the Quasar Luminosity Function at z ~ 4: Implications for Ionization of the Intergalactic Medium and Cosmic Downsizing

    Science.gov (United States)

    Glikman, Eilat; Djorgovski, S. G.; Stern, Daniel; Dey, Arjun; Jannuzi, Buell T.; Lee, Kyoung-Soo

    2011-02-01

    We present an updated determination of the z ~ 4 QSO luminosity function (QLF), improving the quality of the determination of the faint end of the QLF presented by Glikman et al. (2010). We have observed an additional 43 candidates from our survey sample, yielding one additional QSO at z = 4.23 and increasing the completeness of our spectroscopic follow-up to 48% for candidates brighter than R = 24 over our survey area of 3.76 deg². We study the effect of using K-corrections to compute the rest-frame absolute magnitude at 1450 Å compared with measuring M1450 directly from the object spectra. We find a luminosity-dependent bias: template-based K-corrections overestimate the luminosity of low-luminosity QSOs, likely due to their reliance on templates derived from higher luminosity QSOs. Combining our sample with bright quasars from the Sloan Digital Sky Survey and using spectrum-based M1450 for all the quasars, we fit a double power law to the binned QLF. Our best fit has a bright-end slope, α = 3.3 ± 0.2, and faint-end slope, β = 1.6 (+0.8, −0.6). Our new data revise the faint-end slope of the QLF down to flatter values similar to those measured at z ~ 3. The break luminosity, though poorly constrained, is at M* = −24.1 (+0.7, −1.9), approximately 1-1.5 mag fainter than at z ~ 3. This QLF implies that QSOs account for about half the radiation needed to ionize the intergalactic medium at these redshifts. The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
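    The fitted form can be evaluated directly from the quoted slopes. The normalization Φ* below is illustrative (the abstract does not quote one), and the slope convention, steep bright-end decline governed by α and a flat faint end governed by β, is one common way of writing the double power law in magnitudes.

```python
def qlf(M, M_star=-24.1, alpha=3.3, beta=1.6, phi_star=1e-7):
    """Double power-law QSO luminosity function at absolute magnitude M(1450).

    Returns a space density (per mag per Mpc^3, up to the assumed phi_star).
    Bright of the break (M < M_star) the alpha term dominates; faint of it,
    the flatter beta term takes over.
    """
    dm = M - M_star
    return phi_star / (10 ** (-0.4 * (alpha - 1) * dm) +
                       10 ** (-0.4 * (beta - 1) * dm))

# Luminous QSOs are rare; the flat faint end adds numbers only slowly.
for M in (-27.0, -24.1, -21.0):
    print(M, f"{qlf(M):.2e}")
```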

  2. Very low-excitation Herbig-Haro objects

    International Nuclear Information System (INIS)

    Boehm, K.H.; Brugel, E.W.; Mannery, E.

    1980-01-01

    Spectrophotometric observations show that H-H 7 and H-H 11 belong to a class of very low-excitation Herbig-Haro objects of which H-H 47 has been the only known example. Typical properties include the line flux ratios [N I] (λ5198+λ5200)/Hβ and [S II] λ6717/Hα, which are both considerably larger than 1, very strong [O I] and [C I] lines, as well as relatively faint [O II] lines. So far no shock-wave models are available for these low-excitation objects. H-H 7 and H-H 11 have electron densities which are lower by about one order of magnitude, and electron temperatures which are slightly lower, than those for high-excitation objects like H-H 1 and H-H 2. H-H 11 has a filling factor of about 1, much higher than other H-H objects.

  3. Blood pulsation measurement using cameras operating in visible light: limitations.

    Science.gov (United States)

    Koprowski, Robert

    2016-10-03

    The paper presents an automatic method for the analysis and processing of images from a camera operating in visible light. The analysis applies to images containing the human facial area (body) and enables measurement of the blood pulse rate. Special attention was paid to the limitations of this measurement method, taking into account the possibility of using consumer cameras in real conditions (different types of lighting, different camera resolutions, camera movement). The proposed new method of image analysis and processing comprises three stages: (1) image pre-processing, allowing for image filtration and stabilization (object location tracking); (2) main image processing, allowing for segmentation of human skin areas and acquisition of brightness changes; (3) signal analysis: filtration, FFT (Fast Fourier Transform) analysis, and pulse calculation. The presented algorithm and method for measuring the pulse rate have the following advantages: (1) they allow for non-contact and non-invasive measurement; (2) the measurement can be carried out using almost any camera, including webcams; (3) the object can be tracked in the scene, which allows for measurement of the heart rate when the patient is moving; (4) for a minimum of 40,000 pixels, the method provides a measurement error of less than ±2 beats per minute for p lighting; (5) analysis of a single image takes about 40 ms in Matlab Version 7.11.0.584 (R2010b) with Image Processing Toolbox Version 7.1 (R2010b).
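    Stage (3) of the pipeline can be sketched in a few lines: extract beats per minute from a mean skin-brightness trace via FFT. The signal below is synthetic (a 72 bpm oscillation plus noise); the skin segmentation and stabilization of stages (1)-(2) are omitted, and the frame rate and band limits are illustrative choices, not the paper's parameters.

```python
import numpy as np

fs = 30.0                                   # assumed webcam frame rate, Hz
t = np.arange(0, 20, 1 / fs)                # a 20-second clip
signal = 0.02 * np.sin(2 * np.pi * 1.2 * t)                    # 1.2 Hz = 72 bpm
signal += 0.005 * np.random.default_rng(1).normal(size=t.size)  # sensor noise

# Dominant frequency inside a plausible human pulse band gives the heart rate.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 0.7) & (freqs < 4.0)        # ~42-240 bpm
bpm = 60 * freqs[band][np.argmax(spectrum[band])]
print(round(bpm, 1))  # → 72.0
```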

  4. Mobile phone camera benchmarking: combination of camera speed and image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. For example, ISO 15781 defines several measurements to evaluate various camera system delays. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. Firstly, the most important image quality metrics are collected from standards and papers. Secondly, the speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are also identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are done through the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are drawn. The result of this work gives detailed benchmarking results for mobile phone camera systems on the market. The paper also defines a proposal for combined benchmarking metrics, which include both quality and speed parameters.

  5. Minimum Delay Moving Object Detection

    KAUST Repository

    Lao, Dong

    2017-01-01

    This thesis presents a general framework and method for detection of an object in a video based on apparent motion. The object moves, at some unknown time, differently than the “background” motion, which can be induced from camera motion. The goal

  6. Unified framework for recognition, localization and mapping using wearable cameras.

    Science.gov (United States)

    Vázquez-Martín, Ricardo; Bandera, Antonio

    2012-08-01

    Monocular approaches to simultaneous localization and mapping (SLAM) have recently addressed with success the challenging problem of the fast computation of dense reconstructions from a single, moving camera. While these approaches initially relied on the detection of a reduced set of interest points to estimate the camera position and the map, they are currently able to reconstruct dense maps from a handheld camera while the camera coordinates are simultaneously computed. However, these maps of 3-dimensional points usually remain meaningless, that is, with no memorable items and without providing a way of encoding spatial relationships between objects and paths. In humans and in mobile robotics, landmarks play a key role in the internalization of a spatial representation of an environment. They are memorable cues that can serve to define a region of space or the location of other objects. In a topological representation of space, landmarks can be identified and located according to their structural, perceptual or semantic significance and distinctiveness. On the other hand, landmarks may be difficult to locate in a metric representation of space. Restricted to the domain of visual landmarks, this work describes an approach where the map resulting from a point-based, monocular SLAM is annotated with the semantic information provided by a set of distinguished landmarks. Both features are obtained from the image. Hence, they can be linked by associating with each landmark all those point-based features that are superimposed on the landmark in a given image (key-frame). Visual landmarks are obtained by means of an object-based, bottom-up attention mechanism, which extracts from the image a set of proto-objects. These proto-objects cannot always be associated with natural objects, but they will typically constitute significant parts of these scene objects and can be appropriately annotated with semantic information. Moreover, they will be

  7. CHEMICAL DIVERSITY IN THE ULTRA-FAINT DWARF GALAXY TUCANA II

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Alexander P.; Frebel, Anna; Ezzeddine, Rana [Department of Physics and Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Casey, Andrew R., E-mail: alexji@mit.edu [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge, CB3 0HA (United Kingdom)

    2016-11-20

    We present the first detailed chemical abundance study of the ultra-faint dwarf galaxy Tucana II, based on high-resolution Magellan/MIKE spectra of four red giant stars. The metallicities of these stars range from [Fe/H] = −3.2 to −2.6, and all stars are low in neutron-capture abundances ([Sr/Fe] and [Ba/Fe] < −1). However, a number of anomalous chemical signatures are present. One star is relatively metal-rich ([Fe/H] = −2.6) and shows [Na, α , Sc/Fe] < 0, suggesting an extended star formation history with contributions from AGB stars and SNe Ia. Two stars with [Fe/H] < −3 are mildly carbon-enhanced ([C/Fe] ∼ 0.7) and may be consistent with enrichment by faint supernovae, if such supernovae can produce neutron-capture elements. A fourth star with [Fe/H] = −3 is carbon-normal, and exhibits distinct light element abundance ratios from the carbon-enhanced stars. This carbon-normal star implies that at least two distinct nucleosynthesis sources, both possibly associated with Population III stars, contributed to the early chemical enrichment of this galaxy. Despite its very low luminosity, Tucana II shows a diversity of chemical signatures that preclude it from being a simple “one-shot” first galaxy yet still provide a window into star and galaxy formation in the early universe.

  8. CHEMICAL DIVERSITY IN THE ULTRA-FAINT DWARF GALAXY TUCANA II

    International Nuclear Information System (INIS)

    Ji, Alexander P.; Frebel, Anna; Ezzeddine, Rana; Casey, Andrew R.

    2016-01-01

    We present the first detailed chemical abundance study of the ultra-faint dwarf galaxy Tucana II, based on high-resolution Magellan/MIKE spectra of four red giant stars. The metallicities of these stars range from [Fe/H] = −3.2 to −2.6, and all stars are low in neutron-capture abundances ([Sr/Fe] and [Ba/Fe] < −1). However, a number of anomalous chemical signatures are present. One star is relatively metal-rich ([Fe/H] = −2.6) and shows [Na, α , Sc/Fe] < 0, suggesting an extended star formation history with contributions from AGB stars and SNe Ia. Two stars with [Fe/H] < −3 are mildly carbon-enhanced ([C/Fe] ∼ 0.7) and may be consistent with enrichment by faint supernovae, if such supernovae can produce neutron-capture elements. A fourth star with [Fe/H] = −3 is carbon-normal, and exhibits distinct light element abundance ratios from the carbon-enhanced stars. This carbon-normal star implies that at least two distinct nucleosynthesis sources, both possibly associated with Population III stars, contributed to the early chemical enrichment of this galaxy. Despite its very low luminosity, Tucana II shows a diversity of chemical signatures that preclude it from being a simple “one-shot” first galaxy yet still provide a window into star and galaxy formation in the early universe.

  9. The significance of faint visualization of the superior sagittal sinus in brain scintigraphy for the diagnosis of brain death

    International Nuclear Information System (INIS)

    Bisset, R.; Sfakianakis, G.; Ihmedian, I.; Holzman, B.; Curless, R.; Serafini, A.

    1985-01-01

    Brain death is associated with cessation of blood flow to the brain. Tc-99m brain flow studies are used as a laboratory confirmatory test for the establishment of the diagnosis of brain death. Criteria for the diagnosis of cessation of blood flow to the brain are 1) visualization of carotid artery activity in the neck of the patient and 2) no visualization of activity in the distribution of the anterior and middle cerebral arteries. The authors noticed that in a significant number of patients, although there was no visualization of arterial blood flow to the brain, the static images demonstrated faint accumulation of activity in the region of the superior sagittal sinus (SSS). In a four year period 212 brain flow studies were performed in 154 patients for diagnosis of brain death; of them, 137 studies (65%) showed no evidence of arterial flow. In 103 out of the 137 studies (75%) there was no visualization of the SSS; in the remaining 34 studies (31 patients), however, three patterns of faint activity attributed to partial and/or faint visualization of the SSS could be recognized at the midline of the immediate anterior static view: a) linear, from the cranial vault floor up; b) disk shaped, at the apex of the vault; and c) disk shaped, at the apex tailing caudad. All of the 31 patients in this group satisfied brain death criteria within four days of the last study which showed faint visualization of the superior sagittal sinus. The authors conclude that even in the presence of a faint visualization of the superior sagittal sinus on static post brain flow scintigraphy, the diagnosis of cessation of blood flow to the brain can be made if there is no evidence of arterial blood flow.

  10. Evaluation of the geometric stability and the accuracy potential of digital cameras — Comparing mechanical stabilisation versus parameterisation

    Science.gov (United States)

    Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia

    Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accord to a German guideline for evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes which was considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens where the focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introduction of an image variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). Extending the parameter model with Fi

  11. 3D camera assisted fully automated calibration of scanning laser Doppler vibrometers

    International Nuclear Information System (INIS)

    Sels, Seppe; Ribbens, Bart; Mertens, Luc; Vanlanduit, Steve

    2016-01-01

    Scanning laser Doppler vibrometers (LDVs) are used to measure full-field vibration shapes of products and structures. In most commercially available scanning laser Doppler vibrometer systems the user manually draws a grid of measurement locations on a 2D camera image of the product. The determination of the correct physical measurement locations can be a time-consuming and difficult task. In this paper we present a new methodology for product testing and quality control that integrates 3D imaging techniques with vibration measurements. This procedure allows prototypes to be tested in a shorter period because the physical measurement locations are located automatically. The proposed methodology uses a 3D time-of-flight camera to measure the location and orientation of the test object. The 3D image of the time-of-flight camera is then matched with the 3D CAD model of the object, in which measurement locations are pre-defined. A time-of-flight camera operates strictly in the near infrared spectrum. To improve the signal-to-noise ratio of the time-of-flight measurement, a time-of-flight camera uses a band filter. As a result of this filter, the laser spot of most laser vibrometers is invisible in the time-of-flight image. Therefore a 2D RGB camera is used to find the laser spot of the vibrometer. The laser spot is matched to the 3D image obtained by the time-of-flight camera. Next, an automatic calibration procedure is used to aim the laser at the (pre)defined locations. Another benefit of this methodology is that it incorporates automatic mapping between a CAD model and the vibration measurements. This mapping can be used to visualize measurements directly on a 3D CAD model. Secondly, the orientation of the CAD model is known with respect to the laser beam. This information can be used to find the direction of the measured vibration relative to the surface of the object. With this direction, the vibration measurements can be compared more precisely with numerical

  12. 3D camera assisted fully automated calibration of scanning laser Doppler vibrometers

    Energy Technology Data Exchange (ETDEWEB)

    Sels, Seppe, E-mail: Seppe.Sels@uantwerpen.be; Ribbens, Bart; Mertens, Luc; Vanlanduit, Steve [Op3Mech Research Group, University of Antwerp, Salesianenlaan 90, 2660 Antwerp (Belgium)

    2016-06-28

    Scanning laser Doppler vibrometers (LDVs) are used to measure full-field vibration shapes of products and structures. In most commercially available scanning laser Doppler vibrometer systems the user manually draws a grid of measurement locations on a 2D camera image of the product. The determination of the correct physical measurement locations can be a time-consuming and difficult task. In this paper we present a new methodology for product testing and quality control that integrates 3D imaging techniques with vibration measurements. This procedure allows prototypes to be tested in a shorter period because the physical measurement locations are located automatically. The proposed methodology uses a 3D time-of-flight camera to measure the location and orientation of the test object. The 3D image of the time-of-flight camera is then matched with the 3D CAD model of the object, in which measurement locations are pre-defined. A time-of-flight camera operates strictly in the near infrared spectrum. To improve the signal-to-noise ratio of the time-of-flight measurement, a time-of-flight camera uses a band filter. As a result of this filter, the laser spot of most laser vibrometers is invisible in the time-of-flight image. Therefore a 2D RGB camera is used to find the laser spot of the vibrometer. The laser spot is matched to the 3D image obtained by the time-of-flight camera. Next, an automatic calibration procedure is used to aim the laser at the (pre)defined locations. Another benefit of this methodology is that it incorporates automatic mapping between a CAD model and the vibration measurements. This mapping can be used to visualize measurements directly on a 3D CAD model. Secondly, the orientation of the CAD model is known with respect to the laser beam. This information can be used to find the direction of the measured vibration relative to the surface of the object. With this direction, the vibration measurements can be compared more precisely with numerical
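    The laser-spot search in the 2D RGB image can be illustrated with a toy detector: take the centroid of the pixels where the red channel most dominates the others. The threshold and synthetic image are illustrative, not the paper's actual detection scheme.

```python
import numpy as np

def find_laser_spot(rgb):
    """Return (row, col) centroid of the most red-dominant pixels."""
    r = rgb[..., 0].astype(float)
    dominance = r - rgb[..., 1:].max(axis=-1)      # red above green/blue
    mask = dominance > 0.5 * dominance.max()       # keep the strongest pixels
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())

# Synthetic frame: dim grey background with a bright red 3x3 "laser spot".
img = np.full((100, 100, 3), 40, dtype=np.uint8)
img[57:60, 72:75, 0] = 255
print(find_laser_spot(img))  # → (58.0, 73.0)
```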

  13. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    Science.gov (United States)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

    The authors have been concentrating on developing convenient 3D measurement methods using consumer grade digital cameras, and it was concluded that consumer grade digital cameras are expected to become a useful photogrammetric device for various close range application fields. On the other hand, mobile phone cameras with 10 megapixels have appeared on the market in Japan. In these circumstances, we face the question of whether mobile phone cameras are able to take the place of consumer grade digital cameras in close range photogrammetric applications. In order to evaluate the potential of mobile phone cameras in close range photogrammetry, a comparative evaluation between mobile phone cameras and consumer grade digital cameras is carried out in this paper with respect to lens distortion, reliability, stability and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer grade digital cameras were conducted indoors using a test target. Furthermore, the practicability of mobile phone cameras for close range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras have the ability to take the place of consumer grade digital cameras and to develop the market in digital photogrammetric fields.

  14. Remote hardware-reconfigurable robotic camera

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA (Field Programmable Gate Array) device that contains an architecture for real-time low-level computer vision processing. The architecture can be reprogrammed remotely for application-specific purposes. The system is intended for rapid modification and adaptation to inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration makes a hardware upgrade as easy as a software upgrade. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the ability to download a software/hardware object from the host computer into its internal context memory. The system's advantages are its small size, low power consumption, and a library of hardware/software functionalities that can be exchanged at run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. The applications targeted are in robotics, mobile robotics, and vision-based quality control.

  15. A multi-camera system for real-time pose estimation

    Science.gov (United States)

    Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin

    2007-04-01

    This paper presents a multi-camera system that performs face detection and pose estimation in real-time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates about the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.
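The foreshortening idea behind such spherical-head pose models can be illustrated with a deliberately simplified sketch: under weak perspective, a horizontal facial distance shrinks as cos(yaw) when the head rotates about the vertical axis. This is not the paper's full eyes-mouth-triangle derivation; the function name and the pixel values are hypothetical:

```python
import math

def estimate_yaw_deg(eye_dist_px, frontal_eye_dist_px):
    """Weak-perspective foreshortening: the projected inter-ocular
    distance shrinks as cos(yaw) under rotation about the vertical
    axis, so yaw = arccos(d / d_frontal).  The sign (left vs. right
    turn) is ambiguous in this simplified model."""
    ratio = min(eye_dist_px / frontal_eye_dist_px, 1.0)  # clamp noise
    return math.degrees(math.acos(ratio))

# A 60 px inter-ocular distance against a 120 px frontal baseline
yaw = estimate_yaw_deg(60.0, 120.0)   # 60 degrees of yaw
```

The paper's use of the eyes-mouth triangle additionally disambiguates yaw from scale changes, which this one-distance sketch cannot do.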

  16. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column one above the other through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head.

  17. Prototypic Development and Evaluation of a Medium Format Metric Camera

    Science.gov (United States)

    Hastedt, H.; Rofallski, R.; Luhmann, T.; Rosenbauer, R.; Ochsner, D.; Rieke-Zapp, D.

    2018-05-01

    Engineering applications require high-precision 3D measurement techniques for object sizes that vary between small volumes (2-3 m in each direction) and large volumes (around 20 x 20 x 1-10 m). The requested precision in object space (1σ RMS) is defined to be within 0.1-0.2 mm for large volumes and less than 0.01 mm for small volumes. In particular, focussing on large volume applications, the availability of a metric camera offers advantages for several reasons: 1) high-quality optical components and stabilisations allow for a stable interior geometry of the camera itself, 2) a stable geometry leads to a stable interior orientation that enables a priori camera calibration, 3) a higher resulting precision can be expected. With this article the development and accuracy evaluation of a new metric camera, the ALPA 12 FPS add|metric, is presented. Its general accuracy potential is tested against calibrated lengths in a small volume test environment based on the German guideline VDI/VDE 2634.1 (2002). Maximum length measurement errors of less than 0.025 mm are achieved across the different scenarios tested. The accuracy potential for large volumes is estimated within a feasibility study on the application of photogrammetric measurements for deformation estimation on a large wooden shipwreck in the German Maritime Museum. An accuracy of 0.2-0.4 mm is reached over a length of 28 m (given by a distance from a laser-tracker network measurement). All analyses have proven high stability of the interior orientation of the camera and indicate the applicability of a priori camera calibration for subsequent 3D measurements.

  18. PROTOTYPIC DEVELOPMENT AND EVALUATION OF A MEDIUM FORMAT METRIC CAMERA

    Directory of Open Access Journals (Sweden)

    H. Hastedt

    2018-05-01

    Full Text Available Engineering applications require high-precision 3D measurement techniques for object sizes that vary between small volumes (2–3 m in each direction) and large volumes (around 20 x 20 x 1–10 m). The requested precision in object space (1σ RMS) is defined to be within 0.1–0.2 mm for large volumes and less than 0.01 mm for small volumes. In particular, focussing on large volume applications, the availability of a metric camera offers advantages for several reasons: 1) high-quality optical components and stabilisations allow for a stable interior geometry of the camera itself, 2) a stable geometry leads to a stable interior orientation that enables a priori camera calibration, 3) a higher resulting precision can be expected. With this article the development and accuracy evaluation of a new metric camera, the ALPA 12 FPS add|metric, is presented. Its general accuracy potential is tested against calibrated lengths in a small volume test environment based on the German guideline VDI/VDE 2634.1 (2002). Maximum length measurement errors of less than 0.025 mm are achieved across the different scenarios tested. The accuracy potential for large volumes is estimated within a feasibility study on the application of photogrammetric measurements for deformation estimation on a large wooden shipwreck in the German Maritime Museum. An accuracy of 0.2–0.4 mm is reached over a length of 28 m (given by a distance from a laser-tracker network measurement). All analyses have proven high stability of the interior orientation of the camera and indicate the applicability of a priori camera calibration for subsequent 3D measurements.

  19. Thermoplastic film camera for holographic recording

    International Nuclear Information System (INIS)

    Liegeois, C.; Meyrueis, P.

    1982-01-01

    The design of a thermoplastic-film recording camera and its performance for holography of extended objects are reported. A special corona geometry and accurate control of the development heat, by constant-current heating and high-resolution measurement of the development temperature, make easy recording of reproducible, large-aperture holograms possible. The experimental results give the transfer characteristics, the diffraction efficiency characteristics and the spatial frequency response. (orig.)

  20. Enhancing swimming pool safety by the use of range-imaging cameras

    Science.gov (United States)

    Geerardyn, D.; Boulanger, S.; Kuijk, M.

    2015-05-01

    Drowning is the cause of death of 372,000 people each year worldwide, according to the November 2014 report of the World Health Organization.1 Currently, most swimming pools rely only on lifeguards to detect drowning people. In some modern swimming pools, camera-based detection systems are nowadays being integrated. However, these systems have to be mounted underwater, mostly as a replacement of the underwater lighting. In contrast, we are interested in range-imaging cameras mounted on the ceiling of the swimming pool, allowing swimmers at the surface to be distinguished from drowning people underwater, while keeping a large field of view and minimizing occlusions. However, we have to take into account that the water surface of a swimming pool is not flat but mostly rippled, and that the water is transparent for visible light but less transparent for infrared or ultraviolet light. We investigated the use of different types of 3D cameras to detect objects underwater at different depths and with different amplitudes of surface perturbations. Specifically, we performed measurements with a commercial Time-of-Flight camera, a commercial structured-light depth camera and our own Time-of-Flight system. Our own system uses pulsed Time-of-Flight and emits light at 785 nm. The measured distances between the camera and the object are influenced by the perturbations on the water surface. Due to the timing of our Time-of-Flight camera, our system is theoretically able to minimize the influence of reflections from a partially reflecting surface. The combination of a post-acquisition filter compensating for the perturbations and the use of a light source with shorter wavelengths to enlarge the depth range can improve on the current commercial cameras. As a result, we conclude that low-cost range imagers can increase swimming pool safety by inserting a post-processing filter and using another light source.
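The basic relation behind the pulsed Time-of-Flight measurement described above is distance = (c/n)·t/2 for a round-trip time t in a medium of refractive index n, which also hints at why underwater targets read differently. A minimal sketch; the constants and function name are ours, not the paper's:

```python
C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_WATER = 1.33             # approximate refractive index of water

def tof_distance(round_trip_s, n_medium=1.0):
    """Pulsed time-of-flight: the pulse travels to the target and back,
    so distance = (c / n) * t / 2 in a medium of refractive index n."""
    return (C_VACUUM / n_medium) * round_trip_s / 2.0

d_air = tof_distance(10e-9)             # a 10 ns round trip in air ≈ 1.5 m
d_water = tof_distance(10e-9, N_WATER)  # same timing underwater reads shorter
```

A real system must additionally handle the partial reflection at the rippled surface, which this sketch ignores.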

  1. ALGORITHM OF PLACEMENT OF VIDEO SURVEILLANCE CAMERAS AND ITS SOFTWARE IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

    Full Text Available Comprehensive distributed safety, control, and monitoring systems applied by companies and organizations of different ownership structure play a substantial role in the present-day society. Video surveillance elements that ensure image processing and decision making in automated or automatic modes are the essential components of new systems. This paper covers the modeling of video surveillance systems installed in buildings, and the algorithm, or pattern, of video camera placement with due account for nearly all characteristics of buildings, detection and recognition facilities, and cameras themselves. This algorithm will be subsequently implemented as a user application. The project contemplates a comprehensive approach to the automatic placement of cameras that take account of their mutual positioning and compatibility of tasks. The project objective is to develop the principal elements of the algorithm of recognition of a moving object to be detected by several cameras. The image obtained by different cameras will be processed. Parameters of motion are to be identified to develop a table of possible options of routes. The implementation of the recognition algorithm represents an independent research project to be covered by a different article. This project consists in the assessment of the degree of complexity of an algorithm of camera placement designated for identification of cases of inaccurate algorithm implementation, as well as in the formulation of supplementary requirements and input data by means of intercrossing sectors covered by neighbouring cameras. The project also contemplates identification of potential problems in the course of development of a physical security and monitoring system at the stage of the project design development and testing. The camera placement algorithm has been implemented as a software application that has already been pilot tested on buildings and inside premises that have irregular dimensions. The

  2. Photometric Calibration of Consumer Video Cameras

    Science.gov (United States)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure the brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate the relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to
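As a hedged illustration of the standard photometric step that underlies such calibrations (not the authors' full end-to-end procedure), a zero point ZP can be fit so that catalog magnitudes match −2.5·log10(flux) + ZP for a set of reference sources:

```python
import numpy as np

def fit_zero_point(flux, catalog_mag):
    """Least-squares photometric zero point ZP such that
    catalog_mag ≈ -2.5 * log10(flux) + ZP."""
    inst_mag = -2.5 * np.log10(np.asarray(flux, dtype=float))
    return float(np.mean(np.asarray(catalog_mag, dtype=float) - inst_mag))

# Synthetic reference stars generated with a known ZP of 20.0
flux = np.array([1000.0, 2500.0, 400.0])
mags = -2.5 * np.log10(flux) + 20.0
zp = fit_zero_point(flux, mags)   # recovers 20.0
```

The article's method goes well beyond this: it also characterizes the nonlinear response and the behavior past saturation, which a single zero point cannot capture.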

  3. Cosmic Infrared Background Fluctuations in Deep Spitzer Infrared Array Camera Images: Data Processing and Analysis

    Science.gov (United States)

    Arendt, Richard; Kashlinsky, A.; Moseley, S.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuation of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (≳30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ∼1-5 μm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (≳1 nW m⁻² sr⁻¹ at 3-5 μm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies. Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs

  4. COSMIC INFRARED BACKGROUND FLUCTUATIONS IN DEEP SPITZER INFRARED ARRAY CAMERA IMAGES: DATA PROCESSING AND ANALYSIS

    International Nuclear Information System (INIS)

    Arendt, Richard G.; Kashlinsky, A.; Moseley, S. H.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuation of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (≳30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ∼1-5 μm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (≳1 nW m⁻² sr⁻¹ at 3-5 μm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies. Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs inhabited by the populations producing these
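The fluctuation power spectra discussed in this record are radially averaged 2D FFT spectra of the cleaned maps. A minimal sketch of that computation; the binning choices and names are ours, not the authors':

```python
import numpy as np

def radial_power_spectrum(image, nbins=16):
    """2-D FFT power spectrum, averaged in annuli of spatial frequency
    to give a 1-D P(q), as used in CIB fluctuation analyses."""
    img = image - image.mean()                  # remove the mean (DC term)
    p2d = np.abs(np.fft.fft2(img))**2 / img.size
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    r = np.hypot(fy, fx)                        # radial frequency grid
    bins = np.linspace(0.0, r.max(), nbins + 1)
    idx = np.digitize(r.ravel(), bins) - 1
    power = np.array([p2d.ravel()[idx == i].mean() if np.any(idx == i)
                      else 0.0 for i in range(nbins)])
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, power

# White-noise test map: the spectrum should be roughly flat
rng = np.random.default_rng(1)
q, p = radial_power_spectrum(rng.normal(size=(64, 64)))
```

The real analysis additionally masks bright sources and separates instrument-noise, shot-noise, and power-law components, none of which this sketch attempts.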

  5. The contribution to the modal analysis using an infrared camera

    Directory of Open Access Journals (Sweden)

    Dekys Vladimír

    2018-01-01

    Full Text Available The paper deals with modal analysis using an infrared camera. The test objects were excited by a modal exciter with narrowband noise and the response was registered as a frame sequence by the high-speed infrared camera FLIR SC7500. The resonant frequencies and the mode shapes were determined from the infrared recordings. Lock-in technology has also been used. The experimental results were compared with calculated natural frequencies and mode shapes.

  6. Influence of Digital Camera Errors on the Photogrammetric Image Processing

    Science.gov (United States)

    Sužiedelytė-Visockienė, Jūratė; Bručas, Domantas

    2009-01-01

    The paper deals with the calibration of the Canon EOS 350D digital camera, often used for photogrammetric 3D digitalisation and measurement of industrial and construction site objects. During the calibration, data on the optical and electronic parameters influencing image distortion, such as the correction of the principal point, the focal length of the objective, and radial symmetrical and non-symmetrical distortions, were obtained. The calibration was performed by means of the Tcc software implementing Chebyshev polynomials and using a special test field with marks whose coordinates are precisely known. The main task of the research was to determine how the camera calibration parameters influence the processing of images, i.e. the creation of the geometric model, the results of triangulation calculations and stereo-digitalisation. Two photogrammetric projects were created for this task. In the first project non-corrected images were used; in the second, images corrected for the optical errors of the camera obtained during the calibration. The results of the image-processing analysis are shown in the images and tables. The conclusions are given.
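The radial symmetrical distortion mentioned in this record is commonly modeled with the Brown polynomial, in which a point is displaced radially about the principal point by a factor (1 + k1·r² + k2·r⁴). A small sketch of that common model, assumed here for illustration (the Tcc software itself uses Chebyshev polynomials, and the coefficient values below are invented):

```python
def apply_radial_distortion(x, y, k1, k2, cx=0.0, cy=0.0):
    """Brown model: displace a point radially about the principal
    point (cx, cy) by the factor 1 + k1*r^2 + k2*r^4."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + dx * f, cy + dy * f

# A point 100 px right of the principal point, mild barrel coefficients
xd, yd = apply_radial_distortion(100.0, 50.0, k1=1e-7, k2=0.0)
```

Note this maps undistorted to distorted coordinates; correcting a measured image point requires inverting the polynomial, usually by a few fixed-point iterations.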

  7. Measuring frequency of one-dimensional vibration with video camera using electronic rolling shutter

    Science.gov (United States)

    Zhao, Yipeng; Liu, Jinyue; Guo, Shijie; Li, Tiejun

    2018-04-01

    Cameras offer a unique capability of collecting high-density spatial data from a distant scene of interest. They can be employed as remote monitoring or inspection sensors to measure vibrating objects because of their commonplace availability, simplicity, and potentially low cost. A drawback of vibration measurement with a camera is the massive amount of data the camera generates. In order to reduce the data collected from the camera, a camera using an electronic rolling shutter (ERS) is applied to measure the frequency of a one-dimensional vibration whose frequency is much higher than the frame rate of the camera. Every row in an image captured by the ERS camera records the vibrating displacement at a different time. The displacements that form the vibration can be extracted by local analysis with sliding windows. This methodology is demonstrated on vibrating structures, a cantilever beam and an air compressor, to verify the validity of the proposed algorithm. Suggestions for applications of this methodology and challenges in real-world implementation are given at last.
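The row-wise sampling idea can be sketched as follows: each row of an ERS frame is exposed one row-time later than the previous one, so the per-row displacement of a vibrating edge forms a time series sampled at 1/row_time, far faster than the frame rate. This toy example (the line time and signal are our own assumptions, not the paper's data) recovers a 200 Hz tone from a single synthetic frame:

```python
import numpy as np

def frequency_from_rows(row_displacements, row_time_s):
    """Treat per-row displacements of an ERS frame as a time series
    sampled at 1/row_time_s and return the dominant frequency."""
    x = np.asarray(row_displacements, dtype=float)
    x = x - x.mean()                       # drop the DC component
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=row_time_s)
    return freqs[np.argmax(spec[1:]) + 1]  # skip the zero-frequency bin

t_row = 20e-6                              # assumed 20 µs line time
rows = np.arange(1000)
disp = np.sin(2 * np.pi * 200.0 * rows * t_row)  # synthetic 200 Hz vibration
f_est = frequency_from_rows(disp, t_row)         # ≈ 200 Hz
```

Extracting the per-row displacement from real images is the harder part, which the paper addresses with sliding-window local analysis; this sketch assumes that step is done.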

  8. Use of camera drive in stereoscopic display of learning contents of introductory physics

    Science.gov (United States)

    Matsuura, Shu

    2011-03-01

    Simple 3D physics simulations with stereoscopic display were created as part of introductory physics e-Learning. First, the cameras viewing the 3D world were made controllable by the user, which enabled observation of the system and the motions of objects from any position in the 3D world. Second, cameras could be attached to one of the moving objects in the simulation so as to observe the relative motion of the other objects. With this option, it was found that users perceive velocity and acceleration more sensibly on a stereoscopic display than on a non-stereoscopic 3D display. The simulations were made using Adobe Flash ActionScript, and the Papervision3D library was used to render the 3D models in the Flash web pages. To display the stereogram, two viewports from virtual cameras were displayed in parallel in the same web page. For observation of the stereogram, the images of the two viewports were superimposed using a 3D stereogram projection box (T&TS CO., LTD.) and projected on an 80-inch screen. The virtual cameras were controlled by the keyboard and also by Nintendo Wii remote controller buttons. In conclusion, stereoscopic display offers learners more opportunities to play with the simulated models and to perceive the characteristics of motion better.

  9. Engineering task plan for flammable gas atmosphere mobile color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1995-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and testing of the mobile video camera systems. The color video camera systems will be used to observe and record the activities within the vapor space of a tank on a limited-exposure basis. The units will be fully mobile and designed for operation in the single-shell flammable-gas-producing tanks. The objective of this task is to provide two mobile camera systems for use in flammable gas producing single-shell tanks (SSTs) for the Flammable Gas Tank Safety Program. The camera systems will provide observation, video recording, and monitoring of the activities that occur in the vapor space of the applicable tanks. The camera systems will be designed to be totally mobile, capable of deployment up to 6.1 meters into a 4 inch (minimum) riser

  10. Performance of Color Camera Machine Vision in Automated Furniture Rough Mill Systems

    Science.gov (United States)

    D. Earl Kline; Agus Widoyoko; Janice K. Wiedenbeck; Philip A. Araman

    1998-01-01

    The objective of this study was to evaluate the performance of color camera machine vision for lumber processing in a furniture rough mill. The study used 134 red oak boards to compare the performance of automated gang-rip-first rough mill yield based on a prototype color camera lumber inspection system developed at Virginia Tech with both estimated optimum rough mill...

  11. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. Supports are swingably mounted upon a column one above the other

  12. Active galactic nuclei cores in infrared-faint radio sources. Very long baseline interferometry observations using the Very Long Baseline Array

    Science.gov (United States)

    Herzog, A.; Middelberg, E.; Norris, R. P.; Spitler, L. R.; Deller, A. T.; Collier, J. D.; Parker, Q. A.

    2015-06-01

    Context. Infrared-faint radio sources (IFRS) form a new class of galaxies characterised by radio flux densities between tenths and tens of mJy and faint or absent infrared counterparts. It has been suggested that these objects are radio-loud active galactic nuclei (AGNs) at significant redshifts (z ≳ 2). Aims: Whereas the high redshifts of IFRS have been recently confirmed based on spectroscopic data, the evidence for the presence of AGNs in IFRS is mainly indirect. So far, only two AGNs have been unquestionably confirmed in IFRS based on very long baseline interferometry (VLBI) observations. In this work, we test the hypothesis that IFRS contain AGNs in a large sample of sources using VLBI. Methods: We observed 57 IFRS with the Very Long Baseline Array (VLBA) down to a detection sensitivity in the sub-mJy regime and detected compact cores in 35 sources. Results: Our VLBA detections increase the number of VLBI-detected IFRS from 2 to 37 and provide strong evidence that most - if not all - IFRS contain AGNs. We find that IFRS have a marginally higher VLBI detection fraction than randomly selected sources with mJy flux densities at arcsec-scales. Moreover, our data provide a positive correlation between compactness - defined as the ratio of milliarcsec- to arcsec-scale flux density - and redshift for IFRS, but suggest a decreasing mean compactness with increasing arcsec-scale radio flux density. Based on these findings, we suggest that IFRS tend to contain young AGNs whose jets have not formed yet or have not expanded, equivalent to very compact objects. We found two IFRS that are resolved into two components. The two components are spatially separated by a few hundred milliarcseconds in both cases. They might be components of one AGN, a binary black hole, or the result of gravitational lensing.

  13. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on the Ladybug®3 and a GPS device is discussed. The number of panoramas is much too high for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92,000 panoramas were recorded in the Polish region of Czarny Dunajec, and the measurements from the panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being created to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  14. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Long ago, Toshiba began manufacturing black-and-white radiation-resistant camera tubes employing non-browning faceplate glass for ITV cameras used in nuclear power plants. Now, in response to increasing demand in the nuclear power field, the company is developing radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. Presented herein are the results of experiments on the characteristics of materials for single color-camera tubes and the prospects for commercialization of the tubes. (author)

  15. Performance of Hayabusa2 DCAM3-D Camera for Short-Range Imaging of SCI and Ejecta Curtain Generated from the Artificial Impact Crater Formed on Asteroid 162137 Ryugu (1999 JU3)

    Science.gov (United States)

    Ishibashi, K.; Shirai, K.; Ogawa, K.; Wada, K.; Honda, R.; Arakawa, M.; Sakatani, N.; Ikeda, Y.

    2017-07-01

    Deployable Camera 3-D (DCAM3-D) is a small high-resolution camera equipped on Deployable Camera 3 (DCAM3), one of the Hayabusa2 instruments. Hayabusa2 will explore asteroid 162137 Ryugu (1999 JU3) and conduct an impact experiment using a liner shooting device called Small Carry-on Impactor (SCI). DCAM3 will be detached from the Hayabusa2 spacecraft and observe the impact experiment. The purposes of the observation are to know the impact conditions, to estimate the surface structure of asteroid Ryugu, and to understand the physics of impact phenomena on low-gravity bodies. DCAM3-D requires high imaging performance because it has to image and detect multiple targets of different scale and radiance, i.e., the faint SCI before the shot from 1-km distance, the bright ejecta generated by the impact, and the asteroid. In this paper we report the evaluation of the performance of the CMOS imaging sensor and the optical system of DCAM3-D. We also describe the calibration of DCAM3-D. We confirmed that the imaging performance of DCAM3-D satisfies the required values to achieve the purposes of the observation.

  16. Image dynamic range test and evaluation of Gaofen-2 dual cameras

    Science.gov (United States)

    Zhang, Zhenhua; Gan, Fuping; Wei, Dandan

    2015-12-01

    In order to fully understand the dynamic range of Gaofen-2 satellite data and support data processing, applications, and the development of future satellites, we evaluated the dynamic range by calculating statistics (maximum, minimum, average, and standard deviation) for four images obtained simultaneously by the Gaofen-2 dual cameras over the Beijing area. The same statistics were then calculated for each longitudinal overlap of PMS1 and PMS2 to evaluate each camera's dynamic-range consistency, and for each latitudinal overlap of PMS1 and PMS2 to evaluate the dynamic-range consistency between the two cameras. The results suggest that the images obtained by PMS1 and PMS2 have a wide dynamic range of DN values and contain rich information about ground objects. In general, the dynamic ranges of images from a single camera are in close agreement, with only small differences, as are those of the dual cameras; the consistency between images from a single camera is better than that between the dual cameras.
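The per-image statistics used above (maximum, minimum, average, standard deviation of DN values) can be sketched in a few lines; this is a minimal NumPy illustration, not the authors' code, and the function name and toy patch are ours:

```python
import numpy as np

def dynamic_range_stats(image):
    """Maximum, minimum, average, and standard deviation of the digital
    numbers (DN) in a single-band image, used to characterize dynamic range."""
    arr = np.asarray(image, dtype=float)
    return {
        "min": float(arr.min()),
        "max": float(arr.max()),
        "mean": float(arr.mean()),
        "std": float(arr.std()),
    }

# A tiny synthetic 2x2 patch of DN values:
stats = dynamic_range_stats([[10, 200], [50, 120]])
```

Comparing these statistics between overlapping regions of the two cameras is then a matter of running the same function on each overlap and differencing the results.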

  17. Variation in detection among passive infrared triggered-cameras used in wildlife research

    Science.gov (United States)

    Damm, Philip E.; Grand, James B.; Barnett, Steven W.

    2010-01-01

    Precise and accurate estimates of demographics such as age structure, productivity, and density are necessary in determining habitat and harvest management strategies for wildlife populations. Surveys using automated cameras are becoming an increasingly popular tool for estimating these parameters. However, most camera studies fail to incorporate detection probabilities, leading to parameter underestimation. The objective of this study was to determine the sources of heterogeneity in detection for trail cameras that incorporate a passive infrared (PIR) triggering system sensitive to heat and motion. Images were collected at four baited sites within the Conecuh National Forest, Alabama, using three cameras at each site operating continuously over the same seven-day period. Detection was estimated for four groups of animals based on taxonomic group and body size. Our hypotheses of detection considered variation among bait sites and cameras. The best model (w=0.99) estimated different rates of detection for each camera in addition to different detection rates for four animal groupings. Factors that explain this variability might include poor manufacturing tolerances, variation in PIR sensitivity, animal behavior, and species-specific infrared radiation. Population surveys using trail cameras with PIR systems must incorporate detection rates for individual cameras. Incorporating time-lapse triggering systems into survey designs should eliminate issues associated with PIR systems.

  18. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.

  19. ARACHNID: A prototype object-oriented database tool for distributed systems

    Science.gov (United States)

    Younger, Herbert; Oreilly, John; Frogner, Bjorn

    1994-01-01

    This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many gigabytes, for which an order-of-magnitude improvement in query speed is being sought. It contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive way of dividing it into pieces: each piece can then be searched on an individual processor, with only a weak data linkage required between the processors. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.
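The geographical decomposition idea — divide the sky catalog into zones so each processor searches only its own piece — can be sketched as below. The 12x6 zone grid, function name, and toy catalog are hypothetical choices for illustration, not details from the paper:

```python
def zone_index(ra_deg, dec_deg, n_ra=12, n_dec=6):
    """Map a sky position (degrees) to a rectangular zone index.
    The zone grid (12 x 6 here) is an illustrative choice."""
    i = min(int(ra_deg / 360.0 * n_ra), n_ra - 1)
    j = min(int((dec_deg + 90.0) / 180.0 * n_dec), n_dec - 1)
    return j * n_ra + i

# Partition a toy catalog so each processor searches one zone, with only
# weak linkage (e.g. neighboring zones) needed between processors.
catalog = [(10.0, -45.0), (10.5, -44.8), (200.0, 30.0)]
zones = {}
for ra, dec in catalog:
    zones.setdefault(zone_index(ra, dec), []).append((ra, dec))
```

A positional query then needs to touch only the zone(s) overlapping the search region rather than the whole catalog.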

  20. Utilization and viability of biologically-inspired algorithms in a dynamic multiagent camera surveillance system

    Science.gov (United States)

    Mundhenk, Terrell N.; Dhavale, Nitin; Marmol, Salvador; Calleja, Elizabeth; Navalpakkam, Vidhya; Bellman, Kirstie; Landauer, Chris; Arbib, Michael A.; Itti, Laurent

    2003-10-01

    In view of the growing complexity of computational tasks and their design, we propose that certain interactive systems may be better designed by utilizing computational strategies based on the study of the human brain. Compared with current engineering paradigms, brain theory offers the promise of improved self-organization and adaptation to the current environment, freeing the programmer from having to address those issues in a procedural manner when designing and implementing large-scale complex systems. To advance this hypothesis, we discuss a multi-agent surveillance system where 12 agent CPUs, each with its own camera, compete and cooperate to monitor a large room. To cope with the overload of image data streaming from 12 cameras, we take inspiration from the primate's visual system, which allows the animal to perform a real-time selection of the few most conspicuous locations in visual input. This is accomplished by having each camera agent utilize the bottom-up, saliency-based visual attention algorithm of Itti and Koch (Vision Research 2000;40(10-12):1489-1506) to scan the scene for objects of interest. Real-time operation is achieved using a distributed version that runs on a 16-CPU Beowulf cluster composed of the agent computers. The algorithm guides cameras to track and monitor salient objects based on maps of color, orientation, intensity, and motion. To spread camera viewpoints or create cooperation in monitoring highly salient targets, camera agents bias each other by increasing or decreasing the weight of different feature vectors in other cameras, using mechanisms similar to the excitation and suppression that have been documented in electrophysiology, psychophysics and imaging studies of low-level visual processing. In addition, if cameras need to compete for computing resources, allocation of computational time is weighted based upon the history of each camera. A camera agent that has a history of seeing more salient targets is more likely to obtain
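The core of the described attention scheme — combining weighted feature maps (color, orientation, intensity, motion) into a single saliency map, where the weights are what other agents bias — can be sketched as follows. This is a simplified illustration of the weighted-combination step only, not the full Itti–Koch algorithm; all names and the toy maps are ours:

```python
import numpy as np

def combine_saliency(feature_maps, weights):
    """Combine per-feature conspicuity maps (color, intensity, orientation,
    motion) into one saliency map. The weights are the hook through which
    other camera agents can bias this camera's attention."""
    total = np.zeros_like(np.asarray(feature_maps[0], dtype=float))
    for fmap, w in zip(feature_maps, weights):
        f = np.asarray(fmap, dtype=float)
        rng = f.max() - f.min()
        if rng > 0:                      # normalize each map to [0, 1]
            f = (f - f.min()) / rng
        total += w * f
    return total

# Toy 2x2 maps: the camera attends to the most salient location.
intensity = np.array([[0.0, 1.0], [0.0, 0.0]])
motion = np.array([[0.0, 0.0], [0.0, 2.0]])
smap = combine_saliency([intensity, motion], [1.0, 2.0])
target = np.unravel_index(np.argmax(smap), smap.shape)
```

Raising the motion weight (as a peer agent might, to suppress or excite this camera) shifts `target` toward moving objects.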

  1. Camera pose estimation for augmented reality in a small indoor dynamic scene

    Science.gov (United States)

    Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad

    2017-09-01

    Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degrees-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about scene structure, and they may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise-planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows virtual objects to be rendered in a meaningful way on the one hand, and improves the precision of the camera pose and the quality of the 3-D reconstruction of the environment by adding constraints on 3-D points and poses in the optimization process on the other. We propose to exploit the rigid motion of the 3-D planes in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.

  2. Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor

    Directory of Open Access Journals (Sweden)

    Dong Seop Kim

    2018-03-01

    Full Text Available Recent developments in intelligent surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and background brightness make detection a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.

  3. Object tracking using active appearance models

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2001-01-01

    This paper demonstrates that (near) real-time object tracking can be accomplished by the deformable template model; the Active Appearance Model (AAM) using only low-cost consumer electronics such as a PC and a web-camera. Successful object tracking of perspective, rotational and translational...

  4. New light on faint stars

    International Nuclear Information System (INIS)

    Reid, N.; Gilmore, G.

    1982-01-01

    This paper presents the first purely photometric derivation of the stellar main-sequence luminosity function to absolute magnitude M_V = +19, which is comparable to the minimum mass for thermonuclear burning. The observations consist of COSMOS measures of UK Schmidt telescope plates in the V, R and I bands. They provide a complete sample of every star in 18.24 square degrees towards the South Galactic Pole, brighter than I = 17.0. Absolute magnitudes and distances are derived by photometric parallax from the M_V/V-I and M_V/I-K relations, which have been carefully calibrated on our photometric system. For +9 <= M_V <= +19, the photometrically defined luminosity function is in agreement with that derived from samples of nearby stars, and by proper motion techniques. There is no evidence for any excess of intrinsically faint stars, even though this survey reaches some 5 mag deeper into the luminosity function than previous photometric surveys. Re-analysis of subsamples of other photometric studies of the local stellar density removes any evidence for a significant excess of M dwarfs relative to the kinematically derived luminosity function. The missing mass in the solar neighbourhood, if any, does not reside in main-sequence stars brighter than M_V approx. = +17 mag. (author)
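The photometric parallax used above rests on the distance modulus: once a color relation gives the absolute magnitude M, the apparent magnitude m yields the distance via m - M = 5 log10(d / 10 pc). A minimal sketch (the function name and example values are ours):

```python
def photometric_distance_pc(m_apparent, M_absolute):
    """Photometric parallax: invert the distance modulus
    m - M = 5 * log10(d / 10 pc) to get the distance d in parsecs."""
    return 10.0 ** ((m_apparent - M_absolute + 5.0) / 5.0)

# A star observed at I = 17.0 whose color implies M_I = 12.0 lies at 100 pc.
d = photometric_distance_pc(17.0, 12.0)
```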

  5. MEASURING X-RAY VARIABILITY IN FAINT/SPARSELY SAMPLED ACTIVE GALACTIC NUCLEI

    Energy Technology Data Exchange (ETDEWEB)

    Allevato, V. [Department of Physics, University of Helsinki, Gustaf Haellstroemin katu 2a, FI-00014 Helsinki (Finland); Paolillo, M. [Department of Physical Sciences, University Federico II, via Cinthia 6, I-80126 Naples (Italy); Papadakis, I. [Department of Physics and Institute of Theoretical and Computational Physics, University of Crete, 71003 Heraklion (Greece); Pinto, C. [SRON Netherlands Institute for Space Research, Sorbonnelaan 2, 3584-CA Utrecht (Netherlands)

    2013-07-01

    We study the statistical properties of the normalized excess variance of a variability process characterized by a "red-noise" power spectral density (PSD), as in the case of active galactic nuclei (AGNs). We perform Monte Carlo simulations of light curves, assuming both a continuous and a sparse sampling pattern and various signal-to-noise ratios (S/Ns). We show that the normalized excess variance is a biased estimate of the variance even in the case of continuously sampled light curves. The bias depends on the PSD slope and on the sampling pattern, but not on the S/N. We provide a simple formula to account for the bias, which yields unbiased estimates with an accuracy better than 15%. We show that normalized excess variance estimates based on single light curves (especially for sparse sampling and S/N < 3) are highly uncertain (even if corrected for bias) and we propose instead the use of an "ensemble estimate", based on multiple light curves of the same object, or on light curves of many objects. These estimates have symmetric distributions, known errors, and can also be corrected for biases. We use our results to estimate the ability to measure the intrinsic source variability in current data, and show that they could also be useful in planning the observing strategy of future surveys such as those provided by X-ray missions studying distant and/or faint AGN populations and, more generally, in estimating the variability amplitude of sources that will result from future surveys such as Pan-STARRS and LSST.
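The estimator under study, the normalized excess variance, is conventionally the light curve's sample variance minus the mean squared measurement error, normalized by the squared mean flux. A minimal sketch of that definition (before any of the paper's bias corrections; the toy light curve is ours):

```python
import numpy as np

def normalized_excess_variance(flux, flux_err):
    """Normalized excess variance: (S^2 - <sigma_err^2>) / <flux>^2, where
    S^2 is the sample variance of the light curve and <sigma_err^2> is the
    mean squared measurement error."""
    flux = np.asarray(flux, dtype=float)
    err = np.asarray(flux_err, dtype=float)
    s2 = flux.var(ddof=1)            # unbiased sample variance
    mean_err2 = np.mean(err ** 2)    # average measurement variance
    return (s2 - mean_err2) / flux.mean() ** 2

# A short, evenly sampled toy light curve:
nxs = normalized_excess_variance([10.0, 12.0, 8.0, 10.0], [1.0, 1.0, 1.0, 1.0])
```

The paper's point is that for red-noise PSDs this quantity remains a biased estimate of the intrinsic variance, with the bias set by the PSD slope and sampling pattern.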

  6. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position-sensitive radiation detector, the novel system can produce images superior to those of conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  7. Picosecond camera

    International Nuclear Information System (INIS)

    Decroisette, Michel

    A Kerr cell activated by infrared pulses from a mode-locked Nd:glass laser acts as an ultra-fast periodic shutter with an opening time of a few picoseconds. Associated with an S.T.L. camera, it constitutes a picosecond camera allowing very fast effects to be studied [fr

  8. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    International Nuclear Information System (INIS)

    Winkler, A W; Zagar, B G

    2013-01-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. Therefore, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquisition. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives. (paper)

  9. A curve fitting method for extrinsic camera calibration from a single image of a cylindrical object

    Science.gov (United States)

    Winkler, A. W.; Zagar, B. G.

    2013-08-01

    An important step in the process of optical steel coil quality assurance is to measure the proportions of width and radius of steel coils as well as the relative position and orientation of the camera. This work attempts to estimate these extrinsic parameters from single images by using the cylindrical coil itself as the calibration target. Therefore, an adaptive least-squares algorithm is applied to fit parametrized curves to the detected true coil outline in the acquisition. The employed model allows for strictly separating the intrinsic and the extrinsic parameters. Thus, the intrinsic camera parameters can be calibrated beforehand using available calibration software. Furthermore, a way to segment the true coil outline in the acquired images is motivated. The proposed optimization method yields highly accurate results and can be generalized even to measure other solids which cannot be characterized by the identification of simple geometric primitives.
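The central step in both records above is a least-squares fit of a parametrized curve to the detected coil outline. As a simplified stand-in for the paper's adaptive algorithm, here is the classic algebraic (Kasa) least-squares fit of a circle to outline points; the synthetic outline and names are ours:

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic least-squares circle fit (the Kasa method): solve
    x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c), then recover center/radius."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    rhs = -(xs ** 2 + ys ** 2)
    a, b, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - c)
    return cx, cy, r

# Synthetic outline: points on a circle of radius 2 centered at (1, -1).
t = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
cx, cy, r = fit_circle(1.0 + 2.0 * np.cos(t), -1.0 + 2.0 * np.sin(t))
```

In the paper the fitted curves are the perspective projections of the cylinder's circular edges, so the recovered curve parameters feed directly into the extrinsic camera parameters.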

  10. Global Calibration of Multi-Cameras Based on Refractive Projection and Ray Tracing

    Directory of Open Access Journals (Sweden)

    Mingchi Feng

    2017-10-01

    Full Text Available Multi-camera systems are widely applied in three-dimensional (3D) computer vision, especially when multiple cameras are distributed on both sides of the measured object. The calibration methods of multi-camera systems are critical to the accuracy of vision measurement, and the key is to find an appropriate calibration target. In this paper, a high-precision camera calibration method for multi-camera systems based on transparent glass checkerboards and ray tracing is described, and is used to calibrate multiple cameras distributed on both sides of the glass checkerboard. Firstly, the intrinsic parameters of each camera are obtained by Zhang's calibration method. Then, multiple cameras capture several images from the front and back of the glass checkerboard with different orientations, and all images contain distinct grid corners. As the cameras on one side are not affected by the refraction of the glass checkerboard, their extrinsic parameters can be calculated directly. However, the cameras on the other side are influenced by the refraction of the glass checkerboard, and direct use of the projection model would produce a calibration error. A multi-camera calibration method using a refractive projection model and ray tracing is developed to eliminate this error. Furthermore, both synthetic and real data are employed to validate the proposed approach. The experimental results of refractive calibration show that the error of the 3D reconstruction is smaller than 0.2 mm, the relative errors of both rotation and translation are less than 0.014%, and the mean and standard deviation of the reprojection error of the four-camera system are 0.00007 and 0.4543 pixels, respectively. The proposed method is flexible, highly accurate, and simple to carry out.
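The refractive projection model traces rays through the glass checkerboard, bending each ray at the glass interfaces. The bending step is Snell's law in vector form; a minimal sketch (not the paper's implementation, and all names are ours):

```python
import numpy as np

def refract(direction, normal, n1, n2):
    """Vector form of Snell's law: refract a ray crossing an interface from
    refractive index n1 into n2. Returns None on total internal reflection.
    The normal should point against the incoming ray."""
    d = np.asarray(direction, dtype=float)
    n = np.asarray(normal, dtype=float)
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    cos_i = -float(np.dot(d, n))
    eta = n1 / n2
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0.0:
        return None  # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# A ray hitting glass (n = 1.5) head-on passes through undeviated:
straight = refract((0.0, 0.0, 1.0), (0.0, 0.0, -1.0), 1.0, 1.5)
# At 45 deg incidence the ray bends toward the normal:
s = 2.0 ** 0.5 / 2.0
bent = refract((s, 0.0, s), (0.0, 0.0, -1.0), 1.0, 1.5)
```

Tracing a ray into the glass and back out through the second face (refracting twice) gives the lateral offset that the refractive projection model corrects for.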

  11. How to photograph the Moon and planets with your digital camera

    CERN Document Server

    Buick, Tony

    2007-01-01

    Since the advent of astronomical CCD imaging it has been possible for amateurs to produce images of a quality that was attainable only by universities and professional observatories just a decade ago. However, astronomical CCD cameras are still very expensive, and technology has now progressed so that digital cameras - the kind you use on holiday - are more than capable of photographing the brighter astronomical objects, notably the Moon and major planets. Tony Buick has worked for two years on the techniques involved, and has written this illustrated step-by-step manual for anyone who has a telescope (of any size) and a digital camera. The color images he has produced - there are over 300 of them in the book - are of breathtaking quality. His book is more than a manual of techniques (including details of how to make a low-cost DIY camera mount) and examples; it also provides a concise photographic atlas of the whole of the nearside of the Moon - with every image made using a standard digital camera - and des...

  12. Adaptation Computing Parameters of Pan-Tilt-Zoom Cameras for Traffic Monitoring

    Directory of Open Access Journals (Sweden)

    Ya Lin WU

    2014-01-01

    Full Text Available Closed-circuit television (CCTV) cameras have been widely used in recent years for traffic monitoring and surveillance applications. CCTV cameras can be used to automatically extract real-time traffic parameters with image processing and tracking technologies. In particular, pan-tilt-zoom (PTZ) cameras can provide flexible view selection as well as a wider observation range, which allows traffic parameters to be calculated accurately. Calibrating the parameters of PTZ cameras therefore plays an important role in vision-based traffic applications. However, in the specific traffic scenario of locating the license plate of an illegally parked car, the parameters of the PTZ camera have to be updated according to the position and distance of the car. In the proposed traffic monitoring system, we use an ordinary webcam and a PTZ camera. We obtain the vanishing point of the traffic lane lines in the pixel-based coordinate system from the fixed webcam. The parameters of the PTZ camera are initialized from the distance of the traffic monitoring, the specific objectives, and the vanishing point. We can then use the coordinate position of the illegally parked car to update the parameters of the PTZ camera, obtain the real-world coordinate position of the car, and use it to compute the distance. The results show that the error between the measured distance and the real distance is only 0.2064 m.
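One common way to turn an image row into a ground distance, in the spirit of the pixel-to-real-world mapping described above, is the flat-ground model: a camera at known height and tilt sees the point imaged at row v at a depression angle that fixes its distance along the ground. This is a generic textbook sketch under stated assumptions (flat ground, known height/tilt/focal length), not the authors' method; all parameter names are illustrative:

```python
import math

def ground_distance(v_pixel, cy, f_pixels, camera_height, tilt_rad):
    """Flat-ground distance to the point imaged at image row v_pixel, for a
    camera at height camera_height (m) tilted down by tilt_rad below the
    horizontal. cy is the principal-point row, f_pixels the focal length."""
    angle_below_axis = math.atan((v_pixel - cy) / f_pixels)
    depression = tilt_rad + angle_below_axis  # total angle below horizontal
    return camera_height / math.tan(depression)

# A point on the optical axis of a camera 5 m up, tilted 45 deg down,
# lies 5 m away along the ground.
d_axis = ground_distance(240.0, 240.0, 800.0, 5.0, math.radians(45.0))
```

Rows further below the image center correspond to steeper depression angles and thus shorter ground distances.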

  13. TRANSFORMATION ALGORITHM FOR IMAGES OBTAINED BY OMNIDIRECTIONAL CAMERAS

    Directory of Open Access Journals (Sweden)

    V. P. Lazarenko

    2015-01-01

    Full Text Available Omnidirectional optoelectronic systems find their application in areas where a wide viewing angle is critical. However, omnidirectional optoelectronic systems have a large distortion that makes their application more difficult. The paper compares the projection functions of traditional perspective lenses and omnidirectional wide-angle fish-eye lenses with a viewing angle not less than 180°. This comparison shows that the distortion models of omnidirectional cameras cannot be described as a deviation from the classic pinhole camera model. To solve this problem, an algorithm for transforming omnidirectional images has been developed. The paper provides a brief comparison of the four calibration methods available in open-source toolkits for omnidirectional optoelectronic systems. The geometrical projection model used for calibration of the omnidirectional optical system is given. The algorithm consists of three basic steps. At the first step, we calculate the field of view of a virtual pinhole PTZ camera; this field of view is characterized by an array of 3D points in the object space. At the second step, the array of pixels corresponding to these three-dimensional points is calculated. Then we calculate the projection function that expresses the relation between a given 3D point in the object space and the corresponding pixel. In this paper we use a calibration procedure providing the projection function for the calibrated instance of the camera. At the last step, the final image is formed pixel-by-pixel from the original omnidirectional image using the calculated array of 3D points and the projection function. The developed algorithm makes it possible to obtain an image for a part of the field of view of an omnidirectional optoelectronic system, with the distortion corrected, from the original omnidirectional image. The algorithm is designed for operation with omnidirectional optoelectronic systems with both catadioptric and fish-eye lenses.
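The three steps above can be sketched with an idealized fish-eye projection function. The paper uses a calibrated, camera-specific projection function; here we substitute the textbook equidistant model (r = f·θ) purely for illustration, and all names are ours:

```python
import numpy as np

def fisheye_pixel(point_3d, f, cx, cy):
    """Project a 3D point into an ideal equidistant fish-eye image, where the
    radial distance from the principal point is r = f * theta (theta is the
    angle between the ray and the optical axis)."""
    x, y, z = point_3d
    theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f * theta
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

def virtual_pinhole_rays(width, height, f_virtual):
    """Step 1 of the algorithm: one 3D ray per pixel of a virtual pinhole
    camera looking along +z; these rays are then fed to the projection
    function of the calibrated omnidirectional camera (step 2)."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    x = (u - width / 2.0) / f_virtual
    y = (v - height / 2.0) / f_virtual
    return np.stack([x, y, np.ones_like(x, dtype=float)], axis=-1)

u0, v0 = fisheye_pixel((0.0, 0.0, 1.0), 300.0, 640.0, 480.0)  # on-axis point
rays = virtual_pinhole_rays(4, 3, 100.0)
```

Step 3 then samples the omnidirectional image at the computed pixel for every ray, building the undistorted virtual-PTZ view pixel by pixel.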

  14. Reducing the Variance of Intrinsic Camera Calibration Results in the ROS Camera_Calibration Package

    Science.gov (United States)

    Chiou, Geoffrey Nelson

    The intrinsic calibration of a camera is the process in which the internal optical and geometric characteristics of the camera are determined. If accurate intrinsic parameters of a camera are known, the ray in 3D space that every point in the image lies on can be determined. Pairing with another camera allows the positions of points in the image to be calculated by intersection of the rays. Accurate intrinsics also allow the position and orientation of a camera relative to some world coordinate system to be calculated. These two reasons for having accurate intrinsic calibration for a camera are especially important in the field of industrial robotics, where 3D cameras are frequently mounted on the ends of manipulators. In the ROS (Robot Operating System) ecosystem, the camera_calibration package is the default standard for intrinsic camera calibration. Several researchers from the Industrial Robotics & Automation division at Southwest Research Institute have noted that this package results in large variances in the intrinsic parameters of the camera when calibrating across multiple attempts. There are also open issues on this matter in their public repository that have not been addressed by the developers. In this thesis, we confirm that the camera_calibration package does indeed return different results across multiple attempts, test several possible hypotheses as to why, identify the reason, and provide a simple solution to fix the cause of the issue.

  15. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera which tolerates a total dose of 10^6 - 10^8 rad was developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the results of the evaluation, the components were selected and the design was performed. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, and pan/tilt controllers) were designed on the concept of remote control. Two types of radiation-tolerant camera were fabricated, for use in underwater or normal environments. (author)

  16. Commercialization of radiation tolerant camera

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera which tolerates a total dose of 10^6 - 10^8 rad was developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the results of the evaluation, the components were selected and the design was performed. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, and pan/tilt controllers) were designed on the concept of remote control. Two types of radiation-tolerant camera were fabricated, for use in underwater or normal environments. (author)

  17. Evaluation of an Airborne Remote Sensing Platform Consisting of Two Consumer-Grade Cameras for Crop Identification

    Directory of Open Access Journals (Sweden)

    Jian Zhang

    2016-03-01

    Full Text Available Remote sensing systems based on consumer-grade cameras have been increasingly used in scientific research and remote sensing applications because of their low cost and ease of use. However, the performance of consumer-grade cameras for practical applications has not been well documented in related studies. The objective of this research was to apply three commonly-used classification methods (unsupervised, supervised, and object-based) to three-band imagery with RGB (red, green, and blue) bands and four-band imagery with RGB and near-infrared (NIR) bands to evaluate the performance of a dual-camera imaging system for crop identification. Airborne images were acquired from a cropping area in Texas and mosaicked and georeferenced. The mosaicked imagery was classified using the three classification methods to assess the usefulness of NIR imagery for crop identification and to evaluate performance differences between the object-based and pixel-based methods. Image classification and accuracy assessment showed that the additional NIR band improved crop classification accuracy over the RGB imagery alone, and that the object-based method achieved better results with additional non-spectral image features. The results from this study indicate that the airborne imaging system based on two consumer-grade cameras used in this study can be useful for crop identification and other agricultural applications.
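An unsupervised classification of the kind evaluated above clusters pixels by their spectral signature across the available bands. As a minimal stand-in (the paper likely used a GIS/remote-sensing package, e.g. ISODATA; this bare k-means and its toy data are ours):

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Minimal k-means for unsupervised classification of multi-band pixels.
    Rows are pixels; columns are spectral bands (e.g. R, G, B, NIR)."""
    pixels = np.asarray(pixels, dtype=float)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign every pixel to its nearest cluster center
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# Two spectrally distinct "crops" in a toy 4-band (RGB + NIR) sample:
toy = np.array([[0, 0, 0, 0], [1, 0, 0, 0],
                [10, 10, 10, 10], [11, 10, 10, 10]], dtype=float)
labels, centers = kmeans(toy, k=2)
```

The NIR band's contribution shows up here as an extra column: crops with similar RGB but different NIR reflectance become separable clusters.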

  18. Cameras in mobile phones

    Science.gov (United States)

    Nummela, Ville; Viinikanoja, Jarkko; Alakarhu, Juha

    2006-04-01

    Camera phones are one of the fastest-growing consumer markets today. Over the past few years total volume has grown rapidly, and today millions of mobile phones with cameras are sold. At the same time, the resolution and functionality of the cameras have grown from CIF towards DSC level. From a camera point of view, the mobile world is an extremely challenging field. Cameras should have good image quality in a small size. They also need to be reliable, and their construction should be suitable for mass manufacturing. All components of the imaging chain should be well optimized in this environment. Image quality and usability are the most important parameters to the user. The current trend of adding more megapixels to cameras while at the same time using smaller pixels affects both. On the other hand, reliability and miniaturization are key drivers for product development, as is cost. In an optimized solution all parameters are in balance, but the process of finding the right trade-offs is not an easy task. In this paper, trade-offs related to optics and their effects on the image quality and usability of cameras are discussed. Key development areas from the mobile phone camera point of view are also listed.

  19. Deep Rapid Optical Follow-Up of Gravitational Wave Sources with the Dark Energy Camera

    Science.gov (United States)

    Cowperthwaite, Philip

    2018-01-01

    The detection of an electromagnetic counterpart associated with a gravitational wave detection by the Advanced LIGO and Virgo interferometers is one of the great observational challenges of our time. The large localization regions and potentially faint counterparts require the use of wide-field, large-aperture telescopes. As a result, the Dark Energy Camera, a 3.3 sq. deg. CCD imager on the 4-m Blanco telescope at CTIO in Chile, is the most powerful instrument for this task in the Southern Hemisphere. I will report on the results from our joint program between the community and members of the Dark Energy Survey to conduct rapid and efficient follow-up of gravitational wave sources. This includes systematic searches for optical counterparts, as well as developing an understanding of contaminating sources on timescales not normally probed by traditional untargeted supernova surveys. I will additionally comment on the immense science gains to be made by a joint detection and discuss future prospects from the standpoint of both next-generation wide-field telescopes and next-generation gravitational wave detectors.

  20. Resolving the faint end of the satellite luminosity function for the nearest elliptical Centaurus A

    Science.gov (United States)

    Crnojevic, Denija

    2014-10-01

    We request HST/ACS imaging to follow up 15 new faint candidate dwarfs around the nearest elliptical Centaurus A (3.8 Mpc). The dwarfs were found via a systematic ground-based (Magellan/Megacam) survey out to ~150 kpc, designed to directly confront the "missing satellites" problem in a wholly new environment. Current Cold Dark Matter models for structure formation fail to reproduce the shallow slope of the satellite luminosity function in spiral-dominated groups for which dwarfs fainter than M_V<-14 have been surveyed (the Local Group and the nearby, interacting M81 group). Clusters of galaxies show a better agreement with cosmological predictions, suggesting an environmental dependence of the (poorly-understood) physical processes acting on the evolution of low mass galaxies (e.g., reionization). However, the luminosity function completeness for these rich environments quickly drops due to the faintness of the satellites and to the difficult cluster membership determination. We target a yet unexplored "intermediate" environment, a nearby group dominated by an elliptical galaxy, ideal due to its proximity: accurate (10%) distance determinations for its members can be derived from resolved stellar populations. The proposed observations of the candidate dwarfs will confirm their nature, group membership, and constrain their luminosities, metallicities, and star formation histories. We will obtain the first complete census of dwarf satellites of an elliptical down to an unprecedented M_V<-9. Our results will crucially constrain cosmological predictions for the faint end of the satellite luminosity function to achieve a more complete picture of the galaxy formation process.

  1. HUBBLE SPACE TELESCOPE/NEAR-INFRARED CAMERA AND MULTI-OBJECT SPECTROMETER OBSERVATIONS OF THE GLIMPSE9 STELLAR CLUSTER

    International Nuclear Information System (INIS)

    Messineo, Maria; Figer, Donald F.; Davies, Ben; Trombley, Christine; Kudritzki, R. P.; Rich, R. Michael; MacKenty, John

    2010-01-01

    We present Hubble Space Telescope/Near-Infrared Camera and Multi-Object Spectrometer photometry, and low-resolution K-band spectra, of the GLIMPSE9 stellar cluster. The newly obtained color-magnitude diagram shows a cluster sequence with H − K_S ≈ 1 mag, indicating an interstellar extinction A_{K_S} = 1.6 ± 0.2 mag. The spectra of the three brightest stars show deep CO band heads, which indicate red supergiants with spectral types M1-M2. Two O9-B2 supergiants are also identified, which yield a spectrophotometric distance of 4.2 ± 0.4 kpc. Presuming that the population is coeval, we derive an age between 15 and 27 Myr, and a total cluster mass of 1600 ± 400 M_sun, integrated down to 1 M_sun. In the vicinity of GLIMPSE9 are several H II regions and supernova remnants, all of which (including GLIMPSE9) are probably associated with a giant molecular cloud (GMC) in the inner Galaxy. GLIMPSE9 probably represents one episode of massive star formation in this GMC. We have identified several other candidate stellar clusters of the same complex.

  2. THE ORIGIN OF THE HEAVIEST METALS IN MOST ULTRA-FAINT DWARF GALAXIES

    Energy Technology Data Exchange (ETDEWEB)

    Roederer, Ian U., E-mail: iur@umich.edu [Department of Astronomy, University of Michigan, 1085 S. University Ave., Ann Arbor, MI 48109 (United States)

    2017-01-20

    The heaviest metals found in stars in most ultra-faint dwarf (UFD) galaxies in the Milky Way halo are generally underabundant by an order of magnitude or more when compared with stars in the halo field. Among the heavy elements produced by n-capture reactions, only Sr and Ba can be detected in red giant stars in most UFD galaxies. This limited chemical information is unable to identify the nucleosynthesis process(es) responsible for producing the heavy elements in UFD galaxies. Similar [Sr/Ba] and [Ba/Fe] ratios are found in three bright halo field stars, BD−18°5550, CS 22185–007, and CS 22891–200. Previous studies of high-quality spectra of these stars report detections of additional n-capture elements, including Eu. The [Eu/Ba] ratios in these stars span +0.41 to +0.86. These ratios and others among elements in the rare earth domain indicate an r-process origin. These stars have some of the lowest levels of r-process enhancement known, with [Eu/H] spanning −3.95 to −3.32, and they may be considered nearby proxies for faint stars in UFD galaxies. Direct confirmation, however, must await future observations of additional heavy elements in stars in the UFD galaxies themselves.

  3. Finding Objects for Assisting Blind People

    OpenAIRE

    Yi, Chucai; Flores, Roberto W.; Chincha, Ricardo; Tian, YingLi

    2013-01-01

    Computer vision technology has been widely used for blind assistance, such as navigation and wayfinding. However, few camera-based systems are developed for helping blind or visually-impaired people to find daily necessities. In this paper, we propose a prototype system of blind-assistant object finding by camera-based network and matching-based recognition. We collect a dataset of daily necessities and apply Speeded-Up Robust Features (SURF) and Scale Invariant Feature Transform (SIFT) featu...

  4. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to the use of cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a great presence of 'digital cameras' aimed more at the home markets. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs which offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application would be such effects as 'pixel jitter,' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog to digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  5. Augmented reality glass-free three-dimensional display with the stereo camera

    Science.gov (United States)

    Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display based on stereo camera used for presenting parallax contents from different angle with lenticular lens array is proposed. Compared with the previous implementation method of AR techniques based on two-dimensional (2D) panel display with only one viewpoint, the proposed method can realize glass-free 3D display of virtual objects and real scene with 32 virtual viewpoints. Accordingly, viewers can get abundant 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved method based on stereo camera can realize AR glass-free 3D display, and both of virtual objects and real scene have realistic and obvious stereo performance.

  6. No evidence for radio-quiet BL Lacertae objects

    International Nuclear Information System (INIS)

    Stocke, J.T.; Morris, S.L.; Gioia, I.; Maccacaro, T.; Schild, R.E.

    1990-01-01

    Using a large, flux-limited sample of faint X-ray sources, a search has been conducted for radio-quiet BL Lacertae objects. None has been found. Thirty-two X-ray-selected BL Lac objects and BL Lac candidates have been found within the sources of the Einstein Medium Sensitivity Survey (EMSS). Thirty-one of these have been observed with the VLA and all have been detected at 5 GHz. While the optical magnitudes of the EMSS BL Lac objects range from 17 to 20.8, their radio-to-optical spectral indices occupy a very small range. The very bright X-ray-selected BL Lac objects like PKS 2155-304 and Markarian 501 have similar range values. Therefore, unlike the clear dichotomy between radio-loud quasars and radio-quiet QSOs, there is no evidence for two populations of Lacertids distinguished by radio loudness. 43 refs

  7. Divergence-ratio axi-vision camera (Divcam): A distance mapping camera

    International Nuclear Information System (INIS)

    Iizuka, Keigo

    2006-01-01

    A novel distance mapping camera, the divergence-ratio axi-vision camera (Divcam), is proposed. The decay rate of the illuminating light with distance, due to the divergence of the light, is used as the means of mapping distance. Resolutions of 10 mm over a range of meters and 0.5 mm over a range of decimeters were achieved. The special features of this camera are its high-resolution real-time operation, simplicity, compactness, light weight, portability, and yet low fabrication cost. The feasibility of various potential applications is also included.
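
    The ranging principle stated above can be sketched in a few lines: under an idealized inverse-square falloff, the ratio of pixel intensities recorded under two point sources at different standoffs determines the distance. The function below is an illustrative sketch of that geometry, not the paper's implementation; the function name and the idealized falloff model I = P/(d + s)^2 are assumptions.

```python
import math

def distance_from_ratio(i_near: float, i_far: float,
                        s_near: float, s_far: float) -> float:
    """Distance d of a scene point, from its pixel intensities i_near and
    i_far under two equal-power sources offset s_near and s_far behind the
    aperture, assuming inverse-square falloff I = P / (d + s)**2."""
    r = math.sqrt(i_near / i_far)        # r = (d + s_far) / (d + s_near)
    return (s_far - r * s_near) / (r - 1.0)
```

    For example, with sources at offsets 0 and 0.5 m, a point 2 m away returns intensities in the ratio (2.5/2.0)^2 = 1.5625, from which the distance is recovered exactly.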

  8. Camera Networks The Acquisition and Analysis of Videos over Wide Areas

    CERN Document Server

    Roy-Chowdhury, Amit K

    2012-01-01

    As networks of video cameras are installed in many applications like security and surveillance, environmental monitoring, disaster response, and assisted living facilities, among others, image understanding in camera networks is becoming an important area of research and technology development. There are many challenges that need to be addressed in the process. Some of them are listed below: - Traditional computer vision challenges in tracking and recognition, robustness to pose, illumination, occlusion, clutter, recognition of objects, and activities; - Aggregating local information for wide

  9. An intelligent space for mobile robot localization using a multi-camera system.

    Science.gov (United States)

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-08-15

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  10. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Mariana Rampinelli

    2014-08-01

    Full Text Available This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.
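
    A minimal sketch of the geometric core of such a calibration: given pattern positions known in the world plane (e.g. from robot odometry) and their pixel observations, a planar homography can be estimated with the direct linear transform (DLT). This is an illustrative fragment under simplified assumptions, not the paper's full multi-camera method, which also recovers extrinsics and fuses odometry.

```python
import numpy as np

def fit_homography(world_xy: np.ndarray, image_xy: np.ndarray) -> np.ndarray:
    """DLT estimate of the 3x3 homography mapping planar world points
    (e.g. pattern positions known from odometry) to pixel observations."""
    assert world_xy.shape == image_xy.shape and world_xy.shape[0] >= 4
    rows = []
    for (X, Y), (u, v) in zip(world_xy, image_xy):
        rows.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        rows.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    # The homography is the null vector of the stacked constraint matrix.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H: np.ndarray, world_xy: np.ndarray) -> np.ndarray:
    """Apply a homography to 2D points (homogeneous normalization)."""
    pts = np.column_stack([world_xy, np.ones(len(world_xy))]) @ H.T
    return pts[:, :2] / pts[:, 2:3]
```

    In a noiseless test with a known homography, the reprojection of the fitted model matches the observations to numerical precision.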

  11. NEMA NU-1 2007 based and independent quality control software for gamma cameras and SPECT

    International Nuclear Information System (INIS)

    Vickery, A; Joergensen, T; De Nijs, R

    2011-01-01

    A thorough quality assurance of gamma and SPECT cameras requires careful handling of the measured quality control (QC) data. Most gamma camera manufacturers provide users with camera-specific QC software. This QC software is indeed a useful tool for following the day-to-day performance of a single camera. However, when it comes to objective performance comparison of different gamma cameras and a deeper understanding of the calculated numbers, the use of camera-specific QC software without access to the source code is best avoided. Calculations and definitions might differ, and manufacturer-independent, standardized results are preferred. Based upon the NEMA Standards Publication NU 1-2007, we have developed a suite of easy-to-use data handling software for processing acquired QC data, providing the user with instructive images and text files with the results.
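
    As an illustration of the kind of NU-1-style figure such a vendor-independent tool computes, the sketch below evaluates the integral uniformity of a flood-field image, 100·(max − min)/(max + min), after a 3×3 weighted smoothing. The kernel and the interior-only convolution are simplifications for the sketch, not the full NU 1-2007 procedure, which also defines the useful field of view and differential uniformity.

```python
import numpy as np

# Common 3x3 weighted smoothing kernel (weights sum to 1).
NEMA_KERNEL = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0

def integral_uniformity(flood: np.ndarray) -> float:
    """Integral uniformity (%) of a flood image:
    100 * (max - min) / (max + min), after 3x3 smoothing of interior pixels."""
    f = flood.astype(float)
    sm = np.zeros((f.shape[0] - 2, f.shape[1] - 2))
    for dy in range(3):                       # dependency-free convolution
        for dx in range(3):
            sm += NEMA_KERNEL[dy, dx] * f[dy:dy + sm.shape[0],
                                          dx:dx + sm.shape[1]]
    return 100.0 * (sm.max() - sm.min()) / (sm.max() + sm.min())
```

    A perfectly flat flood yields 0%; a single hot pixel 20% above a flat background is damped by the smoothing before entering the max/min formula.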

  12. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of that quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in radiation density of the exposure, to maintain the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index for maintaining its detectability and causing the proper centering of the radiation camera image

  13. Research on Space Debris Observation Technologies

    OpenAIRE

    Nakajima, Atsushi; Yanagisawa, Toshifumi; Kurosaki, Hirohisa; 中島 厚; 柳沢 俊史; 黒崎 裕久

    2006-01-01

    For the development of optical observation technologies for space debris, the Institute of Aerospace Technology (IAT) of JAXA has constructed a preliminary optical observation facility at Nyukasayama mountain in Nagano Prefecture. A 35 cm Newtonian optical telescope with a 2K×2K CCD camera is the main equipment. The telescope is located at an altitude of 1,870 meters. The optical environment of this observation site provides good conditions for faint-object detection; 21st magnitude asteroids can b...

  14. Hardware Middleware for Person Tracking on Embedded Distributed Smart Cameras

    Directory of Open Access Journals (Sweden)

    Ali Akbar Zarezadeh

    2012-01-01

    Full Text Available Tracking individuals is a prominent application in such domains like surveillance or smart environments. This paper provides a development of a multiple camera setup with jointed view that observes moving persons in a site. It focuses on a geometry-based approach to establish correspondence among different views. The expensive computational parts of the tracker are hardware accelerated via a novel system-on-chip (SoC design. In conjunction with this vision application, a hardware object request broker (ORB middleware is presented as the underlying communication system. The hardware ORB provides a hardware/software architecture to achieve real-time intercommunication among multiple smart cameras. Via a probing mechanism, a performance analysis is performed to measure network latencies, that is, time traversing the TCP/IP stack, in both software and hardware ORB approaches on the same smart camera platform. The empirical results show that using the proposed hardware ORB as client and server in separate smart camera nodes will considerably reduce the network latency up to 100 times compared to the software ORB.

  15. Tracking Non-stellar Objects on Ground and in Space

    DEFF Research Database (Denmark)

    Riis, Troels; Jørgensen, John Leif

    1999-01-01

    Many space exploration missions require a fast, early and accurate detection of a specific target, e.g. missions to asteroids, x-ray source missions or interplanetary missions. A second generation star tracker may be used for accurate detection of non-stellar objects of interest for such missions…, simply by listing all objects detected in an image not being identified as a star. Of course a lot of deep space objects will be listed too, especially if the detection threshold is set to let faint objects pass through. Assuming a detection threshold of, say mv 7 (the Hipparcos catalogue is complete… objects that do not move. For stationary objects no straightforward procedure exists to reduce the size of the list, but in the case the user has an approximate knowledge of which area to search the amount of data may be reduced substantially. In the case of a mission to an asteroid, the above described...
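
    The listing logic described above can be sketched as two matching passes: detections are first compared against the star catalogue, and the surviving unknowns against a later exposure so that stationary deep-space objects can be discarded. Names, the flat angular-distance metric, and the tolerance are illustrative assumptions, not the tracker's actual pipeline.

```python
from math import hypot

def non_stellar_candidates(detections, catalogue, later_frame, tol=0.01):
    """Keep detections (x, y in degrees) that match no catalogue star and
    have no counterpart at the same position in a later exposure, i.e.
    objects that are both unidentified and moving. Illustrative sketch."""
    def matches(p, pts):
        return any(hypot(p[0] - q[0], p[1] - q[1]) < tol for q in pts)

    unknown = [p for p in detections if not matches(p, catalogue)]
    # A moving target shows up displaced in the later frame, so its
    # original position finds no match there.
    return [p for p in unknown if not matches(p, later_frame)]
```

    With two stars, one stationary unknown, and one slow mover, only the mover survives both passes.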

  16. Fog camera to visualize ionizing charged particles

    International Nuclear Information System (INIS)

    Trujillo A, L.; Rodriguez R, N. I.; Vega C, H. R.

    2014-10-01

    Human beings cannot perceive the different types of ionizing radiation, natural or artificial, present in nature, so appropriate detection systems have been developed according to their sensitivity to particular radiation types and energies. The objective of this work was to build a fog camera to visualize the traces, and to identify the trajectories, produced by high-energy charged particles, coming mainly from cosmic rays. Cosmic rays originate partly in solar radiation generated by solar eruptions, in which protons make up most of the radiation, and partly in galactic radiation, composed mainly of charged particles and gamma rays coming from outside the solar system. These radiation types have energies millions of times higher than those detected at the Earth's surface, becoming more important as the altitude above sea level increases. In their interactions these particles produce secondary particles that are detectable with this type of camera. The camera operates by means of an atmosphere saturated with alcohol vapor. The moment a charged particle crosses the cold region of the atmosphere, the medium is ionized and the particle acts as a condensation nucleus for the alcohol vapor, leaving a visible trace of its trajectory. The camera built was very stable, allowing continuous detection and the observation of diverse events. (Author)

  17. IMAGE CAPTURE WITH SYNCHRONIZED MULTIPLE-CAMERAS FOR EXTRACTION OF ACCURATE GEOMETRIES

    Directory of Open Access Journals (Sweden)

    M. Koehl

    2016-06-01

    Full Text Available This paper presents a project of recording and modelling tunnels, traffic circles and roads from multiple sensors. The aim is the representation and the accurate 3D modelling of a selection of road infrastructures as dense point clouds in order to extract profiles and metrics from them. Indeed, these models will be used for the sizing of infrastructures in order to simulate exceptional convoy truck routes. The objective is to extract directly from the point clouds the heights, widths and lengths of bridges and tunnels, the diameter of gyratories, and to highlight potential obstacles for a convoy. Light, mobile and fast acquisition approaches based on images and videos from a set of synchronized sensors have been tested in order to obtain useable point clouds. The presented solution is based on a combination of multiple low-cost cameras designed on an on-board device allowing dynamic captures. The experimental device containing GoPro Hero4 cameras has been set up and used for tests in static or mobile acquisitions. In that way, various configurations have been tested by using multiple synchronized cameras. These configurations are discussed in order to highlight the best operational configuration according to the shape of the acquired objects. As the precise calibration of each sensor and its optics is a major factor in the process of creating accurate dense point clouds, and in order to reach the best quality available from such cameras, the estimation of the internal parameters of the fisheye lenses of the cameras has been processed. Reference measures were also realized by using a 3D TLS (Faro Focus 3D) to allow the accuracy assessment.

  18. Image Capture with Synchronized Multiple-Cameras for Extraction of Accurate Geometries

    Science.gov (United States)

    Koehl, M.; Delacourt, T.; Boutry, C.

    2016-06-01

    This paper presents a project of recording and modelling tunnels, traffic circles and roads from multiple sensors. The aim is the representation and the accurate 3D modelling of a selection of road infrastructures as dense point clouds in order to extract profiles and metrics from them. Indeed, these models will be used for the sizing of infrastructures in order to simulate exceptional convoy truck routes. The objective is to extract directly from the point clouds the heights, widths and lengths of bridges and tunnels, the diameter of gyratories, and to highlight potential obstacles for a convoy. Light, mobile and fast acquisition approaches based on images and videos from a set of synchronized sensors have been tested in order to obtain useable point clouds. The presented solution is based on a combination of multiple low-cost cameras designed on an on-board device allowing dynamic captures. The experimental device containing GoPro Hero4 cameras has been set up and used for tests in static or mobile acquisitions. In that way, various configurations have been tested by using multiple synchronized cameras. These configurations are discussed in order to highlight the best operational configuration according to the shape of the acquired objects. As the precise calibration of each sensor and its optics is a major factor in the process of creating accurate dense point clouds, and in order to reach the best quality available from such cameras, the estimation of the internal parameters of the fisheye lenses of the cameras has been processed. Reference measures were also realized by using a 3D TLS (Faro Focus 3D) to allow the accuracy assessment.

  19. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gener...

  20. CCD Camera Lens Interface for Real-Time Theodolite Alignment

    Science.gov (United States)

    Wake, Shane; Scott, V. Stanley, III

    2012-01-01

    Theodolites are a common instrument in the testing, alignment, and building of various systems ranging from a single optical component to an entire instrument. They provide a precise way to measure horizontal and vertical angles. They can be used to align multiple objects in a desired way at specific angles. They can also be used to reference a specific location or orientation of an object that has moved. Some systems may require a small margin of error in position of components. A theodolite can assist with accurately measuring and/or minimizing that error. The technology is an adapter for a CCD camera with lens to attach to a Leica Wild T3000 Theodolite eyepiece that enables viewing on a connected monitor, and thus can be utilized with multiple theodolites simultaneously. This technology removes a substantial part of human error by relying on the CCD camera and monitors. It also allows image recording of the alignment, and therefore provides a quantitative means to measure such error.

  1. A system and a method for detecting the position of an object

    International Nuclear Information System (INIS)

    Brown, M.H.; Harrison, J.G.

    1982-01-01

    The position of an object, e.g. a manipulator, in an enclosure is detected by two video cameras, from which signals representative of the images in the cameras are supplied to a mini-computer. The mini-computer scans the signals to detect the position of the object in the signals, and relates this position to the spatial coordinates of the object in the enclosure. Means are provided for controlling the movement of the object within the enclosure, which may be a hostile, e.g. radioactive, environment. (author)

  2. Modulated electron-multiplied fluorescence lifetime imaging microscope: all-solid-state camera for fluorescence lifetime imaging.

    Science.gov (United States)

    Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted

    2012-12-01

    We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge coupled device design in a frequency-domain FLIM system. The first stage of evaluation for the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to do lifetime measurement using our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and a better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.
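
    For a single-exponential decay, frequency-domain FLIM yields two textbook lifetime estimates from the measured phase shift φ and modulation depth m at modulation frequency f: τ_φ = tan(φ)/ω and τ_m = sqrt(1/m² − 1)/ω, with ω = 2πf. The helper below is a generic sketch of these standard relations, not code from the MEM-FLIM system; the function name is an assumption.

```python
import math

def lifetimes_from_phasor(phase_rad: float, mod_depth: float, f_hz: float):
    """Phase and modulation lifetime estimates for a single-exponential
    decay measured at modulation frequency f_hz (frequency-domain FLIM):
    tau_phi = tan(phi) / omega,  tau_mod = sqrt(1/m**2 - 1) / omega."""
    omega = 2.0 * math.pi * f_hz
    tau_phi = math.tan(phase_rad) / omega
    tau_mod = math.sqrt(1.0 / mod_depth ** 2 - 1.0) / omega
    return tau_phi, tau_mod
```

    For a true single-exponential decay the two estimates agree; a discrepancy between them is a common indicator of multi-exponential behaviour.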

  3. Identification of faint central stars in extended, low-surface-brightness planetary nebulae

    International Nuclear Information System (INIS)

    Kwitter, K.B.; Lydon, T.J.; Jacoby, G.H.

    1988-01-01

    As part of a larger program to study the properties of planetary nebula central stars, a search for faint central stars in extended, low-surface-brightness planetary nebulae using CCD imaging is performed. Of 25 target nebulae, central star candidates have been identified in 17, with certainties ranging from extremely probable to possible. Observed V values in the central star candidates extend to fainter than 23 mag. The identifications are presented along with the resulting photometric measurements. 24 references

  4. The Evolution in the Faint-End Slope of the Quasar Luminosity Function

    OpenAIRE

    Hopkins, Philip F.; Hernquist, Lars; Cox, Thomas J.; Di Matteo, Tiziana; Robertson, Brant; Springel, Volker

    2005-01-01

    (Abridged) Based on numerical simulations of galaxy mergers that incorporate black hole (BH) growth, we predict the faint end slope of the quasar luminosity function (QLF) and its evolution with redshift. Our simulations have yielded a new model for quasar lifetimes where the lifetime depends on both the instantaneous and peak quasar luminosities. This motivates a new interpretation of the QLF in which the bright end consists of quasars radiating at nearly their peak luminosities, but the fai...

  5. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  6. Depth profile measurement with lenslet images of the plenoptic camera

    Science.gov (United States)

    Yang, Peng; Wang, Zhaomin; Zhang, Wei; Zhao, Hongying; Qu, Weijuan; Zhao, Haimeng; Asundi, Anand; Yan, Lei

    2018-03-01

    An approach for carrying out depth profile measurement of an object with the plenoptic camera is proposed. A single plenoptic image consists of multiple lenslet images. To begin with, these images are processed directly with a refocusing technique to obtain the depth map, which does not need to align and decode the plenoptic image. Then, a linear depth calibration is applied based on the optical structure of the plenoptic camera for depth profile reconstruction. One significant improvement of the proposed method concerns the resolution of the depth map. Unlike the traditional method, our resolution is not limited by the number of microlenses inside the camera, and the depth map can be globally optimized. We validated the method with experiments on depth map reconstruction, depth calibration, and depth profile measurement, with the results indicating that the proposed approach is both efficient and accurate.
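
    The linear depth calibration step can be illustrated as a least-squares fit z = a·d + b between the raw refocus-derived depth values d and reference distances z (e.g. from a translation stage). The function name and data are hypothetical; the paper's calibration is derived from the camera's optical structure rather than fitted this simply.

```python
import numpy as np

def calibrate_depth(raw_depth: np.ndarray, true_depth: np.ndarray):
    """Least-squares fit of the linear calibration z = a*d + b, mapping
    uncalibrated refocus depth values d to reference distances z.
    Returns the coefficients (a, b)."""
    A = np.column_stack([raw_depth, np.ones_like(raw_depth)])
    (a, b), *_ = np.linalg.lstsq(A, true_depth, rcond=None)
    return a, b
```

    Once (a, b) are known, the whole raw depth map can be converted to metric depth in one vectorized operation.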

  7. A space-time tomography algorithm for the five-camera soft X-ray diagnostic at RTP

    Energy Technology Data Exchange (ETDEWEB)

    Lyadina, E.S.; Tanzi, C.P.; Cruz, D.F. da; Donne, A.J.H. [FOM-Instituut voor Plasmafysica, Rijnhuizen (Netherlands)

    1993-12-31

    A five-camera soft x-ray diagnostic with 80 detector channels has been installed on the RTP tokamak with the object of studying MHD processes with a relatively high poloidal mode number (m=4). The numerical tomographic reconstruction algorithms used to reconstruct the plasma emissivity profile are constrained by the characteristics of the system. In particular, high poloidal harmonics, which can be resolved owing to the high number of cameras, can be strongly distorted by stochastic and systematic errors. Furthermore, small uncertainties in the relative position of the cameras in a multiple-camera system can lead to strong artefacts in the reconstruction. (author) 6 refs., 4 figs.
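
    The abstract does not spell out the reconstruction algorithm, but a common baseline for this kind of chord-integral inversion is a Kaczmarz-type ART iteration, sketched below with a hypothetical geometry matrix G whose row i holds the path weights of detector chord i through the emissivity grid.

```python
import numpy as np

def art_reconstruct(G: np.ndarray, p: np.ndarray,
                    n_iter: int = 50, relax: float = 0.5) -> np.ndarray:
    """Kaczmarz-type ART: starting from zero emissivity, repeatedly project
    the estimate onto the hyperplane of each chord measurement p_i = G_i . x.
    relax < 1 damps the updates in the presence of noise."""
    x = np.zeros(G.shape[1])
    for _ in range(n_iter):
        for gi, pi in zip(G, p):
            norm2 = gi @ gi
            if norm2 > 0.0:
                x += relax * (pi - gi @ x) / norm2 * gi
    return x
```

    On a consistent noiseless system the iteration drives the chord residuals to zero; in practice regularization and careful camera-position calibration are needed to suppress the artefacts the abstract mentions.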

  8. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    section unearths what characterizes the literature on camera movement. The second section of the dissertation delineates the history of camera movement itself within narrative cinema. Several organizational principles subtending the on-screen effect of camera movement are revealed in section two...... but they are not organized into a coherent framework. This is the task that section three meets in proposing a functional taxonomy for camera movement in narrative cinema. Two presumptions subtend the taxonomy: That camera movement actively contributes to the way in which we understand the sound and images on the screen......, commentative or valuative manner. 4) Focalization: associating the movement of the camera with the viewpoints of characters or entities in the story world. 5) Reflexive: inviting spectators to engage with the artifice of camera movement. 6) Abstract: visualizing abstract ideas and concepts. In order...

  9. On the Nature of Ultra-faint Dwarf Galaxy Candidates. I. DES1, Eridanus III, and Tucana V

    Science.gov (United States)

    Conn, Blair C.; Jerjen, Helmut; Kim, Dongwon; Schirmer, Mischa

    2018-01-01

    We use deep Gemini/GMOS-S g, r photometry to study the three ultra-faint dwarf galaxy candidates DES1, Eridanus III (Eri III), and Tucana V (Tuc V). Their total luminosities, M_V(DES1) = -1.42 ± 0.50 and M_V(Eri III) = -2.07 ± 0.50, and mean metallicities, [Fe/H] = -2.38 (+0.21/-0.19) and [Fe/H] = -2.40 (+0.19/-0.12), are consistent with them being ultra-faint dwarf galaxies, as they fall just outside the 1σ confidence band of the luminosity-metallicity relation for Milky Way satellite galaxies. However, their positions in the size-luminosity relation suggest that they are star clusters. Interestingly, DES1 and Eri III are at relatively large Galactocentric distances, with DES1 located at D_GC = 74 ± 4 kpc and Eri III at D_GC = 91 ± 4 kpc. In projection, both objects are in the tail of gaseous filaments trailing the Magellanic Clouds and have similar 3D separations from the Small Magellanic Cloud (SMC): ΔD_SMC,DES1 = 31.7 kpc and ΔD_SMC,EriIII = 41.0 kpc, respectively. It is plausible that these stellar systems are metal-poor SMC satellites. Tuc V represents an interesting phenomenon in its own right. Our deep photometry at the nominal position of Tuc V reveals a low-level excess of stars at various locations across the GMOS field without a well-defined center. An SMC Northern Overdensity-like isochrone would be an adequate match to the Tuc V color-magnitude diagram, and the proximity to the SMC (12.1°, ΔD_SMC,TucV = 13 kpc) suggests that Tuc V is either a chance grouping of stars related to the SMC halo or a star cluster in an advanced stage of dissolution.

  10. An evaluation of the effectiveness of observation camera placement within the MeerKAT radio telescope project

    Directory of Open Access Journals (Sweden)

    Heyns, Andries

    2015-08-01

    Full Text Available A recent development within the MeerKAT sub-project of the Square Kilometre Array radio telescope network was the placement of a network of three observation cameras in pursuit of two specific visibility objectives. In this paper, we evaluate the effectiveness of the locations of the MeerKAT observation camera network according to a novel multi-objective geographic information systems-based facility location framework. We find that the configuration chosen and implemented by the MeerKAT decision-makers is of very high quality, although we are able to uncover slightly superior alternative placement configurations. A significant amount of time and effort could, however, have been saved in the process of choosing the appropriate camera sites, had our solutions been available to the decision-makers.

  11. Visual fatigue modeling for stereoscopic video shot based on camera motion

    Science.gov (United States)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3-D display technology. Causes of visual discomfort in stereoscopic video include the conflict between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and comfortable zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on specific objects when the camera and background are static; relative motion should be considered for different camera conditions, which determine different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue prediction model is presented. The visual fatigue degree is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and a total visual fatigue score can be computed with the proposed algorithm. Compared with conventional algorithms, which ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
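The prediction step described above amounts to a multiple linear regression of subjective fatigue ratings on shot-level factors. A minimal sketch, with invented factor values and ratings standing in for the paper's actual features and subjective scores:

```python
import numpy as np

# Invented shot-level factors loosely named after the abstract:
# [spatial structure, motion scale, comfort-zone violation] per shot.
X = np.array([
    [0.2, 0.1, 0.0],
    [0.5, 0.4, 0.2],
    [0.8, 0.7, 0.5],
    [0.3, 0.9, 0.4],
    [0.6, 0.2, 0.1],
])
y = np.array([1.1, 2.4, 4.0, 3.2, 1.9])  # invented subjective fatigue ratings

# Multiple linear regression: add an intercept column, solve least squares.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_fatigue(factors):
    """Predict a fatigue score for one shot from its factor vector."""
    return float(np.append(factors, 1.0) @ coef)
```

The fitted coefficients play the role of the factor weights mentioned in the abstract; the paper additionally makes these weights depend on the camera-motion condition.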

  12. Efficient view based 3-D object retrieval using Hidden Markov Model

    Science.gov (United States)

    Jain, Yogendra Kumar; Singh, Roshan Kumar

    2013-12-01

    Recent research effort has been dedicated to view-based 3-D object retrieval, because 3-D objects have highly discriminative multi-view representations. State-of-the-art methods depend heavily on their own camera-array settings for capturing views of a 3-D object, and use a complex Zernike descriptor and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. Therefore, an efficient and effective algorithm is required for 3-D object retrieval. In order to move toward a general framework that is independent of the camera-array setting and avoids representative view selection, we propose an Efficient View-Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, meaning views can be captured from any direction without any camera-array restriction. The views (including query views) are clustered to generate view clusters, which are then used to build the query model with an HMM. The HMM is used in two ways: in training (HMM estimation) and in retrieval (HMM decoding). The query model is trained using these view clusters, and retrieval works by scoring against the query model combined with the HMM. The proposed approach removes the static camera-array setting for view capturing and can be applied to any 3-D object database to retrieve 3-D objects efficiently and effectively. Experimental results demonstrate that the proposed scheme performs better than existing methods.
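The retrieval step scores a query's sequence of view-cluster labels against a model; a higher likelihood means a better match. A minimal, self-contained sketch of that scoring step (the forward algorithm for a discrete HMM; all parameters below are invented, whereas EVBOR would estimate them from the clustered views):

```python
import numpy as np

def hmm_forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM
    (forward algorithm with per-step normalization for stability)."""
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha = alpha / s
    return loglik

# Toy 2-state HMM over 3 view-cluster labels (invented parameters).
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3],
                  [0.2, 0.8]])
emit = np.array([[0.5, 0.4, 0.1],
                 [0.1, 0.3, 0.6]])

# Retrieval-style scoring of a query's view-cluster label sequence.
score = hmm_forward_loglik([0, 1, 2, 2], start, trans, emit)
```

Ranking candidate objects by this log-likelihood is the essence of the HMM-decode stage; the HMM-estimate stage would fit `start`, `trans` and `emit` from training view clusters.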

  13. STELLAR ARCHEOLOGY IN THE GALACTIC HALO WITH ULTRA-FAINT DWARFS. VII. HERCULES

    Energy Technology Data Exchange (ETDEWEB)

    Musella, Ilaria; Ripepi, Vincenzo; Marconi, Marcella, E-mail: ilaria@na.astro.it, E-mail: ripepi@na.astro.it, E-mail: marcella@na.astro.it [INAF, Osservatorio Astronomico di Capodimonte, I-8013 Napoli (Italy); and others

    2012-09-10

    We present the first time-series study of the ultra-faint dwarf galaxy Hercules. Using a variety of telescope/instrument facilities we secured about 50 V and 80 B epochs. These data allowed us to detect and characterize 10 pulsating variable stars in Hercules. Our final sample includes six fundamental-mode (ab-type) and three first-overtone (c-type) RR Lyrae stars, and one Anomalous Cepheid. The average period of the ab-type RR Lyrae stars, ⟨P_ab⟩ = 0.68 days (σ = 0.03 days), places Hercules in the Oosterhoff II group, as found for almost all of the ultra-faint dwarf galaxies investigated so far for variability. The RR Lyrae stars were used to obtain independent estimates of the metallicity, reddening, and distance to Hercules, for which we find [Fe/H] = -2.30 ± 0.15 dex, E(B - V) = 0.09 ± 0.02 mag, and (m - M)_0 = 20.6 ± 0.1 mag, in good agreement with the literature values. We have obtained a V, B - V color-magnitude diagram (CMD) of Hercules that reaches V ≈ 25 mag and extends beyond the galaxy's half-light radius over a total area of 40' × 36'. The CMD and the RR Lyrae stars indicate the presence of a population as old and metal-poor as (at least) the Galactic globular cluster M68.
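As a quick check of scale, the distance modulus for Hercules quoted above, (m - M)_0 = 20.6 mag, converts to a physical distance via the standard relation d = 10^((m - M)_0/5 + 1) pc:

```python
# Distance from the true distance modulus (m - M)_0 = 20.6 mag.
mu = 20.6
distance_kpc = 10 ** (1 + mu / 5) / 1000  # ~132 kpc
```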

  14. Real-time object detection, tracking and occlusion reasoning

    Science.gov (United States)

    Divakaran, Ajay; Yu, Qian; Tamrakar, Amir; Sawhney, Harpreet Singh; Zhu, Jiejie; Javed, Omar; Liu, Jingen; Cheng, Hui; Eledath, Jayakrishnan

    2018-02-27

    A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.

  15. Photogrammetry of a 5m Inflatable Space Antenna With Consumer Digital Cameras

    Science.gov (United States)

    Pappa, Richard S.; Giersch, Louis R.; Quagliaroli, Jessica M.

    2000-01-01

    This paper discusses photogrammetric measurements of a 5m-diameter inflatable space antenna using four Kodak DC290 (2.1 megapixel) digital cameras. The study had two objectives: 1) Determine the photogrammetric measurement precision obtained using multiple consumer-grade digital cameras and 2) Gain experience with new commercial photogrammetry software packages, specifically PhotoModeler Pro from Eos Systems, Inc. The paper covers the eight steps required using this hardware/software combination. The baseline data set contained four images of the structure taken from various viewing directions. Each image came from a separate camera. This approach simulated the situation of using multiple time-synchronized cameras, which will be required in future tests of vibrating or deploying ultra-lightweight space structures. With four images, the average measurement precision for more than 500 points on the antenna surface was less than 0.020 inches in-plane and approximately 0.050 inches out-of-plane.

  16. a Uav-Based Low-Cost Stereo Camera System for Archaeological Surveys - Experiences from Doliche (turkey)

    Science.gov (United States)

    Haubeck, K.; Prinz, T.

    2013-08-01

    The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and -objects using UAV-attached digital small-frame cameras. These monoscopic cameras offer the possibility to obtain close-range aerial photographs, but when choppy or windy weather conditions prevent an accurate nadir-waypoint flight, two single aerial images do not always have the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from slightly different angles at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g. the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited geometric photobase of the applied stereo camera and the resulting base-height ratio, however, the accuracy of the DTM depends directly on the UAV flight altitude.

  17. AN APPARATUS AND A METHOD OF RECORDING AN IMAGE OF AN OBJECT

    DEFF Research Database (Denmark)

    1999-01-01

    The invention relates to a method of recording an image of an object (103) using an electronic camera (102), one or more light sources (104), and means for light distribution (105), where light emitted from the light sources (104) is distributed to illuminate the object (103), light being reflected...... to the camera (102). In the light distribution, an integrating cavity (106) is used to whose inner side (107) a light reflecting coating has been applied, and which is provided with first and second openings (109, 110). The camera (102) is placed in alignment with the first opening (109) so that the optical...

  18. User-assisted visual search and tracking across distributed multi-camera networks

    Science.gov (United States)

    Raja, Yogesh; Gong, Shaogang; Xiang, Tao

    2011-11-01

    Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.

  19. Slow speed object detection for haul trucks

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2009-09-15

    Caterpillar integrates radar technology with its current camera based system. Caterpillar has developed the Integrated Object Detection System, a slow speed object detection system for mining haul trucks. Object detection is a system that aids the truck operator's awareness of their surroundings. The system consists of a color touch screen display along with medium- and short-range radar as well as cameras, harnesses and mounting hardware. It is integrated into the truck's Work Area Vision System (WAVS). After field testing in 2007, system commercialization began in 2008. Prototype systems are in operation in Australia, Utah and Arizona and the Integrated Object Detection System will be available in the fourth quarter of 2009 and on production trucks 785C, 789C, 793D and 797B. The article is adapted from a presentation by Mark Richards of Caterpillar to the Haulage & Loading 2009 conference, May, held in Phoenix, AZ. 1 fig., 5 photos.

  20. Children's everyday exposure to food marketing: an objective analysis using wearable cameras.

    Science.gov (United States)

    Signal, L N; Stanley, J; Smith, M; Barr, M B; Chambers, T J; Zhou, J; Duane, A; Gurrin, C; Smeaton, A F; McKerchar, C; Pearson, A L; Hoek, J; Jenkin, G L S; Ni Mhurchu, C

    2017-10-08

    Over the past three decades the global prevalence of childhood overweight and obesity has increased by 47%. Marketing of energy-dense nutrient-poor foods and beverages contributes to this worldwide increase. Previous research on food marketing to children largely uses self-report, reporting by parents, or third-party observation of children's environments, with the focus mostly on single settings and/or media. This paper reports on innovative research, Kids'Cam, in which children wore cameras to examine the frequency and nature of everyday exposure to food marketing across multiple media and settings. Kids'Cam was a cross-sectional study of 168 children (mean age 12.6 years, SD = 0.5) in Wellington, New Zealand. Each child wore a wearable camera on four consecutive days, capturing images automatically every seven seconds. Images were manually coded as either recommended (core) or not recommended (non-core) to be marketed to children by setting, marketing medium, and product category. Images in convenience stores and supermarkets were excluded as marketing examples were considered too numerous to count. On average, children were exposed to non-core food marketing 27.3 times a day (95% CI 24.8, 30.1) across all settings. This was more than twice their average exposure to core food marketing (12.3 per day, 95% CI 8.7, 17.4). Most non-core exposures occurred at home (33%), in public spaces (30%) and at school (19%). Food packaging was the predominant marketing medium (74% and 64% for core and non-core foods) followed by signs (21% and 28% for core and non-core). Sugary drinks, fast food, confectionery and snack foods were the most commonly encountered non-core foods marketed. Rates were calculated using Poisson regression. Children in this study were frequently exposed, across multiple settings, to marketing of non-core foods not recommended to be marketed to children. The study provides further evidence of the need for urgent action to reduce children's exposure to
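The daily figures above are rates over child-days of observation. A minimal sketch of such a rate with a normal-approximation Poisson interval (the study itself used Poisson regression, which accounts for between-child variation; the counts below are invented to roughly match the reported 27.3 exposures per day):

```python
import math

def poisson_rate_ci(total_events, exposure_days, z=1.96):
    """Mean event rate per day with a normal-approximation 95% Poisson CI."""
    rate = total_events / exposure_days
    half = z * math.sqrt(total_events) / exposure_days
    return rate, rate - half, rate + half

# Invented totals: 4586 non-core exposures over 168 child-days of observation.
rate, lo, hi = poisson_rate_ci(4586, 168.0)
```

The interval reported in the paper is wider than this naive one because the regression accounts for clustering of exposures within children.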

  1. Mixel camera--a new push-broom camera concept for high spatial resolution keystone-free hyperspectral imaging.

    Science.gov (United States)

    Høye, Gudrun; Fridman, Andrei

    2013-05-06

    Current high-resolution push-broom hyperspectral cameras introduce keystone errors to the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while at the same time the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component--an array of light mixing chambers--with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data that was recorded onto the sensor with large keystone. A Virtual Camera software, that was developed specifically for this purpose, was used to compare the performance of the mixel camera to traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that the mixel camera will be photon-noise limited--even in bright light--with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.
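The restoration step can be illustrated as linear unmixing: if each sensor pixel records a known mixture of neighboring scene values (as the light-mixing chambers make the response to a keystone shift predictable), the keystone-free signal is recovered by solving a linear system. A 1-D toy sketch with invented weights and values, not the authors' actual restoration method:

```python
import numpy as np

shift = 0.3                               # assumed sub-pixel keystone shift
scene = np.array([5.0, 2.0, 8.0, 3.0])    # true keystone-free signal

# Hypothetical mixing matrix: each scene pixel spreads its light over two
# adjacent sensor pixels with weights set by the (known) keystone shift.
A = np.zeros((5, 4))
for i in range(4):
    A[i, i] = 1.0 - shift
    A[i + 1, i] = shift

recorded = A @ scene                      # what lands on the sensor
restored, *_ = np.linalg.lstsq(A, recorded, rcond=None)
```

Because the mixing weights are known, the least-squares solution recovers the scene exactly in this noise-free toy case; with photon noise the restoration degrades gracefully, which is consistent with the photon-noise-limited performance claimed above.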

  2. Engineering task plan for Tanks 241-AN-103, 104, 105 color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1994-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and installation of the video camera systems into the vapor space within tanks 241-AN-103, 104, and 105. The one camera remotely operated color video systems will be used to observe and record the activities within the vapor space. Activities may include but are not limited to core sampling, auger activities, crust layer examination, monitoring of equipment installation/removal, and any other activities. The objective of this task is to provide a single camera system in each of the tanks for the Flammable Gas Tank Safety Program

  3. VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras

    Science.gov (United States)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-08-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC) and Institut D'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual channel analog readout and an internally mounted cold block. At the flight CCD temperature of -20C, the CLASP cameras exceeded the low-noise performance requirements. We discuss the VUV testing of UV, EUV and X-ray science cameras at MSFC.

  4. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from 16N decay gammas in dedicated flowing-water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with 16N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  5. Measuring the Angular Velocity of a Propeller with Video Camera Using Electronic Rolling Shutter

    Directory of Open Access Journals (Sweden)

    Yipeng Zhao

    2018-01-01

    Full Text Available Noncontact measurement of rotational motion has advantages over the traditional method, which measures rotation by installing devices such as a rotary encoder on the object. Cameras can be employed as remote monitoring or inspection sensors to measure the angular velocity of a propeller because of their commonplace availability, simplicity, and potentially low cost. A drawback of camera-based measurement is the massive data generated by the cameras. To reduce the collected data, a camera using an ERS (electronic rolling shutter) is applied to measure angular velocities that are higher than the speed of the camera. The rolling shutter induces geometric distortion in the image when the propeller rotates while an image is being captured. To reveal the relationship between the angular velocity and the image distortion, a rotation model has been established. The proposed method was applied to measure the angular velocities of a two-blade propeller and a multiblade propeller. The experimental results show that the method can detect angular velocities higher than the camera speed with acceptable accuracy.
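The core timing idea can be sketched simply: rows of an ERS sensor are exposed sequentially, so a blade observed at two different rows is observed at two different times, and the apparent rotation between those rows yields the angular velocity. All parameters below are invented for illustration; the paper's full rotation model is more detailed.

```python
import math

# Assumed line (row) readout time of the ERS sensor, in seconds (invented).
T_LINE = 20e-6

def angular_velocity(theta1_deg, row1, theta2_deg, row2, t_line=T_LINE):
    """Angular velocity (rad/s) from the blade angle measured at two
    sensor rows of a single rolling-shutter frame."""
    dt = (row2 - row1) * t_line            # time between the two row exposures
    dtheta = math.radians(theta2_deg - theta1_deg)
    return dtheta / dt

# A blade that appears rotated by 6 degrees between rows 100 and 600:
omega = angular_velocity(10.0, 100, 16.0, 600)
```

With a 20 µs line time, even a small apparent skew within one frame corresponds to a rotation rate far above the camera's frame rate, which is why the method can measure speeds higher than the camera speed.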

  6. A Keck/DEIMOS spectroscopic survey of the faint M31 satellites AndIX, AndXI, AndXII and AndXIII†

    Science.gov (United States)

    Collins, M. L. M.; Chapman, S. C.; Irwin, M. J.; Martin, N. F.; Ibata, R. A.; Zucker, D. B.; Blain, A.; Ferguson, A. M. N.; Lewis, G. F.; McConnachie, A. W.; Peñarrubia, J.

    2010-10-01

    We present the first spectroscopic analysis of the faint M31 satellite galaxies AndXI and AndXIII, as well as a re-analysis of existing spectroscopic data for two further faint companions, AndIX (correcting for an error in earlier geometric modelling that caused a misclassification of member stars in previous work) and AndXII. By combining data obtained using the Deep Imaging Multi-Object Spectrograph (DEIMOS) mounted on the Keck II telescope with deep photometry from the Suprime-Cam instrument on Subaru, we have identified the most probable members for each of the satellites based on their radial velocities (precise to several down to i ~ 22), distance from the centre of the dwarf spheroidal galaxies (dSphs) and their photometric [Fe/H]. Using both the photometric and spectroscopic data, we have also calculated global properties for the dwarfs, such as systemic velocities, metallicities and half-light radii. We find each dwarf to be very metal poor ([Fe/H] ~ -2, both photometrically and spectroscopically from their stacked spectra), and as such, they continue to follow the luminosity-metallicity relationship established with brighter dwarfs. We are unable to resolve the dispersion for AndXI due to the small sample size and low signal-to-noise ratio, but we set a 1σ upper limit of σv.

  7. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many

  8. Determining fast orientation changes of multi-spectral line cameras from the primary images

    Science.gov (United States)

    Wohlfeil, Jürgen

    2012-01-01

    Fast orientation changes of airborne and spaceborne line cameras cannot always be avoided. In such cases it is essential to measure them with high accuracy to ensure a good quality of the resulting imagery products. Several approaches exist to support the orientation measurement by using optical information received through the main objective/telescope. In this article an approach is proposed that allows the determination of non-systematic orientation changes between every captured line. It requires no additional camera hardware or onboard processing capabilities, only the payload images and a rough estimate of the camera's trajectory. The approach takes advantage of the typical geometry of multi-spectral line cameras, which have a set of linear sensor arrays for different spectral bands on the focal plane. First, homologous points are detected within the heavily distorted images of the different spectral bands. With their help a connected network of geometrical correspondences can be built up. This network is used to calculate the orientation changes of the camera with the temporal and angular resolution of the camera. The approach was tested with an extensive set of aerial surveys covering a wide range of conditions and achieved precise and reliable results.
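The timing geometry that makes this possible can be sketched as follows: two sensor lines for different spectral bands sit a fixed distance apart on the focal plane, so they image the same ground point at slightly different times, and an unexpected cross-track displacement of a homologous point between the bands implies an orientation change over that interval. All parameters below are invented for illustration.

```python
# Assumed sensor/optics parameters (invented for illustration).
LINE_PERIOD = 1e-3      # time between successive line captures, seconds
BAND_OFFSET = 80        # separation of the two spectral bands, in lines
IFOV = 1e-4             # instantaneous field of view, radians per pixel

def orientation_rate(displacement_px):
    """Angular rate (rad/s) implied by the unexpected cross-track
    displacement of a homologous point between the two bands."""
    dt = BAND_OFFSET * LINE_PERIOD      # time lag between the two bands
    return displacement_px * IFOV / dt

# A homologous point displaced by 2.5 pixels beyond its expected position:
rate = orientation_rate(2.5)
```

Chaining many such measurements into the connected network described above is what gives the method line-to-line temporal resolution rather than a single average rate.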

  9. A method of camera calibration in the measurement process with reference mark for approaching observation space target

    Science.gov (United States)

    Zhang, Hua; Zeng, Luan

    2017-11-01

    Binocular stereoscopic vision can be used for space-based close-range observation of space targets. In order to solve the problem that a traditional binocular vision system cannot work normally after interference, an online calibration method for a binocular stereo measuring camera with a self-reference is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object into the edge of the main optical path, imaged with the target on the same focal plane, which is equivalent to a standard reference inside the binocular imaging optical system. When the position of the system or the imaging device parameters are disturbed, the image of the standard reference changes accordingly in the imaging plane, while the position of the standard reference object itself does not change. The camera's external parameters can then be re-calibrated from the visual relationship to the standard reference object. The experimental results show that the maximum mean square error for the same object can be reduced from 72.88 mm to 1.65 mm when the right camera is deflected by 0.4° and the left camera is rotated by 0.2° in elevation. This method realizes online calibration of a binocular stereoscopic vision measurement system and can effectively improve the anti-jamming ability of the system.

  10. STS-37 Breakfast / Ingress / Launch & ISO Camera Views

    Science.gov (United States)

    1991-01-01

    The primary objective of the STS-37 mission was to deploy the Gamma Ray Observatory. The mission was launched at 9:22:44 am on April 5, 1991, onboard the space shuttle Atlantis. The mission was led by Commander Steven Nagel. The crew was Pilot Kenneth Cameron and Mission Specialists Jerry Ross, Jay Apt, and Linda Godwing. This videotape shows the crew having breakfast on the launch day, with the narrator introducing them. It then shows the crew's final preparations and the entry into the shuttle, while the narrator gives information about each of the crew members. The countdown and launch is shown including the shuttle separation from the solid rocket boosters. The launch is reshown from 17 different camera views. Some of the other camera views were in black and white.

  11. Improving Situational Awareness in camera surveillance by combining top-view maps with camera images

    NARCIS (Netherlands)

    Kooi, F.L.; Zeeders, R.

    2009-01-01

    The goal of the experiment described is to improve today's camera surveillance in public spaces. Three designs with the camera images combined on a top-view map were compared to each other and to the current situation in camera surveillance. The goal was to test which design makes spatial

  12. Do the enigmatic ``Infrared-Faint Radio Sources'' include pulsars?

    Science.gov (United States)

    Hobbs, George; Middelberg, Enno; Norris, Ray; Keith, Michael; Mao, Minnie; Champion, David

    2009-04-01

    The Australia Telescope Large Area Survey (ATLAS) team have surveyed seven square degrees of sky at 1.4 GHz. During processing, some unexpected infrared-faint radio sources (IFRS sources) were discovered. The nature of these sources is not understood, but it is possible that some of these sources may be pulsars within our own galaxy. We propose to observe the IFRS sources with steep spectral indices using standard search techniques to determine whether or not they are pulsars. A pulsar detection would 1) remove a subset of the IFRS sources from the ATLAS sample so they would not need to be observed with large optical/IR telescopes to find their hosts and 2) be intrinsically interesting, as the pulsar would be a millisecond pulsar and/or have an extreme spatial velocity.

  13. Quality assurance procedures for the IAEA Department of Safeguards Twin Minolta Camera Surveillance System

    International Nuclear Information System (INIS)

    Geoffrion, R.R.; Bussolini, P.L.; Stark, W.A.; Ahlquist, A.J.; Sanders, K.E.; Rubinstein, G.

    1986-01-01

    The International Atomic Energy Agency (IAEA) safeguards program provides assurance to the international community that nations are complying with nuclear safeguards treaties. In one aspect of the program, the Department of Safeguards has developed a twin Minolta camera photo surveillance system program to assure itself and the international community that material handling is accomplished according to safeguards treaty regulations. The camera systems are positioned at strategic locations in facilities such that objective evidence can be obtained for material transactions. The films are then processed, reviewed, and used to substantiate the conclusion that nuclear material has not been diverted. Procedures have been developed to document and aid in: 1) the performance of activities involved in positioning the camera systems; 2) installation of the systems; and 3) review and use of the film taken from the cameras

  14. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    Science.gov (United States)

    Champey, Patrick R.; Kobayashi, Ken; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512x512 detector, dual-channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20 C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and less than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultraviolet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to run several tests intended to verify the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.

  15. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, briefly describing the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, including a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  16. Accurate shear measurement with faint sources

    International Nuclear Information System (INIS)

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work in this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of the galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finite number of source photons. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images with short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias caused by source Poisson noise. Our noise treatment can be generalized to images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent-level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large-scale galaxy surveys
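
The record's Fourier-space estimator is not spelled out here; for context, the simplest real-space approach derives a galaxy ellipticity from image quadrupole moments, and it is precisely such moments that source Poisson noise biases. A minimal numpy sketch on a noiseless synthetic galaxy (the function and test image are illustrative assumptions, not the authors' method):

```python
import numpy as np

def ellipticity(img):
    """Ellipticity components from unweighted quadrupole moments of an image."""
    y, x = np.indices(img.shape, dtype=float)
    f = img / img.sum()                       # normalized surface brightness
    xc, yc = (f * x).sum(), (f * y).sum()     # centroid
    qxx = (f * (x - xc) ** 2).sum()
    qyy = (f * (y - yc) ** 2).sum()
    qxy = (f * (x - xc) * (y - yc)).sum()
    d = qxx + qyy
    return (qxx - qyy) / d, 2.0 * qxy / d

# Elliptical Gaussian, sigma_x = 4, sigma_y = 2: expected e1 = (16 - 4) / 20 = 0.6
y, x = np.indices((64, 64), dtype=float)
img = np.exp(-((x - 31.5) ** 2 / 32.0 + (y - 31.5) ** 2 / 8.0))
e1, e2 = ellipticity(img)
```

With photon noise added, these moment estimates become biased at low signal-to-noise, which is the effect the paper's treatment removes.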

  17. Counting neutrons with a commercial S-CMOS camera

    Science.gov (United States)

    Patrick, Van Esch; Paolo, Mutti; Emilio, Ruiz-Martinez; Estefania, Abad Garcia; Marita, Mosconi; Jon, Ortega

    2018-01-01

    It is possible to detect individual flashes from thermal neutron impacts in a ZnS scintillator using a CMOS camera looking at the scintillator screen, together with off-line image processing. Some preliminary results indicated that the efficiency of recognition could be improved by optimizing the light collection and the image processing. We report on this ongoing work, which results from the collaboration between ESS Bilbao and the ILL. The main progress to be reported concerns the on-line treatment of the imaging data. If this technology is to work on a genuine scientific instrument, all the processing must happen on line, to avoid the accumulation of large amounts of image data to be analyzed off line. An FPGA-based real-time full-deca mode VME-compatible CameraLink board has been developed at the SCI of the ILL, which is able to manage the data flow from the camera and convert it into a reasonable "neutron impact" data flow, as from a usual neutron counting detector. The main challenge of the endeavor is the optical light collection from the scintillator. While the light yield of a ZnS scintillator is a priori rather high, the amount of light collected with a photographic objective is small. Different scintillators and different light collection techniques have been experimented with, and results will be shown for different setups improving the light collection on the camera sensor. Improvements on the algorithm side will also be presented. The algorithms have to be efficient in their recognition of neutron signals and in their rejection of noise signals (internal and external to the camera), but also simple enough to be easily implemented in the FPGA. The path from the idea of detecting individual neutron impacts with a CMOS camera to a practical working instrument detector is challenging, and in this paper we give an overview of the part of the road that has already been walked.
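
At its simplest, the flash-recognition step described above amounts to thresholding each frame and keeping connected blobs larger than a single pixel, since isolated hot pixels are more likely sensor noise than scintillation light. A minimal numpy/scipy sketch on a synthetic frame (threshold and size cut are illustrative assumptions, not the ILL/ESS Bilbao pipeline):

```python
import numpy as np
from scipy import ndimage

def count_flashes(frame, thresh=0.5, min_pixels=2):
    """Count candidate neutron impacts: threshold the frame, label connected
    blobs, and reject single-pixel hits as probable camera noise."""
    mask = frame > thresh
    labels, n = ndimage.label(mask)                     # 4-connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))  # pixels per blob
    return int(np.count_nonzero(np.asarray(sizes) >= min_pixels))

# Synthetic frame: two multi-pixel flashes plus one isolated hot pixel.
frame = np.zeros((16, 16))
frame[3:5, 3:5] = 1.0     # 4-pixel flash
frame[10:12, 8] = 1.0     # 2-pixel flash
frame[14, 14] = 1.0       # hot pixel, should be rejected
```

Each operation here (threshold, label, size test) maps naturally onto streaming FPGA logic, which is why this style of algorithm suits the on-line treatment the authors describe.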

  18. Counting neutrons with a commercial S-CMOS camera

    Directory of Open Access Journals (Sweden)

    Patrick Van Esch

    2018-01-01

    Full Text Available It is possible to detect individual flashes from thermal neutron impacts in a ZnS scintillator using a CMOS camera looking at the scintillator screen, together with off-line image processing. Some preliminary results indicated that the efficiency of recognition could be improved by optimizing the light collection and the image processing. We report on this ongoing work, which results from the collaboration between ESS Bilbao and the ILL. The main progress to be reported concerns the on-line treatment of the imaging data. If this technology is to work on a genuine scientific instrument, all the processing must happen on line, to avoid the accumulation of large amounts of image data to be analyzed off line. An FPGA-based real-time full-deca mode VME-compatible CameraLink board has been developed at the SCI of the ILL, which is able to manage the data flow from the camera and convert it into a reasonable “neutron impact” data flow, as from a usual neutron counting detector. The main challenge of the endeavor is the optical light collection from the scintillator. While the light yield of a ZnS scintillator is a priori rather high, the amount of light collected with a photographic objective is small. Different scintillators and different light collection techniques have been experimented with, and results will be shown for different setups improving the light collection on the camera sensor. Improvements on the algorithm side will also be presented. The algorithms have to be efficient in their recognition of neutron signals and in their rejection of noise signals (internal and external to the camera), but also simple enough to be easily implemented in the FPGA. The path from the idea of detecting individual neutron impacts with a CMOS camera to a practical working instrument detector is challenging, and in this paper we give an overview of the part of the road that has already been walked.

  19. CEMP Stars in the Halo and Their Origin in Ultra-Faint Dwarf Galaxies

    Science.gov (United States)

    Beers, Timothy C.

    2018-06-01

    The very metal-poor (VMP; [Fe/H] < -2.0) and extremely metal-poor (EMP; [Fe/H] < -3.0) stars provide a direct view of Galactic chemical and dynamical evolution; detailed spectroscopic studies of these objects are the best way to identify and distinguish between the various scenarios for the enrichment of early star-forming gas clouds soon after the Big Bang. It has been recognized that a large fraction of VMP (15-20%) and EMP stars (30-40%) possess significant over-abundances of carbon relative to iron, [C/Fe] > +0.7; this fraction rises to at least 80% for stars with [Fe/H] < -4.0. The great majority of CEMP stars with [Fe/H] < -3.0 belong to the CEMP-no sub-class, characterized by the lack of strong enhancements in the neutron-capture elements (e.g., [Ba/Fe] < 0.0). The CEMP-no abundance signature is commonly observed among stars in ultra-faint dwarf spheroidal galaxies such as SEGUE-1. In addition, kinematic studies of CEMP-no stars strongly suggest an association with the outer-halo population of the Galaxy, which was likely formed from the accretion of low-mass mini-halos. These observations, and other lines of evidence, indicate that the CEMP-no stars of the Milky Way were born in low-mass dwarf galaxies and later subsumed into the halo.

  20. Combine TV-L1 model with guided image filtering for wide and faint ring artifacts correction of in-line x-ray phase contrast computed tomography.

    Science.gov (United States)

    Ji, Dongjiang; Qu, Gangrong; Hu, Chunhong; Zhao, Yuqing; Chen, Xiaodong

    2018-01-01

    In practice, mis-calibrated detector pixels give rise to wide and faint ring artifacts in the reconstructed image of in-line phase-contrast computed tomography (IL-PC-CT), so ring artifact correction is essential in IL-PC-CT. In this study, a novel method for correcting wide and faint ring artifacts is presented, based on combining the TV-L1 model with guided image filtering (GIF) in the reconstructed image domain. The new correction method includes two main steps, namely the GIF step and the TV-L1 step. To validate the performance of this method, simulation data and real experimental synchrotron data are provided. The results demonstrate that the TV-L1 model with the GIF step can effectively correct the wide and faint ring artifacts in IL-PC-CT.
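
The GIF step refers to the guided filter of He et al., an edge-preserving smoother. A compact numpy version, self-guided here, with illustrative radius and regularization rather than the authors' parameter choices:

```python
import numpy as np

def box_mean(img, r):
    """Mean over (2r+1)x(2r+1) windows via 2-D cumulative sums, edge-padded."""
    p = np.pad(img, r, mode="edge").cumsum(0).cumsum(1)
    p = np.pad(p, ((1, 0), (1, 0)))
    s, (h, w) = 2 * r + 1, img.shape
    return (p[s:s+h, s:s+w] - p[:h, s:s+w] - p[s:s+h, :w] + p[:h, :w]) / s**2

def guided_filter(guide, src, r=3, eps=0.04):
    """Edge-preserving smoothing of src, guided by guide (He et al. formulation)."""
    m_i, m_p = box_mean(guide, r), box_mean(src, r)
    var_i = box_mean(guide * guide, r) - m_i * m_i
    cov_ip = box_mean(guide * src, r) - m_i * m_p
    a = cov_ip / (var_i + eps)          # ~1 near strong edges, ~0 in flat regions
    b = m_p - a * m_i
    return box_mean(a, r) * guide + box_mean(b, r)

# Noisy vertical step: the filter suppresses noise while keeping the edge.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
out = guided_filter(noisy, noisy)
```

The edge-preserving property is what lets the GIF step smooth out faint ring residue without destroying genuine image structure, with the TV-L1 step handling the remainder.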

  1. Instantaneous phase-shifting Fizeau interferometry with high-speed pixelated phase-mask camera

    Science.gov (United States)

    Yatagai, Toyohiko; Jackin, Boaz Jessie; Ono, Akira; Kiyohara, Kosuke; Noguchi, Masato; Yoshii, Minoru; Kiyohara, Motosuke; Niwa, Hayato; Ikuo, Kazuyuki; Onuma, Takashi

    2015-08-01

    A Fizeau interferometer with instantaneous phase-shifting capability using a Wollaston prism is designed. To measure dynamic phase changes of objects, a high-speed video camera with a shutter speed of 10⁻⁵ s is used together with a pixelated phase-mask of 1024 × 1024 elements. The light source is a 532 nm laser, whose beam is split into orthogonal polarization states by passing through a Wollaston prism. By adjusting the tilt of the reference surface it is possible to make the reference and object beams, with orthogonal polarization states, coincide and interfere. The pixelated phase-mask camera then calculates the phase changes and hence the optical path length difference. Vibration of speakers and turbulence of air flow were successfully measured at 7,000 frames/sec.
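
A pixelated phase-mask captures four π/2-shifted interferograms in a single shot across neighboring pixels; the wrapped phase then follows from the standard four-step formula φ = atan2(I₄ − I₂, I₁ − I₃). A minimal numpy sketch on synthetic fringes (illustrative only, not the instrument's processing code):

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four frames shifted by 0, pi/2, pi, 3pi/2."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic test object: a smooth phase ramp within (-pi, pi).
phi = np.linspace(-1.0, 1.0, 256)
a, b = 0.5, 0.4                                 # background and modulation
frames = [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]
phi_rec = four_step_phase(*frames)
```

Because I₁ − I₃ = 2B cos φ and I₄ − I₂ = 2B sin φ, the background A and modulation B cancel, which is why the measurement is robust to uneven illumination.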

  2. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from the patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious-signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  3. VizieR Online Data Catalog: Imaging observations of iPTF 13ajg (Vreeswijk+, 2014)

    Science.gov (United States)

    Vreeswijk, P. M.; Savaglio, S.; Gal-Yam, A.; De Cia, A.; Quimby, R. M.; Sullivan, M.; Cenko, S. B.; Perley, D. A.; Filippenko, A. V.; Clubb, K. I.; Taddia, F.; Sollerman, J.; Leloudas, G.; Arcavi, I.; Rubin, A.; Kasliwal, M. M.; Cao, Y.; Yaron, O.; Tal, D.; Ofek, E. O.; Capone, J.; Kutyrev, A. S.; Toy, V.; Nugent, P. E.; Laher, R.; Surace, J.; Kulkarni, S. R.

    2017-08-01

    iPTF 13ajg was imaged with the Palomar 48 inch (P48) Oschin iPTF survey telescope equipped with a 12kx8k CCD mosaic camera (Rahmer et al. 2008SPIE.7014E..4YR) in the Mould R filter, the Palomar 60 inch and CCD camera (Cenko et al. 2006PASP..118.1396C) in Johnson B and Sloan Digital Sky Survey (SDSS) gri, the 2.56 m Nordic Optical Telescope (on La Palma, Canary Islands) with the Andalucia Faint Object Spectrograph and Camera (ALFOSC) in SDSS ugriz, the 4.3 m Discovery Channel Telescope (at Lowell Observatory, Arizona) with the Large Monolithic Imager (LMI) in SDSS r, and with LRIS (Oke et al. 1995PASP..107..375O) and the Multi-Object Spectrometer for Infrared Exploration (MOSFIRE; McLean et al. 2012SPIE.8446E..0JM), both mounted on the 10 m Keck-I telescope (on Mauna Kea, Hawaii), in g and Rs with LRIS and J and Ks with MOSFIRE. (1 data file).

  4. Automatic helmet-wearing detection for law enforcement using CCTV cameras

    Science.gov (United States)

    Wonghabut, P.; Kumphong, J.; Satiennam, T.; Ung-arunyawee, R.; Leelapatra, W.

    2018-04-01

    The objective of this research is to develop an application for enforcing helmet wearing using CCTV cameras. The developed application aims to assist law enforcement by police, eventually changing risk behaviours and consequently reducing the number of accidents and their severity. Conceptually, the application software, implemented in C++ with the OpenCV library, uses two CCTV cameras with different angles of view. Video frames recorded by the wide-angle CCTV camera are used to detect motorcyclists. If a motorcyclist without a helmet is found, the zoomed (narrow-angle) CCTV camera is activated to capture an image of the violating motorcyclist and the motorcycle license plate in real time. Captured images are managed by a MySQL database for ticket issuing. The results show that the developed program is able to detect 81% of motorcyclists across various motorcycle types during daytime and night-time. The validation results reveal that the program achieves 74% accuracy in detecting motorcyclists without helmets.

  5. Improvement of passive THz camera images

    Science.gov (United States)

    Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw

    2012-10-01

    Terahertz technology is one of the emerging technologies with the potential to change our lives, with many attractive applications in fields such as security, astronomy, biology and medicine. Until recent years, terahertz (THz) waves were an undiscovered, or more importantly, an unexploited part of the electromagnetic spectrum, owing to the difficulty of generating and detecting them. Recent advances in hardware technology have started to open up the field to new applications such as THz imaging. THz waves can penetrate various materials, though automated processing of THz images can be challenging. The THz frequency band is especially suited for clothing penetration, and because this radiation has no harmful ionizing effects it is safe for human beings. Strong technological development in this band has produced a few interesting devices. Even though THz cameras are an emerging topic, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. THz image processing is therefore a challenging and urgent topic, and digital THz image processing is a promising and cost-effective way to meet demanding security and defense applications. In the article we demonstrate the results of image quality enhancement and image fusion of images captured by a commercially available passive THz camera by means of various combined methods. Our research is focused on the detection of dangerous objects - guns, knives and bombs hidden under some popular types of clothing.

  6. Directional Unfolded Source Term (DUST) for Compton Cameras.

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Dean J.; Horne, Steven M.; O'Brien, Sean; Thoreson, Gregory G.

    2018-03-01

    A Directional Unfolded Source Term (DUST) algorithm was developed to enable improved spectral analysis capabilities using data collected by Compton cameras. Achieving this objective required modification of the detector response function in the Gamma Detector Response and Analysis Software (GADRAS). Experimental data that were collected in support of this work include measurements of calibration sources at a range of separation distances and cylindrical depleted uranium castings.

  7. Real-Time FPGA-Based Object Tracker with Automatic Pan-Tilt Features for Smart Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2017-05-01

    Full Text Available The design of smart video surveillance systems is an active research field in the computer vision community because of their ability to perform automatic scene analysis by selecting and tracking the objects of interest. In this paper, we present the design and implementation of an FPGA-based standalone working prototype system for real-time tracking of an object of interest in live video streams for such systems. In addition to real-time tracking of the object of interest, the implemented system is also capable of providing purposive automatic camera movement (pan-tilt) in the direction determined by movement of the tracked object. The complete system, including camera interface, DDR2 external memory interface controller, the designed object tracking VLSI architecture, camera movement controller and display interface, has been implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA board. Our proposed, designed and implemented system robustly tracks the target object present in the scene in real time for standard PAL (720 × 576) resolution color video and automatically controls camera movement in the direction determined by the movement of the tracked object.

  8. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has been drastically improved in response to the current demand for high-quality digital images; a digital still camera, for example, has several megapixels. A video camera has a higher frame rate, but its resolution is lower than that of a still camera. Thus, high resolution is incompatible with the high frame rate of ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it comes from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  9. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    Energy Technology Data Exchange (ETDEWEB)

    Strehlow, J.P.

    1994-08-24

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  10. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    International Nuclear Information System (INIS)

    Strehlow, J.P.

    1994-01-01

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1)

  11. Dual-camera design for coded aperture snapshot spectral imaging.

    Science.gov (United States)

    Wang, Lizhi; Xiong, Zhiwei; Gao, Dahua; Shi, Guangming; Wu, Feng

    2015-02-01

    Coded aperture snapshot spectral imaging (CASSI) provides an efficient mechanism for recovering 3D spectral data from a single 2D measurement. However, since the reconstruction problem is severely underdetermined, the quality of recovered spectral data is usually limited. In this paper we propose a novel dual-camera design to improve the performance of CASSI while maintaining its snapshot advantage. Specifically, a beam splitter is placed in front of the objective lens of CASSI, which allows the same scene to be simultaneously captured by a grayscale camera. This uncoded grayscale measurement, in conjunction with the coded CASSI measurement, greatly eases the reconstruction problem and yields high-quality 3D spectral data. Both simulation and experimental results demonstrate the effectiveness of the proposed method.
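
Why the extra grayscale measurement helps can be seen in a tiny linear-algebra toy: stacking the coded CASSI shot and the band-summed grayscale shot into one system enlarges the row space, so the minimum-norm least-squares solution lands closer to the true spectral cube. This sketches only the measurement model (hypothetical sizes, no sparsity prior), not the paper's reconstruction algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, L = 8, 8, 4                        # tiny spectral cube: H x W pixels, L bands
n = H * W * L

def idx(i, j, l):                        # flatten (row, col, band) -> column index
    return (i * W + j) * L + l

mask = rng.integers(0, 2, size=(H, W)).astype(float)   # binary coded aperture

# CASSI operator: each band is masked, then sheared by its band index.
A_cassi = np.zeros((H * (W + L - 1), n))
for i in range(H):
    for j in range(W):
        for l in range(L):
            A_cassi[i * (W + L - 1) + j + l, idx(i, j, l)] = mask[i, j]

# Uncoded grayscale camera: plain sum over the spectral bands.
A_gray = np.zeros((H * W, n))
for i in range(H):
    for j in range(W):
        for l in range(L):
            A_gray[i * W + j, idx(i, j, l)] = 1.0

x_true = rng.random(n)
err = {}
for name, A in (("cassi", A_cassi), ("dual", np.vstack([A_cassi, A_gray]))):
    x_hat = np.linalg.lstsq(A, A @ x_true, rcond=None)[0]   # min-norm solution
    err[name] = float(np.linalg.norm(x_hat - x_true))
```

Both systems are still underdetermined, which is why the paper additionally relies on priors; the point of the toy is only that the side measurement can never hurt and generically helps.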

  12. Optical camera system for radiation field

    International Nuclear Information System (INIS)

    Maki, Koichi; Senoo, Makoto; Takahashi, Fuminobu; Shibata, Keiichiro; Honda, Takuro.

    1995-01-01

    An infrared-ray camera comprises a transmitting filter used exclusively for infrared rays at a specific wavelength, such as far infrared rays, and a lens used exclusively for infrared rays. An infrared-ray-emitter-incorporated photoelectric image converter, comprising an infrared-ray emitting device, a focusing lens and a semiconductor image pick-up plate, is disposed at a place of low gamma-ray dose rate. Infrared rays emitted from an objective member are passed through the lens system of the camera, and real images are formed by way of the filter. They are transferred by image fibers, introduced to the photoelectric image converter and focused on the image pick-up plate by the image-forming lens. Further, they are converted into electric signals, introduced to a display and monitored. With such a constitution, an optical material used exclusively for infrared rays, for example ZnSe, can be used for the lens system and the optical transmission system. Accordingly, the camera can be used in a radiation field of high gamma-ray dose rate around the periphery of the reactor container. (I.N.)

  13. Moving object detection using background subtraction

    CERN Document Server

    Shaikh, Soharab Hossain; Chaki, Nabendu

    2014-01-01

    This Springer Brief presents a comprehensive survey of the existing methodologies of background subtraction methods. It presents a framework for quantitative performance evaluation of different approaches and summarizes the public databases available for research purposes. This well-known methodology has applications in moving object detection from video captured with a stationary camera, separating foreground and background objects, and object classification and recognition. The authors identify common challenges faced by researchers, including gradual or sudden illumination change and dynamic backgrounds.
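
One of the simplest approaches in this family is a running-average background model: maintain an exponentially weighted mean of past frames and flag pixels that deviate from it. A minimal numpy sketch on a synthetic clip (learning rate and threshold are illustrative assumptions, not values from the book):

```python
import numpy as np

def subtract_background(frames, alpha=0.05, thresh=0.25):
    """Running-average background model: yields a foreground mask per frame."""
    bg = frames[0].astype(float)
    masks = []
    for f in frames:
        f = f.astype(float)
        masks.append(np.abs(f - bg) > thresh)      # foreground where frame deviates
        bg = (1 - alpha) * bg + alpha * f          # slowly adapt the background
    return masks

# Synthetic clip: static background with a bright square moving across it.
frames = []
for t in range(10):
    img = np.zeros((32, 32))
    img[10:16, 2 + 2 * t : 8 + 2 * t] = 1.0
    frames.append(img)
masks = subtract_background(frames)
```

This toy already exhibits the challenges the survey catalogues: a sudden illumination change would flag the whole frame, and objects present in the first frame leave "ghosts" until the model adapts.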

  14. Fuzzy-Rule-Based Object Identification Methodology for NAVI System

    Directory of Open Access Journals (Sweden)

    Yaacob Sazali

    2005-01-01

    Full Text Available We present an object identification methodology applied in a navigation assistance for visually impaired (NAVI) system. The NAVI has a single board processing system (SBPS), a digital video camera mounted on headgear, and a pair of stereo earphones. The captured image from the camera is processed by the SBPS to generate a specially structured stereo sound suitable for vision-impaired people in understanding the presence of objects/obstacles in front of them. The image processing stage is designed to identify the objects in the captured image. Edge detection and edge-linking procedures are applied in the processing of the image. A concept of object preference is included in the image processing scheme, and this concept is realized using a fuzzy-rule base. The blind users are trained with the stereo sound produced by NAVI for achieving collision-free autonomous navigation.

  15. Fuzzy-Rule-Based Object Identification Methodology for NAVI System

    Science.gov (United States)

    Nagarajan, R.; Sainarayanan, G.; Yaacob, Sazali; Porle, Rosalyn R.

    2005-12-01

    We present an object identification methodology applied in a navigation assistance for visually impaired (NAVI) system. The NAVI has a single board processing system (SBPS), a digital video camera mounted on headgear, and a pair of stereo earphones. The captured image from the camera is processed by the SBPS to generate a specially structured stereo sound suitable for vision-impaired people in understanding the presence of objects/obstacles in front of them. The image processing stage is designed to identify the objects in the captured image. Edge detection and edge-linking procedures are applied in the processing of the image. A concept of object preference is included in the image processing scheme, and this concept is realized using a fuzzy-rule base. The blind users are trained with the stereo sound produced by NAVI for achieving collision-free autonomous navigation.

  16. Design of an experimental four-camera setup for enhanced 3D surface reconstruction in microsurgery

    Directory of Open Access Journals (Sweden)

    Marzi Christian

    2017-09-01

    Full Text Available Future fully digital surgical visualization systems enable a wide range of new options. Owing to optomechanical limitations, a main disadvantage of today's surgical microscopes is their inability to provide arbitrary perspectives to more than two observers. In a fully digital microscopic system, multiple arbitrary views can be generated from a 3D reconstruction. Modern surgical microscopes allow the eyepieces to be replaced by cameras in order to record stereoscopic videos. A reconstruction from these videos can only contain the amount of detail the recording camera system gathers from the scene; covered surfaces can therefore result in a faulty reconstruction for deviating stereoscopic perspectives. By adding cameras that record the object from different angles, additional information about the scene is acquired, allowing the reconstruction to be improved. Our approach is to use a fixed four-camera setup as a front-end system to capture enhanced 3D topography of a pseudo-surgical scene. This experimental setup provides images for the reconstruction algorithms and for the generation of multiple observer stereo perspectives. The concept of the designed setup is based on the common main objective (CMO) principle of current surgical microscopes. These systems are well established and optically mature; furthermore, the CMO principle allows a more compact design and a lower calibration effort than cameras with separate optics. Behind the CMO, four pupils separate the four channels, each of which is recorded by one camera. The designed system captures an area of approximately 28 mm × 28 mm with four cameras, thus allowing images of six different stereo perspectives to be processed. In order to verify the setup, it is modelled in silico. It can be used in further studies to test algorithms for 3D reconstruction from up to four perspectives and to provide information about the impact of additionally recorded perspectives on the enhancement of a reconstruction.

  17. View-based 3-D object retrieval

    CERN Document Server

    Gao, Yue

    2014-01-01

    Content-based 3-D object retrieval has attracted extensive attention recently and has applications in a variety of fields, such as computer-aided design, tele-medicine, mobile multimedia, virtual reality, and entertainment. The development of efficient and effective content-based 3-D object retrieval techniques has enabled the use of fast 3-D reconstruction and model design. Recent technical progress, such as the development of camera technologies, has made it possible to capture the views of 3-D objects. As a result, view-based 3-D object retrieval has become an essential but challenging research topic.

  18. Automatic locking radioisotope camera lock

    International Nuclear Information System (INIS)

    Rosauer, P.J.

    1978-01-01

    The lock of the present invention secures the isotope source in a stored, shielded condition in the camera until a positive effort has been made to open the lock and take the source outside of the camera, and prevents disconnection of the source pigtail unless the source is locked in a shielded condition in the camera. It also gives a visual indication of the locked or possibly exposed condition of the isotope source, and prevents the source pigtail from being completely pushed out of the camera, even when the lock is released. (author)

  19. The New Approach to Camera Calibration – GCPs or TLS Data?

    Directory of Open Access Journals (Sweden)

    J. Markiewicz

    2016-06-01

    Full Text Available Camera calibration is one of the basic photogrammetric tasks responsible for the quality of processed products. Most calibration is performed with a specially designed test field or during the self-calibration process. The research presented in this paper aims to answer the question of whether it is necessary to use control points designed in the standard way for the determination of camera interior orientation parameters; data from close-range laser scanning can be used as an alternative. The experiments shown in this work demonstrate the potential of laser measurements, since the number of points that may be involved in the calculation is much larger than that of commonly used ground control points. The problem which still exists is the correct and automatic identification of object details in the image taken with the tested camera, as well as in the data set registered with the laser scanner.

  20. Acceptance/Operational Test Report for Tank 241-AN-104 camera and camera purge control system

    International Nuclear Information System (INIS)

    Castleberry, J.L.

    1995-11-01

    This Acceptance/Operational Test Procedure (ATP/OTP) will document the satisfactory operation of the camera purge panel, purge control panel, color camera system, and associated control components destined for installation. The final acceptance of the complete system will be performed in the field. The purge panel and purge control panel will be tested to verify the safety interlock, which shuts down the camera and pan-and-tilt inside the tank vapor space on loss of purge pressure, and to verify that the correct purge volume exchanges are performed as required by NFPA 496. This procedure is separated into seven sections. This Acceptance/Operational Test Report documents the successful acceptance and operability testing of the 241-AN-104 camera system and camera purge control system.

  1. HST image of Gravitational Lens G2237 + 305 or 'Einstein Cross'

    Science.gov (United States)

    1990-01-01

    European Space Agency (ESA) Faint Object Camera (FOC) science image was taken from the Hubble Space Telescope (HST) of Gravitational Lens G2237 + 305 or 'Einstein Cross'. The gravitational lens G2237 + 305 or 'Einstein Cross' shows four images of a very distant quasar which has been multiple-imaged by a relatively nearby galaxy acting as a gravitational lens. The angular separation between the upper and lower images is 1.6 arc seconds. Photo was released from Goddard Space Flight Center (GSFC) 09-12-90.

  2. Automatic Chessboard Detection for Intrinsic and Extrinsic Camera Parameter Calibration

    Directory of Open Access Journals (Sweden)

    Jose María Armingol

    2010-03-01

    Full Text Available There are increasing applications that require precise calibration of cameras to perform accurate measurements on objects located within images, and an automatic algorithm would reduce this time-consuming calibration procedure. The method proposed in this article uses a pattern similar to that of a chessboard, which is found automatically in each image even when no information regarding the number of rows or columns is supplied to aid its detection. This is carried out by means of a combined analysis of two Hough transforms, image corners, and invariant properties of the perspective transformation. Comparative analysis with more commonly used algorithms demonstrates the viability of the proposed algorithm as a valuable tool for camera calibration.
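
    The Hough-transform analysis mentioned above rests on the classical line-voting scheme: each edge pixel votes for every (ρ, θ) line passing through it, and collinear pixels pile their votes into a single accumulator bin. A minimal sketch of that voting step (not the authors' implementation):

    ```python
    import math

    def hough_lines(edge_points, n_theta=180):
        """Minimal Hough transform for lines: each edge pixel (x, y) votes for
        all lines rho = x*cos(theta) + y*sin(theta) through it; peaks in the
        accumulator correspond to detected lines."""
        acc = {}  # (rho, theta_index) -> votes
        for (x, y) in edge_points:
            for t in range(n_theta):
                theta = math.pi * t / n_theta
                rho = round(x * math.cos(theta) + y * math.sin(theta))
                acc[(rho, t)] = acc.get((rho, t), 0) + 1
        return acc

    # Twenty collinear points on the horizontal line y = 5 all vote for the
    # bin rho = 5, theta = 90 degrees (theta index 90 of 180).
    pts = [(x, 5) for x in range(20)]
    acc = hough_lines(pts)
    print(acc[(5, 90)])  # 20
    ```

    A chessboard produces two such vote clusters, one per line orientation, which is why the paper combines two Hough transforms with corner analysis.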

  3. Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors.

    Science.gov (United States)

    Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung

    2017-05-08

    Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. The existing research using visible light cameras has mainly focused on methods of human detection for daytime hours when there is outside light; human detection during nighttime hours when there is no outside light is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras or thermal cameras have been used. However, in the case of NIR illuminators, there are limitations in terms of the illumination angle and distance. There are also difficulties because the illuminator power must be adaptively adjusted depending on whether the object is close or far away. In the case of thermal cameras, their cost is still high, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at a short distance in an indoor environment or on video-based methods that capture and process multiple images, which increases the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night by a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results on a self-constructed Dongguk night-time human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) demonstrate high-accuracy human detection in a variety of environments and excellent performance compared to existing methods.

  4. Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors

    Directory of Open Access Journals (Sweden)

    Jong Hyun Kim

    2017-05-01

    Full Text Available Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. The existing research using visible light cameras has mainly focused on methods of human detection for daytime hours when there is outside light; human detection during nighttime hours when there is no outside light is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras or thermal cameras have been used. However, in the case of NIR illuminators, there are limitations in terms of the illumination angle and distance. There are also difficulties because the illuminator power must be adaptively adjusted depending on whether the object is close or far away. In the case of thermal cameras, their cost is still high, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at a short distance in an indoor environment or on video-based methods that capture and process multiple images, which increases the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night by a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results on a self-constructed Dongguk night-time human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) demonstrate high-accuracy human detection in a variety of environments and excellent performance compared to existing methods.
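
    The convolutional layers at the heart of such a network reduce to one core operation, the sliding dot product of a kernel over an image. A minimal plain-Python sketch of that operation (not the paper's network, whose architecture is not given in this abstract):

    ```python
    def conv2d(image, kernel):
        """Valid-mode 2-D convolution (strictly, cross-correlation, as used in
        CNN layers): slide the kernel over the image and sum the products."""
        ih, iw = len(image), len(image[0])
        kh, kw = len(kernel), len(kernel[0])
        out = []
        for i in range(ih - kh + 1):
            row = []
            for j in range(iw - kw + 1):
                s = 0.0
                for di in range(kh):
                    for dj in range(kw):
                        s += image[i + di][j + dj] * kernel[di][dj]
                row.append(s)
            out.append(row)
        return out

    # A vertical-edge kernel responds strongly at a dark-to-bright boundary,
    # the kind of low-level feature a detection CNN learns in its first layer.
    img = [[0, 0, 1, 1]] * 4          # 4x4 image, edge between columns 1 and 2
    k = [[-1, 1]] * 2                 # 2x2 vertical edge detector
    print(conv2d(img, k))             # three identical rows [0.0, 2.0, 0.0]
    ```

    Real detectors stack many such layers with learned kernels; this sketch only shows the primitive they are built from.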

  5. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R and D activities on high-speed video cameras, which have been carried out at Kinki University for more than ten years and are currently being pursued as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searches of journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same one developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS (In-situ Storage Image Sensor) was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, design of a prototype ISIS is under way, and it will hopefully be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by others are also briefly reviewed.

  6. Optomechanical System Development of the AWARE Gigapixel Scale Camera

    Science.gov (United States)

    Son, Hui S.

    Electronic focal plane arrays (FPA) such as CMOS and CCD sensors have dramatically improved to the point that digital cameras have essentially phased out film (except in very niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors and their ability to capture digital signals that can be processed and manipulated post acquisition offers much more freedom of design at system levels and opens up many interesting possibilities for the next generation of computational imaging systems. The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off the shelf (OTS) flat sensor technology. This dissertation details developments and physical implementations of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we have developed through the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.

  7. PROPERTY OF THE LARGE FORMAT DIGITAL AERIAL CAMERA DMC II

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2012-07-01

    Full Text Available With the DMC II 140, 230 and 250, Z/I Imaging introduced digital aerial cameras with a very large format CCD for the panchromatic channel. With 140 / 230 / 250 megapixels, the CCDs have a size not available in photogrammetry before. CCDs in general have very high relative accuracy, but the overall geometry has to be checked, as does the influence of non-flat CCDs. A CCD with a size of 96mm × 82mm must have a flatness, or known flatness, in the range of 1μm if a camera accuracy in the range of 1.3μm is not to be compromised. The DMC II cameras have been evaluated with three different flying heights leading to 5cm, 9cm and 15cm or 20cm GSD, crossing flight lines and 60% side lap. The optimal test conditions guaranteed the precise determination of the object coordinates as well as of the systematic image errors. All three camera types show only very small systematic image errors, ranging in the root mean square between 0.12μm and 0.3μm, with extreme values not exceeding 1.6μm. The remaining systematic image errors, determined by analysis of the image residuals and not covered by the additional parameters, are negligible. A standard deviation of the object point heights below the GSD, determined at independent check points, is standard even in blocks with just 20% side lap and 60% end lap. Corresponding to the excellent image geometry, the object point coordinates are only slightly influenced by the self-calibration. For all DMC II types, the handling of image models for data acquisition need not be supported by an improvement of the image coordinates with the determined systematic image errors; such an improvement is so far not standard in photogrammetric software packages. The advantage of a single monolithic CCD is obvious. An edge analysis of pan-sharpened DMC II 250 images resulted in factors for the effective resolution below 1.0. A result below 1.0 is only possible through contrast enhancement, but this requires low image noise
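
    The relation between flying height and ground sample distance (GSD) quoted above follows from simple similar-triangle geometry: GSD = flying height × pixel pitch / focal length. A one-function sketch, with an illustrative pixel pitch and focal length that are not taken from the DMC II data sheet:

    ```python
    def ground_sample_distance(height_m, pixel_pitch_um, focal_mm):
        """GSD in metres: flying height times pixel pitch divided by focal
        length, with units converted to metres throughout."""
        return height_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)

    # Illustrative values only: a 5.6 um pixel behind a 112 mm lens flown
    # at 1000 m yields a 5 cm ground sample distance.
    print(ground_sample_distance(1000.0, 5.6, 112.0))  # ~0.05 m (5 cm)
    ```

    Scaling the flying height scales the GSD proportionally, which is how one test flight plan produces the 5/9/15 cm GSD series used in the evaluation.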

  8. The development of large-aperture test system of infrared camera and visible CCD camera

    Science.gov (United States)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Infrared camera and CCD camera dual-band imaging systems are widely used in much equipment and many applications. If such a system is tested using the traditional infrared camera test system and visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared cameras and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame-grabber, and computer, which reduces the cost and the time for installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position when the environmental temperature changes, improving both the image quality of the large-field-of-view collimator and the test accuracy. Its performance is the same as that of its foreign counterparts at a much lower cost. It will have a good market.
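
    The multiple-frame averaging step can be demonstrated directly: averaging N frames of zero-mean random noise reduces the noise level by roughly the square root of N. A small synthetic sketch (the frame size, noise level, and frame count are invented for illustration):

    ```python
    import random
    import statistics

    def average_frames(frames):
        """Pixel-wise mean of N frames; uncorrelated random noise drops
        roughly as 1/sqrt(N)."""
        n = len(frames)
        return [sum(px) / n for px in zip(*frames)]

    random.seed(0)
    true_signal = [100.0] * 1000  # a flat 1000-pixel "frame"

    def noisy_frame():
        return [v + random.gauss(0, 10) for v in true_signal]

    one = noisy_frame()
    avg = average_frames([noisy_frame() for _ in range(16)])

    err_one = statistics.pstdev(p - t for p, t in zip(one, true_signal))
    err_avg = statistics.pstdev(p - t for p, t in zip(avg, true_signal))
    # With 16 frames, err_avg should be about err_one / 4 (sqrt(16) = 4).
    print(err_one, err_avg)
    ```

    The same principle is why the test system averages frames before measuring camera parameters: the residual noise floor, not the single-frame noise, limits the test accuracy.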

  9. A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.

    Science.gov (United States)

    Qian, Shuo; Sheng, Yang

    2011-11-01

    Photogrammetry has become an effective method for the determination of electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study aims to present a novel photogrammetry system that can realize simultaneous acquisition of multi-angle head images from a single camera position. By aligning two planar mirrors at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. It is found that the elapsed time of the whole localization procedure is about 3 min, and camera calibration computation takes about 1 min after the measurement of calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.
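
    The 51.4° mirror angle is what yields seven simultaneous views: by the standard result for two plane mirrors at angle a, the arrangement produces 360/a − 1 reflected images, and adding the direct view gives 360/a views in total. A one-line check:

    ```python
    # Two planar mirrors at angle a (degrees) produce 360/a - 1 reflected
    # images; together with the direct view, the camera records 360/a views.
    def views_for_mirror_angle(angle_deg):
        return round(360.0 / angle_deg)

    print(views_for_mirror_angle(51.4))  # 7, as in the paper
    ```

    Choosing a = 360/7 ≈ 51.4° is thus the unique angle giving exactly seven evenly spaced views.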

  10. Real-time multi-camera video acquisition and processing platform for ADAS

    Science.gov (United States)

    Saponara, Sergio

    2016-04-01

    The paper presents the design of a real-time and low-cost embedded system for image acquisition and processing in Advanced Driver Assisted Systems (ADAS). The system adopts a multi-camera architecture to provide a panoramic view of the objects surrounding the vehicle. Fish-eye lenses are used to achieve a large Field of View (FOV). Since they introduce radial distortion of the images projected on the sensors, a real-time algorithm for their correction is also implemented in a pre-processor. An FPGA-based hardware implementation, re-using IP macrocells for several ADAS algorithms, allows for real-time processing of input streams from VGA automotive CMOS cameras.
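
    The radial distortion introduced by fish-eye lenses is commonly corrected with a polynomial model in the distance from the image center. A single-coefficient sketch of that idea (one common model, not necessarily the algorithm implemented in the paper's FPGA pre-processor; all values are illustrative):

    ```python
    def undistort_radial(u, v, cx, cy, k1):
        """First-order radial model: a pixel at distance r from the principal
        point (cx, cy) is moved to r * (1 + k1 * r^2). Positive k1 pushes
        points outward, compensating barrel distortion."""
        dx, dy = u - cx, v - cy
        r2 = dx * dx + dy * dy
        scale = 1.0 + k1 * r2
        return (cx + dx * scale, cy + dy * scale)

    # The principal point is a fixed point of the correction; off-center
    # pixels move outward in proportion to their distance cubed.
    print(undistort_radial(320, 240, 320, 240, 1e-6))   # (320.0, 240.0)
    print(undistort_radial(420, 240, 320, 240, 1e-6))   # ~(421.0, 240.0)
    ```

    Hardware implementations typically precompute this mapping into a lookup table so each output pixel can be fetched in one memory access per frame.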

  11. Near-Infrared Photon-Counting Camera for High-Sensitivity Observations

    Science.gov (United States)

    Jurkovic, Michael

    2012-01-01

    The dark current of a transferred-electron photocathode with an InGaAs absorber, responsive over the 0.9-to-1.7-micron range, must be reduced to an ultralow level suitable for low-signal spectral astrophysical measurements by lowering the temperature of the sensor incorporating the cathode. However, photocathode quantum efficiency (QE) is known to fall to zero at such low temperatures. Moreover, it has not been demonstrated that the target dark current can be reached at any temperature using existing photocathodes. Changes in the transferred-electron photocathode epistructure (with an InGaAs absorber lattice-matched to InP and exhibiting responsivity over the 0.9-to-1.7-micron range) and fabrication processes were developed and implemented that resulted in a demonstrated >13x reduction in dark current at -40 C while retaining >95% of the approximately 25% saturated room-temperature QE. Further testing at lower temperature is needed to confirm a predicted >25 C reduction in the cooling required to achieve an ultralow dark-current target suitable for faint spectral astronomical observations that are not otherwise possible. This reduction in dark current makes it possible to increase the integration time of the imaging sensor, thus enabling a much higher near-infrared (NIR) sensitivity than is possible with current technology. As a result, extremely faint phenomena and NIR signals emitted from distant celestial objects can now be observed and imaged (such as the dynamics of redshifting galaxies, and spectral measurements on extra-solar planets in search of water and bio-markers) that were not previously possible. In addition, the enhanced NIR sensitivity also directly benefits other NIR imaging applications, including drug and bomb detection, stand-off detection of improvised explosive devices (IEDs), Raman spectroscopy and microscopy for life/physical science applications, and semiconductor product defect detection.

  12. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks is getting essential for video surveillance. The tasks of tracking human over camera networks are not only inherently challenging due to changing human appearance, but also have enormous potentials for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for the human tracking over camera networks are addressed, including human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed based on two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed based on the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on the analyses of the current progress made toward human tracking techniques over camera networks.

  13. Microprocessor-controlled wide-range streak camera

    Science.gov (United States)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  14. Microprocessor-controlled, wide-range streak camera

    International Nuclear Information System (INIS)

    Amy E. Lewis; Craig Hollabaugh

    2006-01-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  15. Automatic Quadcopter Control Avoiding Obstacle Using Camera with Integrated Ultrasonic Sensor

    Science.gov (United States)

    Anis, Hanafi; Haris Indra Fadhillah, Ahmad; Darma, Surya; Soekirno, Santoso

    2018-04-01

    Automatic navigation for drones is being developed these days, with a wide variety of drone types and automatic functions. The drone used in this study was an aircraft with four propellers, or quadcopter. In this experiment, image processing was used to recognize the position of an object and an ultrasonic sensor was used to detect obstacle distance. The method used to track an obstacle in image processing was the Lucas-Kanade-Tomasi tracker, which has been widely used due to its high accuracy. The ultrasonic sensor was used to complement the image processing so that obstacles are fully detected. The obstacle avoidance system was evaluated by observing the program's decisions for various obstacle conditions read by the camera and ultrasonic sensors. Visual-feedback-based PID controllers are used to control the drone's movement.
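
    The PID control law behind such a visual feedback loop can be sketched in a few lines; the gains and the one-dimensional toy plant below are invented for illustration, not taken from the paper.

    ```python
    class PID:
        """Textbook PID controller: output = kp*e + ki*integral(e) + kd*de/dt.
        A sketch of the visual-feedback control described above, not the
        authors' implementation."""

        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = None

        def update(self, error, dt):
            self.integral += error * dt
            deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * deriv

    # Drive a 1-D "drone position" toward a setpoint supplied by the vision
    # and ultrasonic sensing (here just a constant target).
    pid = PID(kp=0.8, ki=0.1, kd=0.2)
    pos, target, dt = 0.0, 10.0, 0.1
    for _ in range(400):
        pos += pid.update(target - pos, dt) * dt
    print(pos)  # converges close to 10.0
    ```

    In the real system the error term would be the tracked obstacle offset from the camera plus the ultrasonic range, and the output would drive the quadcopter's attitude commands.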

  16. OCAMS: The OSIRIS-REx Camera Suite

    Science.gov (United States)

    Rizk, B.; Drouet d'Aubigny, C.; Golish, D.; Fellows, C.; Merrill, C.; Smith, P.; Walker, M. S.; Hendershot, J. E.; Hancock, J.; Bailey, S. H.; DellaGiustina, D. N.; Lauretta, D. S.; Tanner, R.; Williams, M.; Harshman, K.; Fitzgibbon, M.; Verts, W.; Chen, J.; Connors, T.; Hamara, D.; Dowd, A.; Lowman, A.; Dubin, M.; Burt, R.; Whiteley, M.; Watson, M.; McMahon, T.; Ward, M.; Booher, D.; Read, M.; Williams, B.; Hunten, M.; Little, E.; Saltzman, T.; Alfred, D.; O'Dougherty, S.; Walthall, M.; Kenagy, K.; Peterson, S.; Crowther, B.; Perry, M. L.; See, C.; Selznick, S.; Sauve, C.; Beiser, M.; Black, W.; Pfisterer, R. N.; Lancaster, A.; Oliver, S.; Oquest, C.; Crowley, D.; Morgan, C.; Castle, C.; Dominguez, R.; Sullivan, M.

    2018-02-01

    The OSIRIS-REx Camera Suite (OCAMS) will acquire images essential to collecting a sample from the surface of Bennu. During proximity operations, these images will document the presence of satellites and plumes, record spin state, enable an accurate model of the asteroid's shape, and identify any surface hazards. They will confirm the presence of sampleable regolith on the surface, observe the sampling event itself, and image the sample head in order to verify its readiness to be stowed. They will document Bennu's history as an example of early solar system material, as a microgravity body with a planetesimal size-scale, and as a carbonaceous object. OCAMS is fitted with three cameras. The MapCam will record color images of Bennu as a point source on approach to the asteroid in order to connect Bennu's ground-based point-source observational record to later higher-resolution surface spectral imaging. The SamCam will document the sample site before, during, and after it is disturbed by the sample mechanism. The PolyCam, using its focus mechanism, will observe the sample site at sub-centimeter resolutions, revealing surface texture and morphology. While their imaging requirements divide naturally between the three cameras, they preserve a strong degree of functional overlap. OCAMS and the other spacecraft instruments will allow the OSIRIS-REx mission to collect a sample from a microgravity body on the same visit during which it was first optically acquired from long range, a useful capability as humanity reaches out to explore near-Earth, Main-Belt and Jupiter Trojan asteroids.

  17. Comparison of active SIFT-based 3D object recognition algorithms

    CSIR Research Space (South Africa)

    Keaikitse, M

    2013-09-01

    Full Text Available by the author of [8]. The following is the procedure used for obtaining that dataset. The training and testing datasets were captured using a Prosilica GE1900C camera. Everyday objects such as cereal and spice boxes were used. In compiling the training dataset... [a per-object results table, with entries such as "Spice Bottle" and "Spray Can" and yes/no recognition outcomes, is garbled in this excerpt] ...images satisfies the condition: (|xi − xj| ≤ xT = 12) ∧ (|yi − yj| ≤ yT = 4). In our case, however, the camera is fixed and the object is placed on a rotating turntable. As a result...
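
    The gating condition quoted in the excerpt is straightforward to apply to candidate keypoint correspondences: a match is accepted only if its displacement between the two images is small in both axes. A minimal sketch (coordinates invented for illustration):

    ```python
    def gate_matches(matches, x_t=12, y_t=4):
        """Keep keypoint correspondences ((xi, yi), (xj, yj)) satisfying the
        gating rule |xi - xj| <= x_t and |yi - yj| <= y_t."""
        return [((xi, yi), (xj, yj)) for (xi, yi), (xj, yj) in matches
                if abs(xi - xj) <= x_t and abs(yi - yj) <= y_t]

    pairs = [((100, 50), (108, 52)),   # small displacement: kept
             ((100, 50), (130, 80))]   # large displacement: rejected
    print(gate_matches(pairs))  # [((100, 50), (108, 52))]
    ```

    Such a gate suppresses spurious SIFT matches between views taken from nearby poses, where true correspondences move only slightly in the image.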

  18. Alignment statistics of clusters with their brightest members at bright and faint isophotes

    International Nuclear Information System (INIS)

    Struble, M.F.

    1987-01-01

    For a sample of 21 first-ranked cluster galaxies with published isophotal photometry and position angles of these isophotes, it is found that the major axes of both the bright and faint isophotal contours tend to be aligned within about 30 deg of the major axis of the parent cluster. This supports the hypothesis that first-ranked galaxies are formed already aligned with their parent clusters rather than the hypothesis that only outer envelopes which accreted after formation are aligned with the cluster. 21 references

  19. Extended Schmidt law holds for faint dwarf irregular galaxies

    Science.gov (United States)

    Roychowdhury, Sambit; Chengalur, Jayaram N.; Shi, Yong

    2017-12-01

    Context. The extended Schmidt law (ESL) is a variant of the Schmidt law, which relates the surface densities of gas and star formation, with the surface density of stellar mass added as an extra parameter. Although the ESL has been shown to be valid for a wide range of galaxy properties, its validity in low-metallicity galaxies has not been comprehensively tested. This is important because metallicity affects the crucial atomic-to-molecular transition step in the process of conversion of gas to stars. Aims: We empirically investigate for the first time whether low-metallicity faint dwarf irregular galaxies (dIrrs) from the local universe follow the ESL. Here we consider the "global" law, where surface densities are averaged over the galactic discs. dIrrs are unique not only because they are at the lowest end of the mass and star formation scales for galaxies, but also because they are metal-poor compared to the general population of galaxies. Methods: Our sample is drawn from the Faint Irregular Galaxy GMRT Survey (FIGGS), which is the largest survey of atomic hydrogen in such galaxies. The gas surface densities are determined using their atomic hydrogen content. The star formation rates are calculated using GALEX far-ultraviolet fluxes after correcting for dust extinction, whereas the stellar surface densities are calculated using Spitzer 3.6 μm fluxes. The surface densities are calculated over the stellar discs defined by the 3.6 μm images. Results: We find that dIrrs indeed follow the ESL. The mean deviation of the FIGGS galaxies from the relation is 0.01 dex, with a scatter around the relation of less than half that seen in the original relation. In comparison, we also show that the FIGGS galaxies are much more deviant when compared to the "canonical" Kennicutt-Schmidt relation. Conclusions: Our results help strengthen the universality of the ESL, especially for galaxies with low metallicities. We suggest that models of star formation in which feedback from previous generations
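
    The deviations quoted above are expressed in dex, i.e. base-10 logarithmic offsets between an observed quantity and the value predicted by the scaling relation. A tiny helper, with invented illustrative star formation rates (the ESL's own coefficients are not given in this abstract):

    ```python
    import math

    def dex_deviation(observed, predicted):
        """Offset from a scaling relation in dex: log10(observed / predicted).
        0.01 dex, as quoted for FIGGS, is about a 2% offset."""
        return math.log10(observed / predicted)

    # A galaxy forming stars at 1.2x the predicted rate sits ~0.08 dex
    # above the relation.
    print(round(dex_deviation(1.2e-3, 1.0e-3), 2))  # 0.08
    ```

    The scatter of a sample around a relation is then simply the standard deviation of these per-galaxy offsets.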

  20. HUBBLE FRONTIER FIELDS FIRST COMPLETE CLUSTER DATA: FAINT GALAXIES AT z ∼ 5-10 FOR UV LUMINOSITY FUNCTIONS AND COSMIC REIONIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Ishigaki, Masafumi; Ouchi, Masami; Ono, Yoshiaki [Institute for Cosmic Ray Research, The University of Tokyo, Kashiwa, Chiba 277-8582 (Japan); Kawamata, Ryota; Shimasaku, Kazuhiro [Department of Astronomy, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Oguri, Masamune, E-mail: ishigaki@icrr.u-tokyo.ac.jp [Department of Physics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan)

    2015-01-20

    We present comprehensive analyses of faint dropout galaxies up to z ∼ 10 with the first full-depth data set of the A2744 lensing cluster and parallel fields observed by the Hubble Frontier Fields (HFF) program. We identify 54 dropouts at z ∼ 5-10 in the HFF fields and enlarge the size of the z ∼ 9 galaxy sample obtained to date. Although the number of highly magnified (μ ∼ 10) galaxies is small because of the tiny survey volume of strong lensing, our study reaches the galaxies' intrinsic luminosities comparable to the deepest-field HUDF studies. We derive UV luminosity functions with these faint dropouts, carefully evaluating by intensive simulations the combination of observational incompleteness and lensing effects in the image plane, including magnification, distortion, and multiplication of images, with the evaluation of mass model dependencies. Our results confirm that the faint-end slope, α, is as steep as –2 at z ∼ 6-8 and strengthen the evidence for the rapid decrease of UV luminosity densities, ρ{sub UV}, at z > 8 from the large z ∼ 9 sample. We examine whether the rapid ρ{sub UV} decrease trend can be reconciled with the large Thomson scattering optical depth, τ{sub e}, measured by cosmic microwave background experiments, allowing a large space of free parameters, such as an average ionizing photon escape fraction and a stellar-population-dependent conversion factor. No parameter set can reproduce both the rapid ρ{sub UV} decrease and the large τ {sub e}. It is possible that the ρ{sub UV} decrease moderates at z ≳ 11, that the free parameters significantly evolve toward high z, or that there exist additional sources of reionization such as X-ray binaries and faint active galactic nuclei.

  1. HUBBLE FRONTIER FIELDS FIRST COMPLETE CLUSTER DATA: FAINT GALAXIES AT z ∼ 5-10 FOR UV LUMINOSITY FUNCTIONS AND COSMIC REIONIZATION

    International Nuclear Information System (INIS)

    Ishigaki, Masafumi; Ouchi, Masami; Ono, Yoshiaki; Kawamata, Ryota; Shimasaku, Kazuhiro; Oguri, Masamune

    2015-01-01

    We present comprehensive analyses of faint dropout galaxies up to z ∼ 10 with the first full-depth data set of the A2744 lensing cluster and parallel fields observed by the Hubble Frontier Fields (HFF) program. We identify 54 dropouts at z ∼ 5-10 in the HFF fields and enlarge the size of the z ∼ 9 galaxy sample obtained to date. Although the number of highly magnified (μ ∼ 10) galaxies is small because of the tiny survey volume of strong lensing, our study reaches intrinsic galaxy luminosities comparable to those probed by the deepest-field HUDF studies. We derive UV luminosity functions with these faint dropouts, carefully evaluating by intensive simulations the combination of observational incompleteness and lensing effects in the image plane, including magnification, distortion, and multiplication of images, together with mass model dependencies. Our results confirm that the faint-end slope, α, is as steep as –2 at z ∼ 6-8 and strengthen the evidence for the rapid decrease of UV luminosity densities, ρ_UV, at z > 8 from the large z ∼ 9 sample. We examine whether the rapid ρ_UV decrease can be reconciled with the large Thomson scattering optical depth, τ_e, measured by cosmic microwave background experiments, allowing a large space of free parameters, such as the average ionizing photon escape fraction and a stellar-population-dependent conversion factor. No parameter set can reproduce both the rapid ρ_UV decrease and the large τ_e. It is possible that the ρ_UV decrease moderates at z ≳ 11, that the free parameters evolve significantly toward high z, or that there exist additional sources of reionization such as X-ray binaries and faint active galactic nuclei.
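The faint-end slope α enters through the Schechter parameterization commonly used for UV luminosity functions. The sketch below uses purely illustrative parameter values (not the paper's fitted ones) to show why a slope as steep as α ≈ -2 makes faint galaxies dominate the number counts:

```python
import math

def schechter(m, m_star, phi_star, alpha):
    """Schechter luminosity function in absolute magnitude M:
    phi(M) = 0.4 ln(10) * phi* * x^(alpha+1) * exp(-x), x = 10^(-0.4 (M - M*))."""
    x = 10.0 ** (-0.4 * (m - m_star))
    return 0.4 * math.log(10.0) * phi_star * x ** (alpha + 1.0) * math.exp(-x)

# Illustrative values only (not the paper's fits).
m_star, phi_star, alpha = -20.0, 1e-3, -2.0

faint = schechter(-16.0, m_star, phi_star, alpha)   # 4 mag fainter than M*
bright = schechter(-21.0, m_star, phi_star, alpha)  # 1 mag brighter than M*
# With alpha = -2 the number density keeps rising toward the faint end,
# so faint dropouts dominate the counts -- which is why lensed HFF fields
# that reach such luminosities constrain the slope so well.
```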

  2. Towards Adaptive Virtual Camera Control In Computer Games

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user… model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, we cluster gaze and camera information to identify camera behaviour profiles and we employ… camera control in games is discussed…

  3. Adaptive Probabilistic Tracking Embedded in Smart Cameras for Distributed Surveillance in a 3D Model

    Directory of Open Access Journals (Sweden)

    Sven Fleck

    2006-12-01

    Full Text Available Tracking applications based on distributed and embedded sensor networks are emerging today, both in the fields of surveillance and industrial vision. Traditional centralized approaches have several drawbacks, due to limited communication bandwidth, computational requirements, and thus limited spatial camera resolution and frame rate. In this article, we present network-enabled smart cameras for probabilistic tracking. They are capable of tracking objects adaptively in real time and offer a very bandwidth-conservative approach, as the whole computation is performed embedded in each smart camera and only the tracking results are transmitted, which are on a higher level of abstraction. Based on this, we present a distributed surveillance system. The smart cameras' tracking results are embedded in an integrated 3D environment as live textures and can be viewed from arbitrary perspectives. A georeferenced live visualization embedded in Google Earth is also presented.

  4. Adaptive Probabilistic Tracking Embedded in Smart Cameras for Distributed Surveillance in a 3D Model

    Directory of Open Access Journals (Sweden)

    Fleck Sven

    2007-01-01

    Full Text Available Tracking applications based on distributed and embedded sensor networks are emerging today, both in the fields of surveillance and industrial vision. Traditional centralized approaches have several drawbacks, due to limited communication bandwidth, computational requirements, and thus limited spatial camera resolution and frame rate. In this article, we present network-enabled smart cameras for probabilistic tracking. They are capable of tracking objects adaptively in real time and offer a very bandwidth-conservative approach, as the whole computation is performed embedded in each smart camera and only the tracking results are transmitted, which are on a higher level of abstraction. Based on this, we present a distributed surveillance system. The smart cameras' tracking results are embedded in an integrated 3D environment as live textures and can be viewed from arbitrary perspectives. A georeferenced live visualization embedded in Google Earth is also presented.
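Probabilistic tracking of this kind is commonly realized with a particle filter. The 1-D bootstrap sketch below is a generic illustration of that idea (fixed seed, made-up measurements), not the embedded implementation the article describes:

```python
import math
import random

def particle_filter(measurements, n_particles=500, meas_sd=1.0, motion_sd=0.5):
    """Minimal 1-D bootstrap particle filter: random-walk prediction,
    Gaussian measurement weighting, multinomial resampling."""
    random.seed(1)  # fixed seed so the sketch is reproducible
    particles = [random.uniform(0.0, 100.0) for _ in range(n_particles)]
    for z in measurements:
        # predict: diffuse each particle with the motion model
        particles = [p + random.gauss(0.0, motion_sd) for p in particles]
        # update: weight each particle by the measurement likelihood
        weights = [math.exp(-0.5 * ((z - p) / meas_sd) ** 2) for p in particles]
        if sum(weights) == 0.0:  # degenerate case: fall back to uniform weights
            weights = [1.0] * n_particles
        # resample proportionally to weight
        particles = random.choices(particles, weights=weights, k=n_particles)
    return sum(particles) / n_particles  # posterior-mean position estimate

# Hypothetical noisy observations of an object sitting near position 42.
measurements = [41.2, 42.9, 41.7, 42.4, 42.1]
estimate = particle_filter(measurements)
```

Transmitting only such a low-dimensional estimate, rather than the video itself, is what makes the approach bandwidth-conservative.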

  5. Integrating motion-detection cameras and hair snags for wolverine identification

    Science.gov (United States)

    Audrey J. Magoun; Clinton D. Long; Michael K. Schwartz; Kristine L. Pilgrim; Richard E. Lowell; Patrick Valkenburg

    2011-01-01

    We developed an integrated system for photographing a wolverine's (Gulo gulo) ventral pattern while concurrently collecting hair for microsatellite DNA genotyping. Our objectives were to 1) test the system on a wild population of wolverines using an array of camera and hair-snag (C&H) stations in forested habitat where wolverines were known to occur, 2)...

  6. Fast image reconstruction for Compton camera using stochastic origin ensemble approach.

    Science.gov (United States)

    Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna

    2011-01-01

    … will be much larger when the full Compton camera system model, including resolution recovery, is implemented and realistic Compton camera geometries are used. It was also shown in this article that while correctly reconstructing the relative distribution of the activity in the object, the SOE algorithm tends to underestimate the intensity values and increase variance in the images; improvements to the SOE reconstruction algorithm will be considered in future work.

  7. A Modified Adaptive Stochastic Resonance for Detecting Faint Signal in Sensors

    Directory of Open Access Journals (Sweden)

    Hengwei Li

    2007-02-01

    Full Text Available In this paper, an approach is presented to detect faint signals buried in strong noise in sensors by stochastic resonance (SR). We adopt the power spectrum as the evaluation tool of SR, which can be obtained by the fast Fourier transform (FFT). Furthermore, we introduce an adaptive filtering scheme to realize signal processing automatically. The key to the scheme is how to adjust the barrier height to satisfy the optimal condition of SR in the presence of any input. For the given input signal, we present an operable procedure to execute the adjustment scheme. An example utilizing one audio sensor to detect fault information from the power supply is given. Simulation results show that th…
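The power-spectrum evaluation step can be sketched directly: a DFT of a noisy record reveals a faint periodic component as a peak at its frequency bin. Signal and noise levels below are arbitrary, and a plain DFT stands in for the FFT to keep the sketch dependency-free:

```python
import cmath
import math
import random

def power_spectrum(x):
    """One-sided power spectrum via a direct DFT (an FFT computes the
    same quantity faster; this keeps the sketch self-contained)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n
            for k in range(n // 2)]

random.seed(0)
n, f_sig = 512, 8  # record length and signal bin (arbitrary choices)
faint_tone = [0.4 * math.sin(2.0 * math.pi * f_sig * t / n) for t in range(n)]
noisy = [s + random.gauss(0.0, 0.5) for s in faint_tone]  # per-sample SNR ~ -5 dB

spec = power_spectrum(noisy)
peak_bin = max(range(1, len(spec)), key=lambda k: spec[k])  # skip the DC bin
```

In an SR scheme, this spectral peak (relative to the noise floor) is the quantity maximized while the barrier height is adjusted.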

  8. Multi-MGy Radiation Hardened Camera for Nuclear Facilities

    International Nuclear Information System (INIS)

    Girard, Sylvain; Boukenter, Aziz; Ouerdane, Youcef; Goiffon, Vincent; Corbiere, Franck; Rolando, Sebastien; Molina, Romain; Estribeau, Magali; Avon, Barbara; Magnan, Pierre; Paillet, Philippe; Duhamel, Olivier; Gaillardin, Marc; Raine, Melanie

    2015-01-01

    There is increasing interest in developing cameras for surveillance systems that monitor nuclear facilities and nuclear waste storage sites. In particular, for today's and the next generation of nuclear facilities, the increased safety requirements that followed the Fukushima Daiichi disaster have to be considered. For some applications, radiation tolerance needs to reach doses in the MGy(SiO2) range, whereas the most tolerant commercial or prototype products based on solid-state image sensors withstand doses of only a few kGy. The objective of this work is to present the radiation-hardening strategy developed by our research groups to enhance the tolerance of the various subparts of these imaging systems to ionizing radiation, working simultaneously at the component and system design levels. Developing a radiation-hardened camera requires combining several radiation-hardening strategies. In our case, we decided not to use the simplest one, the shielding approach: it is efficient but limits camera miniaturization and is not compatible with future integration into remote-handling or robotic systems. A hardening-by-component strategy therefore appears mandatory to avoid the failure of one of the camera subparts at doses below the MGy level. Concerning the image sensor itself, the chosen technology is a CMOS Image Sensor (CIS) designed by the ISAE team, with custom pixel designs that mitigate the total ionizing dose (TID) effects occurring well below the MGy range in classical image sensors (e.g. Charge Coupled Devices (CCD), Charge Injection Devices (CID) and classical Active Pixel Sensors (APS)), such as complete loss of functionality, dark current increase and gain drop. At the conference we will present a comparative study of the radiation response of these hardened pixels with respect to conventional ones, demonstrating the efficiency of the choices made. The targeted strategy to develop the complete radiation-hard camera

  9. Multi-MGy Radiation Hardened Camera for Nuclear Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Girard, Sylvain; Boukenter, Aziz; Ouerdane, Youcef [Universite de Saint-Etienne, Lab. Hubert Curien, UMR-CNRS 5516, F-42000 Saint-Etienne (France); Goiffon, Vincent; Corbiere, Franck; Rolando, Sebastien; Molina, Romain; Estribeau, Magali; Avon, Barbara; Magnan, Pierre [ISAE, Universite de Toulouse, F-31055 Toulouse (France); Paillet, Philippe; Duhamel, Olivier; Gaillardin, Marc; Raine, Melanie [CEA, DAM, DIF, F-91297 Arpajon (France)

    2015-07-01

    There is increasing interest in developing cameras for surveillance systems that monitor nuclear facilities and nuclear waste storage sites. In particular, for today's and the next generation of nuclear facilities, the increased safety requirements that followed the Fukushima Daiichi disaster have to be considered. For some applications, radiation tolerance needs to reach doses in the MGy(SiO2) range, whereas the most tolerant commercial or prototype products based on solid-state image sensors withstand doses of only a few kGy. The objective of this work is to present the radiation-hardening strategy developed by our research groups to enhance the tolerance of the various subparts of these imaging systems to ionizing radiation, working simultaneously at the component and system design levels. Developing a radiation-hardened camera requires combining several radiation-hardening strategies. In our case, we decided not to use the simplest one, the shielding approach: it is efficient but limits camera miniaturization and is not compatible with future integration into remote-handling or robotic systems. A hardening-by-component strategy therefore appears mandatory to avoid the failure of one of the camera subparts at doses below the MGy level. Concerning the image sensor itself, the chosen technology is a CMOS Image Sensor (CIS) designed by the ISAE team, with custom pixel designs that mitigate the total ionizing dose (TID) effects occurring well below the MGy range in classical image sensors (e.g. Charge Coupled Devices (CCD), Charge Injection Devices (CID) and classical Active Pixel Sensors (APS)), such as complete loss of functionality, dark current increase and gain drop. At the conference we will present a comparative study of the radiation response of these hardened pixels with respect to conventional ones, demonstrating the efficiency of the choices made. The targeted strategy to develop the complete radiation-hard camera

  10. Photogrammetric measurement of 3D freeform millimetre-sized objects with micro features: an experimental validation of the close-range camera calibration model for narrow angles of view

    Science.gov (United States)

    Percoco, Gianluca; Sánchez Salmerón, Antonio J.

    2015-09-01

    The measurement of millimetre and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges to accurately control the position of the sensors. Photogrammetry would lower the costs of 3D inspection of micro-features and would also be applicable to the inspection of non-removable micro parts of large objects. Unfortunately, the behaviour of photogrammetry when applied to micro-features is not known. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) to the micro-scale, taking into account that research papers in the literature state that an angle of view (AOV) around 10° is the lower limit for the application of the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. At first a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow-AOV cameras with the CRCM. Subsequently the procedure is validated using a reflex camera with a 60 mm macro lens, equipped with extension tubes (20 and 32 mm) achieving magnification of up to approximately 2×, to verify the literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitation of the laser printing technology used to produce the bi-dimensional pattern on common paper has been overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with those of existing and more expensive commercial techniques.
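The narrow angles of view at stake can be estimated from the standard pinhole relation AOV = 2·atan(d / 2f). The sensor and lens values below are assumptions for illustration, not the setup's measured figures:

```python
import math

def angle_of_view_deg(sensor_dim_mm, focal_length_mm):
    """Pinhole-model angle of view: AOV = 2 * atan(d / (2 f))."""
    return math.degrees(2.0 * math.atan(sensor_dim_mm / (2.0 * focal_length_mm)))

# Assumed values: a typical APS-C sensor height and a 60 mm macro lens with
# 20 + 32 mm of extension tubes treated crudely as added image distance.
sensor_height_mm = 15.6
effective_focal_mm = 60.0 + 20.0 + 32.0
aov = angle_of_view_deg(sensor_height_mm, effective_focal_mm)  # ~8 degrees
```

Even this rough estimate lands well under the 10° limit quoted from the literature, which is why validating the CRCM down to 3.4° matters for macro work.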

  11. Photogrammetric measurement of 3D freeform millimetre-sized objects with micro features: an experimental validation of the close-range camera calibration model for narrow angles of view

    International Nuclear Information System (INIS)

    Percoco, Gianluca; Sánchez Salmerón, Antonio J

    2015-01-01

    The measurement of millimetre and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges to accurately control the position of the sensors. Photogrammetry would lower the costs of 3D inspection of micro-features and would also be applicable to the inspection of non-removable micro parts of large objects. Unfortunately, the behaviour of photogrammetry when applied to micro-features is not known. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) to the micro-scale, taking into account that research papers in the literature state that an angle of view (AOV) around 10° is the lower limit for the application of the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. At first a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow-AOV cameras with the CRCM. Subsequently the procedure is validated using a reflex camera with a 60 mm macro lens, equipped with extension tubes (20 and 32 mm) achieving magnification of up to approximately 2×, to verify the literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitation of the laser printing technology used to produce the bi-dimensional pattern on common paper has been overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with those of existing and more expensive commercial techniques. (paper)

  12. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.; Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce superior images than conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given and the problems and limitations introduced by noise are discussed in full. (U.K.)

  13. The eye of the camera: effects of security cameras on pro-social behavior

    NARCIS (Netherlands)

    van Rompay, T.J.L.; Vonk, D.J.; Fransen, M.L.

    2009-01-01

    This study addresses the effects of security cameras on prosocial behavior. Results from previous studies indicate that the presence of others can trigger helping behavior, arising from the need for approval of others. Extending these findings, the authors propose that security cameras can likewise

  14. A Novel Abandoned Object Detection System Based on Three-Dimensional Image Information

    Directory of Open Access Journals (Sweden)

    Yiliang Zeng

    2015-03-01

    Full Text Available A new idea for an abandoned object detection system for road traffic surveillance based on three-dimensional image information is proposed in this paper to prevent traffic accidents. A novel Binocular Information Reconstruction and Recognition (BIRR) algorithm is presented to implement the idea. As initial detection, suspected abandoned objects are detected by the proposed static foreground region segmentation algorithm based on surveillance video from a monocular camera. After detection of suspected abandoned objects, three-dimensional (3D) information of the suspected abandoned object is reconstructed by the proposed theory of 3D object information reconstruction with images from a binocular camera. To determine whether the detected object is hazardous to normal road traffic, the road plane equation and the height of the suspected abandoned object are calculated from the three-dimensional information. Experimental results show that the system implements fast detection of abandoned objects and can be used for road traffic monitoring and public area surveillance.
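The hazard test described, the height of a reconstructed object above the road, reduces to a point-to-plane distance once the road plane equation is known. The plane and point values below are hypothetical:

```python
import math

def point_plane_distance(point, plane):
    """Signed distance from 3-D point (x, y, z) to plane a*x + b*y + c*z + d = 0."""
    a, b, c, d = plane
    x, y, z = point
    return (a * x + b * y + c * z + d) / math.sqrt(a * a + b * b + c * c)

# Hypothetical values: a level road plane z = 0 and the reconstructed top
# point of a suspected abandoned object (coordinates in metres).
road_plane = (0.0, 0.0, 1.0, 0.0)
object_top = (3.0, 1.5, 0.4)
height = point_plane_distance(object_top, road_plane)  # 0.4 m above the road
```

Comparing such a height against a threshold is one plausible way the system could decide whether the detected object endangers traffic.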

  15. The Joint Facial and Invasive Neck Trauma (J-FAINT) Project, Iraq and Afghanistan 2003-2011

    Science.gov (United States)

    2013-01-01

    Original Research—Facial Plastic and Reconstructive Surgery. The Joint Facial and Invasive Neck Trauma (J-FAINT) Project, Iraq and Afghanistan 2003…number and type of facial and penetrating neck trauma injuries sustained in Operation Iraqi Freedom (OIF) and Operation Enduring Freedom (OEF). Study…queried for data from OIF and OEF from January 2003 to May 2011. Information on demographics; type and severity of facial, neck, and associated trauma

  16. Passive auto-focus for digital still cameras and camera phones: Filter-switching and low-light techniques

    Science.gov (United States)

    Gamadia, Mark Noel

    In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add and improve features in their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which may lead to missed photographic opportunities, rendering valuable moments and events with undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state of the art in the digital band-pass filter passive AF method. This method is widely used to realize AF in the camera industry, where a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time-consuming experimentation and tuning to arrive at a set of parameters which balance AF performance in terms of speed and accuracy, ultimately causing a delay in product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter-Switching AF, providing an automatic approach to achieve superior AF performance in both good and low lighting conditions, based on the following performance measures (metrics): speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun). Performance results using three different prototype cameras
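The sharpness-maximizing search loop that passive AF schemes build on can be sketched as follows. The focus measure, step size, and toy scene are illustrative stand-ins, not the dissertation's Filter-Switching parameters:

```python
def sharpness(image):
    """Crude high-frequency sharpness measure: sum of squared horizontal
    first differences (a stand-in for a tuned band-pass filter)."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in image for i in range(len(row) - 1))

def hill_climb_af(render, lens_min, lens_max, step):
    """Step the focus position while sharpness increases; stop on the
    first decrease -- the basic passive AF search loop."""
    pos = lens_min
    best_pos, best_val = pos, sharpness(render(pos))
    while pos + step <= lens_max:
        pos += step
        val = sharpness(render(pos))
        if val < best_val:
            break  # overran the peak; best_pos is the in-focus position
        best_pos, best_val = pos, val
    return best_pos

# Toy scene whose contrast (hence sharpness) peaks at lens position 7.
def render(pos):
    contrast = max(0, 10 - abs(pos - 7))
    return [[0, contrast, 0, contrast]]

in_focus = hill_climb_af(render, 0, 14, 1)  # → 7
```

The step size trades speed against overrun, which is exactly the tuning problem the dissertation's framework aims to automate.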

  17. Image compensation for camera and lighting variability

    Science.gov (United States)

    Daley, Wayne D.; Britton, Douglas F.

    1996-12-01

    With the current trend of integrating machine vision systems in industrial manufacturing and inspection applications comes the issue of camera and illumination stabilization. Unless each application is built around a particular camera and a highly controlled lighting environment, the interchangeability of cameras or fluctuations in lighting become a problem, as each camera usually has a different response. An empirical approach is proposed where color tile data is acquired using the camera of interest, and a mapping is developed to some predetermined reference image using neural networks. A similar analytical approach, based on a rough analysis of the imaging systems, is also considered for deriving a mapping between cameras. Once a mapping has been determined, all data from one camera is mapped to correspond to the images of the other prior to performing any processing on the data. Instead of writing separate image processing algorithms for the particular image data being received, the image data is adjusted based on each particular camera and lighting situation. All that is required when swapping cameras is the new mapping for the camera being inserted. The image processing algorithms can remain the same, as the input data has been adjusted appropriately. The results of utilizing this technique are presented for an inspection application.
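A minimal stand-in for such a mapping is a per-channel least-squares gain/offset fit between two cameras' readings of the same color tiles. The readings below are made up, and the paper itself uses neural networks rather than this linear sketch:

```python
def fit_channel(src, ref):
    """Least-squares gain/offset so that ref ≈ gain * src + offset."""
    n = float(len(src))
    mean_s = sum(src) / n
    mean_r = sum(ref) / n
    cov = sum((s - mean_s) * (r - mean_r) for s, r in zip(src, ref))
    var = sum((s - mean_s) ** 2 for s in src)
    gain = cov / var
    return gain, mean_r - gain * mean_s

# Hypothetical red-channel readings of the same color tiles by two cameras;
# camera B is constructed here to respond as 1.2 * A + 5.
camera_a = [10.0, 40.0, 90.0, 160.0, 220.0]
camera_b = [1.2 * v + 5.0 for v in camera_a]

gain, offset = fit_channel(camera_a, camera_b)
mapped = [gain * v + offset for v in camera_a]  # camera A data in B's space
```

After fitting, all of camera A's data is pushed through the mapping before any downstream processing, so the inspection algorithms never see the camera change.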

  18. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  19. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    Science.gov (United States)

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Subjects imaged include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333, demonstrating the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  20. THE HUBBLE WIDE FIELD CAMERA 3 TEST OF SURFACES IN THE OUTER SOLAR SYSTEM: SPECTRAL VARIATION ON KUIPER BELT OBJECTS

    International Nuclear Information System (INIS)

    Fraser, Wesley C.; Brown, Michael E.; Glass, Florian

    2015-01-01

    Here, we present additional photometry of targets observed as part of the Hubble Wide Field Camera 3 (WFC3) Test of Surfaces in the Outer Solar System. Twelve targets were re-observed with the WFC3 in optical and NIR wavebands designed to complement those used during the first visit. Additionally, all of the observations originally presented by Fraser and Brown were reanalyzed through the same updated photometry pipeline. A re-analysis of the optical and NIR color distribution reveals a bifurcated optical color distribution and only two identifiable spectral classes, each of which occupies a broad range of colors and has correlated optical and NIR colors, in agreement with our previous findings. We report the detection of significant spectral variations on five targets which cannot be attributed to photometry errors, cosmic rays, point-spread function or sensitivity variations, or other image artifacts capable of explaining the magnitude of the variation. The spectrally variable objects are found to have a broad range of dynamical classes and absolute magnitudes, exhibit a broad range of apparent magnitude variations, and are found in both compositional classes. The spectrally variable objects with sufficiently accurate colors for spectral classification maintain their membership, belonging to the same class at both epochs. 2005 TV189 exhibits a color difference between the two epochs broad enough to span the full range of colors of the neutral class. This strongly argues that the neutral class is a single class with a broad range of colors, rather than the combination of multiple overlapping classes.

  1. THE EVOLUTION OF THE REST-FRAME V-BAND LUMINOSITY FUNCTION FROM z = 4: A CONSTANT FAINT-END SLOPE OVER THE LAST 12 Gyr OF COSMIC HISTORY

    International Nuclear Information System (INIS)

    Marchesini, Danilo; Stefanon, Mauro; Brammer, Gabriel B.; Whitaker, Katherine E.

    2012-01-01

    We present the rest-frame V-band luminosity function (LF) of galaxies at 0.4 ≤ z < 4.0, measured from a near-infrared selected sample constructed from the NMBS, the FIRES, the FIREWORKS, and the ultra-deep NICMOS and WFC3 observations in the HDFN, HUDF, and GOODS-CDFS, all having high-quality optical-to-mid-infrared data. This unique sample combines data from surveys with a large range of depths and areas in a self-consistent way, allowing us to (1) minimize the uncertainties due to cosmic variance; and (2) simultaneously constrain the bright and faint ends with unprecedented accuracy over the targeted redshift range, probing the LF down to 0.1L* at z ∼ 3.9. We find that (1) the faint end is fairly flat and with a constant slope from z = 4, with α = –1.27 ± 0.05; (2) the characteristic magnitude has dimmed by 1.3 mag from z ∼ 3.7 to z = 0.1; (3) the characteristic density has increased by a factor of ∼8 from z ∼ 3.7 to z = 0.1, with 50% of this increase from z ∼ 4 to z ∼ 1.8; and (4) the luminosity density peaks at z ≈ 1-1.5, increasing by a factor of ∼4 from z = 4.0 to z ≈ 1-1.5, and subsequently decreasing by a factor of ∼1.5 by z = 0.1. We find no evidence for a steepening of the faint-end slope with redshift out to z = 4, in contrast with previous observational claims and theoretical predictions. The constant faint-end slope suggests that the efficiency of stellar feedback may evolve with redshift. Alternative interpretations are discussed, such as different masses of the halos hosting faint galaxies at low and high redshifts and/or environmental effects.

  2. SNAPSHOT SPECTRAL AND COLOR IMAGING USING A REGULAR DIGITAL CAMERA WITH A MONOCHROMATIC IMAGE SENSOR

    Directory of Open Access Journals (Sweden)

    J. Hauser

    2017-10-01

    Full Text Available Spectral imaging (SI) refers to the acquisition of the three-dimensional (3D) spectral cube of spatial and spectral data of a source object at a limited number of wavelengths in a given wavelength range. Snapshot spectral imaging (SSI) refers to the instantaneous acquisition (in a single shot) of the spectral cube, a process suitable for fast-changing objects. Known SSI devices exhibit large total track length (TTL), weight and production costs and relatively low optical throughput. We present a simple SSI camera based on a regular digital camera with (i) an added diffusing and dispersing phase-only static optical element at the entrance pupil (diffuser) and (ii) tailored compressed sensing (CS) methods for digital processing of the diffused and dispersed (DD) image recorded on the image sensor. The diffuser is designed to mix the spectral cube data spectrally and spatially and thus to enable convergence in its reconstruction by CS-based algorithms. In addition to performing SSI, this camera can also perform color imaging using a monochromatic or gray-scale image sensor without color filter arrays.
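CS reconstruction of a mixed measurement can be illustrated with the standard iterative soft-thresholding algorithm (ISTA) on a toy underdetermined system. This is a generic sketch of the principle (recovering a sparse signal from fewer measurements than unknowns), not the authors' tailored method:

```python
def soft(v, t):
    """Soft-thresholding operator: the proximal map of t * |.|_1."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ista(A, y, lam=0.1, step=1.0 / 3.0, iters=300):
    """Iterative soft-thresholding (ISTA) for
    min_x 0.5 * ||y - A x||^2 + lam * ||x||_1.
    step must be <= 1/L, with L the largest eigenvalue of A^T A
    (L = 3 for the toy matrix below, hence step = 1/3)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = y - A x, then a gradient step x + step * A^T r
        r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        g = [x[j] + step * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # sparsity-promoting shrinkage step
        x = [soft(v, step * lam) for v in g]
    return x

# Toy problem: 2 mixed measurements of 3 unknowns; the true signal
# x = [0, 0, 2] is sparse, which is what makes recovery possible.
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
y = [2.0, 2.0]
x_hat = ista(A, y)  # close to [0, 0, 1.95] (the l1 penalty shrinks slightly)
```

The diffuser plays the role of A at much larger scale: it mixes the spectral cube so that a CS solver of this family can disentangle it from a single sensor frame.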

  3. Soft x-ray streak cameras

    International Nuclear Information System (INIS)

    Stradling, G.L.

    1988-01-01

This paper is a discussion of the development and of the current state of the art in picosecond soft x-ray streak camera technology. Accomplishments from a number of institutions are discussed. X-ray streak cameras vary from standard visible streak camera designs in the use of an x-ray transmitting window and an x-ray sensitive photocathode. The spectral sensitivity range of these instruments includes portions of the near UV and extends from the subkilovolt x-ray region to several tens of kilovolts. Attendant challenges encountered in the design and use of x-ray streak cameras include the accommodation of high-voltage and vacuum requirements, as well as manipulation of a photocathode structure which is often fragile. The x-ray transmitting window is generally too fragile to withstand atmospheric pressure, necessitating active vacuum pumping and a vacuum line of sight to the x-ray signal source. Because of the difficulty of manipulating x-ray beams with conventional optics, as is done with visible light, the size of the photocathode sensing area, access to the front of the tube, the ability to insert the streak tube into a vacuum chamber and the capability to trigger the sweep with very short internal delay times are issues uniquely relevant to x-ray streak camera use. The physics of electron imaging may place more stringent limitations on the temporal and spatial resolution obtainable with x-ray photocathodes than with the visible counterpart. Other issues which are common to the entire streak camera community also concern the x-ray streak camera users and manufacturers.

  4. New camera systems for fuel services

    International Nuclear Information System (INIS)

    Hummel, W.; Beck, H.J.

    2010-01-01

AREVA NP Fuel Services have many years of experience in visual examination and measurements on fuel assemblies and associated core components, using state-of-the-art cameras and measuring technologies. These techniques allow the surface and dimensional characterization of materials and shapes by visual examination. New, more sophisticated technologies for fuel services include, for example, two shielded color camera systems for underwater use and close inspection of a fuel assembly. Market requirements for detecting and characterizing small defects (smaller than a tenth of a millimeter) or cracks, and for analyzing surface appearance on irradiated fuel rod cladding or fuel assembly structural parts, have increased. It is therefore common practice to use movie cameras with higher resolution. The radiation resistance of high-resolution CCD cameras is in general very low, and it is not possible to use them unshielded close to a fuel assembly. By extending the camera with a mirror system and shielding around the sensitive parts, the movie camera can be utilized for fuel assembly inspection. AREVA NP Fuel Services is now equipped with such movie cameras. (orig.)

  5. Automatic multi-camera calibration for deployable positioning systems

    Science.gov (United States)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method, and that the automated calibration method can replace the manual calibration.
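The epipolar geometry underlying the 5-point method can be sketched numerically. The snippet below is illustrative only (an arbitrary pose, synthetic points, no noise): it builds E = [t]x R and checks that corresponding normalized image points satisfy x2ᵀ E x1 = 0, which is the constraint a 5-point solver inverts to recover relative pose.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

# Arbitrary second-camera pose relative to the first (illustration only).
theta = np.deg2rad(10)
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.array([1.0, 0.2, 0.0])
E = skew(t) @ R                       # essential matrix E = [t]_x R

rng = np.random.default_rng(1)
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))   # 3D points

x1 = X / X[:, 2:3]                    # camera 1 at [I|0], normalized coords
Xc2 = (R @ X.T).T + t                 # same points in camera-2 frame
x2 = Xc2 / Xc2[:, 2:3]

residuals = np.einsum('ni,ij,nj->n', x2, E, x1)  # x2^T E x1 for each point
print("max |epipolar residual|:", np.abs(residuals).max())
```

The residuals vanish up to floating-point error; with noisy real detections they become the quantity the calibration minimizes.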

  6. Multi-Angle Snowflake Camera Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Stuefer, Martin [Univ. of Alaska, Fairbanks, AK (United States); Bailey, J. [Univ. of Alaska, Fairbanks, AK (United States)

    2016-07-01

The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera's field of view is aligned to a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' field of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.
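The fall-speed principle ("successive triggers along the fall path") reduces to distance over time between trigger planes. A minimal sketch, with a purely illustrative plane separation rather than the instrument's actual geometry:

```python
# Hedged sketch: two IR trigger planes a known vertical distance apart
# time-stamp a falling hydrometeor; speed is distance over the interval.
# The 32 mm separation used below is invented for illustration.

def fall_speed(separation_m, t_upper_s, t_lower_s):
    """Fall speed (m/s) from successive triggers along the fall path."""
    dt = t_lower_s - t_upper_s
    if dt <= 0:
        raise ValueError("lower trigger must fire after upper trigger")
    return separation_m / dt

v = fall_speed(0.032, t_upper_s=0.000, t_lower_s=0.016)
print(v)  # 2.0 m/s
```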

  7. CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    G. Kemper

    2016-06-01

Full Text Available Besides the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and will present examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and is equipped with a 50 mm lens, while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount which creates floating antenna–IMU lever arms. These had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 exposures of all 5 cameras and registered GPS/IMU data. This specific mission was designed with two different altitudes and additional cross lines at each flying height. The five images from each exposure position have no overlaps, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration. In a first

  8. First Light with a 67-Million-Pixel WFI Camera

    Science.gov (United States)

    1999-01-01

The newest astronomical instrument at the La Silla observatory is a super-camera with no less than sixty-seven million image elements. It represents the outcome of a joint project between the European Southern Observatory (ESO), the Max-Planck-Institut für Astronomie (MPI-A) in Heidelberg (Germany) and the Osservatorio Astronomico di Capodimonte (OAC) near Naples (Italy), and was installed at the 2.2-m MPG/ESO telescope in December 1998. Following careful adjustment and testing, it has now produced the first spectacular test images. With a field size larger than the Full Moon, the new digital Wide Field Imager is able to obtain detailed views of extended celestial objects to very faint magnitudes. It is the first of a new generation of survey facilities at ESO with which a variety of large-scale searches will soon be made over extended regions of the southern sky. These programmes will lead to the discovery of particularly interesting and unusual (rare) celestial objects that may then be studied with large telescopes like the VLT at Paranal. This will in turn allow astronomers to penetrate deeper and deeper into the many secrets of the Universe. More light + larger fields = more information! The larger a telescope is, the more light - and hence information about the Universe and its constituents - it can collect. This simple truth represents the main reason for building ESO's Very Large Telescope (VLT) at the Paranal Observatory. However, the information-gathering power of astronomical equipment can also be increased by using a larger detector with more image elements (pixels), thus permitting the simultaneous recording of images of larger sky fields (or more details in the same field). It is for similar reasons that many professional photographers prefer larger-format cameras and/or wide-angle lenses to the more conventional ones. The Wide Field Imager at the 2.2-m telescope Because of technological limitations, the sizes of detectors most commonly in use in

  9. Relative and Absolute Calibration of a Multihead Camera System with Oblique and Nadir Looking Cameras for a Uas

    Science.gov (United States)

    Niemeyer, F.; Schima, R.; Grenzdörffer, G.

    2013-08-01

Numerous unmanned aerial systems (UAS) are currently flooding the market. UAVs are specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because legal restrictions are still manageable and the payload capacities are sufficient for many imaging sensors. Currently a camera system with four oblique and one nadir looking cameras is under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. A MD4-1000 UAS from microdrones is used as a carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and will give an overview of the results and experiences of the test flights.

  10. Underwater video enhancement using multi-camera super-resolution

    Science.gov (United States)

    Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.

    2017-12-01

Image spatial resolution is critical in several fields such as medicine, communications, satellite, and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that makes it possible to enhance the quality of underwater video sequences without significantly increasing computation. In order to compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
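Of the two metrics used, PSNR has a closed form that is easy to state. A minimal sketch (SSIM requires a windowed implementation, e.g. scikit-image's `structural_similarity`, and is omitted here; `max_val` assumes 8-bit images):

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf                      # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.zeros((8, 8), dtype=np.uint8)
off = ref + 1                              # every pixel off by one -> MSE = 1
print(round(psnr(ref, off), 2))            # 48.13
```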

  11. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    Science.gov (United States)

    Yu, Liping; Pan, Bing

    2017-08-01

Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrated the effectiveness and accuracy of the proposed technique.
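The crosstalk-correction step can be sketched as per-pixel linear unmixing. The 2x2 mixing matrix below is invented for illustration; in practice it would be measured from calibration images, and the paper's actual correction may differ:

```python
import numpy as np

# The sensor's red/blue channels each see a mixture of the two optical
# paths, modelled here as a fixed 2x2 mixing matrix applied per pixel.
C = np.array([[0.92, 0.08],      # red-pixel response to (red, blue) paths
              [0.05, 0.95]])     # blue-pixel response (values invented)

rng = np.random.default_rng(2)
true = rng.uniform(0, 1, size=(4, 4, 2))        # ideal per-path images
observed = np.einsum('ij,hwj->hwi', C, true)    # crosstalk-contaminated
corrected = np.einsum('ij,hwj->hwi', np.linalg.inv(C), observed)

print(np.abs(corrected - true).max())           # ~0 up to float error
```

The corrected channels can then be fed to a regular stereo-DIC pipeline as if they came from two separate cameras.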

  12. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

13. Faint (and bright) variable stars in the satellites of the Milky Way

    Directory of Open Access Journals (Sweden)

    Vivas A. Katherina

    2017-01-01

Full Text Available I describe two ongoing projects related to variable stars in the satellites of the Milky Way. In the first project, we are searching for dwarf Cepheid stars (a.k.a. δ Scuti and/or SX Phe) in some of the classical dwarf spheroidal galaxies. Our goal is to characterize the population of these variable stars under different environments (age, metallicity) in order to study their use as standard candles in systems for which the metallicity is not necessarily known. In the second project we search for RR Lyrae stars in the new ultra-faint satellite galaxies that have been discovered around the Milky Way in recent years.

  14. Event Detection Intelligent Camera: Demonstration of flexible, real-time data taking and processing

    Energy Technology Data Exchange (ETDEWEB)

    Szabolics, Tamás, E-mail: szabolics.tamas@wigner.mta.hu; Cseh, Gábor; Kocsis, Gábor; Szepesi, Tamás; Zoletnik, Sándor

    2015-10-15

Highlights: • We present EDICAM's operation principles. • Firmware test results. • Software test results. • Further developments. - Abstract: An innovative fast camera (EDICAM – Event Detection Intelligent CAMera) was developed by MTA Wigner RCP in the last few years. This new concept was designed for intelligent event-driven processing to be able to detect predefined events and track objects in the plasma. The camera provides a moderate frame rate of 400 Hz at full frame resolution (1280 × 1024), and readout of smaller regions of interest can be done in the 1–140 kHz range even during exposure of the full image. One of the most important advantages of this hardware is a 10 Gbit/s optical link which ensures very fast communication and data transfer between the PC and the camera, enabling two levels of processing: primitive algorithms in the camera hardware and high-level processing in the PC. This camera hardware has successfully proven its ability to monitor the plasma in several fusion devices, for example at ASDEX Upgrade, KSTAR and COMPASS, with the first version of the firmware. A new firmware and software package is under development. It allows predefined events to be detected in real time, and the camera is therefore capable of changing its own operation or giving warnings, e.g. to the safety system of the experiment. The EDICAM system can handle a huge amount of data (up to TBs) at a high data rate (950 MB/s) and will be used as the central element of the 10-camera overview video diagnostic system of the Wendelstein 7-X (W7-X) stellarator. This paper presents key elements of the newly developed built-in intelligence, stressing the revolutionary new features and the results of tests of the different software elements.
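A primitive in-camera event-detection rule of the kind described (detect a predefined event, then change behavior) can be sketched with simple frame differencing. This is a hypothetical stand-in, not EDICAM's firmware logic; the threshold and frame sizes are invented:

```python
import numpy as np

def detect_events(frames, threshold):
    """Flag frame indices whose mean absolute difference from the
    previous frame exceeds the threshold (toy event criterion)."""
    events = []
    for i in range(1, len(frames)):
        diff = np.mean(np.abs(frames[i].astype(np.int32) -
                              frames[i - 1].astype(np.int32)))
        if diff > threshold:
            events.append(i)
    return events

quiet = np.full((8, 8), 100, dtype=np.uint16)   # steady plasma background
flash = np.full((8, 8), 160, dtype=np.uint16)   # sudden brightening
print(detect_events([quiet, quiet, flash, quiet], threshold=20))  # [2, 3]
```

In a real system such a flag would drive the camera's own reconfiguration (e.g. switching to a fast ROI readout) rather than just being printed.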

  15. MicroCameras and Photometers (MCP) on board the TARANIS satellite

    Science.gov (United States)

    Farges, T.; Hébert, P.; Le Mer-Dachard, F.; Ravel, K.; Gaillac, S.

    2017-12-01

TARANIS (Tool for the Analysis of Radiations from lightNing and Sprites) is a CNES micro satellite. Its main objective is to study impulsive transfers of energy between the Earth atmosphere and the space environment. It will be sun-synchronous at an altitude of 700 km. It will be launched in 2019 for at least 2 years. Its payload is composed of several electromagnetic instruments covering different wavelengths (from gamma-rays to radio waves, including optical). TARANIS instruments are currently in the calibration and qualification phase. The purpose is to present the MicroCameras and Photometers (MCP) design, to show its performance after its recent characterization, and finally to discuss the scientific objectives and how we want to address them with MCP observations. The MicroCameras, developed by Sodern, are dedicated to the spatial description of TLEs and their parent lightning. They are able to differentiate sprites and lightning thanks to two narrow bands ([757-767 nm] and [772-782 nm]) that provide simultaneous pairs of images of an event. Simulation results of the differentiation method will be shown. After calibration and tests, the MicroCameras are now delivered to the CNES for integration on the payload. The Photometers, developed by Bertin Technologies, will provide temporal measurements and spectral characteristics of TLEs and lightning. They are key instruments because of their capability to detect TLEs on board and then switch all the instruments of the scientific payload into their high-resolution acquisition mode. Photometers use four spectral bands in the [170-260 nm], [332-342 nm], [757-767 nm] and [600-900 nm] ranges and have the same field of view as the cameras. The on-board TLE detection algorithm's remote-controlled parameters have been tuned before launch using the electronic board and simulated or real event waveforms. After calibration, the Photometers are now going through the environmental tests. They will be delivered to the CNES for integration on the

  16. Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task

    Directory of Open Access Journals (Sweden)

    Nicholas T. Bott

    2017-06-01

Full Text Available Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive “window on the brain,” and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and the 3 FPS built-in web camera at each of the three visits (r = 0.88–0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81–0.88). There were strong relationships on VPC mean novelty preference score between 10, 5, and 3 FPS training sets (r = 0.88–0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high frame rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as
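The two quantities being compared, the per-trial novelty preference score and the Pearson correlation between scoring pipelines, are straightforward to compute. The fixation times and per-subject scores below are fabricated for illustration only:

```python
import numpy as np

def novelty_preference(t_novel, t_familiar):
    """Fraction of total viewing time spent on the novel image."""
    return t_novel / (t_novel + t_familiar)

# Hypothetical per-subject mean scores from the 60 FPS eye tracker and
# the 3 FPS web-camera manual scoring (values invented).
tracker = np.array([0.71, 0.64, 0.58, 0.69, 0.75, 0.62])
webcam  = np.array([0.69, 0.66, 0.55, 0.72, 0.74, 0.60])
r = np.corrcoef(tracker, webcam)[0, 1]          # Pearson correlation

print(round(novelty_preference(3.2, 1.8), 2))   # 0.64
print(round(r, 2))
```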

  17. An Amateur's Guide to Observing and Imaging the Heavens

    Science.gov (United States)

    Morison, Ian

    2014-06-01

    Foreword; Acknowledgments; Prologue: a tale of two scopes; 1. Telescope and observing fundamentals; 2. Refractors; 3. Binoculars and spotting scopes; 4. The Newtonian telescope and its derivatives; 5. The Cassegrain telescope and its derivatives - Schmidt-Cassegrains and Maksutovs; 6. Telescope maintenance, collimation and star testing; 7. Telescope accessories: finders, eyepieces and bino-viewers; 8. Telescope mounts: alt/az and equatorial with their computerised variants; 9. The art of visual observing; 10. Visual observations of the Moon and planets; 11. Imaging the Moon and planets with DSLRs and web-cams; 12. Observing and imaging the Sun in white light and H-alpha; 13. Observing with an astro-video camera to 'see' faint objects; 14. Deep sky imaging with standard and H-alpha modified DSLR cameras; 15. Deep sky imaging with cooled CCD cameras; 16. Auto-guiding techniques and equipment; 17. Spectral studies of the Sun, stars and galaxies; 18. Improving and enhancing images in Photoshop; Index.

  18. Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test

    Directory of Open Access Journals (Sweden)

    Bruno Roux

    2008-11-01

Full Text Available The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that (1) the use of unprocessed image data did not improve the results of image analyses; (2) vignetting had a significant effect, especially for the modified camera; and (3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol, and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces.
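The two corrections central to the results, flat-field (vignetting) correction and illumination-insensitive normalized vegetation indices, can be sketched as follows. All pixel values are invented; the study's actual processing chain is more involved:

```python
import numpy as np

def devignette(raw, flat):
    """Flat-field correction: divide by the normalized vignetting profile."""
    return raw / (flat / flat.max())

def ndvi(nir, red):
    """Normalized Difference Vegetation Index (ratio cancels illumination)."""
    return (nir - red) / (nir + red)

flat = np.array([[0.7, 1.0],                 # darker corners (invented)
                 [1.0, 0.7]])
raw = 0.5 * flat                             # a uniform scene, vignetted
print(devignette(raw, flat))                 # uniform 0.5 everywhere

print(ndvi(0.6, 0.2))                        # 0.5
```

Because NDVI is a ratio of band differences to band sums, a multiplicative change in scene illumination cancels out, which is why vignetting-corrected indices sufficed to normalize across acquisition dates.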

  19. On the close environment of BL Lacertae objects

    Energy Technology Data Exchange (ETDEWEB)

    Falomo, R. (Osservatorio Astronomico di Padova (Italy)); Melnick, J. (European Southern Observatory, Santiago (Chile)); Tanzi, E.G. (Consiglio Nazionale delle Ricerche, Milan (Italy))

    1990-06-21

The local environment of BL Lacertae objects, which resemble quasars but lack emission lines, is poorly understood. In the few cases where the surrounding nebulosity has been studied in detail, it is consistent with the presence of a giant elliptical galaxy, but the evidence that the BL Lac and the putative galaxy are physically associated rests solely on their positional coincidence. An alternative hypothesis, that BL Lacs are gravitationally lensed and that the surrounding emission is from the foreground lensing object, gains some support from a number of observations which reveal less than perfect alignment between BL Lacs and surrounding emission. We have begun a systematic programme of high-resolution imaging aimed at understanding in a general way the local environment of BL Lacs. Here we describe a first series of images, which show the presence of emission features around most of the BL Lacs observed. Typically, this emission is close (<5 arcsec) to the BL Lac, and faint (m_R = 21). We discuss the interpretation of these companions in terms of both interacting objects and gravitational lenses. (author).

  20. The prototype cameras for trans-Neptunian automatic occultation survey

    Science.gov (United States)

    Wang, Shiang-Yu; Ling, Hung-Hsu; Hu, Yen-Sang; Geary, John C.; Chang, Yin-Chang; Chen, Hsin-Yo; Amato, Stephen M.; Huang, Pin-Jie; Pratlong, Jerome; Szentgyorgyi, Andrew; Lehner, Matthew; Norton, Timothy; Jorden, Paul

    2016-08-01

The Transneptunian Automated Occultation Survey (TAOS II) is a three-robotic-telescope project to detect stellar occultation events generated by Trans-Neptunian Objects (TNOs). The TAOS II project aims to monitor about 10,000 stars simultaneously at 20 Hz to enable a statistically significant event rate. The TAOS II camera is designed to cover the 1.7-degree diameter field of view of the 1.3 m telescope with 10 mosaic 4.5k × 2k CMOS sensors. The new CMOS sensor (CIS 113) has a back-illuminated thinned structure and high sensitivity, providing performance similar to that of back-illuminated thinned CCDs. Due to the requirements of high performance and high speed, development of the new CMOS sensor is still in progress. Before the science arrays are delivered, a prototype camera has been developed to help with commissioning of the robotic telescope system. The prototype camera uses the small-format e2v CIS 107 device but with the same dewar and similar control electronics as the TAOS II science camera. The sensors, mounted on a single Invar plate, are cooled by a cryogenic cooler to the operating temperature of about 200 K, as for the science array. The Invar plate is connected to the dewar body through a supporting ring with three G10 bipods. The control electronics consist of an analog part and a Xilinx FPGA-based digital circuit. One FPGA is needed to control and process the signal from each CMOS sensor for 20 Hz region-of-interest (ROI) readout.

  1. Scintillation camera for high activity sources

    International Nuclear Information System (INIS)

    Arseneau, R.E.

    1978-01-01

    The invention described relates to a scintillation camera used for clinical medical diagnosis. Advanced recognition of many unacceptable pulses allows the scintillation camera to discard such pulses at an early stage in processing. This frees the camera to process a greater number of pulses of interest within a given period of time. Temporary buffer storage allows the camera to accommodate pulses received at a rate in excess of its maximum rated capability due to statistical fluctuations in the level of radioactivity of the radiation source measured. (U.K.)

  2. Decision about buying a gamma camera

    International Nuclear Information System (INIS)

    Ganatra, R.D.

    1992-01-01

A large part of the referral to a nuclear medicine department is usually for imaging studies. Sooner or later, the nuclear medicine specialist will be called upon to make a decision about when and what type of gamma camera to buy. There is no longer an option of choosing between a rectilinear scanner and a gamma camera as the former is virtually out of the market. The decision that one has to make is when to invest in a gamma camera, and then on what basis to select the gamma camera.

  3. Decision about buying a gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Ganatra, R D

    1993-12-31

A large part of the referral to a nuclear medicine department is usually for imaging studies. Sooner or later, the nuclear medicine specialist will be called upon to make a decision about when and what type of gamma camera to buy. There is no longer an option of choosing between a rectilinear scanner and a gamma camera as the former is virtually out of the market. The decision that one has to make is when to invest in a gamma camera, and then on what basis to select the gamma camera. 1 tab., 1 fig.

  4. Selective-imaging camera

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants that contribute to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at the firmware level. The design is consistent with the physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for the Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down-shifting Planck spectra at each pixel and time.

  5. Video Chat with Multiple Cameras

    OpenAIRE

    MacCormick, John

    2012-01-01

    The dominant paradigm for video chat employs a single camera at each end of the conversation, but some conversations can be greatly enhanced by using multiple cameras at one or both ends. This paper provides the first rigorous investigation of multi-camera video chat, concentrating especially on the ability of users to switch between views at either end of the conversation. A user study of 23 individuals analyzes the advantages and disadvantages of permitting a user to switch between views at...

  6. Microprocessor-controlled, wide-range streak camera

    Energy Technology Data Exchange (ETDEWEB)

    Amy E. Lewis, Craig Hollabaugh

    2006-09-01

Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  7. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras

    Directory of Open Access Journals (Sweden)

    Yajie Liao

    2017-06-01

    Full Text Available Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at joint calibration of multi-sensors (more than four devices), which normally is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method can achieve satisfactory performance in a practical real-time system, and its accuracy is higher than the manufacturer’s calibration.
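    The weighted joint cost described above can be illustrated with a toy sketch. The closed-form shift below is a deliberately simplified stand-in for the full pose optimization; it is not the authors' actual cost function:

```python
import numpy as np

def weighted_cost(residuals, weights):
    """Weighted sum of squared reprojection errors over several cameras."""
    return sum(w * np.sum(r ** 2) for w, r in zip(weights, residuals))

def best_shared_shift(residuals, weights):
    """Closed-form 2D shift minimizing the weighted cost when the only
    free parameter is a common image-plane translation (a toy stand-in
    for the pose parameters the paper optimizes)."""
    num = sum(w * r.sum(axis=0) for w, r in zip(weights, residuals))
    den = sum(w * len(r) for w, r in zip(weights, residuals))
    return num / den

# Two cameras with the same systematic error; per-camera weights let
# better-placed cameras dominate the solution.
r1 = np.tile([1.0, 2.0], (10, 1))
r2 = np.tile([1.0, 2.0], (5, 1))
shift = best_shared_shift([r1, r2], [2.0, 1.0])
```

    In the paper's setting the free parameters are relative poses rather than a single shift, but the structure of the objective (weighted sum of per-camera residuals) is the same.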

  8. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.

    Science.gov (United States)

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-06-24

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at joint calibration of multi-sensors (more than four devices), which normally is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method can achieve satisfactory performance in a practical real-time system, and its accuracy is higher than the manufacturer's calibration.

  9. Family Of Calibrated Stereometric Cameras For Direct Intraoral Use

    Science.gov (United States)

    Curry, Sean; Moffitt, Francis; Symes, Douglas; Baumrind, Sheldon

    1983-07-01

    In order to study empirically the relative efficiencies of different types of orthodontic appliances in repositioning teeth in vivo, we have designed and constructed a pair of fixed-focus, normal case, fully-calibrated stereometric cameras. One is used to obtain stereo photography of single teeth, at a scale of approximately 2:1, and the other is designed for stereo imaging of the entire dentition, study casts, facial structures, and other related objects at a scale of approximately 1:8. Twin lenses simultaneously expose adjacent frames on a single roll of 70 mm film. Physical flatness of the film is ensured by the use of a spring-loaded metal pressure plate. The film is forced against a 3/16" optical glass plate upon which is etched an array of 16 fiducial marks which divide the film format into 9 rectangular regions. Using this approach, it has been possible to produce photographs which are undistorted for qualitative viewing and from which quantitative data can be acquired by direct digitization of conventional photographic enlargements. We are in the process of designing additional members of this family of cameras. All calibration and data acquisition and analysis techniques previously developed will be directly applicable to these new cameras.

  10. Video astronomy on the go using video cameras with small telescopes

    CERN Document Server

    Ashley, Joseph

    2017-01-01

    Author Joseph Ashley explains video astronomy's many benefits in this comprehensive reference guide for amateurs. Video astronomy offers a wonderful way to see objects in far greater detail than is possible through an eyepiece, and the ability to use the modern, entry-level video camera to image deep space objects is a wonderful development for urban astronomers in particular, as it helps sidestep the issue of light pollution. The author addresses both the positive attributes of these cameras for deep space imaging as well as the limitations, such as amp glow. The equipment needed for imaging as well as how it is configured is identified with hook-up diagrams and photographs. Imaging techniques are discussed together with image processing (stacking and image enhancement). Video astronomy has evolved to offer great results and great ease of use, and both novices and more experienced amateurs can use this book to find the set-up that works best for them. Flexible and portable, they open up a whole new way...

  11. Analyzer for gamma cameras diagnostic

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Osorio Deliz, J. F.; Diaz Garcia, A.

    2013-01-01

    This research work was carried out to develop an analyzer for gamma camera diagnostics. It is composed of an electronic system with hardware and software capabilities, and operates on the four head position signals acquired from a gamma camera detector. The result is the spectrum of the energy delivered by nuclear radiation coming from the camera detector head. The system includes analog processing of the position signals from the camera, digitization, subsequent processing of the energy signal in a multichannel analyzer, transmission of the data to a computer via a standard USB port, and processing of the data on a personal computer to obtain the final histogram. The circuits comprise an analog processing board and a universal kit with a microcontroller and programmable gate array. (Author)
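    The multichannel-analyzer stage described above can be sketched as follows, assuming (as in Anger-type cameras) that the per-event energy is the sum of the four head position signals; the channel count and energy range here are illustrative, not the instrument's actual values:

```python
import numpy as np

def energy_histogram(a, b, c, d, channels=4, e_max=4.0):
    """Toy multichannel-analyzer step: each event's energy is taken as
    the sum of the four head position signals, then binned into a fixed
    number of channels to build the energy spectrum."""
    energy = a + b + c + d
    counts, edges = np.histogram(energy, bins=channels, range=(0.0, e_max))
    return counts, edges

# Two events whose summed energies (1.0 and 2.0) land in channels 1 and 2.
sig = np.array([0.25, 0.5])
counts, edges = energy_histogram(sig, sig, sig, sig)
```

    A real analyzer performs the summing and binning in hardware/firmware and streams the histogram to the PC over USB, as the abstract describes.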

  12. The faint radio source population at 15.7 GHz - II. Multi-wavelength properties

    Science.gov (United States)

    Whittam, I. H.; Riley, J. M.; Green, D. A.; Jarvis, M. J.; Vaccari, M.

    2015-11-01

    A complete, flux density limited sample of 96 faint (>0.5 mJy) radio sources is selected from the 10C survey at 15.7 GHz in the Lockman Hole. We have matched this sample to a range of multi-wavelength catalogues, including Spitzer Extragalactic Representative Volume Survey, Spitzer Wide-area Infrared Extragalactic survey, United Kingdom Infrared Telescope Infrared Deep Sky Survey and optical data; multi-wavelength counterparts are found for 80 of the 96 sources and spectroscopic redshifts are available for 24 sources. Photometric redshifts are estimated for the sources with multi-wavelength data available; the median redshift of the sample is 0.91 with an interquartile range of 0.84. Radio-to-optical ratios show that at least 94 per cent of the sample are radio loud, indicating that the 10C sample is dominated by radio galaxies. This is in contrast to samples selected at lower frequencies, where radio-quiet AGN and star-forming galaxies are present in significant numbers at these flux density levels. All six radio-quiet sources have rising radio spectra, suggesting that they are dominated by AGN emission. These results confirm the conclusions of Paper I that the faint, flat-spectrum sources which are found to dominate the 10C sample below ˜1 mJy are the cores of radio galaxies. The properties of the 10C sample are compared to the Square Kilometre Array Design Studies Simulated Skies; a population of low-redshift star-forming galaxies predicted by the simulation is not found in the observed sample.

  13. Single Camera Calibration in 3D Vision

    Directory of Open Access Journals (Sweden)

    Caius SULIMAN

    2009-12-01

    Full Text Available Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when the parameters of the camera are known (i.e. principal distance, lens distortion, focal length, etc.). In this paper we deal with a single-camera calibration method, and with the help of this method we try to find the intrinsic and extrinsic camera parameters. The method was implemented with success in the programming and simulation environment Matlab.
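    The intrinsic/extrinsic split mentioned above can be illustrated with a minimal pinhole-projection sketch. The parameter values are made up for the example; this is not the paper's Matlab implementation:

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection x ~ K [R|t] X: K holds the intrinsic
    parameters (focal length in pixels, principal point), while R and t
    are the extrinsic pose recovered by calibration."""
    Xc = R @ X + t                 # world -> camera coordinates
    x = K @ Xc                     # camera -> homogeneous pixel coords
    return x[:2] / x[2]            # perspective division

# Illustrative (made-up) intrinsics: 800 px focal length, 640x480 sensor.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                      # no rotation
t = np.array([0.0, 0.0, 2.0])      # camera 2 m from the target plane
uv = project(K, R, t, np.array([0.1, -0.05, 0.0]))
```

    Calibration runs this model in reverse: given many known 3D points and their observed pixels, it solves for K, R and t.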

  14. RELATIVE AND ABSOLUTE CALIBRATION OF A MULTIHEAD CAMERA SYSTEM WITH OBLIQUE AND NADIR LOOKING CAMERAS FOR A UAS

    Directory of Open Access Journals (Sweden)

    F. Niemeyer

    2013-08-01

    Full Text Available Numerous unmanned aerial systems (UAS) are currently flooding the market. UAVs are specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because legal restrictions are still manageable while the payload capacities are sufficient for many imaging sensors. Currently a camera system with four oblique and one nadir-looking camera is under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. An MD4-1000 UAS from microdrones is used as a carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences of test flights.

  15. A camera specification for tendering purposes

    International Nuclear Information System (INIS)

    Lunt, M.J.; Davies, M.D.; Kenyon, N.G.

    1985-01-01

    A standardized document is described which is suitable for sending to companies which are being invited to tender for the supply of a gamma camera. The document refers to various features of the camera, the performance specification of the camera, maintenance details, price quotations for various options and delivery, installation and warranty details. (U.K.)

  16. A novel plane method to the calibration of the thermal camera

    Science.gov (United States)

    Wang, Xunsi; Huang, Wei; Nie, Qiu-hua; Xu, Tiefeng; Dai, Shixun; Shen, Xiang; Cheng, Weihai

    2009-07-01

    This paper provides an up-to-date review of research efforts in thermal cameras and target object recognition techniques based on two-dimensional (2D) images in the infrared (IR) spectrum (8-12 μm). From the geometric point of view, a special target plate with a lamp-excited radiation source was constructed, allowing these devices to be calibrated geometrically along a radiance-based approach. The calibration theory and actual experimental procedures are described, followed by an automated measurement of the circle targets using an image centroid algorithm. The key parameters of the IR camera, the 3 intrinsic and 6 extrinsic parameters of the Tsai model, were calibrated in thermal imaging. The subsequent data processing and analysis are then outlined. The 3D model from the successful calibration of a representative infrared array camera is presented and discussed. These results provide a new and easy way to determine the geometric characteristics of such imagers, which can be used in car night vision, medical, industrial, military, and environmental applications.
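    The image centroid step mentioned above can be sketched as a grey-level-weighted mean over the target sub-image; this is a generic centroid computation, not the authors' exact algorithm:

```python
import numpy as np

def target_centroid(img):
    """Grey-level-weighted centroid (x, y) of a sub-image containing a
    single circular calibration target."""
    ys, xs = np.indices(img.shape)        # per-pixel row/column indices
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# A short vertical streak of bright pixels centred at column 2, row 2.
img = np.zeros((5, 5))
img[1:4, 2] = 1.0
cx, cy = target_centroid(img)
```

    For well-exposed circular targets the weighted centroid locates the mark to sub-pixel precision, which is what the Tsai-model calibration needs as input.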

  17. SVBRDF-Invariant Shape and Reflectance Estimation from a Light-Field Camera.

    Science.gov (United States)

    Wang, Ting-Chun; Chandraker, Manmohan; Efros, Alexei A; Ramamoorthi, Ravi

    2018-03-01

    Light-field cameras have recently emerged as a powerful tool for one-shot passive 3D shape capture. However, obtaining the shape of glossy objects like metals or plastics remains challenging, since standard Lambertian cues like photo-consistency cannot be easily applied. In this paper, we derive a spatially-varying (SV)BRDF-invariant theory for recovering 3D shape and reflectance from light-field cameras. Our key theoretical insight is a novel analysis of diffuse plus single-lobe SVBRDFs under a light-field setup. We show that, although direct shape recovery is not possible, an equation relating depths and normals can still be derived. Using this equation, we then propose using a polynomial (quadratic) shape prior to resolve the shape ambiguity. Once shape is estimated, we also recover the reflectance. We present extensive synthetic data on the entire MERL BRDF dataset, as well as a number of real examples to validate the theory, where we simultaneously recover shape and BRDFs from a single image taken with a Lytro Illum camera.

  18. The search for faint radio supernova remnants in the outer Galaxy: five new discoveries

    Science.gov (United States)

    Gerbrandt, Stephanie; Foster, Tyler J.; Kothes, Roland; Geisbüsch, Jörn; Tung, Albert

    2014-06-01

    Context. High resolution and sensitivity large-scale radio surveys of the Milky Way are critical in the discovery of very low surface brightness supernova remnants (SNRs), which may constitute a significant portion of the Galactic SNRs still unaccounted for (ostensibly the "missing SNR problem"). Aims: The overall purpose here is to present the results of a systematic, deep data-mining of the Canadian Galactic Plane Survey (CGPS) for faint, extended non-thermal and polarized emission structures that are likely the shells of uncatalogued SNRs. Methods: We examine 5 × 5 degree mosaics from the entire 1420 MHz continuum and polarization dataset of the CGPS after removing unresolved "point" sources and subsequently smoothing them. Newly revealed extended emission objects are compared to similarly prepared CGPS 408 MHz continuum mosaics, as well as to source-removed mosaics from various existing radio surveys at 4.8 GHz, 2.7 GHz, and 327 MHz, to identify candidates with non-thermal emission characteristics. We integrate flux densities at each frequency to characterise the radio spectral behaviour of these candidates. We further look for mid- and high-frequency (1420 MHz, 4.8 GHz) ordered polarized emission from the limb-brightened "shell"-like continuum features that the candidates sport. Finally, we use IR and optical maps to provide additional backing evidence. Results: Here we present evidence that five new objects, identified as filling all or some of the criteria above, are strong candidates for new SNRs. These five are designated by their Galactic coordinate names G108.5+11.0, G128.5+2.6, G149.5+3.2, G150.8+3.8, and G160.1-1.1. The radio spectrum of each is presented, highlighting their steepness, which is characteristic of synchrotron radiation. CGPS 1420 MHz polarization data and 4.8 GHz polarization data also provide evidence that these objects are newly discovered SNRs.
These discoveries represent a significant increase in the number of SNRs known in the outer

  19. State of art in radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Choi; Young Soo; Kim, Seong Ho; Cho, Jae Wan; Kim, Chang Hoi; Seo, Young Chil

    2002-02-01

    Working in radiation environments such as nuclear power plants, RI facilities, nuclear fuel fabrication facilities and medical centers involves radiation exposure, and such jobs can be carried out by remote observation and operation. However, cameras used in general industry are vulnerable to radiation, so radiation-tolerant cameras are needed for radiation environments. Applications of radiation-tolerant camera systems include the nuclear industry, radioactive medicine, aerospace, and so on. In the nuclear industry especially, there is continuous demand for the inspection of nuclear boilers, the exchange of pellets, and the inspection of nuclear waste. The nuclear-developed countries have made efforts to develop radiation-tolerant cameras, and now have many kinds of radiation-tolerant cameras which can tolerate a total dose of 10^6 to 10^8 rad. In this report, we examine the state of the art in radiation-tolerant cameras and analyze these technologies. We hope this paper raises interest in developing radiation-tolerant cameras and upgrades the level of domestic technology.

  20. INFLUENCE OF THE VIEWING GEOMETRY WITHIN HYPERSPECTRAL IMAGES RETRIEVED FROM UAV SNAPSHOT CAMERAS

    OpenAIRE

    Aasen, Helge

    2016-01-01

    Hyperspectral data has great potential for vegetation parameter retrieval. However, due to angular effects resulting from different sun-surface-sensor geometries, objects might appear differently depending on the position of an object within the field of view of a sensor. Recently, lightweight snapshot cameras have been introduced, which capture hyperspectral information in two spatial and one spectral dimension and can be mounted on unmanned aerial vehicles. This study investigates th...

  1. 16 CFR 501.1 - Camera film.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk still...

  2. Characterization results from several commercial soft X-ray streak cameras

    Science.gov (United States)

    Stradling, G. L.; Studebaker, J. K.; Cavailler, C.; Launspach, J.; Planes, J.

    The spatio-temporal performance of four soft X-ray streak cameras has been characterized. The objective in evaluating the performance capability of these instruments is to enable us to optimize experiment designs, to encourage quantitative analysis of streak data, and to educate the ultra-high-speed photography and photonics community about the X-ray detector performance which is available. These measurements have been made collaboratively over the space of two years at the Forge pulsed X-ray source at Los Alamos and at the Ketjak laser facility at CEA Limeil-Valenton. The X-ray pulse lengths used for these measurements at these facilities were 150 psec and 50 psec, respectively. The results are presented as dynamically measured modulation transfer functions. Limiting temporal resolution values were also calculated. Emphasis is placed upon shot-noise statistical limitations in the analysis of the data. Space-charge repulsion in the streak tube limits the peak flux at ultrashort experiment durations. This limit results in a reduction of total signal and a decrease in signal-to-noise ratio in the streak image. The four cameras perform well, with 20 lp/mm resolution discernible in data from the French C650X, the Hadland X-Chron 540 and the Hamamatsu C1936X streak cameras. The Kentech X-ray streak camera has lower modulation and does not resolve below 10 lp/mm, but has a longer photocathode.

  3. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.

  4. Principle of some gamma cameras (efficiencies, limitations, development)

    International Nuclear Information System (INIS)

    Allemand, R.; Bourdel, J.; Gariod, R.; Laval, M.; Levy, G.; Thomas, G.

    1975-01-01

    The quality of scintigraphic images is shown to depend on the efficiency of both the input collimator and the detector. Methods are described by which the quality of these images may be improved by adaptations to either the collimator (Fresnel zone camera, Compton effect camera) or the detector (Anger camera, image amplification camera). The Anger camera and image amplification camera are at present the two main instruments whereby acceptable spatial and energy resolutions may be obtained. A theoretical comparative study of their efficiencies is carried out, independently of their technological differences, after which the instruments designed or under study at the LETI are presented: these include the image amplification camera and the electron amplifier tube camera using semiconductor target (CdTe and HgI2) detectors [fr

  5. Streak camera recording of interferometer fringes

    International Nuclear Information System (INIS)

    Parker, N.L.; Chau, H.H.

    1977-01-01

    The use of an electronic high-speed camera in the streaking mode to record interference fringe motion from a velocity interferometer is discussed. Advantages of this method over the photomultiplier tube-oscilloscope approach are delineated. Performance testing and data for the electronic streak camera are discussed. The velocity profile of a mylar flyer accelerated by an electrically exploded bridge, and the jump-off velocity of metal targets struck by these mylar flyers are measured in the camera tests. Advantages of the streak camera include portability, low cost, ease of operation and maintenance, simplified interferometer optics, and rapid data analysis

  6. The fly's eye camera system

    Science.gov (United States)

    Mészáros, L.; Pál, A.; Csépány, G.; Jaskó, A.; Vida, K.; Oláh, K.; Mezö, G.

    2014-12-01

    We introduce the Fly's Eye Camera System, an all-sky monitoring device intended to perform time domain astronomy. This camera system design will provide complementary data sets for other synoptic sky surveys such as LSST or Pan-STARRS. The effective field of view is obtained by 19 cameras arranged in a spherical mosaic form. The individual cameras of the device stand on a hexapod mount that is fully capable of sidereal tracking for the subsequent exposures. This platform has many advantages. First of all, it requires only one type of moving component and does not include unique parts. Hence this design not only eliminates problems implied by unique elements, but the redundancy of the hexapod also allows smooth operation even if one or two of the legs are stuck. In addition, it can calibrate itself by observed stars independently of both the geographical location (including the northern and southern hemispheres) and the polar alignment of the full mount. All mechanical elements and electronics were designed at our institute, Konkoly Observatory. Currently, our instrument is in the testing phase with an operating hexapod and a reduced number of cameras.

  7. Learning Spatial Object Localization from Vision on a Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Jürgen Leitner

    2012-12-01

    Full Text Available We present a combined machine learning and computer vision approach for robots to localize objects. It allows our iCub humanoid to quickly learn to provide accurate 3D position estimates (in the centimetre range) of objects seen. Biologically inspired approaches, such as Artificial Neural Networks (ANN) and Genetic Programming (GP), are trained to provide these position estimates using the two cameras and the joint encoder readings. No camera calibration or explicit knowledge of the robot's kinematic model is needed. We find that ANN and GP are not only faster and of lower complexity than traditional techniques, but also learn without the need for extensive calibration procedures. In addition, the approach localizes objects robustly when they are placed in the robot's workspace at arbitrary positions, even while the robot is moving its torso, head and eyes.
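    As a minimal stand-in for the learned pixels-to-position mapping (the paper trains ANNs and GP; here a plain linear least-squares fit on synthetic data), the idea of learning 3D positions directly from stereo pixel coordinates can be sketched:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a linear "true" mapping from stereo pixel coordinates
# (uL, vL, uR, vR) to 3D position stands in for the real, nonlinear
# camera/kinematics relationship the ANN/GP models learn.
A_true = rng.normal(size=(3, 4))
pixels = rng.normal(size=(200, 4))      # stereo pixel coordinates
positions = pixels @ A_true.T           # corresponding 3D positions

# Fit by linear least squares: only (pixels -> position) examples are
# used, with no camera calibration and no kinematic model.
A_fit, *_ = np.linalg.lstsq(pixels, positions, rcond=None)
pred = pixels @ A_fit
```

    The appeal of the learning approach in the paper is exactly this: the mapping is recovered from examples alone, though a real robot needs a nonlinear model and joint-encoder inputs as well.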

  8. Motorcycle detection and counting using stereo camera, IR camera, and microphone array

    Science.gov (United States)

    Ling, Bo; Gibson, David R. P.; Middleton, Dan

    2013-03-01

    Detection, classification, and characterization are the key to enhancing motorcycle safety, motorcycle operations and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of auto fatalities. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including stereo camera, thermal IR camera and unidirectional microphone array. The IR thermal camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes that often show bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist who can be easily windowed out in the stereo disparity map. If the motorcyclist is detected through his or her 3D body recognition, motorcycle is detected. Microphones are used to detect motorcycles that often produce low frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize the interferences of background noises from sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has an excellent performance.

  9. Rats Can Acquire Conditional Fear of Faint Light Leaking through the Acrylic Resin Used to Mount Fiber Optic Cannulas

    Science.gov (United States)

    Eckmier, Adam; de Marcillac, Willy Daney; Maître, Agnès; Jay, Thérèse M.; Sanders, Matthew J.; Godsil, Bill P.

    2016-01-01

    Rodents are exquisitely sensitive to light and optogenetic behavioral experiments routinely introduce light-delivery materials into experimental situations, which raises the possibility that light could leak and influence behavioral performance. We examined whether rats respond to a faint diffusion of light, termed caplight, which emanated through…

  10. 2D virtual texture on 3D real object with coded structured light

    Science.gov (United States)

    Molinier, Thierry; Fofi, David; Salvi, Joaquim; Gorria, Patrick

    2008-02-01

    Augmented reality is used to improve color segmentation on the human body or on precious artifacts that must not be touched. We propose a technique to project a synthesized texture onto a real object without contact. Our technique can be used in medical or archaeological applications. By projecting a suitable set of light patterns onto the surface of a 3D real object and by capturing images with a camera, a large number of correspondences can be found and the 3D points can be reconstructed. We aim to determine these points of correspondence between cameras and projector from a scene without explicit points and normals. We then project an adjusted texture onto the real object surface. We propose a global and automatic method to virtually texture a 3D real object.
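    Coded structured light of the kind described above commonly uses Gray-code stripe patterns to establish camera-projector correspondences; a decoding sketch (assuming already binarized pattern images, most significant pattern first; this is a generic scheme, not necessarily the authors' coding) might look like:

```python
import numpy as np

def decode_gray(bits):
    """Decode a stack of binary Gray-code pattern observations into
    projector column indices, one per pixel.  `bits` has shape
    (n_patterns, H, W) with values 0/1, most significant pattern first."""
    binary = bits[0].astype(int)          # first Gray bit == first binary bit
    code = binary.copy()
    for b in bits[1:]:
        binary = binary ^ b.astype(int)   # Gray -> binary, bit by bit
        code = (code << 1) | binary       # accumulate the column index
    return code

# One pixel whose observed Gray code across three patterns is 1,1,1,
# i.e. Gray 111 -> binary 101 -> projector column 5.
bits = np.array([[[1]], [[1]], [[1]]])
col = decode_gray(bits)
```

    Each decoded column index pairs a camera pixel with a projector column, which is the correspondence set needed for triangulating the 3D points.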

  11. Robust Pose Estimation using the SwissRanger SR-3000 Camera

    DEFF Research Database (Denmark)

    Gudmundsson, Sigurjon Arni; Larsen, Rasmus; Ersbøll, Bjarne Kjær

    2007-01-01

    In this paper a robust method is presented to classify and estimate an object's pose from a real-time range image and a low-dimensional model. The model is made from a range image training set which is reduced dimensionally by a nonlinear manifold learning method named Local Linear Embedding (LLE). New range images are then projected to this model, giving the low-dimensional coordinates of the object pose in an efficient manner. The range images are acquired by a state-of-the-art SwissRanger SR-3000 camera, making the projection process work in real time.
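    The first step of LLE, computing reconstruction weights for each point from its neighbours, can be sketched as follows (a generic LLE weight solve, not the authors' implementation):

```python
import numpy as np

def lle_weights(X, i, neighbors, reg=1e-3):
    """Reconstruction weights for point i from its neighbours: the
    first step of Local Linear Embedding.  Minimizes
    ||x_i - sum_j w_j x_j||^2 subject to sum_j w_j = 1."""
    Z = X[neighbors] - X[i]                     # centre the neighbourhood
    G = Z @ Z.T                                 # local Gram matrix
    G = G + reg * np.trace(G) * np.eye(len(neighbors))  # regularize
    w = np.linalg.solve(G, np.ones(len(neighbors)))
    return w / w.sum()                          # enforce sum-to-one

# A point midway between its two neighbours gets equal weights, and the
# weighted neighbours reconstruct it exactly.
X = np.array([[0.0], [-1.0], [1.0]])
w = lle_weights(X, 0, [1, 2])
```

    LLE then finds low-dimensional coordinates that preserve these weights, which is the model new range images are projected onto.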

  12. Gamma camera with an original system of scintigraphic image printing incorporated

    International Nuclear Information System (INIS)

    Roux, G.

    A new gamma camera has been developed, using Anger's principle to localize the scintillations and incorporating the latest improvements, which give a standard of efficiency at present competitive for this kind of apparatus. In the general design of the system, special care was devoted to its ease of use and above all to the production of high-quality scintigraphic images; the recording of images obtained from the gamma camera poses a problem to which a solution is proposed. This consists in storing all the constituent data of an image in a cell matrix of format similar to the field of the object, the superficial information density of the image being represented by the cell contents. When the examination is finished, a special printer supplies a 35×43 cm² document in colour on paper, or in black and white on radiological film, at 2:1 or 1:1 magnification. The laws of contrast representation by the colours or shades of grey are chosen a posteriori according to the organ examined. Documents of the same quality as those so far supplied by a rectilinear scintigraph are then obtained with the gamma camera, which offers its own advantages in addition. The first images acquired in vivo with the whole system, gamma camera plus printer, are presented [fr

  13. A sampling ultra-high-speed streak camera based on the use of a unique photomultiplier

    International Nuclear Information System (INIS)

    Marode, Emmanuel

    An apparatus reproducing the ''streak'' mode of a high-speed camera is proposed for the case of a slit AB whose variations in luminosity are repetitive. A photomultiplier, analysing the object AB point by point, and a still camera, photographing a slit fixed on the oscilloscope screen parallel to the sweep direction, are placed on a mobile platform P. The movement of P assures a time-resolved analysis of AB. The resolution is of the order of 2×10⁻⁹ s, and can be improved [fr

  14. Accurate and cost-effective MTF measurement system for lens modules of digital cameras

    Science.gov (United States)

    Chang, Gao-Wei; Liao, Chia-Cheng; Yeh, Zong-Mu

    2007-01-01

    For many years, the widening use of digital imaging products, e.g., digital cameras, has attracted much attention in the consumer electronics market. However, it is important to measure and enhance the imaging performance of digital cameras compared to that of conventional cameras (with photographic film). For example, the effect of diffraction arising from the miniaturization of the optical modules tends to decrease the image resolution. As a figure of merit, the modulation transfer function (MTF) has been broadly employed to estimate image quality. Therefore, the objective of this paper is to design and implement an accurate and cost-effective MTF measurement system for digital cameras. Once the MTF of the sensor array is known, that of the optical module can then be obtained. In this approach, a spatial light modulator (SLM) is employed to modulate the spatial frequency of light emitted from the light source. The modulated light passing through the camera under test is consecutively detected by the sensors. The corresponding images formed by the camera are acquired by a computer and then processed by an algorithm for computing the MTF. Finally, an investigation of the measurement accuracy against various methods, such as bar-target and spread-function methods, shows that our approach gives quite satisfactory results.
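    One common route to an MTF estimate, mentioned above as the spread-function method, takes the normalized Fourier magnitude of a line spread function; the paper's SLM-based sinusoidal method differs in detail, so this is only a sketch of the quantity being measured:

```python
import numpy as np

def mtf_from_lsf(lsf):
    """MTF as the normalized magnitude of the DFT of a line spread
    function; index 0 corresponds to zero spatial frequency, so the
    curve starts at 1 and falls off as the optics blur fine detail."""
    spectrum = np.abs(np.fft.rfft(lsf))
    return spectrum / spectrum[0]

# An ideal (delta-like) line spread function has MTF = 1 at every
# spatial frequency; any real camera's curve lies below it.
lsf = np.zeros(64)
lsf[32] = 1.0
mtf = mtf_from_lsf(lsf)
```

    Dividing the measured system MTF by the separately measured sensor-array MTF isolates the optical module's contribution, which is the decomposition the abstract relies on.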

  15. Using the standard deviation of a region of interest in an image to estimate camera to emitter distance.

    Science.gov (United States)

    Cano-García, Angel E; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

    In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way to measure depth using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey levels in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, the IRED radiant intensity, and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining depth information.
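As a rough illustration of the calibrate-then-invert idea, the sketch below assumes a hypothetical model in which the grey-level standard deviation grows with exposure time and radiant intensity and falls off with the square of distance. The model form and the constant `k` are illustrative assumptions, not the paper's fitted expression.

```python
import numpy as np

def estimate_distance(sigma_measured, exposure, intensity, k):
    """Invert a hypothetical calibrated model
        sigma = k * exposure * intensity / d**2
    to recover the camera-to-emitter distance d. The inverse-square
    form and the constant k are illustrative, not the paper's fit."""
    return np.sqrt(k * exposure * intensity / sigma_measured)

# Synthetic round trip: generate sigma from the model, then invert it.
k, t, intensity, d_true = 2.0, 0.01, 5.0, 1.5
sigma = k * t * intensity / d_true**2
d_est = estimate_distance(sigma, t, intensity, k)
```

In practice `k` (and the exponents) would come from the calibration images taken under controlled conditions, as the abstract describes.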

  16. Using the Standard Deviation of a Region of Interest in an Image to Estimate Camera to Emitter Distance

    Directory of Open Access Journals (Sweden)

    Felipe Espinoza

    2012-05-01

    Full Text Available In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way to measure depth using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey levels in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, the IRED radiant intensity, and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining depth information.

  17. Imaging capabilities of germanium gamma cameras

    International Nuclear Information System (INIS)

    Steidley, J.W.

    1977-01-01

    Quantitative methods of analysis based on the use of a computer simulation were developed and used to investigate the imaging capabilities of germanium gamma cameras. The main advantage of the computer simulation is that the inherent unknowns of clinical imaging procedures are removed from the investigation. The effects of patient scattered radiation were incorporated using a mathematical LSF model which was empirically developed and experimentally verified. Image modifying effects of patient motion, spatial distortions, and count rate capabilities were also included in the model. Spatial domain and frequency domain modeling techniques were developed and used in the simulation as required. The imaging capabilities of gamma cameras were assessed using low contrast lesion source distributions. The results showed that an improvement in energy resolution from 10% to 2% offers significant clinical advantages in terms of improved contrast, increased detectability, and reduced patient dose. The improvements are of greatest significance for small lesions at low contrast. The results of the computer simulation were also used to compare a design of a hypothetical germanium gamma camera with a state-of-the-art scintillation camera. The computer model performed a parametric analysis of the interrelated effects of inherent and technological limitations of gamma camera imaging. In particular, the trade-off between collimator resolution and collimator efficiency for detection of a given low contrast lesion was directly addressed. This trade-off is an inherent limitation of both gamma cameras. The image degrading effects of patient motion, camera spatial distortions, and low count rate were shown to modify the improvements due to better energy resolution. Thus, based on this research, the continued development of germanium cameras to the point of clinical demonstration is recommended

  18. A Reaction-Diffusion-Based Coding Rate Control Mechanism for Camera Sensor Networks

    Directory of Open Access Journals (Sweden)

    Naoki Wakamiya

    2010-08-01

    Full Text Available A wireless camera sensor network is useful for surveillance and monitoring due to its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., the reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.

  19. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    Science.gov (United States)

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring due to its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., the reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.
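The coupling between a node's coding rate and the location of targets can be illustrated with a toy one-dimensional reaction-diffusion update: activator concentration rises where a target is observed, decays over time, and diffuses to neighbouring nodes, so nearby cameras also raise their rates. This is a sketch of the general mechanism, not the paper's specific model or parameters.

```python
import numpy as np

def diffuse_rates(stimulus, D=0.2, decay=0.1, steps=200, dt=0.1):
    """Toy reaction-diffusion update on a 1-D ring of camera nodes:
    du/dt = D * laplacian(u) - decay * u + stimulus.
    The coding rate of each node would then be set proportional
    to its activator level u."""
    u = np.zeros_like(stimulus, dtype=float)
    for _ in range(steps):
        lap = np.roll(u, 1) + np.roll(u, -1) - 2 * u  # discrete Laplacian
        u += dt * (D * lap - decay * u + stimulus)
    return u

stim = np.zeros(20)
stim[10] = 1.0             # a target seen by node 10
u = diffuse_rates(stim)    # peaks at node 10, tapers off with distance
```

The resulting spatial profile is highest at the observing node and decays with distance, mirroring the paper's idea of concentrating bandwidth around target objects.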

  20. Low cost thermal camera for use in preclinical detection of diabetic peripheral neuropathy in primary care setting

    Science.gov (United States)

    Joshi, V.; Manivannan, N.; Jarry, Z.; Carmichael, J.; Vahtel, M.; Zamora, G.; Calder, C.; Simon, J.; Burge, M.; Soliz, P.

    2018-02-01

    Diabetic peripheral neuropathy (DPN) accounts for around 73,000 lower-limb amputations annually in the US among patients with diabetes, so early detection of DPN is critical. Current clinical methods for diagnosing DPN are subjective and effective only at later stages. Until recently, thermal cameras used for medical imaging have been expensive and hence prohibitive to install in a primary care setting. The objective of this study is to compare results from a low-cost thermal camera with a high-end thermal camera used in screening for DPN. Thermal imaging has demonstrated changes in microvascular function that correlate with the nerve function affected by DPN. The limitations of low-cost cameras for DPN imaging are lower resolution (active pixels), frame rate, and thermal sensitivity. We integrated two FLIR Lepton modules (80x60 active pixels, 50° HFOV) into the imaging system. Subjects (aged 35-76) were recruited. The difference in the temperature measurements between cameras was calculated for each subject, and the results show that the difference between the temperature measurements of the two cameras (mean difference=0.4, p-value=0.2) is not statistically significant. We conclude that the low-cost thermal camera system shows potential for use in detecting early signs of DPN in under-served and rural clinics.
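The reported comparison (a mean difference and a p-value on per-subject measurements) amounts to a paired test on the two cameras' temperature readings. A minimal sketch with made-up readings (the study's actual data and test choice are not reproduced here):

```python
import numpy as np

def paired_t(a, b):
    """Paired t-test statistic for per-subject readings from two
    cameras: mean difference and t = mean(d) / (sd(d) / sqrt(n))."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return d.mean(), t

# Illustrative foot-temperature readings (°C) for five subjects:
high_end = np.array([30.1, 31.4, 29.8, 30.6, 31.0])
low_cost = np.array([30.4, 31.9, 30.1, 31.1, 31.4])
mean_diff, t_stat = paired_t(low_cost, high_end)
```

With real data, the t statistic would be compared against the t distribution with n-1 degrees of freedom to obtain the p-value quoted in the abstract.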

  1. Revealing a comet-like shape of the faint periphery of the nearby galaxy M 32

    Science.gov (United States)

    Georgiev, Ts. B.

    2016-02-01

    We performed BVRI photometry of the galaxy M 32, building images and isophote maps in magnitudes and in color indexes. While searching for the faint thick disk of M 32, we applied median filtering with an aperture of 7.3 arcmin to detach the residual image of M 32 and its periphery from the surrounding magnitude or color background. The residual images in all photometric systems show that the periphery of M 32 possesses a comet-like shape with a tail oriented to the SSE, in a direction opposite to that of M 110. The images calibrated in color indexes (b - v) and (b - v)+(r - i) show that the tail is redder than the local median background. The residual images in color indexes show that the red tail broadens and curves towards the S and SW. Simultaneously, the brightest part of M 32 is bounded on the NW-NE-SE sides by a sickle-like formation with a significantly lower red color index. Generally, we do not find a faint thick disk of M 32. However, the comet-like shape of the periphery of M 32, especially as a formation with an increased red color index, creates the impression that the satellite M 32 is overtaking the Andromeda galaxy. The redshifts show that the relative velocity of M 32 and the Andromeda galaxy is about 100 km/s.

  2. Stereo Pinhole Camera: Assembly and experimental activities

    Directory of Open Access Journals (Sweden)

    Gilmário Barbosa Santos

    2015-05-01

    Full Text Available This work describes the assembly of a stereo pinhole camera for capturing stereo pairs of images and proposes experimental activities with it. A pinhole camera can be as sophisticated as desired, or simple enough to be handcrafted from practically recyclable materials. This paper describes the practical use of the pinhole camera throughout history and today. Aspects of the optics and geometry involved in building the stereo pinhole camera are presented with illustrations. Furthermore, experiments are proposed using the images obtained with the camera: 3D visualization through a pair of anaglyph glasses, and the estimation of relative depth by triangulation.
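For a rectified stereo pair, the depth-by-triangulation experiment reduces to the textbook relation Z = f·B/d (focal length f in pixels, baseline B, disparity d in pixels). A minimal sketch with illustrative numbers, not values from the paper:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth by triangulation for a rectified stereo pair:
    Z = f * B / d, with focal length f in pixels, baseline B in
    metres, and disparity d in pixels (standard textbook relation)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# A point shifting 40 px between views, f = 800 px, B = 6.5 cm:
z = depth_from_disparity(800, 0.065, 40)  # depth in metres
```

The same relation explains why the estimate is *relative* depth: absolute scale requires knowing the baseline between the two pinholes.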

  3. A versatile calibration procedure for portable coded aperture gamma cameras and RGB-D sensors

    Science.gov (United States)

    Paradiso, V.; Crivellaro, A.; Amgarou, K.; de Lanaute, N. Blanc; Fua, P.; Liénard, E.

    2018-04-01

    The present paper proposes a versatile procedure for the geometrical calibration of coded aperture gamma cameras and RGB-D depth sensors, using only one radioactive point source and a simple experimental set-up. Calibration data is then used for accurately aligning radiation images retrieved by means of the γ-camera with the respective depth images computed with the RGB-D sensor. The system resulting from such a combination is thus able to retrieve, automatically, the distance of radioactive hotspots by means of pixel-wise mapping between gamma and depth images. This procedure is of great interest for a wide number of applications, ranging from precise automatic estimation of the shape and distance of radioactive objects to Augmented Reality systems. Incidentally, the corresponding results validated the choice of a perspective design model for a coded aperture γ-camera.

  4. Aerial surveillance based on hierarchical object classification for ground target detection

    Science.gov (United States)

    Vázquez-Cervantes, Alberto; García-Huerta, Juan-Manuel; Hernández-Díaz, Teresa; Soto-Cajiga, J. A.; Jiménez-Hernández, Hugo

    2015-03-01

    Unmanned aerial vehicles have become important in surveillance applications due to their flexibility and ability to inspect and move between different regions of interest. The instrumentation and autonomy of these vehicles have increased; i.e., the camera sensor is now integrated. Mounted cameras allow the flexibility to monitor several regions of interest, displacing and changing the camera view. A common task performed by this kind of vehicle is object localization and tracking. This work presents a novel hierarchical algorithm to detect and locate objects. The algorithm is based on a detection-by-example approach; that is, the target evidence is provided at the beginning of the vehicle's route. Afterwards, the vehicle inspects the scenario, detecting all similar objects through UTM-GPS coordinate references. The detection process consists of sampling information from the target object. The samples are encoded in a hierarchical tree with different sampling densities. The coding space corresponds to a high-dimensional binary space. Properties such as independence and associative operators are defined in this space to construct a relation between the target object and a set of selected features. Different sampling densities are used to discriminate from general to particular features of the target. The hierarchy is used as a way to adapt the complexity of the algorithm to the optimized battery duty cycle of the aerial device. Finally, this approach is tested in several outdoor scenarios, proving that the hierarchical algorithm works efficiently under several conditions.

  5. Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    Science.gov (United States)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras, and marked reference points are rarely available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos from two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between

  6. Range camera on conveyor belts: estimating size distribution and systematic errors due to occlusion

    Science.gov (United States)

    Blomquist, Mats; Wernersson, Ake V.

    1999-11-01

    When range cameras are used for analyzing irregular material on a conveyor belt there will be complications like missing segments caused by occlusion. Also, a number of range discontinuities will be present. In a framework based on stochastic geometry, conditions are found for the cases when range discontinuities take place. The test objects in this paper are pellets for the steel industry. An illuminating laser plane gives range discontinuities at the edges of each individual object. These discontinuities are used to detect and measure the chord created by the intersection of the laser plane and the object. From the measured chords we derive the average diameter and its variance. An improved method is to use a pair of parallel illuminating light planes to extract two chords. The estimation error for this method is no larger than the natural shape fluctuations (the difference in diameter) of the pellets. The laser-camera optronics is sensitive enough both for material on a conveyor belt and for free-falling material leaving the conveyor.
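The chord-to-diameter step can be illustrated with the classical stereology relation for a circle cut by a line at a uniformly random offset: the expected chord length is (π/4)·D, so D ≈ 4·mean(chord)/π. This is a textbook sketch of the idea, not the paper's exact estimator or its variance analysis.

```python
import numpy as np

def diameter_from_chords(chords):
    """Estimate mean pellet diameter from laser-plane chord lengths,
    using E[chord] = (pi/4) * D for a circle sectioned at a uniform
    random offset (classical stereology relation)."""
    return 4.0 * np.mean(chords) / np.pi

# Simulated chords of a D = 10 mm pellet at uniform random offsets h:
rng = np.random.default_rng(0)
R = 5.0                                   # true radius, mm
h = rng.uniform(0, R, 100_000)            # offset of the laser plane
chords = 2.0 * np.sqrt(R**2 - h**2)       # chord length at offset h
d_est = diameter_from_chords(chords)      # close to 10 mm
```

The spread of the simulated chords also illustrates why the paper reports a variance alongside the average diameter: even for perfectly spherical pellets, random sectioning produces widely varying chords.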

  7. A study on the performance evaluation of small gamma camera collimators using detective quantum efficiency

    International Nuclear Information System (INIS)

    Jeon, Ho Sang

    2008-02-01

    The Anger-type gamma camera and a novel marker compound using Tc-99m were first introduced in 1963. Gamma camera systems have since been improved and applied to various fields, for example, medical, industrial, and environmental. A gamma camera is mainly composed of a collimator, a detector, and a signal processor, with the radiation source being the imaged object. The collimator is an essential component of the gamma camera system because the imaging performance of the system depends mainly on it. The performance evaluation of collimators can be done using evaluating factors. In this study, novel factors for gamma camera evaluation are suggested. The evaluating factors established by NEMA are FWHM, sensitivity, and uniformity. Despite their usefulness, they have some limitations. Firstly, performance evaluation by those factors gives only insensitive and indirect results. Secondly, the evaluation of noise properties is ambiguous. Thirdly, there is no synthetic evaluation of system performance. Simulation with a Monte Carlo code and experiments with a small gamma camera were performed simultaneously to verify the novel evaluating factors. For the evaluation of spatial resolution, the MTF was applied instead of the FWHM; the MTF values present an excellent linear relationship with the FWHM values. The NNPS was applied instead of uniformity and sensitivity for the evaluation of noise fluctuation; the NNPS values also present a linear relationship with sensitivity and uniformity. Moreover, these novel factors give quantities as functions of spatial frequency. Finally, the DQE values were calculated from the MTF, the NNPS, and the input SNR. The DQE effectively provides a synthetic evaluation of gamma camera performance. We conclude that the MTF, NNPS, and DQE can serve as novel evaluating factors for gamma camera systems, with the DQE as the new factor for synthetic evaluation.
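Combining the three factors follows the standard imaging-science definition DQE(f) = MTF(f)² / (SNR_in² · NNPS(f)), where SNR_in² equals the incident photon fluence for a Poisson source. A minimal sketch (normalisation conventions in the thesis may differ):

```python
import numpy as np

def dqe(mtf, nnps, snr_in_sq):
    """Detective quantum efficiency from the modulation transfer
    function and the normalised noise power spectrum:
        DQE(f) = MTF(f)**2 / (SNR_in**2 * NNPS(f)),
    with SNR_in**2 the incident photon fluence (Poisson input)."""
    mtf = np.asarray(mtf, dtype=float)
    nnps = np.asarray(nnps, dtype=float)
    return mtf**2 / (snr_in_sq * nnps)

# Sanity check: an ideal quantum-limited detector (MTF = 1,
# NNPS = 1/fluence at all frequencies) has DQE = 1 everywhere.
fluence = 1e4
d = dqe(np.ones(5), np.full(5, 1.0 / fluence), fluence)
```

A real system's DQE falls below 1 and decreases with spatial frequency, which is exactly the "synthetic" view of resolution and noise together that the thesis argues for.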

  8. The Eye of the Camera

    NARCIS (Netherlands)

    van Rompay, Thomas Johannes Lucas; Vonk, Dorette J.; Fransen, M.L.

    2009-01-01

    This study addresses the effects of security cameras on prosocial behavior. Results from previous studies indicate that the presence of others can trigger helping behavior, arising from the need for approval of others. Extending these findings, the authors propose that security cameras can likewise

  9. Friend or foe: exploiting sensor failures for transparent object localization and classification

    Science.gov (United States)

    Seib, Viktor; Barthen, Andreas; Marohn, Philipp; Paulus, Dietrich

    2017-02-01

    In this work we address the problem of detecting and recognizing transparent objects using depth images from an RGB-D camera. This type of sensor usually precludes the localization of transparent objects, since its structured light pattern is not reflected by transparent surfaces. Instead, transparent surfaces often appear as undefined values in the resulting images. However, these erroneous sensor readings form characteristic patterns that we exploit in the presented approach. The sensor data is fed into a deep convolutional neural network that is trained to classify and localize drinking glasses. We evaluate our approach with four different types of transparent objects. To the best of our knowledge, no datasets offering depth images of transparent objects exist so far. With this work we aim at closing this gap by providing our data to the public.

  10. Poster: A Software-Defined Multi-Camera Network

    OpenAIRE

    Chen, Po-Yen; Chen, Chien; Selvaraj, Parthiban; Claesen, Luc

    2016-01-01

    The widespread popularity of OpenFlow has led to a significant increase in the number of applications developed in Software-Defined Networking (SDN). In this work, we propose the architecture of a Software-Defined Multi-Camera Network consisting of small, flexible, economic, and programmable cameras which combine the functions of the processor, switch, and camera. A Software-Defined Multi-Camera Network can effectively reduce the overall network bandwidth and reduce a large amount of the Capex a...

  11. Gamma camera performance: technical assessment protocol

    Energy Technology Data Exchange (ETDEWEB)

    Bolster, A.A. [West Glasgow Hospitals NHS Trust, London (United Kingdom). Dept. of Clinical Physics; Waddington, W.A. [University College London Hospitals NHS Trust, London (United Kingdom). Inst. of Nuclear Medicine

    1996-12-31

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author).

  12. XMM-Newton and Swift spectroscopy of the newly discovered very faint X-ray transient IGR J17494-3030

    NARCIS (Netherlands)

    Armas Padilla, M.; Wijnands, R.; Degenaar, N.

    2013-01-01

    A growing group of low-mass X-ray binaries are found to be accreting at very faint X-ray luminosities of <10^36 erg s^-1 (2-10 keV). One such system is the new X-ray transient IGR J17494-3030. We present Swift and XMM-Newton observations obtained during its 2012 discovery outburst. The Swift

  13. Infrared-faint radio sources remain undetected at far-infrared wavelengths. Deep photometric observations using the Herschel Space Observatory

    Science.gov (United States)

    Herzog, A.; Norris, R. P.; Middelberg, E.; Spitler, L. R.; Leipski, C.; Parker, Q. A.

    2015-08-01

    Context. Showing 1.4 GHz flux densities in the range of a few to a few tens of mJy, infrared-faint radio sources (IFRS) are a type of galaxy characterised by faint or absent near-infrared counterparts and consequently extreme radio-to-infrared flux density ratios up to several thousand. Recent studies showed that IFRS are radio-loud active galactic nuclei (AGNs) at redshifts ≳2, potentially linked to high-redshift radio galaxies (HzRGs). Aims: This work explores the far-infrared emission of IFRS, providing crucial information on the star forming and AGN activity of IFRS. Furthermore, the data enable examining the putative relationship between IFRS and HzRGs and testing whether IFRS are more distant or fainter siblings of these massive galaxies. Methods: A sample of six IFRS was observed with the Herschel Space Observatory between 100 μm and 500 μm. Using these results, we constrained the nature of IFRS by modelling their broad-band spectral energy distribution (SED). Furthermore, we set an upper limit on their infrared SED and decomposed their emission into contributions from an AGN and from star forming activity. Results: All six observed IFRS were undetected in all five Herschel far-infrared channels (stacking limits: σ = 0.74 mJy at 100 μm, σ = 3.45 mJy at 500 μm). Based on our SED modelling, we ruled out the following objects to explain the photometric characteristics of IFRS: (a) known radio-loud quasars and compact steep-spectrum sources at any redshift; (b) starburst galaxies with and without an AGN and Seyfert galaxies at any redshift, even if the templates were modified; and (c) known HzRGs at z ≲ 10.5. We find that the IFRS analysed in this work can only be explained by objects that fulfil the selection criteria of HzRGs. More precisely, IFRS could be (a) known HzRGs at very high redshifts (z ≳ 10.5); (b) low-luminosity siblings of HzRGs with additional dust obscuration at lower redshifts; (c) scaled or unscaled versions of Cygnus A at any

  14. The Light Field Attachment: Turning a DSLR into a Light Field Camera Using a Low Budget Camera Ring

    KAUST Repository

    Wang, Yuwang

    2016-11-16

    We propose a concept for a lens attachment that turns a standard DSLR camera and lens into a light field camera. The attachment consists of 8 low-resolution, low-quality side cameras arranged around the central high-quality SLR lens. Unlike most existing light field camera architectures, this design provides a high-quality 2D image mode, while simultaneously enabling a new high-quality light field mode with a large camera baseline but little added weight, cost, or bulk compared with the base DSLR camera. From an algorithmic point of view, the high-quality light field mode is made possible by a new light field super-resolution method that first improves the spatial resolution and image quality of the side cameras and then interpolates additional views as needed. At the heart of this process is a super-resolution method that we call iterative Patch- And Depth-based Synthesis (iPADS), which combines patch-based and depth-based synthesis in a novel fashion. Experimental results obtained for both real captured data and synthetic data confirm that our method achieves substantial improvements in super-resolution for side-view images as well as the high-quality and view-coherent rendering of dense and high-resolution light fields.

  15. FAINT NEAR-ULTRAVIOLET/FAR-ULTRAVIOLET STANDARDS FROM SWIFT/UVOT, GALEX, AND SDSS PHOTOMETRY

    International Nuclear Information System (INIS)

    Siegel, Michael H.; Hoversten, Erik A.; Roming, Peter W. A.; Brown, Peter

    2010-01-01

    At present, the precision of deep ultraviolet photometry is somewhat limited by the dearth of faint ultraviolet standard stars. In an effort to improve this situation, we present a uniform catalog of 11 new faint (u ∼ 17) ultraviolet standard stars. High-precision photometry of these stars has been taken from the Sloan Digital Sky Survey and Galaxy Evolution Explorer archives and combined with new data from the Swift Ultraviolet Optical Telescope to provide precise photometric measures extending from the near-infrared to the far-ultraviolet. These stars were chosen because they are known to be hot (20,000 K < T_eff < 50,000 K) DA white dwarfs with published Sloan spectra that should be photometrically stable. This careful selection allows us to compare the combined photometry and Sloan spectroscopy to models of pure hydrogen atmospheres to both constrain the underlying properties of the white dwarfs and test the ability of white dwarf models to predict the photometric measures. We find that the photometry provides good constraints on white dwarf temperatures, which demonstrates the ability of Swift/UVOT to investigate the properties of hot luminous stars. We further find that the models reproduce the photometric measures in all 11 passbands to within their systematic uncertainties. Within the limits of our photometry, we find the standard stars to be photometrically stable. This success indicates that the models can be used to calibrate additional filters to our standard system, permitting easier comparison of photometry from heterogeneous sources. The largest source of uncertainty in the model fitting is the uncertainty in the foreground reddening curve, a problem that is especially acute in the UV.

  16. Comparison of myocardial perfusion imaging between the new high-speed gamma camera and the standard anger camera

    International Nuclear Information System (INIS)

    Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi

    2013-01-01

    Cadmium-zinc-telluride (CZT) solid-state detectors have been recently introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of the CZT high-speed gamma camera (Discovery NM 530c) with that of the standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest 99m Tc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera with a 15-min scan time each for stress and for rest. All scans were immediately repeated on a CZT camera with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong within narrow Bland-Altman limits of agreement. Using list mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans, and in 100% of the ≥4-min scans. For CZT scans at rest, similarly, image quality was rated as good or excellent in 94% of the 1-min scans, and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, which is equivalent to standard myocardial single-photon emission computed tomography, despite a short scan time of less than half of the standard time. (author)

  17. High resolution RGB color line scan camera

    Science.gov (United States)

    Lynch, Theodore E.; Huettig, Fred

    1998-04-01

    A color line scan camera family which is available with either 6000, 8000 or 10000 pixels/color channel, utilizes off-the-shelf lenses, interfaces with currently available frame grabbers, includes on-board pixel-by-pixel offset correction, and is configurable and controllable via an RS232 serial port for computer-controlled or stand-alone operation is described in this paper. This line scan camera is based on an available 8000-element monochrome line scan camera designed by AOA for OEM use. The new color version includes improvements such as better packaging and additional user features which make the camera easier to use. The heart of the camera is a tri-linear CCD sensor with on-chip color balancing for maximum accuracy and pinned photodiodes for low-lag response. Each color channel is digitized to 12 bits and all three channels are multiplexed together so that the resulting camera output video is either a 12- or 8-bit data stream at a rate of up to 24 Megapixels/sec. Conversion from 12 to 8 bit, or user-defined gamma, is accomplished by on-board user-defined video look-up tables. The camera has two user-selectable operating modes: a low-speed, high-sensitivity mode or a high-speed, reduced-sensitivity mode. The intended uses of the camera include industrial inspection, digital archiving, document scanning, and graphic arts applications.
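The on-board 12-to-8-bit conversion via a user-defined gamma look-up table can be sketched generically; this is an illustration of the LUT technique, not the vendor's actual table or gamma value.

```python
import numpy as np

def gamma_lut_12to8(gamma=1.0 / 2.2):
    """Build a 4096-entry look-up table mapping 12-bit sensor codes
    to 8-bit output with a user-defined gamma; a single indexed
    lookup then replaces a per-pixel power computation."""
    codes = np.arange(4096)
    normalised = codes / 4095.0
    return np.round(255.0 * normalised**gamma).astype(np.uint8)

lut = gamma_lut_12to8()
pixel_8bit = lut[2048]  # map a mid-scale 12-bit pixel in O(1)
```

Because the table is only 4096 entries, it is cheap to hold in camera memory and to regenerate whenever the user uploads a new gamma, which is what makes the "user-defined video look-up table" approach practical at 24 Megapixels/sec.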

  18. An Open Standard for Camera Trap Data

    Directory of Open Access Journals (Sweden)

    Tavis Forrester

    2016-12-01

    Camera traps that capture photos of animals are a valuable tool for monitoring biodiversity. The use of camera traps is rapidly increasing and there is an urgent need for standardization to facilitate data management, reporting and data sharing. Here we offer the Camera Trap Metadata Standard as an open data standard for storing and sharing camera trap data, developed by experts from a variety of organizations. The standard captures the information necessary to share data between projects and offers a foundation for collecting the more detailed data needed for advanced analysis. The data standard captures information about study design, the type of camera used, and the location and species names for all detections in a standardized way. This information is critical for accurately assessing results from individual camera trapping projects and for combining data from multiple studies for meta-analysis. This data standard is an important step in aligning camera trapping surveys with best practices in data-intensive science. Ecology is moving rapidly into the realm of big data, and central data repositories for camera trap data are emerging and becoming a critical tool. This data standard will help researchers standardize data terms, align past data to new repositories, and provide a framework for utilizing data across repositories and research projects to advance animal ecology and conservation.
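    As a rough illustration of the kind of information described above (study design, camera, location, detections), a shareable record might look like the following. The field names are invented for this sketch and are not the standard's actual schema.

```python
import json

# Illustrative camera-trap record; field names are hypothetical, not the
# actual Camera Trap Metadata Standard schema.
record = {
    "project": {"name": "Forest Mammal Survey", "design": "systematic grid"},
    "deployment": {
        "camera_model": "ExampleCam X1",
        "latitude": -1.2921,
        "longitude": 36.8219,
        "start": "2016-01-05",
        "end": "2016-03-05",
    },
    "detections": [
        {"timestamp": "2016-02-11T03:14:00Z", "species": "Panthera pardus", "count": 1}
    ],
}
serialized = json.dumps(record)  # ready to exchange between projects/repositories
```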

  19. The use of a portable gamma camera for preoperative lymphatic mapping: a comparison with a conventional gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Vidal-Sicart, Sergi; Paredes, Pilar [Hospital Clinic Barcelona, Nuclear Medicine Department (CDIC), Barcelona (Spain); Institut d' Investigacio Biomedica Agusti Pi Sunyer (IDIBAPS), Barcelona (Spain); Vermeeren, Lenka; Valdes-Olmos, Renato A. [Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital (NKI-AVL), Nuclear Medicine Department, Amsterdam (Netherlands); Sola, Oriol [Hospital Clinic Barcelona, Nuclear Medicine Department (CDIC), Barcelona (Spain)

    2011-04-15

    Planar lymphoscintigraphy is routinely used for preoperative sentinel node visualization, but large gamma cameras are not always available. We evaluated the reproducibility of lymphatic mapping with a smaller and portable gamma camera. In two centres, 52 patients with breast cancer received preoperative lymphoscintigraphy with a conventional gamma camera with a field of view of 40 x 40 cm. Static anterior and lateral images were acquired at 15 min, 2 h and 4 h after injection of the radiotracer (99mTc-nanocolloid). At 2 h after injection, anterior and oblique images were also acquired with a portable gamma camera (Sentinella, Oncovision) positioned to obtain a field of view of 20 x 20 cm. Visualization of lymphatic drainage on conventional images and images with the portable device were compared for the number of nodes depicted, their intensity and the localization of sentinel nodes. The images acquired with the conventional gamma camera depicted sentinel nodes in 94%, while the portable gamma camera showed drainage in 73%. There was, however, no significant difference in visualization between the two devices when a lead shield was used to mask the injection area in 43 patients (95 vs 88%, p = 0.25). Second-echelon nodes were visualized in 62% of the patients with the conventional gamma camera and in 29% of the cases with the portable gamma camera. Preoperative imaging with a portable gamma camera fitted with a pinhole collimator to obtain a field of view of 20 x 20 cm is able to depict sentinel nodes in 88% of the cases if a lead shield is used to mask the injection site. This device may be useful in centres without the possibility to perform a preoperative image. (orig.)

  1. A new Herbig-Haro object in the Gum nebula and its associated star

    International Nuclear Information System (INIS)

    Graham, J.A.

    1991-01-01

    Photographic and spectroscopic observations are presented of some faint nebulosity which is associated with the strong IRAS point source 08211 - 4158. Two components are observed. One relatively compact and knotty region has a purely gaseous spectrum characteristic of a low-excitation Herbig-Haro object, while another area shows a spectrum with strong continuum radiation and superposed emission lines which suggest that it is scattering light from an embedded young star. Radial-velocity measurements show that this star is at rest with respect to its surroundings, while the Herbig-Haro object has a mean velocity of -38 km/s with respect to its local standard of rest. The evidence favors, but does not conclusively show, that source 1 in the area, identified by Campbell and Persson (1988), marks the position of the embedded star which powers the Herbig-Haro object. 13 refs

  2. Video Sharing System Based on Wi-Fi Camera

    OpenAIRE

    Qidi Lin; Hewei Yu; Jinbin Huang; Weile Liang

    2015-01-01

    This paper introduces a video sharing platform based on WiFi, which consists of a camera, a mobile phone and a PC server. The platform receives the wireless signal from the camera and displays the live video captured by the camera on the mobile phone. In addition, it is able to send commands to the camera and control the camera's holder to rotate. The platform can be applied to interactive teaching, monitoring of dangerous areas, and so on. Testing results show that the platform can share ...

  3. EDICAM (Event Detection Intelligent Camera)

    Energy Technology Data Exchange (ETDEWEB)

    Zoletnik, S. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Szabolics, T., E-mail: szabolics.tamas@wigner.mta.hu [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Kocsis, G.; Szepesi, T.; Dunai, D. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary)

    2013-10-15

    Highlights: ► We present EDICAM's hardware modules. ► We present EDICAM's main design concepts. ► This paper will describe EDICAM firmware architecture. ► Operation principles description. ► Further developments. -- Abstract: A new type of fast framing camera has been developed for fusion applications by the Wigner Research Centre for Physics during the last few years. A new concept was designed for intelligent event-driven imaging which is capable of focusing image readout on Regions of Interest (ROIs) where and when predefined events occur. At present these events mean intensity changes and external triggers, but in the future more sophisticated methods might also be defined. The camera provides a 444 Hz frame rate at the full resolution of 1280 × 1024 pixels, but monitoring of smaller ROIs can be done in the 1–116 kHz range even during exposure of the full image. Keeping space limitations and the harsh environment in mind, the camera is divided into a small Sensor Module and a processing card interconnected by a fast 10 Gbit optical link. This camera hardware has been used for passive monitoring of the plasma in different devices, for example at ASDEX Upgrade and COMPASS, with the first version of its firmware. The new firmware and software package is now available and ready for testing the new event processing features. This paper will present the operation principle and features of the Event Detection Intelligent Camera (EDICAM). The device is intended to be the central element in the 10-camera monitoring system of the Wendelstein 7-X stellarator.
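    The intensity-change events that trigger ROI readout can be illustrated with a minimal sketch; this is a simplified analogue of the idea, not EDICAM's firmware logic.

```python
import numpy as np

def roi_event(frames, roi, threshold):
    """Return the index of the first frame whose ROI mean intensity jumps by
    more than `threshold` relative to the previous frame, else None."""
    r0, r1, c0, c1 = roi
    prev = None
    for i, f in enumerate(frames):
        m = f[r0:r1, c0:c1].mean()
        if prev is not None and abs(m - prev) > threshold:
            return i
        prev = m
    return None

# Synthetic sequence: intensity inside the ROI jumps at frame 3
frames = [np.zeros((8, 8)) for _ in range(5)]
for f in frames[3:]:
    f[2:4, 2:4] = 10.0
event = roi_event(frames, roi=(2, 4, 2, 4), threshold=5.0)
```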

  4. Minimum Delay Moving Object Detection

    KAUST Repository

    Lao, Dong

    2017-05-14

    This thesis presents a general framework and method for detection of an object in a video based on apparent motion. The object moves, at some unknown time, differently than the “background” motion, which can be induced by camera motion. The goal of the proposed method is to detect and segment the object, in an online manner, as soon as it moves. Since motion estimation can be unreliable between frames, more than two frames are needed to reliably detect the object. Observing more frames before declaring a detection may lead to a more accurate detection and segmentation, since more motion may be observed, leading to a stronger motion cue. However, this also leads to greater delay. The proposed method is designed to detect the object(s) with minimum delay, i.e., the fewest frames after the object moves, while constraining the false alarms, defined as declarations of detection before the object moves or incorrect or inaccurate segmentation at the detection time. Experiments on a new extensive dataset for moving object detection show that our method achieves less delay for all false alarm constraints than existing state-of-the-art methods.
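    The delay/false-alarm tradeoff described above is the classic quickest-detection setting. A scalar analogue (not the thesis's segmentation method) is the CUSUM rule, which declares a change as soon as a cumulative likelihood-ratio statistic crosses a threshold; raising the threshold lowers the false-alarm rate at the cost of a longer detection delay.

```python
import numpy as np

def cusum(samples, pre_mean, post_mean, sigma, threshold):
    """One-sided CUSUM: declare a change as soon as the cumulative
    log-likelihood-ratio statistic exceeds `threshold`."""
    s = 0.0
    for t, x in enumerate(samples):
        # log-likelihood ratio of post-change vs pre-change Gaussian
        llr = (post_mean - pre_mean) / sigma**2 * (x - (pre_mean + post_mean) / 2)
        s = max(0.0, s + llr)
        if s > threshold:
            return t          # detection time; delay = t - true change time
    return None

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])  # change at t=50
t_detect = cusum(x, pre_mean=0.0, post_mean=3.0, sigma=1.0, threshold=8.0)
```

    With the large mean shift used here, detection typically occurs within a few samples of the true change point.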

  5. APPLYING CCD CAMERAS IN STEREO PANORAMA SYSTEMS FOR 3D ENVIRONMENT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    A. Sh. Amini

    2012-07-01

    Proper reconstruction of 3D environments is nowadays needed by many organizations and applications. In addition to conventional methods, the use of stereo panoramas is an appropriate technique due to its simplicity, low cost and the ability to view an environment the way it is in reality. This paper investigates the ability of stereo CCD cameras to support 3D reconstruction and presentation of the environment and geometric measurement within it. For this purpose, a rotating stereo panorama system was established using two CCDs with a base-length of 350 mm and a DVR (digital video recorder) box. The stereo system was first calibrated using a 3D test-field and then used to perform accurate measurements. The results of investigating the system in a real environment showed that although this kind of camera produces noisy images and does not have appropriate geometric stability, the cameras can be easily synchronized and well controlled, and reasonable accuracy (about 40 mm for objects at 12 m distance from the camera) can be achieved.
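    The reported accuracy (about 40 mm at 12 m) can be related to the 350 mm base-length through the standard triangulation formulas. The focal length below is an assumed value for illustration, not taken from the paper.

```python
def depth_from_disparity(baseline_mm, focal_px, disparity_px):
    """Triangulated depth Z = f * B / d for a rectified stereo pair."""
    return baseline_mm * focal_px / disparity_px

def depth_error(baseline_mm, focal_px, z_mm, disparity_err_px=1.0):
    """First-order depth uncertainty: dZ = Z^2 / (f * B) * d(disparity)."""
    return z_mm**2 / (focal_px * baseline_mm) * disparity_err_px

# Hypothetical focal length of 1500 px with the paper's 350 mm base-length
z = depth_from_disparity(350, 1500, 43.75)   # 12000 mm, i.e. 12 m
err = depth_error(350, 1500, 12000)          # depth error per pixel of disparity error
```

    Under these assumed parameters, a one-pixel disparity error at 12 m corresponds to roughly 274 mm of depth error, so the ~40 mm accuracy reported implies sub-pixel matching precision.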

  6. Spectroscopy of optically selected BL Lac objects and their γ-ray emission

    Energy Technology Data Exchange (ETDEWEB)

    Sandrinelli, A.; Treves, A.; Farina, E. P.; Landoni, M. [Università degli Studi dell' Insubria, Via Valleggio 11, I-22100 Como (Italy); Falomo, R. [INAF-Osservatorio Astronomico di Padova, Vicolo dell Osservatorio 5, I-35122 Padova (Italy); Foschini, L.; Sbarufatti, B., E-mail: angela.sandrinelli@brera.inaf.it [INAF-Osservatorio Astronomico di Brera, Via Emilio Bianchi 46, I-23807 Merate (Italy)

    2013-12-01

    We present Very Large Telescope optical spectroscopy of nine BL Lac objects of unknown redshift belonging to the list of optically selected radio-loud BL Lac candidates. We explore their spectroscopic properties and possible link with gamma-ray emission. From the new observations we determine the redshifts of four objects from faint emission lines or from absorption features of their host galaxies. In three cases we find narrow intervening absorptions from which a lower limit to the redshift is inferred. For the remaining two featureless sources, lower limits to the redshift are deduced from the absence of spectral lines. A search for γ counterpart emission shows that six out of the nine candidates are Fermi γ-ray emitters and we find two new detections. Our analysis suggests that most of the BL Lac objects still lacking redshift information are most likely located at high redshifts.

  7. An Open Standard for Camera Trap Data

    NARCIS (Netherlands)

    Forrester, Tavis; O'Brien, Tim; Fegraus, Eric; Jansen, P.A.; Palmer, Jonathan; Kays, Roland; Ahumada, Jorge; Stern, Beth; McShea, William

    2016-01-01

    Camera traps that capture photos of animals are a valuable tool for monitoring biodiversity. The use of camera traps is rapidly increasing and there is an urgent need for standardization to facilitate data management, reporting and data sharing. Here we offer the Camera Trap Metadata Standard as an open data standard for storing and sharing camera trap data.

  8. Polarizing aperture stereoscopic cinema camera

    Science.gov (United States)

    Lipton, Lenny

    2012-07-01

    The art of stereoscopic cinematography has been held back by the lack of a convenient way to reduce the stereo camera lenses' interaxial separation to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows the interaxial separation to be varied down to small values, using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a single large digital sensor the size of the standard 35 mm frame, with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies is unaltered.

  9. Control system for gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.

    1977-01-01

    An improved gamma camera arrangement is described which utilizes a solid-state detector formed of high-purity germanium. The central arrangement of the camera carries out a trapezoidal filtering operation over antisymmetrically summed spatial signals through gated integration procedures utilizing idealized integrating intervals. By simultaneously carrying out peak energy evaluation of the input signals, desirable control over pulse pile-up phenomena is achieved. Additionally, the time derivative of incoming pulse or signal energy information is used to initially enable the control system, providing a low-level information evaluation that enhances the signal processing efficiency of the camera.
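    Trapezoidal filtering via gated integration can be sketched as the difference of two gated moving sums: a step input (an idealized detector signal) then produces a trapezoidal output whose flat top is read out as the pulse amplitude. This is a generic textbook shaper, not the patent's exact circuit.

```python
import numpy as np

def trapezoidal_filter(x, rise, flat):
    """FIR trapezoidal shaper: difference of two `rise`-sample moving sums
    separated by `flat` samples. A unit step yields a trapezoid with
    `rise`-sample edges and a flat top of height `rise`."""
    x = np.asarray(x, float)
    pad = np.concatenate([np.zeros(rise + flat + rise), x])
    out = np.empty(len(x))
    for n in range(len(x)):
        i = n + rise + flat + rise                              # index into padded signal
        recent = pad[i - rise + 1 : i + 1].sum()                # sum of newest samples
        delayed = pad[i - rise - flat - rise + 1 : i - rise - flat + 1].sum()
        out[n] = recent - delayed
    return out

step = np.concatenate([np.zeros(10), np.ones(30)])   # idealized detector step at n=10
y = trapezoidal_filter(step, rise=4, flat=3)         # flat-top height = 4
```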

  10. Scintillating camera

    International Nuclear Information System (INIS)

    Vlasbloem, H.

    1976-01-01

    The invention relates to a scintillating camera, and in particular to an apparatus for determining the position coordinates of a light-pulse-emitting point on the anode of an image intensifier tube which forms part of a scintillating camera. The apparatus comprises at least three photomultipliers which are positioned to receive light emitted by the anode screen on their photocathodes; circuit means for processing the output voltages of the photomultipliers to derive voltages that are representative of the position coordinates; a pulse-height discriminator circuit adapted to be fed with the sum of the output voltages of the photomultipliers, for gating the output of the processing circuit when the amplitude of that sum lies in a predetermined amplitude range; and means for compensating the distortion introduced in the image on the anode screen.
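    The processing described above (a position estimate from the photomultiplier voltages, gated on their summed amplitude) can be sketched as a signal-weighted centroid; the PMT layout and energy window below are invented for illustration.

```python
def anger_position(pmt_signals, pmt_positions, sum_window=(0.8, 1.2)):
    """Estimate the (x, y) light-pulse position as the signal-weighted centroid
    of the photomultiplier positions, gated on the summed amplitude
    (simplified Anger-logic sketch)."""
    total = sum(pmt_signals)
    if not (sum_window[0] <= total <= sum_window[1]):
        return None                       # outside the accepted energy range
    x = sum(s * px for s, (px, py) in zip(pmt_signals, pmt_positions)) / total
    y = sum(s * py for s, (px, py) in zip(pmt_signals, pmt_positions)) / total
    return x, y

# Three PMTs at the corners of a triangle; signals normalized so they sum to ~1
pmts = [(-1.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
pos = anger_position([0.25, 0.25, 0.5], pmts)
```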

  11. Coded aperture solution for improving the performance of traffic enforcement cameras

    Science.gov (United States)

    Masoudifar, Mina; Pourreza, Hamid Reza

    2016-10-01

    A coded aperture camera is proposed for automatic license plate recognition (ALPR) systems. It captures images using a noncircular aperture. The aperture pattern is designed for the rapid acquisition of high-resolution images while preserving high spatial frequencies of defocused regions. It is obtained by minimizing an objective function, which computes the expected value of perceptual deblurring error. The imaging conditions and camera sensor specifications are also considered in the proposed function. The designed aperture improves the depth of field (DoF) and subsequently ALPR performance. The captured images can be directly analyzed by the ALPR software up to a specific depth, which is 13 m in our case, though it is 11 m for the circular aperture. Moreover, since the deblurring results of images captured by our aperture yield fewer artifacts than those captured by the circular aperture, images can be first deblurred and then analyzed by the ALPR software. In this way, the DoF and recognition rate can be improved at the same time. Our case study shows that the proposed camera can improve the DoF up to 17 m while it is limited to 11 m in the conventional aperture.
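    A standard way to deblur defocused images before recognition, as in the pipeline above, is frequency-domain Wiener deconvolution. This sketch uses a plain box blur and a fixed noise-to-signal ratio; the paper's aperture pattern and perceptual-error criterion are not modeled.

```python
import numpy as np

def wiener_deblur(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution: attenuates frequencies where the
    blur kernel's transfer function is weak instead of dividing by it."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)     # regularized inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))

# Synthetic test: blur an impulse with a 3x3 box PSF, then deblur
img = np.zeros((32, 32))
img[16, 16] = 1.0
psf = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = wiener_deblur(blurred, psf)
```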

  12. The "All Sky Camera Network"

    Science.gov (United States)

    Caldwell, Andy

    2005-01-01

    In 2001, the "All Sky Camera Network" came to life as an outreach program to connect the Denver Museum of Nature and Science (DMNS) exhibit "Space Odyssey" with Colorado schools. The network comprises cameras placed strategically at schools throughout Colorado to capture fireballs--rare events that produce meteorites…

  13. Initial laboratory evaluation of color video cameras: Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  14. Wearable Cameras Are Useful Tools to Investigate and Remediate Autobiographical Memory Impairment: A Systematic PRISMA Review.

    Science.gov (United States)

    Allé, Mélissa C; Manning, Liliann; Potheegadoo, Jevita; Coutelle, Romain; Danion, Jean-Marie; Berna, Fabrice

    2017-03-01

    Autobiographical memory, central to human cognition and everyday functioning, enables past experienced events to be remembered. A variety of disorders affecting autobiographical memory are characterized by difficulty retrieving specific, detailed memories of past personal events. Owing to the impact of autobiographical memory impairment on patients' daily life, it is necessary to better understand these deficits and develop relevant methods to improve autobiographical memory. The primary objective of the present systematic PRISMA review was to give an overview of the first empirical evidence of the potential of wearable cameras for investigating autobiographical memory and for remediating autobiographical memory impairments. The peer-reviewed literature published since 2004 on the usefulness of wearable cameras in research protocols was explored in 3 databases (PUBMED, PsycINFO, and Google Scholar). Twenty-eight published studies that used a protocol involving a wearable camera, either to explore wearable camera functioning and impact on daily life, or to investigate autobiographical memory processing or remediate autobiographical memory impairment, were included. This review analyzed the potential of wearable cameras for 1) investigating autobiographical memory processes in healthy volunteers without memory impairment and in clinical populations, and 2) remediating autobiographical memory in patients with various kinds of memory disorder. Mechanisms to account for the efficacy of wearable cameras are also discussed. The review concludes by discussing certain limitations inherent to using cameras, and new research perspectives. Finally, ethical issues raised by this new technology are considered.

  15. Movement-based Interaction in Camera Spaces

    DEFF Research Database (Denmark)

    Eriksson, Eva; Riisgaard Hansen, Thomas; Lykke-Olesen, Andreas

    2006-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking, and we show how these three concepts can be used to analyse other camera tracking applications.

  16. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to ov...

  17. 21 CFR 886.1120 - Opthalmic camera.

    Science.gov (United States)

    2010-04-01

    ... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Opthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding area...

  18. Occlusion Handling in Videos Object Tracking: A Survey

    International Nuclear Information System (INIS)

    Lee, B Y; Liew, L H; Cheah, W S; Wang, Y C

    2014-01-01

    Object tracking in video has been an active research area for decades. This interest is motivated by numerous applications, such as surveillance, human-computer interaction, and sports event monitoring. Many challenges related to tracking objects still remain; these can arise from abrupt object motion, changing appearance patterns of objects and the scene, non-rigid object structures and, most significantly, occlusion of the tracked object, be it object-to-object or object-to-scene occlusion. Generally, occlusion in object tracking occurs in three situations: self-occlusion, inter-object occlusion, and occlusion by background scene structure. Self-occlusion occurs most frequently while tracking articulated objects, when one part of the object occludes another. Inter-object occlusion occurs when two objects being tracked occlude each other, whereas occlusion by the background occurs when a structure in the background occludes the tracked objects. Typically, tracking methods handle occlusion by modelling the object motion using linear and non-linear dynamic models. The derived models are used to continuously predict the object location while a tracked object is occluded, until the object reappears. Examples of such methods are Kalman filtering and particle filtering trackers. Researchers have also utilised other features to resolve occlusion, for example silhouette projections, colour histograms and optical flow. We will present some results from a previously conducted experiment on tracking a single object using Kalman filter, particle filter and mean shift trackers under various occlusion situations in this paper. We will also review various other occlusion handling methods that involve using multiple cameras. In a nutshell, the goal of this paper is to discuss in detail the problem of occlusion in object tracking and review the state-of-the-art occlusion handling methods, classify them into different categories, and identify new trends. Moreover, we discuss the important
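    The Kalman-filter approach described above (predict through occlusion from a motion model, and update only when the object is visible) can be sketched in one dimension; the noise parameters are arbitrary illustration values.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """1-D constant-velocity Kalman filter. `None` entries mark occluded frames:
    the filter then skips the update and coasts on its motion-model prediction."""
    F = np.array([[1.0, dt], [0.0, 1.0]])       # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])                  # we observe position only
    Q = q * np.eye(2)                           # process noise
    x = np.array([[float(measurements[0])], [0.0]])
    P = np.eye(2)
    track = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q           # predict
        if z is not None:                       # update only when visible
            y = z - (H @ x)[0, 0]
            S = (H @ P @ H.T)[0, 0] + r
            K = P @ H.T / S
            x = x + K * y
            P = (np.eye(2) - K @ H) @ P
        track.append(x[0, 0])
    return track

# Object moving at +1 per frame, occluded on frames 5-7
zs = [0, 1, 2, 3, 4, None, None, None, 8, 9]
est = kalman_track(zs)
```

    During the occluded frames the estimate coasts at the learned velocity, so the track stays close to the true positions 5, 6, 7 until the measurement update resumes.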

  20. Using VIS/NIR and IR spectral cameras for detecting and separating crime scene details

    Science.gov (United States)

    Kuula, Jaana; Pölönen, Ilkka; Puupponen, Hannu-Heikki; Selander, Tuomas; Reinikainen, Tapani; Kalenius, Tapani; Saari, Heikki

    2012-06-01

    Detecting invisible details and separating mixed evidence is critical for forensic inspection. If this can be done reliably and quickly at the crime scene, irrelevant objects do not require further examination at the laboratory. This speeds up the inspection process and releases resources for other critical tasks. This article reports on tests which have been carried out at the University of Jyväskylä in Finland together with the Central Finland Police Department and the National Bureau of Investigation for detecting and separating forensic details with hyperspectral technology. In the tests, evidence was sought at a simulated violent burglary scene with the use of VTT's 500-900 nm wavelength VNIR camera, Specim's 400-1000 nm VNIR camera, and Specim's 1000-2500 nm SWIR camera. The tested details were dried blood on a ceramic plate, a stain of four types of mixed and absorbed blood, and blood which had been washed off a table. Other examined details included untreated latent fingerprints, gunshot residue, primer residue, and layered paint on small pieces of wood. All cameras could detect visible details and separate mixed paint. The SWIR camera could also separate four types of human and animal blood which were mixed in the same stain and absorbed into a fabric. None of the cameras could, however, detect primer residue, untreated latent fingerprints, or blood that had been washed off. The results are encouraging and indicate the need for further studies. The results also emphasize the importance of creating optimal imaging conditions at the crime scene for each kind of subject and background.
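    A common way to separate materials in hyperspectral data, such as distinguishing substances by their spectral shape, is to compare each pixel's spectrum against a reference. The spectral angle mapper below is a generic technique with made-up 4-band spectra; it is illustrative, not the method used in the study.

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between a pixel spectrum and a reference
    spectrum; small angles mean spectrally similar materials, independent of
    overall brightness."""
    p, r = np.asarray(pixel, float), np.asarray(reference, float)
    cos = p @ r / (np.linalg.norm(p) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Hypothetical 4-band spectra: a 'blood-like' reference vs two pixels
ref = [0.10, 0.15, 0.40, 0.70]
similar = [0.11, 0.16, 0.42, 0.68]       # same shape, slightly different values
different = [0.60, 0.55, 0.20, 0.10]
a1 = spectral_angle(similar, ref)        # small angle: likely the same material
a2 = spectral_angle(different, ref)      # large angle: a different material
```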