WorldWideScience

Sample records for bird-borne camera shows

  1. A bird's eye view of discard reforms: bird-borne cameras reveal seabird/fishery interactions.

    Science.gov (United States)

    Votier, Stephen C; Bicknell, Anthony; Cox, Samantha L; Scales, Kylie L; Patrick, Samantha C

    2013-01-01

    Commercial capture fisheries produce huge quantities of offal, as well as undersized and unwanted catch in the form of discards. Declines in global catches and legislation to ban discarding will significantly reduce discards, but this subsidy supports a large scavenger community. Understanding the potential impact of declining discards for scavengers should feature in an ecosystem-based approach to fisheries management, but requires greater knowledge of scavenger/fishery interactions. Here we use bird-borne cameras, in tandem with GPS loggers, to provide a unique view of seabird/fishery interactions. 20,643 digital images (one min−1) from ten bird-borne cameras deployed on central-place foraging northern gannets Morus bassanus revealed that all birds photographed fishing vessels. These were large (>15 m) boats, with no small-scale vessels. Virtually all vessels were trawlers, and gannets were almost always accompanied by other scavenging birds. All individuals exhibited Area-Restricted Search (ARS) during foraging, but only 42% of ARS were associated with fishing vessels, indicating much 'natural' foraging. The proportion of ARS behaviours associated with fishing boats was higher for males (81%) than females (30%), although the reasons for this are currently unclear. Our study illustrates that fisheries form a very important component of the prey-landscape for foraging gannets and that a discard ban, such as that proposed under reforms of the EU Common Fisheries Policy, may have a significant impact on gannet behaviour, particularly males. However, a continued reliance on 'natural' foraging suggests the ability to switch away from scavenging, but only if there is sufficient food to meet their needs in the absence of a discard subsidy. PMID:23483906
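
    A note on method: the record classifies GPS fixes into Area-Restricted Search (ARS) bouts. The paper's own classifier is not reproduced here; the following is a minimal illustrative sketch of the common speed-and-turning-angle heuristic for flagging ARS, with all thresholds and the toy track invented:

    ```python
    import numpy as np

    def flag_ars(x, y, t, speed_max=2.0, turn_min=np.radians(45)):
        """Flag GPS fixes as Area-Restricted Search (ARS) when the bird is both
        slow and turning sharply -- a common heuristic, not this paper's method.
        x, y: projected positions in metres; t: times in seconds.
        speed_max (m/s) and turn_min (rad) are illustrative thresholds."""
        dx, dy, dt = np.diff(x), np.diff(y), np.diff(t)
        speed = np.hypot(dx, dy) / dt                # per-step speeds
        heading = np.arctan2(dy, dx)                 # per-step headings
        turn = np.abs(np.angle(np.exp(1j * np.diff(heading))))  # wrapped |turn|
        return (speed[1:] < speed_max) & (turn > turn_min)

    # Toy track: a fast, straight commute followed by slow, tortuous searching.
    t = np.arange(12.0)
    x = np.array([0, 5, 10, 15, 20, 20.5, 20.8, 20.4, 20.9, 20.5, 21.0, 20.6])
    y = np.array([0, 0, 0, 0, 0, 0.5, 0.1, 0.6, 0.2, 0.7, 0.3, 0.8])
    print(flag_ars(x, y, t))  # False over the commute, mostly True while searching
    ```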

  2. A bird's eye view of discard reforms: bird-borne cameras reveal seabird/fishery interactions.

    Directory of Open Access Journals (Sweden)

    Stephen C Votier

    Full Text Available Commercial capture fisheries produce huge quantities of offal, as well as undersized and unwanted catch in the form of discards. Declines in global catches and legislation to ban discarding will significantly reduce discards, but this subsidy supports a large scavenger community. Understanding the potential impact of declining discards for scavengers should feature in an ecosystem-based approach to fisheries management, but requires greater knowledge of scavenger/fishery interactions. Here we use bird-borne cameras, in tandem with GPS loggers, to provide a unique view of seabird/fishery interactions. 20,643 digital images (one min−1) from ten bird-borne cameras deployed on central-place foraging northern gannets Morus bassanus revealed that all birds photographed fishing vessels. These were large (>15 m) boats, with no small-scale vessels. Virtually all vessels were trawlers, and gannets were almost always accompanied by other scavenging birds. All individuals exhibited Area-Restricted Search (ARS) during foraging, but only 42% of ARS were associated with fishing vessels, indicating much 'natural' foraging. The proportion of ARS behaviours associated with fishing boats was higher for males (81%) than females (30%), although the reasons for this are currently unclear. Our study illustrates that fisheries form a very important component of the prey-landscape for foraging gannets and that a discard ban, such as that proposed under reforms of the EU Common Fisheries Policy, may have a significant impact on gannet behaviour, particularly males. However, a continued reliance on 'natural' foraging suggests the ability to switch away from scavenging, but only if there is sufficient food to meet their needs in the absence of a discard subsidy.

  3. A Bird’s Eye View of Discard Reforms: Bird-Borne Cameras Reveal Seabird/Fishery Interactions

    OpenAIRE

    Votier, Stephen C.; Bicknell, Anthony; Cox, Samantha L.; Scales, Kylie L.; Patrick, Samantha C

    2013-01-01

    Commercial capture fisheries produce huge quantities of offal, as well as undersized and unwanted catch in the form of discards. Declines in global catches and legislation to ban discarding will significantly reduce discards, but this subsidy supports a large scavenger community. Understanding the potential impact of declining discards for scavengers should feature in an ecosystem-based approach to fisheries management, but requires greater knowledge of scavenger/fishery interactions. Here w...

  4. Development of a safe ultraviolet camera system to enhance awareness by showing effects of UV radiation and UV protection of the skin (Conference Presentation)

    Science.gov (United States)

    Verdaasdonk, Rudolf M.; Wedzinga, Rosaline; van Montfrans, Bibi; Stok, Mirte; Klaessens, John; van der Veen, Albert

    2016-03-01

    The significant increase in skin cancer in the western world is attributed to longer sun exposure during leisure time. For prevention, people should become aware of the risks of UV light exposure; a UV camera can show existing skin damage and the protective effect of sunscreen. A UV awareness imaging system optimized for 365 nm (UV-A) was developed from consumer components to be interactive, safe and mobile. A Sony NEX5t camera was adapted to the full spectral range. In addition, UV-transparent lenses and filters (Schott S8612 and Hoya U-340) were selected based on measured spectral characteristics to obtain the highest contrast for, e.g., melanin spots and wrinkles on the skin. For uniform UV illumination, two facial tanner units were fitted with 365 nm black-light fluorescent tubes. Safety of the UV illumination was determined relative to the sun and with absolute irradiance measurements at the working distance. A maximum exposure time of over 15 minutes was calculated according to international safety standards. The UV camera was successfully demonstrated during the Dutch National Skin Cancer day and was well received by dermatologists and the participating public. In particular, the 'black paint' effect of sunscreen applied to the face was dramatic and raised awareness of facial regions that are likely to be missed when applying sunscreen. The UV imaging system shows promise for diagnostics and clinical studies in dermatology and potentially in other areas (dentistry and ophthalmology).

  5. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player's point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically generate cinematographic game experiences, reducing, however, the player's feeling of agency. We propose a methodology to integrate the player in the camera control loop that makes it possible to design and generate personalised cinematographic experiences. Furthermore, we present an evaluation of the aforementioned methodology showing that the generated camera movements are positively perceived by novice and intermediate players.

  6. Gamma camera

    International Nuclear Information System (INIS)

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position sensitive radiation detector, the novel system can produce images superior to those of conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  7. Underwater camera with depth measurement

    Science.gov (United States)

    Wang, Wei-Chih; Lin, Keng-Ren; Tsui, Chi L.; Schipf, David; Leang, Jonathan

    2016-04-01

    The objective of this study is to develop an RGB-D (video + depth) camera that provides three-dimensional image data for use in the haptic feedback of a robotic underwater ordnance recovery system. Two camera systems were developed and studied. The first depth camera relies on structured light (as used by the Microsoft Kinect), where the displacement of an object is determined by variations in the geometry of a projected pattern. The other camera system is based on a Time of Flight (ToF) depth camera. The results for the structured-light camera system show that it requires a stronger light source with a similar operating wavelength and bandwidth to achieve a desirable working distance in water. This approach might not be robust enough for our proposed underwater RGB-D camera system, as it would require a complete re-design of the light source component. The ToF camera system, by contrast, allows an arbitrary placement of the light source and camera. The intensity output of the broadband LED light source in the ToF camera system can be increased by arranging the LEDs in an array, and the LEDs can be modulated comfortably with any waveform and frequency required by the ToF camera. In this paper, both cameras were evaluated and experiments were conducted to demonstrate the versatility of the ToF camera.

  8. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation must be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
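
    The coordinate transformation the record mentions is typically a log-polar (retina-like) to Cartesian resampling. A minimal sketch under that assumption — the layout parameters and ring/sector counts below are illustrative, not the sensor's actual geometry:

    ```python
    import numpy as np

    def logpolar_to_cartesian(lp, out_size, r_min=5.0, r_max=100.0):
        """Resample a log-polar image lp[ring, sector] onto a Cartesian grid
        using bilinear (sub-pixel) interpolation. The log-polar layout
        parameters are assumptions, not the paper's sensor geometry."""
        n_rings, n_sectors = lp.shape
        h = w = out_size
        ys, xs = np.mgrid[0:h, 0:w]
        dx, dy = xs - w / 2.0, ys - h / 2.0
        r = np.hypot(dx, dy).clip(r_min, r_max - 1e-6)
        theta = np.mod(np.arctan2(dy, dx), 2 * np.pi)
        # Fractional (ring, sector) coordinates of each Cartesian pixel
        u = (np.log(r / r_min) / np.log(r_max / r_min)) * (n_rings - 1)
        v = theta / (2 * np.pi) * n_sectors
        u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
        fu, fv = u - u0, v - v0
        u1 = np.minimum(u0 + 1, n_rings - 1)
        v1 = (v0 + 1) % n_sectors          # sectors wrap around
        v0 = v0 % n_sectors
        # Bilinear blend of the four surrounding log-polar samples
        return ((1 - fu) * (1 - fv) * lp[u0, v0] + (1 - fu) * fv * lp[u0, v1]
                + fu * (1 - fv) * lp[u1, v0] + fu * fv * lp[u1, v1])

    cart = logpolar_to_cartesian(np.random.rand(64, 128), out_size=200)
    print(cart.shape)  # (200, 200)
    ```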

  9. The BCAM Camera

    CERN Document Server

    Hashemi, K S

    2000-01-01

    The BCAM, or Boston CCD Angle Monitor, is a camera looking at one or more light sources. We describe the application of the BCAM to the ATLAS forward muon detector alignment system. We show that the camera's performance is only weakly dependent upon the brightness, focus and diameter of the source image. Its resolution is dominated by turbulence along the external light path. The camera electronics is radiation-resistant. With a field of view of ± 10 mrad, it tracks the bearing of a light source 16 m away with better than 3 µrad accuracy, well within the ATLAS requirements.

  10. TARGETLESS CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    L. Barazzetti

    2012-09-01

    Full Text Available In photogrammetry a camera is considered calibrated if its interior orientation parameters are known. These encompass the principal distance, the principal point position and some Additional Parameters used to model possible systematic errors. The current state of the art for automated camera calibration relies on the use of coded targets to accurately determine the image correspondences. This paper presents a new methodology for the efficient and rigorous photogrammetric calibration of digital cameras which no longer requires the use of targets. A set of images depicting a scene with good texture is sufficient for the extraction of natural corresponding image points. These are automatically matched with feature-based approaches and robust estimation techniques. The subsequent photogrammetric bundle adjustment retrieves the unknown camera parameters and their theoretical accuracies. Examples, considerations and comparisons with real data and different case studies are illustrated to show the potential of the proposed methodology.
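
    The matching step described — natural feature points matched with feature-based approaches and robust estimation — can be illustrated with off-the-shelf tools. A hedged sketch using ORB features and a RANSAC fundamental-matrix filter (the detector choice and thresholds are assumptions; the paper's bundle adjustment stage is not shown):

    ```python
    import cv2
    import numpy as np

    def natural_point_correspondences(img1_path, img2_path):
        """Match natural feature points between two images of a textured scene
        and keep only geometrically consistent pairs (RANSAC on the fundamental
        matrix). ORB + Hamming matching is an illustrative choice, not
        necessarily the paper's detector."""
        img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
        orb = cv2.ORB_create(nfeatures=4000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
        # Robust estimation: discard matches inconsistent with epipolar geometry
        F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
        keep = mask.ravel() == 1
        return pts1[keep], pts2[keep]  # input to a photogrammetric bundle adjustment
    ```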

  11. CCD Camera

    Science.gov (United States)

    Roth, Roger R.

    1983-01-01

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown, wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  12. Deployable Wireless Camera Penetrators

    Science.gov (United States)

    Badescu, Mircea; Jones, Jack; Sherrit, Stewart; Wu, Jiunn Jeng

    2008-01-01

    A lightweight, low-power camera dart has been designed and tested for context imaging of sampling sites and ground surveys from an aerobot or an orbiting spacecraft in a microgravity environment. The camera penetrators also can be used to image any line-of-sight surface, such as cliff walls, that is difficult to access. Tethered cameras to inspect the surfaces of planetary bodies use both power and signal transmission lines to operate. A tether adds the possibility of inadvertently anchoring the aerobot, and requires some form of station-keeping capability of the aerobot if extended examination time is required. The new camera penetrators are deployed without a tether, weigh less than 30 grams, and are disposable. They are designed to drop from any altitude with the boost in transmitting power currently demonstrated at approximately 100-m line-of-sight. The penetrators also can be deployed to monitor lander or rover operations from a distance, and can be used for surface surveys or for context information gathering from a touch-and-go sampling site. Thanks to wireless operation, the complexity of the sampling or survey mechanisms may be reduced. The penetrators may be battery powered for short-duration missions, or have solar panels for longer or intermittent duration missions. The imaging device is embedded in the penetrator, which is dropped or projected at the surface of a study site at 90° to the surface. Mirrors can be used in the design to image the ground or the horizon. Some of the camera features were tested using commercial "nanny" or "spy" camera components with the charge-coupled device (CCD) looking at a direction parallel to the ground. Figure 1 shows components of one camera that weighs less than 8 g and occupies a volume of 11 cm³. This camera could transmit a standard television signal, including sound, up to 100 m. Figure 2 shows the CAD models of a version of the penetrator. A low-volume array of such penetrator cameras could be deployed from an

  13. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations — i.e., automatically controlling the virtual camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single-objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot...

  14. Proactive PTZ Camera Control

    Science.gov (United States)

    Qureshi, Faisal Z.; Terzopoulos, Demetri

    We present a visual sensor network—comprising wide field-of-view (FOV) passive cameras and pan/tilt/zoom (PTZ) active cameras—capable of automatically capturing closeup video of selected pedestrians in a designated area. The passive cameras can track multiple pedestrians simultaneously and any PTZ camera can observe a single pedestrian at a time. We propose a strategy for proactive PTZ camera control where cameras plan ahead to select optimal camera assignment and handoff with respect to predefined observational goals. The passive cameras supply tracking information that is used to control the PTZ cameras.
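
    As a toy illustration of the camera-assignment problem the record describes (not the authors' look-ahead planner), a greedy nearest-pedestrian assignment of PTZ cameras, with all names and the range threshold invented:

    ```python
    import math

    def assign_ptz(cameras, pedestrians, max_range=30.0):
        """Greedily assign each PTZ camera (at most one pedestrian per camera)
        to the nearest unassigned pedestrian within range. Positions are (x, y)
        tuples supplied by the passive tracking cameras. A stand-in for the
        paper's planner, which also plans ahead for handoffs."""
        assignment, taken = {}, set()
        for cam_id, cam_pos in cameras.items():
            best, best_d = None, max_range
            for ped_id, ped_pos in pedestrians.items():
                d = math.dist(cam_pos, ped_pos)
                if ped_id not in taken and d < best_d:
                    best, best_d = ped_id, d
            if best is not None:
                assignment[cam_id] = best
                taken.add(best)
        return assignment

    cams = {"ptz1": (0.0, 0.0), "ptz2": (20.0, 0.0)}
    peds = {"p1": (2.0, 3.0), "p2": (18.0, 1.0)}
    print(assign_ptz(cams, peds))  # {'ptz1': 'p1', 'ptz2': 'p2'}
    ```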

  15. Vacuum Camera Cooler

    Science.gov (United States)

    Laugen, Geoffrey A.

    2011-01-01

    Acquiring cheap, moving video was impossible in a vacuum environment, due to camera overheating. This overheating is brought on by the lack of cooling media in vacuum. A water-jacketed camera cooler enclosure machined and assembled from copper plate and tube has been developed. The camera cooler (see figure) is cup-shaped and cooled by circulating water or nitrogen gas through copper tubing. The camera, a store-bought "spy type," is not designed to work in a vacuum. With some modifications the unit can be thermally connected when mounted in the cup portion of the camera cooler. The thermal conductivity is provided by copper tape between parts of the camera and the cooled enclosure. During initial testing of the demonstration unit, the camera cooler kept the CPU (central processing unit) of this video camera at operating temperature. This development allowed video recording of an in-progress test, within a vacuum environment.

  16. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  17. Constrained space camera assembly

    Science.gov (United States)

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  18. Harpicon camera for HDTV

    Science.gov (United States)

    Tanada, Jun

    1992-08-01

    Ikegami has been involved in broadcast equipment ever since it was established as a company. In conjunction with NHK it has brought forth countless television cameras, from black-and-white cameras to color cameras, HDTV cameras, and special-purpose cameras. In the early days of HDTV (high-definition television, also known as "High Vision") cameras the specifications were different from those for the cameras of the present-day system, and cameras using all kinds of components, having different arrangements of components, and having different appearances were developed into products, with time spent on experimentation, design, fabrication, adjustment, and inspection. But recently the know-how built up thus far in components, printed circuit boards, and wiring methods has been incorporated in camera fabrication, making it possible to make HDTV cameras by methods similar to those of the present system. In addition, more-efficient production, lower costs, and better after-sales service are being achieved by using the same circuits, components, mechanism parts, and software for both HDTV cameras and cameras that operate by the present system.

  19. Digital Pinhole Camera

    Science.gov (United States)

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…
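
    The quantities students explore have simple first-order formulas: magnification is the image-to-object distance ratio, exposure scales with (f/d)², and the sharpest image occurs near Rayleigh's optimal pinhole diameter d ≈ 1.9·√(f·λ). A small worked computation with illustrative values:

    ```python
    import math

    f = 0.050            # pinhole-to-sensor distance: 50 mm, in metres
    wavelength = 550e-9  # green light, ~550 nm

    # Rayleigh's optimal pinhole diameter: d = 1.9 * sqrt(f * lambda)
    d_opt = 1.9 * math.sqrt(f * wavelength)
    print(f"optimal pinhole diameter ~ {d_opt * 1e3:.2f} mm")  # ~0.32 mm

    # Magnification of a subject 1 m away: image distance / object distance
    print(f"magnification = {f / 1.0:.3f}")  # 0.050

    # Relative exposure time scales with (f/d)^2: halving d quadruples exposure
    for d in (d_opt, d_opt / 2):
        print(f"d = {d * 1e3:.2f} mm -> relative exposure {(f / d) ** 2:.0f}")
    ```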

  20. Movement-based Interaction in Camera Spaces

    DEFF Research Database (Denmark)

    Eriksson, Eva; Riisgaard Hansen, Thomas; Lykke-Olesen, Andreas

    2006-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking, and we show how these three concepts can be used to analyse other camera tracking applications.

  1. Calibration method for a central catadioptric-perspective camera system.

    Science.gov (United States)

    He, Bingwei; Chen, Zhipeng; Li, Youfu

    2012-11-01

    A central catadioptric-perspective camera system is widely used nowadays. A critical problem is that current calibration methods cannot determine the extrinsic parameters between the central catadioptric camera and a perspective camera effectively. We present a novel calibration method for a central catadioptric-perspective camera system, in which the central catadioptric camera has a hyperbolic mirror. Two cameras are used to capture images of one calibration pattern at different spatial positions. A virtual camera is constructed at the origin of the central catadioptric camera, facing toward the calibration pattern. The transformation between the virtual camera and the calibration pattern can be computed first, and the extrinsic parameters between the central catadioptric camera and the calibration pattern can then be obtained. Three-dimensional reconstruction results of the calibration pattern show a high accuracy and validate the feasibility of our method.

  2. Optimising camera traps for monitoring small mammals.

    Science.gov (United States)

    Glen, Alistair S; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  3. SUB-CAMERA CALIBRATION OF A PENTA-CAMERA

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-03-01

    for corresponding cameras of both blocks have the same trend, but as usual for block adjustments with self calibration, they still show significant differences. Based on the very high number of image points the remaining image residuals can be safely determined by overlaying and averaging the image residuals corresponding to their image coordinates. The size of the systematic image errors, not covered by the used additional parameters, is in the range of a square mean of 0.1 pixels corresponding to 0.6 μm. They are not the same for both blocks, but show some similarities for corresponding cameras. In general the bundle block adjustment with a satisfying set of additional parameters, checked by remaining systematic errors, is required for use of the whole geometric potential of the penta camera. Especially for object points on facades, often only in two images and taken with a limited base length, the correct handling of systematic image errors is important. At least in the analyzed data sets the self calibration of sub-cameras by bundle block adjustment suffers from the correlation of the inner to the exterior calibration due to missing crossing flight directions. As usual, the systematic image errors differ from block to block even without the influence of the correlation to the exterior orientation.

  4. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
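
    The basic quantity in this analysis, the inter-camera quaternion, is the rotation between the attitude solutions of two star-camera heads. A minimal sketch of forming it with SciPy; the quaternion values are invented stand-ins for GRACE data:

    ```python
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    # Attitude quaternions (x, y, z, w) reported by two star-camera heads;
    # the values are invented stand-ins for GRACE Level-1 data.
    q_head1 = R.from_quat([0.0, 0.0, 0.3826834, 0.9238795])  # 45 deg about z
    q_head2 = R.from_quat([0.0, 0.0, 0.3846834, 0.9230480])  # slightly different

    # Inter-camera quaternion: rotation from head 1's frame to head 2's frame.
    # For rigidly mounted heads this should be constant; its fluctuations over
    # time (e.g., via the auto-covariance) expose attitude measurement noise.
    q_inter = q_head1.inv() * q_head2
    print(q_inter.as_quat())
    print(np.degrees(q_inter.magnitude()), "deg rotation between heads")
    ```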

  5. Microchannel plate streak camera

    Science.gov (United States)

    Wang, Ching L.

    1989-01-01

    An improved streak camera in which a microchannel plate electron multiplier is used in place of or in combination with the photocathode used in prior streak cameras. The improved streak camera is far more sensitive to photons (UV to gamma-rays) than the conventional x-ray streak camera which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1000 keV x-rays.

  6. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  7. Polarization encoded color camera.

    Science.gov (United States)

    Schonbrun, Ethan; Möller, Guðfríður; Di Caprio, Giuseppe

    2014-03-15

    Digital cameras would be colorblind if they did not have pixelated color filters integrated into their image sensors. Integration of conventional fixed filters, however, comes at the expense of an inability to modify the camera's spectral properties. Instead, we demonstrate a micropolarizer-based camera that can reconfigure its spectral response. Color is encoded into a linear polarization state by a chiral dispersive element and then read out in a single exposure. The polarization encoded color camera is capable of capturing three-color images at wavelengths spanning the visible to the near infrared. PMID:24690806

  8. Ringfield lithographic camera

    Science.gov (United States)

    Sweatt, William C.

    1998-01-01

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  9. Laser Dazzling of Focal Plane Array Cameras

    NARCIS (Netherlands)

    Schleijpen, H.M.A.; Heuvel, J.C. van den; Mieremet, A.J.; Mellier, B.; Putten, F.J.M. van

    2007-01-01

    Laser countermeasures against infrared focal plane array cameras aim to saturate the full camera image. In this paper we will discuss the results of three different dazzling experiments performed with MWIR lasers and show that the obtained results are independent of the read-out mechanism of the camera.

  10. Camera Operator and Videographer

    Science.gov (United States)

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  11. CCD Luminescence Camera

    Science.gov (United States)

    Janesick, James R.; Elliott, Tom

    1987-01-01

    New diagnostic tool used to understand performance and failures of microelectronic devices. Microscope integrated to low-noise charge-coupled-device (CCD) camera to produce new instrument for analyzing performance and failures of microelectronics devices that emit infrared light during operation. CCD camera also used to identify very clearly parts that have failed where luminescence typically found.

  12. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.

  13. Dry imaging cameras

    Directory of Open Access Journals (Sweden)

    I K Indrajit

    2011-01-01

    Full Text Available Dry imaging cameras are important hard copy devices in radiology. Using a dry imaging camera, multiformat images of digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes in areas of diverse sciences like computing, mechanics, thermal physics, optics, electricity and radiography. Broadly, hard copy devices are classified as laser-based and non-laser-based technology. When compared with the working knowledge and technical awareness of different modalities in radiology, the understanding of a dry imaging camera is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and the important issues that impact radiology workflow.

  14. NEW VERSATILE CAMERA CALIBRATION TECHNIQUE BASED ON LINEAR RECTIFICATION

    Institute of Scientific and Technical Information of China (English)

    Pan Feng; Wang Xuanyin

    2004-01-01

    A new versatile camera calibration technique for machine vision using off-the-shelf cameras is described. To address the large distortion of off-the-shelf cameras, a new distortion rectification technique based on line rectification is proposed. A full-camera-distortion model is introduced and a linear algorithm is provided to obtain the solution. After camera rectification, the intrinsic and extrinsic parameters are obtained based on the relationship between the homography and the absolute conic. This technique needs neither a high-accuracy three-dimensional calibration block, nor a complicated translation or rotation platform. Both simulations and experiments show that this method is effective and robust.

  15. Show Time

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    Story: Show Time! The whole class presents the story "Under the Sea". Everyone is so excited and happy. Both Leo and Kathy show their parents the characters of the play. "Who's he?" asks Kathy's mom. "He's the prince." Kathy replies. "Who's she?" asks Leo's dad. "She's the queen." Leo replies with a smile.

  16. Snobbish Show

    Institute of Scientific and Technical Information of China (English)

    YIN PUMIN

    2010-01-01

    The State Administration of Radio, Film and Television (SARFT), China's media watchdog, issued a new set of rules on June 9 that strictly regulate TV match-making shows, which have been sweeping the country's primetime programming. "Improper social and love values such as money worship should not be presented in these shows. Humiliation, verbal attacks and sex-implied vulgar content are not allowed," the new rules said.

  17. Modelling Virtual Camera Behaviour Through Player Gaze

    DEFF Research Database (Denmark)

    Picardi, Andrea; Burelli, Paolo; Yannakakis, Georgios N.

    2012-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction largely depend on the placement and animation of the virtual camera. Therefore, virtual camera control plays a critical role in player experience and, thereby, in the overall quality of a computer game. Both game industry and game AI research focus on the development of increasingly sophisticated systems to automate the control of the virtual camera, integrating artificial intelligence algorithms within physical simulations. However, in both industry and academia little research has been carried out on the relationship between virtual camera, game-play and player behaviour. We run a game user experiment to shed some light on this relationship and identify relevant differences between camera behaviours through different game sessions, playing behaviours and player gaze patterns. Results show that users can...

  18. Camera as Cultural Critique

    DEFF Research Database (Denmark)

    Suhr, Christian

    2015-01-01

    What does the use of cameras entail for the production of cultural critique in anthropology? Visual anthropological analysis and cultural critique start at the very moment a camera is brought into the field or existing visual images are engaged. The framing, distances, and interactions between researchers, cameras, and filmed subjects already inherently comprise analytical decisions. It is these ethnographic qualities inherent in audiovisual and photographic imagery that make it of particular value to a participatory anthropological enterprise that seeks to resist analytic closure and seeks instead...

  19. AEROBATIC SHOW

    Institute of Scientific and Technical Information of China (English)

    2016-01-01

    Visitors look at plane models of the Commercial Aircraft Corp. of China, developer of the country's first homegrown large passenger jet C919, during the Singapore Airshow on February 16. The biennial event is the largest airshow in Asia and one of the most important aviation and defense shows worldwide. A number of Chinese companies took part in the event, during which Okay Airways, the first privately owned airline in China, signed a deal to acquire 12 Boeing 737 jets.

  20. Speed cameras : how they work and what effect they have.

    NARCIS (Netherlands)

    2011-01-01

    Much research has been carried out into the effects of speed cameras, and the research shows consistently positive results. International review studies report that speed cameras produce a reduction of approximately 20% in personal injury crashes on road sections where cameras are used. In the Netherlands...

  1. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  2. The MKID Camera

    Science.gov (United States)

    Maloney, P. R.; Czakon, N. G.; Day, P. K.; Duan, R.; Gao, J.; Glenn, J.; Golwala, S.; Hollister, M.; LeDuc, H. G.; Mazin, B.; Noroozian, O.; Nguyen, H. T.; Sayers, J.; Schlaerth, J.; Vaillancourt, J. E.; Vayonakis, A.; Wilson, P.; Zmuidzinas, J.

    2009-12-01

    The MKID Camera project is a collaborative effort of Caltech, JPL, the University of Colorado, and UC Santa Barbara to develop a large-format, multi-color millimeter and submillimeter-wavelength camera for astronomy using microwave kinetic inductance detectors (MKIDs). These are superconducting micro-resonators fabricated from thin aluminum and niobium films. We couple the MKIDs to multi-slot antennas and measure the change in surface impedance produced by photon-induced breaking of Cooper pairs. The readout is almost entirely at room temperature and can be highly multiplexed; in principle hundreds or even thousands of resonators could be read out on a single feedline. The camera will have 576 spatial pixels that image simultaneously in four bands at 750, 850, 1100 and 1300 microns. It is scheduled for deployment at the Caltech Submillimeter Observatory in the summer of 2010. We present an overview of the camera design and readout and describe the current status of testing and fabrication.

  3. Gamma camera system

    International Nuclear Information System (INIS)

    A detailed description is given of a novel gamma camera which is designed to produce images superior to those of conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given and the problems and limitations introduced by noise are discussed in full. (U.K.)

  4. Segment Based Camera Calibration

    Institute of Scientific and Technical Information of China (English)

    Ma Songde; Wei Guoqing; et al.

    1993-01-01

    The basic idea of calibrating a camera system in previous approaches is to determine camera parameters by using a set of known 3D points as calibration reference. In this paper, we present a method of camera calibration in which camera parameters are determined by a set of 3D lines. A set of constraints is derived on camera parameters in terms of perspective line mapping. From these constraints, the same perspective transformation matrix as that for point mapping can be computed linearly. The minimum number of calibration lines is 6. This result generalizes that of Liu, Huang and Faugeras [12] for camera location determination, in which at least 8 line correspondences are required for linear computation of camera location. Since line segments in an image can be located easily and more accurately than points, the use of lines as calibration reference tends to ease the computation in image preprocessing and to improve calibration accuracy. Experimental results on the calibration along with stereo reconstruction are reported.

  5. Spacecraft camera image registration

    Science.gov (United States)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  6. Close-range photogrammetry with video cameras

    Science.gov (United States)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1985-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.
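
    The plumb line method exploits the fact that straight scene lines must image as straight lines: radial distortion coefficients are chosen to minimize the bowing of imaged "plumb lines". A toy sketch of that idea with a one-term radial model (the coefficient, image size and grid search are illustrative assumptions):

    ```python
    import numpy as np

    def radial_distort(pts, center, k1):
        """Forward one-term radial model: r_d = r_u * (1 + k1 * r_u**2)."""
        d = pts - center
        r2 = np.sum(d ** 2, axis=1, keepdims=True)
        return center + d * (1.0 + k1 * r2)

    def bowing(pts):
        """Residual of a straight-line fit: zero for a perfectly straight line."""
        A = np.stack([pts[:, 0], np.ones(len(pts))], axis=1)
        resid = np.linalg.lstsq(A, pts[:, 1], rcond=None)[1]
        return float(resid[0]) if len(resid) else 0.0

    center = np.array([320.0, 240.0])
    straight = np.stack([np.linspace(60, 580, 11), np.full(11, 80.0)], axis=1)
    observed = radial_distort(straight, center, k1=2e-7)  # bowed "plumb line"

    # Plumb-line idea: search for the correction that minimizes bowing.
    # (Here we invert by distorting with candidate -k, a small-distortion
    # approximation of the true inverse.)
    ks = np.linspace(0, 4e-7, 81)
    best_k = min(ks, key=lambda k: bowing(radial_distort(observed, center, -k)))
    print(f"estimated k1 ~ {best_k:.2e}")  # close to the simulated 2e-7
    ```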

  7. Neural network method for characterizing video cameras

    Science.gov (United States)

    Zhou, Shuangquan; Zhao, Dazun

    1998-08-01

    This paper presents a neural network method for characterizing a color video camera. A multilayer feedforward network, trained with the error back-propagation learning rule, is used as a nonlinear transformer to model the camera, realizing a mapping from the CIELAB color space to RGB color space. With a SONY video camera, a D65 illuminant, a Pritchard spectroradiometer, 410 JIS color charts as training data and 36 charts as testing data, results show that the mean error on the training data is 2.9 and that on the testing data is 4.0 in a 256³ RGB space.
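
    A minimal sketch of the same idea — a small feedforward net trained by backpropagation to map CIELAB to device RGB — using scikit-learn on synthetic stand-in data (the architecture and data are assumptions, not the paper's 410-chart setup):

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Synthetic stand-in for measured chart data: CIELAB coordinates paired
    # with the camera's RGB responses (the paper used 410 JIS charts).
    lab_train = rng.uniform([0, -60, -60], [100, 60, 60], size=(410, 3))
    rgb_train = np.tanh(lab_train @ rng.normal(size=(3, 3)) * 0.01)  # fake device

    # Multilayer feedforward net trained with backpropagation, modelling the
    # nonlinear CIELAB -> RGB mapping of the camera.
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
    net.fit(lab_train, rgb_train)

    lab_test = rng.uniform([0, -60, -60], [100, 60, 60], size=(36, 3))
    rgb_pred = net.predict(lab_test)
    print(rgb_pred.shape)  # (36, 3): predicted camera RGB for 36 test charts
    ```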

  8. CAOS-CMOS camera.

    Science.gov (United States)

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point-photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems. PMID:27410361

  9. The Dark Energy Camera

    Energy Technology Data Exchange (ETDEWEB)

    Flaugher, B. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States). et al.

    2015-04-11

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250-μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The CCDs have 15 μm x 15 μm pixels with a plate scale of 0.263" per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  10. The Dark Energy Camera

    CERN Document Server

    Flaugher, B; Honscheid, K; Abbott, T M C; Alvarez, O; Angstadt, R; Annis, J T; Antonik, M; Ballester, O; Beaufore, L; Bernstein, G M; Bernstein, R A; Bigelow, B; Bonati, M; Boprie, D; Brooks, D; Buckley-Geer, E J; Campa, J; Cardiel-Sas, L; Castander, F J; Castilla, J; Cease, H; Cela-Ruiz, J M; Chappa, S; Chi, E; Cooper, C; da Costa, L N; Dede, E; Derylo, G; DePoy, D L; de Vicente, J; Doel, P; Drlica-Wagner, A; Eiting, J; Elliott, A E; Emes, J; Estrada, J; Neto, A Fausti; Finley, D A; Flores, R; Frieman, J; Gerdes, D; Gladders, M D; Gregory, B; Gutierrez, G R; Hao, J; Holland, S E; Holm, S; Huffman, D; Jackson, C; James, D J; Jonas, M; Karcher, A; Karliner, I; Kent, S; Kessler, R; Kozlovsky, M; Kron, R G; Kubik, D; Kuehn, K; Kuhlmann, S; Kuk, K; Lahav, O; Lathrop, A; Lee, J; Levi, M E; Lewis, P; Li, T S; Mandrichenko, I; Marshall, J L; Martinez, G; Merritt, K W; Miquel, R; Munoz, F; Neilsen, E H; Nichol, R C; Nord, B; Ogando, R; Olsen, J; Palio, N; Patton, K; Peoples, J; Plazas, A A; Rauch, J; Reil, K; Rheault, J -P; Roe, N A; Rogers, H; Roodman, A; Sanchez, E; Scarpine, V; Schindler, R H; Schmidt, R; Schmitt, R; Schubnell, M; Schultz, K; Schurter, P; Scott, L; Serrano, S; Shaw, T M; Smith, R C; Soares-Santos, M; Stefanik, A; Stuermer, W; Suchyta, E; Sypniewski, A; Tarle, G; Thaler, J; Tighe, R; Tran, C; Tucker, D; Walker, A R; Wang, G; Watson, M; Weaverdyck, C; Wester, W; Woods, R; Yanny, B

    2015-01-01

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250 micron thick fully-depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2kx4k CCDs for imaging and 12 2kx2k CCDs for guiding and focus. The CCDs have 15 microns x 15 microns pixels with a plate scale of 0.263 arc sec per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construct...

  11. The Dark Energy Camera

    Science.gov (United States)

    Flaugher, B.; Diehl, H. T.; Honscheid, K.; Abbott, T. M. C.; Alvarez, O.; Angstadt, R.; Annis, J. T.; Antonik, M.; Ballester, O.; Beaufore, L.; Bernstein, G. M.; Bernstein, R. A.; Bigelow, B.; Bonati, M.; Boprie, D.; Brooks, D.; Buckley-Geer, E. J.; Campa, J.; Cardiel-Sas, L.; Castander, F. J.; Castilla, J.; Cease, H.; Cela-Ruiz, J. M.; Chappa, S.; Chi, E.; Cooper, C.; da Costa, L. N.; Dede, E.; Derylo, G.; DePoy, D. L.; de Vicente, J.; Doel, P.; Drlica-Wagner, A.; Eiting, J.; Elliott, A. E.; Emes, J.; Estrada, J.; Fausti Neto, A.; Finley, D. A.; Flores, R.; Frieman, J.; Gerdes, D.; Gladders, M. D.; Gregory, B.; Gutierrez, G. R.; Hao, J.; Holland, S. E.; Holm, S.; Huffman, D.; Jackson, C.; James, D. J.; Jonas, M.; Karcher, A.; Karliner, I.; Kent, S.; Kessler, R.; Kozlovsky, M.; Kron, R. G.; Kubik, D.; Kuehn, K.; Kuhlmann, S.; Kuk, K.; Lahav, O.; Lathrop, A.; Lee, J.; Levi, M. E.; Lewis, P.; Li, T. S.; Mandrichenko, I.; Marshall, J. L.; Martinez, G.; Merritt, K. W.; Miquel, R.; Muñoz, F.; Neilsen, E. H.; Nichol, R. C.; Nord, B.; Ogando, R.; Olsen, J.; Palaio, N.; Patton, K.; Peoples, J.; Plazas, A. A.; Rauch, J.; Reil, K.; Rheault, J.-P.; Roe, N. A.; Rogers, H.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schindler, R. H.; Schmidt, R.; Schmitt, R.; Schubnell, M.; Schultz, K.; Schurter, P.; Scott, L.; Serrano, S.; Shaw, T. M.; Smith, R. C.; Soares-Santos, M.; Stefanik, A.; Stuermer, W.; Suchyta, E.; Sypniewski, A.; Tarle, G.; Thaler, J.; Tighe, R.; Tran, C.; Tucker, D.; Walker, A. R.; Wang, G.; Watson, M.; Weaverdyck, C.; Wester, W.; Woods, R.; Yanny, B.; DES Collaboration

    2015-11-01

    The Dark Energy Camera is a new imager with a 2.°2 diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.″263 pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6-9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  12. CAOS-CMOS camera.

    Science.gov (United States)

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point-photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
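
    The quoted dynamic ranges follow the usual optical-sensor convention DR_dB = 20·log10(I_max/I_min); a quick check of what 51.3 dB and 82.06 dB mean as linear brightness ratios:

    ```python
    import math

    def ratio_from_db(db):
        """Linear max/min brightness ratio for a dynamic range quoted in dB
        (optical convention: DR_dB = 20 * log10(I_max / I_min))."""
        return 10 ** (db / 20)

    print(f"CMOS sensor alone, 51.3 dB  -> {ratio_from_db(51.3):,.0f}:1")   # ~367:1
    print(f"CAOS-CMOS camera, 82.06 dB -> {ratio_from_db(82.06):,.0f}:1")  # ~12,700:1
    # The ~35x ratio between the two figures is the extra contrast headroom
    # the CAOS mode adds under extreme lighting.
    ```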

  13. HIGH SPEED CAMERA

    Science.gov (United States)

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, as well as an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera having this short a resolution time is thereby possible.

  14. Worldview and route planning using live public cameras

    Science.gov (United States)

    Kaseb, Ahmed S.; Chen, Wenyi; Gingade, Ganesh; Lu, Yung-Hsiang

    2015-03-01

    Planning a trip requires considering many unpredictable factors along the route, such as traffic, weather, and accidents. People are interested in viewing the places they plan to visit and the routes they plan to take. This paper presents a system with an Android mobile application that allows users to: (i) watch the live feeds (videos or snapshots) from more than 65,000 geotagged public cameras around the world, selecting cameras on an interactive world map; and (ii) search for and watch the live feeds from the cameras along the route between a starting point and a destination. The system consists of a server which maintains a database with the cameras' information, and a mobile application that shows the camera map and communicates with the cameras. To evaluate the system, we compare it with existing systems in terms of the total number of cameras, the cameras' coverage, and the number of cameras on various routes. We also discuss the response time of loading the camera map, finding the cameras on a route, and communicating with the cameras.
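
    A minimal sketch of the "cameras along a route" query: keep geotagged cameras within a corridor around the route's waypoints. The corridor width, data shapes and coordinates below are invented for illustration:

    ```python
    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two (lat, lon) points in km."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = p2 - p1, math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * 6371 * math.asin(math.sqrt(a))

    def cameras_on_route(cameras, route, corridor_km=2.0):
        """Return cameras within corridor_km of any route waypoint.
        cameras: {camera_id: (lat, lon)}; route: list of (lat, lon) waypoints."""
        return [cid for cid, (clat, clon) in cameras.items()
                if any(haversine_km(clat, clon, wlat, wlon) <= corridor_km
                       for wlat, wlon in route)]

    cams = {"cam1": (40.44, -86.91), "cam2": (41.88, -87.63)}
    route = [(40.43, -86.91), (40.80, -87.20), (41.87, -87.64)]
    print(cameras_on_route(cams, route))  # ['cam1', 'cam2']
    ```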

  15. Artificial human vision camera

    Science.gov (United States)

    Goudou, J.-F.; Maggio, S.; Fagno, M.

    2014-10-01

    In this paper we present a real-time vision system modeling the human vision system. Our purpose is to draw inspiration from human vision bio-mechanics to improve robotic capabilities for tasks such as object detection and tracking. This work first describes the bio-mechanical discrepancies between human vision and classic cameras, and the retinal processing stage that takes place in the eye, before the optic nerve. The second part describes our implementation of these principles in a 3-camera optical, mechanical and software model of the human eyes and the associated bio-inspired attention model.

  16. The Star Formation Camera

    OpenAIRE

    Scowen, Paul A.; Jansen, Rolf; Beasley, Matthew; Calzetti, Daniela; Desch, Steven; Fullerton, Alex; Gallagher, John; Lisman, Doug; Macenka, Steve; Malhotra, Sangeeta; McCaughrean, Mark; Nikzad, Shouleh; O'Connell, Robert; Oey, Sally; Padgett, Deborah

    2009-01-01

    The Star Formation Camera (SFC) is a wide-field (~15'x19', >280 arcmin^2), high-resolution (18x18 mas pixels) UV/optical dichroic camera designed for the Theia 4-m space-borne telescope concept. SFC will deliver diffraction-limited images at lambda > 300 nm in both a blue (190-517nm) and a red (517-1075nm) channel simultaneously. Our aim is to conduct a comprehensive and systematic study of the astrophysical processes and environments relevant for the births and life cycles of stars and ...

  17. Determining Vision Graphs for Distributed Camera Networks Using Feature Digests

    Directory of Open Access Journals (Sweden)

    Cheng Zhaolin

    2007-01-01

    Full Text Available We propose a decentralized method for obtaining the vision graph for a distributed, ad-hoc camera network, in which each edge of the graph represents two cameras that image a sufficiently large part of the same environment. Each camera encodes a spatially well-distributed set of distinctive, approximately viewpoint-invariant feature points into a fixed-length "feature digest" that is broadcast throughout the network. Each receiver camera robustly matches its own features with the decompressed digest and decides whether sufficient evidence exists to form a vision graph edge. We also show how a camera calibration algorithm that passes messages only along vision graph edges can recover accurate 3D structure and camera positions in a distributed manner. We analyze the performance of different message formation schemes, and show that high detection rates (>0.8) can be achieved while maintaining low false alarm rates (<0.05) using a simulated 60-node outdoor camera network.
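
    A toy version of the edge-decision step might look as follows; the descriptor layout, ratio test and thresholds here are assumptions for illustration, not the paper's actual digest encoding:

      import numpy as np

      def has_vision_graph_edge(own_desc, digest_desc, ratio=0.8, min_matches=12):
          # own_desc, digest_desc: rows are feature descriptors (hypothetical
          # layout). Count nearest-neighbour ratio-test matches and declare an
          # edge when enough survive; both thresholds are illustrative.
          matches = 0
          for d in own_desc:
              dists = np.sort(np.linalg.norm(digest_desc - d, axis=1))
              if len(dists) > 1 and dists[0] < ratio * dists[1]:
                  matches += 1
          return matches >= min_matches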

  18. Action selection for single-camera SLAM.

    Science.gov (United States)

    Vidal-Calleja, Teresa A; Sanfeliu, Alberto; Andrade-Cetto, Juan

    2010-12-01

    A method for evaluating, at video rate, the quality of actions for a single camera while mapping unknown indoor environments is presented. The strategy maximizes mutual information between measurements and states to help the camera avoid making the ill-conditioned measurements that arise from the lack of depth information in monocular vision systems. Our system prompts a user with the appropriate motion commands during 6-DOF visual simultaneous localization and mapping with a handheld camera. Additionally, the system has been ported to a mobile robotic platform, thus closing the control-estimation loop. To show the viability of the approach, simulations and experiments are presented for the unconstrained motion of a handheld camera and for the motion of a mobile robot with nonholonomic constraints. When combined with a path planner, the technique safely drives the robot to a marked goal while, at the same time, producing an optimal estimated map. PMID:20350845
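
    For a linear-Gaussian filter, the mutual information between the state and a candidate measurement can be scored from covariances alone. A minimal sketch, assuming an EKF-style state covariance P, measurement Jacobian H and noise covariance R (the paper's exact formulation may differ):

      import numpy as np

      def mutual_information(P, H, R):
          # I(x; z) for a linear-Gaussian measurement z = H x + v, v ~ N(0, R):
          # half the log-ratio of prior to posterior state uncertainty volumes.
          S = H @ P @ H.T + R                     # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
          P_post = (np.eye(P.shape[0]) - K @ H) @ P
          _, logdet_prior = np.linalg.slogdet(P)
          _, logdet_post = np.linalg.slogdet(P_post)
          return 0.5 * (logdet_prior - logdet_post)

      # Candidate camera actions can then be ranked by the information their
      # predicted measurements would yield, penalizing ill-conditioned ones.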

  19. Communities, Cameras, and Conservation

    Science.gov (United States)

    Patterson, Barbara

    2012-01-01

    Communities, Cameras, and Conservation (CCC) is the most exciting and valuable program the author has seen in her 30 years of teaching field science courses. In this citizen science project, students and community volunteers collect data on mountain lions ("Puma concolor") at four natural areas and public parks along the Front Range of Colorado.…

  20. Advanced Virgo phase cameras

    Science.gov (United States)

    van der Schaaf, L.; Agatsuma, K.; van Beuzekom, M.; Gebyehu, M.; van den Brand, J.

    2016-05-01

    A century after the prediction of gravitational waves, detectors have reached the sensitivity needed to prove their existence. One of them, the Virgo interferometer in Pisa, is presently being upgraded to Advanced Virgo (AdV) and will come into operation in 2016. The power stored in the interferometer arms rises from 20 to 700 kW. This increase is expected to introduce higher-order modes in the beam, which could reduce the circulating power in the interferometer, limiting the sensitivity of the instrument. To suppress these higher-order modes, the core optics of Advanced Virgo is equipped with a thermal compensation system. Phase cameras, which monitor the real-time status of the beam, constitute a critical component of this compensation system. These cameras measure the phases and amplitudes of the laser-light fields at the frequencies selected to control the interferometer. The measurement combines heterodyne detection with a scan of the wave front over a photodetector with a pin-hole aperture. Three cameras observe the phase front of these laser sidebands. Two of them monitor the input and output of the interferometer arms, and the third is used in the control of the aberrations introduced by the power recycling cavity. In this paper the working principle of the phase cameras is explained and some characteristic parameters are described.
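
    The heterodyne step amounts to lock-in (I/Q) demodulation of the photodetector signal at each sideband frequency; a schematic sketch under that assumption (illustrative, not the instrument's actual signal chain):

      import numpy as np

      def demodulate(signal, t, f_sb):
          # Lock-in (I/Q) demodulation of a photodetector time series at one
          # sideband frequency: returns (amplitude, phase) of that component.
          i = 2.0 * np.mean(signal * np.cos(2 * np.pi * f_sb * t))
          q = 2.0 * np.mean(signal * np.sin(2 * np.pi * f_sb * t))
          return np.hypot(i, q), np.arctan2(q, i)

      # Repeating this at every scan position of the pinhole photodetector
      # builds up amplitude and phase maps of the sideband wavefront.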

  1. The world's fastest camera

    CERN Multimedia

    Piquepaille, Roland

    2006-01-01

    This image processor is not your typical digital camera. It took 20 people six years and $6 million to build the "Regional Calorimeter Trigger" (RCT), which will be a component of the Compact Muon Solenoid (CMS) experiment, one of the detectors on the Large Hadron Collider (LHC) in Geneva, Switzerland (1 page)

  2. Make a Pinhole Camera

    Science.gov (United States)

    Fisher, Diane K.; Novati, Alexander

    2009-01-01

    On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…
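
    As an aside on sizing such a camera (not from the record): a pinhole that is too large blurs geometrically, while one that is too small blurs by diffraction. Rayleigh's rule of thumb d ≈ 1.9·sqrt(f·λ) balances the two, as in this small helper:

      import math

      def optimal_pinhole_mm(focal_length_mm, wavelength_nm=550):
          # Rayleigh's rule of thumb d = 1.9 * sqrt(f * lambda), in millimetres.
          f_m = focal_length_mm / 1000.0
          lam_m = wavelength_nm * 1e-9
          return 1.9 * math.sqrt(f_m * lam_m) * 1000.0

      print(optimal_pinhole_mm(100))  # ~0.45 mm for a 100 mm pinhole-to-film depth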

  3. Image Sensors Enhance Camera Technologies

    Science.gov (United States)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  4. MISR radiometric camera-by-camera Cloud Mask V004

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the Radiometric camera-by-camera Cloud Mask dataset. It is used to determine whether a scene is classified as clear or cloudy. A new parameter...

  5. Calibration Procedures on Oblique Camera Setups

    Science.gov (United States)

    Kemper, G.; Melykuti, B.; Yu, C.

    2016-06-01

    Beside the creation of virtual animated 3D city models and analyses for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires a precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples from the calibration flight together with the final 3D city model. In contrast to most other software, the oblique cameras are used not as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and a 50 mm lens while the oblique ones capture 50 MPix images using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount which creates floating antenna-IMU lever arms. These had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots from all 5 cameras and registered GPS/IMU data. This specific mission was designed at two different altitudes with additional cross lines at each flying height. The five images from each exposure position have no overlap, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration. In a first step with the help of

  6. CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    G. Kemper

    2016-06-01

    Full Text Available Beside the creation of virtual animated 3D city models and analyses for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires a precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples from the calibration flight together with the final 3D city model. In contrast to most other software, the oblique cameras are used not as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and a 50 mm lens while the oblique ones capture 50 MPix images using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount which creates floating antenna–IMU lever arms. These had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots from all 5 cameras and registered GPS/IMU data. This specific mission was designed at two different altitudes with additional cross lines at each flying height. The five images from each exposure position have no overlap, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration. In a first

  7. Combustion pinhole camera system

    Science.gov (United States)

    Witte, Arvel B.

    1984-02-21

    A pinhole camera system utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external, variable-density light filter which is coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly, which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  8. Camera Surveillance Quadrotor

    OpenAIRE

    Hjelm, Emil; Yousif, Robert

    2015-01-01

    A quadrotor is a helicopter with four rotors placed at equal distance from the craft's centre of gravity, controlled by letting the different rotors generate different amounts of thrust. It uses various sensors to stay stable in the air, so correct readings from these sensors are critical. By reducing vibrations, electromagnetic interference and external disturbances the quadrotor's stability can increase. The purpose of this project is to analyse the feasibility of a quadrotor camera su...

  9. The Star Formation Camera

    CERN Document Server

    Scowen, Paul A; Beasley, Matthew; Calzetti, Daniela; Desch, Steven; Fullerton, Alex; Gallagher, John; Lisman, Doug; Macenka, Steve; Malhotra, Sangeeta; McCaughrean, Mark; Nikzad, Shouleh; O'Connell, Robert; Oey, Sally; Padgett, Deborah; Rhoads, James; Roberge, Aki; Siegmund, Oswald; Shaklan, Stuart; Smith, Nathan; Stern, Daniel; Tumlinson, Jason; Windhorst, Rogier; Woodruff, Robert

    2009-01-01

    The Star Formation Camera (SFC) is a wide-field (~15'x19', >280 arcmin^2), high-resolution (18x18 mas pixels) UV/optical dichroic camera designed for the Theia 4-m space-borne telescope concept. SFC will deliver diffraction-limited images at lambda > 300 nm in both a blue (190-517nm) and a red (517-1075nm) channel simultaneously. Our aim is to conduct a comprehensive and systematic study of the astrophysical processes and environments relevant for the births and life cycles of stars and their planetary systems, and to investigate and understand the range of environments, feedback mechanisms, and other factors that most affect the outcome of the star and planet formation process. This program addresses the origins and evolution of stars, galaxies, and cosmic structure and has direct relevance for the formation and survival of planetary systems like our Solar System and planets like Earth. We present the design and performance specifications resulting from the implementation study of the camera, conducted ...
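
    A quick consistency check on the quoted numbers: the diffraction-limited resolution 1.22·λ/D of a 4-m aperture at the 300 nm cutoff comes out close to the stated 18 mas pixel scale:

      import math

      def diffraction_limit_mas(wavelength_nm, aperture_m=4.0):
          # theta = 1.22 * lambda / D, converted from radians to milliarcseconds.
          theta = 1.22 * wavelength_nm * 1e-9 / aperture_m
          return math.degrees(theta) * 3600.0 * 1000.0

      print(diffraction_limit_mas(300))  # ~18.9 mas at the 300 nm cutoff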

  10. Hemispherical Laue camera

    Science.gov (United States)

    Li, James C. M.; Chu, Sungnee G.

    1980-01-01

    A hemispherical Laue camera comprises a crystal sample mount for positioning a sample to be analyzed at the center of the sphere of a hemispherical, X-radiation-sensitive film cassette, a collimator, a stationary or rotating sample mount and a set of standard spherical projection spheres. X-radiation generated from an external source is directed through the collimator to impinge onto the single crystal sample on the stationary mount. The diffracted beam is recorded on the hemispherical X-radiation-sensitive film mounted inside the hemispherical film cassette in either transmission or back-reflection geometry. The distances travelled by X-radiation diffracted from the crystal to the hemispherical film are the same for all crystal planes which satisfy Bragg's law. The recorded diffraction spots, or Laue spots, on the film thereby preserve both the symmetry information of the crystal structure and the relative intensities, which are directly related to the relative structure factors of the crystal orientations. The diffraction pattern on the exposed film is compared with the known diffraction pattern on one of the standard spherical projection spheres for a specific crystal structure to determine the orientation of the crystal sample. By replacing the stationary sample support with a rotating sample mount, the hemispherical Laue camera can be used for crystal structure determination in a manner previously provided by conventional Debye-Scherrer cameras.
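
    The geometry rests on Bragg's law, n·λ = 2·d·sin(θ); a small worked example (the wavelength and lattice spacing below are illustrative values, not from the record):

      import math

      def bragg_angle_deg(wavelength_a, d_spacing_a, order=1):
          # Bragg's law: n * lambda = 2 * d * sin(theta); angles in degrees,
          # lengths in angstroms.
          s = order * wavelength_a / (2.0 * d_spacing_a)
          if not 0.0 < s <= 1.0:
              raise ValueError("no diffraction for this order and spacing")
          return math.degrees(math.asin(s))

      print(bragg_angle_deg(1.54, 3.14))  # Cu K-alpha on Si(111): ~14.2 degrees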

  11. Gamma ray camera

    Science.gov (United States)

    Perez-Mendez, Victor

    1997-01-01

    A gamma ray camera for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible-light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer and the n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array.

  12. Designing Camera Networks by Convex Quadratic Programming

    KAUST Repository

    Ghanem, Bernard

    2015-05-04

    In this paper, we study the problem of automatic camera placement for computer graphics and computer vision applications. We extend the problem formulations of previous work by proposing a novel way to incorporate visibility constraints and camera-to-camera relationships. For example, the placement solution can be encouraged to have cameras that image the same important locations from different viewing directions, which can enable reconstruction and surveillance tasks to perform better. We show that the general camera placement problem can be formulated mathematically as a convex binary quadratic program (BQP) under linear constraints. Moreover, we propose an optimization strategy with a favorable trade-off between speed and solution quality. Our solution is almost as fast as a greedy treatment of the problem, but the quality is significantly higher, so much so that it is comparable to exact solutions that take orders of magnitude more computation time. Because it is computationally attractive, our method also allows users to explore the space of solutions for variations in input parameters. To evaluate its effectiveness, we show a range of 3D results on real-world floorplans (garage, hotel, mall, and airport).
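
    A greedy treatment of the kind the authors benchmark against could be sketched as follows; the visibility matrix V and pairwise reward Q are hypothetical inputs, and the paper instead solves the placement jointly as a convex BQP:

      import numpy as np

      def greedy_placement(V, Q, k):
          # V[i, j] = 1 if candidate camera i sees location j (0/1 matrix);
          # Q[a, b] rewards pairs of cameras viewing the same important
          # locations from different directions. Both inputs are hypothetical.
          chosen = []
          for _ in range(k):
              best, best_score = None, -np.inf
              for i in range(V.shape[0]):
                  if i in chosen:
                      continue
                  trial = chosen + [i]
                  coverage = np.any(V[trial], axis=0).sum()
                  pair_bonus = sum(Q[a, b] for a in trial for b in trial if a < b)
                  if coverage + pair_bonus > best_score:
                      best, best_score = i, coverage + pair_bonus
              chosen.append(best)
          return chosen

      # The paper optimizes all k choices jointly as a convex BQP, which this
      # greedy pass only approximates.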

  13. Integrating Scene Parallelism in Camera Auto-Calibration

    Institute of Scientific and Technical Information of China (English)

    LIU Yong (刘勇); WU ChengKe (吴成柯); Hung-Tat Tsui

    2003-01-01

    This paper presents an approach for camera auto-calibration from uncalibrated video sequences taken by a hand-held camera. The novelty of this approach lies in transforming line parallelism into constraints on the absolute quadric during camera auto-calibration. This makes some critical cases solvable and the reconstruction more nearly Euclidean. The approach is implemented and validated using simulated data and real image data. The experimental results show the effectiveness of the approach.

  14. Movement-based interaction in camera spaces: a conceptual framework

    DEFF Research Database (Denmark)

    Eriksson, Eva; Hansen, Thomas Riisgaard; Lykke-Olesen, Andreas

    2007-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking, and we show how these three concepts can be used to analyse other camera tracking applications.

  15. Online camera-gyroscope autocalibration for cell phones.

    Science.gov (United States)

    Jia, Chao; Evans, Brian L

    2014-12-01

    The gyroscope is playing a key role in helping estimate 3D camera rotation for various vision applications on cell phones, including video stabilization and feature tracking. Successful fusion of gyroscope and camera data requires the camera, the gyroscope, and their relative pose to be calibrated. In addition, the timestamps of gyroscope readings and video frames are usually not well synchronized. Previous work performed camera-gyroscope calibration and synchronization offline, after the entire video sequence had been captured, with restrictions on the camera motion, which is unnecessarily restrictive for everyday users running apps that directly use the gyroscope. In this paper, we propose an online method that estimates all the necessary parameters while a user is capturing video. Our contributions are: 1) simultaneous online camera self-calibration and camera-gyroscope calibration based on an implicit extended Kalman filter and 2) generalization of the multiple-view coplanarity constraint on camera rotation in a rolling shutter camera model for cell phones. The proposed method is able to estimate the needed calibration and synchronization parameters online with all kinds of camera motion and can be embedded in gyro-aided applications, such as video stabilization and feature tracking. Both Monte Carlo simulation and cell phone experiments show that the proposed online calibration and synchronization method converges fast to the ground truth values.

  16. Adaptive compressive sensing camera

    Science.gov (United States)

    Hsu, Charles; Hsu, Ming K.; Cha, Jae; Iwamura, Tomo; Landa, Joseph; Nguyen, Charles; Szu, Harold

    2013-05-01

    We have embedded an Adaptive Compressive Sensing (ACS) algorithm on a Charge-Coupled-Device (CCD) camera, based on the simple premise that each pixel is a charge bucket whose charge comes from photoelectric conversion. Following manufacturing design principles, we allow each working component to be altered by at most one minimal step. We then simulated what such a camera could do for real-world persistent surveillance, taking into account diurnal, all-weather, and seasonal variations. Data storage is reduced immensely, and the order of magnitude of the saving is inversely proportional to target angular speed. We designed two new CCD camera components. Owing to mature CMOS (complementary metal-oxide-semiconductor) technology, the on-chip sample-and-hold (SAH) circuitry can be designed as dual photon detector (PD) analog circuitry for change detection that predicts skipping or going forward at a sufficient sampling frame rate. For an admitted frame, a purely random sparse matrix [Φ] is implemented at the bucket pixel level: the charge-transport bias voltage steers each bucket's charge toward its neighborhood buckets or, if not selected, to the ground drainage. Since the snapshot image is not a video, we could not apply the usual MPEG video compression and Huffman entropy codec, nor a powerful WaveNet wrapper at the sensor level. We shall compare (i) pre-processing by FFT, thresholding of significant Fourier mode components, and inverse FFT to check PSNR; and (ii) post-processing image recovery done selectively by a CDT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii), the SAH circuitry's frame selection must determine the degree of information (d.o.i.) K(t), which dictates the purely random linear sparse combination of measurement data à la [Φ]M,N with M(t) = K(t) log N(t).
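
    The closing relation M(t) = K(t) log N(t) is the usual compressive-sensing measurement budget: a K-sparse scene of N pixels needs on the order of K·log N random projections. A self-contained sketch of that pipeline, with a random sparse [Φ] and a basic ISTA solver standing in for the L1 recovery step (illustrative only, not the authors' circuitry):

      import numpy as np

      rng = np.random.default_rng(0)

      # Measurement budget M ~ K * log(N) for a K-sparse scene of N pixels.
      N, K = 1024, 20
      M = int(K * np.log(N))

      # Purely random sparse measurement matrix [Phi] (most entries zero).
      Phi = rng.choice([0.0, 1.0, -1.0], size=(M, N), p=[0.9, 0.05, 0.05])

      x = np.zeros(N)                                          # K-sparse scene
      x[rng.choice(N, K, replace=False)] = rng.normal(size=K)
      y = Phi @ x                                              # compressed readout

      # Basic ISTA loop standing in for the L1-minimization recovery step.
      step = 1.0 / np.linalg.norm(Phi, 2) ** 2
      lam = 0.05 * np.max(np.abs(Phi.T @ y))
      x_hat = np.zeros(N)
      for _ in range(300):
          z = x_hat + step * Phi.T @ (y - Phi @ x_hat)
          x_hat = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

      # Relative reconstruction error; exact recovery needs tuning/debiasing.
      print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))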

  17. PAU camera: detectors characterization

    Science.gov (United States)

    Casas, Ricard; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; Jiménez, Jorge; Maiorino, Marino; Pío, Cristóbal; Sevilla, Ignacio; de Vicente, Juan

    2012-07-01

    The PAU Camera (PAUCam) [1,2] is a wide-field camera that will be mounted at the corrected prime focus of the William Herschel Telescope (Observatorio del Roque de los Muchachos, Canary Islands, Spain) in the coming months. The focal plane of PAUCam is composed of a mosaic of 18 CCD detectors, each of 2,048 x 4,176 pixels with a pixel size of 15 microns, manufactured by Hamamatsu Photonics K.K. This mosaic covers a field of view (FoV) of 60 arcmin (minutes of arc), 40 of which are unvignetted. The behaviour of these 18 devices, plus four spares, and their electronic response should be characterized and optimized for use in PAUCam. This job is being carried out in the laboratories of the ICE/IFAE and the CIEMAT. The electronic optimization of the CCD detectors is carried out by means of an OG (Output Gate) scan, maximizing the CTE (Charge Transfer Efficiency) while minimizing the read-out noise. The device characterization itself is obtained with different tests: the photon transfer curve (PTC), which yields the electronic gain, the linearity versus light stimulus, the full-well capacity and the cosmetic defects; plus the read-out noise, the dark current, the stability versus temperature and the light remanence.
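
    Gain extraction from a PTC is conventionally the inverse slope of variance versus mean signal in the shot-noise-limited regime; a minimal sketch, assuming mean/variance pairs measured from flat-field frames (the data layout is hypothetical):

      import numpy as np

      def ptc_gain(means, variances):
          # Shot-noise-limited regime: variance = mean / gain, so the gain in
          # e-/ADU is the inverse slope of variance versus mean signal.
          slope, _ = np.polyfit(means, variances, 1)
          return 1.0 / slope

      # means/variances would come from differenced pairs of flat-field frames
      # at increasing exposure levels (differencing removes fixed-pattern noise).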

  18. Camera Calibration with Radial Variance Component Estimation

    Science.gov (United States)

    Mélykuti, B.; Kruck, E. J.

    2014-11-01

    Camera calibration plays an increasingly important role today. Besides true digital aerial survey cameras, the photogrammetric market is dominated by a large number of non-metric digital cameras mounted on UAVs or other low-weight flying platforms. The in-flight calibration of those systems plays a significant role in considerably enhancing the geometric accuracy of survey photos. Photo measurements are expected to be more precise in the center of images than along the edges or in the corners. Using statistical methods, the accuracy of photo measurements as a function of the distance of points from the image center has been analyzed. This test provides a curve for the measurement precision as a function of the photo radius. A large number of camera types have been tested with well-distributed point measurements in image space. The results of the tests lead to a general conclusion showing a functional connection between accuracy and radial distance, and give a method to check and enhance the geometric capability of the cameras with respect to these results.

  19. Phase camera experiment for Advanced Virgo

    Science.gov (United States)

    Agatsuma, Kazuhiro; van Beuzekom, Martin; van der Schaaf, Laura; van den Brand, Jo

    2016-07-01

    We report on a study of the phase camera, which is a frequency-selective wave-front sensor for a laser beam. This sensor is utilized for monitoring sidebands produced by phase modulations in a gravitational wave (GW) detector. In the operation of GW detectors, the laser modulation/demodulation method is used to measure mirror displacements and for position control. This plays a significant role because the quality of the controls affects the noise level of the GW detector. The phase camera is able to monitor each sideband separately, which is of great benefit for the manipulation of these delicate controls. Also, overcoming mirror aberrations will be an essential part of Advanced Virgo (AdV), which is a GW detector close to Pisa. Low-frequency sidebands especially can be affected greatly by aberrations in one of the interferometer cavities. The phase cameras allow tracking such changes because the state of the sidebands gives information on mirror aberrations. A prototype of the phase camera has been developed and is currently being tested. The performance checks are almost completed and the installation of the optics at the AdV site has started. After the installation and commissioning, the phase camera will be combined with a thermal compensation system that consists of CO2 lasers and compensation plates. In this paper, we focus on the prototype and show some limitations arising from the scanner performance.

  20. Comment on ‘From the pinhole camera to the shape of a lens: the camera-obscura reloaded’

    Science.gov (United States)

    Grusche, Sascha

    2016-09-01

    In the article ‘From the pinhole camera to the shape of a lens: the camera-obscura reloaded’ (Phys. Educ. 50 706), the authors show that a prism array, or an equivalent lens, can be used to bring together multiple camera obscura images from a pinhole array. It should be pointed out that the size of the camera obscura images is conserved by a prism array, but changed by a lens. To avoid this discrepancy in image size, the prism array, or the lens, should be made to touch the pinhole array.

  1. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  2. Novel gamma cameras

    International Nuclear Information System (INIS)

    The gamma-ray cameras described are based on radiation imaging devices which permit the direct recording of the distribution of radioactive material from a radiative source, such as a human organ. They consist in principle of a collimator, a converter matrix converting gamma photons to electrons, and an electron image multiplier producing a multiplied electron output, and means for reading out the information. The electron image multiplier is a device which produces a multiplied electron image. It can be in principle, either gas avalanche electron multiplier or a multi-channel plate. The multi-channel plate employed is a novel device, described elsewhere. The three described embodiments, in which the converter matrix can be either of metal type or of scintillation crystal type, were designed and are being developed

  3. The Circular Camera Movement

    DEFF Research Database (Denmark)

    Hansen, Lennard Højbjerg

    2014-01-01

    It has been an accepted precept in film theory that specific stylistic features do not express specific content. Nevertheless, it is possible to find many examples in the history of film in which stylistic features do express specific content: for instance, the circular camera movement is used repeatedly to convey the feeling of a man and a woman falling in love. This raises the question of why producers and directors choose certain stylistic features to narrate certain categories of content. Through the analysis of several short film and TV clips, this article explores whether or not there are perceptual aspects related to specific stylistic features that enable them to be used for delimited narrational purposes. The article further attempts to reopen this particular stylistic debate by exploring the embodied aspects of visual perception in relation to specific stylistic features...

  4. Calibration of a Stereo Radiation Detection Camera Using Planar Homography

    Directory of Open Access Journals (Sweden)

    Seung-Hae Baek

    2016-01-01

    Full Text Available This paper proposes a calibration technique for a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision society. However, little or no stereo calibration has been investigated in radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, it is impossible to use a general stereo calibration algorithm directly. In this paper, we develop a hybrid-type stereo system which is equipped with both radiation and vision cameras. To calibrate the stereo radiation cameras, stereo images of a calibration pattern captured by the vision cameras are transformed into the view of the radiation cameras. The homography transformation is calibrated based on the geometric relationship between the visual and radiation camera coordinates. The accuracy of the stereo parameters of the radiation camera is analyzed by distance measurements to both visual light and gamma sources. The experimental results show that the measurement error is about 3%.
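
    A homography between two image planes can be estimated from point correspondences with a standard direct linear transform; a minimal sketch (the correspondence data are assumed, and the paper's calibration may add refinement steps):

      import numpy as np

      def homography_dlt(src, dst):
          # Direct linear transform: 3x3 H mapping src -> dst from N >= 4
          # point correspondences (N x 2 arrays); assumes H[2, 2] != 0.
          rows = []
          for (x, y), (u, v) in zip(src, dst):
              rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
              rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
          _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
          h = vt[-1]
          return (h / h[-1]).reshape(3, 3)

      # Pattern corners seen by the vision camera can then be mapped into the
      # radiation camera's image plane via H.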

  5. Camera traps can be heard and seen by animals.

    Science.gov (United States)

    Meek, Paul D; Ballard, Guy-Anthony; Fleming, Peter J S; Schaefer, Michael; Williams, Warwick; Falzon, Greg

    2014-01-01

    Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps; in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing ranges (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals' hearing and produce illumination that can be seen by many species.

  6. Radiation camera motion correction system

    Science.gov (United States)

    Hoffer, P.B.

    1973-12-18

    The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)

  7. LISS-4 camera for Resourcesat

    Science.gov (United States)

    Paul, Sandip; Dave, Himanshu; Dewan, Chirag; Kumar, Pradeep; Sansowa, Satwinder Singh; Dave, Amit; Sharma, B. N.; Verma, Anurag

    2006-12-01

    The Indian Remote Sensing Satellites use indigenously developed high-resolution cameras for generating data related to vegetation, landform/geomorphic and geological boundaries. Data from this camera are used for working out maps at 1:12,500 scale for national-level policy development in town planning, vegetation, etc. The LISS-4 camera was launched onboard the Resourcesat-1 satellite by ISRO in 2003. LISS-4 is a high-resolution multi-spectral camera with three spectral bands, a resolution of 5.8 m and a swath of 23 km from 817 km altitude. The panchromatic mode provides a swath of 70 km and 5-day revisit. This paper briefly discusses the configuration of the LISS-4 camera of Resourcesat-1, its onboard performance and also the changes in the camera being developed for Resourcesat-2. The LISS-4 camera images the earth in push-broom mode. It is designed around a three-mirror unobscured telescope, three linear 12K CCDs and associated electronics for each band. The three spectral bands are realized by splitting the focal plane in the along-track direction using an isosceles prism. High-speed camera electronics are designed for each detector with 12-bit digitization and digital double sampling of the video. Seven-bit data selected from the 10 MSBs by telecommand are transmitted. The total dynamic range of the sensor covers up to 100% albedo. The camera structure has heritage from IRS-1C/D. The optical elements are precisely glued to specially designed flexure mounts. The camera is assembled onto a rotating deck on the spacecraft to facilitate +/- 26° steering in the pitch-yaw plane. The camera is held on the spacecraft in a stowed condition before deployment. The excellent imagery from the LISS-4 camera onboard Resourcesat-1 is routinely used worldwide. A second such camera with similar performance is being developed for the Resourcesat-2 launch in 2007. The camera electronics have been optimized and miniaturized: the size and weight are reduced to one third and the power to half of the values in Resourcesat

  8. Camera sensitivity study

    Science.gov (United States)

    Schlueter, Jonathan; Murphey, Yi L.; Miller, John W. V.; Shridhar, Malayappan; Luo, Yun; Khairallah, Farid

    2004-12-01

    As the cost/performance ratio of vision systems improves with time, new classes of applications become feasible. One such area, automotive applications, is currently being investigated. Applications include occupant detection, collision avoidance and lane tracking. Interest in occupant detection has been spurred by federal automotive safety rules in response to injuries and fatalities caused by deployment of occupant-side air bags. In principle, a vision system could control airbag deployment to prevent this type of mishap. Employing vision technology here, however, presents a variety of challenges, which include controlling costs, the inability to control illumination, developing and training a reliable classification system, and loss of performance due to production variations arising from manufacturing tolerances and customer options. This paper describes the measures that have been developed to evaluate the sensitivity of an occupant detection system to these types of variations. Two procedures are described for evaluating how sensitive the classifier is to camera variations. The first procedure is based on classification accuracy while the second evaluates feature differences.

  9. Gamma camera system

    International Nuclear Information System (INIS)

    The invention provides a composite solid state detector for use in deriving a display, by spatial coordinate information, of the distribution of radiation emanating from a source within a region of interest, comprising several solid state detector components, each having a given surface arranged for exposure to impinging radiation and exhibiting discrete interactions therewith at given spatially definable locations. The surface of each component and the surface disposed opposite and substantially parallel thereto are associated with impedance means configured to provide for each opposed surface outputs for signals relating the given location of the interactions with one spatial coordinate parameter of one select directional sense. The detector components are arranged to provide groupings of adjacently disposed surfaces mutually linearly oriented to exhibit a common directional sense of the spatial coordinate parameter. Means interconnect at least two of the outputs associated with each of the surfaces within a given grouping for collecting the signals deriving therefrom. The invention also provides a camera system for imaging the distribution of a source of gamma radiation situated within a region of interest

  10. Proportional counter radiation camera

    Science.gov (United States)

    Borkowski, C.J.; Kopp, M.K.

    1974-01-15

    A gas-filled proportional counter camera that images photon-emitting sources is described. A two-dimensional, position-sensitive proportional multiwire counter is provided as the detector. The counter consists of a high-voltage anode screen sandwiched between orthogonally disposed planar arrays of multiple parallel-strung, resistively coupled cathode wires. Two terminals from each of the cathode arrays are connected to separate timing circuitry to obtain separate X and Y coordinate signal values from pulse shape measurements to define the position of an event within the counter arrays, which may be recorded by various means for data display. The counter is further provided with a linear drift field which effectively enlarges the active gas volume of the counter and constrains the recoil electrons produced from ionizing radiation entering the counter to drift perpendicularly toward the planar detection arrays. A collimator is interposed between a subject to be imaged and the counter to transmit only the radiation from the subject which has a perpendicular trajectory with respect to the planar cathode arrays of the detector. (Official Gazette)

  11. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    Science.gov (United States)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve fitting approach. Both model-based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, and thus finding stereo correspondences is enhanced
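
    The Kalman-like update of a depth hypothesis reduces to inverse-variance weighting of two estimates; a minimal sketch under that assumption (the numbers are illustrative):

      def fuse_depth(d1, var1, d2, var2):
          # Inverse-variance (Kalman-like) fusion of two depth hypotheses
          # for the same pixel: returns the fused depth and its variance.
          w = var2 / (var1 + var2)
          return w * d1 + (1.0 - w) * d2, (var1 * var2) / (var1 + var2)

      # Each extra micro-image observation shrinks the variance; pixels whose
      # variance stays high can be rejected as unreliable.
      print(fuse_depth(10.0, 4.0, 11.0, 1.0))  # (10.8, 0.8)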

  12. Vision Sensors and Cameras

    Science.gov (United States)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access and global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer Mega-pixel resolution, and many have very high sensitivities equivalent to ASA 12,800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to 1, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies both because of their size as well as their higher speed.

  13. Assessing the Photogrammetric Potential of Cameras in Portable Devices

    Science.gov (United States)

    Smith, M. J.; Kokkas, N.

    2012-07-01

    In recent years, an increasing number of portable devices, tablets and Smartphones have employed high-resolution digital cameras to satisfy consumer demand. In most cases, these cameras are designed primarily for capturing visually pleasing images, and the potential of using Smartphone and tablet cameras for metric applications remains uncertain. The compact nature of the host devices leads to very small cameras and therefore smaller geometric characteristics. This also makes them extremely portable, and their integration into a multi-function device as part of the basic unit cost often makes them readily available. Many application specialists may find them an attractive proposition where some modest photogrammetric capability would be useful. This paper investigates the geometric potential of these cameras for close range photogrammetric applications by: • investigating their geometric characteristics using the self-calibration method of camera calibration, comparing results with those from a state-of-the-art digital SLR camera; • investigating their capability for 3D building modelling, again comparing with results obtained from a digital SLR camera. The early results presented show that the iPhone has greater potential for photogrammetric use than the iPad.

  14. An Inexpensive Digital Infrared Camera

    Science.gov (United States)

    Mills, Allan

    2012-01-01

    Details are given for the conversion of an inexpensive webcam to a camera specifically sensitive to the near infrared (700-1000 nm). Some experiments and practical applications are suggested and illustrated. (Contains 9 figures.)

  15. The future of consumer cameras

    Science.gov (United States)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

    In the last two decades multimedia devices, and in particular imaging devices (camcorders, tablets, mobile phones, etc.), have diffused dramatically. Moreover, their increasing computational performance, combined with higher storage capability, allows them to process large amounts of data. In this paper an overview of the current trends of the consumer camera market and technology will be given, providing also some details about the recent past (from the digital still camera up to today) and forthcoming key issues.

  16. Fundus camera systems: a comparative analysis

    OpenAIRE

    DeHoog, Edward; Schwiegerling, James

    2009-01-01

    Retinal photography requires the use of a complex optical system, called a fundus camera, capable of illuminating and imaging the retina simultaneously. The patent literature shows two design forms but does not provide the specifics necessary for a thorough analysis of the designs to be performed. We have constructed our own designs based on the patent literature in optical design software and compared them for illumination efficiency, image quality, ability to accommodate for patient refract...

  17. Determining Vision Graphs for Distributed Camera Networks Using Feature Digests

    Directory of Open Access Journals (Sweden)

    Richard J. Radke

    2007-01-01

    Full Text Available We propose a decentralized method for obtaining the vision graph for a distributed, ad-hoc camera network, in which each edge of the graph represents two cameras that image a sufficiently large part of the same environment. Each camera encodes a spatially well-distributed set of distinctive, approximately viewpoint-invariant feature points into a fixed-length “feature digest” that is broadcast throughout the network. Each receiver camera robustly matches its own features with the decompressed digest and decides whether sufficient evidence exists to form a vision graph edge. We also show how a camera calibration algorithm that passes messages only along vision graph edges can recover accurate 3D structure and camera positions in a distributed manner. We analyze the performance of different message formation schemes, and show that high detection rates (>0.8 can be achieved while maintaining low false alarm rates (<0.05 using a simulated 60-node outdoor camera network.

  18. A universal method for camera calibration in UITS scenes

    Institute of Scientific and Technical Information of China (English)

    Zhaoxue Chen; Pengfei Shi

    2005-01-01

    A universal approach to camera calibration based on features of some representative lines on the traffic ground is presented. It uses only a set of three parallel edges with known intervals and one of their intersecting lines with known slope to obtain the focal length and orientation parameters of a camera. A set of equations that computes the related camera parameters has been derived from geometric properties of the calibration pattern. With an accurate analytical implementation, the precision of the approach is decided only by the accuracy of the calibration target selection. Final experimental results have shown its validity on snapshots from real automatic visual traffic surveillance (AVTS) scenes.

  19. Declarative camera control for automatic cinematography

    Energy Technology Data Exchange (ETDEWEB)

    Christianson, D.B.; Anderson, S.E.; Li-wei He [Univ. of Washington, Seattle, WA (United States)] [and others]

    1996-12-31

    Animations generated by interactive 3D computer graphics applications are typically portrayed either from a particular character's point of view or from a small set of strategically-placed viewpoints. By ignoring camera placement, such applications fail to realize important storytelling capabilities that have been explored by cinematographers for many years. In this paper, we describe several of the principles of cinematography and show how they can be formalized into a declarative language, called the Declarative Camera Control Language (DCCL). We describe the application of DCCL within the context of a simple interactive video game and argue that DCCL represents cinematic knowledge at the same level of abstraction as expert directors by encoding 16 idioms from a film textbook. These idioms produce compelling animations, as demonstrated on the accompanying videotape.

  20. Camera Augmented Mobile C-arm

    Science.gov (United States)

    Wang, Lejing; Weidert, Simon; Traub, Joerg; Heining, Sandro Michael; Riquarts, Christian; Euler, Ekkehard; Navab, Nassir

    The Camera Augmented Mobile C-arm (CamC) system, which extends a regular mobile C-arm with a video camera, provides an X-ray and video image overlay. Thanks to the mirror construction and a one-time calibration of the device, the acquired X-ray images are co-registered with the video images without any calibration or registration during the intervention. It is very important to quantify and qualify the system before its introduction into the OR. In this communication, we extended the previously performed overlay accuracy analysis of the CamC system by another clinically important parameter, the applied radiation dose for the patient. Since the mirror of the CamC system absorbs and scatters radiation, we introduce a method for estimating the correct applied dose by using an independent dose measurement device. The results show that the mirror absorbs and scatters 39% of the X-ray radiation.

  1. How long is enough to detect terrestrial animals? Estimating the minimum trapping effort on camera traps

    Directory of Open Access Journals (Sweden)

    Xingfeng Si

    2014-05-01

    Full Text Available Camera traps are an important wildlife inventory tool for estimating species diversity at a site. Knowing what minimum trapping effort is needed to detect target species is also important for designing efficient studies, considering both the number of camera locations and survey length. Here, we take advantage of a two-year camera trapping dataset from a small (24-ha) study plot in Gutianshan National Nature Reserve, eastern China, to estimate the minimum trapping effort actually needed to sample the wildlife community. We also evaluated the relative value of adding new camera sites versus running cameras for a longer period at one site. The full dataset includes 1727 independent photographs captured during 13,824 camera days, documenting 10 resident terrestrial species of birds and mammals. Our rarefaction analysis shows that a minimum of 931 camera days would be needed to detect the resident species sufficiently in the plot, and c. 8700 camera days to detect all 10 resident species. In terms of detecting a diversity of species, the optimal sampling period for one camera site was c. 40 days, or long enough to record about 20 independent photographs. Our analysis evaluating increasing numbers of additional camera sites shows that rotating cameras to new sites would be more efficient for measuring species richness than leaving cameras at fewer sites for a longer period.
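
    A rarefaction curve of the kind described can be approximated by resampling the order of camera days and counting accumulated species; a rough stand-in sketch (the record format is hypothetical, not the study's actual analysis code):

      import numpy as np

      rng = np.random.default_rng(1)

      def rarefaction_curve(records, n_resamples=200):
          # records: list of (camera_day, species) detections (assumed format).
          # Shuffle the order of camera days and count accumulated unique
          # species, averaging over resamples for expected richness vs. effort.
          days = sorted({d for d, _ in records})
          curves = np.zeros((n_resamples, len(days)))
          for r in range(n_resamples):
              seen = set()
              for i, day in enumerate(rng.permutation(days)):
                  seen.update(s for d, s in records if d == day)
                  curves[r, i] = len(seen)
          return curves.mean(axis=0)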

  2. How long is enough to detect terrestrial animals? Estimating the minimum trapping effort on camera traps.

    Science.gov (United States)

    Si, Xingfeng; Kays, Roland; Ding, Ping

    2014-01-01

    Camera traps are an important wildlife inventory tool for estimating species diversity at a site. Knowing what minimum trapping effort is needed to detect target species is also important for designing efficient studies, considering both the number of camera locations and survey length. Here, we take advantage of a two-year camera trapping dataset from a small (24-ha) study plot in Gutianshan National Nature Reserve, eastern China, to estimate the minimum trapping effort actually needed to sample the wildlife community. We also evaluated the relative value of adding new camera sites versus running cameras for a longer period at one site. The full dataset includes 1727 independent photographs captured during 13,824 camera days, documenting 10 resident terrestrial species of birds and mammals. Our rarefaction analysis shows that a minimum of 931 camera days would be needed to detect the resident species sufficiently in the plot, and c. 8700 camera days to detect all 10 resident species. In terms of detecting a diversity of species, the optimal sampling period for one camera site was c. 40 days, or long enough to record about 20 independent photographs. Our analysis evaluating increasing numbers of additional camera sites shows that rotating cameras to new sites would be more efficient for measuring species richness than leaving cameras at fewer sites for a longer period.

  3. A miniature VGA SWIR camera using MT6415CA ROIC

    Science.gov (United States)

    Eminoglu, Selim; Yilmaz, S. Gokhan; Kocak, Serhat

    2014-06-01

    This paper reports the development of a new miniature VGA SWIR camera called NanoCAM-6415, which is developed to demonstrate the key features of the MT6415CA ROIC, such as its high integration level, low noise, and low power in a small volume. The NanoCAM-6415 uses an InGaAs Focal Plane Array (FPA) with a format of 640 × 512 and a pixel pitch of 15 μm built using the MT6415CA ROIC. MT6415CA is a low-noise CTIA ROIC with a system-on-chip architecture that allows generation of all the required timing and biases on-chip without requiring any external components or inputs, thus enabling the development of compact and low-noise SWIR cameras with reduced size, weight, and power (SWaP). The NanoCAM-6415 camera supports snapshot operation using Integrate-Then-Read (ITR) and Integrate-While-Read (IWR) modes. The camera has three gain settings enabled by the ROIC through programmable Full-Well-Capacity (FWC) values of 10,000 e-, 20,000 e-, and 350,000 e- in the very-high-gain (VHG), high-gain (HG), and low-gain (LG) modes, respectively. The camera has an input-referred noise level of 10 e- rms in the VHG mode at 1 ms integration time, suitable for low-noise SWIR imaging applications. In order to reduce the size and power of the camera, only 2 of the ROIC's 8 outputs are connected to the external Analog-to-Digital Converters (ADCs) in the camera electronics, providing a maximum frame rate of 50 fps through a 26-pin SDR-type Camera Link connector. The NanoCAM-6415 SWIR camera without optics measures 32 mm × 32 mm × 35 mm, weighs 45 g, and dissipates less than 1.8 W using a 5 V supply. These results show that the MT6415CA ROIC can successfully be used to develop cameras for SWIR imaging applications where SWaP is a concern. Mikro-Tasarim has also developed new imaging software to demonstrate the functionality of this miniature VGA camera. Mikro-Tasarim provides tested ROIC wafers and also offers compact and easy-to-use test electronics, demo cameras, and hardware

  4. The Clementine longwave infrared camera

    Energy Technology Data Exchange (ETDEWEB)

    Priest, R.E.; Lewis, I.T.; Sewall, N.R.; Park, H.S.; Shannon, M.J.; Ledebuhr, A.G.; Pleasance, L.D. [Lawrence Livermore National Lab., CA (United States); Massie, M.A. [Pacific Advanced Technology, Solvang, CA (United States); Metschuleit, K. [Amber/A Raytheon Co., Goleta, CA (United States)

    1995-04-01

    The Clementine mission provided the first ever complete, systematic surface mapping of the moon from the ultra-violet to the near-infrared regions. More than 1.7 million images of the moon, earth and space were returned from this mission. The longwave-infrared (LWIR) camera supplemented the UV/Visible and near-infrared mapping cameras, providing limited strip coverage of the moon and giving insight into the thermal properties of the soils. This camera provided ~100 m spatial resolution at 400 km periselene, and a 7 km across-track swath. This 2.1 kg camera, using a 128 x 128 Mercury-Cadmium-Telluride (MCT) FPA, viewed thermal emission of the lunar surface and lunar horizon in the 8.0 to 9.5 μm wavelength region. A description of this light-weight, low-power LWIR camera along with a summary of lessons learned is presented. Design goals and preliminary on-orbit performance estimates are addressed in terms of meeting the mission's primary objective of flight qualifying the sensors for future Department of Defense flights.

  5. Motorcycle detection and counting using stereo camera, IR camera, and microphone array

    Science.gov (United States)

    Ling, Bo; Gibson, David R. P.; Middleton, Dan

    2013-03-01

    Detection, classification, and characterization are the key to enhancing motorcycle safety, motorcycle operations and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of auto fatalities. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including a stereo camera, a thermal IR camera and a unidirectional microphone array. The thermal IR camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes, which often show as bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist, who can be easily windowed out in the stereo disparity map. If the motorcyclist is detected through 3D body recognition, the motorcycle is detected. Microphones are used to detect motorcycles, which often produce low-frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize interference from background noise sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has excellent performance.
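
    As an illustration of the thermal-signature cue described above, here is a minimal sketch of bright, elongated blob detection in a normalised IR frame (thresholds are hypothetical, and this stands in for only one cue of the multi-sensor system):

        import numpy as np
        from scipy import ndimage

        def hot_blobs(ir_frame, thresh=0.85, min_area=40, min_elong=2.0):
            """Return slices of bright, elongated regions - the kind of
            signature exhaust pipes produce in IR imagery.
            ir_frame: 2-D float array scaled to [0, 1]."""
            labels, _ = ndimage.label(ir_frame > thresh)
            blobs = []
            for s in ndimage.find_objects(labels):
                h, w = s[0].stop - s[0].start, s[1].stop - s[1].start
                area = (labels[s] > 0).sum()
                elong = max(h, w) / max(1, min(h, w))
                if area >= min_area and elong >= min_elong:
                    blobs.append(s)
            return blobs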

  6. Global Calibration of Multiple Cameras Based on Sphere Targets

    Science.gov (United States)

    Sun, Junhua; He, Huabin; Zeng, Debing

    2016-01-01

    Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method is simple to operate and flexible, especially for onsite multiple cameras without a common field of view. PMID:26761007
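
    The core step of such a global calibration, once each camera has reconstructed its group of sphere centers, is a least-squares rigid alignment of those centers with the auxiliary camera's reconstruction. A generic Kabsch/SVD sketch of that alignment (background, not the authors' exact procedure):

        import numpy as np

        def rigid_transform(src, dst):
            """Rotation R and translation t minimising ||dst - (R src + t)||.
            src, dst: (N, 3) matched 3-D points (here, sphere centers seen
            by the camera under calibration and by the auxiliary camera);
            N >= 3, not collinear."""
            c_src, c_dst = src.mean(0), dst.mean(0)
            H = (src - c_src).T @ (dst - c_dst)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            return R, c_dst - R @ c_src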

  7. Prism-based single-camera system for stereo display

    Science.gov (United States)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principles of binocular vision we deduce the relationship between binoculars and a dual-camera system. Thus we can establish the relationship between the prism single-camera system and binoculars and obtain the positional relation of prism, camera, and object that gives the best stereo display. Finally, using NVIDIA active shutter stereo glasses, we realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various observation manners of the eyes. The stereo imaging system designed by the proposed method can faithfully restore the 3-D shape of the photographed object.

  8. Global Calibration of Multiple Cameras Based on Sphere Targets

    Directory of Open Access Journals (Sweden)

    Junhua Sun

    2016-01-01

    Full Text Available Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method is simple to operate and flexible, especially for onsite multiple cameras without a common field of view.

  9. Dark Energy Camera for Blanco

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  10. The GISMO-2 Bolometer Camera

    Science.gov (United States)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.; Moseley, Samuel H.; Sharp, Elemer H.; Wollack, Edward J.

    2012-01-01

    We present the concept for the GISMO-2 bolometer camera, which we are building for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 x 40 TES-based Backshort Under Grid (BUG) bolometer array; the 2 mm channel operates with a 16 x 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is operating successfully at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  11. BLAST Autonomous Daytime Star Cameras

    CERN Document Server

    Rex, M; Devlin, M J; Gundersen, J; Klein, J; Pascale, E; Wiebe, D; Rex, Marie; Chapin, Edward; Devlin, Mark J.; Gundersen, Joshua; Klein, Jeff; Pascale, Enzo; Wiebe, Donald

    2006-01-01

    We have developed two redundant daytime star cameras to provide the fine pointing solution for the balloon-borne submillimeter telescope, BLAST. The cameras are capable of providing a reconstructed pointing solution with an absolute accuracy < 5 arcseconds. They are sensitive to stars down to magnitudes ~ 9 in daytime float conditions. Each camera combines a 1 megapixel CCD with a 200 mm f/2 lens to image a 2 degree x 2.5 degree field of the sky. The instruments are autonomous. An internal computer controls the temperature, adjusts the focus, and determines a real-time pointing solution at 1 Hz. The mechanical details and flight performance of these instruments are presented.

  12. EDICAM (Event Detection Intelligent Camera)

    International Nuclear Information System (INIS)

    Highlights: ► We present EDICAM's hardware modules. ► We present EDICAM's main design concepts. ► This paper describes EDICAM's firmware architecture. ► Operation principles are described. ► Further developments are outlined. -- Abstract: A new type of fast framing camera has been developed for fusion applications by the Wigner Research Centre for Physics during the last few years. A new concept was designed for intelligent event-driven imaging, which is capable of focusing image readout on Regions of Interest (ROIs) where and when predefined events occur. At present these events mean intensity changes and external triggers, but in the future more sophisticated methods might also be defined. The camera provides a 444 Hz frame rate at the full resolution of 1280 × 1024 pixels, but monitoring of smaller ROIs can be done in the 1–116 kHz range even during exposure of the full image. Keeping space limitations and the harsh environment in mind, the camera is divided into a small Sensor Module and a processing card interconnected by a fast 10 Gbit optical link. This camera hardware has been used for passive monitoring of the plasma in different devices, for example at ASDEX Upgrade and COMPASS, with the first version of its firmware. The new firmware and software package is now available and ready for testing the new event processing features. This paper presents the operation principle and features of the Event Detection Intelligent Camera (EDICAM). The device is intended to be the central element in the 10-camera monitoring system of the Wendelstein 7-X stellarator

  13. The use of a portable gamma camera for preoperative lymphatic mapping: a comparison with a conventional gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Vidal-Sicart, Sergi; Paredes, Pilar [Hospital Clinic Barcelona, Nuclear Medicine Department (CDIC), Barcelona (Spain); Institut d' Investigacio Biomedica Agusti Pi Sunyer (IDIBAPS), Barcelona (Spain); Vermeeren, Lenka; Valdes-Olmos, Renato A. [Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital (NKI-AVL), Nuclear Medicine Department, Amsterdam (Netherlands); Sola, Oriol [Hospital Clinic Barcelona, Nuclear Medicine Department (CDIC), Barcelona (Spain)

    2011-04-15

    Planar lymphoscintigraphy is routinely used for preoperative sentinel node visualization, but large gamma cameras are not always available. We evaluated the reproducibility of lymphatic mapping with a smaller, portable gamma camera. In two centres, 52 patients with breast cancer underwent preoperative lymphoscintigraphy with a conventional gamma camera with a field of view of 40 x 40 cm. Static anterior and lateral images were acquired at 15 min, 2 h and 4 h after injection of the radiotracer (99mTc-nanocolloid). At 2 h after injection, anterior and oblique images were also acquired with a portable gamma camera (Sentinella, Oncovision) positioned to obtain a field of view of 20 x 20 cm. Visualization of lymphatic drainage on conventional images and images from the portable device was compared for the number of nodes depicted, their intensity and the localization of sentinel nodes. The images acquired with the conventional gamma camera depicted sentinel nodes in 94% of cases, while the portable gamma camera showed drainage in 73%. There was, however, no significant difference in visualization between the two devices when a lead shield was used to mask the injection area in 43 patients (95 vs 88%, p = 0.25). Second-echelon nodes were visualized in 62% of the patients with the conventional gamma camera and in 29% of the cases with the portable gamma camera. Preoperative imaging with a portable gamma camera fitted with a pinhole collimator to obtain a field of view of 20 x 20 cm is able to depict sentinel nodes in 88% of cases, if a lead shield is used to mask the injection site. This device may be useful in centres without the possibility of performing preoperative imaging. (orig.)

  14. Epipolar rectification method for a stereovision system with telecentric cameras

    Science.gov (United States)

    Liu, Haibo; Zhu, Zhaokun; Yao, Linshen; Dong, Jin; Chen, Shengyi; Zhang, Xiaohu; Shang, Yang

    2016-08-01

    3D metrology with a stereovision system requires epipolar rectification to be performed before dense stereo matching. In this study, we propose an epipolar rectification method for a stereovision system with two telecentric lens-based cameras. Given the orthographic projection matrices of each camera, the new projection matrices are computed by determining the new camera coordinate system in affine space and imposing constraints on the intrinsic parameters. Then, the transformation that maps the old image planes onto the new image planes is obtained. Experiments are performed to validate the performance of the proposed rectification method. The test results show that the perpendicular distance and 3D reconstruction deviation obtained from the rectified images are not significantly higher than the corresponding values obtained from the original images. Considering the roughness of the extracted corner points and calibrated camera parameters, we conclude that the proposed method provides sufficiently accurate rectification results.

  15. Camera assisted multimodal user interaction

    Science.gov (United States)

    Hannuksela, Jari; Silvén, Olli; Ronkainen, Sami; Alenius, Sakari; Vehviläinen, Markku

    2010-01-01

    Since more processing power and new sensing and display technologies are now available in mobile devices, there has been increased interest in building systems that communicate via different modalities such as speech, gesture, expression, and touch. In context-identification-based user interfaces, these independent modalities are combined to create new ways for users to interact with hand-helds. While these are unlikely to completely replace traditional interfaces, they will considerably enrich and improve the user experience and task performance. We demonstrate a set of novel user interface concepts that rely on the built-in sensors of modern mobile devices for recognizing the context and sequences of actions. In particular, we use the camera to detect whether the user is watching the device, for instance, to make the decision to turn on the display backlight. In our approach the motion sensors are first employed for detecting the handling of the device. Then, based on ambient illumination information provided by a light sensor, the cameras are turned on. The frontal camera is used for face detection, while the back camera provides supplemental contextual information. The subsequent applications triggered by the context can be, for example, image capturing or bar code reading.
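
    A toy sketch of the sensor cascade just described (names and thresholds are hypothetical; the paper's actual decision logic is richer):

        from enum import Enum, auto

        class Ctx(Enum):
            IDLE = auto()
            HANDLED = auto()
            WATCHING = auto()

        def update_context(state, moving, ambient_lux, face_visible, dark_lux=5.0):
            """Motion sensors gate the light sensor, which gates the frontal
            camera's face detector; a detected face switches the backlight on."""
            if state is Ctx.IDLE and moving:
                state = Ctx.HANDLED            # device picked up
            if state is Ctx.HANDLED and ambient_lux > dark_lux and face_visible:
                state = Ctx.WATCHING           # user is looking at the device
            return state, state is Ctx.WATCHING  # (new state, backlight on?)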

  16. Modeling and simulation of gamma camera

    International Nuclear Information System (INIS)

    Simulation techniques play a vital role in the design of sophisticated instruments and also in the training of operating and maintenance staff. Gamma camera systems have been used for functional imaging in nuclear medicine. Functional images are derived from the external counting of a gamma-emitting radioactive tracer that, after introduction into the body, mimics the behavior of a native biochemical compound. The position-sensitive detector yields the coordinates of the gamma ray interaction with the detector, which are used to estimate the point of gamma ray emission within the tracer distribution space. This advanced imaging device is thus dependent on the performance of algorithms for coordinate computation, estimation of the point of emission, generation of the image and display of the image data. Contemporary systems also have protocols for quality control and clinical evaluation of imaging studies. Simulation of this processing leads to understanding of the basic camera design problems. This report describes a PC-based package for design and simulation of a gamma camera, along with options for simulating data acquisition and quality control of imaging studies. Image display and data processing, the other options implemented in SIMCAM, will be described in separate reports (under preparation). Gamma camera modeling and simulation in SIMCAM has preset configurations of the design parameters for various sizes of crystal detector, with the option to pack the PMTs on a hexagonal or square lattice. Different algorithms for computation of coordinates and spatial distortion removal are allowed, in addition to simulation of the energy correction circuit. The user can simulate different static, dynamic, MUGA and SPECT studies. The acquired/simulated data are processed for quality control and clinical evaluation of the imaging studies. Results show that the program can be used to assess these performances. Also the variations in performance parameters can be assessed due to the induced
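
    The coordinate computation referred to above is classically the Anger centre-of-gravity logic; a minimal sketch (illustrative, not the SIMCAM implementation):

        import numpy as np

        def anger_centroid(pmt_xy, signals):
            """Event position as the signal-weighted mean of PMT positions.
            pmt_xy:  (N, 2) PMT centre coordinates on the crystal.
            signals: (N,) PMT amplitudes for one scintillation event."""
            w = np.asarray(signals, dtype=float)
            return (np.asarray(pmt_xy) * w[:, None]).sum(0) / w.sum()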

  17. Stereo Calibration and Rectification for Omnidirectional Multi-camera Systems

    Directory of Open Access Journals (Sweden)

    Yanchang Wang

    2012-10-01

    Full Text Available Stereo vision has been studied for decades as a fundamental problem in the field of computer vision. In recent years, computer vision and image processing with a large field of view, especially using omnidirectional vision and panoramic images, has been receiving increasing attention. An important problem for stereo vision is calibration. Although various kinds of calibration methods for omnidirectional cameras have been proposed, most of them are limited to calibrating catadioptric cameras or fish-eye cameras and cannot be applied directly to multi-camera systems. In this work, we propose an easy calibration method with closed-form initialization and iterative optimization for omnidirectional multi-camera systems. The method only requires image pairs of the 2D target plane in a few different views. A method based on the spherical camera model is also proposed for rectifying omnidirectional stereo pairs. Using real data captured by Ladybug3, we carry out some experiments, including stereo calibration, rectification and 3D reconstruction. Statistical analyses and comparisons of the experimental results are also presented. As the experimental results show, the calibration results are precise and the effect of rectification is promising.

  18. Lytro camera technology: theory, algorithms, performance analysis

    Science.gov (United States)

    Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio

    2013-03-01

    The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, considering the Lytro camera as a black box, and uses our interpretation of the Lytro image data saved by the camera. We present our findings based on our interpretation of the Lytro camera file structure, image calibration and image rendering; in this context, artifacts and final image resolution are discussed.

  19. Architectural Design Document for Camera Models

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Architecture of camera simulator models and data interface for the Maneuvering of Inspection/Servicing Vehicle (MIV) study.

  20. Electronographic cameras for space astronomy.

    Science.gov (United States)

    Carruthers, G. R.; Opal, C. B.

    1972-01-01

    Magnetically-focused electronographic cameras have been under development at the Naval Research Laboratory for use in far-ultraviolet imagery and spectrography, primarily in astronomical and optical-geophysical observations from sounding rockets and space vehicles. Most of this work has been with cameras incorporating internal optics of the Schmidt or wide-field all-reflecting types. More recently, we have begun development of electronographic spectrographs incorporating an internal concave grating, operating at normal or grazing incidence. We are also developing electronographic image tubes of the conventional end-window photocathode type, for far-ultraviolet imagery at the focus of a large space telescope, with image formats up to 120 mm in diameter.

  1. The Dark Energy Survey Camera

    Science.gov (United States)

    Flaugher, Brenna

    2012-03-01

    The Dark Energy Survey Collaboration has built the Dark Energy Camera (DECam), a 3 square degree, 520 Megapixel CCD camera which is being mounted on the Blanco 4-meter telescope at CTIO. DECam will be used to carry out the 5000 sq. deg. Dark Energy Survey, using 30% of the telescope time over a 5 year period. During the remainder of the time, and after the survey, DECam will be available as a community instrument. Construction of DECam is complete. The final components were shipped to Chile in Dec. 2011 and post-shipping checkout is in progress in Dec-Jan. Installation and commissioning on the telescope are taking place in 2012. A summary of lessons learned and an update of the performance of DECam and the status of the DECam installation and commissioning will be presented.

  2. An optical metasurface planar camera

    CERN Document Server

    Arbabi, Amir; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-01-01

    Optical metasurfaces are 2D arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optical design by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked on top of each other and are integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here, we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has an f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with large transmission. The camera exhibits high image quality, which indicates the potential of this technology to produce a paradigm shift in future designs of imaging systems for microscopy, photography, ...

  3. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    Just like art historians have focused on e.g. composition or lighting, this dissertation takes a single stylistic parameter as its object of study: camera movement. Within film studies this localized avenue of middle-level research has become increasingly viable under the aegis of a perspective known as 'the poetics of cinema.' The dissertation embraces two branches of research within this perspective: stylistics and historical poetics (stylistic history). The dissertation takes on three questions in relation to camera movement and is accordingly divided into three major sections. The first … cinematic poetics and interpretive criticism sensitive to style may gain from each other. There is no reason why stylistically informed interpretive criticism cannot be considered within a functional framework, and there is no reason why one should not use a functional taxonomy as a basis on which to launch

  4. Combustion pinhole-camera system

    Science.gov (United States)

    Witte, A.B.

    1982-05-19

    A pinhole camera system is described utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external variable-density light filter which is coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly, which uses a purging inert gas to keep debris from entering the port and a lens arrangement that transfers the pinhole to the outside of the port assembly. An additional feature of the port assembly is that it is not flush with the interior of the combustor.

  5. ISO camera array development status

    Science.gov (United States)

    Sibille, F.; Cesarsky, C.; Agnese, P.; Rouan, D.

    1989-01-01

    A short outline is given of the Infrared Space Observatory Camera (ISOCAM), one of the 4 instruments onboard the Infrared Space Observatory (ISO), with the current status of its two 32x32 arrays, an InSb charge injection device (CID) and a Si:Ga direct read-out (DRO), and the results of the in orbit radiation simulation with gamma ray sources. A tentative technique for the evaluation of the flat fielding accuracy is also proposed.

  6. Graphic design of pinhole cameras

    Science.gov (United States)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.
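
    For context, the trade-off such an analysis optimizes (diffraction blur versus geometric blur) is commonly summarized by the rule of thumb d = c * sqrt(lambda * f); a sketch using this general heuristic, which is background rather than the paper's graphic method:

        import math

        def pinhole_diameter_m(focal_m, wavelength_m=550e-9, c=1.9):
            # d = c * sqrt(lambda * f); c ~ 1.56-2 depending on the chosen
            # sharpness criterion (Rayleigh's classic value is about 1.9).
            return c * math.sqrt(wavelength_m * focal_m)

        # f = 100 mm in green light -> roughly 0.45 mm:
        print(round(pinhole_diameter_m(0.1) * 1e3, 2), "mm")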

  7. 21 CFR 886.1120 - Opthalmic camera.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Opthalmic camera. 886.1120 Section 886.1120 Food... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Opthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding...

  8. 21 CFR 892.1110 - Positron camera.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Positron camera. 892.1110 Section 892.1110 Food... DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A positron camera is a device intended to image the distribution of positron-emitting radionuclides in the...

  9. 16 CFR 501.1 - Camera film.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Camera film. 501.1 Section 501.1 Commercial Practices FEDERAL TRADE COMMISSION RULES, REGULATIONS, STATEMENT OF GENERAL POLICY OR INTERPRETATION AND... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the...

  10. Solid-state array cameras.

    Science.gov (United States)

    Strull, G; List, W F; Irwin, E L; Farnsworth, D L

    1972-05-01

    Over the past few years there has been growing interest in the rapidly maturing technology of totally solid-state imaging. This paper presents a synopsis of developments made in this field at the Westinghouse ATL facilities, with emphasis on row-column organized monolithic arrays of diffused-junction phototransistors. The complete processing sequence applicable to the fabrication of modern high-density arrays is described, from wafer ingot preparation to final sensor testing. Special steps found necessary for high-yield processing, such as surface etching prior to both sawing and lapping, are discussed along with the rationale behind their adoption. Camera systems built around matrix array photosensors are presented in a historical progression, beginning with the first 50 x 50 element converter developed in 1965 and running through the most recent 400 x 500 element system delivered in 1972. The freedom of mechanical architecture made available to system designers by solid-state array cameras is illustrated by the description of a bare-chip-packaged cubic-inch camera. Hybrid scan systems employing one-dimensional line arrays are cited, and the basic tradeoffs to their use are listed. PMID:20119094

  11. Unassisted 3D camera calibration

    Science.gov (United States)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drops. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted calibration (on arbitrary scenes). In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
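
    As a sketch of the evaluation metric, the remaining vertical disparity can be summarized directly from matched keypoints (a minimal illustration assuming matching has already been done; not the paper's algorithm):

        import numpy as np

        def vertical_disparity_stats(kp_left, kp_right):
            """kp_left, kp_right: (N, 2) pixel coordinates of matched
            keypoints in the left and right frames."""
            dy = kp_right[:, 1] - kp_left[:, 1]
            return np.median(dy), np.percentile(np.abs(dy), 95)

        # A frame pair might be flagged when the 95th percentile of |dy|
        # exceeds the maximum tolerable vertical disparity margin.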

  12. Photogrammetric Accuracy and Modeling of Rolling Shutter Cameras

    Science.gov (United States)

    Vautherin, Jonas; Rutishauser, Simon; Schneider-Zapp, Klaus; Choi, Hon Fai; Chovancova, Venera; Glass, Alexis; Strecha, Christoph

    2016-06-01

    Unmanned aerial vehicles (UAVs) are becoming increasingly popular in professional mapping for stockpile analysis, construction site monitoring, and many other applications. Due to their robustness and competitive pricing, consumer UAVs are used more and more for these applications, but they are usually equipped with rolling shutter cameras. This is a significant obstacle when it comes to extracting high accuracy measurements using available photogrammetry software packages. In this paper, we evaluate the impact of the rolling shutter cameras of typical consumer UAVs on the accuracy of a 3D reconstruction. To this end, we use a beta version of the Pix4Dmapper 2.1 software to compare traditional (non-rolling-shutter) camera models against a newly implemented rolling shutter model with respect to both the accuracy of geo-referenced validation points and the quality of the motion estimation. Multiple datasets have been acquired using popular quadrocopters (DJI Phantom 2 Vision+, DJI Inspire 1 and 3DR Solo) following a grid flight plan. For comparison, we acquired a dataset using a professional mapping drone (senseFly eBee) equipped with a global shutter camera. The bundle block adjustment of each dataset shows a significant accuracy improvement on validation ground control points when applying the new rolling shutter camera model for flights at higher speed (8 m/s). Competitive accuracies can be obtained by using the rolling shutter model, although global shutter cameras are still superior. Furthermore, we are able to show that the speed of the drone (and its direction) can be estimated solely from the rolling shutter effect of the camera.

  13. Scalable IC Platform for Smart Cameras

    Directory of Open Access Journals (Sweden)

    Harry Broers

    2005-08-01

    Full Text Available Smart cameras are among the emerging new fields of electronics. The points of interest are in the application areas, software and IC development. In order to reduce cost, it is worthwhile to invest in a single architecture that can be scaled for the various application areas in performance (and resulting power consumption). In this paper, we show that the combination of an SIMD (single-instruction multiple-data) processor and a general-purpose DSP is very advantageous for the image processing tasks encountered in smart cameras. While the SIMD processor gives the very high performance necessary by exploiting the inherent data parallelism found in the pixel-crunching part of the algorithms, the DSP offers a friendly approach to the more complex tasks. The paper continues to motivate that SIMD processors have very convenient scaling properties in silicon, making the complete SIMD-DSP architecture suitable for different application areas without changing the software suite. Analysis of the changes in power consumption due to scaling shows that for typical image processing tasks, it is beneficial to scale the SIMD processor to use the maximum level of parallelism available in the algorithm if the IC supply voltage can be lowered. If silicon cost is of importance, the parallelism of the processor should be scaled to just reach the desired performance given the speed of the silicon.
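
    The voltage-scaling argument follows from the first-order dynamic power model P = C_eff * f * Vdd^2; a toy comparison with hypothetical numbers:

        def dynamic_power_w(c_eff_f, freq_hz, vdd_v):
            # First-order CMOS dynamic power: P = C_eff * f * Vdd^2.
            return c_eff_f * freq_hz * vdd_v ** 2

        # Same throughput, two design points: doubling SIMD width halves the
        # clock, and the slower clock permits a lower supply voltage.
        print(dynamic_power_w(1e-9, 200e6, 1.2))  # narrow SIMD: 0.288 W
        print(dynamic_power_w(2e-9, 100e6, 0.9))  # wide SIMD:   0.162 W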

  14. HHEBBES! All sky camera system: status update

    Science.gov (United States)

    Bettonvil, F.

    2015-01-01

    A status update is given of the HHEBBES! all-sky camera system. HHEBBES!, an automatic camera for capturing bright meteor trails, is based on a DSLR camera and a liquid crystal chopper for measuring the angular velocity. The purposes of the system are to a) recover meteorites and b) identify origin/parental bodies. In 2015, two new cameras were rolled out: BINGO! (like HHEBBES!, also in The Netherlands) and POgLED, in Serbia. BINGO! is the first camera equipped with a longer-focal-length fisheye lens, to further increase the accuracy. Several minor improvements have been made, and the data reduction pipeline was used for processing two prominent Dutch fireballs.

  15. Mini gamma camera, camera system and method of use

    Science.gov (United States)

    Majewski, Stanislaw; Weisenberger, Andrew G.; Wojcik, Randolph F.

    2001-01-01

    A gamma camera comprising essentially, and in order from the front outer or gamma ray impinging surface: 1) a collimator, 2) a scintillator layer, 3) a light guide, 4) an array of position-sensitive, high-resolution photomultiplier tubes, and 5) printed circuitry for receipt of the output of the photomultipliers. There is also described a system wherein the output supplied by the high-resolution, position-sensitive photomultiplier tubes is communicated to: a) a digitizer and b) a computer where it is processed using advanced image processing techniques and a specific algorithm to calculate the center of gravity of any abnormality observed during imaging, and c) optional image display and telecommunications ports.

  16. Simultaneous Camera Path Optimization and Distraction Removal for Improving Amateur Video.

    Science.gov (United States)

    Zhang, Fang-Lue; Wang, Jue; Zhao, Han; Martin, Ralph R; Hu, Shi-Min

    2015-12-01

    A major difference between amateur and professional video lies in the quality of camera paths. Previous work on video stabilization has considered how to improve amateur video by smoothing the camera path. In this paper, we show that additional changes to the camera path can further improve video aesthetics. Our new optimization method achieves multiple simultaneous goals: 1) stabilizing video content over short time scales; 2) ensuring simple and consistent camera paths over longer time scales; and 3) improving scene composition by automatically removing distractions, a common occurrence in amateur video. Our approach uses an L1 camera path optimization framework, extended to handle multiple constraints. Two passes of optimization are used to address both low-level and high-level constraints on the camera path. The experimental and user study results show that our approach outputs video that is perceptually better than the input or the results of using stabilization only.

  17. Phase and amplitude wave front sensing and reconstruction with a modified plenoptic camera

    Science.gov (United States)

    Wu, Chensheng; Ko, Jonathan; Nelson, William; Davis, Christopher C.

    2014-10-01

    A plenoptic camera is a camera that can retrieve the direction and intensity distribution of light rays collected by the camera, allowing for multiple reconstruction functions such as refocusing at a different depth and 3D microscopy. Its principle is to add a micro-lens array to a traditional high-resolution camera to form a semi-camera array that preserves redundant intensity distributions of the light field and facilitates back-tracing of rays through geometric knowledge of its optical components. Though designed to process incoherent images, the plenoptic camera shows high potential in coherent illumination cases, such as sensing both the amplitude and phase information of a distorted laser beam. Based on our earlier introduction of a prototype modified plenoptic camera, we have developed the complete algorithm to reconstruct the wavefront of the incident light field. In this paper the algorithm and experimental results are demonstrated, and an improved version of this modified plenoptic camera is discussed. As a result, our modified plenoptic camera can serve as an advanced wavefront sensor compared with traditional Shack-Hartmann sensors in handling complicated cases such as coherent illumination in strong turbulence, where interference and discontinuity of wavefronts are common. Especially in wave propagation through atmospheric turbulence, this camera should provide a much more precise description of the light field, which would guide systems in adaptive optics to make intelligent analyses and corrections.

  18. Can camera traps monitor Komodo dragons, a large ectothermic predator?

    Directory of Open Access Journals (Sweden)

    Achmad Ariefiandy

    Full Text Available Camera trapping has greatly enhanced population monitoring of often cryptic and low-abundance apex carnivores. The effectiveness of passive infrared camera trapping, and ultimately population monitoring, relies on temperature-mediated differences between the animal and its ambient environment to ensure good camera detection. In ectothermic predators such as large varanid lizards, this criterion is presumed less certain. Here we evaluated the effectiveness of camera trapping to potentially monitor the population status of the Komodo dragon (Varanus komodoensis), an apex predator, using site occupancy approaches. We compared site-specific estimates of site occupancy and detection derived using camera traps and cage traps at 181 trapping locations established across six sites on four islands within Komodo National Park, Eastern Indonesia. Detection and site occupancy at each site were estimated using eight competing models that considered site-specific variation in occupancy (ψ) and varied detection probabilities (p) according to detection method, site and survey number, using a single-season site occupancy modelling approach. The most parsimonious model [ψ (site), p (site survey); ω = 0.74] suggested that site occupancy estimates differed among sites. Detection probability varied as an interaction between site and survey number. Our results indicate that overall camera traps produced similar estimates of detection and site occupancy to cage traps, irrespective of being paired, or unpaired, with cage traps. Whilst one site showed some evidence that detection was affected by trapping method, detection was too low to produce an accurate occupancy estimate. Overall, as camera trapping is logistically more feasible, it may provide, with further validation, an alternative method for evaluating long-term site occupancy patterns in Komodo dragons, and potentially other large reptiles, aiding conservation of this species.

  19. Can camera traps monitor Komodo dragons, a large ectothermic predator?

    Science.gov (United States)

    Ariefiandy, Achmad; Purwandana, Deni; Seno, Aganto; Ciofi, Claudio; Jessop, Tim S

    2013-01-01

    Camera trapping has greatly enhanced population monitoring of often cryptic and low-abundance apex carnivores. The effectiveness of passive infrared camera trapping, and ultimately population monitoring, relies on temperature-mediated differences between the animal and its ambient environment to ensure good camera detection. In ectothermic predators such as large varanid lizards, this criterion is presumed less certain. Here we evaluated the effectiveness of camera trapping to potentially monitor the population status of the Komodo dragon (Varanus komodoensis), an apex predator, using site occupancy approaches. We compared site-specific estimates of site occupancy and detection derived using camera traps and cage traps at 181 trapping locations established across six sites on four islands within Komodo National Park, Eastern Indonesia. Detection and site occupancy at each site were estimated using eight competing models that considered site-specific variation in occupancy (ψ) and varied detection probabilities (p) according to detection method, site and survey number, using a single-season site occupancy modelling approach. The most parsimonious model [ψ (site), p (site survey); ω = 0.74] suggested that site occupancy estimates differed among sites. Detection probability varied as an interaction between site and survey number. Our results indicate that overall camera traps produced similar estimates of detection and site occupancy to cage traps, irrespective of being paired, or unpaired, with cage traps. Whilst one site showed some evidence that detection was affected by trapping method, detection was too low to produce an accurate occupancy estimate. Overall, as camera trapping is logistically more feasible, it may provide, with further validation, an alternative method for evaluating long-term site occupancy patterns in Komodo dragons, and potentially other large reptiles, aiding conservation of this species.
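
    For readers unfamiliar with single-season occupancy models, a minimal likelihood sketch with constant psi and p (the study's models additionally let both vary by site, survey and detection method; this is background, not the authors' code):

        import math

        def occupancy_nll(params, histories):
            """Negative log-likelihood of the basic single-season occupancy
            model: psi = occupancy probability, p = per-survey detection
            probability. histories: one 0/1 detection history per site,
            e.g. [1, 0, 0, 1]."""
            psi, p = params
            nll = 0.0
            for h in histories:
                n, d = len(h), sum(h)
                lik = psi * (p ** d) * ((1 - p) ** (n - d))
                if d == 0:   # never detected: occupied-but-missed or empty
                    lik += 1 - psi
                nll -= math.log(lik)
            return nll

        # Could be minimised over (psi, p) in (0, 1), e.g. with
        # scipy.optimize.minimize.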

  20. Cryogenic mechanism for ISO camera

    Science.gov (United States)

    Luciano, G.

    1987-12-01

    The Infrared Space Observatory (ISO) camera configuration, architecture, materials, tribology, motorization, and development status are outlined. The operating temperature is 2 to 3 K, over the 2.5 to 18 micron band. The selected material is a titanium alloy, with MoS2/TiC lubrication. A stepping motor drives the ball-bearing-mounted wheels to which the optical elements are fixed. Model test results are satisfactory, and also confirm the validity of the test facilities, particularly for vibration tests at 4 K.

  1. Video clustering using camera motion

    OpenAIRE

    Tort Alsina, Laura

    2012-01-01

    How camera movement in a video clip can be useful for classifying it in semantic terms. [ENGLISH] This document contains the work done at INP Grenoble during the second semester of the academic year 2011-2012, completed in Barcelona during the first months of 2012-2013. The work presented consists of a camera motion study in different types of video in order to group fragments that have some similarity in content. The document explains how the data extr...

  2. Far-infrared cameras for automotive safety

    Science.gov (United States)

    Lonnoy, Jacques; Le Guilloux, Yann; Moreira, Raphael

    2005-02-01

    Far-infrared cameras, used initially for driving military vehicles, are slowly coming into the area of commercial (luxury) cars, as FIR imagery provides useful assistance for driving at night or in adverse conditions (fog, smoke, ...). However, this imagery demands some effort from the driver, as image understanding is less natural than with visible or near-IR imagery. A developing field for FIR cameras is ADAS (Advanced Driver Assistance Systems), where FIR processed imagery, fused with other sensor data (radar, ...), provides a driver warning when dangerous situations occur. This communication concentrates on FIR processed imagery for detecting objects or obstacles on or near the road. FIR imagery highlighting hot spots is a powerful detection tool, as it provides good contrast on some of the most common elements of the road scene (engines, wheels, gas exhaust pipes, pedestrians, two-wheelers, animals, ...). Moreover, FIR algorithms are much more robust than visible-light ones, as there is less variability in image contrast over time (day/night, shadows, ...). We based our detection algorithm on the one hand on the distinctive appearance of vehicles and pedestrians in FIR images, and on the other hand on the analysis of motion over time, which allows anticipation of future motion. We will show results obtained with FIR processed imagery within the PAROTO project, supported by the French Ministry of Research, which ended in spring 2004.

  3. Accuracy Assessment of GO Pro Hero 3 (black) Camera in Underwater Environment

    Science.gov (United States)

    Helmholz, P.; Long, J.; Munsie, T.; Belton, D.

    2016-06-01

    Modern digital cameras are increasing in quality whilst decreasing in size. In the last decade, a number of waterproof consumer digital cameras (action cameras) have become available, which often cost less than 500. A possible application of such action cameras is in the field of underwater photogrammetry, especially with respect to the fact that the change of medium to water can counteract the distortions present. The goal of this paper is to investigate the suitability of such action cameras for underwater photogrammetric applications, focusing on the stability of the camera and the accuracy of the derived coordinates. For this paper, a series of image sequences was captured in a water tank. A calibration frame was placed in the tank, allowing the calibration of the camera and the validation of the measurements using check points. The accuracy assessment covered three test sets operating three GoPro sports cameras of the same model (Hero 3 black). The tests included handling the camera in a controlled manner, where the camera was only dunked into the water tank, using 7 MP and 12 MP resolution, and rough handling, where the camera was shaken as well as removed from its waterproof case, using 12 MP resolution. The tests showed that camera stability was given, with a maximum standard deviation of the camera constant σc of 0.0031 mm for 7 MP (for an average c of 2.720 mm) and 0.0072 mm for 12 MP (for an average c of 3.642 mm). The residual test of the check points gave, for the 7 MP test series, a largest rms value of only 0.450 mm and a largest maximal residual of only 2.5 mm. For the 12 MP test series, the maximum rms value is 0.653 mm.

  4. Television Quiz Show Simulation

    Science.gov (United States)

    Hill, Jonnie Lynn

    2007-01-01

    This article explores the simulation of four television quiz shows for students in China studying English as a foreign language (EFL). It discusses the adaptation and implementation of television quiz shows and how the students reacted to them.

  5. Three-Dimensional Object Motion and Velocity Estimation Using a Single Computational RGB-D Camera

    Directory of Open Access Journals (Sweden)

    Seungwon Lee

    2015-01-01

    Full Text Available In this paper, a three-dimensional (3D) object moving direction and velocity estimation method is presented using a dual off-axis color-filtered aperture (DCA)-based computational camera. Conventional object tracking methods provide only the two-dimensional (2D) state of an object in the image for the target representation. The proposed method estimates depth information in the object region from a single DCA camera that transforms 2D spatial information into 3D model parameters of the object. We also present a calibration method for the DCA camera to estimate the entire set of camera parameters for a practical implementation. Experimental results show that the proposed DCA-based color and depth (RGB-D) camera can calculate the 3D moving direction and velocity of a randomly moving object in a single-camera framework.

  6. Camera Mouse Including “Ctrl-Alt-Del” Key Operation Using Gaze, Blink, and Mouth Shape

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2013-04-01

    Full Text Available This paper presents a camera mouse system with an additional feature: a "CTRL - ALT - DEL" key. Previous gaze-based camera mouse systems considered only how to obtain gaze and make selections. We propose a gaze-based camera mouse with a "CTRL - ALT - DEL" key. An infrared camera is placed on top of the display while the user looks ahead. User gaze is estimated based on eye gaze and head pose. Blink and mouth detection are used to create the "CTRL - ALT - DEL" key. Pupil knowledge is used to improve the robustness of eye gaze estimation across different users. Also, a Gabor filter is used to extract face features. Skin color information and face features are used to estimate head pose. Experiments on each method have been performed, and the results show that all methods work perfectly. With this system, troubleshooting of the camera mouse can be done by the user, making the camera mouse more sophisticated.

  7. The Dark Energy Camera (DECam)

    CERN Document Server

    Honscheid, K; Abbott, T; Annis, J; Antonik, M; Barcel, M; Bernstein, R; Bigelow, B; Brooks, D; Buckley-Geer, E; Campa, J; Cardiel, L; Castander, F; Castilla, J; Cease, H; Chappa, S; Dede, E; Derylo, G; Diehl, T; Doel, P; De Vicente, J; Eiting, J; Estrada, J; Finley, D; Flaugher, B; Gaztañaga, E; Gerdes, D; Gladders, M; Guarino, V; Gutíerrez, G; Hamilton, J; Haney, M; Holland, S; Huffman, D; Karliner, I; Kau, D; Kent, S; Kozlovsky, M; Kubik, D; Kühn, K; Kuhlmann, S; Kuk, K; Leger, F; Lin, H; Martínez, G; Martínez, M; Merritt, W; Mohr, J; Moore, P; Moore, T; Nord, B; Ogando, R; Olsen, J; Onal, B; Peoples, J; Qian, T; Roe, N; Sánchez, E; Scarpine, V; Schmidt, R; Schmitt, R; Schubnell, M; Schultz, K; Selen, M; Shaw, T; Simaitis, V; Slaughter, J; Smith, C; Spinka, H; Stefanik, A; Stuermer, W; Talaga, R; Tarle, G; Thaler, J; Tucker, D; Walker, A; Worswick, S; Zhao, A

    2008-01-01

    In this paper we describe the Dark Energy Camera (DECam), which will be the primary instrument used in the Dark Energy Survey. DECam will be a 3 sq. deg. mosaic camera mounted at the prime focus of the Blanco 4m telescope at the Cerro-Tololo International Observatory (CTIO). It consists of a large mosaic CCD focal plane, a five element optical corrector, five filters (g,r,i,z,Y), a modern data acquisition and control system and the associated infrastructure for operation in the prime focus cage. The focal plane includes 62 2K x 4K CCD modules (0.27"/pixel) arranged in a hexagon inscribed within the roughly 2.2 degree diameter field of view and 12 smaller 2K x 2K CCDs for guiding, focus and alignment. The CCDs will be 250 micron thick fully-depleted CCDs that have been developed at the Lawrence Berkeley National Laboratory (LBNL). Production of the CCDs and fabrication of the optics, mechanical structure, mechanisms, and control system for DECam are underway; delivery of the instrument to CTIO is scheduled ...

  8. Decentralized tracking of humans using a camera network

    Science.gov (United States)

    Gruenwedel, Sebastian; Jelaca, Vedran; Niño-Castañeda, Jorge Oswaldo; Van Hese, Peter; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2012-01-01

    Real-time tracking of people has many applications in computer vision, for instance in surveillance, domotics, elderly care and video conferencing, and typically requires multiple cameras. However, this problem is very challenging because of the need to deal with frequent occlusions and environmental changes. Another challenge is to develop solutions which scale well with the size of the camera network. Such solutions need to carefully restrict overall communication in the network and often involve distributed processing. In this paper we present a distributed person tracker addressing the aforementioned issues. Real-time processing is achieved by distributing tasks between the cameras and a fusion node. The latter fuses only high-level data based on low-bandwidth input streams from the cameras. This is achieved by performing tracking first on the image plane of each camera, followed by sending only metadata to a local fusion node. We designed the proposed system for a low communication load and for robustness. We evaluate the performance of the tracker in meeting scenarios where persons are often occluded by other persons and/or furniture. We present experimental results which show that our tracking approach is accurate even in cases of severe occlusions in some of the views.

  9. The Conformal Camera in Modeling Active Binocular Vision

    Directory of Open Access Journals (Sweden)

    Jacek Turski

    2016-08-01

    Full Text Available Primate vision is an active process that constructs a stable internal representation of the 3D world based on 2D sensory inputs that are inherently unstable due to incessant eye movements. We present here a mathematical framework for processing visual information for a biologically-mediated active vision stereo system with asymmetric conformal cameras. This model utilizes the geometric analysis on the Riemann sphere developed in the group-theoretic framework of the conformal camera, thus far only applicable in modeling monocular vision. The asymmetric conformal camera model constructed here includes the fovea’s asymmetric displacement on the retina and the eye’s natural crystalline lens tilt and decentration, as observed in ophthalmological diagnostics. We extend the group-theoretic framework underlying the conformal camera to the stereo system with asymmetric conformal cameras. Our numerical simulation shows that the theoretical horopter curves in this stereo system are conics that well approximate the empirical longitudinal horopters of the primate vision system.

  10. Optimization of precision localization microscopy using CMOS camera technology

    Science.gov (United States)

    Fullerton, Stephanie; Bennett, Keith; Toda, Eiji; Takahashi, Teruo

    2012-02-01

    Light microscopy imaging is being transformed by the application of computational methods that permit the detection of spatial features below the optical diffraction limit. Successful localization microscopy (STORM, dSTORM, PALM, PhILM, etc.) relies on the precise position detection of fluorescence emitted by single molecules using highly sensitive cameras with rapid acquisition speeds. Electron multiplying CCD (EM-CCD) cameras are the current standard detector for these applications. Here, we challenge the notion that EM-CCD cameras are the best choice for precision localization microscopy and demonstrate, through simulated and experimental data, that certain CMOS detector technology achieves better localization precision of single molecule fluorophores. It is well established that localization precision is limited by system noise. Our findings show that the two overlooked noise sources relevant for precision localization microscopy are the shot noise of the background light in the sample and the excess noise from electron multiplication in EM-CCD cameras. Under low-light conditions, EM-CCD cameras are the preferred detector. However, in practical applications, optical background noise is significant, creating conditions where CMOS performs better than EM-CCD. Furthermore, the excess noise of EM-CCD is equivalent to reducing the information content of each photon detected, which, in localization microscopy, reduces the precision of the localization. Thus, new CMOS technology enables fast (100 fps), super-resolution precision localization microscopy.
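
    A simplified per-pixel SNR model makes the excess-noise argument concrete (a sketch with hypothetical numbers, not the paper's analysis):

        import math

        def snr(signal_e, background_e, read_noise_e, em_gain=1.0, excess=1.0):
            """Excess noise (~sqrt(2) for EM-CCDs) multiplies all shot noise,
            acting like halving the photon count; CMOS has excess = 1 but a
            larger effective read noise."""
            shot_var = (excess ** 2) * (signal_e + background_e)
            read_var = (read_noise_e / em_gain) ** 2
            return signal_e / math.sqrt(shot_var + read_var)

        # 500 signal e- over 200 background e-:
        print(snr(500, 200, read_noise_e=60, em_gain=300, excess=math.sqrt(2)))  # EM-CCD ~13.4
        print(snr(500, 200, read_noise_e=1.5))                                   # sCMOS  ~18.9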

  11. Detection of the optimal region of interest for camera oximetry.

    Science.gov (United States)

    Karlen, Walter; Ansermino, J Mark; Dumont, Guy A; Scheffer, Cornie

    2013-01-01

    The estimation of heart rate and blood oxygen saturation with an imaging array on a mobile phone (camera oximetry) has great potential for mobile health applications, as no hardware beyond a phone with a camera and LED flash is required. However, this approach is challenging because the configuration of the camera can negatively influence the estimation quality. Further, the number of photons recorded by the photo detector depends largely on the optical path length, resulting in a non-homogeneous image. In this paper we describe a novel method to automatically detect the optimal region of interest (ROI) in the captured image for extracting a pulse waveform. We also present a study to select the optimal camera settings, notably the white balance. The experiments show that the incandescent white balance mode is the preferable setting for camera oximetry applications on the tested mobile phone (Samsung Galaxy Ace). Also, the ROI algorithm successfully identifies the frame regions which provide waveforms with the largest amplitudes. PMID:24110175
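
    One plausible way to implement such ROI detection, sketched below under assumptions of mine (a fixed block grid, peak-to-peak amplitude as the quality metric), is to score each image block by the strength of the pulsatile component of its mean intensity:

        import numpy as np

        def best_roi(frames, grid=8):
            """frames: (T, H, W) array of a single colour channel from the camera.
            Scores each grid cell by the peak-to-peak amplitude of its mean-intensity
            time series and returns the highest-scoring cell."""
            T, H, W = frames.shape
            h, w = H // grid, W // grid
            best, best_score = None, -np.inf
            for i in range(grid):
                for j in range(grid):
                    ts = frames[:, i*h:(i+1)*h, j*w:(j+1)*w].mean(axis=(1, 2))
                    ts = ts - ts.mean()
                    score = ts.max() - ts.min()   # pulse waveform amplitude
                    if score > best_score:
                        best, best_score = (i, j), score
            return best, best_score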

  12. Camera motion estimation by tracking contour deformation: Precision analysis

    OpenAIRE

    Alenyà, Guillem; Torras, Carme

    2010-01-01

    An algorithm to estimate camera motion from the progressive deformation of a tracked contour in the acquired video stream has been previously proposed. It relies on the fact that two views of a plane are related by an affinity, whose 6 parameters can be used to derive the 6 degrees-of-freedom of camera motion between the two views. In this paper we evaluate the accuracy of the algorithm. Monte Carlo simulations show that translations parallel to the image plane and rotations about the optical...
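
    The affinity between two views of the tracked contour can be recovered by ordinary least squares once point correspondences along the contour are available; the snippet below is a generic sketch of that step, not the authors' contour tracker:

        import numpy as np

        def fit_affinity(p, q):
            """p, q: (N, 2) matched contour points in the two views, q ~ A @ p + t.
            Returns the 2x2 matrix A and translation t (the 6 affine parameters)."""
            X = np.hstack([p, np.ones((len(p), 1))])    # homogeneous coordinates
            M, *_ = np.linalg.lstsq(X, q, rcond=None)   # (3, 2) stacked [A^T; t^T]
            return M[:2].T, M[2]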

  13. Iterative reconstruction of detector response of an Anger gamma camera

    Science.gov (United States)

    Morozov, A.; Solovov, V.; Alves, F.; Domingos, V.; Martins, R.; Neves, F.; Chepel, V.

    2015-05-01

    Statistical event reconstruction techniques can give better results for gamma cameras than the traditional centroid method. However, implementation of such techniques requires detailed knowledge of the photomultiplier tube light-response functions. Here we describe an iterative method which allows one to obtain the response functions from flood irradiation data without imposing strict requirements on the spatial uniformity of the event distribution. A successful application of the method for medical gamma cameras is demonstrated using both simulated and experimental data. An implementation of the iterative reconstruction technique capable of operating in real time is presented. We show that this technique can also be used for monitoring photomultiplier gain variations.
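
    The alternation at the heart of such an iterative scheme can be sketched as follows. This toy version uses nearest-node binning and a least-squares position update, whereas the method described is statistical and handles Poisson statistics and smoothing more carefully:

        import numpy as np

        def iterate_response(signals, pmt_xy, grid, n_iter=10):
            """signals: (n_events, n_pmt) PMT amplitudes from flood irradiation,
            pmt_xy: (n_pmt, 2) PMT centres, grid: (G, 2) candidate positions."""
            # start from signal-weighted centroids (the traditional centroid estimate)
            pos = signals @ pmt_xy / signals.sum(axis=1, keepdims=True)
            for _ in range(n_iter):
                # response update: average signal of events assigned to each grid node
                idx = np.argmin(((pos[:, None] - grid[None])**2).sum(-1), axis=1)
                resp = np.array([signals[idx == g].mean(axis=0) if (idx == g).any()
                                 else np.zeros(signals.shape[1]) for g in range(len(grid))])
                # position update: grid node whose response best matches each event
                d2 = ((signals[:, None] - resp[None])**2).sum(-1)
                pos = grid[np.argmin(d2, axis=1)]
            return resp, pos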

  14. CCD camera full range pH sensor array.

    Science.gov (United States)

    Safavi, A; Maleki, N; Rostamzadeh, A; Maesum, S

    2007-01-15

    Changes in the colors of an array of optical sensors that responds over the full pH range were recorded using a CCD camera. The data from the camera were transferred to the computer through a capture card. Simple software was written to read the specific color of each sensor. In order to associate sensor array responses with pH values, a number of different mathematical and chemometric methods were investigated and compared. The results show that the use of "Microsoft Excel's Solver" provides results which are in very good agreement with those obtained with chemometric methods such as artificial neural network (ANN) and partial least square (PLS) methods. PMID:19071333
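
    The calibration step that the authors solved with Excel's Solver amounts to fitting a regression from recorded sensor colours to pH. A minimal linear least-squares stand-in (my simplification of their chemometric models) looks like this:

        import numpy as np

        def fit_ph_model(rgb, ph):
            """rgb: (N, 3) sensor colours from the CCD camera, ph: (N,) known pH values.
            Fits pH ~ w . [R, G, B, 1] by linear least squares."""
            X = np.hstack([rgb, np.ones((len(rgb), 1))])
            w, *_ = np.linalg.lstsq(X, ph, rcond=None)
            return w

        def predict_ph(w, rgb):
            return np.hstack([rgb, np.ones((len(rgb), 1))]) @ w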

  15. Robust Visual Control of Parallel Robots under Uncertain Camera Orientation

    Directory of Open Access Journals (Sweden)

    Miguel A. Trujano

    2012-10-01

    Full Text Available This work presents a stability analysis and experimental assessment of a visual control algorithm applied to a redundant planar parallel robot under uncertainty in relation to camera orientation. The key feature of the analysis is a strict Lyapunov function that allows the conclusion of asymptotic stability without invoking the Barbashin‐Krassovsky‐LaSalle invariance theorem. The controller does not rely on velocity measurements and has a structure similar to a classic Proportional Derivative control algorithm. Experiments in a laboratory prototype show that uncertainty in camera orientation does not significantly degrade closed‐loop performance.

  16. A new method to evaluate imaging quality of CCD cameras

    Institute of Scientific and Technical Information of China (English)

    LI Wen-juan; DU Hai-hui; DAI Jing-min; CHEN Ying-hang

    2005-01-01

    In order to evaluate the imaging quality of CCD cameras fully and rapidly, the minimum resolvable contrast (MRC) is presented in this paper and a system for measuring MRC is constructed, in which two integrating spheres illuminate the two sides of the target respectively. A variable contrast can be obtained by regulating the luminance of the integrating spheres. Experimental results indicate that the error in measuring luminance is within ±0.3 cd/m², and that MRC rises with increasing spatial frequency. The experimental results show that the proposed method is an effective approach to evaluating the imaging quality of CCD cameras.
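
    With the two integrating spheres set to luminances L1 and L2, the contrast presented by the target follows the usual modulation definition; the helper below is my formulation of that relation:

        def target_contrast(L1: float, L2: float) -> float:
            """Michelson contrast between the two sides of the target,
            illuminated by integrating spheres at luminances L1, L2 (cd/m^2)."""
            return abs(L1 - L2) / (L1 + L2)

        # Sweeping L2 toward L1 lowers the contrast until the target is no longer
        # resolvable at a given spatial frequency; that threshold is the MRC.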

  17. Distributed Sensing and Processing for Multi-Camera Networks

    Science.gov (United States)

    Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.

    Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.

  18. Camera calibration correction in shape from inconsistent silhouette

    Science.gov (United States)

    The use of shape from silhouette for reconstruction tasks is plagued by two types of real-world errors: camera calibration error and silhouette segmentation error. When either error is present, we call the problem the Shape from Inconsistent Silhouette (SfIS) problem. In this paper, we show how sm...

  19. 3D for the people: multi-camera motion capture in the field with consumer-grade cameras and open source software

    Directory of Open Access Journals (Sweden)

    Brandon E. Jackson

    2016-09-01

    Full Text Available Ecological, behavioral and biomechanical studies often need to quantify animal movement and behavior in three dimensions. In laboratory studies, a common tool to accomplish these measurements is the use of multiple, calibrated high-speed cameras. Until very recently, the complexity, weight and cost of such cameras have made their deployment in field situations risky; furthermore, such cameras are not affordable to many researchers. Here, we show how inexpensive, consumer-grade cameras can adequately accomplish these measurements both within the laboratory and in the field. Combined with our methods and open source software, the availability of inexpensive, portable and rugged cameras will open up new areas of biological study by providing precise 3D tracking and quantification of animal and human movement to researchers in a wide variety of field and laboratory contexts.

  20. 3D for the people: multi-camera motion capture in the field with consumer-grade cameras and open source software.

    Science.gov (United States)

    Jackson, Brandon E; Evangelista, Dennis J; Ray, Dylan D; Hedrick, Tyson L

    2016-09-15

    Ecological, behavioral and biomechanical studies often need to quantify animal movement and behavior in three dimensions. In laboratory studies, a common tool to accomplish these measurements is the use of multiple, calibrated high-speed cameras. Until very recently, the complexity, weight and cost of such cameras have made their deployment in field situations risky; furthermore, such cameras are not affordable to many researchers. Here, we show how inexpensive, consumer-grade cameras can adequately accomplish these measurements both within the laboratory and in the field. Combined with our methods and open source software, the availability of inexpensive, portable and rugged cameras will open up new areas of biological study by providing precise 3D tracking and quantification of animal and human movement to researchers in a wide variety of field and laboratory contexts.

  2. Action selection for single-camera SLAM

    OpenAIRE

    Vidal-Calleja, Teresa A.; Sanfeliu, Alberto; Andrade-Cetto, J

    2010-01-01

    A method for evaluating, at video rate, the quality of actions for a single camera while mapping unknown indoor environments is presented. The strategy maximizes mutual information between measurements and states to help the camera avoid making ill-conditioned measurements, which arise from the lack of depth information in monocular vision systems. Our system prompts a user with the appropriate motion commands during 6-DOF visual simultaneous localization and mapping with a handheld camera. Additionall...

  3. Development of biostereometric experiments. [stereometric camera system

    Science.gov (United States)

    Herron, R. E.

    1978-01-01

    The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.

  4. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    Science.gov (United States)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

    The authors have been concentrating on developing convenient 3D measurement methods using consumer grade digital cameras, and have concluded that consumer grade digital cameras can be expected to become useful photogrammetric devices in various close range application fields. Meanwhile, mobile phone cameras with 10 megapixels have appeared on the Japanese market. In these circumstances, the epoch-making question arises of whether mobile phone cameras can take the place of consumer grade digital cameras in close range photogrammetric applications. In order to evaluate the potential of mobile phone cameras in close range photogrammetry, a comparative evaluation between mobile phone cameras and consumer grade digital cameras is carried out in this paper with respect to lens distortion, reliability, stability and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer grade digital cameras were conducted indoors using a test target. Furthermore, the practicability of mobile phone cameras for close range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras are able to take the place of consumer grade digital cameras and may expand the market in digital photogrammetric fields.

  5. True-color night vision cameras

    Science.gov (United States)

    Kriesel, Jason; Gat, Nahum

    2007-04-01

    This paper describes True-Color Night Vision cameras that are sensitive to the visible to near-infrared (V-NIR) portion of the spectrum allowing for the "true-color" of scenes and objects to be displayed and recorded under low-light-level conditions. As compared to traditional monochrome (gray or green) night vision imagery, color imagery has increased information content and has proven to enable better situational awareness, faster response time, and more accurate target identification. Urban combat environments, where rapid situational awareness is vital, and marine operations, where there is inherent information in the color of markings and lights, are example applications that can benefit from True-Color Night Vision technology. Two different prototype cameras, employing two different true-color night vision technological approaches, are described and compared in this paper. One camera uses a fast-switching liquid crystal filter in front of a custom Gen-III image intensified camera, and the second camera is based around an EMCCD sensor with a mosaic filter applied directly to the sensor. In addition to visible light, both cameras utilize NIR to (1) increase the signal and (2) enable the viewing of laser aiming devices. The performance of the true-color cameras, along with the performance of standard (monochrome) night vision cameras, is reported and compared under various operating conditions in the lab and the field. In addition to subjective criteria, figures of merit designed specifically for the objective assessment of such cameras are used in this analysis.

  6. Research of Camera Calibration Based on DSP

    OpenAIRE

    Zheng Zhang; Yukun Wan; Lixin Cai

    2013-01-01

    To take advantage of the high efficiency and stability of DSPs in data processing and of the functions of the OpenCV library, this study puts forward a scheme for camera calibration in a DSP embedded system. An algorithm for camera calibration based on OpenCV is designed by analyzing the camera model and lens distortion. The port of EMCV to the DSP is completed, and the camera calibration algorithm is migrated and optimized in the CCS development environment and the ...

  7. Omnidirectional Underwater Camera Design and Calibration

    Directory of Open Access Journals (Sweden)

    Josep Bosch

    2015-03-01

    Full Text Available This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach.
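
    The explicit per-ray modeling comes down to applying Snell's law in vector form at each air/housing/water interface. A generic single-interface refraction helper (not the authors' full FOV simulator) might look like this:

        import numpy as np

        def refract(d, n, n1, n2):
            """Refract unit direction d at a surface with unit normal n
            (pointing toward the incoming ray), from index n1 into n2.
            Returns the refracted direction, or None on total internal reflection."""
            d, n = d / np.linalg.norm(d), n / np.linalg.norm(n)
            r = n1 / n2
            cos_i = -float(d @ n)
            sin2_t = r**2 * (1.0 - cos_i**2)
            if sin2_t > 1.0:
                return None
            return r * d + (r * cos_i - np.sqrt(1.0 - sin2_t)) * n

    Chaining this helper across the air/acrylic and acrylic/water interfaces traces one optical ray through the housing.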

  8. Framework for Evaluating Camera Opinions

    Directory of Open Access Journals (Sweden)

    K.M. Subramanian

    2015-03-01

    Full Text Available Opinion mining plays a most important role in text mining applications such as brand and product positioning, customer relationship management, consumer attitude detection and market research. These applications have led to a new generation of companies and products meant for online market perception, online content monitoring and reputation management. The expansion of the web inspires users to contribute and express opinions via blogs, videos and social networking sites. Such platforms provide valuable information for analysis of sentiment pertaining to a product or service. This study investigates the performance of various feature extraction methods and classification algorithms for opinion mining. Opinions expressed on the Amazon website for cameras were collected and used for evaluation. Features are extracted from the opinions using Term Frequency and Inverse Document Frequency (TF-IDF). Feature transformation is achieved through Principal Component Analysis (PCA) and kernel PCA. Naïve Bayes, K Nearest Neighbor, and Classification and Regression Trees (CART) algorithms are then used to classify the extracted features.
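
    The processing chain described (TF-IDF features, PCA transformation, then classification) maps directly onto standard tooling; the toy corpus below is invented purely for illustration:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import PCA
        from sklearn.naive_bayes import GaussianNB

        docs = ["great camera, sharp lens and fast autofocus",    # invented examples
                "battery died quickly, poor build quality",
                "excellent low-light shots, love this camera",
                "blurry photos and the flash never works"]
        labels = [1, 0, 1, 0]                                     # 1 = positive opinion

        X = TfidfVectorizer().fit_transform(docs).toarray()       # TF-IDF features
        X = PCA(n_components=2).fit_transform(X)                  # feature transformation
        clf = GaussianNB().fit(X, labels)                         # one of the cited classifiers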

  9. Illumination box and camera system

    Science.gov (United States)

    Haas, Jeffrey S.; Kelly, Fredrick R.; Bushman, John F.; Wiefel, Michael H.; Jensen, Wayne A.; Klunder, Gregory L.

    2002-01-01

    A hand portable, field-deployable thin-layer chromatography (TLC) unit and a hand portable, battery-operated unit for development, illumination, and data acquisition of the TLC plates contain many miniaturized features that permit a large number of samples to be processed efficiently. The TLC unit includes a solvent tank, a holder for TLC plates, and a variety of tool chambers for storing TLC plates, solvent, and pipettes. After processing in the TLC unit, a TLC plate is positioned in a collapsible illumination box, where the box and a CCD camera are optically aligned for optimal pixel resolution of the CCD images of the TLC plate. The TLC system includes an improved development chamber for chemical development of TLC plates that prevents solvent overflow.

  10. A Holographic Road Show.

    Science.gov (United States)

    Kirkpatrick, Larry D.; Rugheimer, Mac

    1979-01-01

    Describes the viewing sessions and the holograms of a holographic road show. The traveling exhibits, believed to stimulate interest in physics, include a wide variety of holograms and demonstrate several physical principles. (GA)

  11. HRSC: High resolution stereo camera

    Science.gov (United States)

    Neukum, G.; Jaumann, R.; Basilevsky, A.T.; Dumke, A.; Van Gasselt, S.; Giese, B.; Hauber, E.; Head, J. W.; Heipke, C.; Hoekzema, N.; Hoffmann, H.; Greeley, R.; Gwinner, K.; Kirk, R.; Markiewicz, W.; McCord, T.B.; Michael, G.; Muller, Jan-Peter; Murray, J.B.; Oberst, J.; Pinet, P.; Pischel, R.; Roatsch, T.; Scholten, F.; Willner, K.

    2009-01-01

    The High Resolution Stereo Camera (HRSC) on Mars Express has delivered a wealth of image data, amounting to over 2.5 TB from the start of the mapping phase in January 2004 to September 2008. In that time, more than a third of Mars was covered at a resolution of 10-20 m/pixel in stereo and colour. After five years in orbit, HRSC is still in excellent shape, and it could continue to operate for many more years. HRSC has proven its ability to close the gap between the low-resolution Viking image data and the high-resolution Mars Orbiter Camera images, leading to a global picture of the geological evolution of Mars that is now much clearer than ever before. Derived highest-resolution terrain model data have closed major gaps and provided an unprecedented insight into the shape of the surface, which is paramount not only for surface analysis and geological interpretation, but also for combination with and analysis of data from other instruments, as well as in planning for future missions. This chapter presents the scientific output from data analysis and high-level data processing, complemented by a summary of how the experiment is conducted by the HRSC team members working in geoscience, atmospheric science, photogrammetry and spectrophotometry. Many of these contributions have been or will be published in peer-reviewed journals and special issues. They form a cross-section of the scientific output, either by summarising the new geoscientific picture of Mars provided by HRSC or by detailing some of the topics of data analysis concerning photogrammetry, cartography and spectral data analysis.

  12. MISR FIRSTLOOK radiometric camera-by-camera Cloud Mask V001

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the FIRSTLOOK Radiometric camera-by-camera Cloud Mask (RCCM) dataset produced using ancillary inputs (RCCT) from the previous time period. It is...

  13. A Linear Approach for Depth and Colour Camera Calibration Using Hybrid Parameters

    Institute of Scientific and Technical Information of China (English)

    Ke-Li Cheng; Xuan Ju; Ruo-Feng Tong; Min Tang; Jian Chang; Jian-Jun Zhang

    2016-01-01

    Many recent applications of computer graphics and human computer interaction have adopted both colour cameras and depth cameras as input devices. Therefore, an effective calibration of both types of hardware taking different colour and depth inputs is required. Our approach removes the numerical difficulties of using non-linear optimization in previous methods which explicitly resolve camera intrinsics as well as the transformation between depth and colour cameras. A matrix of hybrid parameters is introduced to linearize our optimization. The hybrid parameters offer a transformation from a depth parametric space (depth camera image) to a colour parametric space (colour camera image) by combining the intrinsic parameters of depth camera and a rotation transformation from depth camera to colour camera. Both the rotation transformation and intrinsic parameters can be explicitly calculated from our hybrid parameters with the help of a standard QR factorisation. We test our algorithm with both synthesized data and real-world data where ground-truth depth information is captured by Microsoft Kinect. The experiments show that our approach can provide comparable accuracy of calibration with the state-of-the-art algorithms while taking much less computation time (1/50 of Herrera’s method and 1/10 of Raposo’s method) due to the advantage of using hybrid parameters.
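
    Assuming the hybrid matrix has the product form rotation times upper-triangular (one natural reading of the abstract; the paper's exact parameterization may differ), the two factors can be separated with a standard QR factorization:

        import numpy as np

        def split_hybrid(M):
            """Split a 3x3 hybrid matrix M ~ R @ U into a rotation R and an
            upper-triangular (intrinsics-like) factor U with positive diagonal."""
            Q, U = np.linalg.qr(M)
            S = np.diag(np.sign(np.diag(U)))   # QR is unique only up to these signs
            Q, U = Q @ S, S @ U
            assert np.linalg.det(Q) > 0, "input is not rotation x upper-triangular"
            return Q, U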

  14. High-Speed Edge-Detecting Line Scan Smart Camera

    Science.gov (United States)

    Prokop, Norman F.

    2012-01-01

    A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in a NASA Glenn Research Center developed inlet shock detection system. The inlet shock is detected by projecting a laser sheet through the airflow. The shock within the airflow is the densest part and refracts the laser sheet the most in its vicinity, leaving a dark spot or shadowgraph. These spots show up as a dip or negative peak within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes in real time the linear image containing the shock shadowgraph and outputs the shock location. Previously, a high-speed camera and personal computer would perform the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock or negative edge location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and onboard or external digital interface to include serial data such as RS-232/485, USB, Ethernet, or CAN BUS; parallel digital data; or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
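
    Locating the shock then reduces to finding the negative peak in the line profile. A sketch with smoothing and parabolic sub-pixel refinement (my choice of refinement, not necessarily what the circuit implements) follows:

        import numpy as np

        def shock_location(profile, k=5):
            """profile: 1D pixel-intensity array of the projected laser sheet.
            Returns the sub-pixel position of the darkest dip (the shadowgraph)."""
            p = np.convolve(profile, np.ones(k) / k, mode="same")  # suppress noise
            i = int(np.argmin(p[1:-1])) + 1                        # negative peak
            y0, y1, y2 = p[i - 1], p[i], p[i + 1]
            denom = y0 - 2 * y1 + y2
            return i + (0.5 * (y0 - y2) / denom if denom else 0.0) # parabolic fit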

  15. Show-Bix &

    DEFF Research Database (Denmark)

    2014-01-01

    The anti-reenactment 'Show-Bix &' consists of 5 dias projectors, a dial phone, quintophonic sound, and interactive elements. A responsive interface will enable the dias projectors to show copies of original dias slides from the Show-Bix piece "March på Stedet", 265 images in total. The copies are made from digital scans of the original dias slides located in the collection of the Museum of Contemporary Art in Roskilde. In front of the audience entering the space, and placed on its own stand, is an original 60s style telephone with turning dial. Action begins when the audience lift the phone and dial a number. Any number will make the dias change. All numbers are also assigned to specific sound documents: clips from rare interviews and the complete sound re-enactment of the Show-Bix piece 'Omringning' ('Surrounding') in five channels (a quintophonie). This was originally produced in...

  16. Honored Teacher Shows Commitment.

    Science.gov (United States)

    Ratte, Kathy

    1987-01-01

    Part of the acceptance speech of the 1985 National Council for the Social Studies Teacher of the Year, this article describes the censorship experience of this honored social studies teacher. The incident involved the showing of a videotape version of the feature film entitled "The Seduction of Joe Tynan." (JDH)

  17. A Visionary Show

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Seduction. Distinction. Relax. Pulsation. These are the "style universes" on display at Première Vision, heralded as "The World’s Premiere Fabric Show." Started more than 35 years ago by 15 French weavers, Première Vision has expanded beyond its

  18. Violence and TV Shows

    OpenAIRE

    ÖZTÜRK, Yrd. Doç. Dr. Şinasi

    2008-01-01

    This study aims to discuss theories on the violent effects of TV shows on viewers, especially on children. Therefore, this study includes a brief discussion of definitions of violence, a discussion of violence theories, the main results of research on televised violence, measuring TV violence, the perception of televised violence, individual differences and reactions to TV violence, and aggressiveness and preferences for TV violence.

  19. Obesity in show cats.

    Science.gov (United States)

    Corbee, R J

    2014-12-01

    Obesity is an important disease with a high prevalence in cats. Because obesity is related to several other diseases, it is important to identify the population at risk. Several risk factors for obesity have been described in the literature. A higher incidence of obesity in certain cat breeds has been suggested. The aim of this study was to determine whether obesity occurs more often in certain breeds. The second aim was to relate the increased prevalence of obesity in certain breeds to the official standards of that breed. To this end, 268 cats of 22 different breeds were investigated at two different cat shows by determining their body condition score (BCS) on a nine-point scale by inspection and palpation. Overall, 45.5% of the show cats had a BCS > 5, and 4.5% of the show cats had a BCS > 7. There were significant differences between breeds, which could be related to the breed standards. Most overweight and obese cats were in the neutered group. These findings warrant firm discussions with breeders and cat show judges to come to different interpretations of the standards, in order to prevent overweight conditions in certain breeds from being the standard of beauty. Neutering predisposes to obesity and requires early nutritional intervention to prevent obese conditions. PMID:24612018

  20. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many ca

  1. Improving Situational Awareness in camera surveillance by combining top-view maps with camera images

    NARCIS (Netherlands)

    Kooi, F.L.; Zeeders, R.

    2009-01-01

    The goal of the experiment described is to improve today's camera surveillance in public spaces. Three designs with the camera images combined on a top-view map were compared to each other and to the current situation in camera surveillance. The goal was to test which design makes spatial relationsh

  2. Camera self-calibration from translation by referring to a known camera.

    Science.gov (United States)

    Zhao, Bin; Hu, Zhaozheng

    2015-09-01

    This paper presents a novel linear method for camera self-calibration by referring to a known (or calibrated) camera. The method requires at least three images, with two images generated by the uncalibrated camera from pure translation and one image generated by the known reference camera. We first propose a method to compute the infinite homography from scene depths. Based on this, we use two images generated by translating the uncalibrated camera to recover scene depths, which are further utilized to linearly compute the infinite homography between an arbitrary uncalibrated image and the image from the known camera. With the known camera as reference, the computed infinite homography is readily decomposed for camera calibration. The proposed self-calibration method has been tested with simulation and real image data. Experimental results demonstrate that the method is practical and accurate. This paper proposes using a "known reference camera" for camera calibration. The pure translation required by the method is much easier to perform than the strict motions assumed in the literature, such as pure rotation. The proposed self-calibration method has good potential for solving online camera calibration problems, which have important applications, especially for multicamera and zooming camera systems.
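
    Once the infinite homography H between an uncalibrated image and the reference image is known, the decomposition is linear. Under the common convention H = K_ref R K^(-1) (my notation; the paper's formulation may differ in detail), the unknown intrinsics follow from an RQ factorization:

        import numpy as np
        from scipy.linalg import rq

        def calibrate_from_infinite_homography(H, K_ref):
            """Assume H maps the uncalibrated image to the reference image as
            H = K_ref @ R @ inv(K).  Then inv(inv(K_ref) @ H) = K @ R.T, which an
            RQ factorization splits into intrinsics K and a rotation."""
            A = np.linalg.inv(np.linalg.inv(K_ref) @ H)   # = K @ R.T
            K, Rt = rq(A)
            S = np.diag(np.sign(np.diag(K)))              # fix RQ sign ambiguity
            K, Rt = K @ S, S @ Rt
            return K / K[2, 2], Rt.T                      # normalized K, rotation R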

  3. Modeling of the over-exposed pixel area of CCD cameras caused by laser dazzling

    Science.gov (United States)

    Benoist, Koen W.; Schleijpen, Ric H. M. A.

    2014-10-01

    A simple model has been developed and implemented in Matlab code that predicts the over-exposed pixel area of cameras caused by laser dazzling. Inputs of this model are the laser irradiance on the front optics of the camera, the point spread function (PSF) of the optics used, the integration time of the camera, and camera sensor specifications such as pixel size, quantum efficiency and full well capacity. Effects of the read-out circuit of the camera are not incorporated. The model was evaluated with laser dazzle experiments on CCD cameras using a 532 nm CW laser dazzler and shows good agreement. For relatively low laser irradiance the model predicts the over-exposed laser spot area quite accurately and shows the cube root dependency of spot diameter on laser irradiance caused by the PSF, as demonstrated before for IR cameras. For higher laser power levels the laser-induced spot diameter increases more rapidly than predicted, which can probably be attributed to scatter effects in the camera. First attempts to model the scatter contributions using a simple scatter power function f(θ) agree well with the experiments. Using this model, a tool is available which can assess the performance of observation sensor systems while they are subjected to laser countermeasures.
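
    The reported cube-root growth follows directly if the PSF wings fall off roughly as 1/r^3. The snippet below, with all constants hypothetical, solves for the radius at which the wing irradiance just fills the pixel wells within one integration time:

        def saturated_radius(E, t_int, eta, full_well, k=1.0e3):
            """Radius (pixels) of the over-exposed spot for laser irradiance E.
            Assumes PSF-wing irradiance ~ E * k / r**3 (k: lumped optics/scatter
            constant, hypothetical).  Saturation when E*k/r**3 * eta * t_int
            equals full_well, so r grows with the cube root of E, matching the
            low-irradiance behaviour described above."""
            return (E * k * eta * t_int / full_well) ** (1.0 / 3.0)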

  4. Shanghai Shows Its Heart

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The city known as China's economic powerhouse showed a more caring face as host of the Special Olympic Games. Between October 2 and 11, the Special Olympics Summer Games were hosted in Shanghai, the first time the 40-year-old athletic competition for people with intellectual disabilities came to a developing country. This Special Olympics was also larger than all previous games in terms of the number of athletes.

  5. Centering mount for a gamma camera

    International Nuclear Information System (INIS)

    A device for centering a γ-camera detector in radionuclide diagnosis is described. It permits the use of available medical couches instead of a table with a transparent top. The device can be used for centering the detector (when it is fixed at the lower end of a γ-camera) on a required area of the patient's body.

  6. A BASIC CAMERA UNIT FOR MEDICAL PHOTOGRAPHY.

    Science.gov (United States)

    SMIALOWSKI, A; CURRIE, D J

    1964-08-22

    A camera unit suitable for most medical photographic purposes is described. The unit comprises a single-lens reflex camera, an electronic flash unit and supplementary lenses. Simple instructions for use of this basic unit are presented. The unit is entirely suitable for taking fine-quality photographs of most medical subjects by persons who have had little photographic training.

  7. Cameras Monitor Spacecraft Integrity to Prevent Failures

    Science.gov (United States)

    2014-01-01

    The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained while working with NASA into an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.

  8. CCD Color Camera Characterization for Image Measurements

    NARCIS (Netherlands)

    Withagen, P.J.; Groen, F.C.A.; Schutte, K.

    2007-01-01

    In this article, we analyze a range of different types of cameras for their use in measurements. We verify a general model of a charge-coupled device camera using experiments. This model includes gain and offset, additive and multiplicative noise, and gamma correction. It is shown that for sever

  9. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.
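
    The integrity and authenticity guarantees can be pictured with a minimal hash-chain-plus-MAC scheme. A real system would keep the key inside the TPM and use its attestation mechanisms, so the snippet below is only a software analogy of the idea:

        import hashlib, hmac, json, time

        KEY = b"key-sealed-in-the-TPM-in-a-real-system"   # placeholder only

        def sign_frame(frame_bytes, prev_digest, seq):
            """Chain each frame to its predecessor (tamper and reorder detection),
            then MAC the chained digest plus metadata (authenticity, timestamping)."""
            digest = hashlib.sha256(prev_digest + frame_bytes).digest()
            meta = {"seq": seq, "ts": time.time(), "digest": digest.hex()}
            tag = hmac.new(KEY, json.dumps(meta, sort_keys=True).encode(),
                           "sha256").hexdigest()
            return digest, {**meta, "hmac": tag}

    A verifier holding the key can replay the chain and detect any dropped, altered or reordered frame.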

  10. Depth Estimation Using a Sliding Camera.

    Science.gov (United States)

    Ge, Kailin; Hu, Han; Feng, Jianjiang; Zhou, Jie

    2016-02-01

    Image-based 3D reconstruction technology is widely used in different fields. The conventional algorithms are mainly based on stereo matching between two or more fixed cameras, and high accuracy can only be achieved using a large camera array, which is very expensive and inconvenient in many applications. Another popular choice is utilizing structure-from-motion methods for arbitrarily placed camera(s). However, due to too many degrees of freedom, its computational cost is heavy and its accuracy is rather limited. In this paper, we propose a novel depth estimation algorithm using a sliding camera system. By analyzing the geometric properties of the camera system, we design a camera pose initialization algorithm that works reliably with only a small number of feature points and is robust to noise. For pixels corresponding to different depths, an adaptive iterative algorithm is proposed to choose optimal frames for stereo matching; it takes advantage of the continuously changing camera pose and also greatly reduces the computation time. The proposed algorithm can also be easily extended to handle less constrained situations (such as using a camera mounted on a moving robot or vehicle). Experimental results on both synthetic and real-world data have illustrated the effectiveness of the proposed algorithm.

  12. Rosetta Star Tracker and Navigation Camera

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Proposal in response to the Invitation to Tender (ITT) issued by Matra Marconi Space (MSS) for the procurement of the ROSETTA Star Tracker and Navigation Camera.

  13. Fazendo 3d com uma camera so

    CERN Document Server

    Lunazzi, J J

    2010-01-01

    A simple system for making stereo photographs or videos, based on just two mirrors that split the image field, was created in 1989 and recently adapted to a digital camera setup.

  14. Laser Dazzling of Focal Plane Array Cameras

    NARCIS (Netherlands)

    Schleijpen, H.M.A.; Dimmeler, A.; Eberle, B; Heuvel, J.C. van den; Mieremet, A.L.; Bekman, H.H.P.T.; Mellier, B.

    2007-01-01

    Laser countermeasures against infrared focal plane array cameras aim to saturate the full camera image. In this paper we will discuss the results of dazzling experiments performed with MWIR lasers. In the “low energy” pulse regime we observe an increasing saturated area with increasing power. The si

  15. Creating and Using a Camera Obscura

    Science.gov (United States)

    Quinnell, Justin

    2012-01-01

    The camera obscura (Latin for "darkened room") is the earliest optical device and goes back over 2500 years. The small pinhole or lens at the front of the room allows light to enter and this is then "projected" onto a screen inside the room. This differs from a camera, which projects its image onto light-sensitive material. Originally images were…

  16. Flow visualization by mobile phone cameras

    Science.gov (United States)

    Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.

    2016-06-01

    Mobile smart phones have completely changed people's communication within the last ten years. However, these devices do not only offer communication through different channels but also applications for fun and recreation. In this respect, mobile phone cameras now include relatively fast (up to 240 Hz) modes to capture high-speed videos of sport events or other fast processes. This article therefore explores the possibility of making use of this development, and of the widespread availability of these cameras, for velocity measurements in industrial or technical applications and in fluid dynamics education at high schools and universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment of a free water jet was used to prove the concept, shed some light on the achievable quality and identify bottlenecks by comparing the results obtained with a mobile phone camera with data taken by a high-speed camera suited for scientific experiments.
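
    The core of a simplistic PIV evaluation is cross-correlating interrogation windows from two consecutive phone-camera frames. An FFT-based sketch follows (window selection, sub-pixel peak fitting and outlier validation omitted):

        import numpy as np

        def piv_displacement(win_a, win_b):
            """Displacement (dy, dx) of the particle pattern from window a to
            window b, via the FFT cross-correlation theorem (equal-shape 2D arrays)."""
            A = np.fft.rfft2(win_a - win_a.mean())
            B = np.fft.rfft2(win_b - win_b.mean())
            corr = np.fft.fftshift(np.fft.irfft2(np.conj(A) * B, s=win_a.shape))
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            return dy - win_a.shape[0] // 2, dx - win_a.shape[1] // 2

    Dividing the frame into a grid of such windows and scaling by the frame interval and image magnification yields the velocity field.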

  17. Gamma camera performance: technical assessment protocol

    Energy Technology Data Exchange (ETDEWEB)

    Bolster, A.A. [West Glasgow Hospitals NHS Trust, London (United Kingdom). Dept. of Clinical Physics; Waddington, W.A. [University College London Hospitals NHS Trust, London (United Kingdom). Inst. of Nuclear Medicine

    1996-12-31

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author).

  18. Adapting virtual camera behaviour through player modelling

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Research in virtual camera control has focused primarily on finding methods to allow designers to place cameras effectively and efficiently in dynamic and unpredictable environments, and to generate complex and dynamic plans for cinematography in virtual environments. In this article, we propose a novel approach to virtual camera control, which builds upon camera control and player modelling to provide the user with an adaptive point-of-view. To achieve this goal, we propose a methodology to model the player's preferences on virtual camera movements and we employ the resulting models to tailor the viewpoint movements to the player type and her game-play style. Ultimately, the methodology is applied to a 3D platform game and is evaluated through a controlled experiment; the results suggest that the resulting adaptive cinematographic experience is favoured by some player types and it can generate...

  19. Show and Tell

    DEFF Research Database (Denmark)

    2013-01-01

    /DK), Pernille With Madsen, Emil Alenius, Andrés Galeano (E/DE), Kasper Vang & Mads Forsby, Nanna Lysholt Hansen and Molly & Me (Molly Haslund & Catherine Hoffmann (UK)). Curated by Judith Schwarzbart and Sanne Krogh Groth. Produced by students of the Performance Design programme. The programme was supported by the Danish Arts Council and the Performance Design study board. Show & Tell performance programme: 4:30-7 pm. Address: Kunsthal Charlottenborg, Nyhavn 2, 1051 København K.

  20. Obesity in show dogs.

    Science.gov (United States)

    Corbee, R J

    2013-10-01

    Obesity is an important disease with a growing incidence. Because obesity is related to several other diseases, and decreases life span, it is important to identify the population at risk. Several risk factors for obesity have been described in the literature. A higher incidence of obesity in certain breeds is often suggested. The aim of this study was to determine whether obesity occurs more often in certain breeds. The second aim was to relate the increased prevalence of obesity in certain breeds to the official standards of that breed. To this end, we investigated 1379 dogs of 128 different breeds by determining their body condition score (BCS). Overall, 18.6% of the show dogs had a BCS >5, and 1.1% of the show dogs had a BCS>7. There were significant differences between breeds, which could be correlated to the breed standards. It warrants firm discussions with breeders and judges in order to come to different interpretations of the standards to prevent overweight conditions from being the standard of beauty. PMID:22882163

  1. Airborne Digital Camera. A digital view from above

    Energy Technology Data Exchange (ETDEWEB)

    Roeser, H.P. [DLR Deutsches Zentrum fuer Luft- und Raumfahrt e.V., Berlin (Germany). Inst. fuer Weltraumsensorik und Planetenerkundung

    1999-09-01

    The Airborne Digital Camera (ADC) is based on the WAOSS camera of the MARS-96 mission and will provide a new basis for airborne photogrammetry and remote sensing. The Mars camera WAOSS conceived for that mission supplied the foundation for the innovative concept of a digital aerial camera, which is set to put airborne photogrammetry and remote sensing on a completely new technological footing. The goal of the ADC project is the development of the first commercial digital airborne camera.

  2. Not a "reality" show.

    Science.gov (United States)

    Wrong, Terence; Baumgart, Erica

    2013-01-01

    The authors of the preceding articles raise legitimate questions about patient and staff rights and the unintended consequences of allowing ABC News to film inside teaching hospitals. We explain why we regard their fears as baseless and not supported by what we heard from individuals portrayed in the filming, our decade-long experience making medical documentaries, and the full un-aired context of the scenes shown in the broadcast. The authors don't and can't know what conversations we had, what documents we reviewed, and what protections we put in place in each televised scene. Finally, we hope to correct several misleading examples cited by the authors as well as their offhand mischaracterization of our program as a "reality" show. PMID:23631336

  4. NIR Camera/spectrograph: TEQUILA

    Science.gov (United States)

    Ruiz, E.; Sohn, E.; Cruz-Gonzalez, I.; Salas, L.; Parraga, A.; Torres, R.; Perez, M.; Cobos, F.; Tejada, C.; Iriarte, A.

    1998-11-01

    We describe the configuration and operation modes of the IR camera/spectrograph called TEQUILA, based on a 1024×1024 HgCdTe FPA (HAWAII). The optical system will allow three possible modes of operation: direct imaging, low and medium resolution spectroscopy, and polarimetry. The basic system is designed to consist of the following: 1) An LN2 dewar that houses the FPA together with the preamplifiers and a 24-position filter cylinder. 2) Control and readout electronics based on DSP modules linked to a workstation through fiber optics. 3) An opto-mechanical assembly cooled to -30 °C that provides efficient operation of the instrument in its various modes. 4) A control module for the moving parts of the instrument. The opto-mechanical assembly will have the necessary provisions to install a scanning Fabry-Perot interferometer and an adaptive optics correction system. The final image acquisition and control of the whole instrument is carried out on a workstation to provide the observer with a friendly environment. The system will operate at the 2.1 m telescope at the Observatorio Astronomico Nacional in San Pedro Martir, B.C. (Mexico), and is intended to be a first-light instrument for the new 7.8 m Mexican Infrared-Optical Telescope (TIM).

  5. Cloud Computing with Context Cameras

    Science.gov (United States)

    Pickles, A. J.; Rosing, W. E.

    2016-05-01

    We summarize methods and plans to monitor and calibrate photometric observations with our autonomous, robotic network of 2m, 1m and 40cm telescopes. These are sited globally to optimize our ability to observe time-variable sources. Wide field "context" cameras are aligned with our network telescopes and cycle every ˜2 minutes through BVr'i'z' filters, spanning our optical range. We measure instantaneous zero-point offsets and transparency (throughput) against calibrators in the 5-12m range from the all-sky Tycho2 catalog, and periodically against primary standards. Similar measurements are made for all our science images, with typical fields of view of ˜0.5 degrees. These are matched against Landolt, Stetson and Sloan standards, and against calibrators in the 10-17m range from the all-sky APASS catalog. Such measurements provide pretty good instantaneous flux calibration, often to better than 5%, even in cloudy conditions. Zero-point and transparency measurements can be used to characterize, monitor and inter-compare sites and equipment. When accurate calibrations of Target against Standard fields are required, monitoring measurements can be used to select truly photometric periods when accurate calibrations can be automatically scheduled and performed.
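
    Zero-point and transparency monitoring reduces to comparing instrumental and catalog magnitudes. A schematic version, with aperture photometry and outlier rejection omitted, is shown below:

        import numpy as np

        def zero_point(counts, exptime, catalog_mags):
            """ZP such that m_catalog = -2.5*log10(counts/exptime) + ZP."""
            inst = -2.5 * np.log10(np.asarray(counts) / exptime)
            return float(np.median(np.asarray(catalog_mags) - inst))

        def transparency(zp_now, zp_photometric):
            """Throughput relative to a clear, photometric night."""
            return 10.0 ** (-0.4 * (zp_photometric - zp_now))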

  6. Cloud Computing with Context Cameras

    CERN Document Server

    Pickles, A J

    2013-01-01

    We summarize methods and plans to monitor and calibrate photometric observations with our autonomous, robotic network of 2m, 1m and 40cm telescopes. These are sited globally to optimize our ability to observe time-variable sources. Wide field "context" cameras are aligned with our network telescopes and cycle every 2 minutes through BVriz filters, spanning our optical range. We measure instantaneous zero-point offsets and transparency (throughput) against calibrators in the 5-12m range from the all-sky Tycho2 catalog, and periodically against primary standards. Similar measurements are made for all our science images, with typical fields of view of 0.5 degrees. These are matched against Landolt, Stetson and Sloan standards, and against calibrators in the 10-17m range from the all-sky APASS catalog. Such measurements provide pretty good instantaneous flux calibration, often to better than 5%, even in cloudy conditions. Zero-point and transparency measurements can be used to characterize, monitor and inter-comp...

  7. Smart Camera Technology Increases Quality

    Science.gov (United States)

    2004-01-01

    When it comes to real-time image processing, everyone is an expert. People begin processing images at birth and rapidly learn to control their responses through the real-time processing of the human visual system. The human eye captures an enormous amount of information in the form of light images. In order to keep the brain from becoming overloaded with all the data, portions of an image are processed at a higher resolution than others, such as a traffic light changing colors. In the same manner, image processing products strive to extract the information stored in light in the most efficient way possible. Digital cameras available today capture millions of pixels worth of information from incident light. However, at frame rates more than a few per second, existing digital interfaces are overwhelmed. All the user can do is store several frames to memory until that memory is full and then subsequent information is lost. New technology pairs existing digital interface technology with an off-the-shelf complementary metal oxide semiconductor (CMOS) imager to provide more than 500 frames per second of specialty image processing. The result is a cost-effective detection system unlike any other.

  8. True three-dimensional camera

    Science.gov (United States)

    Kornreich, Philipp; Farell, Bart

    2013-01-01

    An imager that can measure the distance from each pixel to the point on the object that is in focus at the pixel is described. This is accomplished by short photo-conducting lightguides at each pixel. In the eye the rods and cones are the fiber-like lightguides. The device uses ambient light that is only coherent in spherical shell-shaped light packets of thickness of one coherence length. Modern semiconductor technology permits the construction of lightguides shorter than a coherence length of ambient light. Each of the frequency components of the broad band light arriving at a pixel has a phase proportional to the distance from an object point to its image pixel. Light frequency components in the packet arriving at a pixel through a convex lens add constructively only if the light comes from the object point in focus at this pixel. The light in packets from all other object points cancels. Thus the pixel receives light from one object point only. The lightguide has contacts along its length. The lightguide charge carriers are generated by the light patterns. These light patterns, and thus the photocurrent, shift in response to the phase of the input signal. Thus, the photocurrent is a function of the distance from the pixel to its object point. Applications include autonomous vehicle navigation and robotic vision. Another application is a crude teleportation system consisting of a camera and a three-dimensional printer at a remote location.

  9. Showing Value (Editorial)

    Directory of Open Access Journals (Sweden)

    Denise Koufogiannakis

    2009-06-01

    Full Text Available When Su Cleyle and I first decided to start Evidence Based Library and Information Practice, one of the things we agreed upon immediately was that the journal be open access. We knew that a major obstacle to librarians using the research literature was that they did not have access to the research literature. Although Su and I are both academic librarians who can access a wide variety of library and information literature from our institutions, we belong to a profession where not everyone has equal access to the research in our field. Without such access to our own body of literature, how can we ever hope for practitioners to use research evidence in their decision making? It would have been contradictory to the principles of evidence based library and information practice to do otherwise. One of the specific groups we thought could use such an open access venue for discovering research literature was school librarians. School librarians are often isolated and lacking access to the research literature that may help them prove to stakeholders the importance of their libraries and their role within schools. Certainly, school libraries have been in decline and the use of evidence to show value is needed. As Ken Haycock noted in his 2003 report, The Crisis in Canada’s School Libraries: The Case for Reform and Reinvestment, “Across the country, teacher-librarians are losing their jobs or being reassigned. Collections are becoming depleted owing to budget cuts. Some principals believe that in the age of the Internet and the classroom workstation, the school library is an artifact” (9). Within this context, school librarians are looking to our research literature for evidence of the impact that school library programs have on learning outcomes and student success. They are integrating that evidence into their practice, and reflecting upon what can be improved locally. They are focusing on students and showing the impact of school libraries and

  10. The Great Cometary Show

    Science.gov (United States)

    2007-01-01

    its high spatial and spectral resolution, it was possible to zoom into the very heart of this very massive star. In this innermost region, the observations are dominated by the extremely dense stellar wind that totally obscures the underlying central star. The AMBER observations show that this dense stellar wind is not spherically symmetric, but exhibits a clearly elongated structure. Overall, the AMBER observations confirm that the extremely high mass loss of Eta Carinae's massive central star is non-spherical and much stronger along the poles than in the equatorial plane. This is in agreement with theoretical models that predict such an enhanced polar mass-loss in the case of rapidly rotating stars. (ESO PR Photo 06c/07: RS Ophiuchi in Outburst.) Several papers from this special feature focus on the later stages in a star's life. One looks at the binary system Gamma 2 Velorum, which contains the closest example of a star known as a Wolf-Rayet. A single AMBER observation allowed the astronomers to separate the spectra of the two components, offering new insights into the modeling of Wolf-Rayet stars, but also made it possible to measure the separation between the two stars. This led to a new determination of the distance of the system, showing that previous estimates were incorrect. The observations also revealed information on the region where the winds from the two stars collide. The famous binary system RS Ophiuchi, an example of a recurrent nova, was observed just 5 days after it was discovered to be in outburst on 12 February 2006, an event that had been expected for 21 years. AMBER was able to detect the extension of the expanding nova emission. These observations show a complex geometry and kinematics, far from the simple interpretation of a spherical fireball in extension. AMBER has detected a high velocity jet probably perpendicular to the orbital plane of the binary system, and allowed a precise and careful study of the wind and the shockwave

  11. Control and protection of outdoor embedded camera for astronomy

    Science.gov (United States)

    Rigaud, F.; Jegouzo, I.; Gaudemard, J.; Vaubaillon, J.

    2012-09-01

    The purpose of the CABERNET - Podet-Met (CAmera BEtter Resolution NETwork, Pole sur la Dynamique de l'Environnement Terrestre - Meteor) project is the automated observation of meteor showers, by triangulation with three cameras, to calculate meteoroid trajectories and velocities. The scientific goal is to search for the parent body, comet or asteroid, of each observed meteor. Installing outdoor cameras that perform astronomical measurements for several years with high reliability requires a very specific design for the enclosure. For these cameras, this contribution shows how we fulfilled the various functions of their boxes, such as cooling of the CCD, heating to melt snow and ice, and protection against moisture, lightning and solar light. We present the principal and secondary functions, the product breakdown structure, the evaluation grid of criteria for technical solutions, and the adopted technology products and their implementation in multifunction subsets for miniaturization purposes. To manage this project, we aimed for the lowest manpower and development time for every part. In the appendix, we present measurements of the image quality evolution during CCD cooling, and some pictures of the prototype.

  12. Hardware Middleware for Person Tracking on Embedded Distributed Smart Cameras

    Directory of Open Access Journals (Sweden)

    Ali Akbar Zarezadeh

    2012-01-01

    Full Text Available Tracking individuals is a prominent application in domains such as surveillance or smart environments. This paper presents the development of a multiple-camera setup with a joint view that observes moving persons at a site. It focuses on a geometry-based approach to establish correspondence among different views. The computationally expensive parts of the tracker are hardware accelerated via a novel system-on-chip (SoC) design. In conjunction with this vision application, a hardware object request broker (ORB) middleware is presented as the underlying communication system. The hardware ORB provides a hardware/software architecture to achieve real-time intercommunication among multiple smart cameras. Via a probing mechanism, a performance analysis is performed to measure network latencies, that is, the time traversing the TCP/IP stack, for both the software and hardware ORB approaches on the same smart camera platform. The empirical results show that using the proposed hardware ORB as client and server in separate smart camera nodes reduces the network latency by up to 100 times compared to the software ORB.

  13. A Robust Camera-Based Interface for Mobile Entertainment.

    Science.gov (United States)

    Roig-Maimó, Maria Francesca; Manresa-Yee, Cristina; Varona, Javier

    2016-02-19

    Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies to consider the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions; therefore, the findings cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user's head by processing the frames provided by the mobile device's front camera, and its position is then used to interact with the mobile apps. First, we evaluate the interface as a pointing device to study its accuracy and different configuration factors, such as the gain or the device's orientation, as well as the optimal target size for the interface. Second, we present an in-the-wild study to evaluate the usage and the user's perception when playing a game controlled by head motion. Finally, the game is published in an application store to make it available to a large number of potential users and contexts, and we register usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people.

  14. High-speed camera characterization of voluntary eye blinking kinematics.

    Science.gov (United States)

    Kwon, Kyung-Ah; Shipley, Rebecca J; Edirisinghe, Mohan; Ezra, Daniel G; Rose, Geoff; Best, Serena M; Cameron, Ruth E

    2013-08-01

    Blinking is vital to maintain the integrity of the ocular surface, and its characteristics such as blink duration and speed can vary significantly, depending on the health of the eyes. The blink is so rapid that special techniques are required to characterize it. In this study, a high-speed camera was used to record and characterize voluntary blinking. The blinking motion of 25 healthy volunteers was recorded at 600 frames per second. Master curves for the palpebral aperture and blinking speed were constructed using palpebral aperture versus time data taken from the high-speed camera recordings, which show that one blink can be divided into four phases: closing, closed, early opening and late opening. Analysis of data from the high-speed camera images was used to calculate the palpebral aperture, peak blinking speed, average blinking speed and duration of voluntary blinking and compare it with data generated by other methods previously used to evaluate voluntary blinking. The advantages of the high-speed camera method over the others are discussed, thereby supporting the high potential usefulness of the method in clinical research.
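
    The kinematic quantities reported above can be derived from an aperture-versus-time series in a few lines. The following Python sketch is illustrative only (array names are assumed) and uses the study's 600 frames-per-second sampling:

        import numpy as np

        FPS = 600.0  # frame rate used in the study

        def blink_kinematics(aperture_mm):
            """Speed metrics from one blink's palpebral-aperture series (mm)."""
            t = np.arange(len(aperture_mm)) / FPS
            speed = np.gradient(aperture_mm, t)   # mm/s; negative while closing
            return {
                "duration_s": float(t[-1]),
                "peak_closing_speed": float(speed.min()),   # most negative value
                "peak_opening_speed": float(speed.max()),
                "mean_abs_speed": float(np.abs(speed).mean()),
            }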

  15. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2-deg deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables
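
    A minimal sketch of the kinematic idea in Python, assuming the tracked point's pose is already expressed as a 4 x 4 homogeneous transform in the camera-base frame (the control law here is simplified to a pure deadband check; the paper's implementation combines linear and bang-bang control):

        import numpy as np

        DEADBAND_DEG = 2.0   # suppresses continuous motion ("seasickness")

        def pan_tilt_command(T_base_target, pan_now, tilt_now):
            """Pan/tilt increments that centre the target, from position sensors only."""
            x, y, z = T_base_target[:3, 3]                   # target position
            pan = np.degrees(np.arctan2(y, x))
            tilt = np.degrees(np.arctan2(z, np.hypot(x, y)))
            dpan = pan - pan_now if abs(pan - pan_now) > DEADBAND_DEG else 0.0
            dtilt = tilt - tilt_now if abs(tilt - tilt_now) > DEADBAND_DEG else 0.0
            return dpan, dtilt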

  16. Sky camera geometric calibration using solar observations

    Science.gov (United States)

    Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan

    2016-09-01

    A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. Calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
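
    For orientation, fitting such a model to detected sun positions can be sketched as a small least-squares problem. The Python below is a simplified illustration, not the paper's calibration procedure: it assumes a zenith-pointing camera with only a roll about the optical axis, sun zenith/azimuth angles precomputed by a solar position algorithm, and an equisolid projection r = 2f sin(theta/2):

        import numpy as np
        from scipy.optimize import least_squares

        def project(params, zenith, azimuth):
            """Sun zenith/azimuth (rad) -> pixel coordinates, equisolid model."""
            cx, cy, f, rot = params
            r = 2.0 * f * np.sin(zenith / 2.0)
            return np.stack([cx + r * np.cos(azimuth + rot),
                             cy + r * np.sin(azimuth + rot)], axis=1)

        def calibrate(sun_px, zenith, azimuth):
            """Fit image centre, focal length and roll to N detected sun positions."""
            resid = lambda p: (project(p, zenith, azimuth) - sun_px).ravel()
            p0 = np.array([sun_px[:, 0].mean(), sun_px[:, 1].mean(), 500.0, 0.0])
            return least_squares(resid, p0).x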

  17. Thermal characterization of a NIR hyperspectral camera

    Science.gov (United States)

    Parra, Francisca; Meza, Pablo; Pezoa, Jorge E.; Torres, Sergio N.

    2011-11-01

    The accuracy achieved by applications employing hyperspectral data collected by hyperspectral cameras depends heavily on a proper estimation of the true spectral signal. Beyond question, proper knowledge about the sensor response is key in this process. It is argued here that the common first order representation for hyperspectral NIR sensors does not accurately represent their thermal wavelength-dependent response, hence calling for more sophisticated and precise models. In this work, a wavelength-dependent, nonlinear model for a near infrared (NIR) hyperspectral camera is proposed based on its experimental characterization. Experiments have shown that when temperature is used as the input signal, the camera response is almost linear at low wavelengths, while as the wavelength increases the response becomes exponential. This wavelength-dependent behavior is attributed to the nonlinear responsivity of the sensors in the NIR spectrum. As a result, the proposed model considers different nonlinear input/output responses at different wavelengths. To complete the representation, both the nonuniform response of neighboring detectors in the camera and the time varying behavior of the input temperature have also been modeled. The experimental characterization and the proposed model assessment have been conducted using a NIR hyperspectral camera in the range of 900 to 1700 [nm] and a black body radiator source. The proposed model was utilized to successfully compensate for both: (i) the nonuniformity noise inherent to the NIR camera, and (ii) the striping noise induced by the nonuniformity and the scanning process of the camera while rendering hyperspectral images.

  20. Acceptance tests of a new gamma camera

    International Nuclear Information System (INIS)

    For best patient service, a QA programme is needed to produce quantitative/qualitative data and keep records of the results and equipment faults. Gamma cameras must be checked against the manufacturer's specifications. The service manual is usually useful to achieve this goal. Acceptance tests are very important not only to accept a new gamma camera system for routine clinical use but also as a reference for future measurements. In this study, acceptance tests were performed for a new gamma camera in our department. It is a General Electric MG system with two detectors and two collimators: low energy general purpose (LEGP) and medium energy general purpose (MEGP). All intrinsic calibrations and corrections were done by the service engineer at installation (PM tune, dynamic correction, energy calibration, geometric calibration, energy correction, linearity correction and second order corrections). After installation, calibrations and corrections, a close physical inspection of the mechanical and electrical safety aspects of the cameras was done by the responsible physicist of the department. The planar system is based on measurement of system uniformity, resolution/linearity and multiple window spatial registration. All test procedures were performed according to NEMA procedures developed by the manufacturer. Intrinsic uniformity: NEMA uniformity was measured first by using the service manual, and then further uniformity images were acquired with 99mTc, 131I, 201Tl and 67Ga. They were evaluated qualitatively and quantitatively, but non-uniformities were observed, especially for detector II. The service engineers repeated all tests and made the necessary corrections. We repeated all the intrinsic uniformity tests. 99mTc intrinsic images were also acquired at 'no correction', 'no energy correction', 'no linearity correction', 'all corrections' and '±10% off peak', and compared. Extrinsic uniformity: At the beginning, collimators were checked for defects
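
    For the quantitative side of such uniformity tests, NEMA-style figures of merit are simple to compute. A hedged Python sketch, assuming a flood-field array already cropped to the useful field of view and smoothed with the NEMA kernel (this is illustrative, not the department's actual software):

        import numpy as np

        def integral_uniformity(flood):
            """NEMA integral uniformity (%) over the useful field of view."""
            return 100.0 * (flood.max() - flood.min()) / (flood.max() + flood.min())

        def differential_uniformity(flood, window=5):
            """Worst local (max-min)/(max+min) over 5-pixel runs, rows and columns."""
            worst = 0.0
            for plane in (flood, flood.T):
                for line in plane:
                    for i in range(len(line) - window + 1):
                        seg = line[i:i + window]
                        worst = max(worst, (seg.max() - seg.min()) / (seg.max() + seg.min()))
            return 100.0 * worst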

  1. A new depth measuring method for stereo camera based on converted relative extrinsic parameters

    Science.gov (United States)

    Song, Xiaowei; Yang, Lei; Wu, Yuanzhao; Liu, Zhong

    2013-08-01

    This paper presents a new depth measuring method for the dual-view stereo camera based on converted relative extrinsic parameters. The relative extrinsic parameters between the left and right cameras, which are obtained by stereo camera calibration, indicate the geometric relationships among the left principal point, the right principal point and the convergent point. Furthermore, the geometry consisting of the corresponding points and the object can be obtained by converting between the corresponding points and the principal points. Therefore, the depth of the object can be calculated based on the obtained geometry. The correctness of the proposed method has been proved in 3ds Max, and its validity has been verified on a binocular stereo system of Flea2 cameras. We compared our experimental results with a popular RGB-D camera (e.g. Kinect). The comparison results show that our method is reliable and efficient, without requiring epipolar rectification.
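
    The paper's conversion-based method is specific, but the underlying geometry is plain two-ray triangulation from the relative extrinsics. A generic Python sketch for orientation (intrinsic matrices K and relative pose R, t are assumed inputs; this is not the authors' exact algorithm):

        import numpy as np

        def triangulate(pl, pr, K_l, K_r, R, t):
            """3D point (left-camera frame) from one left/right pixel correspondence."""
            rl = np.linalg.inv(K_l) @ np.array([pl[0], pl[1], 1.0])       # left ray
            rr = R @ (np.linalg.inv(K_r) @ np.array([pr[0], pr[1], 1.0])) # right ray, left frame
            # closest approach of the two rays: solve a*rl - b*rr = t
            A = np.stack([rl, -rr], axis=1)
            a, b = np.linalg.lstsq(A, t, rcond=None)[0]
            return 0.5 * (a * rl + (t + b * rr))   # midpoint; its z component is the depth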

  2. Multi-Kinect v2 Camera Based Monitoring System for Radiotherapy Patient Safety.

    Science.gov (United States)

    Santhanam, Anand P; Min, Yugang; Kupelian, Patrick; Low, Daniel

    2016-01-01

    3D Kinect camera systems are essential for real-time imaging of the 3D treatment space, which consists of both the patient anatomy and the treatment equipment setup. In this paper, we present the technical details of a 3D treatment room monitoring system that employs a scalable number of calibrated and coregistered Kinect v2 cameras. The monitoring system tracks radiation gantry and treatment couch positions, and tracks the patient and immobilization accessories. The number and positions of the cameras were selected to avoid line-of-sight issues and to adequately cover the treatment setup. The cameras were calibrated with a calibration error of 0.1 mm. Our tracking system evaluation shows that both gantry and patient motion could be acquired at a rate of 30 frames per second. The transformations between the cameras yielded a 3D treatment space accuracy of < 2 mm error in a radiotherapy setup within 500 mm around the isocenter. PMID:27046604

  3. Limitations of recreational camera traps for wildlife management and conservation research: a practitioner's perspective.

    Science.gov (United States)

    Newey, Scott; Davidson, Paul; Nazir, Sajid; Fairhurst, Gorry; Verdicchio, Fabio; Irvine, R Justin; van der Wal, René

    2015-11-01

    The availability of affordable 'recreational' camera traps has dramatically increased over the last decade. We present survey results which show that many conservation practitioners use cheaper 'recreational' units for research rather than more expensive 'professional' equipment. We present our perspective of using two popular models of 'recreational' camera trap for ecological field-based studies. The models used (for >2 years) presented us with a range of practical problems at all stages of their use, including deployment, operation, and data management, which collectively crippled data collection and limited opportunities for quantifying the key issues arising. Our experiences demonstrate that prospective users need a sufficient understanding of the limitations of camera trap technology, which we communicate here. While the merits of different camera traps will be study specific, the performance of more expensive 'professional' models may prove more cost-effective in the long term when using camera traps for research.

  4. "Calibration-on-the-spot'': How to calibrate an EMCCD camera from its images

    DEFF Research Database (Denmark)

    Mortensen, Kim; Flyvbjerg, Henrik

    In localization-based microscopy, super-resolution is obtained by analyzing isolated diffraction-limited spots imaged, typically, with EMCCD cameras. To compare experiments and calculate localization precision, the photon-to-signal amplification factor is needed but unknown without a calibration of the camera. Here we show how this can be done post festum from just a recorded image. We demonstrate this (i) theoretically, mathematically, (ii) by analyzing images recorded with an EMCCD camera, and (iii) by analyzing simulated EMCCD images for which we know the true values of parameters. In summary, our method of calibration-on-the-spot allows calibration of a camera with unknown settings from old images on file, with no other info needed. Consequently, calibration-on-the-spot also makes future camera calibrations before and after measurements unnecessary, because the calibration is encoded in recorded images.
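
    The record above concerns a single-image fit; for context, the classic two-frame photon-transfer alternative it replaces can be sketched in a few lines of Python (an assumption-laden toy, not Mortensen and Flyvbjerg's method):

        import numpy as np

        def photon_transfer_gain(frame_a, frame_b):
            """Crude gain (ADU per photoelectron) from two equally lit flat frames.

            Differencing removes fixed-pattern noise; var(diff)/2 ~ gain * mean.
            Offset and read noise are ignored, and EMCCD multiplication noise
            roughly doubles the slope, so halve the result for an EM register.
            """
            diff = frame_a.astype(float) - frame_b.astype(float)
            mean_signal = 0.5 * (frame_a.mean() + frame_b.mean())
            return (diff.var() / 2.0) / mean_signal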

  5. A solid state streak camera

    Science.gov (United States)

    Kleinfelder, Stuart; Kwiatkowski, Kris; Shah, Ashish

    2005-03-01

    A monolithic solid-state streak camera has been designed and fabricated in a standard 0.35 μm, 3.3V, thin-oxide digital CMOS process. It consists of a 1-D linear array of 150 integrated photodiodes, followed by fast analog buffers and on-chip, 150-deep analog frame storage. Each pixel's front-end consists of an n-diffusion / p-well photodiode, with fast complementary reset transistors, and a source-follower buffer. Each buffer drives a line of 150 sample circuits per pixel, with each sample circuit consisting of an n-channel sample switch, a 0.1 pF double-polysilicon sample capacitor, a reset switch to definitively clear the capacitor, and a multiplexed source-follower readout buffer. Fast on-chip sample clock generation was designed using a self-timed break-before-make operation that ensures the maximum time for sample settling. The electrical analog bandwidth of each channel's buffer and sampling circuits was designed to exceed 1 GHz. Sampling speeds of 400 M-frames/s have been achieved using electrical input signals. Operation with optical input signals has been demonstrated at 100 MHz sample rates. Sample output multiplexing allows the readout of all 22,500 samples (150 pixels times 150 samples per pixel) in about 3 ms. The chip's output range was a maximum of 1.48 V on a 3.3V supply voltage, corresponding to a maximum 2.55 V swing at the photodiode. Time-varying output noise was measured to be 0.51 mV, rms, at 100 MHz, for a dynamic range of ~11.5 bits, rms. Circuit design details are presented, along with the results of electrical measurements and optical experiments with fast pulsed laser light sources at several wavelengths.

  6. Determining camera parameters for round glassware measurements

    Science.gov (United States)

    Baldner, F. O.; Costa, P. B.; Gomes, J. F. S.; Filho, D. M. E. S.; Leta, F. R.

    2015-01-01

    Nowadays there are many types of accessible cameras, including digital single lens reflex ones. Although these cameras are not usually employed in machine vision applications, they can be an interesting choice. However, these cameras have many available parameters to be chosen by the user, and it may be difficult to select the best of these in order to acquire images with the needed metrological quality. This paper proposes a methodology to select the set of parameters that will supply a machine vision system with images of the needed quality, considering the measurements required of laboratory glassware.

  7. Uncertainty of temperature measurement with thermal cameras

    Science.gov (United States)

    Chrzanowski, Krzysztof; Matyszkiel, Robert; Fischer, Joachim; Barela, Jaroslaw

    2001-06-01

    All main international metrological organizations propose a parameter called uncertainty as a measure of the accuracy of measurements. A mathematical model that enables the calculation of the uncertainty of temperature measurement with thermal cameras is presented. The standard uncertainty or the expanded uncertainty of the temperature measurement of the tested object can be calculated when the bounds within which the real object effective emissivity εr, the real effective background temperature Tba(r), and the real effective atmospheric transmittance τa(r) lie can be estimated, and when the intrinsic uncertainty of the thermal camera and the relative spectral sensitivity of the thermal camera are known.
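
    The combined uncertainty then follows the usual GUM propagation rule. A hedged Python sketch (the sensitivity coefficients must come from the camera's measurement equation, which the abstract does not reproduce; names here are assumptions):

        import numpy as np

        def temperature_uncertainty(sens, u, u_camera):
            """Combined standard uncertainty of the indicated object temperature.

            sens     -- partial derivatives dT/d(emissivity), dT/d(background T),
                        dT/d(atmospheric transmittance) at the working point
            u        -- standard uncertainties of those three quantities
            u_camera -- intrinsic uncertainty of the thermal camera itself
            """
            sens, u = np.asarray(sens, float), np.asarray(u, float)
            return float(np.sqrt(np.sum((sens * u) ** 2) + u_camera ** 2))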

  8. Screen-Camera Calibration Using Gray Codes

    OpenAIRE

    FRANCKEN, Yannick; Hermans, Chris; Bekaert, Philippe

    2009-01-01

    In this paper we present a method for efficient calibration of a screen-camera setup, in which the camera is not directly facing the screen. A spherical mirror is used to make the screen visible to the camera. Using Gray code illumination patterns, we can uniquely identify the reflection of each screen pixel on the imaged spherical mirror. This allows us to compute a large set of 2D-3D correspondences, using only two sphere locations. Compared to previous work, this means we require less manu...
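
    Generating the binary-reflected Gray-code stripe patterns used for this kind of pixel labelling is straightforward; an illustrative Python sketch (vertical stripes only, uniquely encoding the column index; a horizontal set for rows is produced the same way):

        import numpy as np

        def gray_code_patterns(width, height):
            """Stripe images whose per-pixel bit sequence encodes the column index."""
            n_bits = int(np.ceil(np.log2(width)))
            cols = np.arange(width)
            gray = cols ^ (cols >> 1)                  # binary-reflected Gray code
            return [np.tile(((gray >> b) & 1).astype(np.uint8) * 255, (height, 1))
                    for b in range(n_bits - 1, -1, -1)]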

  9. Fuzzy logic control for camera tracking system

    Science.gov (United States)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.

  10. Task Panel Sensing with a Movable Camera

    Science.gov (United States)

    Wolfe, William J.; Mathis, Donald W.; Magee, Michael; Hoff, William A.

    1990-03-01

    This paper discusses the integration of model based computer vision with a robot planning system. The vision system deals with structured objects with several movable parts (the "Task Panel"). The robot planning system controls a T3-746 manipulator that has a gripper and a wrist mounted camera. There are two control functions: move the gripper into position for manipulating the panel fixtures (doors, latches, etc.), and move the camera into positions preferred by the vision system. This paper emphasizes the issues related to repositioning the camera for improved viewpoints.

  11. Detecting method of subjects' 3D positions and experimental advanced camera control system

    Science.gov (United States)

    Kato, Daiichiro; Abe, Kazuo; Ishikawa, Akio; Yamada, Mitsuho; Suzuki, Takahito; Kuwashima, Shigesumi

    1997-04-01

    Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality or tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcasting cameramen can provide these pictures. To develop an intelligent robot camera with these abilities, we need to clearly understand how a broadcasting cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real-time analyzer to study a cameraman's work and his gaze movements at studios and during sports broadcasts. This time, we have developed a method for detecting subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) Two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates. (2) The system is capable of driving a camera based on camera movement data obtained by a real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how players moved. These results will also be reported.

  12. Surveillance of a 2D plane area with 3D deployed cameras.

    Science.gov (United States)

    Fu, Yi-Ge; Zhou, Jie; Deng, Lei

    2014-01-01

    As the use of camera networks has expanded, camera placement to satisfy quality assurance parameters (such as a good coverage ratio, acceptable resolution constraints, a cost as low as possible, etc.) has become an important problem. The discrete camera deployment problem is NP-hard, and many heuristic methods have been proposed to solve it, most of which make very simple assumptions. In this paper, we propose a probability-inspired binary Particle Swarm Optimization (PI-BPSO) algorithm to solve a homogeneous camera network placement problem. We model the problem under more realistic assumptions: (1) deploy the cameras in 3D space while the surveillance area is restricted to a 2D ground plane; (2) deploy the minimal number of cameras to obtain maximum visual coverage under additional constraints, such as the field of view (FOV) of the cameras and minimum resolution. We can simultaneously optimize the number and the configuration of the cameras through the introduction of a regularization term in the cost function. The simulation results showed the effectiveness of the proposed PI-BPSO algorithm.
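
    For flavour, a generic binary PSO over candidate camera sites might look like the following Python sketch. This is a textbook BPSO, not the paper's PI-BPSO; the coverage function, which must encode the FOV and resolution constraints, is assumed to be supplied by the user:

        import numpy as np

        rng = np.random.default_rng(0)

        def binary_pso(coverage, n_sites, n_particles=30, iters=200, lam=0.2):
            """Maximize coverage minus a camera-count penalty over 0/1 site vectors."""
            fit = lambda s: coverage(s) - lam * s.sum() / n_sites
            x = rng.integers(0, 2, (n_particles, n_sites))
            v = rng.normal(0.0, 1.0, x.shape)
            pbest, pfit = x.copy(), np.array([fit(s) for s in x])
            gbest = pbest[pfit.argmax()].copy()
            for _ in range(iters):
                r1, r2 = rng.random(v.shape), rng.random(v.shape)
                v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
                x = (rng.random(v.shape) < 1.0 / (1.0 + np.exp(-v))).astype(int)  # sigmoid
                f = np.array([fit(s) for s in x])
                better = f > pfit
                pbest[better], pfit[better] = x[better], f[better]
                gbest = pbest[pfit.argmax()].copy()
            return gbest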

  13. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.

    Science.gov (United States)

    Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki

    2016-01-01

    Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system. PMID:27347961

  14. Security camera resolution measurements: Horizontal TV lines versus modulation transfer function measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Birch, Gabriel Carisle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Griffin, John Clark [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    The horizontal television lines (HTVL) metric has been the primary quantity used by division 6000 for camera resolution in high consequence security systems. This document shows that HTVL measurements are fundamentally insufficient as a metric to determine camera resolution, and proposes a quantitative, standards-based methodology: measuring the camera system modulation transfer function (MTF), the most common and accepted metric of resolution in the optical science community. Because HTVL calculations are easily misinterpreted or poorly defined, we present several scenarios in which HTVL is frequently reported, and discuss their problems. The MTF metric is discussed, and scenarios are presented with calculations showing the application of such a metric.
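
    The MTF itself is easy to compute once a line spread function has been measured. A minimal Python sketch, assuming the LSF is obtained elsewhere (for instance as the derivative of a slanted-edge profile):

        import numpy as np

        def mtf_from_lsf(lsf, pixel_pitch_mm):
            """Spatial frequencies (cycles/mm) and normalized MTF from a 1D LSF."""
            mtf = np.abs(np.fft.rfft(lsf - lsf.min()))
            mtf /= mtf[0]                              # normalize to 1 at zero frequency
            freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)
            return freqs, mtf

        # e.g. lsf = np.diff(edge_profile) for a measured edge response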

  15. Camera Based Navigation System with Augmented Reality

    Directory of Open Access Journals (Sweden)

    M. Marcu

    2012-06-01

    Full Text Available Nowadays smart mobile devices have enough processing power, memory, storage and always connected wireless communication bandwidth that makes them available for any type of application. Augmented reality (AR proposes a new type of applications that tries to enhance the real world by superimposing or combining virtual objects or computer generated information with it. In this paper we present a camera based navigation system with augmented reality integration. The proposed system aims to the following: the user points the camera of the smartphone towards a point of interest, like a building or any other place, and the application searches for relevant information about that specific place and superimposes the data over the video feed on the display. When the user moves the camera away, changing its orientation, the data changes as well, in real-time, with the proper information about the place that is now in the camera view.

  16. Research of Camera Calibration Based on DSP

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2013-09-01

    Full Text Available To take advantage of the high efficiency and stability of DSP in data processing and of the functions of the OpenCV library, this study puts forward a scheme for camera calibration in a DSP embedded system. An algorithm for camera calibration based on OpenCV is designed by analyzing the camera model and lens distortion. The port of EMCV to DSP is completed, and the calibration algorithm is migrated and optimized based on the CCS development environment and the DSP/BIOS system. While realizing the calibration function, this algorithm improves the efficiency of program execution and the precision of calibration, and lays the foundation for further research on visual localization based on DSP embedded systems.
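
    On the OpenCV side, the calibration step follows the standard chessboard recipe. A desktop Python sketch for orientation (file names and board geometry are placeholders; the paper's port runs the equivalent C code on the DSP):

        import cv2
        import numpy as np

        PATTERN, SQUARE_MM = (9, 6), 25.0     # inner corners, square size (assumed)
        objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

        obj_pts, img_pts = [], []
        for fname in ["calib_01.png", "calib_02.png"]:      # placeholder image files
            gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
            ok, corners = cv2.findChessboardCorners(gray, PATTERN)
            if ok:
                obj_pts.append(objp)
                img_pts.append(corners)

        rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts,
                                                 gray.shape[::-1], None, None)
        print("RMS reprojection error:", rms)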

  17. Increase in the Array Television Camera Sensitivity

    Science.gov (United States)

    Shakhrukhanov, O. S.

    A simple adder circuit for successive television frames, which considerably increases the sensitivity of such radiation detectors, is described using the array television camera QN902K as an example.
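
    The principle is plain frame integration: summing N frames of a static scene grows the signal N times while uncorrelated noise grows only sqrt(N) times, for an SNR gain of sqrt(N). A one-function Python sketch of the digital equivalent:

        import numpy as np

        def integrate_frames(frames):
            """Average N successive frames of a static scene (SNR gain ~ sqrt(N))."""
            acc = np.zeros_like(frames[0], dtype=np.float64)
            for f in frames:
                acc += f
            return acc / len(frames)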

  18. Traffic Cameras, MDTA Cameras, Camera locations at MDTA, Camera location inside the tunnel (SENSITIVE), Published in 2010, 1:1200 (1in=100ft) scale, Maryland Transportation Authority.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Traffic Cameras dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Field Survey/GPS information as of 2010. It is described as...

  19. Ge Quantum Dot Infrared Imaging Camera Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Luna Innovations Incorporated proposes to develop a high performance Ge quantum dots-based infrared (IR) imaging camera on Si substrate. The high sensitivity, large...

  20. Portable mini gamma camera for medical applications

    CERN Document Server

    Porras, E; Benlloch, J M; El-Djalil-Kadi-Hanifi, M; López, S; Pavon, N; Ruiz, J A; Sánchez, F; Sebastiá, A

    2002-01-01

    A small, portable and low-cost gamma camera for medical applications has been developed and clinically tested. This camera, based on a scintillator crystal and a Position Sensitive Photo-Multiplier Tube, has a useful field of view of 4.6 cm diameter and provides 2.2 mm of intrinsic spatial resolution. Its mobility and light weight allow it to reach the patient from any desired direction. This camera images small organs with high efficiency and so addresses the demand for devices for specific clinical applications. In this paper, we present the camera and briefly describe the procedures that led us to choose its configuration and the image reconstruction method. The clinical tests and diagnostic capability are also presented and discussed.

  1. A Survey of Catadioptric Omnidirectional Camera Calibration

    Directory of Open Access Journals (Sweden)

    Yan Zhang

    2013-02-01

    Full Text Available For a dozen years, computer vision has become more and more popular; the omnidirectional camera, with its larger field of view, has been widely used in many fields, such as robot navigation, visual surveillance, virtual reality, three-dimensional reconstruction, and so on. Camera calibration is an essential step to obtain three-dimensional geometric information from a two-dimensional image. Meanwhile, the omnidirectional camera image has catadioptric distortion, which needs to be corrected in many applications; thus the study of such camera calibration methods has important theoretical significance and practical applications. This paper firstly introduces the research status of catadioptric omnidirectional imaging systems; then the image formation process of the catadioptric omnidirectional imaging system is given; finally, a simple classification of omnidirectional imaging methods is given, and the advantages and disadvantages of these methods are discussed.

  2. Compact stereo endoscopic camera using microprism arrays.

    Science.gov (United States)

    Yang, Sung-Pyo; Kim, Jae-Jun; Jang, Kyung-Won; Song, Weon-Kook; Jeong, Ki-Hun

    2016-03-15

    This work reports a microprism array (MPA) based compact stereo endoscopic camera with a single image sensor. The MPAs were monolithically fabricated by using two-step photolithography and geometry-guided resist reflow to form an appropriate prism angle for stereo image pair formation. The fabricated MPAs were transferred onto a glass substrate with a UV curable resin replica by using polydimethylsiloxane (PDMS) replica molding and then successfully integrated in front of a single camera module. The stereo endoscopic camera with MPA splits an image into two stereo images and successfully demonstrates the binocular disparities between the stereo image pairs for objects with different distances. This stereo endoscopic camera can serve as a compact and 3D imaging platform for medical, industrial, or military uses.

  3. Selecting the Right Camera for Your Desktop.

    Science.gov (United States)

    Rhodes, John

    1997-01-01

    Provides an overview of camera options and selection criteria for desktop videoconferencing. Key factors in image quality are discussed, including lighting, resolution, and signal-to-noise ratio; and steps to improve image quality are suggested. (LRW)

  4. Vacuum compatible miniature CCD camera head

    Science.gov (United States)

    Conder, Alan D.

    2000-01-01

    A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04", for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.

  5. SHOW

    Institute of Scientific and Technical Information of China (English)

    2013-01-01

    Shoes are like the person: from someone's choice of shoes, you can tell where their interests and taste lie. Whether it is a sports star, an entertainer, or anyone else whose name you know, and whether on the court, at a fashion show, or on any street corner where you might happen to meet them, you will see NIKE, adidas, or even luxury LV.

  8. AUTOMATIC THEFT SECURITY SYSTEM (SMART SURVEILLANCE CAMERA)

    OpenAIRE

    Veena G.S; Chandrika Prasad; Khaleel K

    2013-01-01

    The proposed work aims to create a smart application camera, with the intention of eliminating the need for a human presence to detect any unwanted sinister activities, such as theft in this case. Spread across the campus are certain valuable biometric identification systems at arbitrary locations. The application monitors these systems (hereafter referred to as "object") using our smart camera system based on an OpenCV platform. By using OpenCV Haar Training, employing the Vio...
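
    A minimal OpenCV detection loop of the kind such a system builds on might look as follows (Python for brevity; the shipped frontal-face cascade stands in for the custom-trained object cascade, and the alarm hook is hypothetical):

        import cv2

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        cap = cv2.VideoCapture(0)                  # surveillance camera stream

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(hits) > 0:
                pass                               # e.g. log the event, raise an alarm
        cap.release()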

  9. The Large APEX Bolometer Camera LABOCA

    CERN Document Server

    Siringo, G; Kovács, A; Schuller, F; Weiss, A; Esch, W; Gemuend, H P; Jethava, N; Lundershausen, G; Colin, A; Guesten, R; Menten, K M; Beelen, A; Bertoldi, F; Beeman, J W; Haller, E E

    2009-01-01

    The Large APEX Bolometer Camera, LABOCA, has been commissioned for operation as a new facility instrument at the Atacama Pathfinder Experiment 12 m submillimeter telescope. This new 295-bolometer total power camera, operating in the 870 micron atmospheric window, combined with the high efficiency of APEX and the excellent atmospheric transmission at the site, offers unprecedented capability in mapping submillimeter continuum emission for a wide range of astronomical purposes.

  10. CMOS Camera Array With Onboard Memory

    Science.gov (United States)

    Gat, Nahum

    2009-01-01

    A compact CMOS (complementary metal oxide semiconductor) camera system has been developed with high resolution (1.3 Megapixels), a USB (universal serial bus) 2.0 interface, and an onboard memory. Exposure times, and other operating parameters, are sent from a control PC via the USB port. Data from the camera can be received via the USB port and the interface allows for simple control and data capture through a laptop computer.

  11. Imaging camera with multiwire proportional chamber

    International Nuclear Information System (INIS)

    The camera for imaging radioisotope distributions, for use in nuclear medicine or for other applications, claimed in the patent, is provided with two multiwire lattices for the x-coordinate connected to a first coincidence circuit, and two multiwire lattices for the y-coordinate connected to a second coincidence circuit. This arrangement eliminates the need for a collimator and increases camera sensitivity while reducing production cost. (Ha)

  12. An imaging system for a gamma camera

    International Nuclear Information System (INIS)

    A detailed description is given of a novel gamma camera which is designed to produce better images than the conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable two-dimensional position identification. Details of the electronic processing circuits are given, and the problems and limitations introduced by noise are discussed in full. (U.K.)

  13. Image noise induced errors in camera positioning

    OpenAIRE

    G. Chesi; Hung, YS

    2007-01-01

    The problem of evaluating worst-case camera positioning error induced by unknown-but-bounded (UBB) image noise for a given object-camera configuration is considered. Specifically, it is shown that upper bounds to the rotation and translation worst-case error for a certain image noise intensity can be obtained through convex optimizations. These upper bounds, contrary to lower bounds provided by standard optimization tools, allow one to design robust visual servo systems. © 2007 IEEE.

  14. A stereoscopic lens for digital cinema cameras

    Science.gov (United States)

    Lipton, Lenny; Rupkalvis, John

    2015-03-01

    Live-action stereoscopic feature films are, for the most part, produced using a costly post-production process to convert planar cinematography into stereo-pair images and are only occasionally shot stereoscopically using bulky dual-cameras that are adaptations of the Ramsdell rig. The stereoscopic lens design described here might very well encourage more live-action image capture because it uses standard digital cinema cameras and workflow to save time and money.

  15. A comparison of colour micrographs obtained with a charged couple devise (CCD) camera and a 35-mm camera

    DEFF Research Database (Denmark)

    Pedersen, Mads Møller; Smedegaard, Jesper; Jensen, Peter Koch;

    2005-01-01

    ophthalmology, colour CCD camera, colour film, digital imaging, resolution, micrographs, histopathology, light microscopy

  16. Automatic Calibration of Stereo-Cameras Using Ordinary Chess-Board Patterns

    Science.gov (United States)

    Prokos, A.; Kalisperakis, I.; Petsa, E.; Karras, G.

    2012-07-01

    Automation of camera calibration is facilitated by recording coded 2D patterns. Our toolbox for automatic camera calibration using images of simple chess-board patterns is freely available on the Internet. But it is unsuitable for stereo-cameras whose calibration implies recovering camera geometry and their true-to-scale relative orientation. In contrast to all reported methods requiring additional specific coding to establish an object space coordinate system, a toolbox for automatic stereo-camera calibration relying on ordinary chess-board patterns is presented here. First, the camera calibration algorithm is applied to all image pairs of the pattern to extract nodes of known spacing, order them in rows and columns, and estimate two independent camera parameter sets. The actual node correspondences on stereo-pairs remain unknown. Image pairs of a textured 3D scene are exploited for finding the fundamental matrix of the stereo-camera by applying RANSAC to point matches established with the SIFT algorithm. A node is then selected near the centre of the left image; its match on the right image is assumed as the node closest to the corresponding epipolar line. This yields matches for all nodes (since these have already been ordered), which should also satisfy the 2D epipolar geometry. Measures for avoiding mismatching are taken. With automatically estimated initial orientation values, a bundle adjustment is performed constraining all pairs on a common (scaled) relative orientation. Ambiguities regarding the actual exterior orientations of the stereo-camera with respect to the pattern are irrelevant. Results from this automatic method show typical precisions not above 1/4 pixels for 640×480 web cameras.
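
    The SIFT-plus-RANSAC step for the fundamental matrix is standard and compact in OpenCV. A Python sketch under the usual ratio-test assumptions (this is illustrative, not the authors' toolbox code):

        import cv2
        import numpy as np

        def fundamental_from_pair(img_l, img_r):
            """F matrix of a stereo pair from SIFT matches filtered by RANSAC."""
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(img_l, None)
            k2, d2 = sift.detectAndCompute(img_r, None)
            pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
            good = [m for m, n in pairs if m.distance < 0.75 * n.distance]  # ratio test
            p1 = np.float32([k1[m.queryIdx].pt for m in good])
            p2 = np.float32([k2[m.trainIdx].pt for m in good])
            F, inliers = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.999)
            return F, inliers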

  17. On the accuracy potential of focused plenoptic camera range determination in long distance operation

    Science.gov (United States)

    Sardemann, Hannes; Maas, Hans-Gerd

    2016-04-01

    Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology and computer hardware have boosted the field and led to several commercially available ready-to-use cameras. Beyond their popular option of a posteriori image focusing or total-focus image generation, their basic ability to generate 3D information from single camera imagery is a very beneficial option for certain applications. The paper will first present some fundamentals on the design and history of plenoptic cameras and will describe depth determination from plenoptic camera image data. It will then present an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close range applications, we will focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors on the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for the application fields of real-time robotics like autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.

  18. Lag Camera: A Moving Multi-Camera Array for Scene-Acquisition

    Directory of Open Access Journals (Sweden)

    Yi Xu

    2007-04-01

    Full Text Available Many applications, such as telepresence, virtual reality, and interactive walkthroughs, require a three-dimensional (3D) model of real-world environments. Methods such as lightfields, geometric reconstruction and computer vision use cameras to acquire visual samples of the environment and construct a model. Unfortunately, obtaining models of real-world locations is a challenging task. In particular, important environments are often actively in use, containing moving objects, such as people entering and leaving the scene. The methods previously listed have difficulty in capturing the color and structure of the environment while in the presence of moving and temporary occluders. We describe a class of cameras called lag cameras. The main concept is to generalize a camera to take samples over space and time. Such a camera can easily and interactively detect moving objects while continuously moving through the environment. Moreover, since both the lag camera and occluder are moving, the scene behind the occluder is captured by the lag camera even from viewpoints where the occluder lies in between the lag camera and the hidden scene. We demonstrate an implementation of a lag camera, complete with analysis and captured environments.

  19. High Resolution Camera for Mapping Titan Surface

    Science.gov (United States)

    Reinhardt, Bianca

    2011-01-01

    Titan, Saturn's largest moon, has a dense atmosphere and is the only object besides Earth to have stable liquids at its surface. The Cassini/Huygens mission has revealed the extraordinary breadth of geological processes shaping its surface. Further study requires high resolution imaging of the surface, which is hampered by light absorption by methane and scattering from aerosols. The Visual and Infrared Mapping Spectrometer (VIMS) onboard the Cassini spacecraft has demonstrated that Titan's surface can be observed within several windows in the near infrared, allowing us to process several regions in order to create a geological map and to determine the morphology. Specular reflections monitored on the lakes of the North Pole show little scattering at 5 microns, which, combined with the present study of Titan's northern polar area, refutes the paradigm that only radar can achieve high resolution mapping of the surface. The present data allowed us to monitor the evolution of lakes, to identify additional lakes at the North Pole, to examine the hypothesis of Titan's non-synchronous rotation, and to analyze the albedo of the North Pole surface. Future missions to Titan could carry a camera with 5 micron detectors and a carbon fiber radiator for weight reduction.

  20. Toward Long Distance, Sub-diffraction Imaging Using Coherent Camera Arrays

    CERN Document Server

    Holloway, Jason; Sharma, Manoj Kumar; Matsuda, Nathan; Horstmeyer, Roarke; Cossairt, Oliver; Veeraraghavan, Ashok

    2015-01-01

    In this work, we propose using camera arrays coupled with coherent illumination as an effective method of improving spatial resolution in long distance images by a factor of ten and beyond. Recent advances in ptychography have demonstrated that one can image beyond the diffraction limit of the objective lens in a microscope. We demonstrate a similar imaging system to image beyond the diffraction limit in long range imaging. We emulate a camera array with a single camera attached to an X-Y translation stage. We show that an appropriate phase retrieval based reconstruction algorithm can be used to effectively recover the lost high resolution details from the multiple low resolution acquired images. We analyze the effects of noise, the required degree of image overlap, and the effect of increasing synthetic aperture size on the reconstructed image quality. We show that coherent camera arrays have the potential to greatly improve imaging performance. Our simulations show resolution gains of 10x and more are achievable.

  1. New developments to improve SO2 cameras

    Science.gov (United States)

    Luebcke, P.; Bobrowski, N.; Hoermann, C.; Kern, C.; Klein, A.; Kuhn, J.; Vogel, L.; Platt, U.

    2012-12-01

    The SO2 camera is a remote sensing instrument that measures the two-dimensional distribution of SO2 (column densities) in volcanic plumes using scattered solar radiation as a light source. From these data SO2 fluxes can be derived. The high time resolution, of the order of 1 Hz, allows correlating SO2 flux measurements with other traditional volcanological measurement techniques, e.g., seismology. In recent years the application of SO2 cameras has increased; however, there is still potential to improve the instrumentation. First of all, the influence of aerosols and ash in the volcanic plume can lead to large errors in the calculated SO2 flux if not accounted for. We present two different concepts to deal with the influence of ash and aerosols. The first approach uses a co-axial DOAS system that was added to a two-filter SO2 camera. The camera uses Filter A (peak transmission centred around 315 nm) to measure the optical density of SO2 and Filter B (centred around 330 nm) to correct for the influence of ash and aerosol. The DOAS system simultaneously performs spectroscopic measurements in a small area of the camera's field of view and gives additional information to correct for these effects. Comparing the optical densities for the two filters with the SO2 column density from the DOAS allows not only a much more precise calibration, but also conclusions to be drawn about the influence of ash and aerosol scattering. Measurement examples from Popocatépetl, Mexico, in 2011 are shown and interpreted. Another approach combines the SO2 camera measurement principle with the extremely narrow and periodic transmission of a Fabry-Pérot interferometer. The narrow transmission window allows individual SO2 absorption bands (or series of bands) to be selected as a substitute for Filter A. Measurements are therefore more selective to SO2. Instead of Filter B, as in classical SO2 cameras, the correction for aerosol can be performed by shifting the transmission window of the Fabry-Pérot interferometer.
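
    The two-filter evaluation sketched above reduces, per pixel, to differencing two optical densities. A schematic Python version of this standard SO2-camera arithmetic (the calibration slope, here from the DOAS or from gas cells, is an assumed input):

        import numpy as np

        def so2_column(img_a, bg_a, img_b, bg_b, cal_slope):
            """Apparent SO2 column from plume/background image pairs.

            img_a/bg_a -- plume and clear-sky images through Filter A (~315 nm)
            img_b/bg_b -- the same through Filter B (~330 nm, aerosol/ash only)
            """
            tau_a = -np.log(img_a / bg_a)     # SO2 + aerosol optical density
            tau_b = -np.log(img_b / bg_b)     # aerosol/ash optical density
            return cal_slope * (tau_a - tau_b)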

  2. Dynamic Human Body Modeling Using a Single RGB Camera.

    Science.gov (United States)

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-03-18

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones.

  3. Energy Sharing in the 2-Electron Attosecond Streak Camera

    CERN Document Server

    Price, H; Emmanouilidou, A

    2011-01-01

    Using the recently developed concept of the 2-electron streak camera (see NJP 12, 103024 (2010)), we have studied the energy sharing between the two ionizing electrons in single-photon double ionization of He(1s2s). We find that the most symmetric and asymmetric energy sharings correspond to different ionization dynamics, with the ion's Coulomb potential significantly influencing the latter. These differing dynamics for the two extreme energy sharings give rise to different patterns in asymptotic observables and different time delays between the emission of the two electrons. We show that the 2-electron streak camera resolves the time delays between the emission of the two electrons for different energy sharings.

  4. Pedestrian Counting with Occlusion Handling Using Stereo Thermal Cameras

    Directory of Open Access Journals (Sweden)

    Miklas S. Kristoffersen

    2016-01-01

    Full Text Available The number of pedestrians walking the streets or gathered in public spaces is a valuable piece of information for shop owners, city governments, event organizers and many others. However, automatic counting that takes place day and night is challenging due to changing lighting conditions and the complexity of scenes with many people occluding one another. To address these challenges, this paper introduces the use of a stereo thermal camera setup for pedestrian counting. We investigate the reconstruction of 3D points in a pedestrian street with two thermal cameras and propose an algorithm for pedestrian counting based on clustering and tracking of the 3D point clouds. The method is tested on two five-minute video sequences captured at a public event with a moderate density of pedestrians and heavy occlusions. The counting performance is compared to the manually annotated ground truth and shows success rates of 95.4% and 99.1% for the two sequences.
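
    The clustering-and-counting stage can be sketched compactly. The paper's exact clustering algorithm is not reproduced here, so the sketch below uses DBSCAN from scikit-learn as one standard density-based choice, applied to the reconstructed 3D points (an N x 3 array in metres); the real system additionally tracks clusters over time.

        import numpy as np
        from sklearn.cluster import DBSCAN

        def count_pedestrians(points_3d, eps=0.4, min_samples=30):
            # Group 3D points into person-sized clusters; label -1 marks
            # noise points that belong to no cluster.
            labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_3d)
            return len(set(labels) - {-1})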

  5. Dynamic Human Body Modeling Using a Single RGB Camera.

    Science.gov (United States)

    Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan

    2016-01-01

    In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones. PMID:26999159

  6. Pedestrian Counting with Occlusion Handling Using Stereo Thermal Cameras.

    Science.gov (United States)

    Kristoffersen, Miklas S; Dueholm, Jacob V; Gade, Rikke; Moeslund, Thomas B

    2016-01-01

    The number of pedestrians walking the streets or gathered in public spaces is a valuable piece of information for shop owners, city governments, event organizers and many others. However, automatic counting that takes place day and night is challenging due to changing lighting conditions and the complexity of scenes with many people occluding one another. To address these challenges, this paper introduces the use of a stereo thermal camera setup for pedestrian counting. We investigate the reconstruction of 3D points in a pedestrian street with two thermal cameras and propose an algorithm for pedestrian counting based on clustering and tracking of the 3D point clouds. The method is tested on two five-minute video sequences captured at a public event with a moderate density of pedestrians and heavy occlusions. The counting performance is compared to the manually annotated ground truth and shows success rates of 95.4% and 99.1% for the two sequences. PMID:26742047

  7. Remote sensing applications with NH hyperspectral portable video camera

    Science.gov (United States)

    Takara, Yohei; Manago, Naohiro; Saito, Hayato; Mabuchi, Yusaku; Kondoh, Akihiko; Fujimori, Takahiro; Ando, Fuminori; Suzuki, Makoto; Kuze, Hiroaki

    2012-11-01

    Recent advances in image sensor and information technologies have enabled the development of small hyperspectral imaging systems. EBA JAPAN (Tokyo, Japan) has developed the novel grating-based, portable hyperspectral imaging cameras NH-1 and NH-7, which can acquire a 2D spatial image (640 x 480 and 1280 x 1024 pixels, respectively) with a single exposure using an internal self-scanning system. The imagers cover a wavelength range of 350-1100 nm with a spectral resolution of 5 nm. Because they weigh only 750 g, the NH camera systems can easily be installed on a small UAV platform. We show results from the analysis of data obtained in remote sensing applications, including land vegetation and atmospheric monitoring from both ground- and airborne/UAV-based observations.

  8. 4D ANIMATION RECONSTRUCTION FROM MULTI-CAMERA COORDINATES TRANSFORMATION

    Directory of Open Access Journals (Sweden)

    J. P. Jhan

    2016-06-01

    Full Text Available Reservoir dredging issues are important for extending the life of a reservoir. The most effective and cost-reducing way is to construct a tunnel to desilt the bottom sediment. The conventional technique is to construct a cofferdam to separate the water, construct the intake of the tunnel inside and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will install an Elephant-trunk Steel Pipe (ETSP) in the water to connect the desilting tunnel without building the cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e. launching, dragging, water injection, and sinking. To increase construction safety, photogrammetric techniques are adopted to record images during the simulation, compute its transformation parameters for dynamic analysis and reconstruct the 4D animations. In this study, several Australis© coded targets are fixed on the surface of the ETSP for auto-recognition and measurement. The cameras' orientations are computed by space resection, where the 3D coordinates of the coded targets are measured. Two approaches for motion parameter computation are proposed, i.e. performing a 3D conformal transformation from the coordinates of the cameras, and relative orientation computation from the orientation of a single camera. Experimental results show that the 3D conformal transformation can achieve sub-mm simulation results, and the relative orientation computation shows flexibility for dynamic motion analysis, being easier and more efficient.

  9. 4D Animation Reconstruction from Multi-Camera Coordinates Transformation

    Science.gov (United States)

    Jhan, J. P.; Rau, J. Y.; Chou, C. M.

    2016-06-01

    Reservoir dredging issues are important for extending the life of a reservoir. The most effective and cost-reducing way is to construct a tunnel to desilt the bottom sediment. The conventional technique is to construct a cofferdam to separate the water, construct the intake of the tunnel inside and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will install an Elephant-trunk Steel Pipe (ETSP) in the water to connect the desilting tunnel without building the cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e. launching, dragging, water injection, and sinking. To increase construction safety, photogrammetric techniques are adopted to record images during the simulation, compute its transformation parameters for dynamic analysis and reconstruct the 4D animations. In this study, several Australis coded targets are fixed on the surface of the ETSP for auto-recognition and measurement. The cameras' orientations are computed by space resection, where the 3D coordinates of the coded targets are measured. Two approaches for motion parameter computation are proposed, i.e. performing a 3D conformal transformation from the coordinates of the cameras, and relative orientation computation from the orientation of a single camera. Experimental results show that the 3D conformal transformation can achieve sub-mm simulation results, and the relative orientation computation shows flexibility for dynamic motion analysis, being easier and more efficient.

  10. How to Build Your Own Document Camera for around $100

    Science.gov (United States)

    Van Orden, Stephen

    2010-01-01

    Document cameras can have great utility in second language classrooms. However, entry-level consumer document cameras start at around $350. This article describes how the author built three document cameras and offers suggestions for how teachers can successfully build their own quality document camera using a webcam for around $100.

  11. 16 CFR 1025.45 - In camera materials.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false In camera materials. 1025.45 Section 1025.45... PROCEEDINGS Hearings § 1025.45 In camera materials. (a) Definition. In camera materials are documents... excluded from the public record. (b) In camera treatment of documents and testimony. The Presiding...

  12. On camera-based smoke and gas leakage detection

    Energy Technology Data Exchange (ETDEWEB)

    Nyboe, Hans Olav

    1999-07-01

    Gas detectors are found in almost every part of industry and in many homes as well. An offshore oil or gas platform may host several hundred gas detectors. The ability of the common point and open-path gas detectors to detect leakages depends on their location relative to the location of a gas cloud. This thesis describes the development of a passive volume gas detector, that is, one that will detect a leakage anywhere in the monitored area. After the consideration of several detection techniques it was decided to use an ordinary monochrome camera as sensor. Because a gas leakage may perturb the index of refraction, parts of the background appear to be displaced from their true positions, and it is necessary to develop algorithms that can deal with small differences between images. The thesis develops two such algorithms. Many image regions can be defined and several feature values can be computed for each region. The values of the features depend on the pattern in the image regions. The classes studied in this work are: reference, gas, smoke and human activity. Tests show that observations belonging to these classes can be classified with fairly high accuracy. The features in the feature set were chosen and developed for this particular application. Basically, the features measure the magnitude of pixel differences, the size of detected phenomena and image distortion. Interesting results from many experiments are presented. Most importantly, the experiments show that apparent motion caused by a gas leakage or heat convection can be detected by means of a monochrome camera. Small leakages of methane can be detected at a range of about four metres. Other gases, such as butane, whose densities differ more from the density of air than the density of methane does, can be detected further from the camera. Gas leakages large enough to cause condensation have been detected at a camera distance of 20 metres. 59 refs., 42 figs., 13 tabs.
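
    A minimal sketch of the kind of region features described above, assuming a fixed reference frame of the monitored scene (function and parameter names are illustrative): refractive perturbations from a leak show up as small apparent displacements of the background, so per-region statistics of the frame difference can feed a reference/gas/smoke/activity classifier.

        import numpy as np

        def region_features(reference, frame, grid=8):
            # Magnitude of pixel differences against the reference image,
            # summarized per region of a grid x grid partition.
            diff = np.abs(frame.astype(float) - reference.astype(float))
            h, w = diff.shape
            feats = []
            for i in range(grid):
                for j in range(grid):
                    block = diff[i*h//grid:(i+1)*h//grid,
                                 j*w//grid:(j+1)*w//grid]
                    feats.append((block.mean(), block.max()))
            return np.array(feats)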

  13. Camera calibration approach using circle-square-combined target

    Institute of Scientific and Technical Information of China (English)

    Fuqiang Zhou; Yexin Wang; Yi Cui; Haishu Tan

    2012-01-01

    Calibrating a small field camera is a challenging task because the traditional target with visible feature points that fit the limited space is difficult and costly to manufacture. We demonstrate a novel combined target used in camera calibration. The tangent points supplied by one circle located at the center of a square are used as invisible features, and the perspective projection invariance is proved. Both visible and invisible features extracted by the proposed feature extraction algorithm are used to solve the calibration. The target supplies a sufficient number of feature points to satisfy the requirements of calibration within a limited space. Experiments show that the approach can achieve high robustness and considerable accuracy. This approach has potential for computer vision applications particularly in small fields of view.

  14. Design of Endoscopic Capsule With Multiple Cameras.

    Science.gov (United States)

    Gu, Yingke; Xie, Xiang; Li, Guolin; Sun, Tianjia; Wang, Dan; Yin, Zheng; Zhang, Pengfei; Wang, Zhihua

    2015-08-01

    In order to reduce the miss rate of wireless capsule endoscopy, in this paper we propose a new system of the endoscopic capsule with multiple cameras. A master-slave architecture, including an efficient bus architecture and a four-level clock management architecture, is applied for the Multiple Cameras Endoscopic Capsule (MCEC). For covering more area of the gastrointestinal tract wall with low power, multiple cameras with a smart image capture strategy, including movement-sensitive control and camera selection, are used in the MCEC. To reduce the data transfer bandwidth and power consumption and so prolong the MCEC's working life, a low-complexity image compressor with PSNR 40.7 dB and compression rate 86% is implemented. A chipset is designed and implemented for the MCEC, and a six-camera endoscopic capsule prototype is implemented using the chipset. With the smart image capture strategy, the coverage rate of the MCEC prototype can achieve 98% and its power consumption is only about 7.1 mW. PMID:25376042

  15. Calibration of action cameras for photogrammetric purposes.

    Science.gov (United States)

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras, we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide-angle lens, which is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images at up to 12 Mp and video at up to 8 Mp resolution.

  16. Calibration of Action Cameras for Photogrammetric Purposes

    Directory of Open Access Journals (Sweden)

    Caterina Balletti

    2014-09-01

    Full Text Available The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras, we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide-angle lens, which is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images at up to 12 Mp and video at up to 8 Mp resolution.

  17. Modulated CMOS camera for fluorescence lifetime microscopy.

    Science.gov (United States)

    Chen, Hongtao; Holst, Gerhard; Gratton, Enrico

    2015-12-01

    Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in the construction of such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high-frequency modulated CMOS image sensor, QMFLIM2. Here we tested and provide operational procedures to calibrate the camera and to improve the accuracy using corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and needed for every camera setting, e.g. modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed; hence, an additional calibration is needed for each pixel depending on its intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post data acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring a large frame and high-speed acquisition. PMID:26500051
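
    The phasor analysis mentioned above reduces to a per-pixel projection of the modulated image stack onto sine and cosine components. A minimal sketch for a stack of K equally phase-stepped frames (the SimFCS internals are not reproduced here; names are illustrative):

        import numpy as np

        def phasor(stack, freq):
            # stack: K x H x W modulated images at K equal phase steps.
            k = stack.shape[0]
            phases = 2 * np.pi * np.arange(k) / k
            total = stack.sum(axis=0)
            g = (stack * np.cos(phases)[:, None, None]).sum(axis=0) / total
            s = (stack * np.sin(phases)[:, None, None]).sum(axis=0) / total
            # Single-exponential lifetime estimate from the phase angle.
            tau_phase = s / (2 * np.pi * freq * g)
            return g, s, tau_phase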

  18. Modulated CMOS camera for fluorescence lifetime microscopy.

    Science.gov (United States)

    Chen, Hongtao; Holst, Gerhard; Gratton, Enrico

    2015-12-01

    Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in the construction of such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high-frequency modulated CMOS image sensor, QMFLIM2. Here we tested and provide operational procedures to calibrate the camera and to improve the accuracy using corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and needed for every camera setting, e.g. modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed; hence, an additional calibration is needed for each pixel depending on its intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post data acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring a large frame and high-speed acquisition.

  19. Calibration of action cameras for photogrammetric purposes.

    Science.gov (United States)

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras, we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide-angle lens, which is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images at up to 12 Mp and video at up to 8 Mp resolution. PMID:25237898

  20. Managing a large database of camera fingerprints

    Science.gov (United States)

    Goljan, Miroslav; Fridrich, Jessica; Filler, Tomáš

    2010-01-01

    A sensor fingerprint is a unique noise-like pattern caused by slightly varying pixel dimensions and inhomogeneity of the silicon wafer from which the sensor is made. The fingerprint can be used to prove that an image came from a specific digital camera. The presence of a camera fingerprint in an image is usually established using a detector that evaluates cross-correlation between the fingerprint and image noise. The complexity of the detector is thus proportional to the number of pixels in the image. Although computing the detector statistic for a few-megapixel image takes several seconds on a single-processor PC, the processing time becomes impractically large if a sizeable database of camera fingerprints needs to be searched through. In this paper, we present a fast searching algorithm that utilizes special "fingerprint digests" and sparse data structures to address several tasks that forensic analysts will find useful when deploying camera identification from fingerprints in practice. In particular, we develop fast algorithms for finding if a given fingerprint already resides in the database and for determining whether a given image was taken by a camera whose fingerprint is in the database.
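
    The core detector statistic is a normalized cross-correlation between the image's noise residual and the content-modulated fingerprint. A simplified sketch (real systems denoise with dedicated filters and often use peak-to-correlation-energy rather than plain correlation; the input names are assumptions):

        import numpy as np

        def fingerprint_correlation(noise_residual, fingerprint, image):
            # The fingerprint is multiplicative, so modulate it by the
            # image content before correlating with the residual.
            a = noise_residual - noise_residual.mean()
            b = image * fingerprint
            b = b - b.mean()
            return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())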

  1. Hidden cameras everything you need to know about covert recording, undercover cameras and secret filming

    CERN Document Server

    Plomin, Joe

    2016-01-01

    Providing authoritative information on the practicalities of using hidden cameras to expose abuse or wrongdoing, this book is vital reading for anyone who may use or encounter secret filming. It gives specific advice on using phones or covert cameras and unravels the complex legal and ethical issues that need to be considered.

  2. Mobile phone camera benchmarking: combination of camera speed and image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. For example, ISO 15781 defines several measurements to evaluate various camera system delays. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. Firstly, the most important image quality metrics are collected from the standards and papers. Secondly, the speed-related metrics of a mobile phone's camera system are collected from the standards and papers, and novel speed metrics are also identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are done toward the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are made. The result of this work gives detailed benchmarking results for mobile phone camera systems on the market. The paper also defines a proposal for combined benchmarking metrics, which includes both quality and speed parameters.

  3. Structured photocathodes for improved high-energy x-ray efficiency in streak cameras

    Science.gov (United States)

    Opachich, Y. P.; Bell, P. M.; Bradley, D. K.; Chen, N.; Feng, J.; Gopal, A.; Hatch, B.; Hilsabeck, T. J.; Huffman, E.; Koch, J. A.; Landen, O. L.; MacPhee, A. G.; Nagel, S. R.; Udin, S.

    2016-11-01

    We have designed and fabricated a structured streak camera photocathode to provide enhanced efficiency for high energy X-rays (1-12 keV). This gold coated photocathode was tested in a streak camera and compared side by side against a conventional flat thin film photocathode. Results show that the measured electron yield enhancement at energies ranging from 1 to 10 keV scales well with predictions, and that the total enhancement can be more than 3×. The spatial resolution of the streak camera does not show degradation in the structured region. We predict that the temporal resolution of the detector will also not be affected as it is currently dominated by the slit width. This demonstration with Au motivates exploration of comparable enhancements with CsI and may revolutionize X-ray streak camera photocathode design.

  4. In-plane displacement and strain measurements using a camera phone and digital image correlation

    Science.gov (United States)

    Yu, Liping; Pan, Bing

    2014-05-01

    In-plane displacement and strain measurements of planar objects by processing the digital images captured by a camera phone using digital image correlation (DIC) are performed in this paper. As a convenient communication tool for everyday use, the principal advantages of a camera phone are its low cost, easy accessibility, and compactness. However, when used as a two-dimensional DIC system for mechanical metrology, the assumed imaging model of a camera phone may be slightly altered during the measurement process due to camera misalignment, imperfect loading, sample deformation, and temperature variations of the camera phone, which can produce appreciable errors in the measured displacements. In order to obtain accurate DIC measurements using a camera phone, the virtual displacements caused by these issues are first identified using an unstrained compensating specimen and then corrected by means of a parametric model. The proposed technique is first verified using in-plane translation and out-of-plane translation tests. Then, it is validated through a determination of the tensile strains and elastic properties of an aluminum specimen. Results of the present study show that accurate DIC measurements can be conducted using a common camera phone provided that an adequate correction is employed.
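
    The matching criterion at the heart of subset-based DIC can be sketched in a few lines: a zero-normalized cross-correlation between a reference subset and a candidate subset in the deformed image (a sketch only; practical DIC adds subpixel shape functions and iterative optimization):

        import numpy as np

        def zncc(ref_subset, def_subset):
            # Zero-normalized cross-correlation, insensitive to offset
            # and scale changes in illumination.
            a = ref_subset - ref_subset.mean()
            b = def_subset - def_subset.mean()
            return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())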

  5. Radiation damage of the PCO Pixelfly VGA CCD camera of the BES system on KSTAR tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Náfrádi, Gábor, E-mail: nafradi@reak.bme.hu [NTI, BME, EURATOM Association, H-1111 Budapest (Hungary); Kovácsik, Ákos, E-mail: kovacsik.akos@reak.bme.hu [NTI, BME, EURATOM Association, H-1111 Budapest (Hungary); Pór, Gábor, E-mail: por@reak.bme.hu [NTI, BME, EURATOM Association, H-1111 Budapest (Hungary); Lampert, Máté, E-mail: lampert.mate@wigner.mta.hu [Wigner RCP, RMI, EURATOM Association, POB 49, 1525 Budapest (Hungary); Un Nam, Yong, E-mail: yunam@nfri.re.kr [NFRI, 169-148 Gwahak-Ro, Yuseong-Gu, Daejeon 305-806 (Korea, Republic of); Zoletnik, Sándor, E-mail: zoletnik.sandor@wigner.mta.hu [Wigner RCP, RMI, EURATOM Association, POB 49, 1525 Budapest (Hungary)

    2015-01-11

    A PCO Pixelfly VGA CCD camera, which is part of the Beam Emission Spectroscopy (BES) diagnostic system of the Korea Superconducting Tokamak Advanced Research (KSTAR) device and is used for spatial calibrations, suffered serious radiation damage: white pixel defects were generated in it. The main goal of this work was to identify the origin of the radiation damage and to give solutions to avoid it. A Monte Carlo N-Particle eXtended (MCNPX) model was built using the Monte Carlo Modeling Interface Program (MCAM) and calculations were carried out to predict the neutron and gamma-ray fields at the camera position. Besides the MCNPX calculations, pure gamma-ray irradiations of the CCD camera were carried out in the Training Reactor of BME. Before, during and after the irradiations, numerous frames were taken with the camera with 5 s exposure times. The evaluation of these frames showed that with the applied high gamma-ray dose (1.7 Gy) and dose rate levels (up to 2 Gy/h) the number of white pixels did not increase. We found that the origin of the white pixel generation was neutron-induced thermal hopping of the electrons, which means that in the future only neutron shielding is necessary around the CCD camera. Another solution could be to replace the CCD camera with a more radiation-tolerant one, for example a suitable CMOS camera, or to apply both solutions simultaneously.

  6. Sensor planning method for visual tracking in 3D camera networks

    Institute of Scientific and Technical Information of China (English)

    Anlong Ming; Xin Chen

    2014-01-01

    Most sensors or cameras discussed in the sensor network community are usually 3D homogeneous, even though their 2D coverage areas in the ground plane are heterogeneous. Meanwhile, observed objects of camera networks are usually simplified as 2D points in previous literature. However, in actual application scenes, not only are cameras always heterogeneous, with different heights and action radiuses, but the observed objects also have 3D features (i.e., height). This paper presents a sensor planning formulation addressing the efficiency enhancement of visual tracking in 3D heterogeneous camera networks that track and detect people traversing a region. The problem of sensor planning consists of three issues: (i) how to model the 3D heterogeneous cameras; (ii) how to rank the visibility, which ensures that the object of interest is visible in a camera's field of view; (iii) how to reconfigure the 3D viewing orientations of the cameras. This paper studies the geometric properties of 3D heterogeneous camera networks and addresses an evaluation formulation to rank the visibility of observed objects. Then a sensor planning method is proposed to improve the efficiency of visual tracking. Finally, the numerical results show that the proposed method can improve the tracking performance of the system compared to conventional strategies.

  7. Robust and Accurate Multiple-camera Pose Estimation Toward Robotic Applications

    Directory of Open Access Journals (Sweden)

    Yong Liu

    2014-09-01

    Full Text Available Pose estimation methods in robotics applications frequently suffer from inaccuracy due to a lack of correspondence and real-time constraints, and from instability over a wide range of viewpoints. In this paper, we present a novel approach for simultaneously estimating the poses of all the cameras in a multi-camera system, in which each camera is rigidly placed, using only a few coplanar points. Instead of solving the orientation and translation for the multi-camera system directly from the overlapping point correspondences among all the cameras, we employ homography, which can map image points to 3D coplanar reference points. In our method, we first establish the corresponding relations between the cameras by their Euclidean geometries and optimize the homographies of the cameras; then, we solve the orientation and translation from the optimal homographies. The results from simulations and real-case experiments show that our approach is accurate and robust for implementation in robotics applications. Finally, a practical implementation in a ping-pong robot is described to confirm the validity of our approach.
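
    The per-camera step, recovering a pose from a planar homography, can be sketched with OpenCV. This is the textbook plane-based decomposition, not necessarily the authors' exact formulation, and the inputs (K, the 3x3 intrinsics; obj_pts, N x 2 coplanar world points with Z = 0; img_pts, N x 2 detections) are assumptions:

        import numpy as np
        import cv2

        def pose_from_plane(K, obj_pts, img_pts):
            H, _ = cv2.findHomography(obj_pts, img_pts, cv2.RANSAC)
            H = np.linalg.inv(K) @ H
            h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
            s = 1.0 / np.linalg.norm(h1)      # rotation columns are unit length
            r1, r2 = s * h1, s * h2
            R = np.column_stack([r1, r2, np.cross(r1, r2)])
            t = s * h3
            return R, t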

  8. The development of large-aperture test system of infrared camera and visible CCD camera

    Science.gov (United States)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Infrared camera and CCD camera dual-band imaging systems are widely used in much equipment and many applications. If such a system is tested using the traditional infrared camera test system and visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared cameras and visible CCD cameras uses a common large-aperture reflection collimator, target wheel, frame-grabber and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the change of the collimator's focal length location when the environmental temperature changes, and the image quality of the wide-field collimator and the test accuracy are also improved. Its performance is the same as that of foreign counterparts, and it is much cheaper. It will have a good market.

  9. The roles of time, place, value and relationships in collocated photo sharing with camera phones

    OpenAIRE

    Stelmaszewska, Hanna; Fields, Bob; Blandford, Ann

    2008-01-01

    Photo sharing on camera phones is becoming a common way to maintain closeness and relationships with friends and family. How people share their photos in collocated settings using camera phones, with whom they share, and what factors influence their sharing experience were the themes explored in this study. Results showed that people exhibit different photo sharing behaviour depending on who they share photos with, where the sharing takes place and what value a picture represents to its owner...

  10. Real-Time Camera Tracking and 3D Reconstruction Using Signed Distance Functions

    OpenAIRE

    Bylow, Erik; Sturm, Jürgen; Kerl, Christian; Kahl, Fredrik; Cremers, Daniel

    2013-01-01

    The ability to quickly acquire 3D models is an essential capability needed in many disciplines including robotics, computer vision, geodesy, and architecture. In this paper we present a novel method for real-time camera tracking and 3D reconstruction of static indoor environments using an RGB-D sensor. We show that by representing the geometry with a signed distance function (SDF), the camera pose can be efficiently estimated by directly minimizing the error of the depth images on the SDF....

  11. Progress in gamma-camera quality control

    International Nuclear Information System (INIS)

    The latest developments in the art of quality control of gamma cameras are emphasized in a simple historical manner. The exhibit describes methods developed by the Bureau of Radiological Health (BRH) in comparison with previously accepted techniques for routine evaluation of gamma-camera performance. Gamma cameras require periodic testing of their performance parameters to ensure that their optimum imaging capability is maintained. The quality control parameters reviewed are field uniformity, spatial distortion, intrinsic and spatial resolution, and temporal resolution. The methods developed for the measurement of these parameters are simple, requiring no additional electronic equipment or computers. The data have been arranged in six panels as follows: schematic diagrams of the most important test patterns used in nuclear medicine; field uniformity; regional displacements in the transmission pattern image; spatial resolution using the BRH line-source phantom; intrinsic resolution using the BRH Test Pattern; and temporal resolution and count losses at high counting rates.

  12. Camera placement in integer lattices (extended abstract)

    Science.gov (United States)

    Pocchiola, Michel; Kranakis, Evangelos

    1990-09-01

    Techniques for studying an art gallery problem (the camera placement problem) in the infinite lattice L^d of d-tuples of integers are considered. A lattice point A is visible from a camera C positioned at a vertex of L^d if A does not equal C and the line segment joining A and C crosses no other lattice vertex. By using a combination of probabilistic, combinatorial optimization and algorithmic techniques, the positions the cameras must occupy in the lattice L^d in order to maximize their visibility can be determined in polynomial time, for any given number s less than or equal to 5^d of cameras. This improves previous results for s less than or equal to 3^d.
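
    The visibility test itself is elementary number theory: a lattice point A is visible from C exactly when the coordinate differences are jointly coprime, since any common factor g > 1 would place another lattice vertex on the segment. A short sketch:

        from math import gcd
        from functools import reduce

        def visible(c, a):
            # True if the segment from camera c to point a crosses no
            # other lattice vertex (works in any dimension d).
            d = [abs(ai - ci) for ai, ci in zip(a, c)]
            return a != c and reduce(gcd, d) == 1

        # Example in L^2: (2, 4) is hidden from (0, 0) by (1, 2).
        assert not visible((0, 0), (2, 4)) and visible((0, 0), (1, 2))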

  13. Results of the prototype camera for FACT

    Energy Technology Data Exchange (ETDEWEB)

    Anderhub, H. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Backes, M. [Technische Universitaet Dortmund, D-44221 Dortmund (Germany); Biland, A.; Boller, A.; Braun, I. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Bretz, T. [Ecole Polytechnique Federale de Lausanne, CH-1015 Lausanne (Switzerland); Commichau, S.; Commichau, V. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Dorner, D. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); INTEGRAL Science Data Center, CH-1290 Versoix (Switzerland); Gendotti, A.; Grimm, O.; Gunten, H. von; Hildebrand, D.; Horisberger, U. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Koehne, J.-H. [Technische Universitaet Dortmund, D-44221 Dortmund (Germany); Kraehenbuehl, T., E-mail: thomas.kraehenbuehl@phys.ethz.c [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Kranich, D.; Lorenz, E.; Lustermann, W. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Mannheim, K. [Universitaet Wuerzburg, D-97074 Wuerzburg (Germany)

    2011-05-21

    The maximization of the photon detection efficiency (PDE) is a key issue in the development of cameras for Imaging Atmospheric Cherenkov Telescopes. Geiger-mode Avalanche Photodiodes (G-APD) are a promising candidate to replace the commonly used photomultiplier tubes by offering a larger PDE and, in addition, facilitated handling. The FACT (First G-APD Cherenkov Telescope) project evaluates the feasibility of this change by building a camera based on 1440 G-APDs for an existing small telescope. As a first step towards a full camera, a prototype module using 144 G-APDs was successfully built and tested. The strong temperature dependence of G-APDs is compensated using a feedback system, which keeps the gain of the G-APDs constant to within 0.5%.

  14. PEOPLE REIDENTIFICATION IN A DISTRIBUTED CAMERA NETWORK

    Directory of Open Access Journals (Sweden)

    Icaro Oliveira de Oliveira

    2010-06-01

    Full Text Available This paper presents an approach to the object reidentification problem in a distributed camera network system. The reidentification or reacquisition problem consists essentially of the matching process between images acquired from different cameras. This work is applied in an environment monitored by cameras. The application is important to modern security systems, in which identifying the presence of targets in the environment expands security agents' capacity to act in real time and provides important parameters such as the localization of each target. We used the targets' interest points and colors as features for reidentification. Satisfactory results were obtained from real experiments on public video datasets and on synthetic images with noise.

  15. Mechanical Design of the LSST Camera

    Energy Technology Data Exchange (ETDEWEB)

    Nordby, Martin; Bowden, Gordon; Foss, Mike; Guiffre, Gary; /SLAC; Ku, John; /Unlisted; Schindler, Rafe; /SLAC

    2008-06-13

    The LSST camera is a tightly packaged, hermetically-sealed system that is cantilevered into the main beam of the LSST telescope. It is comprised of three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 giga-pixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce impact on the image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.

  16. Collimated trans-axial tomographic scintillation camera

    International Nuclear Information System (INIS)

    The objects of this invention are, first, to reduce the time required to obtain statistically significant data in trans-axial tomographic radioisotope scanning using a scintillation camera; secondly, to provide a scintillation camera system that increases the rate of acceptance of radioactive events contributing to the positional information obtainable from a known radiation source without sacrificing spatial resolution; and thirdly, to reduce the scanning time without loss of image clarity. The system described comprises a scintillation camera detector, means for moving this in orbit about a cranial-caudal axis relative to a patient, and a collimator having septa defining apertures such that gamma rays perpendicular to the axis are admitted with high spatial resolution, and those parallel to the axis with low resolution. The septa may be made of strips of lead. Detailed descriptions are given. (U.K.)

  17. HIGH SPEED KERR CELL FRAMING CAMERA

    Science.gov (United States)

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 x 10^-8 seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  18. Ultra-fast framing camera tube

    Science.gov (United States)

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  19. Small Orbital Stereo Tracking Camera Technology Development

    Science.gov (United States)

    Bryan, Tom; MacLeod, Todd; Gagliano, Larry

    2016-01-01

    On-orbit small debris tracking and characterization is a technical gap in current national Space Situational Awareness capabilities needed to safeguard orbital assets and crew; it poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of the Chief Technologist roadmap. For missions flying in, assembled in or staging from LEO, the physical threat to vehicle and crew must be known in order to design the proper level of MOD impact shielding and set proper mission design restrictions, and the debris flux and size population need to be verified against ground RADAR tracking. Use of the ISS for in-situ orbital debris tracking development provides attitude, power, data and orbital access without a dedicated spacecraft or restricted operations on board a host vehicle as a secondary payload. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate Orbital Debris (OD) monitoring in the proximity of the International Space Station. It will demonstrate on-orbit optical tracking (in situ) of various sized objects versus ground RADAR tracking and small OD models. The cameras are based on flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras, and using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  20. Analysis of Brown camera distortion model

    Science.gov (United States)

    Nowakowski, Artur; Skarbek, Władysław

    2013-10-01

    Contemporary image acquisition devices introduce optical distortion into images. It results in pixel displacement and therefore needs to be compensated for in many computer vision applications. The distortion is usually modeled by the Brown distortion model, whose parameters can be included in the camera calibration task. In this paper we describe the original model and its dependencies, and analyze orthogonality with regard to radius for its decentering distortion component. We also report experiments with the camera calibration algorithm included in the OpenCV library; in particular, the stability of distortion parameter estimation is evaluated.
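
    For reference, the Brown model itself is compact. The sketch below applies radial terms k1..k3 and decentering (tangential) terms p1, p2 to normalized image coordinates, following the common parameterization also used by OpenCV's calibration:

        import numpy as np

        def brown_distort(x, y, k=(0.0, 0.0, 0.0), p=(0.0, 0.0)):
            r2 = x * x + y * y
            radial = 1 + k[0] * r2 + k[1] * r2**2 + k[2] * r2**3
            xd = x * radial + 2 * p[0] * x * y + p[1] * (r2 + 2 * x * x)
            yd = y * radial + p[0] * (r2 + 2 * y * y) + 2 * p[1] * x * y
            return xd, yd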

  1. Camera-enabled techniques for organic synthesis

    Directory of Open Access Journals (Sweden)

    Steven V. Ley

    2013-05-01

    Full Text Available A great deal of time is spent within synthetic chemistry laboratories on non-value-adding activities such as sample preparation and work-up operations, and labour intensive activities such as extended periods of continued data collection. Using digital cameras connected to computer vision algorithms, camera-enabled apparatus can perform some of these processes in an automated fashion, allowing skilled chemists to spend their time more productively. In this review we describe recent advances in this field of chemical synthesis and discuss how they will lead to advanced synthesis laboratories of the future.

  2. Camera-enabled techniques for organic synthesis

    Science.gov (United States)

    Ingham, Richard J; O’Brien, Matthew; Browne, Duncan L

    2013-01-01

    Summary A great deal of time is spent within synthetic chemistry laboratories on non-value-adding activities such as sample preparation and work-up operations, and labour intensive activities such as extended periods of continued data collection. Using digital cameras connected to computer vision algorithms, camera-enabled apparatus can perform some of these processes in an automated fashion, allowing skilled chemists to spend their time more productively. In this review we describe recent advances in this field of chemical synthesis and discuss how they will lead to advanced synthesis laboratories of the future. PMID:23766820

  3. A multidetector scintillation camera with 254 channels

    DEFF Research Database (Denmark)

    Sveinsdottir, E; Larsen, B; Rommer, P;

    1977-01-01

    A computer-based scintillation camera has been designed for both dynamic and static radionuclide studies. The detecting head has 254 independent sodium iodide crystals, each with a photomultiplier and amplifier. In dynamic measurements simultaneous events can be recorded, and 1 million total counts per second can be accommodated with less than 0.5% loss in any one channel. This corresponds to a calculated deadtime of 5 nsec. The multidetector camera is being used for 133Xe dynamic studies of regional cerebral blood flow in man and for 99mTc and 197Hg static imaging of the brain.

  4. Robust camera calibration for sport videos using court models

    Science.gov (United States)

    Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang

    2003-12-01

    We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.
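
    The first two stages, line-pixel classification and line extraction, can be sketched with OpenCV (a simplified stand-in: the paper's color and local-texture tests and the model-matching search are omitted, and the threshold values are illustrative):

        import cv2
        import numpy as np

        def court_line_candidates(frame, white_thresh=180):
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Crude court-line-pixel test: bright pixels only.
            mask = cv2.inRange(gray, white_thresh, 255)
            lines = cv2.HoughLines(mask, 1, np.pi / 180, threshold=120)
            # Each candidate is (rho, theta) in the Hough parameterization.
            return [] if lines is None else [tuple(l[0]) for l in lines]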

  5. Lights, camera, A&E.

    Science.gov (United States)

    Gould, Mark

    Channel 4 series 24 Hours in A&E was one of the television highlights of 2011. Filmed at King's College Hospital in London, it showed the reality of life in an A&E department and may have improved the public's understanding of nursing. PMID:22324233

  6. Medium Format Camera Evaluation Based on the Latest Phase One Technology

    Science.gov (United States)

    Tölg, T.; Kemper, G.; Kalinski, D.

    2016-06-01

    In early 2016, Phase One Industrial launched a new high-resolution camera with a 100 MP CMOS sensor. CCD sensors excel at ISOs up to 200, but in lower light conditions exposure time must be increased and Forward Motion Compensation (FMC) has to be employed to avoid smearing the images. The CMOS sensor has an ISO range of up to 6400, which enables short exposures instead of using FMC. This paper aims to evaluate the strengths of each sensor type based on real missions over a test field in Speyer, Germany, used for airborne camera calibration. The test field area has about 30 Ground Control Points (GCPs), which provide an ideal scenario for a proper geometric evaluation of the cameras. The test field includes both a Siemens star and scale bars to show any blurring caused by forward motion. The result of the comparison showed that both cameras offer high-accuracy photogrammetric results with post-processing, including triangulation, calibration, orthophoto and DEM generation. The forward motion effect can be compensated by a fast shutter speed and the higher ISO range of the CMOS-based camera. The results showed no significant differences between the cameras.

  7. Camera Trajectory from Wide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to structure-from-motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for reliable camera trajectory estimation, for instance, the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes image feature matching very difficult (or impossible) and camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the

  8. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution to combine different image quality and speed metrics into a single benchmarking score. A proposal of the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updated with the latest mobile phone versions.
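
    One plausible way to fold normalized quality and speed metrics into a single score is a weighted average, as sketched below. The metric names, normalization ranges and weights are invented for illustration; the paper's actual combination rule is not reproduced here.

```python
# Hypothetical single-score benchmark combining quality and speed metrics.
# Metric names, normalisation ranges and weights are illustrative only.
def benchmark_score(metrics: dict, weights: dict, ranges: dict) -> float:
    """Weighted average of metrics normalised to [0, 1] (higher is better)."""
    score = 0.0
    for name, weight in weights.items():
        lo, hi, higher_is_better = ranges[name]
        norm = min(max((metrics[name] - lo) / (hi - lo), 0.0), 1.0)
        if not higher_is_better:
            norm = 1.0 - norm          # e.g. shutter lag: lower is better
        score += weight * norm
    return score / sum(weights.values())

metrics = {"mtf50_lp_mm": 0.32, "visual_noise": 1.8, "shutter_lag_s": 0.45}
weights = {"mtf50_lp_mm": 0.4, "visual_noise": 0.3, "shutter_lag_s": 0.3}
ranges = {"mtf50_lp_mm": (0.0, 0.5, True),
          "visual_noise": (0.0, 5.0, False),
          "shutter_lag_s": (0.0, 1.0, False)}
print(f"combined score: {benchmark_score(metrics, weights, ranges):.3f}")
```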

  9. Inertial measurement unit-camera calibration based on incomplete inertial sensor information

    Institute of Scientific and Technical Information of China (English)

    Hong LIU; Yu-long ZHOU; Zhao-peng GU

    2014-01-01

    This paper is concerned with the problem of estimating the relative orientation between an inertial measurement unit (IMU) and a camera. Unlike most existing IMU-camera calibrations, the main challenge in this paper is that the information output from the IMU is incomplete. For example, only two tilt angles can be read from the gravity sensor of a smartphone. Despite the incomplete inertial information, there are strong restrictions between the IMU and camera coordinate systems. This paper addresses the IMU-camera calibration problem under incomplete information by exploiting the intrinsic restrictions among the coordinate transformations. First, the IMU transformation between two poses is formulated with the unknown IMU information. Then the defective IMU information is restored using the complementary visual information. Finally, the Levenberg-Marquardt (LM) algorithm is applied to estimate the optimal calibration result in noisy environments. Experiments on both synthetic and real data show the validity and robustness of our algorithm.
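
    The LM refinement step can be prototyped with standard tools. The toy sketch below, using SciPy's Levenberg-Marquardt solver on synthetic data, estimates a fixed IMU-to-camera rotation by aligning gravity directions from the IMU with vertical directions assumed to be recovered from images; it is a simplified stand-in for the paper's formulation, not a reproduction of it.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Synthetic data: gravity directions in the IMU frame and the same
# directions seen in the camera frame, related by an unknown rotation.
rng = np.random.default_rng(0)
R_true = Rotation.from_rotvec([0.1, -0.2, 0.05])
g_imu = rng.normal(size=(20, 3))
g_imu /= np.linalg.norm(g_imu, axis=1, keepdims=True)
g_cam = R_true.apply(g_imu) + 0.01 * rng.normal(size=(20, 3))  # noisy

def residuals(rotvec):
    # Misalignment between rotated IMU gravity and camera-derived vertical.
    return (Rotation.from_rotvec(rotvec).apply(g_imu) - g_cam).ravel()

result = least_squares(residuals, x0=np.zeros(3), method="lm")
print("estimated rotation vector:", result.x)   # ~ [0.1, -0.2, 0.05]
```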

  10. The advantages of using a Lucky Imaging camera for observations of microlensing events

    CERN Document Server

    Sajadian, Sedighe; Dominik, Martin; Hundertmark, Markus

    2016-01-01

    In this work, we study the advantages of using a Lucky Imaging camera for the observations of potential planetary microlensing events. Our aim is to reduce the blending effect and enhance exoplanet signals in binary lensing systems composed of an exoplanet and the corresponding parent star. We simulate planetary microlensing light curves based on present microlensing surveys and follow-up telescopes, one of which is equipped with a Lucky Imaging camera. This camera is used at the Danish 1.54-m follow-up telescope. Using a specific observational strategy, for an Earth-mass planet in the resonance regime, where the detection probability in crowded fields is smaller, Lucky Imaging observations improve the detection efficiency, which reaches 2 per cent. Given the difficulty of detecting the signal of an Earth-mass planet in crowded-field imaging even in the resonance regime with conventional cameras, we show that Lucky Imaging can substantially improve the detection efficiency.

  11. Comparison of Digital Surface Models for Snow Depth Mapping with Uav and Aerial Cameras

    Science.gov (United States)

    Boesch, R.; Bühler, Y.; Marty, M.; Ginzler, C.

    2016-06-01

    Photogrammetric workflows for aerial images have improved over recent years in a typically black-box fashion. Most parameters for building dense point clouds are either excessive or unexplained, and the progress between software releases is often poorly documented. On the other hand, a comparison of product specifications shows significant development in camera sensors and in the positional accuracy of image acquisition. This study shows that hardware evolution over the last years has had a much stronger impact on height measurements than photogrammetric software releases. Snow height measurements with airborne sensors like the ADS100 and UAV-based DSLR cameras can achieve accuracies close to GSD * 2 in comparison with ground-based GNSS reference measurements. Using a custom notch filter on the UAV camera sensor during image acquisition does not yield better height accuracies. UAV-based digital surface models are very robust. Different workflow parameter variations for the ADS100 and UAV camera workflows seem to have only random effects.

  12. MOSS spectroscopic camera for imaging time resolved plasma species temperature and flow speed

    International Nuclear Information System (INIS)

    A MOSS (Modulated Optical Solid-State) spectroscopic camera has been devised to monitor the spatial and temporal variations of the temperatures and flow speeds of plasma ion species by measuring the Doppler broadening of specified spectral lines. In contrast to a single-channel MOSS spectrometer, the camera images light from the plasma onto an array of light detectors, enabling 2D imaging of plasma ion temperatures and flow speeds. In addition, compared to a conventional grating spectrometer, the MOSS camera shows an excellent light-collecting performance, which improves the signal-to-noise ratio and the time resolution. The present paper first describes the basics of MOSS spectroscopy and then the MOSS camera, with an emphasis on the optical system for 2D imaging. (author)

  13. Objective Evaluation Criteria for Shooting Quality of Stereo Cameras over Short Distance

    Directory of Open Access Journals (Sweden)

    Yun Liu

    2015-04-01

    Full Text Available Stereo cameras are the basic tools used to obtain stereoscopic image pairs, which can deliver excellent image quality. However, some inappropriate shooting conditions may cause discomfort while viewing stereo images. It is therefore necessary to establish perceptual criteria that can be used to evaluate the shooting quality of stereo cameras. This article proposes objective quality evaluation criteria based on the characteristics of parallel and toed-in camera configurations. Considering the different internal structures and basic shooting principles, this paper focuses on short-distance shooting conditions and establishes assessment criteria for both parallel and toed-in camera configurations. Experimental results show that the proposed evaluation criteria can predict the visual perception of stereoscopic images and effectively evaluate stereoscopic image quality.

  14. Towards Adaptive Virtual Camera Control In Computer Games

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, we cluster gaze and camera information to identify camera behaviour profiles and we employ machine learning to build predictive models of the virtual camera behaviour. The performance of the models on unseen data reveals accuracies above 70% for all the player behaviour types identified. The characteristics of the generated models, their limits and their use for creating adaptive automatic camera control in games are discussed.

  15. Lights, Camera, Read! Arizona Reading Program Manual.

    Science.gov (United States)

    Arizona State Dept. of Library, Archives and Public Records, Phoenix.

    This document is the manual for the Arizona Reading Program (ARP) 2003 entitled "Lights, Camera, Read!" This theme spotlights books that were made into movies, and allows readers to appreciate favorite novels and stories that have progressed to the movie screen. The manual consists of eight sections. The Introduction includes welcome letters from…

  16. FPS camera sync and reset chassis

    International Nuclear Information System (INIS)

    The sync and reset chassis provides all the circuitry required to synchronize an event to be studied, a remote free-running focus projection and scanning (FPS) data-acquisition TV camera, and a video signal recording system. The functions, design, and operation of this chassis are described in detail.

  17. Increased Automation in Stereo Camera Calibration Techniques

    Directory of Open Access Journals (Sweden)

    Brandi House

    2006-08-01

    Full Text Available Robotic vision has become a very popular field in recent years due to the numerous promising applications it may enhance. However, errors within the cameras and in their perception of their environment can cause applications in robotics to fail. To help correct these internal and external imperfections, stereo camera calibrations are performed. There are currently many accurate methods of camera calibration available; however, most or all of them are time-consuming and labor-intensive. This research seeks to automate the most labor-intensive aspects of a popular calibration technique developed by Jean-Yves Bouguet. His process requires manual selection of the extreme corners of a checkerboard pattern. The modified process uses embedded LEDs in the checkerboard pattern to act as active fiducials. Images are captured of the checkerboard with the LEDs on and off in rapid succession. The difference of the two images automatically highlights the location of the four extreme corners, and these corner locations take the place of the manual selections. With this modification to the calibration routine, upwards of eighty mouse clicks are eliminated per stereo calibration. Preliminary test results indicate that accuracy is not substantially affected by the modified procedure. Improved automation of camera calibration procedures may finally remove the barriers to the use of calibration in practice.
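
    The difference-imaging idea translates directly into a few lines of OpenCV, as in the hypothetical sketch below (file names and the threshold value are placeholders): subtracting the LED-off frame from the LED-on frame leaves only the lit fiducials, whose centroids stand in for the four manual corner clicks.

```python
import cv2

# Sketch of the LED-on minus LED-off differencing idea: only the LEDs
# change between the two frames, so their blobs dominate the difference.
on = cv2.imread("board_leds_on.png", cv2.IMREAD_GRAYSCALE)
off = cv2.imread("board_leds_off.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(on, off)                      # only the LEDs change
_, mask = cv2.threshold(diff, 50, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# Keep the four largest blobs and take their centroids as the corners.
blobs = sorted(contours, key=cv2.contourArea, reverse=True)[:4]
corners = []
for c in blobs:
    m = cv2.moments(c)
    corners.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
print("detected corner fiducials:", corners)
```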

  18. GAMPIX: A new generation of gamma camera

    Science.gov (United States)

    Gmar, M.; Agelou, M.; Carrel, F.; Schoepff, V.

    2011-10-01

    Gamma imaging is a technique of great interest in several fields, such as homeland security or the decommissioning and dismantling of nuclear facilities, in order to localize hot spots of radioactivity. In the nineties, previous work led by CEA LIST resulted in the development of a first-generation gamma camera called CARTOGAM, now commercialized by AREVA CANBERRA. Even though its performance can be adapted to many applications, its weight of 15 kg can be an issue. For several years, CEA LIST has been developing a new generation of gamma camera, called GAMPIX. This system is mainly based on the Medipix2 chip, hybridized to a 1 mm thick CdTe substrate. A coded mask replaces the pinhole collimator in order to increase the sensitivity of the gamma camera. Hence, we obtained a very compact device (global weight less than 1 kg without any shielding), which is easy to handle and to use. In this article, we present the main characteristics of GAMPIX and the first experimental results illustrating the performance of this new generation of gamma camera.

  19. Parametrizable cameras for 3D computational steering

    NARCIS (Netherlands)

    Mulder, J.D.; Wijk, J.J. van

    1997-01-01

    We present a method for the definition of multiple views in 3D interfaces for computational steering. The method uses the concept of a point-based parametrizable camera object. This concept enables a user to create and configure multiple views on his custom 3D interface in an intuitive graphical manner.

  20. Camera! Action! Collaborate with Digital Moviemaking

    Science.gov (United States)

    Swan, Kathleen Owings; Hofer, Mark; Levstik, Linda S.

    2007-01-01

    Broadly defined, digital moviemaking integrates a variety of media (images, sound, text, video, narration) to communicate with an audience. There is near-ubiquitous access to the necessary software (MovieMaker and iMovie are bundled free with their respective operating systems) and hardware (computers with Internet access, digital cameras, etc.).…

  1. Camera Systems Rapidly Scan Large Structures

    Science.gov (United States)

    2013-01-01

    Needing a method to quickly scan large structures like an aircraft wing, Langley Research Center developed the line scanning thermography (LST) system. LST works in tandem with a moving infrared camera to capture how a material responds to changes in temperature. Princeton Junction, New Jersey-based MISTRAS Group Inc. now licenses the technology and uses it in power stations and industrial plants.

  2. Face identification in videos from mobile cameras

    NARCIS (Netherlands)

    Mu, Meiru; Spreeuwers, Luuk; Veldhuis, Raymond

    2014-01-01

    It is still challenging to recognize faces reliably in videos from mobile cameras, although mature automatic face recognition technology for still images has been available for quite some time. Suppose we want to be alerted when suspects appear in the recording of a police Body-Cam; even a good face

  3. Digital Camera Project Fosters Communication Skills

    Science.gov (United States)

    Fisher, Ashley; Lazaros, Edward J.

    2009-01-01

    This article details the many benefits of educators' use of digital camera technology and provides an activity in which students practice taking portrait shots of classmates, manipulate the resulting images, and add language arts practice by interviewing their subjects to produce a photo-illustrated Word document. This activity gives…

  4. Case on Camera--An Audience Verdict.

    Science.gov (United States)

    Wober, J. M.

    In July 1984, British Channel 4 began televising Case on Camera, a series based on genuine arbitration of civil cases carried out by a retired judge, recorded as it happened, and edited into half hour programs. Because of the Independent Broadcasting Authority's concern for the rights to privacy, a systematic study of public reaction to the series…

  5. Development of a multispectral camera system

    Science.gov (United States)

    Sugiura, Hiroaki; Kuno, Tetsuya; Watanabe, Norihiro; Matoba, Narihiro; Hayashi, Junichiro; Miyake, Yoichi

    2000-05-01

    A highly accurate multispectral camera and the application software have been developed as a practical system to capture digital images of the artworks stored in galleries and museums. Instead of recording color data in the conventional three RGB primary colors, the newly developed camera and software carry out a pixel-wise estimation of spectral reflectance, the color data specific to the object, to enable practical multispectral imaging. In order to realize accurate multispectral imaging, the dynamic range of the camera is set to 14 bits or more and the output to 14 bits, so as to allow capture even when the difference in light quantity between channels is large. Further, a small-size rotary color filter was simultaneously developed to keep the camera at a practical size. We have developed software capable of selecting the optimum combination of color filters available on the market. Using this software, n types of color filter can be selected from m types so as to give the minimum Euclidean distance or minimum color difference in CIELAB color space between the actual and estimated spectral reflectances of 147 types of oil paint samples.
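
    The filter-selection search can be prototyped as a brute-force scan over filter combinations. The sketch below scores each candidate set by how well a linear estimator reconstructs sample spectra, using synthetic data and a plain spectral RMSE in place of the CIELAB color difference the record describes; all array shapes and values are illustrative assumptions.

```python
import itertools
import numpy as np

# Sketch of exhaustive filter-set selection: choose n of m candidate
# filters so that a linear estimator reconstructs sample reflectances
# with minimum RMS error. Synthetic data stand in for the real filter
# transmittances and the 147 oil-paint reflectance spectra.
rng = np.random.default_rng(1)
wavelengths = 61                     # e.g. 400-700 nm in 5 nm steps
m, n = 8, 4                          # choose 4 filters out of 8
filters = rng.random((m, wavelengths))      # candidate transmittances
samples = rng.random((147, wavelengths))    # reflectance training set

best = (np.inf, None)
for combo in itertools.combinations(range(m), n):
    responses = samples @ filters[list(combo)].T        # camera signals
    # Least-squares linear estimator from n signals back to full spectra.
    estimator, *_ = np.linalg.lstsq(responses, samples, rcond=None)
    rmse = np.sqrt(np.mean((responses @ estimator - samples) ** 2))
    if rmse < best[0]:
        best = (rmse, combo)
print(f"best filter set {best[1]} with RMSE {best[0]:.4f}")
```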

  6. Teaching Camera Calibration by a Constructivist Methodology

    Science.gov (United States)

    Samper, D.; Santolaria, J.; Pastor, J. J.; Aguilar, J. J.

    2010-01-01

    This article describes the Metrovisionlab simulation software and practical sessions designed to teach the most important machine vision camera calibration aspects in courses for senior undergraduate students. By following a constructivist methodology, having received introductory theoretical classes, students use the Metrovisionlab application to…

  7. Video Analysis with a Web Camera

    Science.gov (United States)

    Wyrembeck, Edward P.

    2009-01-01

    Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as…

  8. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    Science.gov (United States)

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Based on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can be different, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not provide it publicly. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing gaze tracking systems. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest. PMID:27589768

  10. Evaluation of trail-cameras for analyzing the diet of nesting raptors using the Northern Goshawk as a model.

    Directory of Open Access Journals (Sweden)

    Gonzalo García-Salgado

    Full Text Available Diet studies present numerous methodological challenges. We evaluated the usefulness of commercially available trail-cameras for analyzing the diet of Northern Goshawks (Accipiter gentilis) as a model for nesting raptors during the period 2007-2011. We compared diet estimates obtained by direct camera monitoring of 80 nests with four indirect analyses of prey remains collected from the nests and surroundings (pellets, bones, feather-and-hair remains, and feather-hair-and-bone remains combined). In addition, we evaluated the performance of the trail-cameras and whether camera monitoring affected Goshawk behavior. The sensitivity of each diet-analysis method depended on prey size and taxonomic group, with no method providing unbiased estimates for all prey sizes and types. The cameras registered the greatest number of prey items and were probably the least biased method for estimating diet composition. Nevertheless this direct method yielded the largest proportion of prey unidentified to species level, and it underestimated small prey. Our trail-camera system was able to operate without maintenance for longer periods than what has been reported in previous studies with other types of cameras. Initially Goshawks showed distrust toward the cameras but they usually became habituated to their presence within 1-2 days. The habituation period was shorter for breeding pairs that had previous experience with cameras. Using trail-cameras to monitor prey provisioning to nests is an effective tool for studying the diet of nesting raptors. However, the technique is limited by technical failures and difficulties in identifying certain prey types. Our study also shows that cameras can alter adult Goshawk behavior, an aspect that must be controlled to minimize potential negative impacts.

  11. A novel fully integrated handheld gamma camera

    Science.gov (United States)

    Massari, R.; Ucci, A.; Campisi, C.; Scopinaro, F.; Soluri, A.

    2016-10-01

    In this paper, we present an innovative, fully integrated handheld gamma camera, designed to combine the gamma-ray detector, the display and the embedded computing system in a single device. The low power consumption allows the prototype to be battery operated. To be useful in radioguided surgery, an intraoperative gamma camera must be very easy to handle since it must be moved to find a suitable view. Consequently, we have developed the first prototype of a fully integrated, compact and lightweight gamma camera for fast imaging of radiopharmaceuticals. The device can operate without cables across the sterile field, so it may be easily used in the operating theater for radioguided surgery. The proposed prototype consists of a Silicon Photomultiplier (SiPM) array coupled with a proprietary scintillation structure based on CsI(Tl) crystals. To read the SiPM output signals, we have developed very low power readout electronics and a dedicated analog-to-digital conversion system. One of the most critical aspects we faced in designing the prototype was the low power consumption, which is mandatory for a battery-operated device. We have applied this detection device to the lymphoscintigraphy technique (sentinel lymph node mapping), comparing the results obtained with those of a commercial gamma camera (Philips SKYLight). The results obtained confirm a rapid response of the device and an adequate spatial resolution for use in scintigraphic imaging. This work confirms the feasibility of a small gamma camera with an integrated display. This device is designed for radioguided surgery and small organ imaging, but it could easily be combined into surgical navigation systems.

  12. Measuring rainfall with low-cost cameras

    Science.gov (United States)

    Allamano, Paola; Cavagnero, Paolo; Croci, Alberto; Laio, Francesco

    2016-04-01

    In Allamano et al. (2015), we propose to retrieve quantitative measures of rainfall intensity by relying on the acquisition and analysis of images captured from professional cameras (SmartRAIN technique in the following). SmartRAIN is based on the fundamentals of camera optics and exploits the intensity changes due to drop passages in a picture. The main steps of the method include: i) drop detection, ii) blur effect removal, iii) estimation of drop velocities, iv) drop positioning in the control volume, and v) rain rate estimation. The method has been applied to real rain events with errors of the order of ±20%. This work aims to bridge the gap between the need to acquire images via professional cameras and the possibility of exporting the technique to low-cost webcams. We apply the image processing algorithm to frames registered with low-cost cameras both in the lab (i.e., controlled rain intensity) and in field conditions. The resulting images are characterized by lower resolutions and significant distortions with respect to professional camera pictures, and are acquired with a fixed aperture and a rolling shutter. All these hardware limitations exert relevant effects on the readability of the resulting images, and may affect the quality of the rainfall estimate. We demonstrate that a proper knowledge of the image acquisition hardware allows one to fully explain the artefacts and distortions due to the hardware. We demonstrate that, by correcting these effects before applying the image processing algorithm, quantitative rain intensity measures can be obtained with good accuracy even with low-cost modules.
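
    The drop-detection step (step i) can be illustrated with simple frame differencing: drop streaks brighten a frame relative to the previous one, so thresholding the difference isolates candidate drops. The sketch below is a schematic stand-in for the SmartRAIN detector, not its actual implementation; the file name and threshold are placeholders.

```python
import cv2

# Schematic drop detection by frame differencing: streaks left by
# falling drops stand out against the previous frame.
cap = cv2.VideoCapture("rain_clip.mp4")
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)               # drop streaks stand out
    _, drops = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(drops, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    print(f"candidate drops in frame: {len(contours)}")
    prev = gray
cap.release()
```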

  13. X-ray imaging using digital cameras

    Science.gov (United States)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.

  14. Analysis of RED ONE Digital Cinema Camera and RED Workflow

    OpenAIRE

    Foroughi Mobarakeh, Taraneh

    2009-01-01

    RED Digital Cinema is a rather new company that has developed a camera that has shaken the world of the film industry, the RED One camera. RED One is a digital cinema camera with the characteristics of a 35mm film camera. With a custom-made 12 megapixel CMOS sensor it offers images with a filmic look that cannot be achieved with many other digital cinema cameras. With a new camera comes a new set of media files to work with, which brings new software applications to support them. RED Digital ...

  15. Control of the movement of a ROV camera; Controle de posicionamento da camera de um ROV

    Energy Technology Data Exchange (ETDEWEB)

    Lima, Alexandre S. de; Dutra, Max Suell [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE); Reis, Ney Robinson S. dos [PETROBRAS, Rio de Janeiro, RJ (Brazil). Centro de Pesquisas; Santos, Auderi V. dos [Pontificia Univ. Catolica do Rio de Janeiro, RJ (Brazil)

    2004-07-01

    ROVs (Remotely Operated Vehicles) are used for installation and maintenance of underwater exploration systems in the oil industry. These systems operate in distant areas, so the use of a camera to visualize the work area is essential. Synchronizing the movement of the camera with the operation of the manipulator is a complex task for the operator. To achieve this synchronization, this work presents an analysis of the interconnection of the two systems. The systems are concatenated by interconnecting the electric signals of the proportional valves of the manipulator's actuators with the signals of the proportional valves of the camera's actuators. With this interconnection, the camera approximately follows the movement of the manipulator, keeping the object of interest within the operator's field of vision. (author)

  16. Frequency Identification of Vibration Signals Using Video Camera Image Data

    Directory of Open Access Journals (Sweden)

    Chia-Hung Wu

    2012-10-01

    Full Text Available This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture the most dominant modes of a vibration signal, but may introduce non-physical modes induced by insufficient frame rates. Using a simple model, the frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera (for instance, 256 levels) was enhanced by summing the gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line to the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has a critical frequency for inducing false modes of 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to partially suppress the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are below the associated critical frequencies, are examined to demonstrate the performance of the proposed systems. In general, the experimental data show that non-contact image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.
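
    The non-physical modes the study excludes behave like standard aliasing artefacts. Assuming that interpretation, a component above half the frame rate reappears at a lower apparent frequency, which can be predicted and flagged as the tiny sketch below shows (the numbers are illustrative, not the paper's measurements).

```python
# Predict where a vibration component reappears when the camera frame
# rate is too low: a tone at f_hz sampled at fs_hz aliases as below.
def aliased_frequency(f_hz: float, fs_hz: float) -> float:
    """Apparent frequency of a tone at f_hz sampled at fs_hz."""
    return abs(f_hz - fs_hz * round(f_hz / fs_hz))

# A 72 Hz vibration seen by a 30 fps webcam appears at 12 Hz, so a
# 12 Hz peak in the measured spectrum can be flagged as non-physical.
print(aliased_frequency(72.0, 30.0))   # -> 12.0
```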

  17. Automatic target extraction in complicated background for camera calibration

    Science.gov (United States)

    Guo, Xichao; Wang, Cheng; Wen, Chenglu; Cheng, Ming

    2016-03-01

    To perform highly precise camera calibration against a complex background, a novel planar composite target design and the corresponding automatic extraction algorithm are presented. Unlike other commonly used target designs, the proposed target simultaneously encodes the feature point coordinates and the feature point serial numbers. Based on the original target, templates are then prepared by three geometric transformations and used as the input for template matching based on shape context. Finally, parity-check and region-growing methods are used to extract the target as the final result. The experimental results show that the proposed method for automatically extracting and recognizing the target is effective, accurate and reliable.

  18. Positioning beacon system using digital camera and LEDs

    OpenAIRE

    Liu, HS; G. Pang

    2003-01-01

    This paper is on a novel use of lighting or signaling devices constructed by light-emitting diodes (LEDs) as a positioning beacon. The idea is that the surface of the LED lighting device is divided into regions and used to show different visual patterns that are not noticeable by the human eye due to the high-frequency switching of the LEDs. A digital camera is used as a receiver to capture a sequence of images of the LED positioning beacon transmitter. Image-processing algorithms are used to...

  19. SIMULTANEOUS RECORDING OF FRINGE PATTERNS WITH ONE CAMERA

    Institute of Scientific and Technical Information of China (English)

    SU Fei; DAI Fulong; CHIAN Kerm Sin; YI Sung

    2004-01-01

    A novel method to separate and simultaneously record the Moiré interferometry fringe patterns of three deformation fields with only one CCD camera is developed; details of its operating principle, key points and error analysis are presented. With this technique, the deformations in the U, V and W fields can be measured simultaneously, so dynamic tests with comprehensive information can be performed. The advantage of this technique over other similar techniques lies in its simplicity, easy implementation and low cost. An application of this technique is given to show its feasibility. Technical problems that may arise with this technique are also analyzed.

  20. Cluster Tracking with Time-of-Flight Cameras

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Hansen, Mads; Kirschmeyer, Martin;

    2008-01-01

    We describe a method for tracking people using a time-of-flight camera and apply the method for persistent authentication in a smart environment. A background model is built by fusing information from intensity and depth images. While a geometric constraint is employed to improve pixel-cluster coherence and reduce the influence of noise, the EM algorithm (expectation maximization) is used for tracking moving clusters of pixels significantly different from the background model. Each cluster is defined through a statistical model of points on the ground plane. We show the benefits of the time-of-flight camera for this application.

  1. National Guidelines for Digital Camera Systems Certification

    Science.gov (United States)

    Yaron, Yaron; Keinan, Eran; Benhamu, Moshe; Regev, Ronen; Zalmanzon, Garry

    2016-06-01

    Digital camera systems are a key component in the production of reliable, geometrically accurate, high-resolution geospatial products. These systems have replaced film imaging in photogrammetric data capturing. Today, we see a proliferation of imaging sensors collecting photographs at different ground resolutions, in different spectral bands, swath sizes and radiometric characteristics, with different accuracies, and carried on different mobile platforms. In addition, these imaging sensors are combined with navigational tools (such as GPS and IMU), active sensors such as laser scanning, and powerful processing tools to obtain high-quality geospatial products. The quality (accuracy, completeness, consistency, etc.) of these geospatial products is based on the use of calibrated, high-quality digital camera systems. The new survey regulations of the state of Israel specify the quality requirements for each geospatial product, including maps at different scales and for different purposes, elevation models, orthophotographs, three-dimensional models at different levels of detail (LOD) and more. In addition, the regulations require that digital camera systems used for mapping purposes be certified using a rigorous mapping systems certification and validation process which is specified in the Director General Instructions. The Director General Instructions for digital camera systems certification specify a two-step process as follows: 1. Theoretical analysis of system components that includes: study of the accuracy of each component and an integrative error propagation evaluation, examination of the radiometric and spectral response curves of the imaging sensors, the calibration requirements, and the working procedures. 2. Empirical study of the digital mapping system that examines a typical project (product scale, flight height, number and configuration of ground control points and process). The study examines all aspects of the final product, including its accuracy, the product pixel size

  2. A ToF-camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Sobers Lourdu Xavier Francis

    2015-11-01

    Full Text Available The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments employing obstacle avoidance without human intervention. The hypothesized approach of applying a ToF Camera for an AGV is a suitable approach to autonomous robotics because, as the ToF camera can provide three-dimensional (3D) information at a low computational cost, it is utilized to extract information about obstacles after their calibration and ground testing, and is mounted and integrated with the Pioneer mobile robot. The workspace is a two-dimensional (2D) world map which has been divided into a grid/cells, where the collision-free path defined by the graph search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data is used to populate traversable areas and obstacles by representing a grid/cells of suitable size. These camera data are converted into Cartesian coordinates for entry into a workspace grid map. A more optimal camera mounting angle is needed and adopted by analysing the camera’s performance discrepancy, such as pixel detection, the detection rate and the maximum perceived distances, and infrared (IR) scattering with respect to the ground surface. This mounting angle is recommended to be half the vertical field-of-view (FoV) of the PMD camera. A series of still and moving tests are conducted on the AGV to verify correct sensor operations, which show that the postulated application of the ToF camera in the AGV is not straightforward. Later, to stabilize the moving PMD camera and to detect obstacles, a tracking feature detection algorithm and the scene flow technique are implemented to perform a real-time experiment.

  3. Generating 3D hyperspectral information with lightweight UAV snapshot cameras for vegetation monitoring: From camera calibration to quality assurance

    Science.gov (United States)

    Aasen, Helge; Burkart, Andreas; Bolten, Andreas; Bareth, Georg

    2015-10-01

    This paper describes a novel method to derive 3D hyperspectral information from lightweight snapshot cameras for unmanned aerial vehicles for vegetation monitoring. Snapshot cameras record an image cube with one spectral and two spatial dimensions with every exposure. First, we describe and apply methods to radiometrically characterize and calibrate these cameras. Then, we introduce our processing chain to derive 3D hyperspectral information from the calibrated image cubes based on structure from motion. The approach includes a novel way for quality assurance of the data which is used to assess the quality of the hyperspectral data for every single pixel in the final data product. The result is a hyperspectral digital surface model as a representation of the surface in 3D space linked with the hyperspectral information emitted and reflected by the objects covered by the surface. In this study we use the hyperspectral camera Cubert UHD 185-Firefly, which collects 125 bands from 450 to 950 nm. The obtained data product has a spatial resolution of approximately 1 cm for the spatial and 21 cm for the hyperspectral information. The radiometric calibration yields good results with less than 1% offset in reflectance compared to an ASD FieldSpec 3 for most of the spectral range. The quality assurance information shows that the radiometric precision is better than 0.13% for the derived data product. We apply the approach to data from a flight campaign in a barley experiment with different varieties during the growth stage heading (BBCH 52 - 59) to demonstrate the feasibility for vegetation monitoring in the context of precision agriculture. The plant parameters retrieved from the data product correspond to in-field measurements of a single date field campaign for plant height (R2 = 0.7), chlorophyll (BGI2, R2 = 0.52), LAI (RDVI, R2 = 0.32) and biomass (RDVI, R2 = 0.29). Our approach can also be applied for other image-frame cameras as long as the individual bands of the

  4. Comparison of three different techniques for camera and motion control of a teleoperated robot.

    Science.gov (United States)

    Doisy, Guillaume; Ronen, Adi; Edan, Yael

    2017-01-01

    This research aims to evaluate new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, head-tracking used in a non-invasive way, without immersive virtual reality devices, was combined and compared with classical control modes for robot movement and camera control. Three control conditions were tested: 1) a condition with classical joystick control of both the movements of the robot and the robot camera, 2) a condition where the robot movements were controlled by a joystick and the robot camera was controlled by the user's head orientation, and 3) a condition where the movements of the robot were controlled by hand gestures and the robot camera was controlled by the user's head orientation. Performance, workload metrics and their evolution as the participants gained experience with the system were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that the concept of robot camera control by user head orientation has the potential to improve the intuitiveness of robot teleoperation interfaces, specifically for novice users. However, more development is needed to reach a margin of progression comparable to a classical joystick interface.

  5. Multi-camera calibration based on openCV and multi-view registration

    Science.gov (United States)

    Deng, Xiao-ming; Wan, Xiong; Zhang, Zhi-min; Leng, Bi-yan; Lou, Ning-ning; He, Shuai

    2010-10-01

    For multi-camera calibration systems, a method combining OpenCV-based calibration with multi-view registration is proposed. First, nine groups of images are shot from different angles using a Zhang calibration plate (8×8 chessboard) and a number of cameras (three industrial-grade CCDs), and the camera parameters are calibrated quickly with OpenCV. Second, based on the correspondences between the camera views, the computation of the rotation and translation matrices is formulated as a constrained optimization problem. According to the Kuhn-Tucker theorem and the properties of the derivative of a matrix-valued function, formulae for the rotation and translation matrices are derived using a singular value decomposition algorithm. Afterwards, an iterative method is used to obtain the full coordinate transformations between pair-wise views, so that precise multi-view registration is conveniently achieved and the relative poses (the extrinsic camera parameters) are recovered. Experimental results show that the method is practical for multi-camera calibration.
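
    The fast intrinsic-calibration step maps onto OpenCV's standard chessboard pipeline. The sketch below shows that pipeline for a single camera; the file pattern is a placeholder, and an 8×8 board is assumed to expose 7×7 inner corners.

```python
import glob
import cv2
import numpy as np

# Standard OpenCV chessboard calibration for one camera. An 8x8 board
# has 7x7 inner corners; "cam0_*.png" is a placeholder file pattern.
pattern = (7, 7)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("cam0_*.png"):        # the calibration shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms, "\nintrinsics:\n", K)
```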

  6. Color processing in camera phones: How good does it need to be?

    Science.gov (United States)

    Xiao, Feng; Zhang, Xuemei; Fowler, Boyd

    2005-02-01

    As the fastest-growing consumer electronics device in history, the camera phone has evolved from a toy into a real camera that competes with the compact digital camera in image quality. Due to severe constraints in cost and size, one key question that remains unanswered for camera phones is: how good does the image quality need to be so that resources can be allocated most efficiently. In this paper, we have tried to find the color processing tolerance through a study of 24 digital cameras from six manufacturers under five different light sources. We measured both the inter-brand (across manufacturers) and intra-brand (within manufacturers) mean and standard deviation for white balance and color reproduction. The white balance results showed that most cameras didn't follow the complete white balance model. The difference between the captured white patch and the display white point increased when the correlated color temperature (CCT) of the illuminant was further away from 6500K. The standard deviation of the red/green and blue/green ratios for the white patch also increased when the illuminant was further away from 6500K. The color reproduction results revealed a similar trend for the inter-brand and intra-brand chromatic difference of the color patches. The average inter-brand chromatic difference increased from 3.87 ΔE units for the D65 light (6500K) to 10.13 ΔE units for the Horizon light (2300K).

  7. Kinect v2 and RGB Stereo Cameras Integration for Depth Map Enhancement

    Science.gov (United States)

    Ravanelli, R.; Nascetti, A.; Crespi, M.

    2016-06-01

    Today range cameras are widespread low-cost sensors based on two different principles of operation: we can distinguish between Structured Light (SL) range cameras (Kinect v1, Structure Sensor, ...) and Time Of Flight (ToF) range cameras (Kinect v2, ...). Both types are easy-to-use 3D scanners, able to reconstruct dense point clouds at high frame rates. However, the depth maps obtained are often noisy and not accurate enough, so it is generally essential to improve their quality. Standard RGB cameras can be a valuable solution to this issue. The aim of this paper is therefore to evaluate the feasibility of integrating these two different 3D modelling techniques, characterized by complementary features and based on standard low-cost sensors. For this purpose, a 3D model of a DUPLO™ brick construction was reconstructed both with the Kinect v2 range camera and by processing one stereo pair acquired with a Canon Eos 1200D DSLR camera. The scale of the photogrammetric model was retrieved from the coordinates measured by Kinect v2. The preliminary results are encouraging and show that the foreseen integration could lead to a higher metric accuracy and a greater level of completeness with respect to that obtained by using the techniques separately.

  8. Non-Metric CCD Camera Calibration Algorithm in a Digital Photogrammetry System

    Institute of Scientific and Technical Information of China (English)

    YANG Hua-chao; DENG Ka-zhong; ZHANG Shu-bi; GUO Guang-li; ZHOU Ming

    2006-01-01

    Camera calibration is a critical process in photogrammetry and a necessary step to acquire 3D information from a 2D image. In this paper, a flexible approach for CCD camera calibration using 2D direct linear transformation (DLT) and bundle adjustment is proposed. The proposed approach assumes that the camera interior orientation elements are known, and addresses a new closed form solution in planar object space based on homogenous coordinate representation and matrix factorization. Homogeneous coordinate representation offers a direct matrix correspondence between the parameters of the 2D DLT and the collinearity equation. The matrix factorization starts by recovering the elements of the rotation matrix and then solving for the camera position with the collinearity equation. Camera calibration with high precision is addressed by bundle adjustment using the initial values of the camera orientation elements. The results show that the calibration precision of principal point and focal length is about 0.2 and 0.3 pixels respectively, which can meet the requirements of close-range photogrammetry with high accuracy.
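
    The 2D DLT at the heart of such methods can be written compactly: stacking two equations per point correspondence into a homogeneous system A h = 0 and taking the SVD's last right singular vector yields the plane-to-image homography. The self-contained sketch below uses synthetic data, not the paper's; it illustrates the general technique rather than the paper's exact factorization.

```python
import numpy as np

# 2D DLT: estimate the homography H mapping planar object coordinates
# to image coordinates via the SVD solution of A h = 0.
def dlt_homography(obj_xy: np.ndarray, img_xy: np.ndarray) -> np.ndarray:
    rows = []
    for (X, Y), (u, v) in zip(obj_xy, img_xy):
        rows.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        rows.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1]                 # right singular vector of smallest sigma
    return (h / h[-1]).reshape(3, 3)

# Synthetic check: recover a known homography from 4 correspondences.
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, 3.0], [1e-4, 2e-4, 1.0]])
obj = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
ph = np.column_stack([obj, np.ones(4)]) @ H_true.T
img = ph[:, :2] / ph[:, 2:]
print(np.round(dlt_homography(obj, img), 4))   # ~ H_true
```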

  9. Extrinsic calibration of a non-overlapping camera network based on close-range photogrammetry.

    Science.gov (United States)

    Dong, Shuai; Shao, Xinxing; Kang, Xin; Yang, Fujun; He, Xiaoyuan

    2016-08-10

    In this paper, an extrinsic calibration method for a non-overlapping camera network is presented based on close-range photogrammetry. The method does not require calibration targets or the cameras to be moved. The visual sensors are relatively motionless and do not see the same area at the same time. The proposed method combines the multiple cameras using some arbitrarily distributed encoded targets. The calibration procedure consists of three steps: reconstructing the three-dimensional (3D) coordinates of the encoded targets using a hand-held digital camera, performing the intrinsic calibration of the camera network, and calibrating the extrinsic parameters of each camera with only one image. A series of experiments, including 3D reconstruction, rotation, and translation, are employed to validate the proposed approach. The results show that the relative error for the 3D reconstruction is smaller than 0.003%, the relative errors of both rotation and translation are less than 0.066%, and the re-projection error is only 0.09 pixels.
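
    The last step, recovering each camera's pose from a single image of targets with known 3D coordinates, is a perspective-n-point (PnP) problem. A hypothetical sketch using OpenCV follows; the file names and intrinsics are placeholders, and the paper's own solver may differ.

```python
import cv2
import numpy as np

# Pose of one camera from a single image of encoded targets whose 3D
# coordinates were reconstructed beforehand. All inputs are placeholders.
object_pts = np.loadtxt("encoded_targets_xyz.txt").astype(np.float32)  # Nx3
image_pts = np.loadtxt("cam3_target_uv.txt").astype(np.float32)        # Nx2
K = np.array([[2400.0, 0.0, 960.0],       # assumed intrinsic matrix
              [0.0, 2400.0, 600.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)                 # rotation matrix of this camera
print("camera position in world frame:", (-R.T @ tvec).ravel())
```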

  11. On the absolute calibration of SO2 cameras

    Directory of Open Access Journals (Sweden)

    J. Zielcke

    2012-09-01

    Full Text Available Sulphur dioxide emission flux measurements are an important tool for volcanic monitoring and eruption risk assessment. The SO2 camera technique remotely measures volcanic emissions by analysing the ultraviolet absorption of SO2 in a narrow spectral window between 305 nm and 320 nm using solar radiation scattered in the atmosphere. The SO2 absorption is selectively detected by mounting band-pass interference filters in front of a two-dimensional, UV-sensitive CCD detector. While this approach is simple and delivers valuable insights into the two-dimensional SO2 distribution, absolute calibration has proven to be difficult. An accurate calibration of the SO2 camera (i.e., conversion from optical density to SO2 column density, CD) is crucial to obtain correct SO2 CDs and flux measurements that are comparable to other measurement techniques and can be used for volcanological applications. The most common approach for calibrating SO2 camera measurements is based on inserting quartz cells (cuvettes) containing known amounts of SO2 into the light path. It has been found, however, that reflections from the windows of the calibration cell can considerably affect the signal measured by the camera. Another possibility for calibration relies on performing simultaneous measurements in a small area of the camera's field-of-view (FOV) by a narrow-field-of-view Differential Optical Absorption Spectroscopy (NFOV-DOAS) system. This procedure combines the very good spatial and temporal resolution of the SO2 camera technique with the more accurate column densities obtainable from DOAS measurements. This work investigates the uncertainty of results gained through the two commonly used, but quite different calibration methods (DOAS and calibration cells). Measurements with three different instruments, an SO2 camera, a NFOV-DOAS system and an Imaging DOAS (IDOAS), are presented. We compare the calibration-cell approach with the calibration from the NFOV-DOAS system. The

  12. Method for out-of-focus camera calibration.

    Science.gov (United States)

    Bell, Tyler; Xu, Jing; Zhang, Song

    2016-03-20

    State-of-the-art camera calibration methods assume that the camera is at least nearly in focus and thus fail if the camera is substantially defocused. This paper presents a method which enables the accurate calibration of an out-of-focus camera. Specifically, the proposed method uses a digital display (e.g., liquid crystal display monitor) to generate fringe patterns that encode feature points into the carrier phase; these feature points can be accurately recovered, even if the fringe patterns are substantially blurred (i.e., the camera is substantially defocused). Experiments demonstrated that the proposed method can accurately calibrate a camera regardless of the amount of defocusing: the focal length difference is approximately 0.2% when the camera is focused compared to when the camera is substantially defocused.
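
    The phase-encoded feature recovery can be illustrated with the standard four-step phase-shifting formula; whether the paper uses exactly four shifts is an assumption here. The key property is that defocus blur mainly attenuates fringe contrast rather than shifting the phase, so the arctangent below still recovers the carrier phase from blurred fringes.

```python
import numpy as np

# Standard four-step phase shifting: four fringe images with shifts of
# 0, 90, 180 and 270 degrees give the wrapped carrier phase.
def wrapped_phase(i1, i2, i3, i4):
    """Wrapped phase map from four pi/2-shifted fringe images."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic demonstration on a 1D fringe with a known carrier phase.
x = np.linspace(0, 4 * np.pi, 200)
shots = [100 + 50 * np.cos(x + k * np.pi / 2) for k in range(4)]
phi = wrapped_phase(*shots)
print(np.allclose(np.unwrap(phi), x))   # True: carrier phase recovered
```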

  13. Calibrating a depth camera but ignoring it for SLAM

    OpenAIRE

    Castro, Daniel Herrera

    2014-01-01

    Recent improvements in resolution, accuracy, and cost have made depth cameras a very popular alternative for 3D reconstruction and navigation. Thus, accurate depth camera calibration is a very relevant aspect of many 3D pipelines. We explore the limits of a practical depth camera calibration algorithm: how to accurately calibrate a noisy depth camera without a precise calibration object and without using brightness or depth discontinuities. We present an algorithm that uses an external ...

  14. Calibration of omnidirectional cameras in practice: A comparison of methods

    OpenAIRE

    Puig, Luis; Bermúdez, Jesús; Sturm, Peter; Guerrero, Josechu

    2012-01-01

    Omnidirectional cameras are becoming increasingly popular in computer vision and robotics. Camera calibration is a necessary step before performing any task involving metric scene measurement, which is required in nearly all robotics tasks. In recent years many different methods to calibrate central omnidirectional cameras have been developed, based on different camera models and often limited to a specific mirror shape. In this paper we review the existing methods designed to calibrate...

  15. Dynamic Vision Sensor Camera Based Bare Hand Gesture Recognition

    OpenAIRE

    kashmera ashish khedkkar safaya; Rekha Lathi

    2012-01-01

    This paper proposes a method to recognize bare hand gestures using a dynamic vision sensor (DVS) camera. Unlike a conventional camera, a DVS camera responds only, and asynchronously, to pixels that have temporal changes in intensity. This paper attempts to recognize three different hand gestures (rock, paper and scissors) and uses those hand gestures to design a mouse-free interface. Keywords: Dynamic vision sensor camera, Hand gesture recognition

  17. Situational Awareness from a Low-Cost Camera System

    Science.gov (United States)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems and, by using many small, low-cost cameras with overlapping fields of view, does so without compromising performance. The result is significantly increased viewing coverage without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, because a single cable is shared for power and data, installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.

  18. ASSESSING THE PHOTOGRAMMETRIC POTENTIAL OF CAMERAS IN PORTABLE DEVICES

    OpenAIRE

    Smith, M J; Kokkas, N.

    2012-01-01

    In recent years, there has been an increasing number of portable devices, tablets and Smartphones employing high-resolution digital cameras to satisfy consumer demand. In most cases, these cameras are designed primarily for capturing visually pleasing images, and the potential of using Smartphone and tablet cameras for metric applications remains uncertain. The compact nature of the host devices leads to very small cameras and therefore smaller geometric characteristics. This also makes th...

  19. SLAM using camera and IMU sensors.

    Energy Technology Data Exchange (ETDEWEB)

    Rothganger, Fredrick H.; Muguira, Maritza M.

    2007-01-01

    Visual simultaneous localization and mapping (VSLAM) is the problem of using video input to reconstruct the 3D world and the path of the camera in an 'on-line' manner. Since the data is processed in real time, one does not have access to all of the data at once. (Contrast this with structure from motion (SFM), which is usually formulated as an 'off-line' process on all the data seen, and is not time dependent.) A VSLAM solution is useful for mobile robot navigation or as an assistant for humans exploring an unknown environment. This report documents the design and implementation of a VSLAM system that consists of a small inertial measurement unit (IMU) and camera. The approach is based on a modified Extended Kalman Filter. This research was performed under a Laboratory Directed Research and Development (LDRD) effort.
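
    As a rough illustration of the modified Extended Kalman Filter structure mentioned above, here is a minimal predict/update skeleton with an IMU-driven prediction and a camera-derived position measurement. The toy state, models, and noise values are illustrative assumptions, not the report's actual formulation.

```python
# A minimal EKF predict/update skeleton in the spirit of the camera+IMU
# approach above; the toy state (position, velocity), models and noise
# levels are illustrative assumptions, not the report's formulation.
import numpy as np

dt = 0.01
x = np.zeros(6)          # state: position (3) + velocity (3)
P = np.eye(6) * 0.1      # state covariance

def predict(x, P, accel):
    """IMU-driven prediction with constant-acceleration kinematics."""
    F = np.eye(6)
    F[:3, 3:] = np.eye(3) * dt                    # position += velocity*dt
    x = F @ x + np.concatenate([0.5 * dt * dt * accel, dt * accel])
    Q = np.eye(6) * 1e-4                          # assumed process noise
    return x, F @ P @ F.T + Q

def update(x, P, z, landmark):
    """Camera update: toy measurement of position relative to a landmark."""
    H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observes position only
    R = np.eye(3) * 1e-2                          # assumed measurement noise
    y = z - (H @ x - landmark)                    # innovation
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + K @ y, (np.eye(6) - K @ H) @ P

x, P = predict(x, P, accel=np.array([0.0, 0.0, -9.81]))
x, P = update(x, P, z=np.zeros(3), landmark=np.zeros(3))
print("position estimate:", x[:3])
```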

  20. Blind identification of cellular phone cameras

    Science.gov (United States)

    Çeliktutan, Oya; Avcibas, Ismail; Sankur, Bülent

    2007-02-01

    In this paper, we focus on the blind source cell-phone identification problem. It is known that various artifacts in the image processing pipeline, such as pixel defects or unevenness of the responses in the CCD sensor, dark current noise, and the proprietary interpolation algorithms involved in the color filter array (CFA), leave telltale footprints. These artifacts, although often imperceptible, are statistically stable and can be considered a signature of the camera type or even of the individual device. For this purpose, we explore a set of forensic features, such as binary similarity measures, image quality measures and higher-order wavelet statistics, in conjunction with an SVM classifier to identify the originating cell-phone type. We provide identification results among nine different brands of cell-phone cameras. In addition to our initial results, we applied a set of geometrical operations to the original images in order to investigate how robust our proposed method is under these manipulations.
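
    A minimal sketch of one ingredient of the described pipeline, higher-order wavelet statistics fed to an SVM, might look as follows; the feature set, toy data and labels are illustrative stand-ins, not the paper's exact features or datasets (requires PyWavelets and scikit-learn).

```python
# A minimal sketch of wavelet-statistics features plus an SVM classifier
# for source camera identification; toy data stands in for real images.
import numpy as np
import pywt
from scipy.stats import skew, kurtosis
from sklearn.svm import SVC

def wavelet_features(img):
    """Mean/variance/skewness/kurtosis of first-level wavelet detail subbands."""
    _, (cH, cV, cD) = pywt.dwt2(img.astype(float), "db4")
    feats = []
    for band in (cH, cV, cD):
        v = band.ravel()
        feats += [v.mean(), v.var(), skew(v), kurtosis(v)]
    return np.array(feats)

# Toy stand-in data: two fake "phone models" with different noise levels.
rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):
    for _ in range(20):
        img = rng.normal(128, 8 + 4 * label, (64, 64))
        X.append(wavelet_features(img))
        y.append(label)

clf = SVC(kernel="rbf").fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```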

  1. Cervical SPECT Camera for Parathyroid Imaging

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2012-08-31

    Primary hyperparathyroidism characterized by one or more enlarged parathyroid glands has become one of the most common endocrine diseases in the world affecting about 1 per 1000 in the United States. Standard treatment is highly invasive exploratory neck surgery called Parathyroidectomy. The surgery has a notable mortality rate because of the close proximity to vital structures. The move to minimally invasive parathyroidectomy is hampered by the lack of high resolution pre-surgical imaging techniques that can accurately localize the parathyroid with respect to surrounding structures. We propose to develop a dedicated ultra-high resolution (~ 1 mm) and high sensitivity (10x conventional camera) cervical scintigraphic imaging device. It will be based on a multiple pinhole-camera SPECT system comprising a novel solid state CZT detector that offers the required performance. The overall system will be configured to fit around the neck and comfortably image a patient.

  2. First polarised light with the NIKA camera

    CERN Document Server

    Ritacco, A; Adane, A; Ade, P; André, P; Beelen, A; Belier, B; Benoît, A; Bideaud, A; Billot, N; Bourrion, O; Calvo, M; Catalano, A; Coiffard, G; Comis, B; D'Addabbo, A; Désert, F -X; Doyle, S; Goupy, J; Kramer, C; Leclercq, S; Macías-Pérez, J F; Martino, J; Mauskopf, P; Maury, A; Mayet, F; Monfardini, A; Pajot, F; Pascale, E; Perotto, L; Pisano, G; Ponthieu, N; Rebolo-Iglesias, M; Réveret, V; Rodriguez, L; Savini, G; Schuster, K; Sievers, A; Thum, C; Triqueneaux, S; Tucker, C; Zylka, R

    2015-01-01

    NIKA is a dual-band camera operating with 315 frequency-multiplexed LEKIDs cooled at 100 mK. NIKA is designed to observe the sky in intensity and polarisation at 150 and 260 GHz from the IRAM 30-m telescope. It is a test-bench for the final NIKA2 camera. The incoming linear polarisation is modulated at four times the mechanical rotation frequency by a warm rotating multi-layer Half Wave Plate. Then, the signal is analysed by a wire grid and finally absorbed by the LEKIDs. The small time constant (<1 ms) of the LEKID detectors combined with the modulation of the HWP enables the quasi-simultaneous measurement of the three Stokes parameters I, Q, U, representing linear polarisation. In this paper we present results of recent observational campaigns demonstrating the good performance of NIKA in detecting polarisation at mm wavelength.

  3. Camera Raw Explained (1)

    Institute of Scientific and Technical Information of China (English)

    张恣宽

    2010-01-01

    Camera Raw, developed by Adobe, is a plug-in in Photoshop for converting RAW-format files. Although some major camera manufacturers, such as Nikon and Canon, each offer their own RAW conversion software with good performance, Adobe has drawn on its strengths in Photoshop development to integrate RAW conversion into Photoshop, making the advantages of RAW conversion even more prominent and the functionality very powerful. This is especially true of Camera Raw 5 in Photoshop CS4, which is more powerful still.

  4. AUTOMATIC THEFT SECURITY SYSTEM (SMART SURVEILLANCE CAMERA

    Directory of Open Access Journals (Sweden)

    Veena G.S

    2013-12-01

    Full Text Available The proposed work aims to create a smart application camera, with the intention of eliminating the need for a human presence to detect unwanted sinister activities, such as theft in this case. Spread across the campus are certain valuable biometric identification systems at arbitrary locations. The application monitors these systems (hereafter referred to as the "object") using our smart camera system based on the OpenCV platform. By using OpenCV Haar training, employing the Viola-Jones algorithm implementation in OpenCV, we teach the machine to identify the object under its environmental conditions. An added face recognition feature is based on Principal Component Analysis (PCA) to generate eigenfaces, and test images are verified against the eigenfaces using a distance-based algorithm, such as the Euclidean or Mahalanobis distance. If the object is misplaced, or an unauthorized user is in the immediate vicinity of the object, an alarm signal is raised.
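
    The Viola-Jones detection step described above might look roughly as follows in OpenCV; the stock frontal-face cascade stands in for a custom cascade trained on the monitored object, and the alarm logic is a deliberate simplification.

```python
# A rough OpenCV sketch of the Viola-Jones detection step; the stock
# frontal-face cascade stands in for a custom cascade trained on the
# monitored "object", and the alarm logic is a deliberate simplification.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)          # surveillance camera stream
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(hits) == 0:
        print("object not detected -> raise alarm")
    for (x, y, w, h) in hits:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```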

  5. Advanced EVA Suit Camera System Development Project

    Science.gov (United States)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA Suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera that is currently being tested to be able to make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Designer to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the options of the camera placement may be tested along with other future suit testing. There are multiple teams that work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary and these comparisons will be used as further progress is made for the overall suit design. This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was

  6. Slit-Drum Camera For Projectile Studies

    Science.gov (United States)

    Liangyi, Chen; Shaoxiang, Zhou; Guanhua, Cha; Yuxi, Hu

    1983-03-01

    The model XF-70 slit-drum camera has been developed to record projectiles in flight for observation and data acquisition. It has two operation modes: (1) synchro-ballistic photography and (2) streak recording. The film is located on the inner surface of a rotating drum, which makes it travel. A folding mirror is arranged to reflect the light beam 90 degrees onto the film. The assembly of the folding mirror and slit aperture can be rotated together about the optical axis of the objective, so that by pre-rotating the folding mirror assembly by an appropriate angle the camera can record a projectile launched at any angle, in either synchro-ballistic photography or streak recording. A mechanical-electric shutter close to the slit aperture prevents the film from being re-exposed. The loading mechanism is designed for use in daylight. LED fiducial marks and timing marks are printed at the edges of the frame for accurate measurements.

  7. Using a portable holographic camera in cosmetology

    Science.gov (United States)

    Bakanas, R.; Gudaitis, G. A.; Zacharovas, S. J.; Ratcliffe, D. B.; Hirsch, S.; Frey, S.; Thelen, A.; Ladrière, N.; Hering, P.

    2006-07-01

    The HSF-MINI portable holographic camera is used to record holograms of the human face. The recorded holograms are analyzed using a unique three-dimensional measurement system that provides topometric data of the face with resolution less than or equal to 0.5 mm. The main advantages of this method over other, more traditional methods (such as laser triangulation and phase-measurement triangulation) are discussed.

  8. Delay in camera-to-display systems

    OpenAIRE

    2011-01-01

    Today we see an increasing number of time dependent visual computer systems, ranging from interactive video installations, via high definition teleconferencing to the high performance computer vision disciplines for example in industry and robotics. Common for all of these are the requirement for low and predictable delays from the system itself and its components. In this thesis, we look into the delay of camera-to-display computer systems to understand the properties of their delay com...

  9. Rank-based camera spectral sensitivity estimation.

    Science.gov (United States)

    Finlayson, Graham; Darrodi, Maryam Mohammadzadeh; Mackiewicz, Michal

    2016-04-01

    In order to accurately predict a digital camera's response to spectral stimuli, the spectral sensitivity functions of its sensor need to be known. These functions can be determined by direct measurement in the lab (a difficult and lengthy procedure) or through simple statistical inference. Statistical inference methods are based on the observation that when a camera responds linearly to spectral stimuli, the device spectral sensitivities are linearly related to the camera rgb response values, and so can be found through regression. However, for rendered images, such as the JPEG images taken by a mobile phone, this assumption of linearity is violated. Even small departures from linearity can negatively impact the accuracy of the recovered spectral sensitivities when a regression method is used. In our work, we develop a novel camera spectral sensitivity estimation technique that can recover the linear device spectral sensitivities from linear images and the effective linear sensitivities from rendered images. According to our method, the rank order of a pair of responses imposes a constraint on the shape of the underlying spectral sensitivity curve (of the sensor). Technically, each rank-pair splits the space where the underlying sensor might lie into two parts (a feasible region and an infeasible region). By intersecting the feasible regions from all the ranked pairs, we can find a feasible region of sensor space. Experiments demonstrate that using rank orders delivers estimation accuracy equal to that of the prior art. However, the rank-based method delivers a step-change in estimation performance when the data is not linear and, for the first time, allows for the estimation of the effective sensitivities of devices that may not even have a "raw mode." Experiments validate our method. PMID:27140768
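
    The rank-constraint idea lends itself to a short sketch: every ranked pair of responses contributes a half-space constraint on the sensor vector, and a linear program can test the feasibility of (and pick a point from) the intersected region. The synthetic stimuli and Gaussian "true" sensor below are assumptions for illustration, not the paper's data.

```python
# A minimal sketch of the rank-constraint idea: every ranked pair of linear
# responses (i brighter than j) contributes the half-space (c_i - c_j)·s >= eps
# on the sampled sensor s; a linear program tests feasibility of the
# intersection. Stimuli and the Gaussian "true" sensor are synthetic.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n_wl, n_stim = 31, 40
C = rng.uniform(0, 1, (n_stim, n_wl))          # spectral stimuli (rows)
s_true = np.exp(-0.5 * ((np.arange(n_wl) - 15) / 5.0) ** 2)
responses = C @ s_true                         # linear camera responses

pairs = [(i, j) for i in range(n_stim) for j in range(n_stim)
         if responses[i] > responses[j]]
A_ub = np.array([-(C[i] - C[j]) for i, j in pairs])   # -(c_i-c_j)·s <= -eps
b_ub = np.full(len(pairs), -1e-6)

# Any feasible point of the intersected region is a candidate sensor.
res = linprog(np.zeros(n_wl), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 1)] * n_wl, method="highs")
print("feasible sensor found:", res.success)
```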

  10. Rank-based camera spectral sensitivity estimation.

    Science.gov (United States)

    Finlayson, Graham; Darrodi, Maryam Mohammadzadeh; Mackiewicz, Michal

    2016-04-01

    In order to accurately predict a digital camera's response to spectral stimuli, the spectral sensitivity functions of its sensor need to be known. These functions can be determined by direct measurement in the lab (a difficult and lengthy procedure) or through simple statistical inference. Statistical inference methods are based on the observation that when a camera responds linearly to spectral stimuli, the device spectral sensitivities are linearly related to the camera rgb response values, and so can be found through regression. However, for rendered images, such as the JPEG images taken by a mobile phone, this assumption of linearity is violated. Even small departures from linearity can negatively impact the accuracy of the recovered spectral sensitivities when a regression method is used. In our work, we develop a novel camera spectral sensitivity estimation technique that can recover the linear device spectral sensitivities from linear images and the effective linear sensitivities from rendered images. According to our method, the rank order of a pair of responses imposes a constraint on the shape of the underlying spectral sensitivity curve (of the sensor). Technically, each rank-pair splits the space where the underlying sensor might lie into two parts (a feasible region and an infeasible region). By intersecting the feasible regions from all the ranked pairs, we can find a feasible region of sensor space. Experiments demonstrate that using rank orders delivers estimation accuracy equal to that of the prior art. However, the rank-based method delivers a step-change in estimation performance when the data is not linear and, for the first time, allows for the estimation of the effective sensitivities of devices that may not even have a "raw mode." Experiments validate our method.

  11. CCD characterization for a range of color cameras

    NARCIS (Netherlands)

    Withagen, P.J.; Groen, F.C.A.; Schutte, K.

    2005-01-01

    CCD cameras are widely used for remote sensing and image processing applications. However, most cameras are produced to create nice images, not to do accurate measurements. Post processing operations such as gamma adjustment and automatic gain control are incorporated in the camera. When a (CCD) cam

  12. Camera Network Coverage Improving by Particle Swarm Optimization

    NARCIS (Netherlands)

    Xu, Y.-C.; Lei, B.; Hendriks, E.A.

    2011-01-01

    This paper studies how to improve the field of view (FOV) coverage of a camera network. We focus on a special but practical scenario where the cameras are randomly scattered in a wide area and each camera may adjust its orientation but cannot move in any direction. We propose a particle swarm optimi
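
    As an illustration of particle-swarm orientation optimization for the coverage problem described above, here is a minimal, self-contained sketch under invented scene parameters (camera positions, FOV, range). It illustrates the generic PSO approach, not the authors' exact algorithm or fitness function.

```python
# A minimal PSO sketch for the orientation-only coverage problem: fixed
# camera positions, each particle encodes one pan angle per camera, and
# fitness is the fraction of sample points inside some camera's FOV sector.
import numpy as np

rng = np.random.default_rng(3)
cams = rng.uniform(0, 100, (6, 2))          # fixed camera positions
pts = rng.uniform(0, 100, (400, 2))         # points to cover
FOV, RANGE = np.deg2rad(60), 40.0           # assumed sensing sector

def coverage(angles):
    d = pts[:, None, :] - cams[None, :, :]            # (400, 6, 2)
    dist = np.linalg.norm(d, axis=2)
    bearing = np.arctan2(d[..., 1], d[..., 0])
    dang = np.abs(np.angle(np.exp(1j * (bearing - angles[None, :]))))
    return np.mean(np.any((dist < RANGE) & (dang < FOV / 2), axis=1))

# Standard global-best PSO over the six pan angles.
n_particles, iters = 30, 100
X = rng.uniform(-np.pi, np.pi, (n_particles, 6))
V = np.zeros_like(X)
pbest, pbest_f = X.copy(), np.array([coverage(x) for x in X])
gbest = pbest[pbest_f.argmax()].copy()
for _ in range(iters):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = 0.72 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
    X += V
    f = np.array([coverage(x) for x in X])
    better = f > pbest_f
    pbest[better], pbest_f[better] = X[better], f[better]
    gbest = pbest[pbest_f.argmax()].copy()
print(f"best coverage: {pbest_f.max():.1%}")
```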

  13. 16 CFR 3.45 - In camera orders.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false In camera orders. 3.45 Section 3.45... PRACTICE FOR ADJUDICATIVE PROCEEDINGS Hearings § 3.45 In camera orders. (a) Definition. Except as hereinafter provided, material made subject to an in camera order will be kept confidential and not placed...

  14. 39 CFR 3001.31a - In camera orders.

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false In camera orders. 3001.31a Section 3001.31a Postal... Applicability § 3001.31a In camera orders. (a) Definition. Except as hereinafter provided, documents and testimony made subject to in camera orders are not made a part of the public record, but are...

  15. 15 CFR 743.3 - Thermal imaging camera reporting.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Thermal imaging camera reporting. 743... REPORTING § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging cameras must be reported to BIS as provided in this section. (b) Transactions to be reported. Exports...

  16. 21 CFR 892.1100 - Scintillation (gamma) camera.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Scintillation (gamma) camera. 892.1100 Section 892...) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1100 Scintillation (gamma) camera. (a) Identification. A scintillation (gamma) camera is a device intended to image the distribution of radionuclides...

  17. 21 CFR 878.4160 - Surgical camera and accessories.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Surgical camera and accessories. 878.4160 Section... (CONTINUED) MEDICAL DEVICES GENERAL AND PLASTIC SURGERY DEVICES Surgical Devices § 878.4160 Surgical camera and accessories. (a) Identification. A surgical camera and accessories is a device intended to be...

  18. SPECT detectors: the Anger Camera and beyond.

    Science.gov (United States)

    Peterson, Todd E; Furenlid, Lars R

    2011-09-01

    The development of radiation detectors capable of delivering spatial information about gamma-ray interactions was one of the key enabling technologies for nuclear medicine imaging and, eventually, single-photon emission computed tomography (SPECT). The continuous sodium iodide scintillator crystal coupled to an array of photomultiplier tubes, almost universally referred to as the Anger Camera after its inventor, has long been the dominant SPECT detector system. Nevertheless, many alternative materials and configurations have been investigated over the years. Technological advances as well as the emerging importance of specialized applications, such as cardiac and preclinical imaging, have spurred innovation such that alternatives to the Anger Camera are now part of commercial imaging systems. Increased computing power has made it practical to apply advanced signal processing and estimation schemes to make better use of the information contained in the detector signals. In this review we discuss the key performance properties of SPECT detectors and survey developments in both scintillator and semiconductor detectors and their readouts with an eye toward some of the practical issues at least in part responsible for the continuing prevalence of the Anger Camera in the clinic. PMID:21828904

  19. Refocusing distance of a standard plenoptic camera.

    Science.gov (United States)

    Hahne, Christopher; Aggoun, Amar; Velisavljevic, Vladan; Fiebig, Susanne; Pesch, Matthias

    2016-09-19

    Recent developments in computational photography enabled variation of the optical focus of a plenoptic camera after image exposure, also known as refocusing. Existing ray models in the field simplify the camera's complexity for the purpose of image and depth map enhancement, but fail to satisfyingly predict the distance to which a photograph is refocused. By treating a pair of light rays as a system of linear functions, it will be shown in this paper that its solution yields an intersection indicating the distance to a refocused object plane. Experimental work is conducted with different lenses and focus settings while comparing distance estimates with a stack of refocused photographs for which a blur metric has been devised. Quantitative assessments over a 24 m distance range suggest that predictions deviate by less than 0.35% from those of an optical design software package. The proposed refocusing estimator assists in predicting object distances, for example in the prototyping stage of plenoptic cameras, and will be an essential feature in applications demanding high precision in synthetic focus or where depth map recovery is done by analyzing a stack of refocused photographs. PMID:27661891
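
    Under the paper's central idea, two refocused rays written as linear functions y = m·z + b intersect at the refocused object plane; solving the corresponding 2x2 linear system gives the refocusing distance. The slopes and intercepts below are invented for illustration; in the real model they derive from the plenoptic camera's geometry and focus setting.

```python
# A minimal numeric sketch of the ray-intersection construction, with
# invented ray parameters: two rays y = m*z + b meet at the refocused
# object plane, and the z of the intersection is the refocusing distance.
import numpy as np

m1, b1 = -0.020, 0.10     # ray 1 (assumed slope/intercept)
m2, b2 = 0.015, -0.05     # ray 2

# Solve m1*z - y = -b1 and m2*z - y = -b2 simultaneously.
A = np.array([[m1, -1.0],
              [m2, -1.0]])
z_star, y_star = np.linalg.solve(A, np.array([-b1, -b2]))
print(f"refocusing distance z = {z_star:.3f} (units of the intercepts)")
```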

  20. Terrain mapping camera for Chandrayaan-1

    Indian Academy of Sciences (India)

    A S Kiran Kumar; A Roy Chowdhury

    2005-12-01

    The Terrain Mapping Camera (TMC) on India's first satellite for lunar exploration, Chandrayaan-1, is for generating high-resolution 3-dimensional maps of the Moon. With this instrument, a complete topographic map of the Moon with 5 m spatial resolution and 10-bit quantization will be available for scientific studies. The TMC will image within the panchromatic spectral band of 0.4 to 0.9 μm with a stereo view in the fore, nadir and aft directions of the spacecraft movement and have a B/H ratio of 1. The swath coverage will be 20 km. The camera is configured for imaging in the push-broom mode with three linear detectors in the image plane. The camera will have four gain settings to cover the varying illumination conditions of the Moon. Additionally, a provision for imaging with reduced resolution, for improving the Signal-to-Noise Ratio (SNR) in polar regions, which have poor illumination conditions throughout, has been made. An SNR of better than 100 is expected in the ±60° latitude region for mature mare soil, which is one of the darkest regions on the lunar surface. This paper presents a brief description of the TMC instrument.

  1. Infrared stereo camera for human machine interface

    Science.gov (United States)

    Edmondson, Richard; Vaden, Justin; Chenault, David

    2012-06-01

    Improved situational awareness results not only from improved performance of imaging hardware, but also when the operator and human factors are considered. Situational awareness for IR imaging systems frequently depends on the contrast available. A significant improvement in effective contrast for the operator can result when depth perception is added to the display of IR scenes. Depth perception through flat-panel 3D displays is now possible due to the number of 3D displays entering the consumer market. Such displays require appropriate and human-friendly stereo IR video input in order to be effective in the dynamic military environment. We report on a stereo IR camera that has been developed for integration onto an unmanned ground vehicle (UGV). The camera has an auto-convergence capability that significantly reduces ill effects due to image doubling, minimizes focus-convergence mismatch, and eliminates the need for the operator to manually adjust camera properties. A discussion of the size, weight, and power requirements as well as integration onto the robot platform will be given, along with a description of the stand-alone operation.

  2. SPECT detectors: the Anger Camera and beyond

    Science.gov (United States)

    Peterson, Todd E.; Furenlid, Lars R.

    2011-09-01

    The development of radiation detectors capable of delivering spatial information about gamma-ray interactions was one of the key enabling technologies for nuclear medicine imaging and, eventually, single-photon emission computed tomography (SPECT). The continuous sodium iodide scintillator crystal coupled to an array of photomultiplier tubes, almost universally referred to as the Anger Camera after its inventor, has long been the dominant SPECT detector system. Nevertheless, many alternative materials and configurations have been investigated over the years. Technological advances as well as the emerging importance of specialized applications, such as cardiac and preclinical imaging, have spurred innovation such that alternatives to the Anger Camera are now part of commercial imaging systems. Increased computing power has made it practical to apply advanced signal processing and estimation schemes to make better use of the information contained in the detector signals. In this review we discuss the key performance properties of SPECT detectors and survey developments in both scintillator and semiconductor detectors and their readouts with an eye toward some of the practical issues at least in part responsible for the continuing prevalence of the Anger Camera in the clinic.

  3. Improvement of passive THz camera images

    Science.gov (United States)

    Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw

    2012-10-01

    Terahertz technology is one of the emerging technologies that has the potential to change our lives. There are many attractive applications in fields like security, astronomy, biology and medicine. Until recent years, terahertz (THz) waves were an undiscovered, or more importantly, an unexploited area of the electromagnetic spectrum. The reasons for this were difficulties in the generation and detection of THz waves. Recent advances in hardware technology have started to open up the field to new applications such as THz imaging. THz waves can penetrate various materials. However, automated processing of THz images can be challenging. The THz frequency band is especially suited for clothes penetration because this radiation has no harmful ionizing effects and is thus safe for human beings. Strong technological development in this band has produced a few interesting devices. Even though the development of THz cameras is an emerging topic, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. Therefore, THz image processing is a very challenging and urgent topic. Digital THz image processing is a promising and cost-effective way to meet demanding security and defense applications. In this article we demonstrate the results of image quality enhancement and image fusion of images captured by a commercially available passive THz camera by means of various combined methods. Our research is focused on the detection of dangerous objects (guns, knives and bombs) hidden under some popular types of clothing.

  4. Single eye or camera with depth perception

    Science.gov (United States)

    Kornreich, Philipp; Farell, Bart

    2012-10-01

    An imager that can measure the distance from each pixel to the point on the object that is in focus at that pixel is described. This is accomplished by a short photoconducting lossy lightguide section at each pixel. The eye or camera lens selects the object point whose range is to be determined at the pixel. Light arriving at an image point through a convex lens adds constructively only if it comes from the object point that is in focus at this pixel. Light waves from all other object points cancel. Thus the lightguide at this pixel receives light from one object point only. This light signal has a phase component proportional to the range. The light intensity modes, and thus the photocurrent in the lightguides, shift in response to the phase of the incoming light. Contacts along the length of the lightguide collect the photocurrent signal containing the range information. Applications of this camera include autonomous vehicle navigation and robotic vision. An interesting application is as part of a crude teleportation system consisting of this camera and a three-dimensional printer at a remote location.

  5. Theory and applications of smart cameras

    CERN Document Server

    2016-01-01

    This book presents an overview of smart camera systems, considering practical applications but also reviewing fundamental aspects of the underlying technology. It introduces in a tutorial style the principles of sensing and signal processing, and also describes topics such as wireless connection to the Internet of Things (IoT), which is expected to be the biggest market for smart cameras. It is an excellent guide to the fundamentals of smart camera technology, and the chapters complement each other well as the authors have worked as a team under the auspices of the GFP (Global Frontier Project), the largest-scale funded research in Korea. This is the third of three books based on the Integrated Smart Sensors research project, which describe the development of innovative devices, circuits, and system-level enabling technologies. The aim of the project was to develop common platforms on which various devices and sensors can be loaded, and to create systems offering significant improvements in information processi...

  6. Stereo cameras on the International Space Station

    Science.gov (United States)

    Sabbatini, Massimo; Visentin, Gianfranco; Collon, Max; Ranebo, Hans; Sunderland, David; Fortezza, Raimondo

    2007-02-01

    Three-dimensional media is a unique and efficient means to virtually visit/observe objects that cannot be easily reached otherwise, like the International Space Station. The advent of auto-stereoscopic displays and stereo projection systems is making stereo media available to larger audiences than the traditional scientist and design engineer communities. It is foreseen that a major demand for 3D content shall come from the entertainment area. Taking advantage of the six-month-long stay on the International Space Station of fellow European astronaut Thomas Reiter, the Erasmus Centre uploaded to the ISS a newly developed, fully digital stereo camera, the Erasmus Recording Binocular. Testing the camera and its human interfaces in weightlessness, as well as accurately mapping the interior of the ISS, are the main objectives of the experiment that has just been completed at the time of writing. The intent of this paper is to share with the readers the design challenges tackled in the development and operation of the ERB camera and highlight some of the future plans the Erasmus Centre team has in the pipeline.

  7. Modulated electron-multiplied fluorescence lifetime imaging microscope: all-solid-state camera for fluorescence lifetime imaging.

    Science.gov (United States)

    Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted

    2012-12-01

    We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge coupled device design in a frequency-domain FLIM system. The first stage of evaluation for the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to do lifetime measurement using our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and a better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.
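
    For context, frequency-domain FLIM extracts a phase lifetime and a modulation lifetime from a set of phase-stepped modulated images. The sketch below demonstrates that arithmetic on synthetic single-pixel data; the modulation frequency and instrument modulation depth are assumed values, not MEM-FLIM specifications.

```python
# A minimal sketch of frequency-domain FLIM lifetime estimation from
# phase-stepped homodyne data; frequency and depth values are assumed.
import numpy as np

f_mod = 40e6                      # modulation frequency, Hz (assumed)
omega = 2 * np.pi * f_mod
K = 12                            # phase steps over one period
steps = 2 * np.pi * np.arange(K) / K

tau_true = 4e-9                   # 4 ns fluorescein-like lifetime
phi = np.arctan(omega * tau_true)             # phase delay
m = 1 / np.sqrt(1 + (omega * tau_true) ** 2)  # demodulation

# Homodyne signal at each phase step (one pixel; instrument depth 0.9).
I = 100 * (1 + 0.9 * m * np.cos(steps - phi))

# The first DFT harmonic recovers measured phase and modulation depth.
F1 = np.sum(I * np.exp(-1j * steps)) / K
phi_meas = -np.angle(F1)
m_meas = 2 * np.abs(F1) / np.mean(I) / 0.9    # divide out instrument depth

tau_phi = np.tan(phi_meas) / omega
tau_mod = np.sqrt(1 / m_meas**2 - 1) / omega
print(f"tau_phase = {tau_phi*1e9:.2f} ns, tau_mod = {tau_mod*1e9:.2f} ns")
```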

  8. Monte Carlo simulation for dual head gamma camera

    International Nuclear Information System (INIS)

    The Monte Carlo (MC) simulation technique has been widely used in medical physics applications. In nuclear medicine, MC has been used to design new medical imaging devices such as positron emission tomography (PET), gamma cameras and single photon emission computed tomography (SPECT); it can also be used to study the factors affecting image quality and internal dosimetry. GATE is one of the Monte Carlo codes that has a number of advantages for the simulation of SPECT and PET. Access to the machines used in clinics is limited because of their workload, which makes it hard to evaluate some factors affecting machine performance that must be evaluated routinely. Given the additional difficulties of carrying out scientific research and training students on clinical machines, an MC model can be an optimum solution to the problem. The aim of this study was to use the GATE Monte Carlo code to model the Nucline Spirit (Mediso) dual-head gamma camera hosted in the Radiation and Isotopes Centre of Khartoum, which is equipped with low-energy general-purpose (LEGP) collimators. This model was used to evaluate spatial resolution and sensitivity, which are important factors affecting image quality, and to demonstrate the validity of GATE by comparing simulated with experimental results for spatial resolution. The GATE model of the Nucline Spirit dual-head gamma camera was developed by applying the manufacturer's specifications, and the simulation was then run. In the evaluation of spatial resolution, the FWHM was calculated from the image profile of a line source of Tc-99m (a gamma emitter of energy 140 keV) at distances of 5, 10, 15, 20, 22, 27, 32 and 37 cm from the modelled camera head; for these distances the spatial resolution was found to be 5.76, 7.73, 10.7, 13.8, 14.01, 16.91, 19.75 and 21.9 mm, respectively. These results show a linear decrease of spatial resolution with increasing distance between the object (line source) and the collimator. The FWHM calculated at 10 cm was compared with experimental results.
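
    The FWHM extraction step described above reduces, per distance, to fitting a Gaussian to the line-source image profile. A minimal sketch with a synthetic profile (centred on the reported 5.76 mm resolution at 5 cm) follows; in the study the profile comes from the GATE-simulated image.

```python
# A minimal sketch of FWHM extraction: fit a Gaussian to the line-source
# profile and report FWHM = 2*sqrt(2*ln 2)*sigma ~= 2.355*sigma.
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma, c):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + c

x = np.linspace(-30, 30, 121)           # position across the line source, mm
rng = np.random.default_rng(2)
profile = gauss(x, 1000, 0, 5.76 / 2.355, 20) + rng.normal(0, 5, x.size)

(a, mu, sigma, c), _ = curve_fit(gauss, x, profile, p0=[900, 0, 3, 10])
print(f"FWHM = {2.355 * abs(sigma):.2f} mm")
```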

  9. Fading Supernova Creates Spectacular Light Show

    Science.gov (United States)

    2003-01-01

    This image of SN 1987A, taken November 28, 2003 by the Advanced Camera for Surveys aboard NASA's Hubble Space Telescope (HST), shows many bright spots along a ring of gas, like pearls on a necklace. These cosmic pearls are being produced as supersonic shock waves unleashed during the explosion slam into the ring at more than a million miles per hour. The collision is heating the gas ring, causing its innermost regions to glow. Astronomers detected the first of these hot spots in 1996, but now they see dozens of them all around the ring. With temperatures surging from a few thousand degrees to a million degrees, the flares are increasing in number. In the next few years, the entire ring will be ablaze as it absorbs the full force of the crash and is expected to become bright enough to illuminate the star's surroundings. Astronomers will then be able to obtain information on how the star ejected material before the explosion. The elongated and expanding object in the center of the ring is debris from the supernova blast, which is being heated by radioactive elements, principally titanium-44, that were created in the explosion. The explosion was first observed by astronomers seventeen years ago in 1987, although it took place about 160,000 years ago.

  10. Laser guide star pointing camera for ESO LGS Facilities

    Science.gov (United States)

    Bonaccini Calia, D.; Centrone, M.; Pedichini, F.; Ricciardi, A.; Cerruto, A.; Ambrosino, F.

    2014-08-01

    Every observatory using LGS-AO routinely experiences the long time needed to acquire the laser guide star in the wavefront sensor field of view. This is mostly due to the difficulty of creating LGS pointing models, because of the opto-mechanical flexures and hysteresis in the launch and receiver telescope structures. The launch telescopes normally sit on the mechanical structure of the larger receiver telescope. The LGS acquisition time is even longer in the case of multiple-LGS systems. In this framework, optimizing the absolute pointing accuracy of the LGS systems is relevant to boosting the time efficiency of both science and technical observations. In this paper we show the rationale, the design and the feasibility tests of an LGS Pointing Camera (LPC), which has been conceived for the VLT Adaptive Optics Facility 4LGSF project. The LPC would assist in pointing the four LGS while the VLT performs its initial active-optics cycles to adjust its own optics on a natural star target after a preset. The LPC minimizes the accuracy needed for LGS pointing-model calibrations while allowing sub-arcsecond LGS absolute pointing accuracy to be reached. This considerably reduces the LGS acquisition time and operational overheads. The LPC is a smart CCD camera, fed by a 150 mm diameter aperture of a Maksutov telescope mounted on the top ring of VLT UT4, running Linux and acting as a server for the 4LGSF client. The smart camera can recognize the sky field within a few seconds using astrometric software, determining the absolute positions of the stars and the LGS. Upon request it returns the offsets to apply to the LGS to position them at the required sky coordinates. As a by-product, once calibrated, the LPC can calculate on request, for each LGS, its return flux, its FWHM and the uplink beam scattering levels.

  11. Preliminary field evaluation of solid state cameras for security applications

    International Nuclear Information System (INIS)

    Recent developments in solid state imager technology have resulted in a series of compact, lightweight, all-solid-state closed circuit television (CCTV) cameras. Although it is widely known that the various solid state cameras have less light sensitivity and lower resolution than their vacuum tube counterparts, the potential for having a much longer Mean Time Between Failure (MTBF) for the all-solid-state cameras is generating considerable interest within the security community. Questions have been raised as to whether the newest and best of the solid state cameras are a viable alternative to the high maintenance vacuum tube cameras in exterior security applications. To help answer these questions, a series of tests were performed by Sandia National Laboratories at various test sites and under several lighting conditions. In general, all-solid-state cameras need to be improved in four areas before they can be used as wholesale replacements for tube cameras in exterior security applications: resolution, sensitivity, contrast, and smear. However, with careful design some of the higher performance cameras can be used for perimeter security systems, and all of the cameras have applications where they are uniquely qualified. Many of the cameras are well suited for interior assessment and surveillance uses, and several of the cameras are well designed as robotics and machine vision devices

  12. Presence capture cameras - a new challenge to the image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to the market and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary considerably. Presence capture cameras still face the same quality issues as earlier phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system that can record 3D audio-visual reality as it is has to have several camera modules, several microphones and, especially, technology that can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range form the basis of virtual reality stream quality. However, the co-operation of several cameras adds a new dimension to these quality factors, and new quality features must also be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how can the stitching be validated? This work describes the quality factors that remain valid in presence capture cameras and assesses their importance. Moreover, the new challenges of presence capture cameras are investigated from an image and video quality point of view. The work also considers how well current measurement methods can be applied to presence capture cameras.

  13. High-dimensional camera shake removal with given depth map.

    Science.gov (United States)

    Yue, Tao; Suo, Jinli; Dai, Qionghai

    2014-06-01

    Camera motion blur is drastically nonuniform for large depth-range scenes, and the nonuniformity caused by camera translation is depth dependent, but this is not the case for camera rotations. To restore blurry images of large-depth-range scenes deteriorated by arbitrary camera motion, we build an image blur model considering 6 degrees of freedom (DoF) of camera motion with a given scene depth map. To make this 6D depth-aware model tractable, we propose a novel parametrization strategy to reduce the number of variables, as well as an effective method to estimate the high-dimensional camera motion. The number of variables is reduced by a temporal sampling motion function, which describes the 6-DoF camera motion by sampling the camera trajectory uniformly in the time domain. To effectively estimate the high-dimensional camera motion parameters, we construct a probabilistic motion density function (PMDF) to describe the probability distribution of camera poses during exposure, and apply it as a unified constraint to guide the convergence of the iterative deblurring algorithm. Specifically, the PMDF is computed through a back projection from 2D local blur kernels to the 6D camera motion parameter space and robust voting. We conduct a series of experiments on both synthetic and real captured data, and validate that our method achieves better performance than existing uniform and nonuniform methods on large-depth-range scenes.

  14. Qualification Tests of Micro-camera Modules for Space Applications

    Science.gov (United States)

    Kimura, Shinichi; Miyasaka, Akira

    Visual capability is very important for space-based activities, for which small, low-cost space cameras are desired. Although cameras for terrestrial applications are continually being improved, little progress has been made on cameras used in space, which must be extremely robust to withstand harsh environments. This study focuses on commercial off-the-shelf (COTS) CMOS digital cameras because they are very small and are based on an established mass-market technology. Radiation and ultrahigh-vacuum tests were conducted on a small COTS camera that weighs less than 100 mg (including optics). This paper presents the results of the qualification tests for COTS cameras and for a small, low-cost COTS-based space camera.

  15. The GCT camera for the Cherenkov Telescope Array

    CERN Document Server

    Brown, Anthony M; Allan, D; Amans, J P; Armstrong, T P; Balzer, A; Berge, D; Boisson, C; Bousquet, J -J; Bryan, M; Buchholtz, G; Chadwick, P M; Costantini, H; Cotter, G; Daniel, M K; De Franco, A; De Frondat, F; Dournaux, J -L; Dumas, D; Fasola, G; Funk, S; Gironnet, J; Graham, J A; Greenshaw, T; Hervet, O; Hidaka, N; Hinton, J A; Huet, J -M; Jegouzo, I; Jogler, T; Kraus, M; Lapington, J S; Laporte, P; Lefaucheur, J; Markoff, S; Melse, T; Mohrmann, L; Molyneux, P; Nolan, S J; Okumura, A; Osborne, J P; Parsons, R D; Rosen, S; Ross, D; Rowell, G; Sato, Y; Sayede, F; Schmoll, J; Schoorlemmer, H; Servillat, M; Sol, H; Stamatescu, V; Stephan, M; Stuik, R; Sykes, J; Tajima, H; Thornhill, J; Tibaldo, L; Trichard, C; Vink, J; Watson, J J; White, R; Yamane, N; Zech, A; Zink, A; Zorn, J

    2016-01-01

    The Gamma-ray Cherenkov Telescope (GCT) is proposed for the Small-Sized Telescope component of the Cherenkov Telescope Array (CTA). GCT's dual-mirror Schwarzschild-Couder (SC) optical system allows the use of a compact camera with small form-factor photosensors. The GCT camera is ~0.4 m in diameter and has 2048 pixels; each pixel has a ~0.2 degree angular size, resulting in a wide field-of-view. The GCT camera is designed for high performance at low cost, housing 32 front-end electronics modules that provide full waveform information for all of the camera's 2048 pixels. The first GCT camera prototype, CHEC-M, was commissioned during 2015, culminating in the first Cherenkov images recorded by an SC telescope and the first light of a CTA prototype. In this contribution we give a detailed description of the GCT camera and present preliminary results from CHEC-M's commissioning.

  16. Simple method for calibrating omnidirectional stereo with multiple cameras

    Science.gov (United States)

    Ha, Jong-Eun; Choi, I.-Sak

    2011-04-01

    Cameras can give useful information for the autonomous navigation of a mobile robot. Typically, one or two cameras are used for this task. Recently, omnidirectional stereo vision systems that can cover the whole surrounding environment of a mobile robot have been adopted. They usually employ a mirror that cannot offer uniform spatial resolution. In this paper, we deal with an omnidirectional stereo system consisting of eight cameras, in which each pair of vertically arranged cameras constitutes one stereo system. Camera calibration is the first necessary step to obtain 3D information. Calibration using a planar pattern requires many images acquired under different poses, so it is tedious to calibrate all eight cameras. In this paper, we present a simple calibration procedure using a cube-type calibration structure that surrounds the omnidirectional stereo system. We can calibrate all the cameras of the omnidirectional stereo system in just one shot.

  17. Calibration of asynchronous smart phone cameras from moving objects

    Science.gov (United States)

    Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel

    2015-04-01

    Calibrating multiple cameras is a fundamental prerequisite for many Computer Vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application on a pair of low-cost portable cameras with different parameters that are found in smart phones. The paper addresses the issues of acquisition, detection of moving objects, dynamic camera registration and tracking of an arbitrary number of targets. The acquisition of data is performed using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of the cameras onto the same world reference frame is performed using a recently developed method for camera calibration using a disparity space parameterisation and the single-cluster PHD filter.

  18. Camera simulation engine enables efficient system optimization for super-resolution imaging

    Science.gov (United States)

    Fullerton, Stephanie; Bennett, Keith; Toda, Eiji; Takahashi, Teruo

    2012-02-01

    Quantitative fluorescent imaging requires optimization of the complete optical system, from the sample to the detector. Such considerations are especially true for precision localization microscopy such as PALM and (d)STORM where the precision of the result is limited by the noise in both the optical and detection systems. Here, we present a Camera Simulation Engine (CSE) that allows comparison of imaging results from CCD, CMOS and EM-CCD cameras under various sample conditions and can accurately validate the quality of precision localization algorithms and camera performance. To achieve these results, the CSE incorporates the following parameters: 1) Sample conditions including optical intensity, wavelength, optical signal shot noise, and optical background shot noise; 2) Camera specifications including QE, pixel size, dark current, read noise, EM-CCD excess noise; 3) Camera operating conditions such as exposure, binning and gain. A key feature of the CSE is that, from a single image (either real or simulated "ideal") we generate a stack of statistically realistic images. We have used the CSE to validate experimental data showing that certain current scientific CMOS technology outperforms EM-CCD in most super-resolution scenarios. Our results support using the CSE to efficiently and methodically select cameras for quantitative imaging applications. Furthermore, the CSE can be used to robustly compare and evaluate new algorithms for data analysis and image reconstruction. These uses of the CSE are particularly relevant to super-resolution precision localization microscopy and provide a faster, simpler and more cost effective means of system optimization, especially camera selection.
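
    A camera simulation of this kind boils down to drawing each frame from the physically expected noise distributions. The sketch below generates statistically realistic frames from one "ideal" photon-flux image, with assumed parameter values for QE, read noise, gain, and offset that are not those of any specific camera.

```python
# A minimal sketch of statistical frame generation: shot noise, optional
# EM excess noise, read noise, gain and quantization applied to one ideal
# photon image. All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)

def simulate_frame(photons, qe=0.9, em_gain=1.0, read_noise_e=1.6,
                   adu_per_e=0.5, offset=100, excess=False):
    electrons = rng.poisson(qe * photons).astype(float)       # shot noise
    if excess:  # EM-CCD multiplication noise (gamma approximation)
        electrons = rng.gamma(np.maximum(electrons, 1e-9), em_gain)
    else:
        electrons *= em_gain
    electrons += rng.normal(0, read_noise_e, photons.shape)   # read noise
    adu = np.round(electrons * adu_per_e + offset)            # digitization
    return np.clip(adu, 0, 65535)

ideal = np.full((128, 128), 50.0)         # 50 photons/pixel "ideal" image
stack = np.stack([simulate_frame(ideal) for _ in range(100)])
print("per-pixel std (ADU):", stack.std(axis=0).mean().round(2))
```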

  19. Improved iris localization by using wide and narrow field of view cameras for iris recognition

    Science.gov (United States)

    Kim, Yeong Gon; Shin, Kwang Yong; Park, Kang Ryoung

    2013-10-01

    Biometrics is a method of identifying individuals by their physiological or behavioral characteristics. Among other biometric identifiers, iris recognition has been widely used for various applications that require a high level of security. When a conventional iris recognition camera is used, the size and position of the iris region in a captured image vary according to the X, Y positions of a user's eye and the Z distance between a user and the camera. Therefore, the searching area of the iris detection algorithm is increased, which can inevitably decrease both the detection speed and accuracy. To solve these problems, we propose a new method of iris localization that uses wide field of view (WFOV) and narrow field of view (NFOV) cameras. Our study is new as compared to previous studies in the following four ways. First, the device used in our research acquires three images, one each of the face and both irises, using one WFOV and two NFOV cameras simultaneously. The relation between the WFOV and NFOV cameras is determined by simple geometric transformation without complex calibration. Second, the Z distance (between a user's eye and the iris camera) is estimated based on the iris size in the WFOV image and anthropometric data of the size of the human iris. Third, the accuracy of the geometric transformation between the WFOV and NFOV cameras is enhanced by using multiple matrices of the transformation according to the Z distance. Fourth, the searching region for iris localization in the NFOV image is significantly reduced based on the detected iris region in the WFOV image and the matrix of geometric transformation corresponding to the estimated Z distance. Experimental results showed that the performance of the proposed iris localization method is better than that of conventional methods in terms of accuracy and processing time.
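
    The Z-distance step described above follows from the pinhole projection model: the camera-to-eye distance is the focal length (in pixels) times the real iris diameter divided by the imaged iris diameter. A minimal sketch follows, with an assumed focal length and the commonly cited ~11.7 mm average iris diameter.

```python
# A minimal sketch of Z-distance estimation under the pinhole model; the
# WFOV focal length and detected iris size are assumed values.
IRIS_DIAMETER_MM = 11.7            # average human iris diameter (anthropometric)
F_PIXELS = 1400.0                  # assumed WFOV focal length in pixels

def z_distance_mm(iris_diameter_px):
    """Pinhole projection: Z = f * D_real / d_image."""
    return F_PIXELS * IRIS_DIAMETER_MM / iris_diameter_px

print(f"Z = {z_distance_mm(40.0):.0f} mm for a 40-pixel iris")
```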

  20. Markerless Camera Pose Estimation - An Overview

    OpenAIRE

    Nöll, Tobias; Pagani, Alain; Stricker, Didier

    2011-01-01

    As shown by the human perception, a correct interpretation of a 3D scene on the basis of a 2D image is possible without markers. Solely by identifying natural features of different objects, their locations and orientations on the image can be identified. This allows a three dimensional interpretation of a two dimensional pictured scene. The key aspect for this interpretation is the correct estimation of the camera pose, i.e. the knowledge of the orientation and location a picture was recorded...

  1. A positron camera for industrial application

    International Nuclear Information System (INIS)

    A positron camera for application to flow tracing and measurement in mechanical subjects is described. It is based on two 300 x 600 mm2 hybrid multiwire detectors; the cathodes are in the form of lead strips planted onto printed-circuit board, and delay lines are used to determine the location of photon interactions. Measurements of the positron detection efficiency (30 Hz μCi-1 for a centred unshielded source), the maximum data logging rate (3 kHz) and the spatial resolving power (point source response = 5.7 mm fwhm) are presented and discussed, and results from initial demonstration experiments are shown. (orig.)

  2. Calibrating Images from the MINERVA Cameras

    Science.gov (United States)

    Mercedes Colón, Ana

    2016-01-01

    The MINiature Exoplanet Radial Velocity Array (MINERVA) consists of an array of robotic telescopes located on Mount Hopkins, Arizona with the purpose of performing transit photometry and spectroscopy to find Earth-like planets around Sun-like stars. In order to make photometric observations, it is necessary to perform calibrations on the CCD cameras of the telescopes to take into account possible instrument error on the data. In this project, we developed a pipeline that takes optical images, calibrates them using sky flats, darks, and biases to generate a transit light curve.
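
    The reduction such a pipeline performs is the standard CCD calibration: subtract a master bias and master dark, then divide by a normalized master flat. The sketch below shows that arithmetic on synthetic stand-in frames; file I/O and exposure-time scaling (assumed equal here) are omitted.

```python
# A minimal sketch of standard CCD frame calibration with synthetic frames
# standing in for real FITS images; exposure times are assumed equal.
import numpy as np

rng = np.random.default_rng(5)
shape = (512, 512)
biases = [rng.normal(300, 5, shape) for _ in range(10)]
darks = [rng.normal(310, 5, shape) for _ in range(10)]    # bias + dark current
flats = [rng.normal(20300, 80, shape) for _ in range(10)]
science = rng.normal(5310, 40, shape)

master_bias = np.median(biases, axis=0)
master_dark = np.median(darks, axis=0) - master_bias      # dark current only
flat = np.median(flats, axis=0) - master_bias - master_dark
master_flat = flat / np.median(flat)                      # normalized flat

calibrated = (science - master_bias - master_dark) / master_flat
print("calibrated mean:", calibrated.mean().round(1))
```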

  3. Development of a micro-PIXE camera

    International Nuclear Information System (INIS)

    We developed a system for μ-PIXE analysis at the Takasaki Ion Accelerator for Advanced Radiation Application (TIARA) division of the Japan Atomic Energy Research Institute (JAERI), which consists of a microbeam apparatus, a multi-parameter data acquisition system and a personal computer. Elemental analysis over a region of 500 μm x 500 μm can be performed with a spatial resolution of < 0.3 μm, and multi-elemental distributions are presented as images on a computer display even during measurement. We call this system a micro-PIXE camera. (author)

  4. Computational cameras for moving iris recognition

    Science.gov (United States)

    McCloskey, Scott; Venkatesha, Sharath

    2015-05-01

    Iris-based biometric identification is increasingly used for facility access and other security applications. Like all methods that exploit visual information, however, iris systems are limited by the quality of captured images. Optical defocus due to a small depth of field (DOF) is one such challenge, as is the acquisition of sharply-focused iris images from subjects in motion. This manuscript describes the application of computational motion-deblurring cameras to the problem of moving iris capture, from the underlying theory to system considerations and performance data.

  5. Court Reconstruction for Camera Calibration in Broadcast Basketball Videos.

    Science.gov (United States)

    Wen, Pei-Chih; Cheng, Wei-Chih; Wang, Yu-Shuen; Chu, Hung-Kuo; Tang, Nick C; Liao, Hong-Yuan Mark

    2016-05-01

    We introduce a technique of calibrating camera motions in basketball videos. Our method particularly transforms player positions to standard basketball court coordinates and enables applications such as tactical analysis and semantic basketball video retrieval. To achieve a robust calibration, we reconstruct the panoramic basketball court from a video, followed by warping the panoramic court to a standard one. As opposed to previous approaches, which individually detect the court lines and corners of each video frame, our technique considers all video frames simultaneously to achieve calibration; hence, it is robust to illumination changes and player occlusions. To demonstrate the feasibility of our technique, we present a stroke-based system that allows users to retrieve basketball videos. Our system tracks player trajectories from broadcast basketball videos. It then rectifies the trajectories to a standard basketball court by using our camera calibration method. Consequently, users can apply stroke queries to indicate how the players move in gameplay during retrieval. The main advantage of this interface is an explicit query of basketball videos so that unwanted outcomes can be prevented. We show the results in Figs. 1, 7, 9, 10 and our accompanying video to exhibit the feasibility of our technique. PMID:27504515

  6. Segmentation of Camera Captured Business Card Images for Mobile Devices

    CERN Document Server

    Mollah, Ayatullah Faruk; Nasipuri, Mita

    2011-01-01

    Due to the large deformations in camera-captured images, the varied nature of business cards and the computational constraints of mobile devices, the design of an efficient Business Card Reader (BCR) is challenging for researchers. Extracting text regions and segmenting them into characters is one such challenge. In this paper, we present an efficient character segmentation technique for business card images captured by a cell-phone camera, designed in our present work towards developing an efficient BCR. At first, text regions are extracted from the card images, and then the skewed ones are corrected using a computationally efficient skew correction technique. Finally, these skew-corrected text regions are segmented into lines and characters based on horizontal and vertical histograms. Experiments show that the present technique is efficient and applicable for mobile devices, and a mean segmentation accuracy of 97.48% is achieved with 3 mega-pixel (500-600 dpi) images. It takes only 1.1 se...
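
    A minimal sketch of the final, histogram-based step: contiguous non-zero runs of the horizontal projection give the text lines, and runs of each line's vertical projection give the characters. The input convention (text pixels = 1, background = 0) and the function names are assumptions for illustration, not the authors' implementation.

      import numpy as np

      def nonzero_runs(profile):
          # (start, end) index pairs of contiguous non-zero runs in a profile.
          mask = np.r_[False, profile > 0, False]
          edges = np.flatnonzero(mask[1:] != mask[:-1])
          return list(zip(edges[::2], edges[1::2]))

      def segment_lines_and_chars(binary):
          # Lines from the horizontal histogram (row sums)...
          lines = nonzero_runs(binary.sum(axis=1))
          # ...then characters from each line's vertical histogram (column sums).
          return [(top, bot, nonzero_runs(binary[top:bot].sum(axis=0)))
                  for top, bot in lines]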

  7. A new design of filter system in streak camera

    Science.gov (United States)

    Zhou, Pengyu; Bai, Yonglin

    2015-10-01

    In order to reduce how often researchers must enter and leave the testing site and to keep testing running smoothly, we designed a new filter system for streak cameras. This system improves the streak camera's spatial discrimination and temporal resolution. This paper focuses on the principle of the piezoelectric motor drive based on field-effect transistors. The filter wheel is driven by a piezoelectric motor, which effectively avoids interference from the high field produced by the streak tube. The system achieves automatic adjustment across the different gears, improving operational efficiency and guaranteeing the safety of researchers. A CD4046 provides the system's drive clock, and an inverter is used to obtain two synchronous inverted signals. These signals are amplified by field-effect transistors to more than 300 V and integrated at the output terminals to generate a sinusoidal signal. Tests show that in this filter system the piezoelectric motor operates at its resonance frequency under a 62.5 kHz control signal, with a working current of 1.9 A and a driving power of almost 10 W. By adjusting the gears, the filter wheel takes less than 2 seconds to calibrate. The tests yielded the expected results.

  8. Pothole Detection System Using a Black-box Camera

    Directory of Open Access Journals (Sweden)

    Youngtae Jo

    2015-11-01

    Full Text Available Aging roads and poor road-maintenance systems result in a large number of potholes, whose numbers increase over time. Potholes jeopardize road safety and transportation efficiency, and they are often a contributing factor to car accidents. To address the problems associated with potholes, their locations and sizes must be determined quickly. Sophisticated road-maintenance strategies can be developed using a pothole database, which requires a pothole-detection system that can collect pothole information at low cost and over a wide area. However, pothole repair has long relied on manual detection. Recent automatic detection systems, such as those based on vibration or laser scanning, cannot detect potholes accurately and inexpensively, owing to the unstable detection of vibration-based methods and the high cost of laser-scanning-based methods. Thus, in this paper, we introduce a new pothole-detection system using a commercial black-box camera. The proposed system detects potholes over a wide area and at low cost. We have developed a novel pothole-detection algorithm specifically designed to work within the embedded computing environments of black-box cameras. Experimental results with the proposed system show that potholes can be detected accurately in real time.

  9. An efficient image compressor for charge coupled devices camera.

    Science.gov (United States)

    Li, Jin; Xing, Fei; You, Zheng

    2014-01-01

    Recently, discrete wavelet transform- (DWT-) based compressors, such as JPEG2000 and CCSDS-IDC, have been widely regarded as the state-of-the-art compression schemes for charge-coupled device (CCD) cameras. However, because CCD images contain a great deal of complex texture and contour information, projecting them onto the DWT basis produces a large number of large-amplitude high-frequency coefficients, which is a disadvantage for the subsequent coding. In this paper, we propose a low-complexity posttransform coupled with compressive sensing (PT-CS) compression approach for remote sensing images. First, the DWT is applied to the remote sensing image. Then, a posttransform over a pair of bases is applied to the DWT coefficients. The pair comprises a DCT basis and a Hadamard basis, which are used at high and low bit rates, respectively. The best posttransform is selected by an l_p-norm-based approach. The posttransform is treated as the sparse representation stage of CS, and the posttransform coefficients are resampled by the sensing measurement matrix. Experimental results on on-board CCD camera images show that the proposed approach significantly outperforms the CCSDS-IDC-based coder, and its performance is comparable to that of JPEG2000 at low bit rates without the excessive implementation complexity of JPEG2000. PMID:25114977
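
    Only the first (DWT) stage of the pipeline is easy to reproduce generically. The sketch below decomposes an image with PyWavelets and keeps the largest-magnitude coefficients, standing in for the later coding stage; the wavelet choice and keep ratio are illustrative, and the posttransform and compressive-sensing stages of PT-CS are not reproduced.

      import numpy as np
      import pywt

      def dwt_threshold(image, levels=3, keep_ratio=0.05):
          # Multi-level 2-D wavelet decomposition.
          coeffs = pywt.wavedec2(image, 'bior4.4', level=levels)
          arr, slices = pywt.coeffs_to_array(coeffs)
          # Zero all but the largest keep_ratio fraction of coefficients.
          cutoff = np.quantile(np.abs(arr), 1.0 - keep_ratio)
          arr[np.abs(arr) < cutoff] = 0.0
          # Reconstruct to inspect the quality retained by the kept coefficients.
          coeffs = pywt.array_to_coeffs(arr, slices, output_format='wavedec2')
          return pywt.waverec2(coeffs, 'bior4.4')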

  10. Quantifying photometric observing conditions on Paranal using an IR camera

    CERN Document Server

    Kerber, Florian; Hanuschik, Reinhard

    2014-01-01

    A Low Humidity and Temperature Profiling (LHATPRO) microwave radiometer, manufactured by Radiometer Physics GmbH (RPG), is used to monitor sky conditions over ESO's Paranal observatory in support of VLT science operations. In addition to measuring precipitable water vapour (PWV) the instrument also contains an IR camera measuring sky brightness temperature at 10.5 μm. Due to its extended operating range down to -100 °C it is capable of detecting very cold and very thin, even sub-visual, cirrus clouds. We present a set of instrument flux calibration values as compared with a detrended fluctuation analysis (DFA) of the IR camera zenith-looking sky brightness data measured above Paranal taken over the past two years. We show that it is possible to quantify photometric observing conditions and that the method is highly sensitive to the presence of even very thin clouds but robust against variations of sky brightness caused by effects other than clouds such as variations of precipitable water vapour. Henc...

  11. Algorithms for 3D shape scanning with a depth camera.

    Science.gov (United States)

    Cui, Yan; Schuon, Sebastian; Thrun, Sebastian; Stricker, Didier; Theobalt, Christian

    2013-05-01

    We describe a method for 3D object scanning by aligning depth scans that were taken from around an object with a Time-of-Flight (ToF) camera. These ToF cameras can measure depth scans at video rate. Owing to their comparatively simple technology, they have the potential for economical production in large volumes. Our easy-to-use, cost-effective scanning solution, which is based on such a sensor, could make 3D scanning technology more accessible to everyday users. The algorithmic challenge we face is that the sensor's level of random noise is substantial and there is a nontrivial systematic bias. In this paper, we show the surprising result that 3D scans of reasonable quality can also be obtained with a sensor of such low data quality. Established filtering and scan alignment techniques from the literature fail to achieve this goal. In contrast, our algorithm is based on a new combination of a 3D superresolution method with a probabilistic scan alignment approach that explicitly takes into account the sensor's noise characteristics.

  12. An Early Fire Detection Algorithm Using IP Cameras

    Directory of Open Access Journals (Sweden)

    Hector Perez-Meana

    2012-05-01

    Full Text Available The presence of smoke is the first symptom of fire; therefore, to achieve early fire detection, accurate and quick estimation of the presence of smoke is very important. In this paper we propose an algorithm to detect the presence of smoke using video sequences captured by Internet Protocol (IP) cameras, in which important features of smoke, such as color, motion and growth properties, are employed. For efficient smoke detection on the IP camera platform, a detection algorithm must operate directly in the Discrete Cosine Transform (DCT) domain to reduce computational cost, avoiding the complete decoding process required by algorithms that operate in the spatial domain. In the proposed algorithm, the DCT inter-transformation technique is used to increase detection accuracy without an inverse DCT operation. In the proposed scheme, the candidate smoke regions are first estimated using the motion and color properties of smoke; next, noise is reduced using morphological operations. Finally, the growth properties of the candidate smoke regions are further analyzed over time using the connected component labeling technique. Evaluation results show that a feasible smoke detection method with false negative and false positive error rates of approximately 4% and 2%, respectively, is obtained.

  13. Derivation of Johnson-Cousins Magnitudes from DSLR Camera Observations

    Science.gov (United States)

    Park, Woojin; Pak, Soojong; Shim, Hyunjin; Le, Huynh Anh N.; Im, Myungshin; Chang, Seunghyuk; Yu, Joonkyu

    2016-01-01

    The RGB Bayer filter system consists of a mosaic of R, G, and B filters on the grid of the photo sensors with which typical commercial DSLR (Digital Single Lens Reflex) cameras and CCD cameras are equipped. A great deal of unique astronomical data obtained using RGB Bayer filter systems is available, including data on transient objects, e.g. supernovae, variable stars, and solar system bodies. The use of such data in scientific research requires reliable photometric transformation methods between the systems. In this work, we develop a series of equations to convert the observed magnitudes in the RGB Bayer filter system (R_B, G_B, and B_B) into the Johnson-Cousins BVR filter system (B_J, V_J, and R_C). The new transformation equations derive the calculated magnitudes in the Johnson-Cousins filters (B_Jcal, V_Jcal, and R_Ccal) as functions of RGB magnitudes and colors. The mean differences between the transformed and original magnitudes, i.e. the residuals, are (B_J - B_Jcal) = 0.064 mag, (V_J - V_Jcal) = 0.041 mag, and (R_C - R_Ccal) = 0.039 mag. The Johnson-Cousins magnitudes calculated from the transformation equations show a good linear correlation with the observed Johnson-Cousins magnitudes.
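
    The equations themselves are not reproduced in this record, but their general form — a Johnson-Cousins magnitude as a linear function of a Bayer magnitude and a Bayer color — can be fitted by least squares. In the sketch below the choice of terms and all coefficient values are hypothetical, used only to show the fitting machinery.

      import numpy as np

      def fit_transformation(g_b, color_b, v_j):
          # Fit V_Jcal = a*G_B + b*color + c by linear least squares.
          A = np.column_stack([g_b, color_b, np.ones_like(g_b)])
          coeffs, *_ = np.linalg.lstsq(A, v_j, rcond=None)
          return coeffs  # (a, b, c)

      # Synthetic check: recover known coefficients from noisy magnitudes.
      rng = np.random.default_rng(0)
      g_b = rng.uniform(10, 16, 50)
      color_b = rng.uniform(-0.2, 1.2, 50)
      v_j = 0.98 * g_b + 0.12 * color_b + 0.05 + rng.normal(0, 0.02, 50)
      a, b, c = fit_transformation(g_b, color_b, v_j)
      residuals = v_j - (a * g_b + b * color_b + c)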

  14. Quantifying photometric observing conditions on Paranal using an IR camera

    Science.gov (United States)

    Kerber, Florian; Querel, Richard R.; Hanuschik, Reinhard

    2014-08-01

    A Low Humidity and Temperature Profiling (LHATPRO) microwave radiometer, manufactured by Radiometer Physics GmbH (RPG), is used to monitor sky conditions over ESO's Paranal observatory in support of VLT science operations. In addition to measuring precipitable water vapour (PWV) the instrument also contains an IR camera measuring sky brightness temperature at 10.5 μm. Due to its extended operating range down to -100 °C it is capable of detecting very cold and very thin, even sub-visual, cirrus clouds. We present a set of instrument flux calibration values as compared with a detrended fluctuation analysis (DFA) of the IR camera zenith-looking sky brightness data measured above Paranal taken over the past two years. We show that it is possible to quantify photometric observing conditions and that the method is highly sensitive to the presence of even very thin clouds but robust against variations of sky brightness caused by effects other than clouds such as variations of precipitable water vapour. Hence it can be used to determine photometric conditions for science operations. About 60 % of nights are free of clouds on Paranal. More work will be required to classify the clouds using this technique. For the future this approach might become part of VLT science operations for evaluating nightly sky conditions.

  15. Court Reconstruction for Camera Calibration in Broadcast Basketball Videos.

    Science.gov (United States)

    Wen, Pei-Chih; Cheng, Wei-Chih; Wang, Yu-Shuen; Chu, Hung-Kuo; Tang, Nick C; Liao, Hong-Yuan Mark

    2016-05-01

    We introduce a technique of calibrating camera motions in basketball videos. Our method particularly transforms player positions to standard basketball court coordinates and enables applications such as tactical analysis and semantic basketball video retrieval. To achieve a robust calibration, we reconstruct the panoramic basketball court from a video, followed by warping the panoramic court to a standard one. As opposed to previous approaches, which individually detect the court lines and corners of each video frame, our technique considers all video frames simultaneously to achieve calibration; hence, it is robust to illumination changes and player occlusions. To demonstrate the feasibility of our technique, we present a stroke-based system that allows users to retrieve basketball videos. Our system tracks player trajectories from broadcast basketball videos. It then rectifies the trajectories to a standard basketball court by using our camera calibration method. Consequently, users can apply stroke queries to indicate how the players move in gameplay during retrieval. The main advantage of this interface is an explicit query of basketball videos so that unwanted outcomes can be prevented. We show the results in Figs. 1, 7, 9, 10 and our accompanying video to exhibit the feasibility of our technique.
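
    The paper's calibration is built from a panoramic court reconstruction, which is not reproduced here; the sketch below shows only the final rectification step common to such systems — mapping tracked image positions to standard court coordinates with a homography. The landmark correspondences and the 28 × 15 m court dimensions are illustrative assumptions.

      import numpy as np
      import cv2

      # Image pixels of four known court landmarks and their positions on a
      # standard 28 x 15 m court (values made up for illustration).
      img_pts = np.array([[102, 410], [1180, 402], [980, 620], [260, 628]],
                         dtype=np.float32)
      court_pts = np.array([[0, 0], [28, 0], [28, 15], [0, 15]],
                           dtype=np.float32)
      H, _ = cv2.findHomography(img_pts, court_pts)

      def to_court(xy):
          # Rectify one tracked player position from pixels to court metres.
          p = np.array([[xy]], dtype=np.float32)       # shape (1, 1, 2)
          return cv2.perspectiveTransform(p, H)[0, 0]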

  16. Analysis of Camera Parameters Value in Various Object Distances Calibration

    International Nuclear Information System (INIS)

    In photogrammetric applications, accurate camera parameters are needed for mapping purposes, for example with an Unmanned Aerial Vehicle (UAV) equipped with non-metric cameras. Simple camera calibration is a common laboratory procedure for obtaining camera parameter values. In aerial mapping, the interior camera parameter values from close-range camera calibration are used to correct image error. However, the causes and effects of the calibration steps used to obtain accurate mapping need to be analyzed. Therefore, this research contributes an analysis of camera parameters obtained with a portable calibration frame of 1.5 × 1 m. Object distances of two, three, four, five, and six meters are the research focus. Results are analyzed to determine the changes in image and camera parameter values. The calibration parameters of a camera are found to differ depending on the type of calibration parameter and the object distance.

  17. Risk Aversion in Game Shows

    DEFF Research Database (Denmark)

    Andersen, Steffen; Harrison, Glenn W.; Lau, Morten I.;

    2008-01-01

    We review the use of behavior from television game shows to infer risk attitudes. These shows provide evidence when contestants are making decisions over very large stakes, and in a replicated, structured way. Inferences are generally confounded by the subjective assessment of skill in some games...

  18. Time-of-Flight Microwave Camera

    Science.gov (United States)

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-10-01

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.

  19. Women's Creation of Camera Phone Culture

    Directory of Open Access Journals (Sweden)

    Dong-Hoo Lee

    2005-01-01

    Full Text Available A major aspect of the relationship between women and the media is the extent to which the new media environment is shaping how women live and perceive the world. It is necessary to understand, in a concrete way, how the new media environment is articulated to our gendered culture, how the symbolic or physical forms of the new media condition women’s experiences, and the degree to which a ‘post-gendered re-codification’ can be realized within a new media environment. This paper intends to provide an ethnographic case study of women’s experiences with camera phones, examining the extent to which these experiences recreate or reconstruct women’s subjectivity or identity. By taking a close look at the ways in which women utilize and appropriate the camera phone in their daily lives, it focuses not only on women’s cultural practices in making meanings but also on their possible effect in the deconstruction of gendered techno-culture.

  20. Infrared Camera Analysis of Laser Hardening

    Directory of Open Access Journals (Sweden)

    J. Tesar

    2012-01-01

    Full Text Available The improvement of surface properties such as laser hardening becomes very important in present manufacturing. Resulting laser hardening depth and surface hardness can be affected by changes in optical properties of material surface, that is, by absorptivity that gives the ratio between absorbed energy and incident laser energy. The surface changes on tested sample of steel block were made by engraving laser with different scanning velocity and repetition frequency. During the laser hardening the process was observed by infrared (IR camera system that measures infrared radiation from the heated sample and depicts it in a form of temperature field. The images from the IR camera of the sample are shown, and maximal temperatures of all engraved areas are evaluated and compared. The surface hardness was measured, and the hardening depth was estimated from the measured hardness profile in the sample cross-section. The correlation between reached temperature, surface hardness, and hardening depth is shown. The highest and the lowest temperatures correspond to the lowest/highest hardness and the highest/lowest hardening depth.

  1. Multi-band infrared camera systems

    Science.gov (United States)

    Davis, Tim; Lang, Frank; Sinneger, Joe; Stabile, Paul; Tower, John

    1994-12-01

    The program resulted in an IR camera system that utilizes a unique MOS addressable focal plane array (FPA) with full TV resolution, electronic control capability, and windowing capability. Two systems were delivered, each with two different camera heads: a Stirling-cooled 3-5 micron band head and a liquid nitrogen-cooled, filter-wheel-based, 1.5-5 micron band head. Signal processing features include averaging up to 16 frames, flexible compensation modes, gain and offset control, and real-time dither. The primary digital interface is a Hewlett-Packard standard GPIB (IEEE-488) port that is used to upload and download data. The FPA employs an X-Y addressed PtSi photodiode array, CMOS horizontal and vertical scan registers, horizontal signal line (HSL) buffers followed by a high-gain preamplifier, and a depletion NMOS output amplifier. The 640 x 480 MOS X-Y addressed FPA has a high degree of flexibility in operational modes. By changing the digital data pattern applied to the vertical scan register, the FPA can be operated in either an interlaced or noninterlaced format. The thermal sensitivity performance of the second system's Stirling-cooled head was the best of the systems produced.

  2. The design of aerial camera focusing mechanism

    Science.gov (United States)

    Hu, Changchang; Yang, Hongtao; Niu, Haijun

    2015-10-01

    To ensure the imaging resolution of an aerial camera, to compensate for defocusing caused by changes in atmospheric temperature, pressure, oblique photographing distance and other environmental factors [1,2], and to meet the camera's overall design requirements for lower mass and smaller size, a linear focusing mechanism is designed. The focal-plane component is connected to the focusing drive mechanism through the target surface support. Using precision ball screws, the focusing mechanism transforms the rotary motion of the motor into linear motion of the focal plane assembly. Movement is constrained by linear guides, and a magnetic encoder detects the displacement response, with closed-loop control providing accurate focusing. This paper presents the design scheme of the focusing mechanism and analyzes its error sources. The design has the advantages of low friction and a simple transmission chain, and it reduces the transmission error effectively. The target surface is also analyzed with finite element analysis and lightweight design, showing that the focusing mechanism achieves a precision better than 3 μm over a focusing range of ±2 mm.

  3. FIDO Rover Retracted Arm and Camera

    Science.gov (United States)

    1999-01-01

    The Field Integrated Design and Operations (FIDO) rover extends the large mast that carries its panoramic camera. The FIDO is being used in ongoing NASA field tests to simulate driving conditions on Mars. FIDO is controlled from the mission control room at JPL's Planetary Robotics Laboratory in Pasadena. FIDO uses a robot arm to manipulate science instruments and it has a new mini-corer or drill to extract and cache rock samples. Several camera systems onboard allow the rover to collect science and navigation images by remote-control. The rover is about the size of a coffee table and weighs as much as a St. Bernard, about 70 kilograms (150 pounds). It is approximately 85 centimeters (about 33 inches) wide, 105 centimeters (41 inches) long, and 55 centimeters (22 inches) high. The rover moves up to 300 meters an hour (less than a mile per hour) over smooth terrain, using its onboard stereo vision systems to detect and avoid obstacles as it travels 'on-the-fly.' During these tests, FIDO is powered by both solar panels that cover the top of the rover and by replaceable, rechargeable batteries.

  4. Gamma camera based FDG PET in oncology

    International Nuclear Information System (INIS)

    Positron Emission Tomography (PET) was introduced as a research tool in the 1970s, and it took about 20 years before PET became a useful clinical imaging modality. In the USA, insurance coverage for PET procedures in the 1990s was, I believe, the turning point for this progress. Initially PET was used in neurology, but recently more than 80% of PET procedures have been oncological applications. I firmly believe that in the 21st century one cannot manage cancer patients properly without PET, and that PET is a very important medical imaging modality in the basic and clinical sciences. PET is grouped into two categories: conventional (c) and gamma camera based (CB) PET. CBPET, which utilizes dual-head gamma cameras and commercially available FDG, is more readily available to many medical centers at low cost to patients. In fact, there are more CBPET than cPET systems in operation in the USA. CBPET is inferior to cPET in its performance, but clinical studies in oncology are feasible without expensive infrastructure such as staffing, rooms and equipment. At Ajou University Hospital, CBPET was installed in late 1997, for the first time in Korea as well as in Asia, and the system has been used successfully and effectively in oncological applications. Ours was the fourth PET operation in Korea, and I believe this may have been instrumental in getting other institutions interested in clinical PET. The following is a brief description of our clinical experience of FDG CBPET in oncology.

  5. The Mars NetLander panoramic camera

    Science.gov (United States)

    Jaumann, Ralf; Langevin, Yves; Hauber, Ernst; Oberst, Jürgen; Grothues, Hans-Georg; Hoffmann, Harald; Soufflot, Alain; Bertaux, Jean-Loup; Dimarellis, Emmanuel; Mottola, Stefano; Bibring, Jean-Pierre; Neukum, Gerhard; Albertz, Jörg; Masson, Philippe; Pinet, Patrick; Lamy, Philippe; Formisano, Vittorio

    2000-10-01

    The panoramic camera (PanCam) imaging experiment is designed to obtain high-resolution multispectral stereoscopic panoramic images from each of the four Mars NetLander 2005 sites. The main scientific objectives to be addressed by the PanCam experiment are (1) to locate the landing sites and support the NetLander network sciences, (2) to geologically investigate and map the landing sites, and (3) to study the properties of the atmosphere and of variable phenomena. To place in situ measurements at a landing site into a proper regional context, it is necessary to determine the lander orientation on the ground and to locate the position of the landing site exactly with respect to the available cartographic database. This is not possible by tracking alone due to the lack of on-ground orientation and the so-called map-tie problem. Images as provided by the PanCam allow accurate tilt and north directions to be determined for each lander, and the lander locations to be identified based on landmarks that can also be recognized in appropriate orbiter imagery. With this information, it will further be possible to improve the Mars-wide geodetic control point network and the resulting geometric precision of global map products. The major geoscientific objectives of the PanCam lander images are the recognition of surface features like ripples, ridges and troughs, and the identification and characterization of different rock and surface units based on their morphology, distribution, spectral characteristics, and physical properties. The analysis of the PanCam imagery will finally result in the generation of precise map products for each of the landing sites. So far, comparative geologic studies of the Martian surface are restricted to the temporally separated Mars Pathfinder and the two Viking Lander missions. Further lander missions are in preparation (Beagle-2, Mars Surveyor 03). NetLander provides the unique opportunity to nearly double the number of accessible landing site data by providing

  6. Mars Cameras Make Panoramic Photography a Snap

    Science.gov (United States)

    2008-01-01

    If you wish to explore a Martian landscape without leaving your armchair, a few simple clicks around the NASA Web site will lead you to panoramic photographs taken from the Mars Exploration Rovers, Spirit and Opportunity. Many of the technologies that enable this spectacular Mars photography have also inspired advancements in photography here on Earth, including the panoramic camera (Pancam) and its housing assembly, designed by the Jet Propulsion Laboratory and Cornell University for the Mars missions. Mounted atop each rover, the Pancam mast assembly (PMA) can tilt a full 180 degrees and swivel 360 degrees, allowing for a complete, highly detailed view of the Martian landscape. The rover Pancams take small, 1 megapixel (1 million pixel) digital photographs, which are stitched together into large panoramas that sometimes measure 4 by 24 megapixels. The Pancam software performs some image correction and stitching after the photographs are transmitted back to Earth. Different lens filters and a spectrometer also assist scientists in their analyses of infrared radiation from the objects in the photographs. These photographs from Mars spurred developers to begin thinking in terms of larger and higher quality images: super-sized digital pictures, or gigapixels, which are images composed of 1 billion or more pixels. Gigapixel images are more than 200 times the size captured by today's standard 4 megapixel digital camera. Although originally created for the Mars missions, the detail provided by these large photographs allows for many purposes, not all of which are limited to extraterrestrial photography.

  7. The NectarCAM camera project

    CERN Document Server

    Glicenstein, J-F; Barrio, J-A; Blanch, O; Boix, J; Bolmont, J; Boutonnet, C; Cazaux, S; Chabanne, E; Champion, C; Chateau, F; Colonges, S; Corona, P; Couturier, S; Courty, B; Delagnes, E; Delgado, C; Ernenwein, J-P; Fegan, S; Ferreira, O; Fesquet, M; Fontaine, G; Fouque, N; Henault, F; Gascón, D; Herranz, D; Hermel, R; Hoffmann, D; Houles, J; Karkar, S; Khelifi, B; Knödlseder, J; Martinez, G; Lacombe, K; Lamanna, G; LeFlour, T; Lopez-Coto, R; Louis, F; Mathieu, A; Moulin, E; Nayman, P; Nunio, F; Olive, J-F; Panazol, J-L; Petrucci, P-O; Punch, M; Prast, J; Ramon, P; Riallot, M; Ribó, M; Rosier-Lees, S; Sanuy, A; Siero, J; Tavernet, J-P; Tejedor, L A; Toussenel, F; Vasileiadis, G; Voisin, V; Waegebert, V; Zurbach, C

    2013-01-01

    In the framework of the next generation of Cherenkov telescopes, the Cherenkov Telescope Array (CTA), NectarCAM is a camera designed for the medium size telescopes covering the central energy range of 100 GeV to 30 TeV. NectarCAM will be finely pixelated (~ 1800 pixels for a 8 degree field of view, FoV) in order to image atmospheric Cherenkov showers by measuring the charge deposited within a few nanoseconds time-window. It will have additional features like the capacity to record the full waveform with GHz sampling for every pixel and to measure event times with nanosecond accuracy. An array of a few tens of medium size telescopes, equipped with NectarCAMs, will achieve up to a factor of ten improvement in sensitivity over existing instruments in the energy range of 100 GeV to 10 TeV. The camera is made of roughly 250 independent read-out modules, each composed of seven photo-multipliers, with their associated high voltage base and control, a read-out board and a multi-service backplane board. The read-out b...

  8. Focal Plane Metrology for the LSST Camera

    Energy Technology Data Exchange (ETDEWEB)

    A Rasmussen, Andrew P.; Hale, Layton; Kim, Peter; Lee, Eric; Perl, Martin; Schindler, Rafe; Takacs, Peter; Thurston, Timothy; /SLAC

    2007-01-10

    Meeting the science goals for the Large Synoptic Survey Telescope (LSST) translates into a demanding set of imaging performance requirements for the optical system over a wide (3.5°) field of view. In turn, meeting those imaging requirements necessitates maintaining precise control of the focal plane surface (10 μm P-V) over the entire field of view (640 mm diameter) at the operating temperature (T ≈ -100 °C) and over the operational elevation angle range. We briefly describe the hierarchical design approach for the LSST Camera focal plane and the baseline design for assembling the flat focal plane at room temperature. Preliminary results of gravity load and thermal distortion calculations are provided, and early metrological verification of candidate materials under cold thermal conditions is presented. A detailed, generalized method for stitching together sparse metrology data originating from differential, non-contact metrological data acquisition spanning multiple (non-continuous) sensor surfaces making up the focal plane, is described and demonstrated. Finally, we describe some in situ alignment verification alternatives, some of which may be integrated into the camera's focal plane.

  9. Lights, Camera, AG-Tion: Promoting Agricultural and Environmental Education on Camera

    Science.gov (United States)

    Fuhrman, Nicholas E.

    2016-01-01

    Viewing of online videos and television segments has become a popular and efficient way for Extension audiences to acquire information. This article describes a unique approach to teaching on camera that may help Extension educators communicate their messages with comfort and personality. The S.A.L.A.D. approach emphasizes using relevant teaching…

  10. An assessment of the effectiveness of high definition cameras as remote monitoring tools for dolphin ecology studies.

    Directory of Open Access Journals (Sweden)

    Estênio Guimarães Paiva

    Full Text Available Research involving marine mammals often requires costly field programs. This paper assessed whether the benefits of using cameras outweigh the implications of having personnel perform marine mammal detection in the field. The efficacy of video and still cameras in detecting Indo-Pacific bottlenose dolphins (Tursiops aduncus) in Fremantle Harbour (Western Australia) was evaluated, with consideration of how environmental conditions affect detectability. The cameras were set on a tower in the Fremantle Port channel, and videos were perused at 1.75 times normal speed. Images from the cameras were used to estimate the positions of dolphins at the water's surface. Dolphin detections ranged from 5.6 m to 463.3 m for the video camera, and from 10.8 m to 347.8 m for the still camera. The detection range proved satisfactory when compared to the distances at which dolphins would be detected by field observers. The relative effect of environmental conditions on detectability was considered by fitting a Generalised Estimating Equations (GEE) model with Beaufort, level of glare and their interactions as predictors and a temporal autocorrelation structure. The best-fit model indicated that level of glare had an effect, with more intense periods of glare corresponding to lower occurrences of observed dolphins. However, this effect was not large (-0.264) and the parameter estimate was associated with a large standard error (0.113). The limited field of view was the main constraint, in that cameras can only be applied to detections of animals observed rather than counts of individuals. However, the use of cameras was effective for long-term monitoring of dolphin occurrence, outweighing the costs and reducing the health and safety risks to field personnel. This study showed that cameras can be effectively implemented onshore for research such as studying changes in habitat use in response to development and construction activities.
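
    A model of this form can be reproduced with the GEE implementation in statsmodels; the sketch below assumes a hypothetical data file with binary detections grouped into temporally contiguous blocks and ordered by a scan index. The file, column names, and grouping scheme are illustrative, not from the study.

      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      # Hypothetical observations: detected (0/1), beaufort, glare, a 'block'
      # id grouping contiguous scans, and 'scan' giving the temporal order.
      df = pd.read_csv("dolphin_detections.csv")

      model = smf.gee("detected ~ beaufort * glare", groups="block", data=df,
                      time="scan",
                      family=sm.families.Binomial(),
                      cov_struct=sm.cov_struct.Autoregressive())
      result = model.fit()
      print(result.summary())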

  11. Online gamma-camera imaging of 103Pd seeds (OGIPS) for permanent breast seed implantation

    Science.gov (United States)

    Ravi, Ananth; Caldwell, Curtis B.; Keller, Brian M.; Reznik, Alla; Pignol, Jean-Philippe

    2007-09-01

    Permanent brachytherapy seed implantation is being investigated as a mode of accelerated partial breast irradiation for early stage breast cancer patients. Currently, the seeds are poorly visualized during the procedure making it difficult to perform a real-time correction of the implantation if required. The objective was to determine if a customized gamma-camera can accurately localize the seeds during implantation. Monte Carlo simulations of a CZT based gamma-camera were used to assess whether images of suitable quality could be derived by detecting the 21 keV photons emitted from 74 MBq 103Pd brachytherapy seeds. A hexagonal parallel hole collimator with a hole length of 38 mm, hole diameter of 1.2 mm and 0.2 mm septa, was modeled. The design of the gamma-camera was evaluated on a realistic model of the breast and three layers of the seed distribution (55 seeds) based on a pre-implantation CT treatment plan. The Monte Carlo simulations showed that the gamma-camera was able to localize the seeds with a maximum error of 2.0 mm, using only two views and 20 s of imaging. A gamma-camera can potentially be used as an intra-procedural image guidance system for quality assurance for permanent breast seed implantation.

  12. Online gamma-camera imaging of ¹⁰³Pd seeds (OGIPS) for permanent breast seed implantation

    Energy Technology Data Exchange (ETDEWEB)

    Ravi, Ananth [Department of Medical Biophysics, University of Toronto (Canada); Caldwell, Curtis B [Department of Medical Biophysics, University of Toronto (Canada); Keller, Brian M [Medical Physics, Sunnybrook Health Sciences Centre (Canada); Reznik, Alla [Department of Medical Biophysics, University of Toronto (Canada); Pignol, Jean-Philippe [Department of Medical Biophysics, University of Toronto (Canada)

    2007-09-21

    Permanent brachytherapy seed implantation is being investigated as a mode of accelerated partial breast irradiation for early stage breast cancer patients. Currently, the seeds are poorly visualized during the procedure making it difficult to perform a real-time correction of the implantation if required. The objective was to determine if a customized gamma-camera can accurately localize the seeds during implantation. Monte Carlo simulations of a CZT based gamma-camera were used to assess whether images of suitable quality could be derived by detecting the 21 keV photons emitted from 74 MBq ¹⁰³Pd brachytherapy seeds. A hexagonal parallel hole collimator with a hole length of 38 mm, hole diameter of 1.2 mm and 0.2 mm septa, was modeled. The design of the gamma-camera was evaluated on a realistic model of the breast and three layers of the seed distribution (55 seeds) based on a pre-implantation CT treatment plan. The Monte Carlo simulations showed that the gamma-camera was able to localize the seeds with a maximum error of 2.0 mm, using only two views and 20 s of imaging. A gamma-camera can potentially be used as an intra-procedural image guidance system for quality assurance for permanent breast seed implantation.

  13. Imaging of breast cancer with mid- and long-wave infrared camera.

    Science.gov (United States)

    Joro, R; Lääperi, A-L; Dastidar, P; Soimakallio, S; Kuukasjärvi, T; Toivonen, T; Saaristo, R; Järvenpää, R

    2008-01-01

    In this novel study, the breasts of 15 women with palpable breast cancer were preoperatively imaged with three technically different infrared (IR) cameras - microbolometer (MB), quantum well (QWIP) and photovoltaic (PV) - to compare their ability to differentiate breast cancer from normal tissue. The IR images were processed, data for frequency analysis were collected from dynamic IR images by pixel-based analysis, and selectively windowed regional analysis was carried out on each image, exploiting the vasomotor and cardiogenic frequency differences between cancerous and normal tissue that arise from the angiogenesis and nitric oxide production of cancer tissue. Our results show that the GaAs QWIP camera and the InSb PV camera demonstrate the frequency difference between normal and cancerous breast tissue, the PV camera more clearly. With selected image processing operations, more detailed frequency analyses could be applied to the suspicious area. The MB camera was not suitable for tissue differentiation, as the difference between noise and effective signal was unsatisfactory. PMID:18432466

  14. Development of the radial neutron camera system for the HL-2A tokamak.

    Science.gov (United States)

    Zhang, Y P; Yang, J W; Liu, Yi; Fan, T S; Luo, X B; Yuan, G L; Zhang, P F; Xie, X F; Song, X Y; Chen, W; Ji, X Q; Li, X; Du, T F; Ge, L J; Fu, B Z; Isobe, M; Song, X M; Shi, Z B; Yang, Q W; Duan, X R

    2016-06-01

    A new radial neutron camera system has been developed and operated recently in the HL-2A tokamak to measure the spatially and temporally resolved 2.5 MeV D-D fusion neutrons, enhancing the understanding of energetic-ion physics. The camera mainly consists of a multichannel collimator, liquid-scintillation detectors, shielding systems, and a data acquisition system. Measurements of the D-D fusion neutrons using the camera were successfully performed during the 2015 HL-2A experiment campaign. The measurements show that the distribution of the fusion neutrons in the HL-2A plasma has a peaked profile, suggesting that the neutral beam injection beam ions in the plasma have a peaked distribution. It also suggests that the neutrons are primarily produced by beam-target reactions in the plasma core region. The measurement results from the neutron camera are consistent with the results of both a standard (235)U fission chamber and NUBEAM neutron calculations. In this paper, the new radial neutron camera system on HL-2A and the first experimental results are described. PMID:27370450

  15. Development of the radial neutron camera system for the HL-2A tokamak

    Science.gov (United States)

    Zhang, Y. P.; Yang, J. W.; Liu, Yi; Fan, T. S.; Luo, X. B.; Yuan, G. L.; Zhang, P. F.; Xie, X. F.; Song, X. Y.; Chen, W.; Ji, X. Q.; Li, X.; Du, T. F.; Ge, L. J.; Fu, B. Z.; Isobe, M.; Song, X. M.; Shi, Z. B.; Yang, Q. W.; Duan, X. R.

    2016-06-01

    A new radial neutron camera system has been developed and operated recently in the HL-2A tokamak to measure the spatially and temporally resolved 2.5 MeV D-D fusion neutrons, enhancing the understanding of energetic-ion physics. The camera mainly consists of a multichannel collimator, liquid-scintillation detectors, shielding systems, and a data acquisition system. Measurements of the D-D fusion neutrons using the camera were successfully performed during the 2015 HL-2A experiment campaign. The measurements show that the distribution of the fusion neutrons in the HL-2A plasma has a peaked profile, suggesting that the neutral beam injection beam ions in the plasma have a peaked distribution. It also suggests that the neutrons are primarily produced by beam-target reactions in the plasma core region. The measurement results from the neutron camera are consistent with the results of both a standard 235U fission chamber and NUBEAM neutron calculations. In this paper, the new radial neutron camera system on HL-2A and the first experimental results are described.

  16. New Sensors for Cultural Heritage Metric Survey: The ToF Cameras

    Directory of Open Access Journals (Sweden)

    Filiberto Chiabrando

    2011-12-01

    Full Text Available ToF cameras are new instruments based on CCD/CMOS sensors which measure distances instead of radiometry. The resulting point clouds show the same properties (both in terms of accuracy and resolution) as the point clouds acquired by means of traditional LiDAR devices. ToF cameras are cheap instruments (less than 10,000 €) based on real-time video distance measurement and can represent an interesting alternative to the more expensive LiDAR instruments. In addition, the limited weight and dimensions of ToF cameras reduce some practical problems such as transportation and on-site management. Most commercial ToF cameras use the phase-shift method to measure distances. Due to the use of only one wavelength, most of them have a limited range of application (usually about 5 or 10 m). After a brief description of the main characteristics of these instruments, this paper explains and comments on the results of the first experimental applications of ToF cameras in Cultural Heritage 3D metric survey. The possibility of acquiring more than 30 frames/s and future developments of these devices in terms of using more than one wavelength to overcome the ambiguity problem allow new interesting applications to be foreseen.

  17. Monitoring of Heart and Breathing Rates Using Dual Cameras on a Smartphone.

    Directory of Open Access Journals (Sweden)

    Yunyoung Nam

    Full Text Available Some smartphones have the capability to process video streams from both the front- and rear-facing cameras simultaneously. This paper proposes a new monitoring method for simultaneous estimation of heart and breathing rates using the dual cameras of a smartphone. The proposed approach estimates heart rates using a rear-facing camera, while at the same time breathing rates are estimated using a non-contact front-facing camera. For heart rate estimation, a simple application protocol is used to analyze the varying color signals of a fingertip placed in contact with the rear camera. The breathing rate is estimated from non-contact video recordings of both chest and abdominal motions. Reference breathing rates were measured by a respiration belt placed around the chest and abdomen of a subject; reference heart rates (HR) were determined using the standard electrocardiogram. An automated selection of either the chest or abdominal video signal was determined by choosing the signal with a greater autocorrelation value. The breathing rate was then determined by selecting the dominant peak in the power spectrum. To evaluate the performance of the proposed methods, data were collected from 11 healthy subjects. The breathing ranges spanned both low and high frequencies (6-60 breaths/min), and the results show that the average median errors from the reflectance imaging on the chest and the abdominal walls based on choosing the maximum spectral peak were 1.43% and 1.62%, respectively. Similarly, HR estimates were also found to be accurate.

  18. Monitoring of Heart and Breathing Rates Using Dual Cameras on a Smartphone.

    Science.gov (United States)

    Nam, Yunyoung; Kong, Youngsun; Reyes, Bersain; Reljin, Natasa; Chon, Ki H

    2016-01-01

    Some smartphones have the capability to process video streams from both the front- and rear-facing cameras simultaneously. This paper proposes a new monitoring method for simultaneous estimation of heart and breathing rates using dual cameras of a smartphone. The proposed approach estimates heart rates using a rear-facing camera, while at the same time breathing rates are estimated using a non-contact front-facing camera. For heart rate estimation, a simple application protocol is used to analyze the varying color signals of a fingertip placed in contact with the rear camera. The breathing rate is estimated from non-contact video recordings from both chest and abdominal motions. Reference breathing rates were measured by a respiration belt placed around the chest and abdomen of a subject; reference heart rates (HR) were determined using the standard electrocardiogram. An automated selection of either the chest or abdominal video signal was determined by choosing the signal with a greater autocorrelation value. The breathing rate was then determined by selecting the dominant peak in the power spectrum. To evaluate the performance of the proposed methods, data were collected from 11 healthy subjects. The breathing ranges spanned both low and high frequencies (6-60 breaths/min), and the results show that the average median errors from the reflectance imaging on the chest and the abdominal walls based on choosing the maximum spectral peak were 1.43% and 1.62%, respectively. Similarly, HR estimates were also found to be accurate.
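
    The selection and peak-picking logic described above can be sketched in a few lines of NumPy. The sampling rate, the fixed autocorrelation lag, and the function names are illustrative assumptions rather than the authors' implementation; the 0.1-1 Hz band matches the stated 6-60 breaths/min range.

      import numpy as np

      def autocorr_strength(x, lag=30):
          # Autocorrelation at a fixed lag (1 s at 30 fps), used as a proxy
          # for which motion signal (chest vs. abdomen) is steadier.
          x = x - x.mean()
          return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

      def rate_from_peak(x, fs, fmin, fmax):
          # Rate (per minute) from the dominant spectral peak in [fmin, fmax] Hz.
          x = x - x.mean()
          f = np.fft.rfftfreq(len(x), 1.0 / fs)
          p = np.abs(np.fft.rfft(x)) ** 2
          band = (f >= fmin) & (f <= fmax)
          return 60.0 * f[band][np.argmax(p[band])]

      def breathing_rate(chest, abdomen, fs=30.0):
          # Choose the signal with greater autocorrelation, then take the
          # dominant peak in the 0.1-1 Hz band (6-60 breaths/min).
          sig = (chest if autocorr_strength(chest) >= autocorr_strength(abdomen)
                 else abdomen)
          return rate_from_peak(sig, fs, 0.1, 1.0)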

  19. Narrow Field-of-View Visual Odometry Based on a Focused Plenoptic Camera

    Science.gov (United States)

    Zeller, N.; Quint, F.; Stilla, U.

    2015-03-01

    In this article we present a new method for visual odometry based on a focused plenoptic camera. This method fuses the depth data gained by a monocular Simultaneous Localization and Mapping (SLAM) algorithm with that received from a focused plenoptic camera. Our algorithm uses the depth data and the totally focused images supplied by the plenoptic camera to run a real-time semi-dense direct SLAM algorithm. Based on this combined approach, the scale ambiguity of a monocular SLAM system can be overcome. Furthermore, the additional light-field information greatly improves the tracking capabilities of the algorithm. Thus, visual odometry becomes possible even for narrow field of view (FOV) cameras. We show that tracking is not the only thing that profits from the additional light-field information: by accumulating the depth information over multiple tracked images, the depth accuracy of the focused plenoptic camera can also be greatly improved. This novel approach reduces the depth error by one order of magnitude compared to that of a single light-field image.

  20. 3D-guided CT reconstruction using time-of-flight camera

    Science.gov (United States)

    Ismail, Mahmoud; Taguchi, Katsuyuki; Xu, Jingyan; Tsui, Benjamin M. W.; Boctor, Emad M.

    2011-03-01

    We propose the use of a time-of-flight (TOF) camera to obtain the patient's body contour in a 3D-guided image reconstruction scheme for CT and C-arm imaging systems with truncated projections. In addition to pixel intensity, a TOF camera provides the 3D coordinates of each point in the captured scene with respect to the camera coordinates. Information from the TOF camera was used to obtain a digitized surface of the patient's body. The digitization points are transformed to X-ray detector coordinates by registering the two coordinate systems. A set of points corresponding to the slice of interest is segmented to form a 2D contour of the body surface. The Radon transform is applied to the contour to generate the 'trust region' for the projection data. The generated 'trust region' is integrated as an input to augment the projection data; it is used to estimate the truncated, unmeasured projections using linear interpolation. Finally, the image is reconstructed using the combination of the estimated and the measured projection data. The proposed method is evaluated using a physical phantom. Projection data for the phantom were obtained using a C-arm system. Significant improvement in the reconstructed image quality near the truncation edges was observed with the proposed method, compared to reconstruction without truncation correction. This work shows that the proposed 3D-guided CT image reconstruction using a TOF camera represents a feasible solution to the projection data truncation problem.
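
    As a rough illustration of the 'trust region' idea, the sketch below applies the Radon transform (via scikit-image) to a synthetic body-contour mask; projection bins with non-zero support mark rays passing through the body, which is where the truncated projections would be estimated. The elliptical contour is a stand-in for the TOF-derived surface, and this reading of the method is an interpretation of the abstract, not the authors' code.

      import numpy as np
      from skimage.transform import radon

      # Synthetic stand-in for the segmented 2D body contour of one slice:
      # a filled ellipse inside a 256 x 256 grid.
      N = 256
      yy, xx = np.mgrid[:N, :N]
      mask = (((xx - 128) / 90.0) ** 2 + ((yy - 128) / 60.0) ** 2) <= 1.0

      # Sinogram of the contour; its non-zero bins form the 'trust region'.
      theta = np.linspace(0.0, 180.0, 180, endpoint=False)
      trust = radon(mask.astype(float), theta=theta)
      support = trust > 0   # rays that intersect the body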

  1. Generic camera model and its calibration for computational integral imaging and 3D reconstruction.

    Science.gov (United States)

    Li, Weiming; Li, Youfu

    2011-03-01

    Integral imaging (II) is an important 3D imaging technology. To reconstruct 3D information of the viewed objects, modeling and calibrating the optical pickup process of II are necessary. This work focuses on the modeling and calibration of an II system consisting of a lenslet array, an imaging lens, and a charge-coupled device camera. Most existing work on such systems assumes a pinhole array model (PAM). In this work, we explore a generic camera model that accommodates more generality. This model is an empirical model based on measurements, and we constructed a setup for its calibration. Experimental results show a significant difference between the generic camera model and the PAM. Images of planar patterns and 3D objects were computationally reconstructed with the generic camera model. Compared with the images reconstructed using the PAM, the images present higher fidelity and preserve more high spatial frequency components. To the best of our knowledge, this is the first attempt in applying a generic camera model to an II system.

  2. Fusing inertial sensor data in an extended Kalman filter for 3D camera tracking.

    Science.gov (United States)

    Erdem, Arif Tanju; Ercan, Ali Özer

    2015-02-01

    In a setup where camera measurements are used to estimate 3D egomotion in an extended Kalman filter (EKF) framework, it is well-known that inertial sensors (i.e., accelerometers and gyroscopes) are especially useful when the camera undergoes fast motion. Inertial sensor data can be fused at the EKF with the camera measurements in either the correction stage (as measurement inputs) or the prediction stage (as control inputs). In general, only one type of inertial sensor is employed in the EKF in the literature, or when both are employed they are both fused in the same stage. In this paper, we provide an extensive performance comparison of every possible combination of fusing accelerometer and gyroscope data as control or measurement inputs using the same data set collected at different motion speeds. In particular, we compare the performances of different approaches based on 3D pose errors, in addition to camera reprojection errors commonly found in the literature, which provides further insight into the strengths and weaknesses of different approaches. We show using both simulated and real data that it is always better to fuse both sensors in the measurement stage and that in particular, accelerometer helps more with the 3D position tracking accuracy, whereas gyroscope helps more with the 3D orientation tracking accuracy. We also propose a simulated data generation method, which is beneficial for the design and validation of tracking algorithms involving both camera and inertial measurement unit measurements in general.
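
    A minimal EKF skeleton makes the two fusion options concrete: inertial readings can enter through the control input u of the prediction step or be stacked into the measurement vector z of the correction step. The motion model f, measurement model h, and Jacobians F, H are application-specific placeholders, not the paper's models.

      import numpy as np

      class EKF:
          # Minimal EKF sketch; Q and R are the process and measurement
          # noise covariances, assumed constant here for simplicity.
          def __init__(self, x0, P0, Q, R):
              self.x, self.P, self.Q, self.R = x0, P0, Q, R

          def predict(self, f, F, u=None):
              # Control-input fusion: u may carry accelerometer/gyro readings.
              self.x = f(self.x, u)
              self.P = F @ self.P @ F.T + self.Q

          def correct(self, z, h, H):
              # Measurement fusion: z may stack camera AND inertial readings.
              S = H @ self.P @ H.T + self.R
              K = self.P @ H.T @ np.linalg.inv(S)
              self.x = self.x + K @ (z - h(self.x))
              self.P = (np.eye(len(self.x)) - K @ H) @ self.P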

  3. Bayesian inference in camera trapping studies for a class of spatial capture-recapture models

    Science.gov (United States)

    Royle, J. Andrew; Karanth, K. Ullas; Gopalaswamy, Arjun M.; Kumar, N. Samba

    2009-01-01

    We develop a class of models for inference about abundance or density using spatial capture-recapture data from studies based on camera trapping and related methods. The model is a hierarchical model composed of two components: a point process model describing the distribution of individuals in space (or their home range centers) and a model describing the observation of individuals in traps. We suppose that trap- and individual-specific capture probabilities are a function of distance between individual home range centers and trap locations. We show that the models can be regarded as generalized linear mixed models, where the individual home range centers are random effects. We adopt a Bayesian framework for inference under these models using a formulation based on data augmentation. We apply the models to camera trapping data on tigers from the Nagarahole Reserve, India, collected over 48 nights in 2006. For this study, 120 camera locations were used, but cameras were only operational at 30 locations during any given sample occasion. Movement of traps is common in many camera-trapping studies and represents an important feature of the observation model that we address explicitly in our application.

  4. Three-dimensional temperature field measurement of flame using a single light field camera.

    Science.gov (United States)

    Sun, Jun; Xu, Chuanlong; Zhang, Biao; Hossain, Md Moinul; Wang, Shimin; Qi, Hong; Tan, Heping

    2016-01-25

    Compared with conventional camera, the light field camera takes the advantage of being capable of recording the direction and intensity information of each ray projected onto the CCD (charge couple device) sensor simultaneously. In this paper, a novel method is proposed for reconstructing three-dimensional (3-D) temperature field of a flame based on a single light field camera. A radiative imaging of a single light field camera is also modeled for the flame. In this model, the principal ray represents the beam projected onto the pixel of the CCD sensor. The radiation direction of the ray from the flame outside the camera is obtained according to thin lens equation based on geometrical optics. The intensities of the principal rays recorded by the pixels on the CCD sensor are mathematically modeled based on radiative transfer equation. The temperature distribution of the flame is then reconstructed by solving the mathematical model through the use of least square QR-factorization algorithm (LSQR). The numerical simulations and experiments are carried out to investigate the validity of the proposed method. The results presented in this study show that the proposed method is capable of reconstructing the 3-D temperature field of a flame.
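
    The final inversion step can be illustrated with SciPy's LSQR solver. Here the ray-integration matrix A is a random sparse stand-in for the radiative imaging model, which the paper derives from thin-lens geometry and the radiative transfer equation; all sizes and tolerances are illustrative.

      import numpy as np
      from scipy.sparse import random as sparse_random
      from scipy.sparse.linalg import lsqr

      # Stand-in system: A maps the discretised flame emission field x to the
      # intensities b recorded by the light-field pixels.
      n_rays, n_voxels = 5000, 1200
      A = sparse_random(n_rays, n_voxels, density=0.01, random_state=0)
      x_true = np.random.default_rng(0).uniform(0.0, 1.0, n_voxels)
      b = A @ x_true

      # Least-squares solution of A x = b; lsqr returns a tuple whose first
      # entry is the solution vector.
      x_est = lsqr(A, b, atol=1e-10, btol=1e-10, iter_lim=5000)[0]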

  5. Development of the radial neutron camera system for the HL-2A tokamak.

    Science.gov (United States)

    Zhang, Y P; Yang, J W; Liu, Yi; Fan, T S; Luo, X B; Yuan, G L; Zhang, P F; Xie, X F; Song, X Y; Chen, W; Ji, X Q; Li, X; Du, T F; Ge, L J; Fu, B Z; Isobe, M; Song, X M; Shi, Z B; Yang, Q W; Duan, X R

    2016-06-01

    A new radial neutron camera system has been developed and operated recently on the HL-2A tokamak to measure spatially and temporally resolved 2.5 MeV D-D fusion neutrons, enhancing the understanding of energetic-ion physics. The camera mainly consists of a multichannel collimator, liquid-scintillation detectors, shielding systems, and a data acquisition system. Measurements of the D-D fusion neutrons using the camera were successfully performed during the 2015 HL-2A experimental campaign. The measurements show that the distribution of the fusion neutrons in the HL-2A plasma has a peaked profile, suggesting that the neutral-beam-injected ions in the plasma have a peaked distribution. It also suggests that the neutrons are primarily produced by beam-target reactions in the plasma core region. The measurement results from the neutron camera are consistent with the results of both a standard (235)U fission chamber and NUBEAM neutron calculations. In this paper, the new radial neutron camera system on HL-2A and the first experimental results are described.

  6. Calibration of the Lunar Reconnaissance Orbiter Camera

    Science.gov (United States)

    Tschimmel, M.; Robinson, M. S.; Humm, D. C.; Denevi, B. W.; Lawrence, S. J.; Brylow, S.; Ravine, M.; Ghaemi, T.

    2008-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) onboard the NASA Lunar Reconnaissance Orbiter (LRO) spacecraft consists of three cameras: the Wide-Angle Camera (WAC) and two identical Narrow Angle Cameras (NAC-L, NAC-R). The WAC is a push-frame imager with 5 visible-wavelength filters (415 to 680 nm) at a spatial resolution of 100 m/pixel and 2 UV filters (315 and 360 nm) with a resolution of 400 m/pixel. In addition to multicolor imaging, the WAC can operate in monochrome mode to provide a global large-incidence-angle basemap and a time-lapse movie of the illumination conditions at both poles. The WAC has a highly linear response, a read noise of 72 e- and a full well capacity of 47,200 e-. The signal-to-noise ratio in each band is 140 in the worst case. There are no out-of-band leaks and the spectral response of each filter is well characterized. Each NAC is a monochrome pushbroom scanner, providing images with a resolution of 50 cm/pixel from a 50-km orbit. A single NAC image has a swath width of 2.5 km and a length of up to 26 km. The NACs are mounted to acquire side-by-side imaging for a combined swath width of 5 km. The NAC is designed to fully characterize future human and robotic landing sites in terms of topography and hazard risks. The North and South poles will be mapped at 1-meter scale poleward of 85.5° latitude. Stereo coverage can be provided by pointing the NACs off-nadir. The NACs are also highly linear. Read noise is 71 e- for NAC-L and 74 e- for NAC-R and the full well capacity is 248,500 e- for NAC-L and 262,500 e- for NAC-R. The focal lengths are 699.6 mm for NAC-L and 701.6 mm for NAC-R; the system MTF is 28% for NAC-L and 26% for NAC-R. The signal-to-noise ratio is at least 46 (terminator scene) and can be higher than 200 (high-sun scene). Both NACs exhibit a straylight feature, which is caused by out-of-field sources and is of a magnitude of 1-3%. However, as this feature is well understood it can be greatly reduced during ground processing.

  7. Photometric Calibration of Consumer Video Cameras

    Science.gov (United States)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to measure.

  8. Measuring performance at trade shows

    DEFF Research Database (Denmark)

    Hansen, Kåre

    2004-01-01

    Trade shows are an increasingly important marketing activity for many companies, but current measures of trade show performance do not adequately capture dimensions important to exhibitors. Based on the marketing literature's outcome- and behavior-based control system taxonomy, a model is built...... that captures an outcome-based sales dimension and four behavior-based dimensions (i.e. information-gathering, relationship building, image building, and motivation activities). A 16-item instrument is developed for assessing exhibitors' perceptions of their trade show performance. The paper presents evidence...

  9. LLiST - a new star tracker camera for tip-tilt correction at IOTA

    OpenAIRE

    Schuller, P.A.; Lacasse, M. G.; Lydon, D.; McGonagle, W. H.; Pedretti, E; Reich, R. K.; Schloerb, F. P.; Traub, W. A.

    2004-01-01

    The tip-tilt correction system at the Infrared Optical Telescope Array (IOTA) has been upgraded with a new star tracker camera. The camera features a backside-illuminated CCD chip offering doubled overall quantum efficiency and a four times higher system gain compared to the previous system. Tests carried out to characterize the new system showed a higher system gain together with a lower read-out noise level (in electrons). Shorter read-out cycle times now allow tip-tilt fluctuations to be compensated so that their error imposed on visibility measurements becomes comparable to, and even smaller than, that of higher-order aberrations.

  10. LLiST - a new star tracker camera for tip-tilt correction at IOTA

    CERN Document Server

    Schuller, P A; Lydon, D; McGonagle, W H; Pedretti, E; Reich, R K; Schloerb, F P; Traub, W A

    2004-01-01

    The tip-tilt correction system at the Infrared Optical Telescope Array (IOTA) has been upgraded with a new star tracker camera. The camera features a backside-illuminated CCD chip offering doubled overall quantum efficiency and a four times higher system gain compared to the previous system. Tests carried out to characterize the new system showed a higher system gain together with a lower read-out noise level (in electrons). Shorter read-out cycle times now allow tip-tilt fluctuations to be compensated so that their error imposed on visibility measurements becomes comparable to, and even smaller than, that of higher-order aberrations.

  11. Spectral Measurement of Atmospheric Pressure Plasma by Means of Digital Camera

    Institute of Scientific and Technical Information of China (English)

    葛袁静; 张广秋; 刘益民; 赵志发

    2002-01-01

    A digital camera measuring system has been used successfully to measure the spatial fluctuation behavior of Induced Dielectric Barrier Discharge (IDBD) plasma at atmospheric pressure. The experimental results showed that: (1) The uniformity of the electron temperature in space depends on the discharge conditions and the structure of the web electrode. For a given web electrode, the higher the discharge voltage, the more uniform the spatial distribution of electron temperature will be; for a given discharge, the finer and denser the holes in the web electrode, the more uniform the spatial distribution of electron temperature will be. (2) A digital camera is a suitable instrument for measuring some behaviors of a plasma operating at atmospheric pressure.

  12. Spectral measurement of atmospheric pressure plasma by means of digital camera

    International Nuclear Information System (INIS)

    A digital camera measuring system has been used successfully to measure the spatial fluctuation behavior of Induced Dielectric Barrier Discharge (IDBD) plasma at atmospheric pressure. The experimental results showed that: (1) The uniformity of the electron temperature in space depends on the discharge conditions and the structure of the web electrode. For a given web electrode, the higher the discharge voltage, the more uniform the spatial distribution of electron temperature will be; for a given discharge, the finer and denser the holes in the web electrode, the more uniform the spatial distribution of electron temperature will be. (2) A digital camera is a suitable instrument for measuring some behaviors of a plasma operating at atmospheric pressure.

  13. Experimental platform for moving double-camera system based on binocular vergence eye movements

    Institute of Scientific and Technical Information of China (English)

    LI Heng-yu; LUO Jun; XIE Shao-rong; LI Lei; LI Qing-mei

    2009-01-01

    A control model of binocular vergence eye movements is presented. The control model can reduce the blind areas caused by the double cameras on a moving platform. In order to validate the model performance, an experimental platform and its control system based on the TMS320LF2407 are designed. The control system has a compact configuration and high reliability. The simulation and experimental results show that the control system can realize binocular vergence movements. Compared with a conventional moving double-camera system, this new system can considerably reduce blind areas.

  14. Real-time tracking for virtual environments using scaat kalman filtering and unsynchronised cameras

    DEFF Research Database (Denmark)

    Rasmussen, Niels Tjørnly; Störring, Morritz; Moeslund, Thomas B.;

    2006-01-01

    This paper presents a real-time outside-in camera-based tracking system for wireless 3D pose tracking of a user’s head and hand in a virtual environment. The system uses four unsynchronised cameras as sensors and passive retroreflective markers arranged in rigid bodies as targets. In order...... to achieve high update rates and to cope with the unsynchronised data a single-constraint-at-a-time (SCAAT) Extended Kalman Filtering approach is used that recursively integrates measurements as soon as they are available one-at-a-time. Tests show that this approach is more robust to occlusions and provides...
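
    The defining feature of the SCAAT approach is that each individual measurement updates the filter the moment it arrives, rather than waiting for a complete synchronized set. A minimal hedged sketch (invented 1D constant-velocity state, not the paper's head/hand pose model):

```python
import numpy as np

# SCAAT-style sketch: each camera delivers one scalar constraint at its own
# time stamp, and the filter predicts forward then corrects immediately.
x = np.zeros(2)            # state: [position, velocity]
P = np.eye(2)
q, r = 1e-3, 1e-2          # process / measurement noise (assumed)

def scaat_update(x, P, z, dt):
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    x, P = F @ x, F @ P @ F.T + Q              # predict to measurement time
    H = np.array([[1.0, 0.0]])                 # this camera observes position
    S = (H @ P @ H.T).item() + r
    K = (P @ H.T) / S
    x = x + (K * (z - (H @ x).item())).ravel() # single-constraint correction
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Unsynchronised arrivals: (timestamp, measurement) from different cameras
events = [(0.010, 0.11), (0.013, 0.12), (0.021, 0.18), (0.026, 0.22)]
t_prev = 0.0
for t, z in events:
    x, P = scaat_update(x, P, z, t - t_prev)
    t_prev = t
print("state estimate:", x)
```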

  15. Influence of camera calibration conditions on the accuracy of 3D reconstruction.

    Science.gov (United States)

    Poulin-Girard, Anne-Sophie; Thibault, Simon; Laurendeau, Denis

    2016-02-01

    For stereoscopic systems designed for metrology applications, the accuracy of camera calibration dictates the precision of the 3D reconstruction. In this paper, the impact of various calibration conditions on the reconstruction quality is studied using a virtual camera calibration technique and the design file of a commercially available lens. This technique enables the study of the statistical behavior of the reconstruction task in selected calibration conditions. The data show that the mean reprojection error should not always be used to evaluate the performance of the calibration process and that a low quality of feature detection does not always lead to a high mean reconstruction error.
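
    For reference, the mean reprojection error that the authors caution against relying on exclusively is computed as below; this is a generic pinhole-model sketch, not the paper's virtual calibration setup.

```python
import numpy as np

def mean_reprojection_error(K, R, t, points_3d, points_2d):
    """Mean pixel distance between detected and back-projected points.

    K: 3x3 intrinsics, R: 3x3 rotation, t: (3,) translation,
    points_3d: (N, 3) world points, points_2d: (N, 2) detections.
    """
    cam = R @ points_3d.T + t[:, None]        # world -> camera frame
    proj = K @ cam
    proj = (proj[:2] / proj[2]).T             # perspective divide -> pixels
    return float(np.mean(np.linalg.norm(proj - points_2d, axis=1)))
```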

  16. Using camera trap data to assess the impact of bushmeat hunting on forest mammals in Tanzania

    DEFF Research Database (Denmark)

    Hegerl, Carla; Burgess, Neil David; Nielsen, Martin Reinhardt;

    2016-01-01

    evaluated the impacts of illegal bushmeat hunting on the mammal community of two ecologically similar forests in the Udzungwa Mountains of Tanzania. The forests differ only in their protection status: one is a National Park and the other a Forest Reserve. We deployed systematic camera trap surveys...... in these forests, amounting to 850 and 917 camera days in the Forest Reserve and the National Park, respectively, and investigated differences between the two areas in estimated species-specific occupancies, detectabilities and species richness. We show that the mammal community in the Forest Reserve is degraded...

  17. Microcontroller-based intelligent low-cost-linear-sensor-camera for general edge detection

    Science.gov (United States)

    Hussmann, Stephan; Justen, Detlef

    1997-09-01

    With this paper we present an intelligent low-cost camera. Intelligent means that a microcontroller does all the controlling and provides several inputs and outputs. The camera is a stand-alone system. The basic element of the camera is a linear sensor that consists of a photodiode array (PDA). In comparison with standard CCD chips this type of sensor is a low-cost component and its operation is very simple. Furthermore this paper shows the mechanical, electrical and electro-optical differences between CCD and PDA sensors, so the reader will be able to choose the right sensor for a particular task. Two industrial applications are described at the end of this paper.

  18. Optimal Camera Placement to measure Distances Conservatively Regarding Static and Dynamic Obstacles

    CERN Document Server

    Hänel, Maria; Henrich, Dominik; Grüne, Lars; Pannek, Jürgen

    2011-01-01

    In modern production facilities industrial robots and humans are supposed to interact sharing a common working area. In order to avoid collisions, the distances between objects need to be measured conservatively, which can be done by a camera network. To estimate the acquired distance, unmodelled objects, e.g., an interacting human, need to be modelled and distinguished from premodelled objects like workbenches or robots by image processing such as the background subtraction method. The quality of such an approach depends massively on the settings of the camera network, that is, the positions and orientations of the individual cameras. Of particular interest in this context is the minimization of the error of the distance when using the objects modelled by the background subtraction method instead of the real objects. Here, we show how this minimization can be formulated as an abstract optimization problem. Moreover, we state various aspects of the implementation as well as reasons for the selection of a suitable op...
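
    The background-subtraction step that separates unmodelled objects (such as an interacting human) from the premodelled scene can be sketched with OpenCV's standard MOG2 subtractor; the video file name and parameter values below are placeholders.

```python
import cv2

# Hedged sketch of the background-subtraction step; "workcell.avi" is a
# placeholder file name, and the parameters are illustrative defaults.
cap = cv2.VideoCapture("workcell.avi")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)        # foreground = unmodelled objects
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    # "mask" now marks pixels belonging to unmodelled objects, which the
    # camera network would use for conservative distance estimation.
cap.release()
```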

  19. A fast 3D reconstruction system with a low-cost camera accessory.

    Science.gov (United States)

    Zhang, Yiwei; Gibson, Graham M; Hay, Rebecca; Bowman, Richard W; Padgett, Miles J; Edgar, Matthew P

    2015-06-09

    Photometric stereo is a three-dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D scanning, it comes with a number of advantages, such as a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB-programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and the results are presented, showing a typical height error of <3 mm for a 50 mm sized object.
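
    The core reconstruction step, recovering per-pixel surface normals and albedo from the image stack and the known LED directions, reduces to a least-squares solve under a Lambertian assumption. A minimal sketch (invented inputs; the calibrated light directions of the actual accessory would replace light_dirs):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Lambertian photometric stereo.

    images: (k, h, w) grayscale stack, one image per light direction.
    light_dirs: (k, 3) unit vectors toward each light.
    Returns per-pixel unit normals (h, w, 3) and albedo (h, w).
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                           # (k, h*w)
    # Solve L g = I per pixel, where g = albedo * normal
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, h*w)
    albedo = np.linalg.norm(g, axis=0)
    normals = (g / np.maximum(albedo, 1e-8)).T.reshape(h, w, 3)
    return normals, albedo.reshape(h, w)
```

    The height map reported in the paper would then follow by integrating the recovered normal field.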

  20. Astronaut Charles M. Duke, Jr., in shadow of Lunar Module behind ultraviolet camera

    Science.gov (United States)

    1972-01-01

    Astronaut Charles M. Duke, Jr., lunar module pilot, stands in the shadow of the Lunar Module (LM) behind the ultraviolet (UV) camera, which is in operation. This photograph was taken by astronaut John W. Young, mission commander, during the mission's second extravehicular activity (EVA-2). The UV camera's gold surface is designed to maintain the correct temperature. The astronauts set the prescribed angles of azimuth and elevation (here 14 degrees for photography of the Large Magellanic Cloud) and pointed the camera. Over 180 photographs and spectra in far-ultraviolet light were obtained, showing clouds of hydrogen and other gases and several thousand stars. The United States flag and Lunar Roving Vehicle (LRV) are in the left background. While astronauts Young and Duke descended in the Apollo 16 Lunar Module (LM) 'Orion' to explore the Descartes highlands landing site on the Moon, astronaut Thomas K. Mattingly II, command module pilot, remained with the Command and Service Modules (CSM) 'Casper' in lunar orbit.

  1. Camera systems in human motion analysis for biomedical applications

    Science.gov (United States)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    Human Motion Analysis (HMA) systems have been one of the major interests among researchers in the fields of computer vision, artificial intelligence and biomedical engineering and sciences. This is due to their wide and promising biomedical applications, namely bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and biomedical signal and image processing for diagnosis and rehabilitation applications. This paper provides an extensive review of the camera systems of HMA and their taxonomy, including camera types, camera calibration and camera configuration. The review focuses on evaluating camera system considerations of HMA systems specifically for biomedical applications. This review is important as it provides guidelines and recommendations for researchers and practitioners in selecting a camera system of an HMA system for biomedical applications.

  2. A wide-angle camera module for disposable endoscopy

    Science.gov (United States)

    Shim, Dongha; Yeon, Jesun; Yi, Jason; Park, Jongwon; Park, Soo Nam; Lee, Nanhee

    2016-06-01

    A wide-angle miniaturized camera module for disposable endoscopes is demonstrated in this paper. A lens module with a 150° angle of view (AOV) is designed and manufactured. All-plastic injection-molded lenses and a commercial CMOS image sensor are employed to reduce the manufacturing cost. The image sensor and an LED illumination unit are assembled with the lens module. The camera module does not include a camera processor, to further reduce its size and cost. The size of the camera module is 5.5 × 5.5 × 22.3 mm³. The diagonal field of view (FOV) of the camera module is measured to be 110°. A prototype of a disposable endoscope is implemented to perform pre-clinical animal testing. The esophagus of an adult beagle dog is observed. These results demonstrate the feasibility of a cost-effective and high-performance camera module for disposable endoscopy.

  3. A wide-angle camera module for disposable endoscopy

    Science.gov (United States)

    Shim, Dongha; Yeon, Jesun; Yi, Jason; Park, Jongwon; Park, Soo Nam; Lee, Nanhee

    2016-08-01

    A wide-angle miniaturized camera module for disposable endoscopes is demonstrated in this paper. A lens module with a 150° angle of view (AOV) is designed and manufactured. All-plastic injection-molded lenses and a commercial CMOS image sensor are employed to reduce the manufacturing cost. The image sensor and an LED illumination unit are assembled with the lens module. The camera module does not include a camera processor, to further reduce its size and cost. The size of the camera module is 5.5 × 5.5 × 22.3 mm³. The diagonal field of view (FOV) of the camera module is measured to be 110°. A prototype of a disposable endoscope is implemented to perform pre-clinical animal testing. The esophagus of an adult beagle dog is observed. These results demonstrate the feasibility of a cost-effective and high-performance camera module for disposable endoscopy.

  4. Method for Traffic Flow Estimation using On-dashboard Camera Image

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2014-02-01

    Full Text Available This paper presents a method to estimate the traffic flow on urban roadways by using a car's on-dashboard camera images. The novelty of the described system is that it uses only road-traffic photo images to obtain information about urban roadway traffic flow automatically.

  5. Method for Traffic Flow Estimation using On-dashboard Camera Image

    OpenAIRE

    Kohei Arai; Steven Ray Sentinuwo

    2014-01-01

    This paper presents a method to estimate the traffic flow on urban roadways by using a car's on-dashboard camera images. The novelty of the described system is that it uses only road-traffic photo images to obtain information about urban roadway traffic flow automatically.

  6. Streak camera measurements of laser pulse temporal dispersion in short graded-index optical fibers

    International Nuclear Information System (INIS)

    Streak camera measurements were used to determine temporal dispersion in short (5 to 30 meter) graded-index optical fibers. Results show that 50-ps, 1.06-μm and 0.53-μm laser pulses can be propagated without significant dispersion when care is taken to prevent propagation of energy in fiber cladding modes

  7. Dust visualisation in TJ-II with intensified visible Fast Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Cal, E. de la; Pablos, J. L. de; Carralero, D.; Hidalgo, C.

    2010-10-21

    A visible fast camera equipped with an image intensifier and atomic line filters is used in TJ-II for spectroscopic dust observation. First results show characteristic features that depend on the filter and clearly differ from observations made without filters, as is usual in existing experiments. Preliminary discussions of the observed results are presented. (Author) 5 refs.

  8. Quality assessment of user-generated video using camera motion

    OpenAIRE

    Guo, Jinlin; Gurrin, Cathal; Hopfgartner, Frank; Zhang, ZhenXing; Lao, Songyang

    2013-01-01

    With user-generated video (UGV) becoming so popular on the Web, the availability of a reliable quality assessment (QA) measure of UGV is necessary for improving the users' quality of experience in video-based applications. In this paper, we explore QA of UGV based on how much irregular camera motion it contains, in a low-cost manner. A block-match-based optical flow approach has been employed to extract camera motion features in UGV, based on which irregular camera motion is calculated and ...
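
    A hedged sketch of the motion-extraction step, using OpenCV's dense Farneback optical flow as a stand-in for the paper's block-matching approach (file name and statistics are illustrative): the median flow approximates global camera motion, and its frame-to-frame variation indicates irregularity.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("ugv_clip.mp4")     # placeholder file name
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
motion = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Median flow vector ~ global camera motion for this frame pair
    median_motion = np.median(flow.reshape(-1, 2), axis=0)
    motion.append(np.linalg.norm(median_motion))
    prev_gray = gray
cap.release()

# Irregularity: frame-to-frame variation of the global motion magnitude
print("camera-motion irregularity:", float(np.std(np.diff(motion))))
```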

  9. IR Camera Report for the 7 Day Production Test

    Energy Technology Data Exchange (ETDEWEB)

    Holloway, Michael Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-22

    The following report gives a summary of the IR camera performance results and data for the 7-day production run that occurred from 10 Sep 2015 through 16 Sep 2015. During this production run our goal was to see how well the camera performed its task of monitoring the target window temperature with our improved alignment procedure and emissivity measurements. We also wanted to see if the increased shielding would be effective in protecting the camera from damage and failure.

  10. Experimental demonstration of RGB LED-based optical camera communications

    OpenAIRE

    Luo, Pengfei; Min ZHANG; Ghassemlooy, Zabih; Minh, Hoa Le; Tsai, Hsin-Mu; Tang, Xuan; Png, Lih Chieh; Han, Dahai

    2015-01-01

    Red, green, and blue (RGB) light-emitting diodes (LEDs) are widely used in everyday illumination, particularly where color-changing lighting is required. On the other hand, digital cameras with color filter arrays over image sensors have also been extensively integrated in smart devices. Therefore, optical camera communications (OCC) using RGB LEDs and color cameras is a promising candidate for cost-effective parallel visible light communications (VLC). In this paper, a single RGB LED-based O...

  11. Abnormal Event Detection via Multikernel Learning for Distributed Camera Networks

    OpenAIRE

    Tian Wang; Jie Chen; Paul Honeine; Hichem Snoussi

    2015-01-01

    Distributed camera networks play an important role in public security surveillance. Analyzing video sequences from cameras set at different angles provides enhanced performance for detecting abnormal events. In this paper, an abnormal-event detection algorithm is proposed to identify unusual events captured by multiple cameras. The visual event is summarized and represented by the histogram of the optical flow orientation descriptor, and then a multikernel strategy that takes the multiview scen...

  12. PHOTOGRAMMETRIC PROCESSING OF APOLLO 15 METRIC CAMERA OBLIQUE IMAGES

    OpenAIRE

    K. L. Edmundson; O. Alexandrov; Archinal, B. A.; Becker, K.J.; Becker, T. L.; Kirk, R L; Moratto, Z. M.; Nefian, A. V.; Richie, J. O.; Robinson, M S

    2016-01-01

    The integrated photogrammetric mapping system flown on the last three Apollo lunar missions (15, 16, and 17) in the early 1970s incorporated a Metric (mapping) Camera, a high-resolution Panoramic Camera, and a star camera and laser altimeter to provide support data. In an ongoing collaboration, the U.S. Geological Survey’s Astrogeology Science Center, the Intelligent Robotics Group of the NASA Ames Research Center, and Arizona State University are working to achieve the most complete...

  13. IR Camera Report for the 7 Day Production Test

    International Nuclear Information System (INIS)

    The following report gives a summary of the IR camera performance results and data for the 7-day production run that occurred from 10 Sep 2015 through 16 Sep 2015. During this production run our goal was to see how well the camera performed its task of monitoring the target window temperature with our improved alignment procedure and emissivity measurements. We also wanted to see if the increased shielding would be effective in protecting the camera from damage and failure.

  14. On Pixel Detection Threshold in the Gigavision Camera

    OpenAIRE

    Yang, F.; Sbaiz, L.; Charbon, E.; Susstrunk, S.; Vetterli, M.

    2010-01-01

    Recently, we have proposed a new imaging device called the gigavision camera, whose most important characteristic is that its pixels have a binary response. The response function of a gigavision sensor is non-linear and similar to a logarithmic function, which makes the camera suitable for high dynamic range imaging. One important parameter in the gigavision camera is the threshold for generating binary pixels. The threshold T relates to the number of photo-electrons necessary for the pixel output to switch f...
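
    The role of the threshold T can be made concrete with a small Poisson model: if an exposure generates on average lam photo-electrons in a pixel, a binary pixel switches on when at least T photo-electrons are collected. A sketch with assumed values:

```python
from scipy.stats import poisson

# P(pixel reads 1) = P(N >= T) for N ~ Poisson(lam)
for lam in (0.5, 2.0, 8.0):            # mean photo-electrons (assumed)
    for T in (1, 2, 4):                # candidate thresholds
        p_on = poisson.sf(T - 1, lam)  # sf(T-1) = P(N >= T)
        print(f"lam={lam:4.1f}  T={T}  P(on)={p_on:.3f}")
```

    The saturating shape of P(on) as the photon flux grows is what produces the logarithmic-like response mentioned above.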

  15. Central Acceptance Testing for Camera Technologies for CTA

    OpenAIRE

    Bonardi, A.; T. Buanes; Chadwick, P.; Dazzi, F.; A. Förster(CERN, Geneva, Switzerland); Hörandel, J. R.; Punch, M.; Consortium, R. M. Wagner for the CTA

    2015-01-01

    The Cherenkov Telescope Array (CTA) is an international initiative to build the next-generation ground-based very-high-energy gamma-ray observatory. It will consist of telescopes of three different sizes, employing several different technologies for the cameras that detect the Cherenkov light from the observed air showers. In order to ensure the compliance of each camera technology with CTA requirements, CTA will perform central acceptance testing of each camera technology. To assist with thi...

  16. Analysis of Camera Arrays Applicable to the Internet of Things

    OpenAIRE

    Jiachen Yang; Ru Xu; Zhihan Lv; Houbing Song

    2016-01-01

    The Internet of Things is built based on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on the camera modeling and analysis, which is very important for stereo display and helps with viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are...

  17. Tokyo Motor Show 2003; Tokyo Motor Show 2003

    Energy Technology Data Exchange (ETDEWEB)

    Joly, E.

    2004-01-01

    The following text presents the different technologies exhibited at the 37th Tokyo Motor Show. The report points out the major development trends of the Japanese automobile industry. Hybrid electric vehicles and vehicles equipped with fuel cells were highlighted by the Japanese manufacturers, which devote considerable budgets to research on less-polluting vehicles. The exhibited models, although differing from manufacturer to manufacturer, always use a hybrid fuel cell/battery system. The manufacturers also emphasized intelligent systems for navigation and safety, as well as design and comfort. (O.M.)

  18. Pembrolizumab Shows Promise for NSCLC.

    Science.gov (United States)

    2015-06-01

    Data from the KEYNOTE-001 trial show that pembrolizumab improves clinical outcomes for patients with advanced non-small cell lung cancer, and is well tolerated. PD-L1 expression in at least 50% of tumor cells correlated with improved efficacy.

  19. Testing and evaluation of thermal cameras for absolute temperature measurement

    Science.gov (United States)

    Chrzanowski, Krzysztof; Fischer, Joachim; Matyszkiel, Robert

    2000-09-01

    The accuracy of temperature measurement is the most important criterion for the evaluation of thermal cameras used in applications requiring absolute temperature measurement. All the main international metrological organizations currently propose a parameter called uncertainty as a measure of measurement accuracy. We propose a set of parameters for the characterization of thermal measurement cameras. It is shown that if these parameters are known, it is possible to determine the uncertainty of temperature measurement due only to the internal errors of these cameras. Values of this uncertainty can be used as an objective criterion for comparing different thermal measurement cameras.
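
    One generic way to turn such a parameter set into a single uncertainty figure is to combine independent component uncertainties in quadrature, as in the sketch below; the component names and values are illustrative assumptions, not the authors' parameter set.

```python
import numpy as np

# Illustrative internal-error components of a thermal camera, expressed as
# standard uncertainties in kelvin (values assumed, not from the paper).
components = {
    "noise (NETD)": 0.08,
    "calibration drift": 0.30,
    "digital resolution": 0.06,
    "stability with ambient temp": 0.20,
}

combined = np.sqrt(sum(u**2 for u in components.values()))
expanded = 2.0 * combined   # coverage factor k=2 (~95% level)
print(f"combined standard uncertainty: {combined:.2f} K")
print(f"expanded uncertainty (k=2):   {expanded:.2f} K")
```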

  20. 360 deg Camera Head for Unmanned Sea Surface Vehicles

    Science.gov (United States)

    Townsend, Julie A.; Kulczycki, Eric A.; Willson, Reginald G.; Huntsberger, Terrance L.; Garrett, Michael S.; Trebi-Ollennu, Ashitey; Bergh, Charles F.

    2012-01-01

    The 360 camera head consists of a set of six color cameras arranged in a circular pattern such that their overlapping fields of view give a full 360 view of the immediate surroundings. The cameras are enclosed in a watertight container along with support electronics and a power distribution system. Each camera views the world through a watertight porthole. To prevent overheating or condensation in extreme weather conditions, the watertight container is also equipped with an electrical cooling unit and a pair of internal fans for circulation.

  1. Mid-IR image acquisition using a standard CCD camera

    DEFF Research Database (Denmark)

    Dam, Jeppe Seidelin; Sørensen, Knud Palmelund; Pedersen, Christian;

    2010-01-01

    Direct image acquisition in the 3-5 µm range is realized using a standard CCD camera and a wavelength up-converter unit. The converter unit transfers the image information to the NIR range, where state-of-the-art cameras exist.

  2. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    Science.gov (United States)

    Shortis, Mark

    2015-12-07

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  3. MICADO: the E-ELT Adaptive Optics Imaging Camera

    CERN Document Server

    Davies, R

    2010-01-01

    MICADO is the adaptive optics imaging camera for the E-ELT. It has been designed and optimised to be mounted to the LGS-MCAO system MAORY, and will provide diffraction limited imaging over a wide (about 1 arcmin) field of view. For initial operations, it can also be used with its own simpler AO module that provides on-axis diffraction limited performance using natural guide stars. We discuss the instrument's key capabilities and expected performance, and show how the science drivers have shaped its design. We outline the technical concept, from the opto-mechanical design to operations and data processing. We describe the AO module, summarise the instrument performance, and indicate some possible future developments.

  4. Noninvasive particle sizing using camera-based diffuse reflectance spectroscopy

    DEFF Research Database (Denmark)

    Abildgaard, Otto Højager Attermann; Frisvad, Jeppe Revall; Falster, Viggo;

    2016-01-01

    Diffuse reflectance measurements are useful for noninvasive inspection of optical properties such as reduced scattering and absorption coefficients. Spectroscopic analysis of these optical properties can be used for particle sizing. Systems based on optical fiber probes are commonly employed, but...... their low spatial resolution limits their validity ranges for the coefficients. To cover a wider range of coefficients, we use camera-based spectroscopic oblique incidence reflectometry. We develop a noninvasive technique for acquisition of apparent particle size distributions based on this approach....... Our technique is validated using stable oil-in-water emulsions with a wide range of known particle size distributions. We also measure the apparent particle size distributions of complex dairy products. These results show that our tool, in contrast to those based on fiber probes, can deal with a range...

  5. A camera for a narrow and deep welding groove

    Science.gov (United States)

    Vehmanen, Miika S.; Korhonen, Mika; Mäkynen, Anssi J.

    2008-06-01

    In this paper welding seam imaging in a very narrow and deep groove is presented. Standard camera optics cannot be used as they do not reach the bottom of the groove. Therefore, selecting suitable imaging optics and components was the main challenge of the study. The implementation is based on image transmission via a borescope. The borescope has a long and narrow tube with graded-index relay optics inside. To avoid excessive heating, the borescope tube is enclosed in a cooling pipe. The performance of the imaging system was tested by measuring its modulation transfer function (MTF), and its distortion was evaluated visually. The results show that a borescope providing VGA resolution is adequate for the application. The spectrum of the welding process was studied to determine the optimum window for observing the welding seam and electrode. The optimal bandwidth was found in the region of 700-1000 nm.

  6. Robust multi-camera view face recognition

    CERN Document Server

    Kisku, Dakshina Ranjan; Gupta, Phalguni; Sing, Jamuna Kanta

    2010-01-01

    This paper presents a multi-appearance fusion of Principal Component Analysis (PCA) and a generalization of Linear Discriminant Analysis (LDA) for a multi-camera-view offline face recognition (verification) system. The generalization of LDA has been extended to establish correlations between the face classes in the transformed representation; this is called a canonical covariate. The proposed system uses Gabor filter banks to characterize facial features by spatial frequency, spatial locality and orientation, to compensate for the variations in face instances caused by illumination, pose and facial expression changes. Convolution of the Gabor filter bank with face images produces Gabor face representations with high-dimensional feature vectors. PCA and canonical covariates are then applied to the Gabor face representations to reduce the high-dimensional feature spaces into low-dimensional Gabor eigenfaces and Gabor canonical faces. The reduced eigenface vector and canonical face vector are fused together usi...

  7. Retinal oximetry with a multiaperture camera

    Science.gov (United States)

    Lemaillet, Paul; Lompado, Art; Ibrahim, Mohamed; Nguyen, Quan Dong; Ramella-Roman, Jessica C.

    2010-02-01

    Oxygen saturation measurement in the retina is essential in monitoring the eye health of diabetic patients. In this paper, preliminary results of oxygen saturation measurements in the retina of a healthy patient are presented. The retinal oximeter used is based on a regular fundus camera to which an optimized optical train was added, designed to perform aperture division, while a filter array helps select the requested wavelengths. Hence, nine equivalent wavelength-dependent sub-images are taken in a snapshot, which helps minimize the effects of eye movements. The setup is calibrated using a set of reflectance calibration phantoms and a lookup table (LUT) is computed. An inverse model based on the LUT is presented to extract the optical properties of a patient's fundus and further estimate the oxygen saturation in a retinal vessel.

  8. Relevance of ellipse eccentricity for camera calibration

    Science.gov (United States)

    Mordwinzew, W.; Tietz, B.; Boochs, F.; Paulus, D.

    2015-05-01

    Plane circular targets are widely used within calibrations of optical sensors through photogrammetric set-ups. Due to this popularity, their advantages and disadvantages are also well studied in the scientific community. One main disadvantage occurs when the projected target is not parallel to the image plane. In this geometric constellation, the target has an elliptic geometry with an offset between its geometric and its projected center. This difference is referred to as ellipse eccentricity and is a systematic error which, if not treated accordingly, has a negative impact on the overall achievable accuracy. The magnitude and direction of eccentricity errors are dependent on various factors. The most important one is the target size. The bigger an ellipse in the image is, the bigger the error will be. Although correction models dealing with eccentricity have been available for decades, it is mostly seen as a planning task in which the aim is to choose the target size small enough so that the resulting eccentricity error remains negligible. Besides the fact that advanced mathematical models are available and that the influence of this error on camera calibration results is still not completely investigated, there are various additional reasons why bigger targets can or should not be avoided. One of them is the growing image resolution as a by-product from advancements in the sensor development. Here, smaller pixels have a lower S/N ratio, necessitating more pixels to assure geometric quality. Another scenario might need bigger targets due to larger scale differences whereas distant targets should still contain enough information in the image. In general, bigger ellipses contain more contour pixels and therefore more information. This supports the target-detection algorithms to perform better even at non-optimal conditions such as data from sensors with a high noise level. In contrast to rather simple measuring situations in a stereo or multi-image mode, the impact
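
    The eccentricity offset can be reproduced numerically: a circle is the conic C0, the projected conic under a plane-to-image homography H is C = H^{-T} C0 H^{-1}, and the center of the resulting ellipse generally differs from the projection of the circle's center. A sketch with an invented homography:

```python
import numpy as np

# Circle of radius r centered at the target-plane origin, as a conic matrix:
# x^2 + y^2 - r^2 = 0  ->  diag(1, 1, -r^2)
r = 20.0
C0 = np.diag([1.0, 1.0, -r**2])

# Invented plane-to-image homography (a tilted target induces the
# perspective terms in the last row).
H = np.array([[1.0,  0.1,  50.0],
              [0.05, 0.9,  80.0],
              [1e-3, 2e-3,  1.0]])

# Conics transform as C = H^{-T} C0 H^{-1}
Hinv = np.linalg.inv(H)
C = Hinv.T @ C0 @ Hinv

# Center of the conic [[a,b,d],[b,c,e],[d,e,f]]: solve [[a,b],[b,c]] x = -[d,e]
center_ellipse = np.linalg.solve(C[:2, :2], -C[:2, 2])

# Projection of the circle's true center (0, 0)
p = H @ np.array([0.0, 0.0, 1.0])
center_projected = p[:2] / p[2]

print("eccentricity offset (pixels):", center_ellipse - center_projected)
```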

  9. Dark energy camera installation at CTIO: overview

    Science.gov (United States)

    Abbott, Timothy M.; Muñoz, Freddy; Walker, Alistair R.; Smith, Chris; Montane, Andrés.; Gregory, Brooke; Tighe, Roberto; Schurter, Patricio; van der Bliek, Nicole S.; Schumacher, German

    2012-09-01

    The Dark Energy Camera (DECam) has been installed on the V. M. Blanco telescope at Cerro Tololo Inter-American Observatory in Chile. This major upgrade to the facility has required numerous modifications to the telescope and improvements in observatory infrastructure. The telescope prime focus assembly has been entirely replaced, and the f/8 secondary change procedure radically changed. The heavier instrument means that telescope balance has been significantly modified. The telescope control system has been upgraded. NOAO has established a data transport system to efficiently move DECam's output to the NCSA for processing. The observatory has integrated the DECam highpressure, two-phase cryogenic cooling system into its operations and converted the Coudé room into an environmentally-controlled instrument handling facility incorporating a high quality cleanroom. New procedures to ensure the safety of personnel and equipment have been introduced.

  10. Camera Raw Explained (3)

    Institute of Scientific and Technical Information of China (English)

    张恣宽

    2010-01-01

    Continuing from the previous installment, this article introduces the Camera Raw adjustment panels. (2) The Tone Curve panel: clicking the Tone Curve button opens the Tone Curve options panel (shortcut Ctrl+Alt+2). This panel is mainly used for fine adjustment of an image's midtones. Starting with Photoshop CS3, the histogram waveform previously found only in the Levels dialog is displayed in the background of the curve, so that the tonal changes before and after adjustment can be seen directly.

  11. Collimated trans-axial tomographic scintillation camera

    International Nuclear Information System (INIS)

    The principal problem in trans-axial tomographic radioisotope scanning is the length of time required to obtain meaningful data. Patient movement and radioisotope migration during the scanning period can cause distortion of the image. The object of this invention is to reduce the scanning time without degrading the images obtained. A system is described in which a scintillation camera detector is moved in an orbit about the cranial-caudal axis relative to the patient. A collimator is used in which lead septa are arranged so as to admit gamma rays travelling perpendicular to this axis with high spatial resolution and those travelling in the direction of the axis with low spatial resolution, thus increasing the rate of acceptance of radioactive events contributing to the positional information without sacrificing spatial resolution. (author)

  12. Neutron camera employing row and column summations

    Science.gov (United States)

    Clonts, Lloyd G.; Diawara, Yacouba; Donahue, Jr, Cornelius; Montcalm, Christopher A.; Riedel, Richard A.; Visscher, Theodore

    2016-06-14

    For each photomultiplier tube in an Anger camera, an R×S array of preamplifiers is provided to detect electrons generated within the photomultiplier tube. The outputs of the preamplifiers are digitized to measure the magnitude of the signals from each preamplifier. For each photomultiplier tube, a corresponding summation circuitry including R row summation circuits and S column summation circuits numerically adds the magnitudes of the signals from the preamplifiers for each row and for each column to generate histograms. For a P×Q array of photomultiplier tubes, P×Q summation circuitries generate P×Q row histograms including R entries and P×Q column histograms including S entries. The total set of histograms includes P×Q×(R+S) entries, which can be analyzed by a position calculation circuit to determine the locations of events (detection of a neutron).
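
    The row/column summation and the subsequent position estimate can be sketched directly: the per-row and per-column sums of the R×S preamplifier magnitudes form the histograms, and the event location follows by centroiding. Array sizes and signal values below are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

R, S = 8, 8                                    # preamp rows/cols (assumed)
signals = rng.poisson(2.0, size=(R, S)).astype(float)  # baseline noise
signals[3, 5] += 200.0                         # an event near row 3, col 5

row_hist = signals.sum(axis=1)                 # R row sums
col_hist = signals.sum(axis=0)                 # S column sums

# Centroid of the histograms estimates the event location
row_pos = np.average(np.arange(R), weights=row_hist)
col_pos = np.average(np.arange(S), weights=col_hist)
print(f"event at approx (row, col) = ({row_pos:.2f}, {col_pos:.2f})")
```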

  13. Fast Camera Imaging of Hall Thruster Ignition

    International Nuclear Information System (INIS)

    Hall thrusters provide efficient space propulsion by electrostatic acceleration of ions. Rotating electron clouds in the thruster overcome the space charge limitations of other methods. Images of the thruster startup, taken with a fast camera, reveal a bright ionization period which settles into steady state operation over 50 µs. The cathode introduces azimuthal asymmetry, which persists for about 30 µs into the ignition. Plasma thrusters are used on satellites for repositioning, orbit correction and drag compensation. The advantage of plasma thrusters over conventional chemical thrusters is that the exhaust energies are not limited by chemical energy to about an electron volt. For xenon Hall thrusters, the ion exhaust velocity can be 15-20 km/s, compared to 5 km/s for a typical chemical thruster.

  14. First Light for World's Largest 'Thermometer Camera'

    Science.gov (United States)

    2007-08-01

    LABOCA in Service at APEX The world's largest bolometer camera for submillimetre astronomy is now in service at the 12-m APEX telescope, located on the 5100-m-high Chajnantor plateau in the Chilean Andes. LABOCA was specifically designed for the study of extremely cold astronomical objects and, with its large field of view and very high sensitivity, will open new vistas in our knowledge of how stars form and how the first galaxies emerged from the Big Bang. ESO PR Photo 35a/07: LABOCA on APEX. "A large fraction of all the gas in the Universe has extremely cold temperatures of around minus 250 degrees Celsius, a mere 20 degrees above absolute zero," says Karl Menten, director at the Max Planck Institute for Radioastronomy (MPIfR) in Bonn, Germany, that built LABOCA. "Studying these cold clouds requires looking at the light they radiate in the submillimetre range, with very sophisticated detectors." Astronomers use bolometers for this task, which are, in essence, thermometers. They detect incoming radiation by registering the resulting rise in temperature. More specifically, a bolometer detector consists of an extremely thin foil that absorbs the incoming light. Any change in the radiation's intensity results in a slight change in the temperature of the foil, which can then be registered by sensitive electronic thermometers. To be able to measure such minute temperature fluctuations, the bolometers must be cooled down to less than 0.3 degrees above absolute zero, that is, below minus 272.85 degrees Celsius. "Cooling to such low temperatures requires using liquid helium, which is no simple feat for an observatory located at 5100 m altitude," says Carlos De Breuck, the APEX instrument scientist at ESO. Nor is it simple to measure the weak temperature radiation of astronomical objects. Millimetre and submillimetre radiation opens a window into the enigmatic cold Universe, but the signals from space are heavily absorbed by water vapour in the Earth's atmosphere.

  15. Picasso on Show in Shanghai

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    A staff member of the National Picasso Museum of France checks one of the great Spanish artist Pablo Picasso’s works at the China Pavilion inside the site of the 2010 World Expo in Shanghai on October 12.Sixty-two priceless paintings and statues selected from the works of the renowned artist have been brought to the pavilion for an upcoming exhibition to premiere on October 18.Besides these representative masterpieces,50 valuable photographs showing the artist’s whole life will also be presented.The exhibition’s estimated value is 678 million euros ($934 million).It will be held until January 10,2012.

  16. A tiny VIS-NIR snapshot multispectral camera

    Science.gov (United States)

    Geelen, Bert; Blanch, Carolina; Gonzalez, Pilar; Tack, Nicolaas; Lambrechts, Andy

    2015-03-01

    Spectral imaging can reveal a lot of hidden details about the world around us, but is currently confined to laboratory environments due to the need for complex, costly and bulky cameras. Imec has developed a unique spectral sensor concept in which the spectral unit is monolithically integrated on top of a standard CMOS image sensor at wafer level, hence enabling the design of compact, low cost and high acquisition speed spectral cameras with a high design flexibility. This flexibility has previously been demonstrated by imec in the form of three spectral camera architectures: firstly a high spatial and spectral resolution scanning camera, secondly a multichannel snapshot multispectral camera and thirdly a per-pixel mosaic snapshot spectral camera. These snapshot spectral cameras sense an entire multispectral data cube at one discrete point in time, extending the domain of spectral imaging towards dynamic, video-rate applications. This paper describes the integration of our per-pixel mosaic snapshot spectral sensors inside a tiny, portable and extremely user-friendly camera. Our prototype demonstrator cameras can acquire multispectral image cubes, either of 272x512 pixels over 16 bands in the VIS (470-620nm) or of 217x409 pixels over 25 bands in the VNIR (600-900nm) at 170 cubes per second for normal machine vision illumination levels. The cameras themselves are extremely compact based on Ximea xiQ cameras, measuring only 26x26x30mm, and can be operated from a laptop-based USB3 connection, making them easily deployable in very diverse environments.

  17. A novel optimization method of camera parameters used for vision measurement

    Science.gov (United States)

    Zhou, Fuqiang; Cui, Yi; Peng, Bin; Wang, Yexin

    2012-09-01

    Camera calibration plays an important role in the field of machine vision applications. During the process of camera calibration, nonlinear optimization is crucial to obtain the best performance of the camera parameters. Currently, the existing optimization method aims at minimizing the distance error between the detected image point and the calculated back-projected image point, based on 2D image pixel coordinates. However, the vision measurement process is conducted in 3D space, while the optimization method generally adopted is carried out in the 2D image plane. Moreover, the error criteria for optimization and measurement differ: an equal pixel distance error in the 2D image plane leads to different 3D metric distance errors at different positions in front of the camera. All the reasons mentioned above cause a decrease in accuracy for 3D vision measurement. To solve this problem, a novel optimization method of camera parameters used for vision measurement is proposed. The presented method is devoted to minimizing the metric distance error between the calculated point and the real point in the 3D measurement coordinate system. For comparison, the initial camera parameters acquired through linear calibration are optimized with two different methods: the conventional method and the novel method presented in this paper. The calibration accuracy and measurement accuracy of the parameters obtained by the two methods are thoroughly analyzed, and the choice of a suitable accuracy evaluation method is discussed. Simulated and real experiments estimating the performance of the proposed method on test data are reported, and the results show that the proposed 3D optimization method is quite efficient at improving measurement accuracy compared with the traditional method. It can meet the practical requirement of high precision in 3D vision metrology engineering.
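
    The paper's distinction can be made concrete with a toy rectified-stereo model: instead of minimizing 2D pixel residuals, the calibration parameter is refined against 3D metric residuals between reconstructed and reference points. Everything below (geometry, noise, parametrization) is an invented illustration:

```python
import numpy as np
from scipy.optimize import least_squares

# Rectified stereo toy model (all values assumed): depth Z = f*b/disparity,
# X = x*Z/f, Y = y*Z/f, with pixel coords relative to the principal point.
b_true, f_true = 0.12, 800.0
rng = np.random.default_rng(3)

pts = rng.uniform([-0.5, -0.5, 1.0], [0.5, 0.5, 3.0], size=(40, 3))  # metres
xl = f_true * pts[:, 0] / pts[:, 2]           # left-image x (pixels)
y = f_true * pts[:, 1] / pts[:, 2]
disp = f_true * b_true / pts[:, 2] + rng.normal(0, 0.2, len(pts))    # noisy

def reconstruct(f):
    Z = f * b_true / disp
    return np.column_stack([xl * Z / f, y * Z / f, Z])

def residual_3d(p):
    # 3D metric residuals: the cost proposed for measurement accuracy
    return (reconstruct(p[0]) - pts).ravel()

fit = least_squares(residual_3d, x0=[700.0])
print("refined focal length:", fit.x[0])
```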

  18. Single photon detection and localization accuracy with an ebCMOS camera

    Energy Technology Data Exchange (ETDEWEB)

    Cajgfinger, T. [CNRS/IN2P3, Institut de Physique Nucléaire de Lyon, Villeurbanne F-69622 (France); Dominjon, A., E-mail: agnes.dominjon@nao.ac.jp [Université de Lyon, Université de Lyon 1, Lyon 69003 France. (France); Barbier, R. [CNRS/IN2P3, Institut de Physique Nucléaire de Lyon, Villeurbanne F-69622 (France); Université de Lyon, Université de Lyon 1, Lyon 69003 France. (France)

    2015-07-01

    The CMOS sensor technologies evolve very fast and today offer very promising solutions to existing issues faced by imaging camera systems. CMOS sensors are very attractive for fast and sensitive imaging thanks to their low pixel noise (1 e-) and the possibility of backside illumination. The ebCMOS group of IPNL has produced a camera system dedicated to low-light-level detection, based on a 640 kPixel ebCMOS with its acquisition system. After reviewing the detection principle of an ebCMOS and the characteristics of our prototype, we compare our camera with other imaging systems. We compare the identification efficiency and the localization accuracy of a point source for four different photo-detection devices: the scientific CMOS (sCMOS), the Charge-Coupled Device (CCD), the Electron-Multiplying CCD (emCCD) and the Electron-Bombarded CMOS (ebCMOS). Our ebCMOS camera is able to identify a single photon source in less than 10 ms with a localization accuracy better than 1 µm. We also report efficiency measurements and the false-positive identification rate of the ebCMOS camera when identifying several hundred single-photon sources in parallel. About 700 spots are identified with a detection efficiency higher than 90% and a false-positive percentage lower than 5%. With these measurements, we show that our target-tracking algorithm can be implemented in real time at 500 frames per second under a photon flux of the order of 8000 photons per frame. These results demonstrate that the ebCMOS camera concept, with its single-photon detection and target-tracking algorithm, is one of the best devices for low-light and fast applications such as bioluminescence imaging, quantum dot tracking or adaptive optics.

  19. Single photon detection and localization accuracy with an ebCMOS camera

    Science.gov (United States)

    Cajgfinger, T.; Dominjon, A.; Barbier, R.

    2015-07-01

    The CMOS sensor technologies evolve very fast and today offer very promising solutions to existing issues faced by imaging camera systems. CMOS sensors are very attractive for fast and sensitive imaging thanks to their low pixel noise (1 e-) and the possibility of backside illumination. The ebCMOS group of IPNL has produced a camera system dedicated to low-light-level detection, based on a 640 kPixel ebCMOS with its acquisition system. After reviewing the detection principle of an ebCMOS and the characteristics of our prototype, we compare our camera with other imaging systems. We compare the identification efficiency and the localization accuracy of a point source for four different photo-detection devices: the scientific CMOS (sCMOS), the Charge-Coupled Device (CCD), the Electron-Multiplying CCD (emCCD) and the Electron-Bombarded CMOS (ebCMOS). Our ebCMOS camera is able to identify a single photon source in less than 10 ms with a localization accuracy better than 1 µm. We also report efficiency measurements and the false-positive identification rate of the ebCMOS camera when identifying several hundred single-photon sources in parallel. About 700 spots are identified with a detection efficiency higher than 90% and a false-positive percentage lower than 5%. With these measurements, we show that our target-tracking algorithm can be implemented in real time at 500 frames per second under a photon flux of the order of 8000 photons per frame. These results demonstrate that the ebCMOS camera concept, with its single-photon detection and target-tracking algorithm, is one of the best devices for low-light and fast applications such as bioluminescence imaging, quantum dot tracking or adaptive optics.

  20. Optimization of grate combustion by means of an IR camera. Final report; Optimering af risteforbraending IR-kamera. Slut rapport

    Energy Technology Data Exchange (ETDEWEB)

    Didriksen, H.; Jensen, Joergen Peter; Hansen, Joergen (DONG Energy, Fredericia (Denmark)); Clausen, Soennik; Larsen, Henning (Technical Univ. of Denmark, Risoe National Lab. for Sustainable Energy, Roskilde (Denmark))

    2010-09-15

    The target of the project has been to improve the control and regulation of grate-fired straw boilers by incorporating measurement signals from a specially developed IR camera into a new control concept. The project was carried out on the straw boiler at the Avedoere power station. The conclusion is that it is a very demanding task to develop an IR camera, including software, that must function as a process measuring device for continuous on-line measurement under the very demanding conditions in a straw-fired boiler. The results showed that this was not possible within the framework of this project. The developed camera has, on the other hand, proved to be very well suited for measuring campaigns where the camera is 'manned', i.e. continuously monitored. (Energy 11)