WorldWideScience

Sample records for level infrared camera

  1. Design and development of wafer-level near-infrared micro-camera

    Science.gov (United States)

    Zeller, John W.; Rouse, Caitlin; Efstathiadis, Harry; Haldar, Pradeep; Dhar, Nibir K.; Lewis, Jay S.; Wijewarnasuriya, Priyalal; Puri, Yash R.; Sood, Ashok K.

    2015-08-01

    SiGe offers a low-cost alternative to conventional infrared sensor material systems such as InGaAs, InSb, and HgCdTe for developing near-infrared (NIR) photodetector devices that do not require cooling and can offer high bandwidths and responsivities. As a result of the significant difference in thermal expansion coefficients between germanium and silicon, tensile strain incorporated into Ge epitaxial layers deposited on Si utilizing specialized growth processes can extend the operational range of detection to 1600 nm and longer wavelengths. We have fabricated SiGe based PIN detector devices on 300 mm diameter Si wafers in order to take advantage of high throughput, large-area complementary metal-oxide semiconductor (CMOS) technology. This device fabrication process involves low temperature epitaxial deposition of Ge to form a thin p+ seed/buffer layer, followed by higher temperature deposition of a thicker Ge intrinsic layer. An n+-Ge layer formed by ion implantation of phosphorus, passivating oxide cap, and then top copper contacts complete the PIN photodetector design. Various techniques including transmission electron microscopy (TEM) and secondary ion mass spectrometry (SIMS) have been employed to characterize the material and structural properties of the epitaxial growth and fabricated detector devices. In addition, electrical characterization was performed to compare the I-V dark current vs. photocurrent response as well as the time and wavelength varying photoresponse properties of the fabricated devices, results of which are likewise presented.

  2. Evaluation of low level laser and interferential current in the therapy of complex regional pain syndrome by infrared thermographic camera

    Directory of Open Access Journals (Sweden)

    Kocić Mirjana

    2010-01-01

    Background/Aim. Complex regional pain syndrome type I (CRPS I) is characterized by continuous regional pain that is disproportionate in duration and intensity to the trauma or other lesion that caused it. The aim of the study was to evaluate and compare, using thermovision, the effects of low level laser therapy and interferential current therapy in the treatment of CRPS I. Methods. The prospective randomized controlled clinical study included 45 patients with unilateral CRPS I, after a fracture of the distal end of the radius, the tibia and/or the fibula, treated in the Clinical Centre in Nis from 2004 to 2007. Group A consisted of 20 patients treated with low level laser therapy and kinesitherapy, while the patients in group B (n = 25) were treated with interferential current and kinesitherapy. The regions of interest were imaged with a thermovision camera on both sides, before and after the 20 therapeutic procedures had been applied. Afterwards, quantitative analysis and comparison of the thermograms taken before and after the therapy were performed. Results. There was a statistically significant decrease in the mean maximum temperature difference between the injured and the contralateral extremity after the therapy compared with the status before the therapy, both in group A (p < 0.001) and in group B (p < 0.001). The decrease was statistically significantly greater in group A than in group B (p < 0.05). Conclusions. Using infrared thermovision we showed that both physical medicine methods were effective in the treatment of CRPS I, but the effectiveness of laser therapy was statistically significantly higher than that of interferential current therapy.

  3. Results with the UKIRT infrared camera

    International Nuclear Information System (INIS)

    Mclean, I.S.

    1987-01-01

    Recent advances in focal plane array technology have made an immense impact on infrared astronomy. Results from the commissioning of the first infrared camera on UKIRT (the world's largest IR telescope) are presented. The camera, called IRCAM 1, employs the 62 x 58 InSb DRO array from SBRC in an otherwise general purpose system which is briefly described. Several imaging modes are possible, including staring, chopping and a high-speed snapshot mode. Results presented include the first true high resolution images at IR wavelengths of the entire Orion nebula.

  4. Dual-band infrared camera

    Science.gov (United States)

    Vogel, H.; Schlemmer, H.

    2005-10-01

    Every year, numerous accidents happen on European roads due to bad visibility (fog, night, heavy rain). Similarly, the dramatic aviation accidents of year 2001 in Milan and Zurich have reminded us that aviation safety is equally affected by reduced visibility. A dual-band thermal imager was developed in order to raise human situation awareness under conditions of reduced visibility especially in the automotive and aeronautical context but also for all transportation or surveillance tasks. The chosen wavelength bands are the Short Wave Infrared SWIR and the Long Wave Infrared LWIR band which are less obscured by reduced visibility conditions than the visible band. Furthermore, our field tests clearly show that the two different spectral bands very often contain complementary information. Pyramidal fusion is used to integrate complementary and redundant features of the multi-spectral images into a fused image which can be displayed on a monitor to provide more and better information for the driver or pilot.
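
    The "pyramidal fusion" mentioned above can be illustrated with a short sketch. The following Laplacian-pyramid fusion example in Python (using OpenCV and NumPy) is only an illustration under assumed choices: the four-level pyramid and the keep-the-larger-magnitude selection rule are generic defaults rather than the fusion rule actually used in this imager, and the two inputs are assumed to be co-registered 8-bit images of equal size.

        # Minimal Laplacian-pyramid fusion sketch (illustrative assumptions:
        # co-registered 8-bit inputs, 4 levels, keep-larger-magnitude rule).
        import cv2
        import numpy as np

        def laplacian_pyramid(img, levels=4):
            gauss = [img.astype(np.float32)]
            for _ in range(levels):
                gauss.append(cv2.pyrDown(gauss[-1]))
            lap = [gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[1::-1])
                   for i in range(levels)]
            lap.append(gauss[-1])                     # coarsest level kept as-is
            return lap

        def fuse(img_swir, img_lwir, levels=4):
            pa = laplacian_pyramid(img_swir, levels)
            pb = laplacian_pyramid(img_lwir, levels)
            # At each pyramid level keep the coefficient with the larger magnitude.
            fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pa, pb)]
            out = fused[-1]
            for lvl in reversed(fused[:-1]):          # collapse the pyramid
                out = cv2.pyrUp(out, dstsize=lvl.shape[1::-1]) + lvl
            return np.clip(out, 0, 255).astype(np.uint8)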

  5. Application of infrared camera to bituminous concrete pavements: measuring vehicle

    Science.gov (United States)

    Janků, Michal; Stryk, Josef

    2017-09-01

    Infrared thermography (IR) has been used for decades in certain fields, but the technological level of measuring devices has not been sufficient for some applications. In recent years, good quality thermal cameras with high resolution and very high thermal sensitivity have started to appear on the market. Developments in measuring technology have allowed infrared thermography to be used in new fields and by a larger number of users. This article describes research in progress at the Transport Research Centre focused on the use of infrared thermography for the diagnostics of bituminous road pavements. A measuring vehicle, equipped with a thermal camera, a digital camera and a GPS sensor, was designed for the diagnostics of pavements. New, highly sensitive thermal cameras make it possible to measure very small temperature differences from a moving vehicle. This study shows the potential of high-speed inspection without lane closures using IR thermography.

  6. The contribution to the modal analysis using an infrared camera

    Directory of Open Access Journals (Sweden)

    Dekys Vladimír

    2018-01-01

    The paper deals with modal analysis using an infrared camera. The test objects were excited by a modal exciter with narrowband noise and the response was recorded as a frame sequence by the high-speed infrared camera FLIR SC7500. The resonant frequencies and mode shapes were determined from the spectra of the infrared recordings. Lock-in technology was also used. The experimental results were compared with calculated natural frequencies and mode shapes.

  7. Hyperspectral Longwave Infrared Focal Plane Array and Camera Based on Quantum Well Infrared Photodetectors, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a hyperspectral focal plane array and camera imaging in a large number of sharp hyperspectral bands in the thermal infrared. The camera is...

  8. Seeing in a different light—using an infrared camera to teach heat transfer and optical phenomena

    Science.gov (United States)

    Pei Wong, Choun; Subramaniam, R.

    2018-05-01

    The infrared camera is a useful tool in physics education to ‘see’ in the infrared. In this paper, we describe four simple experiments that focus on phenomena related to heat transfer and optics that are encountered at undergraduate physics level using an infrared camera, and discuss the strengths and limitations of this tool for such purposes.

  9. Seeing in a Different Light--Using an Infrared Camera to Teach Heat Transfer and Optical Phenomena

    Science.gov (United States)

    Wong, Choun Pei; Subramaniam, R.

    2018-01-01

    The infrared camera is a useful tool in physics education to 'see' in the infrared. In this paper, we describe four simple experiments that focus on phenomena related to heat transfer and optics that are encountered at undergraduate physics level using an infrared camera, and discuss the strengths and limitations of this tool for such purposes.

  10. Ge Quantum Dot Infrared Imaging Camera, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Luna Innovations Incorporated proposes to develop a high performance Ge quantum dots-based infrared (IR) imaging camera on Si substrate. The high sensitivity, large...

  11. Portable Long-Wavelength Infrared Camera for Civilian Application

    Science.gov (United States)

    Gunapala, S. D.; Krabach, T. N.; Bandara, S. V.; Liu, J. K.

    1997-01-01

    In this paper, we discuss the performance of this portable long-wavelength infrared camera in terms of quantum efficiency, NEΔT, minimum resolvable temperature difference (MRTD), uniformity, etc., and its applications in science, medicine and defense.

  12. ARNICA, the Arcetri near-infrared camera: Astronomical performance assessment.

    Science.gov (United States)

    Hunt, L. K.; Lisi, F.; Testi, L.; Baffa, C.; Borelli, S.; Maiolino, R.; Moriondo, G.; Stanga, R. M.

    1996-01-01

    The Arcetri near-infrared camera ARNICA was built as a users' instrument for the Infrared Telescope at Gornergrat (TIRGO), and is based on a 256x256 NICMOS 3 detector. In this paper, we discuss ARNICA's optical and astronomical performance at the TIRGO and at the William Herschel Telescope on La Palma. Optical performance is evaluated in terms of plate scale, distortion, point spread function, and ghosting. Astronomical performance is characterized by camera efficiency, sensitivity, and spatial uniformity of the photometry.

  13. Spectrally-Tunable Infrared Camera Based on Highly-Sensitive Quantum Well Infrared Photodetectors, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a SPECTRALLY-TUNABLE INFRARED CAMERA based on quantum well infrared photodetector (QWIP) focal plane array (FPA) technology. This will build on...

  14. Handheld Longwave Infrared Camera Based on Highly-Sensitive Quantum Well Infrared Photodetectors, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a compact handheld longwave infrared camera based on quantum well infrared photodetector (QWIP) focal plane array (FPA) technology. Based on...

  15. Students' Framing of Laboratory Exercises Using Infrared Cameras

    Science.gov (United States)

    Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.

    2015-01-01

    Thermal science is challenging for students due to its largely imperceptible nature. Handheld infrared cameras offer a pedagogical opportunity for students to see otherwise invisible thermal phenomena. In the present study, a class of upper secondary technology students (N = 30) partook in four IR-camera laboratory activities, designed around the…

  16. The development of large-aperture test system of infrared camera and visible CCD camera

    Science.gov (United States)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Dual-band imaging systems combining an infrared camera and a visible CCD camera are widely used in many kinds of equipment and applications. If such a system is tested with a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame grabber and computer, which reduces the cost and the time spent on installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator focal position with changing environmental temperature, which also improves the image quality of the wide-field collimator and the test accuracy. Its performance matches that of comparable foreign systems at a much lower cost, so it is expected to find a good market.

  17. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.

    2011-01-01

    The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5 μm) or long-wave infrared (LWIR) radiation (8-12 μm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24 hour water and 12 hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.

  18. Long wavelength infrared camera (LWIRC): a 10 micron camera for the Keck Telescope

    Energy Technology Data Exchange (ETDEWEB)

    Wishnow, E.H.; Danchi, W.C.; Tuthill, P.; Wurtz, R.; Jernigan, J.G.; Arens, J.F.

    1998-05-01

    The Long Wavelength Infrared Camera (LWIRC) is a facility instrument for the Keck Observatory designed to operate at the f/25 forward Cassegrain focus of the Keck I telescope. The camera operates over the wavelength band 7-13 μm using ZnSe transmissive optics. A set of filters, a circular variable filter (CVF), and a mid-infrared polarizer are available, as are three plate scales: 0.05", 0.10", 0.21" per pixel. The camera focal plane array and optics are cooled using liquid helium. The system has been refurbished with a 128 x 128 pixel Si:As detector array. The electronics readout system used to clock the array is compatible with both the hardware and software of the other Keck infrared instruments NIRC and LWS. A new pre-amplifier/A-D converter has been designed and constructed which greatly decreases the system's susceptibility to noise.

  19. Low-cost uncooled VOx infrared camera development

    Science.gov (United States)

    Li, Chuan; Han, C. J.; Skidmore, George D.; Cook, Grady; Kubala, Kenny; Bates, Robert; Temple, Dorota; Lannon, John; Hilton, Allan; Glukh, Konstantin; Hardy, Busbee

    2013-06-01

    The DRS Tamarisk® 320 camera, introduced in 2011, is a low cost commercial camera based on the 17 µm pixel pitch 320×240 VOx microbolometer technology. A higher resolution 17 µm pixel pitch 640×480 Tamarisk®640 has also been developed and is now in production serving the commercial markets. Recently, under the DARPA sponsored Low Cost Thermal Imager-Manufacturing (LCTI-M) program and an internal project, DRS is leading a team of industrial experts from FiveFocal, RTI International and MEMSCAP to develop a small form factor uncooled infrared camera for the military and commercial markets. The objective of the DARPA LCTI-M program is to develop a low SWaP camera that costs less than US $500 based on a 10,000 units per month production rate. To meet this challenge, DRS is developing several innovative technologies including a small pixel pitch 640×512 VOx uncooled detector, an advanced digital ROIC and low power miniature camera electronics. In addition, DRS and its partners are developing innovative manufacturing processes to reduce production cycle time and costs, including wafer scale optics and vacuum packaging manufacturing and a 3-dimensional integrated camera assembly. This paper provides an overview of the DRS Tamarisk® project and LCTI-M related uncooled technology development activities. Highlights of recent progress and challenges will also be discussed. It should be noted that BAE Systems and Raytheon Vision Systems are also participants of the DARPA LCTI-M program.

  20. Low-cost far infrared bolometer camera for automotive use

    Science.gov (United States)

    Vieider, Christian; Wissmar, Stanley; Ericsson, Per; Halldin, Urban; Niklaus, Frank; Stemme, Göran; Källhammer, Jan-Erik; Pettersson, Håkan; Eriksson, Dick; Jakobsen, Henrik; Kvisterøy, Terje; Franks, John; VanNylen, Jan; Vercammen, Hans; VanHulsel, Annick

    2007-04-01

    A new low-cost long-wavelength infrared bolometer camera system is under development. It is designed for use with an automatic vision algorithm system as a sensor to detect vulnerable road users in traffic. Looking 15 m in front of the vehicle, it can, in the case of an unavoidable impact, activate a brake assist system or other deployable protection system. To achieve our cost target below €100 for the sensor system, we evaluated the required performance and were able to reduce the sensitivity to 150 mK and the pixel resolution to 80 x 30. We address all the main cost drivers, such as sensor size and production yield, along with vacuum packaging, optical components and large volume manufacturing technologies. The detector array is based on a new type of high performance thermistor material. Very thin Si/SiGe single crystal multi-layers are grown epitaxially. Due to the resulting valence barriers, a high temperature coefficient of resistance is achieved (3.3%/K). Simultaneously, the high quality crystalline material provides very low 1/f-noise characteristics and uniform material properties. The thermistor material is transferred from the original substrate wafer to the read-out circuit using adhesive wafer bonding and subsequent thinning. Bolometer arrays can then be fabricated using industry standard MEMS processes and materials. The inherently good detector performance allows us to reduce the vacuum requirement, and we can implement the wafer level vacuum packaging technology used in established automotive sensor fabrication. The optical design is reduced to a single lens camera. We develop a low cost molding process using a novel chalcogenide glass (GASIR®3) and integrate anti-reflective and anti-erosion properties using a diamond-like carbon coating.

  1. High spatial resolution infrared camera as ISS external experiment

    Science.gov (United States)

    Eckehard, Lorenz; Frerker, Hap; Fitch, Robert Alan

    The high spatial resolution infrared camera proposed as an ISS external experiment for monitoring global climate changes uses ISS internal and external resources (e.g. data storage). The optical experiment will consist of an infrared camera for monitoring global climate changes from the ISS. This technology was evaluated on the German small satellite mission BIRD and further developed in different ESA projects. Compared to BIRD, the presented instrument uses proven advanced sensor technologies (ISS external) and ISS on-board processing and storage capabilities (internal). The instrument will be equipped with a serial interface for TM/TC and several relay commands for the power supply. For data processing and storage a mass memory is required. Access to actual attitude data is highly desired to produce geo-referenced maps, if possible by on-board processing.

  2. Ensuring long-term stability of infrared camera absolute calibration.

    Science.gov (United States)

    Kattnig, Alain; Thetas, Sophie; Primot, Jérôme

    2015-07-13

    Absolute calibration of cryogenic 3-5 µm and 8-10 µm infrared cameras is notoriously unstable and thus has to be repeated before actual measurements. Moreover, the signal-to-noise ratio of the imagery is lowered, decreasing its quality. These performance degradations strongly lessen the suitability of infrared imaging. These defects are often blamed on detectors reaching a different "response state" after each return to cryogenic conditions, even after accounting for the detrimental effects of imperfect stray light management. We show here that detectors are not to blame and that the culprit can also dwell in the proximity electronics. We identify an unexpected source of instability in the initial voltage of the integrating capacitance of the detectors. We then show that this parameter can be easily measured and taken into account. In this way we demonstrate that a one-month-old calibration of a 3-5 µm camera has retained its validity.

  3. Infrared detectors and test technology of cryogenic camera

    Science.gov (United States)

    Yang, Xiaole; Liu, Xingxin; Xing, Mailing; Ling, Long

    2016-10-01

    Cryogenic cameras, which are widely used in deep space detection, cool down the optical system and support structure by cryogenic refrigeration technology, thereby improving the sensitivity. The characteristics and design points of the infrared detector are discussed in combination with the camera's characteristics. At the same time, cryogenic background test systems for the chip and for the detector assembly are established. The chip test system is based on a variable-temperature multilayer cryogenic Dewar, and the assembly test system is based on a target and background simulator in a thermal vacuum environment. The core of the test is to establish the cryogenic background. The non-uniformity, ratio of dead pixels and noise of the test results are given finally. The establishment of the test system supports the design and calculation of infrared systems.

  4. A fuzzy automated object classification by infrared laser camera

    Science.gov (United States)

    Kanazawa, Seigo; Taniguchi, Kazuhiko; Asari, Kazunari; Kuramoto, Kei; Kobashi, Syoji; Hata, Yutaka

    2011-06-01

    Home security at night is very important, and a system that watches a person's movements is useful for security. This paper describes a system for classifying adults, children and other objects from the distance distribution measured by an infrared laser camera. This camera radiates near infrared waves and receives the reflected ones. Then, it converts the time of flight into a distance distribution. Our method consists of 4 steps. First, we do background subtraction and noise rejection in the distance distribution. Second, we do fuzzy clustering in the distance distribution, and form several clusters. Third, we extract features such as the height, thickness, aspect ratio, and area ratio of each cluster. Then, we make fuzzy if-then rules from knowledge of adults, children and other objects so as to classify each cluster as adult, child or other object. Here, we define a fuzzy membership function with respect to each feature. Finally, we assign each cluster to the class with the highest fuzzy degree among adult, child and other object. In our experiment, we set up the camera in a room and tested three cases. The method successfully classified them in real-time processing.
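
    A minimal sketch of the fuzzy if-then classification step described above follows. The triangular membership functions and the height and aspect-ratio breakpoints are hypothetical placeholders, not the parameters used by the authors.

        # Toy fuzzy classification of a cluster into adult / child / other.
        # Membership shapes and breakpoints are assumed, not from the paper.
        def tri(x, a, b, c):
            """Triangular membership function rising from a, peaking at b, falling to c."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def classify(height_m, aspect_ratio):
            mu = {  # min() combines the per-feature degrees for each rule
                "adult": min(tri(height_m, 1.4, 1.7, 2.1), tri(aspect_ratio, 2.0, 3.5, 6.0)),
                "child": min(tri(height_m, 0.7, 1.1, 1.5), tri(aspect_ratio, 1.5, 3.0, 5.0)),
                "other": min(tri(height_m, 0.0, 0.5, 1.0), tri(aspect_ratio, 0.2, 1.0, 2.5)),
            }
            return max(mu, key=mu.get), mu            # class with highest fuzzy degree

        print(classify(1.65, 3.4))                    # -> ('adult', {...})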

  5. Lock-in thermography using a cellphone attachment infrared camera

    Science.gov (United States)

    Razani, Marjan; Parkhimchyk, Artur; Tabatabaei, Nima

    2018-03-01

    Lock-in thermography (LIT) is a thermal-wave-based, non-destructive testing technique which has been widely utilized in research settings for characterization and evaluation of biological and industrial materials. However, despite promising research outcomes, the widespread adaptation of LIT in industry, and its commercialization, is hindered by the high cost of the infrared cameras used in the LIT setups. In this paper, we report on the feasibility of using inexpensive cellphone attachment infrared cameras for performing LIT. While the cost of such cameras is over two orders of magnitude less than their research-grade counterparts, our experimental results on a block sample with subsurface defects and a tooth with early dental caries suggest that acceptable performance can be achieved through careful instrumentation and implementation of proper data acquisition and image processing steps. We anticipate this study to pave the way for the development of low-cost thermography systems and their commercialization as inexpensive tools for non-destructive testing of industrial samples as well as affordable clinical devices for diagnostic imaging of biological tissues.

  6. Lock-in thermography using a cellphone attachment infrared camera

    Directory of Open Access Journals (Sweden)

    Marjan Razani

    2018-03-01

    Lock-in thermography (LIT) is a thermal-wave-based, non-destructive testing technique which has been widely utilized in research settings for characterization and evaluation of biological and industrial materials. However, despite promising research outcomes, the widespread adaptation of LIT in industry, and its commercialization, is hindered by the high cost of the infrared cameras used in the LIT setups. In this paper, we report on the feasibility of using inexpensive cellphone attachment infrared cameras for performing LIT. While the cost of such cameras is over two orders of magnitude less than their research-grade counterparts, our experimental results on a block sample with subsurface defects and a tooth with early dental caries suggest that acceptable performance can be achieved through careful instrumentation and implementation of proper data acquisition and image processing steps. We anticipate this study to pave the way for the development of low-cost thermography systems and their commercialization as inexpensive tools for non-destructive testing of industrial samples as well as affordable clinical devices for diagnostic imaging of biological tissues.
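
    The lock-in demodulation at the heart of LIT can be sketched as follows: the recorded frame sequence is correlated with sine and cosine references at the modulation frequency to obtain amplitude and phase images. The frame rate, modulation frequency and array size in this Python sketch are illustrative assumptions, not the settings used in the paper.

        # Minimal lock-in demodulation over a thermal frame sequence.
        import numpy as np

        def lock_in(frames, frame_rate_hz, f_mod_hz):
            """frames: array of shape (n_frames, height, width)."""
            n = frames.shape[0]
            t = np.arange(n) / frame_rate_hz
            ref_cos = np.cos(2 * np.pi * f_mod_hz * t)[:, None, None]
            ref_sin = np.sin(2 * np.pi * f_mod_hz * t)[:, None, None]
            i_img = 2.0 * np.mean(frames * ref_cos, axis=0)   # in-phase image
            q_img = 2.0 * np.mean(frames * ref_sin, axis=0)   # quadrature image
            return np.hypot(i_img, q_img), np.arctan2(q_img, i_img)  # amplitude, phase

        # Example with synthetic data: assumed 8.7 Hz frame rate, 0.5 Hz modulation.
        frames = np.random.rand(87, 120, 160).astype(np.float32)
        amp, phase = lock_in(frames, frame_rate_hz=8.7, f_mod_hz=0.5)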

  7. Infrared Camera Diagnostic for Heat Flux Measurements on NSTX

    International Nuclear Information System (INIS)

    D. Mastrovito; R. Maingi; H.W. Kugel; A.L. Roquemore

    2003-01-01

    An infrared imaging system has been installed on NSTX (National Spherical Torus Experiment) at the Princeton Plasma Physics Laboratory to measure the surface temperatures on the lower divertor and center stack. The imaging system is based on an Indigo Alpha 160 x 128 microbolometer camera with 12 bits/pixel operating in the 7-13 μm range with a 30 Hz frame rate and a dynamic temperature range of 0-700 degrees C. From these data and knowledge of graphite thermal properties, the heat flux is derived with a classic one-dimensional conduction model. Preliminary results of heat flux scaling are reported.
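
    A generic sketch of deriving heat flux from a surface-temperature history with a one-dimensional semi-infinite conduction model is given below (a Cook-Felderman-type discretization). This is not the NSTX analysis code, and the graphite property values are rough assumed numbers.

        # Heat flux from surface temperature via 1-D semi-infinite conduction
        # (Cook-Felderman form). k, rho, c are assumed graphite-like values.
        import numpy as np

        def heat_flux(t, T_surf, k=100.0, rho=1850.0, c=710.0):
            """t [s], T_surf [deg C or K]: 1-D arrays; returns q [W/m^2]."""
            coeff = 2.0 * np.sqrt(k * rho * c / np.pi)
            q = np.zeros_like(T_surf, dtype=float)
            for n in range(1, len(t)):
                dT = np.diff(T_surf[: n + 1])
                denom = np.sqrt(t[n] - t[:n]) + np.sqrt(t[n] - t[1 : n + 1])
                q[n] = coeff * np.sum(dT / denom)
            return q

        # Example: 30 Hz camera, 1 s of data with a linear 50 K/s surface rise.
        t = np.arange(0, 1.0, 1.0 / 30.0)
        q = heat_flux(t, 20.0 + 50.0 * t)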

  8. James Webb Telescope's Near Infrared Camera: Making Models, Building Understanding

    Science.gov (United States)

    Lebofsky, Larry A.; McCarthy, D. W.; Higgins, M. L.; Lebofsky, N. R.

    2010-10-01

    The Astronomy Camp for Girl Scout Leaders is a science education program sponsored by NASA's next large space telescope: the James Webb Space Telescope (JWST). The E/PO team for JWST's Near Infrared Camera (NIRCam), in collaboration with the Sahuaro Girl Scout Council, has developed a long-term relationship with adult leaders from all GSUSA Councils that directly benefits troops of all ages, not only in general science education but also specifically in the astronomical and technology concepts relating to JWST. We have been training and equipping these leaders so they can in turn teach young women essential concepts in astronomy, i.e., the night sky environment. We model what astronomers do by engaging trainers in the process of scientific inquiry, and we equip them to host troop-level astronomy-related activities. It is GSUSA's goal to foster girls' interest and creativity in Science, Technology, Engineering, and Math, creating an environment that encourages their interests early in their lives while creating a safe place for girls to try and fail, and then try again and succeed. To date, we have trained over 158 leaders in 13 camps. These leaders have come from 24 states, DC, Guam, and Japan. While many of the camp activities are related to the "First Light" theme, many of the background activities relate to two of the other JWST and NIRCam themes: "Birth of Stars and Protoplanetary Systems" and "Planetary Systems and the Origin of Life." The latter includes our own Solar System. Our poster will highlight the Planetary Systems theme: 1. Earth and Moon: Day and Night; Rotation and Revolution. 2. Earth/Moon Comparisons. 3. Size Model: The Diameters of the Planets. 4. Macramé Planetary (Solar) Distance Model. 5. What is a Planet? 6. Planet Sorting Cards. 7. Human Orrery. 8. Lookback Time in Our Daily Lives. NIRCam E/PO website: http://zeus.as.arizona.edu/ dmccarthy/GSUSA

  9. Strategic options towards an affordable high-performance infrared camera

    Science.gov (United States)

    Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.

    2016-05-01

    The promise of infrared (IR) imaging attaining low cost akin to the success of CMOS sensors has been hampered by the inability to achieve the cost advantages necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm, despite well documented advantages. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640x512 pixel InGaAs uncooled system with high sensitivity, low noise, high frame rate (500 frames per second (FPS)) at full resolution, and low power consumption. It targets market adoption by not only demonstrating the high-performance IR imaging capability and value add demanded by military and industrial applications, but also illuminating a path towards the justifiable price points essential for consumer-facing application industries such as automotive, medical, and security imaging. Among the strategic options presented are new sensor manufacturing technologies that scale favorably towards automation, multi-focal plane array compatible readout electronics, and dense or ultra-small pixel pitch devices.

  10. NIRAC: Near Infrared Airglow Camera for the International Space Station

    Science.gov (United States)

    Gelinas, L. J.; Rudy, R. J.; Hecht, J. H.

    2017-12-01

    NIRAC is a space based infrared airglow imager that will be deployed to the International Space Station in late 2018, under the auspices of the Space Test Program. NIRAC will survey OH airglow emissions in the 1.6 micron wavelength regime, exploring the spatial and temporal variability of emission intensities at latitudes from 51° south to 51° north. Atmospheric perturbations in the 80-100 km altitude range, including those produced by atmospheric gravity waves (AGWs), are observable in the OH airglow. The objective of the NIRAC experiment is to make near global measurement of the OH airglow and airglow perturbations. These emissions also provide a bright source of illumination at night, allowing for nighttime detection of clouds and surface characteristics. The instrument, developed by the Aerospace Space Science Applications Laboratory, employs a space-compatible FPGA for camera control and data collection and a novel, custom optical system to eliminate image smear due to orbital motion. NIRAC utilizes a high-performance, large format infrared focal plane array, transitioning technology used in the existing Aerospace Corporation ground-based airglow imager to a space based platform. The high-sensitivity, four megapixel imager has a native spatial resolution of 100 meters at ISS altitudes. The 23° x 23° FOV sweeps out a 150 km swath of the OH airglow layer as viewed from the ISS, and is sensitive to OH intensity perturbations down to 0.1%. The detector has a 1.7 micron cutoff that precludes the need for cold optics and reduces cooling requirements (to 180 K). Detector cooling is provided by a compact, lightweight cryocooler capable of reaching 120K, providing a great deal of margin.

  11. Cosmic Infrared Background Fluctuations in Deep Spitzer Infrared Array Camera Images: Data Processing and Analysis

    Science.gov (United States)

    Arendt, Richard; Kashlinsky, A.; Moseley, S.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuation of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (≳30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ∼1-5 μm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (≳1 nW m^-2 sr^-1 at 3-5 μm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies. Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs
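
    The parameterization mentioned above can be illustrated with a short fit. In this sketch the noise template, the synthetic "measured" spectrum and all numerical values are assumptions; only the decomposition into an instrument-noise term, a white shot-noise level and a power-law clustering component follows the description in the abstract.

        # Decompose a fluctuation power spectrum P(q) into noise + shot + power law.
        import numpy as np
        from scipy.optimize import curve_fit

        q = np.logspace(-3, -1, 40)           # angular frequency bins (arcsec^-1), assumed
        p_noise = 0.2 * np.ones_like(q)       # assumed instrument-noise template

        def model(q, p_shot, amp, index):
            return p_shot + amp * q ** (-index)   # white shot noise + power law

        # Synthetic "measured" spectrum; subtract the noise template, then fit.
        p_meas = p_noise + model(q, 1.0, 5e-4, 1.5) * (1 + 0.05 * np.random.randn(q.size))
        popt, _ = curve_fit(model, q, p_meas - p_noise, p0=[1.0, 1e-3, 1.0])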

  12. COSMIC INFRARED BACKGROUND FLUCTUATIONS IN DEEP SPITZER INFRARED ARRAY CAMERA IMAGES: DATA PROCESSING AND ANALYSIS

    International Nuclear Information System (INIS)

    Arendt, Richard G.; Kashlinsky, A.; Moseley, S. H.; Mather, J.

    2010-01-01

    This paper provides a detailed description of the data reduction and analysis procedures that have been employed in our previous studies of spatial fluctuation of the cosmic infrared background (CIB) using deep Spitzer Infrared Array Camera observations. The self-calibration we apply removes a strong instrumental signal from the fluctuations that would otherwise corrupt the results. The procedures and results for masking bright sources and modeling faint sources down to levels set by the instrumental noise are presented. Various tests are performed to demonstrate that the resulting power spectra of these fields are not dominated by instrumental or procedural effects. These tests indicate that the large-scale (≳30') fluctuations that remain in the deepest fields are not directly related to the galaxies that are bright enough to be individually detected. We provide the parameterization of these power spectra in terms of separate instrument noise, shot noise, and power-law components. We discuss the relationship between fluctuations measured at different wavelengths and depths, and the relations between constraints on the mean intensity of the CIB and its fluctuation spectrum. Consistent with growing evidence that the ∼1-5 μm mean intensity of the CIB may not be as far above the integrated emission of resolved galaxies as has been reported in some analyses of DIRBE and IRTS observations, our measurements of spatial fluctuations of the CIB intensity indicate the mean emission from the objects producing the fluctuations is quite low (≳1 nW m^-2 sr^-1 at 3-5 μm), and thus consistent with current γ-ray absorption constraints. The source of the fluctuations may be high-z Population III objects, or a more local component of very low luminosity objects with clustering properties that differ from the resolved galaxies. Finally, we discuss the prospects of the upcoming space-based surveys to directly measure the epochs inhabited by the populations producing these

  13. [Evaluation of Iris Morphology Viewed through Stromal Edematous Corneas by Infrared Camera].

    Science.gov (United States)

    Kobayashi, Masaaki; Morishige, Naoyuki; Morita, Yukiko; Yamada, Naoyuki; Kobayashi, Motomi; Sonoda, Koh-Hei

    2016-02-01

    We previously reported that an infrared camera makes it possible to observe iris morphology through edematous corneas in Peters' anomaly. The purpose of this study was to observe the iris morphology in bullous keratopathy or failed grafts with an infrared camera. Eleven subjects with bullous keratopathy or failed grafts (6 men and 5 women, mean age ± SD 72.7 ± 13.0 years) were enrolled in this study. The iris morphology was observed using the visible light mode and the near infrared mode of an infrared camera (MeibomPen). The detectability of pupil shape, iris pattern and presence of iridectomy was evaluated. Infrared mode observation enabled us to detect the pupil shape in 11 of 11 cases, the iris pattern in 3 of 11 cases, and the presence of iridectomy in 9 of 11 cases, whereas visible light mode observation could not detect any iris morphological features. Applying infrared optics was valuable for observation of the iris morphology through stromal edematous corneas.

  14. Near infrared focal plane for the ISOCAM camera

    International Nuclear Information System (INIS)

    Epstein, G.; Stefanovitch, D.; Tiphene, D.; Carpentier, Y.; Lorans, D.

    1988-01-01

    ISOCAM is one of the science instruments in the Infrared Space Observatory. It is a 2-channel IR Astronomical Imager intended to observe at very low flux levels, thanks to the use of a liquid helium cooled telescope. This paper describes the Focal Plane Assembly design of the short wavelength channel. The operation of a 32 x 32 InSb CID-SAT array detector has been demonstrated. The problems encountered in the design of the cooled electronics and the component selection process are discussed in the light of specific ISO constraints, such as thermal control and radiation shielding. 6 references

  15. High Quantum Efficiency 1024x1024 Longwave Infrared SLS FPA and Camera, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose a high quantum efficiency (QE) 1024x1024 longwave infrared focal plane array (LWIR FPA) and CAMERA with ~ 12 micron cutoff wavelength made from...

  16. The infrared camera system on the HL-2A tokamak device

    International Nuclear Information System (INIS)

    Li Wei; Lu Jie; Yi Ping

    2009-04-01

    In order to measure and analyze the heat flux on the divertor plate under different discharge conditions, an infrared camera diagnostic system for the HL-2A device has been developed. The system mainly includes a thermograph with an uncooled microbolometer focal plane array detector, a zinc selenide window, FireWire fiber repeaters, 50 m long fibers, a magnetic shielding box and a data acquisition card. The diagnostic system provides high spatial resolution, long distance control and real-time data acquisition. Based on the surface temperature measured by the infrared camera diagnostic system and knowledge of the copper thermal properties, the heat flux can be derived with a heat conduction model. The infrared camera diagnostic system and preliminary results are presented in detail. (authors)

  17. Use of COTS Infrared Camera Arrays for Enhanced Human in the Loop Data Collection

    Data.gov (United States)

    National Aeronautics and Space Administration — Demonstrate the use of one or more Microsoft Kinect infrared cameras for the application of passive data collection during human in the loop (HITL) tests. By using...

  18. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellut, Paolo; Sherwin, Gary

    2011-01-01

    TIR cameras can be used for day/night Unmanned Ground Vehicle (UGV) autonomous navigation when stealth is required. The quality of uncooled TIR cameras has significantly improved over the last decade, making them a viable option at low speed. Limiting factors for stereo ranging with uncooled LWIR cameras are image blur and low-texture scenes. TIR perception capabilities JPL has explored include: (1) single- and dual-band TIR terrain classification, (2) obstacle detection (pedestrian, vehicle, tree trunks, ditches, and water), and (3) perception through obscurants.

  19. Design, Manufacturing, and Commissioning of BIRCAM (Bootes InfraRed CAMera)

    Directory of Open Access Journals (Sweden)

    Alberto Riva

    2010-01-01

    This paper covers the various aspects of the design, manufacturing and commissioning of the infrared camera BIRCAM, installed at BOOTES-IR, the 60 cm robotic infrared telescope at the Sierra Nevada Observatory (OSN), Granada, Spain. We describe how we achieved a quality astronomical image, starting from the scientific requirements.

  20. SCC500: next-generation infrared imaging camera core products with highly flexible architecture for unique camera designs

    Science.gov (United States)

    Rumbaugh, Roy N.; Grealish, Kevin; Kacir, Tom; Arsenault, Barry; Murphy, Robert H.; Miller, Scott

    2003-09-01

    A new 4th generation MicroIR architecture is introduced as the latest in the highly successful Standard Camera Core (SCC) series by BAE SYSTEMS to offer an infrared imaging engine with greatly reduced size, weight, power, and cost. The advanced SCC500 architecture provides great flexibility in configuration to include multiple resolutions, an industry standard Real Time Operating System (RTOS) for customer specific software application plug-ins, and a highly modular construction for unique physical and interface options. These microbolometer based camera cores offer outstanding and reliable performance over an extended operating temperature range to meet the demanding requirements of real-world environments. A highly integrated lens and shutter is included in the new SCC500 product enabling easy, drop-in camera designs for quick time-to-market product introductions.

  1. Variation in detection among passive infrared triggered-cameras used in wildlife research

    Science.gov (United States)

    Damm, Philip E.; Grand, James B.; Barnett, Steven W.

    2010-01-01

    Precise and accurate estimates of demographics such as age structure, productivity, and density are necessary in determining habitat and harvest management strategies for wildlife populations. Surveys using automated cameras are becoming an increasingly popular tool for estimating these parameters. However, most camera studies fail to incorporate detection probabilities, leading to parameter underestimation. The objective of this study was to determine the sources of heterogeneity in detection for trail cameras that incorporate a passive infrared (PIR) triggering system sensitive to heat and motion. Images were collected at four baited sites within the Conecuh National Forest, Alabama, using three cameras at each site operating continuously over the same seven-day period. Detection was estimated for four groups of animals based on taxonomic group and body size. Our hypotheses of detection considered variation among bait sites and cameras. The best model (w=0.99) estimated different rates of detection for each camera in addition to different detection rates for four animal groupings. Factors that explain this variability might include poor manufacturing tolerances, variation in PIR sensitivity, animal behavior, and species-specific infrared radiation. Population surveys using trail cameras with PIR systems must incorporate detection rates for individual cameras. Incorporating time-lapse triggering systems into survey designs should eliminate issues associated with PIR systems.
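
    As a toy illustration of camera-specific detection rates for co-located cameras viewing the same baited site, consider the sketch below. The counts are made up, and the naive estimator (detections divided by events recorded by any camera at the site) is only illustrative; the study itself fitted formal detection-probability models.

        # Naive per-camera detection rates from co-located trail cameras.
        import numpy as np

        # rows: visitation events detected by at least one camera
        # columns: cameras A, B, C (1 = image captured); made-up data
        events = np.array([
            [1, 0, 1],
            [1, 1, 0],
            [0, 1, 1],
            [1, 0, 0],
            [1, 1, 1],
        ])

        p_hat = events.sum(axis=0) / events.shape[0]
        for cam, p in zip("ABC", p_hat):
            print(f"camera {cam}: detection rate {p:.2f}")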

  2. Nondestructive PCBA fault test by using infrared camera

    International Nuclear Information System (INIS)

    Joo, Hoon; Cho, Young Shin; Kim, Beak Sop; Song, Seong Ho; Kim, Ho Sung

    2004-01-01

    A global inspection system was developed to evaluate the operating status of components and functional errors, as well as the overall function of a board, by examining the infrared emission of its components. Its basic principle is to analyze the measured power of the infrared rays each part emits at each instant. After applying power and setting the initial conditioning signals to the board under test, the system determines the normality of the parts and whether the board is working well. The normality is determined by comparing the change of infrared energy with the reference patterns of known good boards. The results of experimental tests with several boards show that the system is an accurate and reliable test solution and a very useful method for testing PCBAs nondestructively.

  3. Infrared Imaging Camera Final Report CRADA No. TC02061.0

    Energy Technology Data Exchange (ETDEWEB)

    Roos, E. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nebeker, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-08

    This was a collaborative effort between the University of California, Lawrence Livermore National Laboratory (LLNL) and Cordin Company (Cordin) to enhance the U.S. ability to develop a commercial infrared camera capable of capturing high-resolution images in a 100 nanosecond (ns) time frame. The Department of Energy (DOE), under an Initiative for Proliferation Prevention (IPP) project, funded the Russian Federation Nuclear Center All-Russian Scientific Institute of Experimental Physics (RFNC-VNIIEF) in Sarov. VNIIEF was funded to develop a prototype commercial infrared (IR) framing camera and to deliver a prototype IR camera to LLNL. LLNL and Cordin were partners with VNIIEF on this project. A prototype IR camera was delivered by VNIIEF to LLNL in December 2006. In June of 2007, LLNL and Cordin evaluated the camera, and the test results revealed that the camera exceeded the performance of presently available commercial IR cameras. Cordin believes that the camera can be sold on the international market. The camera is currently being used as a scientific tool within Russian nuclear centers. This project was originally designated as a two year project. The project was not started on time due to changes in the IPP project funding conditions; the project funding was re-directed through the International Science and Technology Center (ISTC), which delayed the project start by over one year. The project was not completed on schedule due to changes within the Russian government export regulations. These changes were directed by Export Control regulations on the export of high technology items that can be used to develop military weapons. The IR camera was on the list of items subject to export controls. The ISTC and the Russian government, after negotiations, allowed the delivery of the camera to LLNL. There were no significant technical or business changes to the original project.

  4. SIBI: A compact hyperspectral camera in the mid-infrared

    Science.gov (United States)

    Pola Fossi, Armande; Ferrec, Yann; Domel, Roland; Coudrain, Christophe; Guerineau, Nicolas; Roux, Nicolas; D'Almeida, Oscar; Bousquet, Marc; Kling, Emmanuel; Sauer, Hervé

    2015-10-01

    Recent developments in unmanned aerial vehicles have increased the demand for more and more compact optical systems. In order to bring solutions to this demand, several infrared systems are being developed at ONERA such as spectrometers, imaging devices, multispectral and hyperspectral imaging systems. In the field of compact infrared hyperspectral imaging devices, ONERA and Sagem Défense et Sécurité have collaborated to develop a prototype called SIBI, which stands for "Spectro-Imageur Birefringent Infrarouge". It is a static Fourier transform imaging spectrometer which operates in the mid-wavelength infrared spectral range and uses a birefringent lateral shearing interferometer. Up to now, birefringent interferometers have not been often used for hyperspectral imaging in the mid-infrared because of the lack of crystal manufacturers, contrary to the visible spectral domain where the production of uniaxial crystals like calcite are mastered for various optical applications. In the following, we will present the design and the realization of SIBI as well as the first experimental results.
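
    The basic spectrum-recovery step behind a static Fourier transform spectrometer such as SIBI can be sketched as follows: the spectrum is obtained from the interferogram by a Fourier transform over optical path difference (OPD). The OPD range, sampling and the synthetic two-line mid-infrared spectrum below are assumptions chosen only to make the example self-contained.

        # Recover a spectrum from a (synthetic) static-FTS interferogram.
        import numpy as np

        opd = np.linspace(-0.05, 0.05, 1024)     # OPD samples in cm (assumed range)
        sigma1, sigma2 = 2500.0, 2800.0          # wavenumbers in cm^-1 (~4.0 and ~3.6 um)
        interferogram = (1 + np.cos(2 * np.pi * sigma1 * opd)
                         + 0.5 * (1 + np.cos(2 * np.pi * sigma2 * opd)))

        spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
        sigma_axis = np.fft.rfftfreq(opd.size, d=opd[1] - opd[0])   # cm^-1
        print(sigma_axis[spectrum.argmax()])     # close to 2500 cm^-1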

  5. Quantitative determination of surface temperatures using an infrared camera

    International Nuclear Information System (INIS)

    Hsieh, C.K.; Ellingson, W.A.

    1977-01-01

    A method is presented to determine the surface-temperature distribution at each point in an infrared picture. To handle the surface reflection problem, three cases are considered that include the use of black coatings, radiation shields, and band-pass filters. For uniform irradiation on the test surface, the irradiation can be measured by using a cooled, convex mirror. Equations are derived to show that this surrounding irradiation effect can be subtracted out from the scanned radiation; thus the net radiation is related to only emission from the surface. To provide for temperature measurements over a large field, the image-processing technique is used to digitize the infrared data. The paper spells out procedures that involve the use of a computer for making point-by-point temperature calculations. Finally, a sample case is given to illustrate applications of the method. 6 figures, 1 table
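
    The irradiation-subtraction idea described above can be illustrated with a simple graybody calculation. Using total (Stefan-Boltzmann) radiation is a simplification of the band-limited case treated in the paper, so the numbers and the formula below are only an assumed illustration of the principle.

        # Surface temperature after subtracting reflected surrounding irradiation.
        SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

        def surface_temperature(w_scanned, w_irradiation, emissivity):
            """w_scanned: radiation seen by the camera; w_irradiation: surrounding
            irradiation measured with the cooled convex mirror (both W/m^2)."""
            w_emitted = w_scanned - (1.0 - emissivity) * w_irradiation
            return (w_emitted / (emissivity * SIGMA)) ** 0.25   # kelvin

        # Example: graybody near 350 K, emissivity 0.9, 450 W/m^2 irradiation.
        w = 0.9 * SIGMA * 350.0 ** 4 + 0.1 * 450.0
        print(surface_temperature(w, 450.0, 0.9))                # ~350 K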

  6. University Physics Students' Ideas of Thermal Radiation Expressed in Open Laboratory Activities Using Infrared Cameras

    Science.gov (United States)

    Haglund, Jesper; Melander, Emil; Weiszflog, Matthias; Andersson, Staffan

    2017-01-01

    Background: University physics students were engaged in open-ended thermodynamics laboratory activities with a focus on understanding a chosen phenomenon or the principle of laboratory apparatus, such as thermal radiation and a heat pump. Students had access to handheld infrared (IR) cameras for their investigations. Purpose: The purpose of the…

  7. Estimating the Infrared Radiation Wavelength Emitted by a Remote Control Device Using a Digital Camera

    Science.gov (United States)

    Catelli, Francisco; Giovannini, Odilon; Bolzan, Vicente Dall Agnol

    2011-01-01

    The interference fringes produced by a diffraction grating illuminated with radiation from a TV remote control and a red laser beam are, simultaneously, captured by a digital camera. Based on an image with two interference patterns, an estimate of the infrared radiation wavelength emitted by a TV remote control is made. (Contains 4 figures.)
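
    A worked sketch of the estimate described above: with both sources diffracted by the same grating and photographed together, the grating equation d*sin(theta) = m*lambda lets the unknown infrared wavelength be scaled from the known laser wavelength, so the grating period cancels out. The distances and the 650 nm laser wavelength below are made-up example numbers.

        # Scale the unknown IR wavelength from the known red-laser wavelength.
        import math

        lambda_red = 650e-9            # known laser wavelength (m), assumed
        L = 0.50                       # grating-to-screen distance (m), assumed
        x_red, x_ir = 0.066, 0.096     # first-order fringe offsets (m), assumed

        theta_red = math.atan2(x_red, L)
        theta_ir = math.atan2(x_ir, L)
        lambda_ir = lambda_red * math.sin(theta_ir) / math.sin(theta_red)
        print(f"estimated IR wavelength: {lambda_ir * 1e9:.0f} nm")   # ~940 nm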

  8. Observation of runaway electrons by infrared camera in J-TEXT

    Energy Technology Data Exchange (ETDEWEB)

    Tong, R. H.; Chen, Z. Y., E-mail: zychen@hust.edu.cn; Zhang, M.; Huang, D. W.; Yan, W.; Zhuang, G. [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, School of Electrical and Electronic Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China)

    2016-11-15

    When the energy of confined runaway electrons approaches several tens of MeV, the runaway electrons can emit synchrotron radiation at infrared wavelengths. An infrared camera working in the wavelength range of 3-5 μm has been developed to study the runaway electrons in the Joint Texas Experimental Tokamak (J-TEXT). The camera is located in the equatorial plane, looking tangentially into the direction of electron approach. The runaway electron beam inside the plasma has been observed at the flattop phase. With the fast acquisition of the camera, the behavior of the runaway electron beam has been observed directly during the runaway current plateau following massive gas injection triggered disruptions.

  9. Periscope-camera system for visible and infrared imaging diagnostics on TFTR

    International Nuclear Information System (INIS)

    Medley, S.S.; Dimock, D.L.; Hayes, S.; Long, D.; Lowrence, J.L.; Mastrocola, V.; Renda, G.; Ulrickson, M.; Young, K.M.

    1985-05-01

    An optical diagnostic consisting of a periscope which relays images of the torus interior to an array of cameras is used on the Tokamak Fusion Test Reactor (TFTR) to view plasma discharge phenomena and inspect vacuum vessel internal structures in both visible and near-infrared wavelength regions. Three periscopes view through 20-cm-diameter fused-silica windows which are spaced around the torus midplane to provide a viewing coverage of approximately 75% of the vacuum vessel internal surface area. The periscopes have f/8 optics and motor-driven controls for focusing, magnification selection (5°, 20°, and 60° field of view), elevation and azimuth setting, mast rotation, filter selection, iris aperture, and viewing port selection. The four viewing ports on each periscope are equipped with multiple imaging devices which include: (1) an inspection eyepiece, (2) standard (RCA TC2900) and fast (RETICON) framing rate television cameras, (3) a PtSi CCD infrared imaging camera, (4) a 35 mm Nikon F3 still camera, or (5) a 16 mm Locam II movie camera with variable framing up to 500 fps. Operation of the periscope-camera system is controlled either locally or remotely through a computer-CAMAC interface. A description of the equipment and examples of its application are presented.

  10. Periscope-camera system for visible and infrared imaging diagnostics on TFTR

    International Nuclear Information System (INIS)

    Medley, S.S.; Dimock, D.L.; Hayes, S.; Long, D.; Lowrance, J.L.; Mastrocola, V.; Renda, G.; Ulrickson, M.; Young, K.M.

    1985-01-01

    An optical diagnostic consisting of a periscope which relays images of the torus interior to an array of cameras is used on the Tokamak Fusion Test Reactor (TFTR) to view plasma discharge phenomena and inspect the vacuum vessel internal structures in both the visible and near-infrared wavelength regions. Three periscopes view through 20-cm-diam fused-silica windows which are spaced around the torus midplane to provide a viewing coverage of approximately 75% of the vacuum vessel internal surface area. The periscopes have f/8 optics and motor-driven controls for focusing, magnification selection (5°, 20°, and 60° field of view), elevation and azimuth setting, mast rotation, filter selection, iris aperture, and viewing port selection. The four viewing ports on each periscope are equipped with multiple imaging devices which include: (1) an inspection eyepiece, (2) standard (RCA TC2900) and fast (RETICON) framing rate television cameras, (3) a PtSi CCD infrared imaging camera, (4) a 35-mm Nikon F3 still camera, or (5) a 16-mm Locam II movie camera with variable framing rate up to 500 fps. Operation of the periscope-camera system is controlled either locally or remotely through a computer-CAMAC interface. A description of the equipment and examples of its application are presented.

  11. Camac interface for digitally recording infrared camera images

    International Nuclear Information System (INIS)

    Dyer, G.R.

    1986-01-01

    An instrument has been built to store the digital signals from a modified imaging infrared scanner directly in a digital memory. This procedure avoids the signal-to-noise degradation and dynamic range limitations associated with successive analog-to-digital and digital-to-analog conversions and the analog recording method normally used to store data from the scanner. This technique also allows digital data processing methods to be applied directly to recorded data and permits processing and image reconstruction to be done using either a mainframe or a microcomputer. If a suitable computer and CAMAC-based data collection system are already available, digital storage of up to 12 scanner images can be implemented for less than $1750 in materials cost. Each image is stored as a frame of 60 x 80 eight-bit pixels, with an acquisition rate of one frame every 16.7 ms. The number of frames stored is limited only by the available memory. Initially, data processing for this equipment was done on a VAX-11/780, but images may also be displayed on the screen of a microcomputer. Software for setting the displayed gray scale, generating contour plots and false-color displays, and subtracting one image from another (e.g., background suppression) has been developed for IBM-compatible personal computers.
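
    The record above describes per-frame background subtraction and false-color display of 60 x 80, 8-bit frames. As an illustration only (not the original VAX/PC software), a minimal sketch of those two processing steps might look as follows; the raw file name and frame layout are assumptions.

```python
# Minimal sketch: background subtraction and false-color display of 60x80,
# 8-bit scanner frames, assuming frames are stored as raw bytes in row-major
# order in a hypothetical file "frames.dat".
import numpy as np
import matplotlib.pyplot as plt

ROWS, COLS = 60, 80  # frame geometry quoted in the abstract

def load_frames(path):
    """Read all complete 60x80 8-bit frames from a raw binary file."""
    raw = np.fromfile(path, dtype=np.uint8)
    n_frames = raw.size // (ROWS * COLS)
    return raw[: n_frames * ROWS * COLS].reshape(n_frames, ROWS, COLS)

frames = load_frames("frames.dat")                    # hypothetical file name
background = frames[0].astype(np.int16)               # e.g. first frame as background
corrected = frames[1].astype(np.int16) - background   # background suppression

plt.imshow(corrected, cmap="inferno")                 # false-color display
plt.colorbar(label="counts (background-subtracted)")
plt.show()
```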

  12. TIFR Near Infrared Imaging Camera-II on the 3.6 m Devasthal Optical Telescope

    Science.gov (United States)

    Baug, T.; Ojha, D. K.; Ghosh, S. K.; Sharma, S.; Pandey, A. K.; Kumar, Brijesh; Ghosh, Arpan; Ninan, J. P.; Naik, M. B.; D’Costa, S. L. A.; Poojary, S. S.; Sandimani, P. R.; Shah, H.; Krishna Reddy, B.; Pandey, S. B.; Chand, H.

    Tata Institute of Fundamental Research (TIFR) Near Infrared Imaging Camera-II (TIRCAM2) is a closed-cycle Helium cryo-cooled imaging camera equipped with a Raytheon 512×512 pixels InSb Aladdin III Quadrant focal plane array (FPA) having sensitivity to photons in the 1-5 μm wavelength band. In this paper, we present the performance of the camera on the newly installed 3.6 m Devasthal Optical Telescope (DOT) based on the calibration observations carried out during 2017 May 11-14 and 2017 October 7-31. After the preliminary characterization, the camera has been released to the Indian and Belgian astronomical community for science observations since 2017 May. The camera offers a field-of-view (FoV) of ~86.5″ × 86.5″ on the DOT with a pixel scale of 0.169″. The seeing at the telescope site in the near-infrared (NIR) bands is typically sub-arcsecond with the best seeing of ~0.45″ realized in the NIR K-band on 2017 October 16. The camera is found to be capable of deep observations in the J, H and K bands comparable to other 4 m class telescopes available world-wide. Another highlight of this camera is the observational capability for sources up to Wide-field Infrared Survey Explorer (WISE) W1-band (3.4 μm) magnitudes of 9.2 in the narrow L-band (nbL; λcen ~ 3.59 μm). Hence, the camera could be a good complementary instrument to observe the bright nbL-band sources that are saturated in the Spitzer-Infrared Array Camera (IRAC) ([3.6] ≲ 7.92 mag) and the WISE W1-band ([3.4] ≲ 8.1 mag). Sources with strong polycyclic aromatic hydrocarbon (PAH) emission at 3.3 μm are also detected. Details of the observations and estimated parameters are presented in this paper.
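
    The quoted field of view follows directly from the array format and pixel scale; as a quick check (assuming a simple product of the two, with no allowance for distortion or vignetting):

```latex
\mathrm{FoV} \simeq N_{\mathrm{pix}} \times \theta_{\mathrm{pix}} = 512 \times 0.169'' \approx 86.5''
```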

  13. Characterization and Performance of the Cananea Near-infrared Camera (CANICA)

    Science.gov (United States)

    Devaraj, R.; Mayya, Y. D.; Carrasco, L.; Luna, A.

    2018-05-01

    We present details of the characterization and imaging performance of the Cananea Near-infrared Camera (CANICA) at the 2.1 m telescope of the Guillermo Haro Astrophysical Observatory (OAGH) located in Cananea, Sonora, México. CANICA has a HAWAII array with a HgCdTe detector of 1024 × 1024 pixels covering a field of view of 5.5 × 5.5 arcmin² with a plate scale of 0.32 arcsec/pixel. The camera characterization involved measuring key detector parameters: conversion gain, dark current, readout noise, and linearity. The pixels in the detector have a full-well depth of 100,000 e⁻, with the conversion gain measured to be 5.8 e⁻/ADU. The time-dependent dark current was estimated to be 1.2 e⁻/sec. The readout noise for the correlated double sampling (CDS) technique was measured to be 30 e⁻/pixel. The detector shows 10% non-linearity close to the full-well depth. The non-linearity was corrected to within 1% for the CDS images. Full-field imaging performance was evaluated by measuring the point spread function, zeropoints, throughput, and limiting magnitude. The average zeropoint values are J = 20.52, H = 20.63, and K = 20.23. The saturation limit of the detector is about sixth magnitude in all the primary broadbands. CANICA on the 2.1 m OAGH telescope reaches background-limited magnitudes of J = 18.5, H = 17.6, and K = 16.0 for a signal-to-noise ratio of 10 with an integration time of 900 s.
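
    The conversion gain and readout noise quoted above are the kind of quantities commonly derived with the photon-transfer method. A minimal sketch of that method is given below, using hypothetical pairs of bias and flat-field frames; this is not CANICA's actual reduction pipeline.

```python
# Photon-transfer sketch: estimate conversion gain (e-/ADU) and read noise (e-)
# from two bias frames and two flat-field frames at the same illumination level.
import numpy as np

def photon_transfer(bias1, bias2, flat1, flat2):
    """Return (gain in e-/ADU, read noise in e-)."""
    # Read noise in ADU from the difference of two bias frames
    sigma_bias_adu = np.std(bias1.astype(float) - bias2.astype(float)) / np.sqrt(2)
    # Mean signal (ADU) above bias, and shot-noise variance from a flat difference
    mean_signal = 0.5 * (flat1.mean() + flat2.mean()) - 0.5 * (bias1.mean() + bias2.mean())
    var_diff = np.var(flat1.astype(float) - flat2.astype(float)) / 2.0
    var_shot = var_diff - sigma_bias_adu**2
    gain = mean_signal / var_shot            # e-/ADU, since Poisson var(e-) = mean(e-)
    read_noise_e = gain * sigma_bias_adu     # read noise converted to electrons
    return gain, read_noise_e

# Synthetic demo frames (hypothetical levels chosen near the values in the record)
rng = np.random.default_rng(0)
bias = lambda: rng.normal(1000, 5, (1024, 1024))
flat = lambda: rng.normal(11000, np.sqrt(25 + 10000 / 5.8), (1024, 1024))
print(photon_transfer(bias(), bias(), flat(), flat()))
```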

  14. Development of plenoptic infrared camera using low dimensional material based photodetectors

    Science.gov (United States)

    Chen, Liangliang

    Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns, and have been widely used for military and civilian applications. Conventional IR cameras based on bulk semiconductor materials suffer from low frame rate, low resolution, temperature dependence and high cost, whereas low-dimensional-material nanotechnology based on carbon nanotubes (CNTs) has made much progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems, addressing the sensitivity, speed and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing: not only for the fundamental understanding of the processes induced by the CNT photoresponse, but also for the development of a novel infrared-sensitive material with unique optical and electrical features. In the proposed research, a sandwich-structured sensor was fabricated between two polymer layers. The polyimide substrate isolated the sensor from background noise, and a top parylene packing blocked humid environmental factors. At the same time, the fabrication process was optimized by dielectrophoresis with real-time electrical detection and by multiple annealing steps to improve the fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized by digital microscopy and a precise linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to make the nano-sensor IR camera available. To explore more of the infrared light, we employ compressive sensing algorithms for light-field sampling, 3-D cameras and compressive video sensing. The redundant information of the whole light field, including angular images for the light field, binocular images for the 3-D camera and temporal information of video streams, is extracted and
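
    The compressive-sensing step mentioned at the end of the record can be illustrated generically. The sketch below implements ISTA for the standard LASSO sparse-recovery problem on synthetic data; the measurement matrix, sizes and parameters are purely illustrative and are not taken from the thesis.

```python
# Generic compressive-sensing reconstruction sketch: ISTA for
# min_x 0.5*||y - A x||^2 + lam*||x||_1 with a random Gaussian sensing matrix.
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
n, m, k = 256, 96, 8                          # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1 / np.sqrt(m), (m, n))     # random Gaussian sensing matrix
y = A @ x_true
x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```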

  15. AKARI INFRARED CAMERA SURVEY OF THE LARGE MAGELLANIC CLOUD. II. THE NEAR-INFRARED SPECTROSCOPIC CATALOG

    International Nuclear Information System (INIS)

    Shimonishi, Takashi; Onaka, Takashi; Kato, Daisuke; Sakon, Itsuki; Ita, Yoshifusa; Kawamura, Akiko; Kaneda, Hidehiro

    2013-01-01

    We performed a near-infrared spectroscopic survey toward an area of ~10 deg² of the Large Magellanic Cloud (LMC) with the infrared satellite AKARI. Observations were carried out as part of the AKARI Large-area Survey of the Large Magellanic Cloud (LSLMC). The slitless multi-object spectroscopic capability of the AKARI/IRC enabled us to obtain low-resolution (R ~ 20) spectra in 2-5 μm for a large number of point sources in the LMC. As a result of the survey, we extracted about 2000 infrared spectra of point sources. The data are organized as a near-infrared spectroscopic catalog. The catalog includes various infrared objects such as young stellar objects (YSOs), asymptotic giant branch (AGB) stars, supergiants, and so on. It is shown that 97% of the catalog sources have corresponding photometric data in the wavelength range from 1.2 to 11 μm, and 67% of the sources also have photometric data up to 24 μm. The catalog allows us to investigate near-infrared spectral features of sources by comparison with their infrared spectral energy distributions. In addition, it is estimated that about 10% of the catalog sources are observed at more than two different epochs. This enables us to study the spectroscopic variability of sources by using the present catalog. Initial results of source classifications for the LSLMC samples are presented. We classified 659 LSLMC spectra based on their near-infrared spectral features by visual inspection. As a result, it is shown that the present catalog includes 7 YSOs, 160 C-rich AGBs, 8 C-rich AGB candidates, 85 O-rich AGBs, 122 blue and yellow supergiants, 150 red supergiants, and 128 unclassified sources. Distributions of the classified sources on the color-color and color-magnitude diagrams are discussed in the text. Continuous wavelength coverage and high spectroscopic sensitivity in 2-5 μm can only be achieved by space observations. This is an unprecedented large-scale spectroscopic survey toward the LMC in the near-infrared.

  16. Flaw evaluation of Nd:YAG laser welding based plume shape by infrared thermal camera

    International Nuclear Information System (INIS)

    Kim, Jae Yeol; Yoo, Young Tae; Yang, Dong Jo; Song, Kyung Seol; Ro, Kyoung Bo

    2003-01-01

    In Nd:YAG laser welding, various methods exist for evaluating weld flaws, but classifying flaws from the plume shape is difficult. The Nd:YAG laser process is known for its high speed and deep penetration capability, making it one of the most advanced welding technologies. At present, some methods are being studied for measuring the plume shape using a high-speed camera and a photodiode. This paper describes the machining characteristics of SM45C carbon steel welded with an Nd:YAG laser. In spite of its good mechanical characteristics, SM45C carbon steel has a high carbon content and its industrial application is limited by its poor welding properties. In this study, the plume shape was measured with an infrared thermal camera, a non-contact, non-destructive thermal measurement instrument, while varying the laser power, welding speed and focus. Welding was performed using the bead-on-plate method. The results from the two instruments are compared: plume quantities derived from the plume shape measured by the infrared thermal camera, and inspection results of the weld bead, including weld flaws, obtained with an ultrasonic inspector.

  17. Report on the Radiation Effects Testing of the Infrared and Optical Transition Radiation Camera Systems

    Energy Technology Data Exchange (ETDEWEB)

    Holloway, Michael Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-04-20

    Presented in this report are the results of tests performed at Argonne National Lab in collaboration with Los Alamos National Lab to assess the reliability of the beam monitoring diagnostics critical to the 99Mo production facility. The main components of the beam monitoring systems are two cameras that will be exposed to radiation during accelerator operation. The purpose of this test is to assess the reliability of the cameras and related optical components when exposed to operational radiation levels. Both X-ray and neutron radiation could potentially damage camera electronics as well as optical components such as lenses and windows. This report covers the results of testing component reliability when exposed to X-ray radiation. With the information from this study we provide recommendations for implementing protective measures for the camera systems in order to minimize the occurrence of radiation-induced failure within a ten-month production run cycle.

  18. The Utility of Using a Near-Infrared (NIR) Camera to Measure Beach Surface Moisture

    Science.gov (United States)

    Nelson, S.; Schmutz, P. P.

    2017-12-01

    Surface moisture content is an important factor that must be considered when studying aeolian sediment transport in a beach environment. A few different instruments and procedures are available for measuring surface moisture content (e.g. moisture probes, LiDAR, and gravimetric moisture data from surface scrapings); however, these methods can be inaccurate, costly, or impractical, particularly in the field. Near-infrared (NIR) spectral band imagery is another technique used to obtain moisture data. NIR imagery has predominantly been used in remote sensing and has yet to be used for ground-based measurements. Dry sand reflects infrared radiation given off by the sun, whereas wet sand absorbs IR radiation. With this in mind, this study assesses the utility of measuring the surface moisture content of beach sand with a modified NIR camera. A traditional point-and-shoot digital camera was internally modified by inserting a visible light-blocking filter. Images were taken of three different types of beach sand at controlled moisture content values, with sunlight as the source of infrared radiation. A technique was established through trial and error by comparing the resulting histogram values in Adobe Photoshop under the various moisture conditions. The resulting IR absorption histogram values were calibrated to actual gravimetric moisture content from surface scrapings of the samples. Overall, the results illustrate that the NIR-modified camera does not adequately measure beach surface moisture content. However, there were noted differences in IR absorption histogram values among the different sediment types. Sediment with darker quartz mineralogy showed larger variations in histogram values, but the technique is not sensitive enough to accurately represent low moisture percentages, which are of most importance when studying aeolian sediment transport.
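
    The calibration step described above (histogram statistics of the NIR image regressed against gravimetric moisture) can be sketched as follows, with hypothetical sample values standing in for the real calibration data.

```python
# Sketch of the calibration idea: summarize an NIR image of a sand sample by its
# mean brightness and fit a linear relation to gravimetric moisture content.
import numpy as np

# Hypothetical (mean NIR brightness, gravimetric moisture %) calibration pairs
brightness = np.array([210.0, 180.0, 150.0, 120.0, 95.0])
moisture   = np.array([  2.0,   6.0,  10.0,  15.0, 20.0])

slope, intercept = np.polyfit(brightness, moisture, 1)   # linear calibration

def estimate_moisture(nir_image):
    """Estimate surface moisture (%) from an NIR image (2-D uint8 array)."""
    return slope * float(np.mean(nir_image)) + intercept

test_patch = np.full((100, 100), 140, dtype=np.uint8)    # synthetic test image
print(f"estimated moisture: {estimate_moisture(test_patch):.1f} %")
```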

  19. Volcano monitoring with an infrared camera: first insights from Villarrica Volcano

    Science.gov (United States)

    Rosas Sotomayor, Florencia; Amigo Ramos, Alvaro; Velasquez Vargas, Gabriela; Medina, Roxana; Thomas, Helen; Prata, Fred; Geoffroy, Carolina

    2015-04-01

    This contribution focuses on the first trials of the almost 24/7 monitoring of Villarrica volcano with an infrared camera. The results must be compared with those from other SO2 remote sensing instruments, such as DOAS and the UV camera, for the daytime measurements. Infrared remote sensing of volcanic emissions is a fast and safe method to obtain gas abundances in volcanic plumes, in particular when access to the vent is difficult, during volcanic crises and at night. In recent years, a ground-based infrared camera (Nicair) has been developed by Nicarnica Aviation, which quantifies SO2 and ash in volcanic plumes based on the infrared radiance at specific wavelengths through the application of filters. Three Nicair1 (first model) cameras have been acquired by the Geological Survey of Chile in order to study the degassing of active volcanoes. Several trials with the instruments have been performed at northern Chilean volcanoes, and have shown that the intervals of retrieved SO2 concentrations and fluxes are as expected. Measurements were also performed at Villarrica volcano, and a location to install a 'fixed' camera, 8 km from the crater, was found: a coffee house with electrical power, a wifi network, polite and committed owners and a full view of the volcano summit. The first measurements are being made and processed in order to obtain a full day and a full week of SO2 emissions, to analyze data transfer and storage, to improve the remote control of the instrument and notebook in case of breakdown, and to evaluate web-cam/GoPro support, with the goal of the project being to implement a fixed station to monitor and study Villarrica volcano with a Nicair1, integrating and comparing these results with other remote sensing instruments. This work also aims to strengthen bonds with the community by developing teaching material and giving talks to communicate volcanic hazards and other geoscience topics to the people who live "just around the corner" from one of the most active volcanoes

  20. Color Segmentation Approach of Infrared Thermography Camera Image for Automatic Fault Diagnosis

    International Nuclear Information System (INIS)

    Djoko Hari Nugroho; Ari Satmoko; Budhi Cynthia Dewi

    2007-01-01

    Predictive maintenance based on fault diagnosis has become very important nowadays to assure the availability and reliability of a system. The main purpose of this research is to develop computer software for automatic fault diagnosis based on images acquired from an infrared thermography camera, using a color segmentation approach. This technique detects hot spots in plant equipment. The image acquired from the camera is first converted to the RGB (Red, Green, Blue) image model and then to the CMYK (Cyan, Magenta, Yellow, Key for Black) image model. Assuming that the yellow color in the image represents hot spots in the equipment, the CMYK image model is then diagnosed using the color segmentation model to estimate the fault. The software is implemented in the Borland Delphi 7.0 programming language. Its performance was tested on 10 input infrared thermography images. The experimental results show that the software is capable of detecting faults automatically, with a performance of 80% on the 10 input images. (author)
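
    The RGB-to-CMYK conversion and yellow-channel thresholding described above can be illustrated with a short sketch. The threshold and the test image are assumptions; the original software was written in Delphi, not Python.

```python
# Sketch of the color-segmentation idea: convert an RGB thermogram to CMYK and
# flag pixels whose yellow component exceeds a threshold as hot spots.
import numpy as np

def rgb_to_cmyk(rgb):
    """rgb: float array in [0,1], shape (H, W, 3) -> (C, M, Y, K) channel arrays."""
    k = 1.0 - rgb.max(axis=2)
    denom = np.where(1.0 - k > 1e-6, 1.0 - k, 1.0)   # avoid division by zero
    c = (1.0 - rgb[..., 0] - k) / denom
    m = (1.0 - rgb[..., 1] - k) / denom
    y = (1.0 - rgb[..., 2] - k) / denom
    return c, m, y, k

def hot_spot_mask(rgb_image, y_threshold=0.6):
    rgb = rgb_image.astype(float) / 255.0
    _, _, y, _ = rgb_to_cmyk(rgb)
    return y > y_threshold                           # True where "yellow" (hot)

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = (255, 230, 0)                        # synthetic yellow hot spot
print(hot_spot_mask(img).astype(int))
```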

  1. Realization of the Zone Length Measurement during Zone Refining Process via Implementation of an Infrared Camera

    Directory of Open Access Journals (Sweden)

    Danilo C. Curtolo

    2018-05-01

    Zone refining, currently the most common industrial process for obtaining ultrapure metals, is influenced by a variety of factors. One of these parameters, the so-called "zone length", affects not only the ultimate concentration distribution of impurities, but also the rate at which this distribution is approached. This important parameter has, however, neither been investigated experimentally nor varied for the purpose of optimization. This lack of measurements may be due to the difficulty of measuring the temperature of a moving molten zone inside the vacuum system that the zone refining setup comprises. Up to now, numerical simulation, combining complex mathematical calculations with many assumptions, has been the only way to estimate it. This paper proposes an experimental method to accurately measure the molten zone length and to extract useful information on the thermal gradient, temperature profile and real growth rate in the zone refining of an exemplary metal, in this case aluminum. This thermographic method is based on the measurement of the molten surface temperature with an infrared camera, together with further data analysis in the mathematical software MATLAB. The obtained results correlate well with visual observations of the zone length and provide helpful information for determining the thermal gradient and real growth rate during the whole process. The investigations in this paper confirm the application of an infrared camera for this purpose as a promising technique for automatically controlling the zone length during a zone refining process.
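
    A much simplified illustration of extracting the molten-zone length from a calibrated surface-temperature profile is given below. The melting point of aluminum is used as the threshold; the spatial calibration and synthetic profile are assumptions, and the full MATLAB analysis of the paper is not reproduced.

```python
# Simplified molten-zone-length extraction from a 1-D temperature profile along
# the bar axis, measured by an IR camera (synthetic data used here).
import numpy as np

T_MELT_AL = 660.0          # deg C, melting point of aluminum
PIXEL_SIZE_MM = 0.5        # hypothetical spatial calibration of the profile

def zone_length_mm(temperature_profile, t_melt=T_MELT_AL, pixel_mm=PIXEL_SIZE_MM):
    """Total length of the region(s) above the melting temperature."""
    molten = temperature_profile >= t_melt
    return int(np.count_nonzero(molten)) * pixel_mm

x = np.linspace(0, 200, 401)                          # mm along the bar (0.5 mm steps)
profile = 400 + 400 * np.exp(-((x - 100) / 15) ** 2)  # synthetic temperature bump
print(f"molten zone length: {zone_length_mm(profile):.1f} mm")
```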

  2. The cloud monitor by an infrared camera at the Telescope Array experiment

    International Nuclear Information System (INIS)

    Shibata, F.

    2011-01-01

    The measurement of extensive air showers using fluorescence detectors (FDs) is affected by the condition of the atmosphere. In particular, the FD aperture is limited by cloudiness. If a cloud lies on the light path from an extensive air shower to the FDs, the fluorescence photons are strongly absorbed. Therefore the cloudiness of the FD field of view (FOV) is one of the important quality-cut conditions in FD analysis. In the Telescope Array (TA), an infrared (IR) camera with 320×236 pixels and a field of view of 25.8° × 19.5° has been installed at an observation site for cloud monitoring during FD observations. This IR camera measures the temperature of the sky every 30 min during FD observation. The IR camera is mounted on a steerable table, which can be adjusted in elevation and azimuth. Clouds appear at a higher temperature than areas of cloudless sky in these temperature maps. In this paper, we discuss the quality of the cloud monitoring data, the analysis method, and the current quality cut on cloudiness in FD analysis.
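
    The cloudiness estimate described above amounts to thresholding the sky brightness-temperature map. A toy version, with an assumed threshold and synthetic data rather than the TA quality-cut values, might look like this:

```python
# Toy cloud-coverage estimate: count pixels in the sky temperature map warmer
# than a threshold; array size matches the 320x236 camera quoted above.
import numpy as np

def cloud_fraction(sky_temp_map, threshold_c=-20.0):
    """Fraction of FOV pixels whose brightness temperature exceeds the threshold."""
    cloudy = sky_temp_map > threshold_c
    return float(np.count_nonzero(cloudy)) / sky_temp_map.size

rng = np.random.default_rng(2)
sky = rng.normal(-40.0, 2.0, (236, 320))                      # cold, cloudless pixels
sky[60:120, 100:200] = rng.normal(-5.0, 2.0, (60, 100))       # a warm cloud patch
print(f"cloud coverage: {100 * cloud_fraction(sky):.1f} %")
```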

  3. Infrared spectroscopy of Landau levels of graphene.

    Science.gov (United States)

    Jiang, Z; Henriksen, E A; Tung, L C; Wang, Y-J; Schwartz, M E; Han, M Y; Kim, P; Stormer, H L

    2007-05-11

    We report infrared studies of the Landau level (LL) transitions in single-layer graphene. Our specimens are density tunable and show in situ half-integer quantum Hall plateaus. Infrared transmission is measured in magnetic fields up to B = 18 T at selected LL fillings. Resonances between hole LLs, as well as resonances between hole and electron LLs, are resolved. Their transition energies are proportional to √B, and the deduced band velocity is c̃ ≈ 1.1 × 10⁶ m/s. The lack of precise scaling between different LL transitions indicates considerable contributions of many-particle effects to the infrared transition energies.
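
    The reported √B scaling is consistent with the standard Landau-level spectrum of single-layer graphene, which for reference reads (textbook expression, not quoted from the paper):

```latex
E_n = \operatorname{sgn}(n)\,\tilde{c}\,\sqrt{2 e \hbar B\,|n|}, \qquad n = 0,\ \pm 1,\ \pm 2,\ \ldots
```

    so that transition energies between levels scale as the square root of the magnetic field.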

  4. Outdoor Air Quality Level Inference via Surveillance Cameras

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2016-01-01

    Air pollution is a universal problem confronted by many developing countries. Because there are very few air quality monitoring stations in cities, it is difficult for people to know the exact air quality level anytime and anywhere. Fortunately, a large number of surveillance cameras have been deployed in cities and can capture images densely and conveniently. This provides the possibility of utilizing surveillance cameras as sensors to obtain data and predict the air quality level. To this end, we present a novel air quality level inference approach based on outdoor images. First, we explore several features extracted from images as a robust representation for air quality prediction. Then, to effectively fuse these heterogeneous and complementary features, we adopt multikernel learning to learn an adaptive classifier for air quality level inference. In addition, to facilitate the research, we construct an Outdoor Air Quality Image Set (OAQIS) dataset, which contains high-quality registered and calibrated images with rich labels, that is, concentration of particulate matter (PM), weather, temperature, humidity, and wind. Extensive experiments on the OAQIS dataset demonstrate the effectiveness of the proposed approach.

  5. Human Detection Based on the Generation of a Background Image by Using a Far-Infrared Light Camera

    Directory of Open Access Journals (Sweden)

    Eun Som Jeon

    2015-03-01

    The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance and monitoring systems. However, the performance of human detection based on visible light cameras is limited because of factors such as nonuniform illumination, shadows and low external light in the evening and at night. Consequently, human detection based on thermal (far-infrared light) cameras has been considered as an alternative. Its performance, however, is influenced by factors such as low image resolution, low contrast and the large noise of thermal images. It is also affected by the high temperature of backgrounds during the day. To solve these problems, we propose a new method for detecting human areas in thermal camera images. Compared to previous works, the proposed research is novel in the following four aspects. First, one background image is generated by median and average filtering, and additional filtering procedures based on maximum gray level, size filtering and region erasing are applied to remove the human areas from the background image. Second, candidate human regions in the input image are located by combining the pixel and edge difference images between the input and background images; the thresholds for the difference images are adaptively determined based on the brightness of the generated background image, and noise components are removed by component labeling, a morphological operation and size filtering. Third, detected areas that may contain more than two human regions are merged or separated based on the information in the horizontal and vertical histograms of the detected area; this procedure is adaptively operated based on the brightness of the generated background image. Fourth, a further procedure for the separation and removal of the candidate human regions is performed based on the size and the height-to-width ratio of the candidate regions, considering the camera viewing direction.
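
    The background-generation and difference-thresholding steps described above can be sketched in a few lines. The sketch below uses fixed thresholds and synthetic frames for brevity, whereas the paper adapts its thresholds to the background brightness and adds several further filtering stages.

```python
# Sketch: build a background frame by temporal median filtering of a thermal
# stack, then threshold the difference with the current frame, clean it up with
# a morphological opening, and keep only sufficiently large connected components.
import numpy as np
from scipy import ndimage

def make_background(frames):
    """frames: (N, H, W) thermal stack -> temporally median-filtered background."""
    return np.median(frames.astype(float), axis=0)

def candidate_mask(frame, background, diff_thresh=15.0, min_size=30):
    diff = np.abs(frame.astype(float) - background)
    mask = diff > diff_thresh
    mask = ndimage.binary_opening(mask)                   # morphological noise removal
    labels, n = ndimage.label(mask)                       # component labeling
    if n == 0:
        return np.zeros(mask.shape, dtype=bool)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep_labels = 1 + np.flatnonzero(sizes >= min_size)   # size filtering
    return np.isin(labels, keep_labels)

rng = np.random.default_rng(3)
stack = rng.normal(100, 2, (50, 120, 160))                # synthetic thermal frames
frame = stack[0].copy()
frame[40:90, 70:90] += 40                                 # synthetic warm "person"
print(candidate_mask(frame, make_background(stack)).sum(), "candidate pixels")
```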

  6. PANIC: A General-purpose Panoramic Near-infrared Camera for the Calar Alto Observatory

    Science.gov (United States)

    Cárdenas Vázquez, M.-C.; Dorner, B.; Huber, A.; Sánchez-Blanco, E.; Alter, M.; Rodríguez Gómez, J. F.; Bizenberger, P.; Naranjo, V.; Ibáñez Mengual, J.-M.; Panduro, J.; García Segura, A. J.; Mall, U.; Fernández, M.; Laun, W.; Ferro Rodríguez, I. M.; Helmling, J.; Terrón, V.; Meisenheimer, K.; Fried, J. W.; Mathar, R. J.; Baumeister, H.; Rohloff, R.-R.; Storz, C.; Verdes-Montenegro, L.; Bouy, H.; Ubierna, M.; Fopp, P.; Funke, B.

    2018-02-01

    PANIC is the new PAnoramic Near-Infrared Camera for Calar Alto and is a project jointly developed by the MPIA in Heidelberg, Germany, and the IAA in Granada, Spain, for the German-Spanish Astronomical Center at Calar Alto Observatory (CAHA; Almería, Spain). This new instrument works with the 2.2 m and 3.5 m CAHA telescopes, covering a field of view of 30 × 30 arcmin and 15 × 15 arcmin, respectively, with a sampling of 4096 × 4096 pixels. It is designed for the spectral bands from Z to Ks and can also be equipped with narrowband filters. The instrument was delivered to the observatory in 2014 October and was commissioned at both telescopes between 2014 November and 2015 June. Science verification at the 2.2 m telescope was carried out during the second semester of 2015 and the instrument is now in full operation. We describe the design, assembly, integration, and verification process, the final laboratory tests and the PANIC instrument performance. We also present first-light data obtained during the commissioning and preliminary results of the scientific verification. The final optical model and the theoretical performance of the camera were updated according to the as-built data. The laboratory tests were made with a star simulator. Finally, the commissioning phase was done at both telescopes to validate the camera's real performance on sky. The final laboratory tests confirmed the expected camera performance, complying with the scientific requirements. The commissioning phase on sky has been accomplished.

  7. Robot Towed Shortwave Infrared Camera for Specific Surface Area Retrieval of Surface Snow

    Science.gov (United States)

    Elliott, J.; Lines, A.; Ray, L.; Albert, M. R.

    2017-12-01

    Optical grain size and specific surface area are key parameters for measuring the atmospheric interactions of snow, as well as tracking metamorphosis and allowing for the ground truthing of remote sensing data. We describe a device using a shortwave infrared camera with changeable optical bandpass filters (centered at 1300 nm and 1550 nm) that can be used to quickly measure the average SSA over an area of 0.25 m^2. The device and method are compared with calculations made from measurements taken with a field spectral radiometer. The instrument is designed to be towed by a small autonomous ground vehicle, and therefore rides above the snow surface on ultra high molecular weight polyethylene (UHMW) skis.

  8. Photoconductor arrays for a spectral-photometric far-infrared camera on SOFIA

    Science.gov (United States)

    Wolf, Juergen; Driescher, Hans; Schubert, Josef; Rabanus, D.; Paul, E.; Roesner, K.

    1998-04-01

    The Stratospheric Observatory for Infrared Astronomy, SOFIA, is a joint US and German project and will start observations from altitudes up to 45,000 ft in late 2001. The 2.5 m telescope is being developed in Germany, while the 747 aircraft modifications and the preparation of the observatory's operations center are done by a US consortium. Several research institutions and universities in both countries have started to develop science instruments. The DLR Institute of Space Sensor Technology in Berlin plans a spectral-photometric camera working in the 20 to 220 micrometer wavelength range, using doped silicon and germanium extrinsic photoconductors in large 2D arrays: silicon blocked-impurity-band detectors, Ge:Ga and stressed Ge:Ga. While the silicon array will be commercially available, the germanium arrays have to be developed, including their cryogenic multiplexers. Partner institutions in Germany and the US will support the development of the instrument and its observations.

  9. Pedestrian Detection Based on Adaptive Selection of Visible Light or Far-Infrared Light Camera Image by Fuzzy Inference System and Convolutional Neural Network-Based Verification.

    Science.gov (United States)

    Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung

    2017-07-08

    A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to the varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by inputting both the visible light and the FIR camera images into the CNN as the input. This, however, takes a longer time to process, and makes the system structure more complex as the CNN needs to process both camera images. This research adaptively selects a more appropriate candidate between two pedestrian images from visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors using visible light and FIR cameras. The results showed that the proposed method performs better than the previously reported methods.

  10. The Infrared Camera for RATIR, a Rapid Response GRB Followup Instrument

    Science.gov (United States)

    Rapchun, David A.; Alardin, W.; Bigelow, B. C.; Bloom, J.; Butler, N.; Farah, A.; Fox, O. D.; Gehrels, N.; Gonzalez, J.; Klein, C.; Kutyrev, A. S.; Lotkin, G.; Morisset, C.; Moseley, S. H.; Richer, M.; Robinson, F. D.; Samuel, M. V.; Sparr, L. M.; Tucker, C.; Watson, A.

    2011-01-01

    RATIR (Reionization and Transients Infrared instrument) will be a hybrid optical/near IR imager that will utilize the "J-band dropout" to rapidly identify very high redshift (VHR) gamma-ray bursts (GRBs) from a sample of all observable Swift bursts. Our group at GSFC is developing the instrument in collaboration with UC Berkeley (UCB) and University of Mexico (UNAM). RATIR has both a visible and IR camera, which give it access to 8 bands spanning visible and IR wavelengths. The instrument implements a combination of filters and dichroics to provide the capability of performing photometry in 4 bands simultaneously. The GSFC group leads the design and construction of the instrument's IR camera, equipped with two HgCdTe 2k x 2k Teledyne detectors. The cryostat housing these detectors is cooled by a mechanical cryo-compressor, which allows uninterrupted operation on the telescope. The host 1.5-m telescope, located at the UNAM San Pedro Martir Observatory, Mexico, has recently undergone robotization, allowing for fully automated, continuous operation. After commissioning in the spring of 2011, RATIR will dedicate its time to obtaining prompt follow-up observations of GRBs and identifying VHR GRBs, thereby providing a valuable tool for studying the epoch of reionization.

  11. The infrared camera prototype characterization for the JEM-EUSO space mission

    International Nuclear Information System (INIS)

    Morales de los Ríos, J.A.; Joven, E.; Peral, L. del; Reyes, M.; Licandro, J.; Rodríguez Frías, M.D.

    2014-01-01

    JEM-EUSO (Extreme Universe Space Observatory on Japanese Experiment Module) is an advanced observatory that will be on-board the International Space Station (ISS) and use the Earth's atmosphere as a huge calorimeter detector. However, the atmospheric clouds introduce uncertainties in the signals measured by JEM-EUSO. Therefore, it is extremely important to know the atmospheric conditions and properties of the clouds in the Field of View (FoV) of the telescope. The Atmospheric Monitoring System (AMS) of JEM-EUSO includes a lidar and an infrared imaging system, IR-Camera, aimed to detect the presence of clouds and to obtain the cloud coverage and cloud top altitude during the observations of the JEM-EUSO main telescope. To define the road-map for the design of the electronics, the detector has been tested extensively with a first prototype. The actual design of the IR-Camera, the test of the prototype, and the outcome of this characterization are presented in this paper

  12. The infrared camera prototype characterization for the JEM-EUSO space mission

    Energy Technology Data Exchange (ETDEWEB)

    Morales de los Ríos, J.A., E-mail: josealberto.morales@uah.es [SPace and AStroparticle (SPAS) Group, UAH, Madrid (Spain); Ebisuzaki Computational Astrophysics Laboratory, RIKEN (Japan); Joven, E. [Instituto de Astrofísica de Canarias (IAC), Tenerife (Spain); Peral, L. del [SPace and AStroparticle (SPAS) Group, UAH, Madrid (Spain); Leonard E. Parker Center for Gravitation, Cosmology and Astrophysics, University of Wisconsin-Milwaukee (United States); Reyes, M. [Instituto de Astrofísica de Canarias (IAC), Tenerife (Spain); Licandro, J. [Instituto de Astrofísica de Canarias (IAC), Tenerife (Spain); Departamento de Astrofísica, Universidad de La Laguna, Tenerife (Spain); Rodríguez Frías, M.D. [SPace and AStroparticle (SPAS) Group, UAH, Madrid (Spain); Instituto de Astrofísica de Canarias (IAC), Tenerife (Spain)

    2014-06-01

    JEM-EUSO (Extreme Universe Space Observatory on Japanese Experiment Module) is an advanced observatory that will be on-board the International Space Station (ISS) and use the Earth's atmosphere as a huge calorimeter detector. However, the atmospheric clouds introduce uncertainties in the signals measured by JEM-EUSO. Therefore, it is extremely important to know the atmospheric conditions and properties of the clouds in the Field of View (FoV) of the telescope. The Atmospheric Monitoring System (AMS) of JEM-EUSO includes a lidar and an infrared imaging system, IR-Camera, aimed to detect the presence of clouds and to obtain the cloud coverage and cloud top altitude during the observations of the JEM-EUSO main telescope. To define the road-map for the design of the electronics, the detector has been tested extensively with a first prototype. The actual design of the IR-Camera, the test of the prototype, and the outcome of this characterization are presented in this paper.

  13. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    Science.gov (United States)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems. We have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world when watching the screen of a see-through 3D viewer. The goal of our research is to build a display system as follows: when users see the real world through the mobile viewer, the display system gives them virtual 3D images floating in the air, and the observers can touch these floating images and interact with them, much as children play with modeling clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by the improved parallax-barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors show the geometric analysis of the proposed measuring method, which is the simplest method, using a single camera rather than a stereo camera, and the results of our viewer system.

  14. Optimization of a miniature short-wavelength infrared objective optics of a short-wavelength infrared to visible upconversion layer attached to a mobile-devices visible camera

    Science.gov (United States)

    Kadosh, Itai; Sarusi, Gabby

    2017-10-01

    The use of dual cameras in parallax in order to detect and create 3-D images in mobile devices has been increasing over the last few years. We propose a concept where the second camera will be operating in the short-wavelength infrared (SWIR-1300 to 1800 nm) and thus have night vision capability while preserving most of the other advantages of dual cameras in terms of depth and 3-D capabilities. In order to maintain commonality of the two cameras, we propose to attach to one of the cameras a SWIR to visible upconversion layer that will convert the SWIR image into a visible image. For this purpose, the fore optics (the objective lenses) should be redesigned for the SWIR spectral range and the additional upconversion layer, whose thickness is mobile device visible range camera sensor (the CMOS sensor). This paper presents such a SWIR objective optical design and optimization that is formed and fit mechanically to the visible objective design but with different lenses in order to maintain the commonality and as a proof-of-concept. Such a SWIR objective design is very challenging since it requires mimicking the original visible mobile camera lenses' sizes and the mechanical housing, so we can adhere to the visible optical and mechanical design. We present in depth a feasibility study and the overall optical system performance of such a SWIR mobile-device camera fore optics design.

  15. A Study of Planetary Nebulae using the Faint Object Infrared Camera for the SOFIA Telescope

    Science.gov (United States)

    Davis, Jessica

    2012-01-01

    A planetary nebula is formed following an intermediate-mass (1-8 solar mass) star's evolution off of the main sequence; the star undergoes a phase of mass loss whereby the stellar envelope is ejected and the core is converted into a white dwarf. Planetary nebulae often display complex morphologies such as waists or tori, rings, collimated jet-like outflows, and bipolar symmetry, but exactly how these features form is unclear. To study how the distribution of dust in the interstellar medium affects their morphology, we utilize the Faint Object InfraRed CAmera for the SOFIA Telescope (FORCAST) to obtain well-resolved images of four planetary nebulae--NGC 7027, NGC 6543, M2-9, and the Frosty Leo Nebula--at wavelengths where they radiate most of their energy. We retrieve mid-infrared images at wavelengths ranging from 6.3 to 37.1 micron for each of our targets. IDL (Interactive Data Language) is used to perform basic analysis. We select M2-9 to investigate further; analyzing cross sections of the southern lobe reveals a slight limb-brightening effect. Modeling the dust distribution within the lobes reveals that the thickness of the lobe walls is higher than anticipated, or rather that the walls surround a low-density region of tenuous dust rather than a vacuum. Further analysis of this and other planetary nebulae is needed before drawing more specific conclusions.

  16. Automated cloud classification using a ground based infra-red camera and texture analysis techniques

    Science.gov (United States)

    Rumi, Emal; Kerr, David; Coupland, Jeremy M.; Sandford, Andrew P.; Brettle, Mike J.

    2013-10-01

    Clouds play an important role in influencing the dynamics of local and global weather and climate conditions. Continuous monitoring of clouds is vital for weather forecasting and for air-traffic control. Convective clouds such as Towering Cumulus (TCU) and Cumulonimbus clouds (CB) are associated with thunderstorms, turbulence and atmospheric instability. Human observers periodically report the presence of CB and TCU clouds during operational hours at airports and observatories; however such observations are expensive and time limited. Robust, automatic classification of cloud type using infrared ground-based instrumentation offers the advantage of continuous, real-time (24/7) data capture and the representation of cloud structure in the form of a thermal map, which can greatly help to characterise certain cloud formations. The work presented here utilised a ground based infrared (8-14 μm) imaging device mounted on a pan/tilt unit for capturing high spatial resolution sky images. These images were processed to extract 45 separate textural features using statistical and spatial frequency based analytical techniques. These features were used to train a weighted k-nearest neighbour (KNN) classifier in order to determine cloud type. Ground truth data were obtained by inspection of images captured simultaneously from a visible wavelength colour camera at the same installation, with approximately the same field of view as the infrared device. These images were classified by a trained cloud observer. Results from the KNN classifier gave an encouraging success rate. A Probability of Detection (POD) of up to 90% with a Probability of False Alarm (POFA) as low as 16% was achieved.
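
    A compact sketch of the texture-feature plus weighted-KNN pipeline is shown below, using a handful of GLCM statistics in place of the paper's 45 features; the feature set, parameters and training data are illustrative assumptions (the graycomatrix/graycoprops spelling assumes a recent scikit-image release).

```python
# Sketch: GLCM texture features from a sky image, fed to a distance-weighted KNN
# classifier for cloud type (labels and training images are synthetic placeholders).
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def texture_features(img_8bit):
    """img_8bit: 2-D uint8 sky image -> small vector of GLCM texture statistics."""
    glcm = graycomatrix(img_8bit, distances=[1, 3], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(4)
train_imgs = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
train_labels = rng.integers(0, 2, 20)          # e.g. 0 = stratiform, 1 = convective

X = np.array([texture_features(im) for im in train_imgs])
clf = KNeighborsClassifier(n_neighbors=3, weights="distance").fit(X, train_labels)

test_img = rng.integers(0, 256, (64, 64), dtype=np.uint8)
print("predicted cloud class:", clf.predict([texture_features(test_img)])[0])
```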

  17. 32x32 HgCdTe/CCD infrared camera for the 2-5 micron range

    International Nuclear Information System (INIS)

    Monin, J.L.; Vauglin, I.; Sibille, F.

    1988-01-01

    The paper presents a complete infrared camera system, based on a high electron capacity detector (HgCdTe/CCD), that has been used under high background conditions to generate astronomical images. The performance of the system and some results are presented, and the use of such a detector in astronomy is discussed. 8 references

  18. Change detection and characterization of volcanic activity using ground based low-light and near infrared cameras to monitor incandescence and thermal signatures

    Science.gov (United States)

    Harrild, Martin; Webley, Peter; Dehn, Jonathan

    2015-04-01

    Knowledge and understanding of precursory events and thermal signatures are vital for monitoring volcanogenic processes, as activity can range from low-level lava effusion to large explosive eruptions, easily capable of ejecting ash up to aircraft cruise altitudes. Using ground-based remote sensing techniques to monitor and detect this activity is essential, but the required equipment and maintenance are often expensive. Our investigation explores the use of low-light cameras to image volcanic activity in the visible to near-infrared (NIR) portion of the electromagnetic spectrum. These cameras are ideal for monitoring as they are cheap, consume little power, are easily replaced and can provide near real-time data. We focus here on the early detection of volcanic activity using automated scripts that capture streaming online webcam imagery and evaluate image pixel brightness values to determine relative changes and flag increases in activity. The scripts are written in Python, an open-source programming language, to reduce the overall cost to potential users and increase the applicability of these tools across the volcanological community. In addition, by performing laboratory tests to determine the spectral response of these cameras, a direct comparison of collocated low-light and thermal infrared cameras has allowed approximate eruption temperatures and effusion rates to be determined from pixel brightness. The results of a field campaign in June 2013 to Stromboli volcano, Italy, are also presented here. Future field campaigns to Latin America will include collaborations with INSIVUMEH in Guatemala to apply our techniques to Fuego and Santiaguito volcanoes.
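
    The brightness-threshold monitoring described above can be reduced to a very small script. The sketch below is a toy version with a placeholder webcam URL and an assumed three-sigma flagging rule, not the authors' operational code.

```python
# Toy monitoring loop: pull a webcam frame, compute mean pixel brightness, and
# flag a jump over the running baseline.  The URL is a placeholder.
import time
import urllib.request
from io import BytesIO

import numpy as np
from PIL import Image

WEBCAM_URL = "http://example.org/volcano_webcam.jpg"   # hypothetical camera feed

def frame_brightness(url=WEBCAM_URL):
    with urllib.request.urlopen(url, timeout=10) as resp:
        img = Image.open(BytesIO(resp.read())).convert("L")
    return float(np.asarray(img).mean())

def monitor(n_samples=100, interval_s=60, sigma_factor=3.0):
    history = []
    for _ in range(n_samples):
        b = frame_brightness()
        if len(history) > 10:
            baseline, spread = np.mean(history), np.std(history)
            if b > baseline + sigma_factor * spread:
                print(f"possible incandescence: brightness {b:.1f} vs baseline {baseline:.1f}")
        history.append(b)
        time.sleep(interval_s)

if __name__ == "__main__":
    monitor()
```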

  19. Comparison of two surface temperature measurement using thermocouples and infrared camera

    Directory of Open Access Journals (Sweden)

    Michalski Dariusz

    2017-01-01

    This paper compares two methods applied to measure surface temperatures in an experimental setup designed to analyse flow boiling heat transfer. The temperature measurements were performed in two parallel rectangular minichannels, both 1.7 mm deep, 16 mm wide and 180 mm long. The heating element for the fluid flowing in each minichannel was a thin foil made of Haynes-230. The two measurement methods employed to determine the surface temperature of the foil were the contact method, which involved mounting thermocouples at several points in one minichannel, and the contactless method used to study the other minichannel, where the results were provided by an infrared camera. Calculations were necessary to compare the temperature results. Two sets of measurement data obtained for different values of the heat flux were analysed using basic statistical methods, taking into account the experimental error and the method accuracy. The comparative analysis showed that the values and distributions of the surface temperatures obtained with the two methods were similar, but that both methods had certain limitations.

  20. Nonuniformity correction of infrared cameras by reading radiance temperatures with a spatially nonhomogeneous radiation source

    International Nuclear Information System (INIS)

    Gutschwager, Berndt; Hollandt, Jörg

    2017-01-01

    We present a novel method of nonuniformity correction (NUC) of infrared cameras and focal plane arrays (FPAs) over a wide optical spectral range by reading radiance temperatures and applying a radiation source with an unknown and spatially nonhomogeneous radiance temperature distribution. The benefit of this novel method is that it works with the display and calculation of radiance temperatures, it can be applied to radiation sources of arbitrary spatial radiance temperature distribution, and it only requires sufficient temporal stability of this distribution during the measurement process. In contrast, a previously presented method described the calculation of the NUC from readings of monitored radiance values. Both methods are based on the recording of several (at least three) images of a radiation source and a purposeful row- and line-shift of these subsequent images relative to the first, primary image. The mathematical procedure is explained in detail. Its numerical verification with a source of a predefined nonhomogeneous radiance temperature distribution and a thermal imager of a predefined nonuniform FPA responsivity is presented. (paper)

  1. Experimental investigation of thermal loading of a horizontal thin plate using infrared camera

    Directory of Open Access Journals (Sweden)

    M.Y. Abdollahzadeh Jamalabadi

    2014-07-01

    This study reports the results of experimental investigations of the characteristics of thermal loading of a thin plate by discrete radiative heat sources. The carbon-steel thin plate is located horizontally above the heat sources. The temperature distribution of the plate is measured using an infrared camera. The effects of various parameters, namely the Rayleigh number (from 10⁷ to 10¹¹), the aspect ratio (from 0.05 to 0.2), the distance ratio (from 0.05 to 0.2), the number of heaters (from 1 to 24), the thickness ratio (from 0.003 to 0.005), and the thermal radiative emissivity (from 0.567 to 0.889), on the maximum temperature and on the length of the uniform-temperature region of the thin plate are explored. The results indicate that the parameters with the greatest impact on the maximum temperature are, in decreasing order, the Rayleigh number, the number of heat sources, the distance ratio, the aspect ratio, the surface emissivity, and the plate thickness ratio. Finally, the results demonstrate that there is an optimal distance ratio that maximizes the region of uniform temperature on the plate.
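
    For reference, the Rayleigh number quoted above is the standard dimensionless group for buoyancy-driven convection (symbols as usually defined; this is not a formula taken from the paper itself):

```latex
Ra = \frac{g\,\beta\,(T_s - T_\infty)\,L^{3}}{\nu\,\alpha}
```

    with g the gravitational acceleration, beta the thermal expansion coefficient, T_s - T_inf the characteristic temperature difference, L a characteristic length, nu the kinematic viscosity and alpha the thermal diffusivity.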

  2. Near infrared thermography by CCD cameras and application to first wall components of Tore Supra tokamak

    International Nuclear Information System (INIS)

    Moreau, F.

    1996-01-01

    In the tokamak TORE-SUPRA, the plasma facing components absorb and evacuate (active cooling) high power fluxes (up to 10 MW/m²). The study of their thermal behavior is essential for the success of controlled thermonuclear fusion research. The first part is devoted to the study of power deposition on the TORE-SUPRA actively cooled limiters. A model of power deposition on one of the limiters is developed. It takes into account the magnetic topology and a description of the plasma edge. The model is validated with experimental calorimetric data obtained during a series of shots. This allows the surface temperature measurements to be compared with the predicted ones. The main purpose of this thesis was to evaluate and develop a new surface temperature measurement system. It works in the near infrared range (890 nm) and is designed to complement the existing thermographic diagnostic of TORE-SUPRA. By using the radiation laws (for a blackbody and the plasma) and the laboratory calibration, one can estimate the surface temperature of the observed object. We evaluate the performance and limits of such a device under the harsh conditions encountered in a tokamak environment. On the one hand, in a quasi-ideal situation, this analysis shows that the range of measurement is 600 °C to 2500 °C. On the other hand, when the plasma radiation is taken into account (with an averaged central plasma density of 6×10¹⁹ m⁻³), we find that the minimum measurable surface temperature rises to 900 °C. In the near future, with the development of IR-CCD cameras working in the near infrared range up to 2 micrometers, we will be able to keep the good spatial resolution with an improved lower temperature limit down to 150 °C. The last section deals with a number of computer tools developed to process the images obtained from experiments on TORE-SUPRA. A pattern recognition application was developed especially to detect a complex plasma iso-intensity structure. (author)

  3. Near infrared thermography by CCD cameras and application to first wall components of Tore Supra tokamak

    International Nuclear Information System (INIS)

    Moreau, F.

    1996-01-01

    In the tokamak TORE-SUPRA, the plasma facing components absorb and evacuate (active cooling) high power fluxes (up to 10 MW/m²). The study of their thermal behavior is essential for the success of controlled thermonuclear fusion research. The first part is devoted to the study of power deposition on the TORE-SUPRA actively cooled limiters. A model of power deposition on one of the limiters is developed. It takes into account the magnetic topology and a description of the plasma edge. The model is validated with experimental calorimetric data obtained during a series of shots. This allows the surface temperature measurements to be compared with the predicted ones. The main purpose of this thesis was to evaluate and develop a new temperature measurement system. It works in the near infrared range (890 nm) and is designed to complement the existing thermographic diagnostic of TORE-SUPRA. By using the radiation laws (for a blackbody and the plasma) and the laboratory calibration, one can estimate the surface temperature of the observed object. We evaluate the performance and limits of such a device under the harsh conditions encountered in a tokamak environment. On the one hand, in a quasi-ideal situation, this analysis shows that the range of measurement is 600 °C to 2500 °C. On the other hand, when the plasma radiation is taken into account (with an averaged central plasma density of 6×10¹⁹ m⁻³), we find that the minimum measurable surface temperature rises to 900 °C instead of 700 °C. In the near future, with the development of IR-CCD cameras working in the near infrared range up to 2 micrometers, we will be able to keep the good spatial resolution with an improved lower temperature limit down to 150 °C. The last section deals with a number of computer tools developed to process the images obtained from experiments on TORE-SUPRA. A pattern recognition application was developed especially to detect a complex plasma iso-intensity structure. (author)
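
    The radiance-to-temperature step mentioned in both records (applying the radiation laws at 890 nm together with a laboratory calibration) can be illustrated by inverting Planck's law for a graybody. The emissivity and the test temperature below are assumptions, and the plasma-light correction discussed in the thesis is omitted.

```python
# Illustration: invert Planck's law at 890 nm for a graybody of emissivity eps
# to recover the surface temperature from a measured spectral radiance.
import numpy as np

H = 6.626e-34; C = 2.998e8; KB = 1.381e-23       # SI constants
LAMBDA = 890e-9                                   # observation wavelength (m)

def planck_radiance(T, lam=LAMBDA):
    """Spectral radiance (W m-2 sr-1 m-1) of a blackbody at temperature T (K)."""
    return (2 * H * C**2 / lam**5) / (np.exp(H * C / (lam * KB * T)) - 1.0)

def temperature_from_radiance(L_meas, emissivity=0.8, lam=LAMBDA):
    """Invert the graybody relation L_meas = eps * B(lam, T) for T (K)."""
    x = 2 * H * C**2 / (lam**5 * (L_meas / emissivity)) + 1.0
    return H * C / (lam * KB * np.log(x))

T_true = 1200.0                                   # K (about 927 deg C)
L = 0.8 * planck_radiance(T_true)                 # simulated graybody measurement
print(f"recovered temperature: {temperature_from_radiance(L):.1f} K")
```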

  4. Infrared

    Science.gov (United States)

    Vollmer, M.

    2013-11-01

    underlying physics. There are now at least six different disciplines that deal with infrared radiation in one form or another, and in one or several different spectral portions of the whole IR range. These are spectroscopy, astronomy, thermal imaging, detector and source development and metrology, as well as the field of optical data transmission. Scientists working in these fields range from chemists and astronomers through to physicists and even photographers. This issue presents examples from some of these fields. All the papers, though some of them deal with fundamental or applied research, include interesting elements that make them directly applicable to university-level teaching at the graduate or postgraduate level. Source (e.g. quantum cascade lasers) and detector development (e.g. multispectral sensors), as well as metrology issues and optical data transmission, are omitted since they belong to fundamental research journals. Using a more-or-less arbitrary order according to wavelength range, the issue starts with a paper on the physics of near-infrared photography using consumer product cameras in the spectral range from 800 nm to 1.1 µm [1]. It is followed by a series of three papers dealing with IR imaging in spectral ranges from 3 to 14 µm [2-4]. One of them deals with laboratory courses that may help to characterize the IR camera response [2], the second discusses potential applications for nondestructive testing techniques [3] and the third gives an example of how IR thermal imaging may be used to understand the cloud cover of the Earth [4], which is a prerequisite for successful climate modelling. The next two papers cover the vast field of IR spectroscopy [5, 6]. The first of these deals with Fourier transform infrared spectroscopy in the spectral range from 2.5 to 25 µm, studying e.g. ro-vibrational excitations in gases or optical phonon interactions within solids [5]. The second deals mostly with the spectroscopy of liquids such as biofuels and special

  5. "Wow, It Turned out Red! First, a Little Yellow, and Then Red!" 1st-Graders' Work with an Infrared Camera

    Science.gov (United States)

    Jeppsson, Fredrik; Frejd, Johanna; Lundmark, Frida

    2017-01-01

    This study focuses on investigating how students make use of their bodily experiences in combination with infrared (IR) cameras, as a way to make meaning in learning about heat, temperature, and friction. A class of 20 primary students (age 7-8 years), divided into three groups, took part in three IR camera laboratory experiments. The qualitative…

  6. The James Webb Space Telescope's Near-Infrared Camera (NIRCam): Making Models, Building Understanding

    Science.gov (United States)

    McCarthy, D. W., Jr.; Lebofsky, L. A.; Higgins, M. L.; Lebofsky, N. R.

    2011-09-01

    Since 2003, the Near Infrared Camera (NIRCam) science team for the James Webb Space Telescope (JWST) has conducted "Train the Trainer" workshops for adult leaders of the Girl Scouts of the USA (GSUSA), engaging them in the process of scientific inquiry and equipping them to host astronomy-related activities at the troop level. Training includes topics in basic astronomy (night sky, phases of the Moon, the scale of the Solar System and beyond, stars, galaxies, telescopes, etc.) as well as JWST-specific research areas in extra-solar planetary systems and cosmology, to pave the way for girls and women to understand the first images from JWST. Participants become part of our world-wide network of 160 trainers teaching young women essential STEM-related concepts using astronomy, the night sky environment, applied math, engineering, and critical thinking.

  7. Study and use of an infrared camera optimized for ground based observations in the 10 micron wavelength range

    International Nuclear Information System (INIS)

    Remy, Sophie

    1991-01-01

    Astronomical observations in the 10 micron atmospheric window provide very important information for many astrophysical topics. However, because of the very large terrestrial photon background at that wavelength, ground based observations have been impeded. On the other hand, ground based telescopes offer greater angular resolution than space based telescopes. The recent development of detector arrays for the mid infrared range has made it easier to build infrared cameras with detectors optimized for astronomical observations from the ground. The CAMIRAS infrared camera, built by the 'Service d'Astrophysique' in Saclay, is the instrument we have studied, and we present its performance. Its sensitivity, given for an integration time of one minute on source and a signal to noise ratio of 3, is 0.15 Jy for point sources and 20 mJy arcsec^-2 for extended sources. Since we need to get rid of the enormous photon background, the observations rely on modulation techniques such as 'chopping' or 'nodding'. We show that a modulation of about 1 Hz is satisfactory with our detector arrays without degrading the signal to noise ratio. As we have a good instrument and because we are able to get rid of the photon background, we can study astronomical objects. Results from a comet, dusty stellar disks, and an ultra-luminous galaxy are presented. (author) [fr

  8. Analysis of Uncertainties in Infrared Camera Measurements of a Turbofan Engine in an Altitude Test Cell

    National Research Council Canada - National Science Library

    Morris, Thomas

    2004-01-01

    ... from the facility structure, hot exhaust gases, and the measurement equipment itself. The atmosphere and a protective ZnSe window that shields the camera from the hot engine exhaust also introduce measurement uncertainty due to attenuation...

  9. Shutterless non-uniformity correction for the long-term stability of an uncooled long-wave infrared camera

    Science.gov (United States)

    Liu, Chengwei; Sui, Xiubao; Gu, Guohua; Chen, Qian

    2018-02-01

    For the uncooled long-wave infrared (LWIR) camera, the infrared (IR) irradiation the focal plane array (FPA) receives is a crucial factor that affects the image quality. Ambient temperature fluctuation as well as system power consumption can result in changes of FPA temperature and radiation characteristics inside the IR camera; these will further degrade the imaging performance. In this paper, we present a novel shutterless non-uniformity correction method to compensate for non-uniformity derived from the variation of ambient temperature. Our method combines a calibration-based method and the properties of a scene-based method to obtain correction parameters at different ambient temperature conditions, so that the IR camera performance can be less influenced by ambient temperature fluctuation or system power consumption. The calibration process is carried out in a temperature chamber with slowly changing ambient temperature and a black body as uniform radiation source. Enough uniform images are captured and the gain coefficients are calculated during this period. Then in practical application, the offset parameters are calculated via the least squares method based on the gain coefficients, the captured uniform images and the actual scene. Thus we can get a corrected output through the gain coefficients and offset parameters. The performance of our proposed method is evaluated on realistic IR images and compared with two existing methods. The images we used in experiments are obtained by a 384× 288 pixels uncooled LWIR camera. Results show that our proposed method can adaptively update correction parameters as the actual target scene changes and is more stable to temperature fluctuation than the other two methods.
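
    As a rough illustration of the gain/offset structure of such a non-uniformity correction (not the authors' exact algorithm), the following Python sketch derives per-pixel gains from two uniform blackbody levels and then estimates offsets from scene frames; the frame shapes and the averaging strategy are assumptions.

```python
import numpy as np

def compute_gain(low_frames, high_frames):
    """Per-pixel gain from uniform blackbody frames at two radiance levels.

    low_frames / high_frames: stacks of raw frames (n, H, W) viewing a
    uniform blackbody at a low and a high temperature, respectively.
    Weakly responding pixels get a gain > 1, strong pixels < 1.
    """
    delta = high_frames.mean(axis=0) - low_frames.mean(axis=0)
    return delta.mean() / delta

def estimate_offset(scene_frames, gain):
    """Offset that flattens the temporal mean of gain-corrected scene frames.

    A crude stand-in for the least-squares offset update in the paper: by
    averaging many frames of a moving scene, the scene structure is assumed
    to wash out, leaving mostly the fixed-pattern component to be removed.
    """
    corrected_mean = gain * scene_frames.mean(axis=0)
    return corrected_mean.mean() - corrected_mean

def apply_nuc(raw_frame, gain, offset):
    """Non-uniformity corrected output frame."""
    return gain * raw_frame + offset
```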

  10. Analysis on nondestructive temperature distribution of tire tread part in a running using infrared thermal vision camera

    International Nuclear Information System (INIS)

    Kim, Jae Yeol; Yang, Dong Jo; Ma, Sang Dong; Park, Byoung Gu; Lee, Ju Wan

    2001-01-01

    The experimental method for investigating the validity of numerical simulations of running tires has not been developed until now. Belt separation caused by a sudden temperature increase is the most serious problem with running tires. Actually, belt separation is closely related to the life cycle and design of tires. It is important to investigate the temperature history of tires, because a sudden temperature increase in the belt accelerates thermal fatigue and then causes the destruction of the bending area in the radial direction. Therefore, in the present study, the finite element method (FEM) was used to obtain an accurate temperature distribution of the tire. Its results were compared with experimental data acquired by an infrared thermal camera.

  11. A Near-Infrared Photon Counting Camera for High Sensitivity Astronomical Observation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is a Near Infrared Photon-Counting Sensor (NIRPCS), an imaging device with sufficient sensitivity to capture the spectral signatures, in the...

  12. A Near-Infrared Photon Counting Camera for High Sensitivity Astronomical Observation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is a Near Infrared Photon-Counting Sensor (NIRPCS), an imaging device with sufficient sensitivity to capture the spectral signatures, in the...

  13. Quantifying seasonal variation of leaf area index using near-infrared digital camera in a rice paddy

    Science.gov (United States)

    Hwang, Y.; Ryu, Y.; Kim, J.

    2017-12-01

    Digital cameras have been widely used to quantify leaf area index (LAI). Numerous simple and automatic methods have been proposed to improve digital camera based LAI estimates. However, most studies in rice paddies relied on arbitrary thresholds or complex radiative transfer models to make binary images. Moreover, only a few studies have reported continuous, automatic observation of LAI over the season in a rice paddy. The objective of this study is to quantify seasonal variations of LAI using raw near-infrared (NIR) images coupled with a histogram shape-based algorithm in a rice paddy. As vegetation highly reflects NIR light, we installed a NIR digital camera 1.8 m above the ground surface and acquired unsaturated raw format images at one-hour intervals between 15 and 80° solar zenith angle over the entire growing season in 2016 (from May to September). We applied a sub-pixel classification combined with a light scattering correction method. Finally, to confirm the accuracy of the quantified LAI, we also conducted direct (destructive sampling) and indirect (LAI-2200) manual observations of LAI once per ten days on average. Preliminary results show that NIR derived LAI agreed well with in-situ observations, but divergence tended to appear once the rice canopy was fully developed. The continuous monitoring of LAI in rice paddies will help to better understand carbon and water fluxes and to evaluate satellite based LAI products.
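
    A minimal sketch of the kind of processing chain implied above is given below: a histogram threshold (Otsu's method, used here as a stand-in for the histogram shape-based algorithm) splits an NIR frame into vegetation and background, and the resulting gap fraction is inverted with a Beer-Lambert type model. The zenith angle and leaf projection coefficient are illustrative assumptions.

```python
import numpy as np

def lai_from_nir(nir_image, zenith_deg=30.0, leaf_proj=0.5):
    """Rough LAI estimate from a single NIR frame.

    Vegetation/background are split with Otsu's histogram threshold (a
    stand-in for the paper's histogram shape-based algorithm), and the gap
    fraction P is inverted with P(theta) = exp(-G * LAI / cos(theta)).
    """
    # Otsu threshold implemented directly to avoid extra dependencies
    hist, edges = np.histogram(nir_image.ravel(), bins=256)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    omega = np.cumsum(p)
    mu = np.cumsum(p * centers)
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    threshold = centers[np.nanargmax(sigma_b)]

    # Background (soil/water) is assumed darker than vegetation in the NIR
    gap_fraction = float(np.mean(nir_image < threshold))
    gap_fraction = max(gap_fraction, 1e-6)   # avoid log(0)
    return -np.cos(np.radians(zenith_deg)) * np.log(gap_fraction) / leaf_proj
```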

  14. On the Integration of Medium Wave Infrared Cameras for Vision-Based Navigation

    Science.gov (United States)

    2015-03-01

    ... almost negligible detection by EO cameras in the dark. In order to compare the estimated SfM trajectories, the point clouds created by VisualSFM for...

  15. Picosecond camera

    International Nuclear Information System (INIS)

    Decroisette, Michel

    A Kerr cell activated by infrared pulses of a mode-locked Nd glass laser acts as an ultra-fast and periodic shutter, with an opening time of a few ps. Associated with an S.T.L. camera, it gives rise to a picosecond camera allowing us to study very fast effects [fr

  16. Quantifying levels of animal activity using camera trap data

    NARCIS (Netherlands)

    Rowcliffe, J.M.; Kays, R.; Kranstauber, B.; Carbone, C.; Jansen, P.A.

    2014-01-01

    1. Activity level (the proportion of time that animals spend active) is a behavioural and ecological metric that can provide an indicator of energetics, foraging effort and exposure to risk. However, activity level is poorly known for free-living animals because it is difficult to quantify activity
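
    One widely cited way to turn camera-trap detection times into an activity level is to fit a circular density to the times of day and, assuming all animals are active at the daily peak, take the ratio of the mean to the maximum density. The Python sketch below is a simplified illustration of that idea, not the authors' exact estimator; the kernel concentration and grid size are arbitrary choices.

```python
import numpy as np

def activity_level(detection_hours, kappa=8.0, grid_size=256):
    """Rough activity-level estimate from camera-trap detection times.

    detection_hours: times of day (0-24 h) of independent detections.
    A circular (von Mises) kernel density is fitted to the detection times
    and, assuming all animals are active at the daily peak, the estimate is
    mean density / peak density. Simplified sketch only.
    """
    theta = np.asarray(detection_hours) * 2.0 * np.pi / 24.0   # radians
    grid = np.linspace(0.0, 2.0 * np.pi, grid_size, endpoint=False)
    # Unnormalised von Mises kernel density (normalisation cancels in the ratio)
    dens = np.exp(kappa * np.cos(grid[:, None] - theta[None, :])).sum(axis=1)
    return dens.mean() / dens.max()

# Example: crepuscular activity peaking near 6 h and 18 h
rng = np.random.default_rng(0)
times = np.concatenate([rng.normal(6, 1, 200), rng.normal(18, 1, 200)]) % 24
print(activity_level(times))   # a proportion between 0 and 1
```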

  17. Nighttime Near Infrared Observations of Augustine Volcano Jan-Apr, 2006 Recorded With a Small Astronomical CCD Camera

    Science.gov (United States)

    Sentman, D.; McNutt, S.; Reyes, C.; Stenbaek-Nielsen, H.; Deroin, N.

    2006-12-01

    Nighttime observations of Augustine Volcano were made during Jan-Apr, 2006 using a small, unfiltered, astronomical CCD camera operating from Homer, Alaska. Time-lapse images of the volcano were made looking across the open water of the Cook Inlet over a slant range of ~105 km. A variety of volcanic activity was observed via near-infrared (NIR) 0.9-1.1 micron emissions, which were detectable at the upper limit of the camera passband but were otherwise invisible to the naked eye. This activity included various types of steam releases, pyroclastic flows, rockfalls and debris flows that correlated very closely with seismic measurements made from instruments located within 4 km on the volcanic island. Specifically, flow events to the east (towards the camera) produced high amplitudes on the eastern seismic stations, and events presumably to the west were stronger on western stations. The ability to detect nighttime volcanic emissions in the NIR over large horizontal distances using standard silicon CCD technology, even in the presence of weak intervening fog, came as a surprise, and is due to a confluence of several mutually reinforcing factors: (1) thermal emissions from the volcano hot enough (~1000 K) that the short-wavelength portion of the Planck radiation curve overlaps the upper portion (0.9-1.1 micron) of the sensitivity range of the silicon CCD detectors, and could thus be detected; (2) the existence of several atmospheric transmission windows within the NIR passband of the camera, allowing the emissions to propagate with relatively small attenuation through more than 10 atmospheres; and (3) in the case of fog, forward Mie scattering.
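
    The first factor can be made concrete with a small Planck-law calculation: the in-band (0.9-1.1 micron) radiance of a roughly 1000 K surface is many orders of magnitude above that of ~300 K terrain, which is why an unfiltered silicon CCD can pick it out at night. The numbers below are a rough illustration, not values from the study.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wl_m, temp_k):
    """Blackbody spectral radiance, W / m^2 / sr / m."""
    return (2 * H * C**2 / wl_m**5) / np.expm1(H * C / (wl_m * KB * temp_k))

def band_radiance(temp_k, lo_um, hi_um, n=2000):
    """Radiance integrated over a wavelength band (W / m^2 / sr)."""
    wl = np.linspace(lo_um, hi_um, n) * 1e-6
    return np.trapz(planck(wl, temp_k), wl)

# In-band (0.9-1.1 micron) radiance of a ~1000 K surface versus ~300 K terrain
print(band_radiance(1000.0, 0.9, 1.1))   # hot volcanic material: detectable
print(band_radiance(300.0, 0.9, 1.1))    # ambient background: vanishingly small
```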

  18. Near-Infrared Photon-Counting Camera for High-Sensitivity Observations

    Science.gov (United States)

    Jurkovic, Michael

    2012-01-01

    The dark current of a transferred-electron photocathode with an InGaAs absorber, responsive over the 0.9-to-1.7-micron range, must be reduced to an ultralow level suitable for low-signal spectral astrophysical measurements by lowering the temperature of the sensor incorporating the cathode. However, photocathode quantum efficiency (QE) is known to drop to zero at such low temperatures. Moreover, it had not been demonstrated that the target dark current could be reached at any temperature using existing photocathodes. Changes in the transferred-electron photocathode epistructure (with an InGaAs absorber lattice-matched to InP and exhibiting responsivity over the 0.9-to-1.7-micron range) and fabrication processes were developed and implemented that resulted in a demonstrated >13x reduction in dark current at -40 °C while retaining >95% of the approximately 25% saturated room-temperature QE. Further testing at lower temperature is needed to confirm a predicted >25 °C reduction in the cooling required to achieve an ultralow dark-current target suitable for faint spectral astronomical observations that are not otherwise possible. This reduction in dark current makes it possible to increase the integration time of the imaging sensor, thus enabling a much higher near-infrared (NIR) sensitivity than is possible with current technology. As a result, extremely faint phenomena and NIR signals emitted from distant celestial objects can now be observed and imaged (such as the dynamics of redshifting galaxies, and spectral measurements on extra-solar planets in search of water and bio-markers) that were not previously possible. In addition, the enhanced NIR sensitivity also directly benefits other NIR imaging applications, including drug and bomb detection, stand-off detection of improvised explosive devices (IEDs), Raman spectroscopy and microscopy for life/physical science applications, and semiconductor product defect detection.

  19. On-board Data Processing to Lower Bandwidth Requirements on an Infrared Astronomy Satellite: Case of Herschel-PACS Camera

    Directory of Open Access Journals (Sweden)

    Christian Reimers

    2005-09-01

    Full Text Available This paper presents a new data compression concept, “on-board processing,” for infrared astronomy, where space observatories have limited processing resources. The proposed approach has been developed and tested for the PACS camera of the European Space Agency (ESA) mission Herschel. Using lossy and lossless compression, the presented method offers a high compression ratio with a minimal loss of potentially useful scientific data. It also provides a higher signal-to-noise ratio than standard compression techniques. Furthermore, the proposed approach has low algorithmic complexity, such that it is implementable on the resource-limited hardware. The various modules of the data compression concept are discussed in detail.

  20. Large Area Divertor Temperature Measurements Using A High-speed Camera With Near-infrared FiIters in NSTX

    International Nuclear Information System (INIS)

    Lyons, B.C.; Scotti, F.; Zweben, S.J.; Gray, T.K.; Hosea, J.; Kaita, R.; Kugel, H.W.; Maqueda, R.J.; McLean, A.G.; Roquemore, A.L.; Soukhanovskii, V.A.; Taylor, G.

    2011-01-01

    Fast cameras already installed on the National Spherical Torus Experiment (NSTX) have been equipped with near-infrared (NIR) filters in order to measure the surface temperature in the lower divertor region. Such a system provides a unique combination of high speed (> 50 kHz) and wide field-of-view (> 50% of the divertor). Benchtop calibrations demonstrated the system's ability to measure thermal emission down to 330 °C. There is also, however, significant plasma light background in NSTX. Without improvements in background reduction, the current system is incapable of measuring signals below the background equivalent temperature (600 - 700 °C). Thermal signatures have been detected in cases of extreme divertor heating. It is observed that the divertor can reach temperatures around 800 °C when high harmonic fast wave (HHFW) heating is used. These temperature profiles were fit using a simple heat diffusion code, providing a measurement of the heat flux to the divertor. Comparisons to other infrared thermography systems on NSTX are made.
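
    As a very rough illustration of how a surface-temperature history constrains the heat flux, the sketch below uses the textbook semi-infinite solid solution for a constant surface heat flux rather than the heat diffusion code mentioned in the abstract; the material constants are assumed, generic graphite-like values, not parameters from the paper.

```python
import numpy as np

def heat_flux_from_surface_rise(delta_t_k, elapsed_s,
                                k=80.0, rho=1850.0, cp=1900.0):
    """Constant heat flux (W/m^2) inferred from a surface-temperature rise.

    Uses the semi-infinite solid solution for constant surface heat flux,
        dT(t) = 2 * q * sqrt(t / (pi * k * rho * cp)),
    solved for q. Default k, rho, cp are rough graphite-like values
    (assumed for illustration only).
    """
    return delta_t_k * np.sqrt(np.pi * k * rho * cp) / (2.0 * np.sqrt(elapsed_s))

# Example: a divertor tile surface rising by 500 K over 0.5 s
print(heat_flux_from_surface_rise(500.0, 0.5) / 1e6, "MW/m^2")
```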

  1. Near Real-Time Ground-to-Ground Infrared Remote-Sensing Combination and Inexpensive Visible Camera Observations Applied to Tomographic Stack Emission Measurements

    Directory of Open Access Journals (Sweden)

    Philippe de Donato

    2018-04-01

    Full Text Available Evaluation of the environmental impact of gas plumes from stack emissions at the local level requires precise knowledge of the spatial development of the cloud, its evolution over time, and quantitative analysis of each gaseous component. With extensive developments, remote-sensing ground-based technologies are becoming increasingly relevant to such an application. The difficulty of determining the exact 3-D thickness of the gas plume in real time has meant that the various gas components are mainly expressed using correlation coefficients of gas occurrences and path concentration (ppm·m). This paper focuses on a synchronous and inexpensive multi-angled approach combining three high-resolution visible cameras (GoPro Hero3) and a scanning infrared (IR) gas system (SIGIS, Bruker). Measurements were performed at an NH3-emissive industrial site (NOVACARB Society, Laneuveville-devant-Nancy, France). Visible image data were processed by a first geometrical reconstruction gOcad® protocol to build a 3-D envelope of the gas plume, which allows estimation of the plume's thickness corresponding to the 2-D infrared grid measurements. NH3 concentration data could thereby be expressed in ppm and were interpolated using a second gOcad® interpolation algorithm, allowing a precise volume visualization of the NH3 distribution in the flue gas stream.

  2. Camera pose refinement by matching uncertain 3D building models with thermal infrared image sequences for high quality texture extraction

    Science.gov (United States)

    Iwaszczuk, Dorota; Stilla, Uwe

    2017-10-01

    Thermal infrared (TIR) images are often used to picture damaged and weak spots in the insulation of the building hull, and are widely used in thermal inspections of buildings. Such inspection over large-scale areas can be carried out by combining TIR imagery with 3D building models. This combination can be achieved via texture mapping. Automation of texture mapping avoids time-consuming manual imaging and analysis of each face independently. It also provides a spatial reference for façade structures extracted in the thermal textures. In order to capture all faces, including the roofs, façades, and façades in the inner courtyard, an oblique-looking camera mounted on a flying platform is used. Direct geo-referencing is usually not sufficient for precise texture extraction. In addition, 3D building models also have uncertain geometry. In this paper, therefore, a methodology for co-registration of uncertain 3D building models with airborne oblique view images is presented. For this purpose, a line-based model-to-image matching is developed, in which the uncertainties of the 3D building model, as well as of the image features, are considered. Matched linear features are used for the refinement of the exterior orientation parameters of the camera in order to ensure optimal co-registration. Moreover, this study investigates whether line tracking through the image sequence supports the matching. The accuracy of the extraction and the quality of the textures are assessed. For this purpose, appropriate quality measures are developed. The tests showed good results on co-registration, particularly in cases where tracking between neighboring frames had been applied.

  3. Comparison of Near-Infrared Imaging Camera Systems for Intracranial Tumor Detection.

    Science.gov (United States)

    Cho, Steve S; Zeh, Ryan; Pierce, John T; Salinas, Ryan; Singhal, Sunil; Lee, John Y K

    2018-04-01

    Distinguishing neoplasm from normal brain parenchyma intraoperatively is critical for the neurosurgeon. 5-Aminolevulinic acid (5-ALA) has been shown to improve gross total resection and progression-free survival but has limited availability in the USA. Near-infrared (NIR) fluorescence has advantages over visible light fluorescence, with greater tissue penetration and reduced background fluorescence. In order to prepare for the increasing number of NIR fluorophores that may be used in molecular imaging trials, we chose to compare a state-of-the-art neurosurgical microscope (System 1) to one of the commercially available NIR visualization platforms (System 2). Serial dilutions of indocyanine green (ICG) were imaged with both systems in the same environment. Each system's sensitivity and dynamic range for NIR fluorescence were documented and analyzed. In addition, brain tumors from six patients were imaged with both systems and analyzed. In vitro, System 2 demonstrated greater ICG sensitivity and detection range (System 1: 1.5-251 μg/l versus System 2: 0.99-503 μg/l). Similarly, in vivo, System 2 demonstrated a signal-to-background ratio (SBR) of 2.6 ± 0.63 before dura opening, 5.0 ± 1.7 after dura opening, and 6.1 ± 1.9 after tumor exposure. In contrast, System 1 could not easily detect ICG fluorescence prior to dura opening, with an SBR of 1.2 ± 0.15. After the dura was reflected, the SBR increased to 1.4 ± 0.19, and upon exposure of the tumor the SBR increased to 1.8 ± 0.26. Dedicated NIR imaging platforms can outperform conventional microscopes in intraoperative NIR detection. Future microscopes with improved NIR detection capabilities could enhance the use of NIR fluorescence to detect neoplasm and improve patient outcomes.
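
    The signal-to-background ratio used above is simply the mean NIR fluorescence inside a tumor region of interest divided by the mean inside a background region. A minimal sketch, with hypothetical user-supplied masks, is:

```python
import numpy as np

def signal_to_background_ratio(nir_image, tumor_mask, background_mask):
    """Mean fluorescence in the tumor ROI divided by the mean background signal.

    tumor_mask and background_mask are boolean arrays of the same shape as
    the NIR image; how the ROIs are drawn (e.g., by the surgeon on the
    visible-light view) is an assumption of this sketch.
    """
    signal = np.asarray(nir_image)[tumor_mask].mean()
    background = np.asarray(nir_image)[background_mask].mean()
    return signal / background
```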

  4. Robust Vehicle Detection under Various Environments to Realize Road Traffic Flow Surveillance Using an Infrared Thermal Camera

    Science.gov (United States)

    Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki

    2015-01-01

    To realize road traffic flow surveillance under various environments which contain poor visibility conditions, we have already proposed two vehicle detection methods using thermal images taken with an infrared thermal camera. The first method uses pattern recognition for the windshields and their surroundings to detect vehicles. However, the first method's vehicle detection accuracy decreases in the winter season. To maintain high vehicle detection accuracy in all seasons, we developed the second method. The second method uses tires' thermal energy reflection areas on a road as the detection targets. The second method did not achieve high detection accuracy for vehicles on left-hand and right-hand lanes except for the two center lanes. Therefore, we have developed a new method based on the second method to increase the vehicle detection accuracy. This paper proposes the new method and shows that the detection accuracy for vehicles on all lanes is 92.1%. Therefore, by combining the first method and the new method, high vehicle detection accuracies are maintained under various environments, and road traffic flow surveillance can be realized.
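
    The first method's windshield detector is a Viola-Jones style cascade applied to thermal frames. The sketch below shows what such a detection step could look like with OpenCV in Python; the cascade file name is hypothetical (a cascade would have to be trained on thermal windshield or tire-reflection patches), and the normalization and detection parameters are assumptions rather than values from the paper.

```python
import cv2

# Hypothetical cascade trained on thermal vehicle patches; OpenCV only ships
# cascades for faces etc., so this file name is an assumption.
cascade = cv2.CascadeClassifier("thermal_vehicle_cascade.xml")

def detect_vehicles(thermal_frame):
    """Run a Viola-Jones style detector on a thermal frame."""
    # Rescale raw radiometric counts to 8-bit before detection
    gray = cv2.normalize(thermal_frame, None, 0, 255, cv2.NORM_MINMAX)
    gray = gray.astype("uint8")
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4,
                                    minSize=(24, 24))

# Each returned (x, y, w, h) box would then be tracked frame-to-frame to
# derive traffic-flow statistics such as counts and speeds.
```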

  5. HUBBLE SPACE TELESCOPE/NEAR-INFRARED CAMERA AND MULTI-OBJECT SPECTROMETER OBSERVATIONS OF THE GLIMPSE9 STELLAR CLUSTER

    International Nuclear Information System (INIS)

    Messineo, Maria; Figer, Donald F.; Davies, Ben; Trombley, Christine; Kudritzki, R. P.; Rich, R. Michael; MacKenty, John

    2010-01-01

    We present Hubble Space Telescope/Near-Infrared Camera and Multi-Object Spectrometer photometry, and low-resolution K-band spectra, of the GLIMPSE9 stellar cluster. The newly obtained color-magnitude diagram shows a cluster sequence with H - K_S ≈ 1 mag, indicating an interstellar extinction A_Ks = 1.6 ± 0.2 mag. The spectra of the three brightest stars show deep CO band heads, which indicate red supergiants with spectral type M1-M2. Two O9-B2 supergiants are also identified, which yield a spectrophotometric distance of 4.2 ± 0.4 kpc. Presuming that the population is coeval, we derive an age between 15 and 27 Myr, and a total cluster mass of 1600 ± 400 M_sun, integrated down to 1 M_sun. In the vicinity of GLIMPSE9 are several H II regions and supernova remnants, all of which (including GLIMPSE9) are probably associated with a giant molecular cloud (GMC) in the inner galaxy. GLIMPSE9 probably represents one episode of massive star formation in this GMC. We have identified several other candidate stellar clusters of the same complex.

  6. Robust Vehicle Detection under Various Environmental Conditions Using an Infrared Thermal Camera and Its Application to Road Traffic Flow Monitoring

    Directory of Open Access Journals (Sweden)

    Toshiyuki Nakamiya

    2013-06-01

    Full Text Available We have already proposed a method for detecting vehicle positions and their movements (henceforth referred to as “our previous method” using thermal images taken with an infrared thermal camera. Our experiments have shown that our previous method detects vehicles robustly under four different environmental conditions which involve poor visibility conditions in snow and thick fog. Our previous method uses the windshield and its surroundings as the target of the Viola-Jones detector. Some experiments in winter show that the vehicle detection accuracy decreases because the temperatures of many windshields approximate those of the exterior of the windshields. In this paper, we propose a new vehicle detection method (henceforth referred to as “our new method”. Our new method detects vehicles based on tires’ thermal energy reflection. We have done experiments using three series of thermal images for which the vehicle detection accuracies of our previous method are low. Our new method detects 1,417 vehicles (92.8% out of 1,527 vehicles, and the number of false detection is 52 in total. Therefore, by combining our two methods, high vehicle detection accuracies are maintained under various environmental conditions. Finally, we apply the traffic information obtained by our two methods to traffic flow automatic monitoring, and show the effectiveness of our proposal.

  7. Robust Vehicle Detection under Various Environments to Realize Road Traffic Flow Surveillance Using an Infrared Thermal Camera

    Directory of Open Access Journals (Sweden)

    Yoichiro Iwasaki

    2015-01-01

    Full Text Available To realize road traffic flow surveillance under various environments which contain poor visibility conditions, we have already proposed two vehicle detection methods using thermal images taken with an infrared thermal camera. The first method uses pattern recognition for the windshields and their surroundings to detect vehicles. However, the first method decreases the vehicle detection accuracy in winter season. To maintain high vehicle detection accuracy in all seasons, we developed the second method. The second method uses tires’ thermal energy reflection areas on a road as the detection targets. The second method did not achieve high detection accuracy for vehicles on left-hand and right-hand lanes except for two center-lanes. Therefore, we have developed a new method based on the second method to increase the vehicle detection accuracy. This paper proposes the new method and shows that the detection accuracy for vehicles on all lanes is 92.1%. Therefore, by combining the first method and the new method, high vehicle detection accuracies are maintained under various environments, and road traffic flow surveillance can be realized.

  8. Robust vehicle detection under various environments to realize road traffic flow surveillance using an infrared thermal camera.

    Science.gov (United States)

    Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki

    2015-01-01

    To realize road traffic flow surveillance under various environments which contain poor visibility conditions, we have already proposed two vehicle detection methods using thermal images taken with an infrared thermal camera. The first method uses pattern recognition for the windshields and their surroundings to detect vehicles. However, the first method decreases the vehicle detection accuracy in winter season. To maintain high vehicle detection accuracy in all seasons, we developed the second method. The second method uses tires' thermal energy reflection areas on a road as the detection targets. The second method did not achieve high detection accuracy for vehicles on left-hand and right-hand lanes except for two center-lanes. Therefore, we have developed a new method based on the second method to increase the vehicle detection accuracy. This paper proposes the new method and shows that the detection accuracy for vehicles on all lanes is 92.1%. Therefore, by combining the first method and the new method, high vehicle detection accuracies are maintained under various environments, and road traffic flow surveillance can be realized.

  9. Robust vehicle detection under various environmental conditions using an infrared thermal camera and its application to road traffic flow monitoring.

    Science.gov (United States)

    Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki

    2013-06-17

    We have already proposed a method for detecting vehicle positions and their movements (henceforth referred to as "our previous method") using thermal images taken with an infrared thermal camera. Our experiments have shown that our previous method detects vehicles robustly under four different environmental conditions which involve poor visibility conditions in snow and thick fog. Our previous method uses the windshield and its surroundings as the target of the Viola-Jones detector. Some experiments in winter show that the vehicle detection accuracy decreases because the temperatures of many windshields approximate those of the exterior of the windshields. In this paper, we propose a new vehicle detection method (henceforth referred to as "our new method"). Our new method detects vehicles based on tires' thermal energy reflection. We have done experiments using three series of thermal images for which the vehicle detection accuracies of our previous method are low. Our new method detects 1,417 vehicles (92.8%) out of 1,527 vehicles, and the number of false detection is 52 in total. Therefore, by combining our two methods, high vehicle detection accuracies are maintained under various environmental conditions. Finally, we apply the traffic information obtained by our two methods to traffic flow automatic monitoring, and show the effectiveness of our proposal.

  10. Superimpose methods for uncooled infrared camera applied to the micro-scale thermal characterization of composite materials

    Science.gov (United States)

    Morikawa, Junko

    2015-05-01

    A mobile apparatus for quantitative micro-scale thermography using a micro-bolometer was developed, based on our original techniques such as an achromatic lens design to capture micro-scale images in the long-wave infrared, video signal superimposing for real-time emissivity correction, and pseudo-acceleration of the time frame. The instrument was designed to fit in a 17 cm x 28 cm x 26 cm carrying box. The video signal synthesizer enables direct digital recording of the monitored temperature or positioning data. The digital signal embedded in each image is decoded on read-out; the protocol to encode/decode the measured data was originally defined. The mixed signals of the IR camera and the superimposed data were applied to pixel-by-pixel emissivity corrections and to pseudo-acceleration of periodic thermal phenomena. Because the emissivity of industrial materials and biological tissues is usually inhomogeneous, each pixel has a different temperature dependence. The time-scale resolution for periodic thermal events was improved with the pseudo-acceleration algorithm, which reduces noise by integrating multiple image frames while keeping the time resolution. The anisotropic thermal properties of some composite materials, such as cellular-plastic thermal insulation materials and biometric composite materials, were analyzed using these techniques.
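
    The pixel-by-pixel emissivity correction mentioned above can be illustrated with the usual opaque-surface radiometric model, in which the measured signal mixes emitted and reflected components. The Python sketch below is a generic illustration under that assumed model, not the instrument's actual processing chain.

```python
import numpy as np

def emissivity_corrected_signal(measured, emissivity_map, ambient_signal):
    """Per-pixel emissivity correction of a radiometric LWIR signal.

    Assumes the simple opaque-surface model
        S_meas = eps * S_obj + (1 - eps) * S_ambient,
    where S_ambient is the blackbody-equivalent signal of the reflected
    surroundings. The per-pixel emissivity map would come from the
    superimposed calibration data described in the abstract; here it is
    simply an input.
    """
    eps = np.clip(emissivity_map, 1e-3, 1.0)   # guard against division by ~0
    return (measured - (1.0 - eps) * ambient_signal) / eps
```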

  11. Development of transformation bands in TiNi SMA for various stress and strain rates studied by a fast and sensitive infrared camera

    International Nuclear Information System (INIS)

    Pieczyska, E A; Kulasinski, K; Tobushi, H

    2013-01-01

    TiNi shape memory alloy (SMA) was subjected to tension at various strain rates for stress- and strain-controlled tests. The nucleation, development and saturation of the stress-induced martensitic transformation were investigated, based on the specimen temperature changes, measured by a fast and sensitive infrared camera. It was found that the initial, macroscopically homogeneous phase transformation occurs at the same stress level for all strain rates applied, regardless of the loading manner, while the stress of the localized transformation increases with the strain rate. At higher strain rate, a more dynamic course of the transformation process was observed, revealed in the creation of numerous fine transformation bands. An inflection point was noticed on the stress–strain curve, dividing the transformation range into two stages: the first heterogeneous, where transformation bands nucleate and evolve throughout the sample; the second, where the bands overlap, related to significant temperature increase and an upswing region of the curve. In the final part of the SMA loading a decrease of the average sample temperature revealed the saturation stage of the transformation. It was also observed that nucleation of the localized martensitic forward transformation takes place in the weakest area of the sample in both approaches, whereas the reverse transformation always initiates in its central part. (paper)

  12. AIRS/Aqua Level 1C Infrared (IR) resampled and corrected radiances V006

    Data.gov (United States)

    National Aeronautics and Space Administration — The AIRS Infrared (IR) level 1C data set contains AIRS infrared calibrated and geolocated radiances in W/m2/micron/ster. This data set is generated from AIRS level...

  13. Multiple-aperture optical design for micro-level cameras using 3D-printing method

    Science.gov (United States)

    Peng, Wei-Jei; Hsu, Wei-Yao; Cheng, Yuan-Chieh; Lin, Wen-Lung; Yu, Zong-Ru; Chou, Hsiao-Yu; Chen, Fong-Zhi; Fu, Chien-Chung; Wu, Chong-Syuan; Huang, Chao-Tsung

    2018-02-01

    The design of an ultra-miniaturized camera fabricated by 3D-printing technology directly onto the complementary metal-oxide semiconductor (CMOS) imaging sensor is presented in this paper. The 3D-printed micro-optics are manufactured using femtosecond two-photon direct laser writing, and the figure error, which can reach submicron accuracy, is suitable for the optical system. Because the size of the micro-level camera is approximately several hundred micrometers, the resolution is greatly reduced and is limited by the Nyquist frequency of the pixel pitch. To improve the resolution, a single lens can be replaced by multiple-aperture lenses with dissimilar fields of view (FOV); stitching sub-images with different FOV can then achieve a high resolution within the central region of the image. The reason is that the angular resolution of a lens with a smaller FOV is higher than that of one with a larger FOV, so after stitching the angular resolution of the central area can be several times that of the outer area. For the same image circle, the image quality of the central area of the multi-lens system is significantly superior to that of a single lens. The foveated image obtained by stitching FOVs breaks the resolution limitation of the ultra-miniaturized imaging system, enabling applications such as biomedical endoscopy, optical sensing, and machine vision. In this study, the ultra-miniaturized camera with multi-aperture optics is designed and simulated for optimum optical performance.

  14. Evaluation of an infrared camera and X-ray system using implanted fiducials in patients with lung tumors for gated radiation therapy

    International Nuclear Information System (INIS)

    Willoughby, Twyla R.; Forbes, Alan R.; Buchholz, Daniel; Langen, Katja M.; Wagner, Thomas H.; Zeidan, Omar A.; Kupelian, Patrick A.; Meeks, Sanford L.

    2006-01-01

    Purpose: To report on the initial clinical use of a commercially available system to deliver gated treatment using implanted fiducials, in-room kV X-rays, and an infrared camera tracking system. Methods and Materials: ExacTrac Adaptive Gating from BrainLab is a localization system using infrared cameras and X-rays. The gating signal is the patient's breathing pattern, obtained from infrared reflectors on the patient. kV X-rays of an implanted fiducial are synchronized to the breathing pattern. After localization and shift of the patient to isocenter, the breathing pattern is used to gate radiation delivery. Feasibility tests included localization accuracy, radiation output constancy, and dose distributions with gating. Clinical experience is reported on the treatment of patients with small lung lesions. Results: Localization accuracy of a moving target with gating was 1.7 mm. Dose constancy measurements showed insignificant change in output with gating. Dose distributions on moving targets improved with gating. Eleven patients with lung lesions were implanted with a 20 mm x 0.7 mm gold coil (Visicoil). The implanted fiducial was used to localize and treat the patients with gating. Treatment planning and repeat computed tomographic scans showed that the change in the center of the gross target volume (GTV) relative to the implanted marker averaged 2.47 mm, due in part to asymmetric tumor shrinkage. Conclusion: ExacTrac Adaptive Gating has been used to treat lung lesions. Initial system evaluation verified its accuracy and usability. Implanted fiducials are visible in X-rays and did not migrate.
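
    At its core, amplitude gating enables the beam only while the breathing signal sits inside a preset window. The sketch below illustrates that logic only; the window values and the normalised trace are invented for the example, and none of the fiducial/X-ray verification described in the abstract is modelled.

```python
def gate_signal(breathing_amplitude, window_low, window_high):
    """True (beam enabled) while the breathing amplitude is inside the gate.

    A minimal sketch of amplitude-based gating: the window would be chosen
    on the planning breathing trace (e.g., around end-exhale); the values
    used here are purely illustrative.
    """
    return window_low <= breathing_amplitude <= window_high

# Example: enable the beam only near end-exhale of a normalised trace
trace = [0.9, 0.6, 0.3, 0.1, 0.05, 0.2, 0.5, 0.8]
beam_on = [gate_signal(a, 0.0, 0.25) for a in trace]
print(beam_on)   # [False, False, False, True, True, True, False, False]
```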

  15. Fulfilling the pedestrian protection directive using a long-wavelength infrared camera designed to meet both performance and cost targets

    Science.gov (United States)

    Källhammer, Jan-Erik; Pettersson, Håkan; Eriksson, Dick; Junique, Stéphane; Savage, Susan; Vieider, Christian; Andersson, Jan Y.; Franks, John; Van Nylen, Jan; Vercammen, Hans; Kvisterøy, Terje; Niklaus, Frank; Stemme, Göran

    2006-04-01

    Pedestrian fatalities account for around 15% of traffic fatalities in Europe. A proposed EU regulation requires the automotive industry to develop technologies that will substantially decrease the risk to Vulnerable Road Users when hit by a vehicle. Automatic Brake Assist systems, activated by a suitable sensor, will reduce the speed of the vehicle before the impact, independent of any driver interaction. Long Wavelength Infrared technology is an ideal candidate for such sensors, but requires a significant cost reduction. The cost targets necessary for automotive series applications are well below the cost of systems available today. Uncooled bolometer arrays are the most mature Long Wave Infrared technology with low-cost potential. Analyses show that sensor size and production yield, along with vacuum packaging and the optical components, are the main cost drivers. A project has been started to design a new Long Wave Infrared system with a tenfold cost reduction potential, optimized for the pedestrian protection requirement. It will take advantage of progress in Micro Electro-Mechanical Systems and Long Wave Infrared optics to keep the cost down. Deployable and pre-impact braking systems can become effective alternatives to passive impact protection solutions for fulfilling the EU pedestrian protection regulation. Low-cost Long Wave Infrared sensors will be an important enabler in making such systems cost competitive, allowing high market penetration.

  16. Development of an omnidirectional gamma-ray imaging Compton camera for low-radiation-level environmental monitoring

    Science.gov (United States)

    Watanabe, Takara; Enomoto, Ryoji; Muraishi, Hiroshi; Katagiri, Hideaki; Kagaya, Mika; Fukushi, Masahiro; Kano, Daisuke; Satoh, Wataru; Takeda, Tohoru; Tanaka, Manobu M.; Tanaka, Souichi; Uchida, Tomohisa; Wada, Kiyoto; Wakamatsu, Ryo

    2018-02-01

    We have developed an omnidirectional gamma-ray imaging Compton camera for environmental monitoring at low levels of radiation. The camera consists of only six 3.5 cm CsI(Tl) scintillator cubes, each of which is read out by a super-bialkali photomultiplier tube (PMT). Our camera enables the visualization of the position of gamma-ray sources in all directions (∼4π sr) over a wide energy range between 300 and 1400 keV. The angular resolution (σ) was found to be ∼11°, which was realized using an image-sharpening technique. A high detection efficiency of 18 cps/(µSv/h) for 511 keV (1.6 cps/MBq at 1 m) was achieved, indicating the capability of this camera to visualize hotspots in areas with low-radiation-level contamination, from the order of µSv/h down to natural background levels. Our proposed technique can easily be used as a low-radiation-level imaging monitor in radiation control areas, such as medical and accelerator facilities.
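
    For each event, a Compton camera constrains the source direction to a cone whose opening angle follows from the energies deposited in the scattering and absorbing crystals. The function below applies that standard kinematic relation; it is a generic illustration (assuming full absorption of the incident gamma), not the authors' reconstruction code.

```python
import numpy as np

M_E_C2_KEV = 511.0   # electron rest energy in keV

def compton_cone_angle(e_scatter_kev, e_absorb_kev):
    """Opening angle (deg) of the Compton cone from two energy deposits.

    e_scatter_kev: energy deposited in the first (scattering) crystal.
    e_absorb_kev:  energy of the scattered photon absorbed in the second
                   crystal (full absorption of the incident gamma assumed).
    Standard Compton kinematics:
        cos(theta) = 1 - m_e c^2 * (1/E' - 1/E0), with E0 = E1 + E2, E' = E2.
    """
    e0 = e_scatter_kev + e_absorb_kev
    cos_theta = 1.0 - M_E_C2_KEV * (1.0 / e_absorb_kev - 1.0 / e0)
    if not -1.0 <= cos_theta <= 1.0:
        return None          # kinematically forbidden event, reject it
    return np.degrees(np.arccos(cos_theta))

# Example: a 662 keV gamma depositing 200 keV in the scatterer
print(compton_cone_angle(200.0, 462.0))   # about 48 degrees
```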

  17. Clinical usefulness of augmented reality using infrared camera based real-time feedback on gait function in cerebral palsy: a case study.

    Science.gov (United States)

    Lee, Byoung-Hee

    2016-04-01

    [Purpose] This study investigated the effects of real-time feedback using infrared camera recognition technology-based augmented reality in gait training for children with cerebral palsy. [Subjects] Two subjects with cerebral palsy were recruited. [Methods] In this study, augmented reality based real-time feedback training was conducted for the subjects in two 30-minute sessions per week for four weeks. Spatiotemporal gait parameters were used to measure the effect of augmented reality-based real-time feedback training. [Results] Velocity, cadence, bilateral step and stride length, and functional ambulation improved after the intervention in both cases. [Conclusion] Although additional follow-up studies of the augmented reality based real-time feedback training are required, the results of this study demonstrate that it improved the gait ability of two children with cerebral palsy. These findings suggest a variety of applications of conservative therapeutic methods which require future clinical trials.

  18. Magnetic configuration effects on the edge heat flux in the limiter plasma on W7-X measured using the infrared camera and the combined probe

    Science.gov (United States)

    P, DREWS; H, NIEMANN; J, COSFELD; Y, GAO; J, GEIGER; O, GRULKE; M, HENKEL; D, HÖSCHEN; K, HOLLFELD; C, KILLER; A, KRÄMER-FLECKEN; Y, LIANG; S, LIU; D, NICOLAI; O, NEUBAUER; M, RACK; B, SCHWEER; G, SATHEESWARAN; L, RUDISCHHAUSER; N, SANDRI; N, WANG; the W7-X Team

    2018-05-01

    Controlling the heat and particle fluxes in the plasma edge and on the plasma facing components is important for the safe and effective operation of every magnetically confined fusion device. This was attempted on Wendelstein 7-X in the first operational campaign by modifying the magnetic configuration with the trim coils and by tuning the field coil currents, commonly named an iota scan. Ideally, the heat loads on the five limiters are equal. However, they differ between limiters and are non-uniform, due to the (relatively small) error fields caused by the misalignment of components. It is therefore necessary to study the influence of the applied error fields and of changes in the magnetic configuration on the transport of heat and particles in the plasma edge. In this paper the upstream measurements conducted with the combined probe are compared to the downstream measurements with the DIAS infrared camera on the limiter.

  19. Masterpieces unmasked: New high-resolution infrared cameras produce rich, detailed images of artwork, and create new controversies

    CERN Document Server

    Marshall, J

    2002-01-01

    Luca Pezzati is a physicist who heads a group called Art Diagnostics, which is part of the Opificio delle Pietre Dure, an institute devoted to the research and conservation of artworks in Italy. Pezzati and his group use a high-resolution infrared scanning device to produce colour images of what lies below the surface of paintings. Their scanner is able to produce the best-known quality of images without harming the painting under examination (1 page).

  20. Aluminum-coated optical fibers as efficient infrared timing fiducial photocathodes for synchronizing x-ray streak cameras

    International Nuclear Information System (INIS)

    Koch, J.A.; MacGowan, B.J.

    1991-01-01

    The timing fiducial system at the Nova Two-Beam Facility allows time-resolved x-ray and optical streak camera data from laser-produced plasmas to be synchronized to within 30 ps. In this system, an Al-coated optical fiber is inserted into an aperture in the cathode plate of each streak camera. The coating acts as a photocathode for a low-energy pulse of 1ω (λ = 1.054 μm) light which is synchronized to the main Nova beam. The use of the fundamental (1ω) for this fiducial pulse has been found to offer significant advantages over the use of the 2ω second harmonic (λ = 0.53 μm). These advantages include brighter signals, greater reliability, and a higher relative damage threshold, allowing routine use without fiber replacement. The operation of the system is described, and experimental data and interpretations are discussed which suggest that the electron production in the Al film is due to thermionic emission. The results of detailed numerical simulations of the relevant thermal processes, undertaken to model the response of the coated fiber to 1ω laser pulses, are also presented, which give qualitative agreement with experimental data. Quantitative discrepancies between the modeling results and the experimental data are discussed, and suggestions for further research are given

  1. Using a 3D profiler and infrared camera to monitor oven loading in fully cooked meat operations

    Science.gov (United States)

    Stewart, John; Giorges, Aklilu

    2009-05-01

    Ensuring meat is fully cooked is an important food safety issue for operations that produce "ready to eat" products. In order to kill harmful pathogens like Salmonella, all of the product must reach a minimum threshold temperature. Producers typically overcook the majority of the product to ensure meat in the most difficult scenario reaches the desired temperature. A difficult scenario can be caused by an especially thick piece of meat or by a surge of product into the process. Overcooking wastes energy, degrades product quality, lowers the maximum throughput rate of the production line and decreases product yield. At typical production rates of 6000 lbs/hour, these losses from overcooking can have a significant cost impact on producers. A wide area 3D camera coupled with a thermal camera was used to measure the thermal mass variability of chicken breasts in a cooking process. Several types of variability are considered including time varying thermal mass (mass x temperature / time), variation in individual product geometry and variation in product temperature. The automatic identification of product arrangement issues that affect cooking such as overlapping product and folded products is also addressed. A thermal model is used along with individual product geometry and oven cook profiles to predict the percentage of product that will be overcooked and to identify products that may not fully cook in a given process.

  2. Detection of water leakage in buried pipes using infrared technology; a comparative study of using high and low resolution infrared cameras for evaluating distant remote detection

    OpenAIRE

    Shakmak, B; Al-Habaibeh, A

    2015-01-01

    Water is one of the most precious commodities around the world. However, a significant amount of water is lost daily in many countries through broken and leaking pipes. This paper investigates the use of low and high resolution infrared systems to detect water leakage in relatively dry countries. The overall aim is to develop a non-contact and high speed system that could be used to detect leakage in pipes remotely via the effect of the change in humidity on the temperature of the ground due to...

  3. Planetcam: A Visible And Near Infrared Lucky-imaging Camera To Study Planetary Atmospheres And Solar System Objects

    Science.gov (United States)

    Sanchez-Lavega, Agustin; Rojas, J.; Hueso, R.; Perez-Hoyos, S.; de Bilbao, L.; Murga, G.; Ariño, J.; Mendikoa, I.

    2012-10-01

    PlanetCam is a two-channel fast-acquisition and low-noise camera designed for multispectral study of the atmospheres of the planets (Venus, Mars, Jupiter, Saturn, Uranus and Neptune) and the satellite Titan at high temporal and spatial resolution, simultaneously in visible (0.4-1 μm) and NIR (1-2.5 μm) channels. This is accomplished by means of a dichroic beam splitter that separates both beams, directing them onto two different detectors. Each detector has filter wheels corresponding to the characteristic absorption bands of each planetary atmosphere. Images are acquired and processed using the “lucky imaging” technique, in which several thousand images of the same object are obtained in a short time interval, co-registered and ranked in terms of image quality to reconstruct a high-resolution, ideally diffraction-limited image of the object. Those images will also be calibrated in terms of intensity and absolute reflectivity. The camera will be tested at the 50.2 cm telescope of the Aula EspaZio Gela (Bilbao) and then commissioned at the 1.05 m telescope at Pic-du-Midi Observatory (France) and at the 1.23 m telescope at Calar Alto Observatory in Spain. Among the initially planned research targets are: (1) the vertical structure of the clouds and hazes in the planets and their scales of variability; (2) the meteorology, dynamics and global winds and their scales of variability in the planets. PlanetCam is also expected to perform studies of other Solar System and astrophysical objects. Acknowledgments: This work was supported by the Spanish MICIIN project AYA2009-10701 with FEDER funds, by Grupos Gobierno Vasco IT-464-07 and by Universidad País Vasco UPV/EHU through program UFI11/55.
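
    The lucky-imaging step described above can be sketched in a few lines: score each frame for sharpness, keep the best fraction, register them to a reference and average. The Python sketch below uses a gradient-variance sharpness metric and integer-pixel FFT registration as stand-ins for whatever the PlanetCam pipeline actually uses; keep_fraction and the metric are assumptions.

```python
import numpy as np

def sharpness(frame):
    """Simple sharpness metric: variance of the image gradient magnitude."""
    gy, gx = np.gradient(frame.astype(float))
    return (gx**2 + gy**2).var()

def integer_shift(ref, frame):
    """Integer (dy, dx) shift aligning frame to ref via cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts into a signed range
    dy = dy - ref.shape[0] if dy > ref.shape[0] // 2 else dy
    dx = dx - ref.shape[1] if dx > ref.shape[1] // 2 else dx
    return dy, dx

def lucky_stack(frames, keep_fraction=0.1):
    """Select, register and average the sharpest frames of a burst.

    A minimal sketch of the lucky-imaging pipeline (integer-pixel alignment
    only); a real pipeline would also calibrate the frames and interpolate
    to sub-pixel accuracy.
    """
    frames = np.asarray(frames, dtype=float)
    order = np.argsort([sharpness(f) for f in frames])[::-1]
    best = frames[order[: max(1, int(keep_fraction * len(frames)))]]
    ref = best[0]
    aligned = []
    for f in best:
        dy, dx = integer_shift(ref, f)
        aligned.append(np.roll(f, (dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)
```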

  4. Nimbus-4 Infrared Interferometer Spectrometer (IRIS) Level 1 Radiance Data V001

    Data.gov (United States)

    National Aeronautics and Space Administration — The Nimbus-4 Infrared Interferometer Spectrometer (IRIS) Level 1 Radiance Data contain thermal emissions of the Earth's atmosphere at wave numbers between 400 and...

  5. Near infrared thermography by CCD cameras and application to first wall components of Tore Supra tokamak; Thermographie proche infrarouge par cameras CCD et application aux composants de premiere paroi du tokamak Tore Supra

    Energy Technology Data Exchange (ETDEWEB)

    Moreau, F.

    1996-06-07

    In the Tokamak TORE-SUPRA, the plasma facing components absorb and evacuate (by active cooling) high power fluxes (up to 10 MW/m²). The study of their thermal behavior is essential for the success of the controlled thermonuclear fusion programme. The first part is devoted to the study of power deposition on the TORE-SUPRA actively cooled limiters. A model of power deposition on one of the limiters is developed. It takes into account the magnetic topology and a description of the plasma edge. The model is validated with experimental calorimetric data obtained during a series of shots, which allows the measured surface temperatures to be compared with the predicted ones. The main purpose of this thesis was to evaluate and develop a new temperature measurement system. It works in the near infrared range (890 nm) and is designed to complement the existing thermographic diagnostic of TORE-SUPRA. By using the radiation laws (for a blackbody and the plasma) and the laboratory calibration, one can estimate the surface temperature of the observed object. We evaluate the performance and limits of such a device in the harsh conditions encountered in a Tokamak environment. On the one hand, in a quasi-ideal situation, this analysis shows that the measurement range is 600 deg. C to 2500 deg. C. On the other hand, when the plasma radiation is taken into account (for an averaged central plasma density of 6×10^19 m^-3), the minimum measurable surface temperature rises to 900 deg. C instead of 700 deg. C. In the near future, with the development of IR-CCD cameras working in the near infrared range up to 2 micrometers, it will be possible to keep the same good spatial resolution while lowering the temperature limit down to 150 deg. C. The last section deals with a number of computer tools developed to process the images obtained from experiments on TORE-SUPRA. A pattern recognition application was developed to detect a complex plasma iso-intensity structure. 87 refs.

  6. IrisDenseNet: Robust Iris Segmentation Using Densely Connected Fully Convolutional Networks in the Images by Visible Light and Near-Infrared Light Camera Sensors.

    Science.gov (United States)

    Arsalan, Muhammad; Naqvi, Rizwan Ali; Kim, Dong Seop; Nguyen, Phong Ha; Owais, Muhammad; Park, Kang Ryoung

    2018-05-10

    The recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices. Similarly, iris recognition is now much needed in unconstrained scenarios with accuracy. These environments make the acquired iris image exhibit occlusion, low resolution, blur, unusual glint, ghost effect, and off-angles. The prevailing segmentation algorithms cannot cope with these constraints. In addition, owing to the unavailability of near-infrared (NIR) light, iris segmentation in visible-light environments is challenging because of the noise of visible light. Deep learning with convolutional neural networks (CNN) has brought a considerable breakthrough in various applications. To address the iris segmentation issues in challenging situations with visible light and near-infrared light camera sensors, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even with inferior-quality images by using better information gradient flow between the dense blocks. In the experiments conducted, five datasets of visible light and NIR environments were used. For the visible light environment, the noisy iris challenge evaluation part-II (NICE-II, selected from the UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE-I) datasets were used. For the NIR environment, the Institute of Automation, Chinese Academy of Sciences (CASIA) v4.0 interval, CASIA v4.0 distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed the optimal segmentation of the proposed IrisDenseNet and its excellent performance over existing algorithms for all five datasets.

  7. IrisDenseNet: Robust Iris Segmentation Using Densely Connected Fully Convolutional Networks in the Images by Visible Light and Near-Infrared Light Camera Sensors

    Directory of Open Access Journals (Sweden)

    Muhammad Arsalan

    2018-05-01

    Full Text Available The recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices. Similarly, iris recognition is now much needed in unconstraint scenarios with accuracy. These environments make the acquired iris image exhibit occlusion, low resolution, blur, unusual glint, ghost effect, and off-angles. The prevailing segmentation algorithms cannot cope with these constraints. In addition, owing to the unavailability of near-infrared (NIR light, iris recognition in visible light environment makes the iris segmentation challenging with the noise of visible light. Deep learning with convolutional neural networks (CNN has brought a considerable breakthrough in various applications. To address the iris segmentation issues in challenging situations by visible light and near-infrared light camera sensors, this paper proposes a densely connected fully convolutional network (IrisDenseNet, which can determine the true iris boundary even with inferior-quality images by using better information gradient flow between the dense blocks. In the experiments conducted, five datasets of visible light and NIR environments were used. For visible light environment, noisy iris challenge evaluation part-II (NICE-II selected from UBIRIS.v2 database and mobile iris challenge evaluation (MICHE-I datasets were used. For NIR environment, the institute of automation, Chinese academy of sciences (CASIA v4.0 interval, CASIA v4.0 distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed the optimal segmentation of the proposed IrisDenseNet and its excellent performance over existing algorithms for all five datasets.

  8. Built-in hyperspectral camera for smartphone in visible, near-infrared and middle-infrared lights region (second report): sensitivity improvement of Fourier-spectroscopic imaging to detect diffuse reflection lights from internal human tissues for healthcare sensors

    Science.gov (United States)

    Kawashima, Natsumi; Hosono, Satsuki; Ishimaru, Ichiro

    2016-05-01

    We proposed the snapshot-type Fourier spectroscopic imaging for smartphones that was mentioned in the 1st report in this conference. For spectroscopic component analysis, such as non-invasive blood glucose sensing, the diffuse reflection light from internal human skin is very weak for conventional hyperspectral cameras, such as the AOTF (Acousto-Optic Tunable Filter) type. Furthermore, it is well known that spectral absorption of mid-infrared light, or Raman spectroscopy especially in the long-wavelength region, is effective for distinguishing specific biomedical components quantitatively, such as glucose concentration. But the main issue is that the photon energies of middle-infrared light and the intensities of Raman scattering are extremely weak. To improve the sensitivity of our spectroscopic imager, the wide-field-stop and beam-expansion method was proposed. Our line spectroscopic imager introduced a single slit as a field stop on the conjugate objective plane. Obviously, to increase the detected light intensity, a wider slit width of the field stop makes the light intensity higher, regardless of the deterioration of spatial resolution. Because our method is based on wavefront-division interferometry, a problem arises: the wider the single slit, the narrower the diffraction angle. This means that the narrower diameter of the collimated objective beams deteriorates the visibility of the interferograms. By installing the relative inclined phaseshifter onto the optical Fourier transform plane of the infinity-corrected optical system, the collimated half fluxes of the objective beams derived from single bright points on the objective surface penetrate through the wedge prism and the cuboid glass respectively. These two beams interfere with each other and form the interferogram as spatial fringe patterns. Thus, we installed a concave-cylindrical lens between the wider slit and the objective lens as a beam expander. We successfully obtained the spectroscopic characteristics of hemoglobin from reflected lights from
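
    As background for how any Fourier-spectroscopic imager of this kind turns a recorded fringe pattern into a spectrum, the following sketch Fourier-transforms a simulated two-line interferogram sampled over optical path difference. The sampling step, record length and line positions are arbitrary illustrative values, not parameters of the reported instrument.

```python
# Toy Fourier-spectroscopy reconstruction: an interferogram recorded over
# optical path difference (OPD) is Fourier-transformed to recover the spectrum.
# All numbers are illustrative assumptions, not the instrument's parameters.
import numpy as np

d_opd = 5e-5                                   # OPD sampling step in cm
n = 4000
opd = np.arange(n) * d_opd                     # 0 .. 0.2 cm of OPD

wavenumbers_true = [5800.0, 6250.0]            # cm^-1, two near-infrared lines
amplitudes = [1.0, 0.6]

interferogram = sum(a * np.cos(2 * np.pi * k * opd)
                    for a, k in zip(amplitudes, wavenumbers_true))

spectrum = np.abs(np.fft.rfft(interferogram))
wavenumber_axis = np.fft.rfftfreq(n, d=d_opd)  # cm^-1

top_two = np.sort(wavenumber_axis[np.argsort(spectrum)[-2:]])
print(top_two)                                 # [5800. 6250.]
```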

  9. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.

  10. Optimizing Low Light Level Imaging Techniques and Sensor Design Parameters using CCD Digital Cameras for Potential NASA Earth Science Research aboard a Small Satellite or ISS

    Data.gov (United States)

    National Aeronautics and Space Administration — For this project, the potential of using state-of-the-art aerial digital framing cameras that have time delayed integration (TDI) to acquire useful low light level...

  11. Phone camera detection of glucose blood level based on magnetic particles entrapped inside bubble wrap.

    Science.gov (United States)

    Martinkova, Pavla; Pohanka, Miroslav

    2016-12-18

    Glucose is an important diagnostic biochemical marker of diabetes, but also of poisoning by organophosphates, carbamates, acetaminophen or salicylates. Hence, innovation of accurate and fast detection assays remains a priority in biomedical research. A glucose sensor based on magnetic particles (MPs) with the immobilized enzymes glucose oxidase (GOx) and horseradish peroxidase (HRP) was developed, and the GOx-catalyzed reaction was visualized by a smartphone-integrated camera. An exponential decay concentration curve with a correlation coefficient of 0.997 and a limit of detection of 0.4 mmol/l was achieved. Interfering and matrix substances were tested for their possible influence on the assay, and no effect of the tested substances was observed. Spiked plasma samples were also measured, and no influence of the plasma matrix on the assay was found. The presented assay gave results in agreement with the reference method (standard spectrophotometry based on the enzymes glucose oxidase and peroxidase in plastic cuvettes), with a linear dependence and a correlation coefficient of 0.999 in the concentration range between 0 and 4 mmol/l. On the grounds of the measured results, the method was considered a highly specific, accurate and fast assay for the detection of glucose.
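
    The "exponential decay concentration curve" mentioned above is a standard colorimetric calibration; as a hedged illustration, the sketch below fits such a curve to phone-camera intensities and inverts it to read out a glucose concentration. The intensity readings, initial guesses and the choice of colour channel are fabricated placeholders, not data from the paper.

```python
# Hedged sketch: fitting an exponential-decay calibration curve relating a
# phone-camera colour intensity to glucose concentration, then inverting it.
# The intensity readings below are fabricated placeholders for illustration.
import numpy as np
from scipy.optimize import curve_fit

glucose_mmol_l = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])             # standards
mean_intensity = np.array([212.0, 188.0, 170.0, 143.0, 112.0, 84.0])  # e.g. green channel

def decay(c, a, k, b):
    """Exponential decay of intensity with concentration."""
    return a * np.exp(-k * c) + b

params, _ = curve_fit(decay, glucose_mmol_l, mean_intensity, p0=(150.0, 0.5, 60.0))
a, k, b = params

def intensity_to_glucose(i):
    """Invert the calibration to estimate concentration from a measured intensity."""
    return -np.log((i - b) / a) / k

print(intensity_to_glucose(150.0))   # estimated glucose in mmol/l for a sample
```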

  12. Radioisotope camera

    International Nuclear Information System (INIS)

    Tausch, L.M.; Kump, R.J.

    1978-01-01

    The electronic circuit corrects distortions caused by the distance between the individual photomultiplier tubes of the multiple-radioisotope camera on one hand and between the tube configuration and the scintillator plate on the other. For this purpose the transmission characteristics of the nonlinear circuits are altered as a function of the energy of the incident radiation. By this means the threshold values between lower and higher amplification are adjusted to the energy level of each scintillation. The correcting circuit may be used for any number of isotopes to be measured. (DG) [de

  13. Built-in hyperspectral camera for smartphone in visible, near-infrared and middle-infrared lights region (third report): spectroscopic imaging for broad-area and real-time componential analysis system against local unexpected terrorism and disasters

    Science.gov (United States)

    Hosono, Satsuki; Kawashima, Natsumi; Wollherr, Dirk; Ishimaru, Ichiro

    2016-05-01

    Distributed networks that collect chemical-component information with high-mobility platforms, such as drones or smartphones, will work effectively for investigation, clarification and prediction of unexpected local terrorism and disasters such as localized torrential downpours. We previously proposed and reported the spectroscopic line-imager for smartphones in this conference. In this paper, we describe wide-area spectroscopic-image construction by estimating 6 DOF (Degrees Of Freedom: parallel movements = x, y, z and rotational movements = θx, θy, θz) from line data to observe and analyze the surrounding chemical environment. Recently, smartphone movies photographed by people who happened to be at the scene have worked effectively for analyzing what kind of phenomenon occurred there. But when a gas tank suddenly blew up, visible-light RGB-color cameras could not tell us what kinds of chemical gas components were polluting the surrounding atmosphere. Conventionally, Fourier spectroscopy has been well known for chemical component analysis in laboratory use. But volatile gases should be analyzed promptly at accident sites. And because humidity absorption in the near- and middle-infrared region has very high sensitivity, we will be able to detect humidity in the sky from wide-field spectroscopic images. Also, 6-DOF sensors have recently become easy to utilize for estimating the position and attitude of a UAV (Unmanned Air Vehicle) or smartphone. But for observing long-distance views, the accuracy of angle measurements is not sufficient to merge line data because of the leverage effect. Thus, by searching for corresponding pixels between line spectroscopic images, we are trying to estimate the 6 DOF with high accuracy.

  14. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
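
    For readers unfamiliar with the diagnostic referred to here, the sketch below computes a simple auto-covariance function of a pointing-residual time series, the quantity in which a twice-per-revolution error shows up as an oscillation. The synthetic 2/rev signal and noise level are illustrative assumptions, not GRACE Level-1B data.

```python
# Sketch: auto-covariance of a pointing-residual time series, the diagnostic in
# which a twice-per-revolution star-camera error shows up as an oscillation.
# The synthetic signal below is an illustrative assumption, not GRACE data.
import numpy as np

def autocovariance(x, max_lag):
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    return np.array([np.dot(x[:n - lag], x[lag:]) / n for lag in range(max_lag)])

orbits, samples_per_orbit = 20, 200
t = np.arange(orbits * samples_per_orbit)
residual = 3.0 * np.sin(2 * np.pi * 2 * t / samples_per_orbit)    # 2/rev term, arcsec
residual += np.random.default_rng(0).normal(0.0, 1.0, t.size)     # white noise

acov = autocovariance(residual, max_lag=samples_per_orbit)
# A 2/rev signal oscillates in the auto-covariance with a 100-sample period here,
# so its first minimum falls near lag 50.
print(int(np.argmin(acov[:80])))
```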

  15. Infrared camera - specially developed for professional building inspection - with dew point detection tracing out fungi formation; Infrarotkamera - speziell entwickelt fuer die professionelle Gebaeudeinspektion - mit Taupunktermittlungsfunktion zum schnellen Auffinden von Gefahrbereichen fuer Schimmelbildung

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    2004-07-01

    Besides the positive effect of energy saving, the aspect of mould (fungi) formation also has to be considered. Mould is not only responsible for health problems but can also damage building materials. Examples are shown, including how they can be detected by an infrared monitoring camera. (GL) [German] The thermal insulation ordinance and the energy saving act derived from it have far-reaching consequences for architects, civil engineers, maintenance staff and renovators: modern buildings are becoming ever more airtight, and air circulation and thus moisture exchange are largely suppressed. Besides the positive effect of energy savings, this unfortunately often also leads to a problem of mould formation. It should be kept in mind that mould is not only a health hazard but can often also significantly damage the building fabric. (orig.)
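
    Since the camera's stated purpose is flagging surfaces at or below the room-air dew point (where condensation and therefore mould become likely), a hedged sketch of the usual Magnus-type dew-point estimate and a simple thermogram threshold is given below. The Magnus coefficients are one common parameterization and the safety margin is an illustrative choice, not the manufacturer's algorithm.

```python
# Dew-point screening sketch: flag surface pixels whose temperature is at or
# below the dew point of the room air. Magnus coefficients (17.62, 243.12) are
# one common parameterization; the 1 K safety margin is an illustrative choice.
import numpy as np

def dew_point_c(air_temp_c, relative_humidity_pct):
    a, b = 17.62, 243.12
    gamma = np.log(relative_humidity_pct / 100.0) + a * air_temp_c / (b + air_temp_c)
    return b * gamma / (a - gamma)

def mould_risk_mask(surface_temp_c, air_temp_c, relative_humidity_pct, margin_k=1.0):
    """True where the thermogram is within `margin_k` of the dew point."""
    td = dew_point_c(air_temp_c, relative_humidity_pct)
    return surface_temp_c <= td + margin_k

# Example: 21 degC room air at 60 % RH gives a dew point near 12.9 degC.
thermogram = np.array([[16.0, 14.2], [13.1, 11.8]])   # degC, from the IR camera
print(dew_point_c(21.0, 60.0))
print(mould_risk_mask(thermogram, 21.0, 60.0))
```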

  16. A study of full width at half maximum (FWHM) according to the filter's cut off level in SPECT camera

    International Nuclear Information System (INIS)

    Park, Soung Ock; Kwon, Soo Il

    2003-01-01

    Filtering is necessary to reduce statistical noise and to increase image quality in SPECT images. Noise is controlled by a low-pass filter designed to suppress high spatial frequencies in the SPECT image. Most SPECT filter functions control the degree of high-frequency suppression through a cut-off frequency. The location of the cut-off frequency determines how image noise and spatial resolution are affected. A low cut-off frequency provides good noise suppression but insufficient image quality, while a high cut-off frequency increases image resolution but gives insufficient noise suppression. The purpose of this study was to determine the optimum cut-off level by comparing the FWHM obtained at different cut-off levels for each filter - Band-limited, Shepp-Logan, Shepp-Logan Hanning, Generalized Hamming, Low-pass cosine, Parzen and Butterworth - in a SPECT camera. We recorded images along the X, Y and Z axes with a 99mTcO4 point source and measured the FWHM using profile curves. The averaged FWHM ranged from 9.16 mm to 18.14 mm along the X, Y and Z axes, and the Band-limited and Generalized Hamming filters measured 9.16 mm at a cut-off frequency of 0.7 cycles/pixel
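
    As a concrete illustration of measuring FWHM from a profile curve as described above, the sketch below interpolates the half-maximum crossings of a one-dimensional point-source profile. The Gaussian test profile and the 1 mm pixel pitch are assumptions, not the study's SPECT data.

```python
# Sketch: FWHM of a point-source profile curve by linear interpolation of the
# half-maximum crossings. The Gaussian profile and 1 mm pixel pitch are
# illustrative assumptions, not the SPECT data from the study.
import numpy as np

def fwhm(profile, pixel_size_mm=1.0):
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]
    # Linear interpolation on both edges for sub-pixel accuracy.
    x_left = left - 1 + (half - profile[left - 1]) / (profile[left] - profile[left - 1])
    x_right = right + (half - profile[right]) / (profile[right + 1] - profile[right])
    return (x_right - x_left) * pixel_size_mm

x = np.arange(64)
sigma = 4.0                                   # pixels; Gaussian FWHM = 2.355 * sigma
profile = 1000.0 * np.exp(-0.5 * ((x - 32) / sigma) ** 2)
print(fwhm(profile))                          # ~9.4 mm with 1 mm pixels
```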

  17. Improved detection probability of low level light and infrared image fusion system

    Science.gov (United States)

    Luo, Yuxiang; Fu, Rongguo; Zhang, Junju; Wang, Wencong; Chang, Benkang

    2018-02-01

    A low-level-light (LLL) image contains rich information on environment details, but is easily affected by the weather. In the case of smoke, rain, cloud or fog, much target information is lost. An infrared image, which comes from the radiation produced by the object itself, can "actively" obtain target information in the scene. However, its contrast and resolution are poor, its ability to capture target details is very limited, and the imaging mode does not conform to human visual habits. The fusion of LLL and infrared images can make up for the deficiencies of each sensor and exploit the advantages of each single sensor. First, we show the hardware design of the fusion circuit. Then, through recognition probability calculations for a target (one person) and the background image (trees), we find that the tree detection probability of the LLL image is higher than that of the infrared image, and the person detection probability of the infrared image is obviously higher than that of the LLL image. The detection probability of the fusion image for both the person and the trees is higher than that of either single detector. Therefore, image fusion can significantly increase recognition probability and improve detection efficiency.
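
    The fusion circuit itself is implemented in hardware and not detailed in the abstract; purely to illustrate the pixel-level idea of combining LLL scene texture with IR target contrast, a hedged numpy sketch of a simple saliency-weighted grey-level fusion follows. The weighting scheme is an assumption, not the authors' circuit.

```python
# Illustrative pixel-level fusion of a low-level-light (LLL) image and an
# infrared (IR) image. A simple saliency-weighted average is used here as an
# assumption; the paper implements fusion in dedicated hardware.
import numpy as np

def fuse(lll, ir, base_weight=0.5):
    lll = lll.astype(float) / 255.0
    ir = ir.astype(float) / 255.0
    # Give extra weight to IR where it is locally "hot" (bright targets),
    # keeping LLL background texture elsewhere.
    ir_saliency = np.clip(ir - ir.mean(), 0.0, 1.0)
    w_ir = np.clip(base_weight + ir_saliency, 0.0, 1.0)
    fused = (1.0 - w_ir) * lll + w_ir * ir
    return (fused * 255.0).astype(np.uint8)

rng = np.random.default_rng(1)
lll_img = rng.integers(0, 120, size=(4, 4), dtype=np.uint8)   # dim scene texture
ir_img = np.zeros((4, 4), dtype=np.uint8)
ir_img[1:3, 1:3] = 220                                        # a warm target
print(fuse(lll_img, ir_img))
```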

  18. Infrared analyzers for breast milk analysis: fat levels can influence the accuracy of protein measurements.

    Science.gov (United States)

    Kwan, Celia; Fusch, Gerhard; Bahonjic, Aldin; Rochow, Niels; Fusch, Christoph

    2017-10-26

    Currently, there is a growing interest in lacto-engineering in the neonatal intensive care unit, using infrared milk analyzers to rapidly measure the macronutrient content in breast milk before processing and feeding it to preterm infants. However, there is an overlap in the spectral information of different macronutrients, so they can potentially impact the robustness of the measurement. In this study, we investigate whether the measurement of protein is dependent on the levels of fat present while using an infrared milk analyzer. Breast milk samples (n=25) were measured for fat and protein content before and after being completely defatted by centrifugation, using chemical reference methods and near-infrared milk analyzer (Unity SpectraStar) with two different calibration algorithms provided by the manufacturer (released 2009 and 2015). While the protein content remained unchanged, as measured by elemental analysis, measurements by infrared milk analyzer show a difference in protein measurements dependent on fat content; high fat content can lead to falsely high protein content. This difference is less pronounced when measured using the more recent calibration algorithm. Milk analyzer users must be cautious of their devices' measurements, especially if they are changing the matrix of breast milk using more advanced lacto-engineering.

  19. A color fusion method of infrared and low-light-level images based on visual perception

    Science.gov (United States)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

    Color fusion images can be obtained through the fusion of infrared and low-light-level images, and they contain the information of both. The fusion images can help observers to understand the multichannel images comprehensively. However, simple fusion may lose target information because targets are inconspicuous in long-distance infrared and low-light-level images; and if target extraction is adopted blindly, the perception of the scene information will be affected seriously. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods. The infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm can not only improve the detection rate of targets, but also retain rich natural information of the scenes.

  20. Handheld Device Adapted to Smartphone Cameras for the Measurement of Sodium Ion Concentrations at Saliva-Relevant Levels via Fluorescence

    Directory of Open Access Journals (Sweden)

    Michelle Lipowicz

    2015-06-01

    Full Text Available The use of saliva sampling as a minimally-invasive means for drug testing and monitoring physiology is a subject of great interest to researchers and clinicians. This study describes a new optical method based on non-axially symmetric focusing of light using an oblate spheroid sample chamber. The device is simple, lightweight, low cost and is easily attached to several different brands/models of smartphones (Apple, Samsung, HTC and Nokia for the measurement of sodium ion levels at physiologically-relevant saliva concentrations. The sample and fluorescent reagent solutions are placed in a specially-designed, lightweight device that excludes ambient light and concentrates 470-nm excitation light, from a low-power photodiode, within the sample through non-axially-symmetric refraction. The study found that smartphone cameras and post-image processing quantitated sodium ion concentration in water over the range of 0.5–10 mM, yielding best-fit regressions of the data that agree well with a data regression of microplate luminometer results. The data suggest that fluorescence can be used for the measurement of salivary sodium ion concentrations in low-resource or point-of-care settings. With further fluorescent assay testing, the device may find application in a variety of enzymatic or chemical assays.

  1. Brown dwarf photospheres are patchy: A Hubble space telescope near-infrared spectroscopic survey finds frequent low-level variability

    International Nuclear Information System (INIS)

    Buenzli, Esther; Apai, Dániel; Radigan, Jacqueline; Reid, I. Neill; Flateau, Davin

    2014-01-01

    Condensate clouds strongly impact the spectra of brown dwarfs and exoplanets. Recent discoveries of variable L/T transition dwarfs argued for patchy clouds in at least some ultracool atmospheres. This study aims to measure the frequency and level of spectral variability in brown dwarfs and to search for correlations with spectral type. We used Hubble Space Telescope/Wide Field Camera 3 to obtain spectroscopic time series for 22 brown dwarfs of spectral types ranging from L5 to T6 at 1.1-1.7 μm for ≈40 minutes per object. Using Bayesian analysis, we find six brown dwarfs with confident (p > 95%) variability in the relative flux in at least one wavelength region at sub-percent precision, and five brown dwarfs with tentative (p > 68%) variability. We derive a minimum variability fraction f_min = 27 (+11/−7)% over all covered spectral types. The fraction of variables is equal within errors for mid-L, late-L, and mid-T spectral types; for early-T dwarfs we do not find any confident variable but the sample is too small to derive meaningful limits. For some objects, the variability occurs primarily in the flux peak in the J or H band, others are variable throughout the spectrum or only in specific absorption regions. Four sources may have broadband peak-to-peak amplitudes exceeding 1%. Our measurements are not sensitive to very long periods, inclinations near pole-on and rotationally symmetric heterogeneity. The detection statistics are consistent with most brown dwarf photospheres being patchy. While multiple-percent near-infrared variability may be rare and confined to the L/T transition, low-level heterogeneities are a frequent characteristic of brown dwarf atmospheres.

  2. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, Ul; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. Supports are swingably mounted upon a column one above the other

  3. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position-sensitive radiation detector, the novel system can produce images superior to those of conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  4. INFRARED TRANSMISSION SPECTROSCOPY OF THE EXOPLANETS HD 209458b AND XO-1b USING THE WIDE FIELD CAMERA-3 ON THE HUBBLE SPACE TELESCOPE

    Energy Technology Data Exchange (ETDEWEB)

    Deming, Drake; Wilkins, Ashlee [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States); McCullough, Peter; Crouzet, Nicolas [Space Telescope Science Institute, Baltimore, MD 21218 (United States); Burrows, Adam [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544-1001 (United States); Fortney, Jonathan J. [Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States); Agol, Eric; Dobbs-Dixon, Ian [NASA Astrobiology Institute's Virtual Planetary Laboratory (United States); Madhusudhan, Nikku [Yale Center for Astronomy and Astrophysics, Yale University, New Haven, CT 06511 (United States); Desert, Jean-Michel; Knutson, Heather A.; Line, Michael [Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125 (United States); Gilliland, Ronald L. [Center for Exoplanets and Habitable Worlds, The Pennsylvania State University, University Park, PA 16802 (United States); Haynes, Korey [Department of Physics and Astronomy, George Mason University, Fairfax, VA 22030 (United States); Magic, Zazralt [Max-Planck-Institut fuer Astrophysik, D-85741 Garching (Germany); Mandell, Avi M.; Clampin, Mark [NASA's Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Ranjan, Sukrit; Charbonneau, David [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Seager, Sara, E-mail: ddeming@astro.umd.edu [Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); and others

    2013-09-10

    Exoplanetary transmission spectroscopy in the near-infrared using the Hubble Space Telescope (HST) NICMOS is currently ambiguous because different observational groups claim different results from the same data, depending on their analysis methodologies. Spatial scanning with HST/WFC3 provides an opportunity to resolve this ambiguity. We here report WFC3 spectroscopy of the giant planets HD 209458b and XO-1b in transit, using spatial scanning mode for maximum photon-collecting efficiency. We introduce an analysis technique that derives the exoplanetary transmission spectrum without the necessity of explicitly decorrelating instrumental effects, and achieves nearly photon-limited precision even at the high flux levels collected in spatial scan mode. Our errors are within 6% (XO-1) and 26% (HD 209458b) of the photon-limit at a resolving power of λ/Δλ ≈ 70, and are better than 0.01% per spectral channel. Both planets exhibit water absorption of approximately 200 ppm at the water peak near 1.38 μm. Our result for XO-1b contradicts the much larger absorption derived from NICMOS spectroscopy. The weak water absorption we measure for HD 209458b is reminiscent of the weakness of sodium absorption in the first transmission spectroscopy of an exoplanet atmosphere by Charbonneau et al. Model atmospheres having uniformly distributed extra opacity of 0.012 cm² g⁻¹ account approximately for both our water measurement and the sodium absorption. Our results for HD 209458b support the picture advocated by Pont et al. in which weak molecular absorptions are superposed on a transmission spectrum that is dominated by continuous opacity due to haze and/or dust. However, the extra opacity needed for HD 209458b is grayer than for HD 189733b, with a weaker Rayleigh component.

  5. Cameras in mobile phones

    Science.gov (United States)

    Nummela, Ville; Viinikanoja, Jarkko; Alakarhu, Juha

    2006-04-01

    One of the fastest growing consumer markets today is camera phones. During the past few years total volume has been growing fast, and today millions of mobile phones with cameras are sold. At the same time the resolution and functionality of the cameras have been growing from CIF towards DSC level. From the camera point of view the mobile world is an extremely challenging field. Cameras should have good image quality but in a small size. They also need to be reliable and their construction should be suitable for mass manufacturing. All components of the imaging chain should be well optimized in this environment. Image quality and usability are the most important parameters to the user. The current trend of adding more megapixels to cameras while at the same time using smaller pixels is affecting both. On the other hand, reliability and miniaturization are key drivers for product development, as well as cost. In an optimized solution all parameters are in balance, but the process of finding the right trade-offs is not an easy task. In this paper trade-offs related to optics and their effects on image quality and usability of cameras are discussed. Key development areas from the mobile phone camera point of view are also listed.

  6. Rapid determination of sugar level in snack products using infrared spectroscopy.

    Science.gov (United States)

    Wang, Ting; Rodriguez-Saona, Luis E

    2012-08-01

    Real-time spectroscopic methods can provide a valuable window into food manufacturing to permit optimization of production rate, quality and safety. There is a need for cutting-edge sensor technology directed at improving the efficiency, throughput and reliability of critical processes. The aim of the research was to evaluate the feasibility of infrared systems combined with chemometric analysis for developing rapid methods for the determination of sugars in cereal products. Samples were ground and spectra were collected using a mid-infrared (MIR) spectrometer equipped with a triple-bounce ZnSe MIRacle attenuated total reflectance accessory or a Fourier transform near-infrared (NIR) system equipped with a diffuse reflection-integrating sphere. Sugar contents were determined using a reference HPLC method. Partial least squares regression (PLSR) was used to create cross-validated calibration models. The predictability of the models was evaluated on an independent set of samples and compared with reference techniques. MIR and NIR spectra showed characteristic absorption bands for sugars, and generated excellent PLSR models (sucrose: SEP 0.96). Multivariate models accurately and precisely predicted sugar level in snacks, allowing for rapid analysis. This simple technique allows for reliable prediction of quality parameters, and its automation enables food manufacturers to take early corrective actions that will ultimately save time and money while establishing uniform quality. The U.S. snack food industry generates billions of dollars in revenue each year, and vibrational spectroscopic methods combined with pattern recognition analysis could permit optimization of the production rate, quality, and safety of many food products. This research showed that infrared spectroscopy is a powerful technique for near real-time (approximately 1 min) assessment of sugar content in various cereal products. © 2012 Institute of Food Technologists®
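
    A hedged sketch of the chemometric workflow described above (PLS regression of spectra against reference sugar values, with cross-validation and a standard error of prediction) is given below. The synthetic spectra, sample count and number of latent variables are illustrative, not the study's calibration.

```python
# Sketch of a PLS regression calibration of IR spectra against reference sugar
# content, with leave-one-out cross-validation. The synthetic spectra below are
# illustrative stand-ins for real MIR/NIR measurements.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 30, 200
sugar_g_per_100g = rng.uniform(5.0, 40.0, n_samples)          # reference (HPLC) values

# Fake spectra: a sugar-correlated band plus noise.
band = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 80) / 6.0) ** 2)
spectra = (sugar_g_per_100g[:, None] * band[None, :] * 0.02
           + rng.normal(0.0, 0.01, (n_samples, n_wavenumbers)))

pls = PLSRegression(n_components=5)
predicted = cross_val_predict(pls, spectra, sugar_g_per_100g, cv=LeaveOneOut())
sep = np.sqrt(np.mean((predicted.ravel() - sugar_g_per_100g) ** 2))
print(f"standard error of prediction: {sep:.2f} g/100 g")
```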

  7. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects.

    Science.gov (United States)

    Bulczak, David; Lambers, Martin; Kolb, Andreas

    2017-12-22

    In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reference distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data.
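
    For context, AMCW ToF cameras commonly recover range from four phase-shifted samples of the correlation signal; the sketch below shows that textbook four-bucket computation, which is also where single-bounce multipath appears as a biased phase. It is a generic relation with an illustrative modulation frequency, not code from the simulator presented in the paper.

```python
# Standard four-bucket AMCW ToF depth computation: four samples of the
# correlation function at phase offsets 0, 90, 180 and 270 degrees give phase,
# amplitude and hence range. Textbook relation, not the paper's simulator code.
import numpy as np

C = 299_792_458.0          # speed of light, m/s
F_MOD = 20e6               # modulation frequency, Hz (illustrative)

def tof_depth(a0, a1, a2, a3):
    phase = np.mod(np.arctan2(a3 - a1, a0 - a2), 2.0 * np.pi)
    amplitude = 0.5 * np.sqrt((a3 - a1) ** 2 + (a0 - a2) ** 2)
    depth = C * phase / (4.0 * np.pi * F_MOD)       # unambiguous up to c/(2f)
    return depth, amplitude

# Simulate the four buckets for a target at 3.2 m and check the round trip.
true_depth = 3.2
true_phase = 4.0 * np.pi * F_MOD * true_depth / C
buckets = [1.0 + 0.5 * np.cos(true_phase + k * np.pi / 2.0) for k in range(4)]
print(tof_depth(*buckets))   # ~ (3.2 m, 0.5)
```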

  8. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column one above the other through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head

  9. Assessment of a landfill methane emission screening method using an unmanned aerial vehicle mounted thermal infrared camera – A field study

    DEFF Research Database (Denmark)

    Fjelsted, Lotte; Christensen, A. G.; Larsen, J. E.

    2018-01-01

    An unmanned aerial vehicle (UAV)-mounted thermal infrared (TIR) camera’s ability to delineate landfill gas (LFG) emission hotspots was evaluated in a field test at two Danish landfills (Hedeland landfill and Audebo landfill). At both sites, a test area of 100 m2 was established and divided into a...

  10. Streak camera imaging of single photons at telecom wavelength

    Science.gov (United States)

    Allgaier, Markus; Ansari, Vahid; Eigner, Christof; Quiring, Viktor; Ricken, Raimund; Donohue, John Matthew; Czerniuk, Thomas; Aßmann, Marc; Bayer, Manfred; Brecht, Benjamin; Silberhorn, Christine

    2018-01-01

    Streak cameras are powerful tools for temporal characterization of ultrafast light pulses, even at the single-photon level. However, the low signal-to-noise ratio in the infrared range prevents measurements on weak light sources in the telecom regime. We present an approach to circumvent this problem, utilizing an up-conversion process in periodically poled waveguides in Lithium Niobate. We convert single photons from a parametric down-conversion source in order to reach the point of maximum detection efficiency of commercially available streak cameras. We explore phase-matching configurations to apply the up-conversion scheme in real-world applications.

  11. Non-invasive prediction of hematocrit levels by portable visible and near-infrared spectrophotometer.

    Science.gov (United States)

    Sakudo, Akikazu; Kato, Yukiko Hakariya; Kuratsune, Hirohiko; Ikuta, Kazuyoshi

    2009-10-01

    After blood donation, dehydration causes anemia in some individuals with polycythemia. Although the hematocrit (Ht) level is closely related to anemia, the current method of measuring Ht is performed after blood drawing. Furthermore, the monitoring of Ht levels contributes to a healthy life. Therefore, a non-invasive test for Ht is warranted for the safe donation of blood and good quality of life. A non-invasive procedure for the prediction of hematocrit levels was developed on the basis of a chemometric analysis of visible and near-infrared (Vis-NIR) spectra of the thumb using a portable spectrophotometer. Transmittance spectra in the 600- to 1100-nm region from the thumbs of Japanese volunteers were subjected to partial least squares regression (PLSR) analysis and leave-one-out cross-validation to develop chemometric models for predicting Ht levels. Ht levels of masked samples predicted by this model from Vis-NIR spectra gave a coefficient of determination in prediction of 0.6349, with a standard error of prediction of 3.704% and a detection limit in prediction of 17.14%, indicating that the model is applicable to normal and abnormal values of Ht. These results suggest that the portable Vis-NIR spectrophotometer has potential for the non-invasive measurement of Ht levels in combination with PLSR analysis.

  12. Analysis of serum cortisol levels by Fourier Transform Infrared Spectroscopy for diagnosis of stress in athletes

    Directory of Open Access Journals (Sweden)

    Lia Campos Lemes

    Full Text Available Abstract Introduction Fourier-transform infrared (FT-IR) spectroscopy is a technique with great potential for body fluid analyses. The aim of this study was to examine the impact of training sessions on cortisol concentrations in rugby players by means of infrared analysis of serum. Methods Blood collections were performed before, after and 24 hours after rugby training sessions. Serum cortisol was analyzed by FT-IR spectroscopy and chemiluminescent immunoassay. Results There was a significant difference between the integrated areas, in the region of 1180-1102 cm⁻¹, of the spectra for the pre, post and post 24 h serum. The cortisol concentration obtained by chemiluminescent immunoassay showed no significant difference between pre, post and post 24 h. Positive correlations were obtained between the techniques at pre (r = 0.75), post (r = 0.83) and post 24 h (r = 0.73). Conclusion The results showed no increase in the cortisol levels of the players after the training sessions, as well as positive correlations indicating that FT-IR spectroscopy has produced promising results for the analysis of serum for the diagnosis of stress.

  13. Sensitive Multi-Species Emissions Monitoring: Infrared Laser-Based Detection of Trace-Level Contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Steill, Jeffrey D. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Huang, Haifeng [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Hoops, Alexandra A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Patterson, Brian D. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Birtola, Salvatore R. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Jaska, Mark [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Strecker, Kevin E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Chandler, David W. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bisson, Soott [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2014-09-01

    This report summarizes our development of spectroscopic chemical analysis techniques and spectral modeling for trace-gas measurements of highly-regulated low-concentration species present in flue gas emissions from utility coal boilers such as HCl under conditions of high humidity. Detailed spectral modeling of the spectroscopy of HCl and other important combustion and atmospheric species such as H2O, CO2, N2O, NO2, SO2, and CH4 demonstrates that IR-laser spectroscopy is a sensitive multi-component analysis strategy. Experimental measurements from techniques based on IR laser spectroscopy are presented that demonstrate sub-ppm sensitivity levels to these species. Photoacoustic infrared spectroscopy is used to detect and quantify HCl at ppm levels with extremely high signal-to-noise even under conditions of high relative humidity. Additionally, cavity ring-down IR spectroscopy is used to achieve an extremely high sensitivity to combustion trace gases in this spectral region; ppm-level CH4 is one demonstrated example. The importance of spectral resolution in the sensitivity of a trace-gas measurement is examined by spectral modeling in the mid- and near-IR, and efforts to improve measurement resolution through novel instrument development are described. While previous project reports focused on benefits and complexities of the dual-etalon cavity ring-down infrared spectrometer, here details on steps taken to implement this unique and potentially revolutionary instrument are described. This report also illustrates and critiques the general strategy of IR-laser photodetection of trace gases leading to the conclusion that mid-IR laser spectroscopy techniques provide a promising basis for further instrument development and implementation that will enable cost-effective sensitive detection of multiple key contaminant species simultaneously.

  14. Potential of a newly developed high-speed near-infrared (NIR) camera (Compovision) in polymer industrial analyses: monitoring crystallinity and crystal evolution of polylactic acid (PLA) and concentration of PLA in PLA/Poly-(R)-3-hydroxybutyrate (PHB) blends.

    Science.gov (United States)

    Ishikawa, Daitaro; Nishii, Takashi; Mizuno, Fumiaki; Sato, Harumi; Kazarian, Sergei G; Ozaki, Yukihiro

    2013-12-01

    This study was carried out to evaluate a new high-speed hyperspectral near-infrared (NIR) camera named Compovision. Quantitative analyses of the crystallinity and crystal evolution of the biodegradable polymer polylactic acid (PLA), and of its concentration in PLA/poly-(R)-3-hydroxybutyrate (PHB) blends, were investigated using near-infrared (NIR) imaging. This NIR camera can measure two-dimensional NIR spectral data in the 1000-2350 nm region, obtaining images with a wide field of view of 150 × 250 mm² (approximately 100,000 pixels) at high speed (in less than 5 s). PLA samples with crystallinities between 0 and 50%, samples blended with PHB in ratios of 80/20, 60/40, 40/60 and 20/80, and pure films of 100% PLA and PHB were prepared. Compovision was used to collect the respective NIR spectra in the 1000-2350 nm region and to investigate the crystallinity of PLA and its concentration in the blends. The partial least squares (PLS) regression models for the crystallinity of PLA were developed using absorbance, second-derivative, and standard normal variate (SNV) spectra from the most informative region of the spectra, between 1600 and 2000 nm. The prediction results of the PLS models obtained using the absorbance and second-derivative spectra were fairly good, with a root mean square error (RMSE) of less than 6.1% and a coefficient of determination (R²) of more than 0.88 for PLS factor 1. The results obtained using the SNV spectra yielded the best prediction, with the smallest RMSE of 2.93% and the highest R² of 0.976. Moreover, PLS models developed for estimating the concentration of PLA in the blend polymers using SNV spectra gave good prediction results, with an RMSE of 4.94% and an R² of 0.98. The SNV-based models provided the best prediction results, since SNV can reduce the effects of spectral changes induced by the inhomogeneity and thickness of the samples. Wide-area crystal evolution of PLA on a plate where a temperature slope of 70-105 °C had occurred was also

  15. Investigations on in situ diagnostics by an infrared camera to distinguish between the plasma facing tiles with carbonaceous surface layer and defect in the underneath junction

    International Nuclear Information System (INIS)

    Cai, Laizhong; Gauthier, Eric; Corre, Yann; Liu, Jian

    2013-01-01

    Both a deposited surface layer and a delaminated underlying junction on plasma facing components (PFCs) can result in abnormally high surface temperatures under normal heating conditions. A tile with delamination has to be replaced to prevent a critical failure (complete delamination) during plasma operation, while a carbon deposit can be removed without any repair. Therefore, distinguishing in situ between deposited tiles and junction-defect tiles is crucial to avoid the critical failure without unwanted shutdowns. In this paper, the thermal behaviors of junction-defect tiles and carbon-deposit tiles are simulated numerically. A modified time-constant method is then introduced to analyze the thermal behaviors of deposited tiles and junction-defect tiles. The feasibility of discrimination by analyzing the thermal behaviors of the tiles is discussed, and the requirements of this method for discrimination are described. Finally, the time-resolution requirement on IR cameras to perform the discrimination is mentioned

  16. Evaluation of the optical cross talk level in the SiPMs adopted in ASTRI SST-2M Cherenkov Camera using EASIROC front-end electronics

    International Nuclear Information System (INIS)

    Impiombato, D; Giarrusso, S; Mineo, T; Agnetta, G; Biondo, B; Catalano, O; Gargano, C; Rosa, G La; Russo, F; Sottile, G; Belluso, M; Billotta, S; Bonanno, G; Garozzo, S; Marano, D; Romeo, G

    2014-01-01

    ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) is a flagship project of the Italian Ministry of Education, University and Research whose main goal is the design and construction of an end-to-end prototype of the Small Size Telescope of the Cherenkov Telescope Array. The prototype, named ASTRI SST-2M, will adopt a wide-field dual-mirror optical system in a Schwarzschild-Couder configuration to explore the VHE range of the electromagnetic spectrum. The camera at the focal plane is based on Silicon Photo-Multiplier detectors, an innovative solution for the detection of astronomical Cherenkov light. This contribution reports some preliminary results on the evaluation of the optical cross-talk level among the SiPM pixels foreseen for the ASTRI SST-2M camera

  17. Infrared spectroscopic measurement of skin hydration and sebum levels and comparison to corneometer and sebumeter

    Science.gov (United States)

    Ezerskaia, Anna; Pereira, S. F.; Urbach, H. P.; Varghese, Babu

    2016-05-01

    Skin health, characterized by the system of water and lipids in the stratum corneum, provides protection from harmful external elements and prevents trans-epidermal water loss. Skin hydration (moisture) and sebum (skin surface lipids) are considered to be important factors in skin health; the right balance between these components is an indication of skin health and plays a central role in protecting and preserving skin integrity. In this manuscript we present an infrared spectroscopic method for simultaneous and quantitative measurement of skin hydration and sebum levels utilizing differential detection with three wavelengths, 1720, 1750, and 1770 nm, corresponding to the lipid vibrational bands that lie "in between" the prominent water absorption bands. The skin sebum and hydration values on the forehead under natural conditions, and their variations in response to external stimuli, were measured using our experimental set-up. The experimental results obtained with the optical set-up show good correlation with the results obtained with the commercially available Corneometer and Sebumeter instruments.
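
    One plausible reading of the three-wavelength differential detection described here (an illustrative interpretation, not the authors' published algorithm) is a baseline-corrected difference that isolates the lipid absorption at 1750 nm from the sloping water background sampled at 1720 and 1770 nm:

```python
# Hedged sketch of a three-wavelength differential index: the absorbance at the
# lipid band (1750 nm) is compared with the linear background through the two
# flanking water-dominated points (1720 and 1770 nm). Illustrative only; the
# absorbance values in the example are made up.
def differential_lipid_index(a_1720, a_1750, a_1770):
    # Linear baseline interpolated to 1750 nm (30/50 of the way from 1720 to 1770).
    baseline_1750 = a_1720 + (a_1770 - a_1720) * (1750 - 1720) / (1770 - 1720)
    return a_1750 - baseline_1750

print(differential_lipid_index(0.42, 0.55, 0.44))   # sebum-rich reading -> larger index
print(differential_lipid_index(0.41, 0.44, 0.43))   # sebum-poor reading -> smaller index
```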

  18. Active spectral imaging nondestructive evaluation (SINDE) camera

    Energy Technology Data Exchange (ETDEWEB)

    Simova, E.; Rochefort, P.A., E-mail: eli.simova@cnl.ca [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada)

    2016-06-15

    A proof-of-concept video camera for active spectral imaging nondestructive evaluation has been demonstrated. An active multispectral imaging technique has been implemented in the visible and near infrared by using light emitting diodes with wavelengths spanning from 400 to 970 nm. This shows how the camera can be used in nondestructive evaluation to inspect surfaces and spectrally identify materials and corrosion. (author)

  19. Scintillating camera

    International Nuclear Information System (INIS)

    Vlasbloem, H.

    1976-01-01

    The invention relates to a scintillating camera and in particular to an apparatus for determining the position coordinates of a light pulse emitting point on the anode of an image intensifier tube which forms part of a scintillating camera, comprising at least three photomultipliers which are positioned to receive light emitted by the anode screen on their photocathodes, circuit means for processing the output voltages of the photomultipliers to derive voltages that are representative of the position coordinates; a pulse-height discriminator circuit adapted to be fed with the sum voltage of the output voltages of the photomultipliers for gating the output of the processing circuit when the amplitude of the sum voltage of the output voltages of the photomultipliers lies in a predetermined amplitude range, and means for compensating the distortion introduced in the image on the anode screen

  20. Gamma camera

    International Nuclear Information System (INIS)

    Reiss, K.H.; Kotschak, O.; Conrad, B.

    1976-01-01

    A gamma camera with a setup simplified compared with the state of the art is described, permitting not only good localization but also energy discrimination. Behind the usual vacuum image amplifier, a multiwire proportional chamber filled with trifluorobromomethane is connected in series. Localization of the signals is achieved by a delay line, and energy determination by means of a pulse height discriminator. With the aid of drawings and circuit diagrams, the setup and mode of operation are explained. (ORU) [de

  1. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were (1) trigger speed, (2) passive infrared vs. microwave sensor, (3) white vs. infrared flash, and (4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  2. Ultradeep Infrared Array Camera Observations of Sub-L* z ~ 7 and z ~ 8 Galaxies in the Hubble Ultra Deep Field: the Contribution of Low-Luminosity Galaxies to the Stellar Mass Density and Reionization

    Science.gov (United States)

    Labbé, I.; González, V.; Bouwens, R. J.; Illingworth, G. D.; Oesch, P. A.; van Dokkum, P. G.; Carollo, C. M.; Franx, M.; Stiavelli, M.; Trenti, M.; Magee, D.; Kriek, M.

    2010-01-01

    We study the Spitzer Infrared Array Camera (IRAC) mid-infrared (rest-frame optical) fluxes of 14 newly WFC3/IR-detected z ~ 7 z850-dropout galaxies and 5 z ~ 8 Y105-dropout galaxies. The WFC3/IR depth and spatial resolution allow accurate removal of contaminating foreground light, enabling reliable flux measurements at 3.6 μm and 4.5 μm. None of the galaxies are detected to [3.6] ≈ 26.9 (AB, 2σ), but a stacking analysis reveals a robust detection for the z850-dropouts and an upper limit for the Y105-dropouts. We construct average broadband spectral energy distributions using the stacked Advanced Camera for Surveys (ACS), WFC3, and IRAC fluxes and fit stellar population synthesis models to derive mean redshifts, stellar masses, and ages. For the z850-dropouts, we find z = 6.9 ± 0.1, (U − V)_rest ≈ 0.4, reddening A_V = 0, and stellar mass ⟨M*⟩ = 1.2 (+0.3/−0.6) × 10^9 M_sun (Salpeter initial mass function). The best-fit ages ~300 Myr, M/L_V ≈ 0.2, and SSFR ~1.7 Gyr^-1 are similar to values reported for luminous z ~ 7 galaxies, indicating the galaxies are smaller but not much younger. The sub-L* galaxies observed here contribute significantly to the stellar mass density and under favorable conditions may have provided enough photons for sustained reionization at 7 dropouts have stellar masses that are uncertain by 1.5 dex due to the near-complete reliance on far-UV data. Adopting the 2σ upper limit on the M/L(z = 8), the stellar mass density to M UV,AB Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs #11563, 9797. Based on observations with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under NASA contract 1407. Support for this work was provided by NASA through contract 125790 issued by JPL/Caltech. Based on service mode observations collected at the European Southern Observatory, Paranal, Chile (ESO Program

  3. ULTRADEEP INFRARED ARRAY CAMERA OBSERVATIONS OF SUB-L* z ∼ 7 AND z ∼ 8 GALAXIES IN THE HUBBLE ULTRA DEEP FIELD: THE CONTRIBUTION OF LOW-LUMINOSITY GALAXIES TO THE STELLAR MASS DENSITY AND REIONIZATION

    International Nuclear Information System (INIS)

    Labbe, I.; Gonzalez, V.; Bouwens, R. J.; Illingworth, G. D.; Magee, D.; Oesch, P. A.; Carollo, C. M.; Van Dokkum, P. G.; Franx, M.; Stiavelli, M.; Trenti, M.; Kriek, M.

    2010-01-01

    We study the Spitzer Infrared Array Camera (IRAC) mid-infrared (rest-frame optical) fluxes of 14 newly WFC3/IR-detected z ∼ 7 z850-dropout galaxies and 5 z ∼ 8 Y105-dropout galaxies. The WFC3/IR depth and spatial resolution allow accurate removal of contaminating foreground light, enabling reliable flux measurements at 3.6 μm and 4.5 μm. None of the galaxies are detected to [3.6] ∼ 26.9 (AB, 2σ), but a stacking analysis reveals a robust detection for the z850-dropouts and an upper limit for the Y105-dropouts. We construct average broadband spectral energy distributions using the stacked Advanced Camera for Surveys (ACS), WFC3, and IRAC fluxes and fit stellar population synthesis models to derive mean redshifts, stellar masses, and ages. For the z850-dropouts, we find z = 6.9 ± 0.1, (U − V)_rest ∼ 0.4, reddening A_V = 0, and stellar mass ⟨M*⟩ = 1.2 (+0.3/−0.6) × 10^9 M_sun (Salpeter initial mass function). The best-fit ages ∼300 Myr, M/L_V ∼ 0.2, and SSFR ∼1.7 Gyr^-1 are similar to values reported for luminous z ∼ 7 galaxies, indicating the galaxies are smaller but not much younger. The sub-L* galaxies observed here contribute significantly to the stellar mass density and under favorable conditions may have provided enough photons for sustained reionization at 7 +0.1 -0.2 Y105-dropouts have stellar masses that are uncertain by 1.5 dex due to the near-complete reliance on far-UV data. Adopting the 2σ upper limit on the M/L(z = 8), the stellar mass density to M UV,AB +1.4/−1.8 × 10^6 M_sun Mpc^-3 to ρ*(z = 8) 5 M_sun Mpc^-3, following ∝ (1 + z)^-6 over 3 < z < 8. Lower masses at z = 8 would signify more dramatic evolution, which can be established with deeper IRAC observations, long before the arrival of the James Webb Space Telescope.

  4. Cell viability, reactive oxygen species, apoptosis, and necrosis in myoblast cultures exposed to low-level infrared laser.

    Science.gov (United States)

    Alexsandra da Silva Neto Trajano, Larissa; da Silva, Camila Luna; de Carvalho, Simone Nunes; Cortez, Erika; Mencalha, André Luiz; de Souza da Fonseca, Adenilson; Stumbo, Ana Carolina

    2016-07-01

    Low-level infrared laser is considered safe and effective for the treatment of muscle injuries. However, the mechanisms involved in the beneficial effects of laser therapy are not understood. The aim was to evaluate cell viability, reactive oxygen species, apoptosis, and necrosis in myoblast cultures exposed to low-level infrared laser at therapeutic fluences. C2C12 myoblast cultures at different (2 and 10%) fetal bovine serum (FBS) concentrations were exposed to low-level infrared laser (808 nm, 100 mW) at different fluences (10, 35, and 70 J/cm²) and evaluated after 24, 48, and 72 h. Cell viability was evaluated by the WST-1 assay; reactive oxygen species (ROS), apoptosis, and necrosis were evaluated by flow cytometry. Cell viability was decreased at the lowest FBS concentration. Laser exposure increased the cell viability in myoblast cultures at 2% FBS after 48 and 72 h, but no significant increase in ROS was observed. Apoptosis was decreased at the higher fluence and necrosis was increased at the lower fluence in myoblast cultures after 24 h of laser exposure at 2% FBS. No laser-induced alterations were obtained at 10% FBS. Results show that the level of reactive oxygen species is not altered, at least for the conditions evaluated in this study, but low-level infrared laser exposure affects cell viability, apoptosis, and necrosis in myoblast cultures depending on laser fluence and the physiological condition of the cells.

  5. Gamma camera

    International Nuclear Information System (INIS)

    Berninger, W.H.

    1975-01-01

    The light pulse output of a scintillator, on which incident collimated gamma rays impinge, is detected by an array of photoelectric tubes each having a convexly curved photocathode disposed in close proximity to the scintillator. Electronic circuitry connected to outputs of the phototubes develops the scintillation event position coordinate electrical signals with good linearity and with substantial independence of the spacing between the scintillator and photocathodes so that the phototubes can be positioned as close to the scintillator as is possible to obtain less distortion in the field of view and improved spatial resolution as compared to conventional planar photocathode gamma cameras

  6. Economical Appraisal of Total Aflatoxin Level in the Poultry Feeds by Fourier Transform Infrared Spectroscopy

    International Nuclear Information System (INIS)

    Sherazai, S.T.H.; Shar, Z.; Iqbal, M.; Sumbal, G.A.

    2013-01-01

    Single-bounce attenuated total reflectance (SB-ATR) Fourier transform infrared (FTIR) spectroscopy has been used for the quantitative determination of total aflatoxins in broiler poultry feed. An FTIR calibration spanning the range of 1-70 µg/L aflatoxin standards in a (70:30, v/v) methanol-water solvent system was developed based on a partial least squares (PLS) model relating the mid-IR region between 3755 and 950 cm⁻¹. An excellent coefficient of determination (0.998) was achieved, with a relative mean square error of calibration (RMSEC) of 1.49. Aflatoxins from each of eight poultry feeds were extracted and then determined by the widely used, commercially available enzyme-linked immunosorbent assay (ELISA) procedure and by the SB-ATR/FTIR method. The SB-ATR/FTIR aflatoxin predictions were related to those determined by the ELISA method by linear regression, producing an R value of 0.989 and an SD of ± 2.80 µg/L. The results of the study clearly indicate that FT-IR spectroscopy, owing to its rapidity and simplicity along with data manipulation by advanced computer software, could be effectively used for routine determination of aflatoxins present in poultry feeds at very low levels. (author)

  7. The infrared retina

    International Nuclear Information System (INIS)

    Krishna, Sanjay

    2009-01-01

    As infrared imaging systems have evolved from the first generation of linear devices to the second generation of small format staring arrays to the present 'third-gen' systems, there is an increased emphasis on large area focal plane arrays (FPAs) with multicolour operation and higher operating temperature. In this paper, we discuss how one needs to develop an increased functionality at the pixel level for these next generation FPAs. This functionality could manifest itself as spectral, polarization, phase or dynamic range signatures that could extract more information from a given scene. This leads to the concept of an infrared retina, which is an array that works similarly to the human eye that has a 'single' FPA but multiple cones, which are photoreceptor cells in the retina of the eye that enable the perception of colour. These cones are then coupled with powerful signal processing techniques that allow us to process colour information from a scene, even with a limited basis of colour cones. Unlike present day multi or hyperspectral systems, which are bulky and expensive, the idea would be to build a poor man's 'infrared colour' camera. We use examples such as plasmonic tailoring of the resonance or bias dependent dynamic tuning based on quantum confined Stark effect or incorporation of avalanche gain to achieve embodiments of the infrared retina.

  8. Selective-imaging camera

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants that create a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at the firmware level. The design is consistent with the physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for the Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.
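
    The blind source separation step mentioned above is not specified in detail in this abstract; as a hedged sketch, generic BSS of mixed pixel-intensity signals can be performed with an independent component analysis routine such as FastICA, as below. The sources, mixing matrix and sample counts are synthetic placeholders.

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(1)
      t = np.linspace(0, 1, 2000)

      # Two synthetic "entropy sources", e.g. a hot obscurant and the background scene
      sources = np.c_[np.sin(40 * t), np.sign(np.sin(7 * t))]
      mixing = np.array([[1.0, 0.5],
                         [0.4, 1.2]])
      observed = sources @ mixing.T             # what two spectral channels would record

      ica = FastICA(n_components=2, random_state=0)
      recovered = ica.fit_transform(observed)   # estimated independent sources
      print(recovered.shape)                    # (2000, 2)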

  9. Monitoring landscape-level distribution and migration Phenology of Raptors using a volunteer camera-trap network

    Science.gov (United States)

    Jachowski, David S.; Katzner, Todd; Rodrigue, Jane L.; Ford, W. Mark

    2015-01-01

    Conservation of animal migratory movements is among the most important issues in wildlife management. To address this need for landscape-scale monitoring of raptor populations, we developed a novel, baited photographic observation network termed the “Appalachian Eagle Monitoring Program” (AEMP). During winter months of 2008–2012, we partnered with professional and citizen scientists in 11 states in the United States to collect approximately 2.5 million images. To our knowledge, this represents the largest such camera-trap effort to date. Analyses of data collected in 2011 and 2012 revealed complex, often species-specific, spatial and temporal patterns in winter raptor movement behavior as well as spatial and temporal resource partitioning between raptor species. Although programmatic advances in data analysis and involvement are needed, the continued growth of the program has the potential to provide a long-term, cost-effective, range-wide monitoring tool for avian and terrestrial scavengers during the winter season. Perhaps most importantly, by relying heavily on citizen scientists, AEMP has the potential to improve long-term interest and support for raptor conservation and serve as a model for raptor conservation programs in other portions of the world.

  10. AIRS/Aqua Level 2 Cloud-cleared infrared radiances (AIRS+AMSU) V005

    Data.gov (United States)

    National Aeronautics and Space Administration — The Atmospheric Infrared Sounder (AIRS) is a facility instrument aboard the second Earth Observing System (EOS) polar-orbiting platform, EOS Aqua. In combination...

  11. Aqua AIRS Level 2 Cloud-Cleared Infrared Radiances (AIRS+AMSU) V006

    Data.gov (United States)

    National Aeronautics and Space Administration — The Atmospheric Infrared Sounder (AIRS) is a facility instrument aboard the second Earth Observing System (EOS) polar-orbiting platform, EOS Aqua. In combination...

  12. Nimbus-2 Level 2 Medium Resolution Infrared Radiometer (MRIR) V001

    Data.gov (United States)

    National Aeronautics and Space Administration — The Nimbus II Medium Resolution Infrared Radiometer (MRIR) was designed to measure electromagnetic radiation emitted and reflected from the earth and its atmosphere...

  13. AIRS/Aqua Level 1B Infrared (IR) quality assurance subset V005

    Data.gov (United States)

    National Aeronautics and Space Administration — The Atmospheric Infrared Sounder (AIRS) is a facility instrument aboard the second Earth Observing System (EOS) polar-orbiting platform, EOS Aqua. In combination...

  14. Effect of red and infrared low-level laser therapy in endodontic sealer on subcutaneous tissue

    Science.gov (United States)

    Sivieri-Araujo, G.; Berbert, F. L. C. V.; Ramalho, L. T. O.; Rastelli, A. N. S.; Crisci, F. S.; Bonetti-Filho, I.; Tanomaru-Filho, M.

    2011-12-01

    This study evaluated the reactions of connective tissue after the implantation of an endodontic sealer (Endofill) that was irradiated with low-level laser therapy (LLLT). Sixty mice were distributed into three groups (n = 20): GI—tubes filled with Endofill were implanted in the animals and were not irradiated with LLLT; GII—tubes containing Endofill were implanted in the animals and then irradiated with red LLLT (InGaAlP, λ = 685 nm, P = 35 mW, t = 58 s, D = 72 J/cm2, E = 2 J, Ø = 0.60 mm, continuous mode); and GIII—tubes with Endofill were implanted and irradiated with infrared LLLT (AsGaAl, λ = 830 nm, P = 50 mW, t = 40 s, D = 70 J/cm2, E = 2 J, Ø = 0.60 mm, continuous wave); both were semiconductor diode laser devices. The animals were killed after 7 and 30 days. Serial sections of 6 μm thickness were obtained and stained with Hematoxylin-Eosin and Masson Trichrome. The data of the histopathological evaluation were submitted to Kruskal-Wallis and Dunn's tests at the 5% significance level. At the 7th day, GI showed the presence of inflammation, while GII and GIII showed reduced inflammation. At the 30th day, GI showed low inflammation, while GII and GIII showed an absence of inflammation. It was possible to show that LLLT reduced the irritating effect promoted by the Endofill in the period of 7 days (p > 0.05). Tissue repair occurred in 30 days, regardless of the use of LLLT.

  15. Video digitizer (real time-frame grabber) with region of interest suitable for quantitative data analysis used on the infrared and H alpha cameras installed on the DIII-D experiment

    International Nuclear Information System (INIS)

    Ferguson, S.W.; Kevan, D.K.; Hill, D.N.; Allen, S.L.

    1987-01-01

    This paper describes a CAMAC based video digitizer with region of interest (ROI) capability that was designed for use with the infrared and H alpha cameras installed by Lawrence Livermore Laboratory on the DIII-D experiment at G.A. Technologies in San Diego, California. The video digitizer uses a custom built CAMAC video synchronizer module to clock data into a CAMAC transient recorder on a line-by-line basis starting at the beginning of a field. The number of fields that are recorded is limited only by the available transient recorder memory. In order to conserve memory, the CAMAC video synchronizer module provides for the alternative selection of a specific region of interest in each successive field to be recorded. Memory conservation can be optimized by specifying lines in the field, start time, stop time, and the number of data samples per line. This video frame grabber has proved versatile for capturing video in such diverse applications as recording video fields from a video tape recorder played in slow motion or recording video fields in real time during a DIII-D shot. In other cases, one or more lines of video are recorded per frame to give a cross sectional slice of the plasma. Since all the data in the digitizer memory is synchronized to video fields and lines, the data can be read directly into the control computer in the proper matrix format to facilitate rapid processing, display, and permanent storage

  16. Energy levels and far-infrared optical absorption of impurity doped semiconductor nanorings: Intense laser and electric fields effects

    Energy Technology Data Exchange (ETDEWEB)

    Barseghyan, M.G., E-mail: mbarsegh@ysu.am

    2016-11-10

    Highlights: • The effect of electron-impurity interaction on energy levels in a nanoring has been investigated. • The effect of electron-impurity interaction on far-infrared absorption has been investigated. • The energy levels are more stable for higher values of the electric field. - Abstract: The effects of electron-impurity interaction on energy levels and far-infrared absorption in a semiconductor nanoring under the action of intense laser and lateral electric fields have been investigated. Numerical calculations are performed using an exact diagonalization technique. It is found that the electron-impurity interaction and the external fields change the energy spectrum dramatically and also have a significant influence on the absorption spectrum. A strong dependence of the lowest energy levels on the laser field intensity and the electric field, also supported by the Coulomb interaction with the impurity, is clearly revealed.

  17. Molded, wafer level optics for long wave infra-red applications

    Science.gov (United States)

    Franks, John

    2016-05-01

    For many years, the thermal imaging market has been driven by the high-volume consumer market. The first signs of this came with the launch of night vision systems for cars, first by Cadillac and Honda and then, more successfully, by BMW, Daimler and Audi. For the first time, simple thermal imaging systems were being manufactured at the rate of more than 10,000 units a year. This step change in volumes enabled a step change in system costs, with thermal imaging moving into the consumer's price range. Today we see that consumer awareness and the consumer market continue to increase with the launch of a number of consumer-focused smart phone add-ons. This has brought a further step change in system costs, with the possibility of turning your mobile phone into a thermal imager for under $250. As the detector technology has matured, pixel pitches have dropped from 50 μm in 2002 to 12 μm or even 10 μm in today's detectors. This dramatic shrinkage in size has had an equally dramatic effect on the optics required to produce the image on the detector. A moderate field of view that would have required a focal length of 40 mm in 2002 now requires a focal length of 8 mm. For wide-field-of-view applications and small detector formats, focal lengths in the range 1 mm to 5 mm are becoming common. For lenses, the quantities manufactured, the quality and the costs will require a new approach to high-volume Infra-Red (IR) manufacturing to meet customer expectations. This, taken with the SWaP-C requirements and the emerging requirement for very small lenses driven by the new detectors, suggests that wafer-scale optics are part of the solution. Umicore can now present initial results from an intensive research and development program to mold and coat wafer-level optics using its chalcogenide glass, GASIR®.

  18. Mesoscale circulation at the upper cloud level at middle latitudes from the imaging by Venus Monitoring Camera onboard Venus Express

    Science.gov (United States)

    Patsaeva, Marina; Ignatiev, Nikolay; Markiewicz, Wojciech; Khatuntsev, Igor; Titov, Dmitrij; Patsaev, Dmitry

    The Venus Monitoring Camera onboard the ESA Venus Express spacecraft acquired a great number of UV images (365 nm), allowing us to track the motion of cloud features in the upper cloud layer of Venus. A digital method developed to analyze correlation functions between two UV images provided wind vector fields on the Venus day side (9-16 hours local time) from the equator to high latitudes. Sizes and regions for the correlation were chosen empirically, as a trade-off of sensitivity against noise immunity, and vary from 10° × 7.5° to 20° × 10° depending on the grid step, making this method suitable for investigating the mesoscale circulation. Previously, the digital method was used for investigation of the circulation at low latitudes and provided good agreement with manual tracking of the motion of cloud patterns. Here we present first results obtained by this method for middle latitudes (25°S-75°S) on the basis of 270 orbits. Comparing the obtained vector fields with images for certain orbits, we found a relationship between morphological patterns of the cloud cover at middle latitudes and parameters of the circulation. Elongated cloud features, so-called streaks, are typical for middle latitudes, and their orientation varies over a wide range. The behavior of the vector field of velocities depends on the angle between the streak and the latitude circles. In the middle latitudes the average angle of the flow deviation from the zonal direction is equal to -5.6° ± 1° (the sign "-" denotes poleward flow; the standard error is given). For certain orbits, this angle varies from -15.6° ± 1° to 1.4° ± 1°. In some regions at latitudes above 60°S the meridional wind is equatorward in the morning. The relationship between the cloud cover morphology and the peculiarities of the circulation can be attributed to the motion of the Y-feature in the upper cloud layer due to the super-rotation of the atmosphere.

  19. Poster abstract: Water level estimation in urban ultrasonic/passive infrared flash flood sensor networks using supervised learning

    KAUST Repository

    Mousa, Mustafa

    2014-04-01

    This article describes a machine learning approach to water level estimation in a dual ultrasonic/passive infrared urban flood sensor system. We first show that an ultrasonic rangefinder alone is unable to accurately measure the level of water on a road due to thermal effects. Using additional passive infrared sensors, we show that ground temperature and local sensor temperature measurements are sufficient to correct the rangefinder readings and improve the flood detection performance. Since floods occur very rarely, we use a supervised learning approach to estimate the correction to the ultrasonic rangefinder caused by temperature fluctuations. Preliminary data shows that water level can be estimated with an absolute error of less than 2 cm. © 2014 IEEE.
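
    The abstract does not give the exact regression used; a minimal sketch of this kind of supervised correction, assuming a simple linear model of the thermal drift against ground and sensor temperatures, might look as follows (all data are synthetic placeholders).

      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(2)
      ground_temp = rng.uniform(20, 55, 500)        # deg C, from the passive infrared sensor
      sensor_temp = rng.uniform(20, 50, 500)        # deg C, local sensor temperature
      true_range = rng.uniform(3.0, 5.0, 500)       # metres from rangefinder to road surface

      # Synthetic thermal drift added to the ultrasonic reading
      raw_range = true_range + 0.002 * (ground_temp - 25) - 0.001 * (sensor_temp - 25)

      features = np.c_[ground_temp, sensor_temp]
      drift_model = LinearRegression().fit(features, raw_range - true_range)

      corrected = raw_range - drift_model.predict(features)
      print(f"mean abs error after correction: {np.mean(np.abs(corrected - true_range)):.4f} m")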

  20. Can Camera Traps Monitor Komodo Dragons a Large Ectothermic Predator?

    OpenAIRE

    Ariefiandy, Achmad; Purwandana, Deni; Seno, Aganto; Ciofi, Claudio; Jessop, Tim S.

    2013-01-01

    Camera trapping has greatly enhanced population monitoring of often cryptic and low abundance apex carnivores. Effectiveness of passive infrared camera trapping, and ultimately population monitoring, relies on temperature mediated differences between the animal and its ambient environment to ensure good camera detection. In ectothermic predators such as large varanid lizards, this criterion is presumed less certain. Here we evaluated the effectiveness of camera trapping to potentially monitor...

  1. Gamma camera

    International Nuclear Information System (INIS)

    Conrad, B.; Heinzelmann, K.G.

    1975-01-01

    A gamma camera is described which obviates the distortion of locating signals generally caused by the varying light-conductive capacities of the light conductors, in that the flow of light through each light conductor may be varied by means of a shutter. The flow of light through the individual light conductors, or through groups of light conductors, may thus be balanced on the basis of their light-conductive capacities or properties, so as to preclude a distortion of the locating signals caused by the varying light-conductive properties of the light conductors. Each light conductor has associated with it two shutters, adjustable independently of each other, of which one forms a closure member and the other an adjusting shutter. In this embodiment of the invention it is thus possible to block all of the light conductors leading to a photoelectric transducer, with the exception of those light conductors which are to be balanced. The balancing of the individual light conductors may then be carried out on the basis of the output signals of the photoelectric transducer. (auth)

  2. Scintillation camera

    International Nuclear Information System (INIS)

    Zioni, J.; Klein, Y.; Inbar, D.

    1975-01-01

    The scintillation camera produces images of the density distribution of radiation fields created by the injection or administration of radioactive medicaments into the body of the patient. It contains a scintillation crystal, several photomultipliers and computer circuits which derive, at the photomultiplier outputs, an analytical function dependent on the position of the scintillations occurring in the crystal. The scintillation crystal is flat and spatially corresponds to the production site of the radiation. The photomultipliers form a pattern whose basic unit consists of at least three photomultipliers. They are assigned to at least two crossing groups of parallel series, with a reference axis running perpendicular in the crystal plane belonging to each series group. The computer circuits are each assigned to a reference axis. Each series of a series group assigned to one of the reference axes has, in the computer circuit, an adder to produce a scintillation-dependent series signal. Furthermore, the projection of the scintillation on this reference axis is calculated. For this, a series signal is used which originates from a series chosen from two neighbouring photomultiplier series of this group; the scintillation must have appeared between these chosen series, which are termed the basic series. The photomultipliers can be arranged hexagonally or rectangularly. (GG/LH) [de]

  3. Capillary-oxygenation-level-dependent near-infrared spectrometry in frontal lobe of humans

    NARCIS (Netherlands)

    Rasmussen, Peter; Dawson, Ellen A.; Nybo, Lars; van Lieshout, Johannes J.; Secher, Niels H.; Gjedde, Albert

    2007-01-01

    Brain function requires oxygen and maintenance of brain capillary oxygenation is important. We evaluated how faithfully frontal lobe near-infrared spectroscopy (NIRS) follows haemoglobin saturation (SCap) and how calculated mitochondrial oxygen tension (PMitoO2) influences motor performance. Twelve

  4. DUST EXTINCTION FROM BALMER DECREMENTS OF STAR-FORMING GALAXIES AT 0.75 ≤ z ≤ 1.5 WITH HUBBLE SPACE TELESCOPE/WIDE-FIELD-CAMERA 3 SPECTROSCOPY FROM THE WFC3 INFRARED SPECTROSCOPIC PARALLEL SURVEY

    Energy Technology Data Exchange (ETDEWEB)

    Dominguez, A.; Siana, B.; Masters, D. [Department of Physics and Astronomy, University of California Riverside, Riverside, CA 92521 (United States); Henry, A. L.; Martin, C. L. [Department of Physics, University of California, Santa Barbara, CA 93106 (United States); Scarlata, C.; Bedregal, A. G. [Minnesota Institute for Astrophysics, University of Minnesota, Minneapolis, MN 55455 (United States); Malkan, M.; Ross, N. R. [Department of Physics and Astronomy, University of California Los Angeles, Los Angeles, CA 90095 (United States); Atek, H.; Colbert, J. W. [Spitzer Science Center, Caltech, Pasadena, CA 91125 (United States); Teplitz, H. I.; Rafelski, M. [Infrared Processing and Analysis Center, Caltech, Pasadena, CA 91125 (United States); McCarthy, P.; Hathi, N. P.; Dressler, A. [Observatories of the Carnegie Institution for Science, Pasadena, CA 91101 (United States); Bunker, A., E-mail: albertod@ucr.edu [Department of Physics, Oxford University, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH (United Kingdom)

    2013-02-15

    Spectroscopic observations of Hα and Hβ emission lines of 128 star-forming galaxies in the redshift range 0.75 ≤ z ≤ 1.5 are presented. These data were taken with slitless spectroscopy using the G102 and G141 grisms of the Wide-Field-Camera 3 (WFC3) on board the Hubble Space Telescope as part of the WFC3 Infrared Spectroscopic Parallel survey. Interstellar dust extinction is measured from stacked spectra that cover the Balmer decrement (Hα/Hβ). We present dust extinction as a function of Hα luminosity (down to 3 × 10⁴¹ erg s⁻¹), galaxy stellar mass (reaching 4 × 10⁸ M_Sun), and rest-frame Hα equivalent width. The faintest galaxies are two times fainter in Hα luminosity than galaxies previously studied at z ≈ 1.5. An evolution is observed where galaxies of the same Hα luminosity have lower extinction at higher redshifts, whereas no evolution is found within our error bars with stellar mass. The lower Hα luminosity galaxies in our sample are found to be consistent with no dust extinction. We find an anti-correlation of the [O III] λ5007/Hα flux ratio as a function of luminosity, where galaxies with L_Hα < 5 × 10⁴¹ erg s⁻¹ are brighter in [O III] λ5007 than Hα. This trend is evident even after extinction correction, suggesting that the increased [O III] λ5007/Hα ratio in low-luminosity galaxies is likely due to lower metallicity and/or higher ionization parameters.
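
    For reference, the Balmer decrement measured from such stacked spectra is usually converted to a gas-phase colour excess through the standard relation below; the specific extinction curve k(λ) adopted by the authors is not stated in this abstract.

      \[
        E(B-V)_{\mathrm{gas}}
          = \frac{2.5}{k(\mathrm{H}\beta) - k(\mathrm{H}\alpha)}
            \log_{10}\!\left[\frac{(F_{\mathrm{H}\alpha}/F_{\mathrm{H}\beta})_{\mathrm{obs}}}{2.86}\right]
      \]

    Here 2.86 is the Case B intrinsic ratio and k(λ) is the adopted extinction curve evaluated at each line.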

  5. Handheld Device Adapted to Smartphone Cameras for the Measurement of Sodium Ion Concentrations at Saliva-Relevant Levels via Fluorescence

    OpenAIRE

    Lipowicz, Michelle; Garcia, Antonio

    2015-01-01

    The use of saliva sampling as a minimally-invasive means for drug testing and monitoring physiology is a subject of great interest to researchers and clinicians. This study describes a new optical method based on non-axially symmetric focusing of light using an oblate spheroid sample chamber. The device is simple, lightweight, low cost and is easily attached to several different brands/models of smartphones (Apple, Samsung, HTC and Nokia) for the measurement of sodium ion levels at physiologi...

  6. Infrared spectroscopic measurement of skin hydration and sebum levels and comparison to corneometer and sebumeter

    OpenAIRE

    Ezerskaia, A.; Pereira, S.F.; Urbach, Paul; Varghese, Babu; Popp, Jürgen; Tuchin, Valery V.; Matthews, Dennis L.; Pavone, Francesco S.

    2016-01-01

    Skin health characterized by a system of water and lipids in Stratum Corneum provide protection from harmful external elements and prevent trans-epidermal water loss. Skin hydration (moisture) and sebum (skin surface lipids) are considered to be important factors in skin health; a right balance between these components is an indication of skin health and plays a central role in protecting and preserving skin integrity. In this manuscript we present an infrared spectroscopic method for simulta...

  7. Application and possible mechanisms of combining LLLT (low level laser therapy), infrared hyperthermia and ionizing radiation in the treatment of cancer

    Science.gov (United States)

    Abraham, Edward H.; Woo, Van H.; Harlin-Jones, Cheryl; Heselich, Anja; Frohns, Florian

    2014-02-01

    The benefit of concomitant infrared hyperthermia, low-level laser therapy and ionizing radiation is evaluated in this study. Purpose/objectives: presentation with locally advanced, bulky superficial tumors is clinically challenging. To enhance the efficacy of chemotherapy and IMRT (intensity-modulated radiation therapy) and/or electron beam therapy, we have developed an inexpensive and clinically effective infrared hyperthermia approach that combines black-body infrared radiation with halogen-spectrum radiation and discrete-wavelength infrared clinical lasers (LLLT). The goal is to produce a composite spectrum extending from the far infrared to the near infrared and portions of the visible spectrum, with discrete penetrating wavelengths generated by the clinical infrared lasers at 810 nm and/or 830 nm. The composite spectrum from these sources is applied before and after radiation therapy. We monitor the surface and, in some cases, deeper temperatures with thermal probes, but use an array of surface probes as the limiting safety constraint in patient treatment while at the same time maximizing infrared entry into deeper tissue layers. Fever-grade infrared hyperthermia is produced in the first centimeters, while non-thermal infrared effects act at deeper tissue layers. The combination of these effects with ionizing radiation leads to improved tumor control in many cancers.

  8. Control system for gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.

    1977-01-01

    An improved gamma camera arrangement is described which utilizes a solid-state detector formed of high-purity germanium. The central arrangement of the camera operates to carry out a trapezoidal filtering operation over antisymmetrically summed spatial signals through gated integration procedures utilizing idealized integrating intervals. By simultaneously carrying out peak energy evaluation of the input signals, a desirable control over pulse pile-up phenomena is achieved. Additionally, through the use of the time derivative of incoming pulse or signal energy information to initially enable the control system, a low-level information evaluation is provided, serving to enhance the signal processing efficiency of the camera.

  9. Selecting a digital camera for telemedicine.

    Science.gov (United States)

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  10. A study of full width at half maximum (FWHM) according to the filter's cut off level in SPECT camera

    Energy Technology Data Exchange (ETDEWEB)

    Park, Soung Ock [Dongnam Health College, Suwon (Korea, Republic of); Kwon, Soo Il [Kyonggi University, Suwon (Korea, Republic of)

    2003-06-15

    Filtering is necessary to reduce statistical noise and to increase image quality in SPECT images. Noise is controlled by a low-pass filter designed to suppress high spatial frequencies in the SPECT image. Most SPECT filter functions control the degree of high-frequency suppression through the choice of a cut-off frequency, and the location of this cut-off frequency determines how the filter affects image noise and spatial resolution. Selecting a low cut-off frequency provides good noise suppression but insufficient image quality, while a high cut-off frequency increases image resolution but gives insufficient noise suppression. The purpose of this study was to determine the optimum cut-off level by comparing the FWHM obtained at different cut-off levels for each filter - Band-limited, Shepp-Logan, Shepp-Logan Hanning, Generalized Hamming, Low-pass cosine, Parzen and Butterworth - in a SPECT camera. We recorded images along the X, Y and Z axes with a 99mTcO4 point source and measured the FWHM using the profile curve. We found that the FWHM along the X, Y and Z axes averaged between 9.16 mm and 18.14 mm, and that the Band-limited and Generalized Hamming filters measured 9.16 mm at a cut-off frequency of 0.7 cycles/pixel.
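
    A minimal sketch of extracting the FWHM from a point-source profile curve (the quantity compared across filters above) is given below; the Gaussian test profile stands in for the measured SPECT data.

      import numpy as np

      def fwhm_from_profile(x_mm, counts):
          """Linearly interpolated full width at half maximum of a single-peak profile."""
          half = counts.max() / 2.0
          above = np.where(counts >= half)[0]
          left, right = above[0], above[-1]
          # Interpolate the half-maximum crossings on both sides of the peak
          x_left = np.interp(half, [counts[left - 1], counts[left]],
                             [x_mm[left - 1], x_mm[left]])
          x_right = np.interp(half, [counts[right + 1], counts[right]],
                              [x_mm[right + 1], x_mm[right]])
          return x_right - x_left

      x = np.linspace(-30, 30, 601)                   # mm along the profile
      profile = np.exp(-x**2 / (2 * 4.0**2))          # sigma = 4 mm test peak
      print(f"FWHM = {fwhm_from_profile(x, profile):.2f} mm")   # ~9.42 mm expected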

  11. Pulsed low-level infrared laser alters mRNA levels from muscle repair genes dependent on power output in Wistar rats

    Science.gov (United States)

    Trajano, L. A. S. N.; Trajano, E. T. L.; Thomé, A. M. C.; Sergio, L. P. S.; Mencalha, A. L.; Stumbo, A. C.; Fonseca, A. S.

    2017-10-01

    Satellite cells are present in skeletal muscle, functioning in the repair and regeneration of muscle injury. Activation of these cells depends on the expression of myogenic factor 5 (Myf5), myogenic determination factor 1 (MyoD), myogenic regulatory factor 4 (MRF4), myogenin (MyoG), and the paired box transcription factors 3 (Pax3) and 7 (Pax7). Low-level laser irradiation accelerates the repair of muscle injuries. However, data on the expression of myogenic factors have been controversial. Furthermore, the effects of different laser beam powers on the repair of muscle injuries have not been evaluated. The aim of this study was to evaluate the effects of low-level infrared laser at different powers in pulsed emission mode on the expression of myogenic regulatory factors and on Pax3 and Pax7 in injured skeletal muscle from Wistar rats. Animals that underwent cryoinjury were divided into three groups: injury, injury laser 25 mW, and injury laser 75 mW. Low-level infrared laser irradiation (904 nm, 3 J cm-2, 5 kHz) was carried out at 25 and 75 mW. After euthanasia, skeletal muscle samples were withdrawn and the total RNA was extracted for the evaluation of mRNA levels of the MyoD, MyoG, MRF4, Myf5, Pax3, and Pax7 genes. Pax7 mRNA levels did not alter, but Pax3 mRNA levels increased in the injured and laser-irradiated group at 25 mW. MyoD, MyoG, and Myf5 mRNA levels increased in the injured and laser-irradiated animals at both powers, and MRF4 mRNA levels decreased in the injured and laser-irradiated group at 75 mW. In conclusion, exposure to pulsed low-level infrared laser, through a power-dependent effect, could accelerate the muscle repair process by altering mRNA levels of the paired box transcription factors and the myogenic regulatory factors.

  12. Permanent magnetic field, direct electric field, and infrared to reduce blood glucose level and hepatic function in mus musculus with diabetic mellitus

    Science.gov (United States)

    Suhariningsih; Basuki Notobroto, Hari; Winarni, Dwi; Achmad Hussein, Saikhu; Anggono Prijo, Tri

    2017-05-01

    Blood contains several electrolytes with positive (cation) and negative (anion) ion loads. Both electrolytes deliver impulses synergistically, adjusting to the body's needs. These electrolytes respond specifically to external disturbances such as electric, magnetic and even infrared fields. A study has been conducted to reduce blood glucose level and liver function in type 2 diabetes mellitus patients, using the biophysics concept of combining therapy with a permanent magnetic field, an electric field, and infrared radiation. This study used 48 healthy mice (mus musculus), male, age 3-4 weeks, weighing approximately 25-30 g. The mice were fed orally with lard as a high-fat diet before streptozotocin (STZ) induction to become diabetic. Therapy was conducted by putting the mice in a chamber that emits the combination of permanent magnetic field, electric field, and infrared, every day for 1 hour for 28 days. There were 4 combinations of therapy/treatment, namely: (1) permanent magnetic field, direct electric field, and infrared; (2) permanent magnetic field, direct electric field, without infrared; (3) permanent magnetic field, alternating electric field, and infrared; and (4) permanent magnetic field, alternating electric field, without infrared. The results of therapy show that every combination is able to reduce blood glucose level, AST, and ALT. However, the best result is obtained with the combination of permanent magnetic field, direct electric field, and infrared.

  13. Permanent magnetic field, direct electric field, and infrared to reduce blood glucose level and hepatic function in mus musculus with diabetic mellitus

    International Nuclear Information System (INIS)

    Suhariningsih; Prijo, Tri Anggono; Notobroto, Hari Basuki; Winarni, Dwi; Hussein, Saikhu Achmad

    2017-01-01

    Blood contains several electrolytes with positive (cation) and negative (anion) ion loads. Both electrolytes deliver impulses synergistically, adjusting to the body's needs. These electrolytes respond specifically to external disturbances such as electric, magnetic and even infrared fields. A study has been conducted to reduce blood glucose level and liver function in type 2 diabetes mellitus patients, using the biophysics concept of combining therapy with a permanent magnetic field, an electric field, and infrared radiation. This study used 48 healthy mice (mus musculus), male, age 3-4 weeks, weighing approximately 25-30 g. The mice were fed orally with lard as a high-fat diet before streptozotocin (STZ) induction to become diabetic. Therapy was conducted by putting the mice in a chamber that emits the combination of permanent magnetic field, electric field, and infrared, every day for 1 hour for 28 days. There were 4 combinations of therapy/treatment, namely: (1) permanent magnetic field, direct electric field, and infrared; (2) permanent magnetic field, direct electric field, without infrared; (3) permanent magnetic field, alternating electric field, and infrared; and (4) permanent magnetic field, alternating electric field, without infrared. The results of therapy show that every combination is able to reduce blood glucose level, AST, and ALT. However, the best result is obtained with the combination of permanent magnetic field, direct electric field, and infrared. (paper)

  14. Optical camera system for radiation field

    International Nuclear Information System (INIS)

    Maki, Koichi; Senoo, Makoto; Takahashi, Fuminobu; Shibata, Keiichiro; Honda, Takuro.

    1995-01-01

    An infrared-ray camera comprises a transmitting filter used exclusively for infrared rays at a specific wavelength, such as far-infrared rays, and a lens used exclusively for infrared rays. An infrared-ray-emitter-incorporated photoelectric image converter, comprising an infrared-ray emitting device, a focusing lens and a semiconductor image pick-up plate, is disposed at a place of low gamma-ray dose rate. Infrared rays emitted from an objective member are passed through the lens system of the camera, and real images are formed by way of the filter. They are transferred by image fibers, introduced to the photoelectric image converter and focused on the image pick-up plate by the image-forming lens. Further, they are converted into electric signals, introduced to a display and monitored. With such a constitution, an optical material used exclusively for infrared rays, for example ZnSe, can be used for the lens system and the optical transmission system. Accordingly, the camera can be used in a radiation field of high gamma-ray dose rate around the periphery of the reactor container. (I.N.)

  15. GHRSST Level 2P 1 m Depth Global Sea Surface Temperature from the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi NPP satellite (GDS version 2)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A global Group for High Resolution Sea Surface Temperature (GHRSST) Level 2P dataset based on retrievals from the Visible Infrared Imaging Radiometer Suite (VIIRS)....

  16. Additive Manufacturing Infrared Inspection

    Science.gov (United States)

    Gaddy, Darrell; Nettles, Mindy

    2015-01-01

    The Additive Manufacturing Infrared Inspection Task started the development of a real-time dimensional inspection technique and digital quality record for the additive manufacturing process using infrared camera imaging and processing techniques. This project will benefit additive manufacturing by providing real-time inspection of internal geometry that is not currently possible, and by reducing the time and cost of additively manufactured parts through automated real-time dimensional inspections that eliminate post-production inspections.

  17. Micromachined single-level nonplanar polycrystalline SiGe thermal microemitters for infrared dynamic scene projection

    Science.gov (United States)

    Malyutenko, V. K.; Malyutenko, O. Yu.; Leonov, V.; Van Hoof, C.

    2009-05-01

    The technology for self-supported membraneless polycrystalline SiGe thermal microemitters, their design, and performance are presented. The 128-element arrays with a fill factor of 88% and a 2.5-μm-thick resonant cavity have been grown by low-pressure chemical vapor deposition and fabricated using surface micromachining technology. The 200-nm-thick 60×60 μm2 emitting pixels enforced with a U-shape profile pattern demonstrate a thermal time constant of 2-7 ms and an apparent temperature of 700 K in the 3-5 and 8-12 μm atmospheric transparency windows. The application of the devices to the infrared dynamic scene simulation and their benefit over conventional planar membrane-supported emitters are discussed.

  18. Very large scale heterogeneous integration (VLSHI) and wafer-level vacuum packaging for infrared bolometer focal plane arrays

    Science.gov (United States)

    Forsberg, Fredrik; Roxhed, Niclas; Fischer, Andreas C.; Samel, Björn; Ericsson, Per; Hoivik, Nils; Lapadatu, Adriana; Bring, Martin; Kittilsland, Gjermund; Stemme, Göran; Niklaus, Frank

    2013-09-01

    Imaging in the long-wavelength infrared (LWIR) range from 8 to 14 μm is an extremely useful tool for non-contact measurement and imaging of temperature in many industrial, automotive and security applications. However, the cost of the infrared (IR) imaging components has to be significantly reduced to make IR imaging a viable technology for many cost-sensitive applications. This paper demonstrates new and improved fabrication and packaging technologies for next-generation IR imaging detectors based on uncooled IR bolometer focal plane arrays. The proposed technologies include very large scale heterogeneous integration for combining high-performance SiGe quantum-well bolometers with electronic integrated read-out circuits, and CMOS-compatible wafer-level vacuum packaging. The fabrication and characterization of bolometers with a pitch of 25 μm × 25 μm that are arranged on read-out wafers in arrays of 320 × 240 pixels are presented. The bolometers contain a multi-layer quantum-well SiGe thermistor with a temperature coefficient of resistance of -3.0%/K. The proposed CMOS-compatible wafer-level vacuum packaging technology uses Cu-Sn solid-liquid interdiffusion (SLID) bonding. The presented technologies are suitable for implementation in cost-efficient fabless business models, with the potential to bring about the cost reduction needed to enable low-cost IR imaging products for industrial, security and automotive applications.

  19. Temperature measurement with industrial color camera devices

    Science.gov (United States)

    Schmidradler, Dieter J.; Berndorfer, Thomas; van Dyck, Walter; Pretschuh, Juergen

    1999-05-01

    This paper discusses color-camera-based temperature measurement. Usually, visual imaging and infrared image sensing are treated as two separate disciplines. We will show that a well-selected color camera device might be a cheaper, more robust and more sophisticated solution for optical temperature measurement in several cases. Herein, only implementation fragments and important restrictions for the sensing element will be discussed. Our aim is to draw the reader's attention to the use of visual image sensors for measuring thermal radiation and temperature, and to give reasons for the need for improved technologies for infrared camera devices. Together with our industry partner AVL List, we successfully used the proposed sensor to measure the temperature of flames inside the combustion chamber of diesel engines, which finally led to the presented insights.

  20. Comparison of vehicle-mounted forward-looking polarimetric infrared and downward-looking infrared sensors for landmine detection

    NARCIS (Netherlands)

    Cremer, F.; Schavemaker, J.G.M.; Jong, W. de; Schutte, K.

    2003-01-01

    This paper gives a comparison of two vehicle-mounted infrared systems for landmine detection. The first system is a downward-looking standard infrared camera using processing methods developed within the EU project LOTUS. The second system uses a forward-looking polarimetric infrared camera.

  1. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  2. Real-Time Monitoring of Low-Level Mixed-Waste Loading during Polyethylene Microencapsulation using Transient Infrared Spectroscopy

    International Nuclear Information System (INIS)

    Jones, Roger W.; Kalb, Paul D.; McClelland, John F.; Ochiai, Shukichi

    1999-01-01

    In polyethylene microencapsulation, low-level mixed waste (LLMW) is homogenized with molten polyethylene and extruded into containers, resulting in a lighter, lower-volume waste form than cementation and grout methods produce. Additionally, the polyethylene-based waste form solidifies by cooling, with no risk of the waste interfering with cure, as may occur with cementation and grout processes. We have demonstrated real-time monitoring of the polyethylene encapsulation process stream using a noncontact device based on transient infrared spectroscopy (TIRS). TIRS can acquire mid-infrared spectra from solid or viscous liquid process streams, such as the molten, waste-loaded polyethylene stream that exits the microencapsulation extruder. The waste loading in the stream was determined from the TIRS spectra using partial least squares techniques. The monitor has been demonstrated during the polyethylene microencapsulation of nitrate-salt LLMW and its surrogate, molten salt oxidation LLMW and its surrogate, and flyash. The monitor typically achieved a standard error of prediction for the waste loading of about 1% by weight with an analysis time under 1 minute

  3. Using the standard deviation of a region of interest in an image to estimate camera to emitter distance.

    Science.gov (United States)

    Cano-García, Angel E; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

    In this study, a camera-to-infrared-diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way to measure depth, using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey levels in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating the camera-to-emitter distance. This model includes the camera exposure time, the IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining depth information.
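
    A minimal illustration of the empirical parameter used above - the standard deviation of pixel grey levels inside the region of interest containing the IRED image - is sketched below. The frame and ROI coordinates are placeholders; the published model then relates this value to exposure time, radiant intensity and distance.

      import numpy as np

      def roi_grey_level_std(image, row, col, half_size):
          """Standard deviation of grey levels in a square ROI centred on (row, col)."""
          roi = image[row - half_size:row + half_size + 1,
                      col - half_size:col + half_size + 1]
          return float(roi.std())

      rng = np.random.default_rng(3)
      frame = rng.integers(0, 256, size=(480, 640)).astype(float)   # stand-in camera frame
      print(roi_grey_level_std(frame, row=240, col=320, half_size=10))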

  4. Using the Standard Deviation of a Region of Interest in an Image to Estimate Camera to Emitter Distance

    Directory of Open Access Journals (Sweden)

    Felipe Espinoza

    2012-05-01

    Full Text Available In this study, a camera-to-infrared-diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way to measure depth, using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey levels in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating the camera-to-emitter distance. This model includes the camera exposure time, the IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining depth information.

  5. Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects

    Directory of Open Access Journals (Sweden)

    David Bulczak

    2017-12-01

    Full Text Available In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, the automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras which, among other error sources, includes single-bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison of ToF camera simulators. We present bidirectional reflectance distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials, and quantitatively compare the range sensor data.
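
    The abstract does not spell out the depth reconstruction itself; as background, a textbook four-bucket AMCW phase-to-depth computation (the quantity such a simulator must ultimately reproduce before multipath errors are added) can be sketched as follows. The modulation frequency and bucket convention are assumptions, not taken from the paper.

      import numpy as np

      C = 299_792_458.0        # speed of light, m/s
      F_MOD = 20e6             # modulation frequency, Hz (assumed value)

      def depth_from_buckets(a0, a1, a2, a3):
          """Depth from four correlation samples taken at 0, 90, 180 and 270 degrees."""
          phase = np.mod(np.arctan2(a1 - a3, a0 - a2), 2 * np.pi)
          return C * phase / (4 * np.pi * F_MOD)

      # Simulated buckets for a target at 2.5 m (single bounce, no multipath)
      true_phase = 4 * np.pi * F_MOD * 2.5 / C
      buckets = [np.cos(true_phase - k * np.pi / 2) for k in range(4)]
      print(depth_from_buckets(*buckets))          # ~2.5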

  6. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gen- er...

  7. Noninvasive measurement of blood glucose level using mid-infrared quantum cascade lasers

    Science.gov (United States)

    Yoshioka, Kiriko; Kino, Saiko; Matsuura, Yuji

    2017-04-01

    For non-invasive measurement of blood glucose level, an attenuated total reflection (ATR) absorption spectroscopy system using a QCL as the light source was developed. Measurements of glucose solutions showed that the system had sufficient sensitivity for blood glucose measurement. In-vivo measurements using the proposed QCL-based system showed a correlation between absorptions measured on human lips and blood glucose level.

  8. Hollow optical-fiber based infrared spectroscopy for measurement of blood glucose level by using multi-reflection prism.

    Science.gov (United States)

    Kino, Saiko; Omori, Suguru; Katagiri, Takashi; Matsuura, Yuji

    2016-02-01

    A mid-infrared attenuated total reflection (ATR) spectroscopy system employing hollow optical fibers and a trapezoidal multi-reflection ATR prism has been developed to measure blood glucose levels. Using a multi-reflection prism brought about higher sensitivity, and the flat and wide contact surface of the prism resulted in higher measurement reproducibility. An analysis of in vivo measurements of human inner lip mucosa revealed clear signatures of glucose in the difference spectra between ones taken during the fasting state and ones taken after ingestion of glucose solutions. A calibration plot based on the absorption peak at 1155 cm⁻¹ that originates from the pyranose ring structure of glucose gave measurement errors less than 20%.
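
    As a hedged sketch of the calibration-plot step described above, a univariate fit of the 1155 cm⁻¹ peak absorbance against reference glucose values could look like the following; every number here is an invented placeholder, not data from the study.

      import numpy as np

      reference_glucose = np.array([80, 100, 120, 150, 180, 220])      # mg/dL (placeholders)
      peak_absorbance = np.array([0.011, 0.014, 0.017, 0.021, 0.025, 0.031])

      slope, intercept = np.polyfit(peak_absorbance, reference_glucose, 1)
      predicted = slope * peak_absorbance + intercept

      relative_errors = np.abs(predicted - reference_glucose) / reference_glucose
      print(f"max relative error: {relative_errors.max() * 100:.1f}%")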

  9. Detection of wine grape nutrient levels using visible and near infrared 1nm spectral resolution remote sensing

    Science.gov (United States)

    Anderson, Grant; van Aardt, Jan; Bajorski, Peter; Vanden Heuvel, Justine

    2016-05-01

    The grape industry relies on regular crop assessment to aid in the day-to-day and seasonal management of the crop. More specifically, there are six key nutrients of interest to viticulturists in the growing of wine grapes, namely nitrogen, potassium, phosphorus, magnesium, zinc and boron. The traditional method of determining the levels of these nutrients is through collection and chemical analysis of petiole samples from the grape vines themselves. We collected ground-level observations of the spectra of the grape vines, using a hyperspectral spectrometer (0.4-2.5 μm), at the same time that petiole samples were harvested. We then interpolated the data to a consistent 1 nm spectral resolution before comparing it to the nutrient data collected. These nutrient data came from both the industry-standard petiole analysis and an additional leaf-level analysis. The data were collected for two different grape cultivars, during both bloom and veraison periods, to provide variability while also considering the impact of temporal/seasonal change. A narrow-band NDI (Normalized Difference Index) approach, as well as a simple ratio index, was used to determine the correlation of the reflectance data with the nutrient data. This analysis was limited to the silicon photodiode range to increase the utility of our approach for wavelength-specific cameras (via spectral filters) on a low-cost drone platform. The NDI generated correlation coefficients as high as 0.80 and 0.88 for bloom and veraison, respectively. The ratio index produced correlation coefficients that were the same to two decimal places, at 0.80 and 0.88. These results bode well for eventual non-destructive, accurate and precise assessment of vineyard nutrient status.
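
    The NDI screening described above amounts to an exhaustive search over band pairs; a small sketch with synthetic reflectance and nutrient values (and a deliberately coarse band set, to keep the brute-force loop quick) is shown below.

      import numpy as np

      rng = np.random.default_rng(4)
      n_vines, n_bands = 40, 50                        # coarse synthetic band set
      reflectance = rng.uniform(0.05, 0.6, size=(n_vines, n_bands))
      nitrogen = rng.uniform(0.8, 1.6, size=n_vines)   # e.g. percent dry weight (placeholder)

      best_r, best_pair = 0.0, None
      for i in range(n_bands):
          for j in range(i + 1, n_bands):
              ndi = (reflectance[:, i] - reflectance[:, j]) / (reflectance[:, i] + reflectance[:, j])
              r = abs(np.corrcoef(ndi, nitrogen)[0, 1])
              if r > best_r:
                  best_r, best_pair = r, (i, j)
      print(f"best |r| = {best_r:.2f} for band pair {best_pair}")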

  10. MEASUREMENT OF TRACE LEVELS OF DEUTERIUM OXIDE IN BIOLOGIC FLUIDS USING INFRARED SPECTROPHOTOMETRY.

    Science.gov (United States)

    Experimental data relevant to the assay of D2O in human serum, urine, and parotid fluid are presented. For serum, with triplicate scans, values of precision...and of accuracy of plus or minus 3% at the 250 p.p.m. D2O level are obtained. By use of parotid fluid the values are narrowed to plus or minus 2% at...aqueous compartments using values for serum water content. Parotid fluid appears to be particularly suitable for biomedical applications due to its ease

  11. Infrared radiation and inversion population of CO2 laser levels in Venusian and Martian atmospheres

    Science.gov (United States)

    Gordiyets, B. F.; Panchenko, V. Y.

    1983-01-01

    Formation mechanisms of nonequilibrium 10-micron CO2 molecular radiation and the possible existence of a natural laser effect in the upper atmospheres of Venus and Mars are studied theoretically. An analysis is made of the excitation of CO2 vibrational-band levels (with natural isotopic content) induced by direct solar radiation in the 10.6, 9.4, 4.3, 2.7 and 2.0 micron bands. The model of partial vibrational-band temperatures was used in this case. The problem of IR radiation transfer in vibrational-rotational bands was solved in the radiation-escape approximation.

  12. Confidentiality of 2D Code using Infrared with Cell-level Error Correction

    Directory of Open Access Journals (Sweden)

    Nobuyuki Teraura

    2013-03-01

    Full Text Available Optical information media printed on paper use printing materials that absorb visible light. An ordinary 2D code may be encrypted, but it can still be copied. Hence, we envisage an information medium that cannot be copied and thereby offers high security. At the surface, a normal 2D code is printed. The inner layers consist of 2D codes printed using a variety of materials, each absorbing a certain distinct wavelength, so as to form a multilayered 2D code. Information can be distributed among the 2D codes forming the inner layers of the multiplex. Additionally, error correction at the cell level can be introduced.

  13. Scintillation camera for high activity sources

    International Nuclear Information System (INIS)

    Arseneau, R.E.

    1978-01-01

    The invention described relates to a scintillation camera used for clinical medical diagnosis. Advanced recognition of many unacceptable pulses allows the scintillation camera to discard such pulses at an early stage in processing. This frees the camera to process a greater number of pulses of interest within a given period of time. Temporary buffer storage allows the camera to accommodate pulses received at a rate in excess of its maximum rated capability due to statistical fluctuations in the level of radioactivity of the radiation source measured. (U.K.)

  14. Event detection intelligent camera development

    International Nuclear Information System (INIS)

    Szappanos, A.; Kocsis, G.; Molnar, A.; Sarkozi, J.; Zoletnik, S.

    2008-01-01

    A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed for the video diagnostic of the W-7X stellarator, which consists of 10 distinct and standalone measurement channels, each holding a camera. Different operation modes will be implemented for continuous and for triggered readout. Hardware-level trigger signals will be generated from real-time image processing algorithms optimized for digital signal processor (DSP) and field programmable gate array (FPGA) architectures. At full resolution a camera sends 12-bit-sampled 1280 x 1024 pixel frames at 444 fps, which amounts to about 1.43 terabytes over half an hour. Analysing such a huge amount of data is time consuming and has a high computational complexity. We plan to overcome this problem with EDICAM's preprocessing concepts. The EDICAM camera system integrates all the advantages of CMOS sensor chip technology and fast network connections. EDICAM is built up from three different modules with two interfaces: a sensor module (SM) with reduced hardware and functional elements, to achieve a small and compact size as well as robust operation in a harmful environment; an image processing and control unit (IPCU) module, which handles all the user-predefined events and runs the image processing algorithms that generate trigger signals; and finally a 10 Gigabit Ethernet compatible image readout card, which functions as the network interface for the PC. In this contribution the concepts of EDICAM and the functions of the distinct modules are described.
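
    The quoted data volume can be checked with a few lines of arithmetic (assuming the 1.43 figure refers to binary terabytes):

      bits_per_frame = 1280 * 1024 * 12            # 12-bit samples, full resolution
      bytes_per_second = bits_per_frame / 8 * 444  # 444 frames per second
      total_bytes = bytes_per_second * 30 * 60     # half-hour discharge
      print(f"{total_bytes / 2**40:.2f} TiB")      # ~1.43, matching the abstract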

  15. Spoiling of radiation zeros at the one-loop level and infrared finiteness

    International Nuclear Information System (INIS)

    Laursen, M.L.; Samuel, M.A.; Sen, A.

    1983-01-01

    We consider the amplitude for the radiative decay W⁻ → φ₁φ₂γ (where φ₁ and φ₂ are scalar quarks), including one-loop gluon corrections. We study this process to see if the amplitude (radiation) zeros found in lowest order survive at the one-loop level. The subset of diagrams containing self-mass insertions preserves the zero. Seagull-type diagrams are shown to produce a violation which is similar to that of κ ≠ 1. Triangle and box diagrams spoil the zeros, as they do in the case of a scalar W. However, the amplitude is completely free of any mass singularities in the classical null zone. We conjecture that this will remain true for spin-1/2 quarks.

  16. Middle infrared (wavelength range: 8 μm-14 μm) 2-dimensional spectroscopy (total weight with electrical controller: 1.7 kg, total cost: less than 10,000 USD) so-called hyperspectral camera for unmanned air vehicles like drones

    Science.gov (United States)

    Yamamoto, Naoyuki; Saito, Tsubasa; Ogawa, Satoru; Ishimaru, Ichiro

    2016-05-01

    We developed a palm-sized (optical unit: 73 mm × 102 mm × 66 mm), lightweight (total weight with electrical controller: 1.7 kg) middle-infrared (wavelength range: 8-14 μm) 2-dimensional spectrometer for unmanned air vehicles (UAVs) such as drones, and we successfully demonstrated flights with the developed hyperspectral camera mounted on a multi-copter (so-called drone) on 15 September 2015 in Kagawa prefecture, Japan. We had previously proposed 2-dimensional imaging-type Fourier spectroscopy, a near-common-path temporal phase-shift interferometer. We install a variable phase shifter on the optical Fourier-transform plane of an infinity-corrected imaging optical system. The variable phase shifter is configured with a movable mirror and a fixed mirror; the movable mirror is actuated by an impact-drive piezoelectric device (stroke: 4.5 mm, resolution: 0.01 μm, maker: Technohands Co., Ltd., type: XDT50-45, price: around 1,000 USD). We thereby realized wavefront-division, near-common-path interferometry with strong robustness against mechanical vibrations, so the palm-size Fourier spectrometer could be realized without anti-vibration systems. We were also able to utilize a small and low-cost middle-infrared camera, an uncooled VOx microbolometer array (pixel array: 336 × 256, pixel pitch: 17 μm, frame rate: 60 Hz, maker: FLIR, type: Quark 336, price: around 5,000 USD), and the apparatus can be operated by a single-board computer (Raspberry Pi). Thus, the total cost was less than 10,000 USD. We joined the KAMOME-PJ (Kanagawa Advanced MOdule for Material Evaluation Project) together with DRONE FACTORY Corp., KUUSATSU Corp. and Fuji Imvac Inc., and we successfully obtained middle-infrared spectroscopic imaging with the multi-copter drone.
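
    Only as an illustration of the underlying principle (not the authors' processing chain): a Fourier-transform spectrometer of this kind recovers the spectrum at each pixel by transforming the interferogram recorded while the phase shifter scans, roughly as in the synthetic example below.

      import numpy as np

      n_samples = 1024
      opd_step_cm = 1e-4                           # optical path difference per sample (assumed)
      opd = np.arange(n_samples) * opd_step_cm

      # Synthetic monochromatic input at 1000 cm^-1 (10 um), within the 8-14 um band
      interferogram = 1.0 + np.cos(2 * np.pi * 1000.0 * opd)

      spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
      wavenumbers = np.fft.rfftfreq(n_samples, d=opd_step_cm)          # cm^-1 axis
      print(f"peak near {wavenumbers[spectrum.argmax()]:.0f} cm^-1")   # ~1000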

  17. Laser scanning camera inspects hazardous area

    International Nuclear Information System (INIS)

    Fryatt, A.; Miprode, C.

    1985-01-01

    Main operational characteristics of a new laser scanning camera are presented. The camera is intended primarily for low level, high resolution viewing inside nuclear reactors. It uses a He-Ne laser beam raster; by detecting the reflected light by means of a photomultiplier, the subject under observation can be reconstructed in an electronic video store and reviewed on a conventional monitor screen

  18. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of that quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in radiation density of the exposure, to maintain the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index for maintaining its detectability and causing the proper centering of the radiation camera image

  19. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  20. Camera Traps Can Be Heard and Seen by Animals

    Science.gov (United States)

    Meek, Paul D.; Ballard, Guy-Anthony; Fleming, Peter J. S.; Schaefer, Michael; Williams, Warwick; Falzon, Greg

    2014-01-01

    Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing range (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals’ hearing and produce illumination that can be seen by many species. PMID:25354356

  1. Camera traps can be heard and seen by animals.

    Directory of Open Access Journals (Sweden)

    Paul D Meek

    Full Text Available Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing range (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals' hearing and produce illumination that can be seen by many species.

  2. Multi-Angle Snowflake Camera Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Stuefer, Martin [Univ. of Alaska, Fairbanks, AK (United States); Bailey, J. [Univ. of Alaska, Fairbanks, AK (United States)

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.
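
    For illustration only (not the MASC firmware), the fall-speed estimate described above reduces to dividing the known vertical separation of the two infrared trigger planes by the time between their trigger events; a short sketch with purely hypothetical numbers:

    # Illustrative fall-speed estimate from two stacked IR trigger planes.
    # The separation and trigger times below are made up, not MASC's actual geometry.
    plane_separation_m = 0.032        # vertical distance between emitter pairs [m]
    t_upper_trigger_s = 0.000000      # time the upper beam is interrupted [s]
    t_lower_trigger_s = 0.021333      # time the lower beam is interrupted [s]

    fall_speed = plane_separation_m / (t_lower_trigger_s - t_upper_trigger_s)
    print(f"estimated fall speed: {fall_speed:.2f} m/s")   # ~1.5 m/s for these numbers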

  3. Effect of low-level light therapy on diabetic foot ulcers: a near-infrared spectroscopy study

    Science.gov (United States)

    Salvi, Massimo; Rimini, Daniele; Molinari, Filippo; Bestente, Gianni; Bruno, Alberto

    2017-03-01

    Diabetic foot ulcer (DFU) is a diabetic complication due to peripheral vasculopathy and neuropathy. A promising technology for wound healing in DFU is low-level light therapy (LLLT). Despite several studies showing positive effects of LLLT on DFU, LLLT's physiological effects have not yet been studied. The objective of this study was to investigate vascular and nervous system modifications in DFU after LLLT. Two samples of 45 DFU patients and 11 healthy controls (HCs) were recruited. The total hemoglobin (totHb) concentration change was monitored before and after LLLT by near-infrared spectroscopy and analyzed in the time and frequency domains. The spectral power of the totHb changes in the very-low-frequency (VLF, 20 to 60 mHz) and low-frequency (LF, 60 to 140 mHz) bandwidths was calculated. Data analysis revealed a mean increase of totHb concentration after LLLT in DFU patients, but not in HC. The VLF/LF ratio decreased significantly after the LLLT period in DFU patients (indicating an increased activity of the autonomic nervous system), but not in HC. Finally, different treatment intensities in LLLT showed a different response in DFU. Overall, our results demonstrate that LLLT improves blood flow and autonomic nervous system regulation in DFU, and highlight the importance of light intensity in therapeutic protocols.
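
    The VLF/LF analysis described above amounts to comparing the spectral power of the totHb signal in two frequency bands. A minimal sketch of one way such a ratio can be computed (the band edges follow the abstract; the sampling rate, record length, and Welch parameters are assumptions, and the random series is a placeholder for real NIRS data):

    import numpy as np
    from scipy.signal import welch

    def band_power(freqs, psd, f_lo, f_hi):
        """Integrate a power spectral density over [f_lo, f_hi) (Hz)."""
        mask = (freqs >= f_lo) & (freqs < f_hi)
        return np.trapz(psd[mask], freqs[mask])

    fs = 1.0                                    # assumed NIRS sampling rate [Hz]
    tot_hb = np.random.randn(1800)              # placeholder totHb series (30 min)

    freqs, psd = welch(tot_hb, fs=fs, nperseg=512)
    vlf = band_power(freqs, psd, 0.020, 0.060)  # VLF band: 20-60 mHz
    lf = band_power(freqs, psd, 0.060, 0.140)   # LF band: 60-140 mHz
    print(f"VLF/LF ratio: {vlf / lf:.2f}")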

  4. AIRS/Aqua Level 1B Visible/Near Infrared (VIS/NIR) geolocated and calibrated radiances V005

    Data.gov (United States)

    National Aeronautics and Space Administration — The Atmospheric Infrared Sounder (AIRS) is a facility instrument aboard the second Earth Observing System (EOS) polar-orbiting platform, EOS Aqua. In combination...

  5. Aqua AIRS Level 2 Near Real Time (NRT) Cloud-Cleared Infrared Radiances (AIRS+AMSU) V006

    Data.gov (United States)

    National Aeronautics and Space Administration — The Atmospheric Infrared Sounder (AIRS) is a facility instrument aboard the second Earth Observing System (EOS) polar-orbiting platform, EOS Aqua. In combination...

  6. AIRS/Aqua Level 1B Visible/Near Infrared (VIS/NIR) quality assurance subset V005

    Data.gov (United States)

    National Aeronautics and Space Administration — The Atmospheric Infrared Sounder (AIRS) is a facility instrument aboard the second Earth Observing System (EOS) polar-orbiting platform, EOS Aqua. In combination...

  7. AIRS/Aqua Near Real Time (NRT) Level 1B Infrared (IR) quality assurance subset V005

    Data.gov (United States)

    National Aeronautics and Space Administration — The Atmospheric Infrared Sounder (AIRS) is a facility instrument aboard the second Earth Observing System (EOS) polar-orbiting platform, EOS Aqua. In combination...

  8. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to the use of cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a great presence of 'digital cameras' aimed more at the home markets. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs which offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application would be such effects as 'pixel jitter,' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog to digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  9. Calibration and verification of thermographic cameras for geometric measurements

    Science.gov (United States)

    Lagüela, S.; González-Jorge, H.; Armesto, J.; Arias, P.

    2011-03-01

    Infrared thermography is a technique with an increasing degree of development and applications. Quality assessment of the measurements performed with thermal cameras should be achieved through metrological calibration and verification. Infrared cameras acquire temperature and geometric information, although calibration and verification procedures are usual only for the thermal data. Black bodies are used for these purposes. Moreover, the geometric information is important for many fields, such as architecture, civil engineering and industry. This work presents a calibration procedure that allows the photogrammetric restitution, and a portable artefact to verify the geometric accuracy, repeatability and drift of thermographic cameras. These results allow the incorporation of this information into the quality control processes of the companies. A grid based on burning lamps is used for the geometric calibration of thermographic cameras. The artefact designed for the geometric verification consists of five delrin spheres and seven cubes of different sizes. Metrology traceability for the artefact is obtained from a coordinate measuring machine. Two sets of targets with different reflectivity are fixed to the spheres and cubes to make data processing and photogrammetric restitution possible. Reflectivity was the chosen material property because both the thermographic and visible cameras are able to detect it. Two thermographic cameras from the Flir and Nec manufacturers, and one visible camera from Jai, are calibrated, verified and compared using calibration grids and the standard artefact. The calibration system based on burning lamps shows its capability to perform the internal orientation of the thermal cameras. Verification results show repeatability better than 1 mm for all cases, being better than 0.5 mm for the visible one. As must be expected, accuracy also appears higher in the visible camera, and the geometric comparison between thermographic cameras shows slightly better

  10. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  11. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.; Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce better images than conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given, and the problems and limitations introduced by noise are discussed in full. (U.K.)

  12. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from ¹⁶N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with ¹⁶N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins

  13. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer

    Directory of Open Access Journals (Sweden)

    Bailey Y. Shen

    2017-01-01

    Full Text Available Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133mm×91mm×45mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient.
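
    As a loose illustration of the kind of capture software such a Raspberry Pi based camera might run (not the authors' code; the library choice, GPIO pin, resolution, and file name are all assumptions), an infrared-sensitive camera board can be driven with the picamera library while the illumination LED is switched through GPIO:

    # Hypothetical single-shot capture for a Raspberry Pi camera board with an
    # IR/white LED on a GPIO pin. Pin number and settings are illustrative only.
    import time
    from picamera import PiCamera
    import RPi.GPIO as GPIO

    LED_PIN = 18                          # assumed GPIO pin driving the LED

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(LED_PIN, GPIO.OUT)

    camera = PiCamera(resolution=(1640, 1232))
    try:
        GPIO.output(LED_PIN, GPIO.HIGH)   # illuminate the fundus
        time.sleep(0.5)                   # let the exposure settle
        camera.capture('fundus.jpg')      # grab a single frame to disk
    finally:
        GPIO.output(LED_PIN, GPIO.LOW)
        camera.close()
        GPIO.cleanup()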

  14. Infrared source test

    Energy Technology Data Exchange (ETDEWEB)

    Ott, L.

    1994-11-15

    The purpose of the Infrared Source Test (IRST) is to demonstrate the ability to track a ground target with an infrared sensor from an airplane. The system is being developed within the Advance Technology Program's Theater Missile Defense/Unmanned Aerial Vehicle (UAV) section. The IRST payload consists of an Amber Radiance 1 infrared camera system, a computer, a gimbaled mirror, and a hard disk. The processor is a custom R3000 CPU board made by Risq Modular Systems, Inc. for LLNL. The board has ethernet, SCSI, parallel I/O, and serial ports, a DMA channel, a video (frame buffer) interface, and eight MBytes of main memory. The real-time operating system VxWorks has been ported to the processor. The application code is written in C on a host SUN 4 UNIX workstation. The IRST is the result of a combined effort by physicists, electrical and mechanical engineers, and computer scientists.

  15. Construction of a frameless camera-based stereotactic neuronavigator.

    Science.gov (United States)

    Cornejo, A; Algorri, M E

    2004-01-01

    We built an infrared vision system to be used as the real-time 3D motion sensor in a prototype low-cost, high-precision, frameless neuronavigator. The objective of the prototype is to develop accessible technology for increased availability of neuronavigation systems in research labs and small clinics and hospitals. We present our choice of technology, including camera and IR emitter characteristics. We describe the methodology for setting up the 3D motion sensor, from the arrangement of the cameras and the IR emitters on surgical instruments, to triangulation equations from stereo camera pairs, high-bandwidth computer communication with the cameras, and real-time image processing algorithms. We briefly cover the issues of camera calibration and characterization. Although our performance results do not yet fully meet the high-precision, real-time requirements of neuronavigation systems, we describe the current improvements being made to the 3D motion sensor that will make it suitable for surgical applications.
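
    The stereo triangulation mentioned above can be illustrated with the standard rectified two-camera relation, in which depth follows from disparity, focal length, and baseline. A minimal sketch with hypothetical calibration numbers (not the authors' setup):

    import numpy as np

    def triangulate_rectified(u_left, u_right, v, f_px, baseline_m, cx, cy):
        """3D position of an IR marker seen by a rectified stereo camera pair.

        u_left/u_right: horizontal pixel coordinates of the same marker in the
        two images, v: shared vertical pixel coordinate, f_px: focal length in
        pixels, baseline_m: camera separation, (cx, cy): principal point.
        """
        disparity = u_left - u_right
        z = f_px * baseline_m / disparity          # depth along the optical axis
        x = (u_left - cx) * z / f_px
        y = (v - cy) * z / f_px
        return np.array([x, y, z])

    # Hypothetical calibration and one detected marker.
    point = triangulate_rectified(u_left=712.0, u_right=648.0, v=430.0,
                                  f_px=1400.0, baseline_m=0.20, cx=640.0, cy=480.0)
    print("marker position [m]:", point.round(3))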

  16. State of art in radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Young Soo; Kim, Seong Ho; Cho, Jae Wan; Kim, Chang Hoi; Seo, Young Chil

    2002-02-01

    Work in radiation environments such as nuclear power plants, RI facilities, nuclear fuel fabrication facilities, and medical centers has to take radiation exposure into account, and such jobs can be carried out by remote observation and operation. However, the cameras used in general industry are weakened by radiation, so radiation-tolerant cameras are needed for radiation environments. Radiation-tolerant camera systems are applied in the nuclear industry, radiological medicine, aerospace, and so on. In the nuclear industry especially, there is continuous demand for the inspection of nuclear boilers, the exchange of pellets, and the inspection of nuclear waste. The nuclear developed countries have made efforts to develop radiation-tolerant cameras, and now have many kinds of radiation-tolerant cameras which can tolerate a total dose of 10⁶-10⁸ rad. In this report, we examine the state of the art of radiation-tolerant cameras and analyze these technologies. With this paper we want to raise interest in developing radiation-tolerant cameras and upgrade the level of domestic technology.

  17. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera which tolerates a total dose of 10⁶-10⁸ rad was developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the results of the evaluation, the components were selected and the design was performed. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controller needed for the CCTV camera system (lens, light, pan/tilt controller) was designed on a remote-control concept. Two types of radiation-tolerant camera were fabricated, intended for use in underwater or normal environments. (author)

  18. Commercialization of radiation tolerant camera

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera which tolerates a total dose of 10⁶-10⁸ rad was developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the results of the evaluation, the components were selected and the design was performed. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controller needed for the CCTV camera system (lens, light, pan/tilt controller) was designed on a remote-control concept. Two types of radiation-tolerant camera were fabricated, intended for use in underwater or normal environments. (author)

  19. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    A positron emission tomography camera having a plurality of detector rings positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom. Each detector ring or offset ring includes a plurality of photomultiplier tubes, and a plurality of scintillation crystals are positioned relative to the photomultiplier tubes whereby each tube is responsive to more than one crystal. Each alternate crystal in the ring is offset by one-half or less of the thickness of the crystal such that the staggered crystals are seen by more than one photomultiplier tube. This sharing of crystals and photomultiplier tubes allows identification of the staggered crystal and the use of smaller detectors shared by larger photomultiplier tubes, thereby requiring fewer photomultiplier tubes, creating more scanning slices, providing better data sampling, and reducing the cost of the camera. The offset detector ring geometry reduces the cost of the positron camera and improves its performance

  20. Dynamical scene analysis with a moving camera: mobile targets detection system

    International Nuclear Information System (INIS)

    Hennebert, Christine

    1996-01-01

    This thesis work deals with the detection of moving objects in monocular image sequences acquired with a mobile camera. We propose a method able to detect small moving objects in visible or infrared images of real outdoor scenes. In order to detect objects of very low apparent motion, we consider an analysis over a large temporal interval. We have chosen to compensate for the dominant motion due to the camera displacement over several consecutive images, in order to form a sub-sequence of images for which the camera seems virtually static. We have also developed a new approach that extracts the different layers of a real scene, in order to deal with cases where the 2D motion due to the camera displacement cannot be globally compensated for. To this end, we use a hierarchical model with two levels: a local merging step and a global merging one. Then, an appropriate temporal filtering is applied to the registered image sub-sequence to enhance signals corresponding to moving objects. The detection issue is stated as a labeling problem within a statistical regularization based on Markov Random Fields. Our method has been validated on numerous real image sequences depicting complex outdoor scenes. Finally, the feasibility of an integrated circuit for mobile object detection has been proved; this circuit could lead to the creation of an ASIC. (author)
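
    As a loose single-frame-pair illustration of the dominant-motion-compensation idea (not the thesis's multi-frame registration, layer extraction, or MRF regularization), OpenCV can estimate the camera-induced affine motion between two frames, warp one onto the other, and threshold the residual difference:

    import cv2
    import numpy as np

    def detect_moving_objects(prev_gray, curr_gray, diff_threshold=25):
        """Crude moving-object mask: compensate the dominant (camera) motion with
        an affine model estimated from tracked corners, then threshold the
        residual frame difference. Illustrative only."""
        # Track corner features to estimate the dominant image motion.
        pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                           qualityLevel=0.01, minDistance=8)
        pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                       pts_prev, None)
        good_prev = pts_prev[status.ravel() == 1]
        good_curr = pts_curr[status.ravel() == 1]

        # Robustly fit an affine model; the inlier motion is taken as camera motion.
        warp, _ = cv2.estimateAffinePartial2D(good_prev, good_curr,
                                              method=cv2.RANSAC)

        # Warp the previous frame so the camera appears static, then difference.
        h, w = curr_gray.shape
        stabilized_prev = cv2.warpAffine(prev_gray, warp, (w, h))
        residual = cv2.absdiff(curr_gray, stabilized_prev)
        _, mask = cv2.threshold(residual, diff_threshold, 255, cv2.THRESH_BINARY)
        return mask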

  1. Infrared imaging of the crime scene: possibilities and pitfalls

    NARCIS (Netherlands)

    Edelman, Gerda J.; Hoveling, Richelle J. M.; Roos, Martin; van Leeuwen, Ton G.; Aalders, Maurice C. G.

    2013-01-01

    All objects radiate infrared energy invisible to the human eye, which can be imaged by infrared cameras, visualizing differences in temperature and/or emissivity of objects. Infrared imaging is an emerging technique for forensic investigators. The rapid, nondestructive, and noncontact features of

  2. Infrared hyperspectral upconversion imaging using spatial object translation

    DEFF Research Database (Denmark)

    Kehlet, Louis Martinus; Sanders, Nicolai Højer; Tidemand-Lichtenberg, Peter

    2015-01-01

    In this paper hyperspectral imaging in the mid-infrared wavelength region is realised using nonlinear frequency upconversion. The infrared light is converted to the near-infrared region for detection with a Si-based CCD camera. The object is translated in a predefined grid by motorized actuators...

  3. Measurements of temperature of the tungsten hexa-ethoxide pyrolysis flame using IR camera

    CSIR Research Space (South Africa)

    Mudau, AE

    2010-09-01

    Full Text Available In laser pyrolysis, temperature measurement and control play a vital role during the development of nanoparticles. The authors present the results of temperature measurements using an infrared camera on a tungsten hexa-ethoxide pyrolysis flame used...

  4. Software for fast cameras and image handling on MAST

    International Nuclear Information System (INIS)

    Shibaev, S.

    2008-01-01

    The rapid progress in fast imaging gives new opportunities for fusion research. The data obtained by fast cameras play an important and ever-increasing role in analysis and understanding of plasma phenomena. The fast cameras produce a huge amount of data which creates considerable problems for acquisition, analysis, and storage. We use a number of fast cameras on the Mega-Amp Spherical Tokamak (MAST). They cover several spectral ranges: broadband visible, infra-red and narrow band filtered for spectroscopic studies. These cameras are controlled by programs developed in-house. The programs provide full camera configuration and image acquisition in the MAST shot cycle. Despite the great variety of image sources, all images should be stored in a single format. This simplifies development of data handling tools and hence the data analysis. A universal file format has been developed for MAST images which supports storage in both raw and compressed forms, using either lossless or lossy compression. A number of access and conversion routines have been developed for all languages used on MAST. Two movie-style display tools have been developed-Windows native and Qt based for Linux. The camera control programs run as autonomous data acquisition units with full camera configuration set and stored locally. This allows easy porting of the code to other data acquisition systems. The software developed for MAST fast cameras has been adapted for several other tokamaks where it is in regular use

  5. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations — i.e. automatically controlling the virtual...

  6. The world's fastest camera

    CERN Multimedia

    Piquepaille, Roland

    2006-01-01

    This image processor is not your typical digital camera. It took 20 people 6 years and $6 million to build the "Regional Calorimeter Trigger" (RCT), which will be a component of the Compact Muon Solenoid (CMS) experiment, one of the detectors on the Large Hadron Collider (LHC) in Geneva, Switzerland (1 page)

  7. The ultraviolet to infrared energy distribution of the BL Lacertae object PKS 0422+00 at two different brightness levels

    Energy Technology Data Exchange (ETDEWEB)

    Falomo, R.; Bouchet, P.; Maraschi, L.; Treves, A.; Tanzi, E.G. (Padova, Osservatorio Astronomico, Padua (Italy); European Southern Observatory, La Silla (Chile); Milano Universita, Milan (Italy); CNR, Istituto di Fisica Cosmica, Milan (Italy))

    1989-10-01

    The BL Lacertae object PKS 0422+00 was observed with IUE (International Ultraviolet Explorer) on August 31-September 1, 1987, when the visual magnitude of the object was V = 16.2, and again about 4 months later (January 10, 1988) during an active state (V = 15.6). Quasi-simultaneous optical to infrared observations allow deriving a detailed spectral flux distribution from 8 × 10¹³ to 2.5 × 10¹⁵ Hz for each epoch. Fits in terms of broken power laws and logarithmic parabolas are discussed. 32 refs.

  8. The ultraviolet to infrared energy distribution of the BL Lacertae object PKS 0422+00 at two different brightness levels

    International Nuclear Information System (INIS)

    Falomo, R.; Bouchet, P.; Maraschi, L.; Treves, A.; Tanzi, E.G.

    1989-01-01

    The BL Lacertae object PKS 0422+00 was observed with IUE (International Ultraviolet Explorer) on August 31-September 1, 1987, when the visual magnitude of the object was V = 16.2, and again about 4 months later (January 10, 1988) during an active state (V = 15.6). Quasi-simultaneous optical to infrared observations allow deriving a detailed spectral flux distribution from 8 × 10¹³ to 2.5 × 10¹⁵ Hz for each epoch. Fits in terms of broken power laws and logarithmic parabolas are discussed. 32 refs

  9. Camera network video summarization

    Science.gov (United States)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network still remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l21-norm minimization. The objective function is two-fold. The first is to capture the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second is to use a capped l21-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both of the objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets well demonstrate the efficacy of our method over state-of-the-art methods.

  10. Uncooled infrared sensors: rapid growth and future perspective

    Science.gov (United States)

    Balcerak, Raymond S.

    2000-07-01

    The uncooled infrared cameras are now available for both the military and commercial markets. The current camera technology incorporates the fruits of many years of development, focusing on the details of pixel design, novel material processing, and low noise read-out electronics. The rapid insertion of cameras into systems is testimony to the successful completion of this 'first phase' of development. In the military market, the first uncooled infrared cameras will be used for weapon sights, driver's viewers and helmet mounted cameras. Major commercial applications include night driving, security, police and fire fighting, and thermography, primarily for preventive maintenance and process control. The technology for the next generation of cameras is even more demanding, but within reach. The paper outlines the technology program planned for the next generation of cameras, and the approaches to further enhance performance, even to the radiation limit of thermal detectors.

  11. Infrared upconversion hyperspectral imaging

    DEFF Research Database (Denmark)

    Kehlet, Louis Martinus; Tidemand-Lichtenberg, Peter; Dam, Jeppe Seidelin

    2015-01-01

    In this Letter, hyperspectral imaging in the mid-IR spectral region is demonstrated based on nonlinear frequency upconversion and subsequent imaging using a standard Si-based CCD camera. A series of upconverted images are acquired with different phase match conditions for the nonlinear frequency conversion process. From this, a sequence of monochromatic images in the 3.2-3.4 μm range is generated. The imaged object consists of a standard United States Air Force resolution target combined with a polystyrene film, resulting in the presence of both spatial and spectral information in the infrared image. (C) 2015 Optical Society of America

  12. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for our retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.

  13. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1986-01-01

    A positron emission tomography camera having a plurality of detector rings positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom. Each ring contains a plurality of scintillation detectors which are positioned around an inner circumference, with a septum ring extending inwardly from the inner circumference along each outer edge of each ring. An additional septum ring is positioned in the middle of each ring of detectors and parallel to the other septa rings, whereby the inward extent of all the septa rings may be reduced by one-half and the number of detectors required in each ring is reduced. The additional septa reduce the cost of the positron camera and improve its performance

  14. Gamma ray camera

    International Nuclear Information System (INIS)

    Wang, S.-H.; Robbins, C.D.

    1979-01-01

    An Anger gamma ray camera is improved by the substitution of a gamma ray sensitive, proximity type image intensifier tube for the scintillator screen in the Anger camera. The image intensifier tube has a negatively charged flat scintillator screen, a flat photocathode layer, and a grounded, flat output phosphor display screen, all of which have the same dimension to maintain unit image magnification; all components are contained within a grounded metallic tube, with a metallic, inwardly curved input window between the scintillator screen and a collimator. The display screen can be viewed by an array of photomultipliers or solid state detectors. There are two photocathodes and two phosphor screens to give a two stage intensification, the two stages being optically coupled by a light guide. (author)

  15. NSTX Tangential Divertor Camera

    International Nuclear Information System (INIS)

    Roquemore, A.L.; Ted Biewer; Johnson, D.; Zweben, S.J.; Nobuhiro Nishino; Soukhanovskii, V.A.

    2004-01-01

    Strong magnetic field shear around the divertor x-point is numerically predicted to lead to strong spatial asymmetries in turbulence driven particle fluxes. To visualize the turbulence and associated impurity line emission near the lower x-point region, a new tangential observation port has been recently installed on NSTX. A reentrant sapphire window with a moveable in-vessel mirror images the divertor region from the center stack out to R 80 cm and views the x-point for most plasma configurations. A coherent fiber optic bundle transmits the image through a remotely selected filter to a fast camera, for example a 40500 frames/sec Photron CCD camera. A gas puffer located in the lower inboard divertor will localize the turbulence in the region near the x-point. Edge fluid and turbulent codes UEDGE and BOUT will be used to interpret impurity and deuterium emission fluctuation measurements in the divertor

  16. Scanning gamma camera

    International Nuclear Information System (INIS)

    Engdahl, L.W.; Batter, J.F. Jr.; Stout, K.J.

    1977-01-01

    A scanning system for a gamma camera providing for the overlapping of adjacent scan paths is described. A collimator mask having tapered edges provides for a graduated reduction in intensity of radiation received by a detector thereof, the reduction in intensity being graduated in a direction normal to the scanning path to provide a blending of images of adjacent scan paths. 31 claims, 15 figures

  17. Gamma camera display system

    International Nuclear Information System (INIS)

    Stout, K.J.

    1976-01-01

    A gamma camera having an array of photomultipliers coupled via pulse shaping circuitry and a resistor weighting circuit to a display for forming an image of a radioactive subject is described. A linearizing circuit is coupled to the weighting circuit, the linearizing circuit including a nonlinear feedback circuit with diode coupling to the weighting circuit for linearizing the correspondence between points of the display and points of the subject. 4 Claims, 5 Drawing Figures

  18. Comparison of polarimetric cameras

    Science.gov (United States)

    2017-03-01

    Keywords: polarimetric camera, remote sensing, space systems. Data collections referenced in the report include Hermann Hall, Monterey, CA, and (for Figure 37) the rooftop of the Marriott Hotel in Monterey, CA, on 01 December 2016 at 1226 PST.

  19. Comparison of parameters of modern cooled and uncooled thermal cameras

    Science.gov (United States)

    Bareła, Jarosław; Kastek, Mariusz; Firmanty, Krzysztof; Krupiński, Michał

    2017-10-01

    During the design of a system employing thermal cameras one always faces the problem of choosing the camera types best suited for the task. In many cases such a choice is far from optimal, and there are several reasons for that. System designers often favor a tried and tested solution they are used to; they do not follow the latest developments in the field of infrared technology, and sometimes their choices are based on prejudice and not on facts. The paper presents the results of measurements of basic parameters of MWIR and LWIR thermal cameras, carried out in a specialized testing laboratory. The measured parameters are decisive for the quality of the images generated by thermal cameras. All measurements were conducted according to current procedures and standards. However, the camera settings were not optimized for specific test conditions or parameter measurements; instead, the real settings used in normal camera operation were applied, to obtain realistic camera performance figures. For example, there were significant differences between the measured values of noise parameters and the catalogue data provided by manufacturers, due to the application of edge detection filters to increase detection and recognition ranges. The purpose of this paper is to provide help in choosing the optimal thermal camera for a particular application, answering the question whether to opt for a cheaper microbolometer device or to apply a slightly better (in terms of specifications) yet more expensive cooled unit. Measurements and analysis were performed by qualified personnel with several dozen years of experience in both designing and testing of thermal camera systems with both cooled and uncooled focal plane arrays. Cameras of similar array sizes and optics were compared, and for each tested group the best performing devices were selected.

  20. Computer vision camera with embedded FPGA processing

    Science.gov (United States)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium size one equivalent to 25,000 logic gates. The device is connected to two high speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
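
    For reference, the multi-scale Laplacian-of-Gaussian edge detection mentioned above can be sketched in a few lines of host-side code; this is a generic software illustration of the algorithm, not the paper's FPGA architecture, and the scales and threshold are arbitrary:

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def multiscale_log_edges(image, sigmas=(1.0, 2.0, 4.0), threshold=0.01):
        """Boolean edge map from zero-crossings of the Laplacian of Gaussian
        computed at several scales."""
        image = image.astype(np.float64)
        edges = np.zeros(image.shape, dtype=bool)
        for sigma in sigmas:
            log = gaussian_laplace(image, sigma=sigma)
            # Edge candidates where the LoG response changes sign...
            zero_cross = ((np.roll(log, 1, axis=0) * log < 0) |
                          (np.roll(log, 1, axis=1) * log < 0))
            # ...and the response is strong enough to reject noise.
            strong = np.abs(log) > threshold * np.abs(log).max()
            edges |= zero_cross & strong
        return edges

    # Example: edges of a synthetic bright square on a dark background.
    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0
    print("edge pixels found:", int(multiscale_log_edges(img).sum()))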

  1. Extended spectrum SWIR camera with user-accessible Dewar

    Science.gov (United States)

    Benapfl, Brendan; Miller, John Lester; Vemuri, Hari; Grein, Christoph; Sivananthan, Siva

    2017-02-01

    Episensors has developed a series of extended short wavelength infrared (eSWIR) cameras based on high-Cd concentration Hg1-xCdxTe absorbers. The cameras have a bandpass extending to a 3 micron cutoff wavelength, opening new applications relative to traditional InGaAs-based cameras. Applications and uses are discussed and examples given. A liquid nitrogen pour-filled version was initially developed. This was followed by a compact Stirling-cooled version with detectors operating at 200 K. Each camera has unique sensitivity and performance characteristics. The cameras' size, weight and power specifications are presented along with images captured with band pass filters and eSWIR sources to demonstrate spectral response beyond 1.7 microns. The soft-seal Dewars of the cameras are designed for accessibility, and can be opened and modified in a standard laboratory environment. This modular approach allows user flexibility for swapping internal components such as cold filters and cold stops. The core electronics of the Stirling-cooled camera are based on a single commercial field programmable gate array (FPGA) that also performs on-board non-uniformity corrections and bad pixel replacement, and directly drives any standard HDMI display.
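
    The on-board non-uniformity correction and bad-pixel replacement mentioned above are commonly implemented as a per-pixel two-point (gain/offset) correction followed by neighbour substitution; a generic numpy sketch of that scheme (an assumption for illustration, not Episensors' actual FPGA pipeline):

    import numpy as np

    def two_point_nuc(raw, dark_ref, flat_ref, bad_pixel_mask):
        """Two-point non-uniformity correction with bad-pixel replacement.

        dark_ref / flat_ref: mean frames recorded against cold and warm uniform
        scenes; bad_pixel_mask: True where a pixel is dead or excessively noisy.
        """
        # Per-pixel gain that maps each pixel's response onto the array average.
        span = np.maximum(flat_ref - dark_ref, 1e-6)   # avoid divide-by-zero on dead pixels
        gain = (flat_ref.mean() - dark_ref.mean()) / span
        corrected = (raw - dark_ref) * gain + dark_ref.mean()

        # Replace flagged pixels with the mean of their horizontal neighbours.
        left = np.roll(corrected, 1, axis=1)
        right = np.roll(corrected, -1, axis=1)
        corrected[bad_pixel_mask] = 0.5 * (left + right)[bad_pixel_mask]
        return corrected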

  2. New gonioscopy system using only infrared light.

    Science.gov (United States)

    Sugimoto, Kota; Ito, Kunio; Matsunaga, Koichi; Miura, Katsuya; Esaki, Koji; Uji, Yukitaka

    2005-08-01

    To describe an infrared gonioscopy system designed to observe the anterior chamber angle under natural mydriasis in a completely darkened room. An infrared light filter was used to modify the light source of the slit-lamp microscope. A television monitor connected to a CCD monochrome camera was used to indirectly observe the angle. Use of the infrared system enabled observation of the angle under natural mydriasis in a completely darkened room. Infrared gonioscopy is a useful procedure for the observation of the angle under natural mydriasis.

  3. Characterization of SWIR cameras by MRC measurements

    Science.gov (United States)

    Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.

    2014-05-01

    Cameras for the SWIR wavelength range are becoming more and more important because of the better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the best-suited SWIR camera, or to qualify a camera for a given application, characterization of the camera by means of the Minimum Resolvable Contrast (MRC) concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera device, the achievable observation range can be calculated for every combination of target size, illumination level or weather conditions. MRC measurements in the SWIR wavelength band can be performed largely along the guidelines of the MRC measurements of a visual camera. Typically, measurements are performed with a set of resolution targets (e.g. USAF 1951 target) manufactured with different contrast values, from 50% down to less than 1%. For a given illumination level the achievable spatial resolution is then measured for each target. The resulting curve shows the minimum contrast that is necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, first, the irradiation parameters have to be given in radiometric instead of photometric units, which are limited in their use to the visible range; to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source with appropriate emission in the SWIR range (e.g. an incandescent lamp) is necessary, and the irradiance has to be measured in W/m2 instead of Lux = Lumen/m2. Third, the contrast values of the targets have to be newly calibrated for the SWIR range, because they typically differ from the values determined for the visual range. Measured MRC values of three cameras are compared to the specified performance data of the devices and the results of a multi-band in-house designed Vis-SWIR camera

  4. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Toshiba long ago began manufacturing black-and-white radiation-resistant camera tubes employing non-browning faceplate glass for ITV cameras used in nuclear power plants. Now, in response to the increasing demand in the nuclear power field, the company is working on the development of radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. The results of experiments on the characteristics of materials for single color-camera tubes, and the prospects for commercialization of the tubes, are presented here. (author)

  5. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    section unearths what characterizes the literature on camera movement. The second section of the dissertation delineates the history of camera movement itself within narrative cinema. Several organizational principles subtending the on-screen effect of camera movement are revealed in section two...... but they are not organized into a coherent framework. This is the task that section three meets in proposing a functional taxonomy for camera movement in narrative cinema. Two presumptions subtend the taxonomy: That camera movement actively contributes to the way in which we understand the sound and images on the screen......, commentative or valuative manner. 4) Focalization: associating the movement of the camera with the viewpoints of characters or entities in the story world. 5) Reflexive: inviting spectators to engage with the artifice of camera movement. 6) Abstract: visualizing abstract ideas and concepts. In order...

  6. Far-infrared radiation inhibits proliferation, migration, and angiogenesis of human umbilical vein endothelial cells by suppressing secretory clusterin levels.

    Science.gov (United States)

    Hwang, Soojin; Lee, Dong-Hoon; Lee, In-Kyu; Park, Young Mi; Jo, Inho

    2014-04-28

    Far-infrared (FIR) radiation is known to lessen the risk of angiogenesis-related diseases including cancer. Because deficiency of secretory clusterin (sCLU) has been reported to inhibit angiogenesis of endothelial cells (EC), we investigated, using human umbilical vein EC (HUVEC), whether sCLU mediates the inhibitory effects of FIR radiation. Although FIR radiation of 3-25 μm wavelength at room temperature for 60 min did not alter EC viability, further incubation in the culture incubator (at 37°C under 5% CO2) after radiation significantly inhibited EC proliferation, in vitro migration, and tube formation in a time-dependent manner. Under these conditions, we found decreased sCLU mRNA and protein expression in HUVEC and decreased sCLU protein secreted into the culture medium. As expected, replacement of the control culture medium with the FIR-irradiated conditioned medium significantly decreased wound closure and tube formation of HUVEC, and vice versa. Furthermore, neutralization of sCLU with anti-sCLU antibody also mimicked all observed inhibitory effects of FIR radiation. Moreover, treatment with recombinant human sCLU protein completely reversed the inhibitory effects of FIR radiation on EC migration and angiogenesis. Lastly, vascular endothelial growth factor also increased sCLU secretion in the culture medium, and wound closure and tube formation of HUVEC, which were significantly reduced by FIR radiation. Our results demonstrate a novel mechanism by which FIR radiation inhibits the proliferation, migration, and angiogenesis of HUVEC, via decreasing sCLU. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  7. Scintillation camera for high activity sources

    International Nuclear Information System (INIS)

    Arseneau, R.E.

    1976-01-01

    A scintillation camera is provided with electrical components which expand the intrinsic maximum rate of acceptance for processing of pulses emanating from detected radioactive events. Buffer storage is provided to accommodate temporary increases in the level of radioactivity. An early provisional determination of acceptability of pulses allows many unacceptable pulses to be discarded at an early stage

  8. Design of Endoscopic Capsule With Multiple Cameras.

    Science.gov (United States)

    Gu, Yingke; Xie, Xiang; Li, Guolin; Sun, Tianjia; Wang, Dan; Yin, Zheng; Zhang, Pengfei; Wang, Zhihua

    2015-08-01

    In order to reduce the miss rate of wireless capsule endoscopy, in this paper we propose a new endoscopic capsule system with multiple cameras. A master-slave architecture, including an efficient bus architecture and a four-level clock management architecture, is applied for the Multiple Cameras Endoscopic Capsule (MCEC). To cover more area of the gastrointestinal tract wall with low power, multiple cameras with a smart image capture strategy, including movement-sensitive control and camera selection, are used in the MCEC. To reduce the data transfer bandwidth and power consumption and to prolong the MCEC's working life, a low-complexity image compressor with a PSNR of 40.7 dB and a compression rate of 86% is implemented. A chipset is designed and implemented for the MCEC, and a six-camera endoscopic capsule prototype is implemented using the chipset. With the smart image capture strategy, the coverage rate of the MCEC prototype can reach 98% and its power consumption is only about 7.1 mW.
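
    For context, the PSNR figure quoted for the image compressor is defined from the mean squared error between an original frame and its reconstruction after compression; a short sketch of the standard computation (illustrative, not the chipset's implementation):

    import numpy as np

    def psnr(original, reconstructed, max_value=255.0):
        """Peak signal-to-noise ratio in dB for 8-bit images."""
        mse = np.mean((original.astype(np.float64) -
                       reconstructed.astype(np.float64)) ** 2)
        return float('inf') if mse == 0 else 10.0 * np.log10(max_value ** 2 / mse)

    # Toy example: a frame and a slightly perturbed "decompressed" copy.
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(240, 240), dtype=np.uint8)
    decoded = np.clip(frame + rng.normal(0, 2, frame.shape), 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(frame, decoded):.1f} dB")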

  9. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
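
    To make concrete what the intra-camera part of such a calibration provides, once tilt angle, focal length, and camera height are known, an image row can be mapped to a ground distance in front of the camera. A simplified flat-ground sketch with hypothetical values (not the paper's estimator):

    import math

    def pixel_row_to_ground_distance(v, cy, f_px, tilt_rad, cam_height_m):
        """Distance along a flat ground plane to the point imaged at row v.

        The camera is tilted down by tilt_rad; rows below the principal-point
        row cy look further downward (i.e. closer to the camera).
        """
        angle_below_horizon = tilt_rad + math.atan2(v - cy, f_px)
        if angle_below_horizon <= 0:
            raise ValueError("row is at or above the horizon")
        return cam_height_m / math.tan(angle_below_horizon)

    # Hypothetical calibration: 3 m mounting height, 10 degree tilt, f = 1000 px.
    for row in (400, 500, 600):
        d = pixel_row_to_ground_distance(v=row, cy=384, f_px=1000.0,
                                         tilt_rad=math.radians(10),
                                         cam_height_m=3.0)
        print(f"image row {row}: {d:.1f} m from the camera")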

  10. Infrared astronomy

    International Nuclear Information System (INIS)

    Setti, G.; Fazio, G.

    1978-01-01

    This volume contains lectures describing the important achievements in infrared astronomy. The topics included are galactic infrared sources and their role in star formation, the nature of the interstellar medium and galactic structure, the interpretation of infrared, optical and radio observations of extra-galactic sources and their role in the origin and structure of the universe, instrumental techniques and a review of future space observations. (C.F.)

  11. NV-CMOS HD camera for day/night imaging

    Science.gov (United States)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through the use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands, in a camera which operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  12. Experimental Demonstration of Adaptive Infrared Multispectral Imaging Using Plasmonic Filter Array (Postprint)

    Science.gov (United States)

    2016-10-10

    …with a plasmonic opto-coupler in order to move towards the next generation of versatile infrared cameras. To this end, and as an intermediate step, this paper reports the…

  13. Pain and mobility improvement and MDA plasma levels in degenerative osteoarthritis, low back pain, and rheumatoid arthritis after infrared A-irradiation

    International Nuclear Information System (INIS)

    Siems, W.; Siems, R.; Kitzing, M.; Harting, H.; Bresgen, N.; Eckl, P.M.; Brenke, R.

    2010-01-01

    Infrared (IR)-A irradiation can be useful in back and musculoskeletal pain therapy. In this study, joint and vertebral column pain and mobility were measured during two weeks of IR-A irradiation treatment of patients suffering from degenerative osteoarthritis of hip and knee, low back pain, or rheumatoid arthritis. Additionally, MDA serum levels were measured before and after IR-A treatment to check whether MDA variations accompany changes in pain intensity and mobility. Two hundred and seven patients were divided into verum groups receiving IR-A irradiation, placebo groups receiving visible but not IR irradiation, and groups receiving no irradiation. In osteoarthritis, significant pain reduction according to the Visual Analogue Scale and mobility improvements occurred in the verum group. Even though beneficial mean value changes occurred in the placebo group, the improvements in the placebo and No Irradiation groups were without statistical significance. In low back pain, pain and mobility improvements (by 35-40 %) were also found in the verum group. In rheumatoid arthritis, a delayed (2nd week) mobility improvement was seen; pain relief, however, was seen immediately. In patients suffering from low back pain or rheumatoid arthritis, the pain and mobility improvements were accompanied by significant changes of MDA serum levels. However, MDA does not appear to be a sensitive biomarker for changes in pain intensity in degenerative osteoarthritis. Nevertheless, unaffected or lowered MDA levels during intensive IR-A therapy argue against previous reports of free radical formation upon infrared exposure. In conclusion, rapid beneficial effects of IR-A on musculoskeletal pain and loss of joint mobility were demonstrated. (authors)

  14. Poster abstract: Water level estimation in urban ultrasonic/passive infrared flash flood sensor networks using supervised learning

    KAUST Repository

    Mousa, Mustafa; Claudel, Christian G.

    2014-01-01

    floods occur very rarely, we use a supervised learning approach to estimate the correction to the ultrasonic rangefinder caused by temperature fluctuations. Preliminary data shows that water level can be estimated with an absolute error of less than 2 cm
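    The abstract above (truncated in this record) describes correcting the ultrasonic rangefinder reading for temperature effects with supervised learning. The sketch below shows the general idea with a simple regularised linear regression; the feature choice, training values and model are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical training data: ambient temperature (deg C) versus the error of
# the ultrasonic range reading relative to a reference water level (cm).
temperature_c = np.array([[5.0], [12.0], [18.0], [24.0], [31.0], [38.0]])
range_error_cm = np.array([-1.8, -0.9, 0.0, 0.7, 1.6, 2.4])

correction_model = Ridge(alpha=1.0).fit(temperature_c, range_error_cm)

# Corrected water level = raw ultrasonic reading minus the predicted error.
raw_reading_cm = 87.5
predicted_error = correction_model.predict(np.array([[27.0]]))[0]
print(raw_reading_cm - predicted_error)
```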

  15. 24/7 security system: 60-FPS color EMCCD camera with integral human recognition

    Science.gov (United States)

    Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.

    2007-04-01

    An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and monochrome under quarter-moonlight to overcast starlight illumination. Sixty frame per second operation and progressive scanning minimizes motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms, to detect/localize/track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors that are used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars and trucks. Detection and tracking of targets too small for template-based detection is achieved. For face and vehicle targets the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.

  16. Video Chat with Multiple Cameras

    OpenAIRE

    MacCormick, John

    2012-01-01

    The dominant paradigm for video chat employs a single camera at each end of the conversation, but some conversations can be greatly enhanced by using multiple cameras at one or both ends. This paper provides the first rigorous investigation of multi-camera video chat, concentrating especially on the ability of users to switch between views at either end of the conversation. A user study of 23 individuals analyzes the advantages and disadvantages of permitting a user to switch between views at...

  17. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  18. Infrared thermography

    CERN Document Server

    Meola, Carosena

    2012-01-01

    This e-book conveys information about basic IRT theory, infrared detectors, signal digitalization and applications of infrared thermography in many fields such as medicine, foodstuff conservation, fluid-dynamics, architecture, anthropology, condition monitoring, non destructive testing and evaluation of materials and structures.

  19. A Motionless Camera

    Science.gov (United States)

    1994-01-01

    Omniview, a motionless, noiseless, exceptionally versatile camera was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.

  20. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1977-01-01

    A gamma camera system having control components operating in conjunction with a solid state detector is described. The detector is formed of a plurality of discrete components which are associated in geometrical or coordinate arrangement defining a detector matrix to derive coordinate signal outputs. These outputs are selectively filtered and summed to form coordinate channel signals and corresponding energy channel signals. A control feature of the invention regulates the noted summing and filtering performance to derive data acceptance signals which are addressed to further treating components. The latter components include coordinate and energy channel multiplexers as well as energy-responsive selective networks. A sequential control is provided for regulating the signal processing functions of the system to derive an overall imaging cycle.

  1. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    A positron emission tomography camera having a plurality of detector planes positioned side-by-side around a patient area to detect radiation. Each plane includes a plurality of photomultiplier tubes and at least two rows of scintillation crystals on each photomultiplier tube extend across to adjacent photomultiplier tubes for detecting radiation from the patient area. Each row of crystals on each photomultiplier tube is offset from the other rows of crystals, and the area of each crystal on each tube in each row is different from the area of the crystals on the tube in other rows for detecting which crystal is actuated and allowing the detector to detect more inter-plane slices. The crystals are offset by an amount equal to the length of the crystal divided by the number of rows. The rows of crystals on opposite sides of the patient may be rotated 90 degrees relative to each other.

  2. The Circular Camera Movement

    DEFF Research Database (Denmark)

    Hansen, Lennard Højbjerg

    2014-01-01

    It has been an accepted precept in film theory that specific stylistic features do not express specific content. Nevertheless, it is possible to find many examples in the history of film in which stylistic features do express specific content: for instance, the circular camera movement is used repeatedly to convey the feeling of a man and a woman falling in love. This raises the question of why producers and directors choose certain stylistic features to narrate certain categories of content. Through the analysis of several short film and TV clips, this article explores whether or not there are perceptual aspects related to specific stylistic features that enable them to be used for delimited narrational purposes. The article further attempts to reopen this particular stylistic debate by exploring the embodied aspects of visual perception in relation to specific stylistic features...

  3. Automatic locking radioisotope camera lock

    International Nuclear Information System (INIS)

    Rosauer, P.J.

    1978-01-01

    The lock of the present invention secures the isotope source in a stored shielded condition in the camera until a positive effort has been made to open the lock and take the source outside of the camera and prevents disconnection of the source pigtail unless the source is locked in a shielded condition in the camera. It also gives a visual indication of the locked or possible exposed condition of the isotope source and prevents the source pigtail from being completely pushed out of the camera, even when the lock is released. (author)

  4. Underwater Near-Infrared Spectroscopy: Muscle Oxygen Changes in the Upper and Lower Extremities in Club Level Swimmers and Triathletes.

    Science.gov (United States)

    Jones, B; Cooper, C E

    2016-01-01

    To date, measurements of oxygen status during swim exercise have focused upon systemic aerobic capacity. The development of a portable, waterproof NIRS device makes possible a local measurement of muscle hemodynamics and oxygenation that could provide a novel insight into the physiological changes that occur during swim exercise. The purpose of this study was to observe changes in muscle oxygenation in the vastus lateralis (VL) and latissimus dorsi (LD) of club level swimmers and triathletes. Ten subjects, five club level swimmers and five club level triathletes (three men and seven women), were assessed. Swim group: mean±SD=age 21.2±1.6 years; height 170.6±7.5 cm; weight 62.8±6.9 kg; vastus lateralis skin fold 13.8±5.6 mm; latissimus dorsi skin fold 12.6±3.7 mm. Triathlete group: mean±SD=age 44.0±10.5 years; height 171.6±7.0 cm; weight 68.6±12.7 kg; vastus lateralis skin fold 11.8±3.5 mm; latissimus dorsi skin fold 11.2±3.1 mm. All subjects completed a maximal 200 m freestyle swim, with the PortaMon, a portable NIR device, attached to the subject's dominant side musculature. ΔTSI% between the vastus lateralis and latissimus dorsi was analysed using either paired (2-tailed) t-tests or the Wilcoxon signed rank test. The level of significance for analysis was set at p<0.05. Club level swimmers swam significantly faster (p=0.04) than club level triathletes. Club level swimmers use both the upper and lower muscles to a similar extent during a maximal 200 m swim. Club level triathletes predominantly use the upper body for propulsion during the same exercise. The data produced by NIRS in this study are the first of their kind and provide insight into muscle oxygenation changes during swim exercise which can indicate the contribution of one muscle compared to another. This also enables a greater understanding of the differences in swimming techniques seen between different cohorts of swimmers and potentially within individual swimmers.
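    A minimal sketch of the statistical comparison described above (paired two-tailed t-test or Wilcoxon signed rank test on ΔTSI% values); the numbers below are illustrative stand-ins, not the measured data.

```python
import numpy as np
from scipy import stats

# Illustrative (not measured) delta-TSI% values for the two muscles in the same
# ten subjects: vastus lateralis (VL) vs latissimus dorsi (LD).
delta_tsi_vl = np.array([-12.1, -9.8, -15.3, -11.0, -8.7, -13.4, -10.2, -9.1, -14.0, -12.5])
delta_tsi_ld = np.array([-13.0, -10.5, -14.8, -12.2, -9.0, -12.9, -11.1, -8.8, -13.5, -12.0])

t_stat, p_paired = stats.ttest_rel(delta_tsi_vl, delta_tsi_ld)    # paired (2-tailed) t-test
w_stat, p_wilcoxon = stats.wilcoxon(delta_tsi_vl, delta_tsi_ld)   # non-parametric alternative
print(p_paired, p_wilcoxon)
```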

  5. Space Infrared Telescope Facility (SIRTF) science instruments

    International Nuclear Information System (INIS)

    Ramos, R.; Hing, S.M.; Leidich, C.A.; Fazio, G.; Houck, J.R.

    1989-01-01

    Concepts of scientific instruments designed to perform infrared astronomical tasks such as imaging, photometry, and spectroscopy are discussed as part of the Space Infrared Telescope Facility (SIRTF) project under definition study at NASA/Ames Research Center. The instruments are: the multiband imaging photometer, the infrared array camera, and the infrared spectrograph. SIRTF, a cryogenically cooled infrared telescope in the 1-meter class, operating at wavelengths as short as 2.5 microns and carrying multiple instruments with high sensitivity and low background performance, provides the capability to carry out basic astronomical investigations such as a deep search for very distant protogalaxies, quasi-stellar objects, and missing mass; infrared emission from galaxies; star formation and the interstellar medium; and the composition and structure of the atmospheres of the outer planets in the solar system. 8 refs

  6. Color reproduction software for a digital still camera

    Science.gov (United States)

    Lee, Bong S.; Park, Du-Sik; Nam, Byung D.

    1998-04-01

    We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and color matching between the two devices. The reproduction was performed at three levels: level processing, gamma correction, and color transformation. The image contrast was increased after the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for estimating the unknown monitor gamma was also proposed. As a result, the level-processed values were adjusted by a look-up table created from the camera and monitor gamma correction. For the color transformation matrix of the camera, a 3 by 3 or 3 by 4 matrix was used, which was calculated by regression between the gamma-corrected values and the measured tristimulus values of each test color sample. The various reproduced images were displayed in the dialogue box implemented in our software; they were generated according to four illuminations for the camera and three color temperatures for the monitor. A user can easily choose the best reproduced image by comparing them.
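    A minimal sketch of the colour pipeline described above, assuming a simple power-law camera gamma and a 3 by 4 transformation (RGB plus offset) fitted to measured tristimulus values by least squares; all numbers are made up for illustration and in practice many more test colour samples would be used.

```python
import numpy as np

# Level-processed camera RGB values are linearised with the camera gamma and
# mapped to measured XYZ tristimulus values with a 3 by 4 matrix (RGB + offset)
# fitted by least squares. All numbers are made up; real use needs many samples.
camera_gamma = 2.2
rgb_digital = np.array([[0.80, 0.41, 0.32],
                        [0.22, 0.55, 0.61],
                        [0.45, 0.47, 0.40],
                        [0.90, 0.88, 0.85]])
xyz_measured = np.array([[35.1, 26.0, 12.3],
                         [17.8, 22.5, 38.0],
                         [18.9, 20.1, 19.5],
                         [75.2, 79.0, 82.1]])

rgb_linear = rgb_digital ** camera_gamma
design = np.hstack([rgb_linear, np.ones((rgb_linear.shape[0], 1))])   # 3x4 model

matrix, *_ = np.linalg.lstsq(design, xyz_measured, rcond=None)        # shape (4, 3)
xyz_predicted = design @ matrix
print(np.abs(xyz_predicted - xyz_measured).max())
```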

  7. Phase camera experiment for Advanced Virgo

    Energy Technology Data Exchange (ETDEWEB)

    Agatsuma, Kazuhiro, E-mail: agatsuma@nikhef.nl [National Institute for Subatomic Physics, Amsterdam (Netherlands); Beuzekom, Martin van; Schaaf, Laura van der [National Institute for Subatomic Physics, Amsterdam (Netherlands); Brand, Jo van den [National Institute for Subatomic Physics, Amsterdam (Netherlands); VU University, Amsterdam (Netherlands)

    2016-07-11

    We report on a study of the phase camera, which is a frequency-selective wave-front sensor for a laser beam. This sensor is utilized for monitoring sidebands produced by phase modulations in a gravitational wave (GW) detector. Regarding the operation of GW detectors, the laser modulation/demodulation method is used to measure mirror displacements and is used for the position controls. This plays a significant role because the quality of the controls affects the noise level of the GW detector. The phase camera is able to monitor each sideband separately, which is of great benefit for the manipulation of the delicate controls. Also, overcoming mirror aberrations will be an essential part of Advanced Virgo (AdV), which is a GW detector close to Pisa. Especially low-frequency sidebands can be affected greatly by aberrations in one of the interferometer cavities. The phase cameras allow tracking such changes because the state of the sidebands gives information on mirror aberrations. A prototype of the phase camera has been developed and is currently being tested. The performance checks are almost completed and the installation of the optics at the AdV site has started. After the installation and commissioning, the phase camera will be combined with a thermal compensation system that consists of CO2 lasers and compensation plates. In this paper, we focus on the prototype and show some limitations arising from the scanner performance. - Highlights: • The phase camera is being developed for a gravitational wave detector. • The scanner performance limits the operation speed and layout design of the system. • An operation range was found by measuring the frequency response of the scanner.
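    The demodulation at a single sideband frequency that underlies the phase camera can be sketched as a software lock-in; the sample rate, sideband frequency and synthetic signal below are illustrative and this is not the AdV implementation.

```python
import numpy as np

# Software lock-in at one sideband frequency: recover amplitude and phase of a
# modulated photodiode signal, the basic per-pixel operation of a phase camera.
fs = 1.0e6                       # sample rate (Hz), illustrative
f_sb = 50.0e3                    # sideband frequency (Hz), illustrative
t = np.arange(0, 2e-3, 1.0 / fs)
signal = 0.3 * np.cos(2 * np.pi * f_sb * t + 0.7) + 0.01 * np.random.randn(t.size)

i_comp = 2 * np.mean(signal * np.cos(2 * np.pi * f_sb * t))   # in-phase
q_comp = 2 * np.mean(signal * np.sin(2 * np.pi * f_sb * t))   # quadrature
amplitude = np.hypot(i_comp, q_comp)                          # ~0.3
phase = np.arctan2(-q_comp, i_comp)                           # ~0.7 rad
print(amplitude, phase)
```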

  8. Phase camera experiment for Advanced Virgo

    International Nuclear Information System (INIS)

    Agatsuma, Kazuhiro; Beuzekom, Martin van; Schaaf, Laura van der; Brand, Jo van den

    2016-01-01

    We report on a study of the phase camera, which is a frequency-selective wave-front sensor for a laser beam. This sensor is utilized for monitoring sidebands produced by phase modulations in a gravitational wave (GW) detector. Regarding the operation of GW detectors, the laser modulation/demodulation method is used to measure mirror displacements and is used for the position controls. This plays a significant role because the quality of the controls affects the noise level of the GW detector. The phase camera is able to monitor each sideband separately, which is of great benefit for the manipulation of the delicate controls. Also, overcoming mirror aberrations will be an essential part of Advanced Virgo (AdV), which is a GW detector close to Pisa. Especially low-frequency sidebands can be affected greatly by aberrations in one of the interferometer cavities. The phase cameras allow tracking such changes because the state of the sidebands gives information on mirror aberrations. A prototype of the phase camera has been developed and is currently being tested. The performance checks are almost completed and the installation of the optics at the AdV site has started. After the installation and commissioning, the phase camera will be combined with a thermal compensation system that consists of CO2 lasers and compensation plates. In this paper, we focus on the prototype and show some limitations arising from the scanner performance. - Highlights: • The phase camera is being developed for a gravitational wave detector. • The scanner performance limits the operation speed and layout design of the system. • An operation range was found by measuring the frequency response of the scanner.

  9. Small Orbital Stereo Tracking Camera Technology Development

    Science.gov (United States)

    Gagliano, L.; Bryan, T.; MacLeod, T.

    On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness necessary to safeguard orbital assets and crew. This poses a major risk of MOD damage to ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and appropriate mission design restrictions. The debris flux and size population also need to be verified against ground RADAR tracking. Use of the ISS for In-Situ Orbital Debris Tracking development provides attitude, power, data and orbital access without a dedicated spacecraft or restricted operations on-board a host vehicle as a secondary payload. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, and could enhance safety on and around ISS. Some of the technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such emerging technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate Orbital Debris (OD) monitoring in proximity of the International Space Station. It will demonstrate on-orbit (in situ) optical tracking of various sized objects versus ground RADAR tracking and small OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras. By using twin cameras, stereo images can be provided for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  10. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates

    Science.gov (United States)

    Hobbs, Michael T.; Brehme, Cheryl S.

    2017-01-01

    Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.

  11. Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor

    Directory of Open Access Journals (Sweden)

    Dong Seop Kim

    2018-03-01

    Full Text Available Recent developments in intelligence surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and brightness of background cause detection to be a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.

  12. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates.

    Science.gov (United States)

    Hobbs, Michael T; Brehme, Cheryl S

    2017-01-01

    Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.

  13. Multi-Angle Snowflake Camera Value-Added Product

    Energy Technology Data Exchange (ETDEWEB)

    Shkurko, Konstantin [Univ. of Utah, Salt Lake City, UT (United States); Garrett, T. [Univ. of Utah, Salt Lake City, UT (United States); Gaustad, K [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-12-01

    The Multi-Angle Snowflake Camera (MASC) addresses a need for high-resolution multi-angle imaging of hydrometeors in freefall with simultaneous measurement of fallspeed. As illustrated in Figure 1, the MASC consists of three cameras, separated by 36°, each pointing at an identical focal point approximately 10 cm away. Located immediately above each camera, a light aims directly at the center of depth of field for its corresponding camera. The focal point at which the cameras are aimed lies within a ring through which hydrometeors fall. The ring houses a system of near-infrared emitter-detector pairs, arranged in two arrays separated vertically by 32 mm. When hydrometeors pass through the lower array, they simultaneously trigger all cameras and lights. Fallspeed is calculated from the time it takes to traverse the distance between the upper and lower triggering arrays. The trigger electronics filter out ambient light fluctuations associated with varying sunlight and shadows. The microprocessor onboard the MASC controls the camera system and communicates with the personal computer (PC). The image data is sent via FireWire 800 line, and fallspeed (and camera control) is sent via a Universal Serial Bus (USB) line that relies on RS232-over-USB serial conversion. See Table 1 for specific details on the MASC located at the Oliktok Point Mobile Facility on the North Slope of Alaska. The value-added product (VAP) detailed in this documentation analyzes the raw data (Section 2.0) using Python: images rely on OpenCV image processing library and derived aggregated statistics rely on some clever averaging. See Sections 4.1 and 4.2 for more details on what variables are computed.
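    As a sketch of the fallspeed calculation described above, the snippet below divides the 32 mm separation of the two triggering arrays by the time difference between their trigger timestamps; the function name and the timestamps are illustrative, not part of the MASC software or the VAP.

```python
# Fallspeed from the MASC trigger geometry: the two near-infrared trigger arrays
# are separated vertically by 32 mm, so speed = separation / time difference.
ARRAY_SEPARATION_M = 0.032

def fallspeed_m_per_s(t_upper_s, t_lower_s):
    dt = t_lower_s - t_upper_s           # hydrometeor crosses the upper array first
    if dt <= 0:
        raise ValueError("lower-array trigger must follow the upper-array trigger")
    return ARRAY_SEPARATION_M / dt

print(fallspeed_m_per_s(t_upper_s=0.000, t_lower_s=0.025))   # 1.28 m/s
```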

  14. Near Infrared Optical Visualization of Epidermal Growth Factor Receptors Levels in COLO205 Colorectal Cell Line, Orthotopic Tumor in Mice and Human Biopsies

    Directory of Open Access Journals (Sweden)

    Philip Lazarovici

    2013-07-01

    Full Text Available In this study, we present the applicability of imaging epidermal growth factor (EGF) receptor levels in preclinical models of COLO205 carcinoma cells in vitro, mice with orthotopic tumors and ex vivo colorectal tumor biopsies, using EGF labeled with IRDye800CW (EGF-NIR). The near infrared (NIR) bio-imaging of COLO205 cultures indicated specific and selective binding, reflecting EGF receptor levels. In vivo imaging of tumors in mice showed that the highest signal/background ratio between tumor and adjacent tissue was achieved 48 hours post-injection. Dissected colorectal cancer tissues from different patients demonstrated ex vivo specific imaging, using the NIR bio-imaging platform, of the heterogeneously distributed EGF receptors. Moreover, in the adjacent gastrointestinal tissue of the same patients, which by Western blotting was demonstrated as EGF receptor negative, no labeling with the EGF-NIR probe was detected. Present results support the concept of tumor imaging by measuring EGF receptor levels using the EGF-NIR probe. This platform is advantageous for EGF receptor bio-imaging of the NCI-60 recommended panel of tumor cell lines including 6–9 colorectal cell lines, since it avoids radioactive probes and is appropriate for use in the clinical setting using NIR technologies in a real-time manner.

  15. Geometric Calibration and Radiometric Correction of the Maia Multispectral Camera

    Science.gov (United States)

    Nocerino, E.; Dubbini, M.; Menna, F.; Remondino, F.; Gattelli, M.; Covi, D.

    2017-10-01

    Multispectral imaging is a widely used remote sensing technique, whose applications range from agriculture to environmental monitoring, from food quality checks to cultural heritage diagnostics. A variety of multispectral imaging sensors are available on the market, many of them designed to be mounted on different platforms, especially small drones. This work focuses on the geometric and radiometric characterization of a brand-new, lightweight, low-cost multispectral camera, called MAIA. The MAIA camera is equipped with nine sensors, allowing for the acquisition of images in the visible and near infrared parts of the electromagnetic spectrum. Two versions are available, characterised by different sets of band-pass filters, inspired by the sensors mounted on the WorldView-2 and Sentinel-2 satellites, respectively. The camera details and the developed procedures for the geometric calibration and radiometric correction are presented in the paper.
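    As a hedged illustration of a standard geometric calibration workflow (generic OpenCV chessboard calibration, not the specific MAIA calibration procedure described in the paper; board size and file names are assumptions), per-band intrinsics and distortion could be estimated as follows:

```python
import glob
import cv2
import numpy as np

# Generic chessboard calibration with OpenCV; file names are hypothetical.
pattern = (9, 6)                                   # inner corners of the board
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("band3_*.png"):              # hypothetical per-band images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points,
                                                 image_size, None, None)
print(rms, K)
```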

  16. The "All Sky Camera Network"

    Science.gov (United States)

    Caldwell, Andy

    2005-01-01

    In 2001, the "All Sky Camera Network" came to life as an outreach program to connect the Denver Museum of Nature and Science (DMNS) exhibit "Space Odyssey" with Colorado schools. The network is comprised of cameras placed strategically at schools throughout Colorado to capture fireballs--rare events that produce meteorites.…

  17. The Eye of the Camera

    NARCIS (Netherlands)

    van Rompay, Thomas Johannes Lucas; Vonk, Dorette J.; Fransen, M.L.

    2009-01-01

    This study addresses the effects of security cameras on prosocial behavior. Results from previous studies indicate that the presence of others can trigger helping behavior, arising from the need for approval of others. Extending these findings, the authors propose that security cameras can likewise

  18. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1982-01-01

    The invention provides a composite solid state detector for use in deriving a display, by spatial coordinate information, of the distribution of radiation emanating from a source within a region of interest, comprising several solid state detector components, each having a given surface arranged for exposure to impinging radiation and exhibiting discrete interactions therewith at given spatially definable locations. The surface of each component and the surface disposed opposite and substantially parallel thereto are associated with impedance means configured to provide for each opposed surface outputs for signals relating the given location of the interactions with one spatial coordinate parameter of one select directional sense. The detector components are arranged to provide groupings of adjacently disposed surfaces mutually linearly oriented to exhibit a common directional sense of the spatial coordinate parameter. Means interconnect at least two of the outputs associated with each of the surfaces within a given grouping for collecting the signals deriving therefrom. The invention also provides a camera system for imaging the distribution of a source of gamma radiation situated within a region of interest.

  19. Infrared Imaging for Inquiry-Based Learning

    Science.gov (United States)

    Xie, Charles; Hazzard, Edmund

    2011-01-01

    Based on detecting long-wavelength infrared (IR) radiation emitted by the subject, IR imaging shows temperature distribution instantaneously and heat flow dynamically. As a picture is worth a thousand words, an IR camera has great potential in teaching heat transfer, which is otherwise invisible. The idea of using IR imaging in teaching was first…

  20. A new high-speed IR camera system

    Science.gov (United States)

    Travis, Jeffrey W.; Shu, Peter K.; Jhabvala, Murzy D.; Kasten, Michael S.; Moseley, Samuel H.; Casey, Sean C.; Mcgovern, Lawrence K.; Luers, Philip J.; Dabney, Philip W.; Kaipa, Ravi C.

    1994-01-01

    A multi-organizational team at the Goddard Space Flight Center is developing a new far infrared (FIR) camera system which furthers the state of the art for this type of instrument by incorporating recent advances in several technological disciplines. All aspects of the camera system are optimized for operation at the high data rates required for astronomical observations in the far infrared. The instrument is built around a Blocked Impurity Band (BIB) detector array which exhibits responsivity over a broad wavelength band and which is capable of operating at 1000 frames/sec, and consists of a focal plane dewar, a compact camera head electronics package, and a Digital Signal Processor (DSP)-based data system residing in a standard 486 personal computer. In this paper we discuss the overall system architecture, the focal plane dewar, and advanced features and design considerations for the electronics. This system, or one derived from it, may prove useful for many commercial and/or industrial infrared imaging or spectroscopic applications, including thermal machine vision for robotic manufacturing, photographic observation of short-duration thermal events such as combustion or chemical reactions, and high-resolution surveillance imaging.

  1. Fourier transform infrared imaging microspectroscopy and tissue-level mechanical testing reveal intraspecies variation in mouse bone mineral and matrix composition.

    Science.gov (United States)

    Courtland, Hayden-William; Nasser, Philip; Goldstone, Andrew B; Spevak, Lyudmila; Boskey, Adele L; Jepsen, Karl J

    2008-11-01

    Fracture susceptibility is heritable and dependent upon bone morphology and quality. However, studies of bone quality are typically overshadowed by emphasis on bone geometry and bone mineral density. Given that differences in mineral and matrix composition exist in a variety of species, we hypothesized that genetic variation in bone quality and tissue-level mechanical properties would also exist within species. Sixteen-week-old female A/J, C57BL/6J (B6), and C3H/HeJ (C3H) inbred mouse femora were analyzed using Fourier transform infrared imaging and tissue-level mechanical testing for variation in mineral composition, mineral maturity, collagen cross-link ratio, and tissue-level mechanical properties. A/J femora had an increased mineral-to-matrix ratio compared to B6. The C3H mineral-to-matrix ratio was intermediate of A/J and B6. C3H femora had reduced acid phosphate and carbonate levels and an increased collagen cross-link ratio compared to A/J and B6. Modulus values paralleled mineral-to-matrix values, with A/J femora being the most stiff, B6 being the least stiff, and C3H having intermediate stiffness. In addition, work-to-failure varied among the strains, with the highly mineralized and brittle A/J femora performing the least amount of work-to-failure. Inbred mice are therefore able to differentially modulate the composition of their bone mineral and the maturity of their bone matrix in conjunction with tissue-level mechanical properties. These results suggest that specific combinations of bone quality and morphological traits are genetically regulated such that mechanically functional bones can be constructed in different ways.

  2. Microstructural properties of high level waste concentrates and gels with raman and infrared spectroscopies. 1997 annual progress report

    International Nuclear Information System (INIS)

    Agnew, S.F.; Coarbin, R.A.; Johnston, C.T.

    1997-01-01

    'Monosodium aluminate, the phase of aluminate found in waste tanks, is only stable over a fairly narrow range of water vapor pressure (22% relative humidity at 22 C). As a result, aluminate solids are stable at Hanford (seasonal average RH ∼20%) but are not stable at Savannah River (seasonal average RH ∼40%). Monosodium aluminate (MSA) releases water upon precipitation from solution. In contrast, trisodium aluminate (TSA) consumes water upon precipitation. As a result, MSA precipitates gradually over time while TSA undergoes rapid accelerated precipitation, often gelling its solution. Raman spectra are reported for the first time for monosodium and trisodium aluminate solids. Ternary phase diagrams can be useful for showing the effects of water removal, even with concentrated waste. Kinetics of monosodium aluminate precipitation are extremely slow (several months) at room temperature but quite fast (several hours) at 60 C. As a result, all waste simulants that contain aluminate need several days of cooking at 60 C in order to truly represent the equilibrium state of aluminate. The high level waste (HLW) slurries that have been created at the Hanford and Savannah River Sites over the last fifty years constitute a large fraction of the remaining HLW volumes at both sites. In spite of the preponderance of these wastes, very little quantitative information is available about their physical and chemical properties other than elemental analyses.'

  3. Infrared thermography on TFR 600 Tokamak

    International Nuclear Information System (INIS)

    Romain, Roland.

    1980-06-01

    Infrared thermography with a single InSb detector and with a scanning camera has been performed on the TFR fusion device. High power neutral beam injection diagnostic by means of an infrared periscope is shown to be possible. Surface temperature measurements on the limiter during the discharge have been made in order to evaluate the power deposited by the plasma on this part of the inner wall. Various attempts at infrared detection on the high power neutral injector prototype vessel are described, particularly the measurement of the power deposited on one of the extraction grids of the ion source. [fr]

  4. A novel super-resolution camera model

    Science.gov (United States)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

    Aiming to realize super-resolution (SR) reconstruction of single images and video, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this function we put a driving device such as piezoelectric ceramics in the camera. By controlling the driving device, a set of continuous low-resolution (LR) images can be obtained and stored in real time, reflecting the randomness of the displacements and the real-time performance of the storage. The low-resolution image sequences have different redundant information and some particular prior information, so it is possible to restore a super-resolution image factually and effectively. The sampling method is used to derive the reconstruction principle of super resolution, which analyzes in theory the possible degree of resolution improvement. The super-resolution algorithm based on learning is used to reconstruct single images, and the variational Bayesian algorithm is simulated to reconstruct the low-resolution images with random displacements, which models the unknown high-resolution image, motion parameters and unknown model parameters in one hierarchical Bayesian framework. Utilizing a sub-pixel registration method, a super-resolution image of the scene can be reconstructed. The results of 16-image reconstruction show that this camera model can increase the image resolution by a factor of 2, obtaining images with higher resolution at currently available hardware levels.
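    A simplified shift-and-add reconstruction conveys the basic idea of combining randomly displaced low-resolution frames on a finer grid; it is a sketch only and not the learning-based or variational Bayesian methods used in the paper.

```python
import numpy as np

# Simplified shift-and-add: low-resolution frames with known sub-pixel shifts
# are placed on a finer grid and averaged.
def shift_and_add(lr_frames, shifts, scale):
    """lr_frames: list of (h, w) arrays; shifts: list of (dy, dx) in LR pixels."""
    h, w = lr_frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        # place each LR sample at its (rounded) shifted position on the HR grid
        ys = (np.arange(h)[:, None] * scale + int(round(dy * scale))) % (h * scale)
        xs = (np.arange(w)[None, :] * scale + int(round(dx * scale))) % (w * scale)
        acc[ys, xs] += frame
        weight[ys, xs] += 1.0
    weight[weight == 0] = 1.0            # avoid division by zero on unfilled pixels
    return acc / weight

rng = np.random.default_rng(0)
frames = [rng.random((32, 32)) for _ in range(16)]
shifts = [(rng.random(), rng.random()) for _ in range(16)]
print(shift_and_add(frames, shifts, scale=2).shape)   # (64, 64)
```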

  5. Fog camera to visualize ionizing charged particles

    International Nuclear Information System (INIS)

    Trujillo A, L.; Rodriguez R, N. I.; Vega C, H. R.

    2014-10-01

    The human being cannot perceive the different types of ionizing radiation, natural or artificial, present in nature, so appropriate detection systems have been developed according to their sensitivity to certain radiation types and energies. The objective of this work was to build a fog camera to visualize the traces, and to identify the trajectories, produced by charged particles with high energy, coming mainly from cosmic rays. The cosmic rays originate partly from the solar radiation generated by solar eruptions, in which protons compose most of the radiation. They also come from galactic radiation, which is composed mainly of charged particles and gamma rays arriving from outside the solar system. These radiation types have energies millions of times higher than those detected at the earth's surface, becoming more important as the height above sea level increases. In their interactions these particles produce secondary particles that are detectable by means of this type of camera. The camera operates by means of an atmosphere saturated with alcohol vapor. At the moment a charged particle crosses the cold region of the atmosphere, the medium is ionized and the ions act as condensation nuclei for the alcohol vapor, leaving a visible trace of the particle's trajectory. The camera built was very stable, allowing detection in continuous form and the observation of diverse events. (Author)

  6. WHAT IS CONTROLLING THE FRAGMENTATION IN THE INFRARED DARK CLOUD G14.225–0.506?: DIFFERENT LEVELS OF FRAGMENTATION IN TWIN HUBS

    Energy Technology Data Exchange (ETDEWEB)

    Busquet, Gemma; Girart, Josep Miquel [Institut de Ciències de l’Espai (CSIC-IEEC), Campus UAB, Carrer de Can Magrans, S/N, E-08193, Cerdanyola del Vallès, Catalunya (Spain); Estalella, Robert [Departament d’Astronomia i Meteorologia, Institut de Ciències del Cosmos (ICC), Universitat de Barcelona (IEEC-UB), Martí i Franquès, 1, E-08028 Barcelona, Catalunya (Spain); Palau, Aina [Instituto de Radioastronomía y Astrofísica, Universidad Nacional Autónoma de México, P.O. Box 3-72, 58090 Morelia, Michoacán, México (Mexico); Liu, Hauyu Baobab; Ho, Paul T. P. [Academia Sinica Institute of Astronomy and Astrophysics, Taipei, Taiwan (China); Zhang, Qizhou [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); De Gregorio-Monsalvo, Itziar [European Southern Observatory (ESO), Karl-Schwarzschild-Str. 2, D-85748 Garching (Germany); Pillai, Thushara [Max Planck Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn (Germany); Anglada, Guillem, E-mail: busquet@ice.cat [Instituto de Astrofísica de Andalucía, CSIC, Glorieta de la Astronomía, s/n, E-18008, Granada (Spain)

    2016-03-20

    We present observations of the 1.3 mm continuum emission toward hub-N and hub-S of the infrared dark cloud G14.225–0.506 carried out with the Submillimeter Array, together with observations of the dust emission at 870 and 350 μm obtained with APEX and CSO telescopes. The large-scale dust emission of both hubs consists of a single peaked clump elongated in the direction of the associated filament. At small scales, the SMA images reveal that both hubs fragment into several dust condensations. The fragmentation level was assessed under the same conditions and we found that hub-N presents 4 fragments while hub-S is more fragmented, with 13 fragments identified. We studied the density structure by means of a simultaneous fit of the radial intensity profile at 870 and 350 μm and the spectral energy distribution adopting a Plummer-like function to describe the density structure. The parameters inferred from the model are remarkably similar in both hubs, suggesting that density structure could not be responsible for determining the fragmentation level. We estimated several physical parameters, such as the level of turbulence and the magnetic field strength, and we found no significant differences between these hubs. The Jeans analysis indicates that the observed fragmentation is more consistent with thermal Jeans fragmentation compared with a scenario in which turbulent support is included. The lower fragmentation level observed in hub-N could be explained in terms of stronger UV radiation effects from a nearby H ii region, evolutionary effects, and/or stronger magnetic fields at small scales, a scenario that should be further investigated.
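    As a rough illustration of the thermal Jeans analysis mentioned above, the snippet below evaluates the standard Jeans length and Jeans mass for illustrative hub-like conditions; the temperature, density and mean molecular weight are placeholders, not the values derived in the paper.

```python
import numpy as np

# Constants in CGS units.
G = 6.674e-8        # cm^3 g^-1 s^-2
k_B = 1.381e-16     # erg K^-1
m_H = 1.673e-24     # g
mu = 2.33           # mean molecular weight per free particle (assumed)

def jeans_length_cm(T_K, n_H2_cm3):
    rho = mu * m_H * n_H2_cm3
    c_s = np.sqrt(k_B * T_K / (mu * m_H))          # isothermal sound speed
    return c_s * np.sqrt(np.pi / (G * rho))

def jeans_mass_g(T_K, n_H2_cm3):
    rho = mu * m_H * n_H2_cm3
    return (4.0 / 3.0) * np.pi * rho * (jeans_length_cm(T_K, n_H2_cm3) / 2.0) ** 3

T, n = 15.0, 1.0e5                                  # K and cm^-3, placeholders
pc, M_sun = 3.086e18, 1.989e33
print(jeans_length_cm(T, n) / pc, "pc,", jeans_mass_g(T, n) / M_sun, "Msun")
```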

  7. Assessing the Driver's Current Level of Working Memory Load with High Density Functional Near-infrared Spectroscopy: A Realistic Driving Simulator Study.

    Science.gov (United States)

    Unni, Anirudh; Ihme, Klas; Jipp, Meike; Rieger, Jochem W

    2017-01-01

    Cognitive overload or underload results in a decrease in human performance which may result in fatal incidents while driving. We envision that driver assistive systems which adapt their functionality to the driver's cognitive state could be a promising approach to reduce road accidents due to human errors. This research attempts to predict variations of cognitive working memory load levels in a natural driving scenario with multiple parallel tasks and to reveal predictive brain areas. We used a modified version of the n-back task to induce five different working memory load levels (from 0-back up to 4-back) forcing the participants to continuously update, memorize, and recall the previous 'n' speed sequences and adjust their speed accordingly while they drove for approximately 60 min on a highway with concurrent traffic in a virtual reality driving simulator. We measured brain activation using multichannel whole head, high density functional near-infrared spectroscopy (fNIRS) and predicted working memory load level from the fNIRS data by combining multivariate lasso regression and cross-validation. This allowed us to predict variations in working memory load in a continuous time-resolved manner with mean Pearson correlations between induced and predicted working memory load over 15 participants of 0.61 [standard error (SE) 0.04] and a maximum of 0.8. Restricting the analysis to prefrontal sensors placed over the forehead reduced the mean correlation to 0.38 (SE 0.04), indicating additional information gained through whole head coverage. Moreover, working memory load predictions derived from peripheral heart rate parameters achieved much lower correlations (mean 0.21, SE 0.1). Importantly, whole head fNIRS sampling revealed increasing brain activation in bilateral inferior frontal and bilateral temporo-occipital brain areas with increasing working memory load levels suggesting that these areas are specifically involved in workload-related processing.
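    As a sketch of the prediction scheme described above (multivariate lasso regression combined with cross-validation and evaluated with Pearson correlation), the snippet below uses synthetic data; the shapes, regularisation strength and feature construction are assumptions, not the authors' settings.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold, cross_val_predict

# Synthetic stand-in: X holds per-channel fNIRS features over time, y the
# induced n-back working-memory load level.
rng = np.random.default_rng(0)
n_samples, n_channels = 600, 100
X = rng.standard_normal((n_samples, n_channels))
true_w = np.zeros(n_channels)
true_w[:8] = 1.0                                   # a few informative channels
y = X @ true_w + 0.5 * rng.standard_normal(n_samples)

model = Lasso(alpha=0.05)                          # multivariate lasso regression
y_pred = cross_val_predict(model, X, y, cv=KFold(n_splits=5))
r, _ = pearsonr(y, y_pred)
print(f"cross-validated Pearson r = {r:.2f}")
```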

  8. Assessing the Driver’s Current Level of Working Memory Load with High Density Functional Near-infrared Spectroscopy: A Realistic Driving Simulator Study

    Science.gov (United States)

    Unni, Anirudh; Ihme, Klas; Jipp, Meike; Rieger, Jochem W.

    2017-01-01

    Cognitive overload or underload results in a decrease in human performance which may result in fatal incidents while driving. We envision that driver assistive systems which adapt their functionality to the driver’s cognitive state could be a promising approach to reduce road accidents due to human errors. This research attempts to predict variations of cognitive working memory load levels in a natural driving scenario with multiple parallel tasks and to reveal predictive brain areas. We used a modified version of the n-back task to induce five different working memory load levels (from 0-back up to 4-back) forcing the participants to continuously update, memorize, and recall the previous ‘n’ speed sequences and adjust their speed accordingly while they drove for approximately 60 min on a highway with concurrent traffic in a virtual reality driving simulator. We measured brain activation using multichannel whole head, high density functional near-infrared spectroscopy (fNIRS) and predicted working memory load level from the fNIRS data by combining multivariate lasso regression and cross-validation. This allowed us to predict variations in working memory load in a continuous time-resolved manner with mean Pearson correlations between induced and predicted working memory load over 15 participants of 0.61 [standard error (SE) 0.04] and a maximum of 0.8. Restricting the analysis to prefrontal sensors placed over the forehead reduced the mean correlation to 0.38 (SE 0.04), indicating additional information gained through whole head coverage. Moreover, working memory load predictions derived from peripheral heart rate parameters achieved much lower correlations (mean 0.21, SE 0.1). Importantly, whole head fNIRS sampling revealed increasing brain activation in bilateral inferior frontal and bilateral temporo-occipital brain areas with increasing working memory load levels suggesting that these areas are specifically involved in workload-related processing. PMID

  9. Assessing the Driver’s Current Level of Working Memory Load with High Density Functional Near-infrared Spectroscopy: A Realistic Driving Simulator Study

    Directory of Open Access Journals (Sweden)

    Anirudh Unni

    2017-04-01

    Full Text Available Cognitive overload or underload results in a decrease in human performance which may result in fatal incidents while driving. We envision that driver assistive systems which adapt their functionality to the driver’s cognitive state could be a promising approach to reduce road accidents due to human errors. This research attempts to predict variations of cognitive working memory load levels in a natural driving scenario with multiple parallel tasks and to reveal predictive brain areas. We used a modified version of the n-back task to induce five different working memory load levels (from 0-back up to 4-back) forcing the participants to continuously update, memorize, and recall the previous ‘n’ speed sequences and adjust their speed accordingly while they drove for approximately 60 min on a highway with concurrent traffic in a virtual reality driving simulator. We measured brain activation using multichannel whole head, high density functional near-infrared spectroscopy (fNIRS) and predicted working memory load level from the fNIRS data by combining multivariate lasso regression and cross-validation. This allowed us to predict variations in working memory load in a continuous time-resolved manner with mean Pearson correlations between induced and predicted working memory load over 15 participants of 0.61 [standard error (SE) 0.04] and a maximum of 0.8. Restricting the analysis to prefrontal sensors placed over the forehead reduced the mean correlation to 0.38 (SE 0.04), indicating additional information gained through whole head coverage. Moreover, working memory load predictions derived from peripheral heart rate parameters achieved much lower correlations (mean 0.21, SE 0.1). Importantly, whole head fNIRS sampling revealed increasing brain activation in bilateral inferior frontal and bilateral temporo-occipital brain areas with increasing working memory load levels suggesting that these areas are specifically involved in workload

  10. WHAT IS CONTROLLING THE FRAGMENTATION IN THE INFRARED DARK CLOUD G14.225–0.506?: DIFFERENT LEVELS OF FRAGMENTATION IN TWIN HUBS

    International Nuclear Information System (INIS)

    Busquet, Gemma; Girart, Josep Miquel; Estalella, Robert; Palau, Aina; Liu, Hauyu Baobab; Ho, Paul T. P.; Zhang, Qizhou; De Gregorio-Monsalvo, Itziar; Pillai, Thushara; Anglada, Guillem

    2016-01-01

    We present observations of the 1.3 mm continuum emission toward hub-N and hub-S of the infrared dark cloud G14.225–0.506 carried out with the Submillimeter Array, together with observations of the dust emission at 870 and 350 μm obtained with APEX and CSO telescopes. The large-scale dust emission of both hubs consists of a single peaked clump elongated in the direction of the associated filament. At small scales, the SMA images reveal that both hubs fragment into several dust condensations. The fragmentation level was assessed under the same conditions and we found that hub-N presents 4 fragments while hub-S is more fragmented, with 13 fragments identified. We studied the density structure by means of a simultaneous fit of the radial intensity profile at 870 and 350 μm and the spectral energy distribution adopting a Plummer-like function to describe the density structure. The parameters inferred from the model are remarkably similar in both hubs, suggesting that density structure could not be responsible for determining the fragmentation level. We estimated several physical parameters, such as the level of turbulence and the magnetic field strength, and we found no significant differences between these hubs. The Jeans analysis indicates that the observed fragmentation is more consistent with thermal Jeans fragmentation compared with a scenario in which turbulent support is included. The lower fragmentation level observed in hub-N could be explained in terms of stronger UV radiation effects from a nearby H ii region, evolutionary effects, and/or stronger magnetic fields at small scales, a scenario that should be further investigated

  11. PERFORMANCE EVALUATION OF THERMOGRAPHIC CAMERAS FOR PHOTOGRAMMETRIC MEASUREMENTS

    Directory of Open Access Journals (Sweden)

    N. Yastikli

    2013-05-01

    Full Text Available The aim of this research is the performance evaluation of thermographic cameras for possible use in photogrammetric documentation and in deformation analyses of historical and cultural heritage affected by moisture and insulation problems. To perform geometric calibration of the thermographic camera, a 3D test object was designed with 77 control points distributed at different depths. For the performance evaluation, a Flir A320 thermographic camera with 320 × 240 pixels and an 18 mm lens was used. A Nikon D3X SLR digital camera with 6048 × 4032 pixels and a 20 mm lens was used as the reference for comparison. The pixel size was 25 μm for the Flir A320 thermographic camera and 6 μm for the Nikon D3X SLR digital camera. The digital images of the 3D test object were recorded with the Flir A320 thermographic camera and the Nikon D3X SLR digital camera, and the image coordinates of the control points in the images were measured. The geometric calibration parameters, including the focal length, position of the principal point, and radial and tangential distortions, were determined with introduced additional parameters in bundle block adjustments. The measurement of image coordinates and the bundle block adjustments with additional parameters were performed using the PHIDIAS digital photogrammetric system. The bundle block adjustment was repeated with the determined calibration parameters for both the Flir A320 thermographic camera and the Nikon D3X SLR digital camera. The obtained standard deviation of the measured image coordinates was 9.6 μm and 10.5 μm for the Flir A320 thermographic camera and 8.3 μm and 7.7 μm for the Nikon D3X SLR digital camera. The standard deviation of the measured image points in the Flir A320 thermographic camera images was thus at almost the same accuracy level as the digital camera, despite a pixel size four times larger. Based on the results obtained from this research, the interior geometry of the thermographic cameras and the lens distortion was

  12. Performance Evaluation of Thermographic Cameras for Photogrammetric Measurements

    Science.gov (United States)

    Yastikli, N.; Guler, E.

    2013-05-01

    The aim of this research is the performance evaluation of thermographic cameras for possible use in photogrammetric documentation and in deformation analyses of historical and cultural heritage affected by moisture and insulation problems. To perform geometric calibration of the thermographic camera, a 3D test object was designed with 77 control points distributed at different depths. For the performance evaluation, a Flir A320 thermographic camera with 320 × 240 pixels and an 18 mm lens was used. A Nikon D3X SLR digital camera with 6048 × 4032 pixels and a 20 mm lens was used as the reference for comparison. The pixel size was 25 μm for the Flir A320 thermographic camera and 6 μm for the Nikon D3X SLR digital camera. The digital images of the 3D test object were recorded with the Flir A320 thermographic camera and the Nikon D3X SLR digital camera, and the image coordinates of the control points in the images were measured. The geometric calibration parameters, including the focal length, position of the principal point, and radial and tangential distortions, were determined with introduced additional parameters in bundle block adjustments. The measurement of image coordinates and the bundle block adjustments with additional parameters were performed using the PHIDIAS digital photogrammetric system. The bundle block adjustment was repeated with the determined calibration parameters for both the Flir A320 thermographic camera and the Nikon D3X SLR digital camera. The obtained standard deviation of the measured image coordinates was 9.6 μm and 10.5 μm for the Flir A320 thermographic camera and 8.3 μm and 7.7 μm for the Nikon D3X SLR digital camera. The standard deviation of the measured image points in the Flir A320 thermographic camera images was thus at almost the same accuracy level as the digital camera, despite a pixel size four times larger. Based on the results obtained from this research, the interior geometry of the thermographic cameras and the lens distortion were modelled efficiently.

  13. Terahertz and Mid Infrared

    CERN Document Server

    Shulika, Oleksiy; Detection of Explosives and CBRN (Using Terahertz)

    2014-01-01

    The reader will find here a timely update on new THz sources and detection schemes as well as concrete applications to the detection of explosives and CBRN. Included is a method to identify hidden RDX-based explosives (pure and plastic ones) in the frequency domain by Fourier transformation, complemented by a demonstration of improved quality of the images captured by commercially available passive THz cameras. The presented examples show large potential for the detection of small hidden objects at long distances (6-10 m).  Complementing the results in the short-wavelength range, laser spectroscopy with a mid-infrared, room-temperature, continuous-wave DFB laser diode and a high-performance DFB QCL has been demonstrated to offer excellent enabling sensor technologies for environmental monitoring, medical diagnostics, and industrial and security applications.  From the new-source point of view a number of systems have been presented - from superconductors to semiconductors, e.g. Det...

  14. Development of underwater camera using high-definition camera

    International Nuclear Information System (INIS)

    Tsuji, Kenji; Watanabe, Masato; Takashima, Masanobu; Kawamura, Shingo; Tanaka, Hiroyuki

    2012-01-01

    In order to reduce the time for core verification or visual inspection of BWR fuels, an underwater camera using a high-definition camera has been developed. As a result of this development, the underwater camera has two lights, dimensions of 370 × 400 × 328 mm, and a weight of 20.5 kg. Using the camera, about six spent-fuel IDs can be identified at a time from a distance of 1 to 1.5 m, and a 0.3 mm diameter pin-hole can be recognized at a distance of 1.5 m with 20× zoom. Noise caused by radiation below 15 Gy/h does not affect the images. (author)

  15. Microprocessor-controlled wide-range streak camera

    Science.gov (United States)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  16. Microprocessor-controlled, wide-range streak camera

    International Nuclear Information System (INIS)

    Amy E. Lewis; Craig Hollabaugh

    2006-01-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  17. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
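    The zero-point calibration described above amounts to fitting the offset between instrumental magnitudes (derived from measured fluxes) and the synthetic reference-star magnitudes. The sketch below shows one common, robust way to do this; the reference fluxes and magnitudes are made up for illustration and are not data from the MEO tests.

    ```python
    import numpy as np

    def zero_point(instr_flux, ref_mag):
        """Estimate a photometric zero-point ZP such that
        calibrated_mag = -2.5*log10(flux) + ZP matches the reference
        magnitudes, using a median estimate and a robust scatter."""
        instr_mag = -2.5 * np.log10(np.asarray(instr_flux, dtype=float))
        offsets = np.asarray(ref_mag, dtype=float) - instr_mag
        zp = np.median(offsets)
        scatter = 1.4826 * np.median(np.abs(offsets - zp))  # robust std estimate
        return zp, scatter

    # Hypothetical reference stars: background-subtracted counts and synthetic magnitudes.
    flux = [152000.0, 84100.0, 40300.0, 19800.0]
    mags = [7.95, 8.60, 9.40, 10.17]
    zp, sig = zero_point(flux, mags)
    print(f"ZP = {zp:.3f} mag, scatter = {sig:.3f} mag")
    ```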

  18. Polarizing aperture stereoscopic cinema camera

    Science.gov (United States)

    Lipton, Lenny

    2012-07-01

    The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor, the size of the standard 35 mm frame, with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies will be unaltered.

  19. Teaching physics and understanding infrared thermal imaging

    Science.gov (United States)

    Vollmer, Michael; Möllmann, Klaus-Peter

    2017-08-01

    Infrared thermal imaging is a very rapidly evolving field. The latest trends are small smartphone IR camera accessories, making infrared imaging a widespread and well-known consumer product. Applications range from medical diagnosis and building inspections to industrial predictive maintenance and visualization in the natural sciences. Infrared cameras allow not only qualitative imaging and visualization but also quantitative measurements of the surface temperatures of objects. On the one hand, they are a particularly suitable tool to teach optics, radiation physics and many selected topics in different fields of physics; on the other hand, there is an increasing need for engineers and physicists who understand these complex state-of-the-art photonics systems. Therefore students must also learn and understand the physics underlying these systems.

  20. World's fastest and most sensitive astronomical camera

    Science.gov (United States)

    2009-06-01

    The next generation of instruments for ground-based telescopes took a leap forward with the development of a new ultra-fast camera that can take 1500 finely exposed images per second even when observing extremely faint objects. The first 240x240 pixel images with the world's fastest high precision faint light camera were obtained through a collaborative effort between ESO and three French laboratories from the French Centre National de la Recherche Scientifique/Institut National des Sciences de l'Univers (CNRS/INSU). Cameras such as this are key components of the next generation of adaptive optics instruments of Europe's ground-based astronomy flagship facility, the ESO Very Large Telescope (VLT). [ESO PR Photo 22a/09: the CCD220 detector. ESO PR Photo 22b/09: the OCam camera. ESO PR Video 22a/09: OCam images.] "The performance of this breakthrough camera is without an equivalent anywhere in the world. The camera will enable great leaps forward in many areas of the study of the Universe," says Norbert Hubin, head of the Adaptive Optics department at ESO. OCam will be part of the second-generation VLT instrument SPHERE. To be installed in 2011, SPHERE will take images of giant exoplanets orbiting nearby stars. A fast camera such as this is needed as an essential component for the modern adaptive optics instruments used on the largest ground-based telescopes. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights poets, but frustrates astronomers, since it blurs the finest details of the images. Adaptive optics techniques overcome this major drawback, so that ground-based telescopes can produce images that are as sharp as if taken from space. Adaptive optics is based on real-time corrections computed from images obtained by a special camera working at very high speeds. Nowadays, this means many hundreds of times each second. The new generation instruments require these

  1. Application of infrared to biomedical sciences

    CERN Document Server

    Etehadtavakol, Mahnaz

    2017-01-01

    The book covers the latest updates in the application of infrared to the biomedical sciences, a non-invasive, contactless, safe and easy approach to imaging skin and tissue temperatures. Its diagnostic procedure allows practitioners to identify the locations of abnormal chemical and blood vessel activity, such as angiogenesis, in body tissue. Its non-invasive approach works by applying infrared camera technology and state-of-the-art software, where high-resolution digital infrared imaging benefits greatly from enhanced image production, standardized image interpretation protocols, computerized comparison and storage, and sophisticated image enhancement and analysis. The book contains contributions from globally prominent scientists in the area of infrared applications in biomedical studies. The target audience includes academics, practitioners, clinicians and students working in the area of infrared imaging in biomedicine.

  2. Linearity correction device for a scintillation camera

    Energy Technology Data Exchange (ETDEWEB)

    Lange, Kai

    1978-06-16

    This invention concerns scintillation cameras, also called gamma cameras. The invention particularly covers the improvement of the resolution and the uniformity of these cameras. Briefly, in the linearity correction device of the invention, the voltage signals of different amplitudes produced by the preamplifiers of all the photomultiplier tubes are summed, and the signal obtained is employed to generate bias voltages which represent predetermined percentages of the sum signal. In one design mode, pairs of transistors are blocked when the output signal of the corresponding preamplifier is under a certain point on its gain curve. When the summation of the energies of a given scintillation exceeds the level corresponding to a first percentage of the total signal, the first transistor of each pair of each line is unblocked, thereby modifying the gain and the slope of the curve. When the total energy of an event exceeds the next preset level, the second transistor is unblocked to alter the shape again, so that the curve shows two break points. If need be, the device can be designed so as to obtain more break points for increasingly higher energy levels. Once the signals have been processed as described above, they may be used for calculating the coordinates of the scintillation by one of the conventional methods.

  3. Can camera traps monitor Komodo dragons a large ectothermic predator?

    Directory of Open Access Journals (Sweden)

    Achmad Ariefiandy

    Full Text Available Camera trapping has greatly enhanced population monitoring of often cryptic and low abundance apex carnivores. Effectiveness of passive infrared camera trapping, and ultimately population monitoring, relies on temperature mediated differences between the animal and its ambient environment to ensure good camera detection. In ectothermic predators such as large varanid lizards, this criterion is presumed less certain. Here we evaluated the effectiveness of camera trapping to potentially monitor the population status of the Komodo dragon (Varanus komodoensis), an apex predator, using site occupancy approaches. We compared site-specific estimates of site occupancy and detection derived using camera traps and cage traps at 181 trapping locations established across six sites on four islands within Komodo National Park, Eastern Indonesia. Detection and site occupancy at each site were estimated using eight competing models that considered site-specific variation in occupancy (ψ) and varied detection probabilities (p) according to detection method, site and survey number using a single season site occupancy modelling approach. The most parsimonious model [ψ (site), p (site*survey); ω = 0.74] suggested that site occupancy estimates differed among sites. Detection probability varied as an interaction between site and survey number. Our results indicate that overall camera traps produced similar estimates of detection and site occupancy to cage traps, irrespective of being paired, or unpaired, with cage traps. Whilst one site showed some evidence that detection was affected by trapping method, detection was too low to produce an accurate occupancy estimate. Overall, as camera trapping is logistically more feasible it may provide, with further validation, an alternative method for evaluating long-term site occupancy patterns in Komodo dragons, and potentially other large reptiles, aiding conservation of this species.

  4. Can camera traps monitor Komodo dragons a large ectothermic predator?

    Science.gov (United States)

    Ariefiandy, Achmad; Purwandana, Deni; Seno, Aganto; Ciofi, Claudio; Jessop, Tim S

    2013-01-01

    Camera trapping has greatly enhanced population monitoring of often cryptic and low abundance apex carnivores. Effectiveness of passive infrared camera trapping, and ultimately population monitoring, relies on temperature mediated differences between the animal and its ambient environment to ensure good camera detection. In ectothermic predators such as large varanid lizards, this criterion is presumed less certain. Here we evaluated the effectiveness of camera trapping to potentially monitor the population status of the Komodo dragon (Varanus komodoensis), an apex predator, using site occupancy approaches. We compared site-specific estimates of site occupancy and detection derived using camera traps and cage traps at 181 trapping locations established across six sites on four islands within Komodo National Park, Eastern Indonesia. Detection and site occupancy at each site were estimated using eight competing models that considered site-specific variation in occupancy (ψ) and varied detection probabilities (p) according to detection method, site and survey number using a single season site occupancy modelling approach. The most parsimonious model [ψ (site), p (site*survey); ω = 0.74] suggested that site occupancy estimates differed among sites. Detection probability varied as an interaction between site and survey number. Our results indicate that overall camera traps produced similar estimates of detection and site occupancy to cage traps, irrespective of being paired, or unpaired, with cage traps. Whilst one site showed some evidence that detection was affected by trapping method, detection was too low to produce an accurate occupancy estimate. Overall, as camera trapping is logistically more feasible it may provide, with further validation, an alternative method for evaluating long-term site occupancy patterns in Komodo dragons, and potentially other large reptiles, aiding conservation of this species.

  5. Can Camera Traps Monitor Komodo Dragons a Large Ectothermic Predator?

    Science.gov (United States)

    Ariefiandy, Achmad; Purwandana, Deni; Seno, Aganto; Ciofi, Claudio; Jessop, Tim S.

    2013-01-01

    Camera trapping has greatly enhanced population monitoring of often cryptic and low abundance apex carnivores. Effectiveness of passive infrared camera trapping, and ultimately population monitoring, relies on temperature mediated differences between the animal and its ambient environment to ensure good camera detection. In ectothermic predators such as large varanid lizards, this criterion is presumed less certain. Here we evaluated the effectiveness of camera trapping to potentially monitor the population status of the Komodo dragon (Varanus komodoensis), an apex predator, using site occupancy approaches. We compared site-specific estimates of site occupancy and detection derived using camera traps and cage traps at 181 trapping locations established across six sites on four islands within Komodo National Park, Eastern Indonesia. Detection and site occupancy at each site were estimated using eight competing models that considered site-specific variation in occupancy (ψ) and varied detection probabilities (p) according to detection method, site and survey number using a single season site occupancy modelling approach. The most parsimonious model [ψ (site), p (site*survey); ω = 0.74] suggested that site occupancy estimates differed among sites. Detection probability varied as an interaction between site and survey number. Our results indicate that overall camera traps produced similar estimates of detection and site occupancy to cage traps, irrespective of being paired, or unpaired, with cage traps. Whilst one site showed some evidence that detection was affected by trapping method, detection was too low to produce an accurate occupancy estimate. Overall, as camera trapping is logistically more feasible it may provide, with further validation, an alternative method for evaluating long-term site occupancy patterns in Komodo dragons, and potentially other large reptiles, aiding conservation of this species. PMID:23527027
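    The single-season site-occupancy framework used in these three records has a simple likelihood: a site is occupied with probability ψ, and an occupied site yields a detection on each survey with probability p. The sketch below evaluates and maximizes that likelihood for the simplest constant-ψ, constant-p case; the detection histories are invented for illustration and the model omits the site and survey covariates used in the papers.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_lik(params, histories):
        """Negative log-likelihood of a constant-psi, constant-p single-season
        occupancy model for 0/1 detection histories (rows = sites)."""
        psi = 1 / (1 + np.exp(-params[0]))   # parameters on the logit scale
        p = 1 / (1 + np.exp(-params[1]))
        ll = 0.0
        for h in histories:
            k, n = h.sum(), h.size
            prob = psi * p**k * (1 - p)**(n - k)
            if k == 0:                        # never detected: occupied-but-missed or unoccupied
                prob += (1 - psi)
            ll += np.log(prob)
        return -ll

    # Hypothetical detection histories: 6 sites, 4 surveys each.
    hist = np.array([[1, 0, 1, 1],
                     [0, 0, 0, 0],
                     [1, 1, 0, 1],
                     [0, 1, 0, 0],
                     [0, 0, 0, 0],
                     [1, 0, 0, 1]])
    fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(hist,))
    psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
    print(f"psi = {psi_hat:.2f}, p = {p_hat:.2f}")
    ```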

  6. Analyzer for gamma cameras diagnostic

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Osorio Deliz, J. F.; Diaz Garcia, A.

    2013-01-01

    This research work was carried out to develop an analyzer for gamma camera diagnostics. It is composed of an electronic system that includes hardware and software capabilities and operates from the acquisition of the four head position signals of a gamma camera detector. The result is the spectrum of the energy delivered by nuclear radiation coming from the camera detector head. This system includes analog processing of the position signals from the camera, digitization and subsequent processing of the energy signal in a multichannel analyzer, sending of data to a computer via a standard USB port, and processing of the data in a personal computer to obtain the final histogram. The circuits are composed of an analog processing board and a universal kit with a microcontroller and a programmable gate array. (Author)

  7. Interband cascade laser-based ppbv-level mid-infrared methane detection using two digital lock-in amplifier schemes

    Science.gov (United States)

    Song, Fang; Zheng, Chuantao; Yu, Di; Zhou, Yanwen; Yan, Wanhong; Ye, Weilin; Zhang, Yu; Wang, Yiding; Tittel, Frank K.

    2018-03-01

    A parts-per-billion in volume (ppbv) level mid-infrared methane (CH4) sensor system was demonstrated using second-harmonic wavelength modulation spectroscopy (2f-WMS). A 3291 nm interband cascade laser (ICL) and a multi-pass gas cell (MPGC) with a 16 m optical path length were adopted in the reported sensor system. Two digital lock-in amplifier (DLIA) schemes, a digital signal processor (DSP)-based DLIA and a LabVIEW-based DLIA, were used for harmonic signal extraction. A limit of detection (LoD) of 13.07 ppbv with an averaging time of 2 s was achieved using the DSP-based DLIA, and a LoD of 5.84 ppbv was obtained using the LabVIEW-based DLIA with the same averaging time. The rise time for a 0→2 parts-per-million in volume (ppmv) concentration step and the fall time for a 2→0 ppmv step were also measured. Outdoor atmospheric CH4 concentration measurements were carried out to evaluate the sensor performance using the two DLIA schemes.
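    A digital lock-in amplifier of the kind mentioned here extracts the second-harmonic (2f) component by mixing the detector signal with quadrature references at twice the modulation frequency and low-pass filtering. The following is a simplified, self-contained sketch on a synthetic signal; the sampling rate, modulation frequency and signal amplitudes are illustrative assumptions and this is not the DSP or LabVIEW implementation from the paper.

    ```python
    import numpy as np

    def lock_in_2f(signal, fs, f_mod):
        """Return the amplitude of the 2f component of `signal` (sampled at
        `fs` Hz) by mixing with quadrature references at 2*f_mod and
        averaging, which acts as a crude low-pass filter."""
        t = np.arange(len(signal)) / fs
        ref_i = np.cos(2 * np.pi * 2 * f_mod * t)
        ref_q = np.sin(2 * np.pi * 2 * f_mod * t)
        i = np.mean(signal * ref_i)
        q = np.mean(signal * ref_q)
        return 2.0 * np.hypot(i, q)

    # Synthetic detector signal: strong 1f background, small 2f absorption signal, noise.
    fs, f_mod = 50_000.0, 5_000.0            # illustrative values only
    t = np.arange(0, 0.2, 1 / fs)
    sig = (1.0 * np.sin(2 * np.pi * f_mod * t)
           + 0.05 * np.cos(2 * np.pi * 2 * f_mod * t)
           + 0.01 * np.random.randn(t.size))
    print(lock_in_2f(sig, fs, f_mod))         # recovers roughly 0.05, the 2f amplitude
    ```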

  8. Load-dependent brain activation assessed by time-domain functional near-infrared spectroscopy during a working memory task with graded levels of difficulty

    Science.gov (United States)

    Molteni, Erika; Contini, Davide; Caffini, Matteo; Baselli, Giuseppe; Spinelli, Lorenzo; Cubeddu, Rinaldo; Cerutti, Sergio; Bianchi, Anna Maria; Torricelli, Alessandro

    2012-05-01

    We evaluated frontal brain activation during a mixed attentional/working memory task with graded levels of difficulty in a group of 19 healthy subjects, by means of time-domain functional near-infrared spectroscopy (fNIRS). Brain activation was assessed, and load-related oxy- and deoxy-hemoglobin changes were studied. Generalized linear model (GLM) was applied to the data to explore the metabolic processes occurring during the mental effort and, possibly, their involvement in short-term memorization. GLM was applied to the data twice: for modeling the task as a whole and for specifically investigating brain activation at each cognitive load. This twofold employment of GLM allowed (1) the extraction and isolation of different information from the same signals, obtained through the modeling of different cognitive categories (sustained attention and working memory), and (2) the evaluation of model fitness, by inspection and comparison of residuals (i.e., unmodeled part of the signal) obtained in the two different cases. Results attest to the presence of a persistent attentional-related metabolic activity, superimposed to a task-related mnemonic contribution. Some hemispherical differences have also been highlighted frontally: deoxy-hemoglobin changes manifested a strong right lateralization, whereas modifications in oxy- and total hemoglobin showed a medial localization. The present work successfully explored the capability of fNIRS to detect the two neurophysiological categories under investigation and distinguish their activation patterns.
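    A GLM analysis of the kind described regresses the measured hemoglobin time course against task-related regressors in a design matrix and estimates the weights by least squares. The sketch below uses an ordinary least-squares fit with a simple boxcar task regressor; the sampling rate, block timing and synthetic signal are illustrative assumptions, not the authors' design.

    ```python
    import numpy as np

    def glm_fit(y, X):
        """Ordinary least-squares GLM: returns the regression coefficients
        (betas) and the residuals for signal y and design matrix X."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta, y - X @ beta

    # Illustrative design matrix: intercept, slow linear drift, and a boxcar
    # regressor for task blocks (alternating 60 s on / 60 s off at 1 Hz).
    fs, n = 1.0, 600
    t = np.arange(n) / fs
    task = ((t % 120) < 60).astype(float)
    X = np.column_stack([np.ones(n), t / t.max(), task])

    # Synthetic oxy-hemoglobin trace: drift plus a load-related response plus noise.
    y = 0.1 * (t / t.max()) + 0.3 * task + 0.05 * np.random.randn(n)
    beta, residuals = glm_fit(y, X)
    print(beta)   # the last coefficient should recover roughly 0.3
    ```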

  9. Astronomy and the camera obscura

    Science.gov (United States)

    Feist, M.

    2000-02-01

    The camera obscura (from Latin meaning darkened chamber) is a simple optical device with a long history. In the form considered here, it can be traced back to 1550. It had its heyday during the Victorian era when it was to be found at the seaside as a tourist attraction or sideshow. It was also used as an artist's drawing aid and, in 1620, the famous astronomer-mathematician, Johannes Kepler used a small tent camera obscura to trace the scenery.

  10. The future of consumer cameras

    Science.gov (United States)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

    In the last two decades multimedia devices, and in particular imaging devices (camcorders, tablets, mobile phones, etc.), have become dramatically widespread. Moreover, the increase in their computational performance, combined with higher storage capability, allows them to process large amounts of data. In this paper an overview of the current trends of the consumer camera market and technology will be given, providing also some details about the recent past (from the Digital Still Camera up to today) and forthcoming key issues.

  11. Camera Coverage Estimation Based on Multistage Grid Subdivision

    Directory of Open Access Journals (Sweden)

    Meizhen Wang

    2017-04-01

    Full Text Available Visual coverage is one of the most important quality indexes for depicting the usability of an individual camera or camera network. It is the basis for camera network deployment, placement, coverage enhancement, planning, etc. Precision and efficiency are critical influences on applications, especially those involving several cameras. This paper proposes a new method to efficiently estimate camera coverage. First, the geographic area that is covered by the camera and its minimum bounding rectangle (MBR), without considering obstacles, is computed using the camera parameters. Second, the MBR is divided into grids using the initial grid size. The status of the four corners of each grid is estimated by a line-of-sight (LOS) algorithm. If the camera, considering obstacles, covers a corner, the status is represented by 1, otherwise by 0. Consequently, the status of a grid can be represented by a code that is a combination of 0s and 1s. If the code is not homogeneous (not four 0s or four 1s), the grid is divided into four sub-grids, until the sub-grids reach a specified maximum subdivision level or their codes are homogeneous. Finally, after performing the process above, the total camera coverage is estimated according to the size and status of all grids. Experimental results illustrate that the proposed method's accuracy approaches that of dividing the coverage area into the smallest grids at the maximum level, while its efficiency is closer to that of dividing the coverage area into only the initial grids; it thus balances efficiency and accuracy. The initial grid size and the maximum level are two critical influences on the proposed method, and they can be determined by weighing efficiency against accuracy.
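    The subdivision scheme described in the abstract can be sketched as a short recursive routine: classify the four corners of a cell, accept homogeneous cells, and split mixed cells until the maximum level is reached. In the sketch below the `covered` predicate stands in for the camera model plus line-of-sight test and is a placeholder assumption; the toy circular footprint only serves to check the arithmetic.

    ```python
    def cell_coverage(x0, y0, size, covered, level, max_level):
        """Covered area inside one square cell via recursive corner subdivision.
        `covered(x, y)` is the visibility predicate (camera model + LOS test)."""
        corners = [covered(x0, y0), covered(x0 + size, y0),
                   covered(x0, y0 + size), covered(x0 + size, y0 + size)]
        if all(corners):
            return size * size                       # homogeneous code: fully covered
        if not any(corners):
            return 0.0                               # homogeneous code: fully uncovered
        if level >= max_level:
            return size * size * sum(corners) / 4.0  # mixed cell at the maximum level
        half = size / 2.0
        return sum(cell_coverage(x, y, half, covered, level + 1, max_level)
                   for x in (x0, x0 + half) for y in (y0, y0 + half))

    def estimate_coverage(mbr_size, n_initial, covered, max_level):
        """Split the MBR into an initial n x n grid, then refine each cell."""
        step = mbr_size / n_initial
        return sum(cell_coverage(i * step, j * step, step, covered, 0, max_level)
                   for i in range(n_initial) for j in range(n_initial))

    # Toy stand-in for the camera coverage footprint: a circle of radius 40 in a 100 x 100 MBR.
    covered = lambda x, y: (x - 50) ** 2 + (y - 50) ** 2 <= 40 ** 2
    print(estimate_coverage(100.0, 10, covered, max_level=4), 3.14159 * 40 ** 2)
    ```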

  12. Science, conservation, and camera traps

    Science.gov (United States)

    Nichols, James D.; Karanth, K. Ullas; O'Connel, Allan F.; O'Connell, Allan F.; Nichols, James D.; Karanth, K. Ullas

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  13. Computing camera heading: A study

    Science.gov (United States)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even in internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard. This is because rotations and translations can have similar effects on the images, and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computation trouble spots beforehand, and to design reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.

  14. GHRSST Level 2P Global Sea Surface Temperature from the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi NPP satellite (GDS version 2)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Visible and Infrared Imager/Radiometer Suite (VIIRS) is a multi-disciplinary instrument that is being flown on the Joint Polar Satellite System (JPSS) series of...

  15. AIRS/Aqua Near Real Time (NRT) Level 1B Visible/Near Infrared (VIS/NIR) geolocated and calibrated radiances V005

    Data.gov (United States)

    National Aeronautics and Space Administration — The Atmospheric Infrared Sounder (AIRS) is a facility instrument aboard the second Earth Observing System (EOS) polar-orbiting platform, EOS Aqua. In combination...

  16. AIRS/Aqua Near Real Time (NRT) Level 1B Visible/Near Infrared (VIS/NIR) quality assurance subset V005

    Data.gov (United States)

    National Aeronautics and Space Administration — The Atmospheric Infrared Sounder (AIRS) is a facility instrument aboard the second Earth Observing System (EOS) polar-orbiting platform, EOS Aqua. In combination...

  17. Declarative camera control for automatic cinematography

    Energy Technology Data Exchange (ETDEWEB)

    Christianson, D.B.; Anderson, S.E.; Li-wei He [Univ. of Washington, Seattle, WA (United States)]; and others

    1996-12-31

    Animations generated by interactive 3D computer graphics applications are typically portrayed either from a particular character's point of view or from a small set of strategically-placed viewpoints. By ignoring camera placement, such applications fail to realize important storytelling capabilities that have been explored by cinematographers for many years. In this paper, we describe several of the principles of cinematography and show how they can be formalized into a declarative language, called the Declarative Camera Control Language (DCCL). We describe the application of DCCL within the context of a simple interactive video game and argue that DCCL represents cinematic knowledge at the same level of abstraction as expert directors by encoding 16 idioms from a film textbook. These idioms produce compelling animations, as demonstrated on the accompanying videotape.

  18. The Cosmic Infrared Background Experiment

    Science.gov (United States)

    Bock, James; Battle, J.; Cooray, A.; Hristov, V.; Kawada, M.; Keating, B.; Lee, D.; Matsumoto, T.; Matsuura, S.; Nam, U.; Renbarger, T.; Sullivan, I.; Tsumura, K.; Wada, T.; Zemcov, M.

    2009-01-01

    We are developing the Cosmic Infrared Background ExpeRiment (CIBER) to search for signatures of first-light galaxy emission in the extragalactic background. The first generation of stars produces characteristic signatures in the near-infrared extragalactic background, including a redshifted Lyman cutoff feature and a characteristic fluctuation power spectrum, that may be detectable with a specialized instrument. CIBER consists of two wide-field cameras to measure the fluctuation power spectrum, and a low-resolution and a narrow-band spectrometer to measure the absolute background. The cameras will search for fluctuations on angular scales from 7 arcseconds to 2 degrees, where the first-light galaxy spatial power spectrum peaks. The cameras have the necessary combination of sensitivity, wide field of view, spatial resolution, and multiple bands to make a definitive measurement. CIBER will determine if the fluctuations reported by Spitzer arise from first-light galaxies. The cameras observe in a single wide field of view, eliminating systematic errors associated with mosaicing. Two bands are chosen to maximize the first-light signal contrast, at 1.6 um near the expected spectral maximum, and at 1.0 um; the combination is a powerful discriminant against fluctuations arising from local sources. We will observe regions of the sky surveyed by Spitzer and Akari. The low-resolution spectrometer will search for the redshifted Lyman cutoff feature in the 0.7 - 1.8 um spectral region. The narrow-band spectrometer will measure the absolute Zodiacal brightness using the scattered 854.2 nm Ca II Fraunhofer line. The spectrometers will test whether reports of a diffuse extragalactic background in the 1 - 2 um band continue into the optical, or are caused by an underestimation of the Zodiacal foreground. We report performance of the assembled and tested instrument as we prepare for a first sounding rocket flight in early 2009. CIBER is funded by the NASA/APRA sub-orbital program.

  19. Space-based infrared sensors of space target imaging effect analysis

    Science.gov (United States)

    Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang

    2018-02-01

    Target identification is one of the core problems of a ballistic missile defense system, and infrared imaging simulation is an important means of target detection and recognition. This paper first establishes a space-based infrared sensor imaging model of a point-source ballistic target seen against the planet's atmosphere; it then simulates the infrared imaging of the ballistic target from two aspects, the space-based sensor's camera parameters and the target characteristics, and analyzes the effects of camera line-of-sight jitter, camera system noise, and different wavebands on the target image.

  20. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the used IGI Penta DigiCAM has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively, dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centre are satisfactorily but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters, or in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions for the inclined cameras with a size exceeding 5 μm, even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding

  1. An autonomous sensor module based on a legacy CCTV camera

    Science.gov (United States)

    Kent, P. J.; Faulkner, D. A. A.; Marshall, G. F.

    2016-10-01

    A UK MoD funded programme into autonomous sensor arrays (SAPIENT) has been developing new, highly capable sensor modules together with a scalable modular architecture for control and communication. As part of this system there is a desire to also utilise existing legacy sensors. The paper reports upon the development of a SAPIENT-compliant sensor module using a legacy Closed-Circuit Television (CCTV) pan-tilt-zoom (PTZ) camera. The PTZ camera sensor provides three modes of operation. In the first mode, the camera is automatically slewed to acquire imagery of a specified scene area, e.g. to provide "eyes-on" confirmation for a human operator or for forensic purposes. In the second mode, the camera is directed to monitor an area of interest, with zoom level automatically optimized for human detection at the appropriate range. Open source algorithms (using OpenCV) are used to automatically detect pedestrians; their real world positions are estimated and communicated back to the SAPIENT central fusion system. In the third mode of operation a "follow" mode is implemented where the camera maintains the detected person within the camera field-of-view without requiring an end-user to directly control the camera with a joystick.
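    The abstract mentions open-source pedestrian detection with OpenCV; one commonly used option in that library is the built-in HOG people detector. The sketch below shows that detector on a live video stream; the video source and the detection parameters are assumptions, and this is not necessarily the specific algorithm used in the SAPIENT module.

    ```python
    import cv2

    # HOG descriptor with OpenCV's pre-trained pedestrian SVM.
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    cap = cv2.VideoCapture(0)   # assumed video source, e.g. the PTZ camera stream
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Detect pedestrians; winStride/padding/scale are typical starting values.
        boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                              padding=(8, 8), scale=1.05)
        for (x, y, w, h) in boxes:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("detections", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
    ```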

  2. BENCHMARKING THE OPTICAL RESOLVING POWER OF UAV BASED CAMERA SYSTEMS

    Directory of Open Access Journals (Sweden)

    H. Meißner

    2017-08-01

    Full Text Available UAV based imaging and 3D object point generation is an established technology. Some UAV users try to address (very high) accuracy applications, i.e. inspection or monitoring scenarios. In order to guarantee such a level of detail and accuracy, high resolving imaging systems are mandatory. Furthermore, image quality considerably impacts photogrammetric processing, as the tie point transfer, mandatory for forming the block geometry, fully relies on the radiometric quality of images. Thus, empirical testing of radiometric camera performance is an important issue, in addition to the standard (geometric) calibration, which normally is covered primarily. Within this paper the resolving power of ten different camera/lens installations has been investigated. Selected systems represent different camera classes, like DSLRs, system cameras, larger format cameras and proprietary systems. As the systems have been tested in well-controlled laboratory conditions and objective quality measures have been derived, individual performance can be compared directly, thus representing a first benchmark on the radiometric performance of UAV cameras. The results have shown that not only the selection of an appropriate lens and camera body has an impact; in addition the image pre-processing, i.e. the use of a specific debayering method, significantly influences the final resolving power.

  3. Microprocessor-controlled, wide-range streak camera

    Energy Technology Data Exchange (ETDEWEB)

    Amy E. Lewis, Craig Hollabaugh

    2006-09-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera’s user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  4. The fly's eye camera system

    Science.gov (United States)

    Mészáros, L.; Pál, A.; Csépány, G.; Jaskó, A.; Vida, K.; Oláh, K.; Mezö, G.

    2014-12-01

    We introduce the Fly's Eye Camera System, an all-sky monitoring device intended to perform time-domain astronomy. This camera system design will provide complementary data sets for other synoptic sky surveys such as LSST or Pan-STARRS. The effective field of view is obtained by 19 cameras arranged in a spherical mosaic form. These individual cameras of the device stand on a hexapod mount that is fully capable of achieving sidereal tracking for the subsequent exposures. This platform has many advantages. First of all it requires only one type of moving component and does not include unique parts. Hence this design not only eliminates problems implied by unique elements, but the redundancy of the hexapod allows smooth operations even if one or two of the legs are stuck. In addition, it can calibrate itself by observed stars independently of both the geographical location (including the northern and southern hemispheres) and the polar alignment of the full mount. All mechanical elements and electronics are designed at our institute, Konkoly Observatory. Currently, our instrument is in the testing phase with an operating hexapod and a reduced number of cameras.

  5. Use of Fourier-transform infrared spectroscopy to quantify immunoglobulin G concentration and an analysis of the effect of signalment on levels in canine serum.

    Science.gov (United States)

    Seigneur, A; Hou, S; Shaw, R A; McClure, Jt; Gelens, H; Riley, C B

    2015-01-15

    Deficiency in immunoglobulin G (IgG) is associated with an increased susceptibility to infections in humans and animals, and changes in IgG levels occur in many disease states. In companion animals, failure of transfer of passive immunity is uncommonly diagnosed but mortality rates in puppies are high and more than 30% of these deaths are secondary to septicemia. Currently, radial immunodiffusion (RID) and enzyme-linked immunosorbent assays are the most commonly used methods for quantitative measurement of IgG in dogs. In this study, a Fourier-transform infrared spectroscopy (FTIR) assay for canine serum IgG was developed and compared to the RID assay as the reference standard. Basic signalment data and health status of the dogs were also analyzed to determine if they correlated with serum IgG concentrations based on RID results. Serum samples were collected from 207 dogs during routine hematological evaluation, and IgG concentrations determined by RID. The FTIR assay was developed using partial least squares regression analysis and its performance evaluated using RID assay as the reference test. The concordance correlation coefficient was 0.91 for the calibration model data set and 0.85 for the prediction set. A Bland-Altman plot showed a mean difference of -89 mg/dL and no systematic bias. The modified mean coefficient of variation (CV) for RID was 6.67%, and for FTIR was 18.76%. The mean serum IgG concentration using RID was 1943 ± 880 mg/dL based on the 193 dogs with complete signalment and health data. When age class, gender, breed size and disease status were analyzed by multivariable ANOVA, dogs < 2 years of age (p = 0.0004) and those classified as diseased (p = 0.03) were found to have significantly lower IgG concentrations than older and healthy dogs, respectively. Copyright © 2014 Elsevier B.V. All rights reserved.
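    The calibration described here, partial least squares regression of FTIR spectra against RID-measured IgG concentrations, can be prototyped with scikit-learn. The sketch below uses synthetic spectra and an assumed number of latent variables purely to illustrate the workflow; it does not reproduce the authors' data or model settings.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-in data: 200 "serum spectra" with 400 wavenumber points,
    # whose absorbances depend linearly on an underlying IgG concentration.
    n_samples, n_points = 200, 400
    igg = rng.uniform(500, 4000, n_samples)                     # mg/dL
    basis = rng.normal(size=n_points)
    spectra = np.outer(igg / 4000.0, basis) + 0.05 * rng.normal(size=(n_samples, n_points))

    X_train, X_test, y_train, y_test = train_test_split(spectra, igg, random_state=0)

    pls = PLSRegression(n_components=8)    # number of latent variables is an assumption
    pls.fit(X_train, y_train)
    pred = pls.predict(X_test).ravel()

    rmse = np.sqrt(np.mean((pred - y_test) ** 2))
    print(f"RMSE of prediction: {rmse:.1f} mg/dL")
    ```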

  6. Video camera use at nuclear power plants

    International Nuclear Information System (INIS)

    Estabrook, M.L.; Langan, M.O.; Owen, D.E.

    1990-08-01

    A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations, and determine equipment used and the benefits realized. Basic closed circuit television camera (CCTV) systems are described and video camera operation principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposal cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use --- mainly reduced radiation exposure and increased productivity --- are discussed and quantified. 15 refs., 6 figs

  7. Evaluation on the concentration change of paeoniflorin and glycyrrhizic acid in different formulations of Shaoyao-Gancao-Tang by the tri-level infrared macro-fingerprint spectroscopy and the whole analysis method

    Science.gov (United States)

    Liu, Aoxue; Wang, Jingjuan; Guo, Yizhen; Xiao, Yao; Wang, Yue; Sun, Suqin; Chen, Jianbo

    2018-03-01

    As a common type of prescription, Shaoyao-Gancao-Tang (SGT) contains two Chinese herbs combined in four different proportions, which show different clinical efficacy because of their varying components. In order to investigate the herb-herb interaction mechanisms, in this research we used tri-level infrared macro-fingerprint spectroscopy to evaluate the change in concentration of the active components of the four SGTs. Fourier transform infrared spectroscopy (FT-IR) and second-derivative infrared spectroscopy (SD-IR) can recognize the multiple prescriptions directly and simultaneously. 2D-IR spectra enhance the spectral resolution and provide much new information for discriminating the similar, complicated SGT samples. Furthermore, a whole-analysis method that proceeds from the main components to the specific components and their relative contents may better evaluate the quality of TCM. We concluded that paeoniflorin and glycyrrhizic acid had the highest proportion of active ingredients in SGT-12:1 and the lowest in SGT-12:12, which matched the HPLC-DAD results. It is demonstrated that the method combining tri-level infrared macro-fingerprint spectroscopy and whole analysis is applicable for effective, visual and accurate analysis and identification of very complicated and similar mixture systems of traditional Chinese medicine.
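    Second-derivative spectra of the kind used in the SD-IR level are commonly obtained with a Savitzky-Golay filter. The following is a minimal sketch on a synthetic spectrum; the band positions, window length and polynomial order are illustrative choices and not the authors' settings.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter

    # Synthetic FT-IR-like spectrum: two overlapping absorption bands plus noise.
    wavenumber = np.linspace(400, 4000, 1800)          # cm^-1
    band = lambda c, w: np.exp(-((wavenumber - c) / w) ** 2)
    spectrum = (0.8 * band(1650, 40) + 0.5 * band(1710, 35)
                + 0.01 * np.random.randn(wavenumber.size))

    # Second-derivative spectrum (SD-IR): enhances resolution of overlapping bands.
    second_derivative = savgol_filter(spectrum, window_length=21, polyorder=3, deriv=2)

    # Band positions show up as minima in the second derivative.
    print(wavenumber[np.argsort(second_derivative)[:5]])
    ```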

  8. Theory and applications of smart cameras

    CERN Document Server

    2016-01-01

    This book presents an overview of smart camera systems, considering practical applications but also reviewing fundamental aspects of the underlying technology.  It introduces in a tutorial style the principles of sensing and signal processing, and also describes topics such as wireless connection to the Internet of Things (IoT), which is expected to be the biggest market for smart cameras. It is an excellent guide to the fundamentals of smart camera technology, and the chapters complement each other well as the authors have worked as a team under the auspices of the GFP (Global Frontier Project), the largest-scale funded research project in Korea.  This is the third of three books based on the Integrated Smart Sensors research project, which describe the development of innovative devices, circuits, and system-level enabling technologies.  The aim of the project was to develop common platforms on which various devices and sensors can be loaded, and to create systems offering significant improvements in information processi...

  9. Remote hardware-reconfigurable robotic camera

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.

  10. Firefly: A HOT camera core for thermal imagers with enhanced functionality

    Science.gov (United States)

    Pillans, Luke; Harmer, Jack; Edwards, Tim

    2015-06-01

    Raising the operating temperature of mercury cadmium telluride infrared detectors from 80K to above 160K creates new applications for high performance infrared imagers by vastly reducing the size, weight and power consumption of the integrated cryogenic cooler. Realizing the benefits of Higher Operating Temperature (HOT) requires a new kind of infrared camera core with the flexibility to address emerging applications in handheld, weapon mounted and UAV markets. This paper discusses the Firefly core developed to address these needs by Selex ES in Southampton UK. Firefly represents a fundamental redesign of the infrared signal chain reducing power consumption and providing compatibility with low cost, low power Commercial Off-The-Shelf (COTS) computing technology. This paper describes key innovations in this signal chain: a ROIC purpose built to minimize power consumption in the proximity electronics, GPU based image processing of infrared video, and a software customisable infrared core which can communicate wirelessly with other Battlespace systems.

  11. The GISMO-2 Bolometer Camera

    Science.gov (United States)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.

    2012-01-01

    We present the concept for the GISMO-2 bolometer camera, which we are building for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 x 40 TES-based Backshort Under Grid (BUG) bolometer array, while the 2 mm channel operates with a 16 x 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  12. Dark Energy Camera for Blanco

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  13. Perceptual Color Characterization of Cameras

    Directory of Open Access Journals (Sweden)

    Javier Vazquez-Corral

    2014-12-01

    Full Text Available Color camera characterization, mapping outputs from the camera sensors to an independent color space such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measures.
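    The conventional baseline this paper starts from, a 3 × 3 matrix fitted by least squares that maps raw camera RGB to XYZ, is easy to sketch. The training patches below are made up for illustration and the fit is the plain least-squares version, not the perceptual optimization proposed in the paper.

    ```python
    import numpy as np

    def fit_color_matrix(camera_rgb, target_xyz):
        """Least-squares 3 x 3 matrix M such that camera_rgb @ M.T ~ target_xyz.
        Both inputs are (N, 3) arrays of corresponding measurements."""
        M_T, *_ = np.linalg.lstsq(camera_rgb, target_xyz, rcond=None)
        return M_T.T

    # Made-up training patches: raw sensor responses and measured XYZ values.
    rgb = np.array([[0.82, 0.41, 0.10],
                    [0.22, 0.63, 0.34],
                    [0.12, 0.25, 0.71],
                    [0.55, 0.55, 0.52],
                    [0.90, 0.85, 0.80]])
    xyz = np.array([[0.45, 0.38, 0.07],
                    [0.30, 0.52, 0.28],
                    [0.22, 0.23, 0.66],
                    [0.50, 0.53, 0.49],
                    [0.82, 0.86, 0.78]])
    M = fit_color_matrix(rgb, xyz)
    print(M)
    print(rgb @ M.T)   # mapped values should approximate the target XYZ
    ```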

  14. Near-infrared high-resolution real-time omnidirectional imaging platform for drone detection

    Science.gov (United States)

    Popovic, Vladan; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2016-10-01

    Recent technological advancements in hardware systems have made higher quality cameras. State of the art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps).1 Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time.2 In this paper, we focus on the system-level design of a multi-camera sensor acquiring the near-infrared (NIR) spectrum and its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show the UAV detection performance from a field trial that we conducted in August 2015.

  15. EDICAM (Event Detection Intelligent Camera)

    Energy Technology Data Exchange (ETDEWEB)

    Zoletnik, S. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Szabolics, T., E-mail: szabolics.tamas@wigner.mta.hu [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Kocsis, G.; Szepesi, T.; Dunai, D. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary)

    2013-10-15

    Highlights: ► We present EDICAM's hardware modules. ► We present EDICAM's main design concepts. ► This paper will describe EDICAM firmware architecture. ► Operation principles description. ► Further developments. -- Abstract: A new type of fast framing camera has been developed for fusion applications by the Wigner Research Centre for Physics during the last few years. A new concept was designed for intelligent event driven imaging which is capable of focusing image readout to Regions of Interests (ROIs) where and when predefined events occur. At present these events mean intensity changes and external triggers but in the future more sophisticated methods might also be defined. The camera provides 444 Hz frame rate at full resolution of 1280 × 1024 pixels, but monitoring of smaller ROIs can be done in the 1–116 kHz range even during exposure of the full image. Keeping space limitations and the harsh environment in mind the camera is divided into a small Sensor Module and a processing card interconnected by a fast 10 Gbit optical link. This camera hardware has been used for passive monitoring of the plasma in different devices for example at ASDEX Upgrade and COMPASS with the first version of its firmware. The new firmware and software package is now available and ready for testing the new event processing features. This paper will present the operation principle and features of the Event Detection Intelligent Camera (EDICAM). The device is intended to be the central element in the 10-camera monitoring system of the Wendelstein 7-X stellarator.

  16. Decoupling Intensity Radiated by the Emitter in Distance Estimation from Camera to IR Emitter

    Directory of Open Access Journals (Sweden)

    Carlos Andrés Luna Vázquez

    2013-05-01

    Full Text Available Various models using a radiometric approach have been proposed to solve the problem of estimating the distance between a camera and an infrared emitter diode (IRED). They depend directly on the radiant intensity of the emitter, set by the IRED bias current. As is known, this current presents a drift with temperature, which will be transferred to the distance estimation method. This paper proposes an alternative approach to remove temperature drift from the distance estimation method by eliminating the dependence on radiant intensity. The main aim was to use the relative accumulated energy together with other defined models, such as the zeroth-frequency component of the FFT of the IRED image and the standard deviation of pixel gray level intensities in the region of interest containing the IRED image. By using the abovementioned models, an expression free of IRED radiant intensity was obtained. Furthermore, the final model permitted simultaneous estimation of the distance between the IRED and the camera and the IRED orientation angle. The alternative presented in this paper gave a 3% maximum relative error over a range of distances up to 3 m.
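
    The image-derived quantities named in the abstract can be computed directly from the region of interest containing the IRED image. The definitions below are our reading of the text (zeroth-frequency FFT component, grey-level standard deviation, relative accumulated energy), not the authors' code, and the ROI data is a placeholder.

        import numpy as np

        def ired_features(roi):
            """Image features described above, computed on the ROI that
            contains the IRED image (definitions are assumptions based on
            the abstract, not the published model)."""
            roi = roi.astype(float)
            # Zeroth-frequency (DC) component of the 2-D FFT of the ROI.
            dc = np.abs(np.fft.fft2(roi))[0, 0]
            # Standard deviation of pixel grey-level intensities in the ROI.
            sigma = roi.std()
            # Relative accumulated energy: cumulative sum of sorted pixel
            # intensities, normalised by the total.
            energy = np.sort(roi.ravel())[::-1].cumsum()
            rel_energy = energy / energy[-1]
            return dc, sigma, rel_energy

        roi = np.random.rand(32, 32)   # placeholder ROI
        print(ired_features(roi)[:2])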

  17. Wildlife Presence and Interactions with Chickens on Australian Commercial Chicken Farms Assessed by Camera Traps.

    Science.gov (United States)

    Scott, Angela Bullanday; Phalen, David; Hernandez-Jover, Marta; Singh, Mini; Groves, Peter; Toribio, Jenny-Ann L M L

    2018-03-01

    The types of wildlife and the frequency of their visits to commercial chicken farms in Australia were assessed using infrared and motion-sensing camera traps. Cameras were set up on 14 free-range layer farms, three cage layer farms, two barn layer farms, five non-free-range meat chicken farms, and six free-range meat chicken farms in the Sydney basin region and South East Queensland. Wildlife visits were found on every farm type and were most frequent on cage layer farms (73%), followed by free-range layer farms (15%). The common mynah (Acridotheres tristis) was the most frequent wildlife visitor in the study (23.9%), followed by corvids (22.9%) and Columbiformes (7.5%). Most wildlife visits occurred during the day from 6 am to 6 pm (85%). There were infrequent observations of direct contact between chickens and wildlife, suggesting the indirect route of pathogen transfer may be more significant. The level of biosecurity on the farm is suggested to impact the frequency of wildlife visits more so than the farm type.

  18. The Sydney University PAPA camera

    Science.gov (United States)

    Lawson, Peter R.

    1994-04-01

    The Precision Analog Photon Address (PAPA) camera is a photon-counting array detector that uses optical encoding to locate photon events on the output of a microchannel plate image intensifier. The Sydney University camera is a 256x256 pixel detector which can operate at speeds greater than 1 million photons per second and produce individual photon coordinates with a deadtime of only 300 ns. It uses a new Gray coded mask-plate which permits a simplified optical alignment and successfully guards against vignetting artifacts.

  19. Streak cameras and their applications

    International Nuclear Information System (INIS)

    Bernet, J.M.; Imhoff, C.

    1987-01-01

    Over the last several years, development of various measurement techniques in the nanosecond and picosecond range has led to increased reliance on streak cameras. This paper will present the main electronic and optoelectronic performances of the Thomson-CSF TSN 506 cameras and their associated devices used to build an automatic image acquisition and processing system (NORMA). A brief survey of the diversity and the spread of the use of high speed electronic cinematography will be illustrated by a few typical applications [fr

  20. Wide field and diffraction limited array camera for SIRTF

    International Nuclear Information System (INIS)

    Fazio, G.G.; Koch, D.G.; Melnick, G.J.

    1986-01-01

    The Infrared Array Camera for the Space Infrared Telescope Facility (SIRTF/IRAC) is capable of two-dimensional photometry in either a wide field or diffraction-limited mode over the wavelength interval from 2 to 30 microns. Three different two-dimensional direct readout (DRO) array detectors are being considered: Band 1-InSb or Si:In (2-5 microns) 128 x 128 pixels, Band 2-Si:Ga (5-18 microns) 64 x 64 pixels, and Band 3-Si:Sb (18-30 microns) 64 x 64 pixels. The hybrid DRO readout architecture has the advantages of low read noise, random pixel access with individual readout rates, and nondestructive readout. The scientific goals of IRAC are discussed, which are the basis for several important requirements and capabilities of the array camera: (1) diffraction-limited resolution from 2-30 microns, (2) use of the maximum unvignetted field of view of SIRTF, (3) simultaneous observations within the three infrared spectral bands, and (4) the capability for broad and narrow bandwidth spectral resolution. A strategy has been developed to minimize the total electronic and environmental noise sources to satisfy the scientific requirements. 7 references

  1. An Estimate of the Pixel-Level Connection between Visible Infrared Imaging Radiometer Suite Day/Night Band (VIIRS DNB Nighttime Lights and Land Features across China

    Directory of Open Access Journals (Sweden)

    Ting Ma

    2018-05-01

    Full Text Available Satellite-derived nighttime light images are increasingly used for various studies in relation to demographic, socioeconomic and urbanization dynamics because of the salient relationships between anthropogenic lighting signals at night and statistical variables at multiple scales. Owing to a higher spatial resolution and fewer over-glow and saturation effects, the new generation of nighttime light data derived from the Visible Infrared Imaging Radiometer Suite (VIIRS) day/night band (DNB), which is located on board the Suomi National Polar-Orbiting Partnership (Suomi-NPP) satellite, is expected to facilitate the performance of nocturnal luminosity-based investigations of human activity in a spatially explicit manner. In spite of the importance of the spatial connection between the VIIRS DNB nighttime light radiance (NTL) and the land surface type at a fine scale, the crucial role of NTL-based investigations of human settlements is not well understood. In this study, we investigated the pixel-level relationship between the VIIRS DNB-derived NTL, a Landsat-derived land-use/land-cover dataset, and the map of point of interest (POI) density over China, especially with respect to the identification of artificial surfaces in urban land. Our estimates suggest that notable differences in the NTL between urban (man-made) surfaces and other types of land surfaces likely allow us to spatially identify most of the urban pixels with relatively high radiance values in VIIRS DNB images. Our results also suggest that current nighttime light data have a limited capability for detecting rural residential areas and explaining pixel-level variations in the POI density at a large scale. Moreover, the impact of non-man-made surfaces on the partitioned results appears inevitable because of the spatial heterogeneity of human settlements and the nature of remotely sensed nighttime light data. Using receiver operating characteristic (ROC) curve-based analysis, we obtained

  2. Application of Infrared Thermography as a Diagnostic Tool of Knee Osteoarthritis

    Science.gov (United States)

    Arfaoui, Ahlem; Bouzid, Mohamed Amine; Pron, Hervé; Taiar, Redha; Polidori, Guillaume

    This paper aimed to study the feasibility of applying infrared thermography to detect osteoarthritis of the knee and to compare the distribution of skin temperature between participants with osteoarthritis and those without the pathology. All tests were conducted at LACM (Laboratory of Mechanical Stresses Analysis) and the gymnasium of the University of Reims Champagne Ardennes. IR thermography was performed using an IR camera. Ten participants with knee osteoarthritis and 12 reference healthy participants without OA participated in this study. Questionnaires were also used. The participants with osteoarthritis of the knee were selected on clinical examination and a series of radiographs. The level of pain was recorded by using a simple verbal scale (0-4). Infrared thermography reveals relevant disease by highlighting asymmetrical behavior in thermal color maps of both knees. Moreover, a linear evolution of skin temperature in the knee area versus time was found, whatever the participant group, in the first stage following a given effort. Results clearly show that the temperature can be regarded as a key parameter for evaluating pain. Thermal images of the knee were taken with an infrared camera. The study shows that, with the advantage of being noninvasive and easily repeatable, IRT appears to be a useful tool to detect quantifiable patterns of surface temperatures and predict the singular thermal behavior of this pathology. It also seems that this non-intrusive technique enables detection of the early clinical manifestations of knee OA.

  3. The first GCT camera for the Cherenkov Telescope Array

    CERN Document Server

    De Franco, A.; Allan, D.; Armstrong, T.; Ashton, T.; Balzer, A.; Berge, D.; Bose, R.; Brown, A.M.; Buckley, J.; Chadwick, P.M.; Cooke, P.; Cotter, G.; Daniel, M.K.; Funk, S.; Greenshaw, T.; Hinton, J.; Kraus, M.; Lapington, J.; Molyneux, P.; Moore, P.; Nolan, S.; Okumura, A.; Ross, D.; Rulten, C.; Schmoll, J.; Schoorlemmer, H.; Stephan, M.; Sutcliffe, P.; Tajima, H.; Thornhill, J.; Tibaldo, L.; Varner, G.; Watson, J.; Zink, A.

    2015-01-01

    The Gamma Cherenkov Telescope (GCT) is proposed to be part of the Small Size Telescope (SST) array of the Cherenkov Telescope Array (CTA). The GCT dual-mirror optical design allows the use of a compact camera of diameter roughly 0.4 m. The curved focal plane is equipped with 2048 pixels of ~0.2° angular size, resulting in a field of view of ~9°. The GCT camera is designed to record the flashes of Cherenkov light from electromagnetic cascades, which last only a few tens of nanoseconds. Modules based on custom ASICs provide the required fast electronics, facilitating sampling and digitisation as well as the first level of triggering. The first GCT camera prototype is currently being commissioned in the UK. On-telescope tests are planned later this year. Here we give a detailed description of the camera prototype and present recent progress with testing and commissioning.

  4. Image Alignment for Multiple Camera High Dynamic Range Microscopy.

    Science.gov (United States)

    Eastwood, Brian S; Childs, Elisabeth C

    2012-01-09

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
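
    The alignment workflow the paper favours (matching feature descriptors extracted from radiant power images of calibrated cameras, then warping one view onto the other) can be illustrated with a short sketch. ORB descriptors and OpenCV are stand-in assumptions here; the paper evaluates exposure-robust descriptor techniques rather than prescribing this particular detector.

        import cv2
        import numpy as np

        def align(ref, moving):
            """Estimate a homography mapping `moving` onto `ref` by matching
            feature descriptors. Both inputs are assumed to already be radiant
            power images converted to 8-bit grayscale for descriptor extraction."""
            orb = cv2.ORB_create(2000)
            k1, d1 = orb.detectAndCompute(ref, None)
            k2, d2 = orb.detectAndCompute(moving, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]
            src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            # Warp the moving image into the reference frame before HDR fusion.
            return cv2.warpPerspective(moving, H, ref.shape[1::-1])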

  5. Infrared observations of planetary atmospheres

    International Nuclear Information System (INIS)

    Orton, G.S.; Baines, K.H.; Bergstralh, J.T.

    1988-01-01

    The goal of this research is to obtain infrared data on planetary atmospheres which provide information on several aspects of structure and composition. Observations include direct mission real-time support as well as baseline monitoring preceding mission encounters. Besides providing a broader information context for spacecraft experiment data analysis, the observations will provide the quantitative data base required for designing optimum remote sensing sequences and evaluating competing science priorities. In the past year, thermal images of Jupiter and Saturn were made near their oppositions in order to monitor long-term changes in their atmospheres. Infrared images of the Jovian polar stratospheric hot spots were made with IUE observations of auroral emissions. An exploratory 5-micrometer spectrum of Uranus was reduced and accepted for publication. An analysis of time-variability of temperature and cloud properties of the Jovian atmosphere was made. Development of geometric reduction programs for imaging data was initiated for the Sun workstation. Near-infrared imaging observations of Jupiter were reduced and a preliminary analysis of cloud properties made. The first images of the full disk of Jupiter with a near-infrared array camera were acquired. Narrow-band (10/cm) images of Jupiter and Saturn were obtained with acousto-optical filters

  6. A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i

    Science.gov (United States)

    Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.

    2015-01-01

    We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity. 
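
    The time-lapse mode described above can be reproduced with a short script. This is a minimal sketch assuming the Raspberry Pi picamera library, an illustrative 5-minute interval and a GPS-disciplined system clock; it is not the observatory's own Python scripts, and the paths and settings are placeholders.

        import time
        from datetime import datetime, timezone
        from picamera import PiCamera

        INTERVAL_S = 300          # one frame every 5 minutes (illustrative)

        camera = PiCamera(resolution=(2592, 1944))   # 5-megapixel sensor
        camera.start_preview()
        time.sleep(2)             # let exposure and gain settle

        while True:
            # Assumes the system clock is already disciplined by the GPS module.
            stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
            camera.capture(f"/home/pi/images/{stamp}.jpg")
            time.sleep(INTERVAL_S)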

  7. High-speed holographic camera

    International Nuclear Information System (INIS)

    Novaro, Marc

    The high-speed holographic camera is a diagnostic instrument using holography as an information storing support. It allows us to take 10 holograms of an object, with exposure times of 1.5 ns, separated in time by 1 or 2 ns. In order to get these results easily, no mobile part is used in the set-up [fr

  8. The Camera Comes to Court.

    Science.gov (United States)

    Floren, Leola

    After the Lindbergh kidnapping trial in 1935, the American Bar Association sought to eliminate electronic equipment from courtroom proceedings. Eventually, all but two states adopted regulations applying that ban to some extent, and a 1965 Supreme Court decision encouraged the banning of television cameras at trials as well. Currently, some states…

  9. Gamma camera with reflectivity mask

    International Nuclear Information System (INIS)

    Stout, K.J.

    1980-01-01

    In accordance with the present invention there is provided a radiographic camera comprising: a scintillator; a plurality of photodetectors positioned to face said scintillator; a plurality of masked regions formed upon a face of said scintillator opposite said photodetectors and positioned coaxially with respective ones of said photodetectors for decreasing the amount of internal reflection of optical photons generated within said scintillator. (auth)

  10. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    The resolution of cameras has drastically improved in response to the current demand for high-quality digital images. For example, a digital still camera has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution is incompatible with the high frame rate of ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it comes from a physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs for capturing different spectral information. Our approach is to use sensors with different spatio-temporal resolutions in a single camera cabinet for capturing higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  11. Monitoring machining conditions by infrared images

    Science.gov (United States)

    Borelli, Joao E.; Gonzaga Trabasso, Luis; Gonzaga, Adilson; Coelho, Reginaldo T.

    2001-03-01

    During the machining process, knowledge of the temperature is the most important factor in tool analysis. It allows control of the main factors that influence tool use, life time and waste. The temperature in the contact area between the piece and the tool results from the material removal in the cutting operation, and it is very difficult to obtain because the tool and the work piece are in motion. One way to measure the temperature in this situation is by detecting the infrared radiation. This work presents a new methodology for diagnosis and monitoring of machining processes with the use of infrared images. The infrared image provides a map in gray tones of the elements in the process: tool, work piece and chips. Each gray tone in the image corresponds to a certain temperature for each one of those materials, and the relationship between the gray tones and the temperature is obtained from a prior calibration of the infrared camera. The system developed in this work uses an infrared camera, a frame grabber board and software composed of three modules. The first module performs image acquisition and processing. The second module extracts the image features and forms the feature vector. Finally, the third module uses fuzzy logic to evaluate the feature vector and outputs the tool state diagnostic.
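
    The gray-tone-to-temperature step described above amounts to a calibration lookup. A minimal sketch, assuming a per-material calibration table obtained from a prior blackbody calibration of the infrared camera; the calibration values and image are placeholders, not the authors' data.

        import numpy as np

        # Calibration pairs (grey level -> temperature in deg C) from a prior
        # camera calibration; one table per material (emissivity) is assumed.
        cal_grey = np.array([ 30.0,  80.0, 140.0, 200.0, 255.0])
        cal_temp = np.array([100.0, 250.0, 420.0, 600.0, 780.0])

        def grey_to_temperature(image):
            """Map an 8-bit infrared image to a temperature map by piecewise
            linear interpolation of the calibration table."""
            return np.interp(image.astype(float), cal_grey, cal_temp)

        ir_image = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
        temp_map = grey_to_temperature(ir_image)
        print(temp_map.max())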

  12. Thermoelectric infrared imaging sensors for automotive applications

    Science.gov (United States)

    Hirota, Masaki; Nakajima, Yasushi; Saito, Masanori; Satou, Fuminori; Uchiyama, Makoto

    2004-07-01

    This paper describes three low-cost thermoelectric infrared imaging sensors having 1,536, 2,304, and 10,800 element thermoelectric focal plane arrays (FPAs), respectively, and two experimental automotive application systems. The FPAs are basically fabricated with a conventional IC process and micromachining technologies and have low-cost potential. Among these sensors, the sensor having 2,304 elements provides a high responsivity of 5,500 V/W and a very small size by adopting a vacuum-sealed package integrated with a wide-angle ZnS lens. One experimental system incorporated in the Nissan ASV-2 is a blind spot pedestrian warning system that employs four infrared imaging sensors. This system helps alert the driver to the presence of a pedestrian in a blind spot by detecting the infrared radiation emitted from the person's body. The system can also prevent the vehicle from moving in the direction of the pedestrian. The other is a rearview camera system with an infrared detection function. This system consists of a visible camera and infrared sensors, and it helps alert the driver to the presence of a pedestrian in a rear blind spot. Various issues that will need to be addressed in order to expand the automotive applications of IR imaging sensors in the future are also summarized. This performance is suitable for consumer electronics as well as automotive applications.

  13. Searching for transits in the Wide Field Camera Transit Survey with difference-imaging light curves

    NARCIS (Netherlands)

    Zendejas, Dominguez J.; Koppenhoefer, J.; Saglia, R.; Birkby, J.L.; Hodgkin, S.; Kovács, G.; Pinfield, D.; Sipocz, B.; Barrado, D.; Bender, R.; Burgo, del C.; Cappetta, M.; Martín, E.; Nefs, B.; Riffeser, A.; Steele, P.

    2013-01-01

    The Wide Field Camera Transit Survey is a pioneering program aimed at searching for extra-solar planets in the near-infrared. The images from the survey are processed by a data reduction pipeline, which uses aperture photometry to construct the light curves. We produce an alternative set of light

  14. Grasping objects from a user’s hand using time-of-flight camera data

    CSIR Research Space (South Africa)

    Govender, N

    2010-11-01

    Full Text Available A ToF camera emits an infrared pulse and measures the return phase change at every pixel to estimate depth over an image. We used a Mesa Imaging SR4000 which, if conditions are right, provides impressively accurate point cloud data with associated intensities...

  15. Space imaging infrared optical guidance for autonomous ground vehicle

    Science.gov (United States)

    Akiyama, Akira; Kobayashi, Nobuaki; Mutoh, Eiichiro; Kumagai, Hideo; Yamada, Hirofumi; Ishii, Hiromitsu

    2008-08-01

    We have developed the Space Imaging Infrared Optical Guidance for Autonomous Ground Vehicle based on an uncooled infrared camera and a focusing technique to detect the objects to be avoided and to set the drive path. For this purpose we made a servomotor drive system to control the focus function of the infrared camera lens. To determine the best focus position we use auto-focus image processing based on the Daubechies wavelet transform technique with 4 terms. The determined best focus position is then transformed into the distance to the object. We built an aluminum-frame ground vehicle to mount the auto-focus infrared unit. It is 900 mm long and 800 mm wide. The vehicle mounts an Ackermann front steering system and a rear motor drive system. To confirm the guidance ability of the Space Imaging Infrared Optical Guidance for Autonomous Ground Vehicle, we conducted experiments on the detection ability of the infrared auto-focus unit with an actual car on the road and the roadside wall. As a result, the auto-focus image processing based on the Daubechies wavelet transform technique detects the best focus image clearly and gives the depth of the object from the infrared camera unit.
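
    A wavelet-based focus score of the kind described above can be sketched with PyWavelets: the energy of the first-level detail coefficients peaks when the image is sharpest, and the servo position of the sharpest frame is then mapped to distance. Using the 4-tap Daubechies filter ('db2') is our reading of "4 terms" and is an assumption, as is the use of PyWavelets rather than the authors' implementation.

        import numpy as np
        import pywt

        def focus_measure(image):
            """Energy of the first-level Daubechies detail coefficients,
            used as a sharpness score ('db2', the 4-tap filter, is assumed)."""
            _, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'db2')
            return float((cH**2 + cV**2 + cD**2).sum())

        def best_focus(frames):
            """Return the index of the sharpest frame in a focus sweep; the
            lens position of that frame is then converted to object distance."""
            scores = [focus_measure(f) for f in frames]
            return int(np.argmax(scores))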

  16. Low-cost near-infrared imaging device for inspection of historical manuscripts

    International Nuclear Information System (INIS)

    Mohd Ashhar Khalid

    2004-01-01

    Near-infrared (NIR), sometimes called black light, is radiation beyond visible light and is not detectable by human eyes. However, electronic sensors such as the type used in digital cameras are able to detect signals in the infrared band. To avoid distortion in the pictures obtained, near-infrared is blocked by optical filters inserted in digital cameras. By carrying out a minor modification allowing the near-infrared signal to be imaged while blocking the visible signal, the camera is turned into a low-cost NIR imaging instrument. NIR imaging can be a useful tool in historical manuscript study or restoration. A few applications have been successfully demonstrated in laboratory experiments using the instrument available in MINT. However, due to the unavailability of historical items, easily available texts and paintings were used in the demonstrations. This paper reports achievements of early work on the application of digital cameras in the detection of damaged prints or writings. (Author)

  17. Development of the Earth Observation Camera of MIRIS

    Directory of Open Access Journals (Sweden)

    Dae-Hee Lee

    2011-09-01

    Full Text Available We have designed and manufactured the Earth observation camera (EOC) of the multi-purpose infrared imaging system (MIRIS). MIRIS is a main payload of the STSAT-3, which will be launched in late 2012. The main objective of the EOC is to test the operation of Korean IR technology in space, so we have designed the optical and mechanical system of the EOC to fit the IR detector system. We have assembled the flight model (FM) of the EOC and performed environment tests successfully. The EOC is now ready to be integrated into the satellite system, waiting for operation in space as planned.

  18. Infrared Heaters

    Science.gov (United States)

    1979-01-01

    The heating units shown in the accompanying photos are Panelbloc infrared heaters, energy savers which burn little fuel in relation to their effective heat output. Produced by Bettcher Manufacturing Corporation, Cleveland, Ohio, Panelblocs are applicable to industrial or other facilities which have ceilings more than 12 feet high, such as those pictured: at left the Bare Hills Tennis Club, Baltimore, Maryland and at right, CVA Lincoln-Mercury, Gaithersburg, Maryland. The heaters are mounted high above the floor and they radiate infrared energy downward. Panelblocs do not waste energy by warming the surrounding air. Instead, they beam invisible heat rays directly to objects which absorb the radiation: people, floors, machinery and other plant equipment. All these objects in turn re-radiate the energy to the air. A key element in the Panelbloc design is a coating applied to the aluminized steel outer surface of the heater. This coating must be corrosion resistant at high temperatures and it must have high "emissivity", the ability of a surface to emit radiant energy. The Bettcher company formerly used a porcelain coating, but it caused a production problem. Bettcher did not have the capability to apply the material in its own plant, so the heaters had to be shipped out of state for porcelainizing, which entailed extra cost. Bettcher sought a coating which could meet the specifications yet be applied in its own facilities. The company asked The Knowledge Availability Systems Center, Pittsburgh, Pennsylvania, a NASA Industrial Applications Center (IAC), for a search of NASA's files

  19. Intraocular camera for retinal prostheses: Refractive and diffractive lens systems

    Science.gov (United States)

    Hauer, Michelle Christine

    The focus of this thesis is on the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearances. The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements and relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity. Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.

  20. BOOK REVIEW: Infrared Thermal Imaging: Fundamentals, Research and Applications Infrared Thermal Imaging: Fundamentals, Research and Applications

    Science.gov (United States)

    Planinsic, Gorazd

    2011-09-01

    Ten years ago, a book with a title like this would be interesting only to a narrow circle of specialists. Thanks to rapid advances in technology, the price of thermal imaging devices has dropped sharply, so they have, almost overnight, become accessible to a wide range of users. As the authors point out in the preface, the growth of this area has led to a paradoxical situation: now there are probably more infrared (IR) cameras sold worldwide than there are people who understand the basic physics behind them and know how to correctly interpret the colourful images that are obtained with these devices. My experience confirms this. When I started using the IR camera during lectures on the didactics of physics, I soon realized that I needed more knowledge, which I later found in this book. A wide range of potential readers and topical areas provides a good motive for writing a book such as this one, but it also represents a major challenge for authors, as compromises in the style of writing and choice of topics are required. The authors of this book have successfully achieved this, and indeed done an excellent job. This book addresses a wide range of readers, from engineers, technicians, and physics and science teachers in schools and universities, to researchers and specialists who are professionally active in the field. As technology in this area has made great progress in recent times, this book is also a valuable guide for those who opt to purchase an infrared camera. Chapters in this book could be divided into three areas: the fundamentals of IR thermal imaging and related physics (two chapters); IR imaging systems and methods (two chapters) and applications, including six chapters on pedagogical applications; IR imaging of buildings and infrastructure, industrial applications, microsystems, selected topics in research and industry, and selected applications from other fields. All chapters contain numerous colour pictures and diagrams, and a rich list of relevant

  1. Near-infrared transillumination photography of intraocular tumours.

    Science.gov (United States)

    Krohn, Jørgen; Ulltang, Erlend; Kjersem, Bård

    2013-10-01

    To present a technique for near-infrared transillumination imaging of intraocular tumours based on the modifications of a conventional digital slit lamp camera system. The Haag-Streit Photo-Slit Lamp BX 900 (Haag-Streit AG) was used for transillumination photography by gently pressing the tip of the background illumination cable against the surface of the patient's eye. Thus the light from the flash unit was transmitted into the eye, leading to improved illumination and image resolution. The modification for near-infrared photography was done by replacing the original camera with a Canon EOS 30D (Canon Inc) converted by Advanced Camera Services Ltd. In this camera, the infrared blocking filter was exchanged for a 720 nm long-pass filter, so that the near-infrared part of the spectrum was recorded by the sensor. The technique was applied in eight patients: three with anterior choroidal melanoma, three with ciliary body melanoma and two with ocular pigment alterations. The good diagnostic quality of the photographs made it possible to evaluate the exact location and extent of the lesions in relation to pigmented intraocular landmarks such as the ora serrata and ciliary body. The photographic procedure did not lead to any complications. We recommend near-infrared transillumination photography as a supplementary diagnostic tool for the evaluation and documentation of anteriorly located intraocular tumours.

  2. Towards strong light-matter coupling at the single-resonator level with sub-wavelength mid-infrared nano-antennas

    Energy Technology Data Exchange (ETDEWEB)

    Malerba, M.; De Angelis, F., E-mail: francesco.deangelis@iit.it [Istituto Italiano di Tecnologia, Via Morego, 30, I-16163 Genova (Italy); Ongarello, T.; Paulillo, B.; Manceau, J.-M.; Beaudoin, G.; Sagnes, I.; Colombelli, R., E-mail: raffaele.colombelli@u-psud.fr [Centre for Nanoscience and Nanotechnology (C2N Orsay), CNRS UMR9001, Univ. Paris Sud, Univ. Paris Saclay, 91405 Orsay (France)

    2016-07-11

    We report a crucial step towards single-object cavity electrodynamics in the mid-infrared spectral range using resonators that borrow functionalities from antennas. Room-temperature strong light-matter coupling is demonstrated in the mid-infrared between an intersubband transition and an extremely reduced number of sub-wavelength resonators. By exploiting 3D plasmonic nano-antennas featuring an out-of-plane geometry, we observed strong light-matter coupling in a very low number of resonators: only 16, more than 100 times better than what has been reported to date in this spectral range. The modal volume addressed by each nano-antenna is sub-wavelength-sized and it encompasses only ≈4400 electrons.

  3. Image quality testing of assembled IR camera modules

    Science.gov (United States)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR (8-12 μm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors with readout electronics are becoming more and more a mass market product. At the same time, steady improvements in sensor resolution in the higher priced markets raise the requirement for imaging performance of objectives and the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work done in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to the more traditional test methods like minimum resolvable temperature difference (MRTD) which give only a subjective overall test result. Parameters that can be measured are image quality via the modulation transfer function (MTF) for broadband or with various bandpass filters on- and off-axis and optical parameters like e.g. effective focal length (EFL) and distortion. If the camera module allows for refocusing the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, chief ray angle etc. can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high resolution sensors. Other important points that are discussed are the flexibility of the technology to test IR modules with different form factors, electrical interfaces and last but not least the suitability for fully automated measurements in mass production.
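
    The MTF measurement mentioned above rests on a textbook relation that can be sketched briefly: differentiating an edge spread function gives the line spread function, and the normalised magnitude of its Fourier transform is the MTF. This is a generic illustration under that assumption, not the automated production test described in the paper (which also handles slanted edges, windowing strategies and noise).

        import numpy as np

        def mtf_from_edge(edge_profile):
            """Edge-based MTF estimate: ESF derivative -> LSF -> |FFT|,
            normalised to 1 at zero spatial frequency."""
            esf = np.asarray(edge_profile, dtype=float)
            lsf = np.diff(esf)
            lsf = lsf * np.hanning(lsf.size)     # reduce truncation artefacts
            mtf = np.abs(np.fft.rfft(lsf))
            return mtf / mtf[0]

        # Synthetic blurred edge as a stand-in for a measured profile.
        x = np.linspace(-5, 5, 200)
        edge = 0.5 * (1 + np.tanh(x / 0.8))
        print(mtf_from_edge(edge)[:5])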

  4. Architectural Design Document for Camera Models

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Architecture of camera simulator models and data interface for the Maneuvering of Inspection/Servicing Vehicle (MIV) study.

  5. 21 CFR 886.1120 - Opthalmic camera.

    Science.gov (United States)

    2010-04-01

    ... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Opthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding area...

  6. Improved positron emission tomography camera

    International Nuclear Information System (INIS)

    Mullani, N.A.

    1986-01-01

    A positron emission tomography camera having a plurality of rings of detectors positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom, and a plurality of scintillation crystals positioned relative to the photomultiplier tubes whereby each tube is responsive to more than one crystal. Each alternate crystal in the ring may be offset by one-half or less of the thickness of the crystal such that the staggered crystals are seen by more than one photomultiplier tube. This sharing of crystals and photomultiplier tubes allows identification of the staggered crystal and the use of smaller detectors shared by larger photomultiplier tubes, thereby requiring fewer photomultiplier tubes, creating more scanning slices, providing better data sampling, and reducing the cost of the camera. (author)

  7. Vehicular camera pedestrian detection research

    Science.gov (United States)

    Liu, Jiahui

    2018-03-01

    With the rapid development of science and technology, highway traffic and transportation have become much more convenient. At the same time, however, traffic safety accidents occur more and more frequently in China. In order to deal with this increasingly serious traffic safety problem, protecting the safety of people's lives and property while facilitating travel has become a top priority. Real-time, accurate information about pedestrians and the driving environment is obtained through a vehicular camera, which is used to detect and track the moving targets ahead of the vehicle. This is a popular topic in the domains of intelligent vehicle safety driving, autonomous navigation and traffic system research. Based on pedestrian video obtained by a vehicular camera, this paper studies pedestrian detection and trajectory tracking and the associated algorithm.

  8. Graphic design of pinhole cameras

    Science.gov (United States)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.

  9. The MVACS Robotic Arm Camera

    Science.gov (United States)

    Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.

    2001-08-01

    The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.

  10. Coaxial fundus camera for opthalmology

    Science.gov (United States)

    de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.

    2015-09-01

    A fundus camera for ophthalmology is a high-definition device which needs to provide low-light illumination of the human retina, high resolution at the retina and a reflection-free image [1]. Those constraints make its optical design very sophisticated, but the most difficult to comply with are the reflection-free illumination and the final alignment, due to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and a poor alignment makes the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic fundus camera. The illumination is performed by a LED ring, coaxial with the optical system and composed of IR or visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator and CCD lens are coaxial, making the final alignment easy to perform. The CCD + capture lens module is a CCTV camera with built-in autofocus and zoom, added to a 175 mm focal length doublet corrected for infinity, making the system easily operated and very compact.

  11. 16 CFR 501.1 - Camera film.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk still...

  12. An Open Standard for Camera Trap Data

    NARCIS (Netherlands)

    Forrester, Tavis; O'Brien, Tim; Fegraus, Eric; Jansen, P.A.; Palmer, Jonathan; Kays, Roland; Ahumada, Jorge; Stern, Beth; McShea, William

    2016-01-01

    Camera traps that capture photos of animals are a valuable tool for monitoring biodiversity. The use of camera traps is rapidly increasing and there is an urgent need for standardization to facilitate data management, reporting and data sharing. Here we offer the Camera Trap Metadata Standard as an

  13. A camera specification for tendering purposes

    International Nuclear Information System (INIS)

    Lunt, M.J.; Davies, M.D.; Kenyon, N.G.

    1985-01-01

    A standardized document is described which is suitable for sending to companies which are being invited to tender for the supply of a gamma camera. The document refers to various features of the camera, the performance specification of the camera, maintenance details, price quotations for various options and delivery, installation and warranty details. (U.K.)

  14. Bridge deck surface temperature monitoring by infrared thermography and inner structure identification using PPT and PCT analysis methods

    Science.gov (United States)

    Dumoulin, Jean

    2013-04-01

    One of the objectives of the ISTIMES project was to evaluate the potential offered by the integration of different electromagnetic techniques able to perform non-invasive diagnostics for surveillance and monitoring of transport infrastructures. Among the EM methods investigated, we focused our research and development efforts on uncooled infrared camera techniques due to their promising potential for dissemination, linked to their relatively low cost on the market. On the other hand, work was also carried out to identify well-adapted implementation protocols and the key limits of the Pulse Phase Thermography (PPT) and Principal Component Thermography (PCT) processing methods used to analyse thermal image sequences and retrieve information about the inner structure. The first part of this research work therefore addresses infrared thermography measurement when it is used in quantitative mode (not in laboratory conditions) and not in qualitative mode (vision applied to survey). In such a context, thermal radiative corrections have to be applied in real time to the raw data acquired, to take into account the influence of the evolving natural environment, thanks to additional measurements. The camera sensor therefore has to be smart enough to apply the calibration law and radiometric corrections in real time in a varying atmosphere. A complete measurement system was thus studied and developed [1] with low-cost infrared cameras available on the market. In the system developed, the infrared camera is coupled with other sensors to feed simplified radiative models running, in real time, on the GPU available in a small PC. The whole measurement system was implemented on the "Musmeci" bridge located in Potenza (Italy). No traffic interruption was required during the mounting of our measurement system. The infrared camera was fixed on top of a mast at 6 m elevation from the surface of the bridge deck. A small weather station was added on the same mast, 1 m under the camera. A GPS antenna was also fixed at the
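
    As background for the PCT method named above, a common formulation flattens the thermal image sequence into a pixels-by-frames matrix, standardises it, and takes the singular value decomposition; the leading empirical orthogonal functions (EOFs) then highlight inner-structure contrast. The sketch below follows that generic formulation under stated assumptions and is not the ISTIMES processing chain.

        import numpy as np

        def pct(sequence, n_components=3):
            """Principal Component Thermography on a sequence of shape
            (n_frames, height, width): standardise the flattened frames and
            reshape the leading left singular vectors into EOF images."""
            n_frames, h, w = sequence.shape
            A = sequence.reshape(n_frames, h * w).T.astype(float)  # pixels x frames
            A -= A.mean(axis=0)
            A /= A.std(axis=0) + 1e-12
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            return U[:, :n_components].T.reshape(n_components, h, w)

        frames = np.random.rand(50, 60, 80)   # placeholder thermal sequence
        print(pct(frames).shape)              # (3, 60, 80)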

  15. Delay line clipping in a scintillation camera system

    International Nuclear Information System (INIS)

    Hatch, K.F.

    1979-01-01

    The present invention provides a novel base line restoring circuit and a novel delay line clipping circuit in a scintillation camera system. Single and double delay line clipped signal waveforms are generated, increasing the operational frequency and the fidelity of data detection of the camera system by reducing base line distortion such as undershooting, overshooting, and capacitive build-up. The camera system includes a set of photomultiplier tubes and associated amplifiers which generate sequences of pulses. These pulses are pulse-height analyzed for detecting a scintillation having an energy level which falls within a predetermined energy range. Data pulses are combined to provide the coordinates and energy of photopeak events. The amplifiers are biased out of saturation over all ranges of pulse energy level and count rate. Single delay line clipping circuitry is provided for narrowing the pulse width of the decaying electrical data pulses, which increases operating speed without data loss. (JTA)

  16. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    In this article we present an automatic camera calibration algorithm using multiple trajectories in a multiple camera network with non-overlapping fields of view (FOVs). Visible trajectories within a camera FOV are assumed to be measured with respect to the camera's local co-ordinate system.

  17. Stereo Pinhole Camera: Assembly and experimental activities

    Directory of Open Access Journals (Sweden)

    Gilmário Barbosa Santos

    2015-05-01

    Full Text Available This work describes the assembly of a stereo pinhole camera for capturing stereo pairs of images and proposes experimental activities with it. A pinhole camera can be as sophisticated as you want, or so simple that it can be handcrafted from practically recyclable materials. This paper describes the practical use of the pinhole camera throughout history and today. Aspects of optics and geometry involved in building the stereo pinhole camera are presented with illustrations. Furthermore, experiments are proposed using the images obtained by the camera for 3D visualization through a pair of anaglyph glasses, and the estimation of relative depth by triangulation is discussed.
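
    The depth-by-triangulation step can be summarised with the standard rectified-stereo relation Z = f B / d. The sketch below uses that relation with illustrative numbers (baseline, focal length in pixels, disparity), which are assumptions and not measurements from the assembled camera.

        def depth_from_disparity(disparity_px, focal_px, baseline_m):
            """Relative depth by triangulation for a rectified stereo pair:
            Z = f * B / d, with the focal length expressed in pixels and the
            disparity d measured between corresponding points in the two images."""
            return focal_px * baseline_m / disparity_px

        # Example: 6 cm baseline between the two pinholes, an 800 px focal
        # length, and a feature shifted by 12 px between the stereo pair.
        print(depth_from_disparity(12.0, 800.0, 0.06))   # ~4 m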

  18. Single Camera Calibration in 3D Vision

    Directory of Open Access Journals (Sweden)

    Caius SULIMAN

    2009-12-01

    Full Text Available Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when the parameters of the camera are known (i.e. principal distance, lens distortion, focal length, etc.). In this paper we deal with a single camera calibration method, and with the help of this method we try to find the intrinsic and extrinsic camera parameters. The method was implemented with success in the programming and simulation environment Matlab.
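
    The paper's implementation is in Matlab; as a rough illustration of the same pinhole-model workflow (intrinsics, distortion and per-view extrinsics recovered from several views of a planar chessboard), the sketch below uses OpenCV, which is an assumption, with placeholder pattern size and image paths.

        import glob
        import cv2
        import numpy as np

        PATTERN = (9, 6)      # inner-corner grid of the chessboard (assumed)
        SQUARE = 0.025        # square size in metres (assumed)

        objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

        obj_pts, img_pts, size = [], [], None
        for path in glob.glob("calib/*.png"):      # placeholder image folder
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, PATTERN)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)
                size = gray.shape[::-1]

        # Intrinsic matrix K, distortion coefficients, and per-view extrinsics.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
        print("reprojection RMS:", rms)
        print(K)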

  19. CameraHRV: robust measurement of heart rate variability using a camera

    Science.gov (United States)

    Pai, Amruta; Veeraraghavan, Ashok; Sabharwal, Ashutosh

    2018-02-01

    The inter-beat-interval (time period of the cardiac cycle) changes slightly for every heartbeat; this variation is measured as Heart Rate Variability (HRV). HRV is presumed to occur due to interactions between the parasympathetic and sympathetic nervous systems. Therefore, it is sometimes used as an indicator of the stress level of an individual. HRV also reveals some clinical information about cardiac health. Currently, HRV is accurately measured using contact devices such as a pulse oximeter. However, recent research in the field of non-contact imaging Photoplethysmography (iPPG) has made vital sign measurements using just the video recording of any exposed skin (such as a person's face) possible. The current signal processing methods for extracting HRV using peak detection perform well for contact-based systems but have poor performance for iPPG signals. The main reason for this poor performance is the fact that current methods are sensitive to large noise sources which are often present in iPPG data. Further, current methods are not robust to motion artifacts that are common in iPPG systems. We developed a new algorithm, CameraHRV, for robustly extracting HRV even at the low SNR that is common with iPPG recordings. CameraHRV combines spatial combination and frequency demodulation to obtain HRV from the instantaneous frequency of the iPPG signal. CameraHRV outperforms other current methods of HRV estimation. Ground truth data were obtained from an FDA-approved pulse oximeter for validation purposes. CameraHRV on iPPG data showed an error of 6 milliseconds for low-motion and varying skin tone scenarios. The improvement in error was 14%. In the case of high-motion scenarios like reading, watching and talking, the error was 10 milliseconds.
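
    Once inter-beat intervals are available, whether from the iPPG instantaneous frequency or from a contact pulse oximeter, the common time-domain HRV statistics are straightforward to compute. This is a generic sketch of those statistics, not the CameraHRV spatial-combination and frequency-demodulation pipeline, and the beat times below are synthetic.

        import numpy as np

        def hrv_time_domain(beat_times_s):
            """SDNN and RMSSD (in milliseconds) from a series of beat times
            given in seconds."""
            ibi_ms = np.diff(np.asarray(beat_times_s, dtype=float)) * 1000.0
            sdnn = ibi_ms.std(ddof=1)
            rmssd = np.sqrt(np.mean(np.diff(ibi_ms) ** 2))
            return sdnn, rmssd

        # Illustrative beat times around 75 bpm with slight variability.
        beats = np.cumsum(0.8 + 0.02 * np.random.randn(120))
        print(hrv_time_domain(beats))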

  20. Scalable IC Platform for Smart Cameras

    Directory of Open Access Journals (Sweden)

    Harry Broers

    2005-08-01

    Full Text Available Smart cameras are among the emerging new fields of electronics. The points of interest are in the application areas, software and IC development. In order to reduce cost, it is worthwhile to invest in a single architecture that can be scaled for the various application areas in performance (and resulting power consumption). In this paper, we show that the combination of an SIMD (single-instruction multiple-data) processor and a general-purpose DSP is very advantageous for the image processing tasks encountered in smart cameras. While the SIMD processor gives the very high performance necessary by exploiting the inherent data parallelism found in the pixel crunching part of the algorithms, the DSP offers a friendly approach to the more complex tasks. The paper continues to motivate that SIMD processors have very convenient scaling properties in silicon, making the complete SIMD-DSP architecture suitable for different application areas without changing the software suite. Analysis of the changes in power consumption due to scaling shows that for typical image processing tasks, it is beneficial to scale the SIMD processor to use the maximum level of parallelism available in the algorithm if the IC supply voltage can be lowered. If silicon cost is of importance, the parallelism of the processor should be scaled to just reach the desired performance given the speed of the silicon.

  1. Infrared sensing techniques for adaptive robotic welding

    International Nuclear Information System (INIS)

    Lin, T.T.; Groom, K.; Madsen, N.H.; Chin, B.A.

    1986-01-01

    The objective of this research is to investigate the feasibility of using infrared sensors to monitor the welding process. Data were gathered using an infrared camera which was trained on the molten metal pool during the welding operation. Several types of process perturbations which result in weld defects were then intentionally induced and the resulting thermal images monitored. Gas tungsten arc using ac and dc currents and gas metal arc welding processes were investigated using steel, aluminum and stainless steel plate materials. The thermal images obtained in the three materials and different welding processes revealed nearly identical patterns for the same induced process perturbation. Based upon these results, infrared thermography is a method which may be very applicable to automation of the welding process

  2. Collimator changer for scintillation camera

    International Nuclear Information System (INIS)

    Jupa, E.C.; Meeder, R.L.; Richter, E.K.

    1976-01-01

    A collimator changing assembly mounted on the support structure of a scintillation camera is described. A vertical support column is positioned proximate the detector support column, with a plurality of support arms mounted thereon in a rotatable, cantilevered manner at separate vertical positions. Each support arm is adapted to carry one of the plurality of collimators, which are interchangeably mountable on the underside of the detector, and to transport the collimator between a store position remote from the detector and a change position underneath said detector.

  3. Robot Tracer with Visual Camera

    Science.gov (United States)

    Jabbar Lubis, Abdul; Dwi Lestari, Yuyun; Dafitri, Haida; Azanuddin

    2017-12-01

    A robot is a versatile tool that can replace human work functions. The robot is a device that can be reprogrammed according to user needs. Wireless networks for remote monitoring can be used to build a robot whose movement can be monitored, compared against a blueprint, and whose chosen path can be tracked. This information is sent over a wireless network. For vision, the robot uses a high-resolution camera to help the operator control the robot and see the surrounding circumstances.

  4. Neutron imaging system based on a video camera

    International Nuclear Information System (INIS)

    Dinca, M.

    2004-01-01

    Non-destructive testing with cold, thermal, epithermal or fast neutrons is nowadays more and more useful because the worldwide level of industrial development requires considerably higher standards of quality of manufactured products and reliability of technological processes, especially where any deviation from standards could result in large-scale catastrophic consequences or human losses. Thanks to their properties, ready availability and very good discrimination of the materials they penetrate, thermal neutrons are the most widely used probe. The methods involved in this technique have advanced from neutron radiography based on converter screens and radiological films to neutron radioscopy based on video cameras, that is, from static images to dynamic images. Many neutron radioscopy systems have been used in the past with various levels of success. The quality of an image depends on the quality of the neutron beam and the type of the neutron imaging system. For real-time investigations, tube-type cameras, CCD cameras and recently CID cameras are used, capturing the image from an appropriate scintillator via a mirror. The analog signal of the camera is then converted into a digital signal by the signal processing technology included in the camera. The image acquisition card or frame grabber of a PC converts the digital signal into an image. The image is formatted and processed by image analysis software. The scanning position of the object is controlled by the computer, which commands the electrical motors that move the object table horizontally and vertically and rotate it. Based on this system, many static image acquisitions, real-time non-destructive investigations of dynamic processes and, finally, tomographic investigations of small objects can be done in a short time. A system based on a CID camera is presented. Fundamental differences between CCD and CID cameras lie in their pixel readout structure and technique. CIDs

  5. ISO far-infrared observations of rich galaxy clusters I. Abell 2670

    DEFF Research Database (Denmark)

    Hansen, Lene; Jorgensen, H.E.; Nørgaard-Nielsen, Hans Ulrik

    1999-01-01

    As part of an investigation of far-infrared emission from rich galaxy clusters, the central part of Abell 2670 has been mapped with ISO at 60 mu m, 100 mu m, 135 mu m, and 200 mu m using the PHT-C camera. Point sources detected in the field have infrared fluxes comparable to normal spirals...

  6. Human action pattern monitor for telecare system utilizing magnetic thin film infrared sensor

    International Nuclear Information System (INIS)

    Osada, H.; Chiba, S.; Oka, H.; Seki, K.

    2002-01-01

    The magnetic thin film infrared sensor (MFI) is an infrared sensing device utilizing a temperature-sensitive magnetic thin film with marked temperature dependence in the room-temperature range. We propose a human action pattern monitor (HPM) constructed with the MFI, without a monitor camera in order to preserve the clients' privacy, as a telecare system.

  7. Full Body Pose Estimation During Occlusion using Multiple Cameras

    DEFF Research Database (Denmark)

    Fihl, Preben; Cosar, Serhan

    people is a very challenging problem for methods based on pictorial structures, as for any other monocular pose estimation method. In this report we present work on a multi-view approach based on pictorial structures that integrates low-level information from multiple calibrated cameras to improve the 2D...

  8. Observing bodies. Camera surveillance and the significance of the body.

    NARCIS (Netherlands)

    Dubbeld, L.

    2003-01-01

    At the most mundane level, CCTV observes bodies, and as such attaches great importance to the specific features of the human body. At the same time, however, bodies tend to disappear, as they are represented electronically by the camera monitors and, in the case of image recording, by the computer

  9. Continuous monitoring of Hawaiian volcanoes with thermal cameras

    Science.gov (United States)

    Patrick, Matthew R.; Orr, Tim R.; Antolik, Loren; Lee, Robert Lopaka; Kamibayashi, Kevan P.

    2014-01-01

    Continuously operating thermal cameras are becoming more common around the world for volcano monitoring, and offer distinct advantages over conventional visual webcams for observing volcanic activity. Thermal cameras can sometimes “see” through volcanic fume that obscures views to visual webcams and the naked eye, and often provide a much clearer view of the extent of high temperature areas and activity levels. We describe a thermal camera network recently installed by the Hawaiian Volcano Observatory to monitor Kīlauea’s summit and east rift zone eruptions (at Halema‘uma‘u and Pu‘u ‘Ō‘ō craters, respectively) and to keep watch on Mauna Loa’s summit caldera. The cameras are long-wave, temperature-calibrated models protected in custom enclosures, and often positioned on crater rims close to active vents. Images are transmitted back to the observatory in real-time, and numerous Matlab scripts manage the data and provide automated analyses and alarms. The cameras have greatly improved HVO’s observations of surface eruptive activity, which includes highly dynamic lava lake activity at Halema‘uma‘u, major disruptions to Pu‘u ‘Ō‘ō crater and several fissure eruptions.

  10. Low-cost panoramic infrared surveillance system

    Science.gov (United States)

    Kecskes, Ian; Engel, Ezra; Wolfe, Christopher M.; Thomson, George

    2017-05-01

    A nighttime surveillance concept consisting of a single surface omnidirectional mirror assembly and an uncooled Vanadium Oxide (VOx) longwave infrared (LWIR) camera has been developed. This configuration provides a continuous field of view spanning 360° in azimuth and more than 110° in elevation. Both the camera and the mirror are readily available, off-the-shelf, inexpensive products. The mirror assembly is marketed for use in the visible spectrum and requires only minor modifications to function in the LWIR spectrum. The compactness and portability of this optical package offer significant advantages over many existing infrared surveillance systems. The developed system was evaluated on its ability to detect moving, human-sized heat sources at ranges between 10 m and 70 m. Raw camera images captured by the system are converted from rectangular coordinates in the camera focal plane to polar coordinates and then unwrapped into the user's azimuth and elevation system. Digital background subtraction and color mapping are applied to the images to increase the user's ability to extract moving items from background clutter. A second optical system consisting of a commercially available 50 mm f/1.2 ATHERM lens and a second LWIR camera is used to examine the details of objects of interest identified using the panoramic imager. A description of the components of the proof of concept is given, followed by a presentation of raw images taken by the panoramic LWIR imager. A description of the method by which these images are analyzed is given, along with a presentation of these results side-by-side with the output of the 50 mm LWIR imager and a panoramic visible light imager. Finally, a discussion of the concept and its future development is given.
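
    As an illustration of the image-processing chain described above (not code from the paper), the following Python sketch resamples a raw frame from the omnidirectional mirror into an azimuth/elevation panorama and applies a simple running-average background subtraction; the centre, radius and grid sizes are assumed parameters.

      import numpy as np

      def unwrap_panorama(frame, center, r_min, r_max, n_az=720, n_el=220):
          """Nearest-neighbour resampling of the circular mirror image into azimuth/elevation."""
          cy, cx = center
          az = np.linspace(0.0, 2.0 * np.pi, n_az, endpoint=False)   # 0..360 deg in azimuth
          r = np.linspace(r_min, r_max, n_el)                        # radius maps to elevation
          rr, aa = np.meshgrid(r, az, indexing="ij")
          ys = np.clip((cy + rr * np.sin(aa)).astype(int), 0, frame.shape[0] - 1)
          xs = np.clip((cx + rr * np.cos(aa)).astype(int), 0, frame.shape[1] - 1)
          return frame[ys, xs]                                        # (n_el, n_az) panorama

      class RunningBackground:
          """Exponential running average used for simple digital background subtraction."""
          def __init__(self, alpha=0.05):
              self.alpha, self.bg = alpha, None
          def apply(self, pano):
              pano = pano.astype(np.float32)
              if self.bg is None:
                  self.bg = pano.copy()
              diff = np.abs(pano - self.bg)                           # moving items stand out
              self.bg = (1.0 - self.alpha) * self.bg + self.alpha * pano
              return diff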

  11. Using DSLR cameras in digital holography

    Science.gov (United States)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the two-dimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique; the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easily accessible alternative that is worth exploring. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offers a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the problem of object replication reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented, and a theoretical deduction of the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera show the replication problem.

  12. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. Tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances in human tracking over camera networks. Two important functional modules are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed from two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed in terms of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on analyses of the current progress made toward human tracking techniques over camera networks.

  13. Image compensation for camera and lighting variability

    Science.gov (United States)

    Daley, Wayne D.; Britton, Douglas F.

    1996-12-01

    With the current trend of integrating machine vision systems into industrial manufacturing and inspection applications comes the issue of camera and illumination stabilization. Unless each application is built around a particular camera and a highly controlled lighting environment, the interchangeability of cameras and fluctuations in lighting become a problem, as each camera usually has a different response. An empirical approach is proposed in which color tile data are acquired using the camera of interest and a mapping to some predetermined reference image is developed using neural networks. A similar analytical approach, based on a rough analysis of the imaging systems, is also considered for deriving a mapping between cameras. Once a mapping has been determined, all data from one camera are mapped to correspond to the images of the other prior to performing any processing on the data. Instead of writing separate image processing algorithms for the particular image data being received, the image data are adjusted based on each particular camera and lighting situation. All that is required when swapping cameras is the new mapping for the camera being inserted. The image processing algorithms can remain the same, as the input data have been adjusted appropriately. The results of utilizing this technique are presented for an inspection application.
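
    A minimal sketch of the kind of empirical mapping discussed above, assuming for illustration that the neural-network step is replaced by a least-squares fit on a quadratic expansion of the RGB tile values; the function names and feature set are hypothetical, not the authors' implementation.

      import numpy as np

      def _features(rgb):
          r, g, b = rgb.T.astype(np.float64)
          return np.column_stack([np.ones_like(r), r, g, b,
                                  r * g, r * b, g * b, r ** 2, g ** 2, b ** 2])

      def fit_camera_mapping(src_rgb, ref_rgb):
          """src_rgb, ref_rgb: (N, 3) colour-tile readings from the camera of interest and the reference."""
          coeffs, *_ = np.linalg.lstsq(_features(src_rgb), ref_rgb, rcond=None)
          return coeffs                                   # (10, 3) mapping matrix

      def apply_camera_mapping(image, coeffs):
          h, w, _ = image.shape
          mapped = _features(image.reshape(-1, 3)) @ coeffs
          return np.clip(mapped, 0, 255).reshape(h, w, 3).astype(np.uint8)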

  14. Detection of high level carbon dioxide emissions using a compact optical fibre based mid-infrared sensor system for applications in environmental pollution monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Muda, R; Lewis, E; O' Keeffe, S; Dooly, G; Clifford, J, E-mail: razali.muda@ul.i [Optical Fibre Sensors Research Centre, Electronic and Computer Engineering Department, University of Limerick (Ireland)

    2009-07-01

    A novel and highly compact optical fibre based sensor system for the measurement of high concentrations of CO{sub 2} gas emissions in modern automotive exhaust is presented. The sensor system operates on the principle of open-path direct absorption spectroscopy in the mid-infrared wavelength range. The sensor system, which comprises low-cost components and is compact in design, is well suited to monitoring CO{sub 2} emissions from the exhaust of automotive vehicles. It utilises calcium fluoride (CaF{sub 2}) lenses and a narrow band pass (NBP) filter for detection of CO{sub 2} gas. The response of the sensor to high concentrations of CO{sub 2} gas is presented and the result is compared with that of a commercial flue gas analyser. The sensor shows a response time of 5.2 s and demonstrates minimal susceptibility to cross-interference from other gases present in the exhaust system.
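
    The open-path direct-absorption principle mentioned above follows the Beer-Lambert law, I = I0 exp(-sigma N L). A toy calculation, with placeholder values for the absorption cross-section and path length rather than figures from the paper, is sketched below.

      import numpy as np

      def number_density(I, I0, sigma_cm2=1.0e-18, path_cm=10.0):
          """Return absorber number density (molecules/cm^3) from transmitted/incident intensity."""
          absorbance = -np.log(I / I0)          # Beer-Lambert: I = I0 * exp(-sigma * N * L)
          return absorbance / (sigma_cm2 * path_cm)

      print(number_density(I=0.82, I0=1.00))    # example intensity ratio of 0.82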

  15. A ToF-Camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Sobers Lourdu Xavier Francis

    2015-11-01

    Full Text Available The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments, employing obstacle avoidance without human intervention. Applying a ToF camera to an AGV is a suitable approach to autonomous robotics because the ToF camera can provide three-dimensional (3D) information at a low computational cost; after calibration and ground testing, the camera is mounted and integrated with the Pioneer mobile robot and used to extract information about obstacles. The workspace is a two-dimensional (2D) world map divided into a grid of cells, where the collision-free path defined by the graph-search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data are used to populate traversable areas and obstacles in a grid of cells of suitable size; the camera data are converted into Cartesian coordinates for entry into the workspace grid map. A more optimal camera mounting angle is needed and is adopted by analysing the camera's performance discrepancies, such as pixel detection, the detection rate, the maximum perceived distances, and infrared (IR) scattering with respect to the ground surface; this mounting angle is recommended to be half the vertical field-of-view (FoV) of the PMD camera. A series of still and moving tests are conducted on the AGV to verify correct sensor operation, which shows that the postulated application of the ToF camera on the AGV is not straightforward. Finally, to stabilize the moving PMD camera and to detect obstacles, a tracking feature-detection algorithm and the scene-flow technique are implemented in a real-time experiment.
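
    The projection of PMD depth pixels into Cartesian ground coordinates and their entry into a 2D grid map can be sketched as follows; the field-of-view, mounting height and cell size are illustrative assumptions, with the camera tilted down by half its vertical FoV as recommended above.

      import numpy as np

      def depth_to_ground_xy(depth, v_fov=40.0, h_fov=50.0, height=0.5):
          """depth: (rows, cols) range image in metres; returns (N, 2) obstacle points on the ground plane."""
          rows, cols = depth.shape
          tilt = v_fov / 2.0                                        # mounting angle = half vertical FoV
          el = np.deg2rad(np.linspace(+v_fov / 2, -v_fov / 2, rows) - tilt)
          az = np.deg2rad(np.linspace(-h_fov / 2, +h_fov / 2, cols))
          el, az = np.meshgrid(el, az, indexing="ij")
          x = depth * np.cos(el) * np.cos(az)                       # forward
          y = depth * np.cos(el) * np.sin(az)                       # lateral
          z = height + depth * np.sin(el)                           # height above ground
          keep = (depth > 0.2) & (z > 0.1)                          # crude obstacle test
          return np.column_stack([x[keep], y[keep]])

      def to_occupancy_grid(points_xy, cell=0.25, size=80):
          grid = np.zeros((size, size), dtype=np.uint8)
          ix = (points_xy[:, 0] / cell).astype(int)
          iy = (points_xy[:, 1] / cell + size // 2).astype(int)
          ok = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
          grid[ix[ok], iy[ok]] = 1                                  # mark occupied cells
          return grid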

  16. Biometric identification using infrared dorsum hand vein images

    Directory of Open Access Journals (Sweden)

    Óscar Fernando Motato Toro

    2009-01-01

    Full Text Available The evident need to improve access and safety controls has oriented the development of new personal identification systems towards biometric, physiological and behavioral features that guarantee increasingly greater levels of performance. Motivated by this trend, the development and implementation of a computational tool for recording and validating people's identity using dorsum hand vein images is presented here. A low-cost hardware module for acquiring infrared images was designed; it consists of a conventional video camera, optical lenses, controlled infrared illumination sources and a frame grabber. The accompanying software module handles image visualization and capture, selection of regions of interest, pattern segmentation within the region, and feature extraction, description and classification. An artificial neural network approach was implemented for pattern recognition, proving the biometric indicator to be sufficiently discriminating, and a correlation-based approach using a 100-image database for static characterisation determined the system's maximum efficiency to be 95.72% at a threshold equal to 65. The false acceptance rate (FAR) was 8.57% and the false rejection rate (FRR) was 0% at this threshold.
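
    The reported operating point (FAR and FRR at a threshold of 65) can be reproduced from matcher scores with a few lines of code; the score distributions below are random placeholders, not the study's data.

      import numpy as np

      def far_frr(genuine_scores, impostor_scores, threshold):
          far = 100.0 * np.mean(np.asarray(impostor_scores) >= threshold)   # false acceptances (%)
          frr = 100.0 * np.mean(np.asarray(genuine_scores) < threshold)     # false rejections (%)
          return far, frr

      rng = np.random.default_rng(1)
      genuine = rng.normal(80, 8, 500)      # placeholder correlation scores for genuine pairs
      impostor = rng.normal(50, 10, 500)    # placeholder scores for impostor pairs
      print(far_frr(genuine, impostor, threshold=65))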

  17. Scintillation camera with improved output means

    International Nuclear Information System (INIS)

    Lange, K.; Wiesen, E.J.; Woronowicz, E.M.

    1978-01-01

    In a scintillation camera system, the output pulse signals from an array of photomultiplier tubes are coupled to the inputs of individual preamplifiers. The preamplifier output signals are coupled to circuitry for computing the x and y coordinates of the scintillations. A cathode ray oscilloscope is used to form an image corresponding to the pattern in which radiation is emitted by a body. Means for improving the uniformity and resolution of the scintillations are provided. These comprise biasing means coupled to the outputs of selected preamplifiers so that output signals below a predetermined amplitude are not suppressed, while signals falling within increasing ranges of amplitude are increasingly suppressed. In effect, the biasing means make the preamplifiers non-linear for selected signal levels.
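
    A toy transfer function of the behaviour the record describes, with signals below a knee passed unchanged and larger signals increasingly suppressed, might look as follows; the soft-compression form and parameter values are assumptions for illustration only.

      import numpy as np

      def biased_response(v, knee=1.0, strength=0.5):
          """Pass amplitudes below `knee` unchanged; compress amplitudes above it progressively."""
          v = np.asarray(v, dtype=float)
          out = v.copy()
          above = v > knee
          out[above] = knee + (v[above] - knee) / (1.0 + strength * (v[above] - knee))
          return out

      print(biased_response([0.2, 0.8, 1.5, 3.0]))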

  18. Photometric Calibration of Consumer Video Cameras

    Science.gov (United States)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  19. Infrared radiation properties of anodized aluminum

    Energy Technology Data Exchange (ETDEWEB)

    Kohara, S. [Science Univ. of Tokyo, Noda, Chiba (Japan). Dept. of Materials Science and Technology; Niimi, Y. [Science Univ. of Tokyo, Noda, Chiba (Japan). Dept. of Materials Science and Technology

    1996-12-31

    Infrared radiation heating is an efficient and energy-saving heating method. Ceramics have been used as infrared radiant materials because the emissivity of metals is lower than that of ceramics. However, anodized aluminum could be used as an infrared radiant material, since an aluminum oxide film is formed on its surface. In the present study, the infrared radiation properties of anodized aluminum have been investigated by determining the spectral emissivity curve. The spectral emissivity curve of anodized aluminum changed with the anodizing time: it shifted to a higher level after anodizing for 10 min but changed little afterwards. An infrared radiant material with a high-level spectral emissivity curve can be achieved by forming an oxide film thicker than about 15 {mu}m on the aluminum surface. Thus, anodized aluminum is applicable to infrared radiation heating. (orig.)

  20. Common aperture multispectral spotter camera: Spectro XR

    Science.gov (United States)

    Petrushevsky, Vladimir; Freiman, Dov; Diamant, Idan; Giladi, Shira; Leibovich, Maor

    2017-10-01

    The Spectro XRTM is an advanced color/NIR/SWIR/MWIR 16'' payload recently developed by Elbit Systems / ELOP. The payload's primary sensor is a spotter camera with a common 7'' aperture. The sensor suite also includes an MWIR zoom, an EO zoom, a laser designator or rangefinder, a laser pointer / illuminator and a laser spot tracker. A rigid structure, vibration damping and 4-axis gimbals enable a high level of line-of-sight stabilization. The payload's feature list includes a multi-target video tracker, precise boresight, a strap-on IMU, an embedded moving map, a geodetic calculations suite, and image fusion. The paper describes the main technical characteristics of the spotter camera. A visible-quality, all-metal front catadioptric telescope maintains optical performance over a wide range of environmental conditions. High-efficiency coatings separate the incoming light into EO, SWIR and MWIR band channels. Both the EO and SWIR bands have dual FOV and 3 spectral filters each. Several variants of focal plane array formats are supported. The common-aperture design facilitates superior DRI performance in EO and SWIR in comparison to conventionally configured payloads. Special spectral calibration and color correction extend the effective range of color imaging. An advanced CMOS FPA and the low F-number of the optics facilitate low-light performance. The SWIR band provides further atmospheric penetration, as well as see-spot capability at especially long ranges due to asynchronous pulse detection. The MWIR band has good sharpness over the entire field of view and (with a full HD FPA) delivers an amount of detail far exceeding that of VGA-equipped FLIRs. The Spectro XR offers a level of performance typically associated with larger and heavier payloads.

  1. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to ov...

  2. Movement-based Interaction in Camera Spaces

    DEFF Research Database (Denmark)

    Eriksson, Eva; Riisgaard Hansen, Thomas; Lykke-Olesen, Andreas

    2006-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking, and we show how these three concepts can be used to analyse other camera tracking applications.

  3. Performance analysis for gait in camera networks

    OpenAIRE

    Michela Goffredo; Imed Bouchrika; John Carter; Mark Nixon

    2008-01-01

    This paper deploys gait analysis for subject identification in multi-camera surveillance scenarios. We present a new method for viewpoint independent markerless gait analysis that does not require camera calibration and works with a wide range of directions of walking. These properties make the proposed method particularly suitable for gait identification in real surveillance scenarios where people and their behaviour need to be tracked across a set of cameras. Tests on 300 synthetic and real...

  4. Explosive Transient Camera (ETC) Program

    Science.gov (United States)

    Ricker, George

    1991-01-01

    Since the inception of the ETC program, a wide range of new technologies was developed to support this astronomical instrument. The prototype unit was installed at ETC Site 1. The first partially automated observations were made and some major renovations were later added to the ETC hardware. The ETC was outfitted with new thermoelectrically-cooled CCD cameras and a sophisticated vacuum manifold, which, together, made the ETC a much more reliable unit than the prototype. The ETC instrumentation and building were placed under full computer control, allowing the ETC to operate as an automated, autonomous instrument with virtually no human intervention necessary. The first fully-automated operation of the ETC was performed, during which the ETC monitored the error region of the repeating soft gamma-ray burster SGR 1806-21.

  5. Camera processing with chromatic aberration.

    Science.gov (United States)

    Korneliussen, Jan Tore; Hirakawa, Keigo

    2014-10-01

    Since the refractive index of materials commonly used for lenses depends on the wavelength of light, practical camera optics fail to converge light to a single point on the image plane. Known as chromatic aberration, this phenomenon distorts image details by introducing magnification error, defocus blur, and color fringes. Though achromatic and apochromatic lens designs reduce chromatic aberration to a degree, they are complex and expensive and do not offer a perfect correction. In this paper, we propose a new post-capture processing scheme designed to overcome these problems computationally. Specifically, the proposed solution comprises a chromatic aberration-tolerant demosaicking algorithm and a post-demosaicking chromatic aberration correction. Experiments with simulated and real sensor data verify that the chromatic aberration is effectively corrected.

  6. Approximations to camera sensor noise

    Science.gov (United States)

    Jin, Xiaodan; Hirakawa, Keigo

    2013-02-01

    Noise is present in all image sensor data. The Poisson distribution is said to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise, such as Fano and quantization noise, also contribute to the overall noise profile. The question remains, however, of how best to model the combined sensor noise. Though additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given for these models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to real camera noise than SD-AWGN. We suggest further modifications to the Poisson model that may improve the noise model.
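
    A short simulation, not taken from the paper, illustrates one place where the two models diverge: at low photo-electron counts the Poisson model is skewed and never negative, whereas SD-AWGN is symmetric and can produce negative samples.

      import numpy as np

      rng = np.random.default_rng(0)
      mean_count, n = 4.0, 100_000                       # low-light mean photo-electron count
      poisson = rng.poisson(mean_count, n)               # Poisson model
      sd_awgn = mean_count + rng.normal(0.0, np.sqrt(mean_count), n)   # SD-AWGN: var = mean

      print("negative samples, Poisson:", np.mean(poisson < 0))        # always 0
      print("negative samples, SD-AWGN:", np.mean(sd_awgn < 0))        # > 0 at low counts
      print("skewness, Poisson:", np.mean((poisson - mean_count) ** 3) / mean_count ** 1.5)
      print("skewness, SD-AWGN:", np.mean((sd_awgn - mean_count) ** 3) / mean_count ** 1.5)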

  7. An Airborne Multispectral Imaging System Based on Two Consumer-Grade Cameras for Agricultural Remote Sensing

    Directory of Open Access Journals (Sweden)

    Chenghai Yang

    2014-06-01

    Full Text Available This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One camera captures normal color images, while the other is modified to obtain near-infrared (NIR) images. The color camera is also equipped with a GPS receiver to allow geotagged images. A remote control is used to trigger both cameras simultaneously. Images are stored as 14-bit RAW and 8-bit JPEG files on CompactFlash cards. A second-order transformation was used to align the color and NIR images and achieve subpixel alignment in the four-band images. The imaging system was tested under various flight and land cover conditions and optimal camera settings were determined for airborne image acquisition. Images were captured at altitudes of 305–3050 m (1000–10,000 ft) and pixel sizes of 0.1–1.0 m were achieved. Four practical application examples are presented to illustrate how the imaging system was used to estimate cotton canopy cover, detect cotton root rot, and map henbit and giant reed infestations. Preliminary analysis of example images has shown that this system has potential for crop condition assessment, pest detection, and other agricultural applications.
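
    The second-order (quadratic) band-to-band transformation mentioned above can be fitted from matched control points and applied as in the sketch below; the helper names are illustrative, not the authors' implementation.

      import numpy as np

      def _quadratic_terms(pts):
          x, y = np.asarray(pts, float).T
          return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

      def fit_second_order(src_pts, dst_pts):
          """Fit a 2nd-order polynomial mapping from NIR pixel coords to colour-image coords."""
          coeffs, *_ = np.linalg.lstsq(_quadratic_terms(src_pts), np.asarray(dst_pts, float),
                                       rcond=None)
          return coeffs                                   # (6, 2) coefficient matrix

      def warp_points(pts, coeffs):
          return _quadratic_terms(pts) @ coeffs           # transformed coordinates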

  8. WiseEye: Next Generation Expandable and Programmable Camera Trap Platform for Wildlife Research.

    Directory of Open Access Journals (Sweden)

    Sajid Nazir

    Full Text Available The widespread availability of relatively cheap, reliable and easy-to-use digital camera traps has led to their extensive use for wildlife research, monitoring and public outreach. Users of these units are, however, often frustrated by the limited options for controlling camera functions, the generation of large numbers of images, and the lack of flexibility to suit different research environments and questions. We describe the development of a user-customisable open-source camera trap platform named 'WiseEye', designed to provide flexible camera trap technology for wildlife researchers. The novel platform is based on a Raspberry Pi single-board computer and compatible peripherals that allow the user to control its functions and performance. We introduce the concept of confirmatory sensing, in which the Passive Infrared triggering is confirmed through other modalities (i.e. radar, pixel change) to reduce the occurrence of false positive images. This concept, together with user-definable metadata, aided identification of spurious images and greatly reduced post-collection processing time. When tested against a commercial camera trap, WiseEye was found to reduce the incidence of false positive images and false negatives across a range of test conditions. WiseEye represents a step-change in camera trap functionality, greatly increasing the value of this technology for wildlife research and conservation management.
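
    The confirmatory-sensing idea can be sketched as simple triggering logic: a PIR event only leads to a saved image when a second modality (radar or inter-frame pixel change) agrees. The threshold and function names below are assumptions, not the WiseEye configuration.

      import numpy as np

      def pixel_change_confirms(prev_frame, frame, threshold=12.0):
          """Confirm motion if the mean absolute frame difference exceeds a threshold."""
          return np.mean(np.abs(frame.astype(float) - prev_frame.astype(float))) > threshold

      def on_pir_trigger(prev_frame, frame, radar_detect):
          confirmed = radar_detect or pixel_change_confirms(prev_frame, frame)
          return "save_image" if confirmed else "discard"   # discarding suppresses a false positive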

  9. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    Science.gov (United States)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

    The authors have been concentrating on developing convenient 3D measurement methods using consumer-grade digital cameras, and have concluded that consumer-grade digital cameras are expected to become useful photogrammetric devices for various close-range application fields. On the other hand, mobile phone cameras with 10 megapixels have appeared on the market in Japan. In these circumstances, we are faced with the epoch-making question of whether mobile phone cameras are able to take the place of consumer-grade digital cameras in close-range photogrammetric applications. In order to evaluate the potential of mobile phone cameras in close-range photogrammetry, a comparative evaluation of mobile phone cameras and consumer-grade digital cameras is carried out in this paper with respect to lens distortion, reliability, stability and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer-grade digital cameras were conducted indoors using a test target. Furthermore, the practicability of mobile phone cameras for close-range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras have the ability to take the place of consumer-grade digital cameras and to develop the market in digital photogrammetric fields.
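
    An indoor test-target calibration of the kind described can be run with OpenCV's standard chessboard routines, as in the hedged sketch below; the board size and image file names are placeholders, and the RMS reprojection error and distortion coefficients serve as comparison metrics.

      import cv2
      import numpy as np

      pattern = (9, 6)                                     # inner corners of the test target (assumed)
      objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
      objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

      obj_points, img_points, size = [], [], None
      for fname in ["shot01.jpg", "shot02.jpg"]:           # placeholder image names
          gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
          if gray is None:
              continue
          found, corners = cv2.findChessboardCorners(gray, pattern)
          if found:
              obj_points.append(objp)
              img_points.append(corners)
              size = gray.shape[::-1]

      rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
      print("RMS reprojection error:", rms)                # stability/reliability indicator
      print("distortion coefficients:", dist.ravel())      # lens distortion comparison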

  10. Decision about buying a gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Ganatra, R D

    1993-12-31

    A large part of the referral to a nuclear medicine department is usually for imaging studies. Sooner or later, the nuclear medicine specialist will be called upon to make a decision about when and what type of gamma camera to buy. There is no longer an option of choosing between a rectilinear scanner and a gamma camera as the former is virtually out of the market. The decision that one has to make is when to invest in a gamma camera, and then on what basis to select the gamma camera. 1 tab., 1 fig.

  11. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is that it overcomes the effects of occlusions that could leave an object in partial or full view in one camera when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features of the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from the images of multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, both in each stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  12. Decision about buying a gamma camera

    International Nuclear Information System (INIS)

    Ganatra, R.D.

    1992-01-01

    A large part of the referral to a nuclear medicine department is usually for imaging studies. Sooner or later, the nuclear medicine specialist will be called upon to make a decision about when and what type of gamma camera to buy. There is no longer an option of choosing between a rectilinear scanner and a gamma camera as the former is virtually out of the market. The decision that one has to make is when to invest in a gamma camera, and then on what basis to select the gamma camera

  13. Streak camera recording of interferometer fringes

    International Nuclear Information System (INIS)

    Parker, N.L.; Chau, H.H.

    1977-01-01

    The use of an electronic high-speed camera in the streaking mode to record interference fringe motion from a velocity interferometer is discussed. Advantages of this method over the photomultiplier tube-oscilloscope approach are delineated. Performance testing and data for the electronic streak camera are discussed. The velocity profile of a mylar flyer accelerated by an electrically exploded bridge, and the jump-off velocity of metal targets struck by these mylar flyers are measured in the camera tests. Advantages of the streak camera include portability, low cost, ease of operation and maintenance, simplified interferometer optics, and rapid data analysis

  14. Infrared stereo calibration for unmanned ground vehicle navigation

    Science.gov (United States)

    Harguess, Josh; Strange, Shawn

    2014-06-01

    The problem of calibrating two color cameras as a stereo pair has been heavily researched, and many off-the-shelf software packages, such as the Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new and many challenges exist. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically with computed reprojection errors.
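
    Once an IR camera is calibrated, a reprojection error of the kind reported above can be computed by projecting the known board points with the estimated poses and comparing them with the detected corners; variable names follow OpenCV conventions but the helper itself is illustrative.

      import cv2
      import numpy as np

      def rms_reprojection_error(obj_points, img_points, rvecs, tvecs, K, dist):
          """RMS pixel error between detected corners and reprojected board points."""
          sq_err, count = 0.0, 0
          for objp, imgp, r, t in zip(obj_points, img_points, rvecs, tvecs):
              proj, _ = cv2.projectPoints(objp, r, t, K, dist)
              diff = np.asarray(imgp, float).reshape(-1, 2) - proj.reshape(-1, 2)
              sq_err += np.sum(diff ** 2)
              count += len(objp)
          return np.sqrt(sq_err / count)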

  15. Overt vs. covert speed cameras in combination with delayed vs. immediate feedback to the offender.

    Science.gov (United States)

    Marciano, Hadas; Setter, Pe'erly; Norman, Joel

    2015-06-01

    Speeding is a major problem in road safety because it increases both the probability of accidents and the severity of injuries if an accident occurs. Speed cameras are one of the most common speed enforcement tools. Most of the speed cameras around the world are overt, but there is evidence that this can cause a "kangaroo effect" in driving patterns. One suggested alternative to prevent this kangaroo effect is the use of covert cameras. Another issue relevant to the effect of enforcement countermeasures on speeding is the timing of the fine. There is general agreement on the importance of the immediacy of the punishment; however, in the context of speed limit enforcement, implementing such immediate punishment is difficult. Immediate feedback that mediates the delay between the speed violation and getting a ticket is one possible solution. This study examines combinations of concealment and the timing of the fine in operating speed cameras in order to evaluate the most effective combination in terms of enforcing speed limits. Using a driving simulator, the driving performance of the following four experimental groups was tested: (1) overt cameras with delayed feedback, (2) overt cameras with immediate feedback, (3) covert cameras with delayed feedback, and (4) covert cameras with immediate feedback. Each of the 58 participants drove in the same scenario on three different days. The results showed that both median speed and speed variance were higher with overt than with covert cameras. Moreover, implementing a covert camera system along with immediate feedback was more conducive to drivers maintaining steady speeds at the permitted levels from the very beginning. Finally, both 'overt camera' groups exhibited a kangaroo effect throughout the entire experiment. It can be concluded that an implementation strategy consisting of covert speed cameras combined with immediate feedback to the offender is potentially an optimal way to motivate drivers to maintain speeds at the

  16. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor

    Science.gov (United States)

    Nguyen, Dat Tien; Baek, Na Rae; Pham, Tuyen Danh; Park, Kang Ryoung

    2018-01-01

    Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies. PMID:29695113
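
    The final classification stage described above can be sketched as follows: descriptors extracted from the localized iris region are passed to a support vector machine that separates real from presentation-attack images. Feature extraction is only stubbed with random placeholders here, and the SVM settings are assumptions rather than the paper's configuration.

      import numpy as np
      from sklearn.svm import SVC

      def train_pad_classifier(features, labels):
          """features: (N, D) deep/handcrafted descriptors; labels: 1 = real, 0 = attack."""
          clf = SVC(kernel="rbf", C=1.0, gamma="scale")
          clf.fit(features, labels)
          return clf

      X = np.random.rand(200, 128)                 # placeholder descriptors
      y = np.random.randint(0, 2, 200)             # placeholder labels
      clf = train_pad_classifier(X, y)
      print(clf.predict(X[:5]))                    # real (1) vs presentation attack (0)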

  17. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2018-04-01

    Full Text Available Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies.

  18. Presentation Attack Detection for Iris Recognition System Using NIR Camera Sensor.

    Science.gov (United States)

    Nguyen, Dat Tien; Baek, Na Rae; Pham, Tuyen Danh; Park, Kang Ryoung

    2018-04-24

    Among biometric recognition systems such as fingerprint, finger-vein, or face, the iris recognition system has proven to be effective for achieving a high recognition accuracy and security level. However, several recent studies have indicated that an iris recognition system can be fooled by using presentation attack images that are recaptured using high-quality printed images or by contact lenses with printed iris patterns. As a result, this potential threat can reduce the security level of an iris recognition system. In this study, we propose a new presentation attack detection (PAD) method for an iris recognition system (iPAD) using a near infrared light (NIR) camera image. To detect presentation attack images, we first localized the iris region of the input iris image using circular edge detection (CED). Based on the result of iris localization, we extracted the image features using deep learning-based and handcrafted-based methods. The input iris images were then classified into real and presentation attack categories using support vector machines (SVM). Through extensive experiments with two public datasets, we show that our proposed method effectively solves the iris recognition presentation attack detection problem and produces detection accuracy superior to previous studies.

  19. Multi-wavelength study of infrared galaxies

    International Nuclear Information System (INIS)

    Marcillac, Delphine

    2005-01-01

    This thesis deals with a panchromatic study of luminous infrared galaxies (LIRGs) detected at 15 microns by ISOCAM (the camera aboard ISO) and at 24 microns by MIPS (the camera aboard the recently launched Spitzer satellite). These galaxies are today considered the Rosetta Stone of galaxy evolution, since they are found to be far more numerous at high redshift and it is thought that a large part of the stars seen in the local universe were born in such phases. The first part of this thesis presents a new study dedicated to the dust emission of distant LIRGs in the mid-infrared range. Their dust emission has been compared to that of a local sample of LIRGs, in addition to the predictions of several spectral energy distributions (SEDs) built on data available in the local universe. It has been shown that distant and local LIRGs present similar mid-infrared spectral energy distributions: similar PAH bumps are detected in both local and distant LIRGs; however, distant LIRGs show evidence of a stronger silicate absorption at 10 microns associated with silicate grains. It is also shown that the mid-infrared emission of distant LIRGs can be used together with local SEDs in order to estimate the total infrared luminosity. The second part of this thesis is dedicated to the burst of star formation and to the recent star formation history of these galaxies, which is responsible for the dust emission. This study was done thanks to a combination of high-resolution spectra (R=2000 in the rest frame) obtained at VLT/FORS2 and the stellar population synthesis models called GALAXEV (Bruzual and Charlot, 2003). It has been shown that the burst of star formation has a duration of about 0.1 Gyr, and that about 10% of the stellar content is formed during this burst of star formation. (author) [fr]

  20. The Andromeda Optical and Infrared Disk Survey

    Science.gov (United States)

    Sick, J.; Courteau, S.; Cuillandre, J.-C.

    2014-03-01

    The Andromeda Optical and Infrared Disk Survey has mapped M31 in u* g' r' i' JKs wavelengths out to R = 40 kpc using the MegaCam and WIRCam wide-field cameras on the Canada-France-Hawaii Telescope. Our survey is uniquely designed to simultaneously resolve stars while also carefully reproducing the surface brightness of M31, allowing us to study M31's global structure in the context of both resolved stellar populations and spectral energy distributions. We use the Elixir-LSB method to calibrate the optical u* g' r' i' images by building real-time maps of the sky background with sky-target nodding. These maps are stable to μg ≲ 28.5 mag arcsec-2 and reveal warps in the outer M31 disk in surface brightness. The equivalent WIRCam mapping in the near-infrared uses a combination of sky-target nodding and image-to-image sky offset optimization to produce stable surface brightnesses. This study enables a detailed analysis of the systematics of spectral energy distribution fitting with near-infrared bands where asymptotic giant branch stars impose a significant, but ill-constrained, contribution to the near-infrared light of a galaxy. Here we present panchromatic surface brightness maps and initial results from our near-infrared resolved stellar catalog.

  1. Lane detection algorithm for an onboard camera

    Science.gov (United States)

    Bellino, Mario; Lopez de Meneses, Yuri; Ryser, Peter; Jacot, Jacques

    2005-02-01

    After analysing the major causes of injuries and death on roads, it is understandable that one of the main goals in the automotive industry is to increase vehicle safety. The European project SPARC (Secure Propulsion using Advanced Redundant Control) is developing the next generation of trucks that will fulfil these aims. The main technologies that will be used in the SPARC project to achieve the desired level of safety are presented. In order to avoid accidents in critical situations, it is necessary to have a representation of the environment of the vehicle. Thus, several solutions using different sensors are described and analysed. In particular, a division of this project aims to integrate cameras into automotive vehicles to increase safety and prevent driver mistakes. Indeed, with this vision platform it is possible to extract the position of the lane with respect to the vehicle and thus help the driver to follow the optimal trajectory. A definition of lane is proposed, and a lane detection algorithm is presented. In order to improve the detection, several criteria are explained and detailed. Regrettably, such an embedded camera is subject to the vibration of the truck, and the resulting sequence of images is difficult to analyse. Thus, we present different solutions to stabilize the images, and in particular a new approach developed by the "Laboratoire de Production Microtechnique". Indeed, it was demonstrated in previous work that the presence of noise can be exploited through a phenomenon called Stochastic Resonance. Thus, instead of decreasing the influence of noise in industrial applications, which has non-negligible costs, it is perhaps interesting to use this phenomenon to reveal useful information, such as the contours of objects and lanes.

  2. Spectral Characterization of a Prototype SFA Camera for Joint Visible and NIR Acquisition

    Directory of Open Access Journals (Sweden)

    Jean-Baptiste Thomas

    2016-06-01

    Full Text Available Multispectral acquisition improves machine vision since it permits capturing more information on object surface properties than color imaging. The concept of spectral filter arrays has been developed recently and allows multispectral single-shot acquisition with a compact camera design. Due to filter manufacturing difficulties, there was, until recently, no system available spanning a large portion of the spectrum, i.e., joint visible and near-infrared acquisition. This article presents a prototype camera that captures seven visible bands and one near-infrared band on the same sensor chip. A calibration is proposed to characterize the sensor, and images are captured. The data are provided as supplementary material for further analysis and simulations. This opens a new range of applications in the security, robotics, automotive and medical fields.

  3. Improving Situational Awareness in camera surveillance by combining top-view maps with camera images

    NARCIS (Netherlands)

    Kooi, F.L.; Zeeders, R.

    2009-01-01

    The goal of the experiment described is to improve today's camera surveillance in public spaces. Three designs with the camera images combined on a top-view map were compared to each other and to the current situation in camera surveillance. The goal was to test which design makes spatial

  4. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many

  5. Laser-based terahertz-field-driven streak camera for the temporal characterization of ultrashort processes

    International Nuclear Information System (INIS)

    Schuette, Bernd

    2011-09-01

    In this work, a novel laser-based terahertz-field-driven streak camera is presented. It allows for a pulse-length characterization of femtosecond (fs) extreme ultraviolet (XUV) pulses by a cross-correlation with terahertz (THz) pulses generated with a Ti:sapphire laser. The XUV pulses are emitted by a high-order harmonic generation (HHG) source in which an intense near-infrared (NIR) fs laser pulse is focused into a gaseous medium. The design and characterization of a high-intensity THz source needed for the streak camera is also part of this thesis. The source is based on optical rectification of the same NIR laser pulse in a lithium niobate crystal. For this purpose, the pulse front of the NIR beam is tilted via a diffraction grating to achieve velocity matching between the NIR and THz beams within the crystal. For the temporal characterization of the XUV pulses, both the HHG and THz beams are focused onto a gas target. The harmonic radiation creates photoelectron wavepackets, which are then accelerated by the THz field depending on its phase at the time of ionization. This principle is adopted from a conventional streak camera and is now widely used in attosecond metrology. The streak camera presented here is an advancement of a terahertz-field-driven streak camera implemented at the Free Electron Laser in Hamburg (FLASH). The advantages of the laser-based streak camera lie in its compactness, cost efficiency and accessibility, while providing the same good quality of measurements as obtained at FLASH. In addition, its flexibility allows for a systematic investigation of streaked Auger spectra, which is presented in this thesis. With its fs time resolution, the terahertz-field-driven streak camera thereby bridges the gap between attosecond and conventional cameras. (orig.)
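
    A hedged numerical sketch of the streaking relation underlying this measurement: in the standard streak-camera picture, the photoelectron's final momentum is shifted by minus the charge times the THz vector potential at the instant of ionization, so the measured energy shift maps arrival time onto the streaking ramp. The field strength and frequency below are illustrative values only.

      import numpy as np

      E0 = 1.0e7                          # peak THz field, V/m (illustrative)
      OMEGA = 2 * np.pi * 0.6e12          # 0.6 THz angular frequency (illustrative)
      Q = 1.602e-19                       # elementary charge, C

      def momentum_shift(t_ionization):
          # For E(t) = E0*sin(omega*t), the vector potential is A(t) = -(E0/omega)*cos(omega*t);
          # the streaking shift is delta_p = -q * A(t_ionization).
          A = -(E0 / OMEGA) * np.cos(OMEGA * t_ionization)
          return -Q * A

      for t in np.linspace(-0.4e-12, 0.4e-12, 5):
          print(f"t = {t:+.1e} s  ->  delta p = {momentum_shift(t):+.3e} kg m/s")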

  6. Laser-based terahertz-field-driven streak camera for the temporal characterization of ultrashort processes

    Energy Technology Data Exchange (ETDEWEB)

    Schuette, Bernd

    2011-09-15

    In this work, a novel laser-based terahertz-field-driven streak camera is presented. It allows for a pulse-length characterization of femtosecond (fs) extreme ultraviolet (XUV) pulses by a cross-correlation with terahertz (THz) pulses generated with a Ti:sapphire laser. The XUV pulses are emitted by a high-order harmonic generation (HHG) source in which an intense near-infrared (NIR) fs laser pulse is focused into a gaseous medium. The design and characterization of a high-intensity THz source needed for the streak camera is also part of this thesis. The source is based on optical rectification of the same NIR laser pulse in a lithium niobate crystal. For this purpose, the pulse front of the NIR beam is tilted via a diffraction grating to achieve velocity matching between the NIR and THz beams within the crystal. For the temporal characterization of the XUV pulses, both the HHG and THz beams are focused onto a gas target. The harmonic radiation creates photoelectron wavepackets, which are then accelerated by the THz field depending on its phase at the time of ionization. This principle is adopted from a conventional streak camera and is now widely used in attosecond metrology. The streak camera presented here is an advancement of a terahertz-field-driven streak camera implemented at the Free Electron Laser in Hamburg (FLASH). The advantages of the laser-based streak camera lie in its compactness, cost efficiency and accessibility, while providing the same good quality of measurements as obtained at FLASH. In addition, its flexibility allows for a systematic investigation of streaked Auger spectra, which is presented in this thesis. With its fs time resolution, the terahertz-field-driven streak camera thereby bridges the gap between attosecond and conventional cameras. (orig.)

  7. Enhancing swimming pool safety by the use of range-imaging cameras

    Science.gov (United States)

    Geerardyn, D.; Boulanger, S.; Kuijk, M.

    2015-05-01

    Drowning causes the death of 372,000 people each year worldwide, according to the November 2014 report of the World Health Organization.1 Currently, most swimming pools only use lifeguards to detect drowning people. In some modern swimming pools, camera-based detection systems are nowadays being integrated. However, these systems have to be mounted underwater, mostly as a replacement for the underwater lighting. In contrast, we are interested in range-imaging cameras mounted on the ceiling of the swimming pool, allowing swimmers at the surface to be distinguished from drowning people underwater, while keeping a large field of view and minimizing occlusions. However, we have to take into account that the water surface of a swimming pool is not flat but mostly rippled, and that the water is transparent to visible light but less transparent to infrared or ultraviolet light. We investigated the use of different types of 3D cameras to detect objects underwater at different depths and with different amplitudes of surface perturbation. Specifically, we performed measurements with a commercial Time-of-Flight camera, a commercial structured-light depth camera and our own Time-of-Flight system. Our own system uses pulsed Time-of-Flight and emits light at 785 nm. The measured distances between the camera and the object are influenced by the perturbations on the water surface. Due to the timing of our Time-of-Flight camera, our system is theoretically able to minimize the influence of reflections from a partially reflecting surface. The combination of a post-acquisition filter compensating for the perturbations and the use of a light source with shorter wavelengths to enlarge the depth range can improve on the current commercial cameras. As a result, we conclude that low-cost range imagers can increase swimming pool safety by adding a post-processing filter and using a different light source.
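
    The pulsed time-of-flight relation behind the ceiling-mounted range camera is distance = c * t / 2; a small sketch, with an assumed refractive index of water of about 1.33 to account for the slower propagation along the underwater part of the path, is given below.

      C = 299_792_458.0                  # speed of light in air/vacuum, m/s

      def tof_distance(round_trip_s, underwater_time_fraction=0.0, n_water=1.33):
          """Convert a round-trip pulse time to distance, optionally slowing the underwater segment."""
          one_way = round_trip_s / 2.0
          t_air = one_way * (1.0 - underwater_time_fraction)
          t_water = one_way * underwater_time_fraction
          return C * t_air + (C / n_water) * t_water

      print(tof_distance(20e-9))                                   # ~3 m path in air
      print(tof_distance(20e-9, underwater_time_fraction=0.4))     # same pulse, partly under water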

  8. Camtracker: a new camera controlled high precision solar tracker system for FTIR-spectrometers

    Directory of Open Access Journals (Sweden)

    M. Gisi

    2011-01-01

    Full Text Available A new system to very precisely couple radiation of a moving source into a Fourier Transform Infrared (FTIR) Spectrometer is presented. The Camtracker consists of a homemade altazimuthal solar tracker, a digital camera and a homemade program to process the camera data and to control the motion of the tracker. The key idea is to evaluate the image of the radiation source on the entrance field stop of the spectrometer. We prove that the system reaches tracking accuracies of about 10 arc s for a ground-based solar absorption FTIR spectrometer, which is significantly better than current solar trackers. Moreover, due to the incorporation of a camera, the new system allows residual pointing errors to be documented and the solar disk center to be targeted even in the case of variable intensity distributions across the source due to cirrus or haze.
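
    The camera-in-the-loop pointing idea can be sketched as a small control step: the centroid of the solar image on the entrance field stop is compared with a target position and the offset is turned into altazimuthal corrections. Gains, plate scale and sign conventions are placeholders, not values from the paper.

      import numpy as np

      def centroid(image):
          """Intensity-weighted centroid (row, col) of the solar image on the field stop."""
          ys, xs = np.indices(image.shape)
          total = image.sum() + 1e-12                      # guard against an empty image
          return np.array([(image * ys).sum() / total, (image * xs).sum() / total])

      def pointing_correction(image, target_px, arcsec_per_px=5.0, gain=0.5):
          offset = centroid(image) - np.asarray(target_px, dtype=float)
          d_elevation, d_azimuth = -gain * offset * arcsec_per_px
          return d_azimuth, d_elevation                    # commanded tracker increments, arcsec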

  9. High resolution RGB color line scan camera

    Science.gov (United States)

    Lynch, Theodore E.; Huettig, Fred

    1998-04-01

    This paper describes a color line scan camera family which is available with either 6000, 8000 or 10000 pixels per color channel, utilizes off-the-shelf lenses, interfaces with currently available frame grabbers, includes on-board pixel-by-pixel offset correction, and is configurable and controllable via an RS232 serial port for computer-controlled or stand-alone operation. This line scan camera is based on an available 8000-element monochrome line scan camera designed by AOA for OEM use. The new color version includes improvements such as better packaging and additional user features which make the camera easier to use. The heart of the camera is a tri-linear CCD sensor with on-chip color balancing for maximum accuracy and pinned photodiodes for low-lag response. Each color channel is digitized to 12 bits and all three channels are multiplexed together so that the resulting camera output video is either a 12- or 8-bit data stream at a rate of up to 24 Megapixels/sec. Conversion from 12 to 8 bits, or the application of a user-defined gamma, is accomplished by on-board user-defined video look-up tables. The camera has two user-selectable operating modes: a low-speed, high-sensitivity mode and a high-speed, reduced-sensitivity mode. The intended uses of the camera include industrial inspection, digital archiving, document scanning, and graphic arts applications.
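
    The 12-to-8 bit conversion through a user-defined gamma look-up table mentioned above can be sketched as follows; the gamma value and the example pixel values are illustrative and are not the camera's actual firmware tables.

        import numpy as np

        def build_gamma_lut(gamma: float = 2.2, in_bits: int = 12, out_bits: int = 8) -> np.ndarray:
            """Build a look-up table mapping 12-bit input codes to 8-bit output codes."""
            in_max = 2 ** in_bits - 1
            out_max = 2 ** out_bits - 1
            codes = np.arange(in_max + 1) / in_max              # normalise to 0..1
            return np.round(out_max * codes ** (1.0 / gamma)).astype(np.uint8)

        lut = build_gamma_lut()
        raw_line = np.array([0, 512, 2048, 4095], dtype=np.uint16)  # example 12-bit pixels
        print(lut[raw_line])                                        # gamma-corrected 8-bit output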

  10. Ultra fast x-ray streak camera

    International Nuclear Information System (INIS)

    Coleman, L.W.; McConaghy, C.F.

    1975-01-01

    A unique ultrafast x-ray sensitive streak camera, with a time resolution of 50 psec, has been built and operated. A 100 Å thick gold photocathode on a beryllium vacuum window is used in a modified commercial image converter tube. The x-ray streak camera has been used in experiments to observe time-resolved emission from laser-produced plasmas. (author)

  11. An Open Standard for Camera Trap Data

    Directory of Open Access Journals (Sweden)

    Tavis Forrester

    2016-12-01

    Camera traps that capture photos of animals are a valuable tool for monitoring biodiversity. The use of camera traps is rapidly increasing and there is an urgent need for standardization to facilitate data management, reporting and data sharing. Here we offer the Camera Trap Metadata Standard as an open data standard for storing and sharing camera trap data, developed by experts from a variety of organizations. The standard captures the information necessary to share data between projects and offers a foundation for collecting the more detailed data needed for advanced analysis. The data standard captures information about study design, the type of camera used, and the location and species names for all detections in a standardized way. This information is critical for accurately assessing results from individual camera trapping projects and for combining data from multiple studies for meta-analysis. This data standard is an important step in aligning camera trapping surveys with best practices in data-intensive science. Ecology is moving rapidly into the realm of big data, and central data repositories for camera trap data are emerging as a critical tool. This data standard will help researchers standardize data terms, align past data to new repositories, and provide a framework for utilizing data across repositories and research projects to advance animal ecology and conservation.
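
    Purely as an illustration of the kind of standardized detection record such a standard enables, a single entry might be represented as below. The field names are hypothetical and are not the actual schema of the Camera Trap Metadata Standard; consult the published standard for the real terms.

        from dataclasses import dataclass, asdict
        import json

        @dataclass
        class CameraTrapRecord:
            # Hypothetical fields; the published standard defines the real schema.
            project_id: str
            camera_model: str
            deployment_latitude: float
            deployment_longitude: float
            timestamp_utc: str
            scientific_name: str
            count: int

        record = CameraTrapRecord(
            project_id="example-survey-2016",
            camera_model="GenericTrapCam 2",
            deployment_latitude=38.93,
            deployment_longitude=-77.05,
            timestamp_utc="2016-06-01T03:14:00Z",
            scientific_name="Odocoileus virginianus",
            count=1,
        )
        print(json.dumps(asdict(record), indent=2))  # serialised for sharing between projects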

  12. Single chip camera active pixel sensor

    Science.gov (United States)

    Shaw, Timothy (Inventor); Pain, Bedabrata (Inventor); Olson, Brita (Inventor); Nixon, Robert H. (Inventor); Fossum, Eric R. (Inventor); Panicacci, Roger A. (Inventor); Mansoorian, Barmak (Inventor)

    2003-01-01

    A totally digital single chip camera includes communications to operate most of its structure in serial communication mode. The digital single chip camera includes a D/A converter for converting an input digital word into an analog reference signal. The chip includes all of the necessary circuitry for operating the chip using a single pin.

  13. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.
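
    The paper anchors its security guarantees in Trusted Computing hardware; the sketch below is only a simplified, software-only illustration of per-frame integrity, authenticity and timestamping using a keyed MAC. The key handling and function names are assumptions for the sketch and do not reflect the authors' TPM-based implementation.

        import hmac, hashlib, time

        # Simplified illustration of per-frame integrity and authenticity. In the real
        # system the key would be protected by the Trusted Platform Module; here a plain
        # in-memory key stands in for it.
        DEVICE_KEY = b"secret-key-provisioned-at-deployment"

        def sign_frame(frame_bytes: bytes) -> dict:
            """Attach a timestamp and a keyed MAC to a video frame."""
            timestamp = str(time.time()).encode()
            tag = hmac.new(DEVICE_KEY, timestamp + frame_bytes, hashlib.sha256).hexdigest()
            return {"frame": frame_bytes, "timestamp": timestamp, "tag": tag}

        def verify_frame(msg: dict) -> bool:
            expected = hmac.new(DEVICE_KEY, msg["timestamp"] + msg["frame"], hashlib.sha256).hexdigest()
            return hmac.compare_digest(expected, msg["tag"])

        signed = sign_frame(b"\x00" * 1024)   # a dummy 1 kB frame
        print(verify_frame(signed))           # True unless the frame, timestamp or tag were altered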

  14. Centering mount for a gamma camera

    International Nuclear Information System (INIS)

    Mirkhodzhaev, A.Kh.; Kuznetsov, N.K.; Ostryj, Yu.E.

    1988-01-01

    A device for centering a γ-camera detector in the case of radionuclide diagnosis is described. It permits the use of available medical couches instead of a table with a transparent top. The device can be used for centering a detector (when it is fixed at the low end of a γ-camera) on a required area of the patient's body.

  15. Digital airborne camera introduction and technology

    CERN Document Server

    Sandau, Rainer

    2014-01-01

    The last decade has seen great innovations in airborne cameras. This book is the first ever written on the topic and describes all components of a digital airborne camera, ranging from the object to be imaged to the mass memory device.

  16. Adapting virtual camera behaviour through player modelling

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Research in virtual camera control has focused primarily on finding methods to allow designers to place cameras effectively and efficiently in dynamic and unpredictable environments, and to generate complex and dynamic plans for cinematography in virtual environments. In this article, we propose...

  17. Driving with head-slaved camera system

    NARCIS (Netherlands)

    Oving, A.B.; Erp, J.B.F. van

    2001-01-01

    In a field experiment, we tested the effectiveness of a head-slaved camera system for driving an armoured vehicle under armour. This system consists of a helmet-mounted display (HMD), a headtracker, and a motion platform with two cameras. Subjects performed several driving tasks on paved and in

  18. Rosetta Star Tracker and Navigation Camera

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Proposal in response to the Invitation to Tender (ITT) issued by Matra Marconi Space (MSS) for the procurement of the ROSETTA Star Tracker and Navigation Camera.

  19. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single-lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more about the nature of plenoptic camera imaging, we present a wavefront analysis of plenoptic camera imaging from the viewpoint of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated by scalar diffraction theory and the depth estimation is redescribed based on physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations, and the difference between imaging analysis methods based on geometric optics and physical optics is also shown in simulations. (paper)

  20. Modelling Virtual Camera Behaviour Through Player Gaze

    DEFF Research Database (Denmark)

    Picardi, Andrea; Burelli, Paolo; Yannakakis, Georgios N.

    2012-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction largely depend on the placement and animation of the virtual camera. Therefore, virtual camera control plays a critical role in player experience and, thereby, in the overall quality of a computer game. Both game industry and game AI research focus on the development of increasingly sophisticated systems to automate the control of the virtual camera, integrating artificial intelligence algorithms within physical simulations. However, in both industry and academia little research has been carried out on the relationship between virtual camera, game-play and player behaviour. We run a game user experiment to shed some light on this relationship and identify relevant differences between camera behaviours through different game sessions, playing behaviours and player gaze patterns. Results show that users can...

  1. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Romps, David [Univ. of California, Berkeley, CA (United States); Oktem, Rusen [Univ. of California, Berkeley, CA (United States)

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized and stereo calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pair, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory to cover the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined together to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.
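
    A minimal sketch of the triangulation step is given below, assuming the stereo calibration delivers a 3x4 projection matrix for each camera of a pair. The projection matrices and pixel coordinates shown are placeholders, not values from the handbook.

        import numpy as np

        def triangulate(P1: np.ndarray, P2: np.ndarray, x1, x2) -> np.ndarray:
            """Linear (DLT) triangulation of one point seen at x1 in camera 1 and x2 in camera 2."""
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]  # homogeneous -> Euclidean 3D point

        # Placeholder projection matrices; in practice they come from the stereo calibration.
        P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
        print(triangulate(P1, P2, (0.1, 0.2), (0.0, 0.2)))  # -> approximately [0.5, 1.0, 5.0]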

  2. Gamma camera performance: technical assessment protocol

    Energy Technology Data Exchange (ETDEWEB)

    Bolster, A.A. [West Glasgow Hospitals NHS Trust, London (United Kingdom). Dept. of Clinical Physics; Waddington, W.A. [University College London Hospitals NHS Trust, London (United Kingdom). Inst. of Nuclear Medicine

    1996-12-31

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author).

  3. Infrared radiation

    International Nuclear Information System (INIS)

    Moss, C.E.; Ellis, R.J.; Murray, W.E.; Parr, W.H.

    1989-01-01

    All people are exposed to IR radiation from sunlight, artificial light and radiant heating. Exposures to IR are quantified by irradiance and radiant exposure to characterize biological effects on the skin and cornea. However, near-IR exposure to the retina requires knowledge of the radiance of the IR source. With most IR sources in everyday use the health risks are considered minimal; only in certain high radiant work environments are individuals exposed to excessive levels. The interaction of IR radiation with biological tissues is mainly thermal. IR radiation may augment the biological response to other agents. The major health hazards are thermal injury to the eye and skin, including corneal burns from far-IR, heat stress, and retinal and lenticular injury from near-IR radiation. 59 refs, 13 figs, 2 tabs

  4. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R and D activities on high-speed video cameras which have been carried out at Kinki University for more than ten years and are currently being continued as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searching journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, an In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is under way and it will hopefully be fabricated in the near future. Epoch-making cameras developed by others in the history of high-speed video cameras are also briefly reviewed.

  5. Cloud Computing with Context Cameras

    Science.gov (United States)

    Pickles, A. J.; Rosing, W. E.

    2016-05-01

    We summarize methods and plans to monitor and calibrate photometric observations with our autonomous, robotic network of 2m, 1m and 40cm telescopes. These are sited globally to optimize our ability to observe time-variable sources. Wide field "context" cameras are aligned with our network telescopes and cycle every ˜2 minutes through BVr'i'z' filters, spanning our optical range. We measure instantaneous zero-point offsets and transparency (throughput) against calibrators in the 5-12m range from the all-sky Tycho2 catalog, and periodically against primary standards. Similar measurements are made for all our science images, with typical fields of view of ˜0.5 degrees. These are matched against Landolt, Stetson and Sloan standards, and against calibrators in the 10-17m range from the all-sky APASS catalog. Such measurements provide pretty good instantaneous flux calibration, often to better than 5%, even in cloudy conditions. Zero-point and transparency measurements can be used to characterize, monitor and inter-compare sites and equipment. When accurate calibrations of Target against Standard fields are required, monitoring measurements can be used to select truly photometric periods when accurate calibrations can be automatically scheduled and performed.
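
    The instantaneous zero-point and transparency bookkeeping described above can be sketched as follows, assuming catalog magnitudes and measured instrumental count rates for calibrator stars in the field. All numerical values, including the reference zero point, are illustrative and not taken from the abstract.

        import numpy as np

        def zero_point(counts_per_s: np.ndarray, catalog_mag: np.ndarray) -> float:
            """Median photometric zero point from calibrator stars in the field.

            m_catalog = zp - 2.5*log10(counts_per_s)  =>  zp = m_catalog + 2.5*log10(counts_per_s)
            """
            return float(np.median(catalog_mag + 2.5 * np.log10(counts_per_s)))

        # Illustrative calibrators (instrumental count rates and catalog magnitudes).
        counts = np.array([1.2e5, 4.5e4, 9.0e3])
        mags = np.array([9.1, 10.2, 11.9])
        zp_now = zero_point(counts, mags)

        # Transparency (throughput) relative to an assumed photometric reference zero point:
        transparency = 10 ** (-0.4 * (22.5 - zp_now))
        print(zp_now, transparency)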

  6. Feldspar, Infrared Stimulated Luminescence

    DEFF Research Database (Denmark)

    Jain, Mayank

    2014-01-01

    This entry primarily concerns the characteristics and the origins of infrared-stimulated luminescence in feldspars.

  7. Towards Adaptive Virtual Camera Control In Computer Games

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, we cluster gaze and camera information to identify camera behaviour profiles and we employ ... camera control in games is discussed.

  8. Development of non-destructive testing system of shoes for infrared rays

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Yeol; Park, Chang Sun; Oh, Ki Jang; Ma, Sang Dong; Kim, Bong Jae [Chosun Univesity, Kwangju (Korea, Republic of); Yang, Dong Jo [Research Institute of Industrial Science and Technology, Pohang (Korea, Republic of)

    2001-05-15

    Diagnosis and measurement using infrared thermal imaging have not previously been available for this application. Quick diagnosis and thermal analysis become possible when such a system is introduced into the inspection of each part. In this study, an infrared camera, the AGEMA Thermovision 900, was used for the investigation. An infrared camera detects only the infrared radiation emitted by the object in order to illustrate its temperature distribution. Infrared diagnosis systems can be applied to various fields, and defect discrimination can be automated and mechanized in a total inspection system for special shoes; this also makes the development and composition of the shoes total inspection system more effective. This study introduces a method for the non-destructive total inspection of special shoes, and the performance of the proposed method is demonstrated through thermal images.

  9. Reducing the Variance of Intrinsic Camera Calibration Results in the ROS Camera_Calibration Package

    Science.gov (United States)

    Chiou, Geoffrey Nelson

    The intrinsic calibration of a camera is the process in which the internal optical and geometric characteristics of the camera are determined. If accurate intrinsic parameters of a camera are known, the ray in 3D space that every point in the image lies on can be determined. Pairing with another camera allows the position of the points in the image to be calculated by intersection of the rays. Accurate intrinsics also allow the position and orientation of a camera relative to some world coordinate system to be calculated. These two reasons for having accurate intrinsic calibration for a camera are especially important in the field of industrial robotics, where 3D cameras are frequently mounted on the ends of manipulators. In the ROS (Robot Operating System) ecosystem, the camera_calibration package is the default standard for intrinsic camera calibration. Several researchers from the Industrial Robotics & Automation division at Southwest Research Institute have noted that this package results in large variances in the intrinsic parameters of the camera when calibrating across multiple attempts. There are also open issues on this matter in their public repository that have not been addressed by the developers. In this thesis, we confirm that the camera_calibration package does indeed return different results across multiple attempts, test several possible hypotheses as to why, identify the reason, and provide a simple solution to fix the cause of the issue.
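
    One way to quantify the variance the author describes is simply to repeat the calibration and compare the recovered focal lengths and principal points across attempts, as in the sketch below. The result table is a placeholder, not data from the thesis, and the statistics shown are not the camera_calibration package's own output.

        import numpy as np

        # Hypothetical intrinsic parameters (fx, fy, cx, cy) recovered from several
        # independent calibration attempts of the same camera, e.g. different image sets
        # fed to the calibration tool.
        attempts = np.array([
            [612.3, 611.8, 320.5, 241.2],
            [618.9, 618.1, 317.8, 244.0],
            [607.4, 606.9, 323.1, 238.7],
            [615.0, 614.6, 319.9, 242.5],
        ])

        mean = attempts.mean(axis=0)
        std = attempts.std(axis=0, ddof=1)
        for name, m, s in zip(["fx", "fy", "cx", "cy"], mean, std):
            # Large standard deviations relative to the mean indicate the instability
            # discussed above (e.g. poor target coverage or degenerate view selection).
            print(f"{name}: {m:.1f} +/- {s:.1f}")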

  10. Multi-camera synchronization core implemented on USB3 based FPGA platform

    Science.gov (United States)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency being controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables them to monitor the Master's line and frame periods and to adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of smaller than 3 mm diameter 3D stereo vision equipment in the medical endoscopic context, such as endoscopic surgical robotics or minimally invasive surgery.
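
    The regulation idea, adjusting a slave camera's supply voltage until its measured line period converges to the master's, can be illustrated with the minimal proportional-control sketch below. The gain, voltage limits and the toy sensor model are made up for the sketch and are not Awaiba's actual control core.

        # Minimal sketch of the frequency-locking idea: raise or lower a slave camera's
        # supply voltage until its line period matches the master's.
        KP = 0.002               # proportional gain, volts per microsecond of error (assumed)
        V_MIN, V_MAX = 1.6, 2.1  # allowed supply-voltage range (assumed limits)

        def simulated_line_period_us(supply_v: float) -> float:
            """Toy model: the self-timed sensor runs faster at higher supply voltage."""
            return 50.0 - 8.0 * (supply_v - 1.8)

        def control_step(supply_v: float, target_period_us: float) -> float:
            error = simulated_line_period_us(supply_v) - target_period_us
            supply_v += KP * error               # period longer than target -> raise voltage
            return min(max(supply_v, V_MIN), V_MAX)

        v = 1.7
        for _ in range(200):                     # iterate until the period locks to the master's
            v = control_step(v, target_period_us=48.0)
        print(v, simulated_line_period_us(v))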

  11. Extragalactic infrared astronomy

    International Nuclear Information System (INIS)

    Gondhalekar, P.M.

    1985-05-01

    The paper concerns the field of Extragalactic Infrared Astronomy, discussed at the Fourth RAL Workshop on Astronomy and Astrophysics. Fifteen papers were presented on infrared emission from extragalactic objects. Both ground-(and aircraft-) based and IRAS infrared data were reviewed. The topics covered star formation in galaxies, active galactic nuclei and cosmology. (U.K.)

  12. Behavioral Model of High Performance Camera for NIF Optics Inspection

    International Nuclear Information System (INIS)

    Hackel, B M

    2007-01-01

    The purpose of this project was to develop software that will model the behavior of the high performance Spectral Instruments 1000 series Charge-Coupled Device (CCD) camera located in the Final Optics Damage Inspection (FODI) system on the National Ignition Facility. NIF's target chamber will be mounted with 48 Final Optics Assemblies (FOAs) to convert the laser light from infrared to ultraviolet and focus it precisely on the target. Following a NIF shot, the optical components of each FOA must be carefully inspected for damage by the FODI to ensure proper laser performance during subsequent experiments. Rapid image capture and complex image processing (to locate damage sites) will reduce shot turnaround time; thus increasing the total number of experiments NIF can conduct during its 30 year lifetime. Development of these rapid processes necessitates extensive offline software automation -- especially after the device has been deployed in the facility. Without access to the unique real device or an exact behavioral model, offline software testing is difficult. Furthermore, a software-based behavioral model allows for many instances to be running concurrently; this allows multiple developers to test their software at the same time. Thus it is beneficial to construct separate software that will exactly mimic the behavior and response of the real SI-1000 camera

  13. Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products

    Science.gov (United States)

    Williams, Don; Burns, Peter D.

    2007-01-01

    There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that would suggest existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance. These are driven by physical and economic constraints, and image-capture conditions. Several ISO standards for resolution, well established for consumer digital cameras, require care when applied to the current generation of cell phone cameras. In particular, accommodation of optical flare, shading non-uniformity and distortion is recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.

  14. Thermography by Infrared

    International Nuclear Information System (INIS)

    Harara, W.; Allouch, Y.; Altahan, A.

    2015-08-01

    This study explains the principles of testing metallic components and structures by thermography using infrared waves. The study confirms that thermal-wave testing is one of the most important modern non-destructive testing methods; it is characterized by its economy, ease of application and the timely testing of components and metallic structures. This method is applicable to a wide variety of components, such as parts of aircraft, power plants, electric transmission lines and aerospace components, in order to verify their structure, fabrication quality and conformance to international standards. Testing components by thermography using infrared radiation is easy and rapid compared to other NDT methods. The study includes an introduction to the thermography testing method, its equipment, components and the applied technique. Finally, two practical applications are given to show the importance of this method in industry: determining the liquid level in a tank and testing the stability of an electrical supply control box. (author)

  15. Continuous-wave near-photon counting spectral imaging detector in the mid-infrared by upconversion

    DEFF Research Database (Denmark)

    Dam, Jeppe Seidelin; Tidemand-Lichtenberg, Peter; Pedersen, Christian

    2013-01-01

    Read noise is usually measured in number of electrons. The second noise source is usually referred to as dark noise, which is the background signal generated over time; dark noise is usually measured in electrons per pixel per second. For silicon cameras, certain models like EM-CCDs have close to zero read noise, whereas high-end IR cameras have read noise of hundreds of electrons. The dark noise for infrared cameras based on semiconductor materials is also substantially higher than for silicon cameras, typical values being millions of electrons per pixel per second for cryogenically cooled cameras, whereas Peltier-cooled CCD cameras have dark noise measured in fractions of electrons per pixel per second. An ideal solution thus suggests the combination of an efficient low-noise image wavelength conversion system with low-noise silicon-based cameras for low-noise imaging in the IR region. We discuss image...
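
    A back-of-envelope comparison of the total per-pixel noise for a semiconductor IR camera versus an upconversion-plus-silicon-CCD detector, using orders of magnitude similar to those quoted above, is sketched below. The exact figures, signal level and exposure time are illustrative assumptions.

        import math

        def total_noise_electrons(read_noise_e: float, dark_e_per_s: float,
                                  signal_e: float, exposure_s: float) -> float:
            """Photon shot noise + dark-current shot noise + read noise, added in quadrature."""
            dark_e = dark_e_per_s * exposure_s
            return math.sqrt(signal_e + dark_e + read_noise_e ** 2)

        signal, t = 1000.0, 1.0   # 1000 detected photo-electrons in a 1 s exposure

        ir_camera = total_noise_electrons(read_noise_e=300.0, dark_e_per_s=1e6,
                                          signal_e=signal, exposure_s=t)
        upconversion_ccd = total_noise_electrons(read_noise_e=5.0, dark_e_per_s=0.01,
                                                 signal_e=signal, exposure_s=t)

        print(f"cooled IR camera:    ~{ir_camera:.0f} e- total noise")
        print(f"upconversion + CCD:  ~{upconversion_ccd:.0f} e- total noise")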

  16. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.
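
    For reference, a minimal single-camera CamShift loop in OpenCV is sketched below; the camera index, the initial search window and the handover hook are placeholders, and the mobile-agent migration used in the paper is not shown.

        import cv2
        import numpy as np

        cap = cv2.VideoCapture(0)                       # hypothetical camera index
        ok, frame = cap.read()
        x, y, w, h = 300, 200, 100, 100                  # initial window around the object (assumed)
        track_window = (x, y, w, h)

        # Hue histogram of the initial region, used as the tracking model.
        roi = frame[y:y + h, x:x + w]
        hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv_roi, (0., 60., 32.), (180., 255., 255.))
        roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
        cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
        term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

        for _ in range(300):
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            dst = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
            rot_rect, track_window = cv2.CamShift(dst, track_window, term_crit)
            # If the object leaves the field of view, a handover to the neighbouring
            # camera would be triggered here (e.g. by migrating the tracker state).
            pts = cv2.boxPoints(rot_rect).astype(np.int32)
            cv2.polylines(frame, [pts], True, 255, 2)
        cap.release()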

  17. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-04-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables

  18. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-07-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a +-2-deg deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables
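
    The kinematic idea, solving for pan and tilt from homogeneous 4x4 transformation matrices supplied by the position sensors, can be illustrated with the numpy sketch below. The frame conventions and example pose are assumptions for the sketch and do not reproduce the original ORNL implementation.

        import numpy as np

        def pan_tilt_to_target(T_world_cam: np.ndarray, target_world: np.ndarray) -> tuple:
            """Pan/tilt angles (radians) that point the camera's +z axis at a world-frame target.

            T_world_cam is the 4x4 homogeneous pose of the camera in the world frame,
            e.g. assembled from the positioner's joint sensors (assumed convention:
            x right, y down, z forward in the camera frame).
            """
            T_cam_world = np.linalg.inv(T_world_cam)
            p = T_cam_world @ np.append(target_world, 1.0)    # target in the camera frame
            pan = np.arctan2(p[0], p[2])                       # rotation about the camera y axis
            tilt = np.arctan2(-p[1], np.hypot(p[0], p[2]))     # elevation about the camera x axis
            return pan, tilt

        # Example: camera at (0, 0, 1) m with identity orientation, end-effector at (0.5, 0, 3) m.
        T = np.eye(4); T[2, 3] = 1.0
        print(np.degrees(pan_tilt_to_target(T, np.array([0.5, 0.0, 3.0]))))  # ~(14.0, 0.0) degrees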

  19. New camera systems for fuel services

    International Nuclear Information System (INIS)

    Hummel, W.; Beck, H.J.

    2010-01-01

    AREVA NP Fuel Services have many years of experience in visual examination and measurements on fuel assemblies and associated core components, using state-of-the-art cameras and measuring technologies. The techniques used allow the surface and dimensional characterization of materials and shapes by visual examination. New, enhanced and sophisticated technologies for fuel services include, for example, two shielded color camera systems for use under water and for close inspection of a fuel assembly. Nowadays the market requirements for detecting and characterizing small defects (smaller than a tenth of a millimetre) or cracks and for analyzing surface appearances on an irradiated fuel rod cladding or fuel assembly structural parts have increased. Therefore it is common practice to use movie cameras with higher resolution. The radiation resistance of high-resolution CCD cameras is in general very low, and it is not possible to use them unshielded close to a fuel assembly. By extending the camera with a mirror system and shielding around the sensitive parts, the movie camera can be utilized for fuel assembly inspection. AREVA NP Fuel Services is now equipped with such movie cameras. (orig.)

  20. First results from the TOPSAT camera

    Science.gov (United States)

    Greenway, Paul; Tosh, Ian; Morris, Nigel; Burton, Gary; Cawley, Steve

    2017-11-01

    The TopSat camera is a low cost remote sensing imager capable of producing 2.5 metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. The instrument was designed and assembled at the Space Science & Technology Department of the CCLRC's Rutherford Appleton Laboratory (RAL) in the UK, and was launched on the 27th October 2005 from Plesetsk Cosmodrome in Northern Russia on a Kosmos-3M. The camera utilises an off-axis three mirror system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping the costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, which is a collaboration between four UK organisations; QinetiQ, Surrey Satellite Technology Ltd (SSTL), RAL and Infoterra. Its objective is to demonstrate provision of rapid response high resolution imagery to fixed and mobile ground stations using a low cost minisatellite. The paper "Development of the TopSat Camera" presented by RAL at the 5th ICSO in 2004 described the opto-mechanical design, assembly, alignment and environmental test methods implemented. Now that the spacecraft is in orbit and successfully acquiring images, this paper presents the first results from the camera and makes an initial assessment of the camera's in-orbit performance.

  1. Thermal infrared imaging of the variability of canopy-air temperature difference distribution for heavy metal stress levels discrimination in rice

    Science.gov (United States)

    Zhang, Biyao; Liu, Xiangnan; Liu, Meiling; Wang, Dongmin

    2017-04-01

    This paper addresses the assessment and interpretation of the canopy-air temperature difference (Tc-Ta) distribution as an indicator for discriminating between heavy metal stress levels. The Tc-Ta distribution is simulated by coupling the energy balance equation with a modified leaf angle distribution. Statistical indices including the average value (AVG), standard deviation (SD), median, and span of Tc-Ta in the field of view of a digital thermal imager are calculated to describe the Tc-Ta distribution quantitatively and, consequently, serve as the stress indicators. In the application, two rice-growing sites under "mild" and "severe" stress levels were selected as study areas. A total of 96 thermal images obtained from the field measurements in the three growth stages were used for a separate application of the theoretical variation of the Tc-Ta distribution. The results demonstrated that the statistical indices calculated from both simulated and measured data exhibited an upward trend as the stress level became more serious, because heavy metal stress raises the temperature of only a portion of the leaves in the canopy. Meteorological factors could barely affect the sensitivity of the statistical indices, with the exception of wind speed. Among the statistical indices, AVG and SD were demonstrated to be better indicators for stress level discrimination.
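
    The four statistical indices can be computed directly from a radiometric thermal image of the canopy together with the simultaneous air temperature, as in the sketch below. The synthetic pixel values and the fraction of stressed leaves are illustrative, not data from the paper.

        import numpy as np

        def tc_ta_indices(canopy_temp_c: np.ndarray, air_temp_c: float) -> dict:
            """AVG, SD, median and span of the canopy-air temperature difference (Tc-Ta)."""
            diff = canopy_temp_c - air_temp_c
            return {
                "AVG": float(diff.mean()),
                "SD": float(diff.std(ddof=1)),
                "median": float(np.median(diff)),
                "span": float(diff.max() - diff.min()),
            }

        # Synthetic canopy pixels: stressed leaves close their stomata and warm up,
        # which widens the spread of Tc-Ta even though only part of the canopy is affected.
        rng = np.random.default_rng(0)
        healthy = rng.normal(27.0, 0.3, 5000)     # leaves transpiring normally
        stressed = rng.normal(29.5, 0.6, 1500)    # a portion of leaves under stress
        canopy = np.concatenate([healthy, stressed])
        print(tc_ta_indices(canopy, air_temp_c=28.0))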

  2. Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3d Models?

    Science.gov (United States)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2018-05-01

    360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.

  3. Multi-MGy Radiation Hardened Camera for Nuclear Facilities

    International Nuclear Information System (INIS)

    Girard, Sylvain; Boukenter, Aziz; Ouerdane, Youcef; Goiffon, Vincent; Corbiere, Franck; Rolando, Sebastien; Molina, Romain; Estribeau, Magali; Avon, Barbara; Magnan, Pierre; Paillet, Philippe; Duhamel, Olivier; Gaillardin, Marc; Raine, Melanie

    2015-01-01

    There is an increasing interest in developing cameras for surveillance systems to monitor nuclear facilities or nuclear waste storages. Particularly, for today's and the next generation of nuclear facilities, the increased safety requirements following the Fukushima Daiichi disaster have to be considered. For some applications, radiation tolerance needs to reach doses in the MGy(SiO2) range, whereas the most tolerant commercial or prototype products based on solid-state image sensors withstand doses up to a few kGy. The objective of this work is to present the radiation hardening strategy developed by our research groups to enhance the tolerance to ionizing radiation of the various subparts of these imaging systems by working simultaneously at the component and system design levels. Developing a radiation-hardened camera implies combining several radiation-hardening strategies. In our case, we decided not to use the simplest one, the shielding approach. This approach is efficient but limits camera miniaturization and is not compatible with future integration in remote-handling or robotic systems. Therefore, the hardening-by-component strategy appears mandatory to avoid the failure of one of the camera subparts at doses lower than the MGy. Concerning the image sensor itself, the technology used is a CMOS Image Sensor (CIS) designed by the ISAE team with custom pixel designs used to mitigate the total ionizing dose (TID) effects that occur well below the MGy range in classical image sensors (e.g. Charge Coupled Devices (CCD), Charge Injection Devices (CID) and classical Active Pixel Sensors (APS)), such as the complete loss of functionality, the dark current increase and the gain drop. We will present at the conference a comparative study of the radiation responses of these radiation-hardened pixels with respect to conventional ones, demonstrating the efficiency of the choices made. The targeted strategy to develop the complete radiation hard camera

  4. Multi-MGy Radiation Hardened Camera for Nuclear Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Girard, Sylvain; Boukenter, Aziz; Ouerdane, Youcef [Universite de Saint-Etienne, Lab. Hubert Curien, UMR-CNRS 5516, F-42000 Saint-Etienne (France); Goiffon, Vincent; Corbiere, Franck; Rolando, Sebastien; Molina, Romain; Estribeau, Magali; Avon, Barbara; Magnan, Pierre [ISAE, Universite de Toulouse, F-31055 Toulouse (France); Paillet, Philippe; Duhamel, Olivier; Gaillardin, Marc; Raine, Melanie [CEA, DAM, DIF, F-91297 Arpajon (France)

    2015-07-01

    There is an increasing interest in developing cameras for surveillance systems to monitor nuclear facilities or nuclear waste storages. Particularly, for today's and the next generation of nuclear facilities, the increased safety requirements following the Fukushima Daiichi disaster have to be considered. For some applications, radiation tolerance needs to reach doses in the MGy(SiO2) range, whereas the most tolerant commercial or prototype products based on solid-state image sensors withstand doses up to a few kGy. The objective of this work is to present the radiation hardening strategy developed by our research groups to enhance the tolerance to ionizing radiation of the various subparts of these imaging systems by working simultaneously at the component and system design levels. Developing a radiation-hardened camera implies combining several radiation-hardening strategies. In our case, we decided not to use the simplest one, the shielding approach. This approach is efficient but limits camera miniaturization and is not compatible with future integration in remote-handling or robotic systems. Therefore, the hardening-by-component strategy appears mandatory to avoid the failure of one of the camera subparts at doses lower than the MGy. Concerning the image sensor itself, the technology used is a CMOS Image Sensor (CIS) designed by the ISAE team with custom pixel designs used to mitigate the total ionizing dose (TID) effects that occur well below the MGy range in classical image sensors (e.g. Charge Coupled Devices (CCD), Charge Injection Devices (CID) and classical Active Pixel Sensors (APS)), such as the complete loss of functionality, the dark current increase and the gain drop. We will present at the conference a comparative study of the radiation responses of these radiation-hardened pixels with respect to conventional ones, demonstrating the efficiency of the choices made. The targeted strategy to develop the complete radiation hard camera

  5. Forward looking anomaly detection via fusion of infrared and color imagery

    Science.gov (United States)

    Stone, K.; Keller, J. M.; Popescu, M.; Havens, T. C.; Ho, K. C.

    2010-04-01

    This paper develops algorithms for the detection of interesting and abnormal objects in color and infrared imagery taken from cameras mounted on a moving vehicle, observing a fixed scene. The primary purpose of detection is to cue a human-in-the-loop detection system. Algorithms for direct detection and change detection are investigated, as well as fusion of the two. Both methods use temporal information to reduce the number of false alarms. The direct detection algorithm uses image self-similarity computed between local neighborhoods to determine interesting, or unique, parts of an image. Neighborhood similarity is computed using Euclidean distance in CIELAB color space for the color imagery, and Euclidean distance between grey levels in the infrared imagery. The change detection algorithm uses the affine scale-invariant feature transform (ASIFT) to transform multiple background frames into the current image space. Each transformed image is then compared to the current image, and the multiple outputs are fused to produce a single difference image. Changes in lighting and contrast between the background run and the current run are adjusted for in both color and infrared imagery. Frame-to-frame motion is modeled using a perspective transformation, the parameters of which are computed using scale-invariant feature transform (SIFT) keypoint correspondences. This information is used to perform temporal accumulation of single frame detections for both the direct detection and change detection algorithms. Performance of the proposed algorithms is evaluated on multiple lanes from a data collection at a US Army test site.
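
    A toy version of the self-similarity idea is sketched below: each patch is scored by its mean Euclidean distance (here on grey levels, as used for the infrared channel) to patches sampled elsewhere in the same frame, so unique regions receive high scores. The patch size, sample count and synthetic frame are illustrative and are not the authors' implementation.

        import numpy as np

        def self_similarity_scores(img: np.ndarray, patch: int = 8, samples: int = 200,
                                   rng=np.random.default_rng(1)) -> np.ndarray:
            """Uniqueness score per patch: mean distance to randomly sampled reference patches."""
            h, w = img.shape
            ys = np.arange(0, h - patch, patch)
            xs = np.arange(0, w - patch, patch)
            # Reference patches drawn from random positions in the same frame.
            refs = np.stack([img[y:y + patch, x:x + patch].ravel()
                             for y, x in zip(rng.integers(0, h - patch, samples),
                                             rng.integers(0, w - patch, samples))])
            scores = np.zeros((len(ys), len(xs)))
            for i, y in enumerate(ys):
                for j, x in enumerate(xs):
                    p = img[y:y + patch, x:x + patch].ravel()
                    scores[i, j] = np.linalg.norm(refs - p, axis=1).mean()
            return scores  # high values mark patches unlike the rest of the scene

        # Synthetic infrared frame: flat background with one small warm (anomalous) object.
        frame = np.full((128, 128), 100.0) + np.random.default_rng(0).normal(0, 2, (128, 128))
        frame[60:68, 60:68] += 40.0
        s = self_similarity_scores(frame)
        print(np.unravel_index(np.argmax(s), s.shape))  # grid index of the most unusual patch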

  6. Ground-based infrared surveys: imaging the thermal fields at volcanoes and revealing the controlling parameters.

    Science.gov (United States)

    Pantaleo, Michele; Walter, Thomas

    2013-04-01

    Temperature monitoring is a widespread procedure in the frame of volcano hazard monitoring. Indeed temperature changes are expected to reflect changes in volcanic activity. We propose a new approach, within the thermal monitoring, which is meant to shed light on the parameters controlling the fluid pathways and the fumarole sites by using infrared measurements. Ground-based infrared cameras allow one to remotely image the spatial distribution, geometric pattern and amplitude of fumarole fields on volcanoes at metre to centimetre resolution. Infrared mosaics and time series are generated and interpreted, by integrating geological field observations and modeling, to define the setting of the volcanic degassing system at shallow level. We present results for different volcano morphologies and show that lithology, structures and topography control the appearance of fumarole field by the creation of permeability contrasts. We also show that the relative importance of those parameters is site-dependent. Deciphering the setting of the degassing system is essential for hazard assessment studies because it would improve our understanding on how the system responds to endogenous or exogenous modification.

  7. Novel mid-infrared imaging system based on single-mode quantum cascade laser illumination and upconversion

    DEFF Research Database (Denmark)

    Tomko, Jan; Junaid, Saher; Tidemand-Lichtenberg, Peter

    2017-01-01

    Compared to the visible or near-infrared (NIR) spectral regions, there is a lack of very high sensitivity detectors in the mid-infrared (MIR) that operate near room temperature. Upconversion of the MIR light to NIR light that is imaged using affordable, fast, and sensitive NIR detectors or camera...

  8. Fuzzy logic control for camera tracking system

    Science.gov (United States)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.

  9. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different....... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios...

  10. Scintillation camera with second order resolution

    International Nuclear Information System (INIS)

    Muehllehner, G.

    1976-01-01

    A scintillation camera for use in radioisotope imaging to determine the concentration of radionuclides in a two-dimensional area is described in which means is provided for second order positional resolution. The phototubes, which normally provide only a single order of resolution, are modified to provide second order positional resolution of radiation within an object positioned for viewing by the scintillation camera. The phototubes are modified in that multiple anodes are provided to receive signals from the photocathode in a manner such that each anode is particularly responsive to photoemissions from a limited portion of the photocathode. Resolution of radioactive events appearing as an output of this scintillation camera is thereby improved

  11. First Light with a 67-Million-Pixel WFI Camera

    Science.gov (United States)

    1999-01-01

    optical astronomical instruments - the "Charge-Coupled Devices (CCD's)" - are currently restricted to about 4000 x 4000 pixels. For the time being, the only possible way towards even larger detector areas is by assembling mosaics of CCD's. ESO , MPI-A and OAC have therefore undertaken a joint project to build a new and large astronomical camera with a mosaic of CCD's. This new Wide Field Imager (WFI) comprises eight CCD's with high sensitivity from the ultraviolet to the infrared spectral domain, each with 2046 x 4098 pixels. Mounted behind an advanced optical system at the Cassegrain focus of the 2.2-m telescope of the Max-Planck-Gesellschaft (MPG) at ESO's La Silla Observatory in Chile, the combined 8184 x 8196 = 67,076,064 pixels cover a square field-of-view with an edge of more than half a degree (over 30 arcmin) [1]. Compared to the viewing field of the human eye, this may still appear small, but in the domain of astronomical instrumentation, it is indeed a large step forward. For comparison, the largest field-of-view with the FORS1 instrument at the VLT is about 7 arcmin. Moreover, the level of detail detectable with the WFI (theoretical image sharpness) exceeds what is possible with the naked eye by a factor of about 10,000. The WFI project was completed in only two years in response to a recommendation to ESO by the "La Silla 2000" Working Group and the Scientific-Technical Committee (STC) to offer this type of instrument to the community. The MPI-A proposed to build such an instrument for the MPG/ESO 2.2-m telescope and a joint project was soon established. A team of astronomers from the three institutions is responsible for the initial work with the WFI at La Silla. A few other Cameras of this size are available, e.g. at Hawaii, Kitt Peak (USA) and Cerro Tololo (Chile), but this is the first time that a telescope this large has been fully dedicated to wide-field imaging with an 8kx8k CCD. The first WFI images Various exposures were obtained during the early

  12. Infrared Fe II lines in Eta Carinae and a possible interpretation of infrared excesses

    International Nuclear Information System (INIS)

    Thackeray, A.D.

    1978-01-01

    The identification of very strong emission lines in the near infrared spectrum of Eta Carinae with newly recognised high-level transitions of Fe II raises the possibility that the infrared excesses of hot emission-line stars may be due to dielectronic recombination of Fe II. Johansson's Fe II lines also need to be considered in the interpretation of the infrared spectra of supernovae. (author)

  13. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    OpenAIRE

    Orts-Escolano, Sergio; Garcia-Rodriguez, Jose; Morell, Vicente; Cazorla, Miguel; Azorin-Lopez, Jorge; García-Chamizo, Juan Manuel

    2014-01-01

    In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphic processor units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, analysis and monitoring of the movement. These features allow the construction of a robust representation of the environment and interpret the behavior of mob...

  14. Comparison of low-cost handheld retinal camera and traditional table top retinal camera in the detection of retinal features indicating a risk of cardiovascular disease

    Science.gov (United States)

    Joshi, V.; Wigdahl, J.; Nemeth, S.; Zamora, G.; Ebrahim, E.; Soliz, P.

    2018-02-01

    Retinal abnormalities associated with hypertensive retinopathy are useful in assessing the risk of cardiovascular disease, heart failure, and stroke. Assessing these risks as part of primary care can lead to a decrease in the incidence of cardiovascular disease-related deaths. Primary care is a resource limited setting where low cost retinal cameras may bring needed help without compromising care. We compared a low-cost handheld retinal camera to a traditional table top retinal camera on their optical characteristics and performance to detect hypertensive retinopathy. A retrospective dataset of N=40 subjects (28 with hypertensive retinopathy, 12 controls) was used from a clinical study conducted at a primary care clinic in Texas. Non-mydriatic retinal fundus images were acquired using a Pictor Plus hand held camera (Volk Optical Inc.) and a Canon CR1-Mark II tabletop camera (Canon USA) during the same encounter. The images from each camera were graded by a licensed optometrist according to the universally accepted Keith-Wagener-Barker Hypertensive Retinopathy Classification System, three weeks apart to minimize memory bias. The sensitivity of the hand-held camera to detect any level of hypertensive retinopathy was 86% compared to the Canon. Insufficient photographer's skills produced 70% of the false negative cases. The other 30% were due to the handheld camera's insufficient spatial resolution to resolve the vascular changes such as minor A/V nicking and copper wiring, but these were associated with non-referable disease. Physician evaluation of the performance of the handheld camera indicates it is sufficient to provide high risk patients with adequate follow up and management.

  15. Optomechanical System Development of the AWARE Gigapixel Scale Camera

    Science.gov (United States)

    Son, Hui S.

    Electronic focal plane arrays (FPA) such as CMOS and CCD sensors have dramatically improved to the point that digital cameras have essentially phased out film (except in very niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors and their ability to capture digital signals that can be processed and manipulated post acquisition offers much more freedom of design at system levels and opens up many interesting possibilities for the next generation of computational imaging systems. The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off the shelf (OTS) flat sensor technology. This dissertation details developments and physical implementations of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we have developed through the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.

  16. A user-friendly technical set-up for infrared photography of forensic findings.

    Science.gov (United States)

    Rost, Thomas; Kalberer, Nicole; Scheurer, Eva

    2017-09-01

    Infrared photography is of interest for forensic science and forensic medicine since it reveals findings that are almost invisible to the human eye. Originally, infrared photography was made possible by an infrared transmission filter screwed onto the front of the camera's objective lens. However, this set-up is associated with many drawbacks, such as the loss of the autofocus function, the need for an external infrared source, and long exposure times that make the use of a tripod necessary. These limitations have so far prevented the routine application of infrared photography in forensics. In this study, the use of a professional modification inside the digital camera body was evaluated with regard to camera handling and image quality. This permanent modification consisted of replacing the built-in infrared blocking filter with an infrared transmission filter of 700 nm and 830 nm, respectively. The application of this camera set-up to the photo-documentation of forensically relevant post-mortem findings was investigated in examples of trace evidence such as gunshot residues on the skin, in external findings, e.g. hematomas, as well as in an exemplary internal finding, i.e. Wischnewski spots in a putrefied stomach. Scattered light created by indirect flash yielded a more uniform illumination of the object, and the 700 nm filter produced better pictures than the 830 nm filter. Compared to pictures taken under visible light, infrared photographs generally yielded better contrast. This allowed more details to be discerned and revealed findings that were not otherwise visible, such as imprints on a fabric and tattoos in mummified skin. The permanent modification of a digital camera by building in a 700 nm infrared transmission filter resulted in a user-friendly and efficient set-up suitable for daily forensic routine. Its main advantages were a clear picture in the viewfinder and a functioning autofocus.

  17. Narrowband infrared emitters for combat ID

    Science.gov (United States)

    Pralle, Martin U.; Puscasu, Irina; Daly, James; Fallon, Keith; Loges, Peter; Greenwald, Anton; Johnson, Edward

    2007-04-01

    There is a strong desire to create narrowband infrared light sources as personnel beacons for use in infrared Identify Friend or Foe (IFF) systems. This demand has grown dramatically in recent years with reports of friendly-fire casualties in Afghanistan and Iraq. ICx Photonics' photonic crystal enhanced (PCE™) infrared emitter technology affords the possibility of creating narrowband IR light sources tuned to specific IR wavebands (near: 1-2 microns, mid: 3-5 microns, and long: 8-12 microns), making it the ideal solution for infrared IFF. This technology is based on a metal-coated 2D photonic crystal of air holes in a silicon substrate. Upon thermal excitation, the photonic crystal modifies the emitted spectrum, yielding narrowband IR light with a center wavelength commensurate with the periodicity of the lattice. We have integrated this technology with microhotplate MEMS devices to yield 15 mW IR light sources in the 3-5 micron waveband with wall-plug efficiencies in excess of 10%, two orders of magnitude more efficient than conventional IR LEDs. We have further extended this technology into the LWIR with a light source that produces 9 mW of 8-12 micron light at an efficiency of 8%. Viewing distances >500 meters were observed with fielded camera technologies, ideal for ground-to-ground troop identification. When grouped into an emitter panel, the viewing distances were extended to 5 miles, ideal for ground-to-air identification.
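
    For context on these efficiency figures, the sketch below integrates Planck's law numerically to estimate what fraction of a plain blackbody's total emission falls in the 3-5 micron band; the photonic crystal's contribution is to concentrate the emitted power into such a band. The 700 K operating temperature used here is an assumption for illustration, not a value quoted in the abstract.

```python
import numpy as np

# Physical constants (SI units)
H = 6.62607015e-34       # Planck constant, J s
C = 2.99792458e8         # speed of light, m/s
KB = 1.380649e-23        # Boltzmann constant, J/K
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_exitance(lam: np.ndarray, temp: float) -> np.ndarray:
    """Blackbody spectral exitance in W m^-3 (per metre of wavelength)."""
    return 2.0 * np.pi * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * temp))

def in_band_fraction(lam_lo: float, lam_hi: float, temp: float) -> float:
    """Fraction of total blackbody emission between lam_lo and lam_hi (metres)."""
    lam = np.linspace(lam_lo, lam_hi, 2000)
    m = planck_exitance(lam, temp)
    band = np.sum(0.5 * (m[1:] + m[:-1]) * np.diff(lam))  # trapezoidal rule
    return band / (SIGMA * temp**4)

if __name__ == "__main__":
    # Assumed microhotplate temperature of 700 K (illustrative only).
    print(f"3-5 micron fraction at 700 K: {in_band_fraction(3e-6, 5e-6, 700.0):.1%}")
```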

  18. CINE: Comet INfrared Excitation

    Science.gov (United States)

    de Val-Borro, Miguel; Cordiner, Martin A.; Milam, Stefanie N.; Charnley, Steven B.

    2017-08-01

    CINE calculates infrared pumping efficiencies that can be applied to the most common molecules found in cometary comae, such as water, hydrogen cyanide, or methanol. One of the main mechanisms for molecular excitation in comets is fluorescence driven by solar radiation, followed by radiative decay to the ground vibrational state. This command-line tool calculates the effective pumping rates for rotational levels in the ground vibrational state, scaled by the heliocentric distance of the comet. Fluorescence coefficients are useful for modeling rotational emission lines observed in cometary spectra at sub-millimeter wavelengths. Combined with computational methods that solve the radiative transfer equations, based e.g. on the Monte Carlo algorithm, this model can retrieve production rates and rotational temperatures from the observed emission spectrum.
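
    The heliocentric scaling mentioned above follows from the solar flux falling off as the inverse square of the comet's distance from the Sun. The sketch below illustrates that scaling for a pumping coefficient specified at 1 au; it is a generic illustration under that assumption and does not call into the CINE package itself.

```python
def scaled_pumping_rate(g_1au: float, r_h_au: float) -> float:
    """Effective infrared pumping rate at heliocentric distance r_h (au),
    assuming the rate scales with the solar flux, i.e. as 1 / r_h**2.
    g_1au is the pumping coefficient at 1 au, in s^-1."""
    return g_1au / r_h_au**2

if __name__ == "__main__":
    g_1au = 2.0e-4  # hypothetical pumping coefficient at 1 au, s^-1
    for r_h in (0.5, 1.0, 2.0):
        print(f"r_h = {r_h:.1f} au -> g = {scaled_pumping_rate(g_1au, r_h):.2e} s^-1")
```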

  19. C.C.D. readout of a picosecond streak camera with an intensified C.C.D

    International Nuclear Information System (INIS)

    Lemonier, M.; Richard, J.C.; Cavailler, C.; Mens, A.; Raze, G.

    1984-08-01

    This paper deals with a digital streak-camera readout device. The device consists of a low-light-level television camera, made of a solid-state C.C.D. array coupled to an image intensifier, associated with a video digitizer and a microcomputer system. The streak-camera images are picked up as a video signal, digitized, and stored. This system allows fast recording and automatic processing of the data provided by the streak tube.

  20. Poor Man's Virtual Camera: Real-Time Simultaneous Matting and Camera Pose Estimation.

    Science.gov (United States)

    Szentandrasi, Istvan; Dubska, Marketa; Zacharias, Michal; Herout, Adam

    2016-03-18

    Today's film and advertisement production relies heavily on computer graphics combined with live actors via chroma keying. The matchmoving process typically requires considerable manual effort. Semi-automatic matchmoving tools exist as well, but they still work offline and require manual checking and correction. In this article, we propose an instant matchmoving solution for the green screen. It uses a recent technique of planar uniform marker fields. Our technique can be used in indie and professional filmmaking as a cheap and ultramobile virtual camera, and for shot prototyping and storyboard creation. The matchmoving technique, based on marker fields in shades of green, is computationally very efficient: we developed, and present in the article, a mobile application running at 33 FPS. Our technique is thus available to anyone with a smartphone, at low cost and with easy setup, opening up new levels of creative expression for filmmakers.
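
    As a rough illustration of the two tasks the article couples, the sketch below keys out a green background with a simple HSV threshold and recovers a camera pose from the corners of a single known planar square using OpenCV's solvePnP. The threshold values, marker size, camera intrinsics, and detected corner coordinates are placeholder assumptions, and the uniform marker field detection described in the article is not reproduced here.

```python
import cv2
import numpy as np

def green_matte(frame_bgr: np.ndarray) -> np.ndarray:
    """Binary foreground matte from a naive HSV green-screen threshold."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    green = cv2.inRange(hsv, (40, 60, 60), (80, 255, 255))  # assumed green hue range
    return cv2.bitwise_not(green)  # foreground = everything that is not green

def pose_from_planar_square(image_pts: np.ndarray, square_size: float,
                            camera_matrix: np.ndarray):
    """Camera pose (rvec, tvec) from the four image corners of one planar square."""
    s = square_size
    object_pts = np.array([[0, 0, 0], [s, 0, 0], [s, s, 0], [0, s, 0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, camera_matrix, None)
    if not ok:
        raise RuntimeError("pose estimation failed")
    return rvec, tvec

if __name__ == "__main__":
    # Placeholder intrinsics and detected corner positions (pixels).
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    corners = np.array([[300, 200], [380, 205], [375, 285], [295, 280]], dtype=np.float32)
    rvec, tvec = pose_from_planar_square(corners, square_size=0.05, camera_matrix=K)
    print("rotation (Rodrigues):", rvec.ravel(), "translation (m):", tvec.ravel())
```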