WorldWideScience

Sample records for project micro camera

  1. Homogeneity corrections in the Anger camera with micro-Z processor

    International Nuclear Information System (INIS)

    Knoop, B.; Jordan, K.

    1979-01-01

    A series of measurements largely covering the range of clinical uses of the Anger camera was carried out to investigate the mode of action of the inhomogeneity correction performed by the micro-Z processor. The varying boundary conditions encountered when measuring in patients were simulated as closely as possible by selecting suitable measuring arrangements. The results confirm both the concepts outlined above on the causes of inhomogeneity in the Anger camera and the suitability of the methods applied in the micro-Z processor for inhomogeneity correction under clinical conditions. (orig./HP)

  2. The application of micro UAV in construction project

    Science.gov (United States)

    Kaamin, Masiri; Razali, Siti Nooraiin Mohd; Ahmad, Nor Farah Atiqah; Bukari, Saifullizan Mohd; Ngadiman, Norhayati; Kadir, Aslila Abd; Hamid, Nor Baizura

    2017-10-01

    Every outstanding construction project relies on effective construction management, which allows the project to be implemented according to plan. Every construction project must record its development progress, which is usually done by the site engineer; documenting the progress of works is one of the requirements of construction management, and a progress report needs visual images as evidence. The conventional method of photographing a construction site is to use a common digital camera, which has several drawbacks compared with Micro Unmanned Aerial Vehicles (UAVs). In addition, site engineers face ongoing limitations in monitoring high reach points and the entire view of the construction site. The purpose of this paper is to provide a concise review of Micro UAV technology for monitoring progress on construction sites through a visualization approach. The aim of this study is to replace the conventional method of photographing construction sites with a Micro UAV, which can portray the whole view of the building, especially at high reach points, produce better images, videos and 3D models, and help the site engineer monitor work in progress. The Micro UAV was flown around the building construction according to Ground Control Points (GCPs) to capture images and record videos. The images taken by the Micro UAV were processed to generate a 3D model and analysed to visualize the building construction, monitor construction progress, and provide immediate, reliable data for project estimation. It has been shown that the better images and videos obtained with a Micro UAV give a better overview of the construction site and reveal defects in high-reach building structures. Moreover, with a Micro UAV the progress of the construction site is tracked more efficiently and kept on schedule.

  3. MicroCameras and Photometers (MCP) on board the TARANIS satellite

    Science.gov (United States)

    Farges, T.; Hébert, P.; Le Mer-Dachard, F.; Ravel, K.; Gaillac, S.

    2017-12-01

    TARANIS (Tool for the Analysis of Radiations from lightNing and Sprites) is a CNES micro-satellite. Its main objective is to study impulsive transfers of energy between the Earth's atmosphere and the space environment. It will be sun-synchronous at an altitude of 700 km. It will be launched in 2019 for at least 2 years. Its payload is composed of several electromagnetic instruments covering different wavelengths (from gamma rays to radio waves, including optical). TARANIS instruments are currently in the calibration and qualification phase. The purpose here is to present the MicroCameras and Photometers (MCP) design, to show its performance after its recent characterization, and finally to discuss the scientific objectives and how we intend to address them with MCP observations. The MicroCameras, developed by Sodern, are dedicated to the spatial description of TLEs and their parent lightning. They are able to differentiate sprites and lightning thanks to two narrow bands ([757-767 nm] and [772-782 nm]) that provide simultaneous pairs of images of an event. Simulation results of the differentiation method will be shown. After calibration and tests, the MicroCameras have now been delivered to the CNES for integration on the payload. The Photometers, developed by Bertin Technologies, will provide temporal measurements and spectral characteristics of TLEs and lightning. They are key instruments because of their capability to detect TLEs on board and then switch all the instruments of the scientific payload into their high-resolution acquisition mode. The Photometers use four spectral bands ([170-260 nm], [332-342 nm], [757-767 nm] and [600-900 nm]) and have the same field of view as the cameras. The remote-controlled parameters of the on-board TLE detection algorithm have been tuned before launch using the electronic board and simulated or real event waveforms. After calibration, the Photometers are now going through the environmental tests. They will be delivered to the CNES for integration on the payload.

  4. Calibration method for projector-camera-based telecentric fringe projection profilometry system.

    Science.gov (United States)

    Liu, Haibo; Lin, Huijing; Yao, Linshen

    2017-12-11

    By combining a fringe projection setup with a telecentric lens, a fringe pattern can be projected and imaged within a small area, making it possible to measure the three-dimensional (3D) surfaces of micro-components. This paper focuses on the flexible calibration of a fringe projection profilometry (FPP) system using a telecentric lens. An analytical telecentric projector-camera calibration model is introduced, in which the rig structure parameters remain invariant for all views, and the 3D calibration target can be located on the projector image plane with sub-pixel precision. Based on the presented calibration model, a two-step calibration procedure is proposed. First, the initial parameters, e.g., the projector-camera rig, the projector intrinsic matrix, and the coordinates of the control points of a 3D calibration target, are estimated using the affine camera factorization calibration method. Second, a bundle adjustment algorithm with multiple simultaneous views is applied to refine the calibrated parameters, especially the rig structure parameters and the coordinates of the control points of the 3D target. Because the control points are determined during calibration, there is no need for an accurate 3D reference target, which is costly and extremely difficult to fabricate, particularly for the tiny objects used to calibrate a telecentric FPP system. Real experiments were performed to validate the performance of the proposed calibration method. The test results showed that the proposed approach is accurate and reliable.
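
    As a rough illustration of the telecentric (affine) projection model underlying the first calibration step, the sketch below fits a 2×4 affine camera matrix to synthetic point correspondences by linear least squares. All names and numbers are illustrative assumptions, not values from the paper.

```python
import numpy as np

def telecentric_project(m, R, tuv, pts3d):
    """Telecentric (orthographic) projection: no perspective divide,
    only the first two rows of the rotation matter."""
    return m * (pts3d @ R[:2].T) + tuv

def fit_affine_camera(pts3d, uv):
    """Linear least-squares fit of the 2x4 affine projection matrix."""
    A = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    P, *_ = np.linalg.lstsq(A, uv, rcond=None)  # solves A @ P ~= uv
    return P.T                                   # 2x4 affine matrix

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, (20, 3))               # synthetic control points
angle = 0.3
R = np.array([[np.cos(angle), -np.sin(angle), 0],
              [np.sin(angle),  np.cos(angle), 0],
              [0.0,            0.0,           1]])
uv = telecentric_project(50.0, R, np.array([320.0, 240.0]), pts)

P = fit_affine_camera(pts, uv)
resid = np.abs(np.hstack([pts, np.ones((20, 1))]) @ P.T - uv).max()
```

    In the actual method this linear estimate only initializes the parameters; bundle adjustment then refines them jointly over all views.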

  5. Camera calibration based on the back projection process

    Science.gov (United States)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, but not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.
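
    A minimal numerical sketch of the back-projection idea, assuming a simple pinhole model and a planar target at Z = 0: extracted image points are projected back into 3D space and compared with the ideal target coordinates, giving the 3D error that the refinement stage would minimize. All names and values here are illustrative.

```python
import numpy as np

def project(K, R, t, pts3d):
    """Forward imaging process (FIP): world points -> pixel coordinates."""
    cam = pts3d @ R.T + t           # world -> camera frame
    uv = cam[:, :2] / cam[:, 2:3]   # perspective divide
    return uv @ K[:2, :2].T + K[:2, 2]

def back_project_to_plane(K, R, t, uv):
    """Back projection process (BPP): pixels -> 3D points on the Z=0 plane."""
    rays = np.column_stack([(uv - K[:2, 2]) @ np.linalg.inv(K[:2, :2]).T,
                            np.ones(len(uv))])   # ray directions, camera frame
    origin = -R.T @ t                            # camera centre in world frame
    dirs = rays @ R                              # ray directions in world frame
    s = -origin[2] / dirs[:, 2]                  # intersect the target plane
    return origin + s[:, None] * dirs

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
board = np.array([[x, y, 0.0] for x in range(3) for y in range(3)])

uv = project(K, R, t, board)
recovered = back_project_to_plane(K, R, t, uv)
err3d = np.linalg.norm(recovered - board, axis=1)  # 3D reconstruction error
```

    With noisy image points, `err3d` is what a non-linear minimizer would drive down, rather than the 2D reprojection residual.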

  6. The influence of flywheel micro vibration on space camera and vibration suppression

    Science.gov (United States)

    Li, Lin; Tan, Luyang; Kong, Lin; Wang, Dong; Yang, Hongbo

    2018-02-01

    This study examined the impact of flywheel micro-vibration on a high-resolution, space-borne integrated optical satellite. The flywheel disturbance data were acquired by testing the flywheel micro-vibration on a six-component test bench. A finite element model of the satellite was established, and unit forces/torques were applied at the flywheel mounting position to obtain the micro-vibration response of the camera. Integrated analysis of the two data sets showed that the influence of flywheel micro-vibration on the camera is concentrated mainly around 60-80 Hz and 170-230 Hz; the largest angular displacement of the secondary mirror along the optical axis is 0.04″, and the maximum angular displacement perpendicular to the optical axis is 0.032″. After the design and installation of a vibration isolator, the maximum angular displacement of the secondary mirror is 0.011″; the decay rate of the root-mean-square value of the angular displacement exceeds 50%, with a maximum of 96.78%. The whole satellite was suspended to simulate on-orbit boundary conditions; the imaging experiments show that the image motion caused by the flywheel micro-vibration is less than 0.1 pixel after installing the vibration isolator.
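
    The reported decay rate is a percentage reduction of the root-mean-square (RMS) angular displacement. A toy computation of this figure of merit, with made-up numbers rather than the paper's data:

```python
import numpy as np

def rms(x):
    """Root-mean-square value of a sampled signal."""
    return np.sqrt(np.mean(np.square(x)))

# Hypothetical angular-displacement samples (arcsec), before isolation
before = np.array([0.040, -0.035, 0.032, -0.038, 0.036])
# In this toy example the isolator scales the response down by 20x
after = before * 0.05

decay_rate = (1.0 - rms(after) / rms(before)) * 100.0  # percent RMS reduction
```

    With these illustrative numbers the decay rate is 95%, the same style of figure as the 96.78% maximum quoted in the abstract.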

  7. FPGA-Based HD Camera System for the Micropositioning of Biomedical Micro-Objects Using a Contactless Micro-Conveyor

    Directory of Open Access Journals (Sweden)

    Elmar Yusifli

    2017-03-01

    Full Text Available With recent advancements, micro-object contactless conveyors are becoming an essential part of the biomedical sector. They help avoid the infection and damage that can occur due to external contact. In this context, a smart micro-conveyor is devised. It is a Field Programmable Gate Array (FPGA)-based system that employs a smart surface for conveyance along with an OmniVision complementary metal-oxide-semiconductor (CMOS) HD camera for micro-object position detection and tracking. A specific FPGA-based hardware design and VHSIC (Very High Speed Integrated Circuit) Hardware Description Language (VHDL) implementation are realized, without employing any Nios processor or System on a Programmable Chip (SOPC) builder-based Central Processing Unit (CPU) core. This keeps the system efficient in terms of resource utilization and power consumption. The micro-object positioning status is captured with an embedded FPGA-based camera driver and communicated to the Image Processing, Decision Making and Command (IPDC) module. The IPDC is programmed in C++ and can run on a Personal Computer (PC) or on any appropriate embedded system. The IPDC decisions are sent back to the FPGA, which pilots the smart surface accordingly. In this way, an automated closed-loop system conveys the micro-object towards a desired location. The devised system architecture and implementation principle are described, and the functionality is verified. Results confirm the proper functionality of the developed system, along with its outperformance of other solutions.

  8. Quality control of radiosurgery: dosimetry with micro camera in spherical mannequin

    International Nuclear Information System (INIS)

    Casado Villalon, F. J.; Navarro Guirado, F.; Garci Pareja, S.; Benitez Villegas, E. M.; Galan Montenegro, P.; Moreno Saiz, C.

    2013-01-01

    Small-field dosimetry is part of quality control in cranial radiosurgery treatments. In this work, the absorbed dose at the isocenter calculated by the planner is compared with results obtained experimentally with a micro-camera in a spherical mannequin. (Author)

  9. Advanced EVA Suit Camera System Development Project

    Science.gov (United States)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA Suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera currently being tested so that I could make improvements to the design. Because there is great benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Designer to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking the datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is one specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the camera placement options may be tested along with other future suit testing. Multiple teams work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary, and these comparisons will be used as further progress is made on the overall suit design. This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was

  10. Micro Hard-X Ray Camera: From Caliste 64 to Caliste 256

    International Nuclear Information System (INIS)

    Meuris, A.; Limousin, O.; Le Mer, I.; Pinsard, F.; Blondel, C.; Daly, F.; Lugiez, F.; Gevin, O.; Delagnes, E.; Chavassieux, M.; Vassal, M.C.; Bocage, R.; Soufflet, F.

    2009-01-01

    The Caliste project aims at hybridizing 1 cm² Cd(Zn)Te detectors with low-noise front-end electronics, in a single component standing in a 1 × 1 × 2 cm³ volume. The micro-camera is a spectroscopic imager for X- and gamma-ray detection, with time-tagging capability. Hybridization consists in stacking full-custom ASICs perpendicular to the detection surface. The first prototype, Caliste 64, integrates a detector of 8 × 8 pixels at 1 mm pitch. Fabrication and characterization of nine camera units validate the design and the hybridization concept. Spectroscopic tests result in a mean energy resolution of ∼0.7 keV FWHM at 14 keV and ∼0.85 keV FWHM at 60 keV using 1 mm-thick Al Schottky CdTe detectors biased at -400 V and cooled down to 15 °C. The new prototype, called Caliste 256, integrates 16 × 16 pixels at 580 μm pitch in the same volume as Caliste 64. Electrical tests with the first sample, fabricated without detector, result in a mean equivalent noise charge of 64 e⁻ rms (9.6 μs, no leakage current). Caliste devices are 4-side buttable and can be used as elementary detection units of a large hard X-ray focal plane, such as the 64 cm² high-energy detector of the Simbol-X astronomical space mission. (authors)

  11. Auction Mechanism of Micro-Grid Project Transfer

    Directory of Open Access Journals (Sweden)

    Yong Long

    2017-10-01

    Full Text Available Micro-grid project transfer is the primary issue in micro-grid development. The efficiency and quality of micro-grid project transfer directly affect the quality of micro-grid project construction and development, which is very important for the sustainable development of micro-grids. This paper constructs a multi-attribute auction model of micro-grid project transfer that reflects the characteristics of the micro-grid system and the interests of stakeholders, calculates the optimal bidding strategy, analyzes the influence of relevant factors on the auction equilibrium through a multi-stage dynamic game with complete information, and presents a numerical simulation analysis. Results indicate that the optimal strategy of the auction mechanism is positively related to power quality, energy storage quality, and carbon emissions. Unlike the previous lowest-price-wins mechanism, the auction mechanism derived in this paper shows that the energy supplier offering the best combination of power quality, energy storage quality, carbon emissions, and price wins the auction when both project owners and energy suppliers maximize their benefits. The auction mechanism is effective because it satisfies the principles of individual rationality and incentive compatibility. In addition, the number of energy suppliers participating in the auction and the cost of the previous auction are positively related to the auction equilibrium, and both adjust the equilibrium results of the auction. The utilization rate of renewable energy and the comprehensive utilization of energy also have a positive impact on the auction equilibrium. Finally, this paper puts forward a series of policy suggestions for micro-grid project auctions. This research is of great significance for improving the auction quality of micro-grid projects and promoting the sustainable development of micro-grids.

  12. Incentive Mechanism of Micro-grid Project Development

    Directory of Open Access Journals (Sweden)

    Yong Long

    2018-01-01

    Full Text Available Due to issues of cost and benefit, investment demand and consumption demand for micro-grids are insufficient in the early stages, leaving all parties with little motivation to participate in micro-grid project development and slowing the development of micro-grids. To promote their development, a corresponding incentive mechanism should be designed to motivate micro-grid project development. This paper therefore builds a multi-stage incentive model of micro-grid project development involving the government, the grid corporation, the energy supplier, the equipment supplier, and the user, in order to study the incentive problems of micro-grid project development. Through the solution and analysis of the model, the paper derives the optimal government subsidy and the optimal cooperation incentive of the energy supplier, calculates the optimal pricing strategies of the grid corporation and the energy supplier, and analyzes the influence of relevant factors on the optimal subsidy and incentives. The study reveals that the cost and social benefit of micro-grid development, as well as the technical level and equipment quality of the equipment supplier, have a positive impact on the micro-grid subsidy, and that government subsidies positively adjust the level of cooperation incentives and price incentives. Finally, the validity of the model is verified by numerical analysis, and the incentive strategy of each participant is analyzed. This research is of great significance for encouraging micro-grid project development and promoting the sustainable development of micro-grids.

  13. Science goals and expected results from the smart-1 amie multi-coulour micro-camera

    Science.gov (United States)

    Josset, J.-L.; AMIE Team

    2003-04-01

    The Advanced Moon micro-Imager Experiment (AMIE), which will be on board ESA's SMART-1, the first European mission to the Moon (launch foreseen in 2003), is an imaging system with scientific, technical, and public-outreach objectives. The science objectives are to image the lunar South Pole (Aitken basin), permanently shadowed areas (ice deposits), areas of eternal light (crater rims), and ancient lunar non-mare volcanism; to perform local spectro-photometry and characterize the physical state of the lunar surface; and to map high-latitude regions (south), mainly on the far side. The main science goals and the expected results from the AMIE multi-colour micro-camera are presented.

  14. Multiple-aperture optical design for micro-level cameras using 3D-printing method

    Science.gov (United States)

    Peng, Wei-Jei; Hsu, Wei-Yao; Cheng, Yuan-Chieh; Lin, Wen-Lung; Yu, Zong-Ru; Chou, Hsiao-Yu; Chen, Fong-Zhi; Fu, Chien-Chung; Wu, Chong-Syuan; Huang, Chao-Tsung

    2018-02-01

    The design of an ultra-miniaturized camera using 3D-printing technology, printed directly onto the complementary metal-oxide-semiconductor (CMOS) imaging sensor, is presented in this paper. The 3D-printed micro-optics is manufactured using femtosecond two-photon direct laser writing, and the figure error, which can achieve submicron accuracy, is suitable for the optical system. Because the size of the micro-level camera is approximately several hundred micrometers, the resolution is greatly reduced and strongly limited by the Nyquist frequency of the pixel pitch. To improve the reduced resolution, the single lens can be replaced by multiple-aperture lenses with dissimilar fields of view (FOV); stitching sub-images with different FOVs then achieves high resolution within the central region of the image. The reason is that the angular resolution of a lens with a smaller FOV is higher than that of a lens with a larger FOV, so after stitching, the angular resolution of the central area can be several times that of the outer area. For the same image circle, the image quality of the central area of the multi-lens system is significantly superior to that of a single lens. The foveated image obtained by stitching FOVs breaks the resolution limit of the ultra-miniaturized imaging system, enabling applications such as biomedical endoscopy, optical sensing, and machine vision. In this study, an ultra-miniaturized camera with multi-aperture optics is designed and simulated for optimum optical performance.
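
    The resolution argument rests on simple arithmetic: the per-pixel angular sampling of a lens is roughly its FOV divided by the pixel count across it, so a narrow-FOV aperture samples the central region more finely. A toy sketch with illustrative numbers (not the paper's design values):

```python
def angular_resolution_deg(fov_deg, n_pixels):
    """Approximate per-pixel angular sampling of a lens spanning fov_deg
    across n_pixels of the sensor."""
    return fov_deg / n_pixels

# Hypothetical two-aperture design sharing identical sensor tiles
wide = angular_resolution_deg(fov_deg=60.0, n_pixels=200)    # outer view
narrow = angular_resolution_deg(fov_deg=20.0, n_pixels=200)  # central view

# After stitching, the central region is sampled this many times more finely
gain = wide / narrow
```

    Here the narrow-FOV aperture gives a 3x finer angular sampling in the centre, which is the foveation effect the multi-aperture design exploits.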

  15. Development of Electron Tracking Compton Camera using micro pixel gas chamber for medical imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kabuki, Shigeto; Hattori, Kaori [Department of Physics, Faculty of Science, Kyoto University, Sakyo-ku, Kyoto 606-8502 (Japan); Kohara, Ryota [Hitachi Medical Corporation, Kashiwa, Chiba 277-0804 (Japan); Kunieda, Etsuo; Kubo, Atsushi [Department of Radiography, Keio University, Shinjuku-ku, Tokyo 160-8582 (Japan); Kubo, Hidetoshi; Miuchi, Kentaro [Department of Physics, Faculty of Science, Kyoto University, Sakyo-ku, Kyoto 606-8502 (Japan); Nakahara, Tadaki [Department of Radiography, Keio University, Shinjuku-ku, Tokyo 160-8582 (Japan); Nagayoshi, Tsutomu; Nishimura, Hironobu; Okada, Yoko; Orito, Reiko; Sekiya, Hiroyuki [Department of Physics, Faculty of Science, Kyoto University, Sakyo-ku, Kyoto 606-8502 (Japan); Shirahata, Takashi [Hitachi Medical Corporation, Kashiwa, Chiba 277-0804 (Japan); Takada, Atsushi [Department of Physics, Faculty of Science, Kyoto University, Sakyo-ku, Kyoto 606-8502 (Japan); Tanimori, Toru [Department of Physics, Faculty of Science, Kyoto University, Sakyo-ku, Kyoto 606-8502 (Japan)], E-mail: tanimori@cr.scphys.kyoto-u.ac.jp; Ueno, Kazuki [Department of Physics, Faculty of Science, Kyoto University, Sakyo-ku, Kyoto 606-8502 (Japan)

    2007-10-01

    We have developed the Electron Tracking Compton Camera (ETCC), which reconstructs the 3-D tracks of the electron scattered in the Compton process, for both sub-MeV and MeV gamma rays. By measuring the directions and energies of both the scattered gamma ray and the recoil electron, the direction of the incident gamma ray is determined for each individual photon. Furthermore, the measured residual angle between the recoil electron and the scattered gamma ray is quite powerful for kinematical background rejection. For the 3-D tracking of the electrons, a Micro Time Projection Chamber (μ-TPC) was developed using a new type of micro-pattern gas detector. The ETCC consists of this μ-TPC (10 × 10 × 8 cm³) and 6 × 6 × 13 mm³ GSO crystal pixel arrays with a flat-panel photomultiplier surrounding the μ-TPC for detecting the scattered gamma rays. The ETCC provided an angular resolution of 6.6° (FWHM) at the 364 keV line of ¹³¹I. A mobile ETCC for medical imaging, fabricated in a 1 m cubic box, has been operated since October 2005. Here, we present imaging results for line sources and a phantom of the human thyroid gland using the 364 keV gamma rays of ¹³¹I.
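
    The event-by-event reconstruction builds on standard Compton kinematics. A small sketch, assuming ideal energy measurements, relating the incident and scattered photon energies to the scattering angle (the relation that the residual-angle background cut tests against the tracked electron direction):

```python
import math

ME_C2 = 511.0  # electron rest energy, keV

def compton_angle_deg(e_in, e_scattered):
    """Photon scattering angle from Compton kinematics:
    cos(theta) = 1 - me*c^2 * (1/E' - 1/E)."""
    cos_t = 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_in)
    return math.degrees(math.acos(cos_t))

def scattered_energy(e_in, theta_deg):
    """Inverse relation: scattered photon energy at a given angle."""
    return e_in / (1.0 + (e_in / ME_C2)
                   * (1.0 - math.cos(math.radians(theta_deg))))

e_in = 364.0                           # keV, the 131-I line used for the tests
e_sc = scattered_energy(e_in, 60.0)    # photon energy after a 60-degree scatter
theta = compton_angle_deg(e_in, e_sc)  # kinematics recovers the same angle
```

    In a real event the electron track supplies an independent direction, and disagreement with this kinematic angle flags background.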

  16. Global Calibration of Multi-Cameras Based on Refractive Projection and Ray Tracing

    Directory of Open Access Journals (Sweden)

    Mingchi Feng

    2017-10-01

    Full Text Available Multi-camera systems are widely applied in three-dimensional (3D) computer vision, especially when multiple cameras are distributed on both sides of the measured object. The calibration methods of multi-camera systems are critical to the accuracy of vision measurement, and the key is to find an appropriate calibration target. In this paper, a high-precision camera calibration method for multi-camera systems based on a transparent glass checkerboard and ray tracing is described and used to calibrate multiple cameras distributed on both sides of the glass checkerboard. First, the intrinsic parameters of each camera are obtained by Zhang's calibration method. Then, the cameras capture several images from the front and back of the glass checkerboard in different orientations, with all images containing distinct grid corners. As the cameras on one side are not affected by refraction in the glass checkerboard, their extrinsic parameters can be calculated directly. The cameras on the other side, however, are influenced by refraction, and direct use of the projection model produces a calibration error. A multi-camera calibration method using a refractive projection model and ray tracing is developed to eliminate this error. Both synthetic and real data are employed to validate the proposed approach. The experimental results of the refractive calibration show that the 3D reconstruction error is smaller than 0.2 mm, the relative errors of both rotation and translation are less than 0.014%, and the mean and standard deviation of the reprojection error of the four-camera system are 0.00007 and 0.4543 pixels, respectively. The proposed method is flexible, highly accurate, and simple to carry out.
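
    A refractive projection model must trace rays through the glass according to Snell's law. A minimal sketch of vector-form refraction at a single flat interface, with illustrative indices and geometry; the paper's full model, which also handles the second glass face and the plate thickness, is not reproduced here.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Vector form of Snell's law: refract unit direction d at a surface
    with unit normal n (pointing against the ray), from index n1 into n2.
    Returns None on total internal reflection."""
    r = n1 / n2
    cos_i = -np.dot(n, d)
    sin_t2 = r * r * (1.0 - cos_i * cos_i)
    if sin_t2 > 1.0:
        return None
    cos_t = np.sqrt(1.0 - sin_t2)
    return r * d + (r * cos_i - cos_t) * n

# A ray hitting a flat glass plate (normal +z) at 30 degrees incidence
d = np.array([np.sin(np.radians(30.0)), 0.0, -np.cos(np.radians(30.0))])
normal = np.array([0.0, 0.0, 1.0])
d_glass = refract(d, normal, 1.0, 1.5)  # air (1.0) into glass (~1.5)

# Transmitted angle from the refracted direction (~19.47 degrees)
theta_t = np.degrees(np.arcsin(np.linalg.norm(d_glass[:2])))
```

    Chaining two such refractions (air-glass, then glass-air at the back face) gives the laterally shifted ray that the refractive calibration accounts for.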

  17. Design and operation of a setup with a camera and adjustable mirror to inspect the sense-wire planes of the Time Projection Chamber inside the MicroBooNE cryostat

    International Nuclear Information System (INIS)

    Carls, B.; James, C.C.; Kubinski, R.M.; Pordes, S.; Schukraft, A.; Horton-Smith, G.; Strauss, T.

    2015-01-01

    Detectors in particle physics, particularly those including cryogenic components, are often enclosed in vessels that provide no physical or visual access to the detectors themselves after installation. However, it can be desirable for experiments to visually inspect the inside of the vessel. The MicroBooNE cryostat hosts a TPC with sense-wire planes, which had to be inspected for damage such as breakage or sagging. This inspection was performed after the transportation of the vessel with the enclosed detector to its final location, but before filling with liquid argon. This paper describes an approach to viewing the inside of the MicroBooNE cryostat with a camera-and-mirror setup inserted through one of its cryogenic service nozzles. The paper describes the camera and mirror chosen for the operation, the illumination, and the mechanical structure of the setup. It explains how the system was operated and demonstrates its performance.

  18. Design and Analysis of a Single—Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs

    Directory of Open Access Journals (Sweden)

    Carlos Jaramillo

    2016-02-01

    Full Text Available We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single-viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics such as size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to satisfy other catadioptric-based omnistereo vision setups under different circumstances.
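
    Triangulation from back-projected rays can be sketched with the common midpoint method: the 3D estimate is the midpoint of the shortest segment between the two viewing rays. A self-contained toy example with an exactly intersecting pair of rays (illustrative geometry, not the sensor's actual calibration):

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between rays o1 + s*d1 and
    o2 + t*d2: a standard closed form for two-view triangulation."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    a, c, e = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a * e - c * c                     # zero only for parallel rays
    s = (e * (d1 @ b) - c * (d2 @ b)) / denom
    t = (c * (d1 @ b) - a * (d2 @ b)) / denom
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# Two virtual viewpoints on a vertical baseline observing the same point
target = np.array([1.0, 2.0, 5.0])
o1 = np.array([0.0, 0.0, 0.0])
o2 = np.array([0.0, 0.0, 0.3])
p = triangulate_midpoint(o1, target - o1, o2, target - o2)
```

    With noisy rays, the spread of such midpoints along the rays is what a probabilistic uncertainty model like the one in the paper characterizes.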

  19. The Aarhus Ion Micro-Trap Project

    DEFF Research Database (Denmark)

    Miroshnychenko, Yevhen; Nielsen, Otto; Poulsen, Gregers

    As part of our involvement in the EU MICROTRAP project, we have designed, manufactured and assembled a micro-scale ion trap with integrated optical fibers. These prealigned fibers will allow delivering cooling laser light to single ions. Therefore, such a trap will not require any direct optical...... and installed in an ultra high vacuum chamber, which includes an ablation oven for all-optical loading of the trap [2]. The next steps on the project are to demonstrate the operation of the micro-trap and the cooling of ions using fiber delivered light. [1] D. Grant, Development of Micro-Scale Ion traps, Master...... Thesis (2008). [2] R.J. Hendricks, D.M. Grant, P.F. Herskind, A. Dantan and M. Drewsen, An all-optical ion-loading technique for scalable microtrap architectures, Applied Physics B, 88, 507 (2007)....

  20. A ‘Bibliocentro’ project for the Camera di Commercio of Rome

    Directory of Open Access Journals (Sweden)

    Fiammetta Sabba

    2015-07-01

    Full Text Available The Camera di Commercio of Rome, created in 1809, has always been closely connected with the commercial, industrial, economic and touristic life of the city. Accordingly, the Camera would need its library and documentation center to serve as an additional interface with its users and as an effective tool for fulfilling its mission. Although economic difficulties have prevented its realization, a detailed project for the structure, to be named 'Bibliocentro', was drawn up. It provides for the following qualities: Thematic nature, Utilities, Usability, Visibility, Activities, Interactivity and Historicity. The bibliographic and media collection should be strongly focused on the themes of the institution's activities, along with the complete collection of all publications edited by the Camera. The collection should be ordered and indexed according to the Dewey Decimal Classification. The project also provides for the membership of the Bibliocentro in the National Library System, to ensure visibility and accessibility.

  1. Borehole camera technology and its application in the Three Gorges Project

    Energy Technology Data Exchange (ETDEWEB)

    Wang, C.Y.; Sheng, Q.; Ge, X.R. [Chinese Academy of Sciences, Inst. of Rock and Soil Mechanics, Wuhan (China); Law, K.T. [Carleton Univ., Ottawa, ON (Canada)

    2002-07-01

    China's Three Gorges Project is the world's largest hydropower project, consisting of a 1,983-meter-long, 185-meter-high dam and 26 power generating units. Borehole examination has been conducted at the site to ensure stability of the slope of the ship lock used for navigation. This paper describes two systems for borehole inspection and viewing. Both borehole camera technologies provide a unique way for geological engineers to observe conditions inside a borehole. The Axial-View Borehole Television (AVBTV) provides a real-time frontal view of the borehole ahead of the probe, making it possible to detect where holes are blocked and to see cracks and other distinctive features in the strata. The Digital Panoramic Borehole Camera System (DPBCS) can collect, measure, save, analyze, manage and display geological information about a borehole. It can also be used to determine the orientation of discontinuities, generate unrolled images and virtual core graphs, and conduct statistical analysis. Both camera systems have been demonstrated successfully at the Three Gorges Project for qualitative description of the borehole as well as for quantitative analysis of cracks existing in the rock. It has been determined that most of the cracks dip in the same general direction as the northern slope of the permanent ship lock of the Three Gorges Project. 12 refs., 1 tab., 9 figs.
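
    The DPBCS step of determining discontinuity orientation from an unrolled image can be illustrated with a small sketch: a planar crack intersecting a cylindrical borehole traces a sinusoid in the unrolled image, so fitting its amplitude and phase yields dip angle and dip direction. The function below is a hypothetical illustration; names and sign conventions are assumptions, not taken from the paper:

```python
import numpy as np

def fit_discontinuity(theta, depth, radius):
    """Fit the sinusoidal trace z(theta) = z0 + a*cos(theta) + b*sin(theta)
    that a planar crack leaves in an unrolled borehole image, and convert
    its amplitude and phase into a dip angle and dip direction."""
    X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
    z0, a, b = np.linalg.lstsq(X, depth, rcond=None)[0]
    amplitude = np.hypot(a, b)                       # equals radius * tan(dip)
    dip = np.degrees(np.arctan2(amplitude, radius))
    dip_direction = np.degrees(np.arctan2(b, a)) % 360.0
    return dip, dip_direction

# Synthetic check: a crack dipping 60 deg toward azimuth 120 deg
# in a 100 mm diameter borehole.
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
radius = 0.05                                        # borehole radius, m
true_dip, true_dir = 60.0, 120.0
depth = 2.0 + radius * np.tan(np.radians(true_dip)) * np.cos(theta - np.radians(true_dir))
dip, direction = fit_discontinuity(theta, depth, radius)
```

    With noiseless synthetic data the linear least-squares fit recovers the dip and dip direction exactly; on real unrolled images the same fit is applied to the digitized crack trace.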

  2. Temporal resolution technology of a soft X-ray picosecond framing camera based on Chevron micro-channel plates gated in cascade

    Energy Technology Data Exchange (ETDEWEB)

    Yang Wenzheng [State Key Laboratory of Transient Optics and Photonics, Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China)], E-mail: ywz@opt.ac.cn; Bai Yonglin; Liu Baiyu [State Key Laboratory of Transient Optics and Photonics, Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China); Bai Xiaohong; Zhao Junping; Qin Junjun [Key Laboratory of Ultra-fast Photoelectric Diagnostics Technology, Xi' an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi' an 710119 (China)

    2009-09-11

    We describe a soft X-ray picosecond framing camera (XFC) based on Chevron micro-channel plates (MCPs) gated in cascade for ultra-fast process diagnostics. The micro-strip lines are deposited on both the input and the output surfaces of the Chevron MCPs and can be gated by a negative (positive) electric pulse on the first (second) MCP. The gating is controlled by the time delay T_d between the two gating pulses. By increasing T_d, the temporal resolution and the gain of the camera are greatly improved compared with a single-gated MCP-XFC. The optimal T_d, which results in the best temporal resolution, is within the electron transit time and transit time spread of the MCP. Using 250 ps, ±2.5 kV gating pulses, the temporal resolution of the double-gated Chevron MCP camera is improved from 60 ps for the single-gated MCP-XFC to 37 ps for T_d = 350 ps. The principle is presented in detail and accompanied by a theoretical simulation and experimental results.
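
    The resolution gain from gating both MCPs can be seen in a toy model: if each gate is approximated as a Gaussian transmission profile and the delay T_d is chosen to compensate the electron transit between the plates, the effective cascade gate is the product of the two profiles and is narrower than either alone. This sketch ignores transit-time spread and gain dynamics and is only meant to show the trend (60 ps to 60/√2 ≈ 42 ps, versus the measured 37 ps):

```python
import numpy as np

def fwhm(t, y):
    """Full width at half maximum of a sampled pulse."""
    above = t[y >= y.max() / 2.0]
    return above[-1] - above[0]

t = np.linspace(-200.0, 200.0, 400001)                   # time axis, ps
single = np.exp(-4.0 * np.log(2.0) * (t / 60.0) ** 2)    # one 60 ps FWHM gate
cascade = single * single                                # both MCPs gated, transit delay compensated

w1 = fwhm(t, single)     # ~60 ps single-gated
w2 = fwhm(t, cascade)    # ~60/sqrt(2) ~ 42 ps double-gated
```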

  3. Development of a single-photon-counting camera with use of a triple-stacked micro-channel plate.

    Science.gov (United States)

    Yasuda, Naruomi; Suzuki, Hitoshi; Katafuchi, Tetsuro

    2016-01-01

    At the quantum-mechanical level, all substances (not merely electromagnetic waves such as light and X-rays) exhibit wave–particle duality. Whereas students of radiation science can easily understand the wave nature of electromagnetic waves, the particle (photon) nature may elude them. Therefore, to assist students in understanding the wave–particle duality of electromagnetic waves, we have developed a photon-counting camera that captures single photons in two-dimensional images. As an image intensifier, this camera has a triple-stacked micro-channel plate (MCP) with an amplification factor of 10^6. The ultra-low light of a single photon entering the camera is first converted to an electron through the photoelectric effect on the photocathode. The electron is intensified by the triple-stacked MCP and then converted to a visible light distribution, which is measured by a high-sensitivity complementary metal oxide semiconductor image sensor. Because it detects individual photons, the photon-counting camera is expected to provide students with a complete understanding of the particle nature of electromagnetic waves. Moreover, it measures ultra-weak light that cannot be detected by ordinary low-sensitivity cameras. Therefore, it is suitable for experimental research on scintillator luminescence, biophoton detection, and similar topics.
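
    As a back-of-envelope check (not from the paper) of why a 10^6 gain makes single photons detectable, consider the charge the MCP stack delivers per photoelectron; the 10 ns readout window below is an assumed figure for illustration:

```python
E_CHARGE = 1.602176634e-19   # electron charge, coulombs
GAIN = 1e6                   # triple-stacked MCP gain quoted above

charge_per_photon = E_CHARGE * GAIN      # about 1.6e-13 C per detected photon
current = charge_per_photon / 10e-9      # ~16 uA if delivered within an assumed 10 ns window
```

    A charge pulse of this size is far above the noise floor of the downstream phosphor/CMOS readout, which is what turns a single photoelectron into a countable spot.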

  4. In situ micro-focused X-ray beam characterization with a lensless camera using a hybrid pixel detector

    International Nuclear Information System (INIS)

    Kachatkou, Anton; Marchal, Julien; Silfhout, Roelof van

    2014-01-01

    Results are reported of studies on micro-focused X-ray beam diagnostics using an X-ray beam imaging (XBI) instrument, a lensless camera that records radiation scattered from a thin foil of a low-Z material placed in the path of the beam at an oblique angle. The XBI instrument captures magnified images of the scattering region within the foil as illuminated by the incident beam. These images contain information about beam size, beam position and beam intensity that is extracted during dedicated signal-processing steps. In this work the use of the device with beams whose size is significantly smaller than that of a single detector pixel is explored, and the performance of the XBI device equipped with a state-of-the-art hybrid pixel X-ray imaging sensor is analysed. Compared with traditional methods such as slit-edge or wire scanners, XBI micro-focused beam characterization is significantly faster and does not interfere with ongoing experiments. The challenges associated with measuring micrometre-sized beams are described, and ways of optimizing the resolution of beam position and size measurements of the XBI instrument are discussed
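
    The position and size extraction can be sketched with intensity-weighted image moments; the actual XBI processing chain also corrects for magnification and the oblique foil angle, which this simplified example (with made-up pixel pitch and spot parameters) omits:

```python
import numpy as np

def beam_stats(img, pixel_um):
    """Centroid and FWHM of a beam image from intensity-weighted moments,
    returned in micrometres (Gaussian sigma-to-FWHM conversion assumed)."""
    y, x = np.indices(img.shape)
    w = img / img.sum()
    cx, cy = (w * x).sum(), (w * y).sum()
    sx = np.sqrt((w * (x - cx) ** 2).sum())
    sy = np.sqrt((w * (y - cy) ** 2).sum())
    to_fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0))   # sigma -> FWHM for a Gaussian
    return cx * pixel_um, cy * pixel_um, to_fwhm * sx * pixel_um, to_fwhm * sy * pixel_um

# Synthetic Gaussian spot: sigma = 2 px on an assumed 5 um pitch sensor.
yy, xx = np.indices((64, 64))
img = np.exp(-((xx - 30.0) ** 2 + (yy - 25.0) ** 2) / (2.0 * 2.0 ** 2))
cx, cy, wx, wy = beam_stats(img, pixel_um=5.0)
```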

  5. Advanced system for Gamma Cameras modernization

    International Nuclear Information System (INIS)

    Osorio Deliz, J. F.; Diaz Garcia, A.; Arista Romeu, E. J.

    2015-01-01

    Analog and digital gamma cameras are still widely used in developing countries. Many of them rely on old hardware electronics, which in many cases limits their use in current nuclear medicine diagnostic studies. Consequently, several companies worldwide produce medical equipment for partial or total modernization of gamma cameras. The present work demonstrates the possibility of substituting almost the entire signal-processing electronics inside a gamma camera detector head with a digitizer PCI card. This card includes four 12-bit, 50 MHz analog-to-digital converters. It was installed in a PC and controlled through software developed in LabVIEW. In addition, some changes were made to the hardware inside the detector head, including a redesign of the Orientation Display Block (ODA card). A new electronic design was also added to the Microprocessor Control Block (MPA card), comprising a PIC microcontroller acting as a tuning system for the individual photomultiplier tubes. Images obtained by measurement of a 99mTc point radioactive source with the modernized camera head demonstrate its overall performance. The system was developed and tested on an old ORBITER II SIEMENS GAMMASONIC gamma camera at the National Institute of Oncology and Radiobiology (INOR) under the CAMELUD project, supported by the National Program PNOULU and the IAEA. (Author)

  6. Low-complexity camera digital signal imaging for video document projection system

    Science.gov (United States)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrates that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, a cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system achieves 1270 × 792 resolution with the addition of extra components and demonstrates each DSP function.
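
    As an example of one of the DSP stages listed, here is a minimal gray-world white-balance sketch; this is a common textbook approach and not necessarily the authors' algorithm:

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: scale each channel so the channel means
    become equal. img is float RGB in [0, 1], shape (H, W, 3)."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means          # per-channel gains
    return np.clip(img * gains, 0.0, 1.0)

# A frame with a warm cast: red mean 0.6, green 0.4, blue 0.2.
frame = np.stack([np.full((4, 4), v) for v in (0.6, 0.4, 0.2)], axis=-1)
balanced = gray_world(frame)              # all channel means become 0.4
```

    In hardware, the same gains would be computed over a frame and applied per pixel in fixed-point arithmetic.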

  7. Qualification Tests of Micro-camera Modules for Space Applications

    Science.gov (United States)

    Kimura, Shinichi; Miyasaka, Akira

    Visual capability is very important for space-based activities, for which small, low-cost space cameras are desired. Although cameras for terrestrial applications are continually being improved, little progress has been made on cameras used in space, which must be extremely robust to withstand harsh environments. This study focuses on commercial off-the-shelf (COTS) CMOS digital cameras because they are very small and are based on an established mass-market technology. Radiation and ultrahigh-vacuum tests were conducted on a small COTS camera that weighs less than 100 mg (including optics). This paper presents the results of the qualification tests for COTS cameras and for a small, low-cost COTS-based space camera.

  8. Camera-pose estimation via projective Newton optimization on the manifold.

    Science.gov (United States)

    Sarkis, Michel; Diepold, Klaus

    2012-04-01

    Determining the pose of a moving camera is an important task in computer vision. In this paper, we derive a projective Newton algorithm on the manifold to refine the pose estimate of a camera. The main idea is to benefit from the fact that the 3-D rigid motion is described by the special Euclidean group, which is a Riemannian manifold. The latter is equipped with a tangent space defined by the corresponding Lie algebra. This enables us to compute the optimization direction, i.e., the gradient and the Hessian, at each iteration of the projective Newton scheme on the tangent space of the manifold. Then, the motion is updated by projecting back the variables on the manifold itself. We also derive another version of the algorithm that employs homeomorphic parameterization to the special Euclidean group. We test the algorithm on several simulated and real image data sets. Compared with the standard Newton minimization scheme, we are now able to obtain the full numerical formula of the Hessian with a 60% decrease in computational complexity. Compared with Levenberg-Marquardt, the results obtained are more accurate while having a rather similar complexity.
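
    The manifold update at the heart of the scheme can be sketched as follows: each Newton iteration produces a tangent increment xi in se(3), and the pose is updated by right-multiplying with its exponential map, which projects the increment back onto the manifold. The numpy implementation below uses the standard closed-form (Rodrigues-type) expressions and is a generic sketch, not the authors' code; the twist ordering (translation first, rotation second) is an assumption:

```python
import numpy as np

def hat(w):
    """so(3) hat operator: 3-vector to skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se3_exp(xi):
    """Exponential map of a twist xi = (v, w) in R^6 to a 4x4 homogeneous pose."""
    v, w = xi[:3], xi[3:]
    th = np.linalg.norm(w)
    W = hat(w)
    if th < 1e-10:
        R, V = np.eye(3), np.eye(3)
    else:
        A = np.sin(th) / th
        B = (1.0 - np.cos(th)) / th ** 2
        C = (th - np.sin(th)) / th ** 3
        R = np.eye(3) + A * W + B * W @ W        # Rodrigues formula
        V = np.eye(3) + B * W + C * W @ W        # left-Jacobian for translation
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ v
    return T

def newton_update(T, xi):
    """One manifold update step: right-multiply the current pose by exp(xi)."""
    return T @ se3_exp(xi)

# Example: update the identity pose by a 90 deg z-rotation plus a small translation.
T = newton_update(np.eye(4), np.array([0.1, 0.0, 0.0, 0.0, 0.0, np.pi / 2]))
```

    Because the update multiplies by a group element, the result stays exactly on SE(3): the rotation block remains orthonormal with determinant one, with no re-normalization step.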

  9. Northern micro-grid project

    Energy Technology Data Exchange (ETDEWEB)

    Curtis, David; Singh, Bob

    2010-09-15

    The electrical distribution system for the Kasabonika Lake First Nation in northern Ontario (Canada) consumed 1.2 million liters of diesel fuel in 2008, amounting to 3,434 tonnes of CO2 emissions. The Northern Micro-Grid Project, supported by seven partners, involves integrating renewable generation and storage into the Kasabonika Lake distribution system. Through R&D and demonstration, the objectives are to reduce the amount of diesel consumed, to support the distribution system exclusively on renewable resources during light loads, and to engage the community and impart knowledge and training to better position it for future opportunities. The paper discusses challenges, opportunities and future plans associated with the project.
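
    A quick consistency check of the reported figures (this calculation is ours, not from the paper): they imply an emission factor of about 2.86 kg of CO2 per litre of diesel, in the usual range quoted for diesel fuel accounting:

```python
litres_diesel = 1.2e6    # 2008 consumption reported above
co2_tonnes = 3434.0      # reported emissions

implied_factor = co2_tonnes * 1000.0 / litres_diesel   # kg CO2 per litre, ~2.86
```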

  10. Crowd-sourced Archaeological Research: The MicroPasts Project

    Directory of Open Access Journals (Sweden)

    Chiara Bonacchi

    2014-10-01

    Full Text Available This paper offers a brief introduction to MicroPasts, a web-enabled crowd-sourcing and crowd-funding project whose overall goal is to promote the collection and use of high quality research data via institutional and community collaborations, both on- and off-line. In addition to introducing this initiative, the discussion below is a reflection of its lead author’s core contribution to the project and will dwell in more detail on one particular aspect of MicroPasts: its relevance to research and practice in public archaeology, cultural policy and heritage studies.

  11. Wafer-level vacuum packaged resonant micro-scanning mirrors for compact laser projection displays

    Science.gov (United States)

    Hofmann, Ulrich; Oldsen, Marten; Quenzer, Hans-Joachim; Janes, Joachim; Heller, Martin; Weiss, Manfred; Fakas, Georgios; Ratzmann, Lars; Marchetti, Eleonora; D'Ascoli, Francesco; Melani, Massimiliano; Bacciarelli, Luca; Volpi, Emilio; Battini, Francesco; Mostardini, Luca; Sechi, Francesco; De Marinis, Marco; Wagner, Bernd

    2008-02-01

    Scanning laser projection using resonantly actuated MEMS scanning mirrors is expected to overcome the current limitation of small display size in mobile devices such as cell phones, digital cameras and PDAs. Recent progress in the development of compact modulated RGB laser sources makes it possible to build very small laser projection systems that are attractive not only for consumer products but also for automotive applications such as head-up and dashboard displays. In recent years continuous progress has been made in increasing MEMS scanner performance. However, little is reported on how mass-producibility of these devices and stable functionality even under harsh environmental conditions can be guaranteed. Automotive application requires stable MEMS scanner operation over a wide temperature range from -40 °C to +85 °C. Therefore, hermetic packaging of electrostatically actuated MEMS scanning mirrors becomes essential to protect the sensitive device against particle contamination and condensing moisture. This paper reports on the design, fabrication and testing of a resonantly actuated two-dimensional micro scanning mirror that is hermetically sealed at wafer level. With resonant frequencies of 30 kHz and 1 kHz, an achievable theta-D product of 13 mm·deg and low dynamic deformation (<20 nm RMS), it targets Lissajous projection with SVGA resolution. Inevitable reflexes at the vacuum package surface can be separated from the projection field by permanent inclination of the micromirror.

  12. Multiocular image sensor with on-chip beam-splitter and inner meta-micro-lens for single-main-lens stereo camera.

    Science.gov (United States)

    Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa

    2016-08-08

    We developed multiocular 1/3-inch, 2.75-μm-pixel-size, 2.1M-pixel image sensors by co-design of both an on-chip beam-splitter and a 100-nm-width, 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field: the on-chip beam-splitter divides rays horizontally according to incident angle, and the inner meta-micro-lens collects the divided rays into pixels with small optical loss. Cross-talk between adjacent light field images of a fabricated binocular image sensor and of a quad-ocular image sensor is as low as 6% and 7%, respectively. By selecting two images from the one-dimensional light field images, a selectable baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision to reduce the 3D fatigue of viewers.
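
    The baseline tuning described above can be modelled simply. If mixing sub-aperture light-field images with given weights places the virtual viewpoint at the weighted centroid of their individual baselines (an assumption of this sketch, with made-up aperture numbers), the stereo baseline follows from the two composite viewpoints:

```python
import numpy as np

def virtual_viewpoint(baselines_mm, weights):
    """Viewpoint of a composite image formed by weighted addition of
    sub-aperture light-field images: the weighted centroid of their baselines."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(w @ np.asarray(baselines_mm, dtype=float))

# Hypothetical quad-ocular sensor: four views spread across a 3 mm main-lens aperture.
views = [-1.5, -0.5, 0.5, 1.5]
left = virtual_viewpoint(views, [1, 1, 0, 0])    # -1.0 mm
right = virtual_viewpoint(views, [0, 0, 1, 1])   # +1.0 mm
baseline = right - left                          # 2.0 mm, tunable via the weights
```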

  13. Quality control in radiosurgery: dosimetry with a micro-chamber in a spherical phantom; Control de calidad en radiocirugia: dosimetria con microcamara en maniqui esferico

    Energy Technology Data Exchange (ETDEWEB)

    Casado Villalon, F. J.; Navarro Guirado, F.; Garci Pareja, S.; Benitez Villegas, E. M.; Galan Montenegro, P.; Moreno Saiz, C.

    2013-07-01

    Small-field dosimetry is part of quality control in cranial radiosurgery treatments. In this work, the absorbed dose at the isocenter calculated by the treatment planning system is compared with values measured experimentally with a micro-chamber in a spherical phantom. (Author)

  14. Caliste 64, a new CdTe micro-camera for hard X-ray spectro-imaging

    Science.gov (United States)

    Meuris, A.; Limousin, O.; Lugiez, F.; Gevin, O.; Blondel, C.; Pinsard, F.; Vassal, M. C.; Soufflet, F.; Le Mer, I.

    2009-10-01

    In the frame of the Simbol-X mission of hard X-ray astrophysics, a prototype of micro-camera with 64 pixels called Caliste 64 has been designed and several samples have been tested. The device integrates ultra-low-noise IDeF-X V1.1 ASICs from CEA and a 1 cm² Al Schottky CdTe detector from Acrorad because of its high uniformity and spectroscopic performance. The process of hybridization, mastered by the 3D Plus company, respects space applications standards. The camera is a spectro-imager with time-tagging capability. Each photon interacting in the semiconductor is tagged with a time, a position and an energy. Time resolution is better than 100 ns rms for energy deposits greater than 20 keV, taking into account electronic noise and technological dispersal of the front-end electronics. The spectrum summed across the 64 pixels results in an energy resolution of 664 eV fwhm at 13.94 keV and 842 eV fwhm at 59.54 keV, when the detector is cooled down to -10 °C and biased at -500 V.
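
    For reference, the quoted FWHM values correspond to relative energy resolutions of roughly 4.8% at 13.94 keV and 1.4% at 59.54 keV:

```python
resolutions_ev = {13.94e3: 664.0, 59.54e3: 842.0}   # photopeak energy: FWHM, both in eV

relative_percent = {e: fwhm / e * 100.0 for e, fwhm in resolutions_ev.items()}
```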

  15. Caliste 64, a new CdTe micro-camera for hard X-ray spectro-imaging

    International Nuclear Information System (INIS)

    Meuris, A.; Limousin, O.; Lugiez, F.; Gevin, O.; Blondel, C.; Pinsard, F.; Vassal, M.C.; Soufflet, F.; Le Mer, I.

    2009-01-01

    In the frame of the Simbol-X mission of hard X-ray astrophysics, a prototype of micro-camera with 64 pixels called Caliste 64 has been designed and several samples have been tested. The device integrates ultra-low-noise IDeF-X V1.1 ASICs from CEA and a 1 cm² Al Schottky CdTe detector from Acrorad because of its high uniformity and spectroscopic performance. The process of hybridization, mastered by the 3D Plus company, respects space applications standards. The camera is a spectro-imager with time-tagging capability. Each photon interacting in the semiconductor is tagged with a time, a position and an energy. Time resolution is better than 100 ns rms for energy deposits greater than 20 keV, taking into account electronic noise and technological dispersal of the front-end electronics. The spectrum summed across the 64 pixels results in an energy resolution of 664 eV fwhm at 13.94 keV and 842 eV fwhm at 59.54 keV, when the detector is cooled down to -10 °C and biased at -500 V.

  16. Caliste 64, a new CdTe micro-camera for hard X-ray spectro-imaging

    Energy Technology Data Exchange (ETDEWEB)

    Meuris, A. [CEA, Irfu, Service d' Astrophysique, Bat. 709, Orme des Merisiers, F-91191 Gif-sur-Yvette (France)], E-mail: aline.meuris@cea.fr; Limousin, O. [CEA, Irfu, Service d' Astrophysique, Bat. 709, Orme des Merisiers, F-91191 Gif-sur-Yvette (France); Lugiez, F.; Gevin, O. [CEA, Irfu, Service d' Electronique, de Detecteurs et d' Informatique, F-91191 Gif-sur-Yvette (France); Blondel, C.; Pinsard, F. [CEA, Irfu, Service d' Astrophysique, Bat. 709, Orme des Merisiers, F-91191 Gif-sur-Yvette (France); Vassal, M.C.; Soufflet, F. [3D Plus, 641 rue Helene Boucher, F-78532 Buc (France); Le Mer, I. [CEA, Irfu, Service d' Astrophysique, Bat. 709, Orme des Merisiers, F-91191 Gif-sur-Yvette (France)

    2009-10-21

    In the frame of the Simbol-X mission of hard X-ray astrophysics, a prototype of micro-camera with 64 pixels called Caliste 64 has been designed and several samples have been tested. The device integrates ultra-low-noise IDeF-X V1.1 ASICs from CEA and a 1 cm² Al Schottky CdTe detector from Acrorad because of its high uniformity and spectroscopic performance. The process of hybridization, mastered by the 3D Plus company, respects space applications standards. The camera is a spectro-imager with time-tagging capability. Each photon interacting in the semiconductor is tagged with a time, a position and an energy. Time resolution is better than 100 ns rms for energy deposits greater than 20 keV, taking into account electronic noise and technological dispersal of the front-end electronics. The spectrum summed across the 64 pixels results in an energy resolution of 664 eV fwhm at 13.94 keV and 842 eV fwhm at 59.54 keV, when the detector is cooled down to -10 °C and biased at -500 V.

  17. Wafer-scale micro-optics fabrication

    Science.gov (United States)

    Voelkel, Reinhard

    2012-07-01

    Micro-optics is an indispensable key enabling technology for many products and applications today. Probably the most prestigious examples are the diffractive light shaping elements used in high-end DUV lithography steppers. Highly-efficient refractive and diffractive micro-optical elements are used for precise beam and pupil shaping. Micro-optics had a major impact on the reduction of aberrations and diffraction effects in projection lithography, allowing a resolution enhancement from 250 nm to 45 nm within the past decade. Micro-optics also plays a decisive role in medical devices (endoscopes, ophthalmology), in all laser-based devices and fiber communication networks, bringing high-speed internet to our homes. Even our modern smart phones contain a variety of micro-optical elements, for example LED flash light shaping elements, the secondary camera, and ambient light and proximity sensors. Wherever light is involved, micro-optics offers the chance to further miniaturize a device, to improve its performance, or to reduce manufacturing and packaging costs. Wafer-scale micro-optics fabrication is based on technology established by the semiconductor industry. Thousands of components are fabricated in parallel on a wafer. This review paper recapitulates major steps and inventions in wafer-scale micro-optics technology. The state-of-the-art of fabrication, testing and packaging technology is summarized.

  18. The CONNECT project: Combining macro- and micro-structure.

    Science.gov (United States)

    Assaf, Yaniv; Alexander, Daniel C; Jones, Derek K; Bizzi, Albero; Behrens, Tim E J; Clark, Chris A; Cohen, Yoram; Dyrby, Tim B; Huppi, Petra S; Knoesche, Thomas R; Lebihan, Denis; Parker, Geoff J M; Poupon, Cyril; Anaby, Debbie; Anwander, Alfred; Bar, Leah; Barazany, Daniel; Blumenfeld-Katzir, Tamar; De-Santis, Silvia; Duclap, Delphine; Figini, Matteo; Fischi, Elda; Guevara, Pamela; Hubbard, Penny; Hofstetter, Shir; Jbabdi, Saad; Kunz, Nicolas; Lazeyras, Francois; Lebois, Alice; Liptrot, Matthew G; Lundell, Henrik; Mangin, Jean-François; Dominguez, David Moreno; Morozov, Darya; Schreiber, Jan; Seunarine, Kiran; Nava, Simone; Poupon, Cyril; Riffert, Till; Sasson, Efrat; Schmitt, Benoit; Shemesh, Noam; Sotiropoulos, Stam N; Tavor, Ido; Zhang, Hui Gary; Zhou, Feng-Lei

    2013-10-15

    In recent years, diffusion MRI has become an extremely important tool for studying the morphology of living brain tissue, as it provides unique insights into both its macrostructure and microstructure. Recent applications of diffusion MRI aimed to characterize the structural connectome using tractography to infer connectivity between brain regions. In parallel to the development of tractography, additional diffusion MRI based frameworks (CHARMED, AxCaliber, ActiveAx) were developed enabling the extraction of a multitude of micro-structural parameters (axon diameter distribution, mean axonal diameter and axonal density). This unique insight into both tissue microstructure and connectivity has enormous potential value in understanding the structure and organization of the brain as well as providing unique insights to abnormalities that underpin disease states. The CONNECT (Consortium Of Neuroimagers for the Non-invasive Exploration of brain Connectivity and Tracts) project aimed to combine tractography and micro-structural measures of the living human brain in order to obtain a better estimate of the connectome, while also striving to extend validation of these measurements. This paper summarizes the project and describes the perspective of using micro-structural measures to study the connectome. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Micro pan-tilter and focusing mechanism; Micro shikakuyo shisen henko kiko

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    The micro pan-tilter and focusing mechanism can adjust focus while freely changing the visual axis, using a super-small CCD micro camera of 9.2 mm in diameter × 27 mm in length, and contains a camera control unit (CCU) within this size. Many functions of a camera with a tripod head are concentrated into a size one tenth that of conventional cameras. The mechanism has been developed for a micro robot to inspect the interior of small pipes in devices such as heat exchangers in a power plant. Future applications are expected in medical endoscopes and portable information devices. The mechanism observes forward distant views and side-wall short-distance views with a maximum resolution of 20 μm by the coordinated operation of three high-torque electrostatic drive motors (with a minimum outer diameter of 2.5 mm) fabricated using micromachine technology. Auto-focusing is also possible. The hybrid-IC built-in CCU has been realized using three-dimensional high-density mounting technology. Part of this research and development was performed under the industrial science and technology research and development program established by the Agency of Industrial Science and Technology of the Ministry of International Trade and Industry. (translated by NEDO)

  20. Effective Project Risk management in Micro Companies: Case study for Persona Optima Iceland ehf.

    OpenAIRE

    Bražinskaitė, Justina

    2011-01-01

    This study is meant to be a guide for micro companies regarding effective project risk management. The main purpose of this thesis is to introduce project risk management and build a user-friendly managerial model toward effective project risk management in micro companies. The research is based on a case company Persona Optima Iceland ehf. analysis. The study investigates risk management, uncertainties and risks in projects, project risk management, its models and particularities in orde...

  1. Analyzer for gamma camera diagnostics

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Osorio Deliz, J. F.; Diaz Garcia, A.

    2013-01-01

    This research work was carried out to develop an analyzer for gamma camera diagnostics. It consists of an electronic system with hardware and software capabilities, and operates on the four head position signals acquired from a gamma camera detector. The result is the spectrum of the energy deposited by nuclear radiation coming from the camera detector head. The system includes analog processing of the position signals from the camera, digitization and subsequent processing of the energy signal in a multichannel analyzer, transfer of data to a computer via a standard USB port, and processing of the data in a personal computer to obtain the final histogram. The circuits comprise an analog processing board and a universal kit with a microcontroller and a programmable gate array. (Author)
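
    Deriving the energy signal from the four head position signals follows the classic Anger arithmetic, in which the energy is their sum and the event coordinates are energy-normalised differences. The sketch below is the textbook form, not necessarily the exact circuit implemented on the board:

```python
def anger_logic(x_plus, x_minus, y_plus, y_minus):
    """Classic Anger arithmetic: total energy is the sum of the four
    position signals; coordinates are energy-normalised differences."""
    energy = x_plus + x_minus + y_plus + y_minus
    x = (x_plus - x_minus) / energy
    y = (y_plus - y_minus) / energy
    return energy, x, y

# An event slightly to the right of centre (arbitrary signal units).
e, x, y = anger_logic(3.0, 2.0, 2.5, 2.5)
```

    Histogramming the energy values from many events yields the spectrum the analyzer displays.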

  2. Superimpose methods for uncooled infrared camera applied to the micro-scale thermal characterization of composite materials

    Science.gov (United States)

    Morikawa, Junko

    2015-05-01

    A mobile apparatus for quantitative micro-scale thermography using a micro-bolometer was developed, based on our original techniques such as an achromatic lens design to capture micro-scale images in the long-wave infrared, video-signal superimposing for real-time emissivity correction, and pseudo-acceleration of the time frame. The instrument was designed to fit in a 17 cm × 28 cm × 26 cm carrying box. The video signal synthesizer makes it possible to record a direct digital signal of monitoring temperature or positioning data. The encoded digital signal embedded in each image is decoded on read-out; the protocol to encode and decode the measured data was originally defined. The mixed signals of the IR camera and the imposed data were applied to pixel-by-pixel emissivity corrections and to pseudo-acceleration of periodic thermal phenomena. Because the emissivity of industrial materials and biological tissues is usually inhomogeneous, each pixel has a different temperature dependence. The time-scale resolution for periodic thermal events was improved with the "pseudo-acceleration" algorithm, which reduces noise by integrating multiple image data while keeping time resolution. The anisotropic thermal properties of composite materials such as thermally insulating cellular plastics and biometric composite materials were analyzed using these techniques.
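
    The pixel-by-pixel emissivity correction can be sketched in its simplest linearised graybody form; the superimposing hardware is what supplies the per-pixel data in the real instrument, and the formula below is a generic simplification, not the paper's exact method:

```python
import numpy as np

def emissivity_correct(measured, emissivity, background=0.0):
    """Per-pixel emissivity correction in the linearised radiometric regime:
    S_object = (S_measured - (1 - eps) * S_background) / eps."""
    eps = np.asarray(emissivity, dtype=float)
    return (measured - (1.0 - eps) * background) / eps

# A uniform 100-unit source seen through an inhomogeneous emissivity map
# (hypothetical values), with negligible reflected background.
eps = np.array([[0.9, 0.5], [0.7, 1.0]])
measured = 100.0 * eps                    # what the bolometer records
corrected = emissivity_correct(measured, eps)   # uniform field is recovered
```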

  3. Compact Micro-Imaging Spectrometer (CMIS): Investigation of Imaging Spectroscopy and Its Application to Mars Geology and Astrobiology

    Science.gov (United States)

    Staten, Paul W.

    2005-01-01

    Future missions to Mars will attempt to answer questions about Mars' geological and biological history. The goal of the CMIS project is to design, construct, and test a capable multi-spectral micro-imaging spectrometer for use in such missions. A breadboard instrument has been constructed with a micro-imaging camera and several multi-wavelength LED illumination rings. Test samples have been chosen for their interest to spectroscopists, geologists and astrobiologists. Preliminary analysis has demonstrated the advantages of isotropic illumination and micro-imaging spectroscopy over spot spectroscopy.

  4. Relative and Absolute Calibration of a Multihead Camera System with Oblique and Nadir Looking Cameras for a UAS

    Science.gov (United States)

    Niemeyer, F.; Schima, R.; Grenzdörffer, G.

    2013-08-01

    Numerous unmanned aerial systems (UAS) are currently flooding the market, specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because legal restrictions are still manageable while the payload capacities are sufficient for many imaging sensors. A camera system with four oblique and one nadir-looking camera is currently under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. An MD4-1000 UAS from microdrones is used as the carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences of test flights.
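
    Once each camera's absolute pose is known, the relative orientation between any pair follows by composition. The sketch below assumes the world-to-camera convention x_cam = R·x_world + t; the function name and example poses are hypothetical, not taken from the paper:

```python
import numpy as np

def relative_orientation(R1, t1, R2, t2):
    """Relative pose mapping camera-1 coordinates into camera-2 coordinates,
    assuming x_cam = R @ x_world + t for each camera."""
    R_rel = R2 @ R1.T
    t_rel = t2 - R_rel @ t1
    return R_rel, t_rel

# Nadir camera at the origin (identity pose); an oblique camera pitched
# 45 deg about the x-axis and offset by 10 cm (made-up numbers).
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
R2 = np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
t2 = np.array([0.1, 0.0, 0.0])
R_rel, t_rel = relative_orientation(np.eye(3), np.zeros(3), R2, t2)
```

    A quick way to validate such a calibration is to check that a world point expressed in camera 1 and mapped through (R_rel, t_rel) matches its direct projection into camera 2.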

  5. From whole-body counting to imaging: The computer aided collimation gamma camera project (CACAO)

    Energy Technology Data Exchange (ETDEWEB)

    Jeanguillaume, C.; Begot, S.; Quartuccio, M.; Douiri, A.; Ballongue, P.

    2000-07-01

    Whole-body counting is the method of choice for in vivo detection of contamination. To extend this well established method, the possible advantages of imaging radiocontaminants are examined. The use of the CACAO project is then studied. A comparison of simulated reconstructed images obtained by the CACAO project and by a conventional gamma camera used in nuclear medicine follows. Imaging a radionuclide contaminant with a geometrical sensitivity of 10⁻² seems possible in the near future. (author)

  6. From whole-body counting to imaging: The computer aided collimation gamma camera project (CACAO)

    International Nuclear Information System (INIS)

    Jeanguillaume, C.; Begot, S.; Quartuccio, M.; Douiri, A.; Ballongue, P.

    2000-01-01

    Whole-body counting is the method of choice for in vivo detection of contamination. To extend this well established method, the possible advantages of imaging radiocontaminants are examined. The use of the CACAO project is then studied. A comparison of simulated reconstructed images obtained by the CACAO project and by a conventional gamma camera used in nuclear medicine follows. Imaging a radionuclide contaminant with a geometrical sensitivity of 10⁻² seems possible in the near future. (author)

  7. Micro-exchangers. Final report: integrated research project: 8.2; Micro-echangeurs. Rapport final: Projet de recherche integree: 8.2

    Energy Technology Data Exchange (ETDEWEB)

    Lallemand, M. [Institut National des Sciences Appliquees (INSA), Centre de Thermique de Lyon (CETHIL), UMR 5008, 69 - Villeurbanne (France); Ayela, F. [Centre de Recherches sur les tres Basses Temperatures (CRTBT), UPR 5001, 38 - Grenoble (France); Tadrist, L. [Institut Universitaire des Systemes Thermiques Industriels (IUSTI/EPUM) - UMR 6595, 13 - Marseille (France); Favre-Marinet, M.; Marty, P. [Institut National Polytechnique, Lab. des Ecoulements Geophysiques et Industriels (LEGI), UMR 5519, 38 - Grenoble (France); Lebouche, M.; Maillet, D. [Ecole Nationale Superieure en Electricite et Mecanique (ENSEM), Lab. d' Energetique et de Mecanique Theorique et Appliquee (LEMTA), UMR 7563, 54 - Nancy (France); Peerhossaini, H. [Ecole polytechnique de l' Universite de Nantes (UPUN) Lab. de Thermocinetique, (LT), UMR 6607, 44 - Nantes (France); Gruss, A. [CEA Grenoble, Groupement pour la Recherche sur les Echangeurs Thermiques (GRETH), 38 (France)

    2004-07-01

    This project concerns the design and development of efficient heat exchangers in the single-phase and two-phase domains. It provides the results and analysis of research programs on the following topics: hydrodynamics and transfer in microchannels in single-phase convection; the study of boiling in microchannels; and micro-exchangers. (A.L.B.)

  8. Infrared Imaging Camera Final Report CRADA No. TC02061.0

    Energy Technology Data Exchange (ETDEWEB)

    Roos, E. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nebeker, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-08

    This was a collaborative effort between the University of California, Lawrence Livermore National Laboratory (LLNL) and Cordin Company (Cordin) to enhance the U.S. ability to develop a commercial infrared camera capable of capturing high-resolution images in a 100 nanosecond (ns) time frame. The Department of Energy (DOE), under an Initiative for Proliferation Prevention (IPP) project, funded the Russian Federation Nuclear Center All-Russian Scientific Institute of Experimental Physics (RFNC-VNIIEF) in Sarov. VNIIEF was funded to develop a prototype commercial infrared (IR) framing camera and to deliver a prototype IR camera to LLNL. LLNL and Cordin were partners with VNIIEF on this project. A prototype IR camera was delivered by VNIIEF to LLNL in December 2006. In June of 2007, LLNL and Cordin evaluated the camera, and the test results revealed that it outperformed presently available commercial IR cameras. Cordin believes that the camera can be sold on the international market. The camera is currently being used as a scientific tool within Russian nuclear centers. This project was originally designated as a two-year project. The project was not started on time due to changes in the IPP project funding conditions; the funding was re-directed through the International Science and Technology Center (ISTC), which delayed the project start by over one year. The project was not completed on schedule due to changes within the Russian government's export regulations. These changes were directed by export control regulations on the export of high-technology items that can be used to develop military weapons. The IR camera was on the list of items subject to export controls. The ISTC and the Russian government, after negotiations, allowed the delivery of the camera to LLNL. There were no significant technical or business changes to the original project.

  9. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column, one above the other, through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head.

  10. Photogrammetric measurement of 3D freeform millimetre-sized objects with micro features: an experimental validation of the close-range camera calibration model for narrow angles of view

    Science.gov (United States)

    Percoco, Gianluca; Sánchez Salmerón, Antonio J.

    2015-09-01

    The measurement of millimetre and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges to accurately control the position of the sensors. Photogrammetry would lower the costs of 3D inspection of micro-features and would also be applicable to the inspection of non-removable micro parts of large objects. Unfortunately, the behaviour of photogrammetry when applied to micro-features is not well known. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) to the micro-scale, taking into account that in the literature there are research papers stating that an angle of view (AOV) around 10° is the lower limit for the application of the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. At first a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow-AOV cameras with the CRCM. Subsequently the procedure is validated using a reflex camera with a 60 mm macro lens, equipped with extension tubes (20 and 32 mm), achieving magnifications of up to approximately 2×, to verify the literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitation of the laser printing technology used to produce the bi-dimensional pattern on common paper has been overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with the results of existing and more expensive commercial techniques.
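The narrow angles of view discussed above can be reproduced with a thin-lens pinhole approximation. The 23.6 mm sensor width (a typical APS-C value) and the (1 + m) focal-extension factor are simplifying assumptions made here for illustration; the exact AOV depends on the particular lens and extension-tube geometry.

```python
import math

def aov_deg(sensor_mm, focal_mm, magnification=0.0):
    """Angle of view in degrees for a pinhole model. At magnification m
    the lens-to-sensor distance grows roughly by (1 + m), narrowing the
    AOV accordingly (a simplifying assumption)."""
    eff = focal_mm * (1.0 + magnification)
    return math.degrees(2.0 * math.atan(sensor_mm / (2.0 * eff)))

wide = aov_deg(23.6, 60.0)         # plain 60 mm lens: ~22 degrees
narrow = aov_deg(23.6, 60.0, 2.0)  # with ~2x magnification: a few degrees
```

This shows how a macro lens with extension tubes reaches AOVs in the single-digit-degree range probed by the experiments.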

  11. Photogrammetric measurement of 3D freeform millimetre-sized objects with micro features: an experimental validation of the close-range camera calibration model for narrow angles of view

    International Nuclear Information System (INIS)

    Percoco, Gianluca; Sánchez Salmerón, Antonio J

    2015-01-01

    The measurement of millimetre and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges to accurately control the position of the sensors. Photogrammetry would lower the costs of 3D inspection of micro-features and would be applicable to the inspection of non-removable micro parts of large objects too. Unfortunately, the behaviour of photogrammetry is not known when photogrammetry is applied to micro-features. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) to the micro-scale, taking into account that in literature there are research papers stating that an angle of view (AOV) around 10° is the lower limit to the application of the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. At first a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow AOV cameras with the CRCM. Subsequently the procedure is validated using a reflex camera with a 60 mm macro lens, equipped with extension tubes (20 and 32 mm) achieving magnification of up to 2 times approximately, to verify literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitation experienced by the laser printing technology, used to produce the bi-dimensional pattern on common paper, has been overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with the results of existing and more expensive commercial techniques. (paper)

  12. An evaluation of the effectiveness of observation camera placement within the MeerKAT radio telescope project

    Directory of Open Access Journals (Sweden)

    Heyns, Andries

    2015-08-01

    Full Text Available A recent development within the MeerKAT sub-project of the Square Kilometre Array radio telescope network was the placement of a network of three observation cameras in pursuit of two specific visibility objectives. In this paper, we evaluate the effectiveness of the locations of the MeerKAT observation camera network according to a novel multi-objective geographic information systems-based facility location framework. We find that the configuration chosen and implemented by the MeerKAT decision-makers is of very high quality, although we are able to uncover slightly superior alternative placement configurations. A significant amount of time and effort could, however, have been saved in the process of choosing the appropriate camera sites, had our solutions been available to the decision-makers.
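The placement evaluation above rests on scoring candidate camera configurations against visibility objectives. The toy below is only a minimal caricature of that idea: the sites, targets, and visibility sets are invented, and a single coverage objective stands in for the study's multi-objective GIS-based framework.

```python
from itertools import combinations

visible = {            # candidate site -> set of target ids it can see
    "A": {1, 2, 3},    # all values here are invented for illustration
    "B": {3, 4},
    "C": {4, 5, 6},
    "D": {1, 6},
}

def coverage(sites):
    """Number of distinct targets seen by at least one chosen site."""
    seen = set()
    for s in sites:
        seen |= visible[s]
    return len(seen)

# Exhaustively score every 3-camera configuration (feasible at this size;
# real facility-location problems need heuristics or integer programming).
best = max(combinations(sorted(visible), 3), key=coverage)
```

Comparing an implemented configuration's score against the exhaustive optimum is exactly the kind of check that lets one call a chosen placement "of very high quality" or "slightly inferior".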

  13. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera that withstands a total dose of 10⁶-10⁸ rad was developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was carried out. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, and pan/tilt controllers) were designed around the concept of remote control. Two types of radiation-tolerant camera were fabricated, for use in underwater or normal environments. (author)

  14. Commercialization of radiation tolerant camera

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera that withstands a total dose of 10⁶-10⁸ rad was developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was carried out. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, and pan/tilt controllers) were designed around the concept of remote control. Two types of radiation-tolerant camera were fabricated, for use in underwater or normal environments. (author)

  15. RELATIVE AND ABSOLUTE CALIBRATION OF A MULTIHEAD CAMERA SYSTEM WITH OBLIQUE AND NADIR LOOKING CAMERAS FOR A UAS

    Directory of Open Access Journals (Sweden)

    F. Niemeyer

    2013-08-01

    Full Text Available Numerous unmanned aerial systems (UAS) are currently flooding the market. UAVs are specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because legal restrictions are still manageable while payload capacities are sufficient for many imaging sensors. Currently a camera system with four oblique and one nadir-looking camera is under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. An MD4-1000 UAS from microdrones is used as a carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences from test flights.

  16. Long wavelength infrared camera (LWIRC): a 10 micron camera for the Keck Telescope

    Energy Technology Data Exchange (ETDEWEB)

    Wishnow, E.H.; Danchi, W.C.; Tuthill, P.; Wurtz, R.; Jernigan, J.G.; Arens, J.F.

    1998-05-01

    The Long Wavelength Infrared Camera (LWIRC) is a facility instrument for the Keck Observatory designed to operate at the f/25 forward Cassegrain focus of the Keck I telescope. The camera operates over the wavelength band 7-13 μm using ZnSe transmissive optics. A set of filters, a circular variable filter (CVF), and a mid-infrared polarizer are available, as are three plate scales: 0.05″, 0.10″, and 0.21″ per pixel. The camera focal plane array and optics are cooled using liquid helium. The system has been refurbished with a 128 x 128 pixel Si:As detector array. The electronics readout system used to clock the array is compatible with both the hardware and software of the other Keck infrared instruments NIRC and LWS. A new pre-amplifier/A-D converter has been designed and constructed which greatly decreases the system's susceptibility to noise.
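The quoted plate scales follow the standard small-angle relation between pixel pitch and effective focal length. The ~10 m Keck aperture and f/25 ratio come from the abstract's context; the 60 μm pixel pitch below is an assumed value chosen only to illustrate how a ~0.05″/pixel scale could arise (in practice, reimaging optics set the delivered scale).

```python
ARCSEC_PER_RAD = 206265.0  # standard small-angle conversion factor

def plate_scale(pixel_m, focal_m):
    """Arcseconds on the sky subtended by one detector pixel."""
    return ARCSEC_PER_RAD * pixel_m / focal_m

focal = 10.0 * 25.0                  # aperture (m) x focal ratio = 250 m
scale = plate_scale(60e-6, focal)    # ~0.05 arcsec per pixel (assumed pitch)
```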

  17. Navigation Aiding by a Hybrid Laser-Camera Motion Estimator for Micro Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Jamal Atman

    2016-09-01

    Full Text Available Micro Air Vehicles (MAVs) equipped with various sensors are able to carry out autonomous flights. However, the self-localization of autonomous agents is mostly dependent on Global Navigation Satellite Systems (GNSS). In order to provide an accurate navigation solution in the absence of GNSS signals, this article presents a hybrid sensor. The hybrid sensor is a deep integration of a monocular camera and a 2D laser rangefinder so that the motion of the MAV is estimated. This realization is expected to be more flexible in terms of environments compared to laser-scan-matching approaches. The estimated ego-motion is then integrated into the MAV’s navigation system. However, the pose between the two sensors must first be known; it is obtained with an improved calibration method proposed here. For both calibration and ego-motion estimation, 3D-to-2D correspondences are used and the Perspective-3-Point (P3P) problem is solved. Moreover, the covariance estimation of the relative motion is presented. The experiments show very accurate calibration and navigation results.

  18. Navigation Aiding by a Hybrid Laser-Camera Motion Estimator for Micro Aerial Vehicles.

    Science.gov (United States)

    Atman, Jamal; Popp, Manuel; Ruppelt, Jan; Trommer, Gert F.

    2016-09-16

    Micro Air Vehicles (MAVs) equipped with various sensors are able to carry out autonomous flights. However, the self-localization of autonomous agents is mostly dependent on Global Navigation Satellite Systems (GNSS). In order to provide an accurate navigation solution in the absence of GNSS signals, this article presents a hybrid sensor. The hybrid sensor is a deep integration of a monocular camera and a 2D laser rangefinder so that the motion of the MAV is estimated. This realization is expected to be more flexible in terms of environments compared to laser-scan-matching approaches. The estimated ego-motion is then integrated into the MAV's navigation system. However, the pose between the two sensors must first be known; it is obtained with an improved calibration method proposed here. For both calibration and ego-motion estimation, 3D-to-2D correspondences are used and the Perspective-3-Point (P3P) problem is solved. Moreover, the covariance estimation of the relative motion is presented. The experiments show very accurate calibration and navigation results.

  19. The potential micro-hydropower projects in Nakhon Ratchasima province, Thailand

    International Nuclear Information System (INIS)

    Kosa, Preeyaphorn; Chinkulkijniwat, Avirut; Horpibulsuk, Suksun; Kulworawanichpong, Thanatchai; Srivoramas, Rerkchai; Teaumroong, Neung

    2011-01-01

    At present, fossil fuel energy is commonly used in developing countries, including Thailand. The tendency to use fossil fuel energy is continuously increasing, and the price of fossil fuels is rising. Thus, renewable energy is of interest. Hydropower is one of the oldest renewable energy forms known and one of the best solutions for providing electricity to rural communities. The present paper aims to determine the potential micro-hydropower sites that could provide more than 50 kW but not over 10 MW in Nakhon Ratchasima Province, Thailand. Both reservoir and run-of-the-river schemes are considered for the assessment of potential micro-hydropower sites. For the reservoir scheme, the discharge in the reservoir is employed for generating micro-hydropower electricity. This installation can be carried out without major modifications to the dam. The run-of-the-river scheme diverts water flow from the river mainstream to the intake via a pressure pipe or an open canal, which is then conveyed to the turbine via a penstock to generate electricity. The results showed that there are 6 suitable projects for the reservoir scheme and 11 suitable projects for the run-of-the-river scheme. The maximum power load was 6000 kW and 320 kW for the reservoir and the run-of-the-river schemes, respectively. Hydropower from the run-of-the-river scheme is more suitable than hydropower from the reservoir scheme because of the many mountains in this province. The designed head for the run-of-the-river scheme is thus generally higher than that for the reservoir scheme. Because stream flow during the dry season is very low, electricity can only be produced in the wet season. This research is a pilot study to determine the potential sites of micro-hydropower projects. (author)
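Screening a site against the study's 50 kW to 10 MW micro-hydro window uses the standard hydropower relation P = ρ·g·Q·H·η. The flow, head, and efficiency values below are illustrative assumptions, not figures taken from the study.

```python
RHO, G = 1000.0, 9.81  # water density (kg/m^3), gravitational accel (m/s^2)

def hydro_power_kw(flow_m3s, head_m, efficiency=0.85):
    """Electrical power in kW from flow (m^3/s) and net head (m)."""
    return RHO * G * flow_m3s * head_m * efficiency / 1000.0

# Assumed example site: 1 m^3/s through a 40 m head at 85 % efficiency.
p = hydro_power_kw(flow_m3s=1.0, head_m=40.0)  # ~334 kW
in_window = 50.0 <= p <= 10000.0               # qualifies as micro-hydro here
```

The formula also shows why the high-head run-of-the-river sites dominate in a mountainous province: power scales linearly with head as well as with flow.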

  20. Project Management In Port Harcourt – Based Micro Business ...

    African Journals Online (AJOL)

    There are several perspectives of micro business management, which if strategically redefined and reinvented, could make a difference in the advancement of this crucial constituency of the Nigeria economy. One of such concerns is project analysis and management, particularly in periods of high inflation. This study was ...

  1. Invariant Observer-Based State Estimation for Micro-Aerial Vehicles in GPS-Denied Indoor Environments Using an RGB-D Camera and MEMS Inertial Sensors

    Directory of Open Access Journals (Sweden)

    Dachuan Li

    2015-04-01

    Full Text Available This paper presents a non-linear state observer-based integrated navigation scheme for estimating the attitude, position and velocity of micro aerial vehicles (MAV) operating in GPS-denied indoor environments, using the measurements from low-cost MEMS (micro electro-mechanical systems) inertial sensors and an RGB-D camera. A robust RGB-D visual odometry (VO) approach was developed to estimate the MAV’s relative motion by extracting and matching features captured by the RGB-D camera from the environment. The state observer of the RGB-D visual-aided inertial navigation was then designed based on the invariant observer theory for systems possessing symmetries. The motion estimates from the RGB-D VO were fused with inertial and magnetic measurements from the onboard MEMS sensors via the state observer, providing the MAV with accurate estimates of its full six degree-of-freedom states. Implementations on a quadrotor MAV and indoor flight test results demonstrate that the resulting state observer is effective in estimating the MAV’s states without relying on external navigation aids such as GPS. The properties of computational efficiency and simplicity in gain tuning make the proposed invariant observer-based navigation scheme appealing for actual MAV applications in indoor environments.

  2. An Open Standard for Camera Trap Data

    Directory of Open Access Journals (Sweden)

    Tavis Forrester

    2016-12-01

    Full Text Available Camera traps that capture photos of animals are a valuable tool for monitoring biodiversity. The use of camera traps is rapidly increasing and there is an urgent need for standardization to facilitate data management, reporting and data sharing. Here we offer the Camera Trap Metadata Standard as an open data standard for storing and sharing camera trap data, developed by experts from a variety of organizations. The standard captures information necessary to share data between projects and offers a foundation for collecting the more detailed data needed for advanced analysis. The data standard captures information about study design, the type of camera used, and the location and species names for all detections in a standardized way. This information is critical for accurately assessing results from individual camera trapping projects and for combining data from multiple studies for meta-analysis. This data standard is an important step in aligning camera trapping surveys with best practices in data-intensive science. Ecology is moving rapidly into the realm of big data, and central data repositories are becoming a critical tool and are emerging for camera trap data. This data standard will help researchers standardize data terms, align past data to new repositories, and provide a framework for utilizing data across repositories and research projects to advance animal ecology and conservation.
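A deployment record shaped after the kinds of information the abstract says the standard captures (study design, camera type, location, detections) might look like the sketch below. The field names here are invented for illustration and are not the standard's actual term names.

```python
# Hypothetical camera-trap deployment record; every key name is an
# illustrative assumption, not a field from the published standard.
record = {
    "project": {"name": "Forest Survey 2016", "design": "systematic grid"},
    "camera": {"make": "ExampleCam", "model": "EC-100"},
    "deployment": {"latitude": 38.90, "longitude": -77.04,
                   "start": "2016-05-01", "end": "2016-06-01"},
    "detections": [
        {"timestamp": "2016-05-03T02:14:00Z",
         "scientific_name": "Odocoileus virginianus", "count": 1},
    ],
}

def validate(rec):
    """Minimal structural check: top-level sections present and every
    detection names a species."""
    required = {"project", "camera", "deployment", "detections"}
    return required <= rec.keys() and all(
        "scientific_name" in d for d in rec["detections"])
```

Machine-checkable structure like this is what makes records from independent projects combinable for meta-analysis.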

  3. Demonstration of the CDMA-mode CAOS smart camera.

    Science.gov (United States)

    Riza, Nabeel A; Mazhar, Mohsin A

    2017-12-11

    Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode, with a controlled factor-of-200 optical attenuation of the scene irradiance, to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, this CMOS sensor image data is used to acquire a more robust, un-attenuated true target image of a focused zone using the time-modulated CDMA-mode of the CAOS camera. Using four different bright-light test target scenes, successfully demonstrated is a proof-of-concept visible-band CAOS smart camera operating in the CDMA-mode using Walsh-design CAOS pixel codes of up to 4096 bits in length, with a maximum 10 kHz code bit rate giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel, 13.68 μm on a side. The CDMA-mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, un-spoiled bright-light spectrally diverse targets.
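The Walsh-code encode/decode cycle at the heart of the CDMA-mode can be sketched in a few lines: each pixel's irradiance is time-modulated with an orthogonal code, the point detector records the superposition, and correlation recovers each pixel. The 8-bit codes, two-pixel scene, and ±1 modulation below are toy assumptions (the demonstrated camera uses up to 4096-bit codes over 3600 CAOS pixels, with physical on/off micro-mirror modulation).

```python
def hadamard(n):
    """Sylvester construction; rows are mutually orthogonal Walsh-type
    codes of length n (n must be a power of two)."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

N = 8
codes = hadamard(N)
pixels = {1: 0.7, 2: 0.3}  # pixel index -> irradiance (arbitrary units)

# Point-detector signal: superposition of all coded pixels over N slots.
signal = [sum(a * codes[p][t] for p, a in pixels.items()) for t in range(N)]

# Decode by correlating with each pixel's code (row self-dot-product = N,
# cross-dot-product = 0, so each irradiance separates cleanly).
recovered = {p: sum(signal[t] * codes[p][t] for t in range(N)) / N
             for p in pixels}
```

The orthogonality that makes this work is also what gives the CDMA-mode its robustness: light from other pixels averages out of each correlation.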

  4. Perspective projection for variance pose face recognition from camera calibration

    Science.gov (United States)

    Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.

    2016-04-01

    Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance-pose face features is challenging. We provide a solution to this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face-box tracking and centre-of-eyes detection can be performed using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. Training on frontal images and the remaining poses in the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement in variance pose for each individual.
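The perspective projection underlying such methods is the standard pinhole model: a 3D point in the camera frame maps to pixel coordinates through the intrinsic parameters. The focal length and principal point below are invented example intrinsics, standing in for the ones the method infers from the image.

```python
def project(point, fx, fy, cx, cy):
    """Pinhole perspective projection of camera-frame point (X, Y, Z),
    Z > 0, to pixel coordinates (u, v)."""
    X, Y, Z = point
    return fx * X / Z + cx, fy * Y / Z + cy

fx = fy = 800.0        # focal length in pixels (assumed)
cx, cy = 320.0, 240.0  # principal point for a 640x480 image (assumed)

# A point 2 m ahead and 0.5 m to the right projects right of centre.
u, v = project((0.5, 0.0, 2.0), fx, fy, cx, cy)  # u = 520.0, v = 240.0
```

Inverting this mapping with known intrinsics is what lets image-plane measurements (eye positions, face-box corners) be related to metric 3D distances across poses.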

  5. Status of the NectarCAM camera project

    International Nuclear Information System (INIS)

    Glicenstein, J.F.; Delagnes, E.; Fesquet, M.; Louis, F.; Moudden, Y.; Moulin, E.; Nunio, F.; Sizun, P.

    2014-01-01

    NectarCAM is a camera designed for the medium-sized telescopes of the Cherenkov Telescope Array (CTA) covering the central energy range of 100 GeV to 30 TeV. It has a modular design based on the NECTAr chip, at the heart of which is a GHz-sampling switched capacitor array and a 12-bit analog-to-digital converter. The camera will be equipped with 265 seven-photomultiplier modules, covering a field of view of 7 to 8 degrees. Each module includes the photomultiplier bases, high-voltage supply, pre-amplifier, trigger, readout and Ethernet transceiver. Recorded events last between a few nanoseconds and tens of nanoseconds. A flexible trigger scheme allows very long events to be read out. NectarCAM can sustain a data rate of 10 kHz. The camera concept, the design and tests of the various sub-components, and the results of thermal and electrical prototypes are presented. The design includes the mechanical structure, the cooling of the electronics, read-out, clock distribution, slow control, data acquisition, trigger, monitoring and services. A 133-pixel prototype with full-scale mechanics, cooling, data acquisition and slow control will be built at the end of 2014. (authors)

  6. Distributed power in Afghanistan: The Padisaw micro-hydro project

    International Nuclear Information System (INIS)

    Hallett, Michael

    2009-01-01

    The provision of electricity is a vital need in reconstruction and development situations, like that in Afghanistan. Indeed, according to the Afghan government's Afghan National Development Strategy (ANDS) the need for electricity featured in 80% of the Provincial Development Plans as a top priority. With the help of the International Community, the government of Afghanistan is attempting to develop a new market oriented approach to the nationwide provision of electrical power. Although the bulk of the electrification effort is directed toward large scale construction of a national grid, the ANDS explicitly mentions a role for 'micro-hydro, solar, waste and small scale diesel power and energy generating sources'. This article will describe a micro-hydro project in Padisaw village, in the Nurgaram district of Nuristan province located in Northeastern Afghanistan and the role Provincial Reconstruction Team played in working with the local community through the project planning and building processes and offer some observation on how, as the Afghan National Development Strategy is executed, the private sector can play an increasingly significant role in the Afghan distributed energy arena. (author)

  7. SCC500: next-generation infrared imaging camera core products with highly flexible architecture for unique camera designs

    Science.gov (United States)

    Rumbaugh, Roy N.; Grealish, Kevin; Kacir, Tom; Arsenault, Barry; Murphy, Robert H.; Miller, Scott

    2003-09-01

    A new 4th generation MicroIR architecture is introduced as the latest in the highly successful Standard Camera Core (SCC) series by BAE SYSTEMS to offer an infrared imaging engine with greatly reduced size, weight, power, and cost. The advanced SCC500 architecture provides great flexibility in configuration to include multiple resolutions, an industry standard Real Time Operating System (RTOS) for customer specific software application plug-ins, and a highly modular construction for unique physical and interface options. These microbolometer based camera cores offer outstanding and reliable performance over an extended operating temperature range to meet the demanding requirements of real-world environments. A highly integrated lens and shutter is included in the new SCC500 product enabling easy, drop-in camera designs for quick time-to-market product introductions.

  8. Final Report for LDRD Project 02-FS-009 Gigapixel Surveillance Camera

    Energy Technology Data Exchange (ETDEWEB)

    Marrs, R E; Bennett, C L

    2010-04-20

    The threats of terrorism and proliferation of weapons of mass destruction add urgency to the development of new techniques for surveillance and intelligence collection. For example, the United States faces a serious and growing threat from adversaries who locate key facilities underground, hide them within other facilities, or otherwise conceal their location and function. Reconnaissance photographs are one of the most important tools for uncovering the capabilities of adversaries. However, current imaging technology provides only infrequent static images of a large area, or occasional video of a small area. We are attempting to add a new dimension to reconnaissance by introducing a capability for large area video surveillance. This capability would enable tracking of all vehicle movements within a very large area. The goal of our project is the development of a gigapixel video surveillance camera for high altitude aircraft or balloon platforms. From very high altitude platforms (20-40 km altitude) it would be possible to track every moving vehicle within an area of roughly 100 km x 100 km, about the size of the San Francisco Bay region, with a gigapixel camera. Reliable tracking of vehicles requires a ground sampling distance (GSD) of 0.5 to 1 m and a framing rate of approximately two frames per second (fps). For a 100 km x 100 km area the corresponding pixel count is 10 gigapixels for a 1-m GSD and 40 gigapixels for a 0.5-m GSD. This is an order of magnitude beyond the 1 gigapixel camera envisioned in our LDRD proposal. We have determined that an instrument of this capacity is feasible.
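The pixel-count arithmetic in the report is easy to check: the number of pixels needed is simply the side length of the surveilled area divided by the ground sampling distance, squared.

```python
def pixels_needed(side_m, gsd_m):
    """Pixels required to cover a square area of given side length at a
    given ground sampling distance (GSD)."""
    n = side_m / gsd_m   # pixels along one side
    return n * n

GIGA = 1e9
area_side = 100_000.0    # 100 km in metres

g1 = pixels_needed(area_side, 1.0) / GIGA   # 10 gigapixels at 1 m GSD
g05 = pixels_needed(area_side, 0.5) / GIGA  # 40 gigapixels at 0.5 m GSD
```

Halving the GSD quadruples the pixel count, which is why the 0.5 m requirement pushes the design from 10 to 40 gigapixels.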

  9. SKYLARK - A crossbow-launched micro scale cheap UAV for close aerial surveillance

    Directory of Open Access Journals (Sweden)

    Alexandru-Marius PANAIT

    2012-03-01

    Close air support of ground troops, especially in densely populated urban environments, has an ever-increasing prevalence in modern warfare. Counter-terrorism activities as well as land-based "surgical strikes" impose a set of special requirements on all the weapons and equipment used, so as to minimize weight, cost and complexity and maximize efficiency. Small-scale UAVs are in service with armed forces around the globe; micro UAVs are emerging as the preferred solution for close support of ground troops. The endurance and range of these devices are typically small to very small, and their speed is low. Their small size makes them virtually un-targetable, even if still somewhat detectable. A new generation of micro-drones is proposed, with a higher speed (up to 350 km/h averaged ground speed) and an automatic recovery system. Project SKYLARK consists of a reusable minimal micro-UAV featuring a portable micro-USB camera and an aerodynamically assisted timer-based recovery system.

  10. Caliste 64, an innovative CdTe hard X-ray micro-camera

    International Nuclear Information System (INIS)

    Meuris, A.; Limousin, O.; Pinsard, F.; Le Mer, I.; Lugiez, F.; Gevin, O.; Delagnes, E.; Vassal, M.C.; Soufflet, F.; Bocage, R.

    2008-01-01

    A prototype 64-pixel miniature camera has been designed and tested for the Simbol-X hard X-ray observatory to be flown on the joint CNES-ASI space mission in 2014. This device is called Caliste 64. It is a high performance spectro-imager with event time-tagging capability, able to detect photons between 2 keV and 250 keV. Caliste 64 is the assembly of a 1 or 2 mm thick CdTe detector mounted on top of a readout module. CdTe detectors equipped with Aluminum Schottky barrier contacts are used because of their very low dark current and excellent spectroscopic performance. The front-end electronics is a stack of four IDeF-X V1.1 ASICs, arranged perpendicular to the detection plane, to read out each pixel independently. The whole camera fits in a 10 × 10 × 20 mm³ volume and is juxtaposable on its four sides. This allows the device to be used as an elementary unit in a larger array of Caliste 64 cameras. Noise performance resulted in an ENC better than 60 electrons rms on average. The first prototype camera was tested at -10 degrees C with a bias of -400 V. The spectrum summed across the 64 pixels results in a resolution of 697 eV FWHM at 13.9 keV and 808 eV FWHM at 59.54 keV. (authors)

  11. Wafer-level micro-optics: trends in manufacturing, testing, packaging, and applications

    Science.gov (United States)

    Voelkel, Reinhard; Gong, Li; Rieck, Juergen; Zheng, Alan

    2012-11-01

    Micro-optics is an indispensable key enabling technology (KET) for many products and applications today. Probably the most prestigious examples are the diffractive light shaping elements used in high-end DUV lithography steppers. Highly efficient refractive and diffractive micro-optical elements are used for precise beam and pupil shaping. Micro-optics had a major impact on the reduction of aberrations and diffraction effects in projection lithography, allowing a resolution enhancement from 250 nm to 45 nm within the last decade. Micro-optics also plays a decisive role in medical devices (endoscopes, ophthalmology), in all laser-based devices and fiber communication networks (supercomputer, ROADM), bringing high-speed internet to our homes (FTTH). Even our modern smart phones contain a variety of micro-optical elements. For example, LED flashlight shaping elements, the secondary camera, and ambient light and proximity sensors. Wherever light is involved, micro-optics offers the chance to further miniaturize a device, to improve its performance, or to reduce manufacturing and packaging costs. Wafer-scale micro-optics fabrication is based on technology established by semiconductor industry. Thousands of components are fabricated in parallel on a wafer. We report on the state of the art in wafer-based manufacturing, testing, packaging and present examples and applications for micro-optical components and systems.

  12. Caliste 64, an innovative CdTe hard X-ray micro-camera

    Energy Technology Data Exchange (ETDEWEB)

    Meuris, A.; Limousin, O.; Pinsard, F.; Le Mer, I. [CEA Saclay, DSM, DAPNIA, Serv. Astrophys., F-91191 Gif sur Yvette (France); Lugiez, F.; Gevin, O.; Delagnes, E. [CEA Saclay, DSM, DAPNIA, Serv. Electron., F-91191 Gif sur Yvette (France); Vassal, M.C.; Soufflet, F.; Bocage, R. [3D-plus Company, F-78532 Buc (France)

    2008-07-01

    A prototype 64-pixel miniature camera has been designed and tested for the Simbol-X hard X-ray observatory to be flown on the joint CNES-ASI space mission in 2014. This device is called Caliste 64. It is a high performance spectro-imager with event time-tagging capability, able to detect photons between 2 keV and 250 keV. Caliste 64 is the assembly of a 1 or 2 mm thick CdTe detector mounted on top of a readout module. CdTe detectors equipped with Aluminum Schottky barrier contacts are used because of their very low dark current and excellent spectroscopic performance. The front-end electronics is a stack of four IDeF-X V1.1 ASICs, arranged perpendicular to the detection plane, to read out each pixel independently. The whole camera fits in a 10 × 10 × 20 mm³ volume and is juxtaposable on its four sides. This allows the device to be used as an elementary unit in a larger array of Caliste 64 cameras. Noise performance resulted in an ENC better than 60 electrons rms on average. The first prototype camera was tested at -10 degrees C with a bias of -400 V. The spectrum summed across the 64 pixels results in a resolution of 697 eV FWHM at 13.9 keV and 808 eV FWHM at 59.54 keV. (authors)

  13. Product Plan of New Generation System Camera "OLYMPUS PEN E-P1"

    Science.gov (United States)

    Ogawa, Haruo

    "OLYMPUS PEN E-P1", a new-generation system camera, is the first Olympus product built to the new "Micro Four-thirds System" standard for high-resolution mirrorless cameras. It has sold well since its release on July 3, 2009, on the concept of "small and stylish design, easy operation and SLR image quality". The half-size film camera "OLYMPUS PEN", in turn, was popular for its concept of "small and stylish design and original mechanism" from its first model in 1959, recording sales of more than 17 million units across 17 models. Thanks to the 50th-anniversary topic and the emotional value of the Olympus Pen, the Olympus Pen E-P1 achieved big sales. I explain the thinking behind a product plan that, in planning the first product of the "Micro Four-thirds System", considered not only simple functional value but also emotional value.

  14. First results of micro-neutron tomography by use of a focussing neutron lens

    CERN Document Server

    Masschaele, B; Cauwels, P; Dierick, M; Jolie, J; Mondelaers, W

    2001-01-01

    Since the appearance of high-flux neutron beams, scientists have experimented with neutron radiography. This high beam flux, combined with modern neutron-to-visible-light converters, makes fast neutron micro-tomography possible. The first results of cold neutron tomography with a neutron lens are presented in this article. Samples are rotated in the beam and the projections are recorded with a neutron camera. The 3D reconstruction is performed with cone-beam reconstruction software.

  15. Theory and applications of smart cameras

    CERN Document Server

    2016-01-01

    This book presents an overview of smart camera systems, considering practical applications but also reviewing fundamental aspects of the underlying technology. It introduces in a tutorial style the principles of sensing and signal processing, and also describes topics such as wireless connection to the Internet of Things (IoT), which is expected to be the biggest market for smart cameras. It is an excellent guide to the fundamentals of smart camera technology, and the chapters complement each other well, as the authors have worked as a team under the auspices of GFP (Global Frontier Project), the largest-scale funded research in Korea. This is the third of three books based on the Integrated Smart Sensors research project, which describe the development of innovative devices, circuits, and system-level enabling technologies. The aim of the project was to develop common platforms on which various devices and sensors can be loaded, and to create systems offering significant improvements in information processi...

  16. Nonintrusive iris image acquisition system based on a pan-tilt-zoom camera and light stripe projection

    Science.gov (United States)

    Yoon, Soweon; Jung, Ho Gi; Park, Kang Ryoung; Kim, Jaihie

    2009-03-01

    Although iris recognition is one of the most accurate biometric technologies, it has not yet been widely used in practical applications. This is mainly due to user inconvenience during the image acquisition phase. Specifically, users must adjust their eye position within a small capture volume at a close distance from the system. To overcome these problems, we propose a novel iris image acquisition system that provides users with an unconstrained environment: a large operating range, freedom of movement in a standing posture, and the capture of good-quality iris images in an acceptable time. The proposed system makes the following three contributions compared with previous works: (1) the capture volume is significantly increased by using a pan-tilt-zoom (PTZ) camera guided by light stripe projection, (2) the iris location in the large capture volume is found quickly thanks to 1-D vertical face searching from the user's horizontal position obtained by the light stripe projection, and (3) zooming and focusing on the user's irises at a distance are accurate and fast using the 3-D position of the face estimated by the light stripe projection and the PTZ camera. Experimental results show that the proposed system can capture good-quality iris images in 2.479 s on average at a distance of 1.5 to 3 m, while allowing a limited amount of user movement.
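
    Contribution (3) ultimately reduces to pointing geometry: given an estimated 3-D target position, the pan and tilt angles follow from two arctangents. A minimal sketch (the coordinate convention and function name are assumptions, not taken from the paper):

```python
import math

def pan_tilt_to_target(x: float, y: float, z: float) -> tuple:
    """Pan/tilt angles (radians) that point a camera at the origin toward
    a target at (x, y, z): x right, y up, z forward along the optical axis."""
    pan = math.atan2(x, z)                  # rotate about the vertical axis
    tilt = math.atan2(y, math.hypot(x, z))  # then elevate toward the target
    return pan, tilt

# Target straight ahead at 2 m: no pan or tilt needed.
print(pan_tilt_to_target(0.0, 0.0, 2.0))                  # (0.0, 0.0)
# Target 2 m ahead and 2 m to the right: pan of 45 degrees.
print(round(math.degrees(pan_tilt_to_target(2.0, 0.0, 2.0)[0]), 6))  # 45.0
```

    In the paper's setup the 3-D face position itself comes from the light stripe projection; this sketch only covers the final aiming step.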

  17. Movement-based Interaction in Camera Spaces

    DEFF Research Database (Denmark)

    Eriksson, Eva; Riisgaard Hansen, Thomas; Lykke-Olesen, Andreas

    2006-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking, and we show how these three concepts can be used to analyse other camera tracking applications.

  18. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    Science.gov (United States)

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear-array push-broom imaging mode is widely used for high-resolution optical satellites (HROS). Using double cameras attached to a high-rigidity support along with push-broom imaging is one method to enlarge the field of view while ensuring high resolution. High-accuracy image mosaicking is the key factor in the geometrical quality of the complete stitched satellite imagery. This paper proposes a high-accuracy image mosaicking approach based on the big virtual camera (BVC) for the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system, using forward-projection and backward-projection, to obtain the corresponding single virtual image. After on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinates on the big virtual detector image plane. The paper subtly uses the concept of the big virtual camera to obtain a stitched image and the corresponding high-accuracy rational function model (RFM) for concurrent post-processing. Experiments verified that the proposed method achieves seamless mosaicking while maintaining geometric accuracy.
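
    The forward-project/back-project round trip can be illustrated with a deliberately simplified stand-in for the rigorous imaging model: plain homographies replace the orbit, attitude, and detector look-angle model here, so this is only a sketch of the data flow, not GF2's actual geometry.

```python
import numpy as np

def forward_project(pixel_xy, cam_to_ground):
    """Map a detector pixel to ground coordinates (homography stand-in)."""
    p = np.array([pixel_xy[0], pixel_xy[1], 1.0])
    g = cam_to_ground @ p
    return g[:2] / g[2]

def backward_project(ground_xy, ground_to_virtual):
    """Map ground coordinates into the big virtual camera's detector plane."""
    g = np.array([ground_xy[0], ground_xy[1], 1.0])
    v = ground_to_virtual @ g
    return v[:2] / v[2]

# With identity transforms a pixel should round-trip unchanged: each real
# pixel is projected to the ground, then back-projected onto the shared
# virtual detector where the strips are stitched.
I = np.eye(3)
ground = forward_project((120.0, 45.0), I)
virtual = backward_project(ground, I)
print(virtual)  # [120.  45.]
```

    The round trip is the invariant worth testing in any such pipeline: re-projection must not move a pixel when the chained transforms compose to the identity.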

  19. Visual Inspection for Breakage of Micro-milling Cutter

    Directory of Open Access Journals (Sweden)

    WANG Lei

    2014-11-01

    In order to realize visual inspection for breakage of micro-milling cutters, a developed-image acquisition method for the surface of a micro-milling cutter was constructed and a classification method based on a multilayer neural network is proposed in this article. While the milling cutter rotates at a constant speed, a camera is triggered by a rotary encoder to capture a series of images, and the developed image of the milling cutter is created by image mosaic algorithms. The moments of the regional features as well as the gray feature of the tooth edge are extracted as the input vector of the neural network. The feature vector includes the moment of inertia, the geometric central moment, three-dimensional invariant moments, and the gray values of the projections on the two principal axis directions of the tooth region. With a properly designed neural network, breakage defects can be detected with 100 % accuracy, and the false discovery rate is 0.5 %.
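
    The classification step, a feature vector fed to a multilayer network, can be sketched as a minimal forward pass. Everything here (layer sizes, weights, feature count) is an arbitrary stand-in, not the paper's trained network:

```python
import numpy as np

def mlp_classify(features, w1, b1, w2, b2):
    """One hidden layer with sigmoid activations; a score near 1 would
    indicate a 'broken' cutter with trained weights."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    hidden = sigmoid(w1 @ features + b1)
    out = sigmoid(w2 @ hidden + b2)
    return out.item()

rng = np.random.default_rng(0)
n_features = 6  # stand-in for moments + projection gray values

# Random stand-in weights; a real system would train these on labeled images.
w1, b1 = rng.normal(size=(4, n_features)), np.zeros(4)
w2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

score = mlp_classify(rng.normal(size=n_features), w1, b1, w2, b2)
print(0.0 < score < 1.0)  # True: the sigmoid keeps the score in (0, 1)
```

    The design point the paper relies on is that the hand-crafted moment features compress each tooth region to a short vector, so a small network suffices for the broken/intact decision.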

  20. Measurement of liquid film flow on nuclear rod bundle in micro-scale by using very high speed camera system

    Science.gov (United States)

    Pham, Son; Kawara, Zensaku; Yokomine, Takehiko; Kunugi, Tomoaki

    2012-11-01

    Playing important roles in mass and heat transfer as well as in the safety of boiling water reactors, the liquid film flow on nuclear fuel rods has been studied with different measurement techniques such as ultrasonic transmission, conductivity probes, etc. However, the experimental data obtained on this annular two-phase flow are still insufficient to construct a physical model for critical heat flux analysis, especially at the micro-scale. The remaining problems are mainly caused by the complicated geometry of fuel rod bundles and the high velocity and very unstable interface behavior of the liquid and gas flow. To overcome these difficulties, a new approach using a very high-speed digital camera system is introduced in this work. The test section, simulating a 3×3 rectangular rod bundle, was made of acrylic to allow full optical observation by the camera. Image data were taken through a Cassegrain optical system to maintain a spatiotemporal resolution of up to 7 μm and 20 μs. The results include not only real-time visual information on flow patterns, but also quantitative data such as liquid film thickness, droplet size and speed distributions, and the tilt angle of wavy surfaces. These databases could contribute to the development of a new model for annular two-phase flow. Partly supported by the Global Center of Excellence (G-COE) program (J-051) of MEXT, Japan.

  1. ALGORITHM OF PLACEMENT OF VIDEO SURVEILLANCE CAMERAS AND ITS SOFTWARE IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

    Comprehensive distributed safety, control, and monitoring systems applied by companies and organizations of different ownership structures play a substantial role in present-day society. Video surveillance elements that ensure image processing and decision making in automated or automatic modes are the essential components of new systems. This paper covers the modeling of video surveillance systems installed in buildings, and an algorithm, or pattern, of video camera placement that takes account of nearly all characteristics of the buildings, the detection and recognition facilities, and the cameras themselves. This algorithm will subsequently be implemented as a user application. The project contemplates a comprehensive approach to the automatic placement of cameras that takes account of their mutual positioning and the compatibility of their tasks. The project objective is to develop the principal elements of an algorithm for recognizing a moving object detected by several cameras. The images obtained by different cameras will be processed, and parameters of motion identified, to develop a table of possible route options. The implementation of the recognition algorithm represents an independent research project to be covered in a different article. This project consists in assessing the complexity of a camera placement algorithm designed to identify cases of inaccurate algorithm implementation, as well as in formulating supplementary requirements and input data by intersecting the sectors covered by neighbouring cameras. The project also contemplates the identification of potential problems in the course of developing a physical security and monitoring system at the design and testing stages. The camera placement algorithm has been implemented as a software application that has already been pilot tested on buildings and inside premises that have irregular dimensions.

  2. C.C.D. readout of a picosecond streak camera with an intensified C.C.D

    International Nuclear Information System (INIS)

    Lemonier, M.; Richard, J.C.; Cavailler, C.; Mens, A.; Raze, G.

    1984-08-01

    This paper deals with a digital streak camera readout device. The device consists of a low-light-level television camera, made of a solid-state C.C.D. array coupled to an image intensifier, associated with a video digitizer coupled to a micro-computer system. The streak camera images are picked up as a video signal, digitized, and stored. This system allows fast recording and automatic processing of the data provided by the streak tube.

  3. Projecting range-wide sun bear population trends using tree cover and camera-trap bycatch data.

    Directory of Open Access Journals (Sweden)

    Lorraine Scotson

    Monitoring population trends of threatened species requires standardized techniques that can be applied over broad areas and repeated through time. Sun bears Helarctos malayanus are a forest-dependent tropical bear found throughout most of Southeast Asia. Previous estimates of global population trends have relied on expert opinion and cannot be systematically replicated. We combined data from 1,463 camera traps within 31 field sites across the sun bear range to model the relationship between photo catch rates of sun bears and tree cover. Sun bears were detected at all levels of tree cover above 20%, and the probability of presence was positively associated with the amount of tree cover within a 6-km2 buffer of the camera traps. We used the relationship between catch rates and tree cover across space to infer temporal trends in sun bear abundance in response to tree cover loss at country and global scales. Our model-based projections based on this "space for time" substitution suggested that sun bear population declines associated with tree cover loss between 2000-2014 in mainland Southeast Asia were ~9%, with declines highest in Cambodia and lowest in Myanmar. During the same period, sun bear populations in insular Southeast Asia (Malaysia, Indonesia and Brunei) were projected to have declined at a much higher rate (22%). Cast forward over 30 years from the year 2000, by assuming a constant rate of change in tree cover, we projected population declines in the insular region that surpassed 50%, meeting the IUCN criteria for endangered if sun bears were listed at the population level. Although this approach requires several assumptions, most notably that trends in abundance across space can be used to infer temporal trends, population projections using remotely sensed tree cover data may serve as a useful alternative (or supplement) to expert opinion.
    The advantages of this approach are that it is objective, data-driven, repeatable, and it requires that
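
    The "cast forward" step is an extrapolation at a constant rate. The sketch below compounds the observed percentage decline directly, which is an illustration only: the paper projects declines from tree-cover trajectories rather than a fixed population percentage, so this simplified arithmetic need not reproduce its >50% figure, and the 14-year baseline is read loosely from the abstract.

```python
def project_decline(observed_decline: float, baseline_years: float,
                    horizon_years: float) -> float:
    """Extrapolate a proportional decline at a constant annual rate.

    survival after t years = (1 - observed_decline) ** (t / baseline_years)
    """
    survival = (1.0 - observed_decline) ** (horizon_years / baseline_years)
    return 1.0 - survival

# Insular Southeast Asia: ~22% decline over roughly 14 years (2000-2014).
print(round(project_decline(0.22, 14, 14), 2))  # 0.22 (sanity check)
print(round(project_decline(0.22, 14, 30), 2))  # ~0.41 under this simplification
```

    The gap between this naive compounding and the paper's projection shows why the choice of what is held constant (tree-cover loss rate vs. population percentage) matters for long-horizon extrapolations.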

  4. Low-cost uncooled VOx infrared camera development

    Science.gov (United States)

    Li, Chuan; Han, C. J.; Skidmore, George D.; Cook, Grady; Kubala, Kenny; Bates, Robert; Temple, Dorota; Lannon, John; Hilton, Allan; Glukh, Konstantin; Hardy, Busbee

    2013-06-01

    The DRS Tamarisk® 320 camera, introduced in 2011, is a low-cost commercial camera based on the 17 µm pixel pitch 320×240 VOx microbolometer technology. A higher-resolution 17 µm pixel pitch 640×480 Tamarisk®640 has also been developed and is now in production serving the commercial markets. Recently, under the DARPA-sponsored Low Cost Thermal Imager-Manufacturing (LCTI-M) program and an internal project, DRS is leading a team of industrial experts from FiveFocal, RTI International and MEMSCAP to develop a small form factor uncooled infrared camera for the military and commercial markets. The objective of the DARPA LCTI-M program is to develop a low-SWaP camera that costs less than US $500 based on a 10,000 units per month production rate. To meet this challenge, DRS is developing several innovative technologies including a small pixel pitch 640×512 VOx uncooled detector, an advanced digital ROIC and low-power miniature camera electronics. In addition, DRS and its partners are developing innovative manufacturing processes to reduce production cycle time and costs, including wafer-scale optic and vacuum packaging manufacturing and a 3-dimensional integrated camera assembly. This paper provides an overview of the DRS Tamarisk® project and LCTI-M related uncooled technology development activities. Highlights of recent progress and challenges will also be discussed. It should be noted that BAE Systems and Raytheon Vision Systems are also participants in the DARPA LCTI-M program.

  5. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R and D activities on high-speed video cameras that have been conducted at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been carried out, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searching journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is going on and, hopefully, it will be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by others are also briefly reviewed.

  6. Comparison of photon counting versus charge integration micro-CT within the irradiation setup PIXSCAN

    International Nuclear Information System (INIS)

    Ouamara, H.

    2013-01-01

    The approach followed by the imXgam team at CPPM was to adapt the XPAD hybrid pixel technology to biomedical imaging. It is in this context that the micro-CT PIXSCAN II, based on the new generation of hybrid pixel detectors called XPAD3, has been developed. This thesis describes the process undertaken to assess the contribution of hybrid pixel technology to X-ray computed tomography in terms of contrast and dose, and to explore new opportunities for low-dose biomedical imaging. Performance evaluation, as well as validation of the results obtained with data acquired with the XPAD3 detector, was carried out by comparison with results obtained with the DALSA XR-4 CCD camera, which is similar to the detectors used in most conventional micro-CT systems. The XPAD3 detector yields reconstructed images of satisfactory quality, close to that of images from the DALSA XR-4 camera, but with a better spatial resolution. At low doses, the images from the XPAD3 detector have a better quality than those from the CCD camera. From an instrumentation point of view, this project demonstrated the proper operation of the PIXSCAN II device for mouse imaging. We were able to reproduce an image quality similar to that obtained with a charge integration detector such as a CCD camera. To improve the performance of the XPAD3 detector, the stability of the thresholds will have to be optimized in order to obtain more homogeneous pixel response curves as a function of energy, by using a denser sensor such as CdTe. (author)

  7. Calibration Procedures in Mid Format Camera Setups

    Science.gov (United States)

    Pivnicka, F.; Kemper, G.; Geissler, S.

    2012-07-01

    A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform, and the specific characteristics of mid-format cameras make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Aerial images over a well-designed test field with 3D structures and/or different flight altitudes enable the determination of calibration values in the Bingo software; it will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Besides the mechanical work, especially in mounting the camera beside the IMU, two lever arms have to be measured to mm accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre, and the lever arm from the IMU centre to the camera projection centre. In fact, the measurement with a total station is not a difficult task, but the definition of the right centres and the need to use rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used; for these, a gyro-based stabilized platform is recommended. This means that the IMU must be mounted beside the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problem is that the IMU-to-GPS-antenna lever arm is floating. In fact, we have to deal with an additional data stream, the values of the movement of the stabilizer, to correct the floating lever-arm distances. If the post-processing of the GPS/IMU data, taking the floating levers into account, delivers the expected result, the lever arms between the IMU and the camera can be applied
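
    The rotation-matrix bookkeeping the abstract warns about can be made concrete: a lever arm measured in the body frame must be rotated by the platform attitude before it is applied in the mapping frame. The lever-arm values and the yaw-only attitude below are hypothetical:

```python
import numpy as np

def yaw_matrix(heading_rad: float) -> np.ndarray:
    """Rotation about the vertical axis (body frame -> mapping frame)."""
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Hypothetical lever arm from the IMU centre to the camera projection
# centre, measured in the body frame (metres).
lever_body = np.array([0.35, 0.10, -0.05])

# With the platform yawed 90 degrees, the same arm points elsewhere in the
# mapping frame -- applying it unrotated corrupts every derived position.
lever_mapping = yaw_matrix(np.pi / 2) @ lever_body
print(lever_mapping)  # approximately [-0.10, 0.35, -0.05]
```

    On a stabilized mount the attitude, and hence this rotation, changes continuously, which is why the floating IMU-to-antenna lever arm needs the stabilizer's motion data stream.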

  8. Demonstration of micro-projection enabled short-range communication system for 5G.

    Science.gov (United States)

    Chou, Hsi-Hsir; Tsai, Cheng-Yu

    2016-06-13

    A liquid crystal on silicon (LCoS) based polarization modulated image (PMI) system architecture using red-, green- and blue-based light-emitting diodes (LEDs), which offers simultaneous micro-projection and data transmission at nearly a gigabit per second, serving as an alternative short-range communication (SRC) approach for personal communication device (PCD) applications in 5G, is proposed and experimentally demonstrated. To make the proposed system architecture transparent to future wireless data modulation formats, baseband modulation schemes such as multilevel pulse amplitude modulation (M-PAM), M-ary phase shift keying (M-PSK) and M-ary quadrature amplitude modulation (M-QAM), which can be further employed by more advanced multicarrier modulation schemes (such as DMT, OFDM and CAP), were used to investigate the highest possible data transmission rate of the proposed system architecture. The results demonstrated that aggregate data transmission rates of 892 Mb/s and 900 Mb/s at a BER of 10^(-3) can be achieved using the 16-QAM baseband modulation scheme when data transmission is performed with and without simultaneous micro-projection, respectively.
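
    As a reading aid for the rate figures: an M-QAM symbol carries log2(M) bits, so the symbol rate required for a target bit rate follows directly. The 225 Msym/s value below is derived from the abstract's numbers, not stated in it:

```python
import math

def bits_per_symbol(m: int) -> int:
    """An M-ary constellation carries log2(M) bits per symbol."""
    return int(math.log2(m))

def required_symbol_rate(bit_rate_bps: float, m: int) -> float:
    """Symbol rate needed to reach a target bit rate with M-ary modulation."""
    return bit_rate_bps / bits_per_symbol(m)

print(bits_per_symbol(16))                    # 4 bits per 16-QAM symbol
print(required_symbol_rate(900e6, 16) / 1e6)  # 225.0 Msym/s for 900 Mb/s
```

    This is why higher-order constellations matter for LED-based links: the optical channel's usable bandwidth caps the symbol rate, and only more bits per symbol push the aggregate rate toward a gigabit.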

  9. Status of the Dark Energy Survey Camera (DECam) Project

    Energy Technology Data Exchange (ETDEWEB)

    Flaugher, Brenna L.; Abbott, Timothy M.C.; Angstadt, Robert; Annis, Jim; Antonik, Michelle, L.; Bailey, Jim; Ballester, Otger.; Bernstein, Joseph P.; Bernstein, Rebbeca; Bonati, Marco; Bremer, Gale; /Fermilab /Cerro-Tololo InterAmerican Obs. /ANL /Texas A-M /Michigan U. /Illinois U., Urbana /Ohio State U. /University Coll. London /LBNL /SLAC /IFAE

    2012-06-29

    The Dark Energy Survey Collaboration has completed construction of the Dark Energy Camera (DECam), a 3 square degree, 570 Megapixel CCD camera which will be mounted on the Blanco 4-meter telescope at CTIO. DECam will be used to perform the 5000 sq. deg. Dark Energy Survey with 30% of the telescope time over a 5 year period. During the remainder of the time, and after the survey, DECam will be available as a community instrument. All components of DECam have been shipped to Chile and post-shipping checkout finished in Jan. 2012. Installation is in progress. A summary of lessons learned and an update of the performance of DECam and the status of the DECam installation and commissioning will be presented.

  10. Status of the Dark Energy Survey Camera (DECam) project

    Energy Technology Data Exchange (ETDEWEB)

    Flaugher, Brenna L.; McLean, Ian S.; Ramsay, Suzanne K.; Abbott, Timothy M. C.; Angstadt, Robert; Takami, Hideki; Annis, Jim; Antonik, Michelle L.; Bailey, Jim; Ballester, Otger; Bernstein, Joseph P.; Bernstein, Rebecca A.; Bonati, Marco; Bremer, Gale; Briones, Jorge; Brooks, David; Buckley-Geer, Elizabeth J.; Campa, Juila; Cardiel-Sas, Laia; Castander, Francisco; Castilla, Javier; Cease, Herman; Chappa, Steve; Chi, Edward C.; da Costa, Luis; DePoy, Darren L.; Derylo, Gregory; de Vincente, Juan; Diehl, H. Thomas; Doel, Peter; Estrada, Juan; Eiting, Jacob; Elliott, Anne E.; Finley, David A.; Flores, Rolando; Frieman, Josh; Gaztanaga, Enrique; Gerdes, David; Gladders, Mike; Guarino, V.; Gutierrez, G.; Grudzinski, Jim; Hanlon, Bill; Hao, Jiangang; Holland, Steve; Honscheid, Klaus; Huffman, Dave; Jackson, Cheryl; Jonas, Michelle; Karliner, Inga; Kau, Daekwang; Kent, Steve; Kozlovsky, Mark; Krempetz, Kurt; Krider, John; Kubik, Donna; Kuehn, Kyler; Kuhlmann, Steve E.; Kuk, Kevin; Lahav, Ofer; Langellier, Nick; Lathrop, Andrew; Lewis, Peter M.; Lin, Huan; Lorenzon, Wolfgang; Martinez, Gustavo; McKay, Timothy; Merritt, Wyatt; Meyer, Mark; Miquel, Ramon; Morgan, Jim; Moore, Peter; Moore, Todd; Neilsen, Eric; Nord, Brian; Ogando, Ricardo; Olson, Jamieson; Patton, Kenneth; Peoples, John; Plazas, Andres; Qian, Tao; Roe, Natalie; Roodman, Aaron; Rossetto, B.; Sanchez, E.; Soares-Santos, Marcelle; Scarpine, Vic; Schalk, Terry; Schindler, Rafe; Schmidt, Ricardo; Schmitt, Richard; Schubnell, Mike; Schultz, Kenneth; Selen, M.; Serrano, Santiago; Shaw, Terri; Simaitis, Vaidas; Slaughter, Jean; Smith, R. Christopher; Spinka, Hal; Stefanik, Andy; Stuermer, Walter; Sypniewski, Adam; Talaga, R.; Tarle, Greg; Thaler, Jon; Tucker, Doug; Walker, Alistair R.; Weaverdyck, Curtis; Wester, William; Woods, Robert J.; Worswick, Sue; Zhao, Allen

    2012-09-24

    The Dark Energy Survey Collaboration has completed construction of the Dark Energy Camera (DECam), a 3 square degree, 570 Megapixel CCD camera which will be mounted on the Blanco 4-meter telescope at CTIO. DECam will be used to perform the 5000 sq. deg. Dark Energy Survey with 30% of the telescope time over a 5 year period. During the remainder of the time, and after the survey, DECam will be available as a community instrument. All components of DECam have been shipped to Chile and post-shipping checkout finished in Jan. 2012. Installation is in progress. A summary of lessons learned and an update of the performance of DECam and the status of the DECam installation and commissioning will be presented.

  11. Ectomography - a tomographic method for gamma camera imaging

    International Nuclear Information System (INIS)

    Dale, S.; Edholm, P.E.; Hellstroem, L.G.; Larsson, S.

    1985-01-01

    In computerised gamma camera imaging the projections are readily obtained in digital form, and the number of picture elements may be relatively few. This condition makes emission techniques suitable for ectomography - a tomographic technique for directly visualising arbitrary sections of the human body. The camera rotates around the patient to acquire different projections in a way similar to SPECT. This method differs from SPECT, however, in that the camera is placed at an angle to the rotational axis, and receives two-dimensional, rather than one-dimensional, projections. Images of body sections are reconstructed by digital filtration and combination of the acquired projections. The main advantages of ectomography - a high and uniform resolution, a low and uniform attenuation and a high signal-to-noise ratio - are obtained when imaging sections close and parallel to a body surface. The filtration eliminates signals representing details outside the section and gives the section a certain thickness. Ectomographic transverse images of a line source and of a human brain have been reconstructed. Details within the sections are correctly visualised and details outside are effectively eliminated. For comparison, the same sections have been imaged with SPECT. (author)

  12. Computing camera heading: A study

    Science.gov (United States)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard, because rotations and translations can have similar effects on the images and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows all computational trouble spots to be identified beforehand, and reliable, accurate optimization methods to be designed. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
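
    The key invariance the abstract relies on can be checked numerically: visual angles between projection rays are unchanged by a pure camera rotation. A minimal sketch (illustrative only, not the paper's estimator):

```python
import numpy as np

def visual_angle(p, q):
    """Angle between the projection rays of two image points
    p, q given in normalized camera coordinates (x, y)."""
    a, b = np.append(p, 1.0), np.append(q, 1.0)
    cosang = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def project(P, R=np.eye(3)):
    """Pinhole projection of a 3D point P under camera rotation R."""
    Pc = R @ P
    return Pc[:2] / Pc[2]

# Two scene points seen by the same camera
P1, P2 = np.array([1.0, 0.5, 5.0]), np.array([-0.5, 1.0, 6.0])
a0 = visual_angle(project(P1), project(P2))

# Rotate the camera (no translation): the visual angle is unchanged
t = np.deg2rad(10)
R = np.array([[np.cos(t), 0, np.sin(t)],
              [0, 1, 0],
              [-np.sin(t), 0, np.cos(t)]])
a1 = visual_angle(project(P1, R), project(P2, R))
print(abs(a0 - a1) < 1e-9)
```

    Translations, by contrast, change these angles, which is what makes them usable for heading estimation.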

  13. Development of Camera Model and Geometric Calibration/validation of Xsat IRIS Imagery

    Science.gov (United States)

    Kwoh, L. K.; Huang, X.; Tan, W. J.

    2012-07-01

    XSAT, launched on 20 April 2011, is the first micro-satellite designed and built in Singapore. It orbits the Earth at an altitude of 822 km in a sun-synchronous orbit. The satellite carries a multispectral camera, IRIS, with three spectral bands - 0.52~0.60 μm for Green, 0.63~0.69 μm for Red and 0.76~0.89 μm for NIR - at 12 m resolution. In the design of the IRIS camera, the three bands were acquired by three lines of CCDs (NIR, Red and Green). These CCDs were physically separated in the focal plane and their first pixels were not absolutely aligned. The micro-satellite platform was also not stable enough to allow co-registration of the 3 bands with a simple linear transformation. In the camera model developed, this platform instability was compensated for with 3rd to 4th order polynomials for the satellite's roll, pitch and yaw attitude angles. With the camera model, camera parameters such as the band-to-band separations, the alignment of the CCDs relative to each other, as well as the focal length of the camera, can be validated or calibrated. The results of calibration with more than 20 images showed that the band-to-band along-track separations agreed well with the pre-flight values provided by the vendor (0.093° and 0.046° for the NIR vs Red and Green vs Red CCDs respectively). The cross-track alignments were 0.05 pixel and 5.9 pixel for the NIR vs Red and Green vs Red CCDs respectively. The focal length was found to be shorter by about 0.8%. This was attributed to the lower temperature at which XSAT is currently operating. With the calibrated parameters and the camera model, a geometric level 1 multispectral image with RPCs can be generated and, if required, orthorectified imagery can also be produced.
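
    The attitude compensation described above amounts to fitting low-order polynomials to the attitude angles over the acquisition. A minimal sketch with hypothetical roll data (the polynomial order follows the abstract; all numbers are made up):

```python
import numpy as np

# Hypothetical per-line roll angles (degrees) over one image
# acquisition; in practice these come from the attitude solution.
lines = np.linspace(0, 1, 200)               # normalized line index
roll = 0.02 + 0.15*lines - 0.3*lines**2 + 0.2*lines**3
roll_noisy = roll + np.random.default_rng(0).normal(0, 1e-3, lines.size)

# Model the slowly varying platform attitude with a 3rd-order
# polynomial, as done for roll, pitch and yaw in the XSAT camera model.
coeffs = np.polyfit(lines, roll_noisy, deg=3)
roll_fit = np.polyval(coeffs, lines)

rms = np.sqrt(np.mean((roll_fit - roll)**2))
print(rms < 1e-3)   # the smooth fit recovers the attitude drift
```

    The same fit would be repeated for pitch and yaw, giving a compact attitude model per image.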

  14. CALIBRATION PROCEDURES IN MID FORMAT CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    F. Pivnicka

    2012-07-01

    Full Text Available A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform and the specific characteristics of the mid-format camera make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Using aerial images over a well designed test field with 3D structures and/or different flight altitudes enables the determination of calibration values in Bingo software. It will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted in a solid and reliable way to the camera. Besides the mechanical work, especially in mounting the camera beside the IMU, two lever arms have to be measured to mm accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre, and the lever arm from the IMU centre to the camera projection centre. In fact, the measurement with a total station is not a difficult task, but the definition of the right centres and the need to use rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used; in that case, a gyro-based stabilized platform is recommended. This means that the IMU must be mounted beside the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problematic aspect is that the IMU-to-GPS-antenna lever arm is floating. In fact, an additional data stream has to be handled: the values of the movement of the stabilizer, used to correct the floating lever arm distances. If the post-processing of the GPS-IMU data, taking the floating levers into account, delivers the expected result, the lever arms between IMU and
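
    The lever-arm issue discussed above is essentially the rotation of a fixed body-frame vector into the world frame: ignoring the stabilizer attitude biases the reconstructed antenna (and hence projection-centre) position. A minimal sketch with hypothetical values:

```python
import numpy as np

def rot(roll, pitch, yaw):
    """Body-to-world rotation matrix from Euler angles (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# Hypothetical lever arm from IMU centre to GPS antenna in the
# stabilizer frame (metres), e.g. measured with a total station.
lever_imu_to_gps = np.array([0.120, -0.035, 0.850])

imu_pos = np.array([100.0, 200.0, 50.0])          # world frame
gps_level = imu_pos + rot(0, 0, 0) @ lever_imu_to_gps
gps_tilted = imu_pos + rot(np.deg2rad(5), np.deg2rad(3), 0) @ lever_imu_to_gps

# Roughly 0.09 m of position bias here if the rotation is ignored
print(np.linalg.norm(gps_tilted - gps_level))
```

    This is why the stabilizer movement stream is needed: the world-frame lever arm changes with every attitude sample.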

  15. A SPECT demonstrator—revival of a gamma camera

    Science.gov (United States)

    Valastyán, I.; Kerek, A.; Molnár, J.; Novák, D.; Végh, J.; Emri, M.; Trón, L.

    2006-07-01

    A gamma camera has been updated and converted to serve as a demonstrator for educational purposes. The gantry and the camera head were the only part of the system that remained untouched. The main reason for this modernization was to increase the transparency of the gamma camera by partitioning the different logical building blocks of the system and thus providing access for inspection and improvements throughout the chain. New data acquisition and reconstruction software has been installed. By taking these measures, the camera is now used in education and also serves as a platform for tests of new hardware and software solutions. The camera is also used to demonstrate 3D (SPECT) imaging by collecting 2D projections from a rotatable cylindrical phantom. Since the camera head is not attached mechanically to the phantom, the effect of misalignment between the head and the rotation axis of the phantom can be studied.

  16. A SPECT demonstrator-revival of a gamma camera

    International Nuclear Information System (INIS)

    Valastyan, I.; Kerek, A.; Molnar, J.; Novak, D.; Vegh, J.; Emri, M.; Tron, L.

    2006-01-01

    A gamma camera has been updated and converted to serve as a demonstrator for educational purposes. The gantry and the camera head were the only part of the system that remained untouched. The main reason for this modernization was to increase the transparency of the gamma camera by partitioning the different logical building blocks of the system and thus providing access for inspection and improvements throughout the chain. New data acquisition and reconstruction software has been installed. By taking these measures, the camera is now used in education and also serves as a platform for tests of new hardware and software solutions. The camera is also used to demonstrate 3D (SPECT) imaging by collecting 2D projections from a rotatable cylindrical phantom. Since the camera head is not attached mechanically to the phantom, the effect of misalignment between the head and the rotation axis of the phantom can be studied

  17. Assessing the appropriateness of carbon financing for micro-scale projects in terms of capabilities

    OpenAIRE

    Caitlin Trethewy

    2013-01-01

    Micro-scale development projects are currently underrepresented in global carbon markets. This paper outlines the process of becoming eligible to generate carbon credits and examines some of the barriers that may inhibit access to carbon markets. In particular, it focuses on barriers relating to the capacity and resources of the organisation developing the project. This approach represents a deviation from the standard discourse which has traditionally focused on barriers relating to the avai...

  18. TRANSFORMATION ALGORITHM FOR IMAGES OBTAINED BY OMNIDIRECTIONAL CAMERAS

    Directory of Open Access Journals (Sweden)

    V. P. Lazarenko

    2015-01-01

    Full Text Available Omnidirectional optoelectronic systems find their application in areas where a wide viewing angle is critical. However, omnidirectional optoelectronic systems have a large distortion that makes their application more difficult. The paper compares the projection functions of traditional perspective lenses and omnidirectional wide-angle fish-eye lenses with a viewing angle not less than 180°. This comparison proves that distortion models of omnidirectional cameras cannot be described as a deviation from the classic pinhole camera model. To solve this problem, an algorithm for transforming omnidirectional images has been developed. The paper provides a brief comparison of the four calibration methods available in open source toolkits for omnidirectional optoelectronic systems. The geometrical projection model used for calibration of the omnidirectional optical system is given. The algorithm consists of three basic steps. At the first step, we calculate the field of view of a virtual pinhole PTZ camera. This field of view is characterized by an array of 3D points in the object space. At the second step, the array of corresponding pixels for these three-dimensional points is calculated. Then we calculate the projection function that expresses the relation between a given 3D point in the object space and the corresponding pixel point. In this paper we use a calibration procedure providing the projection function for the calibrated instance of the camera. At the last step, the final image is formed pixel by pixel from the original omnidirectional image using the calculated array of 3D points and the projection function. The developed algorithm makes it possible to obtain an image for a part of the field of view of an omnidirectional optoelectronic system, with the distortion corrected, from the original omnidirectional image. The algorithm is designed for operation with omnidirectional optoelectronic systems with both catadioptric and fish-eye lenses.
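
    The three steps above can be sketched compactly. The fish-eye model below is the simple equidistant projection r = f·θ, standing in for the calibrated projection function the paper uses; image sizes, focal lengths and the stand-in image are all made up:

```python
import numpy as np

def fisheye_pixel(rays, f, cx, cy):
    """Equidistant fish-eye projection r = f*theta (an assumed model;
    a calibrated projection function would be used in practice)."""
    x, y, z = rays
    theta = np.arctan2(np.hypot(x, y), z)     # angle from optical axis
    phi = np.arctan2(y, x)
    r = f * theta
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

def virtual_pinhole_rays(w, h, fov_deg):
    """Step 1: 3D viewing rays of a virtual pinhole PTZ camera
    (pan/tilt rotation omitted here for brevity)."""
    f = (w / 2) / np.tan(np.deg2rad(fov_deg) / 2)
    u, v = np.meshgrid(np.arange(w) - w / 2, np.arange(h) - h / 2)
    rays = np.stack([u, v, np.full_like(u, f, dtype=float)], axis=-1)
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)

# Steps 2-3: map every virtual pixel to a fish-eye pixel and resample
omni = np.random.default_rng(1).random((480, 640))    # stand-in image
rays = virtual_pinhole_rays(64, 48, fov_deg=40)
us, vs = fisheye_pixel(rays.reshape(-1, 3).T, f=200, cx=320, cy=240)
out = omni[np.clip(vs.round().astype(int), 0, 479),
           np.clip(us.round().astype(int), 0, 639)].reshape(48, 64)
print(out.shape)
```

    A production version would add the pan/tilt rotation of the ray grid and bilinear interpolation instead of nearest-neighbour sampling.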

  19. Skin vaccination against cervical cancer associated human papillomavirus with a novel micro-projection array in a mouse model.

    Directory of Open Access Journals (Sweden)

    Holly J Corbett

    Full Text Available BACKGROUND: Better delivery systems are needed for routinely used vaccines, to improve vaccine uptake. Many vaccines contain alum or alum-based adjuvants. Here we investigate a novel dry-coated, densely-packed micro-projection array skin patch (Nanopatch™) as an alternative delivery system to intramuscular injection for delivering an alum-adjuvanted human papillomavirus (HPV) vaccine (Gardasil®) commonly used as a prophylactic vaccine against cervical cancer. METHODOLOGY/PRINCIPAL FINDINGS: Micro-projection arrays dry-coated with vaccine material (Gardasil®) delivered to C57BL/6 mouse ear skin released vaccine within 5 minutes. To assess vaccine immunogenicity, doses corresponding to the HPV-16 component of the vaccine, between 0.43 ± 0.084 ng and 300 ± 120 ng (mean ± SD), were administered to mice at day 0 and day 14. A dose of 55 ± 6.0 ng delivered intracutaneously by micro-projection array was sufficient to produce a maximal virus-neutralizing serum antibody response at day 28 post vaccination. Neutralizing antibody titres were sustained out to 16 weeks post vaccination, and, for comparable doses of vaccine, somewhat higher titres were observed with intracutaneous patch delivery than with intramuscular delivery with the needle and syringe at this time point. CONCLUSIONS/SIGNIFICANCE: Use of dry micro-projection arrays (Nanopatch™) has the potential to overcome the need for a vaccine cold chain for common vaccines currently delivered by needle and syringe, and to reduce the risk of needle-stick injury and vaccine avoidance due to fear of the needle, especially among children.

  20. The PixFEL project: Progress towards a fine pitch X-ray imaging camera for next generation FEL facilities

    Energy Technology Data Exchange (ETDEWEB)

    Rizzo, G., E-mail: giuliana.rizzo@pi.infn.it [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Batignani, G. [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Benkechkache, M.A. [Università di Trento, Dipartimento di Ingegneria Industriale, I-38123 Trento (Italy); University Constantine 1, Department of Electronics in the Science and Technology Faculty, I-25017, Constantine (Algeria); Bettarini, S.; Casarosa, G. [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Comotti, D. [Università di Pavia, Dipartimento di Ingegneria Industriale e dell' Informazione, I-27100 Pavia (Italy); INFN Sezione di Pavia, I-27100 Pavia (Italy); Dalla Betta, G.-F. [Università di Trento, Dipartimento di Ingegneria Industriale, I-38123 Trento (Italy); TIFPA INFN, I-38123 Trento (Italy); Fabris, L. [INFN Sezione di Pavia, I-27100 Pavia (Italy); Università di Bergamo, Dipartimento di Ingegneria e Scienze Applicate, I-24044 Dalmine (Italy); Forti, F. [Università di Pisa, Dipartimento di Fisica, I-56127 Pisa (Italy); INFN, Sezione di Pisa, I-56127 Pisa (Italy); Grassi, M.; Lodola, L.; Malcovati, P. [Università di Pavia, Dipartimento di Ingegneria Industriale e dell' Informazione, I-27100 Pavia (Italy); INFN Sezione di Pavia, I-27100 Pavia (Italy); Manghisoni, M. [INFN Sezione di Pavia, I-27100 Pavia (Italy); Università di Bergamo, Dipartimento di Ingegneria e Scienze Applicate, I-24044 Dalmine (Italy); and others

    2016-07-11

    The INFN PixFEL project is developing the fundamental building blocks for a large area X-ray imaging camera to be deployed at next generation free electron laser (FEL) facilities with unprecedented intensity. Improvements in performance beyond the state of the art in imaging instrumentation will be explored adopting advanced technologies like active edge sensors, a 65 nm node CMOS process and vertical integration. These are the key ingredients of the PixFEL project to realize a seamless large area focal plane instrument composed of a matrix of multilayer four-side buttable tiles. In order to minimize the dead area and reduce ambiguities in image reconstruction, a fine pitch active edge thick sensor is being optimized to cope with a very high intensity photon flux, up to 10⁴ photons per pixel, in the range from 1 to 10 keV. A low noise analog front-end channel with this wide dynamic range and a novel dynamic compression feature, together with low power 10 bit analog to digital conversion up to 5 MHz, has been realized in a 110 μm pitch with a 65 nm CMOS process. Vertical interconnection of two CMOS tiers will also be explored in the future to build a four-side buttable readout chip with high density memories. In the long run the objective of the PixFEL project is to build a flexible X-ray imaging camera for operation either in burst mode, as at the European X-FEL, or in continuous mode at the high frame rates anticipated for future FEL facilities.

  1. The PixFEL project: Progress towards a fine pitch X-ray imaging camera for next generation FEL facilities

    International Nuclear Information System (INIS)

    Rizzo, G.; Batignani, G.; Benkechkache, M.A.; Bettarini, S.; Casarosa, G.; Comotti, D.; Dalla Betta, G.-F.; Fabris, L.; Forti, F.; Grassi, M.; Lodola, L.; Malcovati, P.; Manghisoni, M.

    2016-01-01

    The INFN PixFEL project is developing the fundamental building blocks for a large area X-ray imaging camera to be deployed at next generation free electron laser (FEL) facilities with unprecedented intensity. Improvements in performance beyond the state of the art in imaging instrumentation will be explored adopting advanced technologies like active edge sensors, a 65 nm node CMOS process and vertical integration. These are the key ingredients of the PixFEL project to realize a seamless large area focal plane instrument composed of a matrix of multilayer four-side buttable tiles. In order to minimize the dead area and reduce ambiguities in image reconstruction, a fine pitch active edge thick sensor is being optimized to cope with a very high intensity photon flux, up to 10⁴ photons per pixel, in the range from 1 to 10 keV. A low noise analog front-end channel with this wide dynamic range and a novel dynamic compression feature, together with low power 10 bit analog to digital conversion up to 5 MHz, has been realized in a 110 μm pitch with a 65 nm CMOS process. Vertical interconnection of two CMOS tiers will also be explored in the future to build a four-side buttable readout chip with high density memories. In the long run the objective of the PixFEL project is to build a flexible X-ray imaging camera for operation either in burst mode, as at the European X-FEL, or in continuous mode at the high frame rates anticipated for future FEL facilities.

  2. Multi-person tracking with overlapping cameras in complex, dynamic environments

    NARCIS (Netherlands)

    Liem, M.; Gavrila, D.M.

    2009-01-01

    This paper presents a multi-camera system to track multiple persons in complex, dynamic environments. Position measurements are obtained by carving out the space defined by foreground regions in the overlapping camera views and projecting these onto blobs on the ground plane. Person appearance is

  3. Real-time vehicle matching for multi-camera tunnel surveillance

    Science.gov (United States)

    Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried

    2011-03-01

    Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras, each observing dozens of vehicles, computational efficiency is essential for real-time performance. In this paper, we propose a low-complexity yet highly accurate method for vehicle matching using vehicle signatures composed of Radon-transform-like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm by the camera software itself and transmitted to the central server or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
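
    A projection-profile signature of the kind described can be sketched in a few lines: row and column sums of the vehicle image, resampled to a fixed length and normalized, then compared by correlation. This illustrates the idea, not the authors' exact signature:

```python
import numpy as np

def signature(img, bins=32):
    """Radon-transform-like signature: horizontal and vertical
    projection profiles of a grayscale vehicle image."""
    rows, cols = img.sum(axis=1), img.sum(axis=0)
    rows = np.interp(np.linspace(0, 1, bins),
                     np.linspace(0, 1, rows.size), rows)
    cols = np.interp(np.linspace(0, 1, bins),
                     np.linspace(0, 1, cols.size), cols)
    s = np.concatenate([rows, cols])
    return (s - s.mean()) / (s.std() + 1e-9)   # crude normalization

def match_score(a, b):
    """Normalized correlation between two signatures."""
    return float(a @ b) / a.size

rng = np.random.default_rng(2)
car = rng.random((40, 60))                                    # toy image
same = np.clip(car + rng.normal(0, 0.05, car.shape), 0, 1)    # re-observed
other = rng.random((40, 60))                                  # different car

s = signature(car)
print(match_score(s, signature(same)) > match_score(s, signature(other)))
```

    The signature is only 2·bins numbers per vehicle, which is what makes transmission between cameras cheap compared to sending whole images.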

  4. Camera-based micro interferometer for distance sensing

    Science.gov (United States)

    Will, Matthias; Schädel, Martin; Ortlepp, Thomas

    2017-12-01

    Interference of light provides a high-precision, non-contact and fast method for measuring distances, which is why this technology dominates in high-precision systems. In the field of compact sensors, however, capacitive, resistive or inductive methods dominate. The reason is that an interferometric system has to be precisely adjusted and needs high mechanical stability; the result is usually a high-priced, complex system not suitable for compact sensing. To overcome this, we developed a new concept for a very small interferometric sensing setup. We combine a miniaturized laser unit, a low-cost pixel detector and machine vision routines to realize a demonstrator for a Michelson-type micro interferometer. We demonstrate a low-cost sensor smaller than 1 cm³ including all electronics, and demonstrate distance sensing up to 30 cm with resolution in the nm range.

  5. Project, building and utilization of a tomograph of micro metric resolution to application in soil science

    International Nuclear Information System (INIS)

    Macedo, Alvaro; Torre Neto, Andre; Cruvinel, Paulo Estevao; Crestana, Silvio

    1996-08-01

    This paper describes the design, construction and utilization of a tomograph of micrometric resolution for application in soil science. It discusses the problems involved in the study of soil science and describes the system and methodology.

  6. Multi-camera synchronization core implemented on USB3 based FPGA platform

    Science.gov (United States)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of 3D stereo vision equipment smaller than 3 mm in diameter in medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.
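
    The regulation loop described above is, in essence, a proportional controller on the line-period error. A toy sketch (the sensor model, gain and sign convention are all assumptions, not Awaiba's actual characteristics):

```python
def regulate(line_period_master, line_period_meas, vdd, k=0.002):
    """One control step: nudge the slave's supply voltage so its
    measured line period converges to the master's."""
    error = line_period_meas - line_period_master
    return vdd + k * error    # longer period -> raise Vdd (assumed sign)

def sensor_line_period(vdd):
    """Toy sensor model: line period shrinks as Vdd rises (assumption)."""
    return 100.0 - 20.0 * (vdd - 1.8)    # microseconds

vdd, target = 1.8, 99.0
for _ in range(200):
    vdd = regulate(target, sensor_line_period(vdd), vdd)

print(abs(sensor_line_period(vdd) - target) < 0.01)
```

    With a gain small enough that the loop is stable, the slave's line period converges geometrically to the master's; phase alignment is then handled by the Master-Slave frame interface described in the abstract.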

  7. On the similarities between micro/nano lithography and topology optimization projection methods

    DEFF Research Database (Denmark)

    Jansen, Miche; Lazarov, Boyan Stefanov; Schevenels, Mattias

    2013-01-01

    The aim of this paper is to incorporate a model for micro/nano lithography production processes in topology optimization. The production process turns out to provide a physical analogy for projection filters in topology optimization. Blueprints supplied by the designers cannot be directly used as inputs to lithographic processes due to the proximity effect, which causes rounding of sharp corners and geometric interaction of closely spaced design elements. Therefore, topology optimization is applied as a tool for proximity effect correction. Furthermore, it is demonstrated that the robust projection filter can be used to account for uncertainties due to lithographic production processes, which results in manufacturable blueprint designs and eliminates the need for subsequent corrections.

  8. Image reconstruction from limited angle Compton camera data

    International Nuclear Information System (INIS)

    Tomitani, T.; Hirasawa, M.

    2002-01-01

    The Compton camera is used for imaging the distribution of γ-ray directions in γ-ray telescopes for astrophysics, and for imaging radioisotope distributions in nuclear medicine without the need for collimators. The integral of the γ-ray distribution over a cone is measured with the camera, so that some sort of inversion method is needed. Parra found an analytical inversion algorithm based on a spherical harmonics expansion of the projection data. His algorithm is applicable to the full set of projection data. In this paper, six possible reconstruction algorithms that allow image reconstruction from projections with a finite range of scattering angles are investigated. Four algorithms have instability problems and the two others are practical. However, the variance of the reconstructed image diverges in these two cases, so window functions are introduced with which the variance becomes finite at the cost of spatial resolution. These two algorithms are compared in terms of variance. The algorithm based on inversion of the summed back-projection is superior to the algorithm based on inversion of the summed projection. (author)

  9. Construction and assembly of the wire planes for the MicroBooNE Time Projection Chamber

    Energy Technology Data Exchange (ETDEWEB)

    Acciarri, R.; Adams, C.; Asaadi, J.; Danaher, J.; Fleming, B. T.; Gardner, R.; Gollapinni, S.; Grosso, R.; Guenette, R.; Littlejohn, B. R.; Lockwitz, S.; Raaf, J. L.; Soderberg, M.; John, J. St.; Strauss, T.; Szelc, A. M.; Yu, B.

    2017-03-01

    In this paper we describe how the readout planes for the MicroBooNE Time Projection Chamber were constructed, assembled and installed. We present the individual wire preparation using semi-automatic winding machines and the assembly of wire carrier boards. The details of the wire installation on the detector frame and the tensioning of the wires are given. A strict quality assurance plan ensured the integrity of the readout planes. The different tests performed at all stages of construction and installation provided crucial information to achieve the successful realization of the MicroBooNE wire planes.

  10. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier to the use of video analytics. Automating the calibration allows for a short configuration time and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method based entirely on pedestrian detections in surveillance video from multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first is intra-camera geometry estimation, which leads to an estimate of the tilt angle, focal length and camera height; this is important for the conversion from pixels to meters and vice versa. The second is inter-camera topology inference, which leads to an estimate of the distance between cameras; this is important for the spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
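
    Once tilt angle, focal length and camera height are estimated, the pixel-to-meter conversion for points on the ground plane follows from simple geometry. A sketch of this conversion under a flat-ground assumption (this is a generic model, not the paper's estimator; all numbers are hypothetical):

```python
import numpy as np

def pixel_row_to_ground_distance(v, cy, f_px, cam_height, tilt_rad):
    """Flat-ground model: horizontal distance to the ground point
    imaged at pixel row v, for a camera at height cam_height (m)
    tilted down from the horizon by tilt_rad."""
    ray_angle = np.arctan((v - cy) / f_px)   # extra angle below the axis
    return cam_height / np.tan(tilt_rad + ray_angle)

# Hypothetical calibration output: 4 m high camera, 30 deg down-tilt,
# 1000 px focal length, principal point at row 540.
for v in (540, 700, 900):
    d = pixel_row_to_ground_distance(v, 540, 1000, 4.0, np.deg2rad(30))
    print(f"row {v}: {d:.2f} m")    # lower rows map to nearer ground points
```

    Inverting this relation is what lets pedestrian detections, whose image height and foot position are observable, constrain the camera height, tilt and focal length.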

  11. IMAGE ACQUISITION CONSTRAINTS FOR PANORAMIC FRAME CAMERA IMAGING

    Directory of Open Access Journals (Sweden)

    H. Kauhanen

    2012-07-01

    Full Text Available The paper describes an approach to quantify the amount of projective error produced by an offset of projection centres in a panoramic imaging workflow. We have limited this research to panoramic workflows in which several sub-images are taken with a planar image sensor and then stitched together into a large panoramic image mosaic. The aim is to simulate how large the offset can be before it introduces significant error into the dataset. The method uses geometrical analysis to calculate the error in various cases. Constraints on shooting distance, focal length and the depth of the area of interest are taken into account. Considering these constraints, it is possible to safely use even a poorly calibrated panoramic camera rig with a noticeable offset in the projection centre locations. The aim is to create datasets suited for photogrammetric reconstruction. Similar constraints can also be used for finding recommended areas in the image planes for automatic feature matching, and thus to improve the stitching of sub-images into full panoramic mosaics. The results are mainly intended for long-focal-length cameras, where the offset of the projection centres of the sub-images can seem significant but, on the other hand, the shooting distance is also long. We show that in such situations the error introduced by the offset of the projection centres is negligible when stitching a metric panorama. Although the main use of the results is with cameras of long focal length, the results are applicable to all focal lengths.
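
    The geometrical argument can be illustrated with a small-angle parallax model: the residual stitching error from an offset b between projection centres depends on b and on the spread of scene depths. A simplified sketch (the formula is a standard small-angle approximation, not the paper's exact derivation; all numbers are hypothetical):

```python
import numpy as np

def parallax_error_rad(offset, depth_near, depth_far):
    """Angular stitching error (radians) caused by an offset between
    sub-image projection centres, for scene points spread between
    depth_near and depth_far (small-angle approximation)."""
    return abs(offset * (1.0 / depth_near - 1.0 / depth_far))

# A visible 5 cm offset is negligible at long shooting distances
# but substantial at close range.
err_far = parallax_error_rad(0.05, 50.0, 55.0)    # 50 m scene, 5 m deep
err_close = parallax_error_rad(0.05, 2.0, 2.5)    # close-range scene

print(np.degrees(err_far) < 0.01)    # far scene: below 0.01 degree
print(np.degrees(err_close) > 0.2)   # close scene: noticeable error
```

    This is the quantitative core of the paper's claim: for long focal lengths the shooting distance grows faster than the offset's influence, so the projective error stays negligible.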

  12. Global Calibration of Multiple Cameras Based on Sphere Targets

    Directory of Open Access Journals (Sweden)

    Junhua Sun

    2016-01-01

    Full Text Available Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method, based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method is simple to operate and flexible, especially for onsite multiple-camera setups without a common field of view.

  13. Head-coupled remote stereoscopic camera system for telepresence applications

    Science.gov (United States)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.

  14. Mitigation and control of the overcuring effect in mask projection micro-stereolithography

    OpenAIRE

    O'Neill, Paul; Kent, Nigel J.; Brabazon, Dermot

    2017-01-01

    Mask Projection micro-Stereolithography (MPμSL) is an additive manufacturing technique capable of producing solid parts with micron-scale resolution from a vat of photocurable liquid polymer resin. Although the physical mechanism remains the same, the process differs from traditional laser-galvanometer based stereolithography (SL) in its use of a dynamic mask UV projector, or digital light processor (DLP), which cures each location within each 3D layer at the same time. One area where MPµSL h...

  15. Piezoelectric energy harvesting from morphing wing motions for micro air vehicles

    KAUST Repository

    Abdelkefi, Abdessattar

    2013-09-10

    Wing flapping and morphing can be very beneficial to managing the weight of micro air vehicles through coupling the aerodynamic forces with stability and control. In this letter, harvesting energy from wing morphing is studied to power cameras, sensors, or communication devices of micro air vehicles and to aid in the management of their power. The aerodynamic loads on flapping wings are simulated using a three-dimensional unsteady vortex lattice method. Active wing shape morphing is considered to enhance the performance of the flapping motion. A gradient-based optimization algorithm is used to pinpoint the optimal kinematics maximizing the propulsive efficiency. To benefit from the wing deformation, we place piezoelectric layers near the wing roots. Gauss's law is used to estimate the electrical harvested power. We demonstrate that enough power can be generated to operate a camera. Numerical analysis shows the feasibility of exploiting wing morphing to harvest energy and improve the design and performance of micro air vehicles.
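    A back-of-envelope version of the Gauss's-law power estimate mentioned above can be sketched as follows. This is not the authors' vortex-lattice model; it assumes sinusoidal strain at the flapping frequency and purely illustrative (hypothetical) piezo parameters:

```python
import math

def piezo_avg_power(e31, strain_amp, area_m2, cap_F, freq_hz):
    """Back-of-envelope average power from a piezoelectric layer under
    sinusoidal strain, delivered into an optimally matched resistive
    load. Gauss's law gives the charge amplitude Q = e31 * S0 * A; the
    matched-load average power is P = w * Q^2 / (4 * Cp), w = 2*pi*f."""
    w = 2.0 * math.pi * freq_hz
    q = e31 * strain_amp * area_m2
    return w * q * q / (4.0 * cap_F)

# Illustrative numbers only: PZT-like e31 = -10 C/m^2, 200 microstrain
# at a 10 Hz flapping frequency, 4 cm^2 layer, 40 nF layer capacitance.
p = piezo_avg_power(-10.0, 200e-6, 4e-4, 40e-9, 10.0)
```

    With these assumed values the estimate lands in the sub-milliwatt range, which is the order of magnitude at which powering low-duty-cycle sensors becomes plausible.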

  16. Research on Project Financing Channels for Small and Micro Enterprises

    Institute of Scientific and Technical Information of China (English)

    崔英伟

    2013-01-01

    Small and micro enterprises are an important part of the market economy, accounting for more than 90 percent of all enterprises in China. They have become an important force in driving economic growth and a major carrier of employment. However, as the vulnerable group among enterprises, small and micro enterprises face difficulties in project financing and development. Based on an analysis of the current state of small and micro enterprise project financing, this paper elaborates on the definition of the small and micro enterprise, analyzes the causes of their financing difficulties, and offers targeted suggestions and references for small and micro enterprise project financing.

  17. Camera Trajectory from Wide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to the structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for a reliable camera trajectory estimation, for instance, the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes the image feature matching very difficult (or impossible) and the camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, the image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the

  18. Catalyzed Combustion In Micro-Propulsion Devices: Project Status

    Science.gov (United States)

    Sung, C. J.; Schneider, S. J.

    2003-01-01

    In recent years, there has been a tendency toward shrinking the size of spacecraft. New classes of spacecraft called micro-spacecraft have been defined by their mass, power, and size ranges. Spacecraft in the range of 20 to 100 kg represent the class most likely to be utilized by small sat users in the near future. There are also efforts to develop 10 to 20 kg class spacecraft for use in satellite constellations. More ambitious efforts will be to develop spacecraft less than 10 kg, in which MEMS fabrication technology is required. These new micro-spacecraft will require new micro-propulsion technology. Although micro-propulsion includes electric propulsion approaches, the focus of this proposed program is micro-chemical propulsion, which requires the development of micro-combustors. As combustors are scaled down, the surface to volume ratio increases. The heat release rate in the combustor scales with volume, while the heat loss rate scales with surface area. Consequently, heat loss eventually dominates over heat release when the combustor size becomes smaller, thereby leading to flame quenching. The limitations imposed on chamber length and diameter have an immediate impact on the degree of miniaturization of a micro-combustor. Before micro-combustors can be realized, such a difficulty must be overcome. One viable combustion alternative is to take advantage of surface catalysis. Micro-chemical propulsion for small spacecraft can be used for primary thrust, orbit insertion, trajectory control, and attitude control. Grouping micro-propulsion devices in arrays will allow their use for larger thrust applications. By using an array composed of hundreds or thousands of micro-thruster units, a particular configuration can be arranged to be best suited for a specific application. Moreover, different thruster sizes would provide for a range of thrust levels (from µN·s to mN·s) within the same array.
Several thrusters could be fired simultaneously for thrust levels higher than

  19. Using a Smartphone Camera for Nanosatellite Attitude Determination

    Science.gov (United States)

    Shimmin, R.

    2014-09-01

    The PhoneSat project at NASA Ames Research Center has repeatedly flown a commercial cellphone in space. As this project continues, additional utility is being extracted from the cell phone hardware to enable more complex missions. The camera in particular shows great potential as an instrument for position and attitude determination, but this requires complex image processing. This paper outlines progress towards that image processing capability. Initial tests on a small collection of sample images have demonstrated the determination of a Moon vector from an image by automatic thresholding and centroiding, allowing the calibration of existing attitude control systems. Work has been undertaken on a further set of sample images towards horizon detection using a variety of techniques including thresholding, edge detection, applying a Hough transform, and circle fitting. Ultimately it is hoped this will allow calculation of an Earth vector for attitude determination and an approximate altitude. A quick discussion of work towards using the camera as a star tracker is then presented, followed by an introduction to further applications of the camera on space missions.
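    The thresholding-and-centroiding step described above can be sketched in a few lines. This is a generic illustration on a synthetic frame, assuming a grayscale image as a NumPy array; it is not the PhoneSat flight code:

```python
import numpy as np

def moon_centroid(image, threshold):
    """Binarize the image at `threshold` and return the centroid
    (row, col) of the bright pixels -- the Moon's image-plane
    position from which a Moon vector can be formed."""
    mask = image > threshold
    if not mask.any():
        return None  # Moon not in frame
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Synthetic frame: dark sky with a bright 3x3 "Moon" at rows 10-12,
# columns 20-22.
frame = np.zeros((32, 32))
frame[10:13, 20:23] = 255.0
c = moon_centroid(frame, 128)
```

    Converting the centroid to a unit direction vector in the camera frame then only requires the camera's focal length and principal point.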

  20. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
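    The coordinate transformation and sub-pixel interpolation mentioned above can be illustrated for a log-polar (retina-like) pixel layout: each (ring, spoke) pixel maps to a Cartesian position, which is then sampled with bilinear interpolation. This is a generic sketch under an assumed geometrically-growing ring radius, not the authors' code:

```python
import math
import numpy as np

def sample_bilinear(img, x, y):
    """Sub-pixel sample of a 2-D array at (x, y) by bilinear interpolation."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    dx, dy = x - x0, y - y0
    h, w = img.shape
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    return (img[y0, x0] * (1 - dx) * (1 - dy) + img[y0, x1] * dx * (1 - dy)
            + img[y1, x0] * (1 - dx) * dy + img[y1, x1] * dx * dy)

def logpolar_to_cartesian(ring, spoke, n_spokes, r0, growth, cx, cy):
    """Map a retina-like pixel (ring, spoke), with ring radius growing
    geometrically from r0, to Cartesian image coordinates (x, y)."""
    r = r0 * growth ** ring
    theta = 2.0 * math.pi * spoke / n_spokes
    return cx + r * math.cos(theta), cy + r * math.sin(theta)

# Example: sample a horizontal-gradient image (value equals column
# index) at ring 3, spoke 0 of a 64-spoke layout centred at (16, 16).
img = np.tile(np.arange(32, dtype=float), (32, 1))
x, y = logpolar_to_cartesian(3, 0, 64, 1.0, 1.2, 16.0, 16.0)
v = sample_bilinear(img, x, y)
```

    Iterating this over every (ring, spoke) cell resamples a rectangular image into the retina-like layout, or vice versa.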

  1. Comparison of polarimetric cameras

    Science.gov (United States)

    2017-03-01

    Keywords: polarimetric camera, remote sensing, space systems. Data were collected at Hermann Hall, Monterey, CA, and on 01 December 2016 at 1226 PST on the rooftop of the Marriot Hotel.

  2. Miniature gamma-ray camera for tumor localization

    International Nuclear Information System (INIS)

    Lund, J.C.; Olsen, R.W.; James, R.B.; Cross, E.

    1997-08-01

    The overall goal of this LDRD project was to develop technology for a miniature gamma-ray camera for use in nuclear medicine. The camera will meet a need of the medical community for an improved means to image radio-pharmaceuticals in the body. In addition, this technology-with only slight modifications-should prove useful in applications requiring the monitoring and verification of special nuclear materials (SNMs). Utilization of the good energy resolution of mercuric iodide and cadmium zinc telluride detectors provides a means for rejecting scattered gamma-rays and improving the isotopic selectivity in gamma-ray images. The first year of this project involved fabrication and testing of a monolithic mercuric iodide and cadmium zinc telluride detector arrays and appropriate collimators/apertures. The second year of the program involved integration of the front-end detector module, pulse processing electronics, computer, software, and display

  3. FPS camera sync and reset chassis

    International Nuclear Information System (INIS)

    Yates, G.J.

    1980-06-01

    The sync and reset chassis provides all the circuitry required to synchronize an event to be studied, a remote free-running focus projection and scanning (FPS) data-acquisition TV camera, and a video signal recording system. The functions, design, and operation of this chassis are described in detail

  4. Homography-based multiple-camera person-tracking

    Science.gov (United States)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method that uses field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also noted that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets enter from only one side of a camera's view. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, in which case using the bottom centre of the target's bounding box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of
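    The core operation in such a tracker is mapping a target's foot point through the ground-plane homography into the other camera's image, then labelling the target consistently if the mapped point lands near a tracked foot point there. A minimal sketch with NumPy, using a hypothetical homography matrix:

```python
import numpy as np

def map_foot_point(H, pt):
    """Map a foot point from camera A's image to camera B's image via
    the 3x3 ground-plane homography H, in homogeneous coordinates."""
    x, y = pt
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Hypothetical homography: identity plus a translation of (5, -2).
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
q = map_foot_point(H, (10.0, 20.0))
```

    In practice H would be estimated from the "dropped" corresponding foot-point pairs (e.g. by a robust least-squares fit), after which consistent labelling reduces to a nearest-neighbour test on the mapped points.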

  5. Biologically Inspired Micro-Flight Research

    Science.gov (United States)

    Raney, David L.; Waszak, Martin R.

    2003-01-01

    Natural fliers demonstrate a diverse array of flight capabilities, many of which are poorly understood. NASA has established a research project to explore and exploit flight technologies inspired by biological systems. One part of this project focuses on dynamic modeling and control of micro aerial vehicles that incorporate flexible wing structures inspired by natural fliers such as insects, hummingbirds and bats. With a vast number of potential civil and military applications, micro aerial vehicles represent an emerging sector of the aerospace market. This paper describes an ongoing research activity in which mechanization and control concepts for biologically inspired micro aerial vehicles are being explored. Research activities focusing on a flexible fixed- wing micro aerial vehicle design and a flapping-based micro aerial vehicle concept are presented.

  6. Acceptance/operational test procedure 241-AN-107 Video Camera System

    International Nuclear Information System (INIS)

    Pedersen, L.T.

    1994-01-01

    This procedure will document the satisfactory operation of the 241-AN-107 Video Camera System. The camera assembly, including camera mast, pan-and-tilt unit, camera, and lights, will be installed in Tank 241-AN-107 to monitor activities during the Caustic Addition Project. The camera focus, zoom, and iris remote controls will be functionally tested. The resolution and color rendition of the camera will be verified using standard reference charts. The pan-and-tilt unit will be tested for required ranges of motion, and the camera lights will be functionally tested. The master control station equipment, including the monitor, VCRs, printer, character generator, and video micrometer will be set up and performance tested in accordance with original equipment manufacturer's specifications. The accuracy of the video micrometer to measure objects in the range of 0.25 inches to 67 inches will be verified. The gas drying distribution system will be tested to ensure that a drying gas can be flowed over the camera and lens in the event that condensation forms on these components. This test will be performed by attaching the gas input connector, located in the upper junction box, to a pressurized gas supply and verifying that the check valve, located in the camera housing, opens to exhaust the compressed gas. The 241-AN-107 camera system will also be tested to assure acceptable resolution of the camera imaging components utilizing the camera system lights

  7. World's fastest and most sensitive astronomical camera

    Science.gov (United States)

    2009-06-01

    corrections to be done at an even higher rate, more than one thousand times a second, and this is where OCam is essential. "The quality of the adaptive optics correction strongly depends on the speed of the camera and on its sensitivity," says Philippe Feautrier from the LAOG, France, who coordinated the whole project. "But these are a priori contradictory requirements, as in general the faster a camera is, the less sensitive it is." This is why cameras normally used for very high frame-rate movies require extremely powerful illumination, which is of course not an option for astronomical cameras. OCam and its CCD220 detector, developed by the British manufacturer e2v technologies, solve this dilemma, by being not only the fastest available, but also very sensitive, making a significant jump in performance for such cameras. Because of imperfect operation of any physical electronic devices, a CCD camera suffers from so-called readout noise. OCam has a readout noise ten times smaller than the detectors currently used on the VLT, making it much more sensitive and able to take pictures of the faintest of sources. "Thanks to this technology, all the new generation instruments of ESO's Very Large Telescope will be able to produce the best possible images, with an unequalled sharpness," declares Jean-Luc Gach, from the Laboratoire d'Astrophysique de Marseille, France, who led the team that built the camera. "Plans are now underway to develop the adaptive optics detectors required for ESO's planned 42-metre European Extremely Large Telescope, together with our research partners and the industry," says Hubin. Using sensitive detectors developed in the UK, with a control system developed in France, with German and Spanish participation, OCam is truly an outcome of a European collaboration that will be widely used and commercially produced. 
More information The three French laboratories involved are the Laboratoire d'Astrophysique de Marseille (LAM/INSU/CNRS, Université de Provence

  8. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    Energy Technology Data Exchange (ETDEWEB)

    WERRY, S.M.

    2000-03-23

    This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151.

  9. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    International Nuclear Information System (INIS)

    WERRY, S.M.

    2000-01-01

    This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151

  10. Innovative private micro-hydro power development in Rwanda

    International Nuclear Information System (INIS)

    Pigaht, Maurice; Plas, Robert J. van der

    2009-01-01

    Under the 'Private Sector Participation in Micro-Hydro Development Project in Rwanda', four newly registered Rwandan companies are each constructing a micro-hydro electricity plant (100-500 kW) and building a low-voltage distribution grid. These companies financed their plants through their own equity and debt with support from the PSP Hydro project. This support comprised a subsidy of 30-50% of investment costs, technical and business development assistance, project monitoring and financial controlling. The experiences gained so far have important implications for similar future micro-hydro energy sector development projects and this paper puts forward three key messages: (i) institutional arrangements rather than technical quality determine the success of such projects; (ii) truly sustainable rural electrification through micro-hydro development demands a high level of local participation at all levels and throughout all project phases, not just after plant commissioning; and (iii) real impact and sustainability can be obtained through close collaboration of local private and financial sector firms requiring only limited external funds. In short, micro-hydro projects can and will be taken up by local investors as a business if the conditions are right. Applying these messages could result in an accelerated uptake of viable micro-hydro activities in Rwanda, and in the opinion of the authors elsewhere too.

  11. Novel development of the micro-tensile test at elevated temperature using a test structure with integrated micro-heater

    Science.gov (United States)

    Ang, W. C.; Kropelnicki, P.; Soe, Oak; Ling, J. H. L.; Randles, A. B.; Hum, A. J. W.; Tsai, J. M. L.; Tay, A. A. O.; Leong, K. C.; Tan, C. S.

    2012-08-01

    This paper describes the novel development of a micro-tensile testing method that allows testing at elevated temperatures. Instead of using a furnace, a titanium/platinum thin film micro-heater was fabricated on a conventional dog-bone-shaped test structure to heat up its gauge section locally. An infrared (IR) camera with 5 µm resolution was employed to verify the temperature uniformity across the gauge section of the test structure. With this micro-heater-integrated test structure, micro-tensile tests can be performed at elevated temperatures using any conventional tensile testing system without any major modification to the system. In this study, the tensile test of the single crystal silicon (SCS) thin film with (1 0 0) surface orientation and tensile direction was performed at room temperature and elevated temperatures, up to 300 °C. Experimental results for Young's modulus as a function of temperature are presented. A micro-sized SCS film showed a low dependence of mechanical properties on temperature up to 300 °C.
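    The quantity extracted from such a tensile test, Young's modulus, is the slope of the linear stress-strain region. A generic sketch with NumPy on synthetic data (the specimen dimensions and the 160 GPa modulus are illustrative, not the authors' measurements):

```python
import numpy as np

def youngs_modulus(force_N, elong_m, area_m2, gauge_len_m):
    """Young's modulus (Pa) as the least-squares slope of the
    stress-strain curve from a tensile test in its elastic region."""
    stress = np.asarray(force_N) / area_m2          # sigma = F / A
    strain = np.asarray(elong_m) / gauge_len_m      # eps = dL / L
    slope, _ = np.polyfit(strain, stress, 1)
    return slope

# Synthetic data: a 2 um x 50 um x 1 mm gauge section pulled
# elastically at an assumed E = 160 GPa.
area, L = 2e-6 * 50e-6, 1e-3
strain = np.linspace(0.0, 1e-3, 10)
force = 160e9 * strain * area
E = youngs_modulus(force, strain * L, area, L)
```

    Repeating this fit on data recorded at each heater set-point yields the modulus-versus-temperature curve the paper reports.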

  12. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    International Nuclear Information System (INIS)

    Mundy, Daniel W.; Herman, Michael G.

    2011-01-01

    Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. 
The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly
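    The threshold step described above marks the voxels of an image slice whose distance to the event cone's intersection curve is below a cutoff. A schematic NumPy sketch (apex, axis and the simple angular distance proxy are assumptions for illustration, not the paper's exact parameterization):

```python
import numpy as np

def cone_slice_mask(apex, axis_u, half_angle, z_slice, xs, ys, tol):
    """Binary back-projection of one Compton event cone into the image
    slice at height z_slice: mark voxels whose approximate distance to
    the cone surface, |angle(p - apex, axis) - half_angle| * |p - apex|,
    falls below `tol`."""
    X, Y = np.meshgrid(xs, ys)
    Z = np.full_like(X, z_slice)
    P = np.stack([X, Y, Z], axis=-1) - apex
    r = np.linalg.norm(P, axis=-1)
    cos_ang = np.clip((P @ axis_u) / np.maximum(r, 1e-12), -1.0, 1.0)
    dist = np.abs(np.arccos(cos_ang) - half_angle) * r
    return dist < tol

# A 45-degree cone with apex at the origin and axis along +z: in the
# slice z = 10 the intersection curve is a circle of radius 10.
xs = ys = np.linspace(-20.0, 20.0, 81)
mask = cone_slice_mask(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                       np.pi / 4, 10.0, xs, ys, 0.5)
```

    Summing such binary masks over all slices and all detector events produces the three-dimensional back-projection image used to seed the filtered or iterative reconstruction.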

  13. Coaxial fundus camera for ophthalmology

    Science.gov (United States)

    de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.

    2015-09-01

    A fundus camera for ophthalmology is a high-definition device that must provide low-light illumination of the human retina, high resolution at the retina, and a reflection-free image [1]. Those constraints make its optical design very sophisticated, but the most difficult to satisfy are the reflection-free illumination and the final alignment, owing to the large number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and a poor alignment makes the sophisticated optical design useless. In this work we developed a totally coaxial optical system for a non-mydriatic fundus camera. The illumination is provided by an LED ring, coaxial with the optical system and composed of IR or visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD + capture-lens module is a CCTV camera with built-in autofocus and zoom, added to a 175 mm focal length doublet corrected for infinity, making the system easy to operate and very compact.

  14. A learning-based approach to understanding success in rural electrification : insights from micro hydro projects in Bolivia

    NARCIS (Netherlands)

    Drinkwaard, ir. W.; Kirkels, A.F.; Romijn, H.A.

    2010-01-01

    The paper analyzes the performance of a set of rural Micro Hydro Power (MHP) projects and MHP-implementing organizations in Bolivia, using a learning-based analytical perspective. Rather than identifying a generic set of critical success factors such as access to finance, adequate technological

  15. Innovative private micro-hydro power development in Rwanda

    Energy Technology Data Exchange (ETDEWEB)

    Pigaht, Maurice; Van der Plas, Robert J. [MARGE-Netherlands, Brem 68, 7577 EW Oldenzaal (Netherlands)

    2009-11-15

    Under the 'Private Sector Participation in Micro-Hydro Development Project in Rwanda', four newly registered Rwandan companies are each constructing a micro-hydro electricity plant (100-500 kW) and building a low-voltage distribution grid. These companies financed their plants through their own equity and debt with support from the PSP Hydro project. This support comprised a subsidy of 30-50% of investment costs, technical and business development assistance, project monitoring and financial controlling. The experiences gained so far have important implications for similar future micro-hydro energy sector development projects and this paper puts forward three key messages: (1) institutional arrangements rather than technical quality determine the success of such projects; (2) truly sustainable rural electrification through micro-hydro development demands a high level of local participation at all levels and throughout all project phases, not just after plant commissioning; and (3) real impact and sustainability can be obtained through close collaboration of local private and financial sector firms requiring only limited external funds. In short, micro-hydro projects can and will be taken up by local investors as a business if the conditions are right. Applying these messages could result in an accelerated uptake of viable micro-hydro activities in Rwanda, and in the opinion of the authors elsewhere too. (author)

  16. CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    G. Kemper

    2016-06-01

    step with the help of the nadir camera and the GPS/IMU data, an initial orientation correction and radial correction were calculated. With this approach, the whole project was calculated and calibrated in one step. During the iteration process, the radial and tangential parameters were switched on individually for the camera heads, and after that the camera constants and principal point positions were checked and finally calibrated. Besides that, the boresight calibration can be performed either on the basis of the nadir camera and its offsets, or independently for each camera without correlation to the others. This must be performed on a complete mission anyway to obtain stability between the single camera heads. Determining the lever arms from the nodal points to the IMU centre needs more caution than for a single camera, especially due to the strong tilt angle. With all these steps prepared, you obtain a highly accurate sensor that enables fully automated data extraction with rapid updates of your existing data. Frequent monitoring of urban dynamics is then possible in a fully 3D environment.

  17. Application of micro-PIXE and imaging technology to life science (Joint research)

    International Nuclear Information System (INIS)

    Satoh, Takahiro; Ishii, Keizo

    2011-03-01

    The joint research on 'Application of micro-PIXE and imaging technology to life science', supported by the Inter-organizational Atomic Energy Research Program, was performed for three years, from FY2006 to FY2009. Aiming to apply the in-air micro-PIXE analytical system to life science, the research consisted of 7 collaborative themes related to beam engineering for micro-PIXE and applied technology of element mapping in biological/medical fields. The system to acquire spatial element maps of living cells, the so-called micro-PIXE camera, was originally developed through collaborative research between the JAEA and the department of engineering of Tohoku University. This review covers these research results. (author)

  18. [F18]-FDG imaging of experimental animal tumours using a hybrid gamma-camera

    International Nuclear Information System (INIS)

    Lausson, S.; Maurel, G.; Kerrou, K.; Montravers, F.; Petegnief, Y.; Talbot, J.N.; Fredelizi, D.

    2001-01-01

    Positron emission tomography (PET) has been widely used in clinical studies. This technology permits detection of compounds labelled with positron-emitting radionuclides, in particular [F18]-fluorodeoxyglucose ([F18]-FDG). [F18]-FDG uptake and accumulation are generally related to malignancy; some recent works have suggested the usefulness of PET cameras dedicated to small laboratory animals (micro-PET). Our study dealt with the feasibility of [F18]-FDG imaging of malignant tumours in animal models by means of a hybrid camera designed for human scintigraphy. We evaluated the ability of coincidence detection emission tomography (CDET) using this hybrid camera to visualize in vivo subcutaneous tumours grafted onto mice or rats. P815 murine mastocytoma grafted in syngeneic DBA/2 mice resulted in foci of very high FDG uptake. Tumours with a diameter of only 3 mm were clearly visualized. Medullary thyroid cancer induced by the rMTC 6/23 and CA77 lines in syngeneic Wag/Rij rats was also detected. The differentiated CA77 tumours exhibited avidity for [F18]-FDG, and a tumour that was just palpable (diameter below 2 mm) was identified. In conclusion, CDET-FDG is a non-invasive imaging tool which can be used to follow grafted tumours in small laboratory animals, even when their size is smaller than 1 cm. It has the potential to evaluate experimental anticancer treatments in small series of animals by individual follow-up. It offers the opportunity to develop experimental PET research within a nuclear medicine or biophysics department, the shift to a dedicated micro-PET device being subsequently necessary. It is indeed compulsory to strictly follow the rules for non-contamination and disinfection of the hybrid camera. (authors)

  19. Piezoelectric energy harvesting from morphing wing motions for micro air vehicles

    KAUST Repository

    Abdelkefi, Abdessattar; Ghommem, Mehdi

    2013-01-01

    Wing flapping and morphing can be very beneficial to managing the weight of micro air vehicles through coupling the aerodynamic forces with stability and control. In this letter, harvesting energy from the wing morphing is studied to power cameras

  20. Status Report on the Development of Micro-Scheduling Software for the Advanced Outage Control Center Project

    Energy Technology Data Exchange (ETDEWEB)

    Germain, Shawn St. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Thomas, Kenneth [Idaho National Lab. (INL), Idaho Falls, ID (United States); Farris, Ronald [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-09-01

    The long-term viability of existing nuclear power plants (NPPs) in the United States (U.S.) is dependent upon a number of factors, including maintaining high capacity factors, maintaining nuclear safety, and reducing operating costs, particularly those associated with refueling outages. Refueling outages typically take 20-30 days, and for existing light water NPPs in the U.S., the reactor cannot be in operation during the outage. Furthermore, given that many NPPs generate between $1 million and $1.5 million per day in revenue when in operation, there is considerable interest in shortening the length of refueling outages. Yet, refueling outages are highly complex operations, involving multiple concurrent and dependent activities that are difficult to coordinate. Finding ways to improve refueling outage performance while maintaining nuclear safety has proven to be difficult. The Advanced Outage Control Center project is a research and development (R&D) demonstration activity under the Light Water Reactor Sustainability (LWRS) Program. LWRS is an R&D program that works with industry R&D programs to establish the technical foundations for licensing and managing long-term, safe, and economical operation of current NPPs. The Advanced Outage Control Center project has the goal of improving the management of commercial NPP refueling outages. To accomplish this goal, this INL R&D project is developing an advanced outage control center (OCC) that is specifically designed to maximize the usefulness of communication and collaboration technologies for outage coordination and problem resolution activities. This report describes recent efforts to develop a capability called outage Micro-Scheduling. Micro-Scheduling is the ability to allocate and schedule outage support task resources on a sub-hour basis: the real-time fine-tuning of the outage schedule to react to the actual progress of the primary outage activities to ensure that support task resources are

  1. Optimum color filters for CCD digital cameras

    Science.gov (United States)

    Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl

    1993-12-01

    As part of the ESPRIT II project No. 2103 (MASCOT) a high performance prototype color CCD still video camera was developed. Intended for professional usage such as in the graphic arts, the camera provides a maximum resolution of 3k X 3k full color pixels. A high colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization which minimized the perceivable color errors as measured in the 1976 CIELUV uniform color space for a set of about 200 carefully selected test colors. The color filters were designed to allow perfect colorimetric reproduction in principle and at the same time with imperceptible color noise and with special attention to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in the redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) seems to be feasible, implying that it is possible with such an optimized color camera to achieve such a high colorimetric performance that the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.

  2. Design and evaluation of a high-performance charge coupled device camera for astronomical imaging

    International Nuclear Information System (INIS)

    Shang, Yuanyuan; Guan, Yong; Zhang, Weigong; Pan, Wei; Liu, Hui; Zhang, Jie

    2009-01-01

    The Space Solar Telescope (SST) is the first Chinese space astronomy mission. This paper introduces the design of a high-performance 2K × 2K charge coupled device (CCD) camera that is an important payload in the Space Solar Telescope. The camera is composed of an analogue system and a digital embedded system. The analogue system is first discussed in detail, including the power and bias voltage supply circuit, power protection unit, CCD clock driver circuit, 16 bit A/D converter and low-noise amplifier circuit. The digital embedded system integrated with an NIOS II soft-core processor serves as the control and data acquisition system of the camera. In addition, research on evaluation methods for CCDs was carried out to evaluate the performance of the TH7899 CCD camera in relation to the requirements of the SST project. We present the evaluation results, including readout noise, linearity, quantum efficiency, dark current, full-well capacity, charge transfer efficiency and gain. The results show that this high-performance CCD camera can satisfy the specifications of the SST project

  3. Remote micro hydro

    Energy Technology Data Exchange (ETDEWEB)

    1985-03-01

    The micro-hydro project, built on a small tributary of Cowley Creek, near Whitehorse, Yukon, is an important step in the development of alternative energy sources and in conserving expensive diesel fuel. In addition to demonstrating the technical aspects of harnessing water power, the project paved the way for easier regulatory procedures. The power will be generated by a 9 meter head and a 6 inch crossflow turbine. The 36 V DC power will be stored in three 12 V batteries and converted to AC on demand by a 3,800 watt inverter. The system will produce 1.6 kW or 14,016 kWh per year with a firm flow of 1.26 cfs. This is sufficient to supply electricity for household needs and a wood working shop. The project is expected to cost about $18,000 and is more economical than tying into the present grid system, or continuing to use a gasoline generator. An environmental study determined that any impact of the project on the stream would be negligible. It is expected that no other water users will be affected by the project. This pilot project in micro-hydro applications will serve as a good indicator of the viability of this form of alternate energy in the Yukon. The calculations comparing the micro-hydro and grid system indicate that the micro-hydro system is a viable source of inflation-proof power. Higher heads and larger flows resulting in AC generation in excess of 10 kW would yield much better returns than this project. 3 tabs.
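
    The quoted figures are internally consistent, which the standard hydro power formula P = ρgQH (times an overall efficiency) makes easy to check. The sketch below uses only the head, flow and output stated in the record; the efficiency is solved for, not stated in the report.

```python
# Back-of-envelope check of the Cowley Creek micro-hydro figures
# quoted above (9 m head, 1.26 cfs firm flow, 1.6 kW output).
# The efficiency computed below is an inference from those numbers,
# not a value stated in the report.

G = 9.81                 # gravitational acceleration, m/s^2
RHO = 1000.0             # water density, kg/m^3
CFS_TO_M3S = 0.0283168   # cubic feet per second -> m^3/s

def hydraulic_power_kw(head_m: float, flow_cfs: float) -> float:
    """Gross hydraulic power P = rho * g * Q * H, in kW."""
    q = flow_cfs * CFS_TO_M3S
    return RHO * G * q * head_m / 1000.0

p_gross = hydraulic_power_kw(9.0, 1.26)   # ~3.15 kW available in the water
eta = 1.6 / p_gross                       # implied overall system efficiency
annual_kwh = 1.6 * 8760                   # 1.6 kW around the clock

print(f"gross hydraulic power: {p_gross:.2f} kW")
print(f"implied overall efficiency: {eta:.0%}")
print(f"annual energy at 1.6 kW: {annual_kwh:.0f} kWh")
```

    The annual figure works out to exactly the 14,016 kWh quoted, and the implied overall efficiency of about 51% is plausible for a small crossflow turbine feeding batteries and an inverter.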

  4. Development of underwater camera using high-definition camera

    International Nuclear Information System (INIS)

    Tsuji, Kenji; Watanabe, Masato; Takashima, Masanobu; Kawamura, Shingo; Tanaka, Hiroyuki

    2012-01-01

    In order to reduce the time for core verification or visual inspection of BWR fuels, an underwater camera based on a high-definition camera has been developed. The resulting underwater camera has two lights, dimensions of 370 x 400 x 328 mm, and a weight of 20.5 kg. Using the camera, six or so spent-fuel IDs can be identified at a time from a distance of 1 to 1.5 m, and a 0.3 mmφ pin-hole can be recognized at a distance of 1.5 m with 20x zoom. Noise caused by radiation below 15 Gy/h does not affect the images. (author)

  5. Utilization of a gamma camera in research of the concentration in marine products

    International Nuclear Information System (INIS)

    Nakamura, Ryoichi

    1981-01-01

    A gamma camera was used for the study of the metabolism of micro elements in marine products. Hexagrammos otakii (rock trout) was put under anesthesia with MS-222. By partly cutting the abdomen, the internal organs were exposed. 1 - 2 mCi of technetium-99m was injected into the bulbus arteriosus. From immediately after the injection, photographs were taken consecutively, one picture every 0.5 second for 30 seconds, to a total of 60 pictures. Since the gamma camera has been developed solely for human beings, there is some inconvenience when it is applied to marine products. The advantages of using a gamma camera are that the behaviour of substances in the body can be observed while the marine animal is alive, and that variations in substance behaviour can be captured at extremely short intervals. The disadvantages are the low resolution of about 5 mm - 7 mm and the difficulty in differentiating overlapping organs. (J.P.N.)

  6. The Theatricality of the Punctum: Re-Viewing Camera Lucida

    Directory of Open Access Journals (Sweden)

    Harry Robert Wilson

    2017-06-01

    Full Text Available I first encountered Roland Barthes’s Camera Lucida (1980) in 2012 when I was developing a performance on falling and photography. Since then I have re-encountered Barthes’s book annually as part of my practice-as-research PhD project on the relationships between performance and photography. This research project seeks to make performance work in response to Barthes’s book – to practice with Barthes in an exploration of theatricality, materiality and affect. This photo-essay weaves critical discourse with performance documentation to explore my relationship to Barthes’s book. Responding to Michael Fried’s claim that Barthes’s Camera Lucida is an exercise in “antitheatrical critical thought” (Fried 2008, 98), the essay seeks to re-view debates on theatricality and anti-theatricality in and around Camera Lucida. Specifically, by exploring Barthes’s conceptualisation of the pose I discuss how performance practice might re-theatricalise the punctum and challenge a supposed antitheatricalism in Barthes’s text. Additionally, I argue for Barthes’s book as an example of philosophy as performance and for my own work as an instance of performance philosophy.

  7. Power estimation of martial arts movement using 3D motion capture camera

    Science.gov (United States)

    Azraai, Nur Zaidi; Awang Soh, Ahmad Afiq Sabqi; Mat Jafri, Mohd Zubir

    2017-06-01

    Motion capture (MOCAP) cameras have been widely used in many areas such as biomechanics, physiology, animation and the arts. This project approaches the mechanics of movement and extends the application of MOCAP to sports. Most researchers use a force plate, which can only measure the force of impact; we are instead keen to observe the kinematics of the movement. Martial arts is one of the sports that uses more than one part of the human body. For this project, the martial art `Silat' was chosen because of its wide practice in Malaysia. Two performers were selected, one experienced in `Silat' practice and one with no experience at all, so that the energy and force generated by the two performers could be compared. Each performer executed punches with the same posture; two types of punching move were selected for this project. Before the measurements started, a calibration was carried out using a T-stick fitted with markers, so that the software knew the area covered by the cameras and errors during analysis were reduced. A punching bag with a mass of 60 kg was hung on an iron bar as a target; it is used to determine the impact force of a performer's punch. The punching bag was also fitted with optical markers so that its movement after impact could be observed. Eight cameras were used, two placed on each side wall at different angles in a rectangular room of 270 ft2, with the cameras covering approximately 50 ft2. We covered only a small area so that less noise would be detected, making the measurement more accurate. Markers were attached along the whole arm whose motion we wanted to observe and measure. The passive markers used in this project reflect the infrared light generated by the cameras; the reflected light reaches the camera sensor, so the marker position can be detected and shown in the software. The use of many cameras is to increase the
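
    The kinematic route to impact force described above amounts to differentiating marker positions twice and applying F = ma. A minimal sketch, with an assumed effective fist mass and a made-up trajectory (neither comes from the study):

```python
# Illustrative estimate of punch impact force from motion-capture
# marker positions: differentiate position twice, then F = m * a.
# The 0.6 kg effective fist mass and the synthetic trajectory below
# are assumptions for illustration, not data from the study.

def velocities(times, xs):
    """Central-difference velocity at interior samples."""
    return [(xs[i + 1] - xs[i - 1]) / (times[i + 1] - times[i - 1])
            for i in range(1, len(xs) - 1)]

def peak_force(times, xs, mass):
    """Peak |F| = m * max |a|, with a from differencing the velocity."""
    v = velocities(times, xs)
    t_mid = times[1:-1]
    a = [(v[i + 1] - v[i - 1]) / (t_mid[i + 1] - t_mid[i - 1])
         for i in range(1, len(v) - 1)]
    return mass * max(abs(ai) for ai in a)

# Synthetic fist trajectory sampled at 500 Hz: 7 m/s approach,
# stopped by the bag with a constant 700 m/s^2 deceleration.
dt = 0.002
times = [i * dt for i in range(11)]
xs = []
for t in times:
    if t < 0.01:
        xs.append(7.0 * t)                                  # free travel
    else:
        u = t - 0.01
        xs.append(0.07 + 7.0 * u - 0.5 * 700.0 * u ** 2)    # decelerating

print(f"peak force = {peak_force(times, xs, 0.6):.0f} N")
```

    For these assumed numbers the recovered peak force is 0.6 kg x 700 m/s^2 = 420 N; in practice the finite-difference acceleration is noisy, which is why capture rate and marker noise matter so much.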

  8. Fabry-Perot interferometry using an image-intensified rotating-mirror streak camera

    International Nuclear Information System (INIS)

    Seitz, W.L.; Stacy, H.L.

    1983-01-01

    A Fabry-Perot velocity interferometer system is described that uses a modified rotating-mirror streak camera to record the dynamic fringe positions. A Los Alamos Model 72B rotating-mirror streak camera, equipped with a beryllium mirror, was modified to include a high-aperture (f/2.5) relay lens and a 40-mm image-intensifier tube, such that the image normally formed at the film plane of the streak camera is projected onto the intensifier tube. Fringe records for thin (0.13 mm) flyers driven by a small bridgewire detonator, obtained with Model C1155-01 Hamamatsu and Model 790 Imacon electronic streak cameras, are compared with those obtained with the image-intensified rotating-mirror streak camera (I²RMC). Resolution comparisons indicate that the I²RMC gives better time resolution than either the Hamamatsu or the Imacon for total writing times of a few microseconds or longer

  9. Influence of Digital Camera Errors on the Photogrammetric Image Processing

    Science.gov (United States)

    Sužiedelytė-Visockienė, Jūratė; Bručas, Domantas

    2009-01-01

    The paper deals with the calibration of the digital camera Canon EOS 350D, often used for photogrammetric 3D digitalisation and measurements of industrial and construction-site objects. During the calibration, data on the optical and electronic parameters influencing the distortion of images were obtained, such as the correction of the principal point, the focal length of the objective, and the radial symmetric and non-symmetric distortions. The calibration was performed by means of the Tcc software, which implements Chebyshev polynomials, using a special test field with marks whose coordinates are precisely known. The main task of the research was to determine how the camera calibration parameters influence the processing of images, i.e. the creation of the geometric model, the results of triangulation calculations and stereo-digitalisation. Two photogrammetric projects were created for this task: in the first, uncorrected images were used; in the second, images corrected for the optical errors of the camera obtained during the calibration. The results of the image-processing analysis are shown in figures and tables. Conclusions are given.
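
    The radial and tangential distortion terms mentioned above are conventionally modelled with the Brown-Conrady polynomial that most calibration packages fit. A minimal sketch on normalized image coordinates; the coefficient values in the usage lines are invented for illustration:

```python
# Sketch of the Brown-Conrady distortion model commonly fitted by
# camera calibration software: radial terms k1, k2 and tangential
# (decentering) terms p1, p2, applied to normalized image
# coordinates. Coefficient values used below are illustrative only.

def distort(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Map an undistorted normalized point (x, y) to its distorted
    position."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

# With all coefficients zero the mapping is the identity:
print(distort(0.3, -0.2))              # -> (0.3, -0.2)
# Barrel distortion (k1 < 0) pulls points toward the image centre:
print(distort(0.3, -0.2, k1=-0.1))
```

    Correcting an image amounts to inverting this mapping, which is why calibration quality propagates directly into the geometric model and triangulation results discussed in the paper.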

  10. Lights, Camera, Project-Based Learning!

    Science.gov (United States)

    Cox, Dannon G.; Meaney, Karen S.

    2018-01-01

    A physical education instructor incorporates a teaching method known as project-based learning (PBL) in his physical education curriculum. Utilizing video-production equipment to imitate the production of a television show, sixth-grade students attending a charter school invited college students to share their stories about physical activity and…

  11. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern were determined by Pix4Dmapper and independently adjusted and analyzed with the program system BLUH. Dense matching by Pix4Dmapper provided 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centres are satisfactorily but not very strongly connected. This leads to very high values in the Student test (T-test) of the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration by IGI, but there are still radial symmetric distortions for the inclined cameras with a size exceeding 5 μm, even though they were described as negligible in the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding

  12. The use of low cost compact cameras with focus stacking functionality in entomological digitization projects

    Directory of Open Access Journals (Sweden)

    Jan Mertens

    2017-10-01

    Full Text Available Digitization of specimen collections has become a key priority of many natural history museums. The camera systems built for this purpose are expensive, providing a barrier in institutes with limited funding, and therefore hampering progress. An assessment is made on whether a low cost compact camera with image stacking functionality can help expedite the digitization process in large museums or provide smaller institutes and amateur entomologists with the means to digitize their collections. Images of a professional setup were compared with the Olympus Stylus TG-4 Tough, a low-cost compact camera with internal focus stacking functions. Parameters considered include image quality, digitization speed, price, and ease-of-use. The compact camera’s image quality, although inferior to the professional setup, is exceptional considering its fourfold lower price point. Producing the image slices in the compact camera is a matter of seconds and when optimal image quality is less of a priority, the internal stacking function omits the need for dedicated stacking software altogether, further decreasing the cost and speeding up the process. In general, it is found that, aware of its limitations, this compact camera is capable of digitizing entomological collections with sufficient quality. As technology advances, more institutes and amateur entomologists will be able to easily and affordably catalogue their specimens.
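
    The in-camera focus stacking being evaluated above can, in principle, be sketched as per-pixel sharpness selection: for every pixel, keep the value from the focus slice with the highest local contrast. The tiny grayscale frames and the Laplacian sharpness measure below are illustrative simplifications of what camera firmware actually does:

```python
# Minimal sketch of focus stacking: for each pixel, keep the value
# from the slice whose local contrast (absolute Laplacian) is
# highest. The frames here are tiny synthetic grayscale images,
# not real camera output; real stacking also aligns the slices.

def laplacian(img, i, j):
    """4-neighbour Laplacian magnitude with clamped borders."""
    h, w = len(img), len(img[0])
    c = img[i][j]
    nb = [img[max(i - 1, 0)][j], img[min(i + 1, h - 1)][j],
          img[i][max(j - 1, 0)], img[i][min(j + 1, w - 1)]]
    return abs(sum(nb) - 4 * c)

def focus_stack(frames):
    """Fuse frames by per-pixel maximum sharpness."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            best = max(frames, key=lambda f: laplacian(f, i, j))
            out[i][j] = best[i][j]
    return out

# Frame a is sharp on the left (strong edge) and defocused on the
# right; frame b is the opposite.
a = [[0, 9, 5, 5], [0, 9, 5, 5], [0, 9, 5, 5]]
b = [[5, 5, 9, 0], [5, 5, 9, 0], [5, 5, 9, 0]]
fused = focus_stack([a, b])
print(fused)
```

    The fused result keeps the sharp edge from each frame (rows of [0, 9, 9, 0]), which is exactly the extended depth of field that makes stacking so useful for pinned insect specimens.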

  13. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    In this article we present an automatic camera calibration algorithm using multiple trajectories in a multiple camera network with non-overlapping field-of-views (FOV). Visible trajectories within a camera FOV are assumed to be measured with respect to the camera local co-ordinate system.

  14. Precision moulding of polymer micro components

    DEFF Research Database (Denmark)

    Tosello, Guido

    2008-01-01

    The present research work contains a study concerning polymer micro components manufacturing by means of the micro injection moulding (µIM) process. The overall process chain was considered and investigated during the project, including part design and simulation, tooling, process analysis, part...... optimization, quality control, multi-material solutions. A series of experimental investigations were carried out on the influence of the main µIM process factors on the polymer melt flow within micro cavities. These investigations were conducted on a conventional injection moulding machine adapted...... to the production of micro polymer components, as well as on a micro injection moulding machine. A new approach based on coordinate optical measurement of flow markers was developed during the project for the characterization of the melt flow. In-line pressure measurements were also performed to characterize...

  15. On the accuracy potential of focused plenoptic camera range determination in long distance operation

    Science.gov (United States)

    Sardemann, Hannes; Maas, Hans-Gerd

    2016-04-01

    Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology and computer hardware have boosted the field and led to several commercially available ready-to-use cameras. Beyond the popular option of a posteriori image focusing or total-focus image generation, their basic ability to generate 3D information from single-camera imagery is a very beneficial option for certain applications. The paper first presents some fundamentals on the design and history of plenoptic cameras and describes depth determination from plenoptic camera image data. It then presents an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close-range applications, we focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors in the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much larger than these values were observed in single-point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for real-time robotics applications such as autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.
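
    The quadratic growth of depth error with range reported above follows directly from triangulation error propagation: with Z = cB/d, a disparity noise sigma_d maps to sigma_Z = Z^2 * sigma_d / (cB). The baseline, focal length and disparity noise below are illustrative values, not parameters of the camera tested in the paper:

```python
# Why depth error of a triangulating (stereo or plenoptic) camera
# grows quadratically with range: Z = c * B / d, so disparity noise
# sigma_d propagates to sigma_Z = Z^2 * sigma_d / (c * B).
# B, C and SD below are assumed illustrative values.

def depth_sigma(z, base_m, focal_px, sigma_d_px):
    """1-sigma depth error at range z (metres)."""
    return z * z * sigma_d_px / (focal_px * base_m)

B = 0.05      # effective baseline, m
C = 5000.0    # focal length, px
SD = 0.1      # disparity measurement noise, px

for z in (10.0, 30.0, 100.0):
    s = depth_sigma(z, B, C, SD)
    print(f"Z = {z:5.0f} m  sigma_Z = {s:6.2f} m  ({100 * s / z:.1f} %)")
```

    Doubling the range quadruples the absolute depth error, so the relative error grows linearly with distance; that is the ballpark behaviour behind the ~3% errors at 30-100 m reported above.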

  16. a Modified Projective Transformation Scheme for Mosaicking Multi-Camera Imaging System Equipped on a Large Payload Fixed-Wing Uas

    Science.gov (United States)

    Jhan, J. P.; Li, Y. T.; Rau, J. Y.

    2015-03-01

    In recent years, Unmanned Aerial Systems (UAS) have been applied to collect aerial images for mapping, disaster investigation, vegetation monitoring, etc. A UAS is a platform with higher mobility and lower risk for human operators, but its low payload and short operation time reduce the image collection efficiency. In this study, a multiple-camera system composed of one nadir and four oblique consumer-grade DSLR cameras is mounted on a large-payload UAS, designed to collect large ground-coverage images in an effective way. The field of view (FOV) is increased to 127 degrees, making the system suitable for collecting disaster images in mountainous areas. The five acquired images are registered and mosaicked into a larger-format virtual image to reduce the number of images and the post-processing time, and to ease stereo plotting. Instead of traditional image matching and bundle adjustment to estimate the transformation parameters, the IOPs and ROPs of the multiple cameras are calibrated to derive the coefficients of a modified projective transformation (MPT) model for image mosaicking. However, there is some uncertainty in the indoor-calibrated IOPs and ROPs, owing to differing environmental conditions as well as the vibration of the UAS, which causes misregistration effects in the initial MPT results. Remaining residuals are analysed through tie-point matching in the overlapping areas of the initial MPT results, from which displacement and scale differences are derived and corrected to modify the ROPs and IOPs for finer registration. In this experiment, the internal accuracy of the mosaic image is better than 0.5 pixels after correcting the systematic errors. Comparisons between separate cameras and mosaic images through rigorous aerial triangulation are conducted, in which the RMSE of 5 control and 9 check points is less than 5 cm and 10 cm in the planimetric and vertical directions, respectively, for all cases. This proves that the designed imaging system and the proposed scheme
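
    The projective mapping at the core of the MPT model is the standard 3x3 homography applied to pixel coordinates before pasting each oblique image into the virtual mosaic frame. The matrix values below are invented for illustration; the paper's MPT additionally corrects residual displacement and scale differences:

```python
# A plain 3x3 projective (homography) mapping of pixel coordinates,
# the operation underlying the modified projective transformation
# (MPT) described above. The example matrix H is illustrative only.

def project(h, x, y):
    """Apply homography h (3x3 nested lists) to pixel (x, y)."""
    xs = h[0][0] * x + h[0][1] * y + h[0][2]
    ys = h[1][0] * x + h[1][1] * y + h[1][2]
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return xs / w, ys / w   # homogeneous divide

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# Shift an oblique image 120 px right / 40 px down, with a mild
# shear and perspective term, before pasting into the mosaic frame:
H = [[1.0, 0.02, 120.0],
     [0.0, 1.00, 40.0],
     [0.0, 0.0001, 1.0]]

print(project(IDENTITY, 500, 300))   # -> (500.0, 300.0)
print(project(H, 500, 300))
```

    Deriving the eight free coefficients of such a mapping from calibrated IOPs/ROPs rather than from per-flight image matching is precisely what saves the post-processing time mentioned in the abstract.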

  17. A MODIFIED PROJECTIVE TRANSFORMATION SCHEME FOR MOSAICKING MULTI-CAMERA IMAGING SYSTEM EQUIPPED ON A LARGE PAYLOAD FIXED-WING UAS

    Directory of Open Access Journals (Sweden)

    J. P. Jhan

    2015-03-01

    Full Text Available In recent years, Unmanned Aerial System (UAS) has been applied to collect aerial images for mapping, disaster investigation, vegetation monitoring and etc. It is a higher mobility and lower risk platform for human operation, but the low payload and short operation time reduce the image collection efficiency. In this study, one nadir and four oblique consumer grade DSLR cameras composed multiple camera system is equipped on a large payload UAS, which is designed to collect large ground coverage images in an effective way. The field of view (FOV) is increased to 127 degree, which is thus suitable to collect disaster images in mountainous area. The synthetic acquired five images are registered and mosaicked as larger format virtual image for reducing the number of images, post processing time, and for easier stereo plotting. Instead of traditional image matching and applying bundle adjustment method to estimate transformation parameters, the IOPs and ROPs of multiple cameras are calibrated and derived the coefficients of modified projective transformation (MPT) model for image mosaicking. However, there are some uncertainty of indoor calibrated IOPs and ROPs since the different environment conditions as well as the vibration of UAS, which will cause misregistration effect of initial MPT results. Remaining residuals are analysed through tie points matching on overlapping area of initial MPT results, in which displacement and scale difference are introduced and corrected to modify the ROPs and IOPs for finer registration results. In this experiment, the internal accuracy of mosaic image is better than 0.5 pixels after correcting the systematic errors. Comparison between separate cameras and mosaic images through rigorous aerial triangulation are conducted, in which the RMSE of 5 control and 9 check points is less than 5 cm and 10 cm in planimetric and vertical directions, respectively, for all cases. It proves that the designed imaging system and the

  18. Optimization of grate combustion by means of an IR camera. Final report; Optimering af risteforbraending IR-kamera. Slut rapport

    Energy Technology Data Exchange (ETDEWEB)

    Didriksen, H.; Jensen, Joergen Peter; Hansen, Joergen (DONG Energy, Fredericia (Denmark)); Clausen, Soennik; Larsen, Henning (Technical Univ. of Denmark, Risoe National Lab. for Sustainable Energy, Roskilde (Denmark))

    2010-09-15

    The target of the project has been to improve the control and regulation of grate-fired straw boilers by incorporating measuring signals from a specially developed IR camera into a new regulation concept. The project was carried out with the straw boiler at the Avedoere power station. The conclusion is that developing an IR camera, including software, that can function as a process measuring device for continuous on-line measurement under the very harsh conditions in a straw-fired boiler is a very demanding task; this proved not to be possible within the framework of this project. The developed camera has, on the other hand, proved to be very well suited for measuring campaigns where the camera is manned and continuously monitored. (Energy 11)

  19. In-line phase contrast micro-CT reconstruction for biomedical specimens.

    Science.gov (United States)

    Fu, Jian; Tan, Renbo

    2014-01-01

    X-ray phase contrast micro computed tomography (micro-CT) can non-destructively provide the internal structure information of soft tissues and low atomic number materials. It has become an invaluable analysis tool for biomedical specimens. Here an in-line phase contrast micro-CT reconstruction technique is reported, which consists of a projection extraction method and the conventional filtered back-projection (FBP) reconstruction algorithm. The projection extraction is implemented by applying the Fourier transform to the forward projections of in-line phase contrast micro-CT. This work comprises a numerical study of the method and its experimental verification using a biomedical specimen dataset measured at an X-ray tube source micro-CT setup. The numerical and experimental results demonstrate that the presented technique can improve the imaging contrast of biomedical specimens. It will be of interest for a wide range of in-line phase contrast micro-CT applications in medicine and biology.

  20. MicroASC instrument onboard Juno spacecraft utilizing inertially controlled imaging

    DEFF Research Database (Denmark)

    Pedersen, David Arge Klevang; Jørgensen, Andreas Härstedt; Benn, Mathias

    2016-01-01

    This contribution describes the post-processing of the raw image data acquired by the microASC instrument during the Earth fly-by of the Juno spacecraft. The images show a unique view of the Earth and Moon system as seen from afar. The procedure utilizes attitude measurements and inter-calibration of the Camera Head Units of the microASC system to trigger the image capturing. The triggering is synchronized with the inertial attitude and rotational phase of the sensor acquiring the images. This essentially works as inertially controlled imaging, facilitating image acquisition from unexplored...

  1. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    Science.gov (United States)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images, leading to significantly higher radiation doses and longer scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reducing reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time, which improves in vivo imaging protocols.
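    The core idea of sparsity-regularized reconstruction from few projections can be illustrated with a much simpler proximal-gradient method (ISTA) on a toy linear system; this is a stand-in for, not the authors' actual, gradient-based Douglas-Rachford algorithm with wavelet packet shrinkage:

    ```python
    import numpy as np

    def soft_threshold(x, t):
        """Shrinkage operator: the proximal map of the l1 norm."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, b, lam=0.05, n_iter=200):
        """Iterative Shrinkage-Thresholding for min ||Ax - b||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            # gradient step on the data term, then shrinkage on the sparsity term
            x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
        return x
    ```

    In the real setting `A` would be the (undersampled) projection operator and the shrinkage would act on wavelet packet coefficients rather than directly on the image.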

  2. Novel methods of ozone generation by micro-plasma concept

    Energy Technology Data Exchange (ETDEWEB)

    Fateev, A.; Chiper, A.; Chen, W.; Stamate, E.

    2008-02-15

    The project objective was to study the possibilities for new and cheaper methods of generating ozone by means of different types of micro-plasma generators: DBD (Dielectric Barrier Discharge), MHCD (Micro-Hollow Cathode Discharge) and CPED (Capillary Plasma Electrode Discharge). This project supplements another current project where plasma-based DeNOx is being studied and optimised. The results show potential for reducing ozone generation costs by means of micro-plasmas, but further development is needed. (ln)

  3. EVALUATION OF THE QUALITY OF ACTION CAMERAS WITH WIDE-ANGLE LENSES IN UAV PHOTOGRAMMETRY

    OpenAIRE

    Hastedt, H.; Ekkel, T.; Luhmann, T.

    2016-01-01

    The application of light-weight cameras in UAV photogrammetry is required due to restrictions in payload. In general, consumer cameras with normal lens type are applied to a UAV system. The availability of action cameras, like the GoPro Hero4 Black, including a wide-angle lens (fish-eye lens) offers new perspectives in UAV projects. With these investigations, different calibration procedures for fish-eye lenses are evaluated in order to quantify their accuracy potential in UAV photogrammetry....

  4. Emission computerized axial tomography from multiple gamma-camera views using frequency filtering.

    Science.gov (United States)

    Pelletier, J L; Milan, C; Touzery, C; Coitoux, P; Gailliard, P; Budinger, T F

    1980-01-01

    Emission computerized axial tomography is achievable in any nuclear medicine department from multiple gamma camera views. Data are collected by rotating the patient in front of the camera. A simple fast algorithm is implemented, known as the convolution technique: first the projection data are Fourier transformed and then an original filter designed for optimizing resolution and noise suppression is applied; finally the inverse transform of the latter operation is back-projected. This program, which can also take into account the attenuation for single photon events, was executed with good results on phantoms and patients. We think that it can be easily implemented for specific diagnostic problems.
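    The convolution (filtered back-projection) technique described above can be sketched on a toy case: a centered disk has an analytic, angle-independent parallel-beam projection, which is ramp-filtered in the Fourier domain and back-projected over all angles. This is a generic FBP sketch, not the authors' optimized filter or attenuation correction:

    ```python
    import numpy as np

    def fbp_disk_demo(n=128, radius=0.4, n_angles=90):
        # Analytic parallel-beam projection of a centered unit-density disk
        # (identical for every view angle): p(s) = 2*sqrt(r^2 - s^2).
        s = np.linspace(-1.0, 1.0, n)
        proj = 2.0 * np.sqrt(np.maximum(radius**2 - s**2, 0.0))

        # Step 1: ramp filter in the Fourier domain.
        freqs = np.fft.fftfreq(n, d=s[1] - s[0])
        filtered = np.real(np.fft.ifft(np.fft.fft(proj) * np.abs(freqs)))

        # Step 2: back-project the filtered projection over all angles.
        xs, ys = np.meshgrid(s, s)
        recon = np.zeros((n, n))
        for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
            t = xs * np.cos(theta) + ys * np.sin(theta)
            recon += np.interp(t, s, filtered)
        return recon * np.pi / n_angles
    ```

    The reconstruction is high inside the disk and near zero outside, which is the behaviour the filtering step is designed to preserve while suppressing the 1/r blur of plain back-projection.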

  5. Micro-vision servo control of a multi-axis alignment system for optical fiber assembly

    International Nuclear Information System (INIS)

    Chen, Weihai; Yu, Fei; Qu, Jianliang; Chen, Wenjie; Zhang, Jianbin

    2017-01-01

    This paper describes a novel optical fiber assembly system featuring a multi-axis alignment function based on micro-vision feedback control. It consists of an active parallel alignment mechanism, a passive compensation mechanism, a micro-gripper and a micro-vision servo control system. The active parallel alignment part is a parallelogram-based design with remote-center-of-motion (RCM) function to achieve precise rotation without fatal lateral motion. The passive mechanism, with five degrees of freedom (5-DOF), is used to implement passive compensation for multi-axis errors. A specially designed 1-DOF micro-gripper mounted onto the active parallel alignment platform is adopted to grasp and rotate the optical fiber. A micro-vision system equipped with two charge-coupled device (CCD) cameras is introduced to observe the small field of view and obtain multi-axis errors for servo feedback control. The two CCD cameras are installed in an orthogonal arrangement—thus the errors can be easily measured via the captured images. Meanwhile, a series of tracking and measurement algorithms based on specific features of the target objects are developed. Details of the force and displacement sensor information acquisition in the assembly experiment are also provided. An experiment demonstrates the validity of the proposed visual algorithm by achieving the task of eliminating errors and inserting an optical fiber to the U-groove accurately. (paper)

  6. Mobile phone camera benchmarking: combination of camera speed and image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed; for example, ISO 15781 defines several measurements to evaluate various camera system delays. However, the speed or rapidity metrics of the mobile phone's camera system have not been combined with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. Firstly, the most important image quality metrics are collected from standards and papers. Secondly, the speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are also identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are done through the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are made. The result of this work gives detailed benchmarking results of mobile phone camera systems on the market. The paper also proposes combined benchmarking metrics, which include both quality and speed parameters.
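    The abstract does not give the combined metric's formula. One plausible shape, shown purely as a hypothetical sketch, normalizes each raw metric onto [0, 1] against stated worst/best anchors and takes a weighted average of the quality and speed groups; the weight and anchors below are invented:

    ```python
    def normalize(value, worst, best):
        """Map a raw metric onto [0, 1], where 1 is best (works for
        metrics where lower is better by swapping worst/best)."""
        span = best - worst
        return min(max((value - worst) / span, 0.0), 1.0)

    def combined_score(quality_metrics, speed_metrics, w_quality=0.6):
        """Weighted combination of normalized quality and speed scores.

        Each argument is a list of (value, worst, best) tuples.
        """
        q = sum(normalize(*m) for m in quality_metrics) / len(quality_metrics)
        s = sum(normalize(*m) for m in speed_metrics) / len(speed_metrics)
        return w_quality * q + (1.0 - w_quality) * s
    ```

    For a lower-is-better speed metric such as shutter lag, the caller passes the slow value as `worst` and the fast value as `best`, so the same normalization applies.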

  7. Machine learning for micro-tomography

    Science.gov (United States)

    Parkinson, Dilworth Y.; Pelt, Daniël. M.; Perciano, Talita; Ushizima, Daniela; Krishnan, Harinarayan; Barnard, Harold S.; MacDowell, Alastair A.; Sethian, James

    2017-09-01

    Machine learning has revolutionized a number of fields, but many micro-tomography users have never used it for their work. The micro-tomography beamline at the Advanced Light Source (ALS), in collaboration with the Center for Applied Mathematics for Energy Research Applications (CAMERA) at Lawrence Berkeley National Laboratory, has now deployed a series of tools to automate data processing for ALS users using machine learning. This includes new reconstruction algorithms, feature extraction tools, and image classification and recommendation systems for scientific images. Some of these tools run either in automated pipelines that operate on data as it is collected or as stand-alone software. Others are deployed on computing resources at Berkeley Lab, from workstations to supercomputers, and made accessible to users through either scripting or easy-to-use graphical interfaces. This paper presents a progress report on this work.

  8. Infrared Camera Diagnostic for Heat Flux Measurements on NSTX

    International Nuclear Information System (INIS)

    D. Mastrovito; R. Maingi; H.W. Kugel; A.L. Roquemore

    2003-01-01

    An infrared imaging system has been installed on NSTX (National Spherical Torus Experiment) at the Princeton Plasma Physics Laboratory to measure the surface temperatures on the lower divertor and center stack. The imaging system is based on an Indigo Alpha 160 x 128 microbolometer camera with 12 bits/pixel, operating in the 7-13 μm range with a 30 Hz frame rate and a dynamic temperature range of 0-700 degrees C. From these data and knowledge of graphite thermal properties, the heat flux is derived with a classic one-dimensional conduction model. Preliminary results of heat flux scaling are reported
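    A classic one-dimensional conduction approach of this kind is often implemented as the Cook-Felderman semi-infinite-solid formula, which converts a surface-temperature history into heat flux. This is a generic sketch under that assumption, not necessarily the NSTX analysis, and the graphite property values used in testing are illustrative only:

    ```python
    import numpy as np

    def heat_flux_cook_felderman(t, T_surf, rho, c, k):
        """Heat flux q(t_n) from surface temperature on a semi-infinite solid.

        t       : sample times (s), t[0] at the start of the pulse
        T_surf  : surface temperature samples (K or degC, differences matter)
        rho,c,k : density, specific heat, thermal conductivity of the material
        """
        coef = 2.0 * np.sqrt(rho * c * k / np.pi)
        q = np.zeros_like(np.asarray(T_surf, dtype=float))
        for n in range(1, len(t)):
            i = np.arange(1, n + 1)
            num = T_surf[i] - T_surf[i - 1]
            den = np.sqrt(t[n] - t[i - 1]) + np.sqrt(t[n] - t[i])
            q[n] = coef * np.sum(num / den)
        return q
    ```

    For a linear surface-temperature ramp the formula gives a flux growing like the square root of time, as expected for a semi-infinite solid.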

  9. The prototype cameras for trans-Neptunian automatic occultation survey

    Science.gov (United States)

    Wang, Shiang-Yu; Ling, Hung-Hsu; Hu, Yen-Sang; Geary, John C.; Chang, Yin-Chang; Chen, Hsin-Yo; Amato, Stephen M.; Huang, Pin-Jie; Pratlong, Jerome; Szentgyorgyi, Andrew; Lehner, Matthew; Norton, Timothy; Jorden, Paul

    2016-08-01

    The Transneptunian Automated Occultation Survey (TAOS II) is a project of three robotic telescopes to detect the stellar occultation events generated by TransNeptunian Objects (TNOs). The TAOS II project aims to monitor about 10000 stars simultaneously at 20 Hz to enable a statistically significant event rate. The TAOS II camera is designed to cover the 1.7-degree-diameter field of view of the 1.3 m telescope with 10 mosaic 4.5k×2k CMOS sensors. The new CMOS sensor (CIS 113) has a back-illumination thinned structure and high sensitivity, providing performance similar to that of back-illumination thinned CCDs. Due to the requirements of high performance and high speed, the development of the new CMOS sensor is still in progress. Before the science arrays are delivered, a prototype camera has been developed to help with the commissioning of the robotic telescope system. The prototype camera uses the small-format e2v CIS 107 device but with the same dewar and similar control electronics as the TAOS II science camera. The sensors, mounted on a single Invar plate, are cooled by a cryogenic cooler to the operating temperature of about 200 K, as for the science array. The Invar plate is connected to the dewar body through a supporting ring with three G10 bipods. The control electronics consists of an analog part and a Xilinx FPGA-based digital circuit. One FPGA is needed to control and process the signal from a CMOS sensor for 20 Hz region-of-interest (ROI) readout.

  10. Development of the neutron filters for JET gamma-ray cameras

    International Nuclear Information System (INIS)

    Soare, S.; Curuia, M.; Anghel, M.; Constantin, M.; David, E.; Kiptily, V.; Prior, P.; Edlington, T.; Griph, S.; Krivchenkov, Y.; Popovichev, S.; Riccardo, V.; Syme, B; Thompson, V.; Murari, A.; Zoita, V.; Bonheure, G.; Le Guern

    2007-01-01

    The JET gamma-ray camera diagnostics have already provided valuable information on the gamma-ray imaging of fast ion evaluation in JET plasmas. The JET Gamma-Ray Cameras (GRC) upgrade project deals with the design of appropriate neutron/gamma-ray filters ('neutron attenuators'). The main design parameter was the neutron attenuation factor. The two design solutions that were finally chosen and developed to the level of scheme design consist of: a) one quasi-crescent-shaped neutron attenuator (for the horizontal camera) and b) two quasi-trapezoid-shaped neutron attenuators (for the vertical one). Various neutron-attenuating materials were considered (lithium hydride with natural isotopic composition and 6Li-enriched, light and heavy water, polyethylene). Pure light water was finally chosen as the attenuating material for the JET gamma-ray cameras. FEA methods used to evaluate the behaviour of the filter casings under the loadings (internal hydrostatic pressure, torques) have proven the stability of the structure. (authors)

  11. Study of on-machine error identification and compensation methods for micro machine tools

    International Nuclear Information System (INIS)

    Wang, Shih-Ming; Yu, Han-Jen; Lee, Chun-Yi; Chiu, Hung-Sheng

    2016-01-01

    Micro machining plays an important role in the manufacturing of miniature products which are made of various materials with complex 3D shapes and tight machining tolerances. To further improve the accuracy of a micro machining process without increasing the manufacturing cost of a micro machine tool, an effective machining error measurement method and a software-based compensation method are essential. To avoid introducing additional errors caused by re-installment of the workpiece, the measurement and compensation should be conducted on-machine. In addition, because the contour of a miniature workpiece machined with a micro machining process is very tiny, the measurement method should be non-contact. By integrating an image reconstruction method, camera pixel correction, coordinate transformation, an error identification algorithm, and a trajectory auto-correction method, a vision-based error measurement and compensation method was developed in this study that can inspect micro machining errors on-machine and automatically generate an error-corrected numerical control (NC) program for error compensation. With the use of the Canny edge detection algorithm and camera pixel calibration, the edges of the contour of a machined workpiece were identified and used to reconstruct the actual contour of the workpiece. The actual contour was then mapped to the theoretical contour to identify the actual cutting points and compute the machining errors. With the use of a moving matching window and calculation of the similarity between the actual and theoretical contours, the errors between the actual and theoretical cutting points were calculated and used to correct the NC program. With the use of the error-corrected NC program, the accuracy of a micro machining process can be effectively improved. To prove the feasibility and effectiveness of the proposed methods, micro-milling experiments on a micro machine tool were conducted, and the results
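    The error-computation step, mapping measured contour points to theoretical ones, can be illustrated with a simple nearest-neighbour distance computation. This is a minimal stand-in, not the paper's moving-matching-window similarity algorithm:

    ```python
    import numpy as np

    def contour_errors(actual, theoretical):
        """For each measured contour point, the distance to the nearest
        theoretical contour point, plus the index of that point.

        actual, theoretical: (N, 2) and (M, 2) arrays of XY coordinates.
        """
        # pairwise distance matrix between measured and theoretical points
        d = np.linalg.norm(actual[:, None, :] - theoretical[None, :, :], axis=2)
        idx = d.argmin(axis=1)
        return d[np.arange(len(actual)), idx], idx
    ```

    The resulting per-point errors are exactly what a compensation step would feed back into the corrected NC trajectory.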

  12. Microscope self-calibration based on micro laser line imaging and soft computing algorithms

    Science.gov (United States)

    Apolinar Muñoz Rodríguez, J.

    2018-06-01

    A technique to perform microscope self-calibration via a micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are constructed from the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute three-dimensional vision from the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves the accuracy of traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.
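    The building block of a Bezier approximation network is the Bezier curve itself. As a generic illustration (not the paper's network construction), a Bezier curve can be evaluated with De Casteljau's algorithm:

    ```python
    def de_casteljau(control_points, t):
        """Evaluate a Bezier curve at parameter t (0 <= t <= 1) by
        repeated linear interpolation of the control polygon.

        control_points: sequence of (x, y) pairs.
        """
        pts = [tuple(p) for p in control_points]
        while len(pts) > 1:
            pts = [
                ((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1])
                for p, q in zip(pts[:-1], pts[1:])
            ]
        return pts[0]
    ```

    In a calibration setting, the control points would be fitted (e.g. by the genetic algorithm) so that the curve maps measured laser-line positions to vision parameters.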

  13. The HST/WFC3 Quicklook Project: A User Interface to Hubble Space Telescope Wide Field Camera 3 Data

    Science.gov (United States)

    Bourque, Matthew; Bajaj, Varun; Bowers, Ariel; Dulude, Michael; Durbin, Meredith; Gosmeyer, Catherine; Gunning, Heather; Khandrika, Harish; Martlin, Catherine; Sunnquist, Ben; Viana, Alex

    2017-06-01

    The Hubble Space Telescope's Wide Field Camera 3 (WFC3) instrument, comprising two detectors, UVIS (Ultraviolet-Visible) and IR (Infrared), has been acquiring ~ 50-100 images daily since its installation in 2009. The WFC3 Quicklook project provides a means for instrument analysts to store, calibrate, monitor, and interact with these data through the various Quicklook systems: (1) a ~ 175 TB filesystem, which stores the entire WFC3 archive on disk, (2) a MySQL database, which stores image header data, (3) a Python-based automation platform, which currently executes 22 unique calibration/monitoring scripts, (4) a Python-based code library, which provides system functionality such as logging, downloading tools, database connection objects, and filesystem management, and (5) a Python/Flask-based web interface to the Quicklook system. The Quicklook project has enabled large-scale WFC3 analyses and calibrations, such as the monitoring of the health and stability of the WFC3 instrument, the measurement of ~ 20 million WFC3/UVIS Point Spread Functions (PSFs), the creation of WFC3/IR persistence calibration products, and many others.

  14. Cost effective system for monitoring of fish migration with a camera

    Science.gov (United States)

    Sečnik, Matej; Brilly, Mitja; Vidmar, Andrej

    2016-04-01

    Within the European LIFE project Ljubljanica connects (LIFE10 NAT/SI/000142) we have developed a cost-effective solution for monitoring fish migration through fish passes with an underwater camera. In the fish pass at Ambrožev trg and in the fish pass near the Fužine castle we installed a video camera called "Fishcam" to be able to monitor the migration of fish through the fish passes and the success of their reconstruction. A live stream from the fishcams installed in the fish passes is available on our project website (http://ksh.fgg.uni-lj.si/ljubljanicaconnects/ang/12_camera). The system for fish monitoring is made of two parts: a waterproof box for the computer with charger, and the camera itself. We used a highly sensitive Sony analogue camera. The advantage of this camera is its very good sensitivity in low-light conditions, so it can take good-quality pictures even at night with minimal additional lighting. For night recording we use an additional IR reflector to illuminate passing fish. The camera is connected to an 8-inch tablet PC. We decided to use a tablet PC because it is quite small and cheap, relatively fast, and has low power consumption. On the computer we use software which has advanced motion detection capabilities, so we can also detect small fish. When a fish is detected by the software, its photograph is automatically saved to the local hard drive and, for backup, also to Google Drive. The system for monitoring fish migration has turned out to work very well. From the beginning of monitoring in June 2015 to the end of the year, more than 100,000 photographs were produced. A first analysis of them has already been prepared, estimating fish species and how frequently they pass through the fish pass.
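    The motion-detection step that triggers a saved photograph can be sketched with simple frame differencing between consecutive grayscale frames. This is a minimal illustration, not the commercial software used in the project; the thresholds are invented:

    ```python
    import numpy as np

    def detect_motion(prev_frame, curr_frame, diff_thresh=25, min_changed_pixels=50):
        """Flag motion when enough pixels changed between consecutive frames.

        prev_frame, curr_frame: 2-D uint8 grayscale images of equal shape.
        Returns (motion_detected, boolean change mask).
        """
        # widen the dtype so the subtraction cannot wrap around
        diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
        changed = diff > diff_thresh
        return changed.sum() >= min_changed_pixels, changed
    ```

    A real deployment would add background averaging and debouncing so that lighting flicker does not trigger spurious saves.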

  15. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    Science.gov (United States)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

    The authors have been concentrating on developing convenient 3D measurement methods using consumer grade digital cameras, and it was concluded that consumer grade digital cameras are expected to become useful photogrammetric devices for various close range application fields. On the other hand, mobile phone cameras with 10 megapixels have appeared on the market in Japan. In these circumstances, we are faced with the epoch-making question of whether mobile phone cameras are able to take the place of consumer grade digital cameras in close range photogrammetric applications. In order to evaluate the potential of mobile phone cameras in close range photogrammetry, comparative evaluations between mobile phone cameras and consumer grade digital cameras are investigated in this paper with respect to lens distortion, reliability, stability and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer grade digital cameras were conducted indoors using a test target. Furthermore, the practicability of mobile phone cameras for close range photogrammetry was evaluated outdoors. This paper presents that mobile phone cameras have the ability to take the place of consumer grade digital cameras, and to develop the market in digital photogrammetric fields.

  16. The MicroActive project: automatic detection of disease-related molecular cell activity

    Science.gov (United States)

    Furuberg, Liv; Mielnik, Michal; Johansen, Ib-Rune; Voitel, Jörg; Gulliksen, Anja; Solli, Lars; Karlsen, Frank; Bayer, Tobias; Schönfeld, Friedhelm; Drese, Klaus; Keegan, Helen; Martin, Cara; O'Leary, John; Riegger, Lutz; Koltay, Peter

    2007-05-01

    The aim of the MicroActive project is to develop an instrument for molecular diagnostics. The instrument will first be tested for patient screening for a group of viruses causing cervical cancer. Two disposable polymer chips with reagents stored on-chip will be inserted into the instrument for each patient sample. The first chip performs sample preparation of the epithelial cervical cells while mRNA amplification and fluorescent detection takes place in the second chip. More than 10 different virus markers will be analysed in one chip. We report results on sub-functions of the amplification chip. The sample is split into smaller droplets, and the droplets move in parallel channels containing different dried reagents for the different analyses. We report experimental results on parallel droplet movement control using one external pump only, combined with hydrophobic valves. Valve burst pressures are controlled by geometry. We show droplet control using valves with burst pressures between 800 and 4500 Pa. We also monitored the re-hydration times for two necessary dried reagents. After sample insertion, uniform concentration of the reagents in the droplet was reached after respectively 60 s and 10 min. These times are acceptable for successful amplification. Finally we have shown positive amplification of HPV type 16 using dried enzymes stored in micro chambers.

  17. Movement-based interaction in camera spaces: a conceptual framework

    DEFF Research Database (Denmark)

    Eriksson, Eva; Hansen, Thomas Riisgaard; Lykke-Olesen, Andreas

    2007-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion, and to describe our three main concepts space,...

  18. A G-APD based Camera for Imaging Atmospheric Cherenkov Telescopes

    International Nuclear Information System (INIS)

    Anderhub, H.; Backes, M.; Biland, A.; Boller, A.; Braun, I.; Bretz, T.; Commichau, S.; Commichau, V.; Dorner, D.; Gendotti, A.; Grimm, O.; Gunten, H. von; Hildebrand, D.; Horisberger, U.; Koehne, J.-H.; Kraehenbuehl, T.; Kranich, D.; Lorenz, E.; Lustermann, W.; Mannheim, K.

    2011-01-01

    Imaging Atmospheric Cherenkov Telescopes (IACT) for Gamma-ray astronomy are presently using photomultiplier tubes as photo sensors. Geiger-mode avalanche photodiodes (G-APD) promise an improvement in sensitivity and, important for this application, ease of construction, operation and ruggedness. G-APDs have proven many of their features in the laboratory, but a qualified assessment of their performance in an IACT camera is best undertaken with a prototype. This paper describes the design and construction of a full-scale camera based on G-APDs realized within the FACT project (First G-APD Cherenkov Telescope).

  19. Multi-dimensional diagnostics of high power ion beams by Arrayed Pinhole Camera System

    International Nuclear Information System (INIS)

    Yasuike, K.; Miyamoto, S.; Shirai, N.; Akiba, T.; Nakai, S.; Imasaki, K.; Yamanaka, C.

    1993-01-01

    The authors developed a multi-dimensional beam diagnostics system (with spatial and time resolution), using the newly developed Arrayed Pinhole Camera (APC) for this diagnosis. The APC can obtain the spatial distribution of divergence and flux density. Two types of particle detectors are used in this study: CR-39, which records time-integrated images, and a gated micro-channel plate (MCP) with a CCD camera, which enables time-resolved diagnostics. The diagnostics systems have resolutions better than 10 mrad in divergence and 0.5 mm spatially on the objects, respectively. The time-resolving system has 10 ns time resolution. The experiments were performed on the Reiden-IV and Reiden-SHVS induction linacs. The authors obtained time-integrated divergence distributions of the Reiden-IV proton beam, and also a time-resolved image on Reiden-SHVS

  20. A micro-machined retro-reflector for improving light yield in ultra-high-resolution gamma cameras

    NARCIS (Netherlands)

    Heemskerk, J.W.T.; Korevaar, M.A.N.; Kreuger, R.; Ligtvoet, C.M.; Schotanus, P.; Beekman, F.J.

    2009-01-01

    High-resolution imaging of x-ray and gamma-ray distributions can be achieved with cameras that use charge coupled devices (CCDs) for detecting scintillation light flashes. The energy and interaction position of individual gamma photons can be determined by rapid processing of CCD images of

  1. Multi-Purpose Crew Vehicle Camera Asset Planning: Imagery Previsualization

    Science.gov (United States)

    Beaulieu, K.

    2014-01-01

    Using JSC-developed and other industry-standard off-the-shelf 3D modeling, animation, and rendering software packages, the Image Science Analysis Group (ISAG) supports Orion Project imagery planning efforts through dynamic 3D simulation and realistic previsualization of ground-, vehicle-, and air-based camera output.

  2. People counting with stereo cameras : two template-based solutions

    NARCIS (Netherlands)

    Englebienne, Gwenn; van Oosterhout, Tim; Kröse, B.J.A.

    2012-01-01

    People counting is a challenging task with many applications. We propose a method with a fixed stereo camera that is based on projecting a template onto the depth image. The method was tested on a challenging outdoor dataset with good results and runs in real time.

  3. Replacement power supply with micro-hydropower. Case micro central - Pipinta

    International Nuclear Information System (INIS)

    Gomez, Jorge I; Hincapie, Luis A; Woodcock, Edgar; Arregoces Alvaro

    2006-01-01

    This paper describes the Pipinta Micro-Hydro Electric Plant through its main components and their parameters. An evaluation of the actual power plant costs and a proposed redesign are also given, with prices referenced to the year 2004. A comparison between the price of electricity supplied by the grid and the cost of electricity from this Micro-Hydro Power Plant leads to the conclusion that projects of this type are viable in rural areas of Colombia reached by the grid.

  4. Multiple-camera tracking: UK government requirements

    Science.gov (United States)

    Hosmer, Paul

    2007-10-01

    The Imagery Library for Intelligent Detection Systems (i-LIDS) is the UK government's new standard for Video Based Detection Systems (VBDS). The standard was launched in November 2006 and evaluations against it began in July 2007. With the first four i-LIDS scenarios completed, the Home Office Scientific Development Branch (HOSDB) is looking toward the future of intelligent vision in the security surveillance market by adding a fifth scenario to the standard. The fifth i-LIDS scenario will concentrate on the development, testing and evaluation of systems for the tracking of people across multiple cameras. HOSDB and the Centre for the Protection of National Infrastructure (CPNI) identified a requirement to track targets across a network of CCTV cameras using both live and post-event imagery. The Detection and Vision Systems group at HOSDB was asked to determine the current state of the market and develop an in-depth Operational Requirement (OR) based on government end-user requirements. Using this OR, the i-LIDS team will develop a full i-LIDS scenario to aid the machine vision community in its development of multi-camera tracking systems. By defining a requirement for multi-camera tracking and building this into the i-LIDS standard, the UK government will provide a widely available tool that developers can use to help them turn theory and conceptual demonstrators into front-line applications. This paper will briefly describe the i-LIDS project and then detail the work conducted in building the new tracking aspect of the standard.

  5. Two-dimensional diced scintillator array for innovative, fine-resolution gamma camera

    International Nuclear Information System (INIS)

    Fujita, T.; Kataoka, J.; Nishiyama, T.; Ohsuka, S.; Nakamura, S.; Yamamoto, S.

    2014-01-01

    We are developing a technique to fabricate fine-spatial-resolution (FWHM < 0.5 mm) and cost-effective photon counting detectors, by using silicon photomultipliers (SiPMs) coupled with a finely pixelated scintillator plate. Unlike traditional X-ray imagers that use a micro-columnar CsI(Tl) plate, we can pixelate various scintillation crystal plates more than 1 mm thick, and easily develop large-area, fine-pitch scintillator arrays with high precision. Coupling a fine-pitch scintillator array with a SiPM array results in a compact, fast-response detector that is ideal for X-ray, gamma-ray, and charged particle detection as used in autoradiography, gamma cameras, and photon counting CTs. As the first step, we fabricated a 2-D, cerium-doped Gd3Al2Ga3O12 (Ce:GAGG) scintillator array of 0.25 mm pitch, by using a dicing saw to cut micro-grooves 50 μm wide into a 1.0 mm thick Ce:GAGG plate. The scintillator plate is optically coupled with a 3.0×3.0 mm pixel 4×4 SiPM array and read out via a resistive charge-division network. Even when using this simple system as a gamma camera, we obtained excellent spatial resolution of 0.48 mm (FWHM) for 122 keV gamma-rays. We will present our plans to further improve the signal-to-noise ratio in the image, and also discuss a variety of possible applications in the near future
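    A resistive charge-division readout reduces the SiPM array to a few corner signals, from which the hit position is estimated by charge ratios (classic Anger-style logic). The sketch below shows one common four-output mapping; the exact formula depends on the network wiring and is not specified in the abstract:

    ```python
    def charge_division_position(a, b, c, d):
        """Normalized (x, y) hit position from four corner charges of a
        resistive charge-division network; outputs lie in [-1, 1].

        The (a, b, c, d)-to-axis mapping here is one common convention,
        assumed for illustration.
        """
        total = a + b + c + d
        x = ((b + d) - (a + c)) / total
        y = ((a + b) - (c + d)) / total
        return x, y
    ```

    Equal charges on all four corners place the event at the center, while charge concentrated on one side pushes the estimate toward that edge.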

  6. An ordinary camera in an extraordinary location: Outreach with the Mars Webcam

    Science.gov (United States)

    Ormston, T.; Denis, M.; Scuka, D.; Griebel, H.

    2011-09-01

    The European Space Agency's Mars Express mission was launched in 2003 and was Europe's first mission to Mars. On-board was a small camera designed to provide ‘visual telemetry’ of the separation of the Beagle-2 lander. After achieving its goal it was shut down while the primary science mission of Mars Express got underway. In 2007 this camera was reactivated by the flight control team of Mars Express for the purpose of providing public education and outreach—turning it into the ‘Mars Webcam’. The camera is a small, 640×480 pixel colour CMOS camera with a wide-angle 30°×40° field of view. This makes it very similar in almost every way to the average home PC webcam. The major difference is that this webcam is not in an average location but is instead in orbit around Mars. On a strict basis of non-interference with the primary science activities, the camera is turned on to provide unique wide-angle views of the planet below. A highly automated process ensures that the observations are scheduled on the spacecraft and then uploaded to the internet as rapidly as possible. There is no intermediate stage, so that visitors to the Mars Webcam blog serve as ‘citizen scientists’. Full raw datasets and processing instructions are provided along with a mechanism to allow visitors to comment on the blog. Members of the public are encouraged to use this in either a personal or an educational context and work with the images. We then take their excellent work and showcase it back on the blog. We even apply techniques developed by them to improve the data and webcam experience for others. The accessibility and simplicity of the images also makes the data ideal for educational use, especially as educational projects can then be showcased on the site as inspiration for others. The oft-neglected target audience of space enthusiasts is also important as this allows them to participate as part of an interplanetary instrument team. This paper will cover the history of the

  7. A Compton camera for spectroscopic imaging from 100 keV to 1 MeV

    International Nuclear Information System (INIS)

    Earnhart, J.R.D.

    1998-01-01

    A review of spectroscopic imaging issues, applications, and technology is presented. Compton cameras based on solid state semiconductor detectors stand out as the best system for the nondestructive assay of special nuclear materials. A camera for this application has been designed based on an efficient specific-purpose Monte Carlo code developed for this project. Preliminary experiments have been performed which demonstrate the validity of the Compton camera concept and the accuracy of the code. Based on these results, a portable prototype system is in development. Proposed future work is addressed.

  8. The use of a Micromegas as a detector for gamma camera

    International Nuclear Information System (INIS)

    Barbouchi, Asma; Trabelsi, Adel

    2008-01-01

    The Micromegas (MICRO-MEsh GASeous Structure) is a gas detector developed by I. Giomataris and G. Charpak for applications in experimental particle physics. The versatility of this detector, however, allows it to be used in several other areas, such as medical imaging. The detector has an X-Y readout capability with a resolution of less than 100 μm, an energy resolution down to 14% in the 1-10 keV energy range, and an overall efficiency of 70%. Monte Carlo simulation is widely used in nuclear medicine, as it allows the behaviour of a system to be predicted. GATE (Geant4 Application for Tomographic Emission) is a Monte Carlo simulation platform dedicated to PET/SPECT (Positron Emission Tomography / Single Photon Emission Computed Tomography) applications. Our goal is to model a gamma camera that uses a Micromegas as its detector and to compare its performance (energy resolution, point spread function, ...) with that of a scintillator-based gamma camera by using GATE.
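The 14% figure quoted above is a fractional energy resolution, i.e. the photopeak FWHM divided by the peak energy; for a Gaussian photopeak, FWHM ≈ 2.355 σ. A small sketch of that relation (the σ and energy values below are illustrative, not measured values from the paper):

```python
import math

def energy_resolution(sigma_keV, energy_keV):
    """Fractional energy resolution FWHM/E for a Gaussian photopeak."""
    fwhm = 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma_keV  # ~2.355 * sigma
    return fwhm / energy_keV

# e.g. a peak at 5.95 keV fitted with sigma ~0.354 keV gives ~14% resolution.
```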

  9. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, Ul; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column, one above the other.

  10. MicroSCADA project documentation database

    OpenAIRE

    Kolam, Karolina

    2014-01-01

    This engineering thesis was commissioned by ABB Power Systems, Network Management. Its purpose was to create a database for documenting information about MicroSCADA projects, together with a suitable tool for generating reports and writing new data to the database. Before this thesis, all information was stored as separate text documents. With a database, all the information could be gathered in one place and archived for a longer period. It simplified...

  11. An innovative silicon photomultiplier digitizing camera for gamma-ray astronomy

    Energy Technology Data Exchange (ETDEWEB)

    Heller, M. [DPNC-Universite de Geneve, Geneva (Switzerland); Schioppa, E. Jr; Porcelli, A.; Pujadas, I.T.; Della Volpe, D.; Montaruli, T.; Cadoux, F.; Favre, Y.; Christov, A.; Rameez, M.; Miranda, L.D.M. [DPNC-Universite de Geneve, Geneva (Switzerland); Zietara, K.; Idzkowski, B.; Jamrozy, M.; Ostrowski, M.; Stawarz, L.; Zagdanski, A. [Jagellonian University, Astronomical Observatory, Krakow (Poland); Aguilar, J.A. [DPNC-Universite de Geneve, Geneva (Switzerland); Universite Libre Bruxelles, Faculte des Sciences, Brussels (Belgium); Prandini, E.; Lyard, E.; Neronov, A.; Walter, R. [Universite de Geneve, Department of Astronomy, Geneva (Switzerland); Rajda, P.; Bilnik, W.; Kasperek, J.; Lalik, K.; Wiecek, M. [AGH University of Science and Technology, Krakow (Poland); Blocki, J.; Mach, E.; Michalowski, J.; Niemiec, J.; Skowron, K.; Stodulski, M. [Instytut Fizyki Jadrowej im. H. Niewodniczanskiego Polskiej Akademii Nauk, Krakow (Poland); Bogacz, L. [Jagiellonian University, Department of Information Technologies, Krakow (Poland); Borkowski, J.; Frankowski, A.; Janiak, M.; Moderski, R. [Polish Academy of Science, Nicolaus Copernicus Astronomical Center, Warsaw (Poland); Bulik, T.; Grudzinska, M. [University of Warsaw, Astronomical Observatory, Warsaw (Poland); Mandat, D.; Pech, M.; Schovanek, P. [Institute of Physics of the Czech Academy of Sciences, Prague (Czech Republic); Marszalek, A.; Stodulska, M. [Instytut Fizyki Jadrowej im. H. Niewodniczanskiego Polskiej Akademii Nauk, Krakow (Poland); Jagellonian University, Astronomical Observatory, Krakow (Poland); Pasko, P.; Seweryn, K. [Centrum Badan Kosmicznych Polskiej Akademii Nauk, Warsaw (Poland); Sliusar, V. [Universite de Geneve, Department of Astronomy, Geneva (Switzerland); Taras Shevchenko National University of Kyiv, Astronomical Observatory, Kyiv (Ukraine)

    2017-01-15

    The single-mirror small-size telescope (SST-1M) is one of the three proposed designs for the small-size telescopes (SSTs) of the Cherenkov Telescope Array (CTA) project. The SST-1M will be equipped with a 4 m-diameter segmented reflector dish and an innovative fully digital camera based on silicon photo-multipliers. Since the SST sub-array will consist of up to 70 telescopes, the challenge is not only to build telescopes with excellent performance, but also to design them so that their components can be commissioned, assembled and tested by industry. In this paper we review the basic steps that led to the design concepts for the SST-1M camera and the ongoing realization of the first prototype, with focus on the innovative solutions adopted for the photodetector plane and the readout and trigger parts of the camera. In addition, we report on results of laboratory measurements on real scale elements that validate the camera design and show that it is capable of matching the CTA requirements of operating up to high moonlight background conditions. (orig.)

  12. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    International Nuclear Information System (INIS)

    Anderson, Robert J.

    2014-01-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. Over the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
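The per-camera motion analytics described above reduce, in their simplest form, to thresholded differencing against a background model; target identification, hand-off and 3D display all build on masks of this kind. An illustrative sketch (not the project's actual algorithm):

```python
def motion_mask(frame, background, threshold):
    """Flag pixels whose absolute difference from the background model
    exceeds a threshold -- the simplest per-stream motion detector."""
    return [[abs(p - b) > threshold for p, b in zip(row_f, row_b)]
            for row_f, row_b in zip(frame, background)]
```

A real system would maintain an adaptive background model and group flagged pixels into target blobs before raising any alarm.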

  15. High accuracy and precision micro injection moulding of thermoplastic elastomers micro ring production

    DEFF Research Database (Denmark)

    Calaon, Matteo; Tosello, Guido; Elsborg, René

    2016-01-01

    The mass-replication nature of the process calls for fast monitoring of process parameters and product geometrical characteristics. In this direction, the present study addresses the possibility of developing a micro manufacturing platform for micro assembly injection moulding with real-time process/product monitoring and metrology. The study represents a new concept, yet to be developed, with great potential for high-precision mass manufacturing of highly functional 3D multi-material (i.e. including metal/soft polymer) micro components. The activities related to the HINMICO project objectives prove the importance

  16. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Toshiba long ago began manufacturing black-and-white radiation-resistant camera tubes employing non-browning face-plate glass for ITV cameras used in nuclear power plants. Now, in response to increasing demand in the nuclear power field, the company is developing radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. Presented herein are the results of experiments on the characteristics of materials for single color-camera tubes and the prospects for commercialization of the tubes. (author)

  17. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
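The inter-camera quaternion analyzed above is the relative rotation between the two star camera heads, obtained by composing one head's attitude with the conjugate of the other's. A minimal sketch of that algebra (scalar-first convention; illustrative, not the GRACE Level-1B processing):

```python
def quat_conj(q):
    """Conjugate; for a unit quaternion this is also its inverse."""
    w, x, y, z = q
    return (w, -x, -y, -z)

def quat_mul(a, b):
    """Hamilton product of two scalar-first quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def inter_camera(q_a, q_b):
    """Relative attitude of camera head B with respect to head A."""
    return quat_mul(quat_conj(q_a), q_b)
```

For rigidly mounted heads this relative quaternion should be constant, so its variations over an orbit expose the attitude noise and thermally driven biases discussed in the paper.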

  18. Advanced Exploration Technologies: Micro and Nano Technologies Enabling Space Missions in the 21st Century

    Science.gov (United States)

    Krabach, Timothy

    1998-01-01

    Some of the many new and advanced exploration technologies which will enable space missions in the 21st century, and specifically the Manned Mars Mission, are explored in this presentation. Among these are the system-on-a-chip, the Computed-Tomography Imaging Spectrometer, the digital camera on a chip, and other Micro Electro Mechanical Systems (MEMS) technology for space. Examples of these MEMS are the silicon micromachined microgyroscope, a subliming solid micro-thruster, a micro-ion thruster, a silicon seismometer, a dewpoint microhygrometer, a micro laser doppler anemometer, and tunable diode laser (TDL) sensors. This advanced technology insertion is critical for NASA to decrease mass, volume, power and mission costs, and to increase functionality, science potential and robustness.

  19. Use of camera drive in stereoscopic display of learning contents of introductory physics

    Science.gov (United States)

    Matsuura, Shu

    2011-03-01

    Simple 3D physics simulations with stereoscopic display were created as part of introductory physics e-Learning. First, the cameras viewing the 3D world were made controllable by the user, enabling observation of the system and the motions of objects from any position in the 3D world. Second, cameras were made attachable to one of the moving objects in the simulation so as to observe the relative motion of the other objects. With this option, it was found that users perceive velocity and acceleration more sensibly on a stereoscopic display than on a non-stereoscopic 3D display. Simulations were made using Adobe Flash ActionScript, and the Papervision3D library was used to render the 3D models in the Flash web pages. To display the stereogram, two viewports from virtual cameras were displayed in parallel in the same web page. For observation of the stereogram, the images of the two viewports were superimposed using a 3D stereogram projection box (T&TS CO., LTD.) and projected on an 80-inch screen. The virtual cameras were controlled by keyboard and also by Nintendo Wii remote controller buttons. In conclusion, stereoscopic display offers learners more opportunities to play with the simulated models and to better perceive the characteristics of motion.

  20. [A Method for Selecting Self-Adoptive Chromaticity of the Projected Markers].

    Science.gov (United States)

    Zhao, Shou-bo; Zhang, Fu-min; Qu, Xing-hua; Zheng, Shi-wei; Chen, Zhe

    2015-04-01

    The authors designed a self-adaptive projection system composed of a color camera, a projector and a PC. In detail, a digital micro-mirror device (DMD) serving as the projector's spatial light modulator was introduced into the optical path to modulate the illuminant spectrum based on red, green and blue light emitting diodes (LEDs). However, the color visibility of the active markers is affected by the screen, which has an unknown reflective spectrum. Here the active markers are a projected spot array, and the chromaticity feature of the markers is sometimes submerged in a spectrally similar screen. In order to enhance the color visibility of the active markers relative to the screen, a method for selecting the self-adaptive chromaticity of the projected markers in 3D scanning metrology is described. A color camera with 3 channels limits the accuracy of device characterization. To achieve interconversion between device-independent and device-dependent color spaces, a high-dimensional linear model of the reflective spectrum was built. Prior training samples provide additional constraints to yield a high-dimensional linear model with more than three degrees of freedom. Meanwhile, the spectral power distribution of the ambient light was estimated. Subsequently, the markers' chromaticity in CIE color space was selected via the maximization principle of Euclidean distance, and the RGB setting values were then easily estimated via the inverse transform. Finally, we implemented a typical experiment to show the performance of the proposed approach. A 24-patch Munsell Color Checker was used as the projection screen. The color difference in chromaticity coordinates between the active marker and the color patch was used to evaluate the color visibility of the active markers relative to the screen. A comparison between the self-adaptive projection system and a traditional diode-laser light projector is listed and discussed to highlight the advantages of the proposed method.
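The selection step — choosing the marker chromaticity that maximizes the Euclidean distance from the screen's chromaticity — can be sketched as follows (function names and coordinates are illustrative; a real system would measure the screen chromaticity and search a gamut of achievable marker chromaticities in a CIE chromaticity plane):

```python
import math

def select_marker_chromaticity(screen_xy, candidates):
    """Pick the candidate chromaticity farthest (Euclidean distance)
    from the screen's chromaticity, maximizing color visibility."""
    return max(candidates, key=lambda c: math.dist(c, screen_xy))
```

Against a greenish-gray screen near (0.3, 0.3), a strongly saturated candidate wins over near-neutral ones.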

  1. New light field camera based on physical based rendering tracing

    Science.gov (United States)

    Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung

    2014-03-01

    Even though light field technology was first invented more than 50 years ago, it did not gain popularity due to the limitations imposed by the computation technology of the time. With the rapid advancement of computer technology over the last decade, that limitation has been lifted and light field technology has quickly returned to the spotlight of the research stage. In this paper, PBRT (Physical Based Rendering Tracing) was introduced to overcome the limitations of using a traditional optical simulation approach to study light field camera technology. More specifically, the traditional optical simulation approach can only present light energy distribution and typically lacks the capability to present pictures of realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation in creating realistic photos. Furthermore, we integrated optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes. In other words, our approach provided us with a way to establish a link between the virtual scene and the real measurement results. Several images developed based on the above-mentioned approaches were analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It will be shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensors. Detailed operational constraints, performance metrics, computation resources needed, etc. associated with this newly developed light field camera technique are presented in detail.
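The refocus-after-capture property that makes 4D light field data useful can be illustrated with shift-and-add integration in one spatial and one angular dimension (a toy sketch under simplified integer-shift assumptions, not the paper's PBRT pipeline):

```python
def refocus_1d(light_field, slope):
    """Shift-and-add synthetic refocusing on a 1-D light field.
    light_field[u][x] holds samples for angular index u; shifting each
    view by slope*u before averaging focuses on the depth whose
    parallax matches that slope."""
    n_u = len(light_field)
    n_x = len(light_field[0])
    out = []
    for x in range(n_x):
        acc = 0.0
        for u, view in enumerate(light_field):
            acc += view[(x + slope * u) % n_x]  # integer slope, wrap at edges
        out.append(acc / n_u)
    return out
```

A scene point whose image shifts by one pixel per view refocuses sharply at slope -1, while other slopes smear its energy across neighbors.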

  2. Stability and Behaviors of Methane/Propane and Hydrogen Micro Flames

    Science.gov (United States)

    Yoshimoto, Takamitsu; Kinoshita, Koichiro; Kitamura, Hideki; Tanigawa, Ryoichi

    The flame stability limits essentially define the fundamental operation of the combustion system. Recently, the micro diffusion flame has attracted attention. The critical conditions of the flame stability limit depend strongly on nozzle diameter, fuel species, and so on. The micro diffusion flames of methane/propane and hydrogen are formed using micro-scale nozzles with inner diameters of less than 1 mm. The configurations and behaviors of the flame are observed directly and visualized by a high-speed video camera. Criteria for the stability limits are proposed for the micro diffusion flame. The objective of the present study is to gain further understanding of lifting/blow-off for the micro diffusion flame. The results obtained are as follows. (1) The behaviors of the flames are classified into several regions for each diffusion flame. (2) The methane/propane micro diffusion flame cannot be sustained when the nozzle diameter is less than 0.14 mm. (3) The diffusion flame cannot be sustained below a critical fuel flow rate. (4) The minimum flame that can be formed depends not on the average jet velocity but on the fuel flow rate. (5) The micro flame is laminar; the flame length is determined by the fuel flow rate.

  3. An open-access platform for camera-trapping data

    Directory of Open Access Journals (Sweden)

    Mario César Lavariega

    2018-02-01

    Full Text Available In southern Mexico, local communities have been playing important roles in the design and collection of wildlife data through camera-trapping in community-based monitoring of biodiversity projects. However, the methods used to store the data have limited their use in decision-making and research. Thus, we present the Platform for Community-based Monitoring of Biodiversity (PCMB), a repository which allows storage, visualization, and downloading of photographs captured by community-based monitoring of biodiversity projects in protected areas of southern Mexico. The platform was developed using agile software development with extensive interaction between computer scientists and biologists. System development included data gathering, design, build, database and attribute creation, and quality control. The PCMB currently contains 28,180 images of 6478 animals (69.4% mammals and 30.3% birds). Of the 32 species of mammals recorded in 18 protected areas since 2012, approximately a quarter of all photographs were of white-tailed deer (Odocoileus virginianus). Platforms permitting access to camera-trapping data are a valuable step in opening access to biodiversity data; the PCMB is a practical new tool for wildlife management and research with data generated through local participation. Thus, this work encourages research on the data generated through community-based monitoring of biodiversity projects in protected areas, to provide an important information infrastructure for effective management and conservation of wildlife.

  4. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position-sensitive radiation detector, the novel system can produce images superior to those of conventional cameras. The optimal thicknesses and positions of the collimators are derived mathematically. (U.K.)

  5. Picosecond camera

    International Nuclear Information System (INIS)

    Decroisette, Michel

    A Kerr cell activated by infrared pulses of a mode-locked Nd glass laser acts as an ultra-fast periodic shutter with an opening time of a few ps. Associated with an S.T.L. camera, it constitutes a picosecond camera allowing us to study very fast effects [fr]

  6. Use of airborne laser scanner and digital camera in the design of photovoltaic power plants

    Directory of Open Access Journals (Sweden)

    Nicola Santomauro

    2012-04-01

    Full Text Available National legislation, in pursuit of the EEC directives on energy, has since 2007 incentivized the development of renewable energy and, consequently, the rise of the so-called green economy, in which Geocart decided to invest by designing micro-generation photovoltaic plants with an installed power of less than 1 MW. Of particular relevance in the design phase were a laser scanner and a digital camera integrated into the airborne MAPPING platform, used to survey the sites identified as suitable for the installation of photovoltaic plants. Using airborne laser scanner and digital camera in the design of photovoltaic power plants: The design of ground-mounted photovoltaic power plants requires a deep knowledge of the territory concerned, especially if the area of interest is extensive and difficult to survey. This article describes the experience gained by Geocart in the design of 4-MW micro-generation photovoltaic solar power plants, developed also by means of an airborne laser scanner and digital camera for the aerial survey of large-scale areas within the Matera and Oppido Lucano municipalities in Basilicata.

  7. Reducing the Variance of Intrinsic Camera Calibration Results in the ROS Camera_Calibration Package

    Science.gov (United States)

    Chiou, Geoffrey Nelson

    The intrinsic calibration of a camera is the process in which the internal optical and geometric characteristics of the camera are determined. If accurate intrinsic parameters of a camera are known, the ray in 3D space that every point in the image lies on can be determined. Pairing with another camera allows the position of the points in the image to be calculated by intersection of the rays. Accurate intrinsics also allow the position and orientation of a camera relative to some world coordinate system to be calculated. These two reasons for having accurate intrinsic calibration for a camera are especially important in the field of industrial robotics, where 3D cameras are frequently mounted on the ends of manipulators. In the ROS (Robot Operating System) ecosystem, the camera_calibration package is the default standard for intrinsic camera calibration. Several researchers from the Industrial Robotics & Automation division at Southwest Research Institute have noted that this package results in large variances in the intrinsic parameters of the camera when calibrating across multiple attempts. There are also open issues on this matter in their public repository that have not been addressed by the developers. In this thesis, we confirm that the camera_calibration package does indeed return different results across multiple attempts, test several possible hypotheses as to why, identify the reason, and provide a simple solution to fix the cause of the issue.
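The two uses of intrinsics described above — mapping an image point to the 3D ray it lies on, and projecting 3D points to pixels — can be sketched with an ideal, distortion-free pinhole model (parameter values illustrative):

```python
def project(fx, fy, cx, cy, point):
    """Project a 3-D point in the camera frame through an ideal
    (distortion-free) pinhole model with the given intrinsics."""
    X, Y, Z = point
    return (fx * X / Z + cx, fy * Y / Z + cy)

def backproject(fx, fy, cx, cy, u, v):
    """Direction of the 3-D ray a pixel lies on (normalized so Z = 1)."""
    return ((u - cx) / fx, (v - cy) / fy, 1.0)
```

Intersecting the back-projected rays of two calibrated cameras recovers the 3D point, which is why variance in the intrinsic parameters propagates directly into 3D reconstruction error.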

  8. Cameras in mobile phones

    Science.gov (United States)

    Nummela, Ville; Viinikanoja, Jarkko; Alakarhu, Juha

    2006-04-01

    One of the fastest growing segments of the consumer market today is camera phones. During the past few years total volume has been growing rapidly, and today millions of mobile phones with cameras are sold. At the same time the resolution and functionality of the cameras has been growing from CIF towards DSC level. From the camera point of view the mobile world is an extremely challenging field. Cameras should have good image quality but in a small size. They also need to be reliable and their construction should be suitable for mass manufacturing. All components of the imaging chain should be well optimized in this environment. Image quality and usability are the most important parameters to the user. The current trend of adding more megapixels to cameras while using smaller pixels affects both. On the other hand, reliability and miniaturization are key drivers for product development, as is cost. In an optimized solution all parameters are in balance, but the process of finding the right trade-offs is not an easy task. In this paper trade-offs related to optics and their effects on the image quality and usability of cameras are discussed. Key development areas from the mobile phone camera point of view are also listed.

  9. Acquisition of Bidirectional Reflectance Factor Dataset Using a Micro Unmanned Aerial Vehicle and a Consumer Camera

    Directory of Open Access Journals (Sweden)

    Jouni I. Peltoniemi

    2010-03-01

    Full Text Available This paper describes a method for retrieving the bidirectional reflectance factor (BRF of land-surface areas, using a small consumer camera on board an unmanned aerial vehicle (UAV and introducing an advanced calibration routine. Images with varying view directions were taken of snow cover using the UAV. The vignetting effect was corrected from the images, and reflectance factor images were calculated using a calibrated white target as a reference. After spatial registration of the images using a corresponding point method, the target surface was divided into a grid, and a BRF was generated for each grid element. Lastly a model was fitted to the BRF dataset for data interpretation. The retrieved BRF were compared to parallel ground measurements. Comparison showed similar BRF and reflectance factor characteristics, which suggests that accurate measurements can be taken with cheap consumer cameras, if enough attention is paid to calibration of the images.
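The reflectance-factor calculation described above — ratioing the vignetting-corrected target signal against that of the calibrated white reference — can be sketched as follows (names and values are illustrative, not the paper's exact processing chain):

```python
def reflectance_factor(dn_target, dn_reference, ref_reflectance,
                       vignette_gain=1.0):
    """Empirical-line-style reflectance factor: ratio of the
    (vignetting-corrected) target signal to the signal of a calibrated
    white reference of known reflectance."""
    corrected = dn_target * vignette_gain  # undo radial falloff at this pixel
    return corrected / dn_reference * ref_reflectance
```

The per-pixel `vignette_gain` would come from a flat-field characterization of the camera, mirroring the vignetting correction applied to the UAV images before BRF computation.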

  10. A pilot project combining multispectral proximal sensors and digital cameras for monitoring tropical pastures

    Science.gov (United States)

    Handcock, Rebecca N.; Gobbett, D. L.; González, Luciano A.; Bishop-Hurley, Greg J.; McGavin, Sharon L.

    2016-08-01

    Timely and accurate monitoring of pasture biomass and ground cover is necessary in livestock production systems to ensure productive and sustainable management. Interest in the use of proximal sensors for monitoring pasture status in grazing systems has increased, since data can be returned in near real time. Proximal sensors have the potential for deployment on large properties where remote sensing may not be suitable due to issues such as spatial scale or cloud cover. There are unresolved challenges in gathering reliable sensor data and in calibrating raw sensor data to values such as pasture biomass or vegetation ground cover, which allow meaningful interpretation of sensor data by livestock producers. Our goal was to assess whether a combination of proximal sensors could be reliably deployed to monitor tropical pasture status in an operational beef production system, as a precursor to designing a full sensor deployment. We use this pilot project to (1) illustrate practical issues around sensor deployment, (2) develop the methods necessary for the quality control of the sensor data, and (3) assess the strength of the relationships between vegetation indices derived from the proximal sensors and field observations across the wet and dry seasons. Proximal sensors were deployed at two sites in a tropical pasture on a beef production property near Townsville, Australia. Each site was monitored by a Skye SKR-four-band multispectral sensor (every 1 min), a digital camera (every 30 min), and a soil moisture sensor (every 1 min), each of which were operated over 18 months. Raw data from each sensor was processed to calculate multispectral vegetation indices. The data capture from the digital cameras was more reliable than the multispectral sensors, which had up to 67 % of data discarded after data cleaning and quality control for technical issues related to the sensor design, as well as environmental issues such as water incursion and insect infestations. 
We recommend

  11. Can micro-volunteering help in Africa?

    CSIR Research Space (South Africa)

    Butgereit, L

    2013-05-01

    Full Text Available is convenient to the micro-volunteer, and in small pieces of time (bitesized). This paper looks at a micro-volunteering project where participants can volunteer for five to ten minutes at a time using a smart phone and assist pupils with their mathematics....

  12. Improved depth estimation with the light field camera

    Science.gov (United States)

    Wang, Huachun; Sang, Xinzhu; Chen, Duo; Guo, Nan; Wang, Peng; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    Light-field cameras are used in consumer and industrial applications. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth estimation from both defocus and correspondence is available in a single capture. Lytro Inc. also provides a depth estimate from a single-shot capture with its light-field cameras, such as the Lytro Illum; since this Lytro depth map contains much correct depth information, it can be used for a higher-quality estimate. In this paper, we present a novel, simple and principled algorithm that computes dense depth estimates by combining the defocus, correspondence and Lytro depth cues. We analyze 2D epipolar images (EPIs) to obtain the defocus and correspondence depth maps: defocus depth is obtained by computing the spatial gradient after angular integration, and correspondence depth by computing the angular variance from the EPIs. The Lytro depth can be extracted from the Lytro Illum with its software. We then show how to combine the three cues into a high-quality depth map. Our method for depth estimation is suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction, as well as light-field display.
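    The two EPI cues described in this record can be sketched numerically. The following minimal numpy sketch is an illustration under assumed conventions (linear shearing, a simple additive cost), not the authors' implementation: it shears a 2-D EPI over candidate depths, scores defocus as the spatial gradient after angular integration and correspondence as the angular variance, then picks the best shear per pixel.

```python
import numpy as np

def depth_from_epi(epi, shears):
    """Per-pixel depth (shear) labels from a 2-D EPI of shape (angles, pixels).

    For each candidate shear, rows are resampled so that a point at the
    matching depth aligns vertically; correspondence favours low angular
    variance, defocus favours strong spatial gradients of the angular mean.
    """
    n_ang, n_pix = epi.shape
    centre = n_ang // 2
    xs = np.arange(n_pix)
    cost = np.empty((len(shears), n_pix))
    for i, s in enumerate(shears):
        # shear the EPI: shift each angular row proportionally to its
        # distance from the central view
        sheared = np.stack([np.interp(xs + s * (a - centre), xs, epi[a])
                            for a in range(n_ang)])
        refocused = sheared.mean(axis=0)           # angular integration
        defocus = np.abs(np.gradient(refocused))   # spatial gradient cue
        corresp = sheared.var(axis=0)              # angular variance cue
        cost[i] = corresp - defocus                # low cost: consistent and sharp
    return np.asarray(shears)[np.argmin(cost, axis=0)]
```

    On a synthetic EPI of a single point the recovered shear matches the simulated one; a full pipeline would add regularization and confidence-weighted fusion with the Lytro depth map.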

  13. 4D ANIMATION RECONSTRUCTION FROM MULTI-CAMERA COORDINATES TRANSFORMATION

    Directory of Open Access Journals (Sweden)

    J. P. Jhan

    2016-06-01

    Full Text Available Reservoir dredging is important for extending the life of a reservoir. The most effective and cost-reducing approach is to construct a tunnel to desilt the bottom sediment. The conventional technique is to build a cofferdam to hold back the water, construct the tunnel intake inside it, and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will instead install an Elephant-trunk Steel Pipe (ETSP) in the water to connect the desilting tunnel without building a cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e. launching, dragging, water injection, and sinking. To increase construction safety, a photogrammetric technique is adopted to record images during the simulation, compute the transformation parameters for dynamic analysis, and reconstruct 4D animations. In this study, several Australis© coded targets are fixed on the surface of the ETSP for automatic recognition and measurement. The camera orientations are computed by space resection, where the 3D coordinates of the coded targets are measured. Two approaches for motion-parameter computation are proposed: performing a 3D conformal transformation from the coordinates of the cameras, and relative orientation computation from the orientation of a single camera. Experimental results show that the 3D conformal transformation achieves sub-mm simulation accuracy, while relative orientation computation offers the flexibility needed for dynamic motion analysis and is easier and more efficient.
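    The 3D conformal (seven-parameter similarity) transformation used in the first motion-parameter approach can be sketched as a generic least-squares fit between corresponding point sets. This is a standard SVD solution in the style of Horn/Umeyama, offered as an illustration rather than the authors' code:

```python
import numpy as np

def conformal_3d(src, dst):
    """Least-squares 3-D conformal (similarity) transform: dst ~ s * R @ src + t.

    src, dst: (N, 3) arrays of corresponding points, N >= 3 and not collinear.
    Returns scale s, proper rotation R (3x3) and translation t (3,).
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d                  # centred coordinates
    U, S, Vt = np.linalg.svd(B.T @ A)              # cross-covariance matrix
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt                                 # nearest proper rotation
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum()  # optimal scale
    t = mu_d - s * R @ mu_s
    return s, R, t
```

    Applying a known scale, rotation and translation to a point cloud and fitting recovers the same parameters, which is the role this transform plays in deriving motion parameters between epochs.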

  14. Goal-oriented rectification of camera-based document images.

    Science.gov (United States)

    Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J

    2011-04-01

    Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions, aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low-cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface on the plane is guided only by the textual content's appearance in the document image, while incorporating a transformation which does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied at the word level, aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology, using a consistent evaluation methodology that takes into account OCR accuracy together with a newly introduced measure based on a semi-automatic procedure.

  15. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a large presence of 'digital cameras' aimed more at the home market. This latter category is not considered here. The term 'computer camera' herein means one which has low-level computer (and software) control of the CCD clocking. These can often satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs offering good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application are such effects as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  16. Technical assessment of Navitar Zoom 6000 optic and Sony HDC-X310 camera for MEMS presentations and training.

    Energy Technology Data Exchange (ETDEWEB)

    Diegert, Carl F.

    2006-02-01

    This report evaluates a newly-available, high-definition, video camera coupled with a zoom optical system for microscopic imaging of micro-electro-mechanical systems. We did this work to support configuration of three document-camera-like stations as part of an installation in a new Microsystems building at Sandia National Laboratories. The video display walls to be installed as part of these three presentation and training stations are of extraordinary resolution and quality. The new availability of a reasonably-priced, cinema-quality, high-definition video camera offers the prospect of filling these displays with full-motion imaging of Sandia's microscopic products at a quality substantially beyond the quality of typical video microscopes. Simple and robust operation of the microscope stations will allow the extraordinary-quality imaging to contribute to Sandia's day-to-day research and training operations. This report illustrates the disappointing image quality from a camera/lens system comprised of a Sony HDC-X310 high-definition video camera coupled to a Navitar Zoom 6000 lens. We determined that this Sony camera is capable of substantially more image quality than the Navitar optic can deliver. We identified an optical doubler lens from Navitar as the component of their optical system that accounts for a substantial part of the image quality problem. While work continues to incrementally improve performance of the Navitar system, we are also evaluating optical systems from other vendors to couple to this Sony camera.

  17. The Regulatory Noose: Logan City’s Adventures in Micro-Hydropower

    Directory of Open Access Journals (Sweden)

    Megan Hansen

    2016-06-01

    Full Text Available Recent growth in the renewable energy industry has increased government support for alternative energy. In the United States, hydropower is the largest source of renewable energy and also one of the most efficient. Currently, there are 30,000 megawatts of potential energy capacity through small- and micro-hydro projects throughout the United States. Increased development of micro-hydro could double America’s hydropower energy generation, but micro-hydro is not being developed at the same rate as other renewable sources. Micro-hydro is regulated by the Federal Energy Regulatory Commission and subject to the same regulation as large hydroelectric projects despite its minimal environmental impact. We studied two cases of micro-hydro projects in Logan, Utah, and Afton, Wyoming, which are both small rural communities. Both cases showed that the web of federal regulation is likely discouraging the development of micro-hydro in the United States by increasing the costs in time and funds for developers. Federal environmental regulation like the National Environmental Policy Act, the Endangered Species Act, and others are likely discouraging the development of clean renewable energy through micro-hydro technology.

  18. Divergence-ratio axi-vision camera (Divcam): A distance mapping camera

    International Nuclear Information System (INIS)

    Iizuka, Keigo

    2006-01-01

    A novel distance mapping camera, the divergence-ratio axi-vision camera (Divcam), is proposed. The decay rate of the illuminating light with distance, due to the divergence of the light, is used as the means of mapping distance. Resolutions of 10 mm over a range of meters and 0.5 mm over a range of decimeters were achieved. The special features of this camera are its high-resolution real-time operation, simplicity, compactness, light weight, portability, and yet low fabrication cost. The feasibility of various potential applications is also included.
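    The decay-rate principle can be illustrated with a toy calculation. Assuming ideal inverse-square decay from two point sources a known distance apart along the optical axis (a deliberate simplification, not the published Divcam optics), the ratio of the two captured intensities cancels the surface reflectance and yields distance directly:

```python
import numpy as np

def distance_from_ratio(img_near, img_far, offset):
    """Distance map from two images lit by point sources 'offset' apart.

    Assumes ideal inverse-square decay: I_near = rho / d**2 and
    I_far = rho / (d + offset)**2 for surface reflectance rho, so the
    ratio I_near / I_far cancels rho and leaves only the distance d.
    """
    r = np.sqrt(img_near / img_far)   # r = (d + offset) / d
    return offset / (r - 1.0)
```

    For example, a surface at d = 2 m with sources 0.5 m apart gives an intensity ratio of (2.5/2)² = 1.5625, from which the formula recovers d = 2 m regardless of reflectance.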

  19. Miniature CCD X-Ray Imaging Camera Technology Final Report CRADA No. TC-773-94

    Energy Technology Data Exchange (ETDEWEB)

    Conder, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mummolo, F. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-10-19

    The goal of the project was to develop a compact, large active area, high spatial resolution, high dynamic range, charge-coupled device (CCD) camera to replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating X-rays. The camera head and controller needed to be capable of operation within a vacuum environment and small enough to be fielded within the small vacuum target chambers at LLNL.

  20. Dual filtered backprojection for micro-rotation confocal microscopy

    International Nuclear Information System (INIS)

    Laksameethanasan, Danai; Brandt, Sami S; Renaud, Olivier; Shorte, Spencer L

    2009-01-01

    Micro-rotation confocal microscopy is a novel optical imaging technique which employs dielectric fields to trap and rotate individual cells to facilitate 3D fluorescence imaging using a confocal microscope. In contrast to computed tomography (CT), where an image can be modelled as a parallel projection of an object, the ideal confocal image is recorded as a central slice of the object corresponding to the focal plane. In CT, the projection images and the 3D object are related by the Fourier slice theorem, which states that the Fourier transform of a CT image is equal to a central slice of the Fourier transform of the 3D object. In the micro-rotation application, we have the dual form of this setting, i.e. the Fourier transform of the confocal image equals a parallel projection of the Fourier transform of the 3D object. Based on the observed duality, we present here the dual of the classical filtered backprojection (FBP) algorithm and apply it in micro-rotation confocal imaging. Our experiments on real data demonstrate that the proposed method is a fast and reliable algorithm for the micro-rotation application, as FBP is for the CT application.
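    The duality invoked here is easy to verify numerically with the discrete Fourier transform, where index 0 plays the role of the central slice. A small 2-D sketch on synthetic data (an illustration of the two theorems, not the authors' reconstruction code):

```python
import numpy as np

rng = np.random.default_rng(42)
img = rng.random((8, 8))      # stand-in for a 2-D object
N = img.shape[0]
F = np.fft.fft2(img)

# Fourier slice theorem (CT): the 1-D FT of a parallel projection of the
# object equals a central slice (row 0 in DFT indexing) of its 2-D FT.
proj = img.sum(axis=0)
assert np.allclose(np.fft.fft(proj), F[0, :])

# Dual form (micro-rotation confocal): the 1-D FT of a central slice of
# the object equals a parallel projection (sum) of its 2-D FT.
central_slice = img[0, :]
assert np.allclose(np.fft.fft(central_slice), F.sum(axis=0) / N)
```

    The dual FBP of the record follows from reading the second identity in the reverse direction: projections of the object's Fourier transform can be filtered and backprojected in the Fourier domain.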

  1. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.

  2. High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project

    Science.gov (United States)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2015-04-01

    Standard cameras capture only a fraction of the information that is visible to the human visual system. This is specifically true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions to enhance the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
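    The multi-exposure combination step can be sketched in a few lines. The following is a simplified software analogue, assuming a linear sensor response and triangle weighting in the spirit of Debevec and Malik, not the actual FPGA pipeline described in the record:

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge bracketed LDR frames into an HDR radiance map.

    images: list of float arrays scaled to [0, 1], assumed to come from
    a linear sensor; exposure_times: matching exposure times. Pixels
    near 0 (noise) or 1 (saturation) get low weight.
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # triangle weight, peak at mid-grey
        num += w * img / t                  # per-frame radiance estimate E = Z / t
        den += w
    return num / np.maximum(den, 1e-8)
```

    With three bracketed exposures of the same scene, pixels saturated in the long exposure are recovered from the shorter ones and vice versa, which is exactly why the MEC alternates three exposure times frame to frame.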

  3. Assessing the appropriateness of carbon financing for micro-scale projects in terms of capabilities

    Directory of Open Access Journals (Sweden)

    Caitlin Trethewy

    2013-08-01

    Full Text Available Micro-scale development projects are currently underrepresented in global carbon markets. This paper outlines the process of becoming eligible to generate carbon credits and examines some of the barriers that may inhibit access to carbon markets. In particular, it focuses on barriers relating to the capacity and resources of the organisation developing the project. This approach represents a deviation from the standard discourse, which has traditionally focused on barriers relating to the availability of up-front finance and the capacity of the local public and private sector institutions required to participate in the carbon standard certification process. The paper contains an analysis of the carbon offset project cycle, followed by a discussion of potential capacity-related barriers focusing on time, skills and resources. Recommendations are made as to how these may be overcome, with a particular focus on the role of technical organisations in assisting project developers. Completed during 2012, this research comes at an interesting time for global carbon markets, as the Kyoto Protocol’s first commitment period ended in 2012 and negotiations have failed to produce an agreement that would commit major emitters to reduction targets from 2013 onward. Despite this, reducing greenhouse gas emissions has gained momentum at the national level, and many governments are in the process of formulating and introducing emissions trading schemes.

  4. Accurate measurement of imaging photoplethysmographic signals based camera using weighted average

    Science.gov (United States)

    Pang, Zongguang; Kong, Lingqin; Zhao, Yuejin; Sun, Huijuan; Dong, Liquan; Hui, Mei; Liu, Ming; Liu, Xiaohua; Liu, Lingling; Li, Xiaohui; Li, Rongji

    2018-01-01

    Imaging photoplethysmography (IPPG) is an emerging technique for extracting the vital signs of human beings from video recordings. IPPG technology, with advantages such as non-contact measurement, low cost and easy operation, has become a research hot spot in the field of biomedicine. However, the noise caused by non-micro-arterial areas cannot be removed, because of the uneven distribution of micro-arteries and the different signal strength of each region, which results in a low signal-to-noise ratio of IPPG signals and low heart-rate accuracy. In this paper, we propose a method for improving the signal-to-noise ratio of camera-based IPPG signals from each sub-region of the face using a weighted average. Firstly, we obtain the regions of interest (ROIs) of a subject's face from the camera. Secondly, each region of interest is tracked and feature-matched in each frame of the video, and each tracked region of the face is divided into 60 × 60 pixel blocks. Thirdly, the weight of the PPG signal of each sub-region is calculated based on the signal-to-noise ratio of that sub-region. Finally, we combine the IPPG signals from all the tracked ROIs using a weighted average. Compared with existing approaches, the results show that the proposed method yields a modest but significant improvement in the signal-to-noise ratio of the camera-based PPG estimate and in the accuracy of heart rate measurement.
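    The weighted-average step can be sketched as follows. The SNR definition used here (spectral power in a band around an assumed heart-rate frequency relative to total power) is an illustrative choice, not necessarily the paper's:

```python
import numpy as np

def weighted_ppg(signals, f_hr, fs):
    """Combine sub-region PPG traces by an SNR-based weighted average.

    signals: (n_regions, n_samples) array sampled at fs Hz; the SNR of
    each region is taken as the spectral power within +/- 0.2 Hz of the
    expected heart-rate frequency f_hr relative to total power, and the
    weights are proportional to that SNR.
    """
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    centred = signals - signals.mean(axis=1, keepdims=True)
    spec = np.abs(np.fft.rfft(centred, axis=1)) ** 2
    band = (freqs > f_hr - 0.2) & (freqs < f_hr + 0.2)
    snr = spec[:, band].sum(axis=1) / np.maximum(spec.sum(axis=1), 1e-12)
    w = snr / snr.sum()
    return w @ signals        # weighted average across regions
```

    A sub-region dominated by noise contributes little band power and therefore little weight, which is the mechanism by which the weighted average raises the SNR of the combined trace.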

  5. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of that quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in radiation density of the exposure, to maintain the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index for maintaining its detectability and causing the proper centering of the radiation camera image

  6. Scout-view assisted interior micro-CT

    International Nuclear Information System (INIS)

    Sharma, Kriti Sen; Narayanan, Shree; Agah, Masoud; Holzner, Christian; Vasilescu, Dragoş M; Jin, Xin; Hoffman, Eric A; Yu, Hengyong; Wang, Ge

    2013-01-01

    Micro computed tomography (micro-CT) is a widely-used imaging technique. A challenge of micro-CT is to quantitatively reconstruct a sample larger than the field-of-view (FOV) of the detector. This scenario is characterized by truncated projections and associated image artifacts. However, for such truncated scans, a low resolution scout scan with an increased FOV is frequently acquired so as to position the sample properly. This study shows that the otherwise discarded scout scans can provide sufficient additional information to uniquely and stably reconstruct the interior region of interest. Two interior reconstruction methods are designed to utilize the multi-resolution data without significant computational overhead. While most previous studies used numerically truncated global projections as interior data, this study uses truly hybrid scans where global and interior scans were carried out at different resolutions. Additionally, owing to the lack of standard interior micro-CT phantoms, we designed and fabricated novel interior micro-CT phantoms for this study to provide means of validation for our algorithms. Finally, two characteristic samples from separate studies were scanned to show the effect of our reconstructions. The presented methods show significant improvements over existing reconstruction algorithms. (paper)

  7. The application of μPIV technique in the study of magnetic flows in a micro-channel

    International Nuclear Information System (INIS)

    Nguyen, N.T.; Wu, Z.G.; Huang, X.Y.; Wen, C.-Y.

    2005-01-01

    In this preliminary experimental study, micro-scale particle image velocimetry (μPIV) was adopted for the first time to obtain quantitative information about magnetic flows in a micro-channel. The μPIV system consists of an inverted fluorescence microscope, a Q-switched Nd:YAG laser and a CCD camera. A fluorescent liquid with particles of 3 μm diameter was blended homogeneously with the prepared magnetic fluid. A permanent magnet approached and left one end of the micro-channel, and the response of the magnetic fluid was recorded with the μPIV simultaneously. The observed flow features validate the feasibility of using the μPIV technique in the study of magnetic flows in a micro-channel. μPIV provides a promising experimental tool for visualization and quantitative measurement of magnetic micro-flows.

  8. Micro-gen metering solutions

    Energy Technology Data Exchange (ETDEWEB)

    Elland, J.; Dickson, J.; Cranfield, P.

    2003-07-01

    This report summarises the results of a project to investigate the regulation of domestic electricity metering work and identify the most economic options for micro-generator installers to undertake work on electricity meters. A micro-generation unit is defined as an energy conversion system converting non-electrical energy into electrical energy and can include technologies such as photovoltaic systems, small-scale wind turbines, micro-hydroelectric systems, and combined heat and power systems. Details of six tasks are given and cover examination of the existing framework and legal documentation for metering work, the existing technical requirements for meter operators, meter operator personnel accreditation, appraisal of options for meter changes and for micro-generation installation, document change procedures, industry consultation, and a review of the costs implications of the options.

  9. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically generate…

  10. High-precision real-time 3D shape measurement based on a quad-camera system

    Science.gov (United States)

    Tao, Tianyang; Chen, Qian; Feng, Shijie; Hu, Yan; Zhang, Minliang; Zuo, Chao

    2018-01-01

    Phase-shifting profilometry (PSP) based 3D shape measurement is well established in various applications due to its high accuracy, simple implementation, and robustness to environmental illumination and surface texture. In PSP, higher depth resolution generally requires a higher fringe density of the projected patterns, which, in turn, leads to severe phase ambiguities that must be resolved with additional information from phase coding and/or geometric constraints. However, in order to guarantee the reliability of phase unwrapping, available techniques usually come at the cost of an increased number of patterns, a reduced fringe amplitude, or complicated post-processing algorithms. In this work, we demonstrate that by using a quad-camera multi-view fringe projection system and carefully arranging the relative spatial positions between the cameras and the projector, it becomes possible to completely eliminate the phase ambiguities of conventional three-step PSP patterns with high fringe density, without projecting any additional patterns or embedding any auxiliary signals. Benefiting from the position-optimized quad-camera system, stereo phase unwrapping can be performed efficiently and reliably through flexible phase consistency checks. Besides, the redundant information of multiple phase consistency checks is fully exploited through a weighted phase difference scheme to further enhance the reliability of phase unwrapping. This paper explains the 3D measurement principle and the basic design of the quad-camera system, and finally demonstrates that, in a large measurement volume of 200 mm × 200 mm × 400 mm, the resulting dynamic 3D sensing system can realize real-time 3D reconstruction at 60 frames per second with a depth precision of 50 μm.
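    The three-step phase retrieval underlying such a system has a standard closed form. A minimal sketch of the textbook formula, assuming phase shifts of −2π/3, 0 and +2π/3 (the result is the wrapped phase that the quad-camera consistency checks must subsequently unwrap):

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Wrapped phase from three fringe images with phase shifts
    of -2*pi/3, 0 and +2*pi/3, i.e. I_k = A + B*cos(phi + delta_k).

    Returns phi in (-pi, pi]; the fringe offset A and modulation B
    cancel out of the ratio.
    """
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

    Substituting the cosine model shows the numerator equals 3B·sin(phi) and the denominator 3B·cos(phi), so the arctangent recovers phi exactly for any A and B > 0.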

  11. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    Energy Technology Data Exchange (ETDEWEB)

    Strehlow, J.P.

    1994-08-24

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  12. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    International Nuclear Information System (INIS)

    Strehlow, J.P.

    1994-01-01

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1)

  13. The role of camera-bundled image management software in the consumer digital imaging value chain

    Science.gov (United States)

    Mueller, Milton; Mundkur, Anuradha; Balasubramanian, Ashok; Chirania, Virat

    2005-02-01

    This research was undertaken by the Convergence Center at the Syracuse University School of Information Studies (www.digital-convergence.info). Project ICONICA, the name for the research, focuses on the strategic implications of digital Images and the CONvergence of Image management and image CApture. Consumer imaging - the activity that we once called "photography" - is now recognized as in the throes of a digital transformation. At the end of 2003, market researchers estimated that about 30% of the households in the U.S. and 40% of the households in Japan owned digital cameras. In 2004, of the 86 million new cameras sold (excluding one-time use cameras), a majority (56%) were estimated to be digital cameras. Sales of photographic film, while still profitable, are declining precipitously.

  14. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  15. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    section unearths what characterizes the literature on camera movement. The second section of the dissertation delineates the history of camera movement itself within narrative cinema. Several organizational principles subtending the on-screen effect of camera movement are revealed in section two … but they are not organized into a coherent framework. This is the task that section three meets in proposing a functional taxonomy for camera movement in narrative cinema. Two presumptions subtend the taxonomy: That camera movement actively contributes to the way in which we understand the sound and images on the screen …, commentative or valuative manner. 4) Focalization: associating the movement of the camera with the viewpoints of characters or entities in the story world. 5) Reflexive: inviting spectators to engage with the artifice of camera movement. 6) Abstract: visualizing abstract ideas and concepts. In order …

  16. Telecentric 3D profilometry based on phase-shifting fringe projection.

    Science.gov (United States)

    Li, Dong; Liu, Chunyang; Tian, Jindong

    2014-12-29

    Three-dimensional shape measurement in the microscopic range becomes increasingly important with the development of micro-manufacturing technology. Microscopic fringe projection techniques offer fast, robust, full-field measurement for field sizes from approximately 1 mm² to several cm². However, the depth of field is very small due to the non-telecentric imaging of a microscope, and is often not sufficient to cover the complete depth of a 3D object. Moreover, the calibration of the phase-to-depth conversion is complicated, requiring a precision translation stage and a reference plane. In this paper, we propose a novel telecentric phase-shifting projected fringe profilometry for small but thick objects. Telecentric imaging extends the depth of field to approximately millimeter order, much larger than that of conventional microscopy. To avoid the complicated phase-to-depth conversion of microscopic fringe projection, we develop a new system calibration method for the camera and projector based on a telecentric imaging model. Based on these, a 3D reconstruction under telecentric imaging is presented using stereo vision aided by fringe phase maps. Experiments demonstrated the feasibility and high measurement accuracy of the proposed system for thick objects.

  17. Visualizing the transient electroosmotic flow and measuring the zeta potential of microchannels with a micro-PIV technique.

    Science.gov (United States)

    Yan, Deguang; Nguyen, Nam-Trung; Yang, Chun; Huang, Xiaoyang

    2006-01-14

    We have demonstrated a transient micro particle image velocimetry (micro-PIV) technique to measure the temporal development of electroosmotic flow in microchannels. Synchronization of different trigger signals for the laser, the CCD camera, and the high-voltage switch makes this measurement possible with a conventional micro-PIV setup. Using the transient micro-PIV technique, we have further proposed a method on the basis of inertial decoupling between the particle electrophoretic motion and the fluid electroosmotic flow to determine the electrophoretic component in the particle velocity and the zeta potential of the channel wall. It is shown that using the measured zeta potentials, the theoretical predictions agree well with the transient response of the electroosmotic velocities measured in this work.
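    The core of any micro-PIV evaluation, transient or not, is locating the displacement peak of the cross-correlation between successive interrogation windows. A minimal FFT-based numpy sketch (illustrative only, not the authors' implementation):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer-pixel displacement between two interrogation windows,
    from the peak of their FFT-based circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    shape = np.array(corr.shape)
    # Indices above N/2 correspond to negative displacements.
    disp = (peak + shape // 2) % shape - shape // 2
    return tuple(disp)  # (dy, dx)
```

    Real PIV software refines this integer estimate with sub-pixel peak fitting (e.g., a three-point Gaussian fit) before converting pixels to velocity.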

  18. Survey of practical application fields of micro-machine and micro-factory technologies in Japan; Nippon ni okeru maikuro machine oyobi maikuro factory gijutsu no jitsuyoka bun'ya chosa

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-02-01

    As for micro-machine and micro-factory technologies, research and development trends at private companies were surveyed, excluding national projects. Among companies not participating in national projects, development is dominated by micro-devices, such as micro-sensors and micro-actuators, and by basic technologies, such as machining, assembly, and materials. These efforts are aimed at the electronics industries: measurement and analysis equipment, automotive sensors, information and communication, and home electric products. By contrast, there is little research and development on micro-robots. Research and development aimed at medical applications is widely pursued by private companies; in this field, micro-machining technology for micro-surgery and endoscopes is promising, and photo-forceps technology exists for handling micro-parts. However, there is little research that considers the micro-factory. 146 refs., 73 figs., 7 tabs.

  19. Application of a micro-credit scheme to some ecological activities

    Science.gov (United States)

    Hakoyama, F.

    2017-03-01

    Micro-credit schemes are expanding rapidly worldwide in ecological activities. Providing gas-cooking equipment in Burkina Faso is a successful example in which a micro-credit system improves not only poor women's lives but also the ecological environment. In Bangladesh, a solar PV program funded through micro-credit has been implemented widely and successfully: large NGOs act as equipment dealers and provide micro-credit loans to individual poor households. In contrast, very few sanitation projects have shown positive results. Micro-credit schemes are, in principle, based on the income generated through the fund, but sanitation activities usually yield no income, and the high cost of latrine construction is another barrier. In this paper, we review why we could not apply a micro-credit scheme to our "Améli-eaur project" in Burkina Faso. Common features of success in ecological activities are 1) sufficient income yielded by the activity itself, 2) strong needs on the population side, and 3) established system support, both technical and administrative. If a way can be found to fulfill these elements in a sanitation project, it can become a long-lived, sustainable project.

  20. Vision-Aided Autonomous Landing and Ingress of Micro Aerial Vehicles

    Science.gov (United States)

    Brockers, Roland; Ma, Jeremy C.; Matthies, Larry H.; Bouffard, Patrick

    2012-01-01

    Micro aerial vehicles have limited sensor suites and computational power. For reconnaissance tasks and to conserve energy, these systems need the ability to autonomously land at vantage points or enter buildings (ingress). But for autonomous navigation, information is needed to identify and guide the vehicle to the target. Vision algorithms can provide egomotion estimation and target detection using input from cameras that are easy to include in miniature systems.

  1. Applying image quality in cell phone cameras: lens distortion

    Science.gov (United States)

    Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje

    2009-01-01

    This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which predicts overall image quality from individual image quality attributes; it was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image is JPEG compressed and the cell-phone camera is set to 'auto' mode. Because the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberrations (LCA). The goal of this paper is to present the framework of this pilot project, starting with the definition of the individual attributes and continuing up to their quantification in JNDs of quality, a requirement of the multivariate formalism; both objective and subjective evaluations were therefore used. A major distinction of the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior, so a radial mapping/modeling cannot be used in this case.

  2. Mixel camera--a new push-broom camera concept for high spatial resolution keystone-free hyperspectral imaging.

    Science.gov (United States)

    Høye, Gudrun; Fridman, Andrei

    2013-05-06

    Current high-resolution push-broom hyperspectral cameras introduce keystone errors to the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component, an array of light-mixing chambers, with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data recorded onto the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera with traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that it will be photon-noise limited, even in bright light, with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.

  3. VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras

    Science.gov (United States)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-08-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC) and Institut D'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual channel analog readout and an internally mounted cold block. At the flight CCD temperature of -20 C, the CLASP cameras exceeded the low-noise performance requirements (less than or equal to 25 e- read noise and greater than or equal to 10 e-/sec/pix dark current). We present the quantum efficiency measurements performed on the CLASP cameras and discuss the testing of UV, EUV and X-ray science cameras at MSFC.

  4. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from 16N decay gammas in dedicated flowing-water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with 16N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  5. Real-time multi-camera video acquisition and processing platform for ADAS

    Science.gov (United States)

    Saponara, Sergio

    2016-04-01

    The paper presents the design of a real-time and low-cost embedded system for image acquisition and processing in Advanced Driver Assisted Systems (ADAS). The system adopts a multi-camera architecture to provide a panoramic view of the objects surrounding the vehicle. Fish-eye lenses are used to achieve a large Field of View (FOV). Since they introduce radial distortion of the images projected on the sensors, a real-time algorithm for their correction is also implemented in a pre-processor. An FPGA-based hardware implementation, re-using IP macrocells for several ADAS algorithms, allows for real-time processing of input streams from VGA automotive CMOS cameras.
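    The abstract does not give the correction algorithm itself; as an illustration, fisheye undistortion is typically implemented as a backward remap table computed once and then applied per frame. A hedged numpy sketch assuming the common equidistant projection model r_d = f·θ (the model choice is an assumption, not taken from the paper):

```python
import numpy as np

def fisheye_remap(width, height, f):
    """Backward remap table for fisheye undistortion: for each pixel of the
    rectilinear output image, the source coordinates in the fisheye input,
    under the equidistant projection model r_d = f * theta."""
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    xs, ys = np.meshgrid(np.arange(width) - cx, np.arange(height) - cy)
    r_u = np.hypot(xs, ys)                    # radius in the output image
    r_d = f * np.arctan2(r_u, f)              # radius on the fisheye sensor
    scale = np.divide(r_d, r_u, out=np.ones_like(r_u), where=r_u > 0)
    return xs * scale + cx, ys * scale + cy   # (map_x, map_y)
```

    Each output pixel is then sampled bilinearly from (map_x, map_y); hardware implementations typically stream this fixed lookup through line buffers, which is what makes the correction feasible in an FPGA pre-processor.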

  6. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many

  7. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged, and methods also exist to evaluate camera speed. However, speed or rapidity metrics of the mobile phone's camera system have not been combined with the quality metrics, even though camera speed has become an increasingly important performance feature. This work has several tasks. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market; the measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
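    The paper's exact combination rule is not reproduced in the abstract; multivariate image-quality frameworks of this kind commonly fold per-attribute losses, expressed in JND units, into one number with a Minkowski sum. A hypothetical sketch (function name and exponent are illustrative assumptions, not the published metric):

```python
def combined_score(quality_jnds, speed_penalties, p=2.0):
    """Fold per-attribute quality losses (in JNDs) and speed penalties into
    a single benchmarking score via a Minkowski sum; larger means worse."""
    losses = list(quality_jnds) + list(speed_penalties)
    return sum(l ** p for l in losses) ** (1.0 / p)
```

    With p = 2 the score behaves like a Euclidean distance from the ideal camera, so one severe defect dominates many mild ones, which matches how observers tend to judge overall quality.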

  8. A High Performance Micro Channel Interface for Real-Time Industrial Image Processing

    Science.gov (United States)

    Thomas H. Drayer; Joseph G. Tront; Richard W. Conners

    1995-01-01

    Data collection and transfer devices are critical to the performance of any machine vision system. The interface described in this paper collects image data from a color line scan camera and transfers the data obtained into the system memory of a Micro Channel-based host computer. A maximum data transfer rate of 20 Mbytes/sec can be achieved using the DMA capabilities...

  9. Data transmission protocol for Pi-of-the-Sky cameras

    Science.gov (United States)

    Uzycki, J.; Kasprowicz, G.; Mankiewicz, M.; Nawrocki, K.; Sitek, P.; Sokolowski, M.; Sulej, R.; Tlaczala, W.

    2006-10-01

    The large amount of data collected by automatic astronomical cameras has to be transferred to fast computers in a reliable way. The method chosen should ensure data streaming in both directions, but in a nonsymmetrical way. The Ethernet interface is a very good choice because of its popularity and proven performance; however, it requires a TCP/IP stack implementation in devices such as cameras for full compliance with existing networks and operating systems. This paper describes the NUDP protocol, which was designed as a supplement to the standard UDP protocol and can be used as a simple network protocol. NUDP does not need a TCP protocol implementation and makes it possible to run the Ethernet network with simple devices based on microcontroller and/or FPGA chips. The data transmission idea was created especially for the "Pi of the Sky" project.
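    The NUDP frame layout is not specified in the abstract; the sketch below shows a hypothetical minimal framing in the same spirit: each datagram carries its own sequence number and checksum, so a microcontroller or FPGA endpoint can detect loss and corruption without a TCP stack. All field names and sizes here are assumptions for illustration:

```python
import struct
import zlib

# Hypothetical frame layout: 32-bit sequence number, 16-bit payload
# length, 32-bit CRC of the payload, then the payload bytes.
HEADER = struct.Struct("!IHI")

def pack_frame(seq, payload):
    """Build a datagram carrying its own sequence number and checksum."""
    return HEADER.pack(seq, len(payload), zlib.crc32(payload)) + payload

def unpack_frame(datagram):
    """Return (seq, payload), or None if the frame is truncated/corrupted."""
    if len(datagram) < HEADER.size:
        return None
    seq, length, crc = HEADER.unpack_from(datagram)
    payload = datagram[HEADER.size:HEADER.size + length]
    if len(payload) != length or zlib.crc32(payload) != crc:
        return None
    return seq, payload
```

    Gaps in the received sequence numbers tell the fast computer which frames to request again, giving reliability on top of plain UDP datagrams.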

  10. High spatial resolution infrared camera as ISS external experiment

    Science.gov (United States)

    Eckehard, Lorenz; Frerker, Hap; Fitch, Robert Alan

    The high spatial resolution infrared camera as an ISS external experiment for monitoring global climate changes uses ISS internal and external resources (e.g., data storage). The optical experiment will consist of an infrared camera for monitoring global climate changes from the ISS. This technology was evaluated by the German small-satellite mission BIRD and further developed in different ESA projects. Compared to BIRD, the presented instrument uses proven advanced sensor technologies (ISS external) and ISS on-board processing and storage capabilities (internal). The instrument will be equipped with a serial interface for TM/TC and several relay commands for the power supply. For data processing and storage, a mass memory is required. Access to actual attitude data is highly desired to produce geo-referenced maps, if possible by on-board processing.

  11. Photogrammetry and Remote Sensing: New German Standards (din) Setting Quality Requirements of Products Generated by Digital Cameras, Pan-Sharpening and Classification

    Science.gov (United States)

    Reulke, R.; Baltrusch, S.; Brunn, A.; Komp, K.; Kresse, W.; von Schönermark, M.; Spreckels, V.

    2012-08-01

    10 years after the first introduction of a digital airborne mapping camera at the ISPRS conference 2000 in Amsterdam, several digital cameras are now available. They are well established in the market and have replaced the analogue camera. A general improvement in image quality accompanied the digital camera development: the signal-to-noise ratio and the dynamic range are significantly better than with the analogue cameras, and digital cameras can be spectrally and radiometrically calibrated. The use of these cameras required rethinking in many places, though, and new data products were introduced. In recent years, several activities took place that should lead to a better understanding of the cameras and the data they produce. Several projects, like those of the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) or EuroSDR (European Spatial Data Research), were conducted to test and compare the performance of the different cameras. In this paper the current DIN (Deutsches Institut fuer Normung - German Institute for Standardization) standards are presented. These include the standard for digital cameras, the standard for ortho rectification, the standard for classification, and the standard for pan-sharpening. In addition, standards for the derivation of elevation models, the use of Radar/SAR, and image quality are in preparation. The OGC has indicated its interest in participating in that development and has already published specifications in the field of photogrammetry and remote sensing. One goal of joint future work could be to merge these formerly independent developments and jointly develop a suite of implementation specifications for photogrammetry and remote sensing.

  12. Determination of feature generation methods for PTZ camera object tracking

    Science.gov (United States)

    Doyle, Daniel D.; Black, Jonathan T.

    2012-06-01

    Object detection and tracking using computer vision (CV) techniques have been widely applied to sensor fusion applications. Many papers continue to be written that speed up performance and increase learning of artificially intelligent systems through improved algorithms, workload distribution, and information fusion. Military application of real-time tracking systems is becoming more and more complex with an ever increasing need of fusion and CV techniques to actively track and control dynamic systems. Examples include the use of metrology systems for tracking and measuring micro air vehicles (MAVs) and autonomous navigation systems for controlling MAVs. This paper seeks to contribute to the determination of select tracking algorithms that best track a moving object using a pan/tilt/zoom (PTZ) camera applicable to both of the examples presented. The select feature generation algorithms compared in this paper are the trained Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the Mixture of Gaussians (MoG) background subtraction method, the Lucas-Kanade optical flow method (2000) and the Farneback optical flow method (2003). The matching algorithm used in this paper for the trained feature generation algorithms is the Fast Library for Approximate Nearest Neighbors (FLANN). The BSD licensed OpenCV library is used extensively to demonstrate the viability of each algorithm and its performance. Initial testing is performed on a sequence of images using a stationary camera. Further testing is performed on a sequence of images such that the PTZ camera is moving in order to capture the moving object. Comparisons are made based upon accuracy, speed and memory.
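    Of the compared methods, Lucas-Kanade reduces to a small least-squares problem per window. A self-contained numpy sketch of the single-window estimator (an illustration of the principle, not the pyramidal OpenCV implementation used in the paper):

```python
import numpy as np

def lucas_kanade_flow(prev, curr):
    """Single-window Lucas-Kanade: solve the 2x2 normal equations
    built from the spatial gradients Ix, Iy and temporal difference It."""
    Ix = np.gradient(prev, axis=1)
    Iy = np.gradient(prev, axis=0)
    It = curr - prev
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)  # (vx, vy) in pixels per frame
```

    Because the system is only solvable where the gradient matrix A is well conditioned, practical trackers run this on corner-like features, which is why it pairs naturally with feature detectors in the comparison above.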

  13. Development of compact Compton camera for 3D image reconstruction of radioactive contamination

    Science.gov (United States)

    Sato, Y.; Terasaka, Y.; Ozawa, S.; Nakamura Miyamura, H.; Kaburagi, M.; Tanifuji, Y.; Kawabata, K.; Torii, T.

    2017-11-01

    The Fukushima Daiichi Nuclear Power Station (FDNPS), operated by Tokyo Electric Power Company Holdings, Inc., went into meltdown after the large tsunami caused by the Great East Japan Earthquake of March 11, 2011. Very large amounts of radionuclides were released from the damaged plant. Radiation distribution measurements inside FDNPS buildings are indispensable to execute decommissioning tasks in the reactor buildings. We have developed a compact Compton camera to measure the distribution of radioactive contamination inside the FDNPS buildings three-dimensionally (3D). The total weight of the Compton camera is lower than 1.0 kg. The gamma-ray sensor of the Compton camera employs Ce-doped GAGG (Gd3Al2Ga3O12) scintillators coupled with a multi-pixel photon counter. Angular correction of the detection efficiency of the Compton camera was conducted. Moreover, we developed a 3D back-projection method using the multi-angle data measured with the Compton camera. We successfully observed 3D radiation images resulting from the two 137Cs radioactive sources, and the image of the 9.2 MBq source appeared stronger than that of the 2.7 MBq source.
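    The abstract does not detail the multi-angle 3D back-projection method; as a generic illustration, list-mode Compton back-projection accumulates, for every measured event, votes over the voxels lying on the surface of that event's Compton cone (apex at the scatterer, axis along the scatter direction, half-angle θ from the measured energies). A simplified numpy sketch with hypothetical event tuples:

```python
import numpy as np

def backproject_cones(events, grid, tol=0.05):
    """Vote for voxels lying (within tol, in cosine units) on the surface
    of each Compton cone given as (apex, unit axis, half-angle theta).
    grid is an (N, 3) array of voxel centers."""
    votes = np.zeros(len(grid))
    for apex, axis, theta in events:
        d = grid - apex                       # vectors apex -> voxel
        norm = np.linalg.norm(d, axis=1)
        norm[norm == 0] = 1e-12               # guard a voxel at the apex
        cosang = (d @ axis) / norm            # cosine of angle to cone axis
        votes += np.abs(cosang - np.cos(theta)) < tol
    return votes
```

    Voxels where many cone surfaces intersect collect the most votes, which is how images of multiple sources of different activity can emerge from the measured events.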

  14. Improving Situational Awareness in camera surveillance by combining top-view maps with camera images

    NARCIS (Netherlands)

    Kooi, F.L.; Zeeders, R.

    2009-01-01

    The goal of the experiment described is to improve today's camera surveillance in public spaces. Three designs with the camera images combined on a top-view map were compared to each other and to the current situation in camera surveillance. The goal was to test which design makes spatial

  15. Optomechanical System Development of the AWARE Gigapixel Scale Camera

    Science.gov (United States)

    Son, Hui S.

    Electronic focal plane arrays (FPA) such as CMOS and CCD sensors have dramatically improved to the point that digital cameras have essentially phased out film (except in very niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors and their ability to capture digital signals that can be processed and manipulated post acquisition offers much more freedom of design at system levels and opens up many interesting possibilities for the next generation of computational imaging systems. The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off the shelf (OTS) flat sensor technology. This dissertation details developments and physical implementations of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we have developed through the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.

  16. Calibration of an Outdoor Distributed Camera Network with a 3D Point Cloud

    Directory of Open Access Journals (Sweden)

    Agustín Ortega

    2014-07-01

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Although camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to require frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method. The first is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC).
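    The ground-plane homographies mentioned above map image pixels to world points on a flat walking area. They can be estimated from four or more point correspondences with the standard DLT algorithm; a compact numpy sketch of the generic method (not the authors' map-based pipeline):

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from >= 4 point
    pairs, via the direct linear transform (SVD null-space solution)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)   # right singular vector of smallest sigma

def apply_homography(h, pt):
    x, y, w = h @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

    In a deployment, the correspondences would come from image features matched against the 3D map's known ground-plane coordinates.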

  17. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    Science.gov (United States)

    Champey, Patrick R.; Kobayashi, Ken; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512x512 detector, dual channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20 C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and greater than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately equal to 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-alpha wavelength. A vacuum ultra-violet (VUV) monochromator and a NIST calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to operate several tests intended to verify the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.

  18. Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3d Models?

    Science.gov (United States)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2018-05-01

    360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.

  19. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, briefly describing the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, including a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  20. Adaptive pixel-to-pixel projection intensity adjustment for measuring a shiny surface using orthogonal color fringe pattern projection

    Science.gov (United States)

    Chen, Chao; Gao, Nan; Wang, Xiangjun; Zhang, Zonghua

    2018-05-01

    Three-dimensional (3D) shape measurement based on fringe pattern projection techniques has been commonly used in various fields. One of the remaining challenges in fringe pattern projection is that camera sensor saturation may occur if there is a large range of reflectivity variation across the surface that causes measurement errors. To overcome this problem, a novel fringe pattern projection method is proposed to avoid image saturation and maintain high-intensity modulation for measuring shiny surfaces by adaptively adjusting the pixel-to-pixel projection intensity according to the surface reflectivity. First, three sets of orthogonal color fringe patterns and a sequence of uniform gray-level patterns with different gray levels are projected onto a measured surface by a projector. The patterns are deformed with respect to the object surface and captured by a camera from a different viewpoint. Subsequently, the optimal projection intensity at each pixel is determined by fusing different gray levels and transforming the camera pixel coordinate system into the projector pixel coordinate system. Finally, the adapted fringe patterns are created and used for 3D shape measurement. Experimental results on a flat checkerboard and shiny objects demonstrate that the proposed method can measure shiny surfaces with high accuracy.
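    The selection of the optimal projection intensity from the sequence of uniform gray-level patterns can be sketched per pixel: after registering the captured images into the projector pixel grid, keep the highest gray level whose captured intensity stays below saturation. A hedged numpy sketch (the saturation threshold and the fallback rule are illustrative assumptions, not values from the paper):

```python
import numpy as np

def optimal_projection_intensity(captured, gray_levels, saturation=250):
    """Per pixel, pick the highest projector gray level whose captured
    image stays below the saturation threshold.

    captured:    array (L, H, W), one camera image per projected gray level,
                 already registered into the projector pixel grid.
    gray_levels: array (L,) of the projected gray levels, ascending.
    """
    ok = captured < saturation                       # (L, H, W) mask
    # Index of the last acceptable level per pixel (argmax on the
    # reversed mask finds the first True from the top).
    idx = ok.shape[0] - 1 - np.argmax(ok[::-1], axis=0)
    any_ok = ok.any(axis=0)
    levels = np.asarray(gray_levels)[idx]
    # Pixels that saturate at every level fall back to the lowest level.
    return np.where(any_ok, levels, gray_levels[0])
```

    The resulting per-pixel intensity map then modulates the adapted fringe patterns, keeping the fringe modulation high on dark regions without saturating the shiny ones.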

  1. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considerssolid-state color cameras.

  2. Evaluation of Biomaterials Using Micro-Computerized Tomography

    International Nuclear Information System (INIS)

    Torris, A. T. Arun; Columbus, K. C. Soumya; Saaj, U. S.; Krishnan, Kalliyana V.; Nair, Manitha B.

    2008-01-01

    Micro-computed tomography, or Micro-CT, is a high-resolution, non-invasive x-ray scanning technique that allows precise three-dimensional imaging and quantification of the micro-architectural and structural parameters of objects. Tomographic reconstruction is based on a cone-beam convolution-back-projection algorithm. Micro-architectural and structural parameters such as porosity, surface-area-to-volume ratio, interconnectivity, pore size, wall thickness, anisotropy and cross-section area of biomaterials and bio-specimens such as trabecular bone, polymer scaffolds, bio-ceramics and dental restoratives were evaluated through imaging and computer-aided manipulation of the object scan data sets.
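    The back-projection step of such reconstructions can be illustrated in a reduced setting. The sketch below is an unfiltered, parallel-beam 2D back-projection in numpy, a deliberate simplification of the cone-beam convolution-back-projection named above (real Micro-CT adds the convolution filter and cone-beam weighting):

```python
import numpy as np

def backproject(sinogram, thetas, size):
    """Unfiltered back-projection of a parallel-beam sinogram onto a
    size x size grid; detector bins are centered on the grid."""
    recon = np.zeros((size, size))
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    xs, ys = xs - c, ys - c
    for proj, th in zip(sinogram, thetas):
        t = xs * np.cos(th) + ys * np.sin(th)        # detector coordinate
        bins = np.clip(np.round(t + c).astype(int), 0, size - 1)
        recon += proj[bins]                          # smear projection back
    return recon
```

    Each projection is smeared back along its viewing direction; the object emerges where the smears reinforce, and the missing ramp filter is what removes the residual 1/r blur in a full reconstruction.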

  3. SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Darne, C; Robertson, D; Alsanea, F; Beddar, S [UT MD Anderson Cancer Center, Houston, TX (United States)

    2016-06-15

    Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately in near real-time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm³) filled with a liquid scintillator that generates scintillation light when irradiated with protons. To track rapid spatial and dose variations in spot-scanning proton beams we used three scientific-complementary metal-oxide semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect the optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect scintillation light to the cameras capturing the top and right views. The fixed focal length objective lenses for these cameras were selected for their ability to provide a large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for correction of image distortions arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 msec integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. A master-slave camera configuration enables the slaves to initiate image acquisition instantly (within 2 µsec) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high-speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.
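
    The stated spooling capacity is self-consistent; a back-of-the-envelope check, assuming 2 bytes per pixel for the 16-bit dynamic range:

```python
# Rough check of the quoted 2 min/camera recording limit at 75 fps,
# assuming 2 bytes per pixel for the 16-bit dynamic range.
width = height = 1100        # truncated FoV, pixels
bytes_per_pixel = 2          # 16-bit
fps = 75
seconds = 2 * 60             # 2 min per camera
cameras = 3

frame_mb = width * height * bytes_per_pixel / 1e6
total_gb = frame_mb * fps * seconds * cameras / 1e3
print(f"{frame_mb:.2f} MB/frame, {total_gb:.1f} GB total")   # fits in 128 GB
```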

  4. SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras

    International Nuclear Information System (INIS)

    Darne, C; Robertson, D; Alsanea, F; Beddar, S

    2016-01-01

    Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately in near real-time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm³) filled with a liquid scintillator that generates scintillation light when irradiated with protons. To track rapid spatial and dose variations in spot-scanning proton beams we used three scientific-complementary metal-oxide semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect the optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect scintillation light to the cameras capturing the top and right views. The fixed focal length objective lenses for these cameras were selected for their ability to provide a large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for correction of image distortions arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 msec integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. A master-slave camera configuration enables the slaves to initiate image acquisition instantly (within 2 µsec) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high-speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.

  5. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has been drastically improved in response to the current demand for high-quality digital images. For example, a digital still camera has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution is incompatible with the high frame rate of ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.
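
    The pixel-transfer-rate limitation cited above can be made concrete: the two sensors in the prototype, despite very different resolutions and frame rates, run at comparable pixel throughput.

```python
# Pixel throughput of the two prototype sensors, from the figures in
# the abstract: both sit near the same transfer rate, which is the
# physical bottleneck a single sensor cannot escape.
hi_res_rate = 2588 * 1958 * 3.75   # high-resolution sensor, pixels/s
hi_fps_rate = 500 * 500 * 90       # high-frame-rate sensor, pixels/s
print(f"{hi_res_rate / 1e6:.1f} vs {hi_fps_rate / 1e6:.1f} Mpixels/s")
```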

  6. Using DSLR cameras in digital holography

    Science.gov (United States)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the two-dimensional image sensor that records the digital hologram plays a key role in the performance of this imaging technique; the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worth exploring. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offer a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the problem of object replication reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented, and a theoretical explanation of the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera show the replication problem.
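
    A toy one-dimensional numpy illustration (not the paper's derivation) of why single-color-plane sampling replicates the reconstruction: keeping every second pixel multiplies the hologram by a periodic mask, whose spectrum adds a shifted copy of each fringe peak.

```python
import numpy as np

# Toy 1-D illustration of the replication problem: keeping only every
# second pixel, as a single Bayer color plane does, multiplies the
# hologram by a periodic mask, and the mask's spectrum adds a shifted
# copy of every fringe peak.
N = 256
x = np.arange(N)
hologram = 1 + np.cos(2 * np.pi * 10 * x / N)   # fringe at bin 10

mask = (x % 2 == 0).astype(float)               # one Bayer plane
spectrum_full = np.abs(np.fft.fft(hologram))
spectrum_bayer = np.abs(np.fft.fft(hologram * mask))

replica_bin = 10 + N // 2                       # shifted copy appears here
print(spectrum_full[replica_bin] < 1e-6, spectrum_bayer[replica_bin] > 1)
# → True True
```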

  7. Lensless Photoluminescence Hyperspectral Camera Employing Random Speckle Patterns.

    Czech Academy of Sciences Publication Activity Database

    Žídek, Karel; Denk, Ondřej; Hlubuček, Jiří

    2017-01-01

    Roč. 7, č. 1 (2017), č. článku 15309. ISSN 2045-2322 R&D Projects: GA MŠk(CZ) LO1206; GA ČR(CZ) GJ17-26284Y Institutional support: RVO:61389021 Keywords : compressed sensing * photoluminescence imaging * laser speckles * single-pixel camera Subject RIV: BH - Optics, Masers, Lasers OBOR OECD: Optics (including laser optics and quantum optics) Impact factor: 4.259, year: 2016 https://www.nature.com/articles/s41598-017-14443-4

  8. Micro Fourier Transform Profilometry (μFTP): 3D shape measurement at 10,000 frames per second

    Science.gov (United States)

    Zuo, Chao; Tao, Tianyang; Feng, Shijie; Huang, Lei; Asundi, Anand; Chen, Qian

    2018-03-01

    Fringe projection profilometry is a well-established technique for optical 3D shape measurement. However, in many applications it is desirable to make 3D measurements at very high speed, especially with fast-moving or shape-changing objects. In this work, we demonstrate a new 3D dynamic imaging technique, Micro Fourier Transform Profilometry (μFTP), which can achieve an acquisition rate of up to 10,000 3D frames per second (fps). The high measurement speed is achieved by reducing the number of projected patterns as well as by high-speed fringe projection hardware. In order to capture 3D information in such a short period of time, we focus on improving the phase recovery, phase unwrapping, and error compensation algorithms, allowing an accurate, unambiguous, and distortion-free 3D point cloud to be reconstructed from every two projected patterns. We also develop high-frame-rate fringe projection hardware by pairing a high-speed camera with a DLP projector, enabling binary pattern switching and precisely synchronized image capture at a frame rate of up to 20,000 fps. Based on this system, we demonstrate high-quality textured 3D imaging of four transient scenes: vibrating cantilevers, rotating fan blades, a flying bullet, and a bursting balloon, which were previously difficult or even impossible to capture with conventional approaches.
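
    The quoted rates are mutually consistent: with every two projected patterns yielding one reconstruction, the 20,000 fps projection hardware supports the stated 3D acquisition rate.

```python
# Two projected patterns per 3D reconstruction at the 20,000 fps
# pattern-switching rate give the stated 3D frame rate.
pattern_rate = 20_000       # binary patterns projected per second
patterns_per_frame = 2      # patterns per reconstructed point cloud
rate_3d = pattern_rate // patterns_per_frame
print(rate_3d)   # → 10000
```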

  9. Selecting a digital camera for telemedicine.

    Science.gov (United States)

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  10. Tomographic Small-Animal Imaging Using a High-Resolution Semiconductor Camera

    Science.gov (United States)

    Kastis, GA; Wu, MC; Balzer, SJ; Wilson, DW; Furenlid, LR; Stevenson, G; Barber, HB; Barrett, HH; Woolfenden, JM; Kelly, P; Appleby, M

    2015-01-01

    We have developed a high-resolution, compact semiconductor camera for nuclear medicine applications. The modular unit has been used to obtain tomographic images of phantoms and mice. The system consists of a 64×64 CdZnTe detector array and a parallel-hole tungsten collimator mounted inside a 17 cm × 5.3 cm × 3.7 cm tungsten-aluminum housing. The detector is a 2.5 cm × 2.5 cm × 0.15 cm slab of CdZnTe connected to a 64×64 multiplexer readout via indium-bump bonding. The collimator is 7 mm thick, with a 0.38 mm pitch that matches the detector pixel pitch. We obtained a series of projections by rotating the object in front of the camera. The axis of rotation was vertical and about 1.5 cm away from the collimator face. Mouse holders were made out of acrylic plastic tubing to facilitate rotation and the administration of gas anesthetic. Acquisition times were varied from 60 sec to 90 sec per image for a total of 60 projections at an equal spacing of 6 degrees between projections. We present tomographic images of a line phantom and a mouse bone scan and assess the properties of the system. The reconstructed images demonstrate spatial resolution on the order of 1–2 mm. PMID:26568676
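
    The acquisition figures in the abstract can be checked directly: 60 projections at 6° spacing cover a full rotation, and the per-projection exposure implies the total scan time.

```python
# Geometry and timing from the abstract: full rotation coverage and
# total scan time at the two quoted per-projection exposures.
n_proj, step_deg = 60, 6
full_rotation = n_proj * step_deg            # degrees
scan_min_60s = n_proj * 60 / 60              # minutes at 60 s/projection
scan_min_90s = n_proj * 90 / 60              # minutes at 90 s/projection
print(full_rotation, scan_min_60s, scan_min_90s)   # → 360 60.0 90.0
```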

  11. Lane detection algorithm for an onboard camera

    Science.gov (United States)

    Bellino, Mario; Lopez de Meneses, Yuri; Ryser, Peter; Jacot, Jacques

    2005-02-01

    After analysing the major causes of injuries and death on roads, it is understandable that one of the main goals of the automotive industry is to increase vehicle safety. The European project SPARC (Secure Propulsion using Advanced Redundant Control) is developing the next generation of trucks that will fulfil these aims. The main technologies that will be used in the SPARC project to achieve the desired level of safety are presented. In order to avoid accidents in critical situations, it is necessary to have a representation of the vehicle's environment. Thus, several solutions using different sensors are described and analysed. In particular, one part of this project aims to integrate cameras into automotive vehicles to increase security and prevent driver mistakes. Indeed, with this vision platform it is possible to extract the position of the lane with respect to the vehicle and thus help the driver follow the optimal trajectory. A definition of a lane is proposed, and a lane detection algorithm is presented. In order to improve the detection, several criteria are explained and detailed. Unfortunately, such an embedded camera is subject to the vibration of the truck, and the resulting sequence of images is difficult to analyse. Thus, we present different solutions to stabilize the images, and in particular a new approach developed by the "Laboratoire de Production Microtechnique". Indeed, it was demonstrated in previous work that the presence of noise can be exploited through a phenomenon called Stochastic Resonance. Thus, instead of suppressing the influence of noise in industrial applications, which has non-negligible costs, it may be interesting to use this phenomenon to reveal useful information, such as the contours of objects and lanes.
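
    The stochastic resonance idea mentioned at the end can be demonstrated with a toy threshold detector: a subthreshold signal never crosses the threshold on its own, but with moderate added noise the crossings correlate with the signal. All parameters here are illustrative, not from the paper.

```python
import numpy as np

# Toy stochastic-resonance demo: a sine below a hard threshold is
# invisible without noise, but moderate noise makes the threshold
# crossings track the signal. Parameters are illustrative only.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 2000)
signal = 0.8 * np.sin(t)                 # amplitude below the threshold
threshold = 1.0

def detections(noise_std):
    noisy = signal + rng.normal(0.0, noise_std, t.size)
    return (noisy > threshold).astype(float)

quiet = detections(0.0)                  # no noise: no crossings at all
noisy = detections(0.5)                  # moderate noise: signal leaks through
corr = np.corrcoef(noisy, signal)[0, 1]
print(quiet.sum(), corr > 0.2)
```

    With no noise the detector output is identically zero; with moderate noise the crossing pattern carries a clearly positive correlation with the hidden signal.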

  12. Piloting a community-based micro-hydro power generation project

    International Nuclear Information System (INIS)

    Buenafe, Menandro B.; Eponio, Melchor P.

    1998-01-01

    A community-based micro-hydro power generation project was successfully piloted in Dulao, Malibcong, Abra. The project started with the identification and evaluation of five potential creeks flowing near villages in the Cordillera hinterlands. All the sites showed comparable hydrologic features except for one factor that decided the project's implementation: the willingness of the people to invest by providing their labor counterpart. On this account, only the residents of Dulao put their full trust in the implementing institutions, the main reason for the project's success. The micro-hydro power project consisted of an earthen diversion canal that conveyed part of the streamflow into a forebay located above the powerhouse. The forebay was built of riprap and concrete and equipped with a desilting chamber, trashrack, spillway, and an overflow canal that directed water to the ricefields downstream. A polyethylene-vinyl penstock was laid underground along the slope, from the forebay to the powerhouse. The penstock assumed a Y-configuration inside the powerhouse, where the two crossflow turbines were separately mounted on each arm. Two butterfly valves, one positioned just before each turbine, allowed flow to be alternately controlled for the two machines. A tailrace drained the discharge from the turbines back to the same creek. Originally, the setup could only operate the 3 kW turbine that ran the ricemill by means of a flat belt drive. Upon further hydrologic study, an 8 kW crossflow turbine was installed to drive a 7.5 kVA, two-pole, single-phase alternator. The 8 kW turbine can operate under three design flows, namely 20, 40, and 60 liters per second. The turbine-alternator coupling was achieved by a pulley and belt drive arrangement. Typically, the AC generator was provided with monitoring instruments such as a voltmeter, frequency meter, and ammeter. An electronic load controller (ELC) was observed to effectively protect the alternator from runaway speeds, over
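
    The abstract gives the 8 kW unit's design flows but not its head; under an assumed overall efficiency, the hydraulic power relation P = ρgQHη indicates the head such a site would need. Both the efficiency and the resulting head are assumptions for illustration.

```python
# Required head for the 8 kW unit at its largest design flow, from
# P = rho * g * Q * H * eta. Neither the head nor the efficiency is
# given in the abstract; eta = 0.6 is an assumed overall efficiency.
rho, g = 1000.0, 9.81     # water density (kg/m^3), gravity (m/s^2)
P = 8000.0                # electrical output, W
Q = 0.060                 # largest design flow, m^3/s (60 L/s)
eta = 0.6                 # assumed turbine + generator efficiency

H = P / (rho * g * Q * eta)
print(f"required head = {H:.1f} m")
```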

  13. Thermal engineering and micro-technology; Thermique et microtechnologie

    Energy Technology Data Exchange (ETDEWEB)

    Kandlikar, S. [Rochester Inst. of Tech., NY (United States); Luo, L. [Institut National Polytechnique, 54 - Nancy (France); Gruss, A. [CEA Grenoble, GRETH, 38 (France); Wautelet, M. [Mons Univ. (Belgium); Gidon, S. [CEA Grenoble, Lab. d' Electronique et de Technologie de l' Informatique (LETI), 38 (France); Gillot, C. [Ecole Nationale Superieure d' Ingenieurs Electriciens de Grenoble, 38 - Saint Martin d' Heres (France)]|[CEA Grenoble, Lab. Electronique et de Technologie de l' Informatique (LETI), 38 (France); Therme, J.; Marvillet, Ch.; Vidil, R. [CEA Grenoble, 38 (France); Dutartre, D. [ST Microelectronique, France (France); Lefebvre, Ph. [SNECMA, 75 - Paris (France); Lallemand, M. [Institut National des Sciences Appliquees (INSA), 69 - Villeurbanne (France); Colin, S. [Institut National des Sciences Appliquees (INSA), 31 - Toulouse (France); Joulin, K. [Ecole Nationale Superieure de Mecanique et d' Aerotechnique (ENSMA), 86 - Poitiers (France); Gad el Hak, M. [Virginia Univ., Charlottesville, VA (United States)

    2003-07-01

    This document gathers the abstracts and transparencies of 5 invited conferences of this congress of the SFT about heat transfers and micro-technologies: Flow boiling in microchannels: non-dimensional groups and heat transfer mechanisms (S. Kandlikar); Intensification and multi-scale process units (L. Luo and A. Gruss); Macro-, micro- and nano-systems: different physics? (M. Wautelet); micro-heat pipes (M. Lallemand); liquid and gas flows inside micro-ducts (S. Colin). The abstracts of the following presentations are also included: Electro-thermal writing of nano-scale memory points in a phase change material (S. Gidon); micro-technologies for cooling in micro-electronics (C. Gillot); the Minatec project (J. Therme); importance and trends of thermal engineering in micro-electronics (D. Dutartre); Radiant heat transfers at short length scales (K. Joulain); Momentum and heat transfer in micro-electromechanical systems (M. Gad-el-Hak). (J.S.)

  14. Direct cone beam SPECT reconstruction with camera tilt

    International Nuclear Information System (INIS)

    Jianying Li; Jaszczak, R.J.; Greer, K.L.; Coleman, R.E.; Zongjian Cao; Tsui, B.M.W.

    1993-01-01

    A filtered backprojection (FBP) algorithm is derived to perform cone beam (CB) single-photon emission computed tomography (SPECT) reconstruction with camera tilt using circular orbits. This algorithm reconstructs the tilted-angle CB projection data directly by incorporating the tilt angle into the reconstruction. When the tilt angle becomes zero, the algorithm reduces to that of Feldkamp. Experimentally acquired phantom studies using both a two-point source and the three-dimensional Hoffman brain phantom have been performed. The transaxial tilted cone beam brain images and profiles obtained using the new algorithm are compared with those obtained without camera tilt. For those slices which have approximately the same distance from the detector in both the tilted and non-tilted set-ups, the two transaxial reconstructions have similar profiles. The two-point source images reconstructed with the new algorithm and the tilted cone beam brain images are also compared with those reconstructed with the existing tilted cone beam algorithm. (author)

  15. Motionless active depth from defocus system using smart optics for camera autofocus applications

    Science.gov (United States)

    Amin, M. Junaid; Riza, Nabeel A.

    2016-04-01

    This paper describes a motionless active Depth from Defocus (DFD) system design suited for long working range camera autofocus applications. The design consists of an active illumination module that projects a scene illuminating coherent conditioned optical radiation pattern which maintains its sharpness over multiple axial distances allowing an increased DFD working distance range. The imager module of the system responsible for the actual DFD operation deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration is conducted in the laboratory which compares the effectiveness of the coherent conditioned radiation module versus a conventional incoherent active light source, and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited for camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example, in the tiny camera housings in smartphones and tablets. Applications for the proposed system include autofocus in modern day digital cameras.

  16. Comparison of the effectiveness of three retinal camera technologies for malarial retinopathy detection in Malawi

    Science.gov (United States)

    Soliz, Peter; Nemeth, Sheila C.; Barriga, E. Simon; Harding, Simon P.; Lewallen, Susan; Taylor, Terrie E.; MacCormick, Ian J.; Joshi, Vinayak S.

    2016-03-01

    The purpose of this study was to test the suitability of three available camera technologies (desktop, portable, and iPhone-based) for imaging comatose children who presented with clinical symptoms of malaria. Ultimately, the results of the project would form the basis for the design of a future camera to screen for malarial retinopathy (MR) in a resource-challenged environment. The desktop, portable, and iPhone-based cameras were represented by the Topcon, Pictor Plus, and Peek cameras, respectively. These cameras were tested on N=23 children presenting with symptoms of cerebral malaria (CM) at a malaria clinic, Queen Elizabeth Teaching Hospital in Malawi, Africa. Each patient was dilated for a binocular indirect ophthalmoscopy (BIO) exam by an ophthalmologist, followed by imaging with all three cameras. Each of the cases was graded according to an internationally established protocol and compared to the BIO as the clinical ground truth. The reader used three principal retinal lesions as markers for MR: hemorrhages, retinal whitening, and vessel discoloration. The study found that the mid-priced Pictor Plus hand-held camera performed considerably better than the lower-priced mobile-phone-based camera, and slightly better than the higher-priced tabletop camera. When comparing the readings of digital images against the clinical reference standard (BIO), the Pictor Plus camera had a sensitivity and specificity for MR of 100% and 87%, respectively. This compares to a sensitivity and specificity of 87% and 75% for the iPhone-based camera and 100% and 75% for the desktop camera. The drawback of all the cameras was their limited field of view, which did not allow a complete view of the periphery, where vessel discoloration occurs most frequently. The consequence was that vessel discoloration was not addressed in this study. None of the cameras offered real-time image quality assessment to ensure high quality images to afford the best possible opportunity for reading by a remotely located
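
    Sensitivity and specificity here compare each camera's reading against the BIO reference; a minimal helper shows the computation. The confusion-matrix counts below are hypothetical, chosen only to reproduce the Pictor Plus percentages, since the study reports percentages rather than counts.

```python
# Sensitivity/specificity as used to score each camera against the BIO
# reference standard. The counts are hypothetical illustrations; the
# study reports only the resulting percentages.
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

tp, fn, tn, fp = 8, 0, 13, 2   # hypothetical counts summing to N=23
print(f"sens = {sensitivity(tp, fn):.0%}, spec = {specificity(tn, fp):.0%}")
# → sens = 100%, spec = 87%
```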

  17. Automatic locking radioisotope camera lock

    International Nuclear Information System (INIS)

    Rosauer, P.J.

    1978-01-01

    The lock of the present invention secures the isotope source in a stored shielded condition in the camera until a positive effort has been made to open the lock and take the source outside of the camera and prevents disconnection of the source pigtail unless the source is locked in a shielded condition in the camera. It also gives a visual indication of the locked or possible exposed condition of the isotope source and prevents the source pigtail from being completely pushed out of the camera, even when the lock is released. (author)

  18. Acceptance/Operational Test Report for Tank 241-AN-104 camera and camera purge control system

    International Nuclear Information System (INIS)

    Castleberry, J.L.

    1995-11-01

    This Acceptance/Operational Test Procedure (ATP/OTP) will document the satisfactory operation of the camera purge panel, purge control panel, color camera system, and associated control components destined for installation. The final acceptance of the complete system will be performed in the field. The purge panel and purge control panel will be tested to verify that the safety interlock shuts down the camera and pan-and-tilt inside the tank vapor space upon loss of purge pressure, and that the correct purge volume exchanges are performed as required by NFPA 496. This procedure is separated into seven sections. This Acceptance/Operational Test Report documents the successful acceptance and operability testing of the 241-AN-104 camera system and camera purge control system

  19. Development of an ultra-fast X-ray camera using hybrid pixel detectors

    International Nuclear Information System (INIS)

    Dawiec, A.

    2011-05-01

    The aim of the project of which the work described in this thesis forms a part was to design a high-speed X-ray camera using hybrid pixels, applied to biomedical imaging and materials science. Indeed, hybrid pixel technology meets the requirements of these two research fields, particularly by providing energy selection and low-dose imaging capabilities. In this thesis, high-frame-rate X-ray imaging based on the XPAD3-S photon counting chip is presented. Within a collaboration between CPPM, ESRF and SOLEIL, three XPAD3 cameras were built. Two of them are being operated at beamlines of the ESRF and SOLEIL synchrotron facilities, and the third one is embedded in the PIXSCAN II irradiation setup at CPPM. The XPAD3 camera is a large-surface X-ray detector composed of eight detection modules of seven XPAD3-S chips each, with a high-speed data acquisition system. The readout architecture of the camera is based on the PCI Express interface and on programmable FPGA chips. The camera achieves a readout speed of 240 images/s, with the maximum number of images limited by the RAM of the acquisition PC. The performance of the device was characterized by carrying out several high-speed imaging experiments using the PIXSCAN II irradiation setup described in the last chapter of this thesis. (author)

  20. Development of a time projection chamber with micro-pixel electrodes

    International Nuclear Information System (INIS)

    Kubo, Hidetoshi; Miuchi, Kentaro; Nagayoshi, Tsutomu; Ochi, Atsuhiko; Orito, Reiko; Takada, Atsushi; Tanimori, Toru; Ueno, Masaru

    2003-01-01

    A time projection chamber (TPC) based on a gaseous chamber with micro-pixel electrodes (μ-PIC) has been developed for measuring three-dimensional tracks of charged particles. The μ-PIC, with a detection area of 10×10 cm², consists of a double-sided printed circuit board. Anode pixels are formed with 0.4 mm pitch on strips aligned perpendicular to the cathode strips in order to obtain a two-dimensional position. In the TPC, with a drift length of 8 cm, 4 mm wide field cage electrodes are aligned at 1 mm spacing and a uniform electric field of about 0.4 kV/cm is produced. For encoding the three-dimensional position, a synchronous readout system has been developed using Field Programmable Gate Arrays with a 40 MHz clock. This system enables us to reconstruct the three-dimensional track of a particle at successive points, like a cloud chamber, even at a high event rate. The drift velocity of electrons in the TPC was measured with the tracks of cosmic muons over 3 days, during which the TPC worked stably with a gas gain of 3000. With a radioisotope gamma-ray source, the three-dimensional track of a Compton-scattered electron was taken successfully

  1. The micro turbine: the MIT example; La micro turbine: l'exemple du MIT

    Energy Technology Data Exchange (ETDEWEB)

    Ribaud, Y. [Office National d' Etudes et de Recherches Aerospatiales (ONERA-DEFA), 92 - Chatillon (France)

    2001-10-01

    The micro turbine study began a few years ago at MIT, with the participation of specialists from different fields. The purpose is the development of a MEMS (micro-electro-mechanical systems) based micro gas turbine, 1 cm in diameter. Potential applications include micro-drone propulsion, electric power generation for portable power sources in order to replace heavy lithium batteries, satellite motorization, and distributed surface power for boundary-layer suction on aircraft wings. The manufacturing constraints at such small scales lead to 2-D extruded shapes. The physical constraints stem from viscous effects and from limitations imposed by the 2-D geometry. The time scales are generally shorter than for conventional machines. On the other hand, material properties are better at such length scales. Transposition of conventional turbomachinery laws is no longer applicable and new design methods must be established. The present paper highlights the project's progress and the technology breakthroughs. (author)

  2. MicroShell Minimalist Shell for Xilinx Microprocessors

    Science.gov (United States)

    Werne, Thomas A.

    2011-01-01

    MicroShell is a lightweight shell environment for engineers and software developers working with embedded microprocessors in Xilinx FPGAs. (MicroShell has also been successfully ported to run on ARM Cortex-M1 microprocessors in Actel ProASIC3 FPGAs, but without project-integration support.) MicroShell decreases the time spent performing initial tests of field-programmable gate array (FPGA) designs, simplifies running customizable one-time-only experiments, and provides a familiar-feeling command-line interface. The program comes with a collection of useful functions and enables the designer to add an unlimited number of custom commands, which are callable from the command line. The commands are parameterizable (using the C-based command-line parameter idiom), so the designer can use one function to exercise hardware with different values. Also, since many hardware peripherals instantiated in FPGAs have reasonably simple register-mapped I/O interfaces, the engineer can edit and view hardware parameter settings at any time without stopping the processor. MicroShell comes with a set of support scripts that interface seamlessly with Xilinx's EDK tool. Adding an instance of MicroShell to a project is as simple as marking a check box in a library configuration dialog box and specifying a software project directory. The support scripts then examine the hardware design, build design-specific functions, conditionally include processor-specific functions, and complete the compilation process. For code-size-constrained designs, most of the stock functionality can be excluded from the compiled library. When all of the configurable options are removed from the binary, MicroShell has an unoptimized memory footprint of about 4.8 kB and a size-optimized footprint of about 2.3 kB. Since MicroShell allows unfettered access to all processor-accessible memory locations, it is possible to perform live patching on a running system. This can be useful, for instance, if a bug is
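
    The register-and-dispatch pattern the abstract describes can be sketched conceptually. MicroShell itself is C running on an embedded processor, so this Python sketch, with hypothetical `peek`/`poke` commands backed by a plain dict, only mirrors the idea of parameterizable, command-line-callable functions.

```python
# Conceptual sketch (Python, not MicroShell's embedded C) of a command
# table with parameterizable commands callable from a command line.
# peek/poke and the dict-backed "memory" are hypothetical stand-ins.
COMMANDS = {}
MEMORY = {}

def command(fn):
    """Register fn under its own name, like adding a shell built-in."""
    COMMANDS[fn.__name__] = fn
    return fn

@command
def peek(addr):
    """Read a 'register'; int(addr, 0) accepts hex like 0x10."""
    return MEMORY.get(int(addr, 0), 0)

@command
def poke(addr, value):
    """Write a 'register'."""
    MEMORY[int(addr, 0)] = int(value, 0)

def dispatch(line):
    """Parse 'name arg1 arg2 ...' and invoke the registered command."""
    name, *args = line.split()
    return COMMANDS[name](*args)

dispatch("poke 0x10 42")
print(dispatch("peek 0x10"))   # → 42
```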

  3. Robust Pose Estimation using the SwissRanger SR-3000 Camera

    DEFF Research Database (Denmark)

    Gudmundsson, Sigurjon Arni; Larsen, Rasmus; Ersbøll, Bjarne Kjær

    2007-01-01

    In this paper a robust method is presented to classify and estimate an object's pose from a real-time range image and a low-dimensional model. The model is made from a range image training set whose dimensionality is reduced by a nonlinear manifold learning method named Local Linear Embedding (LLE). New range images are then projected onto this model, giving the low-dimensional coordinates of the object pose in an efficient manner. The range images are acquired by a state-of-the-art SwissRanger SR-3000 camera, making the projection process work in real time.

  4. Video camera use at nuclear power plants

    International Nuclear Information System (INIS)

    Estabrook, M.L.; Langan, M.O.; Owen, D.E.

    1990-08-01

    A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations and to determine the equipment used and the benefits realized. Basic closed-circuit television (CCTV) camera systems are described and video camera operating principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposable cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use, mainly reduced radiation exposure and increased productivity, are discussed and quantified. 15 refs., 6 figs.

  5. The development of large-aperture test system of infrared camera and visible CCD camera

    Science.gov (United States)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Infrared-camera and CCD-camera dual-band imaging systems are widely used in many types of equipment and applications. If such a system is tested with a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed. The combined large-aperture test system for infrared cameras and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame grabber and computer, which reduces both the cost and the time spent on installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position as the environmental temperature changes, improving both the image quality of the wide-field collimator and the test accuracy. Its performance matches that of foreign counterparts at a much lower cost, so it should find a good market.

  6. An innovative silicon photomultiplier digitizing camera for gamma-ray astronomy

    Czech Academy of Sciences Publication Activity Database

    Heller, M.; Schioppa, E.jr.; Porcelli, A.; Pujadas, I.T.; Zietara, K.; della Volpe, D.; Montaruli, T.; Cadoux, F.; Favre, Y.; Aguilar, J.A.; Christov, A.; Prandini, E.; Rajda, P.; Rameez, M.; Bilnik, W.; Blocki, J.; Bogacz, L.; Borkowski, J.; Bulik, T.; Frankowski, A.; Grudzinska, M.; Idzkowski, B.; Jamrozy, M.; Janiak, M.; Kasperek, J.; Lalik, K.; Lyard, E.; Mach, E.; Mandát, Dušan; Marszalek, A.; Medina Miranda, L. D.; Michałowski, J.; Moderski, R.; Neronov, A.; Niemiec, J.; Ostrowski, M.; Pasko, P.; Pech, Miroslav; Schovánek, Petr; Seweryn, K.; Sliusar, V.; Skowron, K.; Stawarz, L.; Stodulska, M.; Stodulski, M.; Walter, R.; Wiecek, M.; Zagdanski, A.

    2017-01-01

    Vol. 77, No. 1 (2017), p. 1-31, article No. 47. ISSN 1434-6044 R&D Projects: GA MŠk LE13012; GA MŠk LG14019 Institutional support: RVO:68378271 Keywords: silicon photomultiplier * digitizing camera * gamma-ray astronomy Subject RIV: BF - Elementary Particles and High Energy Physics OBOR OECD: Particles and field physics Impact factor: 5.331, year: 2016

  7. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras

    Directory of Open Access Journals (Sweden)

    Yajie Liao

    2017-06-01

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at the joint calibration of multiple sensors (more than four devices), which is often a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method achieves satisfactory performance in a practical real-time system, with accuracy higher than that of the manufacturer's calibration.

  8. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.

    Science.gov (United States)

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-06-24

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at the joint calibration of multiple sensors (more than four devices), which is often a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method achieves satisfactory performance in a practical real-time system, with accuracy higher than that of the manufacturer's calibration.
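
    The weighted joint-optimization idea can be illustrated with a toy 2-D version, which is not the paper's actual formulation: the rigid transforms of two hypothetical external cameras relative to a Kinect are recovered in a single least-squares problem whose residuals carry per-camera weights.

```python
import numpy as np
from scipy.optimize import least_squares


def transform(params, pts):
    """Apply a 2-D rigid transform (theta, tx, ty) to a set of points."""
    theta, tx, ty = params
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return pts @ R.T + np.array([tx, ty])


# Shared calibration points expressed in the Kinect frame.
kinect_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_poses = [(0.3, 0.5, -0.2), (-0.1, 1.0, 0.4)]          # ground truth
cam_obs = [transform(p, kinect_pts) for p in true_poses]   # each camera's view
weights = [1.0, 2.0]   # trust the second camera's observations more


def residuals(x):
    """Stack the weighted reprojection residuals of both cameras."""
    res = []
    for i, (obs, w) in enumerate(zip(cam_obs, weights)):
        pred = transform(x[3 * i:3 * i + 3], kinect_pts)
        res.append(np.sqrt(w) * (pred - obs).ravel())
    return np.concatenate(res)


sol = least_squares(residuals, x0=np.zeros(6))
print(np.round(sol.x, 3))  # recovers the two true poses
```

    Solving for all poses in one cost function, rather than calibrating each camera pair independently, is what lets the weights trade off accuracy between cameras in different locations.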

  9. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. Tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed from two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed in terms of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on an analysis of the current progress made toward human tracking techniques over camera networks.

  10. Microprocessor-controlled wide-range streak camera

    Science.gov (United States)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  11. Microprocessor-controlled, wide-range streak camera

    International Nuclear Information System (INIS)

    Amy E. Lewis; Craig Hollabaugh

    2006-01-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized

  12. X-ray image intensifier camera tubes and semiconductor targets

    International Nuclear Information System (INIS)

    1979-01-01

    A semiconductor target for use in an image intensifier camera tube, and a camera using the target, are described. The semiconductor wafer for converting an electron image into an electrical signal consists mainly of a collector region, preferably n-type silicon. It has one side for receiving the electron image and an opposite side for storing charge carriers generated in the collector region by the high-energy electrons forming a charge image. The first side comprises a highly doped surface layer covered with a metal buffer layer permeable to the incident electrons and thick enough to dissipate some of the incident electron energy, thereby improving the signal-to-noise ratio. This layer comprises beryllium on niobium on the highly doped silicon surface zone. Low-energy Kα X-ray radiation is generated in the first layer; the radiation generated in the second layer (mainly Lα radiation) is strongly absorbed in the silicon layer. A camera tube using such a target, with a photocathode for converting an X-ray image into an electron image, means to project this image onto the first side of the semiconductor wafer, and means to read out the charge pattern on the second side, is also described. (U.K.)

  13. MicroCuenca, una apuesta por la lectura

    OpenAIRE

    González Merchán, Byron Darío

    2015-01-01

    MicroCuenca, a commitment to reading, is a project that seeks to promote reading through short stories, specifically micro-stories. It publishes texts and photographs, abstract and concrete, that revolve around the city of Cuenca and its people.

  14. Electronics for the camera of the First G-APD Cherenkov Telescope (FACT) for ground based gamma-ray astronomy

    International Nuclear Information System (INIS)

    Anderhub, H; Biland, A; Boller, A; Braun, I; Commichau, V; Djambazov, L; Dorner, D; Gendotti, A; Grimm, O; Gunten, H P von; Hildebrand, D; Horisberger, U; Huber, B; Kim, K-S; Krähenbühl, T; Backes, M; Köhne, J-H; Krumm, B; Bretz, T; Farnier, C

    2012-01-01

    Within the FACT project, we construct a new type of camera based on Geiger-mode avalanche photodiodes (G-APDs). Compared to photomultipliers, G-APDs are more robust, need a lower operation voltage and have the potential of higher photon-detection efficiency and lower cost, but were never fully tested in the harsh environments of Cherenkov telescopes. The FACT camera consists of 1440 G-APD pixels and readout channels, based on the DRS4 (Domino Ring Sampler) analog pipeline chip and commercial Ethernet components. Preamplifiers, trigger system, digitization, slow control and power converters are integrated into the camera.

  15. Towards Adaptive Virtual Camera Control In Computer Games

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, we cluster gaze and camera information to identify camera behaviour profiles, and we employ … camera control in games is discussed.

  16. Micro Machining Enhances Precision Fabrication

    Science.gov (United States)

    2007-01-01

    Advanced thermal systems developed for the Space Station Freedom project are now in use on the International Space Station. These thermal systems employ evaporative ammonia as their coolant, and though they employ the same series of chemical reactions as terrestrial refrigerators, the space-bound coolers are significantly smaller. Two Small Business Innovation Research (SBIR) contracts between Creare Inc. of Hanover, NH and Johnson Space Center developed an ammonia evaporator for thermal management systems aboard Freedom. The principal investigator for Creare Inc. formed Mikros Technologies Inc. to commercialize the work. Mikros Technologies then developed an advanced form of micro-electrical discharge machining (micro-EDM) to make tiny holes in the ammonia evaporator. Mikros Technologies has had great success applying this method to the fabrication of micro-nozzle array systems for industrial ink jet printing systems. The company is currently the world leader in fabrication of stainless steel micro-nozzles for this market, and in 2001 the company was awarded two SBIR research contracts from Goddard Space Flight Center to advance micro-fabrication and high-performance thermal management technologies.

  17. Positioning the laparoscopic camera with industrial robot arm

    DEFF Research Database (Denmark)

    Capolei, Marie Claire; Wu, Haiyan; Andersen, Nils Axel

    2017-01-01

    This paper introduces a solution for movement control of the laparoscopic camera employing a teleoperated robotic assistant. The project proposes an autonomous robotic solution based on an industrial manipulator, provided with modular software that is applicable at large scale. An industrial robot arm is designated to accomplish this manipulation task. The software is implemented in ROS in order to facilitate future extensions. The experimental results show a manipulator capable of moving the surgical tool quickly and smoothly around a remote center of motion.

  18. Upgrading of analogue gamma cameras with PC based computer system

    International Nuclear Information System (INIS)

    Fidler, V.; Prepadnik, M.

    2002-01-01

    Full text: Dedicated nuclear medicine computers for the acquisition and processing of images from analogue gamma cameras in developing countries are in many cases faulty and technologically obsolete. The aim of this International Atomic Energy Agency (IAEA) upgrading project was to support the development of a PC-based computer system costing $5,000 in total. Several research institutions from different countries (China, Cuba, India and Slovenia) were financially supported in this development. The basic requirements for the system were: one acquisition card on an ISA bus, image resolution up to 256x256, SVGA graphics, low count loss at high count rates, standard acquisition and clinical protocols incorporated in PIP (Portable Image Processing), on-line energy and uniformity correction, graphic printing and networking. The most functionally stable acquisition system, tested at several international workshops and university clinics, was the Slovenian one, with a complete set of acquisition and clinical protocols, transfer of scintigraphic data from the acquisition card to the PC through PORT, count loss of less than 1% at a count rate of 120 kc/s, improvement of the integral uniformity index by a factor of 3-5, and reporting, networking and archiving solutions for simple MS networks or server-oriented network systems (NT server, etc.). More than 300 gamma cameras in 52 countries were digitized and put into routine work. The project of upgrading analogue gamma cameras greatly promoted nuclear medicine in the developing countries by replacing old computer systems, improving the technical knowledge of end users through workshops and training courses, and lowering the maintenance costs of the departments. (author)
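
    The on-line uniformity correction mentioned above is typically a flood-field correction: an acquisition of a uniform ("flood") source captures the camera's spatial non-uniformity, and its inverse is applied to clinical images. The following is a minimal sketch on synthetic data, illustrative only and not the Slovenian system's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
sensitivity = 1.0 + 0.2 * rng.random((64, 64))   # per-pixel detector non-uniformity
flood = 1000.0 * sensitivity                     # flood-source acquisition
correction = flood.mean() / flood                # per-pixel correction map

raw = 500.0 * sensitivity                        # clinical image with same defect
corrected = raw * correction                     # apply the correction


def integral_uniformity(img):
    """NEMA-style integral uniformity: (max - min) / (max + min)."""
    return (img.max() - img.min()) / (img.max() + img.min())


print(integral_uniformity(raw) > integral_uniformity(corrected))  # True
```

    In this idealized model the correction removes the non-uniformity entirely; on a real camera, counting statistics in the flood image limit the achievable improvement, which is why the record reports a factor of 3-5 rather than perfection.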

  19. Investigating the micro-rheology of the vitreous humor using an optically trapped local probe

    Science.gov (United States)

    Watts, Fiona; Ean Tan, Lay; Wilson, Clive G.; Girkin, John M.; Tassieri, Manlio; Wright, Amanda J.

    2014-01-01

    We demonstrate that an optically trapped silica bead can be used as a local probe to measure the micro-rheology of the vitreous humor. The Brownian motion of the bead was observed using a fast camera and the micro-rheology determined by analysis of the time-dependent mean-square displacement of the bead. We observed regions of the vitreous that showed different degrees of viscoelasticity, along with the homogeneous and inhomogeneous nature of different regions. The motivation behind this study is to understand the vitreous structure, in particular changes due to aging, allowing more confident prediction of pharmaceutical drug behavior and delivery within the vitreous humor.

  20. Investigating the micro-rheology of the vitreous humor using an optically trapped local probe

    International Nuclear Information System (INIS)

    Watts, Fiona; Wright, Amanda J; Tan, Lay Ean; Wilson, Clive G; Girkin, John M; Tassieri, Manlio

    2014-01-01

    We demonstrate that an optically trapped silica bead can be used as a local probe to measure the micro-rheology of the vitreous humor. The Brownian motion of the bead was observed using a fast camera and the micro-rheology determined by analysis of the time-dependent mean-square displacement of the bead. We observed regions of the vitreous that showed different degrees of viscoelasticity, along with the homogeneous and inhomogeneous nature of different regions. The motivation behind this study is to understand the vitreous structure, in particular changes due to aging, allowing more confident prediction of pharmaceutical drug behavior and delivery within the vitreous humor. (paper)
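
    The time-dependent mean-square-displacement (MSD) analysis described in this record can be sketched as follows, using a simulated Brownian trajectory in place of real bead-tracking data. For a freely diffusing bead the MSD grows linearly with lag time; viscoelastic regions of the vitreous show up as deviations from that line.

```python
import numpy as np

rng = np.random.default_rng(2)
steps = rng.normal(0, 0.01, size=(10000, 2))   # simulated 2-D Brownian steps
traj = np.cumsum(steps, axis=0)                # bead trajectory x(t), y(t)


def msd(traj, max_lag):
    """Time-averaged MSD over lag times 1..max_lag-1 frames."""
    lags = np.arange(1, max_lag)
    values = np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                       for lag in lags])
    return lags, values


lags, m = msd(traj, 100)
print(m[39] / m[19])  # ~2: doubling the lag doubles the MSD for pure diffusion
```

    In a real experiment the trajectory would come from centroid tracking of the fast-camera images, and the MSD curve (rather than this single ratio) would be fitted to extract the local viscoelastic moduli.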

  1. Micro-motion Recognition of Spatial Cone Target Based on ISAR Image Sequences

    Directory of Open Access Journals (Sweden)

    Changyong Shu

    2016-04-01

    Full Text Available The accurate micro-motions recognition of spatial cone target is the foundation of the characteristic parameter acquisition. For this reason, a micro-motion recognition method based on the distinguishing characteristics extracted from the Inverse Synthetic Aperture Radar (ISAR sequences is proposed in this paper. The projection trajectory formula of cone node strong scattering source and cone bottom slip-type strong scattering sources, which are located on the spatial cone target, are deduced under three micro-motion types including nutation, precession, and spinning, and the correctness is verified by the electromagnetic simulation. By comparison, differences are found among the projection of the scattering sources with different micro-motions, the coordinate information of the scattering sources in the Inverse Synthetic Aperture Radar sequences is extracted by the CLEAN algorithm, and the spinning is recognized by setting the threshold value of Doppler. The double observation points Interacting Multiple Model Kalman Filter is used to separate the scattering sources projection of the nutation target or precession target, and the cross point number of each scattering source’s projection track is used to classify the nutation or precession. Finally, the electromagnetic simulation data are used to verify the effectiveness of the micro-motion recognition method.

  2. Micro-Mechanical Temperature Sensors

    DEFF Research Database (Denmark)

    Larsen, Tom

    Temperature is the most frequently measured physical quantity in the world. The field of thermometry is therefore constantly evolving towards better temperature sensors and better temperature measurements. The aim of this Ph.D. project was to improve an existing type of micro-mechanical temperature...... sensor or to develop a new one. Two types of micro-mechanical temperature sensors have been studied: Bilayer cantilevers and string-like beam resonators. Both sensor types utilize thermally generated stress. Bilayer cantilevers are frequently used as temperature sensors at the micro-scale, and the goal....... The reduced sensitivity was due to initial bending of the cantilevers and poor adhesion between the two cantilever materials. No further attempts were made to improve the sensitivity of bilayer cantilevers. The concept of using string-like resonators as temperature sensors has, for the first time, been...

  3. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-07-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a +-2-deg deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables

  4. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-04-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables
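
    The kinematic computation with a deadband described in these two records can be sketched as follows. This is an illustrative simplification under assumed conventions (a single 4x4 world-to-camera-mount transform, z forward, x right, y up); the actual system solves for mount-specific angles from manipulator joint sensors.

```python
import numpy as np


def pan_tilt(T_world_to_cam, target_world, deadband_deg=2.0,
             current_pan=0.0, current_tilt=0.0):
    """Compute commanded pan/tilt (degrees) for a target, with a deadband."""
    p = T_world_to_cam @ np.append(target_world, 1.0)  # homogeneous transform
    x, y, z = p[:3]
    pan = np.degrees(np.arctan2(x, z))
    tilt = np.degrees(np.arctan2(y, np.hypot(x, z)))
    # Bang-bang with deadband: only move if the error exceeds the deadband,
    # avoiding the continuous small motions that cause operator seasickness.
    new_pan = pan if abs(pan - current_pan) > deadband_deg else current_pan
    new_tilt = tilt if abs(tilt - current_tilt) > deadband_deg else current_tilt
    return new_pan, new_tilt


T = np.eye(4)  # camera-mount frame coincides with the world frame
print(pan_tilt(T, np.array([1.0, 0.0, 1.0])))  # pan ~45 deg, tilt stays at 0
```

    Because all quantities come from position sensors, no vision feedback is needed: the same transform chain that locates the manipulator end-effector also yields the camera pointing command.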

  5. Recent Results from NASA's Morphing Project

    Science.gov (United States)

    McGowan, Anna-Maria R.; Washburn, Anthony E.; Horta, Lucas G.; Bryant, Robert G.; Cox, David E.; Siochi, Emilie J.; Padula, Sharon L.; Holloway, Nancy M.

    2002-01-01

    The NASA Morphing Project seeks to develop and assess advanced technologies and integrated component concepts to enable efficient, multi-point adaptability in air and space vehicles. In the context of the project, the word "morphing" is defined as "efficient, multi-point adaptability" and may include macro, micro, structural and/or fluidic approaches. The project includes research on smart materials, adaptive structures, micro flow control, biomimetic concepts, optimization and controls. This paper presents an updated overview of the content of the Morphing Project including highlights of recent research results.

  6. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.; Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce images superior to those of conventional cameras used in nuclear medicine. The detector consists of a solid-state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable two-dimensional position identification. Details of the electronic processing circuits are given, and the problems and limitations introduced by noise are discussed in full. (U.K.)

  7. The eye of the camera: effects of security cameras on pro-social behavior

    NARCIS (Netherlands)

    van Rompay, T.J.L.; Vonk, D.J.; Fransen, M.L.

    2009-01-01

    This study addresses the effects of security cameras on prosocial behavior. Results from previous studies indicate that the presence of others can trigger helping behavior, arising from the need for approval of others. Extending these findings, the authors propose that security cameras can likewise …

  8. Passive auto-focus for digital still cameras and camera phones: Filter-switching and low-light techniques

    Science.gov (United States)

    Gamadia, Mark Noel

    In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add and improve features in their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which may lead to missed photographic opportunities, rendering valuable moments and events as undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state of the art in the digital band-pass filter, passive AF method. This method is widely used to realize AF in the camera industry, where a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time-consuming experimentation and tuning in order to arrive at a set of parameters which balance AF performance in terms of speed and accuracy, ultimately causing a delay in product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter-Switching AF, providing an automatic approach to achieving superior AF performance in both good and low lighting conditions based on the following performance measures (metrics): speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun). Performance results using three different prototype cameras …
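
    The sharpness-maximization search that passive AF is built on can be illustrated with a toy coarse-to-fine example. The scene, defocus model, and search increments below are all illustrative assumptions, and the "band-pass" sharpness measure is a simple first-difference filter rather than the dissertation's filters.

```python
import numpy as np


def blur(img, sigma):
    """Separable Gaussian blur standing in for optical defocus."""
    if sigma <= 0:
        return img
    k = np.exp(-0.5 * (np.arange(-10, 11) / sigma) ** 2)
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)


def sharpness(img):
    """High-frequency energy via horizontal first differences."""
    return float(np.sum(np.diff(img, axis=1) ** 2))


scene = np.zeros((32, 32))
scene[:, 16:] = 1.0            # a step edge: sharp only when in focus
in_focus = 50                  # "true" lens position (arbitrary units)


def capture(pos):
    """Simulated capture: defocus blur grows with distance from focus."""
    return blur(scene, abs(pos - in_focus) / 2.0)


# Coarse search over the full travel, then a fine search around the peak.
coarse = max(range(0, 101, 10), key=lambda p: sharpness(capture(p)))
fine = max(range(coarse - 9, coarse + 10), key=lambda p: sharpness(capture(p)))
print(fine)  # 50
```

    The two-stage search mirrors the speed/accuracy trade-off the dissertation measures: the coarse pass bounds the total number of iterations, while the fine pass limits the offset from the true in-focus position and the overrun past it.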

  9. Image compensation for camera and lighting variability

    Science.gov (United States)

    Daley, Wayne D.; Britton, Douglas F.

    1996-12-01

    With the current trend of integrating machine vision systems into industrial manufacturing and inspection applications comes the issue of camera and illumination stabilization. Unless each application is built around a particular camera and a highly controlled lighting environment, the interchangeability of cameras or fluctuations in lighting become a problem, as each camera usually has a different response. An empirical approach is proposed in which color-tile data are acquired using the camera of interest, and a mapping to some predetermined reference image is developed using neural networks. A similar analytical approach, based on a rough analysis of the imaging systems, is also considered for deriving a mapping between cameras. Once a mapping has been determined, all data from one camera are mapped to correspond to the images of the other prior to performing any processing on the data. Instead of writing separate image processing algorithms for the particular image data being received, the image data are adjusted based on each particular camera and lighting situation. All that is required when swapping cameras is the new mapping for the camera being inserted; the image processing algorithms can remain the same, as the input data have been adjusted appropriately. The results of utilizing this technique are presented for an inspection application.
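
    The camera-to-camera mapping idea can be illustrated with a plain least-squares 3x3 colour matrix in place of the neural network. The tile data below are synthetic, and a linear response is an assumption; the record's empirical approach exists precisely because real camera responses are often not this simple.

```python
import numpy as np

rng = np.random.default_rng(3)
reference = rng.random((24, 3))              # colour-tile readings, camera A
M_true = np.array([[0.9, 0.05, 0.0],         # camera B's (assumed linear)
                   [0.1, 0.8,  0.1],         # response relative to camera A
                   [0.0, 0.1,  0.95]])
measured = reference @ M_true.T              # same tiles seen by camera B

# Solve measured @ M ~= reference for the correction matrix M.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
corrected = measured @ M                     # map camera B data to camera A
print(np.allclose(corrected, reference))     # True
```

    After calibration, every frame from camera B is multiplied by `M` before any downstream processing, so the inspection algorithms themselves never need to change when cameras are swapped.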

  10. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1 trigger speed, 2 passive infrared vs. microwave sensor, 3 white vs. infrared flash, and 4 still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustelaerminea, feral cats (Felis catus and hedgehogs (Erinaceuseuropaeus. Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  11. Science, conservation, and camera traps

    Science.gov (United States)

    Nichols, James D.; Karanth, K. Ullas; O'Connell, Allan F.

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  12. Micro-Scale Avionics Thermal Management

    Science.gov (United States)

    Moran, Matthew E.

    2001-01-01

    Trends in the thermal management of avionics and commercial ground-based microelectronics are converging, and facing the same dilemma: a shortfall in technology to meet near-term maximum junction temperature and package power projections. Micro-scale devices hold the key to significant advances in thermal management, particularly micro-refrigerators/coolers that can drive cooling temperatures below ambient. A microelectromechanical system (MEMS) Stirling cooler is currently under development at the NASA Glenn Research Center to meet this challenge with predicted efficiencies that are an order of magnitude better than current and future thermoelectric coolers.

  13. Soft x-ray streak cameras

    International Nuclear Information System (INIS)

    Stradling, G.L.

    1988-01-01

    This paper is a discussion of the development and of the current state of the art in picosecond soft x-ray streak camera technology. Accomplishments from a number of institutions are discussed. X-ray streak cameras vary from standard visible streak camera designs in the use of an x-ray transmitting window and an x-ray sensitive photocathode. The spectral sensitivity range of these instruments includes portions of the near UV and extends from the subkilovolt x-ray region to several tens of kilovolts. Attendant challenges encountered in the design and use of x-ray streak cameras include the accommodation of high-voltage and vacuum requirements, as well as manipulation of a photocathode structure which is often fragile. The x-ray transmitting window is generally too fragile to withstand atmospheric pressure, necessitating active vacuum pumping and a vacuum line of sight to the x-ray signal source. Because of the difficulty of manipulating x-ray beams with conventional optics, as is done with visible light, the size of the photocathode sensing area, access to the front of the tube, the ability to insert the streak tube into a vacuum chamber and the capability to trigger the sweep with very short internal delay times are issues uniquely relevant to x-ray streak camera use. The physics of electron imaging may place more stringent limitations on the temporal and spatial resolution obtainable with x-ray photocathodes than with the visible counterpart. Other issues which are common to the entire streak camera community also concern the x-ray streak camera users and manufacturers.

  14. New camera systems for fuel services

    International Nuclear Information System (INIS)

    Hummel, W.; Beck, H.J.

    2010-01-01

    AREVA NP Fuel Services have many years of experience in visual examination and measurements on fuel assemblies and associated core components, using state-of-the-art camera and measuring technologies. The techniques used allow the surface and dimensional characterization of materials and shapes by visual examination. Among the new, enhanced technologies for fuel services are two shielded colour camera systems for use under water and for close inspection of a fuel assembly. Market requirements for detecting and characterizing small defects (smaller than a tenth of a millimetre) or cracks, and for analyzing surface appearance on irradiated fuel rod cladding or fuel assembly structural parts, have increased. It is therefore common practice to use movie cameras with higher resolution. The radiation resistance of high-resolution CCD cameras is in general very low, and it is not possible to use them unshielded close to a fuel assembly. By extending the camera with a mirror system and shielding around the sensitive parts, the movie camera can be used for fuel assembly inspection. AREVA NP Fuel Services is now equipped with such movie cameras. (orig.)

  15. Urban micro-grids

    International Nuclear Information System (INIS)

    Faure, Maeva; Salmon, Martin; El Fadili, Safae; Payen, Luc; Kerlero, Guillaume; Banner, Arnaud; Ehinger, Andreas; Illouz, Sebastien; Picot, Roland; Jolivet, Veronique; Michon Savarit, Jeanne; Strang, Karl Axel

    2017-02-01

    ENEA Consulting published the results of a study on urban micro-grids conducted in partnership with the Group ADP, the Group Caisse des Depots, ENEDIS, Omexom, Total and the Tuck Foundation. This study offers a vision of the definition of an urban micro-grid, the value brought by a micro-grid in different contexts based on real case studies, and the upcoming challenges that micro-grid stakeholders will face (regulation, business models, technology). The electric production and distribution system, as the backbone of an increasingly urbanized and energy-dependent society, is urged to shift towards a more resilient, efficient and environment-friendly infrastructure. Decentralisation of electricity production into densely populated areas is a promising opportunity to achieve this transition. A micro-grid enhances local production by clustering electricity producers and consumers within a delimited electricity network; it has the ability to disconnect from the main grid for a limited period of time, offering an energy security service to its customers during grid outages, for example. However, the islanding capability is an inherent feature of the micro-grid concept that carries a significant premium on electricity cost, especially in a system highly reliant on intermittent electricity production; in this case, a smart grid with local energy production and no islanding capability can be customized to meet relevant sustainability and cost-savings goals at lower cost. For industrials, urban micro-grids can be economically profitable in the presence of a high share of reliable energy production and thermal energy demand. Micro-grids also face strong regulatory challenges that must be overcome for further development. Whether or not islanding is implemented, end-user demand for greener, more local, cheaper and more reliable energy, as well as for additional services to the grid, is a strong driver for local production and consumption. In some specific cases

  16. INFN Camera demonstrator for the Cherenkov Telescope Array

    CERN Document Server

    Ambrosi, G; Aramo, C.; Bertucci, B.; Bissaldi, E.; Bitossi, M.; Brasolin, S.; Busetto, G.; Carosi, R.; Catalanotti, S.; Ciocci, M.A.; Consoletti, R.; Da Vela, P.; Dazzi, F.; De Angelis, A.; De Lotto, B.; de Palma, F.; Desiante, R.; Di Girolamo, T.; Di Giulio, C.; Doro, M.; D'Urso, D.; Ferraro, G.; Ferrarotto, F.; Gargano, F.; Giglietto, N.; Giordano, F.; Giraudo, G.; Iacovacci, M.; Ionica, M.; Iori, M.; Longo, F.; Mariotti, M.; Mastroianni, S.; Minuti, M.; Morselli, A.; Paoletti, R.; Pauletta, G.; Rando, R.; Fernandez, G. Rodriguez; Rugliancich, A.; Simone, D.; Stella, C.; Tonachini, A.; Vallania, P.; Valore, L.; Vagelli, V.; Verzi, V.; Vigorito, C.

    2015-01-01

    The Cherenkov Telescope Array is a world-wide project for a new generation of ground-based Cherenkov telescopes of the Imaging class with the aim of exploring the highest energy region of the electromagnetic spectrum. With two planned arrays, one for each hemisphere, it will guarantee a good sky coverage in the energy range from a few tens of GeV to hundreds of TeV, with improved angular resolution and a sensitivity in the TeV energy region better by one order of magnitude than the currently operating arrays. In order to cover this wide energy range, three different telescope types are envisaged, with different mirror sizes and focal plane features. In particular, for the highest energies a possible design is a dual-mirror Schwarzschild-Couder optical scheme, with a compact focal plane. A silicon photomultiplier (SiPM) based camera is being proposed as a solution to match the dimensions of the pixel (angular size of ~ 0.17 degrees). INFN is developing a camera demonstrator made by 9 Photo Sensor Modules (PSMs...

  17. Automatic multi-camera calibration for deployable positioning systems

    Science.gov (United States)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
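    The epipolar relation the method relies on can be checked numerically. The Python sketch below (the relative pose and point values are hypothetical, not from the paper) builds an essential matrix E = [t]×R from a known relative pose and verifies that corresponding normalized image points of two intrinsically calibrated cameras satisfy the constraint x2ᵀ E x1 = 0, which is what the 5-point method solves for from point correspondences:

    ```python
    import numpy as np

    def skew(t):
        # Cross-product matrix [t]x, so that skew(t) @ v == np.cross(t, v)
        return np.array([[0.0, -t[2], t[1]],
                         [t[2], 0.0, -t[0]],
                         [-t[1], t[0], 0.0]])

    # Hypothetical relative pose of camera 2 w.r.t. camera 1: 5 degree yaw, unit baseline
    theta = np.deg2rad(5.0)
    R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(theta), 0.0, np.cos(theta)]])
    t = np.array([1.0, 0.0, 0.0])
    E = skew(t) @ R                 # essential matrix

    # A 3D point expressed in camera-1 coordinates, observed by both cameras
    X1 = np.array([0.3, -0.2, 5.0])
    X2 = R @ X1 + t                 # same point in camera-2 coordinates
    x1 = X1 / X1[2]                 # normalized image coordinates, camera 1
    x2 = X2 / X2[2]                 # normalized image coordinates, camera 2

    residual = x2 @ E @ x1          # epipolar constraint: should vanish
    ```

    The rank-2 structure of E (two equal singular values, one zero) is the property that calibration methods exploit when decomposing it back into relative pose.
    
    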

  18. Towards next generation 3D cameras

    Science.gov (United States)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real-world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (robotic inspection and assembly systems.

  19. Multi-Angle Snowflake Camera Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Stuefer, Martin [Univ. of Alaska, Fairbanks, AK (United States); Bailey, J. [Univ. of Alaska, Fairbanks, AK (United States)

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.
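    The fall-speed derivation from successive triggers is simple to sketch. In the Python fragment below, the emitter baseline and the timestamps are hypothetical illustration values (not MASC specifications); it also estimates the motion blur accumulated during the 1/25,000 s exposure quoted above:

    ```python
    # Fall speed from two successive IR trigger events separated by a known
    # vertical baseline. The 32 mm baseline and the timestamps are hypothetical
    # illustration values, not MASC specifications.
    baseline_m = 0.032
    t_upper_s = 0.000          # upper emitter pair triggers
    t_lower_s = 0.016          # lower emitter pair triggers (fires the cameras)

    fall_speed_mps = baseline_m / (t_lower_s - t_upper_s)

    # Motion blur accumulated during a 1/25,000 s exposure at this fall speed
    blur_m = fall_speed_mps / 25000.0
    ```

    At the 2 m/s fall speed implied by these numbers, the blur is on the order of 0.1 mm, which is why such short exposures are needed to resolve 30-micrometer features.
    
    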

  20. Upgrade of the JET gamma-ray cameras

    International Nuclear Information System (INIS)

    Soare, S.; Curuia, M.; Anghel, M.; Constantin, M.; David, E.; Craciunescu, T.; Falie, D.; Pantea, A.; Tiseanu, I.; Kiptily, V.; Prior, P.; Edlington, T.; Griph, S.; Krivchenkov, Y.; Loughlin, M.; Popovichev, S.; Riccardo, V; Syme, B.; Thompson, V.; Lengar, I.; Murari, A.; Bonheure, G.; Le Guern, F.

    2007-01-01

    Full text: The JET gamma-ray camera diagnostics have already provided valuable information on the gamma-ray imaging of fast ions in JET plasmas. The applicability of gamma-ray imaging to high performance deuterium and deuterium-tritium JET discharges is strongly dependent on the fulfilment of rather strict requirements for the characterisation of the neutron and gamma-ray radiation fields. These requirements have to be satisfied within very stringent boundary conditions for the design, such as the requirement of minimum impact on the co-existing neutron camera diagnostics. The JET Gamma-Ray Cameras (GRC) upgrade project deals with these issues with particular emphasis on the design of appropriate neutron/gamma-ray filters ('neutron attenuators'). Several design versions have been developed and evaluated for the JET GRC neutron attenuators at the conceptual design level. The main design parameter was the neutron attenuation factor. The two design solutions, that have been finally chosen and developed at the level of scheme design, consist of: a) one quasi-crescent shaped neutron attenuator (for the horizontal camera) and b) two quasi-trapezoid shaped neutron attenuators (for the vertical one). The second design solution has different attenuation lengths: a short version, to be used together with the horizontal attenuator for deuterium discharges, and a long version to be used for high performance deuterium and DT discharges. Various neutron-attenuating materials have been considered (lithium hydride with natural isotopic composition and 6Li-enriched, light and heavy water, polyethylene). Pure light water was finally chosen as the attenuating material for the JET gamma-ray cameras. The neutron attenuators will be steered in and out of the detector line-of-sight by means of an electro-pneumatic steering and control system.
The MCNP code was used for neutron and gamma ray transport in order to evaluate the effect of the neutron attenuators on the neutron field of the

  1. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  2. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more truths of plenoptic camera imaging, we present the wavefront analysis for the plenoptic camera imaging from the angle of physical optics but not from the ray tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated by scalar diffraction theory and the depth estimation is redescribed based on physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations and the difference between the imaging analysis methods based on geometric optics and physical optics are also shown in simulations. (paper)

  3. Integrated sensor array for on-line monitoring micro bioreactors

    NARCIS (Netherlands)

    Krommenhoek, E.E.

    2007-01-01

    The "Fed-batch on a chip" project, which was carried out in close cooperation with the Technical University of Delft, aims to miniaturize and parallelize micro bioreactors suitable for on-line screening of micro-organisms. This thesis describes an electrochemical sensor array which has been

  4. The MicroBooNE Technical Design Report

    Energy Technology Data Exchange (ETDEWEB)

    Fleming, Bonnie [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)

    2012-02-24

    MicroBooNE will build, operate, and extract physics from the first large liquid argon time projection chamber (LArTPC) that will be exposed to a high-intensity neutrino beam. With its unparalleled capabilities in tracking, vertexing, calorimetry, and particle identification, all with full electronic readout, MicroBooNE represents a major advance in detector technology for neutrino physics in the energy regime of most importance for elucidating oscillation phenomena.

  5. IMAGE CAPTURE WITH SYNCHRONIZED MULTIPLE-CAMERAS FOR EXTRACTION OF ACCURATE GEOMETRIES

    Directory of Open Access Journals (Sweden)

    M. Koehl

    2016-06-01

    Full Text Available This paper presents a project of recording and modelling tunnels, traffic circles and roads from multiple sensors. The aim is the representation and the accurate 3D modelling of a selection of road infrastructures as dense point clouds in order to extract profiles and metrics from it. Indeed, these models will be used for the sizing of infrastructures in order to simulate exceptional convoy truck routes. The objective is to extract directly from the point clouds the heights, widths and lengths of bridges and tunnels, the diameter of gyrating and to highlight potential obstacles for a convoy. Light, mobile and fast acquisition approaches based on images and videos from a set of synchronized sensors have been tested in order to obtain useable point clouds. The presented solution is based on a combination of multiple low-cost cameras designed on an on-boarded device allowing dynamic captures. The experimental device containing GoPro Hero4 cameras has been set up and used for tests in static or mobile acquisitions. That way, various configurations have been tested by using multiple synchronized cameras. These configurations are discussed in order to highlight the best operational configuration according to the shape of the acquired objects. As the precise calibration of each sensor and its optics are major factors in the process of creation of accurate dense point clouds, and in order to reach the best quality available from such cameras, the estimation of the internal parameters of fisheye lenses of the cameras has been processed. Reference measures were also realized by using a 3D TLS (Faro Focus 3D) to allow the accuracy assessment.

  6. Image Capture with Synchronized Multiple-Cameras for Extraction of Accurate Geometries

    Science.gov (United States)

    Koehl, M.; Delacourt, T.; Boutry, C.

    2016-06-01

    This paper presents a project of recording and modelling tunnels, traffic circles and roads from multiple sensors. The aim is the representation and the accurate 3D modelling of a selection of road infrastructures as dense point clouds in order to extract profiles and metrics from it. Indeed, these models will be used for the sizing of infrastructures in order to simulate exceptional convoy truck routes. The objective is to extract directly from the point clouds the heights, widths and lengths of bridges and tunnels, the diameter of gyrating and to highlight potential obstacles for a convoy. Light, mobile and fast acquisition approaches based on images and videos from a set of synchronized sensors have been tested in order to obtain useable point clouds. The presented solution is based on a combination of multiple low-cost cameras designed on an on-boarded device allowing dynamic captures. The experimental device containing GoPro Hero4 cameras has been set up and used for tests in static or mobile acquisitions. That way, various configurations have been tested by using multiple synchronized cameras. These configurations are discussed in order to highlight the best operational configuration according to the shape of the acquired objects. As the precise calibration of each sensor and its optics are major factors in the process of creation of accurate dense point clouds, and in order to reach the best quality available from such cameras, the estimation of the internal parameters of fisheye lenses of the cameras has been processed. Reference measures were also realized by using a 3D TLS (Faro Focus 3D) to allow the accuracy assessment.

  7. Scintillation camera for high activity sources

    International Nuclear Information System (INIS)

    Arseneau, R.E.

    1978-01-01

    The invention described relates to a scintillation camera used for clinical medical diagnosis. Advanced recognition of many unacceptable pulses allows the scintillation camera to discard such pulses at an early stage in processing. This frees the camera to process a greater number of pulses of interest within a given period of time. Temporary buffer storage allows the camera to accommodate pulses received at a rate in excess of its maximum rated capability due to statistical fluctuations in the level of radioactivity of the radiation source measured. (U.K.)

  8. Decision about buying a gamma camera

    International Nuclear Information System (INIS)

    Ganatra, R.D.

    1992-01-01

    A large part of the referral to a nuclear medicine department is usually for imaging studies. Sooner or later, the nuclear medicine specialist will be called upon to make a decision about when and what type of gamma camera to buy. There is no longer an option of choosing between a rectilinear scanner and a gamma camera as the former is virtually out of the market. The decision that one has to make is when to invest in a gamma camera, and then on what basis to select the gamma camera

  9. Decision about buying a gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Ganatra, R D

    1993-12-31

    A large part of the referral to a nuclear medicine department is usually for imaging studies. Sooner or later, the nuclear medicine specialist will be called upon to make a decision about when and what type of gamma camera to buy. There is no longer an option of choosing between a rectilinear scanner and a gamma camera as the former is virtually out of the market. The decision that one has to make is when to invest in a gamma camera, and then on what basis to select the gamma camera. 1 tab., 1 fig.

  10. Selective-imaging camera

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at firmware level. The design is consistent with physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.
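    As a rough illustration of why obscurants at different Kelvin temperatures separate spectrally, Wien's displacement law locates each blackbody emission peak; the temperatures below are assumed values for illustration, not figures from the paper:

    ```python
    # Wien's displacement law, lambda_peak = b / T, locates the emission peak of a
    # blackbody at temperature T. The obscurant temperatures are assumed values
    # for illustration, not figures from the paper.
    WIEN_B = 2.897771955e-3        # Wien displacement constant, m*K

    T_fire_K = 1400.0              # hypothetical fire/obscurant temperature
    T_ambient_K = 300.0            # hypothetical ambient-scene temperature

    peak_fire_um = WIEN_B / T_fire_K * 1e6        # ~2.1 um, short-wave infrared
    peak_ambient_um = WIEN_B / T_ambient_K * 1e6  # ~9.7 um, long-wave infrared
    ```

    The order-of-magnitude separation between the two peaks is what makes band selection by source temperature plausible at all; the paper's entropy-based source separation operates on top of this physical fact.
    
    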

  11. Video Chat with Multiple Cameras

    OpenAIRE

    MacCormick, John

    2012-01-01

    The dominant paradigm for video chat employs a single camera at each end of the conversation, but some conversations can be greatly enhanced by using multiple cameras at one or both ends. This paper provides the first rigorous investigation of multi-camera video chat, concentrating especially on the ability of users to switch between views at either end of the conversation. A user study of 23 individuals analyzes the advantages and disadvantages of permitting a user to switch between views at...

  12. Microprocessor-controlled, wide-range streak camera

    Energy Technology Data Exchange (ETDEWEB)

    Amy E. Lewis, Craig Hollabaugh

    2006-09-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera’s user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  13. Investigation of the Arc-Anode Attachment Area by Utilizing a High-Speed Camera

    Czech Academy of Sciences Publication Activity Database

    Ondáč, Peter; Mašláni, Alan; Hrabovský, Milan

    2016-01-01

    Roč. 3, č. 1 (2016), s. 1-5 ISSN 2336-2626 R&D Projects: GA ČR(CZ) GA15-19444S Institutional support: RVO:61389021 Keywords : plasma * arc * anode * attachment * camera * wave Subject RIV: BL - Plasma and Gas Discharge Physics http://ppt.fel.cvut.cz/ppt2016.html#number1

  14. Micron-CT using quasi-monochromatic x-rays produced in micro-PIXE

    International Nuclear Information System (INIS)

    Ishii, K.

    2009-01-01

    In ion-atom collision, characteristic X-rays are intensively produced and can be considered as a monochromatic X-ray source. We apply this feature to X-ray CT. By using micro-beams, cross sectional images can be provided with a spatial resolution of about 1 μm. On the basis of this idea, we developed a micron-CT consisting of a micro-beam system and an X-ray CCD camera. A tube holding samples was rotated by a stepping motor and the transmission images of the sample were taken with characteristic K-X-rays of Ti (4.558 keV) produced by 3 MeV proton micro-beams. After image reconstruction, images of cross sections of small objects were obtained with a spatial resolution of 3 μm. Using an absorption edge, we can identify an element in a sample. It is expected that our micron-CT can provide cross sectional images of in-vivo cellular samples and can be applied to a wide range of researches in biology and medicine. (author)
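    The transmission measurement underlying the CT reconstruction follows the Beer-Lambert law; the minimal Python sketch below (with hypothetical attenuation coefficients, not tabulated values for any element) shows how an attenuation coefficient is recovered from one transmission pixel and how the intensity jump across an absorption edge flags an element:

    ```python
    import math

    # Beer-Lambert attenuation, I = I0 * exp(-mu * t), for one transmission pixel.
    # The attenuation coefficients below/above a K-absorption edge are hypothetical
    # round numbers, not tabulated values for any element.
    I0 = 1.0
    thickness_cm = 0.001           # ~10 um sample, in the micron-CT size range
    mu_below = 400.0               # cm^-1, just below the edge energy
    mu_above = 3000.0              # cm^-1, just above the edge energy

    I_below = I0 * math.exp(-mu_below * thickness_cm)
    I_above = I0 * math.exp(-mu_above * thickness_cm)

    # CT inverts the measurement: recover mu from the transmitted intensity
    mu_recovered = -math.log(I_below / I0) / thickness_cm

    # A large intensity ratio across the edge identifies the element
    edge_jump = I_below / I_above
    ```

    Tomographic reconstruction then combines such log-transmission projections over all rotation angles to map mu across each cross section.
    
    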

  15. Single Camera Calibration in 3D Vision

    Directory of Open Access Journals (Sweden)

    Caius SULIMAN

    2009-12-01

    Full Text Available Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when the parameters of the camera are known (i.e. principal distance, lens distortion, focal length, etc.). In this paper we deal with a single camera calibration method and with the help of this method we try to find the intrinsic and extrinsic camera parameters. The method was implemented with success in the programming and simulation environment Matlab.
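    The intrinsic parameters such methods estimate enter through the pinhole projection model; the short Python sketch below (with illustrative values, independent of the paper's Matlab implementation) shows the forward model whose inversion from known 3D-2D correspondences constitutes calibration:

    ```python
    import numpy as np

    # Pinhole forward model: a camera maps a 3D point X (here expressed in camera
    # coordinates, so R = I and t = 0) to pixel coordinates via the intrinsic
    # matrix K. All parameter values are illustrative.
    fx, fy = 800.0, 800.0          # principal distance in pixels
    cx, cy = 320.0, 240.0          # principal point
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])

    X = np.array([0.1, -0.05, 2.0])           # 3D point, 2 m in front of the camera
    uvw = K @ X
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]   # perspective division to pixels
    ```

    Calibration runs this model in reverse: given many (X, (u, v)) pairs from a known target, it solves for K (and, in full treatments, the lens distortion terms the abstract mentions).
    
    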

  16. A camera specification for tendering purposes

    International Nuclear Information System (INIS)

    Lunt, M.J.; Davies, M.D.; Kenyon, N.G.

    1985-01-01

    A standardized document is described which is suitable for sending to companies which are being invited to tender for the supply of a gamma camera. The document refers to various features of the camera, the performance specification of the camera, maintenance details, price quotations for various options and delivery, installation and warranty details. (U.K.)

  17. Application of X-ray micro-computed tomography on high-speed cavitating diesel fuel flows

    Energy Technology Data Exchange (ETDEWEB)

    Mitroglou, N.; Lorenzi, M.; Gavaises, M. [City University London, School of Mathematics Computer Science and Engineering, London (United Kingdom); Santini, M. [University of Bergamo, Department of Engineering, Bergamo (Italy)

    2016-11-15

    The flow inside a purpose built enlarged single-orifice nozzle replica is quantified using time-averaged X-ray micro-computed tomography (micro-CT) and high-speed shadowgraphy. Results have been obtained at Reynolds and cavitation numbers similar to those of real-size injectors. Good agreement for the cavitation extent inside the orifice is found between the micro-CT and the corresponding temporal mean 2D cavitation image, as captured by the high-speed camera. However, the internal 3D structure of the developing cavitation cloud reveals a hollow vapour cloud ring formed at the hole entrance and extending only at the lower part of the hole due to the asymmetric flow entry. Moreover, the cavitation volume fraction exhibits a significant gradient along the orifice volume. The cavitation number and the needle valve lift seem to be the most influential operating parameters, while the Reynolds number seems to have only small effect for the range of values tested. Overall, the study demonstrates that use of micro-CT can be a reliable tool for cavitation in nozzle orifices operating under nominal steady-state conditions. (orig.)

  18. Application of X-ray micro-computed tomography on high-speed cavitating diesel fuel flows

    Science.gov (United States)

    Mitroglou, N.; Lorenzi, M.; Santini, M.; Gavaises, M.

    2016-11-01

The flow inside a purpose-built, enlarged single-orifice nozzle replica is quantified using time-averaged X-ray micro-computed tomography (micro-CT) and high-speed shadowgraphy. Results have been obtained at Reynolds and cavitation numbers similar to those of real-size injectors. Good agreement for the cavitation extent inside the orifice is found between the micro-CT and the corresponding temporal mean 2D cavitation image, as captured by the high-speed camera. However, the internal 3D structure of the developing cavitation cloud reveals a hollow vapour cloud ring formed at the hole entrance and extending only at the lower part of the hole due to the asymmetric flow entry. Moreover, the cavitation volume fraction exhibits a significant gradient along the orifice volume. The cavitation number and the needle valve lift seem to be the most influential operating parameters, while the Reynolds number seems to have only a small effect for the range of values tested. Overall, the study demonstrates that micro-CT can be a reliable tool for studying cavitation in nozzle orifices operating under nominal steady-state conditions.

  19. Application of X-ray micro-computed tomography on high-speed cavitating diesel fuel flows

    International Nuclear Information System (INIS)

    Mitroglou, N.; Lorenzi, M.; Gavaises, M.; Santini, M.

    2016-01-01

The flow inside a purpose-built, enlarged single-orifice nozzle replica is quantified using time-averaged X-ray micro-computed tomography (micro-CT) and high-speed shadowgraphy. Results have been obtained at Reynolds and cavitation numbers similar to those of real-size injectors. Good agreement for the cavitation extent inside the orifice is found between the micro-CT and the corresponding temporal mean 2D cavitation image, as captured by the high-speed camera. However, the internal 3D structure of the developing cavitation cloud reveals a hollow vapour cloud ring formed at the hole entrance and extending only at the lower part of the hole due to the asymmetric flow entry. Moreover, the cavitation volume fraction exhibits a significant gradient along the orifice volume. The cavitation number and the needle valve lift seem to be the most influential operating parameters, while the Reynolds number seems to have only a small effect for the range of values tested. Overall, the study demonstrates that micro-CT can be a reliable tool for studying cavitation in nozzle orifices operating under nominal steady-state conditions. (orig.)

  20. State of art in radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Choi; Young Soo; Kim, Seong Ho; Cho, Jae Wan; Kim, Chang Hoi; Seo, Young Chil

    2002-02-01

Work in radiation environments such as nuclear power plants, RI facilities, nuclear fuel fabrication facilities and medical centers must take radiation exposure into account, and such tasks can be carried out by remote observation and operation. However, cameras used in general industry degrade under radiation, so radiation-tolerant cameras are needed for radiation environments. Radiation-tolerant camera systems are applied in the nuclear industry, radiological medicine, aerospace, and so on. In the nuclear industry especially, there is continuous demand for the inspection of nuclear boilers, the exchange of pellets, and the inspection of nuclear waste. The nuclear developed countries have made efforts to develop radiation-tolerant cameras, and they now have many kinds of radiation-tolerant cameras that can tolerate total doses of 10{sup 6}-10{sup 8} rad. In this report, we survey the state of the art in radiation-tolerant cameras and analyze these technologies. With this paper we hope to raise interest in developing a radiation-tolerant camera and to upgrade the level of domestic technology.

  1. 16 CFR 501.1 - Camera film.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk still...

  2. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.

  3. Micro-cathode Arc Thruster PhoneSat Experiment

    Data.gov (United States)

    National Aeronautics and Space Administration — The Micro-cathode Arc Thruster Phonesat Experiment  was a joint project between George Washington University and NASA Ames Research Center that successfully...

  4. Principle of some gamma cameras (efficiencies, limitations, development)

    International Nuclear Information System (INIS)

    Allemand, R.; Bourdel, J.; Gariod, R.; Laval, M.; Levy, G.; Thomas, G.

    1975-01-01

The quality of scintigraphic images is shown to depend on the efficiency of both the input collimator and the detector. Methods are described by which the quality of these images may be improved by adaptations to either the collimator (Fresnel zone camera, Compton effect camera) or the detector (Anger camera, image amplification camera). The Anger camera and image amplification camera are at present the two main instruments whereby acceptable spatial and energy resolutions may be obtained. A theoretical comparative study of their efficiencies is carried out, independently of their technological differences, after which the instruments designed or under study at the LETI are presented: these include the image amplification camera and the electron amplifier tube camera using a semiconductor target (CdTe and HgI{sub 2} detectors) [fr

  5. Streak camera recording of interferometer fringes

    International Nuclear Information System (INIS)

    Parker, N.L.; Chau, H.H.

    1977-01-01

    The use of an electronic high-speed camera in the streaking mode to record interference fringe motion from a velocity interferometer is discussed. Advantages of this method over the photomultiplier tube-oscilloscope approach are delineated. Performance testing and data for the electronic streak camera are discussed. The velocity profile of a mylar flyer accelerated by an electrically exploded bridge, and the jump-off velocity of metal targets struck by these mylar flyers are measured in the camera tests. Advantages of the streak camera include portability, low cost, ease of operation and maintenance, simplified interferometer optics, and rapid data analysis

  6. A pin diode x-ray camera for laser fusion diagnostic imaging: Final technical report

    International Nuclear Information System (INIS)

    Jernigan, J.G.

    1987-01-01

    An x-ray camera has been constructed and tested for diagnostic imaging of laser fusion targets at the Laboratory for Laser Energetics (LLE) of the University of Rochester. The imaging detector, developed by the Hughes Aircraft Company, is a germanium PIN diode array of 10 x 64 separate elements which are bump bonded to a silicon readout chip containing a separate low noise amplifier for each pixel element. The camera assembly consists of a pinhole alignment mechanism, liquid nitrogen cryostat with detector mount and a thin beryllium entrance window, and a shielded rack containing the analog and digital electronics for operations. This x-ray camera has been tested on the OMEGA laser target chamber, the primary laser target facility of LLE, and operated via an Ethernet link to a SUN Microsystems workstation. X-ray images of laser targets are presented. The successful operation of this particular x-ray camera is a demonstration of the viability of the hybrid detector technology for future imaging and spectroscopic applications. This work was funded by the Department of Energy (DOE) as a project of the National Laser Users Facility (NLUF)

  7. The Role of MicroRNAs in Pancreatitis

    Science.gov (United States)

    2015-10-01

AWARD NUMBER: W81XWH-14-1-0469. TITLE: The Role of microRNAs in Pancreatitis. PRINCIPAL INVESTIGATOR: Yong Li. DISTRIBUTION STATEMENT: Approved for Public Release; Distribution Unlimited. ABSTRACT: Pancreatitis (inflammation of the

  8. Automated Meteor Detection by All-Sky Digital Camera Systems

    Czech Academy of Sciences Publication Activity Database

    Suk, Tomáš; Šimberová, Stanislava

    2017-01-01

Roč. 120, č. 3 (2017), s. 189-215 ISSN 0167-9295 R&D Projects: GA ČR GA15-16928S Institutional support: RVO:67985815 ; RVO:67985556 Keywords : meteor detection * autonomous fireball observatories * fish-eye camera * Hough transformation Subject RIV: IN - Informatics, Computer Science; BN - Astronomy, Celestial Mechanics, Astrophysics (ASU-R) OBOR OECD: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8); Astronomy (including astrophysics, space science) (ASU-R) Impact factor: 0.875, year: 2016

  9. The fly's eye camera system

    Science.gov (United States)

    Mészáros, L.; Pál, A.; Csépány, G.; Jaskó, A.; Vida, K.; Oláh, K.; Mezö, G.

    2014-12-01

We introduce the Fly's Eye Camera System, an all-sky monitoring device intended to perform time domain astronomy. This camera system design will provide complementary data sets for other synoptic sky surveys such as LSST or Pan-STARRS. The effective field of view is obtained by 19 cameras arranged in a spherical mosaic form. The individual cameras of the device stand on a hexapod mount that is fully capable of achieving sidereal tracking for the subsequent exposures. This platform has many advantages. First of all, it requires only one type of moving component and does not include unique parts. Hence this design not only eliminates problems implied by unique elements, but the redundancy of the hexapod also allows smooth operation even if one or two of the legs are stuck. In addition, it can calibrate itself from observed stars independently of both the geographical location (including the northern and southern hemispheres) and the polar alignment of the full mount. All mechanical elements and electronics were designed in-house at Konkoly Observatory. Currently, our instrument is in the testing phase with an operating hexapod and a reduced number of cameras.

  10. Motorcycle detection and counting using stereo camera, IR camera, and microphone array

    Science.gov (United States)

    Ling, Bo; Gibson, David R. P.; Middleton, Dan

    2013-03-01

    Detection, classification, and characterization are the key to enhancing motorcycle safety, motorcycle operations and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of auto fatalities. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including stereo camera, thermal IR camera and unidirectional microphone array. The IR thermal camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes that often show bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist who can be easily windowed out in the stereo disparity map. If the motorcyclist is detected through his or her 3D body recognition, motorcycle is detected. Microphones are used to detect motorcycles that often produce low frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize the interferences of background noises from sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has an excellent performance.

  11. Performance of the prototype LaBr{sub 3} spectrometer developed for the JET gamma-ray camera upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Rigamonti, D., E-mail: davide.rigamonti@mib.infn.it; Nocente, M.; Gorini, G. [Dipartimento di Fisica “G. Occhialini,” Università degli Studi di Milano-Bicocca, Milano (Italy); Istituto di Fisica del Plasma “P. Caldirola,” CNR, Milano (Italy); Muraro, A.; Giacomelli, L.; Cippo, E. P.; Tardocchi, M. [Istituto di Fisica del Plasma “P. Caldirola,” CNR, Milano (Italy); Perseo, V. [Dipartimento di Fisica “G. Occhialini,” Università degli Studi di Milano-Bicocca, Milano (Italy); Boltruczyk, G.; Gosk, M.; Korolczuk, S.; Mianowski, S.; Zychor, I. [Narodowe Centrum Badań Jądrowych (NCBJ), 05-400 Otwock-Swierk (Poland); Fernandes, A.; Pereira, R. C. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, Lisboa (Portugal); Figueiredo, J. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, Lisboa (Portugal); EUROfusion Programme Management Unit, Culham Science Centre, OX14 3DB Abingdon (United Kingdom); Kiptily, V. [Culham Science Centre for Fusion Energy, Culham (United Kingdom); Murari, A. [EUROfusion Programme Management Unit, Culham Science Centre, OX14 3DB Abingdon (United Kingdom); Consorzio RFX (CNR, ENEA, INFN, Universita’ di Padova, Acciaierie Venete SpA), Padova (Italy); Collaboration: EUROfusion Consortium, JET, Culham Science Centre, Abingdon OX14 3DB (United Kingdom)

    2016-11-15

In this work, we describe the solution developed by the gamma-ray camera upgrade enhancement project to improve the spectroscopic properties of the existing JET γ-ray camera. The aim of the project is to enable gamma-ray spectroscopy in JET deuterium-tritium plasmas. A dedicated pilot spectrometer based on a LaBr{sub 3} crystal coupled to a silicon photo-multiplier has been developed. A proper pole-zero cancellation network able to shorten the output signal to a length of 120 ns has been implemented, allowing for spectroscopy at MHz count rates. The system has been characterized in the laboratory and shows an energy resolution of 5.5% at E{sub γ} = 0.662 MeV, which extrapolates favorably in the energy range of interest for gamma-ray emission from fast ions in fusion plasmas.
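As a rough illustration of how a pole-zero cancellation network shortens a detector pulse, the sketch below cancels a slow exponential tail digitally and replaces it with a faster one. The first-order discrete model and the time constants are illustrative assumptions, not parameters of the analog network described in the record.

```python
import math

def pole_zero_cancel(x, tau_long, tau_short):
    """First-order digital pole-zero cancellation (PZC).

    Places a zero on the input decay pole exp(-1/tau_long) and adds a
    faster pole exp(-1/tau_short), so a slow exponential tail becomes
    a fast one (illustrative model of an analog PZC network).
    """
    pz = math.exp(-1.0 / tau_long)   # zero cancelling the slow pole
    pp = math.exp(-1.0 / tau_short)  # new, faster pole
    y = []
    x_prev = y_prev = 0.0
    for xn in x:
        yn = xn - pz * x_prev + pp * y_prev
        y.append(yn)
        x_prev, y_prev = xn, yn
    return y

# A pulse with a slow exponential tail (decay constant 270 samples)...
pulse = [math.exp(-n / 270.0) for n in range(1000)]
# ...comes out of the filter with a fast tail (decay constant 30 samples).
short = pole_zero_cancel(pulse, tau_long=270.0, tau_short=30.0)
```

Shortening the tail in this way is what permits spectroscopy at MHz count rates: consecutive pulses stop piling up on each other's decays.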

  12. Scintillation camera

    International Nuclear Information System (INIS)

    Zioni, J.; Klein, Y.; Inbar, D.

    1975-01-01

    The scintillation camera is to make pictures of the density distribution of radiation fields created by the injection or administration radioactive medicaments into the body of the patient. It contains a scintillation crystal, several photomultipliers and computer circuits to obtain an analytical function at the exits of the photomultiplier which is dependent on the position of the scintillations at the time in the crystal. The scintillation crystal is flat and spatially corresponds to the production site of radiation. The photomultipliers form a pattern whose basic form consists of at least three photomultipliers. They are assigned to at least two crossing parallel series groups where a vertical running reference axis in the crystal plane belongs to each series group. The computer circuits are each assigned to a reference axis. Each series of a series group assigned to one of the reference axes in the computer circuit has an adder to produce a scintillation dependent series signal. Furthermore, the projection of the scintillation on this reference axis is calculated. A series signal is used for this which originates from a series chosen from two neighbouring photomultiplier series of this group. The scintillation must have appeared between these chosen series. They are termed as basic series. The photomultiplier can be arranged hexagonally or rectangularly. (GG/LH) [de

  13. Tests of Micro-Pattern Gaseous Detectors for active target time projection chambers in nuclear physics

    Energy Technology Data Exchange (ETDEWEB)

    Pancin, J., E-mail: pancin@ganil.fr [GANIL, CEA/DSM-CNRS/IN2P3, Bvd H. Becquerel, Caen (France); Damoy, S.; Perez Loureiro, D. [GANIL, CEA/DSM-CNRS/IN2P3, Bvd H. Becquerel, Caen (France); Chambert, V.; Dorangeville, F. [IPNO, CNRS/IN2P3, Orsay (France); Druillole, F. [CEA, DSM/Irfu/SEDI, Gif-Sur-Yvette (France); Grinyer, G.F. [GANIL, CEA/DSM-CNRS/IN2P3, Bvd H. Becquerel, Caen (France); Lermitage, A.; Maroni, A.; Noël, G. [IPNO, CNRS/IN2P3, Orsay (France); Porte, C.; Roger, T. [GANIL, CEA/DSM-CNRS/IN2P3, Bvd H. Becquerel, Caen (France); Rosier, P. [IPNO, CNRS/IN2P3, Orsay (France); Suen, L. [GANIL, CEA/DSM-CNRS/IN2P3, Bvd H. Becquerel, Caen (France)

    2014-01-21

    Active target detection systems, where the gas used as the detection medium is also a target for nuclear reactions, have been used for a wide variety of nuclear physics applications since the eighties. Improvements in Micro-Pattern Gaseous Detectors (MPGDs) and in micro-electronics achieved in the last decade permit the development of a new generation of active targets with higher granularity pad planes that allow spatial and time information to be determined with unprecedented accuracy. A novel active target and time projection chamber (ACTAR TPC), that will be used to study reactions and decays of exotic nuclei at facilities such as SPIRAL2, is presently under development and will be based on MPGD technology. Several MPGDs (Micromegas and Thick GEM) coupled to a 2×2 mm{sup 2} pixelated pad plane have been tested and their performances have been determined with different gases over a wide range of pressures. Of particular interest for nuclear physics experiments are the angular and energy resolutions. The angular resolution has been determined to be better than 1° FWHM for short traces of about 4 cm in length and the energy resolution deduced from the particle range was found to be better than 5% for 5.5 MeV α particles. These performances have been compared to Geant4 simulations. These experimental results validate the use of these detectors for several applications in nuclear physics.

  14. Tests of Micro-Pattern Gaseous Detectors for active target time projection chambers in nuclear physics

    International Nuclear Information System (INIS)

    Pancin, J.; Damoy, S.; Perez Loureiro, D.; Chambert, V.; Dorangeville, F.; Druillole, F.; Grinyer, G.F.; Lermitage, A.; Maroni, A.; Noël, G.; Porte, C.; Roger, T.; Rosier, P.; Suen, L.

    2014-01-01

Active target detection systems, where the gas used as the detection medium is also a target for nuclear reactions, have been used for a wide variety of nuclear physics applications since the eighties. Improvements in Micro-Pattern Gaseous Detectors (MPGDs) and in micro-electronics achieved in the last decade permit the development of a new generation of active targets with higher granularity pad planes that allow spatial and time information to be determined with unprecedented accuracy. A novel active target and time projection chamber (ACTAR TPC), that will be used to study reactions and decays of exotic nuclei at facilities such as SPIRAL2, is presently under development and will be based on MPGD technology. Several MPGDs (Micromegas and Thick GEM) coupled to a 2×2 mm{sup 2} pixelated pad plane have been tested and their performances have been determined with different gases over a wide range of pressures. Of particular interest for nuclear physics experiments are the angular and energy resolutions. The angular resolution has been determined to be better than 1° FWHM for short traces of about 4 cm in length and the energy resolution deduced from the particle range was found to be better than 5% for 5.5 MeV α particles. These performances have been compared to Geant4 simulations. These experimental results validate the use of these detectors for several applications in nuclear physics.

  15. Performance Evaluation of Thermographic Cameras for Photogrammetric Measurements

    Science.gov (United States)

    Yastikli, N.; Guler, E.

    2013-05-01

The aim of this research is the performance evaluation of thermographic cameras for possible use in photogrammetric documentation and in deformation analyses caused by moisture and insulation problems of historical and cultural heritage. To perform geometric calibration of the thermographic camera, a 3D test object was designed with 77 control points distributed at different depths. For the performance evaluation, a Flir A320 thermographic camera with 320 × 240 pixels and a lens with 18 mm focal length was used. A Nikon D3X SLR digital camera with 6048 × 4032 pixels and a lens with 20 mm focal length was used as the reference for comparison. The pixel size was 25 μm for the Flir A320 thermographic camera and 6 μm for the Nikon D3X SLR digital camera. Digital images of the 3D test object were recorded with both cameras and the image coordinates of the control points in the images were measured. The geometric calibration parameters, including the focal length, position of the principal point, and radial and tangential distortions, were determined with additional parameters introduced in bundle block adjustment. The measurement of image coordinates and the bundle block adjustments with additional parameters were performed using the PHIDIAS digital photogrammetric system. The bundle block adjustment was then repeated with the determined calibration parameters for both cameras. The standard deviations of the measured image coordinates were 9.6 μm and 10.5 μm for the Flir A320 thermographic camera and 8.3 μm and 7.7 μm for the Nikon D3X SLR digital camera. The image coordinates measured in the Flir A320 thermographic camera images thus reached almost the same accuracy level as the digital camera, despite a pixel size about four times larger. The results of this research show that the interior geometry of thermographic cameras and their lens distortion can be modelled efficiently.
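The radial and tangential distortions estimated as additional parameters in such a bundle block adjustment commonly follow the Brown-Conrady model; a minimal sketch of that model is below. The coefficient values used in the example are illustrative, not the calibrated values from this study.

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) lens distortion to
    normalized image coordinates (x, y), Brown-Conrady style."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

# Illustrative coefficients: a point at x = 0.1 is pulled slightly
# inward by barrel distortion (k1 < 0).
xd, yd = distort(0.1, 0.0, k1=-0.2, k2=0.0, p1=0.0, p2=0.0)
```

In the adjustment, these coefficients are estimated jointly with the focal length and principal point by minimizing the reprojection error over all measured control points.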

  16. Imaging capabilities of germanium gamma cameras

    International Nuclear Information System (INIS)

    Steidley, J.W.

    1977-01-01

    Quantitative methods of analysis based on the use of a computer simulation were developed and used to investigate the imaging capabilities of germanium gamma cameras. The main advantage of the computer simulation is that the inherent unknowns of clinical imaging procedures are removed from the investigation. The effects of patient scattered radiation were incorporated using a mathematical LSF model which was empirically developed and experimentally verified. Image modifying effects of patient motion, spatial distortions, and count rate capabilities were also included in the model. Spatial domain and frequency domain modeling techniques were developed and used in the simulation as required. The imaging capabilities of gamma cameras were assessed using low contrast lesion source distributions. The results showed that an improvement in energy resolution from 10% to 2% offers significant clinical advantages in terms of improved contrast, increased detectability, and reduced patient dose. The improvements are of greatest significance for small lesions at low contrast. The results of the computer simulation were also used to compare a design of a hypothetical germanium gamma camera with a state-of-the-art scintillation camera. The computer model performed a parametric analysis of the interrelated effects of inherent and technological limitations of gamma camera imaging. In particular, the trade-off between collimator resolution and collimator efficiency for detection of a given low contrast lesion was directly addressed. This trade-off is an inherent limitation of both gamma cameras. The image degrading effects of patient motion, camera spatial distortions, and low count rate were shown to modify the improvements due to better energy resolution. Thus, based on this research, the continued development of germanium cameras to the point of clinical demonstration is recommended
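One ingredient of such a simulation, the blurring of a low-contrast lesion profile by the system line spread function, can be sketched as follows. The Gaussian LSF shape and the profile values are assumptions for illustration, not the empirically developed LSF model of the study.

```python
import numpy as np

def blur_profile(profile, sigma_px):
    """Convolve a 1D count profile with a Gaussian line spread function
    (LSF) of width sigma_px; edge values are extended to avoid border
    artefacts."""
    n = int(4 * sigma_px)
    t = np.arange(-n, n + 1, dtype=float)
    lsf = np.exp(-0.5 * (t / sigma_px) ** 2)
    lsf /= lsf.sum()                       # unit area: counts preserved
    padded = np.pad(np.asarray(profile, dtype=float), n, mode="edge")
    return np.convolve(padded, lsf, mode="valid")

def contrast(profile):
    """Lesion contrast: (background - minimum) / background."""
    p = np.asarray(profile, dtype=float)
    return (p.max() - p.min()) / p.max()

# Uniform background of 100 counts with a 10%-contrast cold lesion.
prof = np.full(101, 100.0)
prof[48:54] = 90.0
blurred = blur_profile(prof, sigma_px=4.0)  # LSF reduces apparent contrast
```

A narrower LSF (better resolution, or the sharper energy window of a germanium detector rejecting scatter) leaves more of the original lesion contrast intact, which is the trade-off the simulation quantifies.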

  17. The bit slice micro-processor 'GESPRO' as a project in the UA2 experiment

    CERN Document Server

    Becam, C; Delanghe, J; Fest, H M; Lecoq, J; Martin, H; Mencik, M; MerkeI, B; Meyer, J M; Perrin, M; Plothow, H; Rampazzo, J P; Schittly, A

    1981-01-01

The bit slice micro-processor GESPRO is a CAMAC module plugged into a standard Elliot system crate, via which it communicates as a slave with its host computer. It has full control of CAMAC as a master unit. GESPRO is a 24 bit machine with a multi-mode memory addressing capacity of 64K words. The micro-processor structure uses 5 buses, including pipeline registers to mask access time, and 16 interrupt levels. The micro-program memory capacity is 2K (RAM) words of 48 bits each. A special hardwired module allows floating point, as well as integer, multiplication of 24*24 bits, with a 48 bit result, in about 200 ns. This micro-processor could be used in the UA2 data acquisition chain and trigger system for the following tasks: (a) online data reduction, i.e. to read DURANDAL and process the information, resulting in accepting or rejecting the event; (b) readout and analysis of the accepted data; (c) preprocessing of the data. The UA2 version of GESPRO is under construction; programs and micro-programs are under development. Hard...

  18. Recent development of micro-triangulation for magnet fiducialisation

    CERN Document Server

    Vlachakis, Vasileios; Mainaud Durand, Helene; CERN. Geneva. ATS Department

    2016-01-01

The micro-triangulation method is proposed as an alternative for magnet fiducialisation. The main objective is to measure horizontal and vertical angles to fiducial points and stretched wires, utilising theodolites equipped with cameras. This study aims to develop various methods, algorithms and software tools to enable the data acquisition and processing. In this paper, we present the first test measurement as an attempt to demonstrate the feasibility of the method and to evaluate the accuracy. The preliminary results are very promising, with accuracy always better than 20 μm for the wire position, and of about 40 μm/m for the wire orientation, compared with a coordinate measuring machine.
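The core geometric step of triangulating a wire or fiducial point from measured theodolite directions can be sketched as the planar intersection of two sighting lines; the station coordinates and azimuths below are hypothetical, and the real adjustment is a 3D least-squares over many directions.

```python
import math

def intersect_bearings(p1, az1, p2, az2):
    """Planar intersection of two sighting lines: station p1 looks along
    azimuth az1 (radians, clockwise from +y/north), station p2 along
    az2. Solves p1 + t1*d1 = p2 + t2*d2 for the sighted point."""
    d1 = (math.sin(az1), math.cos(az1))
    d2 = (math.sin(az2), math.cos(az2))
    det = -d1[0] * d2[1] + d2[0] * d1[1]
    if abs(det) < 1e-12:
        raise ValueError("sighting lines are (nearly) parallel")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (-rx * d2[1] + d2[0] * ry) / det   # Cramer's rule for t1
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Hypothetical stations 10 m apart, sighting the same wire at +/-45 deg.
x, y = intersect_bearings((0.0, 0.0), math.radians(45.0),
                          (10.0, 0.0), math.radians(-45.0))
```

With more than two stations the intersections become over-determined, which is what allows the micrometre-level accuracy figures quoted above to be estimated at all.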

  19. The CONNECT project

    DEFF Research Database (Denmark)

    Assaf, Yaniv; Alexander, Daniel C; Jones, Derek K

    2013-01-01

In recent years, diffusion MRI has become an extremely important tool for studying the morphology of living brain tissue, as it provides unique insights into both its macrostructure and microstructure. Recent applications of diffusion MRI aimed to characterize the structural connectome using ... The CONNECT (... Of Neuroimagers for the Non-invasive Exploration of brain Connectivity and Tracts) project aimed to combine tractography and micro-structural measures of the living human brain in order to obtain a better estimate of the connectome, while also striving to extend validation of these measurements. This paper summarizes the project and describes the perspective of using micro-structural measures to study the connectome.

  20. MICRO-CHP System for Residential Applications

    Energy Technology Data Exchange (ETDEWEB)

    Joseph Gerstmann

    2009-01-31

    This is the final report of progress under Phase I of a project to develop and commercialize a micro-CHP system for residential applications that provides electrical power, heating, and cooling for the home. This is the first phase of a three-phase effort in which the residential micro-CHP system will be designed (Phase I), developed and tested in the laboratory (Phase II); and further developed and field tested (Phase III). The project team consists of Advanced Mechanical Technology, Inc. (AMTI), responsible for system design and integration; Marathon Engine Systems, Inc. (MES), responsible for design of the engine-generator subsystem; AO Smith, responsible for design of the thermal storage and water heating subsystems; Trane, a business of American Standard Companies, responsible for design of the HVAC subsystem; and AirXchange, Inc., responsible for design of the mechanical ventilation and dehumidification subsystem.

  1. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    Science.gov (United States)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  2. RELATIVE PANORAMIC CAMERA POSITION ESTIMATION FOR IMAGE-BASED VIRTUAL REALITY NETWORKS IN INDOOR ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    M. Nakagawa

    2017-09-01

Full Text Available Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  3. Stereo Pinhole Camera: Assembly and experimental activities

    Directory of Open Access Journals (Sweden)

    Gilmário Barbosa Santos

    2015-05-01

    Full Text Available This work describes the assembly of a stereo pinhole camera for capturing stereo pairs of images and proposes experimental activities with it. A pinhole camera can be as sophisticated as you want, or so simple that it can be handcrafted almost entirely from recyclable materials. This paper describes the practical use of the pinhole camera throughout history and in the present day. Aspects of optics and geometry involved in the building of the stereo pinhole camera are presented with illustrations. Furthermore, experiments are proposed using the images obtained by the camera for 3D visualization through a pair of anaglyph glasses, and the estimation of relative depth by triangulation is discussed.
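    The depth-by-triangulation estimate mentioned at the end of the record follows the standard stereo relation Z = f·B/d. A minimal sketch with illustrative numbers (not values from the paper):

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Classic stereo triangulation: depth = f * B / d.

    focal_length_px: focal length expressed in pixels.
    baseline_m: distance between the two pinholes, in metres.
    disparity_px: horizontal shift of a feature between the stereo pair.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# A feature shifted 50 px between the two images, with an 800 px
# focal length and a 10 cm baseline, sits about 1.6 m away.
z = depth_from_disparity(800.0, 0.10, 50.0)
```

Note the inverse relationship: nearer objects produce larger disparities, which is what makes relative depth recoverable from a stereo pair.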

  4. A projective surgical navigation system for cancer resection

    Science.gov (United States)

    Gan, Qi; Shao, Pengfei; Wang, Dong; Ye, Jian; Zhang, Zeshu; Wang, Xinrui; Xu, Ronald

    2016-03-01

    Near infrared (NIR) fluorescence imaging can provide precise, real-time information about tumor location during cancer resection surgery. However, many intraoperative fluorescence imaging systems are based on wearable devices or stand-alone displays, leading to distraction of the surgeons and suboptimal outcomes. To overcome these limitations, we design a projective fluorescence imaging system for surgical navigation. The system consists of an LED excitation light source, a monochromatic CCD camera, a host computer, a mini projector and a CMOS camera. A software program written in C++ calls OpenCV functions to calibrate and correct fluorescence images captured by the CCD camera under excitation illumination from the LED source. The images are projected back onto the surgical field by the mini projector. Imaging performance of this projective navigation system is characterized in a tumor-simulating phantom. Image-guided surgical resection is demonstrated in an ex-vivo chicken tissue model. In all the experiments, the images projected by the projector match well with the locations of fluorescence emission. Our experimental results indicate that the proposed projective navigation system can be a powerful tool for pre-operative surgical planning, intraoperative surgical guidance, and postoperative assessment of surgical outcome. We have integrated the optoelectronic elements into a compact and miniaturized system in preparation for further clinical validation.
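    The calibration-and-correction step described above ultimately maps camera pixels onto projector pixels; for an approximately planar surgical field this is a 3x3 homography. The sketch below shows only the mapping step, with an invented matrix (a real system would estimate H from matched calibration points, e.g. with OpenCV's findHomography):

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 camera-pixel coordinates to projector coordinates
    through a 3x3 planar homography H (with homogeneous normalisation)."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Illustrative homography: scale camera pixels by 2, shift by (10, 20).
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0, 20.0],
              [0.0, 0.0, 1.0]])
corners = apply_homography(H, [(0, 0), (100, 50)])
```

Warping the whole fluorescence image through such a mapping before projection is what makes the projected overlay line up with the emission locations.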

  5. Micro-manufacturing technologies and their applications a theoretical and practical guide

    CERN Document Server

    Shipley, David

    2017-01-01

    This book provides in-depth theoretical and practical information on recent advances in micro-manufacturing technologies and processes, covering such topics as micro-injection moulding, micro-cutting, micro-EDM, micro-assembly, micro-additive manufacturing, moulded interconnected devices, and microscale metrology. It is designed to provide complementary material for the related e-learning platform on micro-manufacturing developed within the framework of the Leonardo da Vinci project 2013-3748/542424: MIMAN-T: Micro-Manufacturing Training System for SMEs. The book is mainly addressed to technicians and prospective professionals in the sector and will serve as an easily usable tool to facilitate the translation of micro-manufacturing technologies into tangible industrial benefits. Numerous examples are included to assist readers in learning and implementing the described technologies. In addition, an individual chapter is devoted to technological foresight, addressing market analysis and business models for mic...

  6. Measuring high-resolution sky luminance distributions with a CCD camera.

    Science.gov (United States)

    Tohsing, Korntip; Schrempf, Michael; Riechelmann, Stefan; Schilke, Holger; Seckmeyer, Gunther

    2013-03-10

    We describe how sky luminance can be derived from a newly developed hemispherical sky imager (HSI) system. The system contains a commercial compact charge-coupled device (CCD) camera equipped with a fish-eye lens. The projection of the camera system has been found to be nearly equidistant. The luminance from the high dynamic range images has been calculated and then validated with luminance data measured by a CCD array spectroradiometer. The deviation between both datasets is less than 10% for cloudless and completely overcast skies, and differs by no more than 20% for all sky conditions. The global illuminance derived from the HSI pictures deviates by less than 5% and 20% under cloudless and cloudy skies, respectively, for solar zenith angles less than 80°. This system is therefore capable of measuring sky luminance with a high spatial resolution of more than a million pixels and a temporal resolution of 20 s.
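    The record notes that the projection of the fish-eye system is nearly equidistant, i.e. the radial distance of a pixel from the principal point grows linearly with the zenith angle (r = f·θ). A minimal sketch, with an invented focal length:

```python
import math

def pixel_to_zenith(r_px, f_px):
    """Equidistant fisheye model: radius from the principal point is
    proportional to zenith angle, so theta = r / f (in radians)."""
    return r_px / f_px

def zenith_to_pixel(theta_rad, f_px):
    """Inverse mapping: r = f * theta."""
    return f_px * theta_rad

# With an illustrative focal length of 400 px, the horizon
# (theta = 90 degrees) falls at radius f * pi/2 from the image centre.
r_horizon = zenith_to_pixel(math.pi / 2, 400.0)
```

This per-pixel direction mapping is what lets a hemispherical image be converted into a sky luminance distribution over zenith and azimuth.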

  7. First results from the TOPSAT camera

    Science.gov (United States)

    Greenway, Paul; Tosh, Ian; Morris, Nigel; Burton, Gary; Cawley, Steve

    2017-11-01

    The TopSat camera is a low cost remote sensing imager capable of producing 2.5 metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. The instrument was designed and assembled at the Space Science & Technology Department of the CCLRC's Rutherford Appleton Laboratory (RAL) in the UK, and was launched on the 27th October 2005 from Plesetsk Cosmodrome in Northern Russia on a Kosmos-3M. The camera utilises an off-axis three-mirror system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, a collaboration between four UK organisations: QinetiQ, Surrey Satellite Technology Ltd (SSTL), RAL and Infoterra. Its objective is to demonstrate provision of rapid response high resolution imagery to fixed and mobile ground stations using a low cost minisatellite. The paper "Development of the TopSat Camera" presented by RAL at the 5th ICSO in 2004 described the opto-mechanical design, assembly, alignment and environmental test methods implemented. Now that the spacecraft is in orbit and successfully acquiring images, this paper presents the first results from the camera and makes an initial assessment of its in-orbit performance.

  8. A retrospective on the LBNL PEM project

    International Nuclear Information System (INIS)

    Huber, J.S.; Moses, W.W.; Wang, G.C.; Derenzo, S.E.; Huesman, R.H.; Qi, J.; Virador, P.; Choong, W.S.; Mandelli, E.; Beuville, E.; Pedrali-Noy, M.; Krieger, B.; Meddeler, G.

    2004-01-01

    We present a retrospective on the LBNL Positron Emission Mammography (PEM) project, looking back on our design and experiences. The LBNL PEM camera utilizes detector modules that are capable of measuring depth of interaction (DOI) and places them into 4 detector banks in a rectangular geometry. In order to build this camera, we had to develop the DOI detector module, LSO etching, a Lumirror-epoxy reflector for the LSO array (to achieve optimal DOI), a photodiode array, a custom IC, a rigid-flex readout board, packaging, DOI calibration, and reconstruction algorithms for the rectangular camera geometry. We will discuss the highlights (good and bad) of these developments.

  9. The Eye of the Camera

    NARCIS (Netherlands)

    van Rompay, Thomas Johannes Lucas; Vonk, Dorette J.; Fransen, M.L.

    2009-01-01

    This study addresses the effects of security cameras on prosocial behavior. Results from previous studies indicate that the presence of others can trigger helping behavior, arising from the need for approval of others. Extending these findings, the authors propose that security cameras can likewise

  10. Extending the Classroom: Digital Micro-Narratives for Novice Language Learners

    Science.gov (United States)

    Kronenberg, Felix A.

    2014-01-01

    Digital Storytelling offers many advantages for language learning, especially within a project-based framework. In this article, the use of Digital Micro-Narratives is proposed as particularly useful for second language learners at the novice level. As a sub-genre of Digital Storytelling, Digital Micro-Narratives focus more on frequently updated…

  11. The MicroObservatory Net

    Science.gov (United States)

    Brecher, K.; Sadler, P.

    1994-12-01

    A group of scientists, engineers and educators based at the Harvard-Smithsonian Center for Astrophysics (CfA) has developed a prototype of a small, inexpensive and fully integrated automated astronomical telescope and image processing system. The project team is now building five second generation instruments. The MicroObservatory has been designed to be used for classroom instruction by teachers as well as for original scientific research projects by students. Probably in no other area of frontier science is it possible for a broad spectrum of students (not just the gifted) to have access to state-of-the-art technologies that would allow for original research. The MicroObservatory combines the imaging power of a cooled CCD, with a self contained and weatherized reflecting optical telescope and mount. A microcomputer points the telescope and processes the captured images. The MicroObservatory has also been designed to be used as a valuable new capture and display device for real time astronomical imaging in planetariums and science museums. When the new instruments are completed in the next few months, they will be tried with high school students and teachers, as well as with museum groups. We are now planning to make the MicroObservatories available to students, teachers and other individual users over the Internet. We plan to allow the telescope to be controlled in real time or in batch mode, from a Macintosh or PC compatible computer. In the real-time mode, we hope to give individual access to all of the telescope control functions without the need for an "on-site" operator. Users would sign up for a specific period of time. In the batch mode, users would submit jobs for the telescope. After the MicroObservatory completed a specific job, the images would be e-mailed back to the user. At present, we are interested in gaining answers to the following questions: (1) What are the best approaches to scheduling real-time observations? (2) What criteria should be used

  12. NEET Micro-Pocket Fission Detector. Final Project report

    Energy Technology Data Exchange (ETDEWEB)

    Unruh, T. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rempe, Joy [Idaho National Lab. (INL), Idaho Falls, ID (United States); McGregor, Douglas [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ugorowski, Philip [Idaho National Lab. (INL), Idaho Falls, ID (United States); Reichenberger, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ito, Takashi [Idaho National Lab. (INL), Idaho Falls, ID (United States); Villard, J. -F. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-09-01

    A collaboration between the Idaho National Laboratory (INL), the Kansas State University (KSU), and the French Alternative Energies and Atomic Energy Commission, Commissariat à l'Énergie Atomique et aux Energies Alternatives, (CEA), is funded by the Nuclear Energy Enabling Technologies (NEET) program to develop and test Micro-Pocket Fission Detectors (MPFDs), which are compact fission chambers capable of simultaneously measuring thermal neutron flux, fast neutron flux and temperature within a single package. When deployed, these sensors will significantly advance flux detection capabilities for irradiation tests in US Material Test Reactors (MTRs). Ultimately, evaluations may lead to a more compact, more accurate, and longer lifetime flux sensor for critical mock-ups and high performance reactors, allowing several Department of Energy Office of Nuclear Energy (DOE-NE) programs to obtain higher accuracy/higher resolution data from irradiation tests of candidate new fuels and materials. Specifically, deployment of MPFDs will address several challenges faced in irradiations performed at MTRs: Current fission chamber technologies do not offer the ability to measure fast flux, thermal flux and temperature within a single compact probe; MPFDs offer this option. MPFD construction is very different from current fission chamber construction; the use of high temperature materials allows MPFDs to be specifically tailored to survive the harsh conditions encountered in the core of high-performance MTRs. The higher accuracy, high fidelity data available from the compact MPFD will significantly enhance efforts to validate new high-fidelity reactor physics codes and new multi-scale, multi-physics codes. MPFDs can be built with variable sensitivities to survive the lifetime of an experiment or fuel assembly in some MTRs, allowing for more efficient and cost effective power monitoring. The small size of the MPFDs allows multiple sensors to be deployed, offering the potential to

  13. Poster: A Software-Defined Multi-Camera Network

    OpenAIRE

    Chen, Po-Yen; Chen, Chien; Selvaraj, Parthiban; Claesen, Luc

    2016-01-01

    The widespread popularity of OpenFlow leads to a significant increase in the number of applications developed in Software-Defined Networking (SDN). In this work, we propose the architecture of a Software-Defined Multi-Camera Network consisting of small, flexible, economic, and programmable cameras which combine the functions of the processor, switch, and camera. A Software-Defined Multi-Camera Network can effectively reduce the overall network bandwidth and reduce a large amount of the Capex a...

  14. Gamma camera performance: technical assessment protocol

    Energy Technology Data Exchange (ETDEWEB)

    Bolster, A.A. [West Glasgow Hospitals NHS Trust, London (United Kingdom). Dept. of Clinical Physics; Waddington, W.A. [University College London Hospitals NHS Trust, London (United Kingdom). Inst. of Nuclear Medicine

    1996-12-31

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author).

  15. The Light Field Attachment: Turning a DSLR into a Light Field Camera Using a Low Budget Camera Ring

    KAUST Repository

    Wang, Yuwang

    2016-11-16

    We propose a concept for a lens attachment that turns a standard DSLR camera and lens into a light field camera. The attachment consists of 8 low-resolution, low-quality side cameras arranged around the central high-quality SLR lens. Unlike most existing light field camera architectures, this design provides a high-quality 2D image mode, while simultaneously enabling a new high-quality light field mode with a large camera baseline but little added weight, cost, or bulk compared with the base DSLR camera. From an algorithmic point of view, the high-quality light field mode is made possible by a new light field super-resolution method that first improves the spatial resolution and image quality of the side cameras and then interpolates additional views as needed. At the heart of this process is a super-resolution method that we call iterative Patch- And Depth-based Synthesis (iPADS), which combines patch-based and depth-based synthesis in a novel fashion. Experimental results obtained for both real captured data and synthetic data confirm that our method achieves substantial improvements in super-resolution for side-view images as well as the high-quality and view-coherent rendering of dense and high-resolution light fields.

  16. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, which can leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from the images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time, in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
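    A toy illustration of the per-camera step described above: a moving object is located as the centroid of pixels that changed between two time-correlated frames. The synthetic frames, threshold and sizes here are invented; a real system would run this in each camera stream before cross-camera registration.

```python
import numpy as np

def moving_centroid(prev_frame, frame, thresh=30):
    """Locate a moving object as the centroid of pixels whose
    intensity changed by more than `thresh` between two frames.
    Returns (x, y) or None when nothing moved."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh
    ys, xs = np.nonzero(diff)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Synthetic 64x64 frames from one camera: a bright 4x4 block moves right.
prev = np.zeros((64, 64), dtype=np.uint8)
cur = np.zeros((64, 64), dtype=np.uint8)
prev[10:14, 10:14] = 255
cur[10:14, 20:24] = 255
c = moving_centroid(prev, cur)
```

With simple frame differencing, both the vacated and the newly occupied pixels change, so the centroid falls midway between the old and new positions; more refined detectors (background models, optical flow) separate the two.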

  17. PERFORMANCE EVALUATION OF THERMOGRAPHIC CAMERAS FOR PHOTOGRAMMETRIC MEASUREMENTS

    Directory of Open Access Journals (Sweden)

    N. Yastikli

    2013-05-01

    Full Text Available The aim of this research is the performance evaluation of thermographic cameras for possible use in photogrammetric documentation and in the analysis of deformations caused by moisture and insulation problems of historical and cultural heritage. To perform geometric calibration of the thermographic camera, a 3D test object was designed with 77 control points distributed at different depths. For the performance evaluation, a Flir A320 thermographic camera with 320 × 240 pixels and a lens with 18 mm focal length was used. A Nikon D3X SLR digital camera with 6048 × 4032 pixels and a lens with 20 mm focal length was used as reference for comparison. The pixel size was 25 μm for the Flir A320 thermographic camera and 6 μm for the Nikon D3X SLR digital camera. The digital images of the 3D test object were recorded with the Flir A320 thermographic camera and the Nikon D3X SLR digital camera, and the image coordinates of the control points were measured. The geometric calibration parameters, including the focal length, position of the principal point, and radial and tangential distortions, were determined with additional parameters introduced in bundle block adjustments. The measurement of image coordinates and the bundle block adjustments with additional parameters were performed using the PHIDIAS digital photogrammetric system. The bundle block adjustment was repeated with the determined calibration parameters for both the Flir A320 thermographic camera and the Nikon D3X SLR digital camera. The obtained standard deviation of the measured image coordinates was 9.6 μm and 10.5 μm for the Flir A320 thermographic camera and 8.3 μm and 7.7 μm for the Nikon D3X SLR digital camera. The standard deviation of image points measured in the Flir A320 thermographic camera images thus reaches almost the same accuracy level as the digital camera, despite a pixel size four times larger. From the results obtained in this research, the interior geometry of the thermographic cameras and lens distortion was
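    The radial and tangential distortion parameters estimated in the bundle adjustment above are conventionally those of the Brown distortion model. A minimal sketch of the radial part only, with invented coefficients (the paper's actual calibration values are not reproduced here):

```python
def radial_distort(x, y, k1, k2):
    """Apply the radial part of the Brown distortion model to
    normalised image coordinates (principal point at the origin):
    x' = x * (1 + k1*r^2 + k2*r^4), and likewise for y."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# Mild barrel distortion (k1 < 0) pulls points toward the centre:
xd, yd = radial_distort(0.5, 0.0, -0.1, 0.0)
```

In a bundle block adjustment these coefficients appear as the "additional parameters" estimated alongside focal length and principal point.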

  18. Comparison of myocardial perfusion imaging between the new high-speed gamma camera and the standard anger camera

    International Nuclear Information System (INIS)

    Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi

    2013-01-01

    Cadmium-zinc-telluride (CZT) solid-state detectors have been recently introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of the CZT high-speed gamma camera (Discovery NM 530c) with that of the standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest 99mTc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera with a 15-min scan time each for stress and for rest. All scans were immediately repeated on a CZT camera with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong within narrow Bland-Altman limits of agreement. Using list mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans, and in 100% of the ≥4-min scans. For CZT scans at rest, similarly, image quality was rated as good or excellent in 94% of the 1-min scans, and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, which is equivalent to standard myocardial single-photon emission computed tomography, despite a short scan time of less than half of the standard time. (author)
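    The Bland-Altman limits of agreement used above to compare the two cameras are simply the mean paired difference plus or minus 1.96 times its standard deviation. A sketch with invented sample readings (not data from the study):

```python
import statistics

def bland_altman_limits(a, b):
    """Mean difference and 95% limits of agreement between two
    paired measurement series: mean(diff) +/- 1.96 * SD(diff)."""
    diffs = [x - y for x, y in zip(a, b)]
    mean_diff = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return mean_diff, mean_diff - 1.96 * sd, mean_diff + 1.96 * sd

# Illustrative paired readings (e.g. ejection fraction, %) from the
# CZT camera and the standard Anger camera:
czt = [60, 55, 72, 48, 65]
anger = [58, 56, 70, 47, 66]
bias, lower, upper = bland_altman_limits(czt, anger)
```

"Narrow" limits of agreement mean the interval [lower, upper] is small relative to clinically meaningful differences, i.e. the two devices can be used interchangeably.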

  19. EX1102: ROV and Camera Sled Integration and Shakedown on NOAA Ship Okeanos Explorer (EM302)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This project will involve two legs - EX1102L1 and EX1102L2. The first, between April 4-18, 2011, will involve the dockside integration of the new OER camera platform...

  20. High resolution RGB color line scan camera

    Science.gov (United States)

    Lynch, Theodore E.; Huettig, Fred

    1998-04-01

    A color line scan camera family which is available with either 6000, 8000 or 10000 pixels/color channel, utilizes off-the-shelf lenses, interfaces with currently available frame grabbers, includes on-board pixel-by-pixel offset correction, and is configurable and controllable via an RS232 serial port for computer-controlled or stand-alone operation is described in this paper. This line scan camera is based on an available 8000 element monochrome line scan camera designed by AOA for OEM use. The new color version includes improvements such as better packaging and additional user features which make the camera easier to use. The heart of the camera is a tri-linear CCD sensor with on-chip color balancing for maximum accuracy and pinned photodiodes for low lag response. Each color channel is digitized to 12 bits and all three channels are multiplexed together so that the resulting camera output video is either a 12 or 8 bit data stream at a rate of up to 24 Megapixels/sec. Conversion from 12 to 8 bit, or user-defined gamma, is accomplished by on-board user-defined video look-up tables. The camera has two user-selectable operating modes: a low-speed, high-sensitivity mode or a high-speed, reduced-sensitivity mode. The intended uses of the camera include industrial inspection, digital archiving, document scanning, and graphic arts applications.
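    The 12-to-8-bit conversion through a user-defined gamma look-up table described above can be sketched as follows (the function name and the gamma value are illustrative, not the camera's firmware):

```python
def build_gamma_lut(gamma, in_bits=12, out_bits=8):
    """Precompute a look-up table mapping every possible 12-bit code
    to an 8-bit output through a gamma curve. A LUT replaces a
    per-pixel power computation with a single array index."""
    in_max = (1 << in_bits) - 1
    out_max = (1 << out_bits) - 1
    return [round(out_max * (code / in_max) ** gamma)
            for code in range(in_max + 1)]

lut = build_gamma_lut(1.0)   # gamma 1.0 degenerates to linear rescaling
pixel_8bit = lut[4095]       # full-scale 12-bit input maps to 255
```

Because the table has only 4096 entries, a camera can apply arbitrary user-defined transfer curves at full pixel rate, which is why on-board LUTs are the standard mechanism for this conversion.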

  1. Camera network video summarization

    Science.gov (United States)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network still remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l21-norm minimization. The objective function is two-fold. The first is to capture the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second is to use a capped l21-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
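    The capped l21-norm in the objective can be read as: sum, over the rows of a matrix, of each row's l2 norm clipped at a cap, so that no single outlier row can dominate the penalty. A minimal sketch of just that quantity (matrix values and cap are invented):

```python
import math

def capped_l21_norm(rows, cap):
    """Capped l2,1 norm of a matrix: sum over rows of min(||row||_2, cap).
    Without the cap this is the ordinary l2,1 norm, which promotes
    row sparsity; the cap bounds the influence of outlier rows."""
    return sum(min(math.sqrt(sum(v * v for v in row)), cap)
               for row in rows)

# Two ordinary rows and one outlier row; the cap limits the outlier:
X = [[3.0, 4.0],      # row norm 5
     [0.0, 1.0],      # row norm 1
     [30.0, 40.0]]    # row norm 50, clipped to the cap
value = capped_l21_norm(X, cap=10.0)
```

In the paper's minimization this term is one component of a larger joint objective; computing it in isolation just shows why it is robust to outliers.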

  2. The use of a portable gamma camera for preoperative lymphatic mapping: a comparison with a conventional gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Vidal-Sicart, Sergi; Paredes, Pilar [Hospital Clinic Barcelona, Nuclear Medicine Department (CDIC), Barcelona (Spain); Institut d' Investigacio Biomedica Agusti Pi Sunyer (IDIBAPS), Barcelona (Spain); Vermeeren, Lenka; Valdes-Olmos, Renato A. [Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital (NKI-AVL), Nuclear Medicine Department, Amsterdam (Netherlands); Sola, Oriol [Hospital Clinic Barcelona, Nuclear Medicine Department (CDIC), Barcelona (Spain)

    2011-04-15

    Planar lymphoscintigraphy is routinely used for preoperative sentinel node visualization, but large gamma cameras are not always available. We evaluated the reproducibility of lymphatic mapping with a smaller and portable gamma camera. In two centres, 52 patients with breast cancer received preoperative lymphoscintigraphy with a conventional gamma camera with a field of view of 40 x 40 cm. Static anterior and lateral images were performed at 15 min, 2 h and 4 h after injection of the radiotracer (99mTc-nanocolloid). At 2 h after injection, anterior and oblique images were also performed with a portable gamma camera (Sentinella, Oncovision) positioned to obtain a field of view of 20 x 20 cm. Visualization of lymphatic drainage on conventional images and images with the portable device were compared for number of nodes depicted, their intensity and localization of sentinel nodes. The images performed with the conventional gamma camera depicted sentinel nodes in 94%, while the portable gamma camera showed drainage in 73%. There was however no significant difference in visualization between the two devices when a lead shield was used to mask the injection area in 43 patients (95 vs 88%, p = 0.25). Second-echelon nodes were visualized in 62% of the patients with the conventional gamma camera and in 29% of the cases with the portable gamma camera. Preoperative imaging with a portable gamma camera fitted with a pinhole collimator to obtain a field of view of 20 x 20 cm is able to depict sentinel nodes in 88% of the cases, if a lead shield is used to mask the injection site. This device may be useful in centres without the possibility to perform a preoperative image. (orig.)

  3. The use of a portable gamma camera for preoperative lymphatic mapping: a comparison with a conventional gamma camera

    International Nuclear Information System (INIS)

    Vidal-Sicart, Sergi; Paredes, Pilar; Vermeeren, Lenka; Valdes-Olmos, Renato A.; Sola, Oriol

    2011-01-01

    Planar lymphoscintigraphy is routinely used for preoperative sentinel node visualization, but large gamma cameras are not always available. We evaluated the reproducibility of lymphatic mapping with a smaller and portable gamma camera. In two centres, 52 patients with breast cancer received preoperative lymphoscintigraphy with a conventional gamma camera with a field of view of 40 x 40 cm. Static anterior and lateral images were performed at 15 min, 2 h and 4 h after injection of the radiotracer (99mTc-nanocolloid). At 2 h after injection, anterior and oblique images were also performed with a portable gamma camera (Sentinella, Oncovision) positioned to obtain a field of view of 20 x 20 cm. Visualization of lymphatic drainage on conventional images and images with the portable device were compared for number of nodes depicted, their intensity and localization of sentinel nodes. The images performed with the conventional gamma camera depicted sentinel nodes in 94%, while the portable gamma camera showed drainage in 73%. There was however no significant difference in visualization between the two devices when a lead shield was used to mask the injection area in 43 patients (95 vs 88%, p = 0.25). Second-echelon nodes were visualized in 62% of the patients with the conventional gamma camera and in 29% of the cases with the portable gamma camera. Preoperative imaging with a portable gamma camera fitted with a pinhole collimator to obtain a field of view of 20 x 20 cm is able to depict sentinel nodes in 88% of the cases, if a lead shield is used to mask the injection site. This device may be useful in centres without the possibility to perform a preoperative image. (orig.)

  4. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    Full Text Available There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.
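    CamShift, used above for per-camera tracking, is built on mean-shift: the search window is repeatedly moved to the centroid of the probability mass it covers. A pure-Python toy of that core step (not OpenCV's implementation; for simplicity the window is assumed to stay inside the image):

```python
import numpy as np

def mean_shift(prob, window, iters=10):
    """Core of CamShift: move a search window (x, y, w, h) to the
    centroid of the probability mass it covers, until it converges.
    prob is a 2D probability (back-projection) image."""
    x, y, w, h = window
    for _ in range(iters):
        roi = prob[y:y + h, x:x + w]
        total = roi.sum()
        if total == 0:
            break
        ys, xs = np.mgrid[0:h, 0:w]
        cx = (xs * roi).sum() / total   # centroid within the window
        cy = (ys * roi).sum() / total
        nx = int(round(x + cx - w / 2))  # recentre the window
        ny = int(round(y + cy - h / 2))
        if (nx, ny) == (x, y):
            break
        x, y = max(nx, 0), max(ny, 0)
    return x, y, w, h

# A blob of probability mass centred at (30, 20); start off-target.
prob = np.zeros((64, 64))
prob[18:23, 28:33] = 1.0
track = mean_shift(prob, (24, 14, 10, 10))
```

Full CamShift additionally adapts the window size and orientation each iteration; the fixed-window version above is plain mean-shift, the part that makes the handover tracker follow an object within one camera's view.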

  5. Development and evaluation of an improved high-resolution TOFPET camera: TOFPET II: [Progress report, 1986-1987

    International Nuclear Information System (INIS)

    1987-01-01

    Our TOFPET II positron camera is now in the construction phase. A majority of the components have been prototyped and tested, including the gantry, detectors and electronics. We also anticipate some additional development effort in the time-of-flight electronics; however, this development can be carried out during the test and evaluation phase of the contract. The fourth and final year of this contract is devoted to testing and evaluating the TOFPET II camera. Preliminary testing has already begun together with the software development effort so that the total contract can be completed by the projected deadline of April 30, 1988.

  6. Evaluation of 3D reconstruction algorithms for a small animal PET camera

    International Nuclear Information System (INIS)

    Johnson, C.A.; Gandler, W.R.; Seidel, J.

    1996-01-01

    The use of paired, opposing position-sensitive phototube scintillation cameras (SCs) operating in coincidence for small-animal imaging with positron emitters is currently under study. Because of the low sensitivity of the system even in 3D mode and the need to produce images with high resolution, it was postulated that a 3D expectation maximization (EM) reconstruction algorithm might be well suited for this application. We investigated four reconstruction algorithms for the 3D SC PET camera: 2D filtered back-projection (FBP), 2D ordered-subset EM (OSEM), 3D reprojection (3DRP), and 3D OSEM. Noise was assessed for all slices by the coefficient of variation in a simulated uniform cylinder. Resolution was assessed from a simulation of 15 point sources in the warm background of the uniform cylinder. At comparable noise levels, the resolution achieved with OSEM (0.9 mm to 1.2 mm) is significantly better than that obtained with FBP or 3DRP (1.5 mm to 2.0 mm). Images of a rat skull labeled with 18F-fluoride suggest that 3D OSEM can improve the image quality of a small-animal PET camera.
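
    The multiplicative EM update at the heart of MLEM/OSEM can be illustrated on a toy problem. The sketch below is not the paper's 3D implementation; the 3 x 3 system matrix and the phantom values are hypothetical, and a single subset is used (plain MLEM).

```python
import numpy as np

# Toy 1-D illustration of the multiplicative EM update underlying
# MLEM/OSEM reconstruction:  x <- x * A^T(y / Ax) / A^T 1
A = np.array([[1.0, 0.5, 0.0],    # hypothetical system matrix:
              [0.0, 1.0, 0.5],    # maps 3 image pixels to 3 projection bins
              [0.5, 0.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
y = A @ x_true                     # noiseless projection data

x = np.ones(3)                     # uniform, strictly positive start
sens = A.T @ np.ones(3)            # sensitivity (normalisation) term
for _ in range(500):               # EM iterations
    x *= (A.T @ (y / (A @ x))) / sens

print(np.round(x, 2))              # approaches x_true = [2, 1, 3]
```

With noiseless, consistent data the iterates converge toward the true image; with noisy data the iteration is usually stopped early or regularized.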

  7. Video Sharing System Based on Wi-Fi Camera

    OpenAIRE

    Qidi Lin; Hewei Yu; Jinbin Huang; Weile Liang

    2015-01-01

    This paper introduces a video sharing platform based on Wi-Fi, which consists of a camera, a mobile phone and a PC server. The platform can receive the wireless signal from the camera and show the live video captured by the camera on the mobile phone. In addition, it is able to send commands to the camera and control the camera's holder to rotate. The platform can be applied to interactive teaching, monitoring of dangerous areas, and so on. Testing results show that the platform can share ...

  8. Astronomical Research with the MicroObservatory Net

    Science.gov (United States)

    Brecher, K.; Sadler, P.; Gould, R.; Leiker, S.; Antonucci, P.; Deutsch, F.

    1997-05-01

    We have developed a fully integrated automated astronomical telescope system which combines the imaging power of a cooled CCD with a self-contained and weatherized 15 cm reflecting optical telescope and mount. The MicroObservatory Net consists of five of these telescopes. They are currently being deployed around the world at widely distributed longitudes. Remote access to the MicroObservatories over the Internet has now been implemented. Software for computer control, pointing, focusing and filter selection, as well as pattern recognition, has been developed as part of the project. The telescopes can be controlled in real time or in delay mode, from a Macintosh, PC or other computer using Web-based software. The Internet address of the telescopes is http://cfa-www.harvard.edu/cfa/sed/MicroObservatory/MicroObservatory.html. In the real-time mode, individuals have access to all of the telescope control functions without the need for an 'on-site' operator. Users can sign up for a specific period of time. In the batch mode, users can submit requests for delayed telescope observations. After a MicroObservatory completes a job, the user is automatically notified by e-mail that the image is available for viewing and downloading from the Web site. The telescopes were designed for classroom instruction, as well as for use by students and amateur astronomers for original scientific research projects. We are currently examining a variety of technical and educational questions about the use of the telescopes, including: (1) What are the best approaches to scheduling real-time versus batch-mode observations? (2) What criteria should be used for allocating telescope time? (3) With deployment of more than one telescope, is it advantageous for each telescope to be used for just one type of observation, i.e., some for photometric use, others for imaging? And (4) What are the most valuable applications of the MicroObservatories in astronomical research? Support for the Micro

  9. EDICAM (Event Detection Intelligent Camera)

    Energy Technology Data Exchange (ETDEWEB)

    Zoletnik, S. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Szabolics, T., E-mail: szabolics.tamas@wigner.mta.hu [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Kocsis, G.; Szepesi, T.; Dunai, D. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary)

    2013-10-15

    Highlights: ► We present EDICAM's hardware modules. ► We present EDICAM's main design concepts. ► This paper describes EDICAM's firmware architecture. ► Description of the operation principles. ► Further developments. -- Abstract: A new type of fast framing camera has been developed for fusion applications by the Wigner Research Centre for Physics during the last few years. A new concept was designed for intelligent event-driven imaging, which is capable of focusing image readout on Regions of Interest (ROIs) where and when predefined events occur. At present these events are intensity changes and external triggers, but in the future more sophisticated methods may also be defined. The camera provides a 444 Hz frame rate at the full resolution of 1280 × 1024 pixels, but monitoring of smaller ROIs can be done in the 1–116 kHz range even during exposure of the full image. Keeping space limitations and the harsh environment in mind, the camera is divided into a small Sensor Module and a processing card interconnected by a fast 10 Gbit optical link. This camera hardware has been used for passive monitoring of the plasma in different devices, for example at ASDEX Upgrade and COMPASS, with the first version of its firmware. The new firmware and software package is now available and ready for testing the new event-processing features. This paper presents the operation principle and features of the Event Detection Intelligent Camera (EDICAM). The device is intended to be the central element in the 10-camera monitoring system of the Wendelstein 7-X stellarator.
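
    The event-driven ROI idea (detect an intensity change, then read out only a window around it) can be sketched as below. This is a hedged illustration, not EDICAM's actual firmware logic; the frame size, threshold and margin are made-up values.

```python
import numpy as np

# Sketch of intensity-change event detection selecting a readout ROI:
# compare successive frames and return a small bounding window around
# pixels whose intensity changed by more than a threshold.

def changed_roi(prev, curr, threshold=50, margin=2):
    """Return (row_slice, col_slice) bounding changed pixels, or None."""
    diff = np.abs(curr.astype(int) - prev.astype(int)) > threshold
    if not diff.any():
        return None                      # no event: keep monitoring full frame
    rows, cols = np.nonzero(diff)
    r0 = max(rows.min() - margin, 0)
    r1 = min(rows.max() + margin + 1, curr.shape[0])
    c0 = max(cols.min() - margin, 0)
    c1 = min(cols.max() + margin + 1, curr.shape[1])
    return slice(r0, r1), slice(c0, c1)

prev = np.zeros((64, 64), dtype=np.uint8)
curr = prev.copy()
curr[30:33, 40:44] = 200                 # a bright transient "event"
roi = changed_roi(prev, curr)
print(roi)                               # window around rows 30-32, cols 40-43
```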

  10. Testing by photometric measurement and camera study of theoretical prediction of microvolume universal sessile dropshape

    International Nuclear Information System (INIS)

    Smith, S R P; O'Neill, M; McMillan, N D; Arthure, K; Smith, S; Riedel, S

    2011-01-01

    The approach to the theory of sessile drop shapes held on a cylindrical drophead is discussed. It reveals an 'undifferentiable' universal micro-dropshape for volumes below 3 μL. Camera studies demonstrate the veracity of this prediction, which is exploited in the design of a new microvolume spectrometer. The mean pathlength of light injected through a microvolume sessile drop has been determined both from the model and from experiment. The drop volume accurately determines the mean pathlength, and with this the Beer's law relationship is experimentally confirmed. The Transmitted Light Drop Analyser uses this universal 'natural cuvette' to deliver both high-performance UV spectra and absorbance measurements at discrete wavelengths.
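
    The Beer-Lambert relationship referred to above is A = ε·c·l: for a fixed analyte, absorbance scales linearly with the mean optical pathlength l through the drop, which is why knowing the pathlength from the drop volume suffices. A minimal numeric sketch (all values illustrative, not from the paper):

```python
# Beer-Lambert law: absorbance A = eps * c * l, so absorbance is
# proportional to the mean pathlength l set by the drop volume.

def absorbance(eps_l_per_mol_cm, conc_mol_per_l, path_cm):
    return eps_l_per_mol_cm * conc_mol_per_l * path_cm

eps = 5000.0          # molar absorptivity, L mol^-1 cm^-1 (hypothetical)
conc = 2e-5           # concentration, mol/L (hypothetical)
for path in (0.05, 0.10, 0.20):   # mean pathlengths set by drop volume, cm
    print(path, absorbance(eps, conc, path))   # doubles as path doubles
```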

  11. An Open Standard for Camera Trap Data

    NARCIS (Netherlands)

    Forrester, Tavis; O'Brien, Tim; Fegraus, Eric; Jansen, P.A.; Palmer, Jonathan; Kays, Roland; Ahumada, Jorge; Stern, Beth; McShea, William

    2016-01-01

    Camera traps that capture photos of animals are a valuable tool for monitoring biodiversity. The use of camera traps is rapidly increasing and there is an urgent need for standardization to facilitate data management, reporting and data sharing. Here we offer the Camera Trap Metadata Standard as an

  12. Research on a Micro Flip Robot That Can Climb Stairs

    Directory of Open Access Journals (Sweden)

    Jianzhong Wang

    2016-03-01

    Micro mobile robots (MMRs) can operate in rugged, narrow or dangerous regions; thus, they are widely used in numerous areas including surveillance, rescue and exploration. In urban environments, stairs are common obstacles that such robots find difficult to manoeuvre over. The authors analysed the research status of MMRs, particularly the difficulties of stair climbing, and present a novel type of MMR called the micro flip robot (MFRobot). A support arm subassembly was added to the centre of a wheeled chassis; using this structure, the MFRobot can climb stairs in a flipping mode. Based on this structure, the authors established a kinematic model of the stair-climbing process and analysed the force conditions for the key states, contributing to the existing knowledge of robot design. An MFRobot prototype was produced, and stair-climbing experiments, as well as experiments on manoeuvring through rubble regions and on slope surfaces, were conducted. The results show that the MFRobot can rapidly climb common stairs and can easily manoeuvre through a rubble region. The maximum slope angle the robot can climb was shown to be about 35° for concrete and wooden slope surfaces. Where the robot needs to carry sensors, particularly a camera, the camera is mounted on the support arm. The MFRobot prototype weighs 2.5 kg and is easily transportable. This structure resolves the contradiction between portability and obstacle-crossing performance; in addition, operational effectiveness can be improved using this structure.
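
    The reported ~35° slope limit is consistent with a simple friction-limited model of slope climbing: a wheeled robot slips when the required traction exceeds the available friction, giving a maximum angle of atan(μ). This is a quasi-static sketch only, not the authors' full force analysis, and the friction coefficient below is a hypothetical value chosen to match the reported limit.

```python
import math

# Friction-limited slope climbing (quasi-static sketch):
# on a slope of angle theta, traction demand m*g*sin(theta) must not
# exceed available friction mu*m*g*cos(theta), so theta_max = atan(mu).

def max_slope_deg(mu):
    return math.degrees(math.atan(mu))

print(round(max_slope_deg(0.70), 1))  # ~35 deg, consistent with the paper
```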

  13. Polarizing aperture stereoscopic cinema camera

    Science.gov (United States)

    Lipton, Lenny

    2012-07-01

    The art of stereoscopic cinematography has been held back by the lack of a convenient way to reduce the stereo camera lenses' interaxial separation to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows the interaxial separation to be varied down to small values, using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a single large digital sensor, the size of the standard 35 mm frame, with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies will be unaltered.

  14. Control system for gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.

    1977-01-01

    An improved gamma camera arrangement is described which utilizes a solid-state detector formed of high-purity germanium. The central arrangement of the camera operates to carry out a trapezoidal filtering operation over antisymmetrically summed spatial signals through gated integration procedures utilizing idealized integrating intervals. By simultaneously carrying out peak energy evaluation of the input signals, a desirable control over pulse pile-up phenomena is achieved. Additionally, through the use of the time derivative of incoming pulse or signal energy information to initially enable the control system, a low-level information evaluation is provided, serving to enhance the signal processing efficiency of the camera.

  15. Scintillating camera

    International Nuclear Information System (INIS)

    Vlasbloem, H.

    1976-01-01

    The invention relates to a scintillating camera, and in particular to an apparatus for determining the position coordinates of a light-pulse-emitting point on the anode of an image intensifier tube which forms part of a scintillating camera. The apparatus comprises at least three photomultipliers positioned to receive light emitted by the anode screen on their photocathodes; circuit means for processing the output voltages of the photomultipliers to derive voltages that are representative of the position coordinates; a pulse-height discriminator circuit adapted to be fed with the sum of the output voltages of the photomultipliers, for gating the output of the processing circuit when the amplitude of that sum lies in a predetermined amplitude range; and means for compensating the distortion introduced in the image on the anode screen.
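
    The position computation described (three or more photomultipliers, position from their output voltages, and a pulse-height gate on their sum) is essentially Anger-type centroid logic, which can be sketched as below. This is an illustrative sketch, not the patented circuit; the PMT layout, voltages and gate threshold are hypothetical.

```python
# Anger-type position estimation: the light-pulse coordinates are the
# signal-weighted centroid of the photomultiplier positions, and the
# summed signal acts as a pulse-height gate rejecting weak events.

PMT_POS = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]   # (x, y) of three PMTs

def estimate_position(signals, min_sum=1.0):
    s = sum(signals)
    if s < min_sum:                 # pulse-height gate: reject the event
        return None
    x = sum(v * p[0] for v, p in zip(signals, PMT_POS)) / s
    y = sum(v * p[1] for v, p in zip(signals, PMT_POS)) / s
    return (x, y)

print(estimate_position([1.0, 1.0, 2.0]))  # -> (5.0, 4.0)
```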

  16. MEMS-based thermally-actuated image stabilizer for cellular phone camera

    International Nuclear Information System (INIS)

    Lin, Chun-Ying; Chiou, Jin-Chern

    2012-01-01

    This work develops an image stabilizer (IS) that is fabricated using micro-electro-mechanical system (MEMS) technology and is designed to counteract vibrations when humans use cellular phone cameras. The proposed IS has dimensions of 8.8 × 8.8 × 0.3 mm³ and is strong enough to suspend an image sensor. The process used to fabricate the IS includes inductively coupled plasma (ICP) processes, reactive ion etching (RIE) processes and the flip-chip bonding method. The IS is designed to enable the electrical signals from the suspended image sensor to be successfully routed out via signal output beams, and the maximum actuating distance of the stage exceeds 24.835 µm when the driving current is 155 mA. By integrating the MEMS device with the designed controller, the proposed IS can decrease hand tremor by 72.5%. (paper)

  17. The "All Sky Camera Network"

    Science.gov (United States)

    Caldwell, Andy

    2005-01-01

    In 2001, the "All Sky Camera Network" came to life as an outreach program to connect the Denver Museum of Nature and Science (DMNS) exhibit "Space Odyssey" with Colorado schools. The network comprises cameras placed strategically at schools throughout Colorado to capture fireballs--rare events that produce meteorites.

  18. Initial laboratory evaluation of color video cameras: Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  19. Projection Mapping User Interface for Disabled People.

    Science.gov (United States)

    Gelšvartas, Julius; Simutis, Rimvydas; Maskeliūnas, Rytis

    2018-01-01

    Difficulty in communicating is one of the key challenges for people suffering from severe motor and speech disabilities. Often such a person can communicate and interact with the environment only using assistive technologies. This paper presents a multifunctional user interface designed to improve communication efficiency and personal independence. The main component of this interface is a projection mapping technique used to highlight objects in the environment. Projection mapping makes it possible to create a natural augmented-reality information presentation method. The user interface combines a depth sensor and a projector to create a camera-projector system. We provide a detailed description of the camera-projector system calibration procedure. The described system performs tabletop object detection and automatic projection mapping. Multiple user input modalities have been integrated into the multifunctional user interface. Such a system can be adapted to the needs of people with various disabilities.
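
    For a planar tabletop, the outcome of a camera-projector calibration can be expressed as a 3 x 3 homography that maps a point detected in camera pixels to the projector pixel that illuminates it. The sketch below shows only this mapping step, not the paper's calibration procedure; the matrix is a hypothetical pure scale-and-shift.

```python
import numpy as np

# Applying a camera -> projector homography H for tabletop projection
# mapping: a detected camera point is mapped, in homogeneous
# coordinates, to the projector pixel that should highlight it.

H = np.array([[2.0, 0.0, 100.0],    # hypothetical calibrated homography
              [0.0, 2.0,  50.0],
              [0.0, 0.0,   1.0]])

def cam_to_proj(pt):
    u, v, w = H @ np.array([pt[0], pt[1], 1.0])
    return (u / w, v / w)           # perspective divide

print(cam_to_proj((320, 240)))      # -> (740.0, 530.0)
```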

  1. THE EXAMPLE OF USING THE XIAOMI CAMERAS IN INVENTORY OF MONUMENTAL OBJECTS - FIRST RESULTS

    Directory of Open Access Journals (Sweden)

    J. S. Markiewicz

    2017-11-01

    At present, digital documentation recorded in the form of raster or vector files is the obligatory way of inventorying historical objects. Today, photogrammetry is becoming more and more popular and is becoming the standard of documentation in many projects involving the recording of all possible spatial data on landscape, architecture, or even single objects. Low-cost sensors allow for the creation of reliable and accurate three-dimensional models of investigated objects. This paper presents the results of a comparison between the outcomes obtained when using three sources of images: low-cost Xiaomi cameras, a full-frame camera (Canon 5D Mark II) and a middle-frame camera (Hasselblad-Hd4). In order to check how the results obtained from these sensors differ, the following parameters were analysed: the accuracy of the orientation of the ground-level photos on the control and check points, the distribution of the distortion determined in the self-calibration process, the flatness of the walls, and the discrepancies between point clouds from the low-cost cameras and reference data. The results presented below are the result of co-operation between researchers from three institutions: the Systems Research Institute PAS, the Department of Geodesy and Cartography at the Warsaw University of Technology and the National Museum in Warsaw.
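
    One of the checks mentioned, the "flatness of the walls", is commonly computed by fitting a least-squares plane to the wall's point cloud and reporting the RMS point-to-plane distance. The sketch below is an assumption about that step (the paper does not specify its method), and the sample points are synthetic.

```python
import numpy as np

# Wall-flatness check: fit a least-squares plane to a point cloud via
# SVD (the normal is the direction of least variance) and report the
# RMS distance of the points from that plane.

def plane_rms(points):
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    normal = vt[-1]                      # direction of least variance
    d = (points - c) @ normal            # signed point-to-plane distances
    return float(np.sqrt(np.mean(d ** 2)))

pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0.02]], float)
print(round(plane_rms(pts), 4))          # small residual: a nearly flat "wall"
```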

  3. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras, however there are ways to ov...

  4. 21 CFR 886.1120 - Opthalmic camera.

    Science.gov (United States)

    2010-04-01

    ... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Opthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding area...

  5. Development of Micro Air Vehicle Technology With In-Flight Adaptive-Wing Structure

    Science.gov (United States)

    Waszak, Martin R. (Technical Monitor); Shkarayev, Sergey; Null, William; Wagner, Matthew

    2004-01-01

    This is a final report on the research study, "Development of Micro Air Vehicle Technology with In-Flight Adaptive-Wing Structure". This project involved the development of variable-camber technology to achieve efficient design of micro air vehicles. Specifically, it focused on the following topics: 1) Low Reynolds number wind tunnel testing of cambered-plate wings. 2) Theoretical performance analysis of micro air vehicles. 3) Design of a variable-camber MAV actuated by micro servos. 4) Test flights of a variable-camber MAV.

  6. Science objectives and first results from the SMART-1/AMIE multicolour micro-camera

    Science.gov (United States)

    Josset, J.-L.; Beauvivre, S.; Cerroni, P.; de Sanctis, M. C.; Pinet, P.; Chevrel, S.; Langevin, Y.; Barucci, M. A.; Plancke, P.; Koschny, D.; Almeida, M.; Sodnik, Z.; Mancuso, S.; Hofmann, B. A.; Muinonen, K.; Shevchenko, V.; Shkuratov, Yu.; Ehrenfreund, P.; Foing, B. H.

    The Advanced Moon micro-Imager Experiment (AMIE), on board SMART-1, the first European mission to the Moon, is an imaging system with scientific, technical and public outreach objectives. The science objectives are to image the lunar South Pole, permanently shadowed areas (ice deposits), peaks of eternal light (crater rims), ancient lunar non-mare volcanism, local spectrophotometry and the physical state of the lunar surface, and to map high-latitude regions (south), mainly on the far side (South Pole-Aitken basin). The technical objectives are to perform a laser-link experiment (detection of a laser beam emitted by the ESA Tenerife ground station), flight demonstration of new technologies and on-board autonomous navigation. The public outreach and educational objectives are to promote planetary and space exploration. We present here the first results obtained during the cruise phase.

  7. Dynamic imaging with coincidence gamma camera

    International Nuclear Information System (INIS)

    Elhmassi, Ahmed

    2008-01-01

    In this paper we develop a technique to calculate dynamic parameters from data acquired using gamma-camera PET (gc PET). Our method is based on an algorithm developed for dynamic SPECT, which processes all of the dynamic projection data simultaneously instead of reconstructing a series of static images individually. The algorithm was modified to account for the extra data that are obtained with gc PET (compared with SPECT). The method was tested using simulated projection data for both a SPECT and a gc PET geometry. These studies showed the ability of the code to reconstruct simulated data with a varying range of half-lives. The accuracy of the algorithm was measured in terms of the reconstructed half-life and initial activity for the simulated object. The reconstruction of gc PET data showed improvements in half-life and activity accuracy of 23% and 20%, respectively, compared to SPECT data (at 50 iterations). The gc PET algorithm was also tested using data from an experimental phantom and, finally, applied to a clinical dataset, where the algorithm was further modified to deal with the situation where the activity in certain pixels decreases and then increases during the acquisition. (author)
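
    The dynamic parameters being assessed (half-life and initial activity) come from a mono-exponential time-activity curve A(t) = A0·exp(-λt). The sketch below recovers both from such a curve by a log-linear least-squares fit; it illustrates the parameters only, not the paper's simultaneous-projection reconstruction, and all numbers are synthetic.

```python
import numpy as np

# Recovering half-life and initial activity from a noiseless
# time-activity curve via a log-linear least-squares fit:
#   ln A(t) = ln A0 - lambda * t,  T_half = ln 2 / lambda

t = np.arange(0.0, 60.0, 5.0)            # acquisition times, minutes
lam = np.log(2.0) / 20.0                 # decay constant for a 20 min half-life
activity = 100.0 * np.exp(-lam * t)      # synthetic curve, A0 = 100

slope, intercept = np.polyfit(t, np.log(activity), 1)
est_half = np.log(2.0) / -slope          # recovered half-life
est_a0 = np.exp(intercept)               # recovered initial activity
print(round(float(est_half), 3), round(float(est_a0), 3))  # 20.0 100.0
```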

  8. Measurement of a Neutrino-Induced Charged Current Single Neutral Pion Cross Section at MicroBooNE

    Energy Technology Data Exchange (ETDEWEB)

    Hackenburg, Ariana [Yale U.

    2018-01-01

    Micro Booster Neutrino Experiment (MicroBooNE) is a Liquid Argon Time Projection Chamber (LArTPC) operating in the Booster Neutrino Beamline at Fermi National Accelerator Laboratory. MicroBooNE's physics goals include studying short-baseline $\

  9. Prism-based single-camera system for stereo display

    Science.gov (United States)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principles of binocular vision we deduce the relationship between binocular viewing and the dual-camera system. We can thus establish the relationship between the prism single-camera system and binocular vision, and obtain the positional relationship of prism, camera and object that gives the best stereo display. Finally, using the active-shutter stereo glasses of the NVIDIA Company, we realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various observation manners of the eyes. The stereo imaging system designed by the proposed method can faithfully recover the 3-D shape of the photographed object.
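
    The binocular relationship such a system ultimately exploits is the standard triangulation formula: two (real or virtual) viewpoints separated by baseline B see a point at depth Z with disparity d = f·B/Z, so depth is recovered as Z = f·B/d. A minimal sketch with illustrative numbers (not values from the paper):

```python
# Depth from disparity for two viewpoints separated by baseline B:
#   d = f * B / Z   =>   Z = f * B / d

def depth_from_disparity(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

f = 800.0        # focal length in pixels (hypothetical)
B = 0.0625       # baseline between the two virtual viewpoints, metres
print(depth_from_disparity(f, B, 25.0))   # -> 2.0 metres
```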

  10. Automated recognition and tracking of aerosol threat plumes with an IR camera pod

    Science.gov (United States)

    Fauth, Ryan; Powell, Christopher; Gruber, Thomas; Clapp, Dan

    2012-06-01

    Protection of fixed sites from chemical, biological, or radiological aerosol plume attacks depends on early warning so that there is time to take mitigating actions. Early warning requires continuous, autonomous, and rapid coverage of large surrounding areas; however, this must be done at an affordable cost. Once a potential threat plume is detected, a different type of sensor (e.g., a more expensive, slower sensor) may be cued for identification purposes, but the problem is to quickly identify all of the potential threats around the fixed site of interest. To address this problem of low-cost, persistent, wide-area surveillance, an IR camera pod and multi-image stitching and processing algorithms have been developed for automatic recognition and tracking of aerosol plumes. A rugged, modular, static pod design, which accommodates as many as four micro-bolometer IR cameras for 45° to 180° of azimuth coverage, is presented. Various OpenCV-based image-processing algorithms, including stitching of multiple adjacent FOVs, recognition of aerosol plume objects, and the tracking of aerosol plumes, are presented using process block diagrams and sample field-test results, including chemical and biological simulant plumes. Methods for dealing with background removal, brightness equalization between images, and focus quality for optimal plume tracking are also discussed.
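
    One processing stage mentioned above, background removal followed by plume localisation, can be sketched with plain array operations. This is a hedged illustration, not the authors' OpenCV pipeline; the frame, threshold and synthetic "plume" are made up.

```python
import numpy as np

# Background removal + blob localisation sketch: subtract a background
# frame, threshold the absolute difference, and report the bounding box
# of the changed pixels as a candidate plume detection.

def detect_plume(background, frame, threshold=30):
    mask = np.abs(frame.astype(int) - background.astype(int)) > threshold
    if not mask.any():
        return None                      # nothing changed: no detection
    rows, cols = np.nonzero(mask)
    return (rows.min(), cols.min(), rows.max(), cols.max())  # bounding box

bg = np.zeros((48, 64), dtype=np.uint8)
frame = bg.copy()
frame[10:20, 30:40] = 120               # synthetic warm plume against cold sky
print(detect_plume(bg, frame))          # -> (10, 30, 19, 39)
```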

  11. The Use of Camera Traps in Wildlife

    Directory of Open Access Journals (Sweden)

    Yasin Uçarlı

    2013-11-01

    Camera traps are increasingly used in abundance and density estimates of wildlife species. They are a very good alternative to direct observation, particularly in steep terrain, in areas covered by dense vegetation, or for nocturnal species. The main reason for their use is that they eliminate economic, personnel and time losses by operating continuously at several points at the same time. Camera traps are motion- and heat-sensitive and, depending on the model, can take photographs or video. Crossing points and the feeding or mating areas of the focal species are priority locations for setting camera traps. Population size can be estimated by combining the images with capture-recapture methods, and population density is the population size divided by the effective sampling area. The mating and breeding seasons, habitat choice, group structure and survival rates of the focal species can also be obtained from the images. Camera traps are thus a cost-effective way to obtain the data needed for planning and conservation efforts, particularly for elusive species.
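
    The capture-recapture calculation described (population size from re-sighted individuals, then density as size over effective sampling area) can be illustrated with the classic Lincoln-Petersen estimator. All numbers below are illustrative, not from the paper.

```python
# Lincoln-Petersen capture-recapture estimate: with n1 animals
# photographed in a first session, n2 in a second, and m of the second
# session's animals already seen in the first, the population estimate
# is N = n1 * n2 / m. Density is N over the effective sampling area.

def lincoln_petersen(n1, n2, m):
    return n1 * n2 / m

N = lincoln_petersen(40, 30, 12)        # -> 100.0 animals
density = N / 25.0                      # animals per km^2 over 25 km^2
print(N, density)                       # 100.0 4.0
```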

  12. Architectural Design Document for Camera Models

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Architecture of camera simulator models and data interface for the Maneuvering of Inspection/Servicing Vehicle (MIV) study.

  13. C-RED One and C-RED2: SWIR high-performance cameras using Saphira e-APD and Snake InGaAs detectors

    Science.gov (United States)

    Gach, Jean-Luc; Feautrier, Philippe; Stadler, Eric; Clop, Fabien; Lemarchand, Stephane; Carmignani, Thomas; Wanwanscappel, Yann; Boutolleau, David

    2018-02-01

    After the development of the OCAM2 EMCCD fast visible camera dedicated to advanced adaptive optics wavefront sensing, First Light Imaging moved to fast SWIR cameras with the development of the C-RED One and C-RED 2 cameras. First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with sub-electron readout noise and very low background. C-RED One is based on the latest version of the SAPHIRA detector developed by Leonardo UK. This breakthrough has been made possible by the use of an e-APD infrared focal plane array, a truly disruptive technology in imaging. C-RED One is an autonomous system with an integrated cooling system and a vacuum regeneration system. It operates its sensor with a wide variety of readout techniques and processes video on-board thanks to an FPGA. We show its performance and describe its main features. In addition to this project, First Light Imaging developed an InGaAs 640 × 512 fast camera with unprecedented performance in terms of noise, dark current and readout speed, based on the SNAKE SWIR detector from Sofradir. This camera is called C-RED 2. The C-RED 2 characteristics and performance are described. The C-RED One project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944. The C-RED 2 development is supported by the "Investments for the future" program and the Provence Alpes Côte d'Azur Region, in the frame of the CPER.

  14. Hidden cameras everything you need to know about covert recording, undercover cameras and secret filming

    CERN Document Server

    Plomin, Joe

    2016-01-01

    Providing authoritative information on the practicalities of using hidden cameras to expose abuse or wrongdoing, this book is vital reading for anyone who may use or encounter secret filming. It gives specific advice on using phones or covert cameras and unravels the complex legal and ethical issues that need to be considered.

  15. Barbed micro-spikes for micro-scale biopsy

    Science.gov (United States)

    Byun, Sangwon; Lim, Jung-Min; Paik, Seung-Joon; Lee, Ahra; Koo, Kyo-in; Park, Sunkil; Park, Jaehong; Choi, Byoung-Doo; Seo, Jong Mo; Kim, Kyung-ah; Chung, Hum; Song, Si Young; Jeon, Doyoung; Cho, Dongil

    2005-06-01

    Single-crystal silicon planar micro-spikes with protruding barbs are developed for micro-scale biopsy and the feasibility of using the micro-spike as a micro-scale biopsy tool is evaluated for the first time. The fabrication process utilizes a deep silicon etch to define the micro-spike outline, resulting in protruding barbs of various shapes. Shanks of the fabricated micro-spikes are 3 mm long, 100 µm thick and 250 µm wide. Barbs protruding from micro-spike shanks facilitate the biopsy procedure by tearing off and retaining samples from target tissues. Micro-spikes with barbs successfully extracted tissue samples from the small intestines of the anesthetized pig, whereas micro-spikes without barbs failed to obtain a biopsy sample. Parylene coating can be applied to improve the biocompatibility of the micro-spike without deteriorating the biopsy function of the micro-spike. In addition, to show that the biopsy with the micro-spike can be applied to tissue analysis, samples obtained by micro-spikes were examined using immunofluorescent staining. Nuclei and F-actin of cells which are extracted by the micro-spike from a transwell were clearly visualized by immunofluorescent staining.

  16. Lights! Camera! Action Projects! Engaging Psychopharmacology Students in Service-based Action Projects Focusing on Student Alcohol Abuse.

    Science.gov (United States)

    Kennedy, Susan

    2016-01-01

    Alcohol abuse continues to be an issue of major concern for the health and well-being of college students. Estimates are that over 80% of college students are involved in the campus "alcohol culture." Annually, close to 2000 students die in the United States due to alcohol-related accidents, with another 600,000 sustaining injury due to alcohol-related incidents (NIAAA, 2013). Students enrolled in a Psychopharmacology course engaged in action projects (community outreach) focused on alcohol abuse on our campus. Research has indicated that these types of projects can increase student engagement in course material and foster important skills, including working with peers and developing involvement in one's community. This paper describes the structure and requirements of five student outreach projects and the final projects designed by the students, summarizes the grading and assessment of the projects, and discusses the rewards and challenges of incorporating such projects into a course.

  17. Evaluation of the algorithms for recovering reflectance from virtual digital camera response

    Directory of Open Access Journals (Sweden)

    Ana Gebejes

    2012-10-01

    Full Text Available In recent years many new methods for quality control in the graphic industry have been proposed. All of these methods have one thing in common: using a digital camera as the capturing device and an appropriate image processing method/algorithm to obtain the desired information. With the development of new, more accurate sensors, digital cameras became even more dominant and their use as measuring devices became more emphasized. The idea of using a camera as a spectrophotometer is interesting because this kind of measurement would be more economical, faster, widely available, and it would provide the possibility of capturing multiple colours with a single shot. This can be very useful for capturing colour targets for characterization of different properties of a print device. A lot of effort is put into enabling commercial colour CCD cameras (3 acquisition channels) to obtain enough information for reflectance recovery. Unfortunately, the RGB camera was not made with the idea of performing colour measurements but rather for producing an image that is visually pleasant for the observer. This somewhat complicates the task and calls for the development of different algorithms that estimate the reflectance information from the available RGB camera responses with the minimal possible error. In this paper three different reflectance estimation algorithms are evaluated (orthogonal projection, Wiener and optimized Wiener estimation), together with a method for reflectance approximation based on principal component analysis (PCA). The aim was to perform reflectance estimation pixel-wise and analyze the performance of the reflectance estimation algorithms locally, at specific pixels in the image, and globally, on the whole image. The performance of each algorithm was evaluated visually and numerically by obtaining the pixel-wise colour difference and the pixel-wise difference of the estimated reflectance from the original values. It was concluded that the Wiener method gives the best reflectance estimation.
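
    The empirical Wiener estimation step described in the abstract can be sketched numerically. This is a minimal sketch, not the paper's implementation: the 31-band reflectance space, the synthetic camera sensitivities and the low-dimensional reflectance basis below are all illustrative assumptions.

```python
import numpy as np

def wiener_matrix(R_train, C_train):
    """Empirical (noiseless) Wiener estimator mapping camera responses to spectra.

    R_train: (n, bands) training reflectance spectra
    C_train: (n, channels) corresponding camera responses
    Returns W of shape (bands, channels) so that r_hat = W @ c.
    """
    n = R_train.shape[0]
    Krc = R_train.T @ C_train / n      # cross-correlation reflectance/response
    Kcc = C_train.T @ C_train / n      # autocorrelation of camera responses
    return Krc @ np.linalg.inv(Kcc)

rng = np.random.default_rng(0)
bands, channels, n = 31, 3, 200
A = rng.random((channels, bands))      # hypothetical RGB sensitivity curves
B = rng.random((bands, channels))      # synthetic 3-dim reflectance basis
coeffs = rng.random((n, channels))
R = coeffs @ B.T                       # synthetic training reflectances
C = R @ A.T                            # simulated camera responses

W = wiener_matrix(R, C)
r_hat = C @ W.T                        # pixel-wise reflectance estimates
err = np.abs(r_hat - R).max()          # near zero: spectra lie in a 3-dim subspace
```

Because the synthetic spectra here lie exactly in a 3-dimensional subspace, the linear estimator recovers them essentially perfectly; real reflectances do not, which is why the paper compares several estimators and a PCA-based approximation.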

  18. Using a laser scanning camera for reactor inspection

    International Nuclear Information System (INIS)

    Armour, I.A.; Adrain, R.S.; Klewe, R.C.

    1984-01-01

    Inspection of nuclear reactors is normally carried out using TV or film cameras. There are, however, several areas where these cameras show considerable shortcomings. To overcome these difficulties, laser scanning cameras have been developed. This type of camera can be used for general visual inspection as well as the provision of high resolution video images with high ratio on and off-axis zoom capability. In this paper, we outline the construction and operation of a laser scanning camera and give examples of how it has been used in various power stations, and indicate future potential developments. (author)

  19. A beam test of prototype time projection chamber using micro ...

    Indian Academy of Sciences (India)

    High Energy Accelerator Organization (KEK), Tsukuba 305-0801, Japan. E-mail: makoto.kobayashi.exp@kek.jp. Abstract. We conducted a series of beam tests of prototype TPCs for the international linear collider (ILC) experiment, equipped with an MWPC, a MicroMEGAS, or GEMs as a readout device. The prototype ...

  20. Modelling Virtual Camera Behaviour Through Player Gaze

    DEFF Research Database (Denmark)

    Picardi, Andrea; Burelli, Paolo; Yannakakis, Georgios N.

    2012-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction largely depend on the placement and animation of the virtual camera. Therefore, virtual camera control plays a critical role in player experience and, thereby, in the overall quality of a computer game. Both game industry and game AI research focus on the development of increasingly sophisticated systems to automate the control of the virtual camera, integrating artificial intelligence algorithms within physical simulations. However, in both industry and academia little research has been carried out on the relationship between virtual camera, game-play and player behaviour. We run a game user experiment to shed some light on this relationship and identify relevant differences between camera behaviours through different game sessions, playing behaviours and player gaze patterns. Results show that users can ...

  1. MicroEcos: Micro-Scale Explorations of Large-Scale Late Pleistocene Ecosystems

    Science.gov (United States)

    Gellis, B. S.

    2017-12-01

    Pollen data can inform the reconstruction of early-floral environments by providing data for artistic representations of what early-terrestrial ecosystems looked like, and how existing terrestrial landscapes have evolved. For example, what did the Bighorn Basin look like when large ice sheets covered modern Canada, the Yellowstone Plateau had an ice cap, and the Bighorn Mountains were mantled with alpine glaciers? MicroEcos is an immersive, multimedia project that aims to strengthen human-nature connections through the understanding and appreciation of biological ecosystems. Collected pollen data elucidates flora that are visible in the fossil record - associated with the Late-Pleistocene - and have been illustrated and described in botanical literature. It aims to make scientific data accessible and interesting to all audiences through a series of interactive-digital sculptures, large-scale photography and field-based videography. While this project is driven by scientific data, it is rooted in deeply artistic and outreach-based practices, which include broad artistic practices, e.g.: digital design, illustration, photography, video and sound design. Using 3D modeling and printing technology, MicroEcos centers around a series of 3D-printed models of the Last Canyon rock shelter on the Wyoming and Montana border, the Little Windy Hill pond site in Wyoming's Medicine Bow National Forest, and the Natural Trap Cave site in Wyoming's Big Horn Basin. These digital, interactive 3D sculptures provide audiences with glimpses of three-dimensional Late-Pleistocene environments and help create a dialogue about how grass-, sagebrush-, and spruce-based ecosystems form. To help audiences better contextualize how MicroEcos bridges notions of time, space, and place, modern photography and videography of the Last Canyon, Little Windy Hill and Natural Trap Cave sites surround these 3D-digital reconstructions.

  2. Why Micro-foundations for Resource-Based Theory Are Needed and What They May Look Like

    DEFF Research Database (Denmark)

    Foss, Nicolai Juul

    2011-01-01

    One of the important events in the development of resource-based theory (RBT) over the past decade has been the call for establishing micro-foundations for RBT. However, the micro-foundations project is still largely an unfulfilled promise. This article clarifies the nature of the micro-foundatio...

  3. Event detection intelligent camera development

    International Nuclear Information System (INIS)

    Szappanos, A.; Kocsis, G.; Molnar, A.; Sarkozi, J.; Zoletnik, S.

    2008-01-01

    A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed for the video diagnostics of the W7-X stellarator; it consists of 10 distinct, standalone measurement channels, each holding a camera. Different operation modes will be implemented for continuous as well as for triggered readout. Hardware-level trigger signals will be generated from real-time image processing algorithms optimized for digital signal processor (DSP) and field programmable gate array (FPGA) architectures. At full resolution a camera sends 12-bit sampled 1280 x 1024 pixel frames at 444 fps, which amounts to 1.43 Terabyte over half an hour. Analysing such a huge amount of data is time consuming and computationally complex. We plan to overcome this problem with EDICAM's preprocessing concepts. The EDICAM camera system integrates the advantages of CMOS sensor chip technology and fast network connections. EDICAM is built up from three different modules with two interfaces: a sensor module (SM) with reduced hardware and functional elements, to reach a small and compact size and robust operation in a harsh environment; an image processing and control unit (IPCU) module, which handles all user-predefined events and runs image processing algorithms to generate trigger signals; and finally a 10 Gigabit Ethernet compatible image readout card, which functions as the network interface for the PC. In this contribution the concepts of EDICAM and the functions of the distinct modules are described
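
    The quoted data volume can be checked with a few lines of arithmetic, using the frame geometry, bit depth, frame rate and duration stated in the abstract (binary terabytes are assumed, since that is what reproduces the quoted figure):

```python
# 12-bit samples, 1280 x 1024 pixels, 444 frames per second, 30 minutes
bits_per_frame = 1280 * 1024 * 12
bytes_per_second = bits_per_frame / 8 * 444   # ~873 MB/s sustained
total_bytes = bytes_per_second * 30 * 60
terabytes = total_bytes / 2**40               # binary TiB
print(round(terabytes, 2))                    # 1.43, matching the abstract
```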

  4. Lock-in thermography, penetrant inspection, and scanning electron microscopy for quantitative evaluation of open micro-cracks at the tooth-restoration interface

    Science.gov (United States)

    Streza, M.; Hodisan, I.; Prejmerean, C.; Boue, C.; Tessier, Gilles

    2015-03-01

    The evaluation of a dental restoration in a non-invasive way is of paramount importance in clinical practice. The aim of this study was to assess the minimum detectable open crack at the cavity-restorative material interface by the lock-in thermography technique, at laser intensities which are safe for living teeth. For the analysis of the interface, 18 box-type class V standardized cavities were prepared on the facial and oral surfaces of each tooth, with coronal margins in enamel and apical margins in dentine. The preparations were restored with the Giomer Beautifil (Shofu) in combination with three different adhesive systems. Three specimens were randomly selected from each experimental group and each slice has been analysed by visible, infrared (IR), and scanning electron microscopy (SEM). Lock-in thermography showed the most promising results in detecting both marginal and internal defects. The proposed procedure leads to a diagnosis of micro-leakages having openings of 1 µm, which is close to the diffraction limit of the IR camera. Clinical use of a thermographic camera in assessing the marginal integrity of a restoration becomes possible. The method overcomes some drawbacks of standard SEM or dye penetration testing. The results support the use of an IR camera in dentistry, for the diagnosis of micro-gaps at bio-interfaces.

  5. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    A positron emission tomography camera having a plurality of detector rings positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom. Each detector ring or offset ring includes a plurality of photomultiplier tubes, and a plurality of scintillation crystals are positioned relative to the photomultiplier tubes such that each tube is responsive to more than one crystal. Each alternate crystal in the ring is offset by one-half or less of the thickness of the crystal, so that the staggered crystals are seen by more than one photomultiplier tube. This sharing of crystals and photomultiplier tubes allows identification of the staggered crystal and the use of smaller detectors shared by larger photomultiplier tubes, thereby requiring fewer photomultiplier tubes, creating more scanning slices, providing better data sampling, and reducing the cost of the camera. The offset detector ring geometry reduces the cost of the positron camera and improves its performance.

  6. A multi-camera system for real-time pose estimation

    Science.gov (United States)

    Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin

    2007-04-01

    This paper presents a multi-camera system that performs face detection and pose estimation in real-time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates upon the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.
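
    The geometric idea, relating the projected shape of the eyes-mouth triangle to the yaw angle, can be illustrated with a much-simplified sketch. The foreshortening model and the frontal-ratio calibration below are my own simplification for illustration, not the authors' derived equations:

```python
from math import acos, degrees

def estimate_yaw(left_eye, right_eye, mouth, frontal_ratio):
    """Rough yaw estimate from an eyes-mouth triangle in image coordinates.

    Under pure yaw (rotation about the vertical axis), the projected
    horizontal eye separation shrinks as cos(yaw), while the vertical
    eyes-to-mouth distance is unchanged; comparing their ratio to a
    frontal-view reference ratio therefore yields the yaw angle.
    """
    eye_sep = abs(right_eye[0] - left_eye[0])
    eyes_mid_y = (left_eye[1] + right_eye[1]) / 2.0
    height = abs(mouth[1] - eyes_mid_y)
    ratio = eye_sep / height
    c = min(1.0, ratio / frontal_ratio)   # clamp against measurement noise
    return degrees(acos(c))

# Frontal face: eyes 60 px apart, mouth 60 px below -> reference ratio 1.0
frontal = estimate_yaw((20, 0), (80, 0), (50, 60), frontal_ratio=1.0)   # 0.0
# Same face yawed: eye separation foreshortened to ~60*cos(45 deg) px
yawed = estimate_yaw((20, 0), (62.4, 0), (41.2, 60), frontal_ratio=1.0)  # ~45
```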

  7. Forward rectification: spatial image normalization for a video from a forward facing vehicle camera

    Science.gov (United States)

    Prun, Viktor; Polevoy, Dmitri; Postnikov, Vassiliy

    2017-03-01

    The work in this paper is focused on visual ADAS (Advanced Driver Assistance Systems). We introduce forward rectification - a technique for making computer vision algorithms more robust against camera mount point and mount angles. Using the technique can increase the quality of recognition as well as lower the dimensionality of algorithm invariance, making it possible to apply simpler affine-invariant algorithms in applications that require projective invariance. To provide useful results, this rectification requires thorough calibration of the camera, which can be done automatically or semi-automatically. The technique is of a general nature and can be applied to different algorithms, such as pattern-matching detectors and convolutional neural networks. The applicability of the technique is demonstrated by the detection rate of a HOG-based car detector.

  8. Micro-Cathode Arc Thruster (μCAT) System Development

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this project is to develop a highly reliable smallsat class micro propulsion guidance, navigation, and control (GN&C) actuator that will be used...

  9. Static and dynamic characterization of robust superhydrophobic surfaces built from nano-flowers on silicon micro-post arrays

    KAUST Repository

    Chen, Longquan

    2010-09-01

    Superhydrophobic nano-flower surfaces were fabricated using MEMS technology and microwave plasma-enhanced chemical vapor deposition (MPCVD) of carbon nanotubes on silicon micro-post array surfaces. The nano-flower structures can be readily formed within 1-2 min on the micro-post arrays with the spacing ranging from 25 to 30 μm. The petals of the nano-flowers consisted of clusters of multi-wall carbon nanotubes. Patterned nano-flower structures were characterized using various microscopy techniques. After MPCVD, the apparent contact angle (160 ± 0.2°), abbreviated as ACA (defined as the measured angle between the apparent solid surface and the tangent to the liquid-fluid interface), of the nano-flower surfaces increased by 139% compared with that of the silicon micro-post arrays. The measured ACA of the nano-flower surface is consistent with the predicted ACA from a modified Cassie-Baxter equation. A high-speed CCD camera was used to study droplet impact dynamics on various micro/nanostructured surfaces. Both static testing (ACA and sliding angle) and droplet impact dynamics demonstrated that, among seven different micro/nanostructured surfaces, the nano-flower surfaces are the most robust superhydrophobic surfaces. © 2010 IOP Publishing Ltd.
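
    The modified Cassie-Baxter prediction mentioned above builds on the standard Cassie-Baxter relation for a composite (air-trapping) interface, cos θ* = f(cos θ + 1) − 1, where f is the wetted solid fraction and θ the intrinsic contact angle. A minimal sketch of the base relation (the 110° intrinsic angle in the example is illustrative, not a value from the paper):

```python
from math import acos, cos, degrees, radians

def cassie_baxter_aca(intrinsic_angle_deg, solid_fraction):
    """Apparent contact angle (ACA) on a composite solid/air surface.

    cos(theta*) = f * (cos(theta) + 1) - 1, where f is the fraction of the
    drop base resting on solid; f = 1 recovers the flat-surface angle and
    f -> 0 approaches 180 degrees (a drop resting almost entirely on air).
    """
    c = solid_fraction * (cos(radians(intrinsic_angle_deg)) + 1.0) - 1.0
    return degrees(acos(c))

print(round(cassie_baxter_aca(110.0, 1.0)))   # 110: flat-surface limit
print(round(cassie_baxter_aca(110.0, 0.05)))  # small solid fraction -> superhydrophobic
```

This shows why sparse micro-post arrays topped with nanostructure push the ACA toward the ~160° regime reported above: the nano-flowers drastically reduce the effective solid fraction under the drop.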

  10. Static and dynamic characterization of robust superhydrophobic surfaces built from nano-flowers on silicon micro-post arrays

    KAUST Repository

    Chen, Longquan; Xiao, Zhiyong; Chan, Philip C H; Lee, Yi-Kuen

    2010-01-01

    Superhydrophobic nano-flower surfaces were fabricated using MEMS technology and microwave plasma-enhanced chemical vapor deposition (MPCVD) of carbon nanotubes on silicon micro-post array surfaces. The nano-flower structures can be readily formed within 1-2 min on the micro-post arrays with the spacing ranging from 25 to 30 μm. The petals of the nano-flowers consisted of clusters of multi-wall carbon nanotubes. Patterned nano-flower structures were characterized using various microscopy techniques. After MPCVD, the apparent contact angle (160 ± 0.2°), abbreviated as ACA (defined as the measured angle between the apparent solid surface and the tangent to the liquid-fluid interface), of the nano-flower surfaces increased by 139% compared with that of the silicon micro-post arrays. The measured ACA of the nano-flower surface is consistent with the predicted ACA from a modified Cassie-Baxter equation. A high-speed CCD camera was used to study droplet impact dynamics on various micro/nanostructured surfaces. Both static testing (ACA and sliding angle) and droplet impact dynamics demonstrated that, among seven different micro/nanostructured surfaces, the nano-flower surfaces are the most robust superhydrophobic surfaces. © 2010 IOP Publishing Ltd.

  11. Search for Bs0 --> micro+ micro- and B0 --> micro+ micro- decays with 2 fb-1 of pp collisions.

    Science.gov (United States)

    Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Behari, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Cuenca Almenar, C; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; De Lorenzo, G; Dell'orso, M; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Forrester, S; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; 
Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Giagu, S; Giakoumopolou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; Iyutin, B; James, E; Jayatilaka, B; Jeans, D; Jeon, E J; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kraus, J; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kulkarni, N P; Kusakabe, Y; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, 
J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moed, S; Moggi, N; Moon, C S; Moore, R; Morello, M; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyria, A; Shalhout, S Z; Shapiro, M D; Shears, T; Shepard, P F; 
Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tu, Y; Turini, N; Ukegawa, F; Uozumi, S; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S

    2008-03-14

    We have performed a search for B(s)(0) --> micro(+) micro(-) and B(0) --> micro(+) micro(-) decays in pp collisions at square root s = 1.96 TeV using 2 fb(-1) of integrated luminosity collected by the CDF II detector at the Fermilab Tevatron Collider. The observed numbers of B(s)(0) and B(0) candidates are consistent with background expectations. The resulting upper limits on the branching fractions are B(B(s)(0) --> micro(+) micro(-)) < 5.8 x 10(-8) and B(B(0) --> micro(+) micro(-)) < 1.8 x 10(-8) at 95% C.L.

  12. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.

    2011-01-01

    The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive-only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5 µm) or long-wave infrared (LWIR) radiation (8-12 µm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24 hour water and 12 hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.

  13. Projection Mapping User Interface for Disabled People

    Directory of Open Access Journals (Sweden)

    Julius Gelšvartas

    2018-01-01

    Full Text Available Difficulty in communicating is one of the key challenges for people suffering from severe motor and speech disabilities. Often such a person can communicate and interact with the environment only using assistive technologies. This paper presents a multifunctional user interface designed to improve communication efficiency and personal independence. The main component of this interface is a projection mapping technique used to highlight objects in the environment. Projection mapping makes it possible to create a natural augmented-reality information presentation method. The user interface combines a depth sensor and a projector to create a camera-projector system. We provide a detailed description of the camera-projector system calibration procedure. The described system performs tabletop object detection and automatic projection mapping. Multiple user input modalities have been integrated into the multifunctional user interface. Such a system can be adapted to the needs of people with various disabilities.
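
    For the planar tabletop case, the camera-to-projector mapping reduces to a 3x3 homography estimated from point correspondences. Below is a minimal Direct Linear Transform (DLT) sketch, not the paper's calibration procedure: the correspondences are synthetic, and a real system would obtain them from the depth sensor and projected calibration targets.

```python
import numpy as np

def fit_homography(cam_pts, proj_pts):
    """DLT: find 3x3 H with proj ~ H @ cam in homogeneous coordinates.

    cam_pts, proj_pts: (n, 2) arrays of matching points, n >= 4.
    Each correspondence contributes two linear constraints on the 9
    entries of H; the solution is the null vector of the stacked system.
    """
    rows = []
    for (x, y), (u, v) in zip(cam_pts, proj_pts):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_point(H, pt):
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Synthetic calibration: projector view is a scaled, shifted camera view.
cam = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
proj = cam * 200.0 + 50.0             # hypothetical ground-truth mapping
H = fit_homography(cam, proj)
u, v = map_point(H, (0.5, 0.5))       # ~ (150.0, 150.0)
```

Once H is known, any object detected in the camera image can be highlighted by drawing at its mapped projector coordinates, which is the essence of the projection mapping step described above.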

  14. Traveling wave deflector design for femtosecond streak camera

    International Nuclear Information System (INIS)

    Pei, Chengquan; Wu, Shengli; Luo, Duan; Wen, Wenlong; Xu, Junkai; Tian, Jinshou; Zhang, Minrui; Chen, Pin; Chen, Jianzhong; Liu, Rong

    2017-01-01

    In this paper, a traveling-wave deflector (TWD) with a slow-wave property induced by a microstrip transmission line is proposed for femtosecond streak cameras. The passband and dispersion properties were simulated. In addition, the dynamic temporal resolution of the femtosecond camera was simulated with CST software. The results showed that with the proposed TWD a femtosecond streak camera can achieve a dynamic temporal resolution of less than 600 fs. Experiments were done to test the femtosecond streak camera, and an 800 fs dynamic temporal resolution was obtained. Guidance is provided for optimizing a femtosecond streak camera to obtain higher temporal resolution.

  15. Traveling wave deflector design for femtosecond streak camera

    Energy Technology Data Exchange (ETDEWEB)

    Pei, Chengquan; Wu, Shengli [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Luo, Duan [Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Wen, Wenlong [Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); Xu, Junkai [Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Tian, Jinshou, E-mail: tianjs@opt.ac.cn [Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, Shanxi 030006 (China); Zhang, Minrui; Chen, Pin [Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Chen, Jianzhong [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Liu, Rong [Xi'an Technological University, Xi'an 710021 (China)

    2017-05-21

    In this paper, a traveling-wave deflector (TWD) with a slow-wave property induced by a microstrip transmission line is proposed for femtosecond streak cameras. The passband and dispersion properties were simulated. In addition, the dynamic temporal resolution of the femtosecond camera was simulated with CST software. The results showed that with the proposed TWD a femtosecond streak camera can achieve a dynamic temporal resolution of less than 600 fs. Experiments were done to test the femtosecond streak camera, and an 800 fs dynamic temporal resolution was obtained. Guidance is provided for optimizing a femtosecond streak camera to obtain higher temporal resolution.

  16. Multi-Angle Snowflake Camera Value-Added Product

    Energy Technology Data Exchange (ETDEWEB)

    Shkurko, Konstantin [Univ. of Utah, Salt Lake City, UT (United States); Garrett, T. [Univ. of Utah, Salt Lake City, UT (United States); Gaustad, K [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-12-01

    The Multi-Angle Snowflake Camera (MASC) addresses a need for high-resolution multi-angle imaging of hydrometeors in freefall with simultaneous measurement of fallspeed. As illustrated in Figure 1, the MASC consists of three cameras, separated by 36°, each pointing at an identical focal point approximately 10 cm away. Located immediately above each camera, a light aims directly at the center of the depth of field for its corresponding camera. The focal point at which the cameras are aimed lies within a ring through which hydrometeors fall. The ring houses a system of near-infrared emitter-detector pairs, arranged in two arrays separated vertically by 32 mm. When hydrometeors pass through the lower array, they simultaneously trigger all cameras and lights. Fallspeed is calculated from the time it takes to traverse the distance between the upper and lower triggering arrays. The trigger electronics filter out ambient light fluctuations associated with varying sunlight and shadows. The microprocessor onboard the MASC controls the camera system and communicates with the personal computer (PC). The image data is sent via a FireWire 800 line, and fallspeed (and camera control) is sent via a Universal Serial Bus (USB) line that relies on RS232-over-USB serial conversion. See Table 1 for specific details on the MASC located at the Oliktok Point Mobile Facility on the North Slope of Alaska. The value-added product (VAP) detailed in this documentation analyzes the raw data (Section 2.0) using Python: image handling relies on the OpenCV image processing library, and the derived aggregated statistics rely on some clever averaging. See Sections 4.1 and 4.2 for more details on what variables are computed.
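    The fallspeed computation described above is simple enough to sketch. This is an illustration only, not the MASC firmware or VAP code; the 32 mm array separation comes from the text, while the function name and timestamps are hypothetical:

```python
# Illustrative sketch: fallspeed from the MASC's two trigger arrays,
# which the text states are separated vertically by 32 mm.
ARRAY_SEPARATION_M = 0.032  # vertical spacing between the trigger arrays

def fallspeed(t_upper: float, t_lower: float) -> float:
    """Fallspeed in m/s from the timestamps (s) at which a hydrometeor
    crosses the upper and then the lower trigger array."""
    dt = t_lower - t_upper
    if dt <= 0:
        raise ValueError("lower-array trigger must follow upper-array trigger")
    return ARRAY_SEPARATION_M / dt

# Example: a 20 ms transit time corresponds to 1.6 m/s
print(fallspeed(0.000, 0.020))
```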

  17. CameraHRV: robust measurement of heart rate variability using a camera

    Science.gov (United States)

    Pai, Amruta; Veeraraghavan, Ashok; Sabharwal, Ashutosh

    2018-02-01

    The inter-beat interval (the time period of the cardiac cycle) changes slightly with every heartbeat; this variation is measured as Heart Rate Variability (HRV). HRV is presumed to occur due to interactions between the parasympathetic and sympathetic nervous systems. Therefore, it is sometimes used as an indicator of the stress level of an individual. HRV also reveals some clinical information about cardiac health. Currently, HRV is accurately measured using contact devices such as a pulse oximeter. However, recent research in the field of non-contact imaging photoplethysmography (iPPG) has made vital sign measurements possible using just a video recording of any exposed skin (such as a person's face). The current signal processing methods for extracting HRV using peak detection perform well for contact-based systems but perform poorly on iPPG signals. The main reason for this poor performance is that current methods are sensitive to the large noise sources often present in iPPG data. Further, current methods are not robust to the motion artifacts that are common in iPPG systems. We developed a new algorithm, CameraHRV, for robustly extracting HRV even at low SNR, as is common in iPPG recordings. CameraHRV combines spatial combining and frequency demodulation to obtain HRV from the instantaneous frequency of the iPPG signal. CameraHRV outperforms other current methods of HRV estimation. Ground truth data was obtained from an FDA-approved pulse oximeter for validation. On iPPG data, CameraHRV showed an error of 6 milliseconds for low-motion and varying-skin-tone scenarios, an improvement of 14%. In high-motion scenarios such as reading, watching, and talking, the error was 10 milliseconds.
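    The core idea behind frequency demodulation of a PPG signal can be illustrated with a minimal sketch. This is not the authors' CameraHRV implementation: it uses a synthetic, noise-free carrier at a fixed 1.2 Hz (≈72 bpm) heart rate and recovers its instantaneous frequency from the phase of the analytic signal (computed here with a plain FFT, equivalent to a Hilbert transform):

```python
import numpy as np

# Synthetic PPG-like carrier (assumed parameters, chosen for clarity)
fs = 250.0                       # sampling rate in Hz
t = np.arange(0, 30, 1 / fs)     # 30 s of signal
f0 = 1.2                         # heart rate in Hz (~72 bpm)
x = np.cos(2 * np.pi * f0 * t)

# Analytic signal via FFT: zero the negative frequencies, double the positive
X = np.fft.fft(x)
h = np.zeros(len(x))
h[0] = 1.0
h[1:len(x) // 2] = 2.0
h[len(x) // 2] = 1.0             # len(x) is even here
analytic = np.fft.ifft(X * h)

# Instantaneous frequency = derivative of the unwrapped phase
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # Hz, per sample
print(np.median(inst_freq))                     # close to 1.2 Hz
```

For a real iPPG trace, the instantaneous-frequency series (rather than its median) would carry the beat-to-beat variability from which HRV statistics are derived.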

  18. Jellyfish Identification Software for Underwater Laser Cameras (JTRACK)

    Directory of Open Access Journals (Sweden)

    Patrizio Mariani

    2018-02-01

    Full Text Available Jellyfish can form erratic blooms in response to seasonal and irregular changes in environmental conditions, often with large, transient effects on local ecosystem structure as well as on several sectors of the marine and maritime economy. Early warning systems able to detect conditions for jellyfish proliferation can enable management responses that mitigate such effects, benefiting local ecosystems and economies. We propose here the creation of a research team in response to the EU call for proposals under the European Maritime and Fisheries Fund, “Blue Labs: innovative solutions for maritime challenges”. The project will establish a BLUELAB team with a strong cross-sectorial component that will benefit from the expertise of researchers in IT and marine biology, computer vision, and embedded systems, working in collaboration with industry and policy makers to develop an early warning system using a new underwater imaging system based on time-of-flight laser cameras. The camera will be combined with a machine learning algorithm allowing autonomous early detection of jellyfish life stages (e.g. polyp, ephyra, and planula). The team will develop the system and the companion software and will demonstrate its application in real-case conditions.

  19. STREAK CAMERA MEASUREMENTS OF THE APS PC GUN DRIVE LASER

    Energy Technology Data Exchange (ETDEWEB)

    Dooling, J. C.; Lumpkin, A. H.

    2017-06-25

    We report recent pulse-duration measurements of the APS PC Gun drive laser at both the second-harmonic and fourth-harmonic wavelengths. The drive laser is a Nd:glass-based chirped-pulse amplifier (CPA) operating at an IR wavelength of 1053 nm, twice frequency-doubled to obtain UV output for the gun. A Hamamatsu C5680 streak camera and an M5675 synchroscan unit are used for these measurements; the synchroscan unit is tuned to 119 MHz, the 24th subharmonic of the linac S-band operating frequency. Calibration is accomplished both electronically and optically. Electronic calibration utilizes a programmable delay line in the 119 MHz rf path. The optical delay uses an etalon with known spacing between reflecting surfaces, coated for the visible SH wavelength. The IR pulse duration is monitored with an autocorrelator. Fitting the streak camera image projected profiles with Gaussians, UV rms pulse durations are found to vary from 2.1 ps to 3.5 ps as the IR varies from 2.2 ps to 5.2 ps.
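    The rms duration extracted from a projected streak profile can be sketched as follows. This is an illustration, not the measurement code: it uses the intensity-weighted standard deviation of a synthetic profile (for a Gaussian profile, as fitted in the abstract, this moment-based rms equals the fitted sigma):

```python
import numpy as np

def rms_duration(t: np.ndarray, profile: np.ndarray) -> float:
    """Intensity-weighted rms width of a projected streak profile.
    t and profile are 1-D arrays over the time axis (t in ps here)."""
    w = profile / profile.sum()
    mean = (w * t).sum()
    return float(np.sqrt((w * (t - mean) ** 2).sum()))

# Synthetic Gaussian profile with sigma = 2.1 ps, matching the shortest
# UV rms duration quoted above
t = np.linspace(-10, 10, 2001)            # ps
profile = np.exp(-t**2 / (2 * 2.1**2))
print(rms_duration(t, profile))           # close to 2.1 ps
```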

  20. Super-resolution processing for pulsed neutron imaging system using a high-speed camera

    International Nuclear Information System (INIS)

    Ishizuka, Ken; Kai, Tetsuya; Shinohara, Takenao; Segawa, Mariko; Mochiki, Koichi

    2015-01-01

    Super-resolution and center-of-gravity processing improve the resolution of neutron-transmitted images. These processing methods calculate the center-of-gravity pixel or sub-pixel of each neutron event converted into light by a scintillator. A conventional neutron-transmitted image is acquired with a high-speed camera by integrating many frames, since a single frame does not provide a usable transmitted image. This succeeds in acquiring the transmitted image and calculating a spectrum by integrating frames of the same energy. However, because a high frame rate is required for neutron resonance absorption imaging, the number of pixels of the transmitted image decreases, and the resolution decreases to the limit of the camera performance. Therefore, we attempt to improve the resolution by integrating the frames after applying super-resolution or center-of-gravity processing. The processed results indicate that center-of-gravity processing can be effective in pulsed-neutron imaging with a high-speed camera. In addition, the results show that super-resolution processing is indirectly effective. A project to develop a real-time image data processing system has begun, and this system will be used at J-PARC in JAEA. (author)
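    Center-of-gravity processing as described above amounts to computing the intensity-weighted centroid of each light spot. A minimal sketch (illustrative only; the spot data and function are hypothetical, not the J-PARC system's code):

```python
import numpy as np

def centroid(spot: np.ndarray) -> tuple[float, float]:
    """Sub-pixel (row, col) center of gravity of a small light spot:
    the intensity-weighted mean of the pixel coordinates."""
    ys, xs = np.indices(spot.shape)
    total = spot.sum()
    return (float((ys * spot).sum() / total),
            float((xs * spot).sum() / total))

# A 3x3 spot whose light leans toward the right-hand column
spot = np.array([[0., 1., 2.],
                 [0., 2., 4.],
                 [0., 1., 2.]])
print(centroid(spot))  # row = 1.0, col ≈ 1.67
```

Accumulating these sub-pixel positions over many frames, instead of whole-pixel hit counts, is what recovers resolution beyond the camera's native pixel grid.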

  1. MicroED data collection and processing

    Energy Technology Data Exchange (ETDEWEB)

    Hattne, Johan; Reyes, Francis E.; Nannenga, Brent L.; Shi, Dan; Cruz, M. Jason de la [Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147 (United States); Leslie, Andrew G. W. [Medical Research Council Laboratory of Molecular Biology, Cambridge (United Kingdom); Gonen, Tamir, E-mail: gonent@janelia.hhmi.org [Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147 (United States)

    2015-07-01

    The collection and processing of MicroED data are presented. MicroED, a method at the intersection of X-ray crystallography and electron cryo-microscopy, has rapidly progressed by exploiting advances in both fields and has already been successfully employed to determine the atomic structures of several proteins from sub-micron-sized, three-dimensional crystals. A major limiting factor in X-ray crystallography is the requirement for large and well ordered crystals. By permitting electron diffraction patterns to be collected from much smaller crystals, or even single well ordered domains of large crystals composed of several small mosaic blocks, MicroED has the potential to overcome the limiting size requirement and enable structural studies on difficult-to-crystallize samples. This communication details the steps for sample preparation, data collection and reduction necessary to obtain refined, high-resolution, three-dimensional models by MicroED, and presents some of its unique challenges.

  2. Topography changes monitoring of small islands using camera drone

    Science.gov (United States)

    Bang, E.

    2017-12-01

    Drone aerial photogrammetry was conducted to monitor topography changes of small islands in the east sea of Korea. Severe weather and sea waves are eroding the islands and sometimes cause landslides and rock falls. Due to rugged cliffs in all directions and poor accessibility, ground-based survey methods are inefficient for monitoring topography changes over the whole area. Camera drones can provide digital images and video of every corner of the islands, and drone aerial photogrammetry is a powerful way to obtain a precise digital surface model (DSM) for a limited area. We have acquired a set of digital images to construct a textured 3D model of the project area every year since 2014. Flight height is less than 100 m above the top of the islands to obtain a sufficient ground sampling distance (GSD). Most images were captured vertically in automatic flights, but we also flew drones around the islands with the camera angled at about 30°-45° to improve the 3D model. Every digital image is geo-referenced, but we also set several ground control points (GCPs) on the islands, whose coordinates were measured with RTK surveying methods to increase the absolute accuracy of the project. We constructed a textured 3D model using a photogrammetry tool, which generates 3D spatial information from digital images. From the polygonal model, we could derive a DSM with contour lines. Thematic maps such as a hill-shade relief map, an aspect map, and a slope map were also produced. These maps improve understanding of the topographic condition of the project area. The purpose of this project is to monitor topography changes of these small islands. An elevation difference map between the DSMs of each year was constructed. Two regions show large negative difference values. By comparing the constructed textured models and the captured digital images around these regions, one region was confirmed to have experienced real topography change, due to a large rock fall near the center of the east island. The size of fallen rock can be
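    The elevation-difference step described above can be sketched in a few lines. This is an illustration with tiny hypothetical grids, not the project's data; it assumes the two DSMs are co-registered on the same grid, and the -1 m loss threshold is an assumed value:

```python
import numpy as np

# Hypothetical 2x2 co-registered DSM grids (elevations in metres)
dsm_2014 = np.array([[10., 10.],
                     [12., 11.]])
dsm_2017 = np.array([[10., 10.],
                     [ 9., 11.]])

diff = dsm_2017 - dsm_2014     # negative values = material lost
loss_mask = diff < -1.0        # flag cells with more than 1 m of loss
print(diff[loss_mask])         # the rock-fall-like cell
```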

  3. An image-tube camera for cometary spectrography

    Science.gov (United States)

    Mamadov, O.

    The paper discusses the mounting of an image tube camera. The cathode is of antimony, sodium, potassium, and cesium. The parts used for mounting are of acrylic plastic and a fabric-based laminate. A mounting design that does not include cooling is presented. The aperture ratio of the camera is 1:27. Also discussed is the way that the camera is joined to the spectrograph.

  4. Study of Electromagnetic Interactions in the MicroBooNE Liquid Argon Time Projection Chamber

    Energy Technology Data Exchange (ETDEWEB)

    Caratelli, David [Columbia U.

    2018-01-01

    This thesis presents results on the study of electromagnetic (EM) activity in the MicroBooNE Liquid Argon Time Projection Chamber (LArTPC) neutrino detector. The LArTPC detector technology provides bubble-chamber-like information on neutrino interaction final states, necessary to perform precision measurements of neutrino oscillation parameters. Accelerator-based oscillation experiments heavily rely on the appearance channel νμ → νe to make such measurements. Identifying and reconstructing the energy of the outgoing electrons from such interactions is therefore crucial for their success. This work focuses on two sources of EM activity: Michel electrons in the 10-50 MeV energy range, and photons from π0 decay in the 30-300 MeV range. Studies of biases in the energy reconstruction measurement and of the energy resolution are performed. The impact of shower topology at different energies is discussed, and the importance of thresholding and other reconstruction effects in producing an asymmetric and biased energy measurement is highlighted. This work further presents a study of the calorimetric separation of electrons and photons, with a focus on the shower-energy dependence of the separation power.

  5. Micro-propulsion and micro-combustion; Micropropulsion microcombustion

    Energy Technology Data Exchange (ETDEWEB)

    Ribaud, Y.; Dessornes, O.

    2002-10-01

    The AAAF (the French aeronautics and space association) organized a presentation in Paris on micro-propulsion. The first part was devoted to thermal micro-machines for micro drones, the second part to micro-combustion applied to micro-turbines. (A.L.B.)

  6. Characterization of SWIR cameras by MRC measurements

    Science.gov (United States)

    Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.

    2014-05-01

    Cameras for the SWIR wavelength range are becoming more and more important because of the better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera or to qualify a camera for a given application, characterization of the camera by means of the Minimum Resolvable Contrast (MRC) concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera device, the achievable observation range can be calculated for every combination of target size, illumination level, or weather conditions. MRC measurements in the SWIR wavelength band can be performed largely along the guidelines of the MRC measurements of a visual camera. Typically, measurements are performed with a set of resolution targets (e.g. USAF 1951 target) manufactured with different contrast values from 50% down to less than 1%. For a given illumination level, the achievable spatial resolution is then measured for each target. The resulting curve shows the minimum contrast necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, first, the irradiation parameters have to be given in radiometric instead of photometric units, which are limited in their use to the visible range. In order to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source is necessary with appropriate emission in the SWIR range (e.g. an incandescent lamp), and the irradiance has to be measured in W/m2 instead of lux = lumen/m2. Third, the contrast values of the targets have to be calibrated anew for the SWIR range because they typically differ from the values determined for the visual range. Measured MRC values of three cameras are compared to the specified performance data of the devices and the results of a multi-band in-house designed Vis-SWIR camera.

  7. Projection model for flame chemiluminescence tomography based on lens imaging

    Science.gov (United States)

    Wan, Minggang; Zhuang, Jihui

    2018-04-01

    For flame chemiluminescence tomography (FCT) based on lens imaging, the projection model is essential because it formulates the mathematical relation between the flame projections captured by cameras and the chemiluminescence field, and, through this relation, the field is reconstructed. This work proposed the blurry-spot (BS) model, which takes more universal assumptions and has higher accuracy than the widely applied line-of-sight model. By combining the geometrical camera model and the thin-lens equation, the BS model takes into account perspective effect of the camera lens; by combining ray-tracing technique and Monte Carlo simulation, it also considers inhomogeneous distribution of captured radiance on the image plane. Performance of these two models in FCT was numerically compared, and results showed that using the BS model could lead to better reconstruction quality in wider application ranges.
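    The full BS model involves ray tracing and Monte Carlo simulation, but the thin-lens geometry it builds on (and which distinguishes it from a pure line-of-sight model) can be sketched. This is an illustrative derivation, not the authors' code; function names and values are hypothetical:

```python
# Thin-lens relation: 1/f = 1/d_o + 1/d_i links the object distance d_o
# and image distance d_i for a lens of focal length f. A point at height
# y then images at height m * y with magnification m = -d_i / d_o.
def thin_lens_image_distance(f: float, d_o: float) -> float:
    """Image distance for focal length f and object distance d_o (same units)."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

def project_point(y: float, f: float, d_o: float) -> float:
    """Image height of a point at height y located at object distance d_o."""
    d_i = thin_lens_image_distance(f, d_o)
    return -d_i / d_o * y

# Example: f = 50 mm lens, object plane 200 mm away
print(thin_lens_image_distance(50.0, 200.0))  # image plane ~66.7 mm behind lens
print(project_point(3.0, 50.0, 200.0))        # inverted image, 1/3 magnification
```

Because the magnification depends on d_o, points of the flame at different depths project differently, which is the perspective effect the BS model accounts for and the line-of-sight model ignores.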

  8. A quality control atlas for scintillation camera systems

    International Nuclear Information System (INIS)

    Busemann Sokole, E.; Graham, L.S.; Todd-Pokropek, A.; Wegst, A.; Robilotta, C.C.

    2002-01-01

    Full text: The accurate interpretation of quality control and clinical nuclear medicine image data is coupled to an understanding of image patterns and quantitative results. Understanding is gained by learning from different examples and from knowledge of the underlying principles of image production. An atlas of examples has been created to assist with interpreting quality control tests and recognizing artifacts in clinical examples. The project was initiated and supported by the International Atomic Energy Agency (IAEA). The Atlas was developed and written by Busemann Sokole from image examples submitted by nuclear medicine users from around the world. The descriptive text was written in a consistent format to accompany each image or image set. Each example in the atlas finally consisted of the images; a brief description of the data acquisition, radionuclide/radiopharmaceutical, and specific circumstances under which the image was produced; results describing the images and subsequent conclusions; comments, where appropriate, giving guidelines for follow-up strategies and troubleshooting; and occasional literature references. Hardcopy images required digitizing into JPEG format for inclusion in a digital document. Where possible, an example was contained on one page. The atlas was reviewed by an international group of experts. A total of about 250 examples were compiled into 6 sections: planar, SPECT, whole body, camera/computer interface, environment/radioactivity, and display/hardcopy. Subtle loss of image quality may be difficult to detect. SPECT examples, therefore, include simulations demonstrating the effects of deterioration in camera performance (e.g. center-of-rotation offset, non-uniformity) or suboptimal clinical performance. The atlas includes normal results, results from poor adjustment of the camera system, poor results obtained at acceptance testing, artifacts due to system malfunction, and artifacts due to environmental situations.
Some image patterns are

  9. Integrating Gigabit ethernet cameras into EPICS at Diamond light source

    International Nuclear Information System (INIS)

    Cobb, T.

    2012-01-01

    At Diamond Light Source a range of cameras are used to provide images for diagnostic purposes in both the accelerator and photon beamlines. The accelerator and existing beamlines use Point Grey Flea and Flea2 FireWire cameras. We have selected Gigabit Ethernet cameras supporting GigE Vision for our new photon beamlines. GigE Vision is an interface standard for high-speed Ethernet cameras which encourages interoperability between manufacturers. This paper describes the challenges encountered while integrating GigE Vision cameras from a range of vendors into EPICS. GigE Vision cameras appear to be more reliable than the FireWire cameras, and the simple cabling makes it much easier to move the cameras to different positions. Upcoming power-over-Ethernet versions of the cameras will reduce the number of cables still further

  10. Digital airborne camera introduction and technology

    CERN Document Server

    Sandau, Rainer

    2014-01-01

    The last decade has seen great innovations on the airborne camera. This book is the first ever written on the topic and describes all components of a digital airborne camera ranging from the object to be imaged to the mass memory device.

  11. Chinese Manned Space Utility Project

    Science.gov (United States)

    Gu, Y.

    Since 1992 China has been carrying out a conspicuous manned space mission, and a utility project has been defined and created during the same period. The Utility Project of the Chinese Manned Space Mission involves wide science areas such as earth observation, life science, micro-gravity fluid physics and material science, astronomy, and space environment. In the earth observation area it is focused on the changes of global environments and relevant exploration technologies. A Medium Resolution Imaging Spectrometer and a Multi-mode Microwave Remote Sensor have been developed. Detectors for cirrostratus distribution, solar constant, earth emission budget, and earth-atmosphere ultra-violet spectrum and flux have been manufactured and tested. All of the above equipment was engaged in orbital experiments on board the Shenzhou series spacecraft. Space life science, biotechnologies, and micro-gravity science were also central to the project. A series of experiments has been made both in ground laboratories and in spacecraft capsules. The environmental effects on different biological bodies in space, protein crystallization, electrical cell fusion, animal cell culture, research on separation using free-flow electrophoresis, a liquid-drop Marangoni migration experiment under micro-gravity, as well as a set of crystal growth and metal processing experiments, were successfully operated in space. Gamma-ray bursts and high-energy emission from solar flares have been explored. A set of particle detectors and a mass spectrometer measured

  12. Camera Traps Can Be Heard and Seen by Animals

    Science.gov (United States)

    Meek, Paul D.; Ballard, Guy-Anthony; Fleming, Peter J. S.; Schaefer, Michael; Williams, Warwick; Falzon, Greg

    2014-01-01

    Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps; in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing ranges (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals’ hearing and produce illumination that can be seen by many species. PMID:25354356

  13. Camera traps can be heard and seen by animals.

    Directory of Open Access Journals (Sweden)

    Paul D Meek

    Full Text Available Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps; in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing ranges (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals' hearing and produce illumination that can be seen by many species.

  14. Analysis of Camera Parameters Value in Various Object Distances Calibration

    International Nuclear Information System (INIS)

    Yusoff, Ahmad Razali; Ariff, Mohd Farid Mohd; Idris, Khairulnizam M; Majid, Zulkepli; Setan, Halim; Chong, Albert K

    2014-01-01

    In photogrammetric applications, good camera parameters are needed for mapping purposes, for example with an Unmanned Aerial Vehicle (UAV) equipped with a non-metric camera. Simple camera calibration is a common laboratory procedure for obtaining the camera parameter values. In aerial mapping, interior camera parameter values from close-range camera calibration are used to correct image error. However, the causes and effects of the calibration steps used to get accurate mapping need to be analyzed. Therefore, this research contributes an analysis of camera parameters obtained with a portable calibration frame of 1.5 × 1 meter size. Object distances of two, three, four, five, and six meters are the research focus. Results are analyzed to find the changes in image and camera parameter values. Hence, the calibration parameters of a camera are considered to differ depending on the type of calibration parameters and the object distance

  15. A beam test of prototype time projection chamber using micro ...

    Indian Academy of Sciences (India)

    We conducted a series of beam tests of prototype TPCs for the international linear collider (ILC) experiment, equipped with an MWPC, a MicroMEGAS, or GEMs as a readout device. The prototype operated successfully in a test beam at KEK under an axial magnetic field of up to 1 T. The analysis of data is now in progress ...

  16. NV-CMOS HD camera for day/night imaging

    Science.gov (United States)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE camera, which operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  17. Semi-automated camera trap image processing for the detection of ungulate fence crossing events.

    Science.gov (United States)

    Janzen, Michael; Visser, Kaitlyn; Visscher, Darcy; MacLeod, Ian; Vujnovic, Dragomir; Vujnovic, Ksenija

    2017-09-27

    Remote cameras are an increasingly important tool for ecological research. While remote camera traps collect field data with minimal human attention, the images they collect require post-processing and characterization before they can be ecologically and statistically analyzed, requiring the input of substantial time and money from researchers. The need for post-processing is due, in part, to a high incidence of non-target images. We developed a stand-alone semi-automated computer program to aid in image processing, categorization, and data reduction by employing background subtraction and histogram rules. Unlike previous work that uses video as input, our program uses still camera trap images. The program was developed for an ungulate fence crossing project and tested against an image dataset which had been previously processed by a human operator. Our program placed images into categories representing the confidence that a particular sequence of images contained a fence crossing event. This resulted in a reduction of 54.8% of images that required further human operator characterization while retaining 72.6% of the known fence crossing events. This program can provide researchers using remote camera data the ability to reduce the time and cost required for image post-processing and characterization. Further, we discuss how this procedure might be generalized to situations not specifically related to animal use of linear features.
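The background-subtraction rule described in this abstract can be sketched in a few lines. Everything below (the function name `classify_frame`, the thresholds, and the synthetic images) is an illustrative assumption, not the program from the paper:

```python
import numpy as np

def classify_frame(frame, background, diff_thresh=25, area_thresh=0.01):
    """Flag a camera-trap frame as a candidate event when the fraction
    of pixels deviating from the background model by more than
    diff_thresh grey levels exceeds area_thresh."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    changed = float((diff > diff_thresh).mean())
    return "candidate" if changed > area_thresh else "empty"

# Synthetic 100x100 grey images: a flat background, and a frame with
# a bright 20x20 patch standing in for an animal at the fence.
bg = np.full((100, 100), 120, dtype=np.uint8)
frame = bg.copy()
frame[40:60, 40:60] = 200

print(classify_frame(frame, bg))  # → candidate
print(classify_frame(bg, bg))     # → empty
```

A real implementation along the paper's lines would additionally apply histogram rules and reason over image sequences before discarding anything.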

  18. Non-intrusive measurements of frictional forces between micro-spheres and flat surfaces

    Science.gov (United States)

    Lin, Wei-Hsun; Daraio, Chiara; Daraio's Group Team

    2014-03-01

    We report a novel optical pump-probe experimental setup to study micro-friction phenomena between micro-particles and a flat surface. We present a case study of stainless steel microspheres, of diameter near 250 μm, in contact with different surfaces of variable roughness. In these experiments, the contact area between the particles and the substrates is only a few nanometers wide. To excite the particles, we deliver an impulse using a pulsed, high-power laser. The reaction force resulting from the surface ablation induced by the laser imparts a controlled initial velocity to the target particle. This initial velocity can be varied between 10⁻⁵ and 1 m/s. We investigate the vibrating and rolling motions of the micro-particles by detecting their velocity and displacement with a laser vibrometer and a high-speed microscope camera. We calculate the effective Hamaker constant from the vibrating motion of a particle, and study its relation to the substrate's surface roughness. We analyze the relation between rolling friction and the minimum momentum required to break surface bonding forces. This non-contact and non-intrusive technique could be employed to study a variety of contact and tribology problems at the microscale.

  19. The bit slice micro-processor 'GESPRO' as a project in the UA2 experiment

    International Nuclear Information System (INIS)

    Becam, C.; Bernaudin, P.; Delanghe, J.; Mencik, M.; Merkel, B.; Plothow, H.; Fest, H.M.; Lecoq, J.; Martin, H.; Meyer, J.M.

    1981-01-01

    The bit slice micro-processor GESPRO, as it is proposed for use in the UA2 data acquisition chain and trigger system, is a CAMAC module plugged into a standard Elliott System crate via which it communicates as a slave with its host computer (ND, DEC). It has full control of CAMAC as a master unit. GESPRO is a 24 bit machine (150 ns effective cycle time) with multi-mode memory addressing capacity of 64 K words. The micro-processor structure uses 5 busses, including pipe-line registers to mask access time, and 16 interrupt levels. The micro-program memory capacity is 2 K (RAM) words of 48 bits each. A special hardwired module allows floating point (as well as integer) multiplication of 24 x 24 bits, with a 48 bit result, in about 200 ns. This micro-processor could be used in the UA2 data acquisition chain and trigger system for the following tasks: a) online data reduction, i.e. to read DURANDAL (fast ADCs = the hardware trigger in the experiment) and process the information (effective mass calculation, etc.), resulting in accepting or rejecting the event; b) read-out and analysis of the accepted data (collecting statistical information); c) preprocessing of the data (calculation of pointers, address decoding, etc.). The UA2 version of GESPRO is under construction; programs and micro-programs are under development. Hardware and software will be tested with simulated data. First results are expected in about one year from now. (orig.)

  20. MEASUREMENT OF LARGE-SCALE SOLAR POWER PLANT BY USING IMAGES ACQUIRED BY NON-METRIC DIGITAL CAMERA ON BOARD UAV

    Directory of Open Access Journals (Sweden)

    R. Matsuoka

    2012-07-01

    Full Text Available This paper reports an experiment conducted in order to investigate the feasibility of the deformation measurement of a large-scale solar power plant on reclaimed land by using images acquired by a non-metric digital camera on board a micro unmanned aerial vehicle (UAV). It is required that the root mean square error (RMSE) in height measurement be less than 26 mm, which is 1/3 of the critical deformation limit of 78 mm off the plane of a solar panel. Images utilized in the experiment were obtained by an Olympus PEN E-P2 digital camera on board a Microdrones md4-1000 quadrocopter. The planned forward and side overlap ratios of vertical image acquisition were both 60 %. The planned flying height of the UAV was 20 m above ground level, and the ground resolution of an image is approximately 5.0 mm by 5.0 mm. Eight control points around the experiment area are utilized for orientation. Measurement results are evaluated by the space coordinates of 220 check points, which are corner points of 55 solar panels selected from the 1768 solar panels in the experiment area. Two teams engaged in the experiment. One carried out orientation and measurement by using 171 images following the procedure of conventional aerial photogrammetry, and the other executed those by using 126 images in the manner of close range photogrammetry. The former failed to satisfy the required accuracy, while the RMSE in height measurement by the latter was 8.7 mm, which satisfies the required accuracy. From the experiment results, we conclude that the deformation measurement of a large-scale solar power plant on reclaimed land by using images acquired by a non-metric digital camera on board a micro UAV would be feasible if the points utilized in orientation and measurement have a sufficient number of bundles in good geometry and self-calibration is carried out in orientation.
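The accuracy criterion in this record is an RMSE of height errors at check points; the computation itself is standard. The height deviations below are invented for illustration (millimetres at hypothetical panel corners), not the experiment's data:

```python
import numpy as np

def rmse(measured, reference):
    """Root mean square error between measured and reference values."""
    d = np.asarray(measured, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical check-point heights (mm): photogrammetric vs surveyed.
measured = [10.2, -3.1, 4.8, 7.5, -6.0]
reference = [9.0, -1.0, 5.5, 6.0, -7.2]
print(round(rmse(measured, reference), 2))  # → 1.42
```

Against the paper's tolerance, a measurement campaign passes when this value stays below 26 mm over all check points.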

  1. Mississippi State University Cooling, Heating, and Power (Micro-CHP) and Bio-Fuel Center

    Energy Technology Data Exchange (ETDEWEB)

    Mago, Pedro [Mississippi State Univ., Mississippi State, MS (United States); Newell, LeLe [Mississippi State Univ., Mississippi State, MS (United States)

    2014-01-31

    Between 2008 and 2014, the U.S. Department of Energy funded the MSU Micro-CHP and Bio-Fuel Center located at Mississippi State University. The overall objective of this project was to enable micro-CHP (micro combined heat and power) utilization, to facilitate and promote the use of CHP systems, and to educate architects, engineers, agricultural producers, and scientists on the benefits of CHP systems. The work of the Center therefore focused on three areas: CHP system modeling and optimization, outreach, and research. In general, the results obtained from this project demonstrated that CHP systems are attractive because they can provide energy, environmental, and economic benefits. Some of these benefits include the potential to reduce operational cost, carbon dioxide emissions, and primary energy consumption, and to improve power reliability during electric grid disruptions. The knowledge disseminated in numerous journal and conference papers from the outcomes of this project is beneficial to engineers, architects, agricultural producers, scientists, and the public in general who are interested in CHP technology and applications. In addition, more than 48 graduate students and 23 undergraduate students benefited from the training and research performed in the MSU Micro-CHP and Bio-Fuel Center.

  2. Design of Endoscopic Capsule With Multiple Cameras.

    Science.gov (United States)

    Gu, Yingke; Xie, Xiang; Li, Guolin; Sun, Tianjia; Wang, Dan; Yin, Zheng; Zhang, Pengfei; Wang, Zhihua

    2015-08-01

    In order to reduce the miss rate of wireless capsule endoscopy, in this paper, we propose a new system of the endoscopic capsule with multiple cameras. A master-slave architecture, including an efficient bus architecture and a four-level clock management architecture, is applied for the Multiple Cameras Endoscopic Capsule (MCEC). For covering more area of the gastrointestinal tract wall with low power, multiple cameras with a smart image capture strategy, including movement-sensitive control and camera selection, are used in the MCEC. To reduce the data transfer bandwidth and power consumption and so prolong the MCEC's working life, a low-complexity image compressor with a PSNR of 40.7 dB and a compression rate of 86% is implemented. A chipset is designed and implemented for the MCEC, and a six-camera endoscopic capsule prototype is implemented using the chipset. With the smart image capture strategy, the coverage rate of the MCEC prototype can achieve 98% and its power consumption is only about 7.1 mW.
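The compressor in this record is characterized by PSNR, a standard fidelity measure that can be computed directly. The flat 8-bit test images below are illustrative assumptions, not capsule data:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB (8-bit images by default)."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Illustrative flat images differing by 4 grey levels everywhere.
orig = np.full((8, 8), 100, dtype=np.uint8)
recon = np.full((8, 8), 104, dtype=np.uint8)
print(round(psnr(orig, recon), 2))  # → 36.09
```

Higher values mean smaller reconstruction error; the paper's 40.7 dB corresponds to a mean squared error of roughly 5.5 grey levels squared.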

  3. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Romps, David [Univ. of California, Berkeley, CA (United States); Oktem, Rusen [Univ. of California, Berkeley, CA (United States)

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized and stereo-calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pairs, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory to cover the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined together to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.
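Pairing the two images of one stereo setup yields 3D points by triangulation. A minimal linear (DLT) triangulation sketch follows; the toy projection matrices stand in for the handbook's calibrated parameters and are not the instrument's actual geometry:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are (u, v) image
    coordinates of the same point in each camera."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# Two hypothetical pinhole cameras one unit apart along x, both
# looking down +z with identity intrinsics.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 5.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

print(triangulate(P1, P2, x1, x2))  # recovers X_true
```

With noiseless synthetic projections the SVD solution is exact; real stereo pairs require the calibration parameters the handbook provides.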

  4. Evaluation of Red Light Camera Enforcement at Signalized Intersections

    Directory of Open Access Journals (Sweden)

    Abdulrahman AlJanahi

    2007-12-01

    Full Text Available The study attempts to find the effectiveness of adopting red light cameras in reducing red light violations. An experimental approach was adopted to investigate the use of red light cameras at signalized intersections in the Kingdom of Bahrain. The study locations were divided into three groups. The first group was related to the approaches monitored with red light cameras. The second group was related to approaches without red light cameras, but located within an intersection that had one of its approaches monitored with red light cameras. The third group was related to intersection approaches located at intersections without red light cameras (controlled sites). A methodology was developed for data collection. The data were then tested statistically by Z-test using proportion methods to compare the proportion of red light violations occurring at different sites. The study found that the proportion of red light violators at approaches monitored with red light cameras was significantly lower than that at the controlled sites for most of the time. Approaches without red light cameras located within intersections having red light cameras showed, in general, fewer violations than controlled sites, but the results were not significant for all times of the day. The study reveals that red light cameras have a positive effect on reducing red light violations. However, these conclusions need further evaluations to justify their safe and economic use.
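The Z-test on proportions used in this study compares violation rates with a pooled standard error. A sketch of the two-proportion z statistic, with invented counts rather than the study's data:

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """Z statistic for H0: p1 == p2, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)              # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 18 violations in 1200 signal cycles at a
# camera-monitored approach vs 57 in 1100 cycles at a control site.
z = two_proportion_z(18, 1200, 57, 1100)
print(round(z, 2))  # → -4.97
```

A |z| this large is far beyond the 1.96 threshold for a two-sided test at the 5% level, which is the kind of comparison the study reports per site and time of day.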

  5. Gamma ray camera

    International Nuclear Information System (INIS)

    Wang, S.-H.; Robbins, C.D.

    1979-01-01

    An Anger gamma ray camera is improved by the substitution of a gamma ray sensitive, proximity type image intensifier tube for the scintillator screen in the Anger camera. The image intensifier tube has a negatively charged flat scintillator screen, a flat photocathode layer, and a grounded, flat output phosphor display screen, all of which have the same dimension to maintain unit image magnification; all components are contained within a grounded metallic tube, with a metallic, inwardly curved input window between the scintillator screen and a collimator. The display screen can be viewed by an array of photomultipliers or solid state detectors. There are two photocathodes and two phosphor screens to give a two stage intensification, the two stages being optically coupled by a light guide. (author)

  6. 3D for the people: multi-camera motion capture in the field with consumer-grade cameras and open source software

    Directory of Open Access Journals (Sweden)

    Brandon E. Jackson

    2016-09-01

    Full Text Available Ecological, behavioral and biomechanical studies often need to quantify animal movement and behavior in three dimensions. In laboratory studies, a common tool to accomplish these measurements is the use of multiple, calibrated high-speed cameras. Until very recently, the complexity, weight and cost of such cameras have made their deployment in field situations risky; furthermore, such cameras are not affordable to many researchers. Here, we show how inexpensive, consumer-grade cameras can adequately accomplish these measurements both within the laboratory and in the field. Combined with our methods and open source software, the availability of inexpensive, portable and rugged cameras will open up new areas of biological study by providing precise 3D tracking and quantification of animal and human movement to researchers in a wide variety of field and laboratory contexts.

  7. Extended spectrum SWIR camera with user-accessible Dewar

    Science.gov (United States)

    Benapfl, Brendan; Miller, John Lester; Vemuri, Hari; Grein, Christoph; Sivananthan, Siva

    2017-02-01

    Episensors has developed a series of extended short wavelength infrared (eSWIR) cameras based on high-Cd concentration Hg1-xCdxTe absorbers. The cameras have a bandpass extending to 3 microns cutoff wavelength, opening new applications relative to traditional InGaAs-based cameras. Applications and uses are discussed and examples given. A liquid nitrogen pour-filled version was initially developed. This was followed by a compact Stirling-cooled version with detectors operating at 200 K. Each camera has unique sensitivity and performance characteristics. The cameras' size, weight and power specifications are presented along with images captured with band pass filters and eSWIR sources to demonstrate spectral response beyond 1.7 microns. The soft seal Dewars of the cameras are designed for accessibility, and can be opened and modified in a standard laboratory environment. This modular approach allows user flexibility for swapping internal components such as cold filters and cold stops. The core electronics of the Stirling-cooled camera are based on a single commercial field programmable gate array (FPGA) that also performs on-board non-uniformity corrections, bad pixel replacement, and directly drives any standard HDMI display.

  8. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties, i.e. visual composition, while smoothly moving through the environment and avoiding obstacles. A large number of different… For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios…

  9. The GISMO-2 Bolometer Camera

    Science.gov (United States)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.; hide

    2012-01-01

    We present the concept for the GISMO-2 bolometer camera, which we build for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 x 40 TES-based Backshort Under Grid (BUG) bolometer array, the 2 mm channel operates with a 16 x 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  10. Novel, full 3D scintillation dosimetry using a static plenoptic camera

    Science.gov (United States)

    Goulet, Mathieu; Rilling, Madison; Gingras, Luc; Beddar, Sam; Beaulieu, Luc; Archambault, Louis

    2014-01-01

    Purpose: Patient-specific quality assurance (QA) of dynamic radiotherapy delivery would gain from being performed using a 3D dosimeter. However, 3D dosimeters, such as gels, have many disadvantages that limit their use in quality assurance, such as tedious read-out procedures and poor reproducibility. The purpose of this work is to develop and validate a novel type of high-resolution 3D dosimeter based on the real-time light acquisition of a plastic scintillator volume using a plenoptic camera. This dosimeter would allow the QA of dynamic radiation therapy techniques such as intensity-modulated radiation therapy (IMRT) or volumetric-modulated arc therapy (VMAT). Methods: A Raytrix R5 plenoptic camera was used to image a 10 × 10 × 10 cm³ EJ-260 plastic scintillator embedded inside an acrylic phantom at a rate of one acquisition per second. The scintillator volume was irradiated with both an IMRT and a VMAT treatment plan on a Clinac iX linear accelerator. The 3D light distribution emitted by the scintillator volume was reconstructed at a 2 mm resolution in all dimensions by back-projecting the light collected by each pixel of the light-field camera using an iterative reconstruction algorithm. The latter was constrained by a beam's eye view projection of the incident dose acquired using the portal imager integrated with the linac and by physical considerations of the dose behavior as a function of depth in the phantom. Results: The absolute dose difference between the reconstructed 3D dose and the expected dose calculated using the treatment planning software Pinnacle3 was on average below 1.5% of the maximum dose for both integrated IMRT and VMAT deliveries, and below 3% for each individual IMRT incidence. Dose agreement between the reconstructed 3D dose and a radiochromic film acquisition in the same experimental phantom was on average within 2.1% and 1.2% of the maximum recorded dose for the IMRT and VMAT deliveries, respectively. Conclusions: Using plenoptic camera
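The reconstruction described above iteratively back-projects per-pixel light measurements into a volume. The paper's actual algorithm, with its portal-imager and depth-dose constraints, is not given here; a generic multiplicative (MLEM-style) update on a toy system matrix, with entirely synthetic data, sketches the iterative back-projection idea:

```python
import numpy as np

def mlem(A, b, n_iter=200):
    """Multiplicative (MLEM-style) iterative back-projection: solve
    b ≈ A x under a nonnegativity constraint, a common choice when
    reconstructing emitted light from projection measurements."""
    x = np.ones(A.shape[1])          # flat nonnegative start
    col_sum = A.sum(axis=0)          # sensitivity normalisation
    for _ in range(n_iter):
        proj = A @ x                 # forward-project current estimate
        ratio = np.divide(b, proj, out=np.zeros_like(b), where=proj > 0)
        x = x * (A.T @ ratio) / col_sum   # back-project and update
    return x

# Toy problem: 40 "camera pixels" observing 10 "scintillator voxels".
rng = np.random.default_rng(0)
A = rng.random((40, 10))     # hypothetical pixel-sees-voxel weights
x_true = rng.random(10)
b = A @ x_true               # noiseless measurements
x_hat = mlem(A, b)
print(float(np.linalg.norm(A @ x_hat - b)))  # residual shrinks toward 0
```

The multiplicative form keeps the estimate nonnegative at every step, which suits light intensities; the published method adds physical constraints on top of such an update.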

  11. [Analog gamma camera digitalization computer system].

    Science.gov (United States)

    Rojas, G M; Quintana, J C; Jer, J; Astudillo, S; Arenas, L; Araya, H

    2004-01-01

    Digitalization of analogue gamma camera systems, using special acquisition boards in microcomputers and appropriate software for acquisition and processing of nuclear medicine images, is described in detail. Microcomputer integrated systems interconnected by means of a Local Area Network (LAN) and connected to several gamma cameras have been implemented using specialized acquisition boards. The PIP software (Portable Image Processing) was installed on each microcomputer to acquire and preprocess the nuclear medicine images. A specialized image processing software has been designed and developed for these purposes. This software allows processing of each nuclear medicine exam, in a semiautomatic procedure, and recording of the results on radiological films. A stable, flexible and inexpensive system which makes it possible to digitize, visualize, process, and print nuclear medicine images obtained from analogue gamma cameras was implemented in the Nuclear Medicine Division. Such a system yields higher quality images than those obtained with analogue cameras while keeping operating costs considerably lower (filming: 24.6%, fixing: 48.2%, developing: 26%). Analogue gamma camera systems can be digitalized economically. This system makes it possible to obtain optimal clinical quality nuclear medicine images, to increase the acquisition and processing efficiency, and to reduce the steps involved in each exam.

  12. Gamma cameras - a method of evaluation

    International Nuclear Information System (INIS)

    Oates, L.; Bibbo, G.

    2000-01-01

    Full text: With the sophistication and longevity of the modern gamma camera it is not often that the need arises to evaluate a gamma camera for purchase. We have recently been placed in the position of retiring our two single headed cameras of some vintage and replacing them with a state of the art dual head variable angle gamma camera. The process used for the evaluation consisted of five parts: (1) Evaluation of the technical specification as expressed in the tender document; (2) A questionnaire adapted from the British Society of Nuclear Medicine; (3) Site visits to assess gantry configuration, movement, patient access and occupational health, welfare and safety considerations; (4) Evaluation of the processing systems offered; (5) Whole of life costing based on equally configured systems. The results of each part of the evaluation were expressed using a weighted matrix analysis with each of the criteria assessed being weighted in accordance with their importance to the provision of an effective nuclear medicine service for our centre and the particular importance to paediatric nuclear medicine. This analysis provided an objective assessment of each gamma camera system from which a purchase recommendation was made. Copyright (2000) The Australian and New Zealand Society of Nuclear Medicine Inc

  13. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    Science.gov (United States)

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently polar shadowed and sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.

  14. On-chip micro-power: three-dimensional structures for micro-batteries and micro-supercapacitors

    Science.gov (United States)

    Beidaghi, Majid; Wang, Chunlei

    2010-04-01

    With the miniaturization of portable electronic devices, there is a demand for micro-power sources that can be integrated on semiconductor chips. Various micro-batteries have been developed in recent years to generate or store the energy needed by microsystems. Micro-supercapacitors have also been developed recently to couple with micro-batteries and energy-harvesting microsystems and provide the peak power. Increasing the capacity per footprint area of micro-batteries and micro-supercapacitors is a great challenge. One promising route is the manufacturing of three-dimensional (3D) structures for these micro-devices. In this paper, the recent advances in the fabrication of 3D structures for micro-batteries and micro-supercapacitors are briefly reviewed.

  15. Laser based micro forming and assembly.

    Energy Technology Data Exchange (ETDEWEB)

    MacCallum, Danny O' Neill; Wong, Chung-Nin Channy; Knorovsky, Gerald Albert; Steyskal, Michele D.; Lehecka, Tom (Pennsylvania State University, Freeport, PA); Scherzinger, William Mark; Palmer, Jeremy Andrew

    2006-11-01

    It has been shown that thermal energy imparted to a metallic substrate by laser heating induces a transient temperature gradient through the thickness of the sample. In favorable conditions of laser fluence and absorptivity, the resulting inhomogeneous thermal strain leads to a measurable permanent deflection. This project established parameters for laser micro forming of thin materials that are relevant to MESA generation weapon system components and confirmed methods for producing micrometer displacements with repeatable bend direction and magnitude. Precise micro forming vectors were realized through computational finite element analysis (FEA) of laser-induced transient heating that indicated the optimal combination of laser heat input relative to the material being heated and its thermal mass. Precise laser micro forming was demonstrated in two practical manufacturing operations of importance to the DOE complex: micrometer gap adjustments of precious metal alloy contacts and forming of meso scale cones.

  16. Monitoring of morphology and physical properties of cultured cells using a micro camera and a quartz crystal with transparent indium tin oxide electrodes after injections of glutaraldehyde and trypsin

    International Nuclear Information System (INIS)

    Kang, Hyen-Wook; Ida, Kazumi; Yamamoto, Yuji; Muramatsu, Hiroshi

    2008-01-01

    For investigating the effects of chemical stimulation on cultured cells, we have developed a quartz crystal sensor system with a micro charge-coupled device (CCD) camera that enables microphotograph imaging simultaneously with quartz crystal measurement. Human hepatoma cell line (HepG2) cells were cultured on the quartz crystal through a collagen film. The electrodes of the quartz crystal were made of transparent indium tin oxide (ITO), which makes it possible to obtain transparent-mode photographs. Glutaraldehyde and trypsin were injected into the cell chamber, respectively. The response of the quartz crystal was monitored and microphotographs were recorded, and the resonance frequency and resonance resistance were analyzed with an F-R diagram that plots resonance frequency against resonance resistance. In the case of the glutaraldehyde injection, the cells responded in two steps: the fast response of the cross-linking reaction and a subsequent internal change in the cells. In the case of the trypsin injection, the response comprised two processes. In the first step, cell adhesion factors were cleaved and the cell structure became round; in the next step, the cells were deposited on the quartz crystal surface and the cell surface came into direct contact with the quartz crystal surface.

  17. New readout and data-acquisition system in an electron-tracking Compton camera for MeV gamma-ray astronomy (SMILE-II)

    Energy Technology Data Exchange (ETDEWEB)

    Mizumoto, T., E-mail: mizumoto@cr.scphys.kyoto-u.ac.jp [Department of Physics, Kyoto University, 606-8502 Kyoto (Japan); Matsuoka, Y. [Department of Physics, Kyoto University, 606-8502 Kyoto (Japan); Mizumura, Y. [Unit of Synergetic Studies for Space, Kyoto University, 606-8502 Kyoto (Japan); Department of Physics, Kyoto University, 606-8502 Kyoto (Japan); Tanimori, T. [Department of Physics, Kyoto University, 606-8502 Kyoto (Japan); Unit of Synergetic Studies for Space, Kyoto University, 606-8502 Kyoto (Japan); Kubo, H.; Takada, A.; Iwaki, S.; Sawano, T.; Nakamura, K.; Komura, S.; Nakamura, S.; Kishimoto, T.; Oda, M.; Miyamoto, S.; Takemura, T.; Parker, J.D.; Tomono, D.; Sonoda, S. [Department of Physics, Kyoto University, 606-8502 Kyoto (Japan); Miuchi, K. [Department of Physics, Kobe University, 658-8501 Kobe (Japan); Kurosawa, S. [Institute for Materials Research, Tohoku University, 980-8577 Sendai (Japan)

    2015-11-11

    For MeV gamma-ray astronomy, we have developed an electron-tracking Compton camera (ETCC) as a MeV gamma-ray telescope capable of rejecting the radiation background and attaining the high sensitivity of near 1 mCrab in space. Our ETCC comprises a gaseous time-projection chamber (TPC) with a micro pattern gas detector for tracking recoil electrons and a position-sensitive scintillation camera for detecting scattered gamma rays. After the success of a first balloon experiment in 2006 with a small ETCC (using a 10×10×15 cm³ TPC) for measuring diffuse cosmic and atmospheric sub-MeV gamma rays (Sub-MeV gamma-ray Imaging Loaded-on-balloon Experiment I; SMILE-I), a (30 cm)³ medium-sized ETCC was developed to measure MeV gamma-ray spectra from celestial sources, such as the Crab Nebula, with single-day balloon flights (SMILE-II). To achieve this goal, a 100-times-larger detection area compared with that of SMILE-I is required without changing the weight or power consumption of the detector system. In addition, the event rate is also expected to dramatically increase during observation. Here, we describe both the concept and the performance of the new data-acquisition system with this (30 cm)³ ETCC to manage 100 times more data while satisfying the severe restrictions regarding the weight and power consumption imposed by a balloon-borne observation. In particular, to improve the detection efficiency of the fine tracks in the TPC from ~10% to ~100%, we introduce a new data-handling algorithm in the TPC. Therefore, for efficient management of such large amounts of data, we developed a data-acquisition system with parallel data flow.

  18. Gamma camera performance: technical assessment protocol

    International Nuclear Information System (INIS)

    Bolster, A.A.; Waddington, W.A.

    1996-01-01

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author)

  19. a Spatio-Spectral Camera for High Resolution Hyperspectral Imaging

    Science.gov (United States)

    Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.

    2017-08-01

    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.

  20. A SPATIO-SPECTRAL CAMERA FOR HIGH RESOLUTION HYPERSPECTRAL IMAGING

    Directory of Open Access Journals (Sweden)

    S. Livens

    2017-08-01

    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600–900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475–925 nm), and we discuss future work.

  1. Performance analysis for gait in camera networks

    OpenAIRE

    Michela Goffredo; Imed Bouchrika; John Carter; Mark Nixon

    2008-01-01

    This paper deploys gait analysis for subject identification in multi-camera surveillance scenarios. We present a new method for viewpoint independent markerless gait analysis that does not require camera calibration and works with a wide range of directions of walking. These properties make the proposed method particularly suitable for gait identification in real surveillance scenarios where people and their behaviour need to be tracked across a set of cameras. Tests on 300 synthetic and real...

  2. Development of gamma camera and application to decontamination

    International Nuclear Information System (INIS)

    Yoshida, Akira; Moro, Eiji; Takahashi, Isao

    2013-01-01

    A gamma camera has been developed to support recovery from the contamination caused by the accident at the Fukushima Dai-ichi Nuclear Power Plant of Tokyo Electric Power Company. The gamma camera enables recognition of the contamination by visualizing radioactivity. It has been utilized for risk communication (explanation to community residents) by local governments in Fukushima. From now on, the gamma camera will be applied to solving decontamination issues: improving the efficiency of decontamination, visualizing the effect of decontamination work, and reducing radioactive waste. (author)

  3. The making of analog module for gamma camera interface

    International Nuclear Information System (INIS)

    Yulinarsari, Leli; Rl, Tjutju; Susila, Atang; Sukandar

    2003-01-01

    An analog module for a gamma camera interface has been made. To computerize a 37-PMT planar gamma camera, interface hardware and software linking the planar gamma camera to a PC have been developed. With this interface, the gamma camera image information (originally an analog signal) is converted to a digital signal, so that data acquisition, image-quality improvement, data analysis and database processing can be carried out with the help of computers. There are three main gamma camera signals: X, Y and Z. This analog module digitizes the analog X and Y signals, which convey position information from the gamma camera crystal. The analog-to-digital conversion is performed by two 12-bit ADCs with a conversion time of 800 ns each; the conversion of each X and Y coordinate is synchronized using the appropriate Z strobe signal for event acceptance.
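The digitization scheme described above (two 12-bit ADCs for X and Y, gated by the Z strobe) can be sketched as follows. The voltage range and clamping behaviour are assumptions made for illustration, not specifications of the module.

```python
def adc12(voltage, v_min=-1.0, v_max=1.0):
    """Quantize an analog voltage into a 12-bit code (0..4095).

    The +/-1 V full-scale range is a hypothetical assumption, not a
    parameter taken from the described interface module.
    """
    voltage = max(v_min, min(v_max, voltage))  # clamp to full scale
    code = round((voltage - v_min) / (v_max - v_min) * 4095)
    return int(code)

def digitize_event(x_volts, y_volts, z_strobe):
    """Convert an (X, Y) position pair only when the Z strobe validates it."""
    if not z_strobe:
        return None  # no strobe: event rejected
    return adc12(x_volts), adc12(y_volts)

print(digitize_event(0.0, 1.0, True))   # mid-scale X, full-scale Y
print(digitize_event(0.0, 1.0, False))  # no strobe: event dropped
```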

  4. Micro fabrication of biodegradable polymer drug delivery devices

    DEFF Research Database (Denmark)

    Nagstrup, Johan

    The pharmaceutical industry is presently facing several obstacles in developing oral drug delivery systems. This is primarily due to the nature of the discovered drug candidates, which often have poor solubility and low permeability across the gastrointestinal epithelium. […] permeability and degradation. These systems are for the majority based on traditional materials used in micro technology, such as SU-8, silicon, and poly(methyl methacrylate). The next step in developing these new drug delivery systems is to replace classical micro fabrication materials with biodegradable polymers. In order to do this successfully, methods for fabricating micro structures in biodegradable polymers need to be developed. The goal of this project has been to develop methods for micro fabrication in biodegradable polymers and to use these methods to produce micro systems for oral drug delivery. […]

  5. An evolution of image source camera attribution approaches.

    Science.gov (United States)

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of a digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches face many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics, in conjunction with a classification of ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts and the methods used to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics

  6. Driving with head-slaved camera system

    NARCIS (Netherlands)

    Oving, A.B.; Erp, J.B.F. van

    2001-01-01

    In a field experiment, we tested the effectiveness of a head-slaved camera system for driving an armoured vehicle under armour. This system consists of a helmet-mounted display (HMD), a headtracker, and a motion platform with two cameras. Subjects performed several driving tasks on paved and in

  7. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
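The zero-point calibration step described above can be illustrated with a small numeric sketch: given synthetic catalog magnitudes and measured counts for reference stars, the zero-point is the mean offset between catalog and instrumental magnitudes. The star values below are invented for illustration, not MEO data.

```python
import math

# Hypothetical reference stars: (synthetic catalog magnitude, measured counts).
stars = [(5.2, 9120.0), (6.0, 4370.0), (6.8, 2140.0)]

# Per-star zero-point: catalog magnitude minus instrumental magnitude,
# where m_inst = -2.5 * log10(counts).
zero_points = [m_cat + 2.5 * math.log10(counts) for m_cat, counts in stars]

# Camera zero-point: mean offset over all reference stars.
zp = sum(zero_points) / len(zero_points)

# A meteor's apparent magnitude then follows from its measured counts.
meteor_counts = 15000.0
meteor_mag = -2.5 * math.log10(meteor_counts) + zp
print(round(zp, 2), round(meteor_mag, 2))
```

In practice the scatter of the per-star zero-points around the mean gives the 0.05–0.10 mag accuracy quoted in the abstract.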

  8. Phase camera experiment for Advanced Virgo

    International Nuclear Information System (INIS)

    Agatsuma, Kazuhiro; Beuzekom, Martin van; Schaaf, Laura van der; Brand, Jo van den

    2016-01-01

    We report on a study of the phase camera, which is a frequency-selective wave-front sensor of a laser beam. This sensor is utilized for monitoring sidebands produced by phase modulations in a gravitational wave (GW) detector. In the operation of GW detectors, the laser modulation/demodulation method is used to measure mirror displacements and for the position controls. This plays a significant role because the quality of the controls affects the noise level of the GW detector. The phase camera is able to monitor each sideband separately, which is a great benefit for the manipulation of these delicate controls. Overcoming mirror aberrations will also be an essential part of Advanced Virgo (AdV), a GW detector close to Pisa. Low-frequency sidebands in particular can be affected greatly by aberrations in one of the interferometer cavities. The phase cameras allow tracking of such changes because the state of the sidebands gives information on mirror aberrations. A prototype of the phase camera has been developed and is currently being tested. The performance checks are almost complete and the installation of the optics at the AdV site has started. After installation and commissioning, the phase camera will be combined with a thermal compensation system that consists of CO2 lasers and compensation plates. In this paper, we focus on the prototype and show some limitations arising from the scanner performance. - Highlights: • The phase camera is being developed for a gravitational wave detector. • The scanner performance limits the operation speed and layout design of the system. • An operation range was found by measuring the frequency response of the scanner.

  9. Phase camera experiment for Advanced Virgo

    Energy Technology Data Exchange (ETDEWEB)

    Agatsuma, Kazuhiro, E-mail: agatsuma@nikhef.nl [National Institute for Subatomic Physics, Amsterdam (Netherlands); Beuzekom, Martin van; Schaaf, Laura van der [National Institute for Subatomic Physics, Amsterdam (Netherlands); Brand, Jo van den [National Institute for Subatomic Physics, Amsterdam (Netherlands); VU University, Amsterdam (Netherlands)

    2016-07-11

    We report on a study of the phase camera, which is a frequency-selective wave-front sensor of a laser beam. This sensor is utilized for monitoring sidebands produced by phase modulations in a gravitational wave (GW) detector. In the operation of GW detectors, the laser modulation/demodulation method is used to measure mirror displacements and for the position controls. This plays a significant role because the quality of the controls affects the noise level of the GW detector. The phase camera is able to monitor each sideband separately, which is a great benefit for the manipulation of these delicate controls. Overcoming mirror aberrations will also be an essential part of Advanced Virgo (AdV), a GW detector close to Pisa. Low-frequency sidebands in particular can be affected greatly by aberrations in one of the interferometer cavities. The phase cameras allow tracking of such changes because the state of the sidebands gives information on mirror aberrations. A prototype of the phase camera has been developed and is currently being tested. The performance checks are almost complete and the installation of the optics at the AdV site has started. After installation and commissioning, the phase camera will be combined with a thermal compensation system that consists of CO{sub 2} lasers and compensation plates. In this paper, we focus on the prototype and show some limitations arising from the scanner performance. - Highlights: • The phase camera is being developed for a gravitational wave detector. • The scanner performance limits the operation speed and layout design of the system. • An operation range was found by measuring the frequency response of the scanner.

  10. Sustainable Micro-Manufacturing of Micro-Components via Micro Electrical Discharge Machining

    Directory of Open Access Journals (Sweden)

    Valeria Marrocco

    2011-12-01

    Micro-manufacturing has emerged in recent years as a new engineering area with the potential of increasing people's quality of life through the production of innovative micro-devices to be used, for example, in the biomedical, micro-electronics or telecommunication sectors. The possibility of decreasing energy consumption makes micro-manufacturing extremely appealing in terms of environmental protection. However, despite the common belief that the micro-scale implies higher sustainability compared to traditional manufacturing processes, recent research shows that some factors can make micro-manufacturing processes not as sustainable as expected. In particular, the use of rare raw materials and the need for higher process purity, to preserve product quality and manufacturing equipment, can be a source of additional environmental burden and process costs. Consequently, research is needed to optimize micro-manufacturing processes in order to guarantee the minimum consumption of raw materials, consumables and energy. In this paper, the experimental results obtained by the micro-electrical discharge machining (micro-EDM) of micro-channels made on Ni–Cr–Mo steel are reported. The aim of this investigation is to shed light on the relation and dependence between the material removal process, characterized by the material removal rate (MRR) and the tool wear ratio (TWR), and some of the most important technological parameters (i.e., open voltage, discharge current, pulse width and frequency), in order to experimentally quantify the material waste produced and to optimize the technological process so as to decrease it.
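The two quantities the study evaluates, material removal rate (MRR) and tool wear ratio (TWR), reduce to simple ratios. A minimal sketch, with hypothetical volumes and times rather than the paper's measurements:

```python
def material_removal_rate(removed_volume_mm3, machining_time_min):
    """MRR: workpiece volume removed per unit machining time (mm^3/min)."""
    return removed_volume_mm3 / machining_time_min

def tool_wear_ratio(tool_wear_volume_mm3, removed_volume_mm3):
    """TWR: electrode volume lost per unit of workpiece volume removed."""
    return tool_wear_volume_mm3 / removed_volume_mm3

# Hypothetical numbers for one micro-channel, not values from the paper.
mrr = material_removal_rate(removed_volume_mm3=0.24, machining_time_min=12.0)
twr = tool_wear_ratio(tool_wear_volume_mm3=0.006, removed_volume_mm3=0.24)
print(round(mrr, 4))  # 0.02 mm^3/min
print(round(twr, 4))  # 0.025
```

A high MRR with a low TWR is the optimization target: more workpiece material removed per minute while wasting less electrode material.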

  11. Two-Phase Algorithm for Optimal Camera Placement

    Directory of Open Access Journals (Sweden)

    Jun-Woo Ahn

    2016-01-01

    As markers for visual sensor networks have become larger, interest in the optimal camera placement problem has continued to increase. The most featured solution for the optimal camera placement problem is based on binary integer programming (BIP). Due to the NP-hard characteristic of the optimal camera placement problem, however, it is difficult to find a solution for a complex, real-world problem using BIP. Many approximation algorithms have been developed to solve this problem. In this paper, a two-phase algorithm is proposed as an approximation algorithm based on BIP that can solve the optimal camera placement problem for a placement space larger than in current studies. This study solves the problem in three-dimensional space for a real-world structure.
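The paper's method operates on a BIP formulation; as a rough illustration of the underlying coverage trade-off, a greedy heuristic can pick, at each step, the candidate pose that covers the most not-yet-covered targets. The poses and target sets below are invented, and this greedy sketch is not the two-phase algorithm itself.

```python
def greedy_placement(candidates, k):
    """Pick up to k poses from candidates: {pose_name: set of covered targets}."""
    chosen, covered = [], set()
    for _ in range(k):
        # Pick the pose adding the most not-yet-covered targets.
        best = max(candidates, key=lambda p: len(candidates[p] - covered))
        if not candidates[best] - covered:
            break  # no pose adds anything new
        chosen.append(best)
        covered |= candidates[best]
    return chosen, covered

# Hypothetical candidate camera poses and the target points each covers.
poses = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {4, 5, 6},
    "D": {1, 6},
}
chosen, covered = greedy_placement(poses, k=2)
print(chosen, sorted(covered))
```

The exact BIP solution maximizes the same objective globally; the greedy pass only approximates it, which is why approximation quality matters for large placement spaces.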

  12. Portable mini gamma camera for medical applications

    CERN Document Server

    Porras, E; Benlloch, J M; El-Djalil-Kadi-Hanifi, M; López, S; Pavon, N; Ruiz, J A; Sánchez, F; Sebastiá, A

    2002-01-01

    A small, portable and low-cost gamma camera for medical applications has been developed and clinically tested. This camera, based on a scintillator crystal and a Position Sensitive Photo-Multiplier Tube, has a useful field of view of 4.6 cm diameter and provides 2.2 mm of intrinsic spatial resolution. Its mobility and light weight allow it to reach the patient from any desired direction. This camera images small organs with high efficiency and thus addresses the demand for devices for specific clinical applications. In this paper, we present the camera and briefly describe the procedures that have led us to choose its configuration and the image reconstruction method. The clinical tests and diagnostic capability are also presented and discussed.

  13. Color reproduction software for a digital still camera

    Science.gov (United States)

    Lee, Bong S.; Park, Du-Sik; Nam, Byung D.

    1998-04-01

    We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and color matching between the two devices. The reproduction was performed at three levels: level processing, gamma correction, and color transformation. The image contrast was increased by the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for obtaining the unknown monitor gamma was proposed. As a result, the level-processed values were adjusted by a look-up table created from the camera and monitor gamma corrections. For the camera's color transformation, a 3-by-3 or 3-by-4 matrix was used, calculated by regression between the gamma-corrected values and the measured tristimulus values of each test color sample. The various reproduced images were displayed in a dialogue box implemented in our software, generated according to four illuminations for the camera and three color temperatures for the monitor. A user can easily choose the best reproduced image by comparing them.
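The pipeline described above (gamma correction followed by a colour transformation matrix) can be sketched in miniature. The gamma value and the 3×3 matrix entries below are illustrative placeholders, not the values fitted in the paper.

```python
def gamma_correct(rgb, gamma=2.2):
    """Linearize normalized (0..1) camera values with an assumed gamma."""
    return [c ** gamma for c in rgb]

def apply_matrix(m, rgb):
    """3x3 colour transform: camera RGB -> display/tristimulus values."""
    return [sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3)]

# Hypothetical regression result; rows sum to 1 so neutral gray is preserved.
M = [[0.90, 0.10, 0.00],
     [0.05, 0.90, 0.05],
     [0.00, 0.10, 0.90]]

linear = gamma_correct([0.5, 0.5, 0.5])  # mid-gray input
out = apply_matrix(M, linear)
print([round(v, 3) for v in out])
```

In the paper's workflow the matrix is fitted by regression against measured tristimulus values; here a gray-preserving matrix simply demonstrates the order of operations.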

  14. Camera Network Coverage Improving by Particle Swarm Optimization

    NARCIS (Netherlands)

    Xu, Y.C.; Lei, B.; Hendriks, E.A.

    2011-01-01

    This paper studies how to improve the field of view (FOV) coverage of a camera network. We focus on a special but practical scenario where the cameras are randomly scattered in a wide area and each camera may adjust its orientation but cannot move in any direction. We propose a particle swarm

  15. Wired and Wireless Camera Triggering with Arduino

    Science.gov (United States)

    Kauhanen, H.; Rönnholm, P.

    2017-10-01

    Synchronous triggering is an important task that allows simultaneous data capture from multiple cameras. Accurate synchronization enables 3D measurements of moving objects or from a moving platform. In this paper, we describe one wired and four wireless variations of Arduino-based low-cost remote trigger systems designed to provide a synchronous trigger signal for industrial cameras. Our wireless systems utilize 315 MHz or 434 MHz frequencies with noise filtering capacitors. In order to validate the synchronization accuracy, we developed a prototype of a rotating trigger detection system (named RoTriDeS). This system is suitable to detect the triggering accuracy of global shutter cameras. As a result, the wired system indicated an 8.91 μs mean triggering time difference between two cameras. Corresponding mean values for the four wireless triggering systems varied between 7.92 and 9.42 μs. Presented values include both camera-based and trigger-based desynchronization. Arduino-based triggering systems appeared to be feasible, and they have the potential to be extended to more complicated triggering systems.

  16. Super-resolution in plenoptic cameras using FPGAs.

    Science.gov (United States)

    Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime

    2014-05-16

    Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras, allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes.

  17. Super-Resolution in Plenoptic Cameras Using FPGAs

    Directory of Open Access Journals (Sweden)

    Joel Pérez

    2014-05-01

    Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras, allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes.

  18. Camera Control and Geo-Registration for Video Sensor Networks

    Science.gov (United States)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
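Mapping a PTZ pose into the spherical panoramic viewspace described above amounts to converting (pan, tilt) angles into a viewing direction. A minimal sketch, with assumed angle conventions (pan measured clockwise from north, tilt up from the horizontal) that are illustrative rather than taken from the paper's camera control model:

```python
import math

def ptz_to_direction(pan_deg, tilt_deg):
    """Return the unit view vector (x east, y north, z up) for a PTZ pose."""
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    x = math.cos(tilt) * math.sin(pan)
    y = math.cos(tilt) * math.cos(pan)
    z = math.sin(tilt)
    return x, y, z

# Pointing due east at the horizon:
x, y, z = ptz_to_direction(90.0, 0.0)
print(round(x, 3), round(y, 3), round(z, 3))  # 1.0 0.0 0.0
```

Intersecting such view rays with a ground plane is one way to associate each PTZ pose with a pixel of the aerial orthophotograph, giving the geo-referenced map representation the abstract describes.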

  19. Camera Layout Design for the Upper Stage Thrust Cone

    Science.gov (United States)

    Wooten, Tevin; Fowler, Bart

    2010-01-01

    Engineers in the Integrated Design and Analysis Division (EV30) use a variety of different tools to aid in the design and analysis of the Ares I vehicle. One primary tool in use is Pro-Engineer. Pro-Engineer is computer-aided design (CAD) software that allows designers to create computer-generated structural models of vehicle structures. For the Upper Stage thrust cone, Pro-Engineer was used to assist in the design of a layout for two camera housings. These cameras observe the separation between the first and second stages of the Ares I vehicle. For the Ares I-X, one standard-speed camera was used. The Ares I design calls for two separate housings, three cameras, and a lighting system. With previous design concepts and verification strategies in mind, a new layout for the two-camera design concept was developed with members of the EV32 team. With the new design, Pro-Engineer was used to draw the layout and observe how the two camera housings fit with the thrust cone assembly. Future analysis of the camera housing design will verify the stability and clearance of the cameras with other hardware present on the thrust cone.

  20. Applying a foil queue micro-electrode in micro-EDM to fabricate a 3D micro-structure

    Science.gov (United States)

    Xu, Bin; Guo, Kang; Wu, Xiao-yu; Lei, Jian-guo; Liang, Xiong; Guo, Deng-ji; Ma, Jiang; Cheng, Rong

    2018-05-01

    Applying a 3D micro-electrode in micro electrical discharge machining (micro-EDM) makes it possible to fabricate a 3D micro-structure with an up-and-down reciprocating method. However, this processing method has some shortcomings, such as a low success rate and a complex fabrication process for 3D micro-electrodes. Focusing on these shortcomings, this paper proposes a novel 3D micro-EDM process based on a foil queue micro-electrode. First, a 3D micro-electrode is discretized into several foil micro-electrodes, which together constitute a foil queue micro-electrode. Then, following the planned process path, the foil micro-electrodes are applied in micro-EDM sequentially, and the machining results of the individual foil micro-electrodes superimpose to form the 3D micro-structure. However, a step effect occurs on the 3D micro-structure surface, which has an adverse effect on the 3D micro-structure. To tackle this problem, this paper proposes to reduce this adverse effect through rounded-corner wear at the end of the foil micro-electrode, and studies the impact of the machining parameters on rounded-corner wear and on the step effect on the micro-structure surface. Finally, using a wire-cutting voltage of 80 V, a current of 0.5 A and a pulse width modulation ratio of 1:4, the foil queue micro-electrode was fabricated by wire electrical discharge machining. Then, using a pulse width of 100 ns, a pulse interval of 200 ns, a voltage of 100 V and 304# stainless steel as the workpiece material, the foil queue micro-electrode was applied in micro-EDM to process a 3D micro-structure with hemispherical features, verifying the feasibility of this process.