WorldWideScience

Sample records for high-end mobile camera

  1. Cameras in mobile phones

    Science.gov (United States)

    Nummela, Ville; Viinikanoja, Jarkko; Alakarhu, Juha

    2006-04-01

    Camera phones are one of the fastest-growing consumer markets today. Total volume has grown rapidly over the past few years, and millions of camera-equipped mobile phones are now sold, while camera resolution and functionality have risen from CIF towards DSC level. From the camera point of view, the mobile world is an extremely challenging field. Cameras must deliver good image quality in a small size, be reliable, and have a construction suited to mass manufacturing; every component of the imaging chain should be well optimized in this environment. Image quality and usability are the most important parameters for the user, and the current trend of adding more megapixels while shrinking pixel size affects both. Reliability, miniaturization, and cost are likewise key drivers for product development. In an optimized solution all parameters are in balance, but finding the right trade-offs is not an easy task. This paper discusses trade-offs related to optics and their effects on the image quality and usability of cameras, and lists key development areas from the mobile phone camera point of view.

  2. Low-cost mobile phone microscopy with a reversed mobile phone camera lens.

    Directory of Open Access Journals (Sweden)

    Neil A Switz

    Full Text Available The increasing capabilities and ubiquity of mobile phones and their associated digital cameras offer the possibility of extending low-cost, portable diagnostic microscopy to underserved and low-resource areas. However, mobile phone microscopes created by adding magnifying optics to the phone's camera module have been unable to make use of the full image sensor due to the specialized design of the embedded camera lens, exacerbating the tradeoff between resolution and field of view inherent to optical systems. This tradeoff is acutely felt for diagnostic applications, where the speed and cost of image-based diagnosis is related to the area of the sample that can be viewed at sufficient resolution. Here we present a simple and low-cost approach to mobile phone microscopy that uses a reversed mobile phone camera lens added to an intact mobile phone to enable high quality imaging over a significantly larger field of view than standard microscopy. We demonstrate use of the reversed lens mobile phone microscope to identify red and white blood cells in blood smears and soil-transmitted helminth eggs in stool samples.

  3. Low-cost mobile phone microscopy with a reversed mobile phone camera lens.

    Science.gov (United States)

    Switz, Neil A; D'Ambrosio, Michael V; Fletcher, Daniel A

    2014-01-01

    The increasing capabilities and ubiquity of mobile phones and their associated digital cameras offer the possibility of extending low-cost, portable diagnostic microscopy to underserved and low-resource areas. However, mobile phone microscopes created by adding magnifying optics to the phone's camera module have been unable to make use of the full image sensor due to the specialized design of the embedded camera lens, exacerbating the tradeoff between resolution and field of view inherent to optical systems. This tradeoff is acutely felt for diagnostic applications, where the speed and cost of image-based diagnosis is related to the area of the sample that can be viewed at sufficient resolution. Here we present a simple and low-cost approach to mobile phone microscopy that uses a reversed mobile phone camera lens added to an intact mobile phone to enable high quality imaging over a significantly larger field of view than standard microscopy. We demonstrate use of the reversed lens mobile phone microscope to identify red and white blood cells in blood smears and soil-transmitted helminth eggs in stool samples.

  4. Mobile phone camera benchmarking: combination of camera speed and image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged, and methods also exist to evaluate camera speed; for example, ISO 15781 defines several measurements of camera system delays. However, the speed or rapidity metrics of the mobile phone camera system have not been combined with the quality metrics, even though camera speed has become an increasingly important performance feature. This work has several tasks. First, the most important image quality metrics are collected from standards and papers. Second, the speed-related metrics of a mobile phone camera system are collected from standards and papers, and novel speed metrics are identified. Third, combinations of the quality and speed metrics are validated using mobile phones on the market; the measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The result of this work is a detailed benchmarking of mobile phone camera systems on the market. The paper also proposes a combined benchmarking metric that includes both quality and speed parameters.
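
    The abstract does not give the scoring formula itself; the sketch below is a minimal, hypothetical illustration of how normalized quality and speed metrics could be folded into one benchmark score. The metric names, normalization ranges, and weights are all assumptions, not values from the paper.

```python
# Minimal sketch of combining normalized quality and speed metrics into one
# benchmark score. Metric names, ranges, and weights are hypothetical; the
# paper's actual formula is not given in the abstract.

def normalize(value, worst, best):
    """Map a raw metric onto [0, 1], where 1 is the best value."""
    return max(0.0, min(1.0, (value - worst) / (best - worst)))

def benchmark_score(metrics, weights):
    """Weighted sum of normalized metrics; higher is better."""
    return sum(weights[name] * value for name, value in metrics.items())

# Example: two quality metrics and two speed metrics for one phone camera.
raw = {
    "mtf50_cycles_per_pixel": 0.28,   # sharpness (higher is better)
    "visual_noise": 2.1,              # noise (lower is better)
    "shot_to_shot_s": 0.9,            # speed (lower is better)
    "autofocus_delay_s": 0.35,        # speed (lower is better)
}
normalized = {
    "mtf50_cycles_per_pixel": normalize(raw["mtf50_cycles_per_pixel"], 0.05, 0.40),
    "visual_noise": normalize(raw["visual_noise"], 6.0, 0.5),
    "shot_to_shot_s": normalize(raw["shot_to_shot_s"], 3.0, 0.2),
    "autofocus_delay_s": normalize(raw["autofocus_delay_s"], 1.5, 0.1),
}
weights = {"mtf50_cycles_per_pixel": 0.35, "visual_noise": 0.25,
           "shot_to_shot_s": 0.20, "autofocus_delay_s": 0.20}
print(f"combined score: {benchmark_score(normalized, weights):.3f}")
```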

  5. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    Science.gov (United States)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

    The authors have been developing convenient 3D measurement methods using consumer-grade digital cameras, which are expected to become useful photogrammetric devices for various close range application fields. Meanwhile, mobile phone cameras with 10 megapixels have appeared on the Japanese market. In these circumstances, an epoch-making question arises: can mobile phone cameras take the place of consumer-grade digital cameras in close range photogrammetric applications? To evaluate the potential of mobile phone cameras in close range photogrammetry, this paper compares mobile phone cameras and consumer-grade digital cameras with respect to lens distortion, reliability, stability, and robustness. Calibration tests of 16 mobile phone cameras and 50 consumer-grade digital cameras were conducted indoors using a test target, and the practicability of a mobile phone camera for close range photogrammetry was evaluated outdoors. The paper shows that mobile phone cameras are able to take the place of consumer-grade digital cameras and to expand the market in digital photogrammetric fields.
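
    The calibration software and target used in the tests are not specified in the abstract; the following is a minimal OpenCV sketch of the kind of indoor target calibration described, estimating the intrinsic matrix and lens distortion from a set of target images. The chessboard pattern, its geometry, and the image folder are assumptions.

```python
# Minimal sketch of intrinsic calibration from indoor target images, as one
# way to estimate lens distortion for a phone or a compact camera.
# The chessboard target and its geometry are assumptions, not the paper's setup.
import glob
import cv2
import numpy as np

pattern = (9, 6)                     # inner corners of the assumed chessboard
square = 0.025                       # square size in metres (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.jpg"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Assumes at least one usable target image was found above.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", ret)      # a simple stability/reliability indicator
print("radial/tangential distortion:", dist.ravel())
```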

  6. Automatic camera to laser calibration for high accuracy mobile mapping systems using INS

    Science.gov (United States)

    Goeman, Werner; Douterloigne, Koen; Gautama, Sidharta

    2013-09-01

    A mobile mapping system (MMS) is a mobile multi-sensor platform developed by the geoinformation community to support the acquisition of huge amounts of geodata in the form of georeferenced high-resolution images and dense laser clouds. Since data fusion and data integration techniques are increasingly able to combine the complementary strengths of different sensor types, the external calibration of a camera to a laser scanner is a common prerequisite on today's mobile platforms. The calibration methods, nevertheless, are often poorly documented, almost always time-consuming, demand expert knowledge, and often require a carefully constructed calibration environment. A new methodology is studied and explored to provide a high-quality external calibration of a pinhole camera to a laser scanner that is automatic, easy to perform, robust, and foolproof. The method presented here uses a portable, standard ranging pole, which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. In many cases, the camera and laser sensor are each calibrated with respect to the INS, so the transformation from camera to laser accumulates the error of each sensor relative to the INS. Here, the camera is calibrated directly with respect to the laser frame, using the time synchronization between the sensors for data association. In this study, the inertial relative movement is exploited to collect more useful calibration data. This results in a better intersensor calibration, allowing better coloring of the point clouds and a more accurate depth mask for images, especially at the edges of objects in the scene.
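
    The abstract reduces the extrinsic calibration to a well-studied absolute orientation problem; the sketch below shows the standard SVD (Kabsch/Horn-style) solution for the rigid transform between corresponding 3D point sets, with placeholder correspondences rather than points actually collected with the ranging pole.

```python
# Minimal sketch of the absolute orientation step: estimate the rigid
# transform (R, t) that maps points expressed in the camera frame onto the
# same points in the laser frame. Standard SVD/Kabsch solution; generating
# the correspondences with the ranging pole is not shown.
import numpy as np

def absolute_orientation(src, dst):
    """src, dst: (N, 3) arrays of corresponding 3D points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                             # proper rotation (det = +1)
    t = dst_c - R @ src_c
    return R, t

# Placeholder correspondences (camera-frame vs. laser-frame coordinates).
cam_pts = np.random.rand(10, 3)
true_R, true_t = np.eye(3), np.array([0.1, -0.05, 0.3])
laser_pts = cam_pts @ true_R.T + true_t
R, t = absolute_orientation(cam_pts, laser_pts)
print(np.allclose(R, true_R), np.allclose(t, true_t))   # True True
```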

  7. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged, and methods also exist to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone camera system have not been combined with the quality metrics, even though camera speed has become an increasingly important camera performance feature. This work has several tasks. First, the most important image quality and speed-related metrics of a mobile phone camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market; the measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with a visual noise measurement and updated with the latest mobile phone versions.

  8. Dynamical scene analysis with a moving camera: mobile targets detection system

    International Nuclear Information System (INIS)

    Hennebert, Christine

    1996-01-01

    This thesis deals with the detection of moving objects in monocular image sequences acquired with a mobile camera. We propose a method able to detect small moving objects in visible or infrared images of real outdoor scenes. In order to detect objects with very low apparent motion, we consider an analysis over a large temporal interval. We have chosen to compensate for the dominant motion due to the camera displacement over several consecutive images, in order to form a sub-sequence of images for which the camera seems virtually static. We have also developed a new approach allowing the extraction of the different layers of a real scene, in order to deal with cases where the 2D motion due to the camera displacement cannot be globally compensated for. To this end, we use a hierarchical model with two levels: a local merging step and a global merging step. Then, an appropriate temporal filtering is applied to the registered image sub-sequence to enhance signals corresponding to moving objects. The detection issue is stated as a labeling problem within a statistical regularization based on Markov Random Fields. Our method has been validated on numerous real image sequences depicting complex outdoor scenes. Finally, the feasibility of an integrated circuit for mobile object detection has been demonstrated; this circuit could lead to an ASIC. (author)
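
    A minimal sketch of the dominant-motion compensation idea between two frames, using a RANSAC homography and frame differencing; the feature settings, threshold, and file names are assumptions, and this is not the thesis' exact scheme, which works over longer sub-sequences and uses MRF-based regularization.

```python
# Minimal sketch of dominant-motion compensation between two frames:
# estimate a global homography from tracked features, warp the previous
# frame so the background appears static, then difference to highlight
# independently moving objects. Parameters and file names are illustrative.
import cv2

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=500,
                                   qualityLevel=0.01, minDistance=8)
pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None)
good_prev = pts_prev[status.ravel() == 1]
good_curr = pts_curr[status.ravel() == 1]

# Dominant (background) motion as a homography, robust to moving objects.
H, inliers = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)
prev_warped = cv2.warpPerspective(prev, H, prev.shape[::-1])

# Residual motion after compensation: candidate moving objects.
residual = cv2.absdiff(curr, prev_warped)
_, mask = cv2.threshold(residual, 25, 255, cv2.THRESH_BINARY)
cv2.imwrite("moving_candidates.png", mask)
```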

  9. Users’ Perceptions Using Low-End and High-End Mobile-Rendered HMDs: A Comparative Study

    Directory of Open Access Journals (Sweden)

    M.-Carmen Juan

    2018-02-01

    Full Text Available Currently, it is possible to combine Mobile-Rendered Head-Mounted Displays (MR HMDs) with smartphones to have Augmented Reality platforms. The differences between these types of platforms can affect the user's experience and satisfaction. This paper presents a study that analyses the user's perception when using the same Augmented Reality app with two MR HMDs (low-end and high-end). Our study evaluates the user's experience taking into account several factors (control, sensory, distraction, ergonomics and realism). An Augmented Reality app was developed to carry out the comparison of the two MR HMDs. The application had exactly the same visual appearance and functionality for both devices. Forty adults participated in our study. From the results, there were no statistically significant differences in the users' experience for the different factors when using the two MR HMDs, except for the ergonomic factors, in favour of the high-end MR HMD. Even though the scores for the high-end MR HMD were higher in nearly all of the questions, both MR HMDs provided a very satisfying viewing experience with very high scores. The results were independent of gender and age. The participants rated the high-end MR HMD as the best one. Nevertheless, when they were asked which MR HMD they would buy, the participants chose the low-end MR HMD taking into account its price.

  10. High-speed holographic camera

    International Nuclear Information System (INIS)

    Novaro, Marc

    The high-speed holographic camera is a diagnostic instrument using holography as an information storing support. It allows 10 holograms of an object to be taken, with exposure times of 1.5 ns, separated in time by 1 or 2 ns. In order to obtain these results easily, no moving part is used in the set-up.

  11. Engineering task plan for flammable gas atmosphere mobile color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1995-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and testing of the mobile video camera systems. The color video camera systems will be used to observe and record the activities within the vapor space of a tank on a limited exposure basis. The units will be fully mobile and designed for operation in the single-shell flammable gas producing tanks. The objective of this task is to provide two mobile camera systems for use in flammable gas producing single-shell tanks (SSTs) for the Flammable Gas Tank Safety Program. The camera systems will provide observation, video recording, and monitoring of the activities that occur in the vapor space of the applicable tanks. The camera systems will be designed to be totally mobile, capable of deployment up to 6.1 meters into a 4 inch (minimum) riser.

  12. Development of an Algorithm for Heart Rate Measurement Using a Mobile Phone Camera

    Directory of Open Access Journals (Sweden)

    D. A. Laure

    2014-01-01

    Full Text Available Nowadays there exist many different ways to measure a person's heart rate. One of them uses a mobile phone's built-in camera. This method is easy to use and does not require any additional skills or special devices for heart rate measurement; it requires only a mobile phone with a built-in camera and a flash. The main idea of the method is to detect changes in finger skin color that occur due to blood pulsation. The measurement process is simple: the user covers the camera lens with a finger and the application on the mobile phone starts capturing and analyzing frames from the camera. The heart rate can be calculated by analyzing the average red component values of the frames taken by the mobile phone camera that contain images of an area of the skin. In this paper the authors review the existing algorithms for heart rate measurement with the help of a mobile phone camera and propose their own algorithm, which is more efficient than the reviewed algorithms.
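
    A minimal sketch of the core idea described in the abstract: average the red channel of each finger-covered frame and take the dominant frequency in the physiological band. The frame rate, band limits, and synthetic test signal are assumptions, not the authors' algorithm.

```python
# Minimal sketch of camera-based heart rate estimation: average the red
# channel of each finger-covered frame, then find the dominant frequency
# in the physiological band. Frame rate and band limits are assumptions.
import numpy as np

def heart_rate_bpm(frames, fps):
    """frames: iterable of HxWx3 RGB arrays captured with the finger on the lens."""
    red_means = np.array([f[:, :, 0].mean() for f in frames], dtype=float)
    red_means -= red_means.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(red_means))
    freqs = np.fft.rfftfreq(len(red_means), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.5)             # roughly 42 to 210 bpm
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Example with synthetic data: a 1.2 Hz (72 bpm) pulsation sampled at 30 fps.
fps = 30
t = np.arange(0, 20, 1.0 / fps)
signal = 128 + 2 * np.sin(2 * np.pi * 1.2 * t)
frames = [np.full((8, 8, 3), v, dtype=float) for v in signal]
print(round(heart_rate_bpm(frames, fps)))              # prints 72
```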

  13. Advanced Camera Image Cropping Approach for CNN-Based End-to-End Controls on Sustainable Computing

    Directory of Open Access Journals (Sweden)

    Yunsick Sung

    2018-03-01

    Full Text Available Recent research on deep learning has been applied to a diversity of fields. In particular, numerous studies have been conducted on self-driving vehicles using end-to-end approaches based on images captured by a single camera. End-to-end controls learn the output vectors of output devices directly from the input vectors of available input devices. In other words, an end-to-end approach learns not by analyzing the meaning of input vectors, but by extracting optimal output vectors based on input vectors. Generally, when end-to-end control is applied to self-driving vehicles, the steering wheel and pedals are controlled autonomously by learning from the images captured by a camera. However, high-resolution images captured from a car cannot be directly used as inputs to Convolutional Neural Networks (CNNs) owing to memory limitations; the image size needs to be efficiently reduced. Therefore, it is necessary to extract features from captured images automatically and to generate input images by merging the parts of the images that contain the extracted features. This paper proposes a learning method for end-to-end control that generates input images for CNNs by extracting road parts from input images, identifying the edges of the extracted road parts, and merging the parts of the images that contain the detected edges. In addition, a CNN model for end-to-end control is introduced. Experiments involving the Open Racing Car Simulator (TORCS), a sustainable computing environment for cars, confirmed the effectiveness of the proposed method for self-driving by comparing the accumulated difference in the angle of the steering wheel in the images generated by it with those of resized images containing the entire captured area and cropped images containing only a part of the captured area. The results showed that the proposed method reduced the accumulated difference by 0.839% and 0.850% compared to those yielded by the resized images and cropped images, respectively.
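
    A loose sketch, in the spirit of the described input generation, of cropping a frame to the edge-containing road region before feeding a CNN; the lower-half road assumption, Canny thresholds, output size, and file name are all hypothetical, and this is not the paper's exact pipeline.

```python
# Loose sketch of input generation: detect edges in the lower (road) part of
# a captured frame and keep only a cropped, resized patch around them as CNN
# input. Thresholds, crop sizes, and the CNN itself are assumptions.
import cv2
import numpy as np

def make_cnn_input(frame_bgr, out_size=(66, 200)):
    h, w = frame_bgr.shape[:2]
    road = frame_bgr[h // 2:, :]                      # assume the road is in the lower half
    edges = cv2.Canny(cv2.cvtColor(road, cv2.COLOR_BGR2GRAY), 50, 150)
    ys, xs = np.nonzero(edges)
    if len(xs) == 0:                                  # fall back to the full road region
        crop = road
    else:
        crop = road[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return cv2.resize(crop, out_size[::-1])           # OpenCV expects (width, height)

frame = cv2.imread("captured_frame.png")
cnn_input = make_cnn_input(frame)
print(cnn_input.shape)                                # (66, 200, 3)
```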

  14. Portraiture lens concept in a mobile phone camera

    Science.gov (United States)

    Sheil, Conor J.; Goncharov, Alexander V.

    2017-11-01

    A small form-factor lens was designed for the purpose of portraiture photography, the size of which allows use within a smartphone casing. The current general requirement that mobile cameras have good all-round performance results in a typical, familiar, many-element design. Such designs have little room for improvement in terms of the available degrees of freedom and the highly demanding target metrics such as low f-number and wide field of view. However, the specific application of the current portraiture lens relaxes the requirement for an all-round high-performing lens, allowing improvement of certain aspects at the expense of others. With the main emphasis on reducing depth of field (DoF), the current design takes advantage of the simple geometrical relationship between DoF and pupil diameter. The system has a large aperture, while a reasonable f-number gives a relatively large focal length, requiring a catadioptric lens design with a double ray path; hence, the field of view is reduced. Compared to typical mobile lenses, the large diameter reduces depth of field by a factor of four.
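
    A small worked example, under thin-lens assumptions, of the geometrical relationship the abstract relies on: at fixed focal length and subject distance, depth of field shrinks roughly in inverse proportion to the entrance-pupil diameter. The numbers are illustrative, not the paper's design values.

```python
# Worked example, under thin-lens assumptions, of how depth of field shrinks
# as the entrance-pupil diameter grows at a fixed focal length and subject
# distance. Numbers are illustrative, not the paper's design values.

def depth_of_field(f_mm, pupil_mm, subject_mm, coc_mm=0.005):
    """Thin-lens DoF in mm; coc_mm is the assumed circle of confusion."""
    N = f_mm / pupil_mm                         # working f-number
    H = f_mm**2 / (N * coc_mm) + f_mm           # hyperfocal distance
    near = subject_mm * (H - f_mm) / (H + subject_mm - 2 * f_mm)
    far = subject_mm * (H - f_mm) / (H - subject_mm) if H > subject_mm else float("inf")
    return far - near

f, subject = 12.0, 1000.0                       # 12 mm lens, 1 m portrait distance
for pupil in (1.5, 3.0, 6.0):                   # doubling the pupil at each step
    print(f"pupil {pupil:4.1f} mm -> DoF = {depth_of_field(f, pupil, subject):6.1f} mm")
# Output shows the DoF roughly halving each time the pupil diameter doubles.
```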

  15. Differential signaling spread-spectrum modulation of the LED visible light wireless communications using a mobile-phone camera

    Science.gov (United States)

    Chen, Shih-Hao; Chow, Chi-Wai

    2015-02-01

    Visible light communication (VLC) using spread spectrum modulation (SSM) and differential signaling (DS), detected by a mobile-phone camera, is proposed and demonstrated for the first time to provide high immunity to background ambient light interference. The SSM signal provides coding gain, while the DS scheme enhances clock recovery, particularly under high background ambient light. Experimental results confirm the feasibility of the proposed scheme, showing that the proposed system has a 6-dB gain compared with traditional on-off keying (OOK) modulation under a background ambient light of 3000 lux. The direct incident ambient light on the mobile-phone camera is 520 lux.
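
    A minimal sketch of the spread-spectrum part of the idea only: data bits are spread with a pseudo-noise code and despread by correlation at the receiver, which suppresses a constant ambient-light offset. The PN code, noise level, and the omission of the differential-signaling and rolling-shutter details are assumptions, not the paper's modulation.

```python
# Minimal sketch of the spread-spectrum idea: each data bit is spread by a
# pseudo-noise (PN) code before driving the LED, and the receiver despreads
# by correlation, which rejects a constant ambient-light offset. The code,
# chip rate, and noise level are assumptions, not the paper's parameters.
import numpy as np

pn = np.array([1, -1, 1, 1, -1, 1, -1, -1], dtype=float)   # assumed 8-chip PN code
bits = np.array([1, 0, 1, 1])                               # data to transmit

tx = np.concatenate([(2 * b - 1) * pn for b in bits])        # spread chip sequence
rx = tx + 0.8 * np.random.randn(tx.size) + 2.0               # ambient offset + noise

# Despread by correlating each chip block with the PN code; the constant
# ambient term vanishes because the PN code sums to zero.
decoded = [int(np.dot(rx[i:i + pn.size], pn) > 0)
           for i in range(0, rx.size, pn.size)]
print(decoded)                                               # expected: [1, 0, 1, 1]
```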

  16. Clinical Value of High Mobility Group Box 1 and the Receptor for Advanced Glycation End-products in Head and Neck Cancer: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Nguyen, Austin

    2016-04-01

    Full Text Available Introduction: High mobility group box 1 is a versatile protein involved in gene transcription, extracellular signaling, and response to inflammation. Extracellularly, high mobility group box 1 binds to several receptors, notably the receptor for advanced glycation end-products. Expression of high mobility group box 1 and the receptor for advanced glycation end-products has been described in many cancers. Objectives: To systematically review the available literature using PubMed and Web of Science to evaluate the clinical value of high mobility group box 1 and the receptor for advanced glycation end-products in head and neck squamous cell carcinomas. Data synthesis: A total of eleven studies were included in this review. High mobility group box 1 overexpression is associated with poor prognosis and many clinical and pathological characteristics of head and neck squamous cell carcinoma patients. Additionally, the receptor for advanced glycation end-products demonstrates potential value as a clinical indicator of tumor angiogenesis and advanced staging. In diagnosis, high mobility group box 1 demonstrates low sensitivity. Conclusion: High mobility group box 1 and the receptor for advanced glycation end-products are associated with clinical and pathological characteristics of head and neck squamous cell carcinomas. Further investigation of the prognostic and diagnostic value of these molecules is warranted.

  17. Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products

    Science.gov (United States)

    Williams, Don; Burns, Peter D.

    2007-01-01

    There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that would suggest many existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, and image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance, driven by physical and economic constraints and by image-capture conditions. Several ISO standards for resolution, well established for consumer digital cameras, require care when applied to the current generation of cell phone cameras; in particular, accommodation of optical flare, shading non-uniformity, and distortion is recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.

  18. Large-grazing-angle, multi-image Kirkpatrick-Baez microscope as the front end to a high-resolution streak camera for OMEGA

    International Nuclear Information System (INIS)

    Gotchev, O.V.; Hayes, L.J.; Jaanimagi, P.A.; Knauer, J.P.; Marshall, F.J.; Meyerhofer, D.D.

    2003-01-01

    A high-resolution x-ray microscope with a large grazing angle has been developed, characterized, and fielded at the Laboratory for Laser Energetics. It increases the sensitivity and spatial resolution in planar direct-drive hydrodynamic stability experiments, relevant to inertial confinement fusion research. It has been designed to work as the optical front end of the PJX - a high-current, high-dynamic-range x-ray streak camera. Optical design optimization, results from numerical ray tracing, mirror-coating choice, and characterization have been described previously [O. V. Gotchev, et al., Rev. Sci. Instrum. 74, 2178 (2003)]. This work highlights the optics' unique mechanical design and flexibility and considers certain applications that benefit from it. Characterization of the microscope's resolution in terms of its modulation transfer function over the field of view is shown. Recent results from hydrodynamic stability experiments, diagnosed with the optic and the PJX, are provided to confirm the microscope's advantages as a high-resolution, high-throughput x-ray optical front end for streaked imaging

  19. Augmented Reality user interface analysis in mobile devices

    OpenAIRE

    Guzmán Guzmán, José Daniel

    2014-01-01

    The presence of high-end phones in the telephony market has allowed consumers to have access to the computational power of mobile smart-phone devices. Powerful processors, combined with cameras and ease of development, encourage an increasing number of Augmented Reality (AR) researchers to adopt mobile smart-phones as an AR platform. In the same way, Augmented Reality on mobile devices has become increasingly popular for many applications, including search and location, tourism, and shopping...

  20. NV-CMOS HD camera for day/night imaging

    Science.gov (United States)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through the use of backside illumination, providing high quantum efficiency (QE) across the visible and near infrared (NIR) bands. The camera operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), making it ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and it may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  1. Visual Servoing of Mobile Microrobot with Centralized Camera

    Directory of Open Access Journals (Sweden)

    Kiswanto Gandjar

    2018-01-01

    Full Text Available In this paper, a mechanism for visual servoing of mobile microrobots with a centralized camera is developed, especially for the development of swarm AI applications. In the field of microrobots, the size of the robots is minimal and the amount of movement is also small. By replacing the various sensors that would otherwise be needed with a single centralized vision sensor, we can eliminate many components and the need for calibration on every robot. A study and design for a visual servoing mobile microrobot has been developed. The system uses multi-object tracking and the Hough transform to identify the positions of the robots, and can control multiple robots at once with an accuracy of 5-6 pixels from the desired target.
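
    The abstract names multi-object tracking and the Hough transform for locating the robots under the central camera; the sketch below shows only the Hough-circle detection step in OpenCV. The marker radii, detector parameters, and file names are assumptions.

```python
# Minimal sketch of locating circular robot markers in an overhead camera
# frame with the Hough circle transform. Marker radii and detector
# parameters are assumptions, not the paper's calibration values.
import cv2
import numpy as np

frame = cv2.imread("overhead_view.png")
gray = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                           param1=100, param2=30, minRadius=5, maxRadius=25)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(frame, (x, y), r, (0, 255, 0), 2)   # mark detected robot
        print(f"robot candidate at pixel ({x}, {y}), radius {r}")
cv2.imwrite("detected_robots.png", frame)
```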

  2. Large-Grazing-Angle, Multi-Image Kirkpatrick-Baez Microscope as the Front End to a High-Resolution Streak Camera for OMEGA

    International Nuclear Information System (INIS)

    Gotchev, O.V.; Hayes, L.J.; Jaanimagi, P.A.; Knauer, J.P.; Marshall, F.J.; Meyerhofer, D. D.

    2003-01-01

    A new, high-resolution x-ray microscope with a large grazing angle has been developed, characterized, and fielded at the Laboratory for Laser Energetics. It increases the sensitivity and spatial resolution in planar direct-drive hydrodynamic stability experiments, relevant to inertial confinement fusion (ICF) research. It has been designed to work as the optical front end of the PJX, a high-current, high-dynamic-range x-ray streak camera. Optical design optimization, results from numerical ray tracing, mirror-coating choice, and characterization have been described previously [O. V. Gotchev, et al., Rev. Sci. Instrum. 74, 2178 (2003)]. This work highlights the optics' unique mechanical design and flexibility and considers certain applications that benefit from it. Characterization of the microscope's resolution in terms of its modulation transfer function (MTF) over the field of view is shown. Recent results from hydrodynamic stability experiments, diagnosed with the optic and the PJX, are provided to confirm the microscope's advantages as a high-resolution, high-throughput x-ray optical front end for streaked imaging

  3. 77 FR 43858 - Certain Mobile Telephones and Wireless Communication Devices Featuring Digital Cameras, and...

    Science.gov (United States)

    2012-07-26

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-703] Certain Mobile Telephones and Wireless Communication Devices Featuring Digital Cameras, and Components Thereof; Determination To Review... importation, and the sale within the United States after importation of certain mobile telephones and wireless...

  4. Mechanically assisted liquid lens zoom system for mobile phone cameras

    Science.gov (United States)

    Wippermann, F. C.; Schreiber, P.; Bräuer, A.; Berge, B.

    2006-08-01

    Camera systems with a small form factor are an integral part of today's mobile phones, which recently feature autofocus functionality. Ready-to-market solutions without moving parts have been developed using electrowetting technology. Besides virtually no deterioration, easy control electronics, and simple and therefore cost-effective fabrication, this type of liquid lens enables extremely fast settling times compared to mechanical approaches. As a next evolutionary step, mobile phone cameras will be equipped with zoom functionality. We present first-order considerations for the optical design of a miniaturized zoom system based on liquid lenses and compare it to its mechanical counterpart. We propose a design for a zoom lens with a zoom factor of 2.5, considering state-of-the-art commercially available liquid lens products. The lens possesses autofocus capability and is based on liquid lenses and one additional mechanical actuator. The combination of liquid lenses and a single mechanical actuator enables extremely short settling times of about 20 ms for the autofocus and a simplified mechanical system design, leading to lower production cost and longer lifetime. The camera system has a mechanical outline of 24 mm in length and 8 mm in diameter. The lens with f/# 3.5 provides market-relevant optical performance and is designed for an image circle of 6.25 mm (1/2.8" format sensor).

  5. Printed products for digital cameras and mobile devices

    Science.gov (United States)

    Fageth, Reiner; Schmidt-Sacht, Wulf

    2005-01-01

    Digital photography is no longer simply a successor to film. The digital market is now driven by additional devices such as mobile phones with camera and video functions (camphones) as well as innovative products derived from digital files. A large number of consumers do not print their images and non-printing has become the major enemy of wholesale printers, home printing suppliers and retailers. This paper addresses the challenge facing our industry, namely how to encourage the consumer to print images easily and conveniently from all types of digital media.

  6. Development of underwater camera using high-definition camera

    International Nuclear Information System (INIS)

    Tsuji, Kenji; Watanabe, Masato; Takashima, Masanobu; Kawamura, Shingo; Tanaka, Hiroyuki

    2012-01-01

    In order to reduce the time for core verification or visual inspection of BWR fuels, an underwater camera based on a high-definition camera has been developed. As a result of this development, the underwater camera has 2 lights, dimensions of 370 x 400 x 328 mm, and a weight of 20.5 kg. Using the camera, about 6 spent-fuel IDs can be identified at a time at a distance of 1 to 1.5 m, and a 0.3 mmφ pin-hole can be recognized at a distance of 1.5 m with 20x zoom. Noise caused by radiation of less than 15 Gy/h did not affect the images. (author)

  7. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    Science.gov (United States)

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-03-25

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.
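
    The abstract says the clicked target's 3D coordinates follow from simple analytic geometry with a calibrated camera; one such construction is sketched below, intersecting the back-projected pixel ray with the floor plane. The intrinsics, camera height, and tilt are assumptions, not the paper's calibration.

```python
# Minimal sketch of estimating the 3D ground position of a clicked pixel
# with a calibrated single camera: back-project the pixel into a viewing
# ray and intersect it with the floor plane (z = 0). Intrinsics, camera
# height, and tilt are assumed values, not the paper's calibration.
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])              # assumed pinhole intrinsics
cam_height = 0.25                            # camera centre 0.25 m above the floor (assumed)
tilt = np.deg2rad(20)                        # camera pitched down by 20 degrees (assumed)

# Rotation taking camera axes (x right, y down, z forward) to world axes
# (x right, y forward, z up) for a camera tilted downward by `tilt`.
R = np.array([[1.0, 0.0, 0.0],
              [0.0, -np.sin(tilt), np.cos(tilt)],
              [0.0, -np.cos(tilt), -np.sin(tilt)]])

def click_to_ground(u, v):
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # pixel -> viewing ray
    ray_world = R @ ray_cam
    if ray_world[2] >= 0:                                 # ray never reaches the floor
        return None
    s = -cam_height / ray_world[2]
    return np.array([0.0, 0.0, cam_height]) + s * ray_world

print(click_to_ground(400, 300))             # 3D point on the floor, in metres
```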

  8. Preliminary analysis on faint luminous lightning events recorded by multiple high speed cameras

    Science.gov (United States)

    Alves, J.; Saraiva, A. V.; Pinto, O.; Campos, L. Z.; Antunes, L.; Luz, E. S.; Medeiros, C.; Buzato, T. S.

    2013-12-01

    The objective of this work is the study of faint luminous events produced by lightning flashes that were recorded simultaneously by multiple high-speed cameras during previous RAMMER (Automated Multi-camera Network for Monitoring and Study of Lightning) campaigns. The RAMMER network is composed of three fixed cameras and one mobile color camera separated, on average, by distances of 13 kilometers. They were located in the Paraiba Valley (in the cities of São José dos Campos and Caçapava), SP, Brazil, arranged in a quadrilateral shape centered on the São José dos Campos region. This configuration allowed RAMMER to see a thunderstorm from different angles, registering the same lightning flashes simultaneously with multiple cameras. Each RAMMER sensor is composed of a triggering system and a Phantom high-speed camera version 9.1, set to operate at a frame rate of 2,500 frames per second with a Nikkor lens (model AF-S DX 18-55 mm 1:3.5 - 5.6 G in the stationary sensors, and a lens model AF-S ED 24 mm - 1:1.4 in the mobile sensor). All videos were GPS (Global Positioning System) time stamped. For this work we used a data set collected on four days of manual RAMMER operation in the 2012 and 2013 campaigns. On Feb. 18th the data set comprised 15 flashes recorded by two cameras and 4 flashes recorded by three cameras. On Feb. 19th a total of 5 flashes were registered by two cameras and 1 flash by three cameras. On Feb. 22nd we obtained 4 flashes registered by two cameras. Finally, on March 6th two cameras recorded 2 flashes. The analysis in this study proposes an evaluation methodology for faint luminous lightning events, such as continuing current. Problems in the temporal measurement of the continuing current can generate imprecision during the optical analysis; therefore this work aims to evaluate the effects of distance on this parameter with this preliminary data set. In the cases that include the color camera we analyzed the RGB

  9. Exploring the requirements for multimodal interaction for mobile devices in an end-to-end journey context.

    Science.gov (United States)

    Krehl, Claudia; Sharples, Sarah

    2012-01-01

    The paper investigates the requirements for multimodal interaction on mobile devices in an end-to-end journey context. Traditional interfaces are deemed cumbersome and inefficient for exchanging information with the user. Multimodal interaction provides a different, user-centred approach allowing more natural and intuitive interaction between humans and computers. It is especially suitable for mobile interaction as it can overcome additional constraints including small screens, awkward keypads, and continuously changing settings - an inherent property of mobility. This paper is based on end-to-end journeys where users encounter several contexts during their journeys. Interviews and focus groups explore the requirements for multimodal interaction design for mobile devices by examining journey stages and identifying the users' information needs and sources. Findings suggest that multimodal communication is crucial when users multitask. Choosing suitable modalities depends on user context, characteristics and tasks.

  10. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R&D activities on high-speed video cameras that have been carried out at Kinki University for more than ten years and are currently being pursued as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searching journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same one developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS (In-situ Storage Image Sensor) was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is ongoing and, hopefully, it will be fabricated in the near future. Epoch-making cameras developed by others in the history of high-speed video cameras are also briefly reviewed.

  11. Visual Servo Tracking Control of a Wheeled Mobile Robot with a Monocular Fixed Camera

    National Research Council Canada - National Science Library

    Chen, J; Dixon, W. E; Dawson, D. M; Chitrakaran, V. K

    2004-01-01

    In this paper, a visual servo tracking controller for a wheeled mobile robot (WMR) is developed that utilizes feedback from a monocular camera system that is mounted with a fixed position and orientation...

  12. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    Directory of Open Access Journals (Sweden)

    Chun-Tang Chao

    2016-03-01

    Full Text Available In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.

  13. On-Line High Dose-Rate Gamma Ray Irradiation Test of the CCD/CMOS Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Jeong, Kyung Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    In this paper, test results of gamma ray irradiation of CCD/CMOS cameras are described. From the CAMS (containment atmospheric monitoring system) data of the Fukushima Dai-ichi nuclear power plant station, we found that the gamma ray dose-rate when the hydrogen explosion occurred in nuclear reactors 1~3 was about 160 Gy/h. If it is assumed that an emergency response robot for the management of a severe accident of the nuclear power plant is sent into the reactor area to grasp the situation inside the reactor building and to take precautionary measures against the release of radioactive materials, the CCD/CMOS cameras loaded on the robot serve as the eyes of the emergency response robot. In the case of the Japanese Quince robot system, which was sent to investigate the situation on the unit 2 reactor building refueling floor, 7 CCD/CMOS cameras are used. Two CCD cameras of the Quince robot are used for forward and backward monitoring of the surroundings during navigation, and two CCD (or CMOS) cameras are used for monitoring the status of front-end and back-end motion mechanics such as flippers and crawlers. A CCD camera with wide field-of-view optics is used for monitoring the status of the communication (VDSL) cable reel, and another two CCD cameras are assigned to reading the indications of the radiation dosimeter and the instrument. Under these assumptions, a major problem when dealing with CCD/CMOS cameras in severe accident situations at a nuclear power plant is the presence of high dose-rate gamma irradiation fields. As in the DBA (design basis accident) situations of a nuclear power plant, in order to use a CCD/CMOS camera as an ad-hoc monitoring unit in the vicinity of highly radioactive structures and components of the nuclear reactor area, the robust survivability of such a camera in intense gamma-radiation fields should therefore be verified. The CCD/CMOS cameras of various types were gamma irradiated at a

  14. Video content analysis on body-worn cameras for retrospective investigation

    Science.gov (United States)

    Bouma, Henri; Baan, Jan; ter Haar, Frank B.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Burghouts, Gertjan J.; Wijn, Remco; van den Broek, Sebastiaan P.; van Rest, Jeroen H. C.

    2015-10-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications and effects, such as the reduction of violence between police and citizens. However, the increased use of bodycams also creates potential challenges. For example: how can end-users extract information from the abundance of video, how can the information be presented, and how can an officer retrieve information efficiently? Nevertheless, such video gives the opportunity to stimulate the professionals' memory, and support complete and accurate reporting. In this paper, we show how video content analysis (VCA) can address these challenges and seize these opportunities. To this end, we focus on methods for creating a complete summary of the video, which allows quick retrieval of relevant fragments. The content analysis for summarization consists of several components, such as stabilization, scene selection, motion estimation, localization, pedestrian tracking and action recognition in the video from a bodycam. The different components and visual representations of summaries are presented for retrospective investigation.

  15. NEW METHOD FOR THE CALIBRATION OF MULTI-CAMERA MOBILE MAPPING SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. P. Kersting

    2012-07-01

    Full Text Available Mobile Mapping Systems (MMS) allow for fast and cost-effective collection of geo-spatial information. Such systems integrate a set of imaging sensors and a position and orientation system (POS), which entails GPS and INS units. System calibration is a crucial process to ensure the attainment of the expected accuracy of such systems. It involves the calibration of the individual sensors as well as the calibration of the mounting parameters relating the system components. The mounting parameters of multi-camera MMS include two sets of relative orientation parameters (ROP): the lever arm offsets and the boresight angles relating the cameras and the IMU body frame, and the ROP among the cameras (in the absence of GPS/INS data). In this paper, a novel single-step calibration method, which has the ability of estimating these two sets of ROP, is devised. Besides the ability to estimate the ROP among the cameras, the proposed method can use such parameters as prior information in the ISO procedure. The implemented procedure consists of an integrated sensor orientation (ISO) where the GPS/INS-derived position and orientation and the system mounting parameters are directly incorporated in the collinearity equations. The concept of modified collinearity equations has been used by a few authors for single-camera systems. In this paper, a new modification to the collinearity equations for GPS/INS-assisted multi-camera systems is introduced. Experimental results using a real dataset demonstrate the feasibility of the proposed method.
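
    A minimal numpy sketch of the GPS/INS-assisted projection that modified collinearity equations express: a ground point is mapped into a camera image through the GPS/INS-derived body position and attitude plus the lever-arm and boresight mounting parameters. The symbols and numbers are illustrative, not the paper's notation or data.

```python
# Minimal sketch of GPS/INS-assisted collinearity: the camera pose is the
# GPS/INS-derived body pose composed with the lever-arm/boresight mounting
# parameters, and the ground point is projected through that pose.
# Values are illustrative only.
import numpy as np

def project_ground_point(ground_pt, body_pos, R_body_to_map,
                         lever_arm, R_cam_to_body, f_mm):
    """Return the (x, y) image coordinates of a ground point, in mm."""
    cam_pos_map = body_pos + R_body_to_map @ lever_arm     # camera centre in map frame
    R_cam_to_map = R_body_to_map @ R_cam_to_body            # combined rotation
    p_cam = R_cam_to_map.T @ (ground_pt - cam_pos_map)      # point in camera frame
    return -f_mm * p_cam[:2] / p_cam[2]                     # collinearity projection

# With identity rotations the camera axis coincides with the map vertical,
# so a point below the platform gives a nadir-like geometry.
ground_pt = np.array([10.5, 20.3, 0.0])
body_pos = np.array([10.0, 20.0, 1.8])     # GPS/INS position at exposure time
R_body_to_map = np.eye(3)                  # GPS/INS attitude (identity for brevity)
lever_arm = np.array([0.1, 0.0, 0.2])      # lever-arm offset (mounting parameter)
R_cam_to_body = np.eye(3)                  # boresight rotation (mounting parameter)
print(project_ground_point(ground_pt, body_pos, R_body_to_map,
                           lever_arm, R_cam_to_body, f_mm=35.0))
```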

  16. New Method for the Calibration of Multi-Camera Mobile Mapping Systems

    Science.gov (United States)

    Kersting, A. P.; Habib, A.; Rau, J.

    2012-07-01

    Mobile Mapping Systems (MMS) allow for fast and cost-effective collection of geo-spatial information. Such systems integrate a set of imaging sensors and a position and orientation system (POS), which entails GPS and INS units. System calibration is a crucial process to ensure the attainment of the expected accuracy of such systems. It involves the calibration of the individual sensors as well as the calibration of the mounting parameters relating the system components. The mounting parameters of multi-camera MMS include two sets of relative orientation parameters (ROP): the lever arm offsets and the boresight angles relating the cameras and the IMU body frame and the ROP among the cameras (in the absence of GPS/INS data). In this paper, a novel single-step calibration method, which has the ability of estimating these two sets of ROP, is devised. Besides the ability to estimate the ROP among the cameras, the proposed method can use such parameters as prior information in the ISO procedure. The implemented procedure consists of an integrated sensor orientation (ISO) where the GPS/INS-derived position and orientation and the system mounting parameters are directly incorporated in the collinearity equations. The concept of modified collinearity equations has been used by few authors for single-camera systems. In this paper, a new modification to the collinearity equations for GPS/INS-assisted multicamera systems is introduced. Experimental results using a real dataset demonstrate the feasibility of the proposed method.

  17. Vision-based mobile robot navigation through deep convolutional neural networks and end-to-end learning

    Science.gov (United States)

    Zhang, Yachu; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Kong, Lingqin; Liu, Lingling

    2017-09-01

    In contrast to humans, who use only visual information for navigation, many mobile robots use laser scanners and ultrasonic sensors along with vision cameras to navigate. This work proposes a vision-based robot control algorithm based on deep convolutional neural networks. We create a large 15-layer convolutional neural network learning system and achieve advanced recognition performance. Our system is trained end to end to map raw input images to a direction in supervised mode. The data set images are collected in a wide variety of weather and lighting conditions. In addition, the data sets are augmented by adding Gaussian noise and salt-and-pepper noise to avoid overfitting. The algorithm is verified by two experiments: line tracking and obstacle avoidance. The line tracking experiment is conducted to track a desired path composed of straight and curved lines, while the goal of the obstacle avoidance experiment is to avoid obstacles indoors. Finally, we obtain a 3.29% error rate on the training set and a 5.1% error rate on the test set in the line tracking experiment, and a 1.8% error rate on the training set and less than a 5% error rate on the test set in the obstacle avoidance experiment. During the actual test, the robot can follow the runway centerline outdoors and avoid obstacles in the room accurately. The results confirm the effectiveness of the algorithm and of our improvements to the network structure and training parameters.
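
    The abstract states that the training data are augmented with Gaussian and salt-and-pepper noise to avoid overfitting; a minimal numpy sketch of those two augmentations is given below. The noise levels and the placeholder frame are assumptions, not the paper's settings.

```python
# Minimal sketch of the two augmentations the abstract mentions: Gaussian
# noise and salt-and-pepper noise added to training images. Noise levels
# are assumptions, not the paper's settings.
import numpy as np

def add_gaussian_noise(img, sigma=10.0):
    noisy = img.astype(float) + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_salt_and_pepper(img, amount=0.02):
    noisy = img.copy()
    mask = np.random.rand(*img.shape[:2])
    noisy[mask < amount / 2] = 0                 # pepper pixels
    noisy[mask > 1 - amount / 2] = 255           # salt pixels
    return noisy

image = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)  # placeholder frame
augmented = [add_gaussian_noise(image), add_salt_and_pepper(image)]
print([a.shape for a in augmented])
```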

  18. A sampling ultra-high-speed streak camera based on the use of a unique photomultiplier

    International Nuclear Information System (INIS)

    Marode, Emmanuel

    An apparatus reproducing the "streak" mode of a high-speed camera is proposed for the case of a slit AB whose variations in luminosity are repetitive. A photomultiplier analysing the object AB point by point, and a still camera photographing a slit fixed on the oscilloscope screen parallel to the sweep direction, are placed on a mobile platform P. The movement of P provides a time-resolved analysis of AB. The resolution is of the order of 2×10⁻⁹ s and can be improved.

  19. A mobile device-based imaging spectrometer for environmental monitoring by attaching a lightweight small module to a commercial digital camera.

    Science.gov (United States)

    Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing

    2017-11-15

    Spatially-explicit data are essential for remote sensing of ecological phenomena, and recent innovations in mobile device platforms have led to an upsurge in rapid on-site detection. For instance, the CMOS chips in smart phones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and mounted on a single-lens reflex camera. Utilizing this lightweight module, together with commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurement. Based on the experiments we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial and underwater cameras. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.

  20. The use of a camera-enabled mobile phone to triage patients with nasal bone injuries.

    LENUS (Irish Health Repository)

    Barghouthi, Taleb

    2012-03-01

    To identify the accuracy of a camera-enabled mobile phone in assessing patients with nasal bone injuries and to determine if treatment in the form of manipulation of the nasal bones and therefore outpatient attendance was necessary.

  1. Math Teachers' Attitudes towards Photo Math Application in Solving Mathematical Problem Using Mobile Camera

    Science.gov (United States)

    Hamadneh, Iyad M.; Al-Masaeed, Aslan

    2015-01-01

    This study aimed at finding out mathematics teachers' attitudes towards the Photo Math application for solving mathematical problems using a mobile camera; it also aimed to identify significant differences in their attitudes according to their stage of teaching, educational qualifications, and teaching experience. The study used judgmental/purposive…

  2. Personalized Avatars for Mobile Entertainment

    Directory of Open Access Journals (Sweden)

    Tomislav Kosutic

    2006-01-01

    Full Text Available With the evolution of computer and mobile networking technologies comes the challenge of offering novel and complex multimedia applications and end-user services in heterogeneous environments for both developers and service providers. This paper describes one novel service, called LiveMail, that explores the potential of existing face animation technologies for innovative and attractive services intended for the mobile market. This prototype service allows mobile subscribers to communicate using personalized 3D face models created from images taken by their phone cameras. The user can take a snapshot of someone's face – a friend, famous person, themselves, even a pet – using the mobile phone's camera. After a quick manipulation on the phone, a 3D model of that face is created and can be animated simply by typing in some text. Speech and appropriate animation of the face are created automatically by speech synthesis. Furthermore, these highly personalized animations can be sent to others as real 3D animated messages or as short videos in MMS. The clients were implemented on different platforms, using different network and face animation techniques, and connected into one complex system. This paper presents the architecture and the experience gained in building such a system.

  3. An End User Development Approach for Mobile Web Augmentation

    Directory of Open Access Journals (Sweden)

    Gabriela Bosetti

    2017-01-01

    Full Text Available The trend towards mobile device usage has made it possible for the Web to be conceived not only as an information space but also as a ubiquitous platform where users perform all kinds of tasks. In some cases, users access the Web with native mobile applications developed for well-known sites such as LinkedIn, Facebook, and Twitter. These native applications might offer further (e.g., location-based) functionalities to their users in comparison with the corresponding Web sites, because they were developed with mobile features in mind. However, many Web applications have no native counterpart and users access them using a mobile Web browser. Although access to context information is not a complex issue nowadays, not all Web applications adapt themselves according to it or otherwise improve the user experience by listening to a wide range of sensors. At some point, users might want to add mobile features to these Web sites, even if those features were not originally supported. In this paper, we present a novel approach that allows end users to augment their preferred Web sites with mobile features. We support our claims by presenting a framework for mobile Web augmentation, an authoring tool, and an evaluation with 21 end users.

  4. Color-filter-free spatial visible light communication using RGB-LED and mobile-phone camera.

    Science.gov (United States)

    Chen, Shih-Hao; Chow, Chi-Wai

    2014-12-15

    A novel color-filter-free visible-light communication (VLC) system using a red-green-blue (RGB) light-emitting diode (LED) and a mobile-phone camera is proposed and demonstrated for the first time. A feature matching method based on the scale-invariant feature transform (SIFT) algorithm applied to the received grayscale image is used instead of a chromatic-information decoding method. The proposed method is simple and reduces computational complexity. The signal processing is based on grayscale image computation; hence neither a color filter nor chromatic channel information is required. A proof-of-concept experiment is performed and high-performance channel recognition is achieved.
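
    The decoder works on the grayscale image with SIFT feature matching rather than chromatic information; the sketch below shows only that matching step in OpenCV, against a stored reference pattern. The template, ratio-test threshold, and file names are assumptions, not the paper's full decoder.

```python
# Minimal sketch of the grayscale feature-matching step: SIFT keypoints in
# the received camera image are matched against a stored reference pattern
# using the Lowe ratio test. Template and threshold are assumptions.
import cv2

received = cv2.imread("received_frame.png", cv2.IMREAD_GRAYSCALE)
reference = cv2.imread("reference_pattern.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(reference, None)
kp2, des2 = sift.detectAndCompute(received, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
print(f"{len(good)} reliable matches between reference and received image")
```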

  5. Optimization of a miniature short-wavelength infrared objective optics of a short-wavelength infrared to visible upconversion layer attached to a mobile-devices visible camera

    Science.gov (United States)

    Kadosh, Itai; Sarusi, Gabby

    2017-10-01

    The use of dual cameras in parallax in order to detect and create 3-D images in mobile devices has been increasing over the last few years. We propose a concept where the second camera operates in the short-wavelength infrared (SWIR, 1300 to 1800 nm) and thus has night vision capability while preserving most of the other advantages of dual cameras in terms of depth and 3-D capabilities. In order to maintain commonality of the two cameras, we propose to attach to one of the cameras a SWIR-to-visible upconversion layer that converts the SWIR image into a visible image. For this purpose, the fore optics (the objective lenses) should be redesigned for the SWIR spectral range and for the additional upconversion layer, which is attached to the mobile device's visible-range camera sensor (the CMOS sensor). This paper presents such a SWIR objective optical design and optimization that is formed and fitted mechanically to the visible objective design but with different lenses, in order to maintain the commonality and as a proof-of-concept. Such a SWIR objective design is very challenging since it requires mimicking the original visible mobile camera lenses' sizes and the mechanical housing, so that we can adhere to the visible optical and mechanical design. We present an in-depth feasibility study and the overall optical system performance of such a SWIR mobile-device camera fore optics design.

  6. SME2EM: Smart mobile end-to-end monitoring architecture for life-long diseases.

    Science.gov (United States)

    Serhani, Mohamed Adel; Menshawy, Mohamed El; Benharref, Abdelghani

    2016-01-01

    Monitoring life-long diseases requires continuous measurements and recording of physical vital signs. Most of these diseases are manifested through unexpected and non-uniform occurrences and behaviors. It is impractical to keep patients in hospitals, health-care institutions, or even at home for long periods of time. Monitoring solutions based on smartphones combined with mobile sensors and wireless communication technologies are a potential candidate to support complete mobility-freedom, not only for patients, but also for physicians. However, existing monitoring architectures based on smartphones and modern communication technologies are not suitable to address some challenging issues, such as intensive and big data, resource constraints, data integration, and context awareness in an integrated framework. This manuscript provides a novel mobile-based end-to-end architecture for live monitoring and visualization of life-long diseases. The proposed architecture provides smartness features to cope with continuous monitoring, data explosion, dynamic adaptation, unlimited mobility, and constrained device resources. The integration of the architecture's components provides information about diseases' recurrences as soon as they occur to expedite taking necessary actions, and thus prevent severe consequences. Our architecture system is formally model-checked to automatically verify its correctness against designers' desirable properties at design time. Its components are fully implemented as Web services with respect to the SOA architecture to be easy to deploy and integrate, and supported by Cloud infrastructure and services to allow high scalability, availability of processes and data being stored and exchanged. The architecture's applicability is evaluated through concrete experimental scenarios on monitoring and visualizing states of epileptic diseases. The obtained theoretical and experimental results are very promising and efficiently satisfy the proposed

  7. High performance CCD camera system for digitalisation of 2D DIGE gels.

    Science.gov (United States)

    Strijkstra, Annemieke; Trautwein, Kathleen; Roesler, Stefan; Feenders, Christoph; Danzer, Daniel; Riemenschneider, Udo; Blasius, Bernd; Rabus, Ralf

    2016-07-01

    An essential step in 2D DIGE-based analysis of differential proteome profiles is the accurate and sensitive digitalisation of 2D DIGE gels. The performance progress of commercially available charge-coupled device (CCD) camera-based systems combined with light emitting diodes (LED) opens up a new possibility for this type of digitalisation. Here, we assessed the performance of a CCD camera system (Intas Advanced 2D Imager) as an alternative to a traditionally employed, high-end laser scanner system (Typhoon 9400) for digitalisation of differential protein profiles from three different environmental bacteria. Overall, the performance of the CCD camera system was comparable to that of the laser scanner, as evident from very similar protein abundance changes (irrespective of spot position and volume), as well as from the linear range and limit of detection. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Mobile cosmetics advisor: an imaging based mobile service

    Science.gov (United States)

    Bhatti, Nina; Baker, Harlyn; Chao, Hui; Clearwater, Scott; Harville, Mike; Jain, Jhilmil; Lyons, Nic; Marguier, Joanna; Schettino, John; Süsstrunk, Sabine

    2010-01-01

    Selecting cosmetics requires visual information and often benefits from the assessment of a cosmetics expert. In this paper we present a unique mobile imaging application that enables women to use their cell phones to get immediate expert advice when selecting personal cosmetic products. We derive the visual information from analysis of camera phone images, and provide the judgment of the cosmetics specialist through use of an expert system. The result is a new paradigm for mobile interactions: image-based information services exploiting the ubiquity of camera phones. The application is designed to work with any handset over any cellular carrier using commonly available MMS and SMS features. Targeted at the unsophisticated consumer, it must be quick and easy to use, not requiring download capabilities or preplanning. Thus, all application processing occurs in the back-end system and not on the handset itself. We present the imaging pipeline technology and a comparison of the service's accuracy with respect to human experts.

  9. A ToF-Camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Sobers Lourdu Xavier Francis

    2015-11-01

    Full Text Available The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments, employing obstacle avoidance without human intervention. Applying a ToF camera to an AGV is a suitable approach to autonomous robotics because the ToF camera can provide three-dimensional (3D) information at a low computational cost; after calibration and ground testing, the camera is mounted on and integrated with the Pioneer mobile robot and used to extract information about obstacles. The workspace is a two-dimensional (2D) world map which has been divided into a grid of cells, where the collision-free path defined by the graph search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data are used to populate traversable areas and obstacles in a grid of cells of suitable size. These camera data are converted into Cartesian coordinates for entry into the workspace grid map. A more suitable camera mounting angle is determined by analysing the camera's performance discrepancies, such as pixel detection, the detection rate and the maximum perceived distances, and infrared (IR) scattering with respect to the ground surface. This mounting angle is recommended to be half the vertical field-of-view (FoV) of the PMD camera. A series of still and moving tests are conducted on the AGV to verify correct sensor operation, which shows that the postulated application of the ToF camera in the AGV is not straightforward. Finally, to stabilize the moving PMD camera and to detect obstacles, a tracking feature detection algorithm and the scene flow technique are implemented to perform a real-time experiment.
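
    The conversion from Cartesian camera points to a grid of cells can be sketched as follows. This is not the authors' code; the cell size, map extent and the height band treated as obstacles are assumed values.

    ```python
    # Sketch: populate a 2D occupancy grid from ToF/PMD points already expressed
    # in Cartesian robot coordinates (x forward, y lateral, z height, in metres).
    import numpy as np

    CELL = 0.10      # assumed cell size (m)
    EXTENT = 10.0    # assumed map extent (m)

    def to_grid(points_xyz):
        n = int(EXTENT / CELL)
        grid = np.zeros((n, n), dtype=np.uint8)   # 0 = free/unknown, 1 = obstacle
        # Treat returns between 5 cm and 1.5 m above the ground as obstacles.
        mask = (points_xyz[:, 2] > 0.05) & (points_xyz[:, 2] < 1.5)
        xs = (points_xyz[mask, 0] / CELL).astype(int)
        ys = ((points_xyz[mask, 1] + EXTENT / 2) / CELL).astype(int)
        ok = (xs >= 0) & (xs < n) & (ys >= 0) & (ys < n)
        grid[xs[ok], ys[ok]] = 1
        return grid

    # Example: a small obstacle roughly 2 m ahead of the robot.
    pts = np.array([[2.0, 0.0, 0.5], [2.0, 0.1, 0.6], [4.0, -1.0, 0.0]])
    print(to_grid(pts).sum(), "occupied cells")
    ```

    The resulting grid can then be handed to any graph-search planner (e.g., A*) to obtain the sequence of collision-free cells mentioned in the abstract.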

  10. High resolution RGB color line scan camera

    Science.gov (United States)

    Lynch, Theodore E.; Huettig, Fred

    1998-04-01

    A color line scan camera family which is available with either 6000, 8000 or 10000 pixels/color channel, utilizes off-the-shelf lenses, interfaces with currently available frame grabbers, includes on-board pixel-by-pixel offset correction, and is configurable and controllable via an RS232 serial port for computer-controlled or stand-alone operation is described in this paper. This line scan camera is based on an available 8000-element monochrome line scan camera designed by AOA for OEM use. The new color version includes improvements such as better packaging and additional user features which make the camera easier to use. The heart of the camera is a tri-linear CCD sensor with on-chip color balancing for maximum accuracy and pinned photodiodes for low-lag response. Each color channel is digitized to 12 bits and all three channels are multiplexed together so that the resulting camera output video is either a 12- or 8-bit data stream at a rate of up to 24 Megapixels/sec. Conversion from 12 to 8 bits, or application of a user-defined gamma, is accomplished by on-board user-defined video look-up tables. The camera has two user-selectable operating modes: a low-speed, high-sensitivity mode or a high-speed, reduced-sensitivity mode. The intended uses of the camera include industrial inspection, digital archiving, document scanning, and graphic arts applications.
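
    The 12-to-8-bit conversion through a gamma look-up table can be sketched as follows; the gamma value of 2.2 is an assumption, not a value specified for the camera described above.

    ```python
    # Sketch: build a user-defined gamma LUT mapping 12-bit line-scan data to 8 bits.
    import numpy as np

    GAMMA = 2.2  # assumed
    # One entry per possible 12-bit code (0..4095), mapped into 0..255.
    lut = (255.0 * (np.arange(4096) / 4095.0) ** (1.0 / GAMMA)).round().astype(np.uint8)

    def to_8bit(raw12):
        """raw12: uint16 array of 12-bit pixel values (0..4095)."""
        return lut[raw12]

    line = np.array([0, 256, 1024, 4095], dtype=np.uint16)
    print(to_8bit(line))  # dark codes are lifted by the gamma curve
    ```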

  11. Front end design of smartphone-based mobile health

    Science.gov (United States)

    Zhang, Changfan; He, Lingsong; Gao, Zhiqiang; Ling, Cong; Du, Jianhao

    2015-02-01

    Mobile health has been a new trend all over the world with the rapid development of intelligent terminals and the mobile internet. It can help patients monitor their health at home and is convenient for doctors to diagnose remotely. Smartphone-based mobile health has big advantages in cost and data sharing. Its front end design mainly focuses on two points: one is the implementation of medical sensors aimed at measuring various kinds of medical signals; the other is the acquisition of the medical signal from the sensors by the smartphone. In this paper, both of the above aspects are discussed. First, for medical sensor implementation it is proposed to refer to mature measurement solutions, with an ECG (electrocardiograph) sensor design taken as an example; using an integrated chip can simplify the design. Second, typical data acquisition architectures for smartphones, namely Bluetooth-based and MIC (microphone)-based architectures, are compared. The Bluetooth architecture must be equipped with an acquisition card, whereas the MIC design uses the sound card of the smartphone instead. Smartphone-based virtual instrument app designs corresponding to the above acquisition architectures are discussed. In the experiments, the Bluetooth and MIC architectures were used to acquire blood pressure and ECG data, respectively. The results showed that the Bluetooth design can guarantee high accuracy during the acquisition and transmission process, and the MIC design is competitive because of its low cost and convenience.
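
    Once samples reach the phone (over Bluetooth or the MIC path), they typically need band-pass conditioning before display. The sketch below shows such conditioning with SciPy; it is not the authors' front end, and the sampling rate and ECG pass band are assumptions.

    ```python
    # Sketch: band-pass filter a raw sample stream acquired by the smartphone.
    import numpy as np
    from scipy.signal import butter, filtfilt

    FS = 500.0             # assumed sampling rate (Hz)
    LOW, HIGH = 0.5, 40.0  # assumed ECG pass band (Hz)

    def bandpass(samples):
        b, a = butter(2, [LOW / (FS / 2), HIGH / (FS / 2)], btype="bandpass")
        return filtfilt(b, a, samples)

    # Synthetic 1 Hz "heartbeat" with baseline drift and 50 Hz mains hum.
    t = np.arange(0, 5, 1 / FS)
    raw = np.sin(2 * np.pi * t) + 0.5 * np.sin(0.2 * np.pi * t) + 0.2 * np.sin(100 * np.pi * t)
    print(bandpass(raw)[:5])
    ```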

  12. Engineering of Data Acquiring Mobile Software and Sustainable End-User Applications

    Science.gov (United States)

    Smith, Benton T.

    2013-01-01

    The criteria by which data-acquiring software and its supporting infrastructure should be designed should take the following two points into account: the reusability and organization of stored online and remote data and content, and an assessment of whether abandoning a platform-optimized design in favor of a multi-platform solution significantly reduces the performance of an end-user application. Furthermore, in-house applications that control or process instrument-acquired data for end users should be designed with a communication and control interface such that the application's modules can be reused as plug-in modular components in greater software systems. The above criteria are applied using two loosely related projects: a mobile application, and a website containing live and simulated data. For the intelligent devices mobile application AIDM, the end-user interface has a platform- and data-type-optimized design, while the database and back-end applications store this information in an organized manner and restrict access to that data to authorized end-user application(s). Finally, the content for the website was derived from a database such that the content can be included in, and made uniform across, all applications accessing it. With these projects being ongoing, I have concluded from my research that the applicable methods presented are feasible for both projects, and that a multi-platform design for the mobile application only marginally drops its performance.

  13. Video Sharing System Based on Wi-Fi Camera

    OpenAIRE

    Qidi Lin; Hewei Yu; Jinbin Huang; Weile Liang

    2015-01-01

    This paper introduces a video sharing platform based on WiFi, which consists of a camera, a mobile phone and a PC server. This platform can receive the wireless signal from the camera and show on the mobile phone the live video captured by the camera. In addition, it is able to send commands to the camera and control the camera's holder to rotate. The platform can be applied to interactive teaching, monitoring of dangerous areas, and so on. Testing results show that the platform can share ...

  14. A wideband large dynamic range and high linearity RF front-end for U-band mobile DTV

    International Nuclear Information System (INIS)

    Liu Rongjiang; Liu Shengyou; Guo Guiliang; Cheng Xu; Yan Yuepeng

    2013-01-01

    A wideband, large-dynamic-range and high-linearity U-band RF front-end for mobile DTV is introduced; it includes a noise-cancelling low-noise amplifier (LNA), an RF programmable gain amplifier (RFPGA) and a current-commutating passive mixer. A noise/distortion cancelling structure and RC post-distortion compensation are employed to improve the linearity of the LNA. An RFPGA with five stages provides a large dynamic range and fine gain resolution. A simple resistor voltage network in the passive mixer decreases the gate bias voltage of the mixing transistor, and optimum linearity and symmetrical mixing are obtained at the same time. The RF front-end is implemented in a 0.25 μm CMOS process. Tests show that it achieves an IIP3 (third-order intercept point) of −17 dBm, a conversion gain of 39 dB, and a noise figure of 5.8 dB. The RFPGA achieves a dynamic range of −36.2 to 23.5 dB with a resolution of 0.32 dB. (semiconductor integrated circuits)
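
    As a back-of-the-envelope check of these figures, the input-referred noise floor and spurious-free dynamic range can be estimated as follows; the 8 MHz channel bandwidth is an assumption and is not stated in the abstract.

    ```latex
    % Assumed bandwidth B = 8 MHz; NF and IIP3 are the values quoted above.
    \begin{align*}
    P_{\mathrm{noise}} &= -174\,\mathrm{dBm/Hz} + 10\log_{10}(8\times 10^{6}) + 5.8\,\mathrm{dB}
                        \approx -99\,\mathrm{dBm},\\
    \mathrm{SFDR} &= \tfrac{2}{3}\bigl(\mathrm{IIP3} - P_{\mathrm{noise}}\bigr)
                  = \tfrac{2}{3}\bigl(-17\,\mathrm{dBm} + 99\,\mathrm{dBm}\bigr) \approx 55\,\mathrm{dB}.
    \end{align*}
    ```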

  15. Characterization and performance of the ASIC (CITIROC) front-end of the ASTRI camera

    Energy Technology Data Exchange (ETDEWEB)

    Impiombato, D., E-mail: Domenico.Impiombato@iasf-palermo.inaf.it [INAF, Istituto di Astrofisica Spaziale e Fisica cosmica di Palermo, via U. La Malfa 153, I-90146 Palermo (Italy); Giarrusso, S., E-mail: Giarrusso@iasf-palermo.inaf.it [INAF, Istituto di Astrofisica Spaziale e Fisica cosmica di Palermo, via U. La Malfa 153, I-90146 Palermo (Italy); Mineo, T., E-mail: Mineo@iasf-palermo.inaf.it [INAF, Istituto di Astrofisica Spaziale e Fisica cosmica di Palermo, via U. La Malfa 153, I-90146 Palermo (Italy); Catalano, O., E-mail: Catalano@iasf-palermo.inaf.it [INAF, Istituto di Astrofisica Spaziale e Fisica cosmica di Palermo, via U. La Malfa 153, I-90146 Palermo (Italy); Gargano, C.; La Rosa, G.; Russo, F.; Sottile, G. [INAF, Istituto di Astrofisica Spaziale e Fisica cosmica di Palermo, via U. La Malfa 153, I-90146 Palermo (Italy); Billotta, S.; Bonanno, G.; Garozzo, S.; Grillo, A.; Marano, D.; Romeo, G. [INAF, Osservatorio Astrofisico di Catania, via S. Sofia 78, I-95123 Catania (Italy)

    2015-09-11

    The Cherenkov Imaging Telescope Integrated Read Out Chip, CITIROC, is a chip adopted as the front-end of the camera at the focal plane of the imaging Cherenkov ASTRI dual-mirror small size telescope (ASTRI SST-2M) prototype. This paper presents the results of the measurements performed to characterize CITIROC tailored for the ASTRI SST-2M focal plane requirements. In particular, we investigated the trigger linearity and efficiency, as a function of the pulse amplitude. Moreover, we tested its response by performing a set of measurements using a silicon photomultiplier (SiPM) in dark conditions and under light pulse illumination. The CITIROC output signal is found to vary linearly as a function of the input pulse amplitude. Our results show that it is suitable for the ASTRI SST-2M camera.

  16. Characterization and performance of the ASIC (CITIROC) front-end of the ASTRI camera

    International Nuclear Information System (INIS)

    Impiombato, D.; Giarrusso, S.; Mineo, T.; Catalano, O.; Gargano, C.; La Rosa, G.; Russo, F.; Sottile, G.; Billotta, S.; Bonanno, G.; Garozzo, S.; Grillo, A.; Marano, D.; Romeo, G.

    2015-01-01

    The Cherenkov Imaging Telescope Integrated Read Out Chip, CITIROC, is a chip adopted as the front-end of the camera at the focal plane of the imaging Cherenkov ASTRI dual-mirror small size telescope (ASTRI SST-2M) prototype. This paper presents the results of the measurements performed to characterize CITIROC tailored for the ASTRI SST-2M focal plane requirements. In particular, we investigated the trigger linearity and efficiency, as a function of the pulse amplitude. Moreover, we tested its response by performing a set of measurements using a silicon photomultiplier (SiPM) in dark conditions and under light pulse illumination. The CITIROC output signal is found to vary linearly as a function of the input pulse amplitude. Our results show that it is suitable for the ASTRI SST-2M camera

  17. Cheetah: A high frame rate, high resolution SWIR image camera

    Science.gov (United States)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high resolution, high frame rate InGaAs-based image sensor and associated camera have been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640x512-pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has a maximum of 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full CameraLink™ interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.

  18. Mixel camera--a new push-broom camera concept for high spatial resolution keystone-free hyperspectral imaging.

    Science.gov (United States)

    Høye, Gudrun; Fridman, Andrei

    2013-05-06

    Current high-resolution push-broom hyperspectral cameras introduce keystone errors to the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while at the same time the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component--an array of light mixing chambers--with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data that were recorded onto the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera to traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that the mixel camera will be photon-noise limited--even in bright light--with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.

  19. FlexISP: a flexible camera image processing framework

    KAUST Repository

    Heide, Felix

    2014-11-19

    Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces a cumulative error, as each step in the pipeline only considers the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model, enforces natural-image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques. Finally, we show that our approach is capable of very efficiently handling high-resolution images, making even mobile implementations feasible.

  20. FlexISP: a flexible camera image processing framework

    KAUST Repository

    Heide, Felix; Egiazarian, Karen; Kautz, Jan; Pulli, Kari; Steinberger, Markus; Tsai, Yun-Ta; Rouf, Mushfiqur; Pająk, Dawid; Reddy, Dikpal; Gallo, Orazio; Liu, Jing; Heidrich, Wolfgang

    2014-01-01

    Conventional pipelines for capturing, displaying, and storing images are usually defined as a series of cascaded modules, each responsible for addressing a particular problem. While this divide-and-conquer approach offers many benefits, it also introduces a cumulative error, as each step in the pipeline only considers the output of the previous step, not the original sensor data. We propose an end-to-end system that is aware of the camera and image model, enforces natural-image priors, while jointly accounting for common image processing steps like demosaicking, denoising, deconvolution, and so forth, all directly in a given output representation (e.g., YUV, DCT). Our system is flexible and we demonstrate it on regular Bayer images as well as images from custom sensors. In all cases, we achieve large improvements in image quality and signal reconstruction compared to state-of-the-art techniques. Finally, we show that our approach is capable of very efficiently handling high-resolution images, making even mobile implementations feasible.

  1. Video content analysis on body-worn cameras for retrospective investigation

    NARCIS (Netherlands)

    Bouma, H.; Baan, J.; Haar, F.B. ter; Eendebak, P.T.; Hollander, R.J.M. den; Burghouts, G.J.; Wijn, R.; Broek, S.P. van den; Rest, J.H.C. van

    2015-01-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications

  2. System design description for the LDUA high resolution stereoscopic video camera system (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, was designed to be used as an end effector on the LDUA to perform surveillance and inspection activities within a waste tank. It is attached to the LDUA by means of a Tool Interface Plate (TIP), which provides a feed-through for all electrical and pneumatic utilities needed by the end effector to operate. Designed to perform up-close weld and corrosion inspection roles in UST operations, the HRSVS will support and supplement the Light Duty Utility Arm (LDUA) and provide the crucial inspection tasks needed to ascertain waste tank condition.

  3. Video Chat with Multiple Cameras

    OpenAIRE

    MacCormick, John

    2012-01-01

    The dominant paradigm for video chat employs a single camera at each end of the conversation, but some conversations can be greatly enhanced by using multiple cameras at one or both ends. This paper provides the first rigorous investigation of multi-camera video chat, concentrating especially on the ability of users to switch between views at either end of the conversation. A user study of 23 individuals analyzes the advantages and disadvantages of permitting a user to switch between views at...

  4. Portable mini gamma camera for medical applications

    CERN Document Server

    Porras, E; Benlloch, J M; El-Djalil-Kadi-Hanifi, M; López, S; Pavon, N; Ruiz, J A; Sánchez, F; Sebastiá, A

    2002-01-01

    A small, portable and low-cost gamma camera for medical applications has been developed and clinically tested. This camera, based on a scintillator crystal and a Position Sensitive Photo-Multiplier Tube, has a useful field of view of 4.6 cm diameter and provides 2.2 mm of intrinsic spatial resolution. Its mobility and light weight allow it to reach the patient from any desired direction. This camera images small organs with high efficiency and so addresses the demand for devices for specific clinical applications. In this paper, we present the camera and briefly describe the procedures that have led us to choose its configuration and the image reconstruction method. The clinical tests and diagnostic capability are also presented and discussed.

  5. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry provides new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on Ladybug®3 and a GPS device is discussed. The number of panoramas is much higher than needed for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and the measurements from panoramas enable the user to measure the area of outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted from video into 3D objects using Agisoft PhotoScan Professional. The findings from these experiments demonstrated that immersive photogrammetry seems to be a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  6. Real-time multiple human perception with color-depth cameras on a mobile robot.

    Science.gov (United States)

    Zhang, Hao; Reardon, Christopher; Parker, Lynne E

    2013-10-01

    The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce a novel information concept, depth of interest, which we use to identify candidates for detection, and which avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, non-upright humans, humans leaving and re-entering the field of view (i.e., the re-identification challenge), and human-object and human-human interaction. We conclude with the observation that by incorporating the depth information, together with the use of modern techniques in new ways, we are able to create an
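
    The ground-plane-removal step can be sketched with an off-the-shelf RANSAC plane fit; this is not the authors' implementation, and the synthetic point cloud and distance threshold below are assumptions. Open3D's segment_plane and select_by_index (Open3D >= 0.10) are used.

    ```python
    # Sketch: remove the dominant (ground) plane from a color-depth point cloud
    # so that the remaining clusters can be passed on as detection candidates.
    import numpy as np
    import open3d as o3d

    # Synthetic scene: a flat floor plus a person-sized cluster standing on it.
    floor = np.column_stack([np.random.uniform(0, 4, 2000),
                             np.random.uniform(0, 4, 2000),
                             np.zeros(2000)])
    person = np.column_stack([np.random.normal(2.0, 0.1, 300),
                              np.random.normal(2.0, 0.1, 300),
                              np.random.uniform(0.1, 1.7, 300)])
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(np.vstack([floor, person]))

    # Fit the dominant plane with RANSAC and keep only the points above it.
    _, inliers = pcd.segment_plane(distance_threshold=0.05, ransac_n=3, num_iterations=500)
    candidates = pcd.select_by_index(inliers, invert=True)
    print(len(candidates.points), "candidate points remain after ground removal")
    ```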

  7. New insight into lunar impact melt mobility from the LRO camera

    Science.gov (United States)

    Bray, Veronica J.; Tornabene, Livio L.; Keszthelyi, Laszlo P.; McEwen, Alfred S.; Hawke, B. Ray; Giguere, Thomas A.; Kattenhorn, Simon A.; Garry, William B.; Rizk, Bashar; Caudill, C.M.; Gaddis, Lisa R.; van der Bogert, Carolyn H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) is systematically imaging impact melt deposits in and around lunar craters at meter and sub-meter scales. These images reveal that lunar impact melts, although morphologically similar to terrestrial lava flows of similar size, exhibit distinctive features (e.g., erosional channels). Although generated in a single rapid event, the post-impact mobility and morphology of lunar impact melts are surprisingly complex. We present evidence for multi-stage influx of impact melt into flow lobes and crater floor ponds. Our volume and cooling time estimates for the post-emplacement melt movements noted in LROC images suggest that new flows can emerge from melt ponds an extended time period after the impact event.

  8. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-07-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors on the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2-deg deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables
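
    The kinematic idea can be illustrated with homogeneous transforms: express the manipulator end-effector position in the camera base frame and solve for PAN and TILT. This is a sketch under assumed frame conventions, not the ORNL implementation.

    ```python
    # Sketch: compute camera PAN/TILT angles from 4x4 coordinate transformations.
    import numpy as np

    def pan_tilt(T_world_cam, p_world):
        """T_world_cam: 4x4 pose of the camera base in the world frame.
        p_world: end-effector position (3-vector) in the world frame."""
        p_cam = np.linalg.inv(T_world_cam) @ np.append(p_world, 1.0)
        x, y, z = p_cam[:3]                  # assumed frame: x forward, y left, z up
        pan = np.degrees(np.arctan2(y, x))   # rotation about z to face the target
        tilt = np.degrees(np.arctan2(z, np.hypot(x, y)))
        return pan, tilt

    # Example: camera base mounted 2 m up, target 1 m ahead, 0.5 m left, 1 m high.
    T = np.eye(4)
    T[2, 3] = 2.0
    print(pan_tilt(T, np.array([1.0, 0.5, 1.0])))
    ```

    In a closed-loop controller such as the one described above, the camera would only be commanded to move when the computed angles leave the deadband.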

  9. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-04-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables

  10. Using a slit lamp-mounted digital high-speed camera for dynamic observation of phakic lenses during eye movements: a pilot study

    Directory of Open Access Journals (Sweden)

    Leitritz MA

    2014-07-01

    Full Text Available Martin Alexander Leitritz, Focke Ziemssen, Karl Ulrich Bartz-Schmidt, Bogomil Voykov Centre for Ophthalmology, University Eye Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany Purpose: To evaluate a digital high-speed camera combined with digital morphometry software for dynamic measurements of phakic intraocular lens movements to observe kinetic influences, particularly in fast direction changes and at lateral end points. Materials and methods: A high-speed camera taking 300 frames per second observed movements of eight iris-claw intraocular lenses and two angle-supported intraocular lenses. Standardized saccades were performed by the patients to trigger mass inertia with lens position changes. Freeze images with maximum deviation were used for digital software-based morphometry analysis with ImageJ. Results: Two eyes from each of five patients (median age 32 years, range 28–45 years) without findings other than refractive errors were included. The high-speed images showed sufficient usability for further morphometric processing. In the primary eye position, the median decentrations downward and in a lateral direction were -0.32 mm (range -0.69 to 0.024) and 0.175 mm (range -0.37 to 0.45), respectively. Despite the small sample size of asymptomatic patients, we found a considerable amount of lens dislocation. The median distance amplitude during eye movements was 0.158 mm (range 0.02–0.84). There was a slight positive correlation (r=0.39, P<0.001) between the grade of deviation in the primary position and the distance increase triggered by movements. Conclusion: With the use of a slit lamp-mounted high-speed camera system and morphometry software, observation and objective measurement of iris-claw intraocular lens and angle-supported intraocular lens movements seem to be possible. Slight decentration in the primary position might be an indicator of increased lens mobility under kinetic stress during eye movements.

  11. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    Science.gov (United States)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems. We have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world when the user watches the screen of a see-through 3D viewer. The goal of our research is to build a display system as follows: when users see the real world through the mobile viewer, the display system gives users virtual 3D images that float in the air, and the observers can touch these floating images and interact with them, much as kids play with modeling clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by the improved parallax barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors show the geometric analysis of the proposed measuring method, which is the simplest method using a single camera rather than a stereo camera, and the results of our viewer system.
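
    The single-camera measuring method can be related to a standard perspective-n-point (PnP) solution: with the 3D positions of the infrared LED markers known, the viewer pose follows from their image coordinates. The sketch below uses OpenCV's solvePnP; the marker layout, image points and camera intrinsics are assumptions.

    ```python
    # Sketch: estimate the mobile viewer's pose from known IR LED point markers.
    import numpy as np
    import cv2

    # Four LED markers at known 3D positions in the workspace (metres).
    object_pts = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0],
                           [0.2, 0.2, 0.0], [0.0, 0.2, 0.0]], dtype=np.float32)
    # Their detected centroids in the camera image (pixels).
    image_pts = np.array([[320.0, 240.0], [420.0, 242.0],
                          [418.0, 340.0], [322.0, 338.0]], dtype=np.float32)
    # Assumed pinhole intrinsics of the viewer camera, no lens distortion.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
    print("pose found:", ok, "translation (m):", tvec.ravel())
    ```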

  12. A new omni-directional multi-camera system for high resolution surveillance

    Science.gov (United States)

    Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2014-05-01

    Omni-directional high resolution surveillance has a wide application range in defense and security fields. Early systems used for this purpose are based on a parabolic mirror or a fisheye lens, where distortion due to the nature of the optical elements cannot be avoided. Moreover, in such systems, the image resolution is limited to a single image sensor's resolution. Recently, the Panoptic camera approach that mimics the eyes of flying insects using multiple imagers has been presented. This approach features a novel solution for constructing a spherically arranged wide-FOV plenoptic imaging system where the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very-high resolution visible spectrum imaging and recording system inspired by the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over (17,700×4,650) pixels (82.3MP). Real-time video capturing capability is also verified at 30 fps for a resolution over (9,000×2,400) pixels (21.6MP). The next generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The important capacity of GigaEye-1 opens the door to various post-processing techniques in the surveillance domain such as large perimeter object tracking, very-high resolution depth map estimation and high dynamic-range imaging, which are beyond standard stitching and panorama generation methods.

  13. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    Science.gov (United States)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for the evaluation of the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This can be done through a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region - SP - Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth color camera was mobile (installed in a car), but operated in a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network can observe the same lightning flash from different angles, and all recorded videos were GPS (Global Position System) time stamped, allowing comparisons of events between cameras and the LLS. The RAMMER sensor is basically composed of a computer, a Phantom high-speed camera version 9.1 and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network, during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the accurate GPS position of the triangulated object and the result from the visual triangulation method. Lightning return stroke positions, estimated with the visual triangulation method, were compared with LLS locations. Differences between solutions were not greater than 1.8 km.

  14. X-ray powder diffraction camera for high-field experiments

    International Nuclear Information System (INIS)

    Koyama, K; Mitsui, Y; Takahashi, K; Watanabe, K

    2009-01-01

    We have designed a high-field X-ray diffraction (HF-XRD) camera which will be inserted into the experimental room-temperature bore (100 mm) of a conventional solenoid-type cryocooled superconducting magnet (10T-CSM). Using a prototype camera that is the same size as the HF-XRD camera, an XRD pattern of Si was taken at room temperature in zero magnetic field. From the obtained results, the expected ability of the designed HF-XRD camera is presented.

  15. Materials and devices with applications in high-end organic transistors

    International Nuclear Information System (INIS)

    Takeya, J.; Uemura, T.; Sakai, K.; Okada, Y.

    2014-01-01

    The development of functional materials typically benefits from an understanding of the microscopic mechanisms by which those materials operate. To accelerate the development of organic semiconductor devices with industrial applications in flexible and printed electronics, it is essential to elucidate the mechanisms of charge transport associated with molecular-scale charge transfer. In this study, we employed Hall effect measurements to differentiate coherent band transport from site-to-site hopping. The results of tests using several different molecular systems as the active semiconductor layers demonstrate that high-mobility charge transport in recently-developed solution-crystallized organic transistors is the result of a band-like mechanism. These materials, which have the potential to be organic transistors exhibiting the highest speeds ever obtained, are significantly different from the conventional lower-mobility organic semiconductors with incoherent hopping-like transport mechanisms which were studied in the previous century. They may be categorized as “high-end” organic semiconductors, characterized by their coherent electronic states and high values of mobility which are close to or greater than 10 cm²/Vs. - Highlights: • Transport in high-mobility solution-crystallized organic transistors is band-like. • High-end organic semiconductors carry coherent electrons with mobility > 10 cm²/Vs. • Hall-effect measurement differentiates coherent band transport from hopping. • We found an anomalous pressure effect in organic semiconductors

  16. Natural Environment Illumination: Coherent Interactive Augmented Reality for Mobile and Non-Mobile Devices.

    Science.gov (United States)

    Rohmer, Kai; Jendersie, Johannes; Grosch, Thorsten

    2017-11-01

    Augmented Reality offers many applications today, especially on mobile devices. Due to the lack of mobile hardware for illumination measurements, photorealistic rendering with consistent appearance of virtual objects is still an area of active research. In this paper, we present a full two-stage pipeline for environment acquisition and augmentation of live camera images using a mobile device with a depth sensor. We show how to directly work on a recorded 3D point cloud of the real environment containing high dynamic range color values. For unknown and automatically changing camera settings, a color compensation method is introduced. Based on this, we show photorealistic augmentations using variants of differential light simulation techniques. The presented methods are tailored for mobile devices and run at interactive frame rates. However, our methods are scalable to trade performance for quality and can produce quality renderings on desktop hardware.

  17. Image Alignment for Multiple Camera High Dynamic Range Microscopy.

    Science.gov (United States)

    Eastwood, Brian S; Childs, Elisabeth C

    2012-01-09

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
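
    A simplified sketch of descriptor-based alignment between two differently exposed frames is shown below. The paper's preferred variant matches descriptors on radiant power images from calibrated cameras; here ORB on plain grayscale frames and hypothetical file names stand in for that pipeline.

    ```python
    # Sketch: align a short-exposure frame to a long-exposure frame before HDR fusion.
    import numpy as np
    import cv2

    short_exp = cv2.imread("camera_a_short.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
    long_exp = cv2.imread("camera_b_long.png", cv2.IMREAD_GRAYSCALE)    # hypothetical

    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(short_exp, None)
    kp2, des2 = orb.detectAndCompute(long_exp, None)

    matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    # Warp the short exposure into the long exposure's frame of reference.
    aligned = cv2.warpPerspective(short_exp, H, (long_exp.shape[1], long_exp.shape[0]))
    ```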

  18. The Value of Privacy and Surveillance Drones in the Public Domain : Scrutinizing the Dutch Flexible Deployment of Mobile Cameras Act

    NARCIS (Netherlands)

    Gerdo Kuiper; Quirine Eijkman

    2017-01-01

    The flexible deployment of drones in the public domain is assessed in this article from a legal-philosophical perspective. On the basis of the theories of Dworkin and Moore, the distinction between individual rights and collective security policy goals is discussed. Mobile cameras in the public domain

  19. Citizen Camera-Witnessing: A Case Study of the Umbrella Movement

    Directory of Open Access Journals (Sweden)

    Wai Han Lo

    2016-08-01

    Full Text Available Citizen camera-witnessing is a new concept that describes the use of the mobile camera phone to engage in civic expression. I argue that the meaning of this concept should not be limited to painful testimony; instead, it is a mode of civic camera-mediated mass self-testimony to brutality. The use of mobile phone recordings in Hong Kong’s Umbrella Movement is examined to understand how mobile cameras are employed as personal witnessing devices to provide recordings that indict unjust events and engage others in the civic movement. This study examined the Facebook posts and YouTube videos of the Umbrella Movement between September 22, 2014 and December 22, 2014. The results suggest that the camera phone not only contributes to witnessing the brutal repression of the state, but also witnesses the beauty of the movement, and provides a testimony that allows for rituals to develop and semi-codes to be transformed.

  20. a Spatio-Spectral Camera for High Resolution Hyperspectral Imaging

    Science.gov (United States)

    Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.

    2017-08-01

    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.

  1. A SPATIO-SPECTRAL CAMERA FOR HIGH RESOLUTION HYPERSPECTRAL IMAGING

    Directory of Open Access Journals (Sweden)

    S. Livens

    2017-08-01

    Full Text Available Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600–900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475–925 nm), and we discuss future work.

  2. High dynamic range image acquisition based on multiplex cameras

    Science.gov (United States)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

    High dynamic range imaging is an important technology for photoelectric information acquisition, providing a higher dynamic range and more image details, and it can better reflect the real environment, light and color information. Currently, methods of high dynamic range image synthesis based on sequences of differently exposed images cannot adapt to dynamic scenes. They fail to overcome the effects of moving targets, resulting in ghosting artifacts. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system was proposed. Firstly, image sequences with different exposures were captured with the camera array, and a derivative optical flow method based on color gradients was used to estimate the deviation between images and align them. Then, the high dynamic range image fusion weighting function was established by combining the inverse camera response function with the deviation between images, and was applied to generate a high dynamic range image. The experiments show that the proposed method can effectively obtain high dynamic range images of dynamic scenes and achieves good results.
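
    The role of the inverse camera response function in exposure fusion can be sketched with OpenCV's Debevec calibration and merge operators; this is a stand-in for the weighting function described above, and the file names and exposure times are assumptions.

    ```python
    # Sketch: recover the camera response curve and merge aligned exposures into HDR.
    import numpy as np
    import cv2

    files = ["exp_short.jpg", "exp_mid.jpg", "exp_long.jpg"]         # hypothetical frames
    times = np.array([1 / 1000, 1 / 125, 1 / 15], dtype=np.float32)  # assumed exposures (s)
    images = [cv2.imread(f) for f in files]

    response = cv2.createCalibrateDebevec().process(images, times)
    hdr = cv2.createMergeDebevec().process(images, times, response)

    # Tone map the radiance map back to 8 bits for display.
    ldr = cv2.createTonemap(gamma=2.2).process(hdr)
    cv2.imwrite("hdr_result.png", np.clip(ldr * 255, 0, 255).astype(np.uint8))
    ```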

  3. Spectroscopic gamma camera for use in high dose environments

    Energy Technology Data Exchange (ETDEWEB)

    Ueno, Yuichiro, E-mail: yuichiro.ueno.bv@hitachi.com [Research and Development Group, Hitachi, Ltd., Hitachi-shi, Ibaraki-ken 319-1221 (Japan); Takahashi, Isao; Ishitsu, Takafumi; Tadokoro, Takahiro; Okada, Koichi; Nagumo, Yasushi [Research and Development Group, Hitachi, Ltd., Hitachi-shi, Ibaraki-ken 319-1221 (Japan); Fujishima, Yasutake; Kometani, Yutaka [Hitachi Works, Hitachi-GE Nuclear Energy, Ltd., Hitachi-shi, Ibaraki-ken (Japan); Suzuki, Yasuhiko [Measuring Systems Engineering Dept., Hitachi Aloka Medical, Ltd., Ome-shi, Tokyo (Japan); Umegaki, Kikuo [Faculty of Engineering, Hokkaido University, Sapporo-shi, Hokkaido (Japan)

    2016-06-21

    We developed a pinhole gamma camera to measure distributions of radioactive material contaminants and to identify radionuclides in extraordinarily high dose regions (1000 mSv/h). The developed gamma camera is characterized by: (1) tolerance for high dose rate environments; (2) high spatial and spectral resolution for identifying unknown contaminating sources; and (3) good usability for being carried on a robot and remotely controlled. These are achieved by using a compact pixelated detector module with CdTe semiconductors, efficient shielding, and a fine-resolution pinhole collimator. The gamma camera weighs less than 100 kg, and its field of view is an 8 m square at a distance of 10 m; its image is divided into 256 (16×16) pixels. From the laboratory tests, we found the energy resolution at the 662 keV photopeak was 2.3% FWHM, which is sufficient to identify the radionuclides. We found that the count rate per unit background dose rate was 220 cps·h/mSv and the maximum count rate was 300 kcps, so the maximum dose rate of an environment in which the gamma camera can be operated was calculated as 1400 mSv/h. We investigated the reactor building of Unit 1 at the Fukushima Dai-ichi Nuclear Power Plant using the gamma camera and could identify the unknown contaminating source in a dose rate environment as high as 659 mSv/h.
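
    The quoted operating limit follows directly from the two measured rates (maximum count rate divided by count rate per unit background dose rate):

    ```latex
    \[
      \dot{D}_{\max} \approx \frac{3.0\times 10^{5}\ \mathrm{cps}}
                                  {220\ \mathrm{cps \cdot h/mSv}}
                     \approx 1.4\times 10^{3}\ \mathrm{mSv/h}.
    \]
    ```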

  4. The development of high-speed 100 fps CCD camera

    International Nuclear Information System (INIS)

    Hoffberg, M.; Laird, R.; Lenkzsus, F.; Liu, C.; Rodricks, B.

    1997-01-01

    This paper describes the development of a high-speed CCD digital camera system. The system has been designed to use CCDs from various manufacturers with minimal modifications. The first camera built on this design utilizes a Thomson 512 x 512 pixel CCD as its sensor, which is read out from two parallel outputs at a speed of 15 MHz/pixel/output. The data undergo correlated double sampling, after which they are digitized to 12 bits. The throughput of the system translates into 60 MB/second, which is either stored directly in a PC or transferred to a custom-designed VXI module. The PC data acquisition version of the camera can collect sustained data in real time, limited only by the memory installed in the PC. The VXI version of the camera, also controlled by a PC, stores 512 MB of real-time data before it must be read out to the PC disk storage. The uncooled CCD can be used either with lenses for visible light imaging or with a phosphor screen for X-ray imaging. This camera has been tested with a phosphor screen coupled to a fiber-optic face plate for high-resolution, high-speed X-ray imaging. The camera is controlled through a custom event-driven user-friendly Windows package. The pixel clock speed can be changed from 1 to 15 MHz. The noise was measured to be 1.05 bits at a 13.3 MHz pixel clock. This paper describes the electronics, software, and characterizations that have been performed using both visible and X-ray photons. (orig.)

  5. SCC500: next-generation infrared imaging camera core products with highly flexible architecture for unique camera designs

    Science.gov (United States)

    Rumbaugh, Roy N.; Grealish, Kevin; Kacir, Tom; Arsenault, Barry; Murphy, Robert H.; Miller, Scott

    2003-09-01

    A new 4th generation MicroIR architecture is introduced as the latest in the highly successful Standard Camera Core (SCC) series by BAE SYSTEMS to offer an infrared imaging engine with greatly reduced size, weight, power, and cost. The advanced SCC500 architecture provides great flexibility in configuration to include multiple resolutions, an industry standard Real Time Operating System (RTOS) for customer specific software application plug-ins, and a highly modular construction for unique physical and interface options. These microbolometer based camera cores offer outstanding and reliable performance over an extended operating temperature range to meet the demanding requirements of real-world environments. A highly integrated lens and shutter is included in the new SCC500 product enabling easy, drop-in camera designs for quick time-to-market product introductions.

  6. Quantitative imaging with a mobile phone microscope.

    Directory of Open Access Journals (Sweden)

    Arunan Skandarajah

    Full Text Available Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone-based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and the scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications.

  7. Quantitative Imaging with a Mobile Phone Microscope

    Science.gov (United States)

    Skandarajah, Arunan; Reber, Clay D.; Switz, Neil A.; Fletcher, Daniel A.

    2014-01-01

    Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone–based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and the scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications. PMID:24824072

  8. People detection and tracking using RGB-D cameras for mobile robots

    Directory of Open Access Journals (Sweden)

    Hengli Liu

    2016-09-01

    Full Text Available People detection and tracking is an essential capability for mobile robots in order to achieve natural human–robot interaction. In this article, a human detection and tracking system is designed and validated for mobile robots using RGB-depth (RGB-D) cameras, which provide color data together with depth information. The whole framework is composed of human detection, tracking and re-identification. Firstly, ground points and ceiling planes are removed to reduce computation effort. A prior-knowledge-guided random sample consensus (RANSAC) fitting algorithm is used to detect the ground plane and ceiling points. All remaining points are projected onto the ground plane and segmented into subclusters for candidate detection. Mean-shift clustering with an Epanechnikov kernel is conducted to partition the points into subclusters. We propose spatial region-of-interest plan-view maps, which are employed to identify human candidates from the point cloud subclusters. A depth-weighted histogram is extracted online to characterize each human candidate. Then, a particle filter algorithm is adopted to track the human's motion. The integration of the depth-weighted histogram and the particle filter provides a precise tool for tracking the motion of human subjects. Finally, data association is set up to re-identify humans who have been tracked. Extensive experiments are conducted to demonstrate the effectiveness and robustness of our human detection and tracking system.
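
    The ground-plane detection step relies on a RANSAC plane fit. The sketch below shows a plain RANSAC plane fit in Python for illustration only; the prior-knowledge guidance described in the article (e.g. restricting the candidate plane normal) is omitted.

        import numpy as np

        def ransac_plane(points, n_iters=200, inlier_tol=0.05, seed=0):
            """Fit a plane to an (N, 3) point cloud with plain RANSAC and return
            (normal, d, n_inliers) for the model n.x + d = 0 with the most inliers.
            The article's prior-knowledge guidance (e.g. keeping the ground normal
            near-vertical) is intentionally omitted in this sketch."""
            rng = np.random.default_rng(seed)
            best = (None, None, 0)
            for _ in range(n_iters):
                p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
                normal = np.cross(p1 - p0, p2 - p0)
                norm = np.linalg.norm(normal)
                if norm < 1e-9:                      # degenerate (collinear) sample
                    continue
                normal /= norm
                d = -normal @ p0
                n_inliers = int(np.count_nonzero(np.abs(points @ normal + d) < inlier_tol))
                if n_inliers > best[2]:
                    best = (normal, d, n_inliers)
            return best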

  9. High-performance dual-speed CCD camera system for scientific imaging

    Science.gov (United States)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned with a `camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1 × 10⁶ or 5 × 10⁶ pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control, and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers and interfaces, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10⁵ pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber-optic link.

  10. A new high-speed IR camera system

    Science.gov (United States)

    Travis, Jeffrey W.; Shu, Peter K.; Jhabvala, Murzy D.; Kasten, Michael S.; Moseley, Samuel H.; Casey, Sean C.; Mcgovern, Lawrence K.; Luers, Philip J.; Dabney, Philip W.; Kaipa, Ravi C.

    1994-01-01

    A multi-organizational team at the Goddard Space Flight Center is developing a new far infrared (FIR) camera system which furthers the state of the art for this type of instrument by incorporating recent advances in several technological disciplines. All aspects of the camera system are optimized for operation at the high data rates required for astronomical observations in the far infrared. The instrument is built around a Blocked Impurity Band (BIB) detector array which exhibits responsivity over a broad wavelength band and is capable of operating at 1000 frames/sec, and it consists of a focal plane dewar, a compact camera head electronics package, and a Digital Signal Processor (DSP)-based data system residing in a standard 486 personal computer. In this paper we discuss the overall system architecture, the focal plane dewar, and advanced features and design considerations for the electronics. This system, or one derived from it, may prove useful for many commercial and/or industrial infrared imaging or spectroscopic applications, including thermal machine vision for robotic manufacturing, photographic observation of short-duration thermal events such as combustion or chemical reactions, and high-resolution surveillance imaging.

  11. PhC-4 new high-speed camera with mirror scanning

    International Nuclear Information System (INIS)

    Daragan, A.O.; Belov, B.G.

    1979-01-01

    The optical system and construction of the high-speed PhC-4 photographic camera, a continuously operating camera with mirror scanning, are described. The optical system of the camera is based on a four-sided rotating mirror, two optical inlets and two working sectors. The PhC-4 camera provides framing rates of up to 600,000 frames per second. (author)

  12. First results from the TOPSAT camera

    Science.gov (United States)

    Greenway, Paul; Tosh, Ian; Morris, Nigel; Burton, Gary; Cawley, Steve

    2017-11-01

    The TopSat camera is a low-cost remote sensing imager capable of producing 2.5-metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. The instrument was designed and assembled at the Space Science & Technology Department of the CCLRC's Rutherford Appleton Laboratory (RAL) in the UK, and was launched on the 27th October 2005 from Plesetsk Cosmodrome in Northern Russia on a Kosmos-3M. The camera utilises an off-axis three-mirror system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, which is a collaboration between four UK organisations: QinetiQ, Surrey Satellite Technology Ltd (SSTL), RAL and Infoterra. Its objective is to demonstrate the provision of rapid-response, high-resolution imagery to fixed and mobile ground stations using a low-cost minisatellite. The paper "Development of the TopSat Camera", presented by RAL at the 5th ICSO in 2004, described the opto-mechanical design, assembly, alignment and environmental test methods implemented. Now that the spacecraft is in orbit and successfully acquiring images, this paper presents the first results from the camera and makes an initial assessment of its in-orbit performance.

  13. MonoSLAM: real-time single camera SLAM.

    Science.gov (United States)

    Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier

    2007-06-01

    We present a real-time algorithm which can recover the 3D trajectory of a monocular camera moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real-time yet drift-free performance inaccessible to Structure-from-Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work not only extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.
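
    To give a flavour of the probabilistic framework, the sketch below shows an EKF prediction step under a constant-velocity motion model. It is a simplification (translation and linear velocity only, no quaternion orientation, angular velocity or feature states) and is not the authors' implementation.

        import numpy as np

        def predict_camera_state(x, P, dt, sigma_a=1.0):
            """One EKF prediction step with a constant-velocity motion model.
            x = [px, py, pz, vx, vy, vz]; P is the 6x6 state covariance;
            sigma_a is the std of the unknown acceleration impulse (process noise)."""
            F = np.eye(6)
            F[:3, 3:] = dt * np.eye(3)                 # position integrates velocity
            G = np.vstack([0.5 * dt**2 * np.eye(3),    # acceleration-to-state mapping
                           dt * np.eye(3)])
            Q = (sigma_a ** 2) * (G @ G.T)             # process noise covariance
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            return x_pred, P_pred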

  14. Adaptive-Repetitive Visual-Servo Control of Low-Flying Aerial Robots via Uncalibrated High-Flying Cameras

    Science.gov (United States)

    Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.

    2017-08-01

    This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower-altitude agents in areas where, for example, GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time; thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capture) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circulate above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.

  15. The in vitro and in vivo validation of a mobile non-contact camera-based digital imaging system for tooth colour measurement.

    Science.gov (United States)

    Smith, Richard N; Collins, Luisa Z; Naeeni, Mojgan; Joiner, Andrew; Philpotts, Carole J; Hopkinson, Ian; Jones, Clare; Lath, Darren L; Coxon, Thomas; Hibbard, James; Brook, Alan H

    2008-01-01

    To assess the reproducibility of a mobile non-contact camera-based digital imaging system (DIS) for measuring tooth colour under in vitro and in vivo conditions. One in vitro and two in vivo studies were performed using a mobile non-contact camera-based digital imaging system. In vitro study: two operators used the DIS to image 10 dry tooth specimens in a randomised order on three occasions. In vivo study 1: 25 subjects with two natural, normally aligned, upper central incisors had their teeth imaged using the DIS on four consecutive days by one operator to measure day-to-day variability. On one of the four test days, duplicate images were collected by three different operators to measure inter- and intra-operator variability. In vivo study 2: 11 subjects with two natural, normally aligned, upper central incisors had their teeth imaged using the DIS twice daily over three days within the same week to assess day-to-day variability. Three operators collected images from subjects in a randomised order to measure inter- and intra-operator variability. Subject-to-subject variability was the largest source of variation within the data. Pairwise correlations and concordance coefficients were > 0.7 for each operator, demonstrating good precision and excellent operator agreement in each of the studies. Intraclass correlation coefficients (ICCs) for each operator indicate that day-to-day reliability was good to excellent, with all ICCs > 0.75 for each operator. The mobile non-contact camera-based digital imaging system was shown to be a reproducible means of measuring tooth colour in both in vitro and in vivo experiments.

  16. A novel single-step procedure for the calibration of the mounting parameters of a multi-camera terrestrial mobile mapping system

    Science.gov (United States)

    Habib, A.; Kersting, P.; Bang, K.; Rau, J.

    2011-12-01

    Mobile Mapping Systems (MMS) can be defined as moving platforms which integrate a set of imaging sensors and a position and orientation system (POS) for the collection of geo-spatial information. In order to fully explore the potential accuracy of such systems and guarantee accurate multi-sensor integration, a careful system calibration must be carried out. System calibration involves individual sensor calibration as well as the estimation of the inter-sensor geometric relationship. This paper tackles a specific component of the system calibration process of a multi-camera MMS - the estimation of the relative orientation parameters among the cameras, i.e., the inter-camera geometric relationship (lever-arm offsets and boresight angles). For that purpose, a novel single-step procedure, which is easy to implement and not computationally intensive, is introduced. The proposed method is implemented in such a way that it can also be used for the estimation of the mounting parameters between the cameras and the IMU body frame in the case of directly georeferenced systems. The performance of the proposed method is evaluated through experimental results using simulated data. A comparative analysis between the proposed single-step procedure and the two-step procedure, which makes use of the traditional bundle adjustment, is presented.
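
    For readers unfamiliar with the terminology, the lever-arm offset and boresight angles between two cameras can be expressed as the relative transformation between their poses. The sketch below illustrates this by simple composition and is not the single-step estimation procedure proposed in the paper.

        import numpy as np

        def inter_camera_mounting(R_ref, t_ref, R_cam, t_cam):
            """Given the pose of a reference camera and a second camera in a common
            frame (R maps camera axes into the common frame, t is the camera position),
            return the boresight rotation and lever-arm offset of the second camera
            expressed in the reference camera frame. Composition only; the paper's
            contribution is estimating these parameters in a single adjustment."""
            R_boresight = R_ref.T @ R_cam
            lever_arm = R_ref.T @ (np.asarray(t_cam, float) - np.asarray(t_ref, float))
            return R_boresight, lever_arm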

  17. A pixellated gamma-camera based on CdTe detectors clinical interests and performances

    CERN Document Server

    Chambron, J; Eclancher, B; Scheiber, C; Siffert, P; Hage-Ali, M; Regal, R; Kazandjian, A; Prat, V; Thomas, S; Warren, S; Matz, R; Jahnke, A; Karman, M; Pszota, A; Németh, L

    2000-01-01

    A mobile gamma camera dedicated to nuclear cardiology, based on a 15 cm×15 cm detection matrix of 2304 CdTe detector elements, 2.83 mm×2.83 mm×2 mm, has been developed with European Community support to academic and industrial research centres. The intrinsic properties of the semiconductor crystals - low ionisation energy, high energy resolution, high attenuation coefficient - are potentially attractive for improving gamma-camera performance. But their use as gamma detectors for medical imaging at high resolution requires the production of high-grade materials and large quantities of sophisticated read-out electronics. The decision was taken to use CdTe rather than CdZnTe, because the manufacturer (Eurorad, France) has extensive experience in producing high-grade materials, with good homogeneity and stability, whose transport properties, characterised by the mobility-lifetime product, are at least 5 times greater than those of CdZnTe. The detector matrix is divided into 9 square units; each unit is composed ...

  18. GENERATION OF HIGH RESOLUTION AND HIGH PRECISION ORTHORECTIFIED ROAD IMAGERY FROM MOBILE MAPPING SYSTEM

    Directory of Open Access Journals (Sweden)

    M. Sakamoto

    2012-07-01

    Full Text Available In this paper, a novel technique to generate high-resolution, high-precision Orthorectified Road Imagery (ORI) using spatial information acquired from a Mobile Mapping System (MMS) is introduced. The MMS was equipped with multiple sensors, namely GPS, IMU, an odometer, 2-6 digital cameras and 2-4 laser scanners. In this study, a Triangulated Irregular Network (TIN)-based approach, similar to general aerial photogrammetry, was adopted to build a terrain model in order to generate ORI with high resolution and high geometric precision. Compared to aerial photogrammetry, several issues need to be addressed. ORI is generated by merging multiple time-sequence images of a short section. Hence, the influence of occlusion due to stationary objects, such as telephone poles, trees and footbridges, or moving objects, such as vehicles and pedestrians, is very significant. Moreover, the influence of light falloff at the image edges, tone adjustment among images captured from different cameras or from round-trip data acquisition along the same path, and the time lag between image exposure and laser point acquisition also need to be addressed properly. The proposed method was applied to generate ORI with 1 cm resolution from actual MMS data sets. The ORI generated by the proposed technique was clearer, occlusion-free and of higher resolution compared to conventional orthorectified coloured point cloud imagery. Moreover, the visual interpretation of road features from the ORI was much easier. In addition, the experimental results also validated the effectiveness of the proposed radiometric corrections. In occluded regions, the ORI was compensated using other images captured from different angles. The validity of the image masking process in the occluded regions was also ascertained.
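
    The core geometric operation when colouring each ORI ground cell from an MMS image is the projection of a ground point into a frame image. A minimal collinearity-equation sketch (the sign convention and the omission of lens distortion are assumptions, not taken from the paper):

        import numpy as np

        def project_to_image(X_ground, X_cam, R, focal, pp=(0.0, 0.0)):
            """Project a mapping-frame ground point into a frame image with the
            collinearity equations. R rotates mapping-frame vectors into the camera
            frame and X_cam is the exposure centre; the camera is assumed to look
            along -Z and lens distortion is ignored in this sketch."""
            xc = R @ (np.asarray(X_ground, float) - np.asarray(X_cam, float))
            if xc[2] >= 0.0:            # point is behind the camera in this convention
                return None
            x = pp[0] - focal * xc[0] / xc[2]
            y = pp[1] - focal * xc[1] / xc[2]
            return x, y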

  19. Scintillation camera for high activity sources

    International Nuclear Information System (INIS)

    Arseneau, R.E.

    1978-01-01

    The invention described relates to a scintillation camera used for clinical medical diagnosis. Advanced recognition of many unacceptable pulses allows the scintillation camera to discard such pulses at an early stage in processing. This frees the camera to process a greater number of pulses of interest within a given period of time. Temporary buffer storage allows the camera to accommodate pulses received at a rate in excess of its maximum rated capability due to statistical fluctuations in the level of radioactivity of the radiation source measured. (U.K.)

  20. Export, metal recovery and the mobile phone end-of-life ecosystem

    NARCIS (Netherlands)

    Bollinger, L.A.; Blass, V.

    2012-01-01

    Against a background of rapidly growing mobile phone consumption in developing and emerging economies, falling use times and looming metal scarcity, finding better ways to deal with end-of-life (EoL) phones is imperative. The current dynamic in which large numbers of EoL phones are exported from

  1. Image-converter streak cameras with very high gain

    International Nuclear Information System (INIS)

    1975-01-01

    A new slit-scanning camera with very high photonic gain (G=5000) is described. Developments in tube and microchannel-plate technology have enabled the integration of such an amplifying element in an image-converter tube, which does away with the couplings and the intermediate electron-photon-electron conversions of classical converter systems with external amplification. It is thus possible to obtain equal or superior performance while retaining considerable gain for the camera, great compactness, great flexibility in use, and easy handling. (author)

  2. The future of consumer cameras

    Science.gov (United States)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

    In the last two decades multimedia devices, and in particular imaging devices (camcorders, tablets, mobile phones, etc.), have become dramatically widespread. Moreover, the increase in their computational performance, combined with higher storage capability, allows them to process large amounts of data. In this paper an overview of current trends in the consumer camera market and technology is given, providing also some details about the recent past (from the Digital Still Camera up to today) and forthcoming key issues.

  3. Comparison of myocardial perfusion imaging between the new high-speed gamma camera and the standard anger camera

    International Nuclear Information System (INIS)

    Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi

    2013-01-01

    Cadmium-zinc-telluride (CZT) solid-state detectors have been recently introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of the CZT high-speed gamma camera (Discovery NM 530c) with that of the standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest 99m Tc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera with a 15-min scan time each for stress and for rest. All scans were immediately repeated on a CZT camera with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong within narrow Bland-Altman limits of agreement. Using list mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans, and in 100% of the ≥4-min scans. For CZT scans at rest, similarly, image quality was rated as good or excellent in 94% of the 1-min scans, and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, which is equivalent to standard myocardial single-photon emission computed tomography, despite a short scan time of less than half of the standard time. (author)

  4. Mobile Phone Ratiometric Imaging Enables Highly Sensitive Fluorescence Lateral Flow Immunoassays without External Optical Filters.

    Science.gov (United States)

    Shah, Kamal G; Singh, Vidhi; Kauffman, Peter C; Abe, Koji; Yager, Paul

    2018-05-14

    Paper-based diagnostic tests based on the lateral flow immunoassay concept promise low-cost, point-of-care detection of infectious diseases, but such assays suffer from poor limits of detection. One factor that contributes to poor analytical performance is a reliance on low-contrast chromophoric optical labels such as gold nanoparticles. Previous attempts to improve the sensitivity of paper-based diagnostics include replacing chromophoric labels with enzymes, fluorophores, or phosphors at the expense of increased fluidic complexity or the need for device readers with costly optoelectronics. Several groups, including our own, have proposed mobile phones as suitable point-of-care readers due to their low cost, ease of use, and ubiquity. However, extant mobile phone fluorescence readers require costly optical filters and were typically validated with only one camera sensor module, which is inappropriate for potential point-of-care use. In response, we propose to couple low-cost ultraviolet light-emitting diodes with long Stokes-shift quantum dots to enable ratiometric mobile phone fluorescence measurements without optical filters. Ratiometric imaging with unmodified smartphone cameras improves the contrast and attenuates the impact of excitation intensity variability by 15×. Practical application was shown with a lateral flow immunoassay for influenza A with nucleoproteins spiked into simulated nasal matrix. Limits of detection of 1.5 and 2.6 fmol were attained on two mobile phones, which are comparable to a gel imager (1.9 fmol), 10× better than imaging gold nanoparticles on a scanner (18 fmol), and >2 orders of magnitude better than gold nanoparticle-labeled assays imaged with mobile phones. Use of the proposed filter-free mobile phone imaging scheme is a first step toward enabling a new generation of highly sensitive, point-of-care fluorescence assays.
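
    The ratiometric readout amounts to dividing a fluorescence emission channel by a reference channel, pixel by pixel, so that variations in excitation intensity largely cancel. A minimal sketch; the specific channel assignments are placeholders, not the paper's choice:

        import numpy as np

        def ratiometric_signal(rgb, emission_ch=0, reference_ch=2, eps=1e-6):
            """Per-pixel ratio between a fluorescence emission channel and a
            reference channel of an (H, W, 3) RGB phone-camera image. Channel
            indices are illustrative; eps avoids division by zero."""
            rgb = rgb.astype(np.float64)
            return rgb[..., emission_ch] / (rgb[..., reference_ch] + eps)

        # Example: mean ratio inside a test-line region of interest
        # roi_ratio = ratiometric_signal(image)[y0:y1, x0:x1].mean()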

  5. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to the use of cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a great presence of 'digital cameras' aimed more at the home market. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low-level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs that offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application are such effects as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. In the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  6. Practical Stabilization of Uncertain Nonholonomic Mobile Robots Based on Visual Servoing Model with Uncalibrated Camera Parameters

    Directory of Open Access Journals (Sweden)

    Hua Chen

    2013-01-01

    Full Text Available The practical stabilization problem is addressed for a class of uncertain nonholonomic mobile robots with uncalibrated visual parameters. Based on the visual servoing kinematic model, a new switching controller is presented in the presence of parametric uncertainties associated with the camera system. In comparison with existing methods, the new design method is used directly to control the original system without any state or input transformation, which effectively avoids singularity. Under the proposed control law, it is rigorously proved that all the states of the closed-loop system can be stabilized to a prescribed, arbitrarily small neighborhood of the zero equilibrium point. Furthermore, this switching control technique can be applied to solve the practical stabilization problem of a kind of mobile robot with uncertain parameters (and angle measurement disturbance) that has appeared in the literature, such as Morin et al. (1998), Hespanha et al. (1999), Jiang (2000), and Hong et al. (2005). Finally, the simulation results show the effectiveness of the proposed controller design approach.

  7. An Architecture Offering Mobile Pollution Sensing with High Spatial Resolution

    Directory of Open Access Journals (Sweden)

    Oscar Alvear

    2016-01-01

    Full Text Available Mobile sensing is becoming the best option to monitor our environment due to its ease of use, high flexibility, and low price. In this paper, we present a mobile sensing architecture able to monitor different pollutants using low-end sensors. Although the proposed solution can be deployed everywhere, it becomes especially meaningful in crowded cities where pollution values are often high, being of great concern to both the population and the authorities. Our architecture is composed of three different modules: a mobile sensor for monitoring environmental pollutants, an Android-based device for transferring the gathered data to a central server, and a central processing server for analyzing the pollution distribution. Moreover, we analyze different issues related to the monitoring process: (i) filtering captured data to reduce the variability of consecutive measurements; (ii) converting the sensor output to actual pollution levels; (iii) reducing the temporal variations produced by the mobile sensing process; and (iv) applying interpolation techniques for creating detailed pollution maps. In addition, we study the best strategy to use mobile sensors by first determining the influence of sensor orientation on the captured values and then analyzing the influence of time and space sampling in the interpolation process.
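
    The filtering and map-interpolation steps mentioned above can be illustrated with a simple exponential moving average and inverse-distance weighting; both are generic sketches, not the exact methods selected in the paper.

        import numpy as np

        def ema_filter(samples, alpha=0.2):
            """Exponential moving average to damp the variability of consecutive
            sensor readings (one of many possible filters; alpha is illustrative)."""
            smoothed, prev = [], samples[0]
            for s in samples:
                prev = alpha * s + (1 - alpha) * prev
                smoothed.append(prev)
            return np.array(smoothed)

        def idw_interpolate(xy_known, values, xy_query, power=2.0, eps=1e-9):
            """Inverse-distance-weighted interpolation of pollution values onto a
            set of query points, a simple way to rasterize a pollution map."""
            xy_known = np.asarray(xy_known, float)
            xy_query = np.asarray(xy_query, float)
            values = np.asarray(values, float)
            d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
            w = 1.0 / (d ** power + eps)
            return (w @ values) / w.sum(axis=1)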

  8. MAVIS: Mobile Acquisition and VISualization - a professional tool for video recording on a mobile platform

    OpenAIRE

    Watten, Phil; Gilardi, Marco; Holroyd, Patrick; Newbury, Paul

    2015-01-01

    Professional video recording is a complex process which often requires expensive cameras and large amounts of ancillary equipment. With the advancement of mobile technologies, cameras on mobile devices have improved to the point where the quality of their output is sometimes comparable to that obtained from a professional video camera and are often used in professional productions. However, tools that allow professional users to access the information they need to control the technical ...

  9. A Bayes Theory-Based Modeling Algorithm to End-to-end Network Traffic

    OpenAIRE

    Zhao Hong-hao; Meng Fan-bo; Zhao Si-wen; Zhao Si-hang; Lu Yi

    2016-01-01

    Recently, network traffic has been increasing exponentially due to all kinds of applications, such as the mobile Internet, smart cities, smart transportation, the Internet of Things, and so on. End-to-end network traffic therefore becomes more important for traffic engineering. Usually, end-to-end traffic estimation is highly difficult. This paper proposes a Bayes theory-based method to model the end-to-end network traffic. Firstly, the end-to-end network traffic is described as an independent identically distrib...

  10. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera

    Directory of Open Access Journals (Sweden)

    Wei Feng

    2016-03-01

    Full Text Available High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) cameras cannot effectively capture rapid phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several or even hundreds of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera.
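
    The three-element median at the heart of the proposed method can be computed without a full sort. A trivial sketch of the building block (the surrounding coded-exposure reconstruction pipeline is omitted):

        def median3(a, b, c):
            """Median of three successive coded-exposure samples, the building block
            suggested by the paper's three-element median approach (sketch only)."""
            return max(min(a, b), min(max(a, b), c))

        # Example: per-pixel median over three consecutive reconstructed frames
        # pixel = median3(frame0[i, j], frame1[i, j], frame2[i, j])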

  11. Image Alignment for Multiple Camera High Dynamic Range Microscopy

    OpenAIRE

    Eastwood, Brian S.; Childs, Elisabeth C.

    2012-01-01

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability fo...

  12. End-of-life (EoL) mobile phone management in Hong Kong households.

    Science.gov (United States)

    Deng, Wen-Jing; Giesy, John P; So, C S; Zheng, Hai-Long

    2017-09-15

    A questionnaire survey and interviews were conducted in households and end-of-life (EoL) mobile phone business centres in Hong Kong. Widespread Internet use, combined with the rapid evolution of modern social networks, has resulted in the more rapid obsolescence of mobile phones, and thus a tremendous increase in the number of obsolete phones. In 2013, the volume of EoL mobile phones generated in Hong Kong totalled at least 330 tonnes, and the amount is rising. Approximately 80% of electronic waste is exported to Africa and developing countries such as mainland China or Pakistan for recycling. However, the material flow of the large number of obsolete phones generated by the territory's households remains unclear. Hence, the flow of EoL mobile phones in those households was analysed, with the average lifespan of a mobile phone in Hong Kong found to be just under two years (nearly 23 months). Most EoL mobile phones are transferred to mainland China for disposal. Current recycling methods are neither environmentally friendly nor sustainable, with serious implications for the environment and human health. The results of this analysis provide useful information for planning the collection system and facilities needed in Hong Kong and mainland China to better manage EoL mobile phones in the future. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Multi-Angle Snowflake Camera Value-Added Product

    Energy Technology Data Exchange (ETDEWEB)

    Shkurko, Konstantin [Univ. of Utah, Salt Lake City, UT (United States); Garrett, T. [Univ. of Utah, Salt Lake City, UT (United States); Gaustad, K [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-12-01

    The Multi-Angle Snowflake Camera (MASC) addresses a need for high-resolution multi-angle imaging of hydrometeors in freefall with simultaneous measurement of fallspeed. As illustrated in Figure 1, the MASC consists of three cameras, separated by 36°, each pointing at an identical focal point approximately 10 cm away. Located immediately above each camera, a light aims directly at the center of the depth of field for its corresponding camera. The focal point at which the cameras are aimed lies within a ring through which hydrometeors fall. The ring houses a system of near-infrared emitter-detector pairs, arranged in two arrays separated vertically by 32 mm. When hydrometeors pass through the lower array, they simultaneously trigger all cameras and lights. Fallspeed is calculated from the time it takes to traverse the distance between the upper and lower triggering arrays. The trigger electronics filter out ambient light fluctuations associated with varying sunlight and shadows. The microprocessor onboard the MASC controls the camera system and communicates with the personal computer (PC). The image data are sent via a FireWire 800 line, and fallspeed (and camera control) is sent via a Universal Serial Bus (USB) line that relies on RS232-over-USB serial conversion. See Table 1 for specific details on the MASC located at the Oliktok Point Mobile Facility on the North Slope of Alaska. The value-added product (VAP) detailed in this documentation analyzes the raw data (Section 2.0) using Python: the image analysis relies on the OpenCV image processing library, and the derived aggregated statistics rely on averaging. See Sections 4.1 and 4.2 for more details on what variables are computed.
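
    Fallspeed follows directly from the 32 mm separation of the trigger arrays and the time between the two trigger events. A minimal sketch (variable names are illustrative):

        # Fallspeed from the MASC trigger geometry: the two near-IR trigger arrays
        # are separated vertically by 32 mm, so fallspeed is that distance divided
        # by the time between the upper- and lower-array triggers.
        ARRAY_SEPARATION_M = 0.032

        def fallspeed(t_upper, t_lower):
            """Return fallspeed in m/s given trigger timestamps in seconds."""
            return ARRAY_SEPARATION_M / (t_lower - t_upper)

        # Example: a hydrometeor crossing the gap in 16 ms falls at 2 m/s
        # print(fallspeed(0.000, 0.016))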

  14. The Application of Data Mining Techniques to Create Promotion Strategy for Mobile Phone Shop

    Science.gov (United States)

    Khasanah, A. U.; Wibowo, K. S.; Dewantoro, H. F.

    2017-12-01

    The number of mobile phone shops is growing very fast in various regions of Indonesia, including Yogyakarta, due to the increasing demand for mobile phones. This leads to high competition among mobile phone shops. Under these conditions a mobile phone shop, especially a small one, should have a good promotion strategy in order to survive the competition. To create an attractive promotion strategy, companies/shops should know their customer segmentation and the buying patterns of their target market. These kinds of analysis can be done using data mining techniques. This study aims to segment customers using Agglomerative Hierarchical Clustering and to discover customer buying patterns using Association Rule Mining. The study was conducted at a mobile phone shop in Sleman, Yogyakarta. The clustering result shows that the biggest customer segment of the shop was male university students who come on weekends, and from association rule mining it can be concluded that tempered glass and smart phone “x”, as well as action camera, waterproof monopod and power bank, have strong relationships. These results were used to create the promotion strategies that are presented at the end of the study.
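
    The strength of rules such as "tempered glass → smart phone x" is judged by support, confidence and lift. A toy sketch with made-up transactions, not the shop's actual data:

        # Association-rule metrics on a tiny, invented transaction list.
        transactions = [
            {"tempered glass", "smart phone x"},
            {"tempered glass", "smart phone x", "power bank"},
            {"action camera", "waterproof monopod", "power bank"},
            {"smart phone x"},
        ]

        def support(itemset):
            return sum(itemset <= t for t in transactions) / len(transactions)

        def confidence(antecedent, consequent):
            return support(antecedent | consequent) / support(antecedent)

        def lift(antecedent, consequent):
            return confidence(antecedent, consequent) / support(consequent)

        print(confidence({"tempered glass"}, {"smart phone x"}))  # 1.0 on this toy data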

  15. Strategic options towards an affordable high-performance infrared camera

    Science.gov (United States)

    Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.

    2016-05-01

    The promise of infrared (IR) imaging attaining low cost akin to the success of CMOS sensors has been hampered by the inability to achieve the cost advantages necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm, despite well-documented advantages. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640x512-pixel InGaAs uncooled system offering high sensitivity, low noise, high speed (greater than 500 frames per second (FPS)) at full resolution, and low power consumption. The camera supports market adoption by not only demonstrating the high-performance IR imaging capability and value add demanded by military and industrial applications, but also by illuminating a path towards the justifiable price points essential for consumer-facing application industries such as automotive, medical, and security imaging. The strategic options presented include new sensor manufacturing technologies that scale favorably towards automation, multi-focal-plane-array-compatible readout electronics, and dense or ultra-small pixel pitch devices.

  16. A pixellated γ-camera based on CdTe detectors clinical interests and performances

    International Nuclear Information System (INIS)

    Chambron, J.; Arntz, Y.; Eclancher, B.; Scheiber, Ch.; Siffert, P.; Hage Hali, M.; Regal, R.; Kazandjian, A.; Prat, V.; Thomas, S.; Warren, S.; Matz, R.; Jahnke, A.; Karman, M.; Pszota, A.; Nemeth, L.

    2000-01-01

    A mobile gamma camera dedicated to nuclear cardiology, based on a 15 cm×15 cm detection matrix of 2304 CdTe detector elements, 2.83 mm×2.83 mm×2 mm, has been developed with European Community support to academic and industrial research centres. The intrinsic properties of the semiconductor crystals - low ionisation energy, high energy resolution, high attenuation coefficient - are potentially attractive for improving γ-camera performance. But their use as γ detectors for medical imaging at high resolution requires the production of high-grade materials and large quantities of sophisticated read-out electronics. The decision was taken to use CdTe rather than CdZnTe, because the manufacturer (Eurorad, France) has extensive experience in producing high-grade materials, with good homogeneity and stability, whose transport properties, characterised by the mobility-lifetime product, are at least 5 times greater than those of CdZnTe. The detector matrix is divided into 9 square units; each unit is composed of 256 detectors grouped into 16 modules. Each module consists of a thin ceramic plate holding a line of 16 detectors, in four groups of four for easy replacement, and holding a special 16-channel integrated circuit designed by CLRC (UK). Detection and acquisition logic based on a DSP card and a PC has been programmed by Eurorad for spectral and counting acquisition modes. LEAP and LEHR collimators of commercial design, the mobile gantry and the clinical software were provided by Siemens (Germany). The γ-camera head housing, its general mounting and the electrical connections were produced by Phase Laboratory (CNRS, France). The compactness of the γ-camera head (thin detector matrix, electronic readout and collimator) facilitates the detection of close γ sources, with the advantage of high spatial resolution. Such equipment is intended for bedside explorations. There is a growing clinical requirement in nuclear cardiology to early assess the extent of an infarct

  17. Identification and Removal of High Frequency Temporal Noise in a Nd:YAG Macro-Pulse Laser Assisted with a Diagnostic Streak Camera

    International Nuclear Information System (INIS)

    Kent Marlett; Ke-Xun Sun

    2004-01-01

    This paper discusses the use of a reference streak camera (SC) to diagnose laser performance and guide modifications to remove high-frequency noise from Bechtel Nevada's long-pulse laser. The upgraded laser exhibits less than 0.1% high-frequency noise in cumulative spectra, exceeding National Ignition Facility (NIF) calibration specifications. Inertial Confinement Fusion (ICF) experiments require full characterization of streak cameras over a wide range of sweep speeds (10 ns to 480 ns). This paradigm of metrology poses stringent spectral requirements on the laser source used for streak camera calibration. Recently, Bechtel Nevada worked with a laser vendor to develop a high-performance, multi-wavelength Nd:YAG laser to meet NIF calibration requirements. For a typical NIF streak camera with a 4096 x 4096 pixel CCD, flat-field calibration at 30 ns requires a smooth laser spectrum from 33 MHz to 68 GHz. Streak cameras are the appropriate instrumentation for measuring laser amplitude noise at these very high frequencies, since the upper-end spectral content is beyond the frequency response of typical optoelectronic detectors for a single-shot pulse. The SC was used to measure a similar laser at its second-harmonic wavelength (532 nm) to establish baseline spectra for testing signal analysis algorithms. The SC was then used to measure the new custom calibration laser. In both spatial-temporal measurements and cumulative spectra, 6-8 GHz oscillations were identified. The oscillations were found to be caused by inter-surface reflections between amplifiers. Additional variations in the SC spectral data were found to result from temperature instabilities in the seeding laser. Based on these findings, laser upgrades were made to remove the high-frequency noise from the laser output.
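
    The quoted 33 MHz to 68 GHz range follows from the 30 ns sweep window and the 4096-pixel sampling along the sweep: the record length sets the lowest resolvable frequency and the pixel Nyquist limit sets the highest. A minimal sketch:

        # Reproduce the quoted spectral range for flat-field calibration at a 30 ns
        # sweep on a 4096-pixel-wide CCD (a simple sketch of the arithmetic).
        sweep_time = 30e-9            # s
        pixels_along_sweep = 4096

        f_min = 1.0 / sweep_time                        # ~33 MHz (record length limit)
        f_max = pixels_along_sweep / (2 * sweep_time)   # ~68 GHz (pixel Nyquist limit)
        print(f"{f_min / 1e6:.0f} MHz to {f_max / 1e9:.0f} GHz")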

  18. Evaluation of the optical cross talk level in the SiPMs adopted in ASTRI SST-2M Cherenkov Camera using EASIROC front-end electronics

    International Nuclear Information System (INIS)

    Impiombato, D; Giarrusso, S; Mineo, T; Agnetta, G; Biondo, B; Catalano, O; Gargano, C; Rosa, G La; Russo, F; Sottile, G; Belluso, M; Billotta, S; Bonanno, G; Garozzo, S; Marano, D; Romeo, G

    2014-01-01

    ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) is a flagship project of the Italian Ministry of Education, University and Research whose main goal is the design and construction of an end-to-end prototype of the Small Size Telescopes of the Cherenkov Telescope Array. The prototype, named ASTRI SST-2M, will adopt a wide-field dual-mirror optical system in a Schwarzschild-Couder configuration to explore the VHE range of the electromagnetic spectrum. The camera at the focal plane is based on Silicon Photo-Multiplier (SiPM) detectors, an innovative solution for the detection of astronomical Cherenkov light. This contribution reports some preliminary results on the evaluation of the optical cross-talk level among the SiPM pixels foreseen for the ASTRI SST-2M camera.

  19. An Investigation of the Relationship between High-School Students' Problematic Mobile Phone Use and Their Self-Esteem Levels

    Science.gov (United States)

    Isiklar, Abdullah; Sar, Ali Haydar; Durmuscelebi, Mustafa

    2013-01-01

    Excessive mobile phone use, especially among adolescents, has prompted much debate about its effects. To this end, in this study we investigate the relationship between adolescents' mobile phone use and their self-esteem levels with respect to gender. For 919 high school students, we evaluated mobile phone use concerning their…

  20. Colorimetric analyzer based on mobile phone camera for determination of available phosphorus in soil.

    Science.gov (United States)

    Moonrungsee, Nuntaporn; Pencharee, Somkid; Jakmunee, Jaroon

    2015-05-01

    A field-deployable colorimetric analyzer based on an Android mobile phone was developed for the determination of available phosphorus content in soil. An inexpensive mobile phone with an embedded digital camera was used for taking photographs of the chemical solution under test. The method involves a reaction of phosphorus (in the orthophosphate form), ammonium molybdate and potassium antimonyl tartrate to form phosphomolybdic acid, which is reduced by ascorbic acid to produce the intensely colored molybdenum blue. A software program was developed for the phone to record and analyze the RGB color of the picture. A light-tight box with an LED light to control illumination was fabricated to improve the precision and accuracy of the measurement. Under the optimum conditions, the calibration graph was created by measuring the blue color intensity of a series of standard phosphorus solutions (0.0-1.0 mg P L⁻¹); the calibration equation obtained was then retained by the program for the analysis of sample solutions. The results obtained from the proposed method agreed well with the spectrophotometric method; a detection limit of 0.01 mg P L⁻¹ and a sample throughput of about 40 h⁻¹ were achieved. The developed system provided good accuracy and is well suited to field determination of the phosphorus nutrient. Copyright © 2015 Elsevier B.V. All rights reserved.
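
    The analysis reduces to extracting the mean blue-channel intensity of a region of interest and applying the stored calibration. A minimal sketch; the linear calibration form and the parameter names are assumptions for illustration:

        import numpy as np

        def phosphorus_from_image(rgb_roi, slope, intercept):
            """Convert the mean blue-channel intensity of the photographed solution
            (rgb_roi: H x W x 3 array, RGB order assumed) into a phosphorus
            concentration in mg P/L via a linear calibration (illustrative form)."""
            mean_blue = rgb_roi[..., 2].astype(float).mean()
            return slope * mean_blue + intercept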

  1. Development of a Mobile User Interface for Image-based Dietary Assessment.

    Science.gov (United States)

    Kim, Sungye; Schap, Tusarebecca; Bosch, Marc; Maciejewski, Ross; Delp, Edward J; Ebert, David S; Boushey, Carol J

    2010-12-31

    In this paper, we present a mobile user interface for image-based dietary assessment. The mobile user interface provides a front end to client-server image recognition and portion estimation software. In the client-server configuration, the user interactively records a series of food images using the built-in camera on the mobile device. Images are sent from the mobile device to the server, and the calorie content of the meal is estimated. In this paper, we describe and discuss the design and development of our mobile user interface features. We discuss the design concepts, from initial ideas through implementations. For each concept, we discuss qualitative user feedback from participants using the mobile client application. We then discuss future designs, including work on design considerations for the mobile application that allow the user to interactively correct errors in the automatic processing while reducing the user burden associated with classical pen-and-paper dietary records.

  2. MANAGING HIGH-END, HIGH-VOLUME INNOVATIVE PRODUCTS

    Directory of Open Access Journals (Sweden)

    Gembong Baskoro

    2008-01-01

    Full Text Available This paper discusses the concept of managing high-end, high-volume innovative products. High-end, high-volume consumer products are products that have considerable influence on the way of life. Characteristics of high-end, high-volume consumer products are (1) short cycle time, (2) quick obsolescence, and (3) rapid price erosion. Although they carry high risk for manufacturers, if manufacturers are able to understand consumer needs precisely, they have the potential to succeed and become the market leader. High innovation implies high utilization by the user; therefore, these products can indirectly influence people's way of life. The objective of managing them is to achieve sustainability of product development and innovation. This paper observes the behavior of these products in companies operating in the high-end, high-volume consumer product market.

  3. Development of low-cost high-performance multispectral camera system at Banpil

    Science.gov (United States)

    Oduor, Patrick; Mizuno, Genki; Olah, Robert; Dutta, Achyut K.

    2014-05-01

    Banpil Photonics (Banpil) has developed a low-cost, high-performance multispectral camera system for visible to short-wave infrared (VIS-SWIR) imaging for the most demanding high-sensitivity and high-speed military, commercial and industrial applications. The 640x512-pixel InGaAs uncooled camera system is designed to provide a compact, small form factor within a cubic inch, high sensitivity requiring fewer than 100 electrons, high dynamic range exceeding 190 dB, high frame rates greater than 1000 frames per second (FPS) at full resolution, and low power consumption below 1 W. These are practically all the features highly desirable in military imaging applications to expand deployment to every warfighter, while also maintaining the low-cost structure demanded for scaling into commercial markets. This paper describes Banpil's development of the camera system, including the features of the image sensor and an innovation integrating advanced digital electronics functionality, which has made the confluence of high-performance capabilities on the same imaging platform practical at low cost. It discusses the strategies employed, including innovations in the key components (e.g., the focal plane array (FPA) and Read-Out Integrated Circuitry (ROIC)) within our control while maintaining a fabless model, and strategic collaboration with partners to attain additional cost reductions on optics, electronics, and packaging. We highlight the challenges and potential opportunities for further cost reductions to achieve the goal of a sub-$1000 uncooled high-performance camera system. Finally, a brief overview of emerging military, commercial and industrial applications that will benefit from this high-performance imaging system and their forecast cost structure is presented.

  4. Automatic multi-camera calibration for deployable positioning systems

    Science.gov (United States)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error of the manual calibration method, and that the automated calibration method can replace manual calibration.
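
    The essential-matrix step maps naturally onto standard computer-vision tooling. A minimal sketch using OpenCV's 5-point RANSAC estimator (point matching and multi-camera bookkeeping are omitted; this is not the authors' exact pipeline):

        import cv2

        def relative_pose(pts1, pts2, K):
            """Estimate the relative pose between two intrinsically calibrated cameras
            from matched image points (Nx2 float arrays) with the 5-point method
            inside RANSAC. Sketch only; pts1/pts2 come from an external matcher."""
            E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                                  prob=0.999, threshold=1.0)
            _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)  # t is a unit direction
            return R, t, inlier_mask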

  5. High-resolution Compton cameras based on Si/CdTe double-sided strip detectors

    International Nuclear Information System (INIS)

    Odaka, Hirokazu; Ichinohe, Yuto; Takeda, Shin'ichiro; Fukuyama, Taro; Hagino, Koichi; Saito, Shinya; Sato, Tamotsu; Sato, Goro; Watanabe, Shin; Kokubun, Motohide; Takahashi, Tadayuki; Yamaguchi, Mitsutaka

    2012-01-01

    We have developed a new Compton camera based on silicon (Si) and cadmium telluride (CdTe) semiconductor double-sided strip detectors (DSDs). The camera consists of a 500-μm-thick Si-DSD and four layers of 750-μm-thick CdTe-DSDs, all of which have a common electrode configuration segmented into 128 strips on each side with pitches of 250 μm. In order to realize high angular resolution and to reduce the size of the detector system, a stack of DSDs with a short stack pitch of 4 mm is utilized to make the camera. Taking advantage of the excellent energy and position resolutions of the semiconductor devices, the camera achieves high angular resolutions of 4.5° at 356 keV and 3.5° at 662 keV. To obtain such high resolutions together with an acceptable detection efficiency, we demonstrate data reduction methods including energy calibration using the Compton scattering continuum and depth sensing in the CdTe-DSDs. We also discuss the imaging capability of the camera and show simultaneous multi-energy imaging.
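
    Event reconstruction in a Compton camera uses standard Compton kinematics: the two energy deposits define the opening angle of the event cone. A minimal sketch (event selection and the camera's depth-sensing corrections are omitted):

        import numpy as np

        ME_C2_KEV = 511.0  # electron rest energy in keV

        def compton_cone_angle(e_scatter_keV, e_absorb_keV):
            """Opening angle (degrees) of the Compton cone for a two-site event:
            e_scatter_keV is the energy deposited in the Si scatterer and
            e_absorb_keV the energy of the photon absorbed in the CdTe layers."""
            e_total = e_scatter_keV + e_absorb_keV
            cos_theta = 1.0 - ME_C2_KEV * (1.0 / e_absorb_keV - 1.0 / e_total)
            return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

        # Example: a 662 keV photon depositing 150 keV in the scatterer
        # print(compton_cone_angle(150.0, 512.0))   # about 39 degrees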

  6. Development of X-ray CCD camera system with high readout rate using ASIC

    International Nuclear Information System (INIS)

    Nakajima, Hiroshi; Matsuura, Daisuke; Anabuki, Naohisa; Miyata, Emi; Tsunemi, Hiroshi; Doty, John P.; Ikeda, Hirokazu; Katayama, Haruyoshi

    2009-01-01

    We report on the development of an X-ray charge-coupled device (CCD) camera system with a high readout rate using an application-specific integrated circuit (ASIC) and the Camera Link standard. A distinctive ΔΣ-type analog-to-digital converter is introduced into the chip to achieve effective noise shaping and to obtain high resolution with relatively simple circuits. The unit test proved a moderately low equivalent input noise of 70 μV at a high readout pixel rate of 625 kHz, while the entire chip consumes only 100 mW. The Camera Link standard was applied for the connectivity between the camera system and frame grabbers. In the initial test of the whole system, we adopted a P-channel CCD with a thick depletion layer developed for the X-ray CCD camera onboard the next Japanese X-ray astronomy satellite. The characteristic X-rays from 109Cd were successfully read out, resulting in an energy resolution of 379 (±7) eV (FWHM) at 22.1 keV, that is, ΔE/E = 1.7%, at a readout rate of 44 kHz.

  7. Personal Safety Triggering System on Android Mobile Platform

    OpenAIRE

    Ramalingam, Ashokkumar; Dorairaj, Prabhu; Ramamoorthy, Saranya

    2012-01-01

    The introduction of smartphones redefined the usage of mobile phones in the communication world. Smartphones are equipped with various sophisticated features such as Wi-Fi, GPS navigation, high-resolution cameras and touch screens with broadband access, which help mobile phone users keep in touch with the modern world. Many of these features are primarily integrated with the mobile operating system, which is out of reach of the public, so users cannot manipulate those features. Google came...

  8. An intelligent space for mobile robot localization using a multi-camera system.

    Science.gov (United States)

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-08-15

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Since the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  9. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Mariana Rampinelli

    2014-08-01

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Since the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  10. Life Course Stage and Social Support Mobilization for End-of-Life Caregivers.

    Science.gov (United States)

    LaValley, Susan A; Gage-Bouchard, Elizabeth A

    2018-04-01

    Caregivers of terminally ill patients are at risk for anxiety, depression, and social isolation. Social support from friends, family members, neighbors, and health care professionals can potentially prevent or mitigate caregiver strain. While previous research documents the importance of social support in helping end-of-life caregivers cope with caregiving demands, little is known about differences in social support experiences among caregivers at different life course stages. Using life course theory, this study analyzes data from in-depth interviews with 50 caregivers of patients enrolled in hospice services to compare barriers to mobilizing social support among caregivers at two life course stages: midlife caregivers caring for parents and older adult caregivers caring for spouses/partners. Older adult caregivers reported different barriers to mobilizing social support compared with midlife caregivers. Findings enhance the understanding of how caregivers' life course stage affects their barriers to mobilization of social support resources.

  11. Selecting a digital camera for telemedicine.

    Science.gov (United States)

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  12. Enhancing User Experiences of Mobile-Based Augmented Reality via Spatial Augmented Reality: Designs and Architectures of Projector-Camera Devices

    Directory of Open Access Journals (Sweden)

    Thitirat Siriborvornratanakul

    2018-01-01

    As smartphones, tablet computers, and other mobile devices have continued to dominate our digital world ecosystem, many industries use mobile or wearable devices to perform Augmented Reality (AR) functions in their workplaces in order to increase productivity and decrease unnecessary workloads. Mobile-based AR can basically be divided into three main types: phone-based AR, wearable AR, and projector-based AR. Among these, projector-based AR or Spatial Augmented Reality (SAR) is the most immature and least recognized type of AR for end users. This is because only a small number of commercial products provide projector-based AR functionality in a mobile manner, and the prices of mobile projectors are still relatively high. Moreover, many technical problems regarding projector-based AR have been left unsolved. Nevertheless, it is projector-based AR that has the potential to solve a fundamental problem shared by most mobile-based AR systems, and the always-visible nature of projector-based AR is one good answer for solving the current user experience issues of phone-based AR and wearable AR systems. Hence, in this paper, we analyze the user experience issues and technical issues of common mobile-based AR systems, the recently widespread phone-based AR systems, and the rising wearable AR systems. Then, for each issue, we propose and explain how using projector-based AR can solve the problem and/or help enhance the user experience. Our proposed framework includes hardware designs and architectures as well as a software computing paradigm towards mobile projector-based AR systems. The proposed design is evaluated by three experts using qualitative and semi-quantitative research approaches.

  13. Practical End-to-End Performance Testing Tool for High Speed 3G-Based Networks

    Science.gov (United States)

    Shinbo, Hiroyuki; Tagami, Atsushi; Ano, Shigehiro; Hasegawa, Toru; Suzuki, Kenji

    High-speed IP communication is a killer application for 3rd generation (3G) mobile systems. Thus 3G network operators should perform extensive tests to check whether the expected end-to-end performance is provided to customers under various environments. An important objective of such tests is to check whether network nodes fulfill requirements on packet-processing durations, because long processing durations cause performance degradation. This requires testers (persons who perform the tests) to know precisely how long a packet is held by each network node. Without any tool's help, this task is time-consuming and error-prone. Thus we propose a multi-point packet header analysis tool which extracts and records packet headers with synchronized timestamps at multiple observation points. The recorded packet headers enable testers to calculate these holding durations. The notable feature of this tool is that it is implemented on off-the-shelf hardware platforms, i.e., laptop personal computers. The key challenges of the implementation are precise clock synchronization without any special hardware and a sophisticated header extraction algorithm that avoids packet drops.
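
    A minimal sketch of one observation point is given below, assuming the scapy library for capture (the tool described above is a custom implementation; the file name and field selection here are illustrative). Each record pairs a capture timestamp with enough header fields to match the same packet across observation points, from which per-node holding durations can be computed.

```python
import csv
from scapy.all import sniff, IP, TCP, UDP

def record(pkt, writer):
    """Write (timestamp, src, dst, sport, dport, IP ID) for one captured packet."""
    if IP not in pkt:
        return
    ip = pkt[IP]
    sport = dport = 0
    if TCP in pkt:
        sport, dport = pkt[TCP].sport, pkt[TCP].dport
    elif UDP in pkt:
        sport, dport = pkt[UDP].sport, pkt[UDP].dport
    writer.writerow([pkt.time, ip.src, ip.dst, sport, dport, ip.id])  # pkt.time: capture timestamp

if __name__ == "__main__":
    with open("headers_point_A.csv", "w", newline="") as f:
        w = csv.writer(f)
        # Capture 1000 IP packets at this observation point; matching the same
        # IP ID / flow tuple in the logs of two points yields the holding time.
        sniff(filter="ip", prn=lambda p: record(p, w), store=False, count=1000)
```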

  14. Localization and Mapping Using a Non-Central Catadioptric Camera System

    Science.gov (United States)

    Khurana, M.; Armenakis, C.

    2018-05-01

    This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find use in the navigation and mapping of robotic platforms owing to their wide field of view. Having a wider field of view, or rather a potential 360° field of view, allows the system to "see and move" more freely in the navigation space. A catadioptric camera system is a low-cost system which consists of a mirror and a camera; any perspective camera can be used. A platform was constructed to combine the mirror and a camera into a catadioptric system. A calibration method was developed in order to obtain the relative position and orientation between the two components so that they can be considered as one monolithic system. The mathematical model for localizing the system was determined using conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the achieved localization and mapping accuracies of the system. An iterative process of positioning and mapping was applied to determine the object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points achieved decimetre-level accuracies.
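
    The mapping step amounts to intersecting viewing rays from two platform poses. A generic midpoint triangulation, sketched below, applies even to non-central systems because each observation carries its own ray origin; this is an illustrative formulation, not the exact model used in the paper.

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Return the midpoint of the shortest segment between two 3D viewing rays,
    each given by an origin o and a direction d (rays must not be parallel)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    # Solve for the closest points o1 + s*d1 and o2 + t*d2.
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    s, t = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))   # object point estimate
```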

  15. Vibration extraction based on fast NCC algorithm and high-speed camera.

    Science.gov (United States)

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to complete vibration measurements in real time and to avoid the mass loading introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera which can capture images at up to 1000 frames per second. In order to process the captured images on the computer, the normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can accomplish a single displacement extraction about 10 times faster than traditional template matching, without installing any target panel on the structure. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals.
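
    The core of such a tracker can be sketched with OpenCV as below (an illustrative sketch, not the authors' code): the template is matched only inside a small window around the previous position, and the correlation peak is refined to sub-pixel accuracy by parabolic interpolation.

```python
import cv2
import numpy as np

def track_ncc_subpixel(frame, template, prev_xy, search=20):
    """Track `template` (grayscale) in `frame` near the previous top-left corner
    `prev_xy`, using zero-mean NCC restricted to a local search window."""
    th, tw = template.shape[:2]
    x0 = max(int(prev_xy[0]) - search, 0)
    y0 = max(int(prev_xy[1]) - search, 0)
    roi = frame[y0:y0 + th + 2 * search, x0:x0 + tw + 2 * search]
    ncc = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (px, py) = cv2.minMaxLoc(ncc)

    def parabolic(vm, v0, vp):
        # Sub-pixel offset of a 1D parabola fitted through three samples.
        den = vm - 2.0 * v0 + vp
        return 0.0 if den == 0 else 0.5 * (vm - vp) / den

    dx = parabolic(ncc[py, px - 1], ncc[py, px], ncc[py, px + 1]) if 0 < px < ncc.shape[1] - 1 else 0.0
    dy = parabolic(ncc[py - 1, px], ncc[py, px], ncc[py + 1, px]) if 0 < py < ncc.shape[0] - 1 else 0.0
    return x0 + px + dx, y0 + py + dy   # refined top-left corner in the frame
```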

  16. Completely optical orientation determination for an unstabilized aerial three-line camera

    Science.gov (United States)

    Wohlfeil, Jürgen

    2010-10-01

    Aerial line cameras allow the fast acquisition of high-resolution images at low cost. Unfortunately, measuring the camera's orientation at the necessary rate and precision requires considerable effort unless extensive camera stabilization is used; but stabilization in turn implies high cost, weight, and power consumption. This contribution shows that it is possible to derive the absolute exterior orientation of an unstabilized line camera entirely from its images and global position measurements. The presented approach is based on previous work on the determination of the relative orientation of subsequent lines using optical information from the remote sensing system. The relative orientation is used to pre-correct the line images, in which homologous points can then reliably be determined using the SURF operator. Together with the position measurements, these points are used to determine the absolute orientation from the relative orientations via bundle adjustment of a block of overlapping line images. The approach was tested on a flight with DLR's RGB three-line camera MFC. To evaluate the precision of the resulting orientation, measurements from a high-end navigation system and ground control points are used.

  17. Reducing the Variance of Intrinsic Camera Calibration Results in the ROS Camera_Calibration Package

    Science.gov (United States)

    Chiou, Geoffrey Nelson

    The intrinsic calibration of a camera is the process in which the internal optical and geometric characteristics of the camera are determined. If accurate intrinsic parameters of a camera are known, the ray in 3D space that every point in the image lies on can be determined. Pairing it with another camera allows the 3D positions of image points to be calculated by intersection of the rays. Accurate intrinsics also allow the position and orientation of a camera relative to some world coordinate system to be calculated. These two reasons for having accurate intrinsic calibration are especially important in the field of industrial robotics, where 3D cameras are frequently mounted on the ends of manipulators. In the ROS (Robot Operating System) ecosystem, the camera_calibration package is the default standard for intrinsic camera calibration. Several researchers from the Industrial Robotics & Automation division at Southwest Research Institute have noted that this package results in large variances in the intrinsic parameters of the camera when calibrating across multiple attempts. There are also open issues on this matter in their public repository that have not been addressed by the developers. In this thesis, we confirm that the camera_calibration package does indeed return different results across multiple attempts, test several possible hypotheses as to why, identify the reason, and provide a simple solution that fixes the cause of the issue.
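
    The variance in question can be reproduced outside ROS with OpenCV's calibrateCamera, which underlies the package; the sketch below (board size, square size and image sets are illustrative assumptions) runs one calibration attempt and returns the pinhole intrinsics, so repeating it over independently captured sets and taking the standard deviation exposes the attempt-to-attempt spread.

```python
import cv2
import numpy as np

def calibrate_once(images, board=(9, 6), square=0.025):
    """One intrinsic calibration attempt from chessboard images;
    returns (fx, fy, cx, cy)."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts, size = [], [], None
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K[0, 0], K[1, 1], K[0, 2], K[1, 2]

# results = [calibrate_once(s) for s in image_sets]   # one entry per attempt
# print(np.std(np.array(results), axis=0))            # spread of fx, fy, cx, cy
```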

  18. High mobility emissive organic semiconductor

    Science.gov (United States)

    Liu, Jie; Zhang, Hantang; Dong, Huanli; Meng, Lingqiang; Jiang, Longfeng; Jiang, Lang; Wang, Ying; Yu, Junsheng; Sun, Yanming; Hu, Wenping; Heeger, Alan J.

    2015-01-01

    The integration of high charge carrier mobility and high luminescence in an organic semiconductor is challenging, yet such materials are needed for organic light-emitting transistors and organic electrically pumped lasers. Here we show a novel organic semiconductor, 2,6-diphenylanthracene (DPA), which exhibits not only strong emission, with a single-crystal absolute fluorescence quantum yield of 41.2%, but also high charge carrier mobility, with a single-crystal mobility of 34 cm2 V−1 s−1. Organic light-emitting diodes (OLEDs) based on DPA give pure blue emission with a brightness of up to 6,627 cd m−2 and a turn-on voltage of 2.8 V. 2,6-Diphenylanthracene OLED arrays are successfully driven by DPA field-effect transistor arrays, demonstrating that DPA is a high-mobility emissive organic semiconductor with potential in organic optoelectronics. PMID:26620323

  19. Using a Camera Phone as a Mixed-Reality Laser Cannon

    Directory of Open Access Journals (Sweden)

    Fadi Chehimi

    2008-01-01

    Despite the ubiquity and rich features of current mobile phones, mobile games have failed to reach even the lowest estimates of expected revenues. This is unfortunate, as mobile phones offer unique possibilities for creating games aimed at attracting demographics not currently catered for by the traditional console market. As a result, there has been a growing call for greater innovation within the mobile games industry and support for games outside the current console genres. In this paper, we present the design and implementation of a novel location-based game which allows us to turn a camera phone into a mixed-reality laser cannon. The game uses specially designed coloured tags, which are worn by the players, and advanced colour tracking software running on a camera phone to create a novel first-person shoot-'em-up (FPS) with innovative game interactions and play.

  20. High spatial resolution infrared camera as ISS external experiment

    Science.gov (United States)

    Eckehard, Lorenz; Frerker, Hap; Fitch, Robert Alan

    The high spatial resolution infrared camera, as an ISS external experiment for monitoring global climate changes, uses ISS internal and external resources (e.g., data storage). The optical experiment will consist of an infrared camera for monitoring global climate changes from the ISS. This technology was evaluated by the German small satellite mission BIRD and further developed in different ESA projects. Compared to BIRD, the presented instrument uses proven, advanced sensor technologies (ISS external) and ISS on-board processing and storage capabilities (internal). The instrument will be equipped with a serial interface for TM/TC and several relay commands for the power supply. For data processing and storage a mass memory is required. Access to current attitude data is highly desired to produce geo-referenced maps, if possible by on-board processing.

  1. INTRODUCING NOVEL GENERATION OF HIGH ACCURACY CAMERA OPTICAL-TESTING AND CALIBRATION TEST-STANDS FEASIBLE FOR SERIES PRODUCTION OF CAMERAS

    Directory of Open Access Journals (Sweden)

    M. Nekouei Shahraki

    2015-12-01

    The recent advances in the field of computer vision have opened the door to many opportunities for taking advantage of these techniques and technologies in many fields and applications. The high demand for these systems in today's and future vehicles implies a high production volume of video cameras. These criteria make it critical to design test systems which deliver fast and accurate calibration and optical-testing capabilities. In this paper we introduce a new generation of test-stands delivering high calibration quality in single-shot calibration of fisheye surround-view cameras. The approach incorporates important geometric features from bundle-block calibration, delivers very high (sub-pixel) calibration accuracy, makes possible a very fast calibration procedure (a few seconds), and realizes autonomous calibration via machines. We have used the geometrical shape of a spherical helix (a 3D spherical spiral) with special geometrical characteristics, having a uniform radius which corresponds to uniform motion. This geometrical feature was mechanically realized using three-dimensional truncated icosahedrons, which practically allow the implementation of a spherical helix on multiple surfaces. Furthermore, the test-stand enables us to perform many other important optical tests, such as stray-light testing, allowing us to evaluate certain qualities of the camera optical module.

  2. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad hoc crisis situations and large-scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video from multiple non-overlapping cameras. In this paper, we present the two main components of automatic calibration. The first is the intra-camera geometry estimation, which leads to an estimate of the tilt angle, focal length and camera height and is important for the conversion from pixels to meters and vice versa. The second is the inter-camera topology inference, which leads to an estimate of the distance between cameras and is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.

  3. An autonomous sensor module based on a legacy CCTV camera

    Science.gov (United States)

    Kent, P. J.; Faulkner, D. A. A.; Marshall, G. F.

    2016-10-01

    A UK MoD-funded programme into autonomous sensor arrays (SAPIENT) has been developing new, highly capable sensor modules together with a scalable modular architecture for control and communication. As part of this system there is a desire to also utilise existing legacy sensors. The paper reports upon the development of a SAPIENT-compliant sensor module using a legacy closed-circuit television (CCTV) pan-tilt-zoom (PTZ) camera. The PTZ camera sensor provides three modes of operation. In the first mode, the camera is automatically slewed to acquire imagery of a specified scene area, e.g. to provide "eyes-on" confirmation for a human operator or for forensic purposes. In the second mode, the camera is directed to monitor an area of interest, with the zoom level automatically optimized for human detection at the appropriate range. Open-source algorithms (using OpenCV) are used to automatically detect pedestrians; their real-world positions are estimated and communicated back to the SAPIENT central fusion system. In the third mode of operation a "follow" mode is implemented, where the camera maintains the detected person within the camera field-of-view without requiring an end-user to directly control the camera with a joystick.
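
    As an example of the kind of open-source pedestrian detection referred to above, OpenCV ships a default HOG-based people detector; the sketch below uses it with illustrative parameters and is not necessarily the detector or configuration deployed in the module.

```python
import cv2

# HOG descriptor with OpenCV's pre-trained linear SVM people detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame):
    """Return pedestrian bounding boxes (x, y, w, h) found in a video frame."""
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    return [tuple(int(v) for v in r) for r in rects]
```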

  4. Centering mount for a gamma camera

    International Nuclear Information System (INIS)

    Mirkhodzhaev, A.Kh.; Kuznetsov, N.K.; Ostryj, Yu.E.

    1988-01-01

    A device for centering a γ-camera detector during radionuclide diagnosis is described. It permits the use of available medical couches instead of a table with a transparent top. The device can be used for centering the detector (when it is fixed at the lower end of a γ-camera) on a required area of the patient's body.

  5. Mobile phone based mini-spectrometer for rapid screening of skin cancer

    Science.gov (United States)

    Das, Anshuman; Swedish, Tristan; Wahi, Akshat; Moufarrej, Mira; Noland, Marie; Gurry, Thomas; Aranda-Michel, Edgar; Aksel, Deniz; Wagh, Sneha; Sadashivaiah, Vijay; Zhang, Xu; Raskar, Ramesh

    2015-06-01

    We demonstrate a highly sensitive mobile phone based spectrometer that has the potential to detect cancerous skin lesions in a rapid, non-invasive manner. Earlier reports of low-cost spectrometers utilize the camera of the mobile phone to image the field after it has passed through a diffraction grating. These approaches are inherently limited by the closed nature of mobile phone image sensors and built-in optical elements. The system presented uses a novel integrated grating and sensor that is compact, accurate and calibrated. Resolutions of about 10 nm can be achieved. Additionally, UV and visible LED excitation sources are built into the device. Data collection and analysis are simplified using the wireless interfaces and logical control on the smartphone. Furthermore, by utilizing an external sensor, the mobile phone camera can be used in conjunction with spectral measurements. We are exploring ways to use this device to measure the endogenous fluorescence of skin in order to distinguish cancerous from non-cancerous lesions with a mobile phone based dermatoscope.

  6. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multi-camera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multi-camera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multi-camera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required, resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multi-camera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated by tracking persons on our campus.
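
    For reference, the per-camera tracking step (CamShift over a hue-histogram back-projection) can be sketched with OpenCV as below; this is a generic illustration of the algorithm named in the abstract, not the smart-camera implementation, and the bounding-box initialization is assumed to come from a detector.

```python
import cv2

def make_camshift_tracker(frame, bbox):
    """Initialize a CamShift tracker from (x, y, w, h) in `frame` and return an
    update function that reports the new window on each subsequent frame."""
    x, y, w, h = bbox
    hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])  # hue model
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    state = {"window": bbox}

    def update(next_frame):
        hsv = cv2.cvtColor(next_frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, state["window"] = cv2.CamShift(backproj, state["window"], term)
        return state["window"]   # (x, y, w, h); serialize this for handover

    return update
```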

  7. Photography in Dermatologic Surgery: Selection of an Appropriate Camera Type for a Particular Clinical Application.

    Science.gov (United States)

    Chen, Brian R; Poon, Emily; Alam, Murad

    2017-08-01

    Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. The aim of this work was to evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point-and-shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, was evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point-and-shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.

  8. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    Science.gov (United States)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-side drum, demonstrated the effectiveness and accuracy of the proposed technique.
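
    The channel-separation step can be illustrated by the minimal sketch below, assuming a simple linear crosstalk model whose coefficients come from a calibration shot; the paper's actual correction method may differ in detail.

```python
import cv2
import numpy as np

def separate_views(color_frame, a_rb=0.05, a_br=0.05):
    """Split a color frame from the pseudo stereo rig into its two views and
    remove linear crosstalk: a_rb = red leaking into blue, a_br = blue into red."""
    b, g, r = cv2.split(color_frame.astype(np.float32))
    view_blue = np.clip(b - a_rb * r, 0, 255).astype(np.uint8)  # optical path 1
    view_red = np.clip(r - a_br * b, 0, 255).astype(np.uint8)   # optical path 2
    return view_blue, view_red   # fed to the regular stereo-DIC pipeline
```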

  9. A Bayes Theory-Based Modeling Algorithm to End-to-end Network Traffic

    Directory of Open Access Journals (Sweden)

    Zhao Hong-hao

    2016-01-01

    Recently, network traffic has been increasing exponentially due to all kinds of applications, such as the mobile Internet, smart cities, smart transportation, the Internet of Things, and so on. End-to-end network traffic therefore becomes more important for traffic engineering, yet end-to-end traffic estimation is usually highly difficult. This paper proposes a Bayes theory-based method to model the end-to-end network traffic. Firstly, the end-to-end network traffic is described as an independent, identically distributed normal process. Then Bayes theory is used to characterize the end-to-end network traffic. By calculating the parameters, the model is determined correctly. Simulation results show that our approach is feasible and effective.
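
    A minimal instance of this idea is the conjugate Bayesian update for the mean of an i.i.d. normal process, sketched below with illustrative numbers; the paper's exact model and parameterization may differ.

```python
import numpy as np

def posterior_mean_rate(samples, mu0, tau0_sq, sigma_sq):
    """Posterior N(post_mean, post_var) of the mean traffic rate, given i.i.d.
    normal observations with known variance sigma_sq and prior N(mu0, tau0_sq)."""
    n = len(samples)
    xbar = float(np.mean(samples))
    post_var = 1.0 / (1.0 / tau0_sq + n / sigma_sq)
    post_mean = post_var * (mu0 / tau0_sq + n * xbar / sigma_sq)
    return post_mean, post_var

# Example: observed end-to-end rates (Mb/s) for one origin-destination pair.
rates = [12.1, 11.7, 12.6, 12.3, 11.9]
mean, var = posterior_mean_rate(rates, mu0=10.0, tau0_sq=4.0, sigma_sq=0.25)
```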

  10. A mobile light source for carbon/nitrogen cameras

    International Nuclear Information System (INIS)

    Trower, W.P.; Melekhin, V.N.; Shvedunov, V.I.; Sobenin, N.P.

    1995-01-01

    The pulsed light source for carbon/nitrogen cameras developed to image concealed narcotics/explosives is described. This race-track microtron will produce 40 mA pulses of 70 MeV electrons, have minimal size and weight, and maximal ruggedness and reliability, so that it can be transported on a truck. (orig.)

  11. A mobile light source for carbon/nitrogen cameras

    Science.gov (United States)

    Trower, W. P.; Karev, A. I.; Melekhin, V. N.; Shvedunov, V. I.; Sobenin, N. P.

    1995-05-01

    The pulsed light source for carbon/nitrogen cameras developed to image concealed narcotics/explosives is described. This race-track microtron will produce 40 mA pulses of 70 MeV electrons, have minimal size and weight, and maximal ruggedness and reliability, so that it can be transported on a truck.

  12. Ultrahigh-speed, high-sensitivity color camera with 300,000-pixel single CCD

    Science.gov (United States)

    Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Ohtake, H.; Kurita, T.; Tanioka, K.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Etoh, T. G.

    2007-01-01

    We have developed an ultrahigh-speed, high-sensitivity portable color camera with a new 300,000-pixel single CCD. The 300,000-pixel CCD, which has four times the number of pixels of our initial model, was developed by seamlessly joining two 150,000-pixel CCDs. A green-red-green-blue (GRGB) Bayer filter is used to realize a color camera with the single-chip CCD. The camera is capable of ultrahigh-speed video recording at up to 1,000,000 frames/sec and is small enough to be handheld. We also developed a technology for dividing the CCD output signal to enable parallel, high-speed readout and recording in external memory; this makes possible long, continuous shots at 1,000 frames/second. In an experiment, video footage was captured at an athletics meet. Because of the high-speed shooting, even detailed movements of the athletes' muscles were captured. This camera can capture clear slow-motion videos, enabling previously impossible live footage to be imaged for various TV broadcasting programs.

  13. Ion mobility sensor system

    Science.gov (United States)

    Xu, Jun; Watson, David B.; Whitten, William B.

    2013-01-22

    An ion mobility sensor system including an ion mobility spectrometer and a differential mobility spectrometer coupled to the ion mobility spectrometer. The ion mobility spectrometer has a first chamber having a first end and a second end extending along a first direction, and a first electrode system that generates a constant electric field parallel to the first direction. The differential mobility spectrometer includes a second chamber having a third end and a fourth end configured such that a fluid may flow in a second direction from the third end to the fourth end, and a second electrode system that generates an asymmetric electric field within an interior of the second chamber. Additionally, the ion mobility spectrometer and the differential mobility spectrometer form an interface region. Also, the first end and the third end are positioned facing one another so that the constant electric field enters the third end and overlaps the fluid flowing in the second direction.

  14. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the used IGI Penta DigiCAM has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed with the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively, dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated at the block centres, while the inclined images outside the block centre are satisfactory but not very strongly connected. This leads to very high values of the Student test (T-test) for the finally used additional parameters, or in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras, with a size exceeding 5 μm, even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding

  15. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during the characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities. The instruments may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A, a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This sensor will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing.

  16. Camera Based Navigation System with Augmented Reality

    Directory of Open Access Journals (Sweden)

    M. Marcu

    2012-06-01

    Nowadays, smart mobile devices have enough processing power, memory, storage and always-connected wireless communication bandwidth to make them suitable for any type of application. Augmented reality (AR) proposes a new type of application that tries to enhance the real world by superimposing or combining virtual objects or computer-generated information with it. In this paper we present a camera-based navigation system with augmented reality integration. The proposed system works as follows: the user points the camera of the smartphone towards a point of interest, like a building or any other place, and the application searches for relevant information about that specific place and superimposes the data over the video feed on the display. When the user moves the camera away, changing its orientation, the data changes as well, in real time, showing the proper information about the place that is now in the camera view.

  17. Graphics hardware accelerated panorama builder for mobile phones

    Science.gov (United States)

    Bordallo López, Miguel; Hannuksela, Jari; Silvén, Olli; Vehviläinen, Markku

    2009-02-01

    Modern mobile communication devices frequently contain built-in cameras allowing users to capture high-resolution still images, but at the same time the imaging applications are facing both usability and throughput bottlenecks. The difficulties in taking ad hoc pictures of printed paper documents with multi-megapixel cellular phone cameras, a common business use case, illustrate these problems for anyone. The result can be examined only after several seconds and is often blurry, so a new picture is needed, although the viewfinder image had looked good. The process can be frustrating, with waits and the user not being able to predict the quality beforehand. The problems can be traced to the mismatch between processor speed and camera resolution, and to application interactivity demands. In this context we analyze building mosaic images of printed documents from frames selected from VGA resolution (640x480 pixel) video. High interactivity is achieved by providing real-time feedback on the quality, while simultaneously guiding the user's actions. The graphics processing unit of the mobile device can be used to speed up the reconstruction computations. To demonstrate the viability of the concept, we present an interactive document scanning application implemented on a Nokia N95 mobile phone.
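
    On a desktop, the mosaicking step itself can be prototyped with OpenCV's stitching pipeline, as sketched below; the paper's implementation runs on the phone's GPU with its own frame selection and real-time quality feedback, which this sketch does not reproduce.

```python
import cv2

def build_mosaic(frames):
    """Stitch selected video frames of a flat document into a single mosaic."""
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # planar/scan mode
    status, mosaic = stitcher.stitch(frames)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return mosaic

# Usage: mosaic = build_mosaic([cv2.imread(p) for p in selected_frame_paths])
```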

  18. Single software platform used for high speed data transfer implementation in a 65k pixel camera working in single photon counting mode

    International Nuclear Information System (INIS)

    Maj, P.; Kasiński, K.; Gryboś, P.; Szczygieł, R.; Kozioł, A.

    2015-01-01

    Integrated circuits designed for specific applications generally use non-standard communication methods. Hybrid pixel detector readout electronics produce a huge amount of data as a result of the number of frames per second. The data need to be transmitted to a higher-level system without limiting the ASIC's capabilities. Nowadays, the Camera Link interface is still one of the fastest communication methods, allowing transmission speeds up to 800 MB/s. In order to communicate between a higher-level system and the ASIC with a dedicated protocol, an FPGA with dedicated code is required. The configuration data is received from the PC and written to the ASIC. At the same time, the same FPGA should be able to transmit the data from the ASIC to the PC at very high speed. The camera should be an embedded system enabling autonomous operation and self-monitoring. In the presented solution, at least three different hardware platforms are used: an FPGA, a microprocessor with a real-time operating system, and a PC with end-user software. We present the use of a single software platform for high-speed data transfer from the 65k pixel camera to the personal computer.

  19. Single software platform used for high speed data transfer implementation in a 65k pixel camera working in single photon counting mode

    Science.gov (United States)

    Maj, P.; Kasiński, K.; Gryboś, P.; Szczygieł, R.; Kozioł, A.

    2015-12-01

    Integrated circuits designed for specific applications generally use non-standard communication methods. Hybrid pixel detector readout electronics produce a huge amount of data as a result of the number of frames per second. The data need to be transmitted to a higher-level system without limiting the ASIC's capabilities. Nowadays, the Camera Link interface is still one of the fastest communication methods, allowing transmission speeds up to 800 MB/s. In order to communicate between a higher-level system and the ASIC with a dedicated protocol, an FPGA with dedicated code is required. The configuration data is received from the PC and written to the ASIC. At the same time, the same FPGA should be able to transmit the data from the ASIC to the PC at very high speed. The camera should be an embedded system enabling autonomous operation and self-monitoring. In the presented solution, at least three different hardware platforms are used: an FPGA, a microprocessor with a real-time operating system, and a PC with end-user software. We present the use of a single software platform for high-speed data transfer from the 65k pixel camera to the personal computer.

  20. Variable high-resolution color CCD camera system with online capability for professional photo studio application

    Science.gov (United States)

    Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

    1998-04-01

    Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bits per channel, with an exposure time that can likewise be varied from 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approx. 2 seconds for an image of 2048 by 2048 pixels (12 MByte), 8 seconds for an image of 4096 by 4096 pixels (48 MByte) and 40 seconds for an image of 6144 by 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, it accepts most commercial lenses via existing lens adaptors. On the other hand, the eyelike can be used as a digital back on most commercial 4 by 5 inch view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.

  1. RoCoMAR: Robots' Controllable Mobility Aided Routing and Relay Architecture for Mobile Sensor Networks

    Science.gov (United States)

    Van Le, Duc; Oh, Hoon; Yoon, Seokhoon

    2013-01-01

    In a practical deployment, a mobile sensor network (MSN) suffers from low performance due to high node mobility, time-varying wireless channel properties, and obstacles between communicating nodes. In order to tackle the problem of low network performance and provide a desired end-to-end data transfer quality, in this paper we propose a novel ad hoc routing and relaying architecture, namely RoCoMAR (Robots' Controllable Mobility Aided Routing), that uses the robotic nodes' controllable mobility. RoCoMAR repeatedly performs a link reinforcement process with the objective of maximizing the network throughput, in which the lowest-quality link on the path is identified and replaced with high-quality links by placing a robotic node as a relay at an optimal position. The robotic node resigns as a relay if the objective is achieved or no more gain can be obtained with a new relay. Once placed as a relay, the robotic node performs adaptive link maintenance by adjusting its position according to the movements of the regular nodes. The simulation results show that RoCoMAR outperforms existing ad hoc routing protocols for MSNs in terms of network throughput and end-to-end delay. PMID:23881134
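
    The essence of the link reinforcement step can be sketched as below; this is a simplified illustration with hypothetical names and a midpoint placement heuristic, whereas RoCoMAR itself optimizes the relay position for throughput and adapts it as nodes move.

```python
import numpy as np

def plan_relay(node_positions, link_qualities, threshold):
    """Find the weakest link on a route (node_positions[i] -> node_positions[i+1])
    and, if its quality is below `threshold`, return (link index, candidate relay
    position at the link midpoint); otherwise return None."""
    worst = int(np.argmin(link_qualities))
    if link_qualities[worst] >= threshold:
        return None                      # no reinforcement needed
    a = np.asarray(node_positions[worst], dtype=float)
    b = np.asarray(node_positions[worst + 1], dtype=float)
    return worst, 0.5 * (a + b)          # robotic node moves here as a relay

# Example: 3-hop route with per-link quality (e.g., packet delivery ratio).
print(plan_relay([(0, 0), (30, 5), (60, 0), (95, 10)], [0.9, 0.4, 0.85], 0.6))
```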

  2. RoCoMAR: Robots’ Controllable Mobility Aided Routing and Relay Architecture for Mobile Sensor Networks

    Directory of Open Access Journals (Sweden)

    Seokhoon Yoon

    2013-07-01

    In a practical deployment, a mobile sensor network (MSN) suffers from low performance due to high node mobility, time-varying wireless channel properties, and obstacles between communicating nodes. In order to tackle the problem of low network performance and provide a desired end-to-end data transfer quality, in this paper we propose a novel ad hoc routing and relaying architecture, namely RoCoMAR (Robots' Controllable Mobility Aided Routing), that uses the robotic nodes' controllable mobility. RoCoMAR repeatedly performs a link reinforcement process with the objective of maximizing the network throughput, in which the lowest-quality link on the path is identified and replaced with high-quality links by placing a robotic node as a relay at an optimal position. The robotic node resigns as a relay if the objective is achieved or no more gain can be obtained with a new relay. Once placed as a relay, the robotic node performs adaptive link maintenance by adjusting its position according to the movements of the regular nodes. The simulation results show that RoCoMAR outperforms existing ad hoc routing protocols for MSNs in terms of network throughput and end-to-end delay.

  3. Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing

    Science.gov (United States)

    Ou, Meiying; Li, Shihua; Wang, Chaoli

    2013-12-01

    This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling and that the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interactions. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and by using the finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. Rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.

  4. Design of a Short/Open-Ended Slot Antenna with Capacitive Coupling Feed Strips for Hepta-Band Mobile Application

    Directory of Open Access Journals (Sweden)

    Kyoseung Keum

    2018-01-01

    In this paper, a planar printed hybrid short/open-ended slot antenna with capacitive coupling feed strips is proposed for hepta-band mobile applications. The proposed antenna is composed of a slotted ground plane on the top plane and two capacitive coupling feed strips with a chip inductor on the bottom plane. In the low frequency band, the short-ended long slot fed by strip 1 generates its half-wavelength resonance mode, whereas the T-shaped open-ended slot fed by strip 2 generates its quarter-wavelength resonance mode for the high frequency band. The antenna provides a wide bandwidth covering the GSM850/GSM900/DCS/PCS/UMTS/LTE2300/LTE2500 operating bands. Moreover, the antenna occupies a small volume of 15 mm × 50 mm × 1 mm. The operating principle of the proposed antenna and the simulation/measurement results are presented and discussed.

  5. Task Phase Recognition for Highly Mobile Workers in Large Building Complexes

    DEFF Research Database (Denmark)

    Stisen, Allan; Mathisen, Andreas; Krogh, Søren

    2016-01-01

    ...-scale indoor work environments, namely from a WiFi infrastructure providing coarse-grained indoor positioning, from inertial sensors in the workers' mobile phones, and from a task management system yielding information about the scheduled tasks' start and end locations. The methods presented have low requirements on the accuracy of the indoor positioning, and thus come with low deployment and maintenance effort in real-world settings. We evaluated the proposed methods in a large hospital complex, where the highly mobile workers were recruited among the non-clinical workforce. The evaluation is based on manually labelled real-world data collected over 4 days of regular work life of the mobile workforce. The collected data yields 83 tasks in total, involving 8 different orderlies from a major university hospital with a building area of 160,000 m2. The results show that the proposed methods can distinguish...

  6. Structured photocathodes for improved high-energy x-ray efficiency in streak cameras

    Energy Technology Data Exchange (ETDEWEB)

    Opachich, Y. P., E-mail: opachiyp@nv.doe.gov; Huffman, E.; Koch, J. A. [National Security Technologies, LLC, Livermore, California 94551 (United States); Bell, P. M.; Bradley, D. K.; Hatch, B.; Landen, O. L.; MacPhee, A. G.; Nagel, S. R. [Lawrence Livermore National Laboratory, Livermore, California 94551 (United States); Chen, N.; Gopal, A.; Udin, S. [Nanoshift LLC, Emeryville, California 94608 (United States); Feng, J. [Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Hilsabeck, T. J. [General Atomics, San Diego, California 92121 (United States)

    2016-11-15

    We have designed and fabricated a structured streak camera photocathode to provide enhanced efficiency for high-energy X-rays (1–12 keV). This gold-coated photocathode was tested in a streak camera and compared side by side against a conventional flat thin-film photocathode. Results show that the measured electron yield enhancement at energies ranging from 1 to 10 keV scales well with predictions, and that the total enhancement can be more than 3×. The spatial resolution of the streak camera does not show degradation in the structured region. We predict that the temporal resolution of the detector will also not be affected, as it is currently dominated by the slit width. This demonstration with Au motivates exploration of comparable enhancements with CsI and may revolutionize X-ray streak camera photocathode design.

  7. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    Directory of Open Access Journals (Sweden)

    Sergio Orts-Escolano

    2014-04-01

    In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and analysis and monitoring of movement. These features allow the construction of a robust representation of the environment and the interpretation of the behavior of mobile agents in the scene. It is also necessary to integrate the vision module into a global system that operates in a complex environment by receiving images from multiple acquisition devices at video frequency. Offering relevant information to higher-level systems and monitoring and making decisions in real time, it must satisfy a set of requirements such as time constraints, high availability, robustness, high processing speed and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.

  8. Calibration of high resolution digital camera based on different photogrammetric methods

    International Nuclear Information System (INIS)

    Hamid, N F A; Ahmad, A

    2014-01-01

    This paper presents methods of calibrating a high-resolution digital camera based on different configurations, comprising stereo and convergent setups. Both are performed in laboratory and field calibration. Laboratory calibration is based on a 3D test field where a calibration plate of dimensions 0.4 m × 0.4 m with a grid of targets at different heights is used. Field calibration uses the same concept of a 3D test field, comprising 81 target points located on flat ground over an area of 9 m × 9 m. In this study, a non-metric high-resolution digital camera, a Canon PowerShot SX230 HS, was calibrated in the laboratory and in the field using different configurations for data acquisition. The aim of the calibration is to investigate whether the internal digital camera parameters, such as the focal length, principal point and other parameters, remain the same or not. In the laboratory, a scale bar is placed in the test field for scaling the images, and approximate coordinates were used for the calibration process. A similar method is utilized in the field calibration. For both test fields, the digital images were acquired within a short period using stereo and convergent configurations. For field calibration, aerial digital images were acquired using an unmanned aerial vehicle (UAV) system. All the images were processed using photogrammetric calibration software. Different calibration results were obtained for the laboratory and field calibrations. The accuracy of the results is evaluated based on the standard deviation. In general, for photogrammetric and other applications the digital camera must be calibrated to obtain accurate measurements or results. The best method of calibration depends on the type of application. Finally, for most applications the digital camera is calibrated on site; hence, field calibration is the best method of calibration and could be employed for obtaining accurate

  9. Design and evaluation of a high-performance charge coupled device camera for astronomical imaging

    International Nuclear Information System (INIS)

    Shang, Yuanyuan; Guan, Yong; Zhang, Weigong; Pan, Wei; Liu, Hui; Zhang, Jie

    2009-01-01

    The Space Solar Telescope (SST) is the first Chinese space astronomy mission. This paper introduces the design of a high-performance 2K × 2K charge coupled device (CCD) camera that is an important payload in the Space Solar Telescope. The camera is composed of an analogue system and a digital embedded system. The analogue system is first discussed in detail, including the power and bias voltage supply circuit, power protection unit, CCD clock driver circuit, 16 bit A/D converter and low-noise amplifier circuit. The digital embedded system integrated with an NIOS II soft-core processor serves as the control and data acquisition system of the camera. In addition, research on evaluation methods for CCDs was carried out to evaluate the performance of the TH7899 CCD camera in relation to the requirements of the SST project. We present the evaluation results, including readout noise, linearity, quantum efficiency, dark current, full-well capacity, charge transfer efficiency and gain. The results show that this high-performance CCD camera can satisfy the specifications of the SST project
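
    As a hedged illustration of one standard evaluation method for the listed CCD parameters (not necessarily the procedure used for the TH7899 camera), gain and read noise can be estimated with the photon-transfer technique: the temporal variance of flat-field pairs is regressed against mean signal, and the slope gives the conversion gain. The frame data below are synthetic placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    gain_e_per_adu = 2.5   # assumed "true" conversion gain for the synthetic data
    read_noise_e = 10.0    # assumed read noise in electrons

    def flat_pair(mean_electrons, shape=(256, 256)):
        """Simulate two flat-field exposures at the same illumination level (in ADU)."""
        def one():
            signal_e = rng.poisson(mean_electrons, shape) + rng.normal(0, read_noise_e, shape)
            return signal_e / gain_e_per_adu
        return one(), one()

    means, variances = [], []
    for level in [500, 1000, 2000, 5000, 10000, 20000]:
        a, b = flat_pair(level)
        means.append(0.5 * (a.mean() + b.mean()))
        # Differencing the pair removes fixed-pattern noise; var(a - b) = 2 * temporal variance.
        variances.append(np.var(a - b) / 2.0)

    # Photon-transfer curve: variance = mean / gain + const, so slope = 1 / gain.
    slope, intercept = np.polyfit(means, variances, 1)
    print(f"Estimated gain: {1.0 / slope:.2f} e-/ADU (true {gain_e_per_adu})")
    print(f"Estimated read noise: {np.sqrt(max(intercept, 0)) / slope:.1f} e-")
    ```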

  10. Intraocular camera for retinal prostheses: Refractive and diffractive lens systems

    Science.gov (United States)

    Hauer, Michelle Christine

    The focus of this thesis is on the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearances. The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements and relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity. Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.

  11. High-speed two-frame gated camera for parameters measurement of Dragon-Ⅰ LIA

    International Nuclear Information System (INIS)

    Jiang Xiaoguo; Wang Yuan; Zhang Kaizhi; Shi Jinshui; Deng Jianjun; Li Jin

    2012-01-01

    A time-resolved measurement system capable of operating at very high speed is necessary for electron beam parameter diagnosis on the Dragon-Ⅰ linear induction accelerator (LIA). A two-frame gated camera system has been developed and put into operation. The camera system adopts the optical principle of splitting the imaging light beam into two parts in the image space of a long-focal-length lens. It includes a lens-coupled gated image intensifier, a CCD camera, and a high-speed shutter trigger device based on a large-scale field programmable gate array. The minimum exposure time for each image is about 3 ns, and the interval time between the two images can be adjusted in steps of about 0.5 ns. The exposure time and the interval time can be adjusted independently and can reach about 1 s. The camera system features good linearity, good response uniformity, an equivalent background illumination (EBI) as low as about 5 electrons per pixel per second, a large sensitivity adjustment range, and excellent flexibility and adaptability in applications. The camera system can capture two frames at a time with an image size of 1024 × 1024 pixels. It meets the measurement requirements of the Dragon-Ⅰ LIA. (authors)

  12. Real-Time Acquisition of High Quality Face Sequences from an Active Pan-Tilt-Zoom Camera

    DEFF Research Database (Denmark)

    Haque, Mohammad A.; Nasrollahi, Kamal; Moeslund, Thomas B.

    2013-01-01

    Traditional still camera-based facial image acquisition systems in surveillance applications produce low quality face images. This is mainly due to the distance between the camera and the subjects of interest. Furthermore, people in such videos usually move around and change their head poses and facial expressions. This paper presents a pan-tilt-zoom (PTZ) camera-based real-time high-quality face image acquisition system, which utilizes the pan-tilt-zoom parameters of a camera to focus on a human face in a scene and employs a face quality assessment method to log the best quality faces from the captured frames. The system consists of four modules: face detection, camera control, face tracking, and face quality assessment before logging. Experimental results show that the proposed system can effectively log high quality faces from the active camera in real time (an average of 61.74 ms was spent per frame) with an accuracy of 85.27% compared to human-annotated data.
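
    As a rough sketch of the general idea (not the authors' implementation), a frame-by-frame quality logger can detect a face in each frame and keep the best crop seen so far; here face detection uses OpenCV's bundled Haar cascade and quality is approximated by the variance of the Laplacian, both illustrative stand-ins for the paper's quality assessment method.

    ```python
    import cv2

    # Illustrative choices: Haar cascade for detection, Laplacian variance as quality score.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def face_quality(gray_face):
        """Higher variance of the Laplacian ~ sharper (better focused) face crop."""
        return cv2.Laplacian(gray_face, cv2.CV_64F).var()

    def log_best_face(frames):
        """Scan a sequence of BGR frames and return the best-quality face crop found."""
        best_score, best_crop = -1.0, None
        for frame in frames:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in cascade.detectMultiScale(gray, 1.2, 5):
                score = face_quality(gray[y:y + h, x:x + w])
                if score > best_score:
                    best_score, best_crop = score, frame[y:y + h, x:x + w]
        return best_crop, best_score

    # Usage (assumes a video file or camera index is available):
    # cap = cv2.VideoCapture("surveillance.mp4")
    # frames = iter(lambda: cap.read()[1], None)
    # crop, score = log_best_face(frames)
    ```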

  13. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras

    Science.gov (United States)

    2017-10-01

    ARL-TR-8185, October 2017, US Army Research Laboratory: Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras, by Caitlin P Conn and Geoffrey H Goldman. Reporting period: June 2016 – October 2017.

  14. High electron mobility InN

    International Nuclear Information System (INIS)

    Jones, R. E.; Li, S. X.; Haller, E. E.; van Genuchten, H. C. M.; Yu, K. M.; Ager, J. W. III; Liliental-Weber, Z.; Walukiewicz, W.; Lu, H.; Schaff, W. J.

    2007-01-01

    Irradiation of InN films with 2 MeV He+ ions followed by thermal annealing below 500 °C creates films with high electron concentrations and mobilities, as well as strong photoluminescence. Calculations show that electron mobility in irradiated samples is limited by triply charged donor defects. Subsequent thermal annealing removes a fraction of the defects, decreasing the electron concentration. There is a large increase in electron mobility upon annealing; the mobilities approach those of the as-grown films, which have 10 to 100 times smaller electron concentrations. Spatial ordering of the triply charged defects is suggested to cause the unusual increase in electron mobility

  15. OTR profile measurement of a LINAC electron beam with portable ultra high-speed camera

    International Nuclear Information System (INIS)

    Mogi, T.; Nisiyama, S.; Tomioka, S.; Enoto, T.

    2004-01-01

    We have studied and developed a portable ultra-high-speed camera and applied it to the measurement of a LINAC electron beam. We measured spatial OTR profiles of the LINAC electron beam using this camera with a temporal resolution of 80 ns. (author)

  16. Experiments on mobile robot stereo vision system calibration under hardware imperfection

    Directory of Open Access Journals (Sweden)

    Safin Ramil

    2018-01-01

    Full Text Available Calibration is essential for any robot vision system to achieve high accuracy in deriving metric information about objects. One typical requirement for a stereo vision system to obtain better calibration results is to guarantee that both cameras are kept at the same vertical level. However, the cameras may become displaced due to the severe operating conditions of a robot or other circumstances. This paper presents our experimental approach to the problem of calibrating a mobile robot stereo vision system under hardware imperfection. In our experiments, we used the crawler-type mobile robot «Servosila Engineer». The stereo cameras of the robot were displaced relative to each other, causing a loss of surrounding environment information. We implemented and verified checkerboard- and circle-grid-based calibration methods. The comparison of the two methods demonstrated that circle-grid-based calibration should be preferred over the classical checkerboard calibration approach.
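
    For illustration only (assuming OpenCV and a set of pre-recorded calibration images; the target layouts below are assumptions), the two target types compared in the paper are detected with different OpenCV routines, and a simple harness can count successful detections per pattern before feeding the winner into stereo calibration.

    ```python
    import cv2

    CHESSBOARD = (9, 6)     # inner corners of the checkerboard target (assumed layout)
    CIRCLE_GRID = (4, 11)   # asymmetric circle grid layout (assumed)

    def detect_pattern(gray, use_circles):
        """Return (found, points) for either target type on one grayscale image."""
        if use_circles:
            return cv2.findCirclesGrid(
                gray, CIRCLE_GRID, flags=cv2.CALIB_CB_ASYMMETRIC_GRID)
        found, corners = cv2.findChessboardCorners(gray, CHESSBOARD)
        if found:  # refine checkerboard corners to sub-pixel accuracy
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        return found, corners

    def detection_rate(image_paths, use_circles):
        """Fraction of calibration images on which the target was successfully found."""
        hits = 0
        for path in image_paths:
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            if gray is not None and detect_pattern(gray, use_circles)[0]:
                hits += 1
        return hits / max(len(image_paths), 1)

    # Usage with hypothetical image sets:
    # print("checkerboard:", detection_rate(chessboard_images, use_circles=False))
    # print("circle grid: ", detection_rate(circlegrid_images, use_circles=True))
    ```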

  17. CHEETAH: circuit-switched high-speed end-to-end transport architecture

    Science.gov (United States)

    Veeraraghavan, Malathi; Zheng, Xuan; Lee, Hyuk; Gardner, M.; Feng, Wuchun

    2003-10-01

    Leveraging the dominance of Ethernet in LANs and SONET/SDH in MANs and WANs, we propose a service called CHEETAH (Circuit-switched High-speed End-to-End Transport ArcHitecture). The service concept is to provide end hosts with high-speed, end-to-end circuit connectivity on a call-by-call shared basis, where a "circuit" consists of Ethernet segments at the ends that are mapped into Ethernet-over-SONET long-distance circuits. This paper focuses on the file-transfer application for such circuits. For this application, the CHEETAH service is proposed as an add-on to the primary Internet access service already in place for enterprise hosts. This allows an end host that is sending a file to first attempt setting up an end-to-end Ethernet/EoS circuit, and if rejected, fall back to the TCP/IP path. If the circuit setup is successful, the end host will enjoy a much shorter file-transfer delay than on the TCP/IP path. To determine the conditions under which an end host with access to the CHEETAH service should attempt circuit setup, we analyze mean file-transfer delays as a function of call blocking probability in the circuit-switched network, probability of packet loss in the IP network, round-trip times, link rates, and so on.
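
    A hedged back-of-the-envelope sketch of the kind of decision the paper analyses (the variable names and the simple delay models below are assumptions for illustration, not CHEETAH's actual formulas): an end host should attempt circuit setup only when the expected delay, accounting for call blocking and the fallback to TCP, beats the TCP path alone.

    ```python
    def tcp_transfer_delay(file_bits, rtt_s, bottleneck_bps, loss_prob):
        """Very rough TCP delay model: slow-start approximated by extra RTTs plus
        throughput degraded by loss (illustrative placeholder, not from the paper)."""
        effective_rate = bottleneck_bps * max(1.0 - 10.0 * loss_prob, 0.1)
        return 5 * rtt_s + file_bits / effective_rate

    def circuit_transfer_delay(file_bits, setup_s, circuit_bps):
        """Circuit path: fixed setup signalling delay plus transmission at full rate."""
        return setup_s + file_bits / circuit_bps

    def expected_delay_with_cheetah(file_bits, p_block, setup_s, circuit_bps,
                                    rtt_s, bottleneck_bps, loss_prob):
        """If the circuit is blocked (probability p_block), fall back to the TCP path."""
        d_circuit = circuit_transfer_delay(file_bits, setup_s, circuit_bps)
        d_tcp = tcp_transfer_delay(file_bits, rtt_s, bottleneck_bps, loss_prob)
        return (1 - p_block) * d_circuit + p_block * (setup_s + d_tcp)

    if __name__ == "__main__":
        file_bits = 8 * 100e6  # 100 MB file
        d_tcp_only = tcp_transfer_delay(file_bits, rtt_s=0.05,
                                        bottleneck_bps=100e6, loss_prob=0.001)
        d_cheetah = expected_delay_with_cheetah(file_bits, p_block=0.05, setup_s=0.2,
                                                circuit_bps=1e9, rtt_s=0.05,
                                                bottleneck_bps=100e6, loss_prob=0.001)
        print(f"TCP only: {d_tcp_only:.2f} s, with circuit attempt: {d_cheetah:.2f} s")
        print("attempt circuit setup" if d_cheetah < d_tcp_only else "use TCP directly")
    ```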

  18. Advances in top-down and bottom-up approaches to video-based camera tracking

    OpenAIRE

    Marimón Sanjuán, David

    2007-01-01

    Video-based camera tracking consists in trailing the three dimensional pose followed by a mobile camera using video as sole input. In order to estimate the pose of a camera with respect to a real scene, one or more three dimensional references are needed. Examples of such references are landmarks with known geometric shape, or objects for which a model is generated beforehand. By comparing what is seen by a camera with what is geometrically known from reality, it is possible to recover the po...

  19. Advances in top-down and bottom-up approaches to video-based camera tracking

    OpenAIRE

    Marimón Sanjuán, David; Ebrahimi, Touradj

    2008-01-01

    Video-based camera tracking consists in trailing the three dimensional pose followed by a mobile camera using video as sole input. In order to estimate the pose of a camera with respect to a real scene, one or more three dimensional references are needed. Examples of such references are landmarks with known geometric shape, or objects for which a model is generated beforehand. By comparing what is seen by a camera with what is geometrically known from reality, it is possible to recover the po...

  20. An automated meta-monitoring mobile application and front-end interface for the ATLAS computing model

    Energy Technology Data Exchange (ETDEWEB)

    Kawamura, Gen; Quadt, Arnulf [II. Physikalisches Institut, Georg-August-Universitaet Goettingen (Germany)

    2016-07-01

    Efficient administration of computing centres requires advanced tools for monitoring and a front-end interface to the infrastructure. Large-scale distributed systems forming a global grid infrastructure, such as the Worldwide LHC Computing Grid (WLCG) and ATLAS computing, offer many existing web pages and information sources indicating the status of the services, systems and user jobs at grid sites. A meta-monitoring mobile application which automatically collects this information can give every administrator a sophisticated and flexible interface to the infrastructure. We describe such a solution: the MadFace mobile application developed at Goettingen. It is a HappyFace-compatible mobile application with a user-friendly interface. It also makes it feasible to automatically investigate status and problems from different sources, and it provides access to administration roles for non-experts.

  1. Emergency Based Remote Collateral Tracking System Using Google's Android Mobile Platform

    OpenAIRE

    Ramalingam, Ashokkumar; Dorairaj, Prabhu; Ramamoorthy, Saranya

    2011-01-01

    The introduction of smart phones redefined the usage of mobile phones in the communication world. Smart phones are equipped with various sophisticated features such as Wi-Fi, GPS navigation, a high resolution camera, and a touch screen with broadband access, which help mobile phone users keep in touch with the modern world. Many of these features are primarily integrated with the mobile operating system, which is out of reach of the public, so users cannot manipulate those features. Google cam...

  2. Comprehensive review on the development of high mobility in oxide thin film transistors

    Science.gov (United States)

    Choi, Jun Young; Lee, Sang Yeol

    2017-11-01

    Oxide materials are among the most advanced key technologies for thin film transistors (TFTs) in high-end device applications. Amorphous oxide semiconductors (AOSs) have become a leading technology for flat panel displays (FPD), active matrix organic light emitting diode displays (AMOLED) and active matrix liquid crystal displays (AMLCD) due to their excellent electrical characteristics, such as field effect mobility ( μ FE ), subthreshold swing (S.S) and threshold voltage ( V th ). In covalent semiconductors like amorphous silicon (a-Si), the electronic structure is attributed to the bonding and anti-bonding states of hybridized Si orbitals. AOSs, in contrast, are not limited by grain boundaries, and their excellent performance originates from a unique characteristic of AOSs: the direct orbital overlap between the s orbitals of neighboring metal cations. High mobility oxide TFTs have attracted considerable attention in the display industry over the last few years. Mobility has been progressively improved either by exploring various oxide semiconductors or by adopting new TFT structures, increasing from single digits to higher than 100 cm2/V·s within a decade. In this review, we comprehensively survey the mobility of oxide TFTs over the past decade and propose bandgap engineering and novel structures to enhance the electrical characteristics of oxide TFTs.

  3. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has improved drastically under the current demand for high-quality digital images. For example, a digital still camera has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it comes from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors with different spatio-temporal resolutions in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  4. Spacecraft 3D Augmented Reality Mobile App

    Science.gov (United States)

    Hussey, Kevin J.; Doronila, Paul R.; Kumanchik, Brian E.; Chan, Evan G.; Ellison, Douglas J.; Boeck, Andrea; Moore, Justin M.

    2013-01-01

    The Spacecraft 3D application allows users to learn about and interact with iconic NASA missions in a new and immersive way using common mobile devices. Using Augmented Reality (AR) techniques to project 3D renditions of the mission spacecraft into real-world surroundings, users can interact with and learn about Curiosity, GRAIL, Cassini, and Voyager. Additional updates on future missions, animations, and information will be ongoing. Using a printed AR Target and camera on a mobile device, users can get up close with these robotic explorers, see how some move, and learn about these engineering feats, which are used to expand knowledge and understanding about space. The software receives input from the mobile device's camera to recognize the presence of an AR marker in the camera's field of view. It then displays a 3D rendition of the selected spacecraft in the user's physical surroundings, on the mobile device's screen, while it tracks the device's movement in relation to the physical position of the spacecraft's 3D image on the AR marker.

  5. Dynamic resource allocation engine for cloud-based real-time video transcoding in mobile cloud computing environments

    Science.gov (United States)

    Adedayo, Bada; Wang, Qi; Alcaraz Calero, Jose M.; Grecos, Christos

    2015-02-01

    The recent explosion in video-related Internet traffic has been driven by the widespread use of smart mobile devices, particularly smartphones with advanced cameras that are able to record high-quality videos. Although many of these devices offer the facility to record videos at different spatial and temporal resolutions, primarily with local storage considerations in mind, most users only ever use the highest quality settings. The vast majority of these devices are optimised for compressing the acquired video using a single built-in codec and have neither the computational resources nor battery reserves to transcode the video to alternative formats. This paper proposes a new low-complexity dynamic resource allocation engine for cloud-based video transcoding services that are both scalable and capable of being delivered in real-time. Firstly, through extensive experimentation, we establish resource requirement benchmarks for a wide range of transcoding tasks. The set of tasks investigated covers the most widely used input formats (encoder type, resolution, amount of motion and frame rate) associated with mobile devices and the most popular output formats derived from a comprehensive set of use cases, e.g. a mobile news reporter transmitting video directly to a TV audience with a range of video format requirements, with minimal usage of resources both at the reporter's end and at the cloud infrastructure end providing the transcoding services.

  6. Distributed embedded smart cameras architectures, design and applications

    CERN Document Server

    Velipasalar, Senem

    2014-01-01

    This publication addresses distributed embedded smart cameras: cameras that perform onboard analysis and collaborate with other cameras. This book provides the material required to better understand the architectural design challenges of embedded smart camera systems, the hardware/software ecosystem, the design approach for, and applications of distributed smart cameras together with the state-of-the-art algorithms. The authors concentrate on the architecture, hardware/software design, and realization of smart camera networks from applications to architectures, in particular in the embedded and mobile domains.
    • Examines energy issues related to wireless communication, such as decreasing energy consumption to increase battery life
    • Discusses processing large volumes of video data in an embedded environment in real time
    • Covers design of realistic applications of distributed and embedded smart...

  7. OPTIMAL CAMERA NETWORK DESIGN FOR 3D MODELING OF CULTURAL HERITAGE

    Directory of Open Access Journals (Sweden)

    B. S. Alsadik

    2012-07-01

    Full Text Available Digital cultural heritage documentation in 3D is subject to research and practical applications nowadays. Image-based modeling is a technique to create 3D models, which starts with the basic task of designing the camera network. This task is, however, quite crucial in practical applications because it needs thorough planning and a certain level of expertise and experience. Bearing in mind today's (mobile) computational power, we think that the optimal camera network should be designed in the field, therefore making preprocessing and planning dispensable. The optimal camera network is designed when certain accuracy demands are fulfilled with a reasonable effort, namely keeping the number of camera shots at a minimum. In this study, we report on the development of an automatic method to design the optimum camera network for a given object of interest, focusing currently on buildings and statues. Starting from a rough point cloud derived from a video stream of object images, the initial configuration of the camera network, assuming a high-resolution state-of-the-art non-metric camera, is designed. To improve the image coverage and accuracy, we use a mathematical penalty method of optimization with constraints. From the experimental test, we found that, after optimization, the maximum coverage is attained besides a significant improvement of positional accuracy. Currently, we are working on a guiding system to ensure that the operator actually takes the desired images. Further steps will include a reliable and detailed modeling of the object applying sophisticated dense matching techniques.
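
    As a loose illustration of a penalty method for constrained camera placement (the 2D geometry, coverage model, weights and camera count below are assumptions for the sketch, not the authors' formulation), the constraint that every object point be seen by at least two cameras can be folded into the objective as a quadratic penalty and minimized with a general-purpose optimizer.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    points = rng.uniform(-1.0, 1.0, (60, 2))   # simplified 2D "object" points
    n_cams, view_radius = 6, 1.2               # assumed camera count and coverage radius

    def soft_coverage(cams, pts, sharpness=8.0):
        """Smooth count of how many cameras 'see' each point (toy visibility model)."""
        d = np.linalg.norm(pts[:, None, :] - cams[None, :, :], axis=2)
        return (1.0 / (1.0 + np.exp(-sharpness * (view_radius - d)))).sum(axis=1)

    def objective(x, penalty_weight=50.0):
        cams = x.reshape(n_cams, 2)
        # Effort term: prefer camera stations close to the object (proxy for fewer shots).
        effort = np.mean(np.linalg.norm(cams, axis=1) ** 2)
        # Quadratic penalty for points seen by fewer than two cameras.
        deficit = np.clip(2.0 - soft_coverage(cams, points), 0.0, None)
        return effort + penalty_weight * np.mean(deficit ** 2)

    x0 = rng.uniform(-2.0, 2.0, n_cams * 2)    # random initial camera positions
    res = minimize(objective, x0, method="L-BFGS-B")
    cams = res.x.reshape(n_cams, 2)
    uncovered = int((soft_coverage(cams, points) < 2.0).sum())
    print(f"objective: {res.fun:.3f}, points still under-covered: {uncovered}")
    ```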

  8. Wavefront measurement of plastic lenses for mobile-phone applications

    Science.gov (United States)

    Huang, Li-Ting; Cheng, Yuan-Chieh; Wang, Chung-Yen; Wang, Pei-Jen

    2016-08-01

    In camera lenses for mobile-phone applications, all lens elements are designed with aspheric surfaces because of the requirement for a minimal total track length of the lens. Due to the diffraction-limited optical design and precision assembly procedures, element inspection and lens performance measurement have become cumbersome in the production of mobile-phone cameras. Recently, wavefront measurements based on Shack-Hartmann sensors have been successfully implemented on injection-molded plastic lenses with aspheric surfaces. However, the application of wavefront measurement to small-sized plastic lenses has yet to be studied both theoretically and experimentally. In this paper, both an in-house-built and a commercial wavefront measurement system, configured on two optical structures, have been investigated by measuring wavefront aberrations on two lens elements from a mobile-phone camera. First, the wet-cell method has been employed to verify aberrations due to residual birefringence in an injection-molded lens. Then, two lens elements of a mobile-phone camera with large positive and negative power have been measured, with aberrations expressed in Zernike polynomials, to illustrate the effectiveness of wavefront measurement for troubleshooting defects in optical performance.

  9. Motion camera based on a custom vision sensor and an FPGA architecture

    Science.gov (United States)

    Arias-Estrada, Miguel

    1998-09-01

    A digital camera for custom focal plane arrays was developed. The camera allows the test and development of analog or mixed-mode arrays for focal plane processing. The camera is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques. The sensor communicates motion events using the event-address protocol associated with a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC, which is used for high-level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing like spatial edge detection or image segmentation tasks. The article details the motion algorithm, the sensor architecture, the use of the event-address protocol for velocity vector computation and the FPGA architecture used in the motion camera system.
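
    A minimal sketch (a software emulation, not the FPGA implementation described) of how per-pixel velocity follows from the time-of-travel between adjacent pixels in an address-event stream; the event tuple format and the pixel pitch are assumptions.

    ```python
    PIXEL_PITCH_UM = 20.0   # assumed pixel pitch of the custom sensor, in micrometres

    def velocities_from_events(events):
        """events: iterable of (timestamp_us, row, col) address-events from moving edges.
        Returns {(row, col): velocity_um_per_s} estimated from the time-of-travel
        between horizontally adjacent pixels that fired one after the other."""
        last_time = {}
        velocity = {}
        for t_us, row, col in sorted(events):
            left = (row, col - 1)
            if left in last_time:
                dt_us = t_us - last_time[left]
                if dt_us > 0:
                    velocity[(row, col)] = PIXEL_PITCH_UM / (dt_us * 1e-6)
            last_time[(row, col)] = t_us
        return velocity

    # Usage with a synthetic edge sweeping right at one pixel per 500 microseconds:
    events = [(i * 500, 10, 5 + i) for i in range(6)]
    print(velocities_from_events(events))   # ~40000 um/s, i.e. 40 mm/s, per pixel
    ```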

  10. Development of a high sensitivity pinhole type gamma camera using semiconductors for low dose rate fields

    Science.gov (United States)

    Ueno, Yuichiro; Takahashi, Isao; Ishitsu, Takafumi; Tadokoro, Takahiro; Okada, Koichi; Nagumo, Yasushi; Fujishima, Yasutake; Yoshida, Akira; Umegaki, Kikuo

    2018-06-01

    We developed a pinhole type gamma camera, using a compact detector module of a pixelated CdTe semiconductor, which has suitable sensitivity and quantitative accuracy for low dose rate fields. In order to improve the sensitivity of the pinhole type semiconductor gamma camera, we adopted three methods: a signal processing method to set the discriminating level lower, a high sensitivity pinhole collimator and a smoothing image filter that improves the efficiency of the source identification. We tested basic performances of the developed gamma camera and carefully examined effects of the three methods. From the sensitivity test, we found that the effective sensitivity was about 21 times higher than that of the gamma camera for high dose rate fields which we had previously developed. We confirmed that the gamma camera had sufficient sensitivity and high quantitative accuracy; for example, a weak hot spot (0.9 μSv/h) around a tree root could be detected within 45 min in a low dose rate field test, and errors of measured dose rates with point sources were less than 7% in a dose rate accuracy test.

  11. 75 FR 8112 - In the Matter of Certain Mobile Telephones and Wireless Communication Devices Featuring Digital...

    Science.gov (United States)

    2010-02-23

    INTERNATIONAL TRADE COMMISSION [Inv. No. 337-TA-703] In the Matter of Certain Mobile Telephones and Wireless Communication Devices Featuring Digital Cameras: notice concerning the importation of certain mobile telephones or wireless communication devices featuring digital cameras.

  12. A Novel Indoor Mobile Localization System Based on Optical Camera Communication

    Directory of Open Access Journals (Sweden)

    Md. Tanvir Hossan

    2018-01-01

    Full Text Available Localizing smartphones in indoor environments offers excellent opportunities for e-commerce. In this paper, we propose a localization technique for smartphones in indoor environments. This technique can calculate the coordinates of a smartphone using the existing illumination infrastructure with light-emitting diodes (LEDs). The system can locate smartphones without further modification of the existing LED light infrastructure. Smartphones do not have a fixed position and may move frequently anywhere in an environment. Our algorithm uses multiple (i.e., more than two) LED lights simultaneously. The smartphone receives the LED-IDs from the LED lights that are within the field of view (FOV) of the smartphone's camera. These LED-IDs contain the coordinate information (e.g., the x- and y-coordinates) of the LED lights. Concurrently, the pixel area of the projected image on the image sensor (IS) changes with the relative motion between the smartphone and each LED light, which allows the algorithm to calculate the distance from the smartphone to that LED. At the end of this paper, we present simulated results for predicting the next possible location of the smartphone using a Kalman filter to minimize the time delay of the coordinate calculation. These simulated results demonstrate that the position resolution can be maintained within 10 cm.
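
    To illustrate the final step only (a generic constant-velocity Kalman filter, with all matrices, time step and noise levels chosen here for the sketch rather than taken from the paper), the next smartphone position can be predicted from successive coordinate fixes as follows.

    ```python
    import numpy as np

    dt = 0.1                                   # assumed time between coordinate fixes (s)
    F = np.array([[1, 0, dt, 0],               # constant-velocity state transition
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], float)
    H = np.array([[1, 0, 0, 0],                # we only observe (x, y)
                  [0, 1, 0, 0]], float)
    Q = np.eye(4) * 1e-3                       # process noise (assumed)
    R = np.eye(2) * 5e-3                       # measurement noise (assumed)

    x = np.zeros(4)                            # state: [x, y, vx, vy]
    P = np.eye(4)

    def kalman_step(x, P, z):
        """One predict/update cycle; also returns the predicted next position."""
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(4) - K @ H) @ P_pred
        return x_new, P_new, (F @ x_new)[:2]   # next-position prediction

    # Usage: feed successive (x, y) fixes from the LED-based positioning.
    for z in [np.array([0.0, 0.0]), np.array([0.05, 0.02]), np.array([0.11, 0.05])]:
        x, P, next_xy = kalman_step(x, P, z)
    print("predicted next position (m):", np.round(next_xy, 3))
    ```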

  13. Performance Analysis of Horizontal Handover in Mobile WiMAX Supporting Mobile TV Services

    Directory of Open Access Journals (Sweden)

    Ammatia Risty

    2016-05-01

    Full Text Available IEEE 802.16e-2005 mobile WiMAX is one alternative technology that can provide a data rate of 15 Mbps, better than 3G, WLAN, and other technologies. Mobile WiMAX also provides wide area coverage. Therefore, WiMAX is able to deliver a wide range of multimedia applications such as VoIP, IPTV, video conferencing, and other real-time applications. In addition, Mobile WiMAX supports portable, mobile, and nomadic mobility. IPTV is now appearing on mobile phones as mobile TV, where IPTV services can be accessed while on the move. This requires a technology that supports mobility while still receiving the IPTV service well. Mobile WiMAX is a suitable technology to support IPTV services, especially for moving users. As a consequence of user movement, the user may perform a handover. This study analyzes the performance parameters that affect mobile TV when a user performs a handover on a mobile WiMAX network, such as jitter, end-to-end delay, throughput, and handover delay, under scenarios of different speeds and varying numbers of users within one coverage area. Based on the simulation results, for the speed-variation scenario (maximum 100 km/h) the end-to-end delay was 23.234 ms, the jitter 0.047 ms, and the throughput 637.723 Kbps. For the user-number scenario, the end-to-end delay was 27.218 ms, the jitter 0.057 ms, and the throughput 558.881 Kbps. The results of both scenarios show that as the speed and number of users increase, the quality-of-service parameters degrade but still satisfy the Mobile TV (IPTV) service quality requirements.

  14. Low-cost, portable, robust and high-resolution single-camera stereo-DIC system and its application in high-temperature deformation measurements

    Science.gov (United States)

    Chi, Yuxi; Yu, Liping; Pan, Bing

    2018-05-01

    A low-cost, portable, robust and high-resolution single-camera stereo-digital image correlation (stereo-DIC) system for accurate surface three-dimensional (3D) shape and deformation measurements is described. This system adopts a single consumer-grade high-resolution digital Single Lens Reflex (SLR) camera and a four-mirror adaptor, rather than two synchronized industrial digital cameras, for stereo image acquisition. In addition, monochromatic blue light illumination and coupled bandpass filter imaging are integrated to ensure the robustness of the system against ambient light variations. In contrast to conventional binocular stereo-DIC systems, the developed pseudo-stereo-DIC system offers the advantages of low cost, portability, robustness against ambient light variations, and high resolution. The accuracy and precision of the developed single SLR camera-based stereo-DIC system were validated by measuring the 3D shape of a stationary sphere along with in-plane and out-of-plane displacements of a translated planar plate. Application of the established system to thermal deformation measurement of an alumina ceramic plate and a stainless-steel plate subjected to radiation heating was also demonstrated.

  15. Fully automatic feature-based registration of mobile mapping and aerial nadir images for enabling the adjustment of mobile platform locations in gnss-denied urban environments

    NARCIS (Netherlands)

    Jende, P.; Nex, F.; Gerke, M.; Vosselman, G.; Heipke, C.; [et al], ...

    2017-01-01

    Mobile Mapping (MM) has gained significant importance in the realm of high-resolution data acquisition techniques. MM is able to record georeferenced street-level data in a continuous (laser scanners) and/or discrete (cameras) fashion. MM's georeferencing relies on a conjunction of Global Navigation Satellite Systems (GNSS)

  16. Design and Implementation of a Novel Portable 360° Stereo Camera System with Low-Cost Action Cameras

    Science.gov (United States)

    Holdener, D.; Nebiker, S.; Blaser, S.

    2017-11-01

    The demand for capturing indoor spaces is rising with the digitalization trend in the construction industry. An efficient solution for measuring challenging indoor environments is mobile mapping. Image-based systems with 360° panoramic coverage allow a rapid data acquisition and can be processed to georeferenced 3D images hosted in cloud-based 3D geoinformation services. For the multiview stereo camera system presented in this paper, a 360° coverage is achieved with a layout consisting of five horizontal stereo image pairs in a circular arrangement. The design is implemented as a low-cost solution based on a 3D printed camera rig and action cameras with fisheye lenses. The fisheye stereo system is successfully calibrated with accuracies sufficient for the applied measurement task. A comparison of 3D distances with reference data delivers maximal deviations of 3 cm on typical distances in indoor space of 2-8 m. Also the automatic computation of coloured point clouds from the stereo pairs is demonstrated.

  17. Measuring high-resolution sky luminance distributions with a CCD camera.

    Science.gov (United States)

    Tohsing, Korntip; Schrempf, Michael; Riechelmann, Stefan; Schilke, Holger; Seckmeyer, Gunther

    2013-03-10

    We describe how sky luminance can be derived from a newly developed hemispherical sky imager (HSI) system. The system contains a commercial compact charge coupled device (CCD) camera equipped with a fish-eye lens. The projection of the camera system has been found to be nearly equidistant. The luminance from the high dynamic range images has been calculated and then validated with luminance data measured by a CCD array spectroradiometer. The deviation between both datasets is less than 10% for cloudless and completely overcast skies, and differs by no more than 20% for all sky conditions. The global illuminance derived from the HSI pictures deviates by less than 5% and 20% under cloudless and cloudy skies for solar zenith angles less than 80°, respectively. This system is therefore capable of measuring sky luminance with a high spatial resolution of more than a million pixels and a high temporal resolution of one image every 20 s.
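
    A hedged sketch of the basic step of turning calibrated HDR camera pixels into luminance (the RGB-to-luminance weights below are the standard Rec. 709 values and the calibration factor is a placeholder; the HSI system's actual calibration against the spectroradiometer is more involved).

    ```python
    import numpy as np

    # Placeholder calibration factor relating weighted sensor response to cd/m^2,
    # normally obtained by comparison against a reference luminance measurement.
    CALIBRATION_CD_M2_PER_UNIT = 179.0

    def luminance_map(hdr_rgb):
        """hdr_rgb: float array (H, W, 3) of linear, radiometrically merged HDR values.
        Returns a per-pixel luminance estimate in cd/m^2."""
        weights = np.array([0.2126, 0.7152, 0.0722])   # Rec. 709 luminance weights
        relative = hdr_rgb @ weights
        return CALIBRATION_CD_M2_PER_UNIT * relative

    # Usage with a synthetic 4x4 "sky patch":
    hdr = np.random.default_rng(3).uniform(0.0, 50.0, (4, 4, 3))
    lum = luminance_map(hdr)
    print(f"mean luminance: {lum.mean():.1f} cd/m^2")
    ```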

  18. Status of the NectarCAM camera project

    International Nuclear Information System (INIS)

    Glicenstein, J.F.; Delagnes, E.; Fesquet, M.; Louis, F.; Moudden, Y.; Moulin, E.; Nunio, F.; Sizun, P.

    2014-01-01

    NectarCAM is a camera designed for the medium-sized telescopes of the Cherenkov Telescope Array (CTA), covering the central energy range of 100 GeV to 30 TeV. It has a modular design based on the NECTAr chip, at the heart of which is a GHz-sampling switched capacitor array and a 12-bit analog-to-digital converter. The camera will be equipped with 265 7-photomultiplier modules, covering a field of view of 7 to 8 degrees. Each module includes the photomultiplier bases, high-voltage supply, pre-amplifier, trigger, readout and Ethernet transceiver. Recorded events last between a few nanoseconds and tens of nanoseconds. A flexible trigger scheme allows very long events to be read out. NectarCAM can sustain a data rate of 10 kHz. The camera concept, the design and tests of the various sub-components, and the results of thermal and electrical prototypes are presented. The design includes the mechanical structure, the cooling of the electronics, read-out, clock distribution, slow control, data acquisition, trigger, monitoring and services. A 133-pixel prototype with full-scale mechanics, cooling, data acquisition and slow control will be built at the end of 2014. (authors)

  19. Distributing functionality in the Drift Scan Camera System

    International Nuclear Information System (INIS)

    Nicinski, T.; Constanta-Fanourakis, P.; MacKinnon, B.; Petravick, D.; Pluquet, C.; Rechenmacher, R.; Sergey, G.

    1993-11-01

    The Drift Scan Camera (DSC) System acquires image data from a CCD camera. The DSC is divided physically into two subsystems which are tightly coupled to each other. Functionality is split between these two subsystems: the front-end performs data acquisition while the host subsystem performs near real-time data analysis and control. Yet, through the use of backplane-based Remote Procedure Calls, the feel of one coherent system is preserved. Observers can control data acquisition, archiving to tape, and other functions from the host, but the front-end can accept these same commands and operate independently. The DSC meets the needs for such robustness and cost-effective computing

  20. Overexpression of Receptor for Advanced Glycation End Products and High-Mobility Group Box 1 in Human Dental Pulp Inflammation

    Directory of Open Access Journals (Sweden)

    Salunya Tancharoen

    2014-01-01

    Full Text Available High mobility group box 1 (HMGB1), a nonhistone DNA-binding protein, is released into the extracellular space and promotes inflammation. HMGB1 binds to related cell signaling transduction receptors, including the receptor for advanced glycation end products (RAGE), which actively participate in vascular and inflammatory diseases. The aim of this study was to examine whether RAGE and HMGB1 are involved in the pathogenesis of pulpitis and to investigate the effect of Prevotella intermedia (P. intermedia) lipopolysaccharide (LPS) on RAGE and HMGB1 expression in odontoblast-like cells (OLC-1). RAGE and HMGB1 expression levels in clinically inflamed dental pulp were higher than those in healthy dental pulp. Upregulated expression of RAGE was observed in odontoblasts, stromal pulp fibroblast-like cells, and endothelial-like cells lining human pulpitis tissue. Strong cytoplasmic HMGB1 immunoreactivity was noted in odontoblasts, whereas nuclear HMGB1 immunoreactivity was seen in stromal pulp fibroblast-like cells in human pulpitis tissue. LPS stimulated OLC-1 cells to produce HMGB1 in a dose-dependent manner through RAGE. HMGB1 translocation towards the cytoplasm and secretion from OLC-1 in response to LPS was inhibited by TPCA-1, an inhibitor of NF-κB activation. These findings suggest that RAGE and HMGB1 play an important role in the pulpal immune response to oral bacterial infection.

  1. Mobile patient monitoring: The MobiHealth system

    NARCIS (Netherlands)

    Wac, K.E.; Bults, Richard G.A.; van Beijnum, Bernhard J.F.; Widya, I.A.; Jones, Valerie M.; Konstantas, D.; Vollenbroek-Hutten, Miriam Marie Rosé; Hermens, Hermanus J.

    2009-01-01

    The emergence of high bandwidth public wireless networks and miniaturized personal mobile devices gives rise to new mobile healthcare services. To this end, the MobiHealth system provides a highly customizable vital signs tele-monitoring and tele-treatment system based on a body area network (BAN) and

  2. Blood Culture Testing via a Mobile App That Uses a Mobile Phone Camera: A Feasibility Study.

    Science.gov (United States)

    Lee, Guna; Lee, Yura; Chong, Yong Pil; Jang, Seongsoo; Kim, Mi Na; Kim, Jeong Hoon; Kim, Woo Sung; Lee, Jae-Ho

    2016-10-26

    To evaluate patients with fever of unknown origin or those with suspected bacteremia, the precision of blood culture tests is critical. An inappropriate step in the test process or error in a parameter could lead to a false-positive result, which could then affect the direction of treatment in critical conditions. Mobile health apps can be used to resolve problems with blood culture tests, and such apps can hence ensure that point-of-care guidelines are followed and processes are monitored for blood culture tests. In this pilot project, we aimed to investigate the feasibility of using a mobile blood culture app to manage blood culture test quality. We implemented the app at a university hospital in South Korea to assess the potential for its utilization in a clinical environment by reviewing the usage data among a small group of users and by assessing their feedback and the data related to blood culture sampling. We used an iOS-based blood culture app that uses an embedded camera to scan the patient identification and sample number bar codes. A total of 4 medical interns working at 2 medical intensive care units (MICUs) participated in this project, which spanned 3 weeks. App usage and blood culture sampling parameters (including sampler, sampling site, sampling time, and sample volume) were analyzed. The compliance of sampling parameter entry was also measured. In addition, the participants' opinions regarding patient safety, timeliness, efficiency, and usability were recorded. In total, 356/644 (55.3%) of all blood culture samples obtained at the MICUs were examined using the app, including 254/356 (71.3%) with blood collection volumes of 5-7 mL and 256/356 (71.9%) with blood collection from the peripheral veins. The sampling volume differed among the participants. Sampling parameters were completely entered in 354/356 cases (99.4%). All the participants agreed that the app ensured good patient safety, disagreed on its timeliness, and did not believe that it was

  3. Compton camera study for high efficiency SPECT and benchmark with Anger system

    Science.gov (United States)

    Fontana, M.; Dauvergne, D.; Létang, J. M.; Ley, J.-L.; Testa, É.

    2017-12-01

    Single photon emission computed tomography (SPECT) is at present one of the major techniques for non-invasive diagnostics in nuclear medicine. The clinical routine is mostly based on collimated cameras, originally proposed by Hal Anger. Due to the presence of mechanical collimation, detection efficiency and energy acceptance are limited and fixed by the system’s geometrical features. In order to overcome these limitations, the application of Compton cameras for SPECT has been investigated for several years. In this study we compare a commercial SPECT-Anger device, the General Electric HealthCare Infinia system with a High Energy General Purpose (HEGP) collimator, and the Compton camera prototype under development by the French collaboration CLaRyS, through Monte Carlo simulations (GATE—GEANT4 Application for Tomographic Emission—version 7.1 and GEANT4 version 9.6, respectively). Given the possible introduction of new radio-emitters at higher energies intrinsically allowed by the Compton camera detection principle, the two detectors are exposed to point-like sources at increasing primary gamma energies, from actual isotopes already suggested for nuclear medicine applications. The Compton camera prototype is first characterized for SPECT application by studying the main parameters affecting its imaging performance: detector energy resolution and random coincidence rate. The two detector performances are then compared in terms of radial event distribution, detection efficiency and final image, obtained by gamma transmission analysis for the Anger system, and with an iterative List Mode-Maximum Likelihood Expectation Maximization (LM-MLEM) algorithm for the Compton reconstruction. The results show for the Compton camera a detection efficiency increased by a factor larger than an order of magnitude with respect to the Anger camera, associated with an enhanced spatial resolution for energies beyond 500 keV. We discuss the advantages of Compton camera application
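
    For orientation only, the LM-MLEM reconstruction mentioned above uses the same multiplicative update rule as standard binned MLEM; the tiny sketch below (with a random system matrix standing in for the structured Compton camera response, and all sizes chosen for illustration) shows that update.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_pixels, n_detector_bins = 64, 200

    # Placeholder system matrix A[i, j]: probability that an emission from image
    # pixel j is recorded in detector bin i (a real Compton response is structured).
    A = rng.uniform(0.0, 1.0, (n_detector_bins, n_pixels))
    A /= A.sum(axis=0, keepdims=True)

    true_image = np.zeros(n_pixels)
    true_image[20] = 100.0                      # a single hot spot
    measured = rng.poisson(A @ true_image)      # noisy projection data

    def mlem(y, A, n_iter=50):
        """Multiplicative MLEM update: x <- x / sens * A^T (y / (A x))."""
        x = np.ones(A.shape[1])
        sensitivity = A.sum(axis=0)
        for _ in range(n_iter):
            expected = A @ x
            ratio = np.where(expected > 0, y / expected, 0.0)
            x *= (A.T @ ratio) / sensitivity
        return x

    recon = mlem(measured, A)
    print("hot-spot pixel index (true value 20):", int(np.argmax(recon)))
    ```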

  4. Low-cost and high-speed optical mark reader based on an intelligent line camera

    Science.gov (United States)

    Hussmann, Stephan; Chan, Leona; Fung, Celine; Albrecht, Martin

    2003-08-01

    Optical Mark Recognition (OMR) is thoroughly reliable and highly efficient provided that high standards are maintained at both the planning and implementation stages. It is necessary to ensure that OMR forms are designed with due attention to data integrity checks, that the best use is made of features built into the OMR, that data integrity is checked before the data are processed, and that the data are validated before processing. This paper describes the design and implementation of an OMR prototype system for marking multiple-choice tests automatically. Parameter testing is carried out before the platform and the multiple-choice answer sheet are designed. Position recognition and position verification methods have been developed and implemented in an intelligent line scan camera. The position recognition process is implemented in a Field Programmable Gate Array (FPGA), whereas the verification process is implemented in a micro-controller. The verified results are then sent to the Graphical User Interface (GUI) for answer checking and statistical analysis. At the end of the paper, the proposed OMR system is compared with commercially available systems on the market.
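
    A simplified sketch of the mark-checking step (independent of the FPGA implementation; the bubble grid geometry and fill threshold are assumptions for illustration): once the answer sheet is registered, each bubble region is scored by its fraction of dark pixels and the uniquely marked choice is reported.

    ```python
    import numpy as np

    FILL_THRESHOLD = 0.4   # assumed: a bubble counts as "marked" if >40% of its pixels are dark

    def read_answers(binary_sheet, bubble_boxes):
        """binary_sheet: 2D array, 1 = dark ink, 0 = paper.
        bubble_boxes: {question: [(row0, row1, col0, col1), ...]} one box per choice.
        Returns {question: chosen_index or None (blank or ambiguous)}."""
        answers = {}
        for question, boxes in bubble_boxes.items():
            fills = [binary_sheet[r0:r1, c0:c1].mean() for (r0, r1, c0, c1) in boxes]
            marked = [i for i, f in enumerate(fills) if f > FILL_THRESHOLD]
            answers[question] = marked[0] if len(marked) == 1 else None
        return answers

    # Usage with a tiny synthetic sheet: question 1 has choice B (index 1) filled in.
    sheet = np.zeros((20, 40), dtype=np.uint8)
    sheet[5:10, 12:18] = 1
    boxes = {1: [(5, 10, 2, 8), (5, 10, 12, 18), (5, 10, 22, 28), (5, 10, 32, 38)]}
    print(read_answers(sheet, boxes))   # {1: 1}
    ```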

  5. High mobility AlGaN/GaN devices for β−-dosimetry

    International Nuclear Information System (INIS)

    Schmid, Martin; Howgate, John; Ruehm, Werner; Thalhammer, Stefan

    2016-01-01

    There is a high demand in modern medical applications for dosimetry sensors with a small footprint allowing for unobtrusive or high spatial resolution detectors. To this end we characterize the sensoric response of radiation resistant high mobility AlGaN/GaN semiconductor devices when exposed to β−-emitters. The samples were operated as a floating gate transistor, without a field effect gate electrode, thus excluding any spurious effects from β−-particle interactions with a metallic surface covering. We demonstrate that the source–drain current is modulated in dependence on the kinetic energy of the incident β−-particles. Here, the signal is shown to have a linear dependence on the absorbed energy calculated from Monte Carlo simulations. Additionally, a stable and reproducible sensor performance as a β−-dose monitor is shown for individual radioisotopes. Our experimental findings and the characteristics of the AlGaN/GaN high mobility layered devices indicate their potential for future applications where small sensor size is necessary, like for instance brachytherapy.

  6. Mobile systems development

    DEFF Research Database (Denmark)

    Pedersen, Ole; Kristiansen, Martin Lund; Kammersgaard, Marc N.

    2007-01-01

    Development of mobile software is surrounded by much uncertainty. Immature software platforms on mobile clients, a highly competitive market calling for innovation, efficiency and effectiveness in the development life cycle, and lacking end-user adoption are just some of the realities facing development teams in the mobile software industry. By taking a process view on development of mobile systems we seek to explore the strengths and limitations of eXtreme Programming (XP) in the context of mobile software development. Following an experimental approach, a mobile systems development project ... in XP. In general, we find XP well-suited for mobile systems development projects. However, based on our experiences and an analytical comparison, we propose the following modifications to XP: make an essential design to avoid the worst time waste during refactoring; for faster development, reuse code...

  7. ISPA - a high accuracy X-ray and gamma camera Exhibition LEPFest 2000

    CERN Multimedia

    2000-01-01

    ISPA offers: ten times better resolution than Anger cameras; high-efficiency single gamma counting; and noise reduction by sensitivity to gamma energy ... for Single Photon Emission Computed Tomography (SPECT).

  8. Geant4 simulation of a 3D high resolution gamma camera

    International Nuclear Information System (INIS)

    Akhdar, H.; Kezzar, K.; Aksouh, F.; Assemi, N.; AlGhamdi, S.; AlGarawi, M.; Gerl, J.

    2015-01-01

    The aim of this work is to develop a 3D gamma camera with high position resolution and sensitivity, relying on both distance/absorption and Compton scattering techniques and without using any passive collimation. The proposed gamma camera is simulated in order to predict its performance, taking full benefit of Geant4 features that allow construction of the required detector geometry, full control of the incident gamma particles, and study of the detector response in order to test the suggested geometries. Three different geometries are simulated, and each configuration is tested with three different scintillation materials (LaBr3, LYSO and CeBr3)

  9. Solar-Powered Airplane with Cameras and WLAN

    Science.gov (United States)

    Higgins, Robert G.; Dunagan, Steve E.; Sullivan, Don; Slye, Robert; Brass, James; Leung, Joe G.; Gallmeyer, Bruce; Aoyagi, Michio; Wei, Mei Y.; Herwitz, Stanley R.; hide

    2004-01-01

    An experimental airborne remote sensing system includes a remotely controlled, lightweight, solar-powered airplane (see figure) that carries two digital-output electronic cameras and communicates with a nearby ground control and monitoring station via a wireless local-area network (WLAN). The speed of the airplane -- typically <50 km/h -- is low enough to enable loitering over farm fields, disaster scenes, or other areas of interest to collect high-resolution digital imagery that could be delivered to end users (e.g., farm managers or disaster-relief coordinators) in nearly real time.

  10. Imagers for digital still photography

    Science.gov (United States)

    Bosiers, Jan; Dillen, Bart; Draijer, Cees; Manoury, Erik-Jan; Meessen, Louis; Peters, Inge

    2006-04-01

    This paper gives an overview of the requirements for, and current state-of-the-art of, CCD and CMOS imagers for use in digital still photography. Four market segments will be reviewed: mobile imaging, consumer "point-and-shoot cameras", consumer digital SLR cameras and high-end professional camera systems. The paper will also present some challenges and innovations with respect to packaging, testing, and system integration.

  11. High-end encroachment patterns of new products

    NARCIS (Netherlands)

    Rhee, van der B.; Schmidt, G.; Orden, van J.

    2012-01-01

    Previous research describes two key ways in which a new product may encroach on an existing market. In high-end encroachment, the new product first sells to high-end customers and then diffuses down-market; in low-end encroachment, the new product enters at the low end and encroaches up-market. This

  12. Mobile phone based clinical microscopy for global health applications.

    Directory of Open Access Journals (Sweden)

    David N Breslauer

    Full Text Available Light microscopy provides a simple, cost-effective, and vital method for the diagnosis and screening of hematologic and infectious diseases. In many regions of the world, however, the required equipment is either unavailable or insufficiently portable, and operators may not possess adequate training to make full use of the images obtained. Counterintuitively, these same regions are often well served by mobile phone networks, suggesting the possibility of leveraging portable, camera-enabled mobile phones for diagnostic imaging and telemedicine. Toward this end we have built a mobile phone-mounted light microscope and demonstrated its potential for clinical use by imaging P. falciparum-infected and sickle red blood cells in brightfield and M. tuberculosis-infected sputum samples in fluorescence with LED excitation. In all cases resolution exceeded that necessary to detect blood cell and microorganism morphology, and with the tuberculosis samples we took further advantage of the digitized images to demonstrate automated bacillus counting via image analysis software. We expect such a telemedicine system for global healthcare via mobile phone -- offering inexpensive brightfield and fluorescence microscopy integrated with automated image analysis -- to provide an important tool for disease diagnosis and screening, particularly in the developing world and rural areas where laboratory facilities are scarce but mobile phone infrastructure is extensive.

  13. Mobile and embedded fast high resolution image stitching for long length rectangular monochromatic objects with periodic structure

    Science.gov (United States)

    Limonova, Elena; Tropin, Daniil; Savelyev, Boris; Mamay, Igor; Nikolaev, Dmitry

    2018-04-01

    In this paper we describe a stitching protocol which allows obtaining high resolution images of long monochromatic objects with periodic structure. This protocol can be used for long documents or human-made objects in satellite images of uninhabited regions such as the Arctic. The length of such objects can be considerable, while modern camera sensors have limited resolution and are not able to provide a good enough image of the whole object for further processing, e.g. for use in an OCR system. The idea of the proposed method is to acquire a video stream containing the full object at high resolution and to use image stitching. We expect the scanned object to have straight boundaries and a periodic structure, which allows us to introduce regularization into the stitching problem and to adapt the algorithm to the limited computational power of mobile and embedded CPUs. With the help of the detected boundaries and structure, we estimate the homography between frames and use this information to reduce the complexity of the stitching. We demonstrate our algorithm on a mobile device and show an image processing speed of 2 fps on a Samsung Exynos 5422 processor.
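
    A compact sketch of frame-to-frame stitching driven by an estimated homography (generic OpenCV feature matching rather than the paper's boundary-and-period regularization; the canvas size, ORB settings and naive compositing are illustrative choices).

    ```python
    import cv2
    import numpy as np

    def stitch_pair(base, new_frame, canvas_width=4000):
        """Estimate the homography mapping new_frame onto base and composite them."""
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(base, None)
        k2, d2 = orb.detectAndCompute(new_frame, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]

        src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

        canvas = np.zeros((base.shape[0], canvas_width), dtype=base.dtype)
        canvas[:, :base.shape[1]] = base
        warped = cv2.warpPerspective(new_frame, H, (canvas_width, base.shape[0]))
        return np.maximum(canvas, warped)   # naive compositing for the sketch

    # Usage (grayscale frames from the video stream, e.g. via cv2.VideoCapture):
    # panorama = frames[0]
    # for frame in frames[1:]:
    #     panorama = stitch_pair(panorama, frame)
    ```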

  14. Control Design and Digital Implementation of a Fast 2-Degree-of-Freedom Translational Optical Image Stabilizer for Image Sensors in Mobile Camera Phones.

    Science.gov (United States)

    Wang, Jeremy H-S; Qiu, Kang-Fu; Chao, Paul C-P

    2017-10-13

    This study presents the design, digital implementation and performance validation of a lead-lag controller for a 2-degree-of-freedom (DOF) translational optical image stabilizer (OIS) installed with a digital image sensor in mobile camera phones. Nowadays, OIS is an important feature of modern commercial mobile camera phones; it aims to mechanically reduce the image blur caused by hand shaking while shooting photos. The OIS developed in this study is able to move the imaging lens, by actuating its voice coil motors (VCMs) at the required speed, to the position that significantly compensates for imaging blur caused by hand shaking. The proposed compensation is made possible by first establishing the exact, nonlinear equations of motion (EOMs) for the OIS, followed by designing a simple lead-lag controller based on the established nonlinear EOMs for simple digital computation via a field-programmable gate array (FPGA) board in order to achieve fast response. Finally, experimental validation is conducted to show the favorable performance of the designed OIS; i.e., it is able to stabilize the lens holder to the desired position within 0.02 s, which is much less than previously reported times of around 0.1 s. Also, the resulting residual vibration is less than 2.2-2.5 μm, which is commensurate with the very small pixel size found in most commercial image sensors, thus significantly minimizing image blur caused by hand shaking.
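
    As an illustrative aside (the gain, zero, pole and sample rate below are assumptions, not the paper's tuned values), a continuous lead-lag compensator C(s) = K(s + z)/(s + p) can be discretized with the bilinear (Tustin) transform into a one-tap difference equation that is cheap enough for FPGA or microcontroller implementation.

    ```python
    # Discretize C(s) = K*(s + z)/(s + p) with s ~ (2/T)*(1 - q^-1)/(1 + q^-1),
    # giving u[k] = a1*u[k-1] + b0*e[k] + b1*e[k-1].

    K, z, p = 4.0, 50.0, 800.0      # assumed lead-lag gain, zero and pole (rad/s)
    T = 1.0 / 10000.0               # assumed 10 kHz control loop

    c = 2.0 / T
    b0 = K * (c + z) / (c + p)
    b1 = K * (z - c) / (c + p)
    a1 = (c - p) / (c + p)

    def make_leadlag():
        """Return a stateful per-sample controller: position error -> VCM command."""
        state = {"e_prev": 0.0, "u_prev": 0.0}
        def step(error):
            u = a1 * state["u_prev"] + b0 * error + b1 * state["e_prev"]
            state["e_prev"], state["u_prev"] = error, u
            return u
        return step

    controller = make_leadlag()
    # Feed a step in lens-position error and watch the command evolve:
    for k in range(5):
        print(f"u[{k}] = {controller(1.0):+.4f}")
    ```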

  15. End-range mobilization techniques in adhesive capsulitis of the shoulder joint: a multiple-subject case report.

    NARCIS (Netherlands)

    Vermeulen, H.M.; Obermann, W.R.; Burger, B.J.; Kok, G.J.; Rozing, P.M.; Ende, C.H.M. van den

    2000-01-01

    BACKGROUND AND PURPOSE: The purpose of this case report is to describe the use of end-range mobilization techniques in the management of patients with adhesive capsulitis. CASE DESCRIPTION: Four men and 3 women (mean age=50.2 years, SD=6.0, range=41-65) with adhesive capsulitis of the glenohumeral

  16. Real Time Indoor Robot Localization Using a Stationary Fisheye Camera

    OpenAIRE

    Delibasis, Konstantinos; Plagianakos, Vasilios; Maglogiannis, Ilias

    2013-01-01

    Part 7: Intelligent Signal and Image Processing; International audience; A core problem in robotics is the localization of a mobile robot (determination of the location or pose) in its environment, since the robot’s behavior depends on its position. In this work, we propose the use of a stationary fisheye camera for real time robot localization in indoor environments. We employ an image formation model for the fisheye camera, which is used for accelerating the segmentation of the robot’s top ...

  17. Divergence-ratio axi-vision camera (Divcam): A distance mapping camera

    International Nuclear Information System (INIS)

    Iizuka, Keigo

    2006-01-01

    A novel distance-mapping camera, the divergence-ratio axi-vision camera (Divcam), is proposed. The decay of the illuminating light with distance, due to the divergence of the light, is used as the means of mapping distance. Resolutions of 10 mm over a range of meters and 0.5 mm over a range of decimeters were achieved. The special features of this camera are its high-resolution real-time operation, simplicity, compactness, light weight, portability, and low fabrication cost. The feasibility of various potential applications is also included.
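
    As a rough illustration of the divergence-ratio principle, the sketch below assumes two point-like illuminators of equal power placed a known baseline apart on the optical axis, so that the pixelwise intensity ratio of the two captured frames encodes range through inverse-square decay. This simplified model and all parameter values are assumptions, not the exact Divcam optics.

        # Simplified distance-from-divergence-ratio model (an assumption, not the
        # exact Divcam design): two equal-power point sources sit on the optical
        # axis a baseline `delta` apart. Under inverse-square decay the pixelwise
        # ratio R = I_near / I_far satisfies R = ((r + delta) / r)**2, so
        # r = delta / (sqrt(R) - 1).
        import numpy as np

        def distance_map(img_near, img_far, delta, eps=1e-6):
            ratio = (img_near.astype(np.float64) + eps) / (img_far.astype(np.float64) + eps)
            ratio = np.clip(ratio, 1.0 + eps, None)  # ratio must exceed 1 for a valid range
            return delta / (np.sqrt(ratio) - 1.0)

        # Example: 0.1 m baseline, synthetic constant-ratio images (ratio 4 -> r = 0.1 m).
        near = np.full((4, 4), 400.0)
        far = np.full((4, 4), 100.0)
        print(distance_map(near, far, delta=0.1))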

  18. Coupling Front-End Separations, Ion Mobility Spectrometry, and Mass Spectrometry For Enhanced Multidimensional Biological and Environmental Analyses

    Science.gov (United States)

    Zheng, Xueyun; Wojcik, Roza; Zhang, Xing; Ibrahim, Yehia M.; Burnum-Johnson, Kristin E.; Orton, Daniel J.; Monroe, Matthew E.; Moore, Ronald J.; Smith, Richard D.; Baker, Erin S.

    2017-01-01

    Ion mobility spectrometry (IMS) is a widely used analytical technique for rapid molecular separations in the gas phase. Though IMS alone is useful, its coupling with mass spectrometry (MS) and front-end separations is extremely beneficial for increasing measurement sensitivity, peak capacity of complex mixtures, and the scope of molecular information available from biological and environmental sample analyses. In fact, multiple disease screening and environmental evaluations have illustrated that the IMS-based multidimensional separations extract information that cannot be acquired with each technique individually. This review highlights three-dimensional separations using IMS-MS in conjunction with a range of front-end techniques, such as gas chromatography, supercritical fluid chromatography, liquid chromatography, solid-phase extractions, capillary electrophoresis, field asymmetric ion mobility spectrometry, and microfluidic devices. The origination, current state, various applications, and future capabilities of these multidimensional approaches are described in detail to provide insight into their uses and benefits. PMID:28301728

  19. End-Point Contact Force Control with Quantitative Feedback Theory for Mobile Robots

    Directory of Open Access Journals (Sweden)

    Shuhuan Wen

    2012-12-01

    Full Text Available Robot force control is an important issue for intelligent mobile robotics. The end-point stiffness of a robot is a key and open problem in the research community. The control strategies are mostly dependent on both the specifications of the task and the environment of the robot. Due to the limited stiffness of the end-effector, we may use the inherent torque to feed back the oscillations of the controlled force. This paper proposes an effective control strategy that contains a controller designed using quantitative feedback theory. The nested-loop controllers take into account the physical limitations of the system's inner variables and harmful interference. The biggest advantage of the method is its simplicity in both the design process and the implementation of the control algorithm in engineering practice. Taking a one-link manipulator as an example, numerical experiments are carried out to verify the proposed control method. The results show satisfactory performance.

  20. Super-resolution processing for pulsed neutron imaging system using a high-speed camera

    International Nuclear Information System (INIS)

    Ishizuka, Ken; Kai, Tetsuya; Shinohara, Takenao; Segawa, Mariko; Mochiki, Koichi

    2015-01-01

    Super-resolution and center-of-gravity processing improve the resolution of neutron-transmitted images. These processing methods calculate the center-of-gravity pixel or sub-pixel of the neutron spot converted into light by a scintillator. The conventional neutron-transmitted image is acquired with a high-speed camera by integrating many frames, since a single frame does not provide a usable transmitted image. This approach succeeds in acquiring the transmitted image and in calculating a spectrum by integrating frames of the same energy. However, because a high frame rate is required for neutron resonance absorption imaging, the number of pixels of the transmitted image decreases, and the resolution decreases to the limit of the camera performance. Therefore, we attempt to improve the resolution by integrating the frames after applying super-resolution or center-of-gravity processing. The processed results indicate that center-of-gravity processing can be effective in pulsed-neutron imaging with a high-speed camera. In addition, the results show that super-resolution processing is effective indirectly. A project to develop a real-time image data processing system has begun, and this system will be used at J-PARC in JAEA. (author)
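
    A minimal sketch of the center-of-gravity step is shown below: the centroid of a single scintillation spot is computed at sub-pixel precision and the events are accumulated on a finer grid. The threshold and the one-spot-per-frame assumption are illustrative simplifications, not the cited system's processing chain.

        # Center-of-gravity (centroid) sub-pixel localization of one bright spot
        # per camera frame, followed by accumulation on an upscaled grid.
        import numpy as np

        def center_of_gravity(frame, threshold=None):
            f = frame.astype(np.float64)
            if threshold is None:
                threshold = f.mean() + 3.0 * f.std()  # crude background rejection
            f = np.where(f > threshold, f - threshold, 0.0)
            total = f.sum()
            if total == 0.0:
                return None                            # no event in this frame
            ys, xs = np.indices(f.shape)
            return (xs * f).sum() / total, (ys * f).sum() / total  # sub-pixel (x, y)

        def accumulate(events, shape, upscale=4):
            """Accumulate sub-pixel events into a finer grid (the super-resolved image)."""
            hi = np.zeros((shape[0] * upscale, shape[1] * upscale))
            for x, y in events:
                iy = min(int(round(y * upscale)), hi.shape[0] - 1)
                ix = min(int(round(x * upscale)), hi.shape[1] - 1)
                hi[iy, ix] += 1
            return hi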

  1. Optics for mobile phone imaging

    Science.gov (United States)

    Vigier-Blanc, Emmanuelle E.

    2004-02-01

    Micro cameras for mobile phones require specific opto-electronic designs using high-resolution micro technologies to reconcile optical, electronic and mechanical requirements. The purpose of this conference paper is to present the critical parameters of imaging optics embedded in mobile phones. We review the critical optical parameters involved in micro optical cameras, as seen from the user's point of view, and their interdependence and relative influence on the optical performance of the product: focal length, field of view and array size; lens speed and depth of field, i.e. what is hidden behind lens speed and how to balance a small aperture, production tolerances, sensitivity, good resolution in the corners and a large depth of field; relative illumination, the smooth fall-off of intensity toward the edge of the array; resolution, how to measure it, and the interaction of pixel size and small dimensions; sensitivity, ensuring the same sensitivity as the human eye under both twilight and midday sunny conditions; unwanted effects such as flare, glare and ghost images, and how to avoid them; and how to match the sensor spectrum to the photopic eye curve with an IR filter and color balancing. We balance the above parameters and show how to meet market needs while ensuring productivity.

  2. Upgrading of analogue cameras using modern PC based computer

    International Nuclear Information System (INIS)

    Pardom, M.F.; Matos, L.

    2002-01-01

    Aim: The use of computers along with analogue cameras enables them to perform tasks involving time-activity parameters. The INFORMENU system converts a modern PC into a dedicated nuclear medicine computer system with a total cost affordable to emerging economies, and is easily adaptable to all existing cameras. Materials and Methods: In collaboration with nuclear medicine physicians, an application including hardware and software was developed by a private firm. The system runs smoothly on Windows 98 and its operation is very easy. The main features are comparable to those of branded commercial computer systems, such as image resolution up to 1024 x 1024, low count loss at high count rates, uniformity correction, integrated graphical and text reporting, and user-defined clinical protocols. Results: The system is used in more than 20 private and public institutions. The count loss is less than 1% in all routine work, uniformity correction is improved 3-5 times, and the utility of the analogue cameras is improved. Conclusion: The INFORMENU system improves the utility of analogue cameras by permitting the inclusion of dynamic clinical protocols and quantification, helping the development of nuclear medicine practice. Operation and maintenance costs were lowered, and end users improve their knowledge of modern nuclear medicine.

  3. Texton-based super-resolution for achieving high spatiotemporal resolution in hybrid camera system

    Science.gov (United States)

    Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2010-05-01

    Many super-resolution methods have been proposed to enhance the spatial resolution of images by using iteration and multiple input images. In a previous paper, we proposed the example-based super-resolution method to enhance an image through pixel-based texton substitution to reduce the computational cost. In this method, however, we only considered the enhancement of a texture image. In this study, we modified this texton substitution method for a hybrid camera to reduce the required bandwidth of a high-resolution video camera. We applied our algorithm to pairs of high- and low-spatiotemporal-resolution videos, which were synthesized to simulate a hybrid camera. The result showed that the fine detail of the low-resolution video can be reproduced compared with bicubic interpolation and the required bandwidth could be reduced to about 1/5 in a video camera. It was also shown that the peak signal-to-noise ratios (PSNRs) of the images improved by about 6 dB in a trained frame and by 1.0-1.5 dB in a test frame, as determined by comparison with the processed image using bicubic interpolation, and the average PSNRs were higher than those obtained by the well-known Freeman’s patch-based super-resolution method. Compared with that of the Freeman’s patch-based super-resolution method, the computational time of our method was reduced to almost 1/10.
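
    The sketch below illustrates the general idea of example-based super-resolution by nearest-neighbour patch substitution: a dictionary of low-/high-resolution patch pairs is built from a training pair, and each test patch is replaced by the high-resolution counterpart of its nearest dictionary entry. It is a toy baseline, not the pixel-based texton method of the cited paper or Freeman's algorithm; image sizes are assumed divisible by the patch size.

        # Toy nearest-neighbour patch-substitution super-resolution.
        import numpy as np

        def extract_patches(img, k):
            """Non-overlapping k x k patches, flattened, in row-major order."""
            h, w = img.shape
            return np.array([img[i:i+k, j:j+k].ravel()
                             for i in range(0, h - k + 1, k)
                             for j in range(0, w - k + 1, k)])

        def build_dictionary(train_lo, train_hi, k, scale):
            """Paired low-res and high-res patch dictionaries from a training pair."""
            return extract_patches(train_lo, k), extract_patches(train_hi, k * scale)

        def super_resolve(test_lo, lo_dict, hi_dict, k, scale):
            h, w = test_lo.shape
            out = np.zeros((h * scale, w * scale))
            for i in range(0, h - k + 1, k):
                for j in range(0, w - k + 1, k):
                    patch = test_lo[i:i+k, j:j+k].ravel()
                    idx = np.argmin(((lo_dict - patch) ** 2).sum(axis=1))  # nearest texton
                    out[i*scale:(i+k)*scale, j*scale:(j+k)*scale] = \
                        hi_dict[idx].reshape(k * scale, k * scale)
            return out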

  4. Coupling Front-End Separations, Ion Mobility Spectrometry, and Mass Spectrometry For Enhanced Multidimensional Biological and Environmental Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Xueyun; Wojcik, Roza; Zhang, Xing; Ibrahim, Yehia M.; Burnum-Johnson, Kristin E.; Orton, Daniel J.; Monroe, Matthew E.; Moore, Ronald J.; Smith, Richard D.; Baker, Erin M.

    2017-06-12

    Ion mobility spectrometry (IMS) is a widely used analytical technique for rapid molecular separations in the gas phase. IMS alone is useful, but its coupling with mass spectrometry (MS) and front-end separations has been extremely beneficial for increasing measurement sensitivity, peak capacity of complex mixtures, and the scope of molecular information in biological and environmental sample analyses. Multiple studies in disease screening and environmental evaluations have even shown these IMS-based multidimensional separations extract information not possible with each technique individually. This review highlights 3-dimensional separations using IMS-MS in conjunction with a range of front-end techniques, such as gas chromatography (GC), supercritical fluid chromatography (SFC), liquid chromatography (LC), solid phase extractions (SPE), capillary electrophoresis (CE), field asymmetric ion mobility spectrometry (FAIMS), and microfluidic devices. The origination, current state, various applications, and future capabilities for these multidimensional approaches are described to provide insight into the utility and potential of each technique.

  5. A miniature low-cost LWIR camera with a 160×120 microbolometer FPA

    Science.gov (United States)

    Tepegoz, Murat; Kucukkomurler, Alper; Tankut, Firat; Eminoglu, Selim; Akin, Tayfun

    2014-06-01

    This paper presents the development of a miniature LWIR thermal camera, MSE070D, which targets value-performance infrared imaging applications and utilizes a 160x120 CMOS-based microbolometer FPA. MSE070D features a universal USB interface that can communicate with computers and some particular mobile devices on the market. In addition, it offers high flexibility and mobility thanks to its USB-powered, low-power design, eliminating the need for any external power source. MSE070D provides thermal imaging in a 1.65 inch³ volume using a vacuum-packaged CMOS-based microbolometer-type thermal sensor, MS1670A-VP, achieving moderate performance at a very low production cost. MSE070D allows 30 fps thermal video imaging at the 160x120 FPA size while achieving an NETD lower than 350 mK with f/1 optics. Test electronics and software, miniature camera cores, complete Application Programming Interfaces (APIs) and relevant documentation are available with MSE070D, as MikroSens wants to help its customers evaluate its products and ensure quick time-to-market for systems manufacturers.

  6. High vacuum high temperature x-ray camera (1961)

    International Nuclear Information System (INIS)

    Baron, J.L.

    1961-01-01

    - This camera makes it possible to carry out X-ray studies on highly oxidisable materials, up to about 900 deg. C. Most of the existing models do not provide sufficient protection against the formation of surface oxide or carbide films on the sample. The present arrangement makes it possible to operate at very low pressures, 5 × 10⁻⁸ to 10⁻⁷ torr, thanks to an entirely metallic apparatus. The radiation heating system consists of an incandescent lamp, outside the evacuated portion, and a reflector which concentrates the energy flux onto the sample through a silica window. The heated parts thus have only a small thermal inertia. With the apparatus it has been possible to determine the phase parameters of uranium-α up to 650 deg. C with a precision of ± 0.0015 Å. A similar study has been carried out on a uranium-chromium alloy in the β-phase up to 740 deg. C. (author) [fr

  7. High mobility AlGaN/GaN devices for β⁻-dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Schmid, Martin; Howgate, John; Ruehm, Werner [Helmholtz Zentrum München, Ingolstädter Landstraße 1, 85764 Neuherberg (Germany); Thalhammer, Stefan, E-mail: stefan.thalhammer@physik.uni-augsburg.de [Universität Augsburg, Universitätsstraße 1, 86159 Augsburg (Germany)

    2016-05-21

    There is a high demand in modern medical applications for dosimetry sensors with a small footprint allowing for unobtrusive or high spatial resolution detectors. To this end we characterize the sensoric response of radiation resistant high mobility AlGaN/GaN semiconductor devices when exposed to β⁻-emitters. The samples were operated as a floating gate transistor, without a field effect gate electrode, thus excluding any spurious effects from β⁻-particle interactions with a metallic surface covering. We demonstrate that the source–drain current is modulated in dependence on the kinetic energy of the incident β⁻-particles. Here, the signal is shown to have a linear dependence on the absorbed energy calculated from Monte Carlo simulations. Additionally, a stable and reproducible sensor performance as a β⁻-dose monitor is shown for individual radioisotopes. Our experimental findings and the characteristics of the AlGaN/GaN high mobility layered devices indicate their potential for future applications where small sensor size is necessary, like for instance brachytherapy.

  8. The Light Field Attachment: Turning a DSLR into a Light Field Camera Using a Low Budget Camera Ring

    KAUST Repository

    Wang, Yuwang

    2016-11-16

    We propose a concept for a lens attachment that turns a standard DSLR camera and lens into a light field camera. The attachment consists of 8 low-resolution, low-quality side cameras arranged around the central high-quality SLR lens. Unlike most existing light field camera architectures, this design provides a high-quality 2D image mode, while simultaneously enabling a new high-quality light field mode with a large camera baseline but little added weight, cost, or bulk compared with the base DSLR camera. From an algorithmic point of view, the high-quality light field mode is made possible by a new light field super-resolution method that first improves the spatial resolution and image quality of the side cameras and then interpolates additional views as needed. At the heart of this process is a super-resolution method that we call iterative Patch- And Depth-based Synthesis (iPADS), which combines patch-based and depth-based synthesis in a novel fashion. Experimental results obtained for both real captured data and synthetic data confirm that our method achieves substantial improvements in super-resolution for side-view images as well as the high-quality and view-coherent rendering of dense and high-resolution light fields.

  9. Effective Testing Principles for the Mobile Data Services Applications

    NARCIS (Netherlands)

    Srirama, S.; Kakumani, R.; Aggarwal, A.; Pawar, P.

    Wireless communication technologies like GPRS, UMTS and WLAN, combined with the availability of high-end, affordable mobile devices enable the development of the advanced and innovative mobile services. Devices such as mobile phones and Personal Digital Assistants let the users access a wide range

  10. Teleoperated Marsupial Mobile Sensor Platform Pair for Telepresence Insertion Into Challenging Structures

    Science.gov (United States)

    Krasowski, Michael J.; Prokop, Norman F.; Greer, Lawrence C.

    2011-01-01

    A platform has been developed for two or more vehicles with one or more residing within the other (a marsupial pair). This configuration consists of a large, versatile robot that is carrying a smaller, more specialized autonomous operating robot(s) and/or mobile repeaters for extended transmission. The larger vehicle, which is equipped with a ramp and/or a robotic arm, is used to operate over a more challenging topography than the smaller one(s) that may have a more limited inspection area to traverse. The intended use of this concept is to facilitate the insertion of a small video camera and sensor platform into a difficult entry area. In a terrestrial application, this may be a bus or a subway car with narrow aisles or steep stairs. The first field-tested configuration is a tracked vehicle bearing a rigid ramp of fixed length and width. A smaller six-wheeled vehicle approximately 10 in. (25 cm) wide by 12 in. (30 cm) long resides at the end of the ramp within the larger vehicle. The ramp extends from the larger vehicle and is tipped up into the air. Using video feedback from a camera atop the larger robot, the operator at a remote location can steer the larger vehicle to the bus door. Once positioned at the door, the operator can switch video feedback to a camera at the end of the ramp to facilitate the mating of the end of the ramp to the top landing at the upper terminus of the steps. The ramp can be lowered by remote control until its end is in contact with the top landing. At the same time, the end of the ramp bearing the smaller vehicle is raised to minimize the angle of the slope the smaller vehicle has to climb, and further gives the operator a better view of the entry to the bus from the smaller vehicle. Control is passed over to the smaller vehicle and, using video feedback from the camera, it is driven up the ramp, turned oblique into the bus, and then sent down the aisle for surveillance. The demonstrated vehicle was used to scale the steps leading to

  11. Measurement of the timing behaviour of off-the-shelf cameras

    Science.gov (United States)

    Schatz, Volker

    2017-04-01

    This paper presents a measurement method suitable for investigating the timing properties of cameras. A single light source illuminates the camera detector starting with a varying defined delay after the camera trigger. Pixels from the recorded camera frames are summed up and normalised, and the resulting function is indicative of the overlap between illumination and exposure. This allows one to infer the trigger delay and the exposure time with sub-microsecond accuracy. The method is therefore of interest when off-the-shelf cameras are used in reactive systems or synchronised with other cameras. It can supplement radiometric and geometric calibration methods for cameras in scientific use. A closer look at the measurement results reveals deviations from the ideal camera behaviour of constant sensitivity limited to the exposure interval. One of the industrial cameras investigated retains a small sensitivity long after the end of the nominal exposure interval. All three investigated cameras show non-linear variations of sensitivity at O(10⁻³) to O(10⁻²) during exposure. Due to its sign, the latter effect cannot be described by a sensitivity function depending on the time after triggering, but represents non-linear pixel characteristics.
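
    A simple way to read the trigger delay and exposure time off such an overlap curve is half-maximum edge finding, sketched below with synthetic data; the cited measurement may use a different fitting procedure, and the numbers here are illustrative.

        # Recover (trigger delay, exposure time) from the normalized overlap curve:
        # pixel sums recorded while sweeping the delay of a short light pulse.
        import numpy as np

        def exposure_window(delays_us, pixel_sums):
            r = np.asarray(pixel_sums, dtype=float)
            r = (r - r.min()) / (r.max() - r.min())   # normalise to [0, 1]
            above = np.nonzero(r >= 0.5)[0]           # samples inside the sensitivity window
            t_start, t_end = delays_us[above[0]], delays_us[above[-1]]
            return t_start, t_end - t_start           # (trigger delay, exposure time)

        # Example with a synthetic 100 µs exposure starting 20 µs after the trigger.
        delays = np.arange(0, 200, 1.0)               # pulse delay sweep in µs
        response = ((delays >= 20) & (delays < 120)).astype(float)
        print(exposure_window(delays, response))      # approximately (20.0, 99.0)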

  12. Depth camera driven mobile robot for human localization and following

    DEFF Research Database (Denmark)

    Skordilis, Nikolaos; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2014-01-01

    In this paper the design and the development of a mobile robot able to locate and then follow a human target is described. Both the integration of the required mechatronics components and the development of appropriate software are covered. The main sensor of the developed mobile robot is an RGB-...

  13. Remote hardware-reconfigurable robotic camera

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.

  14. End-User Attitudes towards Location-Based Services and Future Mobile Wireless Devices: The Students’ Perspective

    Directory of Open Access Journals (Sweden)

    Bogdan Cramariuc

    2011-07-01

    Full Text Available Nowadays, location-enabled mobile phones are becoming more and more widespread. Various players in the mobile business forecast that, in the future, a significant part of total wireless revenue will come from Location-Based Services (LBS). An LBS system extracts information about the user’s geographical location and provides services based on the positioning information. A successful LBS service should create value for the end-user, by satisfying some of the users’ needs or wants, and at the same time preserving the key factors of the mobile wireless device, such as low costs, low battery consumption, and small size. From many users’ perspectives, location services and mobile location capabilities are still rather poorly known and poorly understood. The aim of this research is to investigate users’ views on LBS, their requirements in terms of mobile device characteristics, their concerns in terms of privacy and usability, and their opinion on LBS applications that might increase social wellbeing in the future wireless world. Our research is based on two surveys performed among 105 students (average student age: 24 years) from two European technical universities. The survey questions were intended to solicit the youngsters’ views on present and future technological trends and on their perceived needs and wishes regarding Location-Based Services, with the aim of obtaining a better understanding of designer constraints when building a location receiver and generating new ideas related to potential future killer LBS applications.

  15. Compact streak camera for the shock study of solids by using the high-pressure gas gun

    Science.gov (United States)

    Nagayama, Kunihito; Mori, Yasuhito

    1993-01-01

    For the precise observation of high-speed impact phenomena, a compact high-speed streak camera recording system has been developed. The system consists of a high-pressure gas gun, a streak camera, and a long-pulse dye laser. The gas gun installed in our laboratory has a muzzle 40 mm in diameter and a launch tube 2 m long. Projectile velocity is measured by the laser beam cut method. The gun is capable of accelerating a 27 g projectile up to 500 m/s if helium gas is used as the driver. The system has been designed on the principle that the precise optical measurement methods developed in other areas of research can be applied to the gun study. The streak camera is 300 mm in diameter, with a rectangular rotating mirror driven by an air-turbine spindle. The attainable streak velocity is 3 mm/µs. The camera is rather small, with portability and economy in mind; the streak velocity is therefore lower than that of fast cameras, but it is possible to use low-sensitivity, high-resolution film as the recording medium. We have also constructed a pulsed dye laser of 25-30 µs duration. The laser can be used as the light source for observation. The advantages of using the laser are manifold, e.g., good directivity and nearly single-frequency operation. The feasibility of the system has been demonstrated by performing several experiments.

  16. Compact CdZnTe-Based Gamma Camera For Prostate Cancer Imaging

    International Nuclear Information System (INIS)

    Cui, Y.; Lall, T.; Tsui, B.; Yu, J.; Mahler, G.; Bolotnikov, A.; Vaska, P.; DeGeronimo, G.; O'Connor, P.; Meinken, G.; Joyal, J.; Barrett, J.; Camarda, G.; Hossain, A.; Kim, K.H.; Yang, G.; Pomper, M.; Cho, S.; Weisman, K.; Seo, Y.; Babich, J.; LaFrance, N.; James, R.B.

    2011-01-01

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate-specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate, and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs and potentially find cancer tissues at early stages, but their application in diagnosing prostate cancer has been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with a wide band gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages. The performance tests of this camera

  17. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    Science.gov (United States)

    Champey, Patrick R.; Kobayashi, Ken; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512x512 detector, dual-channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20 C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and less than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultra-violet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to run several tests intended to verify the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.

  18. Front-end multiplexing—applied to SQUID multiplexing: Athena X-IFU and QUBIC experiments

    Science.gov (United States)

    Prele, D.

    2015-08-01

    As seen in the digital camera market, where sensor resolution has grown to "megapixels", all scientific and high-tech imagers (whatever the wavelength, from the radio to the X-ray range) also tend to keep increasing their pixel count. The constraints on front-end signal transmission therefore increase too. An almost unavoidable solution to simplify the integration of large arrays of pixels is front-end multiplexing. Moreover, "simple" and "efficient" techniques allow the integration of read-out multiplexers in the focal plane itself. For instance, CCD (Charge Coupled Device) technology has boosted the number of pixels in digital cameras; indeed, it is exactly a planar technology that integrates both the sensors and a front-end multiplexed readout. In this context, front-end multiplexing techniques will be discussed for a better understanding of their advantages and their limits. Finally, the cases of astronomical instruments in the millimeter and X-ray ranges using SQUIDs (Superconducting QUantum Interference Devices) will be described.

  19. Front-end multiplexing—applied to SQUID multiplexing: Athena X-IFU and QUBIC experiments

    International Nuclear Information System (INIS)

    Prele, D.

    2015-01-01

    As seen in the digital camera market, where sensor resolution has grown to 'megapixels', all scientific and high-tech imagers (whatever the wavelength, from the radio to the X-ray range) also tend to keep increasing their pixel count. The constraints on front-end signal transmission therefore increase too. An almost unavoidable solution to simplify the integration of large arrays of pixels is front-end multiplexing. Moreover, 'simple' and 'efficient' techniques allow the integration of read-out multiplexers in the focal plane itself. For instance, CCD (Charge Coupled Device) technology has boosted the number of pixels in digital cameras; indeed, it is exactly a planar technology that integrates both the sensors and a front-end multiplexed readout. In this context, front-end multiplexing techniques will be discussed for a better understanding of their advantages and their limits. Finally, the cases of astronomical instruments in the millimeter and X-ray ranges using SQUIDs (Superconducting QUantum Interference Devices) will be described.

  20. High-End Scientific Computing

    Science.gov (United States)

    EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.

  1. Defocus Deblurring and Superresolution for Time-of-Flight Depth Cameras

    KAUST Repository

    Xiao, Lei

    2015-06-07

    Continuous-wave time-of-flight (ToF) cameras show great promise as low-cost depth image sensors in mobile applications. However, they also suffer from several challenges, including limited illumination intensity, which mandates the use of large numerical aperture lenses, and thus results in a shallow depth of field, making it difficult to capture scenes with large variations in depth. Another shortcoming is the limited spatial resolution of currently available ToF sensors. In this paper we analyze the image formation model for blurred ToF images. By directly working with raw sensor measurements but regularizing the recovered depth and amplitude images, we are able to simultaneously deblur and super-resolve the output of ToF cameras. Our method outperforms existing methods on both synthetic and real datasets. In the future our algorithm should extend easily to cameras that do not follow the cosine model of continuous-wave sensors, as well as to multi-frequency or multi-phase imaging employed in more recent ToF cameras.

  2. Defocus Deblurring and Superresolution for Time-of-Flight Depth Cameras

    KAUST Repository

    Xiao, Lei; Heide, Felix; O'Toole, Matthew; Kolb, Andreas; Hullin, Matthias B.; Kutulakos, Kyros; Heidrich, Wolfgang

    2015-01-01

    Continuous-wave time-of-flight (ToF) cameras show great promise as low-cost depth image sensors in mobile applications. However, they also suffer from several challenges, including limited illumination intensity, which mandates the use of large numerical aperture lenses, and thus results in a shallow depth of field, making it difficult to capture scenes with large variations in depth. Another shortcoming is the limited spatial resolution of currently available ToF sensors. In this paper we analyze the image formation model for blurred ToF images. By directly working with raw sensor measurements but regularizing the recovered depth and amplitude images, we are able to simultaneously deblur and super-resolve the output of ToF cameras. Our method outperforms existing methods on both synthetic and real datasets. In the future our algorithm should extend easily to cameras that do not follow the cosine model of continuous-wave sensors, as well as to multi-frequency or multi-phase imaging employed in more recent ToF cameras.

  3. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer

    Directory of Open Access Journals (Sweden)

    Bailey Y. Shen

    2017-01-01

    Full Text Available Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133mm×91mm×45mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient.
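
    A capture sequence of this kind can be scripted on the Raspberry Pi with the picamera and RPi.GPIO libraries, as in the hedged sketch below; the GPIO pin numbers, resolution and single-pin-per-LED wiring are assumptions for illustration, not the cited prototype's design.

        # Minimal capture sequence for a Raspberry Pi based non-mydriatic camera:
        # preview and align under infrared light, then switch on the white LED and
        # grab a single frame. Pin numbers and wiring are hypothetical.
        import time
        import picamera
        import RPi.GPIO as GPIO

        IR_LED_PIN, WHITE_LED_PIN = 17, 27      # hypothetical BCM pin numbers

        GPIO.setmode(GPIO.BCM)
        GPIO.setup([IR_LED_PIN, WHITE_LED_PIN], GPIO.OUT, initial=GPIO.LOW)

        with picamera.PiCamera(resolution=(1640, 1232)) as camera:
            GPIO.output(IR_LED_PIN, GPIO.HIGH)  # focus under IR so the pupil stays dilated
            camera.start_preview()
            time.sleep(5)                       # operator aligns the fundus view
            GPIO.output(IR_LED_PIN, GPIO.LOW)
            GPIO.output(WHITE_LED_PIN, GPIO.HIGH)
            camera.capture('fundus.jpg')        # single white-light exposure
            GPIO.output(WHITE_LED_PIN, GPIO.LOW)
            camera.stop_preview()

        GPIO.cleanup()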

  4. Very high frame rate volumetric integration of depth images on mobile devices.

    Science.gov (United States)

    Kähler, Olaf; Adrian Prisacariu, Victor; Yuheng Ren, Carl; Sun, Xin; Torr, Philip; Murray, David

    2015-11-01

    Volumetric methods provide efficient, flexible and simple ways of integrating multiple depth images into a full 3D model. They provide dense and photorealistic 3D reconstructions, and parallelised implementations on GPUs achieve real-time performance on modern graphics hardware. Running such methods on mobile devices, providing users with freedom of movement and instantaneous reconstruction feedback, however, remains challenging. In this paper we present a range of modifications to existing volumetric integration methods based on voxel block hashing, considerably improving their performance and making them applicable to tablet computer applications. We present (i) optimisations for the basic data structure, and its allocation and integration; (ii) a highly optimised raycasting pipeline; and (iii) extensions to the camera tracker to incorporate IMU data. In total, our system thus achieves frame rates of up to 47 Hz on an Nvidia Shield Tablet and 910 Hz on an Nvidia GTX Titan X GPU, or even beyond 1.1 kHz without visualisation.
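
    The voxel block hashing idea referred to above can be summarized with the sketch below, which quantizes world points to 8³-voxel blocks and hashes the block coordinates into a bucketed table using the prime constants common in the literature; the paper's optimized allocation, integration and raycasting code is not reproduced here.

        # Minimal voxel-block-hashing lookup in the style of InfiniTAM-like systems.
        VOXEL_SIZE = 0.005           # 5 mm voxels (illustrative)
        BLOCK_SIDE = 8               # voxels per block edge
        TABLE_SIZE = 2 ** 20

        def block_coord(p):
            """World point (x, y, z) in metres -> integer voxel-block coordinate."""
            return tuple(int(c // (VOXEL_SIZE * BLOCK_SIDE)) for c in p)

        def block_hash(bx, by, bz):
            # Spatial-hash primes widely used for voxel block hashing.
            return ((bx * 73856093) ^ (by * 19349669) ^ (bz * 83492791)) % TABLE_SIZE

        class HashedVolume:
            def __init__(self):
                self.table = {}      # bucket index -> {block coord: voxel block storage}

            def get_block(self, p, allocate=False):
                coord = block_coord(p)
                bucket = self.table.setdefault(block_hash(*coord), {})
                if coord not in bucket and allocate:
                    # one (TSDF value, weight) pair per voxel in the 8^3 block
                    bucket[coord] = [[0.0, 0] for _ in range(BLOCK_SIDE ** 3)]
                return bucket.get(coord)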

  5. Mobile Robot Positioning by using Low-Cost Visual Tracking System

    Directory of Open Access Journals (Sweden)

    Ruangpayoongsak Niramon

    2017-01-01

    Full Text Available This paper presents an application of a visual tracking system to mobile robot positioning. The proposed method is verified on a constructed low-cost tracking system consisting of a 2-DOF pan-tilt unit, a web camera and a distance sensor. The motion of the pan-tilt joints is controlled by an LQR controller running on a microcontroller. Without the need for camera calibration, the robot trajectory is tracked by a Kalman filter that integrates distance information and joint positions. The experimental results demonstrate the validity of the proposed positioning technique, and the obtained mobile robot trajectory is benchmarked against laser rangefinder positioning. The implemented system can successfully track a mobile robot driving at 14 cm/s.
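
    A minimal constant-velocity Kalman filter of the kind such a tracker can use is sketched below; it fuses a planar position measurement formed from the pan angle and the measured range. The measurement model, update rate and noise values are illustrative assumptions, not those of the cited system.

        # Constant-velocity Kalman filter for planar robot tracking. The measurement
        # z = [r*cos(pan), r*sin(pan)] is built from the pan angle and the range sensor.
        import numpy as np

        dt = 1.0 / 15.0                                  # assumed tracker update rate
        F = np.array([[1, 0, dt, 0],                     # state: [x, y, vx, vy]
                      [0, 1, 0, dt],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]], dtype=float)
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0]], dtype=float)
        Q = np.eye(4) * 1e-3                             # process noise (illustrative)
        R = np.eye(2) * 5e-3                             # measurement noise (illustrative)

        x, P = np.zeros(4), np.eye(4)

        def kalman_step(x, P, pan_rad, range_m):
            # Predict
            x = F @ x
            P = F @ P @ F.T + Q
            # Update with the Cartesian position implied by pan angle and range
            z = np.array([range_m * np.cos(pan_rad), range_m * np.sin(pan_rad)])
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(4) - K @ H) @ P
            return x, P

        x, P = kalman_step(x, P, pan_rad=0.1, range_m=1.2)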

  6. GWDC Expands High-End Market Share

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Expanding its high-end market share is a decision of great significance for GWDC in order to realize the transformation of its development strategy and improve its development quality. As an important step for GWDC in exploring the high-end market, the Oman PDO Project marks the first time that a Chinese petroleum engineering service team has cooperated with transnational petroleum corporations ranking among the top three in the world.

  7. IMAGE CAPTURE WITH SYNCHRONIZED MULTIPLE-CAMERAS FOR EXTRACTION OF ACCURATE GEOMETRIES

    Directory of Open Access Journals (Sweden)

    M. Koehl

    2016-06-01

    Full Text Available This paper presents a project of recording and modelling tunnels, traffic circles and roads from multiple sensors. The aim is the representation and the accurate 3D modelling of a selection of road infrastructures as dense point clouds in order to extract profiles and metrics from them. Indeed, these models will be used for the sizing of infrastructures in order to simulate exceptional convoy truck routes. The objective is to extract directly from the point clouds the heights, widths and lengths of bridges and tunnels and the diameter of gyratories, and to highlight potential obstacles for a convoy. Light, mobile and fast acquisition approaches based on images and videos from a set of synchronized sensors have been tested in order to obtain usable point clouds. The presented solution is based on a combination of multiple low-cost cameras mounted on an on-board device allowing dynamic captures. The experimental device containing GoPro Hero4 cameras has been set up and used for tests in static or mobile acquisitions. In this way, various configurations have been tested by using multiple synchronized cameras. These configurations are discussed in order to highlight the best operational configuration according to the shape of the acquired objects. As the precise calibration of each sensor and its optics are major factors in the creation of accurate dense point clouds, and in order to reach the best quality available from such cameras, the estimation of the internal parameters of the cameras' fisheye lenses has been performed. Reference measures were also realized by using a 3D TLS (Faro Focus 3D) to allow the accuracy assessment.
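
    Fisheye intrinsics of action cameras such as these are commonly estimated from checkerboard views with OpenCV's fisheye model, as in the sketch below; the board geometry and calibration flags are illustrative choices, not necessarily those used in the cited project.

        # Standard fisheye intrinsic calibration from checkerboard detections.
        import cv2
        import numpy as np

        BOARD = (9, 6)                                   # inner corners per row/column (assumed)
        SQUARE = 0.025                                   # square size in metres (assumed)

        objp = np.zeros((1, BOARD[0] * BOARD[1], 3), np.float64)
        objp[0, :, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

        def calibrate_fisheye(image_files):
            obj_points, img_points, size = [], [], None
            for path in image_files:
                gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
                ok, corners = cv2.findChessboardCorners(gray, BOARD)
                if ok:
                    obj_points.append(objp)
                    img_points.append(corners.reshape(1, -1, 2).astype(np.float64))
                    size = gray.shape[::-1]
            K, D = np.zeros((3, 3)), np.zeros((4, 1))
            flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW
            rms, K, D, _, _ = cv2.fisheye.calibrate(obj_points, img_points, size, K, D,
                                                    flags=flags)
            return rms, K, D                             # reprojection error, intrinsics, distortion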

  8. Image Capture with Synchronized Multiple-Cameras for Extraction of Accurate Geometries

    Science.gov (United States)

    Koehl, M.; Delacourt, T.; Boutry, C.

    2016-06-01

    This paper presents a project of recording and modelling tunnels, traffic circles and roads from multiple sensors. The aim is the representation and the accurate 3D modelling of a selection of road infrastructures as dense point clouds in order to extract profiles and metrics from it. Indeed, these models will be used for the sizing of infrastructures in order to simulate exceptional convoy truck routes. The objective is to extract directly from the point clouds the heights, widths and lengths of bridges and tunnels, the diameter of gyrating and to highlight potential obstacles for a convoy. Light, mobile and fast acquisition approaches based on images and videos from a set of synchronized sensors have been tested in order to obtain useable point clouds. The presented solution is based on a combination of multiple low-cost cameras designed on an on-boarded device allowing dynamic captures. The experimental device containing GoPro Hero4 cameras has been set up and used for tests in static or mobile acquisitions. That way, various configurations have been tested by using multiple synchronized cameras. These configurations are discussed in order to highlight the best operational configuration according to the shape of the acquired objects. As the precise calibration of each sensor and its optics are major factors in the process of creation of accurate dense point clouds, and in order to reach the best quality available from such cameras, the estimation of the internal parameters of fisheye lenses of the cameras has been processed. Reference measures were also realized by using a 3D TLS (Faro Focus 3D) to allow the accuracy assessment.

  9. Versatile ultrafast pump-probe imaging with high sensitivity CCD camera

    OpenAIRE

    Pezeril, Thomas; Klieber, Christoph; Temnov, Vasily; Huntzinger, Jean-Roch; Anane, Abdelmadjid

    2012-01-01

    International audience; A powerful imaging technique based on femtosecond time-resolved measurements with a high dynamic range, commercial CCD camera is presented. Ultrafast phenomena induced by a femtosecond laser pump are visualized through the lock-in type acquisition of images recorded by a femtosecond laser probe. This technique allows time-resolved measurements of laser excited phenomena at multiple probe wavelengths (spectrometer mode) or conventional imaging of the sample surface (ima...

  10. Comparison of the effectiveness of three retinal camera technologies for malarial retinopathy detection in Malawi

    Science.gov (United States)

    Soliz, Peter; Nemeth, Sheila C.; Barriga, E. Simon; Harding, Simon P.; Lewallen, Susan; Taylor, Terrie E.; MacCormick, Ian J.; Joshi, Vinayak S.

    2016-03-01

    The purpose of this study was to test the suitability of three available camera technologies (desktop, portable, and iPhone-based) for imaging comatose children who presented with clinical symptoms of malaria. Ultimately, the results of the project would form the basis for the design of a future camera to screen for malarial retinopathy (MR) in a resource-challenged environment. The desktop, portable, and iPhone-based cameras were represented by the Topcon, Pictor Plus, and Peek cameras, respectively. These cameras were tested on N=23 children presenting with symptoms of cerebral malaria (CM) at a malaria clinic, Queen Elizabeth Teaching Hospital in Malawi, Africa. Each patient was dilated for a binocular indirect ophthalmoscopy (BIO) exam by an ophthalmologist, followed by imaging with all three cameras. Each of the cases was graded according to an internationally established protocol and compared to the BIO as the clinical ground truth. The reader used three principal retinal lesions as markers for MR: hemorrhages, retinal whitening, and vessel discoloration. The study found that the mid-priced Pictor Plus hand-held camera performed considerably better than the lower-priced mobile phone-based camera, and slightly better than the higher-priced tabletop camera. When comparing the readings of digital images against the clinical reference standard (BIO), the Pictor Plus camera had sensitivity and specificity for MR of 100% and 87%, respectively. This compares to a sensitivity and specificity of 87% and 75% for the iPhone-based camera and 100% and 75% for the desktop camera. The drawback of all the cameras was their limited field of view, which did not allow a complete view of the periphery, where vessel discoloration occurs most frequently. The consequence was that vessel discoloration was not addressed in this study. None of the cameras offered real-time image quality assessment to ensure high quality images to afford the best possible opportunity for reading by a remotely located

  11. Using a High-Speed Camera to Measure the Speed of Sound

    Science.gov (United States)

    Hack, William Nathan; Baird, William H.

    2012-01-01

    The speed of sound is a physical property that can be measured easily in the lab. However, finding an inexpensive and intuitive way for students to determine this speed has been more involved. The introduction of affordable consumer-grade high-speed cameras (such as the Exilim EX-FC100) makes conceptually simple experiments feasible. Since the…

  12. Mobile Phone User Interfaces in Multiplayer Games

    OpenAIRE

    NURMINEN, MINNA

    2007-01-01

    This study focuses on the user interface elements of mobile phones and their qualities in multiplayer games. The mobile phone is not intended as a gaming device, and therefore its technology has many shortcomings when it comes to playing mobile games. One of those is the non-standardized user interface design. However, it also has some strengths, such as its portability and networked nature. In addition, many mobile phone models today have a camera, a feature only few gaming devices hav...

  13. SAME4HPC: A Promising Approach in Building a Scalable and Mobile Environment for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Karthik, Rajasekar [ORNL]

    2014-01-01

    In this paper, an architecture for building a Scalable And Mobile Environment for High-Performance Computing with spatial capabilities, called SAME4HPC, is described using cutting-edge technologies and standards such as Node.js, HTML5, ECMAScript 6, and PostgreSQL 9.4. Mobile devices are increasingly becoming powerful enough to run high-performance apps. At the same time, there exists a significant number of low-end and older devices that rely heavily on the server or cloud infrastructure to do the heavy lifting. Our architecture aims to support both of these types of devices to provide high performance and a rich user experience. A cloud infrastructure consisting of OpenStack with Ubuntu, GeoServer, and high-performance JavaScript frameworks is among the key open-source, industry-standard choices adopted in this architecture.

  14. High Thermoelectric Power Factor of High-Mobility 2D Electron Gas.

    Science.gov (United States)

    Ohta, Hiromichi; Kim, Sung Wng; Kaneki, Shota; Yamamoto, Atsushi; Hashizume, Tamotsu

    2018-01-01

    Thermoelectric conversion is an energy harvesting technology that directly converts waste heat from various sources into electricity by the Seebeck effect of thermoelectric materials with a large thermopower (S), high electrical conductivity (σ), and low thermal conductivity (κ). State-of-the-art nanostructuring techniques that significantly reduce κ have realized high-performance thermoelectric materials with a figure of merit (ZT = S²·σ·T·κ⁻¹) between 1.5 and 2. Although the power factor (PF = S²·σ) must also be enhanced to further improve ZT, the maximum PF remains near 1.5-4 mW m⁻¹ K⁻² due to the well-known trade-off relationship between S and σ. At a maximized PF, σ is much lower than the ideal value since impurity doping suppresses the carrier mobility. A metal-oxide-semiconductor high electron mobility transistor (MOS-HEMT) structure on an AlGaN/GaN heterostructure is prepared. Applying a gate electric field to the MOS-HEMT simultaneously modulates S and σ of the high-mobility electron gas from -490 µV K⁻¹ and ≈10⁻¹ S cm⁻¹ to -90 µV K⁻¹ and ≈10⁴ S cm⁻¹, while maintaining a high carrier mobility (≈1500 cm² V⁻¹ s⁻¹). The maximized PF of the high-mobility electron gas is ≈9 mW m⁻¹ K⁻², which is a two- to sixfold increase compared to state-of-the-art practical thermoelectric materials.
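
    As a quick numeric check of the quoted figures, the snippet below evaluates PF = S²·σ at the gated values S = -90 µV K⁻¹ and σ ≈ 10⁴ S cm⁻¹.

        # Power factor check: PF = S^2 * sigma in SI units.
        S = -90e-6                 # Seebeck coefficient in V/K
        sigma = 1e4 * 100          # 1e4 S/cm converted to S/m
        PF = S**2 * sigma          # W m^-1 K^-2
        print(PF * 1e3)            # ≈ 8.1 mW m^-1 K^-2, consistent with the quoted ≈9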

  15. Estimation of end of life mobile phones generation: The case study of the Czech Republic

    International Nuclear Information System (INIS)

    Polák, Miloš; Drápalová, Lenka

    2012-01-01

    Highlights: ► In this paper, we define lifespan of mobile phones and estimate their average total lifespan. ► The estimation of lifespan distribution is based on large sample of EoL mobile phones. ► Total lifespan of Czech mobile phones is surprisingly long, exactly 7.99 years. ► In the years 2010–20, about 26.3 million pieces of EoL mobile phones will be generated in the Czech Republic. - Abstract: The volume of waste electrical and electronic equipment (WEEE) has been rapidly growing in recent years. In the European Union (EU), legislation promoting the collection and recycling of WEEE has been in force since the year 2003. Yet, both current and recently suggested collection targets for WEEE are completely ineffective when it comes to collection and recycling of small WEEE (s-WEEE), with mobile phones as a typical example. Mobile phones are the most sold EEE and at the same time one of appliances with the lowest collection rate. To improve this situation, it is necessary to assess the amount of generated end of life (EoL) mobile phones as precisely as possible. This paper presents a method of assessment of EoL mobile phones generation based on delay model. Within the scope of this paper, the method has been applied on the Czech Republic data. However, this method can be applied also to other EoL appliances in or outside the Czech Republic. Our results show that the average total lifespan of Czech mobile phones is surprisingly long, exactly 7.99 years. We impute long lifespan particularly to a storage time of EoL mobile phones at households, estimated to be 4.35 years. In the years 1990–2000, only 45 thousands of EoL mobile phones were generated in the Czech Republic, while in the years 2000–2010 the number grew to 6.5 million pieces and it is estimated that in the years 2010–2020 about 26.3 million pieces will be generated. Current European legislation sets targets on collection and recycling of WEEE in general, but no specific collection target

  16. Scintillation camera for high activity sources

    International Nuclear Information System (INIS)

    Arseneau, R.E.

    1976-01-01

    A scintillation camera is provided with electrical components which expand the intrinsic maximum rate of acceptance for processing of pulses emanating from detected radioactive events. Buffer storage is provided to accommodate temporary increases in the level of radioactivity. An early provisional determination of acceptability of pulses allows many unacceptable pulses to be discarded at an early stage

  17. The neutron small-angle camera D11 at the high-flux reactor, Grenoble

    International Nuclear Information System (INIS)

    Ibel, K.

    1976-01-01

    The neutron small-angle scattering system at the high-flux reactor in Grenoble consists of three major parts: the supply of cold neutrons via bent neutron guides; the small-angle camera D11; and the data handling facilities. The camera D11 has an overall length of 80 m. The effective length of the camera is variable. The full length of the collimator before the fixed sample position can be reduced by movable neutron guides; the second flight path of 40 m full length contains detector sites in various positions. Thus a large range of momentum transfers can be used with the same relative resolution. Scattering angles between 5 × 10⁻⁴ and 0.5 rad and neutron wavelengths from 0.2 to 2.0 nm are available. A large-area position-sensitive detector is used which allows simultaneous recording of intensities scattered at different angles; it is a multiwire proportional chamber. 3808 elements of 1 cm² are arranged in a two-dimensional matrix. (Auth.)

  18. A Signature Comparing Android Mobile Application Utilizing Feature Extracting Algorithms

    Directory of Open Access Journals (Sweden)

    Paul Grafilon

    2017-08-01

    Full Text Available The paper presents one of the applications that can be built around a smartphone camera. Forgery is nowadays one of the most frequently undetected crimes. With the forensic technology used today it is still difficult for authorities to compare signatures and decide which is genuine and which is forged. A signature is a legal representation of a person, and all transactions are based on it. Forgers may use a signature to sign illegal contracts and to withdraw from bank accounts undetected; a signature can also be forged during election periods for repeated voting. For these reasons a signature should always be secure. Signature verification is a reduced problem that still poses a real challenge for researchers. The literature on signature verification is quite extensive and shows two main areas of research: off-line and on-line systems. Off-line systems deal with a static image of the signature, i.e. the result of the action of signing, while on-line systems work on the dynamic process of generating the signature, i.e. the action of signing itself. The researchers addressed these concerns with a mobile application that uses the camera to take a picture of a signature, analyzes it, and compares it to other signatures for verification. It is intended to help citizens be more cautious and aware of issues regarding signatures; to help organizations and institutions such as banks and insurance companies verify signatures and so avoid unwanted transactions and identity theft; and to assist the authorities in the never-ending battle against crime, especially against forgers and thieves. The project aimed to design and develop a mobile application that integrates the smartphone camera for verifying and comparing signatures for security, using the best algorithm possible. As a result of the development, the smartphone camera application is functional and reliable.
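
    As one illustration of the kind of feature extraction such an app can rely on, the sketch below compares two signature photographs with ORB descriptors and a ratio test; it is not the specific algorithm of the cited work, and the decision threshold is arbitrary.

        # Generic feature-based comparison of two signature images using ORB
        # descriptors and Lowe's ratio test. Threshold and score are illustrative.
        import cv2

        def signature_similarity(path_a, path_b):
            img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
            img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
            orb = cv2.ORB_create(500)
            kp_a, des_a = orb.detectAndCompute(img_a, None)
            kp_b, des_b = orb.detectAndCompute(img_b, None)
            if des_a is None or des_b is None:
                return 0.0
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
            matches = matcher.knnMatch(des_a, des_b, k=2)
            good = [pair[0] for pair in matches
                    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
            return len(good) / max(len(kp_a), len(kp_b))

        if __name__ == "__main__":
            score = signature_similarity("reference.png", "questioned.png")
            print("match" if score > 0.25 else "possible forgery", round(score, 2))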

  19. Low cost alternative of high speed visible light camera for tokamak experiments

    Czech Academy of Sciences Publication Activity Database

    Odstrčil, T.; Odstrčil, Michal; Grover, O.; Svoboda, V.; Ďuran, Ivan; Mlynář, Jan

    2012-01-01

    Vol. 83, No. 10 (2012), 10E505-10E505 ISSN 0034-6748. [Topical Conference High-Temperature Plasma Diagnostics/19./. Monterey, 06.05.2012-10.05.2012] Institutional research plan: CEZ:AV0Z20430508 Keywords: Plasma * tokamak * diagnostic * high speed camera * GOLEM Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 1.602, year: 2012 http://dx.doi.org/10.1063/1.4731003

  20. Camera-augmented mobile C-arm (CamC): A feasibility study of augmented reality imaging in the operating room.

    Science.gov (United States)

    von der Heide, Anna Maria; Fallavollita, Pascal; Wang, Lejing; Sandner, Philipp; Navab, Nassir; Weidert, Simon; Euler, Ekkehard

    2018-04-01

    In orthopaedic trauma surgery, image-guided procedures are mostly based on fluoroscopy, and reducing radiation exposure is an important goal. The purpose of this work was to investigate the impact of a camera-augmented mobile C-arm (CamC) on radiation exposure and the surgical workflow during a first clinical trial. Applying a workflow-oriented approach, 10 general workflow steps were defined to compare the CamC to traditional C-arms. The surgeries included were arbitrarily identified and assigned to the study. The evaluation criteria were radiation exposure and operation time for each workflow step and for the entire surgery. The evaluation protocol was designed and conducted in a single-centre study. Radiation exposure was markedly reduced, by 18 X-ray shots (46%), using the CamC while surgery times remained similar. The intuitiveness of the system, its easy integration into the surgical workflow, and its great potential to reduce radiation have been demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Estimation of end of life mobile phones generation: the case study of the Czech Republic.

    Science.gov (United States)

    Polák, Miloš; Drápalová, Lenka

    2012-08-01

    The volume of waste electrical and electronic equipment (WEEE) has been growing rapidly in recent years. In the European Union (EU), legislation promoting the collection and recycling of WEEE has been in force since 2003. Yet both current and recently suggested collection targets for WEEE are largely ineffective when it comes to the collection and recycling of small WEEE (s-WEEE), with mobile phones as a typical example. Mobile phones are the most sold EEE and at the same time one of the appliances with the lowest collection rate. To improve this situation, it is necessary to assess the amount of generated end of life (EoL) mobile phones as precisely as possible. This paper presents a method for assessing EoL mobile phone generation based on a delay model. Within the scope of this paper, the method has been applied to data from the Czech Republic; however, it can also be applied to other EoL appliances in or outside the Czech Republic. Our results show that the average total lifespan of Czech mobile phones is surprisingly long: 7.99 years. We attribute the long lifespan mainly to the storage time of EoL mobile phones in households, estimated at 4.35 years. In the years 1990-2000, only 45 thousand EoL mobile phones were generated in the Czech Republic, while in the years 2000-2010 the number grew to 6.5 million pieces, and it is estimated that in the years 2010-2020 about 26.3 million pieces will be generated. Current European legislation sets targets for the collection and recycling of WEEE in general, but no specific collection target for EoL mobile phones exists. In the year 2010 only about 3-6% of Czech EoL mobile phones were collected for recovery and recycling. If we make a similar estimation using an estimated average EU value, then within the next 10 years about 1.3 billion EoL mobile phones would be available for recycling in the EU. This amount contains about 31 tonnes of gold and 325 tonnes of silver. Since Europe is dependent on import
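
    The record describes a delay model without giving its exact parametrisation. As a rough sketch of the idea (the discrete lifespan distribution and sales figures below are invented for illustration, not taken from the paper), EoL generation in a given year is the convolution of past sales with the total-lifespan distribution:

        # Delay (lifespan-distribution) model for EoL generation -- illustrative values only.
        def eol_generated(sales_by_year: dict,
                          lifespan_pmf: dict,
                          year: int) -> float:
            """Units sold in earlier years that reach end of life in `year`."""
            return sum(sales_by_year.get(year - k, 0.0) * p
                       for k, p in lifespan_pmf.items())

        # Crude probability mass over total lifespan (use time + home storage), in years.
        lifespan_pmf = {5: 0.1, 6: 0.15, 7: 0.2, 8: 0.25, 9: 0.2, 10: 0.1}
        sales = {y: 1.0e6 for y in range(1995, 2021)}   # flat 1 M units/year, purely illustrative
        print(eol_generated(sales, lifespan_pmf, 2012))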

  2. A Mobile-Based High Sensitivity On-Field Organophosphorus Compounds Detecting System for IoT-Based Food Safety Tracking

    Directory of Open Access Journals (Sweden)

    Han Jin

    2017-01-01

    Full Text Available A mobile-based high sensitivity absorptiometer is presented to detect organophosphorus (OP) compounds for Internet-of-Things based food safety tracking. This instrument consists of a customized sensor front-end chip, an LED-based light source, a low power wireless link, and a coin battery, along with a sample holder packaged in a recycled format. The sensor front-end integrates an optical sensor, a capacitive transimpedance amplifier, and a folded-reference pulse width modulator in a single chip fabricated in a 0.18 μm 1-poly 5-metal CMOS process, and has an input optical power dynamic range of 71 dB, a sensitivity of 3.6 nW/cm² (0.77 pA), and a power consumption of 14.5 μW. Enabled by this high sensitivity sensor front-end chip, the proposed absorptiometer has a small size of 96 cm³, with features including on-field detection and wireless communication with a mobile phone. OP compound detection experiments with the handheld system demonstrate a limit of detection (LOD) of 0.4 μmol/L, comparable to that of a commercial spectrophotometer. Meanwhile, an Android-based application (APP) is presented which connects the absorptiometer to the Internet of Things (IoT).

  3. Earth elevation map production and high resolution sensing camera imaging analysis

    Science.gov (United States)

    Yang, Xiubin; Jin, Guang; Jiang, Li; Dai, Lu; Xu, Kai

    2010-11-01

    The Earth's digital elevation, which affects space camera imaging, has been prepared and its influence on imaging analysed. Based on the image-motion matching error allowed by the TDI CCD integration stages, a statistical experimental method (the Monte Carlo method) is used to calculate the distribution histogram of the Earth's elevation within an image-motion compensation model that includes satellite attitude changes, orbital angular rate changes, latitude, longitude and orbital inclination changes. Elevation information for the Earth's surface is then read from SRTM data, and an Earth elevation map produced for aerospace electronic cameras is compressed and spliced so that elevation can be fetched from flash memory according to the latitude and longitude of the shooting point. When the required elevation falls between two stored values, it is obtained by linear interpolation, which better accommodates the variations of rugged mountains and hills. Finally, a deviation framework and camera controller are used to test the character of deviation angle errors, and a TDI CCD camera simulation system based on a material-point-to-imaging-point correspondence model is used to analyse the imaging MTF and a mutual-correlation similarity measure; the simulation system adds the accumulated horizontal and vertical offsets of TDI CCD imaging beyond the corresponding pixel to simulate camera imaging when the stability of the satellite attitude changes. The process is practical: it effectively controls the camera memory space and meets, with very good precision, the TDI CCD camera's requirement of matching the speed of image motion during imaging.
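
    A minimal sketch of the interpolated elevation lookup mentioned above (the grid layout, spacing and values are assumptions for illustration; the actual flash storage format is not described in the record):

        # Bilinear interpolation into a gridded elevation map (e.g. an SRTM-derived tile).
        import numpy as np

        def elevation_at(dem: np.ndarray, lat0: float, lon0: float, step_deg: float,
                         lat: float, lon: float) -> float:
            """Interpolated elevation (m) at (lat, lon); dem[i, j] is the elevation at
            latitude lat0 + i*step_deg and longitude lon0 + j*step_deg."""
            fi = (lat - lat0) / step_deg
            fj = (lon - lon0) / step_deg
            i, j = int(np.floor(fi)), int(np.floor(fj))
            di, dj = fi - i, fj - j
            return ((1 - di) * (1 - dj) * dem[i, j] + (1 - di) * dj * dem[i, j + 1]
                    + di * (1 - dj) * dem[i + 1, j] + di * dj * dem[i + 1, j + 1])

        dem = np.array([[100.0, 120.0], [140.0, 180.0]])   # tiny 2x2 tile, metres
        print(elevation_at(dem, lat0=45.0, lon0=10.0, step_deg=1.0, lat=45.5, lon=10.5))  # 135.0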

  4. Image quality testing of assembled IR camera modules

    Science.gov (United States)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR band (8-12 µm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are increasingly becoming a mass market product. At the same time, steady improvements in sensor resolution in the higher priced markets raise the requirements on the imaging performance of objectives and on the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work done in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to more traditional test methods like minimum resolvable temperature difference (MRTD), which give only a subjective overall test result. Parameters that can be measured are image quality via the modulation transfer function (MTF), broadband or with various bandpass filters, on- and off-axis, and optical parameters such as effective focal length (EFL) and distortion. If the camera module allows refocusing of the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality and chief ray angle can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high resolution sensors. Other important points that are discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces and, last but not least, its suitability for fully automated measurements in mass production.

  5. A digital gigapixel large-format tile-scan camera.

    Science.gov (United States)

    Ben-Ezra, M

    2011-01-01

    Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications for cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size and can thus provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.

  6. PulseCam: high-resolution blood perfusion imaging using a camera and a pulse oximeter.

    Science.gov (United States)

    Kumar, Mayank; Suliburk, James; Veeraraghavan, Ashok; Sabharwal, Ashutosh

    2016-08-01

    Measuring blood perfusion is important in medical care as an indicator of injury and disease. However, currently available devices to measure blood perfusion, such as laser Doppler flowmetry, are bulky, expensive, and cumbersome to use. An alternative low-cost and portable camera-based blood perfusion measurement system has recently been proposed, but such camera-only systems produce noisy, low-resolution blood perfusion maps. In this paper, we propose a new multi-sensor modality, named PulseCam, for measuring blood perfusion by combining a traditional pulse oximeter with a video camera in a unique way to provide low-noise and high-resolution blood perfusion maps. Our proposed multi-sensor modality improves the per-pixel signal-to-noise ratio of the measured perfusion map by up to 3 dB and improves the spatial resolution by 2-3 times compared with the best known camera-only methods. Blood perfusion measured in the palm using our PulseCam setup during a post-occlusive reactive hyperemia (PORH) test replicates the standard PORH response curve measured using a laser Doppler flowmetry device, but at much lower cost and with a portable setup, making it suitable for further development as a clinical device.

  7. Evaluation of a high-resolution, breast-specific, small-field-of-view gamma camera for the detection of breast cancer

    International Nuclear Information System (INIS)

    Brem, R.F.; Kieper, D.A.; Rapelyea, J.A.; Majewski, S.

    2003-01-01

    Purpose: The purpose of our study is to review the state of the art in nuclear medicine imaging of the breast (scintimammography) and to evaluate a novel, high-resolution, breast-specific gamma camera (HRBGC) for the detection of suspicious breast lesions. Materials and Methods: Fifty patients with 58 breast lesions in whom a scintimammogram was clinically indicated were prospectively evaluated with a general-purpose gamma camera and a HRBGC prototype. Nuclear studies were prospectively classified as negative (normal/benign) or positive (suspicious/malignant) by two radiologists blinded to mammographic and histologic results, for both the conventional and the high-resolution systems. All lesions were confirmed by pathology. Results: Included in this study were 30 benign and 28 malignant lesions. The sensitivity for detection of breast cancer was 64.3% (18/28) with the conventional camera and 78.6% (22/28) with the HRBGC. Specificity of both systems was 93.3% (28/30). In the 18 nonpalpable cancers, sensitivity was 55.5% (10/18) and 72.2% (13/18) with the general-purpose camera and the HRBGC, respectively. In cancers ≤ 1 cm, 7 of 15 were detected with the general-purpose camera and 10 of 15 with the HRBGC. Four of the cancers (median size, 8.5 mm) detected with the HRBGC were missed by the conventional camera. Conclusion: Evaluation of indeterminate breast lesions with a high-resolution, breast-specific gamma camera results in improved sensitivity for the detection of cancer, with greater improvement demonstrated in nonpalpable and ≤ 1 cm cancers.

  8. High-mobility group box 1 and the receptor for advanced glycation end products contribute to lung injury during Staphylococcus aureus pneumonia

    NARCIS (Netherlands)

    Achouiti, Ahmed; van der Meer, Anne Jan; Florquin, Sandrine; Yang, Huan; Tracey, Kevin J.; van 't Veer, Cornelis; de Vos, Alex F.; van der Poll, Tom

    2013-01-01

    Staphylococcus (S.) aureus has emerged as an important cause of necrotizing pneumonia. Lung injury during S. aureus pneumonia may be enhanced by local release of damage associated molecular patterns such as high-mobility group box 1 (HMGB1). In the current study we sought to determine the functional

  9. INFLUENCE OF MECHANICAL ERRORS IN A ZOOM CAMERA

    Directory of Open Access Journals (Sweden)

    Alfredo Gardel

    2011-05-01

    Full Text Available As is well known, varying the focus and zoom of a camera lens system changes the alignment of the lens components, resulting in a displacement of the image centre and field of view. Thus, knowledge of how the image centre shifts may be important for some aspects of camera calibration. As shown in other papers, the pinhole model is not adequate for zoom lenses. To ensure a calibration model for these lenses, the calibration parameters must be adjusted. The geometrical modelling of a zoom lens is realized from its lens specifications. The influence on the calibration parameters is calculated by introducing mechanical errors in the mobile lenses. Figures are given describing the errors obtained in the principal point coordinates and in their standard deviations. A comparison is then made with the errors that come from the incorrect detection of the calibration points. It is concluded that mechanical errors of actual zoom lenses can be neglected in the calibration process because detection errors have more influence on the camera parameters.

  10. Robust exponential stabilization of nonholonomic wheeled mobile robots with unknown visual parameters

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    The visual servoing stabilization of a nonholonomic mobile robot with unknown camera parameters is investigated. A new kind of uncertain chained model of a nonholonomic kinematic system is obtained based on visual feedback and the standard chained form of a type (1,2) mobile robot. Then, a novel time-varying feedback controller is proposed for exponentially stabilizing the position and orientation of the robot using visual feedback and a switching strategy when the camera parameters are not known. The exponential s...

  11. High Quantum Efficiency 1024x1024 Longwave Infrared SLS FPA and Camera, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose a high quantum efficiency (QE) 1024x1024 longwave infrared focal plane array (LWIR FPA) and CAMERA with ~ 12 micron cutoff wavelength made from...

  12. Smart mobility solution with multiple input Output interface.

    Science.gov (United States)

    Sethi, Aartika; Deb, Sujay; Ranjan, Prabhat; Sardar, Arghya

    2017-07-01

    Smart wheelchairs are commonly used to provide a solution for mobility impairment. However, their usage is limited primarily by high cost, owing to the sensors required for input, by lack of adaptability to different categories of input, and by limited functionality. In this paper we propose a smart mobility solution using a smartphone with inbuilt sensors (accelerometer, camera and speaker) as an input interface. An Emotiv EPOC+ is also used for motor-imagery-based input control, synced with facial expressions, in cases of extreme disability. Apart from traction, additional functions like home security and automation are provided using the Internet of Things (IoT) and web interfaces. Although preliminary, our results suggest that this system can be used as an integrated and efficient solution for people suffering from mobility impairment. The results also indicate that decent accuracy is obtained for the overall system.

  13. Instantaneous phase-shifting Fizeau interferometry with high-speed pixelated phase-mask camera

    Science.gov (United States)

    Yatagai, Toyohiko; Jackin, Boaz Jessie; Ono, Akira; Kiyohara, Kosuke; Noguchi, Masato; Yoshii, Minoru; Kiyohara, Motosuke; Niwa, Hayato; Ikuo, Kazuyuki; Onuma, Takashi

    2015-08-01

    A Fizeau interferometer with instantaneous phase-shifting ability using a Wollaston prism is designed. To measure dynamic phase changes of objects, a high-speed video camera with a shutter speed of 10⁻⁵ s is used with a pixelated phase-mask of 1024 × 1024 elements. The light source is a laser of wavelength 532 nm, which is split into orthogonal polarization states by passing through a Wollaston prism. By adjusting the tilt of the reference surface it is possible to make the reference and object beams with orthogonal polarization states coincide and interfere. The pixelated phase-mask camera then calculates the phase changes and hence the optical path length difference. Vibration of speakers and turbulence of air flow were successfully measured at 7,000 frames/s.
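
    For context, a pixelated phase-mask frame can be reduced to a wrapped phase map with the standard four-bucket formula. The 2 × 2 superpixel layout and phase steps below follow the common micropolarizer convention and are assumed here rather than quoted from the record:

        # Four-bucket phase extraction from a pixelated phase-mask frame (illustrative).
        import numpy as np

        def wrapped_phase(frame: np.ndarray) -> np.ndarray:
            """frame: 2D intensity image whose 2x2 superpixels carry phase steps
            0 (top-left), pi/2 (top-right), pi (bottom-right), 3pi/2 (bottom-left).
            Returns the wrapped phase per superpixel."""
            i0 = frame[0::2, 0::2].astype(float)   # phase step 0
            i1 = frame[0::2, 1::2].astype(float)   # phase step pi/2
            i2 = frame[1::2, 1::2].astype(float)   # phase step pi
            i3 = frame[1::2, 0::2].astype(float)   # phase step 3pi/2
            return np.arctan2(i3 - i1, i0 - i2)    # four-bucket algorithm

        # The optical path difference then follows as OPD = phase * lambda / (2*pi),
        # with lambda = 532 nm, after spatial or temporal phase unwrapping.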

  14. Extrinsic Parameter Calibration for Line Scanning Cameras on Ground Vehicles with Navigation Systems Using a Calibration Pattern

    Directory of Open Access Journals (Sweden)

    Alexander Wendel

    2017-10-01

    Full Text Available Line scanning cameras, which capture only a single line of pixels, have been increasingly used in ground based mobile or robotic platforms. In applications where it is advantageous to directly georeference the camera data to world coordinates, an accurate estimate of the camera’s 6D pose is required. This paper focuses on the common case where a mobile platform is equipped with a rigidly mounted line scanning camera, whose pose is unknown, and a navigation system providing vehicle body pose estimates. We propose a novel method that estimates the camera’s pose relative to the navigation system. The approach involves imaging and manually labelling a calibration pattern with distinctly identifiable points, triangulating these points from camera and navigation system data and reprojecting them in order to compute a likelihood, which is maximised to estimate the 6D camera pose. Additionally, a Markov Chain Monte Carlo (MCMC) algorithm is used to estimate the uncertainty of the offset. Tested on two different platforms, the method was able to estimate the pose to within 0.06 m / 1.05° and 0.18 m / 2.39°. We also propose several approaches to displaying and interpreting the 6D results in a human-readable way.

  15. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  16. Using a laser scanning camera for reactor inspection

    International Nuclear Information System (INIS)

    Armour, I.A.; Adrain, R.S.; Klewe, R.C.

    1984-01-01

    Inspection of nuclear reactors is normally carried out using TV or film cameras. There are, however, several areas where these cameras show considerable shortcomings. To overcome these difficulties, laser scanning cameras have been developed. This type of camera can be used for general visual inspection as well as the provision of high resolution video images with high ratio on and off-axis zoom capability. In this paper, we outline the construction and operation of a laser scanning camera and give examples of how it has been used in various power stations, and indicate future potential developments. (author)

  17. Enterprise Mobile Tracking and Reminder System: MAE

    Directory of Open Access Journals (Sweden)

    Cheah Huei Yoong

    2012-07-01

    Full Text Available Mobile phones have made significant improvements, from providing voice communications to advanced features such as camera, GPS, Wi-Fi, SMS, voice recognition, Internet surfing, and touch screen. This paper presents an enterprise mobile tracking and reminder system (MAE) that enables the elderly to have a better elder-care experience. The high-level architecture and major software algorithms, especially the tracking on Android phones and the SMS functions on the server, are described. The analysis of captured data and a performance study of the server are discussed. In order to show the effectiveness of MAE, a pilot test was carried out with a retirement village in Singapore and the feedback from the elderly was evaluated. Generally, most comments received from the elderly were positive.

  18. Back End Programming in .NET Framework

    OpenAIRE

    Sapkota, Dinesh

    2013-01-01

    The main goal of the project was to develop a web application which provides services for both web and mobile clients. The complete application development process was carried out by a team of four members and a supervisor. According to the interests of the group members, the whole project was divided into four parts: user interface design, mobile application development, back-end development for mobile services, and server-side back-end development of the application. I got the task of ser...

  19. High-precision real-time 3D shape measurement based on a quad-camera system

    Science.gov (United States)

    Tao, Tianyang; Chen, Qian; Feng, Shijie; Hu, Yan; Zhang, Minliang; Zuo, Chao

    2018-01-01

    Phase-shifting profilometry (PSP) based 3D shape measurement is well established in various applications due to its high accuracy, simple implementation, and robustness to environmental illumination and surface texture. In PSP, higher depth resolution generally requires a higher fringe density of projected patterns, which in turn leads to severe phase ambiguities that must be resolved with additional information from phase coding and/or geometric constraints. However, in order to guarantee the reliability of phase unwrapping, available techniques are usually accompanied by an increased number of patterns, reduced fringe amplitude, and complicated post-processing algorithms. In this work, we demonstrate that by using a quad-camera multi-view fringe projection system and carefully arranging the relative spatial positions between the cameras and the projector, it becomes possible to completely eliminate the phase ambiguities in conventional three-step PSP patterns with high fringe density without projecting any additional patterns or embedding any auxiliary signals. Benefiting from the position-optimized quad-camera system, stereo phase unwrapping can be performed efficiently and reliably by flexible phase consistency checks. Besides, the redundant information of multiple phase consistency checks is fully used through a weighted phase difference scheme to further enhance the reliability of phase unwrapping. This paper explains the 3D measurement principle and the basic design of the quad-camera system, and finally demonstrates that in a large measurement volume of 200 mm × 200 mm × 400 mm, the resultant dynamic 3D sensing system can realize real-time 3D reconstruction at 60 frames per second with a depth precision of 50 μm.
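
    As background to the three-step PSP patterns mentioned above, the wrapped phase of each pixel follows from three fringe images in closed form. A short sketch (the phase-shift convention of -2π/3, 0, +2π/3 is a common choice and an assumption here, as are the toy inputs):

        # Three-step phase-shifting: wrapped phase from three fringe images.
        import numpy as np

        def wrapped_phase_3step(i1: np.ndarray, i2: np.ndarray, i3: np.ndarray) -> np.ndarray:
            """I_k = A + B*cos(phi + (k-2)*2*pi/3); returns phi wrapped to (-pi, pi]."""
            return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

        phi = np.linspace(0.0, 4.0 * np.pi, 8)                   # toy "true" phase values
        imgs = [100 + 50 * np.cos(phi + s) for s in (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)]
        print(wrapped_phase_3step(*imgs))                        # equals phi modulo 2*pi

        # Resolving the remaining 2*pi ambiguity is exactly what the quad-camera
        # geometry handles via the phase-consistency checks described above.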

  20. Muon Trigger for Mobile Phones

    Science.gov (United States)

    Borisyak, M.; Usvyatsov, M.; Mulhearn, M.; Shimmin, C.; Ustyuzhanin, A.

    2017-10-01

    The CRAYFIS experiment proposes to use privately owned mobile phones as a ground detector array for Ultra High Energy Cosmic Rays. Upon interacting with Earth's atmosphere, these events produce extensive particle showers which can be detected by the cameras of mobile phones. A typical shower contains minimally-ionizing particles such as muons. As these particles interact with CMOS image sensors, they may leave tracks of faintly-activated pixels that are sometimes hard to distinguish from random detector noise. Triggers that rely on the presence of very bright pixels within an image frame are not efficient in this case. We present a trigger algorithm based on Convolutional Neural Networks which selects images containing such tracks and is evaluated in a lazy manner: the response of each successive layer is computed only if the activation of the current layer satisfies a continuation criterion. The use of neural networks increases the sensitivity considerably compared with image thresholding, while the lazy evaluation allows the trigger to be executed under the limited computational power of mobile phones.
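
    A minimal sketch of the lazy-evaluation idea described above (the stages and thresholds are placeholders, not the CRAYFIS network's actual layers or criteria):

        # Lazily evaluated trigger cascade: each stage runs only if the previous
        # stage's activation clears a continuation threshold.
        import numpy as np

        def lazy_trigger(frame: np.ndarray, stages, thresholds) -> bool:
            """stages: callables mapping an activation map to the next one;
            thresholds: per-stage continuation criterion on the max activation."""
            activation = frame.astype(float)
            for stage, thr in zip(stages, thresholds):
                activation = stage(activation)
                if activation.max() < thr:   # continuation criterion not met: reject early
                    return False
            return True                      # all stages passed: keep the frame

        # Trivial example "stages"; a real trigger would apply convolutional layers here.
        stages = [lambda a: a - a.mean(), lambda a: np.maximum(a - 1.0, 0.0)]
        print(lazy_trigger(np.random.poisson(3.0, (64, 64)), stages, thresholds=[2.0, 2.0]))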

  1. Diagnostics and camera strobe timers for hydrogen pellet injectors

    International Nuclear Information System (INIS)

    Bauer, M.L.; Fisher, P.W.; Qualls, A.L.

    1993-01-01

    Hydrogen pellet injectors have been used to fuel fusion experimental devices for the last decade. As part of developments to improve pellet production and velocity, various diagnostic devices were implemented, ranging from witness plates to microwave mass meters to high speed photography. This paper will discuss details of the various implementations of light sources, cameras, synchronizing electronics and other diagnostic systems developed at Oak Ridge for the Tritium Proof-of-Principle (TPOP) experiment at the Los Alamos National Laboratory's Tritium System Test Assembly (TSTA), a system built for the Oak Ridge Advanced Toroidal Facility (ATF), and the Tritium Pellet Injector (TPI) built for the Princeton Tokamak Fusion Test Reactor (TFTR). Although a number of diagnostic systems were implemented on each pellet injector, the emphasis here will be on the development of a synchronization system for high-speed photography using pulsed light sources, standard video cameras, and video recorders. This system enabled near real-time visualization of the pellet shape, size and flight trajectory over a wide range of pellet speeds and at one or two positions along the flight path. Additionally, the system provides synchronization pulses to the data system for pseudo points along the flight path, such as the estimated plasma edge. This was accomplished using an electronic system that took the time measured between sets of light gates and generated proportionally delayed triggers for light source strobes and pseudo points. Systems were built with two camera stations, one located after the end of the barrel and a second camera located closer to the main reactor vessel wall. Two or three light gates were used to sense pellet velocity, and various spacings were implemented on the three experiments. Both analog and digital schemes were examined for implementing the delay system; a digital technique was chosen.

  2. WiMAX security and quality of service an end-to-end perspective

    CERN Document Server

    Tang, Seok-Yee; Sharif, Hamid

    2010-01-01

    WiMAX is the first standard technology to deliver true broadband mobility at speeds that enable powerful multimedia applications such as Voice over Internet Protocol (VoIP), online gaming, mobile TV, and personalized infotainment. WiMAX Security and Quality of Service, focuses on the interdisciplinary subject of advanced Security and Quality of Service (QoS) in WiMAX wireless telecommunication systems including its models, standards, implementations, and applications. Split into 4 parts, Part A of the book is an end-to-end overview of the WiMAX architecture, protocol, and system requirements.

  3. Design and recognition of artificial landmarks for reliable indoor self-localization of mobile robots

    Directory of Open Access Journals (Sweden)

    Xu Zhong

    2017-02-01

    Full Text Available This article presents a self-localization scheme for indoor mobile robot navigation based on reliable design and recognition of artificial visual landmarks. Each landmark is patterned with a set of concentric circular rings in black and white, which reliably encodes the landmark’s identity under environmental illumination. A mobile robot in navigation uses an onboard camera to capture landmarks in the environment. The landmarks in an image are detected and identified using a bilayer recognition algorithm: A global recognition process initially extracts candidate landmark regions across the whole image and tries to identify enough landmarks; if necessary, a local recognition process locally enhances those unidentified regions of interest influenced by illumination and incompleteness and reidentifies them. The recognized landmarks are used to estimate the position and orientation of the onboard camera in the environment, based on the geometric relationship between the image and environmental frames. The experiments carried out in a real indoor environment show high robustness of the proposed landmark design and recognition scheme to the illumination condition, which leads to reliable and accurate mobile robot localization.

  4. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.

  5. High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project

    Science.gov (United States)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2015-04-01

    Standard cameras capture only a fraction of the information that is visible to the human visual system. This is specifically true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full dynamic range (DR), resulting in low quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears as one of the most appropriate and cheapest solutions to enhance the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
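
    A minimal software sketch of the exposure-merging step (a triangular weighting and a linear sensor response are assumed for brevity; the FPGA pipeline above implements a hardware variant of Debevec's method, whose exact weighting is not given in the record):

        # Debevec-style HDR merging of several exposures of (approximately) linear frames.
        import numpy as np

        def merge_hdr(frames: list, exposures_s: list) -> np.ndarray:
            """frames: same-size images normalised to [0, 1]; returns a relative radiance map."""
            num = np.zeros_like(frames[0], dtype=float)
            den = np.zeros_like(frames[0], dtype=float)
            for img, t in zip(frames, exposures_s):
                w = 1.0 - np.abs(2.0 * img - 1.0)   # trust mid-range pixels most
                num += w * (img / t)                # radiance estimate from this exposure
                den += w
            return num / np.maximum(den, 1e-6)

        # merged = merge_hdr([short, mid, long], [1/2000, 1/250, 1/30]); a global
        # tone-mapping operator then compresses the result for a standard LCD display.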

  6. Autonomous Mobile Robot That Can Read

    Directory of Open Access Journals (Sweden)

    Létourneau Dominic

    2004-01-01

    Full Text Available The ability to read would surely contribute to the increased autonomy of mobile robots operating in the real world. The process seems fairly simple: the robot must be capable of acquiring an image of a message to read, extracting the characters, and recognizing them as symbols, characters, and words. Using an optical character recognition (OCR) algorithm on a mobile robot, however, brings additional challenges: the robot has to control its position in the world and its pan-tilt-zoom camera to find textual messages to read, potentially having to compensate for its viewpoint of the message, and use its limited onboard processing capabilities to decode the message. The robot also has to deal with variations in lighting conditions. In this paper, we present our approach, demonstrating that it is feasible for an autonomous mobile robot to read messages of specific colors and font in real-world conditions. We outline the constraints under which the approach works and present results obtained using a Pioneer 2 robot equipped with a 233 MHz Pentium and a Sony EVI-D30 pan-tilt-zoom camera.

  7. Low cost thermal camera for use in preclinical detection of diabetic peripheral neuropathy in primary care setting

    Science.gov (United States)

    Joshi, V.; Manivannan, N.; Jarry, Z.; Carmichael, J.; Vahtel, M.; Zamora, G.; Calder, C.; Simon, J.; Burge, M.; Soliz, P.

    2018-02-01

    Diabetic peripheral neuropathy (DPN) accounts for around 73,000 lower-limb amputations annually in the US in patients with diabetes. Early detection of DPN is critical. Current clinical methods for diagnosing DPN are subjective and effective only at later stages. Until recently, thermal cameras used for medical imaging have been expensive and hence prohibitive to install in a primary care setting. The objective of this study is to compare results from a low-cost thermal camera with a high-end thermal camera used in screening for DPN. Thermal imaging has demonstrated changes in microvascular function that correlate with the nerve function affected by DPN. The limitations of using low-cost cameras for DPN imaging are lower resolution (active pixels), frame rate, thermal sensitivity, etc. We integrated two FLIR Lepton modules (80 × 60 active pixels, 50° HFOV) into the low-cost system; subjects aged 35-76 were recruited. The difference in temperature measurements between the cameras was calculated for each subject, and the results show that the difference between the temperature measurements of the two cameras (mean difference = 0.4, p-value = 0.2) is not statistically significant. We conclude that the low-cost thermal camera system shows potential for use in detecting early signs of DPN in under-served and rural clinics.

  8. ePix100 camera: Use and applications at LCLS

    Energy Technology Data Exchange (ETDEWEB)

    Carini, G. A., E-mail: carini@slac.stanford.edu; Alonso-Mori, R.; Blaj, G.; Caragiulo, P.; Chollet, M.; Damiani, D.; Dragone, A.; Feng, Y.; Haller, G.; Hart, P.; Hasi, J.; Herbst, R.; Herrmann, S.; Kenney, C.; Lemke, H.; Manger, L.; Markovic, B.; Mehta, A.; Nelson, S.; Nishimura, K. [SLAC National Accelerator Laboratory (United States); and others

    2016-07-27

    The ePix100 x-ray camera is a new system designed and built at SLAC for experiments at the Linac Coherent Light Source (LCLS). The camera is the first member of a family of detectors built around a single hardware and software platform, supporting a variety of front-end chips. With a readout speed of 120 Hz, matching the LCLS repetition rate, a noise lower than 80 e⁻ rms and pixels of 50 µm × 50 µm, this camera offers a viable alternative to fast-readout, direct-conversion scientific CCDs in imaging mode. The detector, designed for applications such as X-ray Photon Correlation Spectroscopy (XPCS) and wavelength dispersive X-ray Emission Spectroscopy (XES) in the energy range from 2 to 10 keV and above, comprises up to 0.5 Mpixels in a very compact form factor. In this paper, we report the performance of the camera during its first use at LCLS.

  9. Caliste 64, an innovative CdTe hard X-ray micro-camera

    International Nuclear Information System (INIS)

    Meuris, A.; Limousin, O.; Pinsard, F.; Le Mer, I.; Lugiez, F.; Gevin, O.; Delagnes, E.; Vassal, M.C.; Soufflet, F.; Bocage, R.

    2008-01-01

    A prototype 64-pixel miniature camera has been designed and tested for the Simbol-X hard X-ray observatory to be flown on the joint CNES-ASI space mission in 2014. This device is called Caliste 64. It is a high performance spectro-imager with event time-tagging capability, able to detect photons between 2 keV and 250 keV. Caliste 64 is the assembly of a 1 or 2 mm thick CdTe detector mounted on top of a readout module. CdTe detectors equipped with Aluminum Schottky barrier contacts are used because of their very low dark current and excellent spectroscopic performance. The front-end electronics is a stack of four IDeF-X V1.1 ASICs, arranged perpendicular to the detection plane, to read out each pixel independently. The whole camera fits in a 10 × 10 × 20 mm³ volume and is juxtaposable on its four sides. This allows the device to be used as an elementary unit in a larger array of Caliste 64 cameras. Noise performance resulted in an ENC better than 60 electrons rms on average. The first prototype camera is tested at -10 °C with a bias of -400 V. The spectrum summed across the 64 pixels results in a resolution of 697 eV FWHM at 13.9 keV and 808 eV FWHM at 59.54 keV. (authors)

  10. High-frequency hearing loss among mobile phone users.

    Science.gov (United States)

    Velayutham, P; Govindasamy, Gopala Krishnan; Raman, R; Prepageran, N; Ng, K H

    2014-01-01

    The objective of this study is to assess high-frequency hearing (above 8 kHz) loss among prolonged mobile phone users in a tertiary referral center. Prospective single-blinded study. This is the first study to use high-frequency audiometry. The wide usage of mobile phones is so profound that we were unable to find enough non-users as a control group. Therefore we compared the non-dominant ear to the dominant ear using audiometric measurements. The study was blinded in that the audiologist did not know which was the dominant ear. A total of 100 subjects were studied. Of the subjects studied, 53% were males and 47% females. Mean age was 27. The left ear was dominant in 63%, the right ear in 22%, and 15% did not have a preference. This study showed a significant hearing loss in the dominant ear compared to the non-dominant ear; prolonged mobile phone use was thus associated with high-frequency hearing loss in the dominant ear (the one used with the mobile phone) compared to the non-dominant ear.

  11. RGB-D, Laser and Thermal Sensor Fusion for People following in a Mobile Robot

    Directory of Open Access Journals (Sweden)

    Loreto Susperregi

    2013-06-01

    Full Text Available Detecting and tracking people is a key capability for robots that operate in populated environments. In this paper, we use a multiple-sensor fusion approach that combines three kinds of sensors in order to detect people using RGB-D vision, lasers and a thermal sensor mounted on a mobile platform. The Kinect sensor offers a rich data set at significantly low cost; however, there are some limitations to its use on a mobile platform, mainly that the Kinect algorithms for people detection rely on images captured by a static camera. To cope with these limitations, this work is based on the combination of the Kinect with a Hokuyo laser and a thermopile array sensor. A real-time particle filter system merges the information provided by the sensors and calculates the position of the target, using probabilistic leg and thermal patterns, image features and optical flow to this end. Experimental results carried out with a mobile platform in a science museum have shown that the combination of different sensory cues increases the reliability of the people-following system.

  12. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    Science.gov (United States)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

    Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to the position where the shuttlecock will fall and hits it back into the opponent's side of the court. In a badminton game there is a large audience, and some spectators move behind the flying shuttlecock; this is a kind of background noise and makes it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by stereo imaging with two high-speed cameras.
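
    A minimal sketch of the two-camera triangulation underlying such a system (the projection matrices and pixel coordinates are placeholders; real values come from calibrating the two high-speed cameras):

        # Linear (DLT) triangulation of a 3D point from two calibrated views.
        import numpy as np

        def triangulate(P1: np.ndarray, P2: np.ndarray, uv1, uv2) -> np.ndarray:
            """P1, P2: 3x4 projection matrices; uv1, uv2: (u, v) pixel coordinates.
            Returns the 3D point minimising the algebraic reprojection error."""
            rows = []
            for P, (u, v) in ((P1, uv1), (P2, uv2)):
                rows.append(u * P[2] - P[0])
                rows.append(v * P[2] - P[1])
            _, _, vt = np.linalg.svd(np.asarray(rows))
            x = vt[-1]
            return x[:3] / x[3]                    # dehomogenise

        # Velocity follows from successive positions divided by the inter-frame time,
        # and the landing point from extrapolating the fitted trajectory to the floor.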

  13. Improved scintimammography using a high-resolution camera mounted on an upright mammography gantry

    Energy Technology Data Exchange (ETDEWEB)

    Itti, Emmanuel; Patt, Bradley E.; Diggles, Linda E.; MacDonald, Lawrence; Iwanczyk, Jan S.; Mishkin, Fred S.; Khalkhali, Iraj E-mail: nephrad@aol.com

    2003-01-21

    99mTc-sestamibi scintimammography (SMM) is a useful adjunct to conventional X-ray mammography (XMM) for the assessment of breast cancer. An increasing number of studies has emphasized fair sensitivity values for the detection of tumors >1 cm, compared to XMM, particularly in situations where high glandular breast densities make mammographic interpretation difficult. In addition, SMM has demonstrated high specificity for cancer, compared to various functional and anatomic imaging modalities. However, large field-of-view (FOV) gamma cameras are difficult to position close to the breasts, which decreases spatial resolution and subsequently the sensitivity of detection for tumors <1 cm. New dedicated detectors featuring a small FOV and increased spatial resolution have recently been developed. In this setting, improvement in tumor detection sensitivity, particularly with regard to small cancers, is expected. At the Division of Nuclear Medicine, Harbor-UCLA Medical Center, we have performed over 2000 SMM within the last 9 years. We have recently used a dedicated breast camera (LumaGEM) featuring a 12.8 × 12.8 cm² FOV and an array of 2 × 2 × 6 mm³ discrete crystals coupled to a photon-sensitive photomultiplier tube readout. This camera is mounted on a mammography gantry allowing upright imaging, medial positioning and use of breast compression. Preliminary data indicate significant enhancement of spatial resolution by comparison with standard imaging in the first 10 patients. Larger series will be needed to conclude on sensitivity/specificity issues.

  14. Affordances in Mobile Augmented Reality Applications

    OpenAIRE

    Gjøsæter, Tor

    2014-01-01

    This paper explores the affordances of augmented reality content in a mobile augmented reality application. A user study was conducted by performing a multi-camera video recording of seven think aloud sessions. The think aloud sessions consisted of individual users performing tasks, exploring and experiencing a mobile augmented reality (MAR) application we developed for the iOS platform named ARad. We discuss the instrumental affordances we observed when users interacted with augmented realit...

  15. Micromagnetics on high-performance workstation and mobile computational platforms

    Science.gov (United States)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including a multi-core Intel central processing unit, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built into low-power computing clusters.

  16. Streak camera recording of interferometer fringes

    International Nuclear Information System (INIS)

    Parker, N.L.; Chau, H.H.

    1977-01-01

    The use of an electronic high-speed camera in the streaking mode to record interference fringe motion from a velocity interferometer is discussed. Advantages of this method over the photomultiplier tube-oscilloscope approach are delineated. Performance testing and data for the electronic streak camera are discussed. The velocity profile of a mylar flyer accelerated by an electrically exploded bridge, and the jump-off velocity of metal targets struck by these mylar flyers are measured in the camera tests. Advantages of the streak camera include portability, low cost, ease of operation and maintenance, simplified interferometer optics, and rapid data analysis

  17. Mobile object retrieval in server-based image databases

    Science.gov (United States)

    Manger, D.; Pagel, F.; Widak, H.

    2013-05-01

    The increasing number of mobile phones equipped with powerful cameras leads to huge collections of user-generated images. To utilize the information in these images on site, image retrieval systems are becoming more and more popular for searching for similar objects in one's own image database. As the computational performance and the memory capacity of mobile devices are constantly increasing, this search can often be performed on the device itself. This is feasible, for example, if the images are represented with global image features or if the search is done using EXIF or textual metadata. However, for larger image databases, if multiple users are meant to contribute to a growing image database, or if powerful content-based image retrieval methods with local features are required, a server-based image retrieval backend is needed. In this work, we present a content-based image retrieval system with a client-server architecture working with local features. On the server side, scalability to large image databases is addressed with the popular bag-of-words model with state-of-the-art extensions. The client end of the system focuses on a lightweight user interface presenting the most similar images from the database and highlighting the visual information they have in common with the query image. Additionally, new images can be added to the database, making it a powerful and interactive tool for mobile content-based image retrieval.
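
    A toy sketch of bag-of-visual-words scoring with tf-idf weighting, the core of such a server-side index (the vocabulary size and random histograms are placeholders; a production system quantises local descriptors against a vocabulary of up to millions of visual words):

        # Bag-of-visual-words retrieval scoring with tf-idf weighting (illustrative).
        import numpy as np

        def tfidf_normalised(word_counts: np.ndarray) -> np.ndarray:
            """word_counts: (num_images, vocab_size) histograms of visual words."""
            tf = word_counts / np.maximum(word_counts.sum(axis=1, keepdims=True), 1)
            df = np.count_nonzero(word_counts, axis=0)
            idf = np.log(word_counts.shape[0] / np.maximum(df, 1))
            vecs = tf * idf
            return vecs / np.maximum(np.linalg.norm(vecs, axis=1, keepdims=True), 1e-12)

        db = np.random.randint(0, 5, size=(100, 1000))     # 100 images, 1000 visual words
        query = np.random.randint(0, 5, size=(1, 1000))
        vecs = tfidf_normalised(np.vstack([db, query]))
        scores = vecs[:100] @ vecs[-1]                     # cosine similarity to the query
        print(np.argsort(-scores)[:5])                     # indices of the 5 best matches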

  18. On the analysis of human mobility model for content broadcasting in 5G networks

    KAUST Repository

    Lau, Chun Pong

    2018-02-15

    Today\\'s mobile service providers aim at ensuring end-to-end performance guarantees. Hence, ensuring an efficient content delivery to end users is highly required. Currently, transmitting popular contents in modern mobile networks rely on unicast transmission. This result into a huge underutilization of the wireless bandwidth. The urban scale mobility of users is beneficial for mobile networks to allocate radio resources spatially and temporally for broadcasting contents. In this paper, we conduct a comprehensive analysis on a human activity/mobility model and the content broadcasting system in 5G mobile networks. The objective of this work is to describe how human daily activities could improve the content broadcasting efficiency. We achieve the objective by analyzing the transition probabilities of a user traveling over several places according to the change of states of daily human activities. Using a reallife simulation, we demonstrate the relationship between the human mobility and the optimization objective of the content broadcasting system.

  19. Hydrogen peroxide plasma sterilization of a waterproof, high-definition video camera case for intraoperative imaging in veterinary surgery.

    Science.gov (United States)

    Adin, Christopher A; Royal, Kenneth D; Moore, Brandon; Jacob, Megan

    2018-06-13

    To evaluate the safety and usability of a wearable, waterproof high-definition camera/case for acquisition of surgical images by sterile personnel. An in vitro study to test the efficacy of biodecontamination of camera cases. Usability for intraoperative image acquisition was assessed in clinical procedures. Two waterproof GoPro Hero4 Silver camera cases were inoculated by immersion in media containing Staphylococcus pseudointermedius or Escherichia coli at ≥ 5.50 × 10⁷ colony-forming units/mL. Cases were biodecontaminated by manual washing and hydrogen peroxide plasma sterilization. Cultures were obtained by swab and by immersion in enrichment broth before and after each contamination/decontamination cycle (n = 4). The cameras were then applied by a surgeon in clinical procedures by using either a headband or handheld mode and were assessed for usability according to 5 user characteristics. Cultures of all poststerilization swabs were negative. One of 8 cultures was positive in enrichment broth, consistent with a low level of contamination in 1 sample. Usability of the camera was considered poor in headband mode, with limited battery life, inability to control camera functions, and lack of zoom function affecting image quality. Handheld operation of the camera by the primary surgeon improved usability, allowing close-up still and video intraoperative image acquisition. Vaporized hydrogen peroxide sterilization of this camera case was considered effective for biodecontamination. Handheld operation improved usability for intraoperative image acquisition. Vaporized hydrogen peroxide sterilization and thorough manual washing of a waterproof camera may provide cost-effective intraoperative image acquisition for documentation purposes. © 2018 The American College of Veterinary Surgeons.

  20. Mobile Video in Everyday Social Interactions

    Science.gov (United States)

    Reponen, Erika; Lehikoinen, Jaakko; Impiö, Jussi

    Video recording has become a spontaneous everyday activity for many people, thanks to the video capabilities of modern mobile phones. Internet connectivity of mobile phones enables fluent sharing of captured material even real-time, which makes video an up-and-coming everyday interaction medium. In this article we discuss the effect of the video camera in the social environment, everyday life situations, mainly based on a study where four groups of people used digital video cameras in their normal settings. We also reflect on another study of ours, relating to real-time mobile video communication and discuss future views. The aim of our research is to understand the possibilities in the domain of mobile video. Live and delayed sharing seem to have their special characteristics, live video being used as a virtual window between places whereas delayed video usage has more scope for good-quality content. While this novel way of interacting via mobile video enables new social patterns, it also raises new concerns for privacy and trust between participating persons in all roles, largely due to the widely spreading possibilities of videos. Video in a social situation affects cameramen (who record), targets (who are recorded), passers-by (who are unintentionally in the situation), and the audience (who follow the videos or recording situations) but also the other way around, the participants affect the video by their varying and evolving personal and communicational motivations for recording.

  1. Network-based Fingerprint Authentication System Using a Mobile Device

    OpenAIRE

    Zhang, Qihu

    2016-01-01

    Abstract— Fingerprint-based user authentication is highly effective in networked services such as electronic payment, but conventional authentication solutions have problems in cost, usability and security. To resolve these problems, we propose a touch-less fingerprint authentication solution, in which a mobile device's built-in camera is used to capture fingerprint image, and then it is sent to the server to determine the identity of the user. We designed and implemented a prototype as an a...

  2. Microprocessor-controlled wide-range streak camera

    Science.gov (United States)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  3. Microprocessor-controlled, wide-range streak camera

    International Nuclear Information System (INIS)

    Amy E. Lewis; Craig Hollabaugh

    2006-01-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  4. End-to-End Delay Model for Train Messaging over Public Land Mobile Networks

    Directory of Open Access Journals (Sweden)

    Franco Mazzenga

    2017-11-01

    Full Text Available Modern train control systems rely on a dedicated radio network for train-to-ground communications. A number of possible alternatives have been analysed for adopting the European Rail Traffic Management System/European Train Control System (ERTMS/ETCS) on local/regional lines to improve transport capacity. Among them, a communication system based on public networks (cellular and satellite) provides an interesting and effective alternative to proprietary and expensive radio networks. To analyse the performance of this solution, it is necessary to model the end-to-end delay and message loss to fully characterize the message transfer process from train to ground and vice versa. Starting from the results of a railway test campaign over a 300 km railway line, for a cumulative 12,000 traveled km in 21 days, we derive in this paper a statistical model for the end-to-end delay required for delivering messages. In particular, we propose a two-state model that reproduces the main behavioral characteristics of the end-to-end delay as observed experimentally. The model formulation was derived after a thorough analysis of the recorded experimental data. When applied to a realistic scenario, the model explicitly accounts for the radio coverage characteristics, the received power level, the handover points along the line and the serving radio technology. As an example, the proposed model is used to generate the end-to-end delay profile in a realistic scenario.
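
    The abstract does not give the fitted parameters, but the structure of such a two-state delay model is easy to illustrate: the channel alternates between a "good" and a "degraded" state, each with its own delay distribution. The sketch below is a minimal simulation of that idea; the transition probabilities and the exponential delay means are illustrative assumptions, not the values estimated from the test campaign.

        import random

        # Illustrative two-state delay generator: the "good" state draws short
        # delays, the "degraded" state draws long ones. All numbers are made up.
        P_STAY = {"good": 0.95, "degraded": 0.80}                  # prob. of staying in state
        DELAY = {"good": lambda: random.expovariate(1 / 0.15),     # mean 150 ms
                 "degraded": lambda: random.expovariate(1 / 2.0)}  # mean 2 s

        def simulate_delays(n_messages, state="good"):
            delays = []
            for _ in range(n_messages):
                delays.append(DELAY[state]())
                if random.random() > P_STAY[state]:
                    state = "degraded" if state == "good" else "good"
            return delays

        sample = simulate_delays(10000)
        print(f"mean end-to-end delay: {sum(sample) / len(sample):.3f} s")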

  5. High electron mobility in Ga(In)NAs films grown by molecular beam epitaxy

    International Nuclear Information System (INIS)

    Miyashita, Naoya; Ahsan, Nazmul; Monirul Islam, Muhammad; Okada, Yoshitaka; Inagaki, Makoto; Yamaguchi, Masafumi

    2012-01-01

    We report the highest mobility values, above 2000 cm²/Vs, in Si-doped GaNAs films grown by molecular beam epitaxy. To understand the origin of the factors limiting the electron mobility in GaNAs, the temperature dependence of the mobility was measured for high-mobility GaNAs and for a low-mobility GaInNAs reference. The temperature-dependent mobility of high-mobility GaNAs is similar to the GaAs case, while that of low-mobility GaInNAs shows a large decrease in the lower temperature region. The electron mobility of high-quality GaNAs can be explained by the intrinsic limiting factor of random alloy scattering and the extrinsic factor of ionized impurity scattering.

  6. Target-Tracking Camera for a Metrology System

    Science.gov (United States)

    Liebe, Carl; Bartman, Randall; Chapsky, Jacob; Abramovici, Alexander; Brown, David

    2009-01-01

    An analog electronic camera that is part of a metrology system measures the varying direction to a light-emitting diode that serves as a bright point target. In the original application for which the camera was developed, the metrology system is used to determine the varying relative positions of radiating elements of an airborne synthetic-aperture radar (SAR) antenna as the airplane flexes during flight; precise knowledge of the relative positions as a function of time is needed for processing SAR readings. It has been common metrology-system practice to measure the varying direction to a bright target by use of an electronic camera of the charge-coupled-device or active-pixel-sensor type. A major disadvantage of this practice arises from the necessity of reading out and digitizing the outputs from a large number of pixels and processing the resulting digital values in a computer to determine the centroid of a target: because of the time taken by the readout, digitization, and computation, the update rate is limited to tens of hertz. In contrast, the analog nature of the present camera makes it possible to achieve an update rate of hundreds of hertz, and no computer is needed to determine the centroid. The camera is based on a position-sensitive detector (PSD), which is a rectangular photodiode with output contacts at opposite ends. PSDs are usually used in triangulation for measuring small distances. PSDs are manufactured in both one- and two-dimensional versions. Because it is very difficult to calibrate two-dimensional PSDs accurately, the focal-plane sensors used in this camera are two orthogonally mounted one-dimensional PSDs.
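
    For reference, the centroid read off a one-dimensional PSD follows directly from the two end-contact photocurrents; a minimal sketch (the detector length and current values are illustrative):

        def psd_position(i1, i2, length_mm):
            """Spot position along a 1-D position-sensitive detector, measured
            from the detector centre, given the photocurrents i1 and i2
            collected at the two end contacts."""
            return 0.5 * length_mm * (i2 - i1) / (i1 + i2)

        # Example: on a 10 mm PSD, currents of 0.4 uA and 0.6 uA place the spot
        # 1 mm toward the second contact.
        print(psd_position(0.4e-6, 0.6e-6, 10.0))  # -> 1.0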

  7. Material flows of mobile phones and accessories in Nigeria: Environmental implications and sound end-of-life management options

    International Nuclear Information System (INIS)

    Osibanjo, Oladele; Nnorom, Innocent Chidi

    2008-01-01

    Presently, Nigeria is one of the fastest-growing telecom markets in the world. The country's teledensity increased from a mere 0.4 in 1999 to 10 in 2005 following the liberalization of the telecom sector in 2001. More than 25 million new digital mobile lines had been connected by June 2006. Large quantities of mobile phones and accessories, including secondhand and remanufactured products, are being imported to meet the pent-up demand. This improvement in mobile telecom services has resulted in mobile services being preferred to fixed lines; consequently, the contribution of fixed lines decreased from about 95% in the year 2000 to less than 10% in March 2005. This phenomenal progress in information technology has resulted in the generation of large quantities of electronic waste (e-waste) in the country. Abandoned fixed-line telephone sets, estimated at 120,000 units, are either disposed of or stockpiled. Increasing quantities of waste mobile phones, estimated at 8 million units by 2007, and accessories will be generated. With no material recovery facility for e-waste and/or appropriate solid waste management infrastructure in place, these waste materials end up in open dumps and unlined landfills. These practices create the potential for the release of toxic metals and halocarbons from batteries, printed wiring boards, liquid crystal displays and plastic housing units. This paper presents an overview of the developments in the Nigerian telecom sector, the material in-flow of mobile phones, and the implications of the management practices for wastes from the telecom sector in the country.

  8. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    Science.gov (United States)

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear array push broom imaging mode is widely used for high-resolution optical satellites (HROS). Using double cameras attached by a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High-accuracy image mosaicking is the key factor in the geometric quality of the complete stitched satellite imagery. This paper proposes a high-accuracy image mosaicking approach based on the big virtual camera (BVC) for the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper uses the concept of the big virtual camera to obtain a stitched image and the corresponding high-accuracy rational function model (RFM) for concurrent post-processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining geometric accuracy.

  9. Talk, Mobility and Materialities

    DEFF Research Database (Denmark)

    McIlvenny, Paul

    The intersection of the quotidian practices of social interaction, learning and mobility outside of the classroom – for example, the ways in which talk shapes how children learn to be actively mobile – has been little studied until recently. This paper develops a social interactional approach......-country, both within the context of familial social interaction. Audiovisual data was collected with mobile video cameras from family bike rides in Denmark and family skiing in Finland, in which among other things a parent instructs and guides a child to bike or to ski. Using an EMCA approach, the analysis...... and limitations of a more reflexive, auto-ethnographic approach to collecting data derived from video recordings of activities in which, to different degrees, the researcher is an active subject....

  10. Improved Feature Matching for Mobile Devices with IMU

    Directory of Open Access Journals (Sweden)

    Andrea Masiero

    2016-08-01

    Full Text Available Thanks to the recent diffusion of low-cost high-resolution digital cameras and to the development of mostly automated procedures for image-based 3D reconstruction, the popularity of photogrammetry for environment surveys has been constantly increasing in recent years. Automatic feature matching is an important step in order to successfully complete the photogrammetric 3D reconstruction: this step is the fundamental basis for the subsequent estimation of the geometry of the scene. This paper reconsiders the feature matching problem when dealing with smart mobile devices (e.g., when using the standard camera embedded in a smartphone as the imaging sensor). More specifically, this paper aims at exploiting the information on camera movements provided by the inertial navigation system (INS) in order to make the feature matching step more robust and, possibly, computationally more efficient. First, a revised version of the affine scale-invariant feature transform (ASIFT) is considered: this version reduces the computational complexity of the original ASIFT, while still ensuring an increase of correct feature matches with respect to SIFT. Furthermore, a new two-step procedure for the estimation of the essential matrix E (and the camera pose) is proposed in order to increase its estimation robustness and computational efficiency.
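
    The two-step procedure itself is not detailed in the abstract, but one common way an INS rotation estimate can make matching more robust is to pre-filter putative correspondences against the homography induced by a pure rotation, H = K·R·K⁻¹, before estimating the essential matrix. The sketch below illustrates that idea; the intrinsics K, the IMU rotation R and the pixel threshold are placeholders, and the filter deliberately stays coarse because translation is ignored.

        import numpy as np

        def prune_matches_with_imu(kp1, kp2, matches, K, R_imu, max_err_px=30.0):
            """Keep only matches roughly consistent with the rotation-induced
            homography H = K @ R_imu @ inv(K). kp1/kp2 are keypoint lists and
            matches the putative correspondences (e.g. from SIFT plus a
            brute-force matcher); the survivors are then passed to
            essential-matrix estimation."""
            H = K @ R_imu @ np.linalg.inv(K)
            kept = []
            for m in matches:
                p1 = np.array([*kp1[m.queryIdx].pt, 1.0])   # homogeneous pixel in image 1
                p2 = np.array(kp2[m.trainIdx].pt)           # pixel in image 2
                proj = H @ p1
                proj = proj[:2] / proj[2]
                if np.linalg.norm(proj - p2) < max_err_px:
                    kept.append(m)
            return kept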

  11. Data dissemination in the wild: A testbed for high-mobility MANETs

    DEFF Research Database (Denmark)

    Vingelmann, Peter; Pedersen, Morten Videbæk; Heide, Janus

    2012-01-01

    This paper investigates the problem of efficient data dissemination in Mobile Ad hoc NETworks (MANETs) with high mobility. A testbed is presented which provides a high degree of mobility in experiments. The testbed consists of 10 autonomous robots with mobile phones mounted on them. The mobile... information, and the goal is to convey that information to all devices. A strategy is proposed that uses UDP broadcast transmissions and random linear network coding to facilitate the efficient exchange of information in the network. An application is introduced that implements this strategy on Nokia phones...
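
    The coding strategy is only named in the abstract; as a rough illustration of random linear network coding, the sketch below encodes over GF(2), i.e. each coded packet is the XOR of a random subset of the source packets. Practical deployments usually work over GF(2⁸), but the GF(2) version keeps the idea visible in a few lines; packet sizes and counts are arbitrary.

        import os
        import random

        def rlnc_encode_gf2(packets):
            """Return (coefficient vector, coded payload): one random linear
            combination of the equal-length source packets over GF(2)."""
            coeffs = [random.randint(0, 1) for _ in packets]
            if not any(coeffs):                      # avoid the useless all-zero combination
                coeffs[random.randrange(len(coeffs))] = 1
            coded = bytearray(len(packets[0]))
            for c, pkt in zip(coeffs, packets):
                if c:
                    for i, byte in enumerate(pkt):
                        coded[i] ^= byte
            return coeffs, bytes(coded)

        # Example: broadcast coded combinations of three 1 kB packets; a receiver
        # can decode once it has collected three linearly independent ones.
        source = [os.urandom(1024) for _ in range(3)]
        coeffs, payload = rlnc_encode_gf2(source)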

  12. A Framework for Designing Collaborative Learning Environments Using Mobile AR

    Science.gov (United States)

    Cochrane, Thomas; Narayan, Vickel; Antonczak, Laurent

    2016-01-01

    Smartphones provide a powerful platform for augmented reality (AR). Using a smartphone's camera together with the built-in GPS, compass, gyroscope, and touch screen enables the real-world environment to be overlaid with contextual digital information. The creation of mobile AR environments is relatively simple, with the development of mobile AR…

  13. Outdoor Air Quality Level Inference via Surveillance Cameras

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2016-01-01

    Full Text Available Air pollution is a universal problem confronted by many developing countries. Because there are very few air quality monitoring stations in cities, it is difficult for people to know the exact air quality level anytime and anywhere. Fortunately, a large number of surveillance cameras have been deployed in cities and can capture images densely and conveniently. This provides the possibility of utilizing surveillance cameras as sensors to obtain data and predict the air quality level. To this end, we present a novel air quality level inference approach based on outdoor images. Firstly, we explore several features extracted from images as robust representations for air quality prediction. Then, to effectively fuse these heterogeneous and complementary features, we adopt multikernel learning to learn an adaptive classifier for air quality level inference. In addition, to facilitate the research, we construct an Outdoor Air Quality Image Set (OAQIS) dataset, which contains high-quality registered and calibrated images with rich labels, that is, concentration of particulate mass (PM), weather, temperature, humidity, and wind. Extensive experiments on the OAQIS dataset demonstrate the effectiveness of the proposed approach.
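
    As a simplified stand-in for the multikernel learning step, the sketch below fuses heterogeneous image features by combining their precomputed kernel matrices and training an SVM on the result; real MKL would learn the kernel weights rather than fix them, and the per-feature kernels here are assumed to be supplied by the caller.

        import numpy as np
        from sklearn.svm import SVC

        def fused_kernel_classifier(kernel_matrices, labels, weights=None):
            """kernel_matrices: list of (n, n) Gram matrices, one per feature
            type, computed on the training images. Averages them (uniform
            weights by default) and trains an SVM on the combined kernel. At
            test time the same weighted combination must be computed between
            test and training samples."""
            K = np.average(np.stack(kernel_matrices), axis=0, weights=weights)
            clf = SVC(kernel="precomputed").fit(K, labels)
            return clf, K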

  14. Exploring the feasibility of iris recognition for visible spectrum iris images obtained using smartphone camera

    Science.gov (United States)

    Trokielewicz, Mateusz; Bartuzi, Ewelina; Michowska, Katarzyna; Andrzejewska, Antonina; Selegrat, Monika

    2015-09-01

    In the age of a modern, hyperconnected society that increasingly relies on mobile devices and solutions, implementing a reliable and accurate biometric system employing iris recognition presents new challenges. Typical biometric systems employing iris analysis require expensive and complicated hardware. We therefore explore an alternative way using visible-spectrum iris imaging. This paper aims at answering several questions related to applying iris biometrics to images obtained in the visible spectrum using a smartphone camera. Can irides be successfully and effortlessly imaged using a smartphone's built-in camera? Can existing iris recognition methods perform well when presented with such images? The main advantage of using near-infrared (NIR) illumination in dedicated iris recognition cameras is good performance almost independent of iris color and pigmentation. Are the images obtained from a smartphone's camera of sufficient quality even for dark irides? We present experiments incorporating simple image preprocessing to find the best visibility of iris texture, followed by a performance study to assess whether iris recognition methods originally aimed at NIR iris images perform well with visible-light images. To the best of our knowledge, this is the first comprehensive analysis of iris recognition performance using a database of high-quality images collected in visible light using the smartphone's flashlight, together with the application of commercial off-the-shelf (COTS) iris recognition methods.
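
    The preprocessing used in the study is only described as simple; one plausible minimal step for dark irides under visible light is to work on the red channel, which usually retains the most iris texture, and stretch its local contrast. A sketch with OpenCV (the CLAHE parameters are illustrative):

        import cv2

        def enhance_iris_visibility(bgr_image):
            """Extract the red channel of a BGR eye image and apply CLAHE to
            boost local contrast before segmentation and matching."""
            red = bgr_image[:, :, 2]
            clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
            return clahe.apply(red)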

  15. Behavioral Model of High Performance Camera for NIF Optics Inspection

    International Nuclear Information System (INIS)

    Hackel, B M

    2007-01-01

    The purpose of this project was to develop software that will model the behavior of the high-performance Spectral Instruments 1000 series charge-coupled device (CCD) camera located in the Final Optics Damage Inspection (FODI) system on the National Ignition Facility. NIF's target chamber will be mounted with 48 Final Optics Assemblies (FOAs) to convert the laser light from infrared to ultraviolet and focus it precisely on the target. Following a NIF shot, the optical components of each FOA must be carefully inspected for damage by the FODI to ensure proper laser performance during subsequent experiments. Rapid image capture and complex image processing (to locate damage sites) will reduce shot turnaround time, thus increasing the total number of experiments NIF can conduct during its 30-year lifetime. Development of these rapid processes necessitates extensive offline software automation, especially after the device has been deployed in the facility. Without access to the unique real device or an exact behavioral model, offline software testing is difficult. Furthermore, a software-based behavioral model allows for many instances to be running concurrently; this allows multiple developers to test their software at the same time. Thus it is beneficial to construct separate software that will exactly mimic the behavior and response of the real SI-1000 camera.

  16. Towards next generation 3D cameras

    Science.gov (United States)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that address these long-standing problems. This includes designing 'all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover the shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed [...] robotic inspection and assembly systems.

  17. Three-layer GSO depth-of-interaction detector for high-energy gamma camera

    International Nuclear Information System (INIS)

    Yamamoto, S.; Watabe, H.; Kawachi, N.; Fujimaki, S.; Kato, K.; Hatazawa, J.

    2014-01-01

    Using Ce-doped Gd₂SiO₅ (GSO) crystals of different Ce concentrations, three-layer DOI block detectors were developed to reduce the parallax error at the edges of a pinhole gamma camera for high-energy gamma photons. GSOs with Ce concentrations of 1.5 mol% (decay time ∼40 ns), 0.5 mol% (∼60 ns) and 0.4 mol% (∼80 ns) were selected for the depth-of-interaction (DOI) detectors. These three types of GSOs were optically coupled in the depth direction, arranged in a 22×22 matrix and coupled to a flat-panel photomultiplier tube (FP-PMT, Hamamatsu H8500). The sizes of these GSO cells were 1.9 mm×1.9 mm×4 mm, 1.9 mm×1.9 mm×5 mm, and 1.9 mm×1.9 mm×6 mm for 1.5 mol%, 0.5 mol%, and 0.4 mol%, respectively. With these combinations of GSOs, all spots corresponding to GSO cells were clearly resolved in the position histogram. Pulse shape spectra showed three peaks for the three decay times of the GSOs. The block detector was contained in a 2-cm-thick tungsten shield, and a pinhole collimator with a 0.5-mm aperture was mounted. With pulse shape discrimination, we separated the point source images of Cs-137 for each DOI layer. The point source image of the lower layer was detected at the most central part of the field of view, and its distribution was the smallest. The point source image of the higher layer was detected at the most peripheral part of the field of view, and its distribution was the widest. With this information, the spatial resolution of the pinhole gamma camera can be improved. We conclude that DOI detection is effective for pinhole gamma cameras for high-energy gamma photons.

  18. Improved scintimammography using a high-resolution camera mounted on an upright mammography gantry

    International Nuclear Information System (INIS)

    Itti, Emmanuel; Patt, Bradley E.; Diggles, Linda E.; MacDonald, Lawrence; Iwanczyk, Jan S.; Mishkin, Fred S.; Khalkhali, Iraj

    2003-01-01

    99mTc-sestamibi scintimammography (SMM) is a useful adjunct to conventional X-ray mammography (XMM) for the assessment of breast cancer. An increasing number of studies has emphasized fair sensitivity values for the detection of tumors >1 cm, compared to XMM, particularly in situations where high glandular breast densities make mammographic interpretation difficult. In addition, SMM has demonstrated high specificity for cancer, compared to various functional and anatomic imaging modalities. However, large field-of-view (FOV) gamma cameras are difficult to position close to the breasts, which decreases spatial resolution and, subsequently, the sensitivity of detection for tumors [...] FOV and an array of 2×2×6 mm³ discrete crystals coupled to a photon-sensitive photomultiplier tube readout. This camera is mounted on a mammography gantry allowing upright imaging, medial positioning and use of breast compression. Preliminary data indicate a significant enhancement of spatial resolution by comparison with standard imaging in the first 10 patients. Larger series will be needed to draw conclusions on sensitivity/specificity issues.

  19. DistancePPG: Robust non-contact vital signs monitoring using a camera

    Science.gov (United States)

    Kumar, Mayank; Veeraraghavan, Ashok; Sabharwal, Ashutosh

    2015-01-01

    Vital signs such as pulse rate and breathing rate are currently measured using contact probes. However, non-contact methods for measuring vital signs are desirable both in hospital settings (e.g. in the NICU) and for ubiquitous in-situ health tracking (e.g. on mobile phones and computers with webcams). Recently, camera-based non-contact vital sign monitoring has been shown to be feasible. However, camera-based vital sign monitoring is challenging for people with darker skin tone, under low lighting conditions, and/or during movement of an individual in front of the camera. In this paper, we propose distancePPG, a new camera-based vital sign estimation algorithm which addresses these challenges. DistancePPG proposes a new method of combining skin-color change signals from different tracked regions of the face using a weighted average, where the weights depend on the blood perfusion and incident light intensity in the region, to improve the signal-to-noise ratio (SNR) of the camera-based estimate. One of our key contributions is a new automatic method for determining the weights based only on the video recording of the subject. The gains in SNR of camera-based PPG estimated using distancePPG translate into a reduction of the error in vital sign estimation, and thus expand the scope of camera-based vital sign monitoring to potentially challenging scenarios. Further, a dataset will be released, comprising synchronized video recordings of the face and pulse-oximeter-based ground truth recordings from the earlobe for people with different skin tones, under different lighting conditions and for various motion scenarios. PMID:26137365
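
    The weighting scheme can be sketched as follows: each tracked face region contributes a colour-change signal, and regions whose spectrum concentrates power in the plausible pulse band get larger weights. The band limits and the crude SNR estimate below are illustrative, not the paper's exact goodness metric.

        import numpy as np

        def combine_region_signals(region_signals, fs, band=(0.7, 4.0)):
            """Weighted average of per-region colour-change signals.

            region_signals: (n_regions, n_samples) array, each row the mean
            green-channel variation of one tracked face region sampled at fs Hz.
            Each region's weight is its spectral power inside the pulse band
            divided by the power outside it."""
            x = np.asarray(region_signals, dtype=float)
            x = x - x.mean(axis=1, keepdims=True)
            freqs = np.fft.rfftfreq(x.shape[1], d=1.0 / fs)
            power = np.abs(np.fft.rfft(x, axis=1)) ** 2
            in_band = (freqs >= band[0]) & (freqs <= band[1])
            snr = power[:, in_band].sum(axis=1) / (power[:, ~in_band].sum(axis=1) + 1e-12)
            weights = snr / snr.sum()
            return weights @ x          # combined PPG waveform, length n_samples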

  20. Construction of a frameless camera-based stereotactic neuronavigator.

    Science.gov (United States)

    Cornejo, A; Algorri, M E

    2004-01-01

    We built an infrared vision system to be used as the real-time 3D motion sensor in a prototype low-cost, high-precision, frameless neuronavigator. The objective of the prototype is to develop accessible technology for increased availability of neuronavigation systems in research labs and small clinics and hospitals. We present our choice of technology, including camera and IR emitter characteristics. We describe the methodology for setting up the 3D motion sensor, from the arrangement of the cameras and the IR emitters on surgical instruments, to triangulation equations from stereo camera pairs, high-bandwidth computer communication with the cameras and real-time image processing algorithms. We briefly cover the issues of camera calibration and characterization. Although our performance results do not yet fully meet the high-precision, real-time requirements of neuronavigation systems, we describe the current improvements being made to the 3D motion sensor that will make it suitable for surgical applications.
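
    Once the two cameras are calibrated and the IR emitter has been located in both images, the 3D position follows from standard two-view triangulation. A minimal sketch with OpenCV is given below; the projection matrices and pixel coordinates stand in for values obtained from calibration and blob detection.

        import cv2
        import numpy as np

        def triangulate_marker(P1, P2, uv1, uv2):
            """P1, P2: 3x4 projection matrices of the calibrated stereo pair.
            uv1, uv2: pixel coordinates of the same IR emitter in each image.
            Returns the 3D point in the calibration frame."""
            pts1 = np.array(uv1, dtype=float).reshape(2, 1)
            pts2 = np.array(uv2, dtype=float).reshape(2, 1)
            X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4x1
            return (X_h[:3] / X_h[3]).ravel()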

  1. Defect testing of large aperture optics based on high resolution CCD camera

    International Nuclear Information System (INIS)

    Cheng Xiaofeng; Xu Xu; Zhang Lin; He Qun; Yuan Xiaodong; Jiang Xiaodong; Zheng Wanguo

    2009-01-01

    A fast method for inspecting defects in large-aperture optics is introduced. With uniform illumination by an LED source at grazing incidence, the images of defects on the surface of and inside the large-aperture optics are enlarged due to scattering. The defect images were captured by a high-resolution CCD camera and microscope, and the approximate mathematical relation between the viewed dimension and the real dimension of the defects was derived by simulation. Thus the approximate real dimension and location of all defects can be calculated from the high-resolution pictures. (authors)

  2. Cost effective system for monitoring of fish migration with a camera

    Science.gov (United States)

    Sečnik, Matej; Brilly, Mitja; Vidmar, Andrej

    2016-04-01

    Within the European LIFE project Ljubljanica connects (LIFE10 NAT/SI/000142) we have developed a cost-effective solution for monitoring fish migration through fish passes with an underwater camera. In the fish pass at Ambrožev trg and in the fish pass near the Fužine castle we installed a video camera called "Fishcam" to monitor the migration of fish through the fish passes and the success of their reconstruction. A live stream from the fishcams installed in the fish passes is available on our project website (http://ksh.fgg.uni-lj.si/ljubljanicaconnects/ang/12_camera). The fish-monitoring system consists of two parts: a waterproof box housing the computer and charger, and the camera itself. We used a highly sensitive Sony analogue camera. The advantage of this camera is its very good sensitivity in low-light conditions, so it can take good-quality pictures even at night with minimal additional lighting. For night recording we use an additional IR reflector to illuminate passing fish. The camera is connected to an 8-inch tablet PC. We decided to use a tablet PC because it is small, cheap, relatively fast and has low power consumption. On the computer we use software with advanced motion detection capabilities, so we can also detect small fish. When a fish is detected by the software, its photograph is automatically saved to the local hard drive and, for backup, to Google Drive. The system for monitoring fish migration has turned out to work very well. From the beginning of monitoring in June 2015 to the end of the year, more than 100,000 photographs were produced. A first analysis of these photographs has already been prepared, estimating the fish species and how frequently they pass through the fish pass.
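
    The project's own software is only described at a high level; a comparable motion-triggered capture loop can be put together with OpenCV's background subtraction, as in the sketch below. The camera index, pixel-count trigger threshold and output naming are illustrative assumptions.

        import time
        import cv2

        cap = cv2.VideoCapture(0)                 # analogue camera via a USB grabber
        subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=25)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = subtractor.apply(frame)
            if cv2.countNonZero(mask) > 1500:     # crude "something is passing" trigger
                cv2.imwrite(f"fish_{int(time.time())}.jpg", frame)
                time.sleep(1)                     # avoid flooding the disk

        cap.release()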

  3. Detection of Tampering Inconsistencies on Mobile Photos

    Science.gov (United States)

    Cao, Hong; Kot, Alex C.

    The fast proliferation of mobile cameras and the deteriorating trust in digital images have created a need to determine the integrity of photos captured by mobile devices. As tampering often creates some inconsistencies, we propose in this paper a novel framework to statistically detect image tampering inconsistencies using accurately detected demosaicing weights features. By first cropping four non-overlapping blocks, each from one of the four quadrants in the mobile photo, we extract a set of demosaicing weights features from each block based on a partial derivative correlation model. Through regularizing the eigenspectrum of the within-photo covariance matrix and performing eigenfeature transformation, we further derive a compact set of eigen demosaicing weights features, which are sensitive to image signal mixing from different photo sources. A metric is then proposed to quantify the inconsistency based on the eigen weights features among the blocks cropped from different regions of the mobile photo. Through comparison, we show that our eigen weights features perform better than the eigen features extracted from several other conventional sets of statistical forensics features in detecting the presence of tampering. Experimentally, our method shows a good confidence in tampering detection, especially when one of the four cropped blocks is from a different camera model or brand with a different demosaicing process.

  4. A new X-ray pinhole camera for energy dispersive X-ray fluorescence imaging with high-energy and high-spatial resolution

    Energy Technology Data Exchange (ETDEWEB)

    Romano, F.P., E-mail: romanop@lns.infn.it [IBAM, CNR, Via Biblioteca 4, 95124 Catania (Italy); INFN-LNS, Via S. Sofia 62, 95123 Catania (Italy); Altana, C. [INFN-LNS, Via S. Sofia 62, 95123 Catania (Italy); Dipartimento di Fisica e Astronomia, Università di Catania, Via S. Sofia 64, 95123 Catania (Italy); Cosentino, L.; Celona, L.; Gammino, S.; Mascali, D. [INFN-LNS, Via S. Sofia 62, 95123 Catania (Italy); Pappalardo, L. [IBAM, CNR, Via Biblioteca 4, 95124 Catania (Italy); INFN-LNS, Via S. Sofia 62, 95123 Catania (Italy); Rizzo, F. [INFN-LNS, Via S. Sofia 62, 95123 Catania (Italy); Dipartimento di Fisica e Astronomia, Università di Catania, Via S. Sofia 64, 95123 Catania (Italy)

    2013-08-01

    A new X-ray pinhole camera for the Energy Dispersive X-ray Fluorescence (ED-XRF) imaging of materials with high-energy and high-spatial resolution was designed and developed. It consists of a back-illuminated and deep-depleted CCD detector (composed of 1024 × 1024 pixels with a lateral size of 13 μm) coupled to a 70 μm laser-drilled pinhole collimator, positioned between the sample under analysis and the CCD. The X-ray pinhole camera works in a coaxial geometry allowing a wide range of magnification values. The characteristic X-ray fluorescence is induced on the samples by irradiation with an external X-ray tube working at a maximum power of 100 W (50 kV and 2 mA operating conditions). The spectroscopic capabilities of the X-ray pinhole camera were accurately investigated. The energy response and energy calibration of the CCD detector were determined by irradiating pure target materials emitting characteristic X-rays in the energy working domain of the system (between 3 keV and 30 keV). Measurements were performed using a multi-frame acquisition in single-photon counting. The characteristic X-ray spectra were obtained by automated processing of the acquired images. The energy resolution measured at the Fe–Kα line is 157 eV. The use of the X-ray pinhole camera for 2D-resolved elemental analysis was investigated by using reference patterns of different materials and geometries. The possibility of elemental mapping of samples up to an area of 3 × 3 cm² was demonstrated. Finally, the spatial resolution of the pinhole camera was measured by analyzing the profile function of a sharp edge. The spatial resolution determined at the magnification values of 3.2× and 0.8× (used as testing values) is about 90 μm and 190 μm, respectively. - Highlights: • We developed an X-ray pinhole camera for 2D X-ray fluorescence imaging. • X-ray spectra are obtained by a multi-frame acquisition in single-photon mode. • The energy resolution in the X

  5. Caliste 64, an innovative CdTe hard X-ray micro-camera

    Energy Technology Data Exchange (ETDEWEB)

    Meuris, A.; Limousin, O.; Pinsard, F.; Le Mer, I. [CEA Saclay, DSM, DAPNIA, Serv. Astrophys., F-91191 Gif sur Yvette (France); Lugiez, F.; Gevin, O.; Delagnes, E. [CEA Saclay, DSM, DAPNIA, Serv. Electron., F-91191 Gif sur Yvette (France); Vassal, M.C.; Soufflet, F.; Bocage, R. [3D-plus Company, F-78532 Buc (France)

    2008-07-01

    A prototype 64-pixel miniature camera has been designed and tested for the Simbol-X hard X-ray observatory to be flown on the joint CNES-ASI space mission in 2014. This device is called Caliste 64. It is a high-performance spectro-imager with event time-tagging capability, able to detect photons between 2 keV and 250 keV. Caliste 64 is the assembly of a 1 or 2 mm thick CdTe detector mounted on top of a readout module. CdTe detectors equipped with aluminum Schottky barrier contacts are used because of their very low dark current and excellent spectroscopic performance. The front-end electronics is a stack of four IDeF-X V1.1 ASICs, arranged perpendicular to the detection plane, to read out each pixel independently. The whole camera fits in a 10 × 10 × 20 mm³ volume and is juxtaposable on its four sides. This allows the device to be used as an elementary unit in a larger array of Caliste 64 cameras. Noise performance resulted in an ENC better than 60 electrons rms on average. The first prototype camera was tested at -10 °C with a bias of -400 V. The spectrum summed across the 64 pixels results in a resolution of 697 eV FWHM at 13.9 keV and 808 eV FWHM at 59.54 keV. (authors)

  6. ePix: a class of architectures for second generation LCLS cameras

    International Nuclear Information System (INIS)

    Dragone, A; Caragiulo, P; Markovic, B; Herbst, R; Reese, B; Herrmann, S C; Hart, P A; Segal, J; Carini, G A; Kenney, C J; Haller, G

    2014-01-01

    ePix is a novel class of ASIC architectures, based on a common platform, optimized to build modular scalable detectors for LCLS. The platform architecture is composed of a random-access analog matrix of pixels with global shutter, fast parallel column readout, and dedicated sigma-delta analog-to-digital converters per column. It also implements a dedicated control interface and all the required support electronics to perform configuration, calibration and readout of the matrix. Based on this platform, a class of front-end ASICs and several camera modules meeting different requirements can be developed by designing specific pixel architectures. This approach reduces development time and expands the possibility of integrating detector modules with different size, shape or functionality in the same camera. The ePix platform is currently under development together with the first two integrating pixel architectures: ePix100, dedicated to ultra-low-noise applications, and ePix10k, for high-dynamic-range applications.

  7. The LSST Camera 500 watt -130 degC Mixed Refrigerant Cooling System

    Energy Technology Data Exchange (ETDEWEB)

    Bowden, Gordon B.; Langton, Brian J.; /SLAC; Little, William A.; /MMR-Technologies, Mountain View, CA; Powers, Jacob R; Schindler, Rafe H.; /SLAC; Spektor, Sam; /MMR-Technologies, Mountain View, CA

    2014-05-28

    The LSST Camera has a higher cryogenic heat load than previous CCD telescope cameras due to its large size (634 mm diameter focal plane, 3.2 gigapixels) and its close-coupled front-end electronics operating at low temperature inside the cryostat. Various refrigeration technologies were considered for this telescope/camera environment, and MMR-Technology's mixed-refrigerant technology was chosen. A collaboration with that company was started in 2009. The system, based on a cluster of Joule-Thomson refrigerators running a special blend of mixed refrigerants, is described. Both the advantages and problems of applying this technology to telescope camera refrigeration are discussed. Test results from a prototype refrigerator running in a realistic telescope configuration are reported. Current and future stages of the development program are described. (auth)

  8. Real-time spot size camera for pulsed high-energy radiographic machines

    International Nuclear Information System (INIS)

    Watson, S.A.

    1993-01-01

    The focal spot size of an x-ray source is a critical parameter that degrades resolution in a flash radiograph. For best results, a small round focal spot is required; therefore, a fast and accurate measurement of the spot size is highly desirable to facilitate machine tuning. This paper describes two systems developed for Los Alamos National Laboratory's Pulsed High-Energy Radiographic Machine Emitting X-rays (PHERMEX) facility. The first uses a CCD camera combined with high-brightness phosphors, while the second utilizes phosphor storage screens. Other techniques typically record only the line spread function on radiographic film, while the systems in this paper measure the more general two-dimensional point spread function and the associated modulation transfer function in real time for shot-to-shot comparison.
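
    For context, the modulation transfer function mentioned here is simply the normalised Fourier magnitude of the measured line (or point) spread function. A minimal one-dimensional version in numpy is sketched below; the sampling pitch is a placeholder for the calibrated detector pixel size.

        import numpy as np

        def mtf_from_lsf(lsf, pixel_pitch_mm):
            """Return spatial frequencies (cycles/mm) and the normalised MTF
            computed from a sampled line spread function."""
            lsf = np.asarray(lsf, dtype=float)
            lsf = lsf / lsf.sum()                 # normalise the LSF area to 1
            mtf = np.abs(np.fft.rfft(lsf))
            mtf = mtf / mtf[0]                    # force MTF(0) = 1
            freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)
            return freqs, mtf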

  9. Mobile marketing: A literature review on its value for consumers and retailers

    OpenAIRE

    Ström, Roger; Vendel, Martin; Bredican, John

    2014-01-01

    The article describes the existing knowledge of how mobile marketing can increase value for consumers and retailers. Mobile device shopping, and consumers' use of mobile devices while shopping, is shown to be both an extension of consumers' shopping behaviours developed on Internet-connected desktop and laptop computers (PCs), and potentially new behaviours based on a mobile device's uniquely integrated features such as camera, scanners and GPS. The article focuses on how mobile marketing c...

  10. Mobile Agent based Market Basket Analysis on Cloud

    OpenAIRE

    Waghmare, Vijayata; Mukhopadhyay, Debajyoti

    2014-01-01

    This paper describes the design and development of a location-based mobile shopping application for bakery product shops. The whole application is deployed on the cloud. The three-tier architecture consists of a front-end, middleware and a back-end. The front-end level is a location-based mobile shopping application for Android mobile devices for purchasing bakery products from nearby places. The front-end level also displays associations among the purchased products. The middleware level provides a web ser...

  11. REAL-TIME CAMERA GUIDANCE FOR 3D SCENE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    F. Schindler

    2012-07-01

    Full Text Available We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both the camera trajectory and the 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points, we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to accurate results compared to ground truth after few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots as well as online flight planning of unmanned aerial vehicles.

  12. Myocardial blood flow rate and capillary permeability for 99mTc-DTPA in patients with angiographically normal coronary arteries. Evaluation of the single-injection, residue detection method with intracoronary indicator bolus injection and the use of a mobile gamma camera

    DEFF Research Database (Denmark)

    Svendsen, Jesper Hastrup; Kelbaek, H; Efsen, F

    1994-01-01

    The aims of the present study were to quantitate myocardial perfusion and capillary permeability in the human heart by means of the single-injection, residue detection method using a mobile gamma camera. With this method, the intravascular mean transit time and the capillary extraction fraction (E...

  13. Multi-core for mobile phones

    NARCIS (Netherlands)

    Berkel, van C.H.

    2009-01-01

    High-end mobile phones support multiple radio standards and a rich suite of applications, which involves advanced radio, audio, video, and graphics processing. The overall digital workload amounts to nearly 100GOPS, from 4b integer to 24b floating-point operations. With a power budget of only 1W

  14. New camera systems for fuel services

    International Nuclear Information System (INIS)

    Hummel, W.; Beck, H.J.

    2010-01-01

    AREVA NP Fuel Services have many years of experience in visual examination and measurements on fuel assemblies and associated core components using state-of-the-art cameras and measuring technologies. The techniques used allow the surface and dimensional characterization of materials and shapes by visual examination. New, enhanced and sophisticated technologies for fuel services include, for example, two shielded color camera systems for underwater use and close inspection of a fuel assembly. Nowadays the market requirements for detecting and characterizing small defects (smaller than a tenth of a millimetre) or cracks and for analyzing surface appearance on irradiated fuel rod cladding or fuel assembly structural parts have increased. Therefore it is common practice to use movie cameras with higher resolution. The radiation resistance of high-resolution CCD cameras is in general very low, and it is not possible to use them unshielded close to a fuel assembly. By extending the camera with a mirror system and shielding around the sensitive parts, the movie camera can be utilized for fuel assembly inspection. AREVA NP Fuel Services is now equipped with such movie cameras. (orig.)

  15. Hierarchical micro-mobility management in high-speed multihop access networks

    Institute of Scientific and Technical Information of China (English)

    TANG Bi-hua; MA Xiao-lei; LIU Yuan-an; GAO Jin-chun

    2006-01-01

    This article integrates hierarchical micro-mobility management with high-speed multihop access networks (HMAN) to accomplish smooth handover between different access routers. The proposed soft handover scheme in the high-speed HMAN can solve the micro-mobility management problem in the access network. This article also proposes a hybrid access router (AR) advertisement scheme and an AR selection algorithm, which use the time delay and a stable route to the AR as the gateway selection parameters. Simulations show that the proposed micro-mobility management scheme can achieve a high packet delivery fraction and improve the lifetime of the network.

  16. The Light Field Attachment: Turning a DSLR into a Light Field Camera Using a Low Budget Camera Ring

    KAUST Repository

    Wang, Yuwang; Liu, Yebin; Heidrich, Wolfgang; Dai, Qionghai

    2016-01-01

    camera. From an algorithmic point of view, the high-quality light field mode is made possible by a new light field super-resolution method that first improves the spatial resolution and image quality of the side cameras and then interpolates additional

  17. Mobile marketing for mobile games

    OpenAIRE

    Vu, Giang

    2016-01-01

    Highly developed mobile technology and devices have enabled the rise of the mobile game industry and of mobile marketing. Hence, mobile marketing is an essential key to a mobile game's success. Even though there are many articles on marketing for mobile games, there is a need to understand mobile marketing strategies in depth, and in particular how to launch a mobile campaign for a mobile game. Besides that, it is essential to understand the relationship between mobile advertising and user behaviour. There...

  18. Image compensation for camera and lighting variability

    Science.gov (United States)

    Daley, Wayne D.; Britton, Douglas F.

    1996-12-01

    With the current trend of integrating machine vision systems in industrial manufacturing and inspection applications comes the issue of camera and illumination stabilization. Unless each application is built around a particular camera and a highly controlled lighting environment, the interchangeability of cameras or fluctuations in lighting become a problem, as each camera usually has a different response. An empirical approach is proposed where color tile data is acquired using the camera of interest, and a mapping is developed to some predetermined reference image using neural networks. A similar analytical approach based on a rough analysis of the imaging systems is also considered for deriving a mapping between cameras. Once a mapping has been determined, all data from one camera are mapped to correspond to the images of the other prior to performing any processing on the data. Instead of writing separate image processing algorithms for the particular image data being received, the image data are adjusted based on each particular camera and lighting situation. All that is required when swapping cameras is the new mapping for the camera being inserted. The image processing algorithms can remain the same, as the input data has been adjusted appropriately. The results of utilizing this technique are presented for an inspection application.
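
    As a simple stand-in for the neural-network mapping described above (closer in spirit to the analytical approach also mentioned), a 3×3 colour-correction matrix can be fitted by least squares from the colour-tile measurements and then applied to every pixel. The tile values below are assumed to come from the camera under test and from the stored reference.

        import numpy as np

        def fit_color_mapping(measured_rgb, reference_rgb):
            """Fit M (3x3) such that measured @ M is approximately reference,
            using the colour tiles captured by the camera of interest and the
            stored reference values."""
            measured = np.asarray(measured_rgb, dtype=float)    # (n_tiles, 3)
            reference = np.asarray(reference_rgb, dtype=float)  # (n_tiles, 3)
            M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
            return M

        def apply_mapping(image_rgb, M):
            """Map an HxWx3 image from the test camera into the reference space."""
            h, w, _ = image_rgb.shape
            corrected = image_rgb.reshape(-1, 3).astype(float) @ M
            return np.clip(corrected, 0, 255).reshape(h, w, 3).astype(np.uint8)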

  19. Mobile app for chemical detection

    Science.gov (United States)

    Klunder, Gregory; Cooper, Chadway R.; Satcher, Jr., Joe H.; Tekle, Ephraim A.

    2017-07-18

    The present invention incorporates the camera from a mobile device (phone, iPad, etc.) to capture an image from a chemical test kit and process the image to provide chemical information. A simple user interface enables the automatic evaluation of the image, data entry, GPS information, and the maintenance of records from previous analyses.

  20. Control system for gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.

    1977-01-01

    An improved gamma camera arrangement is described which utilizes a solid-state detector formed of high-purity germanium. The central arrangement of the camera operates to carry out a trapezoidal filtering operation over antisymmetrically summed spatial signals through gated integration procedures utilizing idealized integrating intervals. By simultaneously carrying out peak energy evaluation of the input signals, desirable control over pulse pile-up phenomena is achieved. Additionally, through the use of the time derivative of incoming pulse or signal energy information to initially enable the control system, a low-level information evaluation is provided, serving to enhance the signal processing efficiency of the camera.

  1. An indoor augmented reality mobile application for simulation of building evacuation

    Science.gov (United States)

    Sharma, Sharad; Jerripothula, Shanmukha

    2015-03-01

    Augmented Reality enables people to remain connected with the physical environment they are in, and invites them to look at the world from new and alternative perspectives. There has been increasing interest in emergency evacuation applications for mobile devices. Nearly all smartphones these days are Wi-Fi and GPS enabled. In this paper, we propose a novel emergency evacuation system that will help people to safely evacuate a building in case of an emergency situation. It will further enhance knowledge and understanding of where the exits are in the building and of safe evacuation procedures. We have applied mobile augmented reality (mobile AR) to create an application with the Unity 3D gaming engine. We show how the mobile AR application is able to display a 3D model of the building and an animation of people evacuating, using markers and a web camera. The system gives a visual representation of a building in 3D space, allowing people to see where the exits are in the building through the use of a smartphone or tablet. Pilot studies conducted with the system showed its partial success and demonstrated the effectiveness of the application in emergency evacuation. Our computer vision methods give good results when the markers are close to the camera, but accuracy decreases when the markers are far away from the camera.

  2. Obstacle negotiation control for a mobile robot suspended on overhead ground wires by optoelectronic sensors

    Science.gov (United States)

    Zheng, Li; Yi, Ruan

    2009-11-01

    Power line inspection and maintenance already benefit from developments in mobile robotics. This paper presents mobile robots capable of crossing obstacles on overhead ground wires. A teleoperated robot carries out inspection and maintenance tasks on power transmission line equipment. The inspection robot is driven by 11 motors and has two arms, two wheels and two claws. The inspection robot is designed to realize the functions of observation, grasping, walking, rolling, turning, rising, and descending. This paper is oriented toward 100% reliable obstacle detection and identification, and sensor fusion to increase the autonomy level. An embedded computer based on the PC/104 bus was chosen as the core of the control system. A visible-light camera and a thermal infrared camera are both installed in a programmable pan-and-tilt camera (PPTC) unit. High-quality visual feedback rapidly becomes crucial for human-in-the-loop control and effective teleoperation. The communication system between the robot and the ground station is based on mesh wireless networks in the 700 MHz band. An expert system programmed with Visual C++ was developed to implement the automatic control. Optoelectronic laser sensors and a laser range scanner were installed on the robot for obstacle-navigation control, enabling it to grasp the overhead ground wires. A novel prototype with careful consideration of mobility was designed to inspect 500 kV power transmission lines. Results of experiments demonstrate that the robot can be applied to execute navigation and inspection tasks.

  3. QoC-based Optimization of End-to-End M-Health Data Delivery Services

    NARCIS (Netherlands)

    Widya, I.A.; van Beijnum, Bernhard J.F.; Salden, Alfons

    2006-01-01

    This paper addresses how Quality of Context (QoC) can be used to optimize end-to-end mobile healthcare (m-health) data delivery services in the presence of alternative delivery paths, which is quite common in a pervasive computing and communication environment. We propose min-max-plus based

  4. [A Quality Assurance (QA) System with a Web Camera for High-dose-rate Brachytherapy].

    Science.gov (United States)

    Hirose, Asako; Ueda, Yoshihiro; Oohira, Shingo; Isono, Masaru; Tsujii, Katsutomo; Inui, Shouki; Masaoka, Akira; Taniguchi, Makoto; Miyazaki, Masayoshi; Teshima, Teruki

    2016-03-01

    The quality assurance (QA) system that simultaneously quantifies the position and duration of a 192Ir source (dwell position and time) was developed, and the performance of this system was evaluated in high-dose-rate brachytherapy. This QA system has two functions: to verify and to quantify the dwell position and time by using a web camera. The web camera records 30 images per second over a range from 1,425 mm to 1,505 mm. A user verifies the source position from the web camera in real time. The source position and duration were quantified from the movie using in-house software that applies a template-matching technique. This QA system allowed verification of the absolute position in real time and quantification of dwell position and time simultaneously. It was evident from the verification of the system that the mean step-size error was 0.31±0.1 mm and the mean dwell-time error 0.1±0.0 s. Absolute position errors can be determined with an accuracy of 1.0 mm at all dwell points in three step sizes, and dwell-time errors with an accuracy of 0.1% for more than 10.0 s of the planned time. This system provides quick verification and quantification of the dwell position and time with high accuracy at various dwell positions without depending on the step size.
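
    The template-matching step that localises the source in each web-camera frame can be sketched as follows; the template image, the millimetre-per-pixel scale factor and the frame orientation are placeholders for values obtained from the actual setup and a ruler calibration, not the in-house software described in the paper.

        import cv2

        TEMPLATE = cv2.imread("source_template.png", cv2.IMREAD_GRAYSCALE)
        MM_PER_PIXEL = 0.12    # from a ruler calibration of the camera view (placeholder)

        def locate_source(frame_gray):
            """Return the estimated source position (mm along the camera's view
            of the catheter) and the normalised match score for one frame."""
            result = cv2.matchTemplate(frame_gray, TEMPLATE, cv2.TM_CCOEFF_NORMED)
            _, score, _, max_loc = cv2.minMaxLoc(result)
            x_px = max_loc[0] + TEMPLATE.shape[1] / 2.0   # centre of the matched window
            return x_px * MM_PER_PIXEL, score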

  5. A quality assurance (QA) system with a web camera for high-dose-rate brachytherapy

    International Nuclear Information System (INIS)

    Hirose, Asako; Ueda, Yoshihiro; Ohira, Shingo

    2016-01-01

    The quality assurance (QA) system that simultaneously quantifies the position and duration of a 192Ir source (dwell position and time) was developed, and the performance of this system was evaluated in high-dose-rate brachytherapy. This QA system has two functions: to verify and to quantify the dwell position and time by using a web camera. The web camera records 30 images per second over a range from 1,425 mm to 1,505 mm. A user verifies the source position from the web camera in real time. The source position and duration were quantified from the movie using in-house software that applies a template-matching technique. This QA system allowed verification of the absolute position in real time and quantification of dwell position and time simultaneously. It was evident from the verification of the system that the mean step-size error was 0.3±0.1 mm and the mean dwell-time error 0.1±0.0 s. Absolute position errors can be determined with an accuracy of 1.0 mm at all dwell points in three step sizes, and dwell-time errors with an accuracy of 0.1% for more than 10.0 s of the planned time. This system provides quick verification and quantification of the dwell position and time with high accuracy at various dwell positions without depending on the step size. (author)

  6. Storage system software solutions for high-end user needs

    Science.gov (United States)

    Hogan, Carole B.

    1992-01-01

    Today's high-end storage user is one that requires rapid access to a reliable terabyte-capacity storage system running in a distributed environment. This paper discusses conventional storage system software and concludes that this software, designed for other purposes, cannot meet high-end storage requirements. The paper also reviews the philosophy and design of evolving storage system software. It concludes that this new software, designed with high-end requirements in mind, provides the potential for solving not only the storage needs of today but those of the foreseeable future as well.

  7. Solution-processable ambipolar diketopyrrolopyrrole-selenophene polymer with unprecedentedly high hole and electron mobilities.

    Science.gov (United States)

    Lee, Junghoon; Han, A-Reum; Kim, Jonggi; Kim, Yiho; Oh, Joon Hak; Yang, Changduk

    2012-12-26

    There is a fast-growing demand for polymer-based ambipolar thin-film transistors (TFTs), in which both n-type and p-type transistor operations are realized in a single layer, while maintaining simplicity in processing. Research progress toward this end is essentially fueled by molecular engineering of the conjugated backbones of the polymers and the development of process architectures for device fabrication, which has recently led to hole and electron mobilities of more than 1.0 cm² V⁻¹ s⁻¹. However, ambipolar polymers with even higher performance are still required. By taking into account both the conjugated backbone and side chains of the polymer component, we have developed a dithienyl-diketopyrrolopyrrole (TDPP) and selenophene containing polymer with hybrid siloxane-solubilizing groups (PTDPPSe-Si). A synergistic combination of rational polymer backbone design, side-chain dynamics, and solution processing affords an enormous boost in ambipolar TFT performance, resulting in unprecedentedly high hole and electron mobilities of 3.97 and 2.20 cm² V⁻¹ s⁻¹, respectively.

  8. High school students’ usage behavior and views about mobile phones

    Directory of Open Access Journals (Sweden)

    Ahmet Ergin

    2014-09-01

    Full Text Available Objective: The aim of this study was to determine high school students' usage behavior and views about mobile phones. Methods: In total, 253 students (85.5%) educated at Honaz High School within the academic year 2010-2011 participated in this cross-sectional study, and a questionnaire consisting of 42 questions aimed at determining usage behavior and views about mobile phones was administered to the students. Results: The mean age of the students was 16.1 ± 1.1 years, and 56.9% of them were girls. 79.8% of the students have a mobile phone, and 53.9% of them make a daily average of over 30 minutes of mobile phone calls. 76.1% of participants stated that they did not use headphones, 78.1% did not turn off their mobile phones when sleeping, and 67.3% put the phone right next to them or under the pillow. 83.1% of the students think mobile phones are harmful to human health, 56.7% think base stations are harmful to human health and the environment, and 91.3% think mobile phones are harmful to children, pregnant women and elderly people. Conclusion: It was found that mobile phone ownership among the students is widespread, the age of starting to use a mobile phone is low, headphone usage is low, and knowledge about base stations is not adequate.

  9. Investigation of high resolution compact gamma camera module based on a continuous scintillation crystal using a novel charge division readout method

    International Nuclear Information System (INIS)

    Dai Qiusheng; Zhao Cuilan; Qi Yujin; Zhang Hualin

    2010-01-01

    The objective of this study is to investigate a high performance and lower cost compact gamma camera module for a multi-head small animal SPECT system. A compact camera module was developed using a thin Lutetium Oxyorthosilicate (LSO) scintillation crystal slice coupled to a Hamamatsu H8500 position sensitive photomultiplier tube (PSPMT). A two-stage charge division readout board based on a novel subtractive resistive readout with a truncated center-of-gravity (TCOG) positioning method was developed for the camera. The performance of the camera was evaluated using a flood 99mTc source with a four-quadrant bar-mask phantom. The preliminary experimental results show that the image shrinkage problem associated with the conventional resistive readout can be effectively overcome by the novel subtractive resistive readout with an appropriate fraction subtraction factor. The response output area (ROA) of the camera shown in the flood image was improved by up to 34%, and an intrinsic detector spatial resolution better than 2 mm was achieved. In conclusion, the utilization of a continuous scintillation crystal and a flat-panel PSPMT equipped with a novel subtractive resistive readout is a feasible approach for developing a high performance and lower cost compact gamma camera. (authors)
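    The record describes a truncated center-of-gravity (TCOG) positioning scheme but not its exact formula. The sketch below shows one common variant, subtracting a fixed fraction of the summed charge from every channel and clipping negative values before forming the centroid; the channel layout and fraction value are assumptions, not the paper's parameters.

```python
# Hedged illustration of truncated center-of-gravity (TCOG) positioning.
# The exact subtraction scheme of the cited readout is not reproduced here;
# this sketch subtracts a fixed fraction of the total charge from every
# channel and clips negative values before the centroid is formed.
import numpy as np

def tcog_position(charges: np.ndarray, coords: np.ndarray, fraction: float = 0.1) -> float:
    """Estimate the event position from per-channel charges.

    charges  -- charge collected on each readout channel
    coords   -- physical coordinate of each channel (e.g. anode x position, mm)
    fraction -- subtraction factor; 0.0 reduces to the plain center of gravity
    """
    truncated = np.clip(charges - fraction * charges.sum(), 0.0, None)
    if truncated.sum() == 0:
        raise ValueError("all charges truncated away; lower the fraction")
    return float(np.dot(coords, truncated) / truncated.sum())

# Example: a light spot near the detector edge. The plain COG (fraction=0) is
# pulled toward the centre by the broad tails; truncation restores the peak position.
coords = np.linspace(-12.5, 12.5, 8)            # eight channels across 25 mm
charges = np.exp(-0.5 * ((coords - 9.0) / 4.0) ** 2)
print(tcog_position(charges, coords, 0.0), tcog_position(charges, coords, 0.1))
```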

  10. Super-resolution in plenoptic cameras using FPGAs.

    Science.gov (United States)

    Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime

    2014-05-16

    Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras, allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field-programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes.

  11. Super-Resolution in Plenoptic Cameras Using FPGAs

    Directory of Open Access Journals (Sweden)

    Joel Pérez

    2014-05-01

    Full Text Available Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras, allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field-programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes.
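    The cited implementation is written in VHDL for FPGAs; as a software-level reference for the underlying idea, the following numpy sketch performs a basic shift-and-add super-resolution from low-resolution views with known sub-pixel shifts. It is only one member of the algorithm family suggested by the abstract, not the paper's actual method.

```python
# Software reference sketch of shift-and-add super-resolution from several
# low-resolution views (e.g. plenoptic sub-apertures) with known sub-pixel shifts.
import numpy as np

def shift_and_add(views, shifts, factor):
    """views  -- list of HxW low-resolution images
    shifts -- per-view (dy, dx) sub-pixel shifts in low-resolution pixels
    factor -- integer super-resolution factor"""
    h, w = views[0].shape
    acc = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(acc)
    for img, (dy, dx) in zip(views, shifts):
        # Nearest-neighbour placement of each LR sample onto the HR grid.
        ys = np.rint((np.arange(h)[:, None] + dy) * factor).astype(int)
        xs = np.rint((np.arange(w)[None, :] + dx) * factor).astype(int)
        yy = np.clip(np.broadcast_to(ys, (h, w)), 0, h * factor - 1)
        xx = np.clip(np.broadcast_to(xs, (h, w)), 0, w * factor - 1)
        np.add.at(acc, (yy, xx), img)
        np.add.at(weight, (yy, xx), 1.0)
    return acc / np.maximum(weight, 1e-9)
```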

  12. Diffusion of Mobile Phones in China

    NARCIS (Netherlands)

    S. Sangwan (Sunanda); L-F. Pau (Louis-François)

    2005-01-01

    textabstractDiffusion of mobile communication has induced great societal changes in China. Factors at global market, communications industry and end-user market levels are driving the adoption at a high rate. Firstly, China’s economic emergence together with e.g. accession to WTO has led to foreign

  13. Tomographic Small-Animal Imaging Using a High-Resolution Semiconductor Camera

    Science.gov (United States)

    Kastis, GA; Wu, MC; Balzer, SJ; Wilson, DW; Furenlid, LR; Stevenson, G; Barber, HB; Barrett, HH; Woolfenden, JM; Kelly, P; Appleby, M

    2015-01-01

    We have developed a high-resolution, compact semiconductor camera for nuclear medicine applications. The modular unit has been used to obtain tomographic images of phantoms and mice. The system consists of a 64 x 64 CdZnTe detector array and a parallel-hole tungsten collimator mounted inside a 17 cm x 5.3 cm x 3.7 cm tungsten-aluminum housing. The detector is a 2.5 cm x 2.5 cm x 0.15 cm slab of CdZnTe connected to a 64 x 64 multiplexer readout via indium-bump bonding. The collimator is 7 mm thick, with a 0.38 mm pitch that matches the detector pixel pitch. We obtained a series of projections by rotating the object in front of the camera. The axis of rotation was vertical and about 1.5 cm away from the collimator face. Mouse holders were made out of acrylic plastic tubing to facilitate rotation and the administration of gas anesthetic. Acquisition times were varied from 60 sec to 90 sec per image for a total of 60 projections at an equal spacing of 6 degrees between projections. We present tomographic images of a line phantom and mouse bone scan and assess the properties of the system. The reconstructed images demonstrate spatial resolution on the order of 1–2 mm. PMID:26568676
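    The record does not state which reconstruction algorithm was applied to the 60 projections taken every 6 degrees; assuming a standard filtered back-projection purely for illustration, a single transaxial slice could be reconstructed as follows (the Shepp-Logan phantom stands in for real projection data).

```python
# Hedged example: filtered back-projection of 60 projections, one every 6 degrees.
# The reconstruction method is an assumption, not taken from the cited work.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

angles = np.arange(0.0, 360.0, 6.0)       # 60 projection angles in degrees
phantom = shepp_logan_phantom()           # stand-in for a transaxial mouse slice
sinogram = radon(phantom, theta=angles)   # simulated projections
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")
```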

  14. WE-DE-BRA-11: A Study of Motion Tracking Accuracy of Robotic Radiosurgery Using a Novel CCD Camera Based End-To-End Test System

    Energy Technology Data Exchange (ETDEWEB)

    Wang, L; M Yang, Y [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States); Nelson, B [Logos Systems Intl, Scotts Valley, CA (United States)

    2016-06-15

    Purpose: A novel end-to-end test system using a CCD camera and a scintillator based phantom (XRV-124, Logos Systems Int’l) capable of measuring the beam-by-beam delivery accuracy of Robotic Radiosurgery (CyberKnife) was developed and reported in our previous work. This work investigates its application in assessing the motion tracking (Synchrony) accuracy for CyberKnife. Methods: A QA plan with anterior and lateral beams (with 4 different collimator sizes) was created (Multiplan v5.3) for the XRV-124 phantom. The phantom was placed on a motion platform (superior and inferior movement), and the plans were delivered on the CyberKnife M6 system using four motion patterns: static, sine wave, sine wave with 15° phase shift, and a patient breathing pattern composed of 2 cm maximum motion with a 4 second breathing cycle. Under integral recording mode, the time-averaged beam vectors (X, Y, Z) were measured by the phantom and compared with static delivery. In dynamic recording mode, the beam spots were recorded at a rate of 10 frames/second. The beam vector deviation from the average position was evaluated against the various breathing patterns. Results: The average beam positions of the six deliveries with no motion and three deliveries with Synchrony tracking on ideal motion (sine wave without phase shift) all agree within −0.03 ± 0.00 mm, 0.10 ± 0.04 mm, and 0.04 ± 0.03 mm in the X, Y, and Z directions. Radiation beam width (FWHM) variations are within ±0.03 mm. Dynamic video recording showed submillimeter tracking stability for both regular and irregular breathing patterns; however, tracking errors of up to 3.5 mm were observed when a 15 degree phase shift was introduced. Conclusion: The XRV-124 system is able to provide 3D and 4D targeting accuracy for CyberKnife delivery with Synchrony. The experimental results showed sub-millimeter delivery accuracy in the phantom with excellent correlation between target and breathing motion. The accuracy was degraded when irregular motion and phase shift were introduced.

  15. Mobile Phone Images and Video in Science Teaching and Learning

    Science.gov (United States)

    Ekanayake, Sakunthala Yatigammana; Wishart, Jocelyn

    2014-01-01

    This article reports a study into how mobile phones could be used to enhance teaching and learning in secondary school science. It describes four lessons devised by groups of Sri Lankan teachers all of which centred on the use of the mobile phone cameras rather than their communication functions. A qualitative methodological approach was used to…

  16. Quantifying geological processes on Mars - Results of the high resolution stereo camera (HRSC) on Mars express

    NARCIS (Netherlands)

    Jaumann, R.; Tirsch, D.; Hauber, E.; Ansan, V.; Di Achille, G.; Erkeling, G.; Fueten, F.; Head, J.; Kleinhans, M. G.; Mangold, N.; Michael, G. G.; Neukum, G.; Pacifici, A.; Platz, T.; Pondrelli, M.; Raack, J.; Reiss, D.; Williams, D. A.; Adeli, S.; Baratoux, D.; De Villiers, G.; Foing, B.; Gupta, S.; Gwinner, K.; Hiesinger, H.; Hoffmann, H.; Deit, L. Le; Marinangeli, L.; Matz, K. D.; Mertens, V.; Muller, J. P.; Pasckert, J. H.; Roatsch, T.; Rossi, A. P.; Scholten, F.; Sowe, M.; Voigt, J.; Warner, N.

    2015-01-01

    This review summarizes the use of High Resolution Stereo Camera (HRSC) data as an instrumental tool and its application in the analysis of geological processes and landforms on Mars during the last 10 years of operation. High-resolution digital elevation models on a local to regional scale

  17. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    In this article we present an automatic camera calibration algorithm using multiple trajectories in a multiple camera network with non-overlapping field-of-views (FOV). Visible trajectories within a camera FOV are assumed to be measured with respect to the camera local co-ordinate system.

  18. Coalescence dynamics of mobile and immobile fluid interfaces

    KAUST Repository

    Vakarelski, Ivan Uriev

    2018-01-12

    Coalescence dynamics between deformable bubbles and droplets can be dramatically affected by the mobility of the interfaces, with fully tangentially mobile bubble-liquid or droplet-liquid interfaces expected to accelerate the coalescence by orders of magnitude. However, there is a lack of systematic experimental investigations that quantify this effect. Using high-speed camera imaging, we examine the free rise and coalescence of small air bubbles (100 to 1300 μm in diameter) with a liquid interface. A perfluorocarbon liquid, PP11, is used as a model liquid to investigate coalescence dynamics between fully mobile and immobile deformable interfaces. The mobility of the bubble surface was determined by measuring the terminal rise velocity of small bubbles rising at Reynolds numbers, Re, less than 0.1, and the mobility of the free PP11 surface by measuring the deceleration kinetics of a small bubble approaching the interface. Induction or film-drainage times of a bubble at the mobile PP11-air surface were found to be more than two orders of magnitude shorter than in the case of a bubble at an immobile PP11-water interface. A theoretical model is used to illustrate the effect of hydrodynamics and interfacial mobility on the induction or film-drainage time. The results of this study are expected to stimulate the development of a comprehensive theoretical model for coalescence dynamics between two fully or partially mobile fluid interfaces.
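    In the creeping-flow regime cited above (Re < 0.1), surface mobility can be read off the terminal rise velocity because the Stokes (immobile, rigid-sphere) and Hadamard-Rybczynski (fully mobile) predictions differ by a factor of 3/2. A sketch with placeholder fluid properties, not the measured values of the study:

```python
# Creeping-flow (Re << 1) terminal rise velocity of a small bubble:
#   rigid/immobile interface (Stokes):            U = 2 * drho * g * R^2 / (9 * mu)
#   fully mobile interface (Hadamard-Rybczynski): U = 1 * drho * g * R^2 / (3 * mu)
# i.e. a fully mobile surface rises 1.5x faster. The fluid properties below are
# rough placeholder values, not the paper's measured data.
G = 9.81               # m/s^2

def rise_velocity(radius_m, drho, mu, mobile):
    prefactor = 1.0 / 3.0 if mobile else 2.0 / 9.0
    return prefactor * drho * G * radius_m ** 2 / mu

drho = 1.9e3           # kg/m^3, liquid minus gas density (placeholder)
mu = 0.06              # Pa*s, liquid viscosity (placeholder)
for d_um in (100, 500, 1300):
    r = d_um * 1e-6 / 2
    print(d_um, rise_velocity(r, drho, mu, False), rise_velocity(r, drho, mu, True))
```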

  19. Mobile Educational Augmented Reality Games: A Systematic Literature Review and Two Case Studies

    Directory of Open Access Journals (Sweden)

    Teemu H. Laine

    2018-03-01

    Full Text Available Augmented reality (AR) has evolved from research projects into mainstream applications that cover diverse fields, such as entertainment, health, business, tourism and education. In particular, AR games, such as Pokémon Go, have contributed to introducing the AR technology to the general public. The proliferation of modern smartphones and tablets with large screens, cameras, and high processing power has ushered in mobile AR applications that can provide context-sensitive content to users whilst freeing them to explore the context. To avoid ambiguity, I define mobile AR as a type of AR where a mobile device (smartphone or tablet) is used to display and interact with virtual content that is overlaid on top of a real-time camera feed of the real world. Beyond being mere entertainment, AR and games have been shown to possess significant affordances for learning. Although previous research has done a decent job of reviewing research on educational AR applications, I identified a need for a comprehensive review on research related to educational mobile AR games (EMARGs). This paper explored the research landscape on EMARGs over the period 2012–2017 through a systematic literature review complemented by two case studies in which the author participated. After a comprehensive literature search and filtering, I analyzed 31 EMARGs from the perspectives of technology, pedagogy, and gaming. Moreover, I presented an analysis of 26 AR platforms that can be used to create mobile AR applications. I then discussed the results in depth and synthesized my interpretations into 13 guidelines for future EMARG developers.

  20. MINER - A Mobile Imager of Neutrons for Emergency Responders

    Energy Technology Data Exchange (ETDEWEB)

    Goldsmith, John E. M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Brennan, James S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gerling, Mark D [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kiff, Scott D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mascarenhas, Nicholas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Van De Vreugde, James L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-10-01

    We have developed a mobile fast neutron imaging platform to enhance the capabilities of emergency responders in the localization and characterization of special nuclear material. This mobile imager of neutrons for emergency responders (MINER) is based on the Neutron Scatter Camera, a large segmented imaging system that was optimized for large-area search applications. Due to the reduced size and power requirements of a man-portable system, MINER has been engineered to fit a much smaller form factor, and to be operated from either a battery or AC power. We chose a design that enabled omnidirectional (4π) imaging, with only a ~twofold decrease in sensitivity compared to the much larger neutron scatter cameras. The system was designed to optimize its performance for neutron imaging and spectroscopy, but it does also function as a Compton camera for gamma imaging. This document outlines the project activities, broadly characterized as system development, laboratory measurements, and deployments, and presents sample results in these areas. Additional information can be found in the documents that reside in WebPMIS.

  1. The World in Slow Motion: Using a High-Speed Camera in a Physics Workshop

    Science.gov (United States)

    Dewanto, Andreas; Lim, Geok Quee; Kuang, Jianhong; Zhang, Jinfeng; Yeo, Ye

    2012-01-01

    We present a physics workshop for college students to investigate various physical phenomena using high-speed cameras. The technical specifications required, the step-by-step instructions, as well as the practical limitations of the workshop, are discussed. This workshop is also intended to be a novel way to promote physics to Generation-Y…

  2. Multiple-camera tracking: UK government requirements

    Science.gov (United States)

    Hosmer, Paul

    2007-10-01

    The Imagery Library for Intelligent Detection Systems (i-LIDS) is the UK government's new standard for Video Based Detection Systems (VBDS). The standard was launched in November 2006 and evaluations against it began in July 2007. With the first four i-LIDS scenarios completed, the Home Office Scientific Development Branch (HOSDB) are looking toward the future of intelligent vision in the security surveillance market by adding a fifth scenario to the standard. The fifth i-LIDS scenario will concentrate on the development, testing and evaluation of systems for the tracking of people across multiple cameras. HOSDB and the Centre for the Protection of National Infrastructure (CPNI) identified a requirement to track targets across a network of CCTV cameras using both live and post-event imagery. The Detection and Vision Systems group at HOSDB were asked to determine the current state of the market and develop an in-depth Operational Requirement (OR) based on government end-user requirements. Using this OR, the i-LIDS team will develop a full i-LIDS scenario to aid the machine vision community in its development of multi-camera tracking systems. By defining a requirement for multi-camera tracking and building this into the i-LIDS standard, the UK government will provide a widely available tool that developers can use to help them turn theory and conceptual demonstrators into front-line applications. This paper will briefly describe the i-LIDS project and then detail the work conducted in building the new tracking aspect of the standard.

  3. Microprocessor-controlled, wide-range streak camera

    Energy Technology Data Exchange (ETDEWEB)

    Amy E. Lewis, Craig Hollabaugh

    2006-09-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera’s user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  4. Camera systems for crash and hyge testing

    Science.gov (United States)

    Schreppers, Frederik

    1995-05-01

    Since the beginning of the use of high-speed cameras for crash and hyge-testing, substantial changes have taken place. Both the high-speed cameras and the electronic control equipment are more sophisticated nowadays. With regard to high-speed equipment, a short historical retrospective will show that for high-speed cameras the improvements are mainly concentrated in design details, whereas the electronic control equipment has taken full advantage of the rapid progress in electronic and computer technology over the last decades. Nowadays many companies and institutes involved in crash and hyge-testing wish to perform this testing, as far as possible, as an automatic computer-controlled routine in order to maintain and improve security and quality. By means of several solutions realized in practice, it will be shown how these requirements can be met.

  5. Calibration of an Outdoor Distributed Camera Network with a 3D Point Cloud

    Directory of Open Access Journals (Sweden)

    Agustín Ortega

    2014-07-01

    Full Text Available Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to suffer frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: one is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second case consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC).
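    The ground-plane mappings mentioned for the Barcelona Robot Lab case are plane-to-plane homographies. A minimal sketch of how such a mapping can be estimated and used, with made-up image/world correspondences rather than the project's calibration data:

```python
# Hedged sketch of the ground-plane mapping mentioned above: estimate a
# homography from a few image <-> world (ground plane) correspondences and use
# it to project a detected person's foot point into world coordinates.
# The correspondence values are invented for illustration.
import cv2
import numpy as np

image_pts = np.array([[102, 540], [830, 512], [760, 240], [215, 230]], dtype=np.float32)
world_pts = np.array([[0.0, 0.0], [8.0, 0.0], [8.0, 12.0], [0.0, 12.0]], dtype=np.float32)  # metres

H, _ = cv2.findHomography(image_pts, world_pts, method=0)

foot_px = np.array([[[450.0, 500.0]]], dtype=np.float32)   # detection in the image
foot_world = cv2.perspectiveTransform(foot_px, H)          # position on the walking area
print(foot_world)
```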

  6. Interface-controlled, high-mobility organic transistors

    NARCIS (Netherlands)

    Jurchescu, Oana D.; Popinciuc, Mihaita; van Wees, Bart J.; Palstra, Thomas T. M.

    2007-01-01

    The achievement of high mobilities in field-effect transistors (FETs) is one of the main challenges for the widespread application of organic conductors in devices. Good device performance of a single-crystal pentacene FET requires both removal of impurity molecules from the bulk and the

  7. Development of gamma camera display phantom for quality control in developing countries

    International Nuclear Information System (INIS)

    Todd-Pokropek, A.

    1981-08-01

    A special phantom suitable for the routine evaluation of ''end-to-end'' gamma camera system performance, that is, system performance from input to output, is described. The design finally adopted, called the ''strip-wedge phantom'' and consisting of an array of copper or aluminium wedges of various thicknesses, permits the evaluation of contrast along one axis and resolution along the other. It is proposed that on acceptance testing of a gamma camera system a series of progressively degraded images should be obtained from the best possible with the system to very poor. An ''action threshold'' should then be defined such that image quality below this threshold would warrant such action as calling in the service engineer. Daily routine images should then be examined with reference to this threshold. Experience with the phantom is summarized

  8. Model-based design evaluation of a compact, high-efficiency neutron scatter camera

    Science.gov (United States)

    Weinfurther, Kyle; Mattingly, John; Brubaker, Erik; Steele, John

    2018-03-01

    This paper presents the model-based design and evaluation of an instrument that estimates incident neutron direction using the kinematics of neutron scattering by hydrogen-1 nuclei in an organic scintillator. The instrument design uses a single, nearly contiguous volume of organic scintillator that is internally subdivided only as necessary to create optically isolated pillars, i.e., long, narrow parallelepipeds of organic scintillator. Scintillation light emitted in a given pillar is confined to that pillar by a combination of total internal reflection and a specular reflector applied to the four sides of the pillar transverse to its long axis. The scintillation light is collected at each end of the pillar using a photodetector, e.g., a microchannel plate photomultiplier (MCP-PM) or a silicon photomultiplier (SiPM). In this optically segmented design, the (x, y) position of scintillation light emission (where the x and y coordinates are transverse to the long axis of the pillars) is estimated as the pillar's (x, y) position in the scintillator "block", and the z-position (the position along the pillar's long axis) is estimated from the amplitude and relative timing of the signals produced by the photodetectors at each end of the pillar. The neutron's incident direction and energy are estimated from the (x, y, z) positions of two sequential neutron-proton scattering interactions in the scintillator block using elastic scatter kinematics. For proton recoils greater than 1 MeV, we show that the (x, y, z) position of neutron-proton scattering can be estimated with < 1 cm root-mean-squared [RMS] error and the proton recoil energy can be estimated with < 50 keV RMS error by fitting the photodetectors' response time history to models of optical photon transport within the scintillator pillars. Finally, we evaluate several alternative designs of this proposed single-volume scatter camera made of pillars of plastic scintillator (SVSC-PiPS), studying the effect of
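    A hedged sketch of the two-interaction elastic-scatter kinematics summarized above: the incident energy is the first proton-recoil energy plus the scattered-neutron energy obtained from the time of flight between the two reconstructed positions, and the incident direction lies on a cone about the line joining them. The numerical inputs are illustrative only.

```python
# Hedged sketch of the two-interaction kinematics (non-relativistic n-p elastic
# scattering). Inputs are the two reconstructed interaction positions, the first
# proton-recoil energy and the time of flight between the interactions.
import numpy as np

M_N_MEV = 939.565           # neutron rest mass [MeV/c^2]
C_CM_PER_NS = 29.9792458    # speed of light [cm/ns]

def cone_from_double_scatter(pos1_cm, pos2_cm, e_p1_mev, tof_ns):
    axis = np.asarray(pos2_cm, float) - np.asarray(pos1_cm, float)
    d = np.linalg.norm(axis)
    v = d / tof_ns                                        # scattered-neutron speed [cm/ns]
    e_n_scattered = 0.5 * M_N_MEV * (v / C_CM_PER_NS) ** 2
    e_incident = e_p1_mev + e_n_scattered                 # elastic scattering on hydrogen
    theta = np.arctan(np.sqrt(e_p1_mev / e_n_scattered))  # tan^2(theta) = E_p1 / E_n'
    return axis / d, theta, e_incident                    # cone axis, half-angle, energy

axis, theta, e_n = cone_from_double_scatter([0, 0, 0], [4.0, 1.0, 0.5], 1.2, 2.5)
print(np.degrees(theta), e_n)
```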

  9. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    Directory of Open Access Journals (Sweden)

    Akshay Gopinathan Nair

    2015-01-01

    Full Text Available Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  10. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room.

    Science.gov (United States)

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Della Rocca, David; Della Rocca, Robert C; Andron, Aleza; Jain, Vandana

    2015-10-01

    To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  11. Data and image transfer using mobile phones to strengthen microscopy-based diagnostic services in low and middle income country laboratories.

    Directory of Open Access Journals (Sweden)

    Coosje J Tuijn

    Full Text Available BACKGROUND: The emerging market of mobile phone technology and its use in the health sector is rapidly expanding and connecting even the most remote areas of the world. Distributing diagnostic images over the mobile network for knowledge sharing, feedback or quality control is a logical innovation. OBJECTIVE: To determine the feasibility of using mobile phones for capturing microscopy images and transferring these to a central database for assessment, feedback and educational purposes. METHODS: A feasibility study was carried out in Uganda. Images of microscopy samples were taken using a prototype connector that could fix a variety of mobile phones to a microscope. An Information Technology (IT) platform was set up for data transfer from a mobile phone to a website, including feedback by text messaging to the end user. RESULTS: Clear images were captured using mobile phone cameras of 2 megapixels (MP) up to 5 MP. Images were sent by mobile Internet to a website where they were visualized and feedback could be provided to the sender by means of a text message. CONCLUSION: The process of capturing microscopy images on mobile phones, relaying them to a central review website and feeding back to the sender is feasible and of potential benefit in resource-poor settings. Even though the system needs further optimization, it became evident from discussions with stakeholders that there is a demand for this type of technology.

  12. A micro-machined retro-reflector for improving light yield in ultra-high-resolution gamma cameras

    NARCIS (Netherlands)

    Heemskerk, J.W.T.; Korevaar, M.A.N.; Kreuger, R.; Ligtvoet, C.M.; Schotanus, P.; Beekman, F.J.

    2009-01-01

    High-resolution imaging of x-ray and gamma-ray distributions can be achieved with cameras that use charge coupled devices (CCDs) for detecting scintillation light flashes. The energy and interaction position of individual gamma photons can be determined by rapid processing of CCD images of

  13. Automatically assessing properties of dynamic cameras for camera selection and rapid deployment of video content analysis tasks in large-scale ad-hoc networks

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; van Rest, Jeroen H. C.; ten Hove, Johan-Martijn; ter Haar, Frank B.; Burghouts, Gertjan J.

    2017-10-01

    Video analytics is essential for managing large quantities of raw data that are produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene and to changes in the optical chain, so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' developing needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing metadata describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene such as lighting conditions or measures of scene complexity (e.g. number of people). A second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. A third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. In order to support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large-scale ad-hoc networks.
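    As a rough illustration of the first part of this solution, a register entry might bundle optical-chain and scene parameters per camera as below; the field names are assumptions, not the authors' schema.

```python
# Hedged sketch of a per-camera register entry for the three-part solution
# described above. Field names are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CameraRegisterEntry:
    camera_id: str
    intrinsics: List[float]          # fx, fy, cx, cy from (auto)calibration
    distortion: List[float]          # lens distortion coefficients
    extrinsics: List[float]          # pose in a shared world frame
    is_ptz: bool = False
    lighting_lux: float = 0.0        # latest lighting estimate
    crowd_density: float = 0.0       # scene-complexity measure, e.g. people per m^2
    last_assessed: str = ""          # timestamp of the most recent assessment
    history: List[str] = field(default_factory=list)  # logged changes for the VSS admin
```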

  14. Design criteria for a high energy Compton Camera and possible application to targeted cancer therapy

    Science.gov (United States)

    Conka Nurdan, T.; Nurdan, K.; Brill, A. B.; Walenta, A. H.

    2015-07-01

    The proposed research focuses on the design criteria for a Compton Camera with high spatial resolution and sensitivity, operating at high gamma energies and its possible application for molecular imaging. This application is mainly on the detection and visualization of the pharmacokinetics of tumor targeting substances specific for particular cancer sites. Expected high resolution (animals with a human tumor xenograft which is one of the first steps in evaluating the potential utility of a candidate gene. The additional benefit of high sensitivity detection will be improved cancer treatment strategies in patients based on the use of specific molecules binding to cancer sites for early detection of tumors and identifying metastasis, monitoring drug delivery and radionuclide therapy for optimum cell killing at the tumor site. This new technology can provide high resolution, high sensitivity imaging of a wide range of gamma energies and will significantly extend the range of radiotracers that can be investigated and used clinically. The small and compact construction of the proposed camera system allows flexible application which will be particularly useful for monitoring residual tumor around the resection site during surgery. It is also envisaged as able to test the performance of new drug/gene-based therapies in vitro and in vivo for tumor targeting efficacy using automatic large scale screening methods.

  15. Location Assisted Vertical Handover Algorithm for QoS Optimization in End-to-End Connections

    DEFF Research Database (Denmark)

    Dam, Martin S.; Christensen, Steffen R.; Mikkelsen, Lars M.

    2012-01-01

    implementation on Android based tablets. The simulations cover a wide range of scenarios for two mobile users in an urban area with ubiquitous cellular coverage, and show that our algorithm leads to increased throughput, with fewer handovers, when considering the end-to-end connection compared to other handover schemes...

  16. Soft x-ray streak cameras

    International Nuclear Information System (INIS)

    Stradling, G.L.

    1988-01-01

    This paper is a discussion of the development and of the current state of the art in picosecond soft x-ray streak camera technology. Accomplishments from a number of institutions are discussed. X-ray streak cameras vary from standard visible streak camera designs in the use of an x-ray transmitting window and an x-ray sensitive photocathode. The spectral sensitivity range of these instruments includes portions of the near UV and extends from the subkilovolt x-ray region to several tens of kilovolts. Attendant challenges encountered in the design and use of x-ray streak cameras include the accommodation of high-voltage and vacuum requirements, as well as manipulation of a photocathode structure which is often fragile. The x-ray transmitting window is generally too fragile to withstand atmospheric pressure, necessitating active vacuum pumping and a vacuum line of sight to the x-ray signal source. Because of the difficulty of manipulating x-ray beams with conventional optics, as is done with visible light, the size of the photocathode sensing area, access to the front of the tube, the ability to insert the streak tube into a vacuum chamber and the capability to trigger the sweep with very short internal delay times are issues uniquely relevant to x-ray streak camera use. The physics of electron imaging may place more stringent limitations on the temporal and spatial resolution obtainable with x-ray photocathodes than with the visible counterpart. Other issues which are common to the entire streak camera community also concern the x-ray streak camera users and manufacturers

  17. Optimum color filters for CCD digital cameras

    Science.gov (United States)

    Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl

    1993-12-01

    As part of the ESPRIT II project No. 2103 (MASCOT) a high performance prototype color CCD still video camera was developed. Intended for professional usage such as in the graphic arts, the camera provides a maximum resolution of 3k X 3k full color pixels. A high colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization which minimized the perceivable color errors as measured in the 1976 CIELUV uniform color space for a set of about 200 carefully selected test colors. The color filters were designed to allow perfect colorimetric reproduction in principle and at the same time with imperceptible color noise and with special attention to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in the redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) seems to be feasible, implying that it is possible with such an optimized color camera to achieve such a high colorimetric performance that the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.
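    The matrixing step described above amounts to fitting a 3x3 camera-RGB-to-XYZ matrix that minimizes perceptual error in CIELUV over the test colors. A simplified sketch of that idea (D65 white point, synthetic stand-in data, Nelder-Mead optimizer), not the MASCOT project's actual filter design or optimizer:

```python
# Hedged sketch: fit a 3x3 matrix mapping linear camera responses to CIE XYZ by
# minimizing the mean CIELUV colour error over a set of test colours.
import numpy as np
from scipy.optimize import minimize

XYZ_WHITE = np.array([95.047, 100.0, 108.883])      # D65 reference white

def xyz_to_luv(xyz):
    X, Y, Z = xyz.T
    def uv(x, y, z):
        denom = x + 15 * y + 3 * z + 1e-12
        return 4 * x / denom, 9 * y / denom
    u, v = uv(X, Y, Z)
    un, vn = uv(*XYZ_WHITE)
    yr = Y / XYZ_WHITE[1]
    L = np.where(yr > (6 / 29) ** 3, 116 * np.cbrt(yr) - 16, (29 / 3) ** 3 * yr)
    return np.stack([L, 13 * L * (u - un), 13 * L * (v - vn)], axis=1)

def mean_delta_e(m_flat, cam_rgb, target_xyz):
    xyz_est = cam_rgb @ m_flat.reshape(3, 3).T
    return np.mean(np.linalg.norm(xyz_to_luv(xyz_est) - xyz_to_luv(target_xyz), axis=1))

# Synthetic stand-ins for ~200 measured test colours and their camera responses.
rng = np.random.default_rng(0)
target_xyz = rng.uniform(5.0, 95.0, size=(200, 3))
true_m = np.array([[0.9, 0.2, 0.0], [0.1, 1.0, 0.05], [0.0, 0.1, 1.1]])
cam_rgb = target_xyz @ np.linalg.inv(true_m).T + rng.normal(0.0, 0.3, (200, 3))

result = minimize(mean_delta_e, np.eye(3).ravel(), args=(cam_rgb, target_xyz),
                  method="Nelder-Mead", options={"maxiter": 20000, "xatol": 1e-6})
M = result.x.reshape(3, 3)   # optimized colour transformation matrix
```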

  18. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

    Directory of Open Access Journals (Sweden)

    Heegwang Kim

    2017-12-01

    Full Text Available Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.

  19. Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor.

    Science.gov (United States)

    Kim, Heegwang; Park, Jinho; Park, Hasil; Paik, Joonki

    2017-12-09

    Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step for an outdoor video analysis system and a high-end smartphone with a dual camera system.
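    The final reconstruction step in both versions of this work follows the standard atmospheric scattering model. Assuming the earlier optical-flow, disparity and transmission-refinement stages have already produced the transmission map and the atmospheric light, that step can be sketched as:

```python
# Hedged sketch of the final reconstruction step, using the standard atmospheric
# scattering model J = (I - A) / max(t, t0) + A. The transmission map `t` and
# airlight `A` are assumed to come from the preceding stages of the algorithm.
import numpy as np

def recover_scene(foggy, transmission, airlight, t_min=0.1):
    """foggy: HxWx3 image in [0,1]; transmission: HxW map in (0,1]; airlight: length-3."""
    t = np.clip(transmission, t_min, 1.0)[..., None]
    restored = (foggy - airlight) / t + airlight
    return np.clip(restored, 0.0, 1.0)
```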

  20. Miniature gamma-ray camera for tumor localization

    International Nuclear Information System (INIS)

    Lund, J.C.; Olsen, R.W.; James, R.B.; Cross, E.

    1997-08-01

    The overall goal of this LDRD project was to develop technology for a miniature gamma-ray camera for use in nuclear medicine. The camera will meet a need of the medical community for an improved means to image radio-pharmaceuticals in the body. In addition, this technology-with only slight modifications-should prove useful in applications requiring the monitoring and verification of special nuclear materials (SNMs). Utilization of the good energy resolution of mercuric iodide and cadmium zinc telluride detectors provides a means for rejecting scattered gamma-rays and improving the isotopic selectivity in gamma-ray images. The first year of this project involved fabrication and testing of a monolithic mercuric iodide and cadmium zinc telluride detector arrays and appropriate collimators/apertures. The second year of the program involved integration of the front-end detector module, pulse processing electronics, computer, software, and display

  1. Development of underwater high-definition camera for the confirmation test of core configuration and visual examination of BWR fuel

    International Nuclear Information System (INIS)

    Watanabe, Masato; Tuji, Kenji; Ito, Keisuke

    2010-01-01

    The purpose of this study is to develop an underwater high-definition camera for the confirmation test of core configuration and visual examination of BWR fuels, in order to reduce the time required for these tests and the total cost of purchase and maintenance. A prototype model of the camera was developed and examined under real use conditions in the spent fuel pools at HAMAOKA-2 and 4. The examination showed that the performance of the prototype model equaled or surpassed that of the conventional product except for resistance to radiation. The camera is intended to be used at dose rates below about 10 Gy/h. (author)

  2. Location-Based Augmented Reality for Mobile Learning: Algorithm, System, and Implementation

    Science.gov (United States)

    Tan, Qing; Chang, William; Kinshuk

    2015-01-01

    AR technology can be considered as mainly consisting of two aspects: identification of a real-world object and display of computer-generated digital contents related to the identified real-world object. The technical challenge of mobile AR is to identify the real-world object that the mobile device's camera is aimed at. In this paper, we will present a…

  3. Poor Man's Virtual Camera: Real-Time Simultaneous Matting and Camera Pose Estimation.

    Science.gov (United States)

    Szentandrasi, Istvan; Dubska, Marketa; Zacharias, Michal; Herout, Adam

    2016-03-18

    Today's film and advertisement production heavily uses computer graphics combined with living actors by chromakeying. The matchmoving process typically takes a considerable manual effort. Semi-automatic matchmoving tools exist as well, but they still work offline and require manual check-up and correction. In this article, we propose an instant matchmoving solution for green screen. It uses a recent technique of planar uniform marker fields. Our technique can be used in indie and professional filmmaking as a cheap and ultramobile virtual camera, and for shot prototyping and storyboard creation. The matchmoving technique based on marker fields of shades of green is very computationally efficient: we developed and present in the article a mobile application running at 33 FPS. Our technique is thus available to anyone with a smartphone at low cost and with easy setup, opening space for new levels of filmmakers' creative expression.
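    Once marker-field corners have been detected and matched to their known positions on the green-screen plane, camera pose estimation reduces to a planar PnP problem. The sketch below illustrates that step with OpenCV; it is not the authors' marker-field detector, and the point values and intrinsics are invented for illustration.

```python
# Hedged sketch of the camera-pose step implied above: recover the camera pose
# from four detected marker corners with known positions on the screen plane.
import cv2
import numpy as np

object_pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                       [0.1, 0.1, 0.0], [0.0, 0.1, 0.0]], dtype=np.float32)  # metres on the plane
image_pts = np.array([[412.0, 305.0], [498.0, 301.0],
                      [503.0, 388.0], [409.0, 392.0]], dtype=np.float32)      # detected corners (px)
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)               # camera rotation; tvec is the translation
camera_position = (-R.T @ tvec).ravel()  # camera centre in marker/world coordinates
print(ok, camera_position)
```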

  4. High performance gel imaging with a commercial single lens reflex camera

    Science.gov (United States)

    Slobodan, J.; Corbett, R.; Wye, N.; Schein, J. E.; Marra, M. A.; Coope, R. J. N.

    2011-03-01

    A high performance gel imaging system was constructed using a digital single lens reflex camera with epi-illumination to image 19 × 23 cm agarose gels with up to 10,000 DNA bands each. It was found to give equivalent performance to a laser scanner in this high throughput DNA fingerprinting application using the fluorophore SYBR Green®. The specificity and sensitivity of the imager and scanner were within 1% using the same band identification software. Low and high cost color filters were also compared and it was found that with care, good results could be obtained with inexpensive dyed acrylic filters in combination with more costly dielectric interference filters, but that very poor combinations were also possible. Methods for determining resolution, dynamic range, and optical efficiency for imagers are also proposed to facilitate comparison between systems.

  5. Nuclear Safety in A Post-Fukushima ERDA: Moving Forward with smart Mobile Technology

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J. H. [PHILOSOPHIA, Seoul (Korea, Republic of); Suh, K. Y. [Seoul National Univ., Seoul (Korea, Republic of)

    2012-03-15

    The so-called smart mobile technology refers to the know-how that is being applied to smart phones and smart pads such as the iPad and the Galaxy Tab, just to name a few. According to Gartner, 280 million smart phones were purchased in 2010, and another 500 million will be sold in 2013. For the smart pad, the number was 7 million in 2010 and over 30 million pads shall be picked up by buyers. Smart devices are charging individuals, corporations and societies in the world with '3a', meaning real-time, infinite reach of information, and communication beating the space limitation. Smart devices are tethered to the Global Positioning System (GPS), a high-resolution camera, a touch-sensor Graphic User Interface (GUI), a gyro sensor for inclination information, a motion sensor, voice recognition, face recognition, the cloud, and so forth. These technologies bring about the remote office, so that people not only work in their offices but also communicate with concurrent information through Social Networking Services (SNS), build their social relationships, and enjoy their free time with various entertainment mobile applications. These changes signify the strongest information power since computing history began. Smart mobile technology can also give a boost to the congested nuclear power industry after the recent Fukushima Daiichi nuclear power plants (NPPs) accident. First, mobile office and cloud technology will provide the greatest variety of ways to let us manage and access the desired information anytime and anywhere. The display panel and camera built into the smart mobile device can make augmented reality (AR) possible for the nuclear power industry. For example, smart mobile devices can be utilized to support the product assembly process in manufacturing companies relevant to NPPs. Compared to previous assembly work, which involved coming and going to find the tools, one can accomplish the assembly process without wasting time, consulting the manual at the same time on the spot

  6. Nuclear Safety in A Post-Fukushima ERDA: Moving Forward with smart Mobile Technology

    International Nuclear Information System (INIS)

    Choi, J. H.; Suh, K. Y.

    2012-01-01

    The so-called smart mobile technology refers to the know-how that is being applied to smart phones and smart pads such as the iPad and the Galaxy Tab, just to name a few. According to Gartner, 280 million smart phones were purchased in 2010, and another 500 million will be sold in 2013. For the smart pad, the number was 7 million in 2010 and over 30 million pads shall be picked up by buyers. Smart devices are charging individuals, corporations and societies in the world with '3a', meaning real-time, infinite reach of information, and communication beating the space limitation. Smart devices are tethered to the Global Positioning System (GPS), a high-resolution camera, a touch-sensor Graphic User Interface (GUI), a gyro sensor for inclination information, a motion sensor, voice recognition, face recognition, the cloud, and so forth. These technologies bring about the remote office, so that people not only work in their offices but also communicate with concurrent information through Social Networking Services (SNS), build their social relationships, and enjoy their free time with various entertainment mobile applications. These changes signify the strongest information power since computing history began. Smart mobile technology can also give a boost to the congested nuclear power industry after the recent Fukushima Daiichi nuclear power plants (NPPs) accident. First, mobile office and cloud technology will provide the greatest variety of ways to let us manage and access the desired information anytime and anywhere. The display panel and camera built into the smart mobile device can make augmented reality (AR) possible for the nuclear power industry. For example, smart mobile devices can be utilized to support the product assembly process in manufacturing companies relevant to NPPs. Compared to previous assembly work, which involved coming and going to find the tools, one can accomplish the assembly process without wasting time, consulting the manual at the same time on the spot. Experienced

  7. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap

    Directory of Open Access Journals (Sweden)

    Khalil M. Ahmad Yousef

    2017-10-01

    Full Text Available Extrinsic calibration of camera and 2D laser range finder (lidar) sensors is crucial in sensor data fusion applications; for example SLAM algorithms used in mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera-lidar sensors do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot–world hand–eye calibration (RWHE) problem, proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX = ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about geometric structure in the calibration environment. The reliability and accuracy of the proposed approach is compared to a state-of-the-art method in extrinsic 2D lidar to camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12°, respectively.

  8. Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap.

    Science.gov (United States)

    Ahmad Yousef, Khalil M; Mohd, Bassam J; Al-Widyan, Khalid; Hayajneh, Thaier

    2017-10-14

    Extrinsic calibration of camera and 2D laser range finder (lidar) sensors is crucial in sensor data fusion applications; for example SLAM algorithms used in mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera-lidar sensors do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot-world hand-eye calibration (RWHE) problem, proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX = ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about geometric structure in the calibration environment. The reliability and accuracy of the proposed approach is compared to a state-of-the-art method in extrinsic 2D lidar to camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12°, respectively.
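    For readers who want to experiment with the AX = ZB formulation used in both records above, recent OpenCV 4.x releases expose a robot-world/hand-eye solver for exactly this form. A minimal wrapper is sketched below; the pose lists must be supplied from real target observations and platform odometry, and the mapping of the returned matrices onto the paper's X and Z is an interpretation, not taken from the source.

```python
# Hedged sketch of solving the AX = ZB (robot-world / hand-eye) formulation.
# Availability of calibrateRobotWorldHandEye depends on the OpenCV version.
import cv2

def solve_ax_zb(R_world2cam, t_world2cam, R_base2gripper, t_base2gripper):
    """Each argument is a list of per-station 3x3 rotations / 3x1 translations.

    Returns (R_base2world, t_base2world, R_gripper2cam, t_gripper2cam), i.e. the
    two unknown transformations corresponding to the calibration matrices."""
    return cv2.calibrateRobotWorldHandEye(
        R_world2cam, t_world2cam, R_base2gripper, t_base2gripper)
```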

  9. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  10. Receptor for advanced glycation end products and its ligand high-mobility group box-1 mediate allergic airway sensitization and airway inflammation.

    Science.gov (United States)

    Ullah, Md Ashik; Loh, Zhixuan; Gan, Wan Jun; Zhang, Vivian; Yang, Huan; Li, Jian Hua; Yamamoto, Yasuhiko; Schmidt, Ann Marie; Armour, Carol L; Hughes, J Margaret; Phipps, Simon; Sukkar, Maria B

    2014-08-01

    The receptor for advanced glycation end products (RAGE) shares common ligands and signaling pathways with TLR4, a key mediator of house dust mite (Dermatophagoides pteronyssinus) (HDM) sensitization. We hypothesized that RAGE and its ligand high-mobility group box-1 (HMGB1) cooperate with TLR4 to mediate HDM sensitization. To determine the requirement for HMGB1 and RAGE, and their relationship with TLR4, in airway sensitization. TLR4(-/-), RAGE(-/-), and RAGE-TLR4(-/-) mice were intranasally exposed to HDM or cockroach (Blatella germanica) extracts, and features of allergic inflammation were measured during the sensitization or challenge phase. Anti-HMGB1 antibody and the IL-1 receptor antagonist Anakinra were used to inhibit HMGB1 and the IL-1 receptor, respectively. The magnitude of allergic airway inflammation in response to either HDM or cockroach sensitization and/or challenge was significantly reduced in the absence of RAGE but not further diminished in the absence of both RAGE and TLR4. HDM sensitization induced the release of HMGB1 from the airway epithelium in a biphasic manner, which corresponded to the sequential activation of TLR4 then RAGE. Release of HMGB1 in response to cockroach sensitization also was RAGE dependent. Significantly, HMGB1 release occurred downstream of TLR4-induced IL-1α, and upstream of IL-25 and IL-33 production. Adoptive transfer of HDM-pulsed RAGE(+/+)dendritic cells to RAGE(-/-) mice recapitulated the allergic responses after HDM challenge. Immunoneutralization of HMGB1 attenuated HDM-induced allergic airway inflammation. The HMGB1-RAGE axis mediates allergic airway sensitization and airway inflammation. Activation of this axis in response to different allergens acts to amplify the allergic inflammatory response, which exposes it as an attractive target for therapeutic intervention. Copyright © 2014 American Academy of Allergy, Asthma & Immunology. Published by Mosby, Inc. All rights reserved.

  11. Radiation Dose-Rate Extraction from the Camera Image of Quince 2 Robot System using Optical Character Recognition

    International Nuclear Information System (INIS)

    Cho, Jai Wan; Jeong, Kyung Min

    2012-01-01

    In the case of the Japanese Quince 2 robot system, 7 CCD/CMOS cameras were used. Two CCD cameras of the Quince robot are used for forward and backward monitoring of the surroundings during navigation, and two CCD (or CMOS) cameras are used for monitoring the status of front-end and back-end motion mechanics such as flippers and crawlers. A CCD camera with wide field-of-view optics is used for monitoring the status of the communication (VDSL) cable reel, and another two CCD cameras are assigned to reading the indications of the radiation dosimeter and the instrument. The Quince 2 robot measured radiation on the unit 2 reactor building refueling floor of the Fukushima nuclear power plant. The CCD camera with the wide field-of-view (fisheye) lens reads the indicator of the dosimeter loaded on the Quince 2 robot, which was sent to investigate the situation on the unit 2 reactor building refueling floor. The camera image carrying the gamma-ray dose-rate information is transmitted to the remote control site via the VDSL communication line, where the radiation level on the refueling floor can be perceived by monitoring the camera image. To build a radiation profile of the surveyed refueling floor, the gamma-ray dose-rate information in the image should be converted to a numerical value. In this paper, we extract the gamma-ray dose-rate value on the unit 2 reactor building refueling floor using an optical character recognition method.

  12. Radiation Dose-Rate Extraction from the Camera Image of Quince 2 Robot System using Optical Character Recognition

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Jeong, Kyung Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    In the case of the Japanese Quince 2 robot system, 7 CCD/CMOS cameras were used. Two CCD cameras of the Quince robot are used for forward and backward monitoring of the surroundings during navigation, and two CCD (or CMOS) cameras are used for monitoring the status of front-end and back-end motion mechanics such as flippers and crawlers. A CCD camera with wide field-of-view optics is used for monitoring the status of the communication (VDSL) cable reel, and another two CCD cameras are assigned to reading the indications of the radiation dosimeter and the instrument. The Quince 2 robot measured radiation on the unit 2 reactor building refueling floor of the Fukushima nuclear power plant. The CCD camera with the wide field-of-view (fisheye) lens reads the indicator of the dosimeter loaded on the Quince 2 robot, which was sent to investigate the situation on the unit 2 reactor building refueling floor. The camera image carrying the gamma-ray dose-rate information is transmitted to the remote control site via the VDSL communication line, where the radiation level on the refueling floor can be perceived by monitoring the camera image. To build a radiation profile of the surveyed refueling floor, the gamma-ray dose-rate information in the image should be converted to a numerical value. In this paper, we extract the gamma-ray dose-rate value on the unit 2 reactor building refueling floor using an optical character recognition method.
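    The two records above describe reading a numeric dose-rate indication out of a camera frame with optical character recognition, but give no implementation details. The following is a minimal illustrative sketch of such a pipeline, assuming OpenCV and pytesseract; the crop coordinates, preprocessing steps and file name are invented for illustration and are not the authors' implementation.

```python
import re

import cv2
import pytesseract


def read_dose_rate(frame_path, roi=(100, 80, 320, 140)):
    """Return the dose-rate value shown on a dosimeter display, or None."""
    frame = cv2.imread(frame_path)
    x, y, w, h = roi                      # hypothetical display location in the frame
    display = frame[y:y + h, x:x + w]

    # Simple preprocessing: grayscale, upscale and binarise to help the OCR engine.
    gray = cv2.cvtColor(display, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_CUBIC)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Restrict Tesseract to digits and a decimal point (seven-segment-style readouts).
    text = pytesseract.image_to_string(
        binary, config="--psm 7 -c tessedit_char_whitelist=0123456789.")
    match = re.search(r"\d+(\.\d+)?", text)
    return float(match.group()) if match else None


if __name__ == "__main__":
    value = read_dose_rate("quince2_frame.png")   # hypothetical file name
    print("dose rate reading:", value)
```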

  13. An array of virtual Frisch-grid CdZnTe detectors and a front-end application-specific integrated circuit for large-area position-sensitive gamma-ray cameras

    Energy Technology Data Exchange (ETDEWEB)

    Bolotnikov, A. E., E-mail: bolotnik@bnl.gov; Ackley, K.; Camarda, G. S.; Cherches, C.; Cui, Y.; De Geronimo, G.; Fried, J.; Hossain, A.; Mahler, G.; Maritato, M.; Roy, U.; Salwen, C.; Vernon, E.; Yang, G.; James, R. B. [Brookhaven National Laboratory, Upton, New York 11793 (United States); Hodges, D. [University of Texas at El Paso, El Paso, Texas 79968 (United States); Lee, W. [Korea University, Seoul 136-855 (Korea, Republic of); Petryk, M. [SUNY Binghamton, Vestal, New York 13902 (United States)

    2015-07-15

    We developed a robust and low-cost array of virtual Frisch-grid CdZnTe detectors coupled to a front-end readout application-specific integrated circuit (ASIC) for spectroscopy and imaging of gamma rays. The array operates as a self-reliant detector module. It comprises 36 close-packed 6 × 6 × 15 mm³ detectors grouped into 3 × 3 sub-arrays of 2 × 2 detectors with common cathodes. The front-end analog ASIC accommodates up to 36 anode and 9 cathode inputs. Several detector modules can be integrated into a single- or multi-layer unit operating as a Compton or a coded-aperture camera. We present the results from testing two fully assembled modules and readout electronics. Further enhancement of the arrays' performance and reduction of their cost are possible by using position-sensitive virtual Frisch-grid detectors, which allow for accurate correction of the response for material non-uniformities caused by crystal defects.

  14. Coaxial fundus camera for ophthalmology

    Science.gov (United States)

    de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.

    2015-09-01

    A fundus camera for ophthalmology is a high-definition device which needs to meet low-light illumination of the human retina, high resolution in the retina and reflection-free imaging. Those constraints make its optical design very sophisticated, but the most difficult to comply with are the reflection-free illumination and the final alignment, due to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and poor alignment makes the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic fundus camera. The illumination is performed by a LED ring, coaxial with the optical system and composed of IR or visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator and CCD lens are coaxial, making the final alignment easy to perform. The CCD + capture lens module is a CCTV camera with autofocus and zoom built in, added to a 175 mm focal length doublet corrected for infinity, making the system easy to operate and very compact.

  15. A secure wireless mobile-to-server link

    Science.gov (United States)

    Kumar, Abhinav; Akopian, David; Agaian, Sos; Creutzburg, Reiner

    2009-02-01

    Modern mobile devices are some of the most technologically advanced devices that people use on a daily basis, and current trends indicate continuous growth in mobile phone applications. Nowadays phones are equipped with cameras that can capture still images and video, and with software that can read, convert, manipulate, communicate and save multimedia in multiple formats. This tremendous progress has increased the volume of communicated sensitive information, which should be protected against unauthorized access. This paper discusses two general approaches for data protection, steganography and cryptography, and demonstrates how to integrate such algorithms with a mobile-to-server link being used by many applications.

  16. Mobile device-based optical instruments for agriculture

    Science.gov (United States)

    Sumriddetchkajorn, Sarun

    2013-05-01

    Realizing that a current smart mobile device such as a cell phone or a tablet can be considered a pocket-size computer with a built-in digital camera, this paper reviews and demonstrates how a mobile device can function as a portable optical instrument for agricultural applications. The paper highlights several mobile device-based optical instruments designed for searching for small pests, measuring illumination level, analyzing the spectrum of light, identifying nitrogen status in the rice field, estimating chlorine in water, and determining the ripeness level of fruit. They are suitable for individual use as well as for small and medium enterprises.

  17. Upgrading of analogue gamma cameras with PC based computer system

    International Nuclear Information System (INIS)

    Fidler, V.; Prepadnik, M.

    2002-01-01

    Full text: Dedicated nuclear medicine computers for acquisition and processing of images from analogue gamma cameras in developing countries are in many cases faulty and technologically obsolete. The aim of the upgrading project of the International Atomic Energy Agency (IAEA) was to support the development of a PC based computer system which would cost $5,000 in total. Several research institutions from different countries (China, Cuba, India and Slovenia) were financially supported in this development. The basic demands for the system were: one acquisition card on an ISA bus, image resolution up to 256x256, SVGA graphics, low count loss at high count rates, standard acquisition and clinical protocols incorporated in PIP (Portable Image Processing), on-line energy and uniformity correction, graphic printing and networking. The most functionally stable acquisition system, tested in several international workshops and university clinics, was the Slovenian one, with a complete set of acquisition and clinical protocols, transfer of scintigraphic data from the acquisition card to the PC through a port, count loss of less than 1% at a count rate of 120 kc/s, improvement of the integral uniformity index by a factor of 3-5, and reporting, networking and archiving solutions for simple MS networks or server oriented network systems (NT server, etc). More than 300 gamma cameras in 52 countries were digitized and put into routine work. The project of upgrading the analogue gamma cameras strongly promoted nuclear medicine in the developing countries by replacing the old computer systems, improving the technological knowledge of end users through workshops and training courses, and lowering the maintenance cost of the departments. (author)

  18. Laser scanning camera inspects hazardous area

    International Nuclear Information System (INIS)

    Fryatt, A.; Miprode, C.

    1985-01-01

    Main operational characteristics of a new laser scanning camera are presented. The camera is intended primarily for low-level, high-resolution viewing inside nuclear reactors. It uses a He-Ne laser beam raster; by detecting the reflected light by means of a photomultiplier, the subject under observation can be reconstructed in an electronic video store and reviewed on a conventional monitor screen.

  19. Remote-controlled vision-guided mobile robot system

    Science.gov (United States)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle, while the vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high-speed tracking device, which communicates the X, Y coordinates of blobs along the lane markers to the computer. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track, with positive results showing that at five mph the vehicle can follow a line and at the same time avoid obstacles.

  20. Measuring ionizing radiation with a mobile device

    Science.gov (United States)

    Michelsburg, Matthias; Fehrenbach, Thomas; Puente León, Fernando

    2012-02-01

    In cases of nuclear disasters it is desirable to know one's personal exposure to radioactivity and the related health risk. Usually, Geiger-Mueller tubes are used to assess the situation, but equipping everyone with such a device in a short period of time is very expensive. We propose a method to detect ionizing radiation using the integrated camera of a mobile consumer device, e.g., a cell phone. In emergency cases, millions of existing mobile devices could then be used to monitor the exposure of their owners. In combination with internet access and GPS, measured data can be collected by a central server to get an overview of the situation. During a measurement, the CMOS sensor of a mobile device is shielded from surrounding light by an attachment in front of the lens or an internal shutter. The high-energy radiation produces free electrons on the sensor chip, resulting in an image signal. Image analysis on the mobile device separates signal components due to incident ionizing radiation from the sensor noise. With radioactive sources present, significant increases in the number of detected pixels can be seen. Furthermore, the cell phone application can make a preliminary estimate of the collected dose of an individual and the associated health risks.
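    A minimal sketch of the covered-lens analysis idea described above: pixels brightened by incident radiation are separated from sensor noise with a simple robust threshold and counted. The threshold rule, file name and use of OpenCV/NumPy are assumptions made for illustration, not the authors' algorithm.

```python
import numpy as np
import cv2


def count_radiation_hits(dark_frame_path, sigma=6.0):
    """Count pixels that stand out from the noise floor of a covered-lens frame."""
    frame = cv2.imread(dark_frame_path, cv2.IMREAD_GRAYSCALE).astype(np.float32)

    # Estimate the noise floor from the frame itself (median and a robust spread).
    median = np.median(frame)
    mad = np.median(np.abs(frame - median)) + 1e-6
    threshold = median + sigma * 1.4826 * mad     # sigma-equivalent for Gaussian noise

    hits = frame > threshold
    return int(hits.sum()), float(threshold)


if __name__ == "__main__":
    n_hits, thr = count_radiation_hits("covered_lens_frame.png")   # hypothetical file
    print(f"{n_hits} pixels above threshold {thr:.1f}")
```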

  1. Deep learning enhanced mobile-phone microscopy

    KAUST Repository

    Rivenson, Yair

    2017-12-12

    Mobile-phones have facilitated the creation of field-portable, cost-effective imaging and sensing technologies that approach laboratory-grade instrument performance. However, the optical imaging interfaces of mobile-phones are not designed for microscopy and produce spatial and spectral distortions in imaging microscopic specimens. Here, we report on the use of deep learning to correct such distortions introduced by mobile-phone-based microscopes, facilitating the production of high-resolution, denoised and colour-corrected images, matching the performance of benchtop microscopes with high-end objective lenses, also extending their limited depth-of-field. After training a convolutional neural network, we successfully imaged various samples, including blood smears, histopathology tissue sections, and parasites, where the recorded images were highly compressed to ease storage and transmission for telemedicine applications. This method is applicable to other low-cost, aberrated imaging systems, and could offer alternatives for costly and bulky microscopes, while also providing a framework for standardization of optical images for clinical and biomedical applications.

  2. Tunable Design for LTE Mobile-Phones

    DEFF Research Database (Denmark)

    Barrio, Samantha Caporal Del; Bahramzy, Pevand; Svendsen, Simon

    2014-01-01

    Antenna volume has become a critical parameter in mobile phone antenna design, as broader bandwidths are required for high connectivity between users. Shrinking the antenna size affects its efficiency, if one does not sacrifice bandwidth. This paper proposes an architecture to address the need...... for small and wide-band antennas. The study focuses on the low-frequencies (700 MHz - 960 MHz) in order to address a tough scenario for small platforms. A tunable design of the front-end and the antennas of the mobile phone is proposed and investigated. Operation is achieved on all low...

  3. Determining Sala mango qualities with the use of RGB images captured by a mobile phone camera

    Science.gov (United States)

    Yahaya, Ommi Kalsom Mardziah; Jafri, Mohd Zubir Mat; Aziz, Azlan Abdul; Omar, Ahmad Fairuz

    2015-04-01

    Sala mango (Mangifera indica) is one of Malaysia's most popular tropical fruits and is widely marketed within the country. The degree of ripeness of mangoes has conventionally been evaluated manually on the basis of color parameters; here, a simple non-destructive technique using the Samsung Galaxy Note 1 mobile phone camera is introduced to replace the destructive technique. In this research, color parameters in terms of RGB values acquired using the ENVI software system were used to estimate Sala mango quality parameters. The features of the mango were extracted from the acquired images and then used to classify fruit skin color, which relates to the stages of ripening. A multivariate analysis method, multiple linear regression, was employed with the purpose of using RGB color parameters to estimate pH, soluble solids content (SSC), and firmness. The relationship between these quality parameters of Sala mango and the mean pixel values in the RGB system was analyzed. Findings show that pH yields the highest accuracy, with a correlation coefficient R = 0.913 and root mean square error RMSE = 0.166 pH. Meanwhile, firmness has R = 0.875 and RMSE = 1.392 kgf, whereas soluble solids content has the lowest accuracy, with R = 0.814 and RMSE = 1.218 °Brix. Therefore, this non-invasive method can be used to determine the quality attributes of mangoes.
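    The regression step described above can be illustrated with a short sketch: fit a multiple linear regression from mean R, G, B values to one quality parameter (pH here) and report R and RMSE as in the abstract. The numbers below are placeholder data, not the study's measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Hypothetical training data: one row of mean RGB values per mango, one pH per mango.
rgb_means = np.array([
    [182.1, 140.3, 72.5],
    [170.4, 151.2, 80.1],
    [150.9, 160.7, 95.3],
    [132.2, 168.4, 101.8],
    [120.5, 172.9, 110.2],
])
ph = np.array([4.9, 4.6, 4.3, 4.0, 3.8])

model = LinearRegression().fit(rgb_means, ph)      # pH ~ b0 + b1*R + b2*G + b3*B
pred = model.predict(rgb_means)

r = np.corrcoef(ph, pred)[0, 1]                    # correlation coefficient R
rmse = np.sqrt(mean_squared_error(ph, pred))       # root mean square error
print(f"R = {r:.3f}, RMSE = {rmse:.3f} pH")
```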

  4. Field-testing of a cost-effective mobile-phone based microscope for screening of Schistosoma haematobium infection (Conference Presentation)

    Science.gov (United States)

    Ceylan Koydemir, Hatice; Bogoch, Isaac I.; Tseng, Derek; Ephraim, Richard K. D.; Duah, Evans; Tee, Joseph; Andrews, Jason R.; Ozcan, Aydogan

    2016-03-01

    Schistosomiasis is a parasitic and neglected tropical disease. In the mobile-phone microscope reported here, a custom-designed 3D-printed opto-mechanical attachment (~150 g) is placed in contact with the smartphone camera lens, creating an imaging system with a half-pitch resolution of ~0.87 µm. This unit includes an external lens (also taken from a mobile-phone camera), a sample tray, a z-stage to adjust the focus, two light-emitting diodes (LEDs) and two diffusers for uniform illumination of the sample. In our field-testing, 60 urine samples collected from children were used, where the prevalence of the infection was 72.9%. After concentration of the sample by centrifugation, the sediment was placed on a glass slide and S. haematobium eggs were first identified and quantified using conventional benchtop microscopy by an expert diagnostician; a second expert, blinded to these results, then determined the presence or absence of eggs using our mobile-phone microscope. Compared to conventional microscopy, our mobile-phone microscope had a diagnostic sensitivity of 72.1%, specificity of 100%, positive predictive value of 100%, and negative predictive value of 57.1%. Furthermore, our mobile-phone platform demonstrated a sensitivity of 65.7% for low-intensity infections (≤50 eggs/10 mL urine) and 100% for high-intensity infections. This mobile-phone microscope may play an important role in the diagnosis of schistosomiasis and various other global health challenges.
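    For reference, the reported figures relate to a standard 2x2 comparison against the benchtop result; the small helper below shows how sensitivity, specificity, PPV and NPV are computed from such a table. The counts used are illustrative placeholders, not the study's tallies.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Diagnostic performance measures from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }


if __name__ == "__main__":
    # Illustrative counts only (mobile-phone microscope vs. benchtop reference).
    print(diagnostic_metrics(tp=31, fp=0, fn=12, tn=16))
```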

  5. Development of a mobile manipulator for nuclear plant disaster, HELIOS X. Mechanical design and basic experiments

    International Nuclear Information System (INIS)

    Noda, Satsuya; Hirose, Shigeo; Ueda, Koji; Nakano, Hisami; Horigome, Atsushi; Endo, Gen

    2016-01-01

    In places that are difficult for human workers to enter, such as a nuclear power plant disaster area, robots are required to scout instead of human workers. In this paper, we present a mobile manipulator, HELIOS X, for a nuclear plant decommissioning task. Firstly, we address demands and specifications for the robot, considering the mission of reconnaissance. Then we outline the system of the robot, mainly focusing on the following mechanisms: 'Crank Wheel', 'Main Arm', 'Sphere Link Wrist', 'Camera Arm', 'Control System' and 'System architecture'. In particular, we installed a 3-degree-of-freedom 'Camera Arm' on the 'Main Arm' in order to improve the functionality of the remote control system. This enables the operator to monitor both the gripper and an overall view of the robot. The 'Camera Arm' helps the operator to recognize the distance from an object to the gripper, because the operator can interactively move the viewpoint of the camera and monitor from another camera angle without changing the gripper's position. We confirmed the basic functionality of the mobile base, 'Main Arm' and 'Camera Arm' through hardware experiments. We also demonstrated that HELIOS X could pass through a pull-to-open door with a substantial closing force while the operator watched the camera view only. (author)

  6. Adaptive end-to-end optimization of mobile video streaming using QoS negotiation

    NARCIS (Netherlands)

    Taal, Jacco R.; Langendoen, Koen; van der Schaaf, Arjen; van Dijk, H.W.; Lagendijk, R. (Inald) L.

    Video streaming over wireless links is a non-trivial problem due to the large and frequent changes in the quality of the underlying radio channel combined with latency constraints. We believe that every layer in a mobile system must be prepared to adapt its behavior to its environment. Thus layers

  7. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    Science.gov (United States)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.

  8. SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems

    Science.gov (United States)

    Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.

    2015-02-01

    Advanced Driver Assistance Systems (ADAS) are the most advanced technologies to fight road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field-of-view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame-rate). To this purpose, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity with respect to CCD/CMOS rangefinders, and have inherently better time resolution, higher accuracy and better linearity. Moreover, iTOF requires neither high-bandwidth electronics nor short-pulsed lasers, hence allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial exposure effects, which otherwise would hinder the detection of fast moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens, with a 40°×20° field-of-view. The whole system is very rugged and compact and a perfect solution for a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm and less than 1 W consumption. To provide the required optical power (1.5 W, eye safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source, based on five laser driver cards, with three 808 nm lasers each. We present the full characterization of

  9. Low power multi-camera system and algorithms for automated threat detection

    Science.gov (United States)

    Huber, David J.; Khosla, Deepak; Chen, Yang; Van Buer, Darrel J.; Martin, Kevin

    2013-05-01

    A key to any robust automated surveillance system is continuous, wide field-of-view sensor coverage and high-accuracy target detection algorithms. Newer systems typically employ an array of multiple fixed cameras that provide individual data streams, each of which is managed by its own processor. This array can continuously capture the entire field of view, but collecting all of the data and running the back-end detection algorithm consumes additional power and increases the size, weight, and power (SWaP) of the package. This is often unacceptable, as many potential surveillance applications have strict system SWaP requirements. This paper describes a wide field-of-view video system that employs multiple fixed cameras and exhibits low SWaP without compromising the target detection rate. We cycle through the sensors, fetch a fixed number of frames, and process them through a modified target detection algorithm. During this time, the other sensors remain powered down, which reduces the required hardware and power consumption of the system. We show that the resulting gaps in coverage and irregular frame rate do not affect the detection accuracy of the underlying algorithms. This reduces the power of an N-camera system by up to approximately N-fold compared to baseline normal operation. This work was applied to Phase 2 of the DARPA Cognitive Technology Threat Warning System (CT2WS) program and used during field testing.
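    A toy sketch of the duty-cycling scheme described above: one camera at a time is powered, a fixed burst of frames is fetched and passed to the detector, then the camera is powered down and the next one is selected. The Camera and detect_targets placeholders are invented for illustration and do not correspond to any real driver or to the CT2WS detection algorithm.

```python
import itertools


class Camera:
    """Stand-in for a real sensor driver; the methods are placeholders."""

    def __init__(self, cam_id):
        self.cam_id = cam_id

    def power_on(self):
        pass  # would enable the sensor's power rail / wake the device

    def power_off(self):
        pass  # would disable the sensor's power rail

    def grab(self):
        return f"frame-from-cam-{self.cam_id}"  # would return real image data


def detect_targets(frames):
    """Placeholder for the modified target detection algorithm mentioned above."""
    return []


def run_surveillance(cameras, frames_per_burst=8, bursts=100):
    """Round-robin duty cycling: only the active camera draws full power."""
    for cam in itertools.islice(itertools.cycle(cameras), bursts):
        cam.power_on()                                   # other sensors stay powered down
        frames = [cam.grab() for _ in range(frames_per_burst)]
        cam.power_off()
        detections = detect_targets(frames)
        if detections:
            print(f"camera {cam.cam_id}: {len(detections)} detections")


if __name__ == "__main__":
    run_surveillance([Camera(i) for i in range(4)])
```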

  10. IEEE 1394 camera imaging system for Brookhaven's Booster Applications Facility beam diagnostics

    International Nuclear Information System (INIS)

    BROWN, K.A.; FRAK, B.; GASSNER, D.; HOFF, L.; OLSEN, R.H.; SATOGATA, T.; TEPIKIAN, S.

    2002-01-01

    Brookhaven's Booster Applications Facility (BAF) will deliver resonant extracted heavy ion beams from the AGS Booster to short-exposure fixed-target experiments located at the end of the BAF beam line. The facility is designed to deliver a wide range of heavy ion species over a range of intensities from 10³ to over 10⁸ ions/pulse, and over a range of energies from 0.1 to 3.0 GeV/nucleon. With these constraints we have designed instrumentation packages which can deliver the maximum amount of dynamic range at a reasonable cost. Through the use of high-quality optics systems and neutral density light filters we will achieve 4 to 5 orders of magnitude in light collection. By using digital IEEE 1394 camera systems we are able to eliminate the frame-grabber stage in processing and directly transfer data at maximum rates of 400 Mb/s. In this note we give a detailed description of the system design and discuss the parameters used to develop the system specifications. We also discuss the IEEE 1394 camera software interface and the high-level user interface.

  11. Results with the UKIRT infrared camera

    International Nuclear Information System (INIS)

    Mclean, I.S.

    1987-01-01

    Recent advances in focal plane array technology have made an immense impact on infrared astronomy. Results from the commissioning of the first infrared camera on UKIRT (the world's largest IR telescope) are presented. The camera, called IRCAM 1, employs the 62 x 58 InSb DRO array from SBRC in an otherwise general purpose system which is briefly described. Several imaging modes are possible including staring, chopping and a high-speed snapshot mode. Results to be presented include the first true high resolution images at IR wavelengths of the entire Orion nebula

  12. Medium-sized aperture camera for Earth observation

    Science.gov (United States)

    Kim, Eugene D.; Choi, Young-Wan; Kang, Myung-Seok; Kim, Ee-Eul; Yang, Ho-Soon; Rasheed, Ad. Aziz Ad.; Arshad, Ahmad Sabirin

    2017-11-01

    Satrec Initiative and ATSB have been developing a medium-sized aperture camera (MAC) for an earth observation payload on a small satellite. Developed as a push-broom type high-resolution camera, the camera has one panchromatic and four multispectral channels. The panchromatic channel has 2.5 m, and the multispectral channels have 5 m, ground sampling distance at a nominal altitude of 685 km. The 300 mm-aperture Cassegrain telescope contains two aspheric mirrors and two spherical correction lenses. With a philosophy of building a simple and cost-effective camera, the mirrors incorporate no light-weighting, and the linear CCDs are mounted on a single PCB with no beam splitters. MAC is the main payload of RazakSAT, to be launched in 2005. RazakSAT is a 180 kg satellite including MAC, designed to provide high-resolution imagery of 20 km swath width on a near equatorial orbit (NEqO). The mission objective is to demonstrate the capability of a high-resolution remote sensing satellite system on a near equatorial orbit. This paper gives an overview of the MAC and RazakSAT programmes, and presents the current development status of MAC, focusing on key optical aspects of the Qualification Model.

  13. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    Science.gov (United States)

    Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar

    2015-01-01

    Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents a collision free mobile robot navigation based on the fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera, two outputs, which are the left and right velocities of the mobile robot's wheels, and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on the fuzzy logic fusion model and line following robot, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes. PMID:26712766

  14. Sensor Fusion Based Model for Collision Free Mobile Robot Navigation

    Directory of Open Access Journals (Sweden)

    Marwah Almasri

    2015-12-01

    Full Text Available Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents a collision free mobile robot navigation based on the fuzzy logic fusion model. Eight distance sensors and a range finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera, two outputs, which are the left and right velocities of the mobile robot's wheels, and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes the collision avoidance based on the fuzzy logic fusion model and line following robot, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes.
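    To make the rule-based fusion idea concrete, the sketch below reduces the controller to a single front distance input and two rules, using triangular membership functions and weighted-average defuzzification. The breakpoints, speeds and rules are invented; the actual system uses nine inputs and 24 rules as stated above.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)


def wheel_speeds(front_distance_cm):
    near = tri(front_distance_cm, 0, 0, 40)       # obstacle close in front
    far = tri(front_distance_cm, 20, 100, 100)    # path ahead is clear

    # Rule 1: IF front is near THEN turn (reverse left wheel, keep right wheel forward).
    # Rule 2: IF front is far  THEN drive straight at cruise speed.
    rules = [
        (near, (-0.2, 0.6)),   # (firing strength, (left, right) wheel speeds in m/s)
        (far, (0.6, 0.6)),
    ]

    total = sum(w for w, _ in rules) or 1e-9
    left = sum(w * s[0] for w, s in rules) / total    # weighted-average defuzzification
    right = sum(w * s[1] for w, s in rules) / total
    return left, right


if __name__ == "__main__":
    for d in (10, 30, 80):
        print(d, "cm ->", wheel_speeds(d))
```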

  15. Temperature dependence of ballistic mobility in a metamorphic InGaAs/InAlAs high electron mobility transistor

    International Nuclear Information System (INIS)

    Lee, Jongkyong; Gang, Suhyun; Jo, Yongcheol; Kim, Jongmin; Woo, Hyeonseok; Han, Jaeseok; Kim, Hyungsang; Im, Hyunsik

    2014-01-01

    We have investigated the temperature dependence of ballistic mobility in a 100 nm-long InGaAs/InAlAs metamorphic high-electron-mobility transistor designed for millimeter-wavelength RF applications. To extract the temperature dependence of quasi-ballistic mobility, our experiment involves measurements of the effective mobility in the low-bias linear region of the transistor and of the collision-dominated Hall mobility using a gated Hall bar of the same epitaxial structure. The data measured from the experiment are consistent with that of modeled ballistic mobility based on ballistic transport theory. These results advance the understanding of ballistic transport in various transistors with a nano-scale channel length that is comparable to the carrier's mean free path in the channel.

  16. Integrating Gigabit ethernet cameras into EPICS at Diamond light source

    International Nuclear Information System (INIS)

    Cobb, T.

    2012-01-01

    At Diamond Light Source a range of cameras are used to provide images for diagnostic purposes in both the accelerator and photon beamlines. The accelerator and existing beamlines use Point Grey Flea and Flea2 Firewire cameras. We have selected Gigabit Ethernet cameras supporting GigE Vision for our new photon beamlines. GigE Vision is an interface standard for high-speed Ethernet cameras which encourages inter-operability between manufacturers. This paper describes the challenges encountered while integrating GigE Vision cameras from a range of vendors into EPICS. GigE Vision cameras appear to be more reliable than the Firewire cameras, and the simple cabling makes it much easier to move the cameras to different positions. Upcoming power-over-Ethernet versions of the cameras will reduce the number of cables still further.

  17. A Survey on 5G: The Next Generation of Mobile Communication

    OpenAIRE

    Panwar, Nisha; Sharma, Shantanu; Singh, Awadhesh Kumar

    2015-01-01

    The rapidly increasing number of mobile devices, voluminous data, and higher data rate are pushing to rethink the current generation of the cellular mobile communication. The next or fifth generation (5G) cellular networks are expected to meet high-end requirements. The 5G networks are broadly characterized by three unique features: ubiquitous connectivity, extremely low latency, and very high-speed data transfer. The 5G networks would provide novel architectures and technologies beyond state...

  18. Combining local and global optimisation for virtual camera control

    OpenAIRE

    Burelli, Paolo; Yannakakis, Georgios N.; 2010 IEEE Symposium on Computational Intelligence and Games

    2010-01-01

    Controlling a virtual camera in 3D computer games is a complex task. The camera is required to react to dynamically changing environments and produce high quality visual results and smooth animations. This paper proposes an approach that combines local and global search to solve the virtual camera control problem. The automatic camera control problem is described and it is decomposed into sub-problems; then a hierarchical architecture that solves each sub-problem using the most appropriate op...

  19. Streak cameras and their applications

    International Nuclear Information System (INIS)

    Bernet, J.M.; Imhoff, C.

    1987-01-01

    Over the last several years, development of various measurement techniques in the nanosecond and pico-second range has led to increased reliance on streak cameras. This paper will present the main electronic and optoelectronic performances of the Thomson-CSF TSN 506 cameras and their associated devices used to build an automatic image acquisition and processing system (NORMA). A brief survey of the diversity and the spread of the use of high speed electronic cinematography will be illustrated by a few typical applications [fr

  20. Design and tests of a portable mini gamma camera

    International Nuclear Information System (INIS)

    Sanchez, F.; Benlloch, J.M.; Escat, B.; Pavon, N.; Porras, E.; Kadi-Hanifi, D.; Ruiz, J.A.; Mora, F.J.; Sebastia, A.

    2004-01-01

    Design optimization, manufacturing, and tests, both laboratory and clinical, of a portable gamma camera for medical applications are presented. This camera, based on a continuous scintillation crystal and a position-sensitive photomultiplier tube, has an intrinsic spatial resolution of ≅2 mm, an energy resolution of 13% at 140 keV, and linearities of 0.28 mm (absolute) and 0.15 mm (differential), with a useful field of view of 4.6 cm diameter. Our camera can image small organs with high efficiency and so it can address the demand for devices for specific clinical applications like thyroid and sentinel node scintigraphy as well as scintimammography and radio-guided surgery. The main advantages of the gamma camera with respect to those previously reported in the literature are high portability, low cost, and low weight (2 kg), with no significant loss of sensitivity or spatial resolution. All the electronic components are packed inside the mini gamma camera, and no external electronic devices are required. The camera is only connected through the universal serial bus port to a portable personal computer (PC), where specific software allows control of both the camera parameters and the measuring process, displaying the acquired image on the PC in 'real time'. In this article, we present the camera and describe the procedures that have led us to choose its configuration. Laboratory and clinical tests are presented together with the diagnostic capabilities of the gamma camera.

  1. Multimedia information processing in the SWAN mobile networked computing system

    Science.gov (United States)

    Agrawal, Prathima; Hyden, Eoin; Krzyzanowsji, Paul; Srivastava, Mani B.; Trotter, John

    1996-03-01

    Anytime, anywhere wireless access to databases, such as medical and inventory records, can simplify workflow management in a business, and reduce or even eliminate the cost of moving paper documents. Moreover, continual progress in wireless access technology promises to provide per-user bandwidths of the order of a few Mbps, at least in indoor environments. When combined with the emerging high-speed integrated service wired networks, it enables ubiquitous and tetherless access to and processing of multimedia information by mobile users. To leverage this synergy, an indoor wireless network based on room-sized cells and multimedia mobile end-points is being developed at AT&T Bell Laboratories. This research network, called SWAN (Seamless Wireless ATM Networking), allows users carrying multimedia end-points such as PDAs, laptops, and portable multimedia terminals to seamlessly roam while accessing multimedia data streams from the wired backbone network. A distinguishing feature of the SWAN network is its use of end-to-end ATM connectivity, as opposed to the connectionless mobile-IP connectivity used by present-day wireless data LANs. This choice allows the wireless resource in a cell to be intelligently allocated amongst various ATM virtual circuits according to their quality of service requirements. But an efficient implementation of ATM in a wireless environment requires a proper mobile network architecture. In particular, the wireless link and medium-access layers need to be cognizant of the ATM traffic, while the ATM layers need to be cognizant of the mobility enabled by the wireless layers. This paper presents an overview of SWAN's network architecture, briefly discusses the issues in making ATM mobile and wireless, and describes initial multimedia applications for SWAN.

  2. On the accuracy potential of focused plenoptic camera range determination in long distance operation

    Science.gov (United States)

    Sardemann, Hannes; Maas, Hans-Gerd

    2016-04-01

    Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology and computer hardware have boosted their development and led to several commercially available ready-to-use cameras. Beyond their popular option of a posteriori image focusing or total-focus image generation, their basic ability to generate 3D information from single-camera imagery represents a very beneficial option for certain applications. The paper first presents some fundamentals on the design and history of plenoptic cameras and describes depth determination from plenoptic camera image data. It then presents an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close-range applications, we focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors in the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single-point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for the application fields of real-time robotics like autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.

  3. High pH mobile phase effects on silica-based reversed-phase high-performance liquid chromatographic columns

    NARCIS (Netherlands)

    Kirkland, J.J.; Straten, van M.A.; Claessens, H.A.

    1995-01-01

    Aqueous mobile phases above pH 8 often cause premature column failure, limiting the utility of silica-based columns for applications requiring high pH. Previous studies suggest that covalently bound silane ligands are hydrolyzed and removed by high-pH mobile phases. However, we found that the

  4. Ultra high hole mobilities in a pure strained Ge quantum well

    International Nuclear Information System (INIS)

    Mironov, O.A.; Hassan, A.H.A.; Morris, R.J.H.; Dobbie, A.; Uhlarz, M.; Chrastina, D.; Hague, J.P.; Kiatgamolchai, S.; Beanland, R.; Gabani, S.; Berkutov, I.B.; Helm, M.; Drachenko, O.; Myronov, M.; Leadley, D.R.

    2014-01-01

    Hole mobilities at low and room temperature (RT) have been studied for a strained sGe/SiGe heterostructure using standard Van der Pauw resistivity and Hall effect measurements. The ranges of magnetic field and temperature used were −14 T < B < +14 T and 1.5 K < T < 300 K, respectively. Using maximum entropy-mobility spectrum analysis (ME-MSA) and Bryan's algorithm mobility spectrum (BAMS) analysis, a RT two-dimensional hole gas drift mobility of (3.9 ± 0.4) × 10³ cm²/V s was determined for a sheet density (p_s) of 9.8 × 10¹⁰ cm⁻² (by ME-MSA) and (3.9 ± 0.2) × 10³ cm²/V s for a sheet density (p_s) of 5.9 × 10¹⁰ cm⁻² (by BAMS). - Highlights: • Pure strained Ge channel grown by reduced pressure chemical vapor deposition • Maximum entropy-mobility spectrum analysis • Bryan's algorithm mobility spectrum analysis • High room temperature hole drift mobility of (3.9 ± 0.4) × 10³ cm²/V s • Extremely high hole mobility of 1.1 × 10⁶ cm²/V s at 12 K

  5. The KCLBOT: Exploiting RGB-D Sensor Inputs for Navigation Environment Building and Mobile Robot Localization

    Directory of Open Access Journals (Sweden)

    Evangelos Georgiou

    2011-09-01

    Full Text Available This paper presents an alternative approach to implementing a stereo camera configuration for SLAM. The suggested approach implements a simplified method using a single RGB-D camera sensor mounted on a maneuverable non-holonomic mobile robot, the KCLBOT, used for extracting image feature depth information while maneuvering. Using a defined quadratic equation, based on the calibration of the camera, a depth computation model is derived based on the HSV color space map. Using this methodology it is possible to build navigation environment maps and carry out autonomous mobile robot path following and obstacle avoidance. This paper presents a calculation model which enables distance estimation using the RGB-D sensor from a Microsoft .NET Micro Framework device. Experimental results are presented to validate the distance estimation methodology.

  6. 3D for the people: multi-camera motion capture in the field with consumer-grade cameras and open source software

    Directory of Open Access Journals (Sweden)

    Brandon E. Jackson

    2016-09-01

    Full Text Available Ecological, behavioral and biomechanical studies often need to quantify animal movement and behavior in three dimensions. In laboratory studies, a common tool to accomplish these measurements is the use of multiple, calibrated high-speed cameras. Until very recently, the complexity, weight and cost of such cameras have made their deployment in field situations risky; furthermore, such cameras are not affordable to many researchers. Here, we show how inexpensive, consumer-grade cameras can adequately accomplish these measurements both within the laboratory and in the field. Combined with our methods and open source software, the availability of inexpensive, portable and rugged cameras will open up new areas of biological study by providing precise 3D tracking and quantification of animal and human movement to researchers in a wide variety of field and laboratory contexts.

  7. Mobile Phones on Campus

    Institute of Scientific and Technical Information of China (English)

    朴春宝

    2007-01-01

    After entering the 21st century, more and more people in China have mobile phones. At the end of 2002, there were 20 million mobile phone users. By the year 2005 the number had reached 30 million.

  8. High mobility transparent conducting oxides for thin film solar cells

    International Nuclear Information System (INIS)

    Calnan, S.; Tiwari, A.N.

    2010-01-01

    A special class of transparent conducting oxides (TCO) with high mobility of > 65 cm² V⁻¹ s⁻¹ allows film resistivity in the low 10⁻⁴ Ω cm range and a high transparency of > 80% over a wide spectrum, from 300 nm to beyond 1500 nm. This exceptional coincidence of desirable optical and electrical properties provides opportunities to improve the performance of opto-electronic devices and opens possibilities for new applications. Strategies to attain high mobility (HM) TCO materials as well as the current status of such materials based on indium and cadmium containing oxides are presented. Various concepts used to understand the underlying mechanisms for high mobility in HMTCO films are discussed. Examples of HMTCO layers used as transparent electrodes in thin film solar cells are used to illustrate possible improvements in solar cell performance. Finally, challenges and prospects for further development of HMTCO materials are discussed.

  9. Meteor Film Recording with Digital Film Cameras with large CMOS Sensors

    Science.gov (United States)

    Slansky, P. C.

    2016-12-01

    In this article the author combines his professional know-how about cameras for film and television production with his amateur astronomy activities. Professional digital film cameras with high sensitivity are still quite rare in astronomy. One reason for this may be their costs of up to 20 000 EUR and more (camera body only). In the interim, however, consumer photo cameras with film mode and very high sensitivity have come to the market for about 2 000 EUR. In addition, ultra-high-sensitivity professional film cameras, which are very interesting for meteor observation, have been introduced to the market. The particular benefits of digital film cameras with large CMOS sensors, including photo cameras with film recording function, for meteor recording are presented by three examples: a 2014 Camelopardalid, shot with a Canon EOS C 300; an exploding 2014 Aurigid, shot with a Sony alpha7S; and the 2016 Perseids, shot with a Canon ME20F-SH. All three cameras use large CMOS sensors; "large" meaning Super-35 mm, the classic 35 mm film format (24x13.5 mm, similar to APS-C size), or full format (36x24 mm), the classic 135 photo camera format. Comparisons are made to the widely used cameras with small CCD sensors, such as Mintron or Watec; "small" meaning 1/2" (6.4x4.8 mm) or less. Additionally, special photographic image processing of meteor film recordings is discussed.

  10. The role of camera-bundled image management software in the consumer digital imaging value chain

    Science.gov (United States)

    Mueller, Milton; Mundkur, Anuradha; Balasubramanian, Ashok; Chirania, Virat

    2005-02-01

    This research was undertaken by the Convergence Center at the Syracuse University School of Information Studies (www.digital-convergence.info). Project ICONICA, the name for the research, focuses on the strategic implications of digital Images and the CONvergence of Image management and image CApture. Consumer imaging - the activity that we once called "photography" - is now recognized as in the throes of a digital transformation. At the end of 2003, market researchers estimated that about 30% of the households in the U.S. and 40% of the households in Japan owned digital cameras. In 2004, of the 86 million new cameras sold (excluding one-time use cameras), a majority (56%) were estimated to be digital cameras. Sales of photographic film, while still profitable, are declining precipitously.

  11. Generic Dynamic Environment Perception Using Smart Mobile Devices.

    Science.gov (United States)

    Danescu, Radu; Itu, Razvan; Petrovai, Andra

    2016-10-17

    The driving environment is complex and dynamic, and the attention of the driver is continuously challenged; therefore, computer-based assistance achieved by processing image and sensor data may increase traffic safety. While active sensors and stereovision have the advantage of obtaining 3D data directly, monocular vision is easy to set up and can benefit from the increasing computational power of smart mobile devices and from the fact that almost all of them come with an embedded camera. Several driving assistance applications are available for mobile devices, but they are mostly targeted at simple scenarios and a limited range of obstacle shapes and poses. This paper presents a technique for generic, shape-independent, real-time obstacle detection for mobile devices, based on a dynamic, free-form 3D representation of the environment: the particle-based occupancy grid. Images acquired in real time from the smart mobile device's camera are processed by removing the perspective effect and segmenting the resulting bird's-eye view image to identify candidate obstacle areas, which are then used to update the occupancy grid. The tracked occupancy grid cells are grouped into obstacles depicted as cuboids having position, size, orientation and speed. The easy-to-set-up system is able to reliably detect most obstacles in urban traffic, and its measurement accuracy is comparable to a stereovision system.
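    The first processing stage described above (perspective removal followed by segmentation of the bird's-eye view) can be sketched as follows with OpenCV. The road-plane points, grid cell size and the simple edge-density test are illustrative assumptions; the particle-based occupancy grid update and cuboid tracking are not reproduced here.

```python
import numpy as np
import cv2


def birds_eye_view(frame, src_pts, size=(400, 600)):
    """Warp a road-facing camera frame to a top-down view."""
    w, h = size
    dst_pts = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
    H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(frame, H, (w, h))


def candidate_obstacle_cells(top_down, cell=20, edge_fraction=0.15):
    """Grid the top-down view and flag cells with a high density of edges."""
    edges = cv2.Canny(cv2.cvtColor(top_down, cv2.COLOR_BGR2GRAY), 80, 160)
    h, w = edges.shape
    flags = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            block = edges[y:y + cell, x:x + cell]
            if np.count_nonzero(block) > edge_fraction * cell * cell:
                flags.append((x, y))
    return flags


if __name__ == "__main__":
    frame = cv2.imread("road_frame.jpg")                  # hypothetical input image
    # Four road-plane points in the image: bottom-left, bottom-right, top-right, top-left.
    src = [(80, 470), (560, 470), (420, 280), (220, 280)]
    top = birds_eye_view(frame, src)
    print(len(candidate_obstacle_cells(top)), "candidate cells")
```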

  12. A State-of-the-Art Review on Mapping and Localization of Mobile Robots Using Omnidirectional Vision Sensors

    Directory of Open Access Journals (Sweden)

    L. Payá

    2017-01-01

    Full Text Available Nowadays, the field of mobile robotics is experiencing a quick evolution, and a variety of autonomous vehicles is available to solve different tasks. The advances in computer vision have led to a substantial increase in the use of cameras as the main sensors in mobile robots. They can be used as the only source of information or in combination with other sensors such as odometry or laser. Among vision systems, omnidirectional sensors stand out due to the richness of the information they provide the robot with, and an increasing number of works about them have been published over the last few years, leading to a wide variety of frameworks. In this review, some of the most important works are analysed. One of the key problems the scientific community is currently addressing is the improvement of the autonomy of mobile robots. To this end, building robust models of the environment, localization and navigation are three important abilities that any mobile robot must have. Taking this into account, the review concentrates on these problems; how researchers have addressed them by means of omnidirectional vision; the main frameworks they have proposed; and how they have evolved in recent years.

  13. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column one above the other through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head

  14. Retinol-induced changes in the phosphorylation levels of histones and high mobility group proteins from Sertoli cells

    Directory of Open Access Journals (Sweden)

    Moreira J.C.F.

    2000-01-01

    Full Text Available Chromatin proteins play a role in the organization and functions of DNA. Covalent modifications of nuclear proteins modulate their interactions with DNA sequences and are probably one of the multiple factors involved in the process of switching on/off transcriptionally active regions of DNA. Histones and high mobility group proteins (HMG) are subject to many covalent modifications that may modulate their capacity to bind to DNA. We investigated the changes induced in the phosphorylation pattern of cultured Wistar rat Sertoli cell histones and high mobility group protein subfamilies exposed to 7 µM retinol for up to 48 h. In each experiment, 6 h before the end of the retinol treatment each culture flask received 370 kBq/ml [³²P]-phosphate. The histones and HMGs were isolated as previously described [Moreira et al., Medical Science Research (1994) 22: 783-784]. The total protein obtained by either method was quantified and electrophoresed as described by Spiker [Analytical Biochemistry (1980) 108: 263-265]. The gels were stained with Coomassie brilliant blue R-250 and the stained bands were cut and dissolved in 0.5 ml 30% H₂O₂ at 60 °C for 12 h. The vials were chilled and 5.0 ml scintillation liquid was added. The radioactivity in each vial was determined with a liquid scintillation counter. Retinol treatment significantly changed the phosphorylation pattern of each subfamily of histone and high mobility group proteins.

  15. Camera calibration method of binocular stereo vision based on OpenCV

    Science.gov (United States)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the precision and efficiency of the calibration process. First, the camera model in OpenCV and a camera calibration algorithm are presented, with particular attention to the influence of radial and decentering lens distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors; a high-accuracy profile extraction algorithm and a checkerboard with 48 corners are also used in this step. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
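
    As a rough illustration of the kind of procedure this record describes, the sketch below runs OpenCV's standard checkerboard calibration. The 8 × 6 inner-corner pattern, the square size and the calib_*.png file names are assumptions for illustration, not details from the paper.

```python
# Minimal OpenCV checkerboard calibration sketch (not the authors' code).
# Assumes an 8x6 inner-corner checkerboard (48 corners) and at least one
# hypothetical image file matching calib_*.png; the square size is arbitrary.
import glob
import cv2
import numpy as np

pattern = (8, 6)            # inner corners per row and column (assumed)
square_size = 25.0          # mm, assumed

# 3D reference coordinates of the checkerboard corners (Z = 0 plane)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for fname in glob.glob("calib_*.png"):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern, None)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

# Estimate intrinsics and distortion (radial + tangential/decentering terms)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("Camera matrix:\n", K)
print("Distortion coefficients (k1 k2 p1 p2 k3):", dist.ravel())
```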

  16. High mobility solution-processed hybrid light emitting transistors

    International Nuclear Information System (INIS)

    Walker, Bright; Kim, Jin Young; Ullah, Mujeeb; Burn, Paul L.; Namdas, Ebinazar B.; Chae, Gil Jo; Cho, Shinuk; Seo, Jung Hwa

    2014-01-01

    We report the design, fabrication, and characterization of high-performance, solution-processed hybrid (inorganic-organic) light emitting transistors (HLETs). The devices employ a high-mobility, solution-processed cadmium sulfide layer as the switching and transport layer, with the conjugated polymer Super Yellow as an emissive material in a non-planar source/drain transistor geometry. We demonstrate HLETs with electron mobilities of up to 19.5 cm²/(V s), current on/off ratios of >10⁷, and an external quantum efficiency of 10⁻² % at 2100 cd/m². This combined optical and electrical performance exceeds that reported to date for HLETs. Furthermore, we provide a full analysis of the charge injection, charge transport, and recombination mechanisms of the HLETs. The high brightness coupled with a high on/off ratio and low-cost solution processing makes this type of hybrid device attractive from a manufacturing perspective.
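
    The mobility figure quoted above is the kind of value typically extracted from a transfer curve. The sketch below shows one common way to do that in the linear regime; the device geometry, gate capacitance, bias and I-V data are hypothetical placeholders, not values from this work.

```python
# Linear-regime field-effect mobility from a transfer curve:
# mu = (L / (W * C_i * V_DS)) * g_m, with g_m = dI_D/dV_G.
import numpy as np

# Hypothetical device geometry and bias (not taken from the paper)
L = 20e-4        # channel length in cm (20 um)
W = 0.1          # channel width in cm (1 mm)
C_i = 1.5e-8     # gate capacitance per unit area in F/cm^2
V_DS = 2.0       # drain-source voltage in V (linear regime assumed)

# Hypothetical transfer curve: gate voltage (V) and drain current (A)
V_G = np.linspace(0, 40, 81)
I_D = 1e-9 + 5e-6 * np.clip(V_G - 5.0, 0, None)   # toy data

g_m = np.gradient(I_D, V_G)                        # transconductance dI_D/dV_G (S)
mu_lin = (L / (W * C_i * V_DS)) * g_m              # mobility in cm^2/(V s)
print("peak linear-regime mobility: %.2f cm^2/(V s)" % mu_lin.max())
```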

  17. Droplet deposition measurement with high-speed camera and novel high-speed liquid film sensor with high spatial resolution

    International Nuclear Information System (INIS)

    Damsohn, M.; Prasser, H.-M.

    2011-01-01

    Highlights: → Development of a sensor for time- and space-resolved droplet deposition in annular flow. → Experimental measurement of droplet deposition in horizontal annular flow to compare readings of the sensor with images of a high-speed camera when droplets are depositing onto the liquid film. → Self-adaptive signal filter based on autoregression to separate droplet impacts in the sensor signal from waves of the liquid film. - Abstract: A sensor based on the electrical conductance method is presented for the measurement of dynamic liquid films in two-phase flow. The so-called liquid film sensor consists of a matrix with 64 × 16 measuring points, a spatial resolution of 3.12 mm and a time resolution of 10 kHz. Experiments in a horizontal co-current air-water film flow were conducted to test the capability of the sensor to detect droplet deposition from the gas core onto the liquid film. The experimental setup is equipped with the liquid film sensor and a high-speed camera (HSC) recording the droplet deposition with a sampling rate of 10 kHz simultaneously. In some experiments the recognition of droplet deposition on the sensor is enhanced by marking the droplets with higher electrical conductivity. The comparison between the HSC and the sensor shows that the sensor captures droplet deposition above a certain droplet diameter. The impacts of droplet deposition can be filtered from the wave-induced conductivity changes of the liquid film using a filter algorithm based on autoregression. The results will be used to locally measure droplet deposition, e.g. in the proximity of spacers in a subchannel geometry.
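
    A minimal sketch of the autoregression idea mentioned above: fit an AR model to the film signal and flag samples where the one-step prediction error is large. This is an illustrative reconstruction with synthetic data, not the authors' filter.

```python
# Autoregressive (AR) prediction-error filter for flagging droplet impacts
# in a single conductance time series (synthetic example).
import numpy as np

def ar_prediction_error(x, order=8):
    """Fit AR coefficients by least squares and return the one-step prediction error."""
    # Lagged design matrix: rows are [x[t-1], ..., x[t-order]]
    X = np.column_stack([x[order - k - 1: len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ coeffs
    return np.concatenate([np.zeros(order), residual])

fs = 10_000                                    # sensor sampling rate, 10 kHz
t = np.arange(0, 0.2, 1.0 / fs)
film = 0.5 + 0.1 * np.sin(2 * np.pi * 30 * t)  # slow wavy film signal (synthetic)
signal = film.copy()
signal[800] += 0.3                             # synthetic droplet impact (sudden jump)

err = ar_prediction_error(signal)
impacts = np.flatnonzero(np.abs(err) > 5 * np.std(err))
print("candidate impact samples:", impacts)
```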

  18. Swedish High-End Apparel Online

    OpenAIRE

    Hansson, Christoffer; Grabe, Thomas; Thomander, Karolina

    2010-01-01

    Through a qualitative case study, this study describes how six Swedish high-end apparel companies with online distribution, regarded as part of “the Swedish fashion wonder”, have been affected by six chosen factors. The six factors presented are extracted from previous studies and consist of customer relationships, intermediary relationships, pricing, costs and revenue, competitors and impact on the brand. The results show that customer relationships are an important factor that most comp...

  19. Factors affecting the myocardial activity acquired during exercise SPECT with a high-sensitivity cardiac CZT camera as compared with conventional Anger camera

    Energy Technology Data Exchange (ETDEWEB)

    Verger, Antoine; Karcher, Gilles [CHU-Nancy, Department of Nuclear Medicine, Nancy (France); INSERM U947 and Universite de Lorraine, Nancy (France); Nancyclotep experimental imaging platform, Nancy (France); Imbert, Laetitia [CHU-Nancy, Department of Nuclear Medicine, Nancy (France); Nancyclotep experimental imaging platform, Nancy (France); Centre Alexis Vautrin, Department of Radiotherapy, Vandoeuvre (France); Yagdigul, Yalcine; Roch, Veronique [CHU-Nancy, Department of Nuclear Medicine, Nancy (France); Nancyclotep experimental imaging platform, Nancy (France); Fay, Renaud [INSERM, Centre d' Investigation Clinique CIC-P 9501, Nancy (France); Djaballah, Wassila [CHU-Nancy, Department of Nuclear Medicine, Nancy (France); INSERM U947 and Universite de Lorraine, Nancy (France); Rouzet, Francois; Le Guludec, Dominique [AP-HP, Hopital Bichat, Department of Nuclear Medicine, Paris (France); INSERM U 773 and Denis Diderot University, Paris (France); Fourquet, Nicolas [Clinique Pasteur, Toulouse (France); Poussier, Sylvain [INSERM U947 and Universite de Lorraine, Nancy (France); Nancyclotep experimental imaging platform, Nancy (France); Marie, Pierre-Yves [CHU-Nancy, Department of Nuclear Medicine, Nancy (France); Nancyclotep experimental imaging platform, Nancy (France); INSERM U1116 and Universite de Lorraine, Nancy (France); CHU-Nancy, Allee du Morvan, Medecine Nucleaire, Hopital de Brabois, Vandoeuvre-les-Nancy (France)

    2014-03-15

    Injected doses are difficult to optimize for exercise SPECT since they depend on the myocardial fraction of injected activity (MFI) that is detected by the camera. The aim of this study was to analyse the factors affecting MFI determined using a cardiac CZT camera as compared with those determined using conventional Anger cameras. Factors affecting MFI were determined and compared in patients who had consecutive exercise SPECT acquisitions with {sup 201}Tl (84 patients) or {sup 99m}Tc-sestamibi (87 patients) with an Anger or a CZT camera. A predictive model was validated in a group of patients routinely referred for {sup 201}Tl (78 patients) or {sup 99m}Tc-sestamibi (80 patients) exercise CZT SPECT. The predictive model involved: (1) camera type, adjusted mean MFI being ninefold higher for CZT than for Anger SPECT, (2) tracer type, adjusted mean MFI being twofold higher for {sup 201}Tl than for {sup 99m}Tc-sestamibi, and (3) logarithm of body weight. The CZT SPECT model led to a +1 ± 26 % error in the prediction of the actual MFI from the validation group. The mean MFI values estimated for CZT SPECT were more than twofold higher in patients with a body weight of 60 kg than in patients with a body weight of 120 kg (15.9 and 6.8 ppm for {sup 99m}Tc-sestamibi and 30.5 and 13.1ppm for {sup 201}Tl, respectively), and for a 14-min acquisition of up to one million myocardial counts, the corresponding injected activities were only 80 and 186 MBq for {sup 99m}Tc-sestamibi and 39 and 91 MBq for {sup 201}Tl, respectively. Myocardial activities acquired during exercise CZT SPECT are strongly influenced by body weight and tracer type, and are dramatically higher than those obtained using an Anger camera, allowing very low-dose protocols to be planned, especially for {sup 99m}Tc-sestamibi and in non-obese subjects. (orig.)

  20. Factors affecting the myocardial activity acquired during exercise SPECT with a high-sensitivity cardiac CZT camera as compared with conventional Anger camera

    International Nuclear Information System (INIS)

    Verger, Antoine; Karcher, Gilles; Imbert, Laetitia; Yagdigul, Yalcine; Roch, Veronique; Fay, Renaud; Djaballah, Wassila; Rouzet, Francois; Le Guludec, Dominique; Fourquet, Nicolas; Poussier, Sylvain; Marie, Pierre-Yves

    2014-01-01

    Injected doses are difficult to optimize for exercise SPECT since they depend on the myocardial fraction of injected activity (MFI) that is detected by the camera. The aim of this study was to analyse the factors affecting MFI determined using a cardiac CZT camera as compared with those determined using conventional Anger cameras. Factors affecting MFI were determined and compared in patients who had consecutive exercise SPECT acquisitions with 201 Tl (84 patients) or 99m Tc-sestamibi (87 patients) with an Anger or a CZT camera. A predictive model was validated in a group of patients routinely referred for 201 Tl (78 patients) or 99m Tc-sestamibi (80 patients) exercise CZT SPECT. The predictive model involved: (1) camera type, adjusted mean MFI being ninefold higher for CZT than for Anger SPECT, (2) tracer type, adjusted mean MFI being twofold higher for 201 Tl than for 99m Tc-sestamibi, and (3) logarithm of body weight. The CZT SPECT model led to a +1 ± 26 % error in the prediction of the actual MFI from the validation group. The mean MFI values estimated for CZT SPECT were more than twofold higher in patients with a body weight of 60 kg than in patients with a body weight of 120 kg (15.9 and 6.8 ppm for 99m Tc-sestamibi and 30.5 and 13.1ppm for 201 Tl, respectively), and for a 14-min acquisition of up to one million myocardial counts, the corresponding injected activities were only 80 and 186 MBq for 99m Tc-sestamibi and 39 and 91 MBq for 201 Tl, respectively. Myocardial activities acquired during exercise CZT SPECT are strongly influenced by body weight and tracer type, and are dramatically higher than those obtained using an Anger camera, allowing very low-dose protocols to be planned, especially for 99m Tc-sestamibi and in non-obese subjects. (orig.)
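
    To illustrate the structure of the predictive model described in the two records above (camera type, tracer type and the logarithm of body weight acting multiplicatively on MFI), the sketch below fits a log-linear model to synthetic data. The coefficients it recovers are set by the simulation and have no relation to the published values.

```python
# Illustrative only: fit log(MFI) ~ 1 + camera_type + tracer_type + log(weight)
# on synthetic data shaped like the model described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
n = 200
is_czt = rng.integers(0, 2, n)          # 1 = CZT camera, 0 = Anger camera
is_tl = rng.integers(0, 2, n)           # 1 = 201Tl, 0 = 99mTc-sestamibi
weight = rng.uniform(50, 120, n)        # body weight in kg

# Synthetic "true" model: multiplicative effects -> linear in log(MFI)
log_mfi = (np.log(2.0) + np.log(9.0) * is_czt + np.log(2.0) * is_tl
           - 1.0 * np.log(weight) + rng.normal(0, 0.1, n))

# Ordinary least squares on the design matrix [1, is_czt, is_tl, log(weight)]
X = np.column_stack([np.ones(n), is_czt, is_tl, np.log(weight)])
beta, *_ = np.linalg.lstsq(X, log_mfi, rcond=None)
print("camera-type ratio (CZT/Anger): %.1f" % np.exp(beta[1]))
print("tracer ratio (201Tl/99mTc):    %.1f" % np.exp(beta[2]))
print("body-weight exponent:          %.2f" % beta[3])
```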

  1. 75 FR 6704 - In the Matter of Certain Mobile Telephones and Wireless Communication Devices Featuring Digital...

    Science.gov (United States)

    2010-02-10

    ... States after importation of certain mobile telephones and wireless communication devices featuring... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-663] In the Matter of Certain Mobile Telephones and Wireless Communication Devices Featuring Digital Cameras, and Components Thereof; Notice of...

  2. 75 FR 65654 - In the Matter of: Certain Mobile Telephones and Wireless Communication Devices Featuring Digital...

    Science.gov (United States)

    2010-10-26

    ... States after importation of certain mobile telephones and wireless communication devices featuring... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-703] In the Matter of: Certain Mobile Telephones and Wireless Communication Devices Featuring Digital Cameras, and Components Thereof;Notice of...

  3. Interaction of a non-histone chromatin protein (high-mobility group protein 2) with DNA

    International Nuclear Information System (INIS)

    Goodwin, G.H.; Shooter, K.V.; Johns, E.W.

    1975-01-01

    The interaction with DNA of the calf thymus chromatin non-histone protein termed the high-mobility group protein 2 has been studied by sedimentation analysis in the ultracentrifuge and by measuring the binding of the 125 I-labelled protein to DNA. The results have been compared with those obtained previously by us [Eur. J. Biochem. (1974) 47, 263-270] for the interaction of high-mobility group protein 1 with DNA. Although the binding parameters are similar for these two proteins, high-mobility group protein 2 differs from high-mobility group protein 1 in that the former appears to change the shape of the DNA to a more compact form. The molecular weight of high-mobility group protein 2 has been determined by equilibrium sedimentation and a mean value of 26,000 was obtained. A low level of nuclease activity detected in one preparation of high-mobility group protein 2 has been investigated. (orig.) [de

  4. Performance Characterization of UV Science Cameras Developed for the Chromospheric Lyman-Alpha Spectro-Polarimeter

    Science.gov (United States)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, D.; Beabout, B.; Stewart, M.

    2014-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-alpha and to detect the Hanle effect in the line core. Due to the nature of Lyman-alpha polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1 percent in the line core. CLASP is a dual-beam spectro-polarimeter, which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1 percent polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. Coating the e2v CCD57-10 512x512 detectors with Lumogen-E coating allows for a relatively high (30 percent) quantum efficiency at the Lyman-alpha line. The CLASP cameras were designed to operate with 10 e-/pixel/second dark current, 25 e- read noise, a gain of 2.0 +/- 0.5 and 1.0 percent residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera; dark current, read noise, camera gain and residual non-linearity.
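
    One of the quantities listed above, the camera gain, is commonly estimated with a mean-variance (photon transfer) analysis. The sketch below simulates that procedure; the gain, read noise, frame size and exposure levels are assumed for illustration and are not CLASP measurements.

```python
# Mean-variance (photon transfer) gain estimate on simulated flat-field frame pairs.
import numpy as np

rng = np.random.default_rng(1)
true_gain = 2.0          # electrons per ADU (assumed)
read_noise_e = 25.0      # electrons rms (assumed)

means, variances = [], []
for n_e in [500, 1000, 2000, 5000, 10000, 20000]:       # mean signal in electrons
    # Two flat-field frames at the same illumination level
    f1 = rng.poisson(n_e, (256, 256)) + rng.normal(0, read_noise_e, (256, 256))
    f2 = rng.poisson(n_e, (256, 256)) + rng.normal(0, read_noise_e, (256, 256))
    f1, f2 = f1 / true_gain, f2 / true_gain             # convert to ADU
    means.append(0.5 * (f1.mean() + f2.mean()))
    # Differencing removes fixed-pattern structure; /2 restores single-frame variance
    variances.append(np.var(f1 - f2) / 2.0)

# Shot-noise-limited regime: var_ADU = mean_ADU / gain, so gain = 1 / slope
slope, _ = np.polyfit(means, variances, 1)
print("estimated gain: %.2f e-/ADU" % (1.0 / slope))
```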

  5. High Expression of High-Mobility Group Box 1 in Menstrual Blood: Implications for Endometriosis.

    Science.gov (United States)

    Shimizu, Keiko; Kamada, Yasuhiko; Sakamoto, Ai; Matsuda, Miwa; Nakatsuka, Mikiya; Hiramatsu, Yuji

    2017-11-01

    Endometriosis is a benign gynecologic disease characterized by the presence of ectopic endometrium and associated with inflammation and immune abnormalities. However, the molecular basis for endometriosis is not well understood. To address this issue, the present study examined the expression of high-mobility group box (HMGB) 1 in menstrual blood to investigate its role in the ectopic growth of human endometriotic stromal cells (ESCs). A total of 139 patients were enrolled in this study; 84 had endometriosis and 55 were nonendometriotic gynecological patients (control). The HMGB1 levels in various fluids were measured by enzyme-linked immunosorbent assay. Expression of receptor for advanced glycation end products (RAGE) in eutopic and ectopic endometrium was assessed by immunohistochemistry, and RAGE and vascular endothelial growth factor (VEGF) messenger RNA expression in HMGB1- and lipopolysaccharide (LPS)-treated ESCs was evaluated by real-time polymerase chain reaction. The HMGB1 concentration was higher in menstrual blood than in serum or peritoneal fluid. These findings suggest that HMGB1 may contribute to the development of endometriosis following retrograde menstruation when complexed with other factors such as LPS, by inducing inflammation and angiogenesis.

  6. Highly Mobile Students: Educational Problems and Possible Solutions. ERIC/CUE Digest, Number 73.

    Science.gov (United States)

    ERIC Clearinghouse on Urban Education, New York, NY.

    The following two types of student mobility stand out as causing educational problems: (1) inner-city mobility, which is prompted largely by fluctuations in the job market; and (2) intra-city mobility, which is caused by upward mobility or by poverty and homelessness. Most research indicates that high mobility negatively affects student…

  7. Mechanical Design of the LSST Camera

    Energy Technology Data Exchange (ETDEWEB)

    Nordby, Martin; Bowden, Gordon; Foss, Mike; Guiffre, Gary; /SLAC; Ku, John; /Unlisted; Schindler, Rafe; /SLAC

    2008-06-13

    The LSST camera is a tightly packaged, hermetically-sealed system that is cantilevered into the main beam of the LSST telescope. It is comprised of three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 giga-pixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce impact on the image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.

  8. BEYOND THE WORK-LIFE BALANCE: FAMILY AND INTERNATIONAL MOBILITY OF THE HIGHLY SKILLED

    Directory of Open Access Journals (Sweden)

    Núria Vergés Bosch

    2013-10-01

    Full Text Available International mobility of the highly skilled has become one of the cornerstones of development in the current knowledge society. Correspondingly, highly skilled personnel are impelled to move abroad in order to improve their competences and build influential professional networks. Mobility implies some advantages involving personal, social and family opportunities when movers experience handicaps in their country of origin. For movers, mobility becomes a new challenge beyond the work-family balance, particularly for women who usually take on the lion’s share of childcare and domestic tasks within the family. The literature exploring the gender dimension in relation to international mobility points to complex outcomes. Firstly, women are taking on a more active role in international mobility processes, even when they have family. Secondly, family and international mobility are interrelated both for men and for women, although family could become a hindrance, particularly for women. Thirdly, international mobility and women’s career development may interfere with family formation or modify traditional family values. Finally, families moving abroad constitute a challenge for public policy, since they present a new area of problems. We aim to analyse the relationship between international mobility and family based on in-depth interviews from a purposive sample of highly skilled personnel in science and technology. The results of our research suggest that international mobility of the highly skilled has effects on the family and vice versa; however, while international mobility and family are compatible, measures and policies to reconcile them are still insufficient.

  9. Automated Mobile System for Accurate Outdoor Tree Crop Enumeration Using an Uncalibrated Camera

    OpenAIRE

    Thuy Tuong Nguyen; David C. Slaughter; Bradley D. Hanson; Andrew Barber; Amy Freitas; Daniel Robles; Erin Whelan

    2015-01-01

    This paper demonstrates an automated computer vision system for outdoor tree crop enumeration in a seedling nursery. The complete system incorporates both hardware components (including an embedded microcontroller, an odometry encoder, and an uncalibrated digital color camera) and software algorithms (including microcontroller algorithms and the proposed algorithm for tree crop enumeration) required to obtain robust performance in a natural outdoor environment. The enumeration system uses a t...

  10. Solid-state framing camera with multiple time frames

    Energy Technology Data Exchange (ETDEWEB)

    Baker, K. L.; Stewart, R. E.; Steele, P. T.; Vernon, S. P.; Hsing, W. W.; Remington, B. A. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)

    2013-10-07

    A high speed solid-state framing camera has been developed which can operate over a wide range of photon energies. This camera measures the two-dimensional spatial profile of the flux incident on a cadmium selenide semiconductor at multiple times. This multi-frame camera has been tested at 3.1 eV and 4.5 keV. The framing camera currently records two frames with a temporal separation between the frames of 5 ps but this separation can be varied between hundreds of femtoseconds up to nanoseconds and the number of frames can be increased by angularly multiplexing the probe beam onto the cadmium selenide semiconductor.

  11. Optical character recognition of camera-captured images based on phase features

    Science.gov (United States)

    Diaz-Escobar, Julia; Kober, Vitaly

    2015-09-01

    Nowadays most digital information is obtained using mobile devices, especially smartphones. In particular, this brings the opportunity for optical character recognition in camera-captured images. For this reason many recognition applications have recently been developed, such as recognition of license plates, business cards, receipts and street signs; document classification; augmented reality; language translation; and so on. Camera-captured images are usually affected by geometric distortions, nonuniform illumination, shadows and noise, which make the recognition task difficult for existing systems. It is well known that the Fourier phase contains a lot of important information independently of the Fourier magnitude. In this work we therefore propose a phase-based recognition system exploiting phase-congruency features for illumination/scale invariance. The performance of the proposed system is tested in terms of misclassifications and false alarms by means of computer simulation.
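
    A small demonstration of the premise stated above, that the Fourier phase carries most of the structural information of an image: reconstruct a test image from its phase alone and from its magnitude alone and compare. This is not the paper's phase-congruency method; the image is synthetic, and the phase-only reconstruction typically correlates much better with the original.

```python
# Phase-only vs. magnitude-only image reconstruction with numpy FFTs.
import numpy as np

rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0                 # a simple "character-like" block
img += 0.05 * rng.standard_normal(img.shape)

F = np.fft.fft2(img)
magnitude, phase = np.abs(F), np.angle(F)

# Reconstruct using phase only (unit magnitude) and magnitude only (zero phase)
phase_only = np.real(np.fft.ifft2(np.exp(1j * phase)))
magnitude_only = np.real(np.fft.ifft2(magnitude))

def corr(a, b):
    """Normalized correlation with the original image."""
    a, b = a - a.mean(), b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

print("phase-only reconstruction vs. original:     %.3f" % corr(phase_only, img))
print("magnitude-only reconstruction vs. original: %.3f" % corr(magnitude_only, img))
```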

  12. Mobile high-voltage switchboard. Variable and uncomplicated; Mobile Hochspannungsschaltanlage. Variabel und unkompliziert in der Anwendung

    Energy Technology Data Exchange (ETDEWEB)

    Albert, Andreas [Siemens AG, Erlangen (Germany). Sector Energy

    2009-07-13

    The mobile high-voltage switchboard "REE-Movil 2" for voltages up to 245 kV provides a complete and nearly autonomous switchboard in a container, a solution that has been available in the medium-voltage sector for some time already. It can be used whenever a quick replacement of a switchboard section or a temporary supplement to a switching substation is needed. The container is mounted on a trailer for maximum flexibility and mobility. (orig.)

  13. Mobile bearing medial unicompartmental knee arthroplasty in patients whose lifestyles involve high degrees of knee flexion: A 10-14 year follow-up study.

    Science.gov (United States)

    Choy, Won Sik; Lee, Kwang Won; Kim, Ha Yong; Kim, Kap Jung; Chun, Young Sub; Yang, Dae Suk

    2017-08-01

    Because Asian populations have different lifestyles from Western populations, such as squatting and sitting on the floor, the clinical results and survival rate of unicompartmental knee arthroplasty (UKA) for Asian patients may be different. This study described outcomes of mobile bearing medial UKA for Korean patients. A total of 164 knees treated with mobile bearing UKAs in 147 patients (14 males and 133 females) were reviewed. The mean follow-up period was 12.1 years (range 10.1-14). The clinical outcomes, such as the Hospital for Special Surgery Knee score, the Oxford Knee Score and the Knee Society rating system, showed statistically significant improvement from pre-operative to final follow-up. Complications included bearing dislocation. The survival rate at 12 years was 84.1% (95% confidence interval), with revision for any reason as the end point. Minimally invasive mobile bearing UKA in Asian patients who required high degrees of knee flexion showed rapid recovery and good clinical outcomes. However, these patients also showed relatively high rates of bearing dislocation and aseptic loosening. Therefore, mobile bearing UKA should only be performed in patients whose lifestyle involves high flexion after careful consideration of these risks and benefits. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. High level bacterial contamination of secondary school students' mobile phones.

    Science.gov (United States)

    Kõljalg, Siiri; Mändar, Rando; Sõber, Tiina; Rööp, Tiiu; Mändar, Reet

    2017-06-01

    While contamination of mobile phones in the hospital has been found to be common in several studies, little information about bacterial abundance on phones used in the community is available. Our aim was to quantitatively determine the bacterial contamination of secondary school students' mobile phones. Altogether 27 mobile phones were studied. The contact plate method and microbial identification using a MALDI-TOF mass spectrometer were used for culture studies. Quantitative PCR reactions for the detection of universal 16S rRNA, Enterococcus faecalis 16S rRNA and Escherichia coli allantoin permease were performed, and the presence of tetracycline (tetA, tetB, tetM), erythromycin (ermB) and sulphonamide (sul1) resistance genes was assessed. We found a high median bacterial count on secondary school students' mobile phones (10.5 CFU/cm²) and a median of 17,032 bacterial 16S rRNA gene copies per phone. Potentially pathogenic microbes (Staphylococcus aureus, Acinetobacter spp., Pseudomonas spp., Bacillus cereus and Neisseria flavescens) were found among the dominant microbes more often on phones with a higher percentage of E. faecalis in total bacterial 16S rRNA. No differences in contamination level or dominating bacterial species were found between phone owners' genders or between phone types (touch screen/keypad). No antibiotic resistance genes were detected on mobile phone surfaces. Quantitative study methods revealed high-level bacterial contamination of secondary school students' mobile phones.

  15. Reliable and repeatable characterization of optical streak cameras

    International Nuclear Information System (INIS)

    Charest, Michael R. Jr.; Torres, Peter III; Silbernagel, Christopher T.; Kalantar, Daniel H.

    2008-01-01

    Optical streak cameras are used as primary diagnostics for a wide range of physics and laser experiments at facilities such as the National Ignition Facility. To meet the strict accuracy requirements needed for these experiments, the systematic nonlinearities of the streak cameras (attributed to nonlinearities in the optical and electrical components that make up the streak camera system) must be characterized. In some cases the characterization information is used as a guide to help determine how experiment data should be taken. In other cases, the characterization data are applied to the raw data images to correct for the nonlinearities. In order to characterize an optical streak camera, a specific set of data is collected, where the response to defined inputs are recorded. A set of analysis software routines has been developed to extract information such as spatial resolution, dynamic range, and temporal resolution from this data set. The routines are highly automated, requiring very little user input and thus provide very reliable and repeatable results that are not subject to interpretation. An emphasis on quality control has been placed on these routines due to the high importance of the camera characterization information.
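
    As one concrete example of the kind of automated characterization these records describe, the sketch below estimates spatial resolution as the FWHM of a line-spread profile. The profile is synthetic; this is not the facility's analysis code.

```python
# FWHM of a single-peaked line-spread profile by linear interpolation.
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked profile."""
    y = y - y.min()
    half = 0.5 * y.max()
    above = np.flatnonzero(y >= half)
    lo, hi = above[0], above[-1]
    # Interpolate the half-maximum crossing positions on each side of the peak
    x_lo = np.interp(half, [y[lo - 1], y[lo]], [x[lo - 1], x[lo]])
    x_hi = np.interp(half, [y[hi + 1], y[hi]], [x[hi + 1], x[hi]])
    return x_hi - x_lo

pixels = np.arange(200)
profile = np.exp(-0.5 * ((pixels - 100) / 4.0) ** 2)   # synthetic line-spread function
print("spatial resolution (FWHM): %.2f pixels" % fwhm(pixels, profile))
```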

  16. Reliable and Repeatable Characterization of Optical Streak Cameras

    International Nuclear Information System (INIS)

    Kalantar, D; Charest, M; Torres III, P; Charest, M

    2008-01-01

    Optical streak cameras are used as primary diagnostics for a wide range of physics and laser experiments at facilities such as the National Ignition Facility (NIF). To meet the strict accuracy requirements needed for these experiments, the systematic nonlinearities of the streak cameras (attributed to nonlinearities in the optical and electrical components that make up the streak camera system) must be characterized. In some cases the characterization information is used as a guide to help determine how experiment data should be taken. In other cases, the characterization data are applied to the raw data images to correct for the nonlinearities. In order to characterize an optical streak camera, a specific set of data is collected, where the response to defined inputs are recorded. A set of analysis software routines has been developed to extract information such as spatial resolution, dynamic range, and temporal resolution from this data set. The routines are highly automated, requiring very little user input and thus provide very reliable and repeatable results that are not subject to interpretation. An emphasis on quality control has been placed on these routines due to the high importance of the camera characterization information

  17. Continuous monitoring of Hawaiian volcanoes with thermal cameras

    Science.gov (United States)

    Patrick, Matthew R.; Orr, Tim R.; Antolik, Loren; Lee, Robert Lopaka; Kamibayashi, Kevan P.

    2014-01-01

    Continuously operating thermal cameras are becoming more common around the world for volcano monitoring, and offer distinct advantages over conventional visual webcams for observing volcanic activity. Thermal cameras can sometimes “see” through volcanic fume that obscures views to visual webcams and the naked eye, and often provide a much clearer view of the extent of high temperature areas and activity levels. We describe a thermal camera network recently installed by the Hawaiian Volcano Observatory to monitor Kīlauea’s summit and east rift zone eruptions (at Halema‘uma‘u and Pu‘u ‘Ō‘ō craters, respectively) and to keep watch on Mauna Loa’s summit caldera. The cameras are long-wave, temperature-calibrated models protected in custom enclosures, and often positioned on crater rims close to active vents. Images are transmitted back to the observatory in real-time, and numerous Matlab scripts manage the data and provide automated analyses and alarms. The cameras have greatly improved HVO’s observations of surface eruptive activity, which includes highly dynamic lava lake activity at Halema‘uma‘u, major disruptions to Pu‘u ‘Ō‘ō crater and several fissure eruptions.
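
    A toy version of the automated alarms mentioned above (the observatory's actual scripts are described as Matlab; this Python stand-in uses a synthetic temperature frame and arbitrary thresholds):

```python
# Simple threshold-based alarm on one calibrated thermal frame.
import numpy as np

ALARM_THRESHOLD_C = 300.0     # arbitrary example threshold
MIN_HOT_PIXELS = 25           # require a minimum hot area before alarming

def check_frame(temps_c):
    """Return (alarm, max_temp, hot_pixel_count) for one temperature frame in deg C."""
    hot = temps_c > ALARM_THRESHOLD_C
    n_hot = int(hot.sum())
    return n_hot >= MIN_HOT_PIXELS, float(temps_c.max()), n_hot

# Synthetic 240x320 frame: warm background with one hot region
frame = np.full((240, 320), 30.0)
frame[100:110, 150:165] += 400.0
alarm, t_max, n_hot = check_frame(frame)
print(f"alarm={alarm}, max={t_max:.1f} C, hot pixels={n_hot}")
```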

  18. Reliable and Repeatable Characterization of Optical Streak Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Michael Charest Jr., Peter Torres III, Christopher Silbernagel, and Daniel Kalantar

    2008-10-31

    Optical streak cameras are used as primary diagnostics for a wide range of physics and laser experiments at facilities such as the National Ignition Facility (NIF). To meet the strict accuracy requirements needed for these experiments, the systematic nonlinearities of the streak cameras (attributed to nonlinearities in the optical and electrical components that make up the streak camera system) must be characterized. In some cases the characterization information is used as a guide to help determine how experiment data should be taken. In other cases, the characterization data are applied to the raw data images to correct for the nonlinearities. In order to characterize an optical streak camera, a specific set of data is collected, where the response to defined inputs are recorded. A set of analysis software routines has been developed to extract information such as spatial resolution, dynamic range, and temporal resolution from this data set. The routines are highly automated, requiring very little user input and thus provide very reliable and repeatable results that are not subject to interpretation. An emphasis on quality control has been placed on these routines due to the high importance of the camera characterization information.

  19. Reliable and Repeatable Characterization of Optical Streak Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Kalantar, D; Charest, M; Torres III, P; Charest, M

    2008-05-06

    Optical streak cameras are used as primary diagnostics for a wide range of physics and laser experiments at facilities such as the National Ignition Facility (NIF). To meet the strict accuracy requirements needed for these experiments, the systematic nonlinearities of the streak cameras (attributed to nonlinearities in the optical and electrical components that make up the streak camera system) must be characterized. In some cases the characterization information is used as a guide to help determine how experiment data should be taken. In other cases, the characterization data are applied to the raw data images to correct for the nonlinearities. In order to characterize an optical streak camera, a specific set of data is collected, where the response to defined inputs are recorded. A set of analysis software routines has been developed to extract information such as spatial resolution, dynamic range, and temporal resolution from this data set. The routines are highly automated, requiring very little user input and thus provide very reliable and repeatable results that are not subject to interpretation. An emphasis on quality control has been placed on these routines due to the high importance of the camera characterization information.

  20. Reliable and Repeatable Characterization of Optical Streak Cameras

    International Nuclear Information System (INIS)

    Michael R. Charest, Peter Torres III, Christopher Silbernagel

    2008-01-01

    Optical streak cameras are used as primary diagnostics for a wide range of physics and laser performance verification experiments at the National Ignition Facility (NIF). To meet the strict accuracy requirements needed for these experiments, the systematic nonlinearities of the streak cameras (attributed to nonlinearities in the optical and electronic components that make up the streak camera system) must be characterized. In some cases the characterization information is used as a guide to help determine how experiment data should be taken. In other cases the characterization data is used to 'correct' data images, to remove some of the nonlinearities. In order to obtain these camera characterizations, a specific data set is collected where the response to specific known inputs is recorded. A set of analysis software routines has been developed to extract information such as spatial resolution, dynamic range, temporal resolution, etc., from this data set. The routines are highly automated, requiring very little user input and thus provide very reliable and repeatable results that are not subject to interpretation. An emphasis on quality control has been placed on these routines due to the high importance of the camera characterization information

  1. Imaging of radiocesium uptake dynamics in a plant body using a newly developed high-resolution gamma camera for radiocesium

    Energy Technology Data Exchange (ETDEWEB)

    Kawachi, Naoki; Yin, Yong-Gen; Suzui, Nobuo; Ishii, Satomi; Fujimaki, Shu [Radiotracer Imaging Gr., Quantum Beam Science Directorate, Japan Atomic Energy Agency (JAEA), 1233 Watanuki, Takasaki, Gunma 370-1292 (Japan); Yoshihara, Toshihiro [Plant Molecular Biology, Laboratory of Environmental Science, Central Research Institute of Electric Power Industry (CRIEPI), 1646 Abiko, Chiba 270-1194 (Japan); Watabe, Hiroshi [Cyclotron and Radioisotope Center (CYRIC), Tohoku University, 6-3Aoba, Aramaki, Aoba-ku, Sendai, Miyagi, 980-8578 (Japan); Yamamoto, Seiichi [Department of Radiological and Medical Laboratory Sciences, Nagoya University Graduate School of Medicine, 1-1-20 Daiko-Minami, Higashi-ku, Nagoya 461-8673 (Japan)

    2014-07-01

    Vast agricultural and forest areas around the Tokyo Electric Power Company Fukushima Daiichi Nuclear Power Station in Japan were contaminated with radiocesium (Cs-134 and Cs-137) after the accident following the earthquake and tsunami in March 2011. A variety of agricultural studies, such as fertilizer management and plant breeding, have been undertaken intensively for reduction of radiocesium uptake in crops, or, enhancement of uptake in phyto-remediation. In this study, we newly developed a gamma camera specific for plant nutritional research, and performed quantitative analyses on uptake and partitioning of radiocesium in intact plant bodies. In general, gamma camera is a common technology in medical imaging, but it is not applicable to high-energy gamma rays such as emissions from Cs-137 (662 keV). Therefore, we designed our new gamma camera to prevent the penetration and scattering of the high-energy gamma rays. A single-crystal scintillator, Ce-doped Gd{sub 3}Al{sub 2}Ga{sub 3}O{sub 12} (Ce:GAGG), was employed, which has a relatively high density, a large light output, no natural radioactivity and no hygroscopicity. A 44 x 44 matrix of the Ce:GAGG pixels, with dimensions of 0.85 mm x 0.85 mm x 10 mm for each pixel, was coupled to a high-quantum efficiency position sensitive photomultiplier tube. This gamma detector unit was encased in a 20-mm-thick tungsten container with a tungsten pinhole collimator on the front. By using this gamma camera, soybean plants (Glycine max), grown in hydroponic solutions and fed with 1-2 MBq of Cs-137, were imaged for 6.5 days in maximum to investigate and visualize the uptake dynamics into/within the areal part. As a result, radiocesium gradually appeared in the shoot several hours after feeding of Cs-137, and then accumulated intensively in the maturing pods and seeds in a characteristic pattern. Our results also demonstrated that this gamma-camera method enables quantitative evaluation of plant ability to absorb, transport

  2. High mobility ZnO nanowires for terahertz detection applications

    International Nuclear Information System (INIS)

    Liu, Huiqiang; Peng, Rufang; Chu, Shijin; Chu, Sheng

    2014-01-01

    An oxide nanowire material was utilized for terahertz detection purpose. High quality ZnO nanowires were synthesized and field-effect transistors were fabricated. Electrical transport measurements demonstrated the nanowire with good transfer characteristics and fairly high electron mobility. It is shown that ZnO nanowires can be used as building blocks for the realization of terahertz detectors based on a one-dimensional plasmon detection configuration. Clear terahertz wave (∼0.3 THz) induced photovoltages were obtained at room temperature with varying incidence intensities. Further analysis showed that the terahertz photoresponse is closely related to the high electron mobility of the ZnO nanowire sample, which suggests that oxide nanoelectronics may find useful terahertz applications.

  3. A design of a high speed dual spectrometer by single line scan camera

    Science.gov (United States)

    Palawong, Kunakorn; Meemon, Panomsak

    2018-03-01

    A spectrometer that can capture two orthogonal polarization components of a light beam is demanded for polarization-sensitive imaging systems. Here, we describe the design and implementation of a high-speed spectrometer for simultaneous capture of the two orthogonal polarization components, i.e. the vertical and horizontal components, of a light beam. The design consists of a polarization beam splitter, two polarization-maintaining optical fibers, two collimators, a single line-scan camera, a focusing lens, and a reflection blazed grating. The two beam paths were aligned to be symmetrically incident on the blaze side and the reverse blaze side of the reflection grating, respectively. The two diffracted beams pass through the same focusing lens and are focused on the single line-scan sensor of a CMOS camera. The two spectra of orthogonal polarization are each imaged on 1000 pixels. With the proposed setup, the amplitude and shape of the two detected spectra can be controlled by rotating the collimators. The technique for optical alignment of the spectrometer is presented and discussed. The two orthogonal polarization spectra can be captured simultaneously at a speed of 70,000 spectra per second. The high-speed dual spectrometer can simultaneously detect two orthogonal polarizations, which is an important component for the development of polarization-sensitive optical coherence tomography. The performance of the spectrometer has been measured and analyzed.
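
    A minimal sketch of the readout step implied above: splitting a single line-scan frame into the two 1000-pixel polarization spectra and applying a polynomial pixel-to-wavelength calibration. The pixel ranges, calibration lines and wavelengths are hypothetical.

```python
# Split one 2000-pixel line-scan readout into two polarization spectra and
# map pixel index to wavelength with a fitted polynomial.
import numpy as np

N_PIXELS_PER_SPECTRUM = 1000

def split_spectra(line):
    """Return the two polarization spectra contained in one line-scan readout."""
    return (line[:N_PIXELS_PER_SPECTRUM],
            line[N_PIXELS_PER_SPECTRUM:2 * N_PIXELS_PER_SPECTRUM])

# Hypothetical calibration: known spectral lines observed at these pixel positions
calib_pixels = np.array([120.0, 480.0, 830.0])
calib_wavelengths_nm = np.array([830.0, 855.0, 880.0])
poly = np.polyfit(calib_pixels, calib_wavelengths_nm, 2)     # quadratic pixel -> nm map
wavelength_axis = np.polyval(poly, np.arange(N_PIXELS_PER_SPECTRUM))

# Synthetic camera frame (one line of 2000 pixels)
frame = np.random.default_rng(0).normal(100, 5, 2 * N_PIXELS_PER_SPECTRUM)
s_pol, p_pol = split_spectra(frame)
print("wavelength range: %.1f - %.1f nm" % (wavelength_axis[0], wavelength_axis[-1]))
print("spectrum lengths:", len(s_pol), len(p_pol))
```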

  4. High Precision Sunphotometer using Wide Dynamic Range (WDR) Camera Tracking

    Science.gov (United States)

    Liss, J.; Dunagan, S. E.; Johnson, R. R.; Chang, C. S.; LeBlanc, S. E.; Shinozuka, Y.; Redemann, J.; Flynn, C. J.; Segal-Rosenhaimer, M.; Pistone, K.; Kacenelenbogen, M. S.; Fahey, L.

    2016-12-01

    The NASA Ames Sun-photometer-Satellite Group, DOE, PNNL Atmospheric Sciences and Global Change Division, and NASA Goddard's AERONET (AErosol RObotic NETwork) team recently collaborated on the development of a new airborne sunphotometry instrument that provides information on gases and aerosols extending far beyond what can be derived from discrete-channel direct-beam measurements, while preserving or enhancing many of the desirable AATS features (e.g., compactness, versatility, automation, reliability). The enhanced instrument combines the sun-tracking ability of the current 14-Channel NASA Ames AATS-14 with the sky-scanning ability of the ground-based AERONET Sun/sky photometers, while extending both AATS-14 and AERONET capabilities by providing full spectral information from the UV (350 nm) to the SWIR (1,700 nm). Strengths of this measurement approach include many more wavelengths (isolated from gas absorption features) that may be used to characterize aerosols, and detailed (oversampled) measurements of the absorption features of specific gas constituents. The Sky Scanning Sun Tracking Airborne Radiometer (3STAR) replicates the radiometer functionality of the AATS-14 instrument but incorporates modern COTS technologies for all instrument subsystems. A 19-channel radiometer bundle design is borrowed from a commercial water column radiance instrument manufactured by Biospherical Instruments of San Diego, California (ref. Morrow and Hooker) and developed using NASA funds under the Small Business Innovative Research (SBIR) program. The 3STAR design also incorporates the latest in robotic motor technology embodied in rotary actuators from Oriental Motor Corp. having better than 15 arc seconds of positioning accuracy. The control system was designed, tested and simulated using a Hybrid-Dynamical modeling methodology. The design also replaces the classic quadrant detector tracking sensor with a
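
    A rough sketch of what camera-based sun tracking involves: locate the solar image centroid on the detector and convert its offset from the aim point into a pointing error. The plate scale, frame size and threshold are assumptions, not 3STAR parameters.

```python
# Centroid-based pointing-error estimate from one wide-dynamic-range frame.
import numpy as np

PLATE_SCALE_ARCSEC_PER_PIX = 20.0     # assumed
AIM_POINT = np.array([240.0, 320.0])  # desired sun position (row, col), assumed

def pointing_error(frame):
    """Intensity-weighted centroid of bright pixels minus the aim point, in arcsec."""
    thresh = frame.max() * 0.5
    mask = frame >= thresh
    rows, cols = np.nonzero(mask)
    weights = frame[mask]
    centroid = np.array([np.average(rows, weights=weights),
                         np.average(cols, weights=weights)])
    return (centroid - AIM_POINT) * PLATE_SCALE_ARCSEC_PER_PIX

# Synthetic frame with a bright solar disc offset from the aim point
frame = np.full((480, 640), 50.0)
rr, cc = np.ogrid[:480, :640]
frame[(rr - 250) ** 2 + (cc - 335) ** 2 < 15 ** 2] = 60000.0
err_row, err_col = pointing_error(frame)
print("pointing error: %.1f arcsec (rows), %.1f arcsec (columns)" % (err_row, err_col))
```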

  5. TH-CD-201-10: Highly Efficient Synchronized High-Speed Scintillation Camera System for Measuring Proton Range, SOBP and Dose Distributions in a 2D-Plane

    International Nuclear Information System (INIS)

    Goddu, S; Sun, B; Grantham, K; Zhao, T; Zhang, T; Bradley, J; Mutic, S

    2016-01-01

    Purpose: Proton therapy (PT) delivery is complex and extremely dynamic. Therefore, quality assurance testing is vital, but highly time-consuming. We have developed a High-Speed Scintillation-Camera-System (HS-SCS) for simultaneously measuring multiple beam characteristics. Methods: A high-speed camera was placed in a light-tight housing and dual-layer neutron shield. The HS-SCS is synchronized with a synchrocyclotron to capture individual proton-beam-pulses (PBPs) at ∼504 frames/sec. The PBPs from the synchrocyclotron trigger the HS-SCS to open its shutter for a programmed exposure time. Light emissions within a 30×30×5 cm³ plastic scintillator (BC-408) were captured by a CCD camera as individual images revealing dose deposition in a 2D plane with a resolution of 0.7 mm for range and SOBP measurements and 1.67 mm for profiles. The CCD response as well as the signal-to-noise ratio (SNR) was characterized for varying exposure times and gains at different light intensities using a TV-Optoliner system. Software tools were developed to analyze ∼5000 images to extract different beam parameters. Quenching correction factors were established by comparing scintillation Bragg peaks with water-scanned ionization-chamber measurements. Quenching-corrected Bragg peaks were integrated to ascertain the proton-beam range (PBR), the width of the Spread-Out Bragg Peak (MOD) and distal

  6. TH-CD-201-10: Highly Efficient Synchronized High-Speed Scintillation Camera System for Measuring Proton Range, SOBP and Dose Distributions in a 2D-Plane

    Energy Technology Data Exchange (ETDEWEB)

    Goddu, S; Sun, B; Grantham, K; Zhao, T; Zhang, T; Bradley, J; Mutic, S [Washington University School of Medicine, Saint Louis, MO (United States)

    2016-06-15

    Purpose: Proton therapy (PT) delivery is complex and extremely dynamic. Therefore, quality assurance testing is vital, but highly time-consuming. We have developed a High-Speed Scintillation-Camera-System (HS-SCS) for simultaneously measuring multiple beam characteristics. Methods: A high-speed camera was placed in a light-tight housing and dual-layer neutron shield. The HS-SCS is synchronized with a synchrocyclotron to capture individual proton-beam-pulses (PBPs) at ∼504 frames/sec. The PBPs from the synchrocyclotron trigger the HS-SCS to open its shutter for a programmed exposure time. Light emissions within a 30×30×5 cm³ plastic scintillator (BC-408) were captured by a CCD camera as individual images revealing dose deposition in a 2D plane with a resolution of 0.7 mm for range and SOBP measurements and 1.67 mm for profiles. The CCD response as well as the signal-to-noise ratio (SNR) was characterized for varying exposure times and gains at different light intensities using a TV-Optoliner system. Software tools were developed to analyze ∼5000 images to extract different beam parameters. Quenching correction factors were established by comparing scintillation Bragg peaks with water-scanned ionization-chamber measurements. Quenching-corrected Bragg peaks were integrated to ascertain the proton-beam range (PBR), the width of the Spread-Out Bragg Peak (MOD) and distal.

  7. Game of thrown bombs in 3D: using high speed cameras and photogrammetry techniques to reconstruct bomb trajectories at Stromboli (Italy)

    Science.gov (United States)

    Gaudin, D.; Taddeucci, J.; Scarlato, P.; Del Bello, E.; Houghton, B. F.; Orr, T. R.; Andronico, D.; Kueppers, U.

    2015-12-01

    Large juvenile bombs and lithic clasts, produced and ejected during explosive volcanic eruptions, follow ballistic trajectories. Of particular interest are: 1) the determination of ejection velocity and launch angle, which give insights into shallow conduit conditions and geometry; 2) particle trajectories, with an eye on trajectory evolution caused by collisions between bombs, as well as the interaction between bombs and ash/gas plumes; and 3) the computation of the final emplacement of bomb-sized clasts, which is important for hazard assessment and risk management. Ground-based imagery from a single camera only allows the reconstruction of bomb trajectories in a plane perpendicular to the line of sight, which may lead to underestimation of bomb velocities and does not allow the directionality of the ejections to be studied. To overcome this limitation, we adapted photogrammetry techniques to reconstruct 3D bomb trajectories from two or three synchronized high-speed video cameras. In particular, we modified existing algorithms to consider the errors that may arise from the very high velocity of the particles and the impossibility of measuring tie points close to the scene. Our method was tested during two field campaigns at Stromboli. In 2014, two high-speed cameras with a 500 Hz frame rate and a ~2 cm resolution were set up ~350 m from the crater, 10° apart and synchronized. The experiment was repeated with similar parameters in 2015, but using three high-speed cameras in order to significantly reduce uncertainties and allow their estimation. Trajectory analyses for tens of bombs at various times allowed for the identification of shifts in the mean directivity and dispersal angle of the jets during the explosions. These time evolutions are also visible on the permanent video-camera monitoring system, demonstrating the applicability of our method to all kinds of explosive volcanoes.
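
    A hedged sketch of the core photogrammetric step, triangulating a matched bomb position from two synchronized cameras with OpenCV. The projection matrices, baseline and pixel coordinates below are placeholders, not values from the Stromboli campaigns.

```python
# Two-view triangulation of a single matched point with OpenCV.
import numpy as np
import cv2

# Hypothetical shared intrinsics and two 3x4 projection matrices
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # camera 1 at the origin
R2 = cv2.Rodrigues(np.array([0.0, np.deg2rad(10.0), 0.0]))[0]  # camera 2 rotated ~10 deg
t2 = np.array([[-50.0], [0.0], [0.0]])                         # ~50 m baseline, assumed
P2 = K @ np.hstack([R2, t2])

# Matched pixel coordinates of the same bomb in one synchronized frame (2 x N arrays)
pts1 = np.array([[1010.0], [540.0]])
pts2 = np.array([[1021.0], [540.0]])

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)    # homogeneous 4 x N result
X = (X_h[:3] / X_h[3]).ravel()                     # Euclidean 3D coordinates
print("triangulated bomb position (camera-1 frame, m):", np.round(X, 1))
```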

  8. A telescopic cinema sound camera for observing high altitude aerospace vehicles

    Science.gov (United States)

    Slater, Dan

    2014-09-01

    Rockets and other high altitude aerospace vehicles produce interesting visual and aural phenomena that can be remotely observed from long distances. This paper describes a compact, passive and covert remote sensing system that can produce high resolution sound movies at >100 km viewing distances. The telescopic high resolution camera is capable of resolving and quantifying space launch vehicle dynamics including plume formation, staging events and payload fairing jettison. Flight vehicles produce sounds and vibrations that modulate the local electromagnetic environment. These audio frequency modulations can be remotely sensed by passive optical and radio wave detectors. Acousto-optic sensing methods were primarily used, but an experimental radioacoustic sensor using passive micro-Doppler radar techniques was also tested. The synchronized combination of high resolution flight vehicle imagery with the associated vehicle sounds produces a cinema-like experience that is useful in both an aerospace engineering and a Hollywood film production context. Examples of visual, aural and radar observations of the first SpaceX Falcon 9 v1.1 rocket launch are shown and discussed.

  9. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, Ul; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. Supports are swingably mounted upon a column one above the other

  10. A NEW AUTOMATIC SYSTEM CALIBRATION OF MULTI-CAMERAS AND LIDAR SENSORS

    Directory of Open Access Journals (Sweden)

    M. Hassanein

    2016-06-01

    Full Text Available In the last few years, multi-camera and LIDAR systems have drawn the attention of the mapping community. They have been deployed on different mobile mapping platforms. The different uses of these platforms, especially UAVs, have offered new applications and developments which require fast and accurate results. The successful calibration of such systems is a key factor in achieving accurate results and in the successful processing of the system measurements, especially with the different types of measurements provided by the LIDAR and the cameras. The system calibration aims to estimate the geometric relationships between the different system components. A number of applications require the systems to be ready for operation in a short time, especially for disaster monitoring applications. Also, many of the present system calibration techniques are constrained by the need for special lab arrangements for the calibration procedures. In this paper, a new technique for the calibration of integrated LIDAR and multi-camera systems is presented. The proposed technique offers a calibration solution that overcomes the need for special labs for standard calibration procedures. In the proposed technique, 3D reconstruction of automatically detected and matched image points is used to generate a sparse image-driven point cloud; then, a registration between the LIDAR-generated 3D point cloud and the image-driven 3D point cloud takes place to estimate the geometric relationships between the cameras and the LIDAR. In the presented technique a simple 3D artificial target is used to simplify the lab requirements for the calibration procedure. The target is composed of three intersecting plates. This target geometry was chosen to ensure enough conditions for the convergence of registration between the constructed 3D point clouds from the two systems. The achieved results of the proposed approach prove its ability to provide an adequate and fully automated
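
    The registration step described above can be illustrated with a minimal rigid alignment of two corresponded point clouds (Kabsch/SVD). A full LIDAR-camera calibration would typically iterate this inside ICP on unmatched clouds; the points here are synthetic.

```python
# Rigid alignment (rotation + translation) of corresponded 3D point clouds via SVD.
import numpy as np

def rigid_align(src, dst):
    """Return R (3x3) and t (3,) such that R @ src_i + t ~= dst_i in the least-squares sense."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Synthetic example: rotate/translate a small cloud and recover the transform
rng = np.random.default_rng(0)
src = rng.uniform(-1, 1, (100, 3))
angle = np.deg2rad(25.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])

R_est, t_est = rigid_align(src, dst)
print("max rotation error:", np.max(np.abs(R_est - R_true)))
print("estimated translation:", np.round(t_est, 3))
```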

  11. High-skilled labour mobility in Europe before and after the 2004 enlargement.

    Science.gov (United States)

    Petersen, Alexander M; Puliga, Michelangelo

    2017-03-01

    The extent to which international high-skilled mobility channels are forming is a question of great importance in an increasingly global knowledge-based economy. One factor facilitating the growth of high-skilled labour markets is the standardization of certifiable degrees meriting international recognition. Within this context, we analysed an extensive high-skilled mobility database comprising roughly 382 000 individuals from five broad profession groups (Medical, Education, Technical, Science & Engineering and Business & Legal) over the period 1997-2014, using the 13-country expansion of the European Union (EU) to provide insight into labour market integration. We compare the periods before and after the 2004 enlargement, showing the emergence of a new east-west migration channel between the 13 mostly eastern EU entrants (E) and the rest of the western European countries (W). Indeed, we observe a net directional loss of human capital from E → W, representing 29% of the total mobility after 2004. Nevertheless, the counter-migration from W → E is 7% of the total mobility over the same period, signalling the emergence of brain circulation within the EU. Our analysis of the country-country mobility networks and the country-profession bipartite networks provides timely quantitative evidence for the convergent integration of the EU, and highlights the central role of the UK and Germany as high-skilled labour hubs. We conclude with two data-driven models to explore the structural dynamics of the mobility networks. First, we develop a reconfiguration model to explore the potential ramifications of Brexit and the degree to which redirection of high-skilled labourers away from the UK may impact the integration of the rest of the European mobility network. Second, we use a panel regression model to explain empirical high-skilled mobility rates in terms of various economic 'push-pull' factors, the results of which show that government expenditure on education, per capita

  12. 3DS-colorimeter based on a mobile phone camera for industrial applications

    Science.gov (United States)

    Miettinen, Jari; Martinkauppi, J. Birgitta; Suopajärvi, Pekka

    2013-02-01

    Colour gives an essential finishing touch to many products. Consumers consider it an important factor, for example, when selecting doors, furniture, parquet and coated metal products. Currently, colour evaluation is often carried out by looking at the product. Since people's memory for an exact colour is poor, this method often produces unsatisfactory results in industrial quality control. In this paper, we discuss how to solve this problem by the use of a colour measurement technology for mobile phones equipped with a suitable accessory. Mobile phones provide a suitable monitor platform even for laymen, as people are increasingly using their mobile devices for purposes of entertainment, communication and business, making them a familiar device to use. Our 3DS-colorimeter is a new, handheld, low-cost consumer/industrial-level prototype combining both a colorimeter feature and a 3D surface measurement feature. In this paper, we describe its colorimeter features briefly and demonstrate its performance in measurement repeatability and colorimetric accuracy. As an application example, we show its usefulness for monitoring the colour appearance of painted doors. This study indicates that the 3DS-colorimeter is applicable to industrial quality control.
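
    The basic colorimetric computation behind such a device can be sketched as converting two measured sRGB colours to CIELAB and reporting the CIE76 colour difference. The sample colours below are placeholders, and real colorimetry additionally requires careful device calibration.

```python
# sRGB -> CIELAB conversion (D65 white point) and CIE76 colour difference.
import numpy as np

M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])
WHITE_D65 = np.array([0.95047, 1.0, 1.08883])

def srgb_to_lab(rgb):
    """rgb components in [0, 1] -> CIELAB."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = M_SRGB_TO_XYZ @ lin
    r = xyz / WHITE_D65
    eps = (6.0 / 29.0) ** 3
    f = np.where(r > eps, np.cbrt(r), r / (3 * (6.0 / 29.0) ** 2) + 4.0 / 29.0)
    L = 116.0 * f[1] - 16.0
    a = 500.0 * (f[0] - f[1])
    b = 200.0 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e76(rgb1, rgb2):
    """Euclidean distance in CIELAB (CIE76 Delta E)."""
    return float(np.linalg.norm(srgb_to_lab(rgb1) - srgb_to_lab(rgb2)))

# Example: reference door colour vs. a slightly different measured colour
print("Delta E: %.2f" % delta_e76([0.60, 0.45, 0.30], [0.62, 0.44, 0.31]))
```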

  13. 76 FR 17965 - In the Matter of Certain Mobile Telephones and Wireless Communication Devices Featuring Digital...

    Science.gov (United States)

    2011-03-31

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-703] In the Matter of Certain Mobile Telephones and Wireless Communication Devices Featuring Digital Cameras, and Components Thereof Notice of... for importation, and the sale within the United States after importation of certain mobile telephones...

  14. 75 FR 44282 - In the Matter of Certain Mobile Telephones and Wireless Communication Devices Featuring Digital...

    Science.gov (United States)

    2010-07-28

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-703] In the Matter of Certain Mobile Telephones and Wireless Communication Devices Featuring Digital Cameras, and Components Thereof; Notice of... for importation, and the sale within the United States after importation of certain mobile telephones...

  15. Identification of Mobile Phone and Analysis of Original Version of Videos through a Delay Time Analysis of Sound Signals from Mobile Phone Videos.

    Science.gov (United States)

    Hwang, Min Gu; Har, Dong Hwan

    2017-11-01

    This study designs a method of identifying the camera model used to record videos distributed through mobile phones and of determining the original version of a mobile phone video for use as legal evidence. For this analysis, an experiment was conducted to find the unique characteristics of each mobile phone. The videos recorded by mobile phones were analyzed to establish the delay time of sound signals, and the differences between the delay times of sound signals for different mobile phones were traced by classifying their characteristics. Furthermore, the sound input signals for mobile phone videos used as legal evidence were analyzed to ascertain whether they have the unique characteristics of the original version. The objective of this study was to find a method for validating mobile phone videos as legal evidence through differences in the delay times of their sound input signals. © 2017 American Academy of Forensic Sciences.
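
    A minimal sketch of one generic way to estimate the delay between two sound tracks by cross-correlation; the sampling rate and test signal below are assumptions, and the authors' actual forensic procedure is not reproduced here.

    import numpy as np

    def delay_samples(reference, recording):
        """Delay (in samples) of `recording` relative to `reference`, from the
        peak of their full cross-correlation."""
        ref = np.asarray(reference, float) - np.mean(reference)
        rec = np.asarray(recording, float) - np.mean(recording)
        xcorr = np.correlate(rec, ref, mode="full")
        return int(np.argmax(xcorr) - (len(ref) - 1))

    # Hypothetical example: a noise-like 48 kHz track delayed by 120 samples (2.5 ms).
    fs = 48_000
    ref = np.random.default_rng(0).standard_normal(fs)
    rec = np.concatenate([np.zeros(120), ref])[:fs]
    lag = delay_samples(ref, rec)
    print(f"estimated delay: {lag} samples = {1e3 * lag / fs:.2f} ms")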

  16. High mobility and quantum well transistors design and TCAD simulation

    CERN Document Server

    Hellings, Geert

    2013-01-01

    For many decades, the semiconductor industry has miniaturized transistors, delivering increased computing power to consumers at decreased cost. However, mere transistor downsizing no longer provides the same improvements. One interesting option to further improve transistor characteristics is to use high mobility materials such as germanium and III-V materials. However, transistors have to be redesigned in order to fully benefit from these alternative materials. High Mobility and Quantum Well Transistors: Design and TCAD Simulation investigates planar bulk Germanium pFET technology in chapters 2-4, focusing on both the fabrication of such a technology and on the process and electrical TCAD simulation. Furthermore, this book shows that Quantum Well-based transistors can leverage the benefits of these alternative materials, since they confine the charge carriers to the high-mobility material using a heterostructure. The design and fabrication of one particular transistor structure - the SiGe Implant-Free Qu...

  17. High mobility group box 1 levels are not associated with subclinical carotid atherosclerosis in patients with granulomatosis with polyangiitis but are reduced by glucocorticoids and statins

    NARCIS (Netherlands)

    Silva de Souza, Alexandre; De Leeuw, Karina; Westra, Johanna; Smit, Andries J.; Van Der Graaf, Anne Marijn; Nienhuis, Hans L.A.; Bijzet, Johan; Limburg, Pieter C.; Stegeman, Coen A.; Bijl, Marc; Kallenberg, Cees G.M.

    2012-01-01

    Background/Purpose: High mobility group box 1 (HMGB1) is a non-histone DNA binding protein that is passively released by dying cells or actively secreted by immunocompetent cells and the receptor for advanced glycation end-products (RAGE) is one of its receptors. Higher levels of HMGB1 have been

  18. Beam size measurement at high radiation levels

    International Nuclear Information System (INIS)

    Decker, F.J.

    1991-05-01

    At the end of the Stanford Linear Accelerator the high energy electron and positron beams are quite small. Beam sizes below 100 μm (σ) as well as the transverse distribution, especially tails, have to be determined. Fluorescent screens observed by TV cameras provide a quick two-dimensional picture, which can be analyzed by digitization. For running the SLAC Linear Collider (SLC) with low backgrounds at the interaction point, collimators are installed at the end of the linac. This causes a high radiation level, so that nearby cameras die within two weeks and so-called "radiation hard" cameras within two months. Therefore an optical system has been built which guides a 5 mm wide picture with a resolution of about 30 μm over a distance of 12 m to an accessible region. The overall resolution is limited by the screen thickness, optical diffraction and the line resolution of the camera. Vibration, chromatic effects or air fluctuations play a much less important role. The pictures are colored to get fast information about the beam current, size and tails. Besides the emittance, more information about the tail size and betatron phase is obtained by using four screens. This will help to develop tail compensation schemes to decrease the emittance growth in the linac at high currents. 4 refs., 2 figs
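
    A minimal sketch of how an rms beam size (σ) could be extracted from a digitized, background-subtracted screen profile; the pixel size and the Gaussian test profile below are hypothetical, not the SLC setup.

    import numpy as np

    def rms_beam_size(profile, pixel_size_um, background=0.0):
        """RMS beam size (sigma, in um) from a 1-D projection of a screen image."""
        intensity = np.clip(np.asarray(profile, float) - background, 0.0, None)
        x = np.arange(intensity.size) * pixel_size_um
        mean = (x * intensity).sum() / intensity.sum()
        return float(np.sqrt(((x - mean) ** 2 * intensity).sum() / intensity.sum()))

    # Hypothetical profile: a Gaussian with sigma = 80 um sampled on 10 um pixels.
    x = np.arange(200) * 10.0
    profile = np.exp(-0.5 * ((x - 1000.0) / 80.0) ** 2)
    print(f"sigma ~= {rms_beam_size(profile, 10.0):.1f} um")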

  19. Real-time vehicle matching for multi-camera tunnel surveillance

    Science.gov (United States)

    Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried

    2011-03-01

    Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras which observe dozens of vehicles each, computational efficiency is essential for real-time performance. In this paper, we propose a low-complexity, yet highly accurate method for vehicle matching using vehicle signatures composed of Radon-transform-like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm in the camera software itself and transmitted to the central server or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
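
    A minimal sketch of projection-profile signatures and a correlation-style match score, in the spirit of the scan-line signatures described above; the patch sizes and normalization are illustrative assumptions, not the authors' exact implementation.

    import numpy as np

    def signature(image):
        """Concatenated horizontal and vertical projection profiles of a grayscale
        patch (a simple scan-line computation), zero-mean and unit-normalized."""
        img = np.asarray(image, float)
        parts = []
        for axis in (1, 0):
            profile = img.sum(axis=axis)
            profile -= profile.mean()
            parts.append(profile / (np.linalg.norm(profile) + 1e-12))
        return np.concatenate(parts)

    def match_score(sig_a, sig_b):
        """Correlation-style similarity between two equal-length signatures (max 1.0)."""
        return float(np.dot(sig_a, sig_b)) / 2.0

    rng = np.random.default_rng(1)
    cam1 = rng.random((64, 96))                                           # vehicle seen by camera 1
    cam2 = np.clip(cam1 + 0.05 * rng.standard_normal((64, 96)), 0, None)  # same vehicle, camera 2
    other = rng.random((64, 96))                                          # a different vehicle
    print(match_score(signature(cam1), signature(cam2)))   # close to 1
    print(match_score(signature(cam1), signature(other)))  # close to 0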

  20. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates.

    Science.gov (United States)

    Hobbs, Michael T; Brehme, Cheryl S

    2017-01-01

    Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.

  2. Learning High-Order Filters for Efficient Blind Deconvolution of Document Photographs

    KAUST Repository

    Xiao, Lei

    2016-09-16

    Photographs of text documents taken by hand-held cameras can be easily degraded by camera motion during exposure. In this paper, we propose a new method for blind deconvolution of document images. Observing that document images are usually dominated by small-scale high-order structures, we propose to learn a multi-scale, interleaved cascade of shrinkage fields model, which contains a series of high-order filters to facilitate joint recovery of blur kernel and latent image. With extensive experiments, we show that our method produces high quality results and is highly efficient at the same time, making it a practical choice for deblurring high resolution text images captured by modern mobile devices. © Springer International Publishing AG 2016.
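
    For orientation only, the sketch below shows a classical non-blind Wiener deconvolution with a known blur kernel; it is a much simpler baseline than the learned multi-scale shrinkage-field cascade proposed in the record, and the image, kernel and noise-to-signal ratio are synthetic assumptions.

    import numpy as np

    def wiener_deconvolve(blurred, kernel, nsr=1e-2):
        """Classical Wiener deconvolution; `nsr` is an assumed noise-to-signal ratio."""
        H = np.fft.fft2(kernel, s=blurred.shape)
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)
        return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))

    # Hypothetical example: blur a sparse "text-like" image with a short
    # horizontal motion kernel (circular convolution), then restore it.
    rng = np.random.default_rng(0)
    image = (rng.random((128, 128)) > 0.9).astype(float)
    kernel = np.zeros((128, 128))
    kernel[0, :7] = 1.0 / 7.0
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))
    restored = wiener_deconvolve(blurred, kernel)
    print(f"rms error blurred:  {np.sqrt(np.mean((blurred - image) ** 2)):.3f}")
    print(f"rms error restored: {np.sqrt(np.mean((restored - image) ** 2)):.3f}")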

  3. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Some time ago, Toshiba began manufacturing black-and-white radiation-resistant camera tubes employing nonbrowning face-plate glass for ITV cameras used in nuclear power plants. Now, in response to the increasing demand in the nuclear power field, the company is working on the development of radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. Presented herein are the results of experiments on the characteristics of materials for single color-camera tubes and the prospects for commercialization of the tubes. (author)

  4. A focal plane camera for celestial XUV sources

    International Nuclear Information System (INIS)

    Huizenga, H.

    1980-01-01

    This thesis describes the development and performance of a new type of X-ray camera for the 2-250 Å wavelength range (XUV). The camera features high position resolution (FWHM approximately 0.2 mm at 2 Å) and a limiting sensitivity of the order of 10⁻¹³ erg cm⁻² s⁻¹ in a one year mission. (Auth.)

  5. Video-rate or high-precision: a flexible range imaging camera

    Science.gov (United States)

    Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John P.; Jongenelen, Adrian P. P.

    2008-02-01

    A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high-resolution (512-by-512 pixel) and high-precision (0.4 mm best case) configuration, but with a slow measurement rate (one measurement every 10 s). Although this high precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and length of acquisition are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach and the use of more than four samples per beat cycle provide better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.
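
    A minimal sketch of the generic N-samples-per-beat-cycle phase estimation used in amplitude-modulated range imaging, mapping phase to distance as d = c·φ/(4π·f_mod); the modulation frequency, sample count and waveform below are assumptions, not the parameters of the system described above.

    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def range_from_samples(samples, mod_freq_hz):
        """Phase of the beat waveform from N equally spaced samples per cycle,
        converted to a range estimate."""
        s = np.asarray(samples, float)
        k = np.arange(s.size)
        i_part = np.sum(s * np.sin(2 * np.pi * k / s.size))
        q_part = np.sum(s * np.cos(2 * np.pi * k / s.size))
        phase = np.arctan2(i_part, q_part) % (2 * np.pi)
        return C * phase / (4 * np.pi * mod_freq_hz)

    # Hypothetical example: 10 samples per beat cycle, 40 MHz modulation,
    # a target at 2.5 m (true phase = 4*pi*f_mod*d/c).
    f_mod, d_true = 40e6, 2.5
    k = np.arange(10)
    beat = 1.0 + 0.8 * np.cos(2 * np.pi * k / 10 - 4 * np.pi * f_mod * d_true / C)
    print(f"recovered range: {range_from_samples(beat, f_mod):.3f} m")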

  6. Extended spectrum SWIR camera with user-accessible Dewar

    Science.gov (United States)

    Benapfl, Brendan; Miller, John Lester; Vemuri, Hari; Grein, Christoph; Sivananthan, Siva

    2017-02-01

    Episensors has developed a series of extended short wavelength infrared (eSWIR) cameras based on high-Cd concentration Hg1-xCdxTe absorbers. The cameras have a bandpass extending to 3 microns cutoff wavelength, opening new applications relative to traditional InGaAs-based cameras. Applications and uses are discussed and examples given. A liquid nitrogen pour-filled version was initially developed. This was followed by a compact Stirling-cooled version with detectors operating at 200 K. Each camera has unique sensitivity and performance characteristics. The cameras' size, weight and power specifications are presented along with images captured with band pass filters and eSWIR sources to demonstrate spectral response beyond 1.7 microns. The soft seal Dewars of the cameras are designed for accessibility, and can be opened and modified in a standard laboratory environment. This modular approach allows user flexibility for swapping internal components such as cold filters and cold stops. The core electronics of the Stirling-cooled camera are based on a single commercial field programmable gate array (FPGA) that also performs on-board non-uniformity corrections, bad pixel replacement, and directly drives any standard HDMI display.

  7. A novel simultaneous streak and framing camera without principle errors

    Science.gov (United States)

    Jingzhen, L.; Fengshan, S.; Ningwen, L.; Xiangdong, G.; Bin, H.; Qingyang, W.; Hongyi, C.; Yi, C.; Xiaowei, L.

    2018-02-01

    A novel simultaneous streak and framing camera with continuous access has been developed; such complete information is far more important for the exact interpretation and precise evaluation of many detonation events and shockwave phenomena. The camera, with a maximum imaging frequency of 2 × 10⁶ fps and a maximum scanning velocity of 16.3 mm/μs, has fine imaging properties: an eigen resolution of over 40 lp/mm in the temporal direction and over 60 lp/mm in the spatial direction with zero framing-frequency principle error for framing records, and a maximum time resolving power of 8 ns with a scanning velocity nonuniformity of 0.136%~-0.277% for streak records. The test data have verified the performance of the camera quantitatively. This camera, which simultaneously acquires frames and streak records with parallax-free and identical time bases, is characterized by a plane optical system at oblique incidence (different from a spatial system), an innovative camera obscura without principle errors, and a high-velocity motor-driven beryllium-like rotating mirror made of high-strength aluminum alloy with a cellular lateral structure. Experiments demonstrate that the camera is very useful and reliable for taking high quality pictures of detonation events.

  8. Contention aware mobility prediction routing for intermittently connected mobile networks

    KAUST Repository

    Elwhishi, Ahmed; Ho, Pin-Han; Shihada, Basem

    2013-01-01

    This paper introduces a novel multi-copy routing protocol, called predict and forward (PF), for delay tolerant networks, which aims to explore the possibility of using mobile nodes as message carriers for end-to-end delivery of the messages. With PF, the message forwarding decision is made by manipulating the probability distribution of future inter-contact and contact durations based on the network status, including wireless link condition and nodal buffer availability. In particular, PF is based on the observations that the node mobility behavior is semi-deterministic and could be predicted once there is sufficient mobility history information. We implemented the proposed protocol and compared it with a number of existing encounter-based routing approaches in terms of delivery delay, delivery ratio, and the number of transmissions required for message delivery. The simulation results show that PF outperforms all the counterpart multi-copy encounter-based routing protocols considered in the study.
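
    A minimal sketch of an encounter-history-based forwarding decision of the general kind described above, assuming exponentially distributed inter-contact times; this is an illustrative simplification, not the PF protocol's actual utility computation.

    import math

    def contact_probability(rate_per_hour, ttl_hours):
        """P(meeting the destination within the remaining TTL), assuming
        exponentially distributed inter-contact times at the observed rate."""
        return 1.0 - math.exp(-rate_per_hour * ttl_hours)

    def should_forward(my_rate, peer_rate, ttl_hours, peer_buffer_free, link_ok):
        """Hand a copy to the encountered peer only if it is more likely to meet
        the destination in time and the link/buffer state allows the transfer."""
        if not (peer_buffer_free and link_ok):
            return False
        return contact_probability(peer_rate, ttl_hours) > contact_probability(my_rate, ttl_hours)

    # Hypothetical example: the peer meets the destination 0.5 times/hour on
    # average versus our 0.1 times/hour, with 6 hours of TTL remaining.
    print(should_forward(my_rate=0.1, peer_rate=0.5, ttl_hours=6,
                         peer_buffer_free=True, link_ok=True))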

  9. Contention aware mobility prediction routing for intermittently connected mobile networks

    KAUST Repository

    Elwhishi, Ahmed

    2013-04-26

    This paper introduces a novel multi-copy routing protocol, called predict and forward (PF), for delay tolerant networks, which aims to explore the possibility of using mobile nodes as message carriers for end-to-end delivery of the messages. With PF, the message forwarding decision is made by manipulating the probability distribution of future inter-contact and contact durations based on the network status, including wireless link condition and nodal buffer availability. In particular, PF is based on the observations that the node mobility behavior is semi-deterministic and could be predicted once there is sufficient mobility history information. We implemented the proposed protocol and compared it with a number of existing encounter-based routing approaches in terms of delivery delay, delivery ratio, and the number of transmissions required for message delivery. The simulation results show that PF outperforms all the counterpart multi-copy encounter-based routing protocols considered in the study.

  10. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
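
    A minimal sketch of how an inter-camera quaternion series can be turned into small-angle attitude differences (in arcsec) and an auto-covariance, in the spirit of the analysis described above; the helper names and the sampling figures in the comments are assumptions.

    import numpy as np

    def quat_mult(p, q):
        """Hamilton product of quaternions stored as (w, x, y, z)."""
        pw, px, py, pz = p
        qw, qx, qy, qz = q
        return np.array([pw*qw - px*qx - py*qy - pz*qz,
                         pw*qx + px*qw + py*qz - pz*qy,
                         pw*qy - px*qz + py*qw + pz*qx,
                         pw*qz + px*qy - py*qx + pz*qw])

    def inter_camera_angles_arcsec(q_head1, q_head2):
        """Per-epoch small-angle differences from the relative quaternion q1* x q2."""
        conj = lambda q: np.array([q[0], -q[1], -q[2], -q[3]])
        rel = np.array([quat_mult(conj(a), b) for a, b in zip(q_head1, q_head2)])
        rel = np.where(rel[:, :1] < 0, -rel, rel)   # resolve the +/-q sign ambiguity
        return 2.0 * rel[:, 1:] * 206_264.8         # small-angle: vector part ~ theta/2

    def autocovariance(x, max_lag):
        x = np.asarray(x, float) - np.mean(x)
        return np.array([np.mean(x[:x.size - k] * x[k:]) for k in range(max_lag + 1)])

    # Hypothetical use, with q1 and q2 as (N, 4) arrays of star camera quaternions:
    #   angles = inter_camera_angles_arcsec(q1, q2)
    #   acov = autocovariance(angles[:, 0], max_lag=5400)  # roughly one revolution at 1 Hz sampling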

  11. Ultimate response time of high electron mobility transistors

    International Nuclear Information System (INIS)

    Rudin, Sergey; Rupper, Greg; Shur, Michael

    2015-01-01

    We present theoretical studies of the response time of the two-dimensional gated electron gas to femtosecond pulses. Our hydrodynamic simulations show that the device response to a short pulse or a step-function signal is either smooth or oscillating time-decay at low and high mobility, μ, values, respectively. At small gate voltage swings, U_0 = U_g − U_th, where U_g is the gate voltage and U_th is the threshold voltage, such that μU_0/L < v_s, where L is the channel length and v_s is the effective electron saturation velocity, the decay time in the low mobility samples is on the order of L²/(μU_0), in agreement with the analytical drift model. However, the decay is preceded by a delay time on the order of L/s, where s is the plasma wave velocity. This delay is the ballistic transport signature in collision-dominated devices, which becomes important during very short time periods. In the high mobility devices, the period of the decaying oscillations is on the order of the plasma wave velocity transit time. Our analysis shows that short channel field effect transistors operating in the plasmonic regime can meet the requirements for applications as terahertz detectors, mixers, delay lines, and phase shifters in ultra high-speed wireless communication circuits.
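
    A quick order-of-magnitude evaluation of the two time scales quoted above, L²/(μU_0) and L/s, using purely illustrative device parameters (not values from the paper).

    # Illustrative parameters (assumed): a 100 nm channel, mobility of
    # 1000 cm^2/(V s), a 0.2 V gate swing and a 1e6 m/s plasma wave velocity.
    L = 100e-9    # channel length, m
    mu = 0.1      # mobility, m^2/(V s)
    U0 = 0.2      # gate voltage swing U_g - U_th, V
    s = 1.0e6     # plasma wave velocity, m/s

    t_decay = L**2 / (mu * U0)   # drift-limited decay time (low-mobility regime)
    t_delay = L / s              # ballistic/plasma-wave delay
    print(f"decay time ~ {t_decay * 1e12:.2f} ps, delay ~ {t_delay * 1e12:.2f} ps")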

  12. Application of infrared camera to bituminous concrete pavements: measuring vehicle

    Science.gov (United States)

    Janků, Michal; Stryk, Josef

    2017-09-01

    Infrared thermography (IR) has been used for decades in certain fields. However, the technological level of measuring devices has not been sufficient for some applications. Over recent years, good quality thermal cameras with high resolution and very high thermal sensitivity have started to appear on the market. This development in measuring technology has allowed the use of infrared thermography in new fields and by a larger number of users. This article describes research in progress at the Transport Research Centre with a focus on the use of infrared thermography for diagnostics of bituminous road pavements. A measuring vehicle, equipped with a thermal camera, digital camera and GPS sensor, was designed for the diagnostics of pavements. New, highly sensitive thermal cameras allow very small temperature differences to be measured from the moving vehicle. This study shows the potential of high-speed inspection without lane closures when using IR thermography.

  13. Mechanical deployment system on aries an autonomous mobile robot

    International Nuclear Information System (INIS)

    Rocheleau, D.N.

    1995-01-01

    ARIES (Autonomous Robotic Inspection Experimental System) is under development for the Department of Energy (DOE) to survey and inspect drums containing low-level radioactive waste stored in warehouses at DOE facilities. This paper focuses on the mechanical deployment system, referred to as the camera positioning system (CPS), used in the project. The CPS is used for positioning four identical but separate camera packages consisting of vision cameras and other required sensors such as bar-code readers and light stripe projectors. The CPS is attached to the top of a mobile robot and consists of two mechanisms. The first is a lift mechanism composed of 5 interlocking rail elements which starts from a retracted position and extends upward to simultaneously position 3 separate camera packages to inspect the top three drums of a column of four drums. The second is a special-case Grashof four-bar mechanism (a parallelogram linkage) which is used for positioning a camera package on drums on the floor. Both mechanisms are the subject of this paper, and the lift mechanism is discussed in detail.
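
    A minimal sketch of the Grashof classification relevant to the special-case (parallelogram) four-bar mentioned above; the link lengths are hypothetical.

    def grashof_class(links):
        """Classify a four-bar linkage from its four link lengths using Grashof's
        condition s + l <= p + q (s and l are the shortest and longest links)."""
        s, p, q, l = sorted(links)
        if abs((s + l) - (p + q)) < 1e-9:
            return "special-case Grashof (change point), e.g. a parallelogram linkage"
        if s + l < p + q:
            return "Grashof (at least one link can rotate fully)"
        return "non-Grashof (no link can rotate fully)"

    # Hypothetical parallelogram used to keep a camera package level:
    # opposite links equal, so s + l equals p + q.
    print(grashof_class([0.30, 0.12, 0.30, 0.12]))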

  14. Event detection intelligent camera development

    International Nuclear Information System (INIS)

    Szappanos, A.; Kocsis, G.; Molnar, A.; Sarkozi, J.; Zoletnik, S.

    2008-01-01

    A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed for the video diagnostics of the W-7X stellarator, which consists of 10 distinct and standalone measurement channels, each holding a camera. Different operation modes will be implemented for both continuous and triggered readout. Hardware-level trigger signals will be generated from real-time image processing algorithms optimized for digital signal processor (DSP) and field programmable gate array (FPGA) architectures. At full resolution a camera sends 12-bit sampled 1280 x 1024 pixel frames at 444 fps, which amounts to 1.43 terabytes over half an hour. Analysing such a huge amount of data is time consuming and has a high computational complexity. We plan to overcome this problem with EDICAM's preprocessing concepts. The EDICAM camera system integrates all the advantages of CMOS sensor chip technology and fast network connections. EDICAM is built up from three different modules with two interfaces. A sensor module (SM) with reduced hardware and functional elements achieves a small, compact size and robust operation in a harsh environment. An image processing and control unit (IPCU) module handles all user-predefined events and runs image processing algorithms to generate trigger signals. Finally, a 10 Gigabit Ethernet compatible image readout card functions as the network interface for the PC. In this contribution all the concepts of EDICAM and the functions of the distinct modules are described.
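
    A quick arithmetic check of the data rate quoted above (the 1.43 figure corresponds to binary terabytes, i.e. TiB).

    # 12-bit samples, 1280 x 1024 pixels, 444 frames per second, 30 minutes.
    bits_per_pixel, width, height, fps = 12, 1280, 1024, 444
    seconds = 30 * 60

    bytes_per_frame = width * height * bits_per_pixel / 8
    total_bytes = bytes_per_frame * fps * seconds
    print(f"{bytes_per_frame * fps / 1e9:.2f} GB/s, "
          f"{total_bytes / 2**40:.2f} TiB (~{total_bytes / 1e12:.2f} TB) per half hour")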

  15. Investigation of Doppler Effects on high mobility OFDM-MIMO systems with the support of High Altitude Platforms (HAPs)

    Science.gov (United States)

    Mohammed, H. A.; Sibley, M. J. N.; Mather, P. J.

    2012-05-01

    The merging of Orthogonal Frequency Division Multiplexing (OFDM) with Multiple-input multiple-output (MIMO) is a promising mobile air interface solution for next generation wireless local area networks (WLANs) and 4G mobile cellular wireless systems. This paper details the design of a highly robust and efficient OFDM-MIMO system to support permanent accessibility and higher data rates to users moving at high speeds, such as users travelling on trains. It has high relevance for next generation wireless local area networks (WLANs) and 4G mobile cellular wireless systems. The paper begins with a comprehensive literature review focused on both technologies. This is followed by the modelling of the OFDM-MIMO physical layer based on Simulink/Matlab that takes into consideration high vehicular mobility. Then the entire system is simulated and analysed under different encoding and channel estimation algorithms. The use of High Altitude Platform system (HAPs) technology is considered and analysed.
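
    A minimal sketch relating the maximum Doppler shift f_d = v·f_c/c for a high-speed user to an OFDM subcarrier spacing; the carrier frequency, speed and numerology below are illustrative assumptions, not the paper's simulation parameters.

    # Assumed scenario: a 300 km/h train, a 2.6 GHz carrier and 15 kHz subcarriers.
    c = 3.0e8                      # speed of light, m/s
    carrier_hz = 2.6e9             # carrier frequency
    speed_kmh = 300.0              # user speed
    subcarrier_spacing_hz = 15e3   # LTE-like OFDM numerology

    v = speed_kmh / 3.6
    f_doppler = v * carrier_hz / c
    print(f"max Doppler shift ~ {f_doppler:.0f} Hz "
          f"({100 * f_doppler / subcarrier_spacing_hz:.1f}% of the subcarrier spacing)")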

  16. Investigation of Doppler Effects on high mobility OFDM-MIMO systems with the support of High Altitude Platforms (HAPs)

    International Nuclear Information System (INIS)

    Mohammed, H A; Sibley, M J N; Mather, P J

    2012-01-01

    The merging of Orthogonal Frequency Division Multiplexing (OFDM) with Multiple-input multiple-output (MIMO) is a promising mobile air interface solution for next generation wireless local area networks (WLANs) and 4G mobile cellular wireless systems. This paper details the design of a highly robust and efficient OFDM-MIMO system to support permanent accessibility and higher data rates to users moving at high speeds, such as users travelling on trains. It has high relevance for next generation wireless local area networks (WLANs) and 4G mobile cellular wireless systems. The paper begins with a comprehensive literature review focused on both technologies. This is followed by the modelling of the OFDM-MIMO physical layer based on Simulink/Matlab that takes into consideration high vehicular mobility. Then the entire system is simulated and analysed under different encoding and channel estimation algorithms. The use of High Altitude Platform system (HAPs) technology is considered and analysed.

  17. A clinical gamma camera-based pinhole collimated system for high resolution small animal SPECT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Mejia, J.; Galvis-Alonso, O.Y., E-mail: mejia_famerp@yahoo.com.b [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Dept. de Biologia Molecular; Castro, A.A. de; Simoes, M.V. [Faculdade de Medicina de Sao Jose do Rio Preto (FAMERP), SP (Brazil). Dept. de Clinica Medica; Leite, J.P. [Universidade de Sao Paulo (FMRP/USP), Ribeirao Preto, SP (Brazil). Fac. de Medicina. Dept. de Neurociencias e Ciencias do Comportamento; Braga, J. [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil). Div. de Astrofisica

    2010-11-15

    The main objective of the present study was to upgrade a clinical gamma camera to obtain high resolution tomographic images of small animal organs. The system is based on a clinical gamma camera to which we have adapted a special-purpose pinhole collimator and a device for positioning and rotating the target based on a computer-controlled step motor. We developed a software tool to reconstruct the target's three-dimensional distribution of emission from a set of planar projections, based on the maximum likelihood algorithm. We present details on the hardware and software implementation. We imaged phantoms and heart and kidneys of rats. When using pinhole collimators, the spatial resolution and sensitivity of the imaging system depend on parameters such as the detector-to-collimator and detector-to-target distances and pinhole diameter. In this study, we reached an object voxel size of 0.6 mm and spatial resolution better than 2.4 and 1.7 mm full width at half maximum when 1.5- and 1.0-mm diameter pinholes were used, respectively. Appropriate sensitivity to study the target of interest was attained in both cases. Additionally, we show that as few as 12 projections are sufficient to attain good quality reconstructions, a result that implies a significant reduction of acquisition time and opens the possibility for radiotracer dynamic studies. In conclusion, a high resolution single photon emission computed tomography (SPECT) system was developed using a commercial clinical gamma camera, allowing the acquisition of detailed volumetric images of small animal organs. This type of system has important implications for research areas such as Cardiology, Neurology or Oncology. (author)
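
    A minimal sketch of the maximum-likelihood (MLEM) update commonly used for this kind of reconstruction, x <- (x / A^T 1) * A^T (y / A x); the tiny system matrix below is hypothetical, and the authors' actual projector and pinhole geometry are not reproduced.

    import numpy as np

    def mlem(projections, system_matrix, n_iter=20):
        """Basic MLEM: y (M,) detector counts, A (M, N) maps voxel activity to
        detector bins; returns the voxel activity estimate x (N,)."""
        A = np.asarray(system_matrix, float)
        y = np.asarray(projections, float)
        sensitivity = A.sum(axis=0)                      # A^T * 1
        x = np.ones(A.shape[1])
        for _ in range(n_iter):
            expected = A @ x
            ratio = np.divide(y, expected, out=np.zeros_like(y), where=expected > 0)
            x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
        return x

    # Tiny hypothetical example: 3 detector bins viewing 2 voxels.
    A = np.array([[1.0, 0.2],
                  [0.5, 0.5],
                  [0.2, 1.0]])
    y = A @ np.array([4.0, 1.0])          # noiseless projections of a known object
    print(mlem(y, A, n_iter=200))         # converges towards [4.0, 1.0]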

  18. Performance Characterization of UV Science Cameras Developed for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP)

    Science.gov (United States)

    Champey, Patrick; Kobayashi, Ken; Winebarger, Amy; Cirtin, Jonathan; Hyde, David; Robertson, Bryan; Beabout, Brent; Beabout, Dyana; Stewart, Mike

    2014-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-alpha and to detect the Hanle effect in the line core. Due to the nature of Lyman-alpha polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1% in the line core. CLASP is a dual-beam spectro-polarimeter, which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1% polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. Coating the e2v CCD57-10 512x512 detectors with Lumogen-E coating allows for a relatively high (30%) quantum efficiency at the Lyman-α line. The CLASP cameras were designed to operate with ≤10 e-/pixel/second dark current, ≤25 e- read noise, a gain of 2.0 and ≤0.1% residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera: dark current, read noise, camera gain and residual non-linearity.
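
    A minimal sketch of the standard mean-variance (photon transfer) estimates of conversion gain and read noise from a pair of flat frames and a pair of bias frames; the synthetic frames below are assumptions chosen to mimic the quoted gain of 2.0 and 25 e- read noise, not CLASP data.

    import numpy as np

    def gain_and_read_noise(flat1, flat2, bias1, bias2):
        """Conversion gain (e-/DN) and read noise (e-) from matched flat and bias pairs."""
        f1, f2, b1, b2 = (np.asarray(f, float) for f in (flat1, flat2, bias1, bias2))
        signal = f1.mean() + f2.mean() - b1.mean() - b2.mean()
        shot_var = np.var(f1 - f2) - np.var(b1 - b2)
        gain = signal / shot_var                         # e- per DN
        read_noise = gain * np.std(b1 - b2) / np.sqrt(2.0)
        return gain, read_noise

    # Synthetic frames: ~20,000 e- exposure, gain 2.0 e-/DN, 25 e- read noise, 500 DN offset.
    rng = np.random.default_rng(0)
    make = lambda electrons: electrons / 2.0 + rng.normal(0.0, 25.0 / 2.0, (512, 512)) + 500.0
    f1, f2 = make(rng.poisson(20_000, (512, 512))), make(rng.poisson(20_000, (512, 512)))
    b1, b2 = make(0.0), make(0.0)
    print(gain_and_read_noise(f1, f2, b1, b2))   # approximately (2.0, 25.0)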

  19. CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    G. Kemper

    2016-06-01

    step with the help of the nadir camera and the GPS/IMU data, an initial orientation correction and radial correction were calculated. With this approach, the whole project was calculated and calibrated in one step. During the iteration process the radial and tangential parameters were switched on individually for the camera heads, and after that the camera constants and principal point positions were checked and finally calibrated. Besides that, the boresight calibration can be performed either on the basis of the nadir camera and its offsets, or independently for each camera without correlation to the others. This must be performed in a complete mission anyway to obtain stability between the single camera heads. Determining the lever arms from the nodal points to the IMU centre needs more caution than for a single camera, especially due to the strong tilt angle. Having completed all these previous steps, you get a highly accurate sensor that enables fully automated data extraction with rapid updates of your existing data. Frequent monitoring of urban dynamics is then possible in a fully 3D environment.
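
    A minimal sketch of the usual radial plus tangential (Brown-Conrady style) distortion model behind the parameters discussed above; the coefficients below are hypothetical, and the record's exact parameterization may differ.

    def distort(x, y, k1, k2, p1, p2):
        """Apply radial (k1, k2) and tangential (p1, p2) distortion to normalized
        image coordinates, with the principal point at the origin."""
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 * r2
        x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        return x_d, y_d

    # Hypothetical coefficients: mild barrel distortion plus slight decentering.
    print(distort(0.30, -0.20, k1=-0.05, k2=0.001, p1=1e-4, p2=-2e-4))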

  20. Next Phase of Mobile Communications – LTE: The End of Fixed Broadband?

    OpenAIRE

    Eylert, Bernd; Eras, Martin; Zeh, Thomas

    2008-01-01

    With the introduction of the Internet in the early 1990s, broadband demand has increased tremendously. Just as ISDN, the leading fixed-line technology at that time, had its counterpart in the mobile world in GSM, DSL and its evolution have their counterparts in UMTS/3G and its evolution into HSPA (High Speed Packet Access) and soon in LTE (Long Term Evolution). With the globalisation of our industries, business has changed during the last 15 years. Employe...