WorldWideScience

Sample records for camera phone-based wayfinding

  1. Use of a smart phone based thermo camera for skin prick allergy testing: a feasibility study (Conference Presentation)

    Science.gov (United States)

    Barla, Lindi; Verdaasdonk, Rudolf M.; Rustemeyer, Thomas; Klaessens, John; van der Veen, Albert

    2016-02-01

    Allergy testing is usually performed by exposing the skin to small quantities of potential allergens on the inner forearm and scratching the protective epidermis to increase exposure. After 15 minutes the dermatologist performs a visual check for swelling and erythema, which is subjective and difficult for, e.g., darker skin types. A small smart phone based thermal camera (FLIR One) was used to obtain quantitative images in a feasibility study of 17 patients. Directly after allergen exposure on the forearm, thermal images were captured at 30-second intervals and processed into a time-lapse movie spanning 15 minutes. Taking the 'subjective' reading of the dermatologist as the gold standard, in 11/17 patients (65%) the dermatologist's evaluation was confirmed by the thermal camera, including 5 of 6 patients without an allergic response. In 7 patients thermal imaging showed additional spots. Of the 342 sites tested, the dermatologist detected 47 allergies, of which 28 (60%) were confirmed by thermal imaging, while thermal imaging showed 12 additional spots. The method can be improved with dedicated acquisition software and better registration between normal and thermal images. The lymphatic reaction seems to shift away from the original puncture site. The interpretation of the thermal images is still partly subjective, since collecting quantitative data is difficult due to patient motion over the 15 minutes. Although not yet conclusive, thermal imaging appears promising for improving the sensitivity and selectivity of allergy testing using a smart phone based camera.

  2. A mobile phone-based retinal camera for portable wide field imaging.

    Science.gov (United States)

    Maamari, Robi N; Keenan, Jeremy D; Fletcher, Daniel A; Margolis, Todd P

    2014-04-01

    Digital fundus imaging is used extensively in the diagnosis, monitoring and management of many retinal diseases. Access to fundus photography is often limited by patient morbidity, high equipment cost and shortage of trained personnel. Advancements in telemedicine methods and the development of portable fundus cameras have increased the accessibility of retinal imaging, but most of these approaches rely on separate computers for viewing and transmission of fundus images. We describe a novel portable handheld smartphone-based retinal camera capable of capturing high-quality, wide field fundus images. The use of the mobile phone platform creates a fully embedded system capable of acquisition, storage and analysis of fundus images that can be directly transmitted from the phone via the wireless telecommunication system for remote evaluation. PMID:24344230

  3. Indoor wayfinding and navigation

    CERN Document Server

    2015-01-01

    Due to the widespread use of navigation systems for wayfinding and navigation in the outdoors, researchers have devoted their efforts in recent years to designing navigation systems that can be used indoors. This book is a comprehensive guide to designing and building indoor wayfinding and navigation systems. It covers all types of feasible sensors (for example, Wi-Fi, A-GPS), discussing the level of accuracy, the types of map data needed, the data sources, and the techniques for providing routes and directions within structures.

  4. Influence of Motivation on Wayfinding

    Science.gov (United States)

    Srinivas, Samvith

    2010-01-01

    This research explores the role of affect in the domain of human wayfinding by asking if increased motivation will alter performance across various routes of increasing complexity. Participants were asked to perform certain navigation tasks within an indoor Virtual Reality (VR) environment under either motivated or non-motivated instructions.…

  5. Learning as way-finding

    DEFF Research Database (Denmark)

    Dau, Susanne

    2014-01-01

    Based on empirical case-study findings and the theoretical framework of learning by Illeris, coupled with Nonaka & Takeuchi's perspectives on knowledge creation, it is stressed that learning is conditioned by contextual orientation processes in spaces near the body (peripersonal spaces) through the motions of human and non-human agencies. The findings reveal that learning, formal and informal, can be conceptualized through the metaphor of way-finding: embodied, emotional and/or cognitive, both individual and social. Way-finding, it is argued, is a contemporary concept for learning processes, knowledge development and identity-shaping, where learning emerges through motion, feeling and thinking within an information-rich world in constant change.

  6. Wayfinding Design for Amherst Senior Center.

    Science.gov (United States)

    Kim, Karen

    2016-01-01

    This paper presents a design case of wayfinding design for a senior center located in Amherst, New York. The design case proposed a new signage system and colour coding scheme to enhance the wayfinding experience of seniors, visitors, and staff members at the Amherst Senior Center.

  7. Smart phone based bacterial detection using bio functionalized fluorescent nanoparticles

    International Nuclear Information System (INIS)

    We describe immunochromatographic test strips with smart phone-based fluorescence readout. They are intended for use in the detection of the foodborne bacterial pathogens Salmonella spp. and Escherichia coli O157. Silica nanoparticles (SiNPs) were doped with FITC and Ru(bpy), conjugated to the respective antibodies, and then used in a conventional lateral flow immunoassay (LFIA). Fluorescence was recorded by inserting the nitrocellulose strip into a smart phone-based fluorimeter consisting of a lightweight (40 g) optical module containing an LED light source, a fluorescence filter set and a lens attached to the integrated camera of the cell phone in order to acquire high-resolution fluorescence images. The images were analysed using the quick image processing application of the cell phone, enabling the detection of pathogens within a few minutes. This LFIA is capable of detecting pathogens in concentrations as low as 10⁵ cfu mL⁻¹ directly from test samples without pre-enrichment. The detection limit is one order of magnitude better than that of gold nanoparticle-based LFIAs under similar conditions. The successful combination of fluorescent nanoparticle-based pathogen detection by LFIAs with a smart phone-based detection platform has resulted in a portable device with improved diagnostic features and potential applications in diagnostics and environmental monitoring. (author)
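
    The readout step described above amounts to comparing fluorescence intensity over the test-line region against background. A minimal sketch of such a decision rule (the band positions, cutoff ratio, and image shape are hypothetical illustrations, not values from the paper):

```python
import numpy as np

def line_intensity(img: np.ndarray, row_start: int, row_end: int) -> float:
    """Mean fluorescence over a horizontal band of the strip image."""
    return float(img[row_start:row_end].mean())

def is_positive(img: np.ndarray, test_band=(40, 50), bg_band=(0, 10),
                cutoff: float = 2.0) -> bool:
    """Call the sample positive when the test line is clearly above background."""
    signal = line_intensity(img, *test_band)
    background = line_intensity(img, *bg_band)
    return signal > cutoff * background

strip = np.full((100, 20), 10.0)  # uniform background fluorescence
strip[40:50] = 60.0               # bright test line at the expected band
print(is_positive(strip))         # → True
```

    A real pipeline would also check the control line to reject failed runs; the same band-averaging logic applies.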

  8. Cell phone based balance trainer

    Directory of Open Access Journals (Sweden)

    Lee Beom-Chan

    2012-02-01

    Full Text Available Abstract Background In their current laboratory-based form, existing vibrotactile sensory augmentation technologies that provide cues of body motion are impractical for home-based rehabilitation use due to their size, weight, complexity, calibration procedures, cost, and fragility. Methods We have designed and developed a cell phone based vibrotactile feedback system for potential use in balance rehabilitation training in clinical and home environments. It comprises an iPhone with an embedded tri-axial linear accelerometer, custom software to estimate body tilt, a "tactor bud" accessory that plugs into the headphone jack to provide vibrotactile cues of body tilt, and a battery. Five young healthy subjects (24 ± 2.8 yrs, 3 females and 2 males) and four subjects with vestibular deficits (42.25 ± 13.5 yrs, 2 females and 2 males) participated in a proof-of-concept study to evaluate the effectiveness of the system. Healthy subjects used the system with eyes closed during Romberg, semi-tandem Romberg, and tandem Romberg stances. Subjects with vestibular deficits used the system under both eyes-open and eyes-closed conditions during semi-tandem Romberg stance. Vibrotactile feedback was provided when the subject exceeded either an anterior-posterior (A/P) or a medial-lateral (M/L) body tilt threshold. Subjects were instructed to move away from the vibration. Results The system was capable of providing real-time vibrotactile cues that informed corrective postural responses. When feedback was available, both healthy subjects and those with vestibular deficits significantly reduced their A/P or M/L RMS sway (depending on the direction of feedback), had significantly smaller elliptical area fits to their sway trajectory, spent a significantly greater mean percentage of time within the no-feedback zone, and showed a significantly greater A/P or M/L mean power frequency. Conclusion The results suggest that the real-time feedback provided by this system can be used
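
    The tilt-estimation and cueing step can be illustrated with a short sketch: pitch (A/P) and roll (M/L) derived from the gravity components of a tri-axial accelerometer, with a vibrotactile cue fired past a threshold. The formulas and the 1° threshold are illustrative assumptions, not the authors' implementation:

```python
import math

def tilt_angles(ax: float, ay: float, az: float) -> tuple:
    """A/P (pitch) and M/L (roll) tilt in degrees from static gravity components."""
    pitch = math.degrees(math.atan2(ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    return pitch, roll

def should_cue(ax: float, ay: float, az: float, threshold_deg: float = 1.0) -> bool:
    """Fire a vibrotactile cue when body tilt exceeds the threshold in either plane."""
    pitch, roll = tilt_angles(ax, ay, az)
    return abs(pitch) > threshold_deg or abs(roll) > threshold_deg

print(should_cue(0.05, 0.0, 1.0))  # ~2.9 degrees of pitch → True
```

    A real system would low-pass filter the raw accelerometer stream first, since this static-gravity formula misreads fast linear accelerations as tilt.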

  9. Use of gestalt in wayfinding design and analysis of wayfinding process

    Institute of Scientific and Technical Information of China (English)

    Li NIU; Leiqing XU; Zhong TANG

    2008-01-01

    The authors brought forward the definition of "Gestalt space" and indicated that this kind of space can be easily cognized. Three experiments showed that "classification" and "grouping" are the strategies humans use to solve wayfinding problems. "Similarity" and "Legibility" of the space help people complete wayfinding tasks. The designer should provide the essential "Legibility" in Gestalt space, using techniques such as "break" and "accession" to settle the wayfinding problem.

  10. Wayfinding Services for Open Educational Practices

    Directory of Open Access Journals (Sweden)

    M. Kalz

    2008-06-01

    Full Text Available To choose suitable resources for personal competence development in the vast amount of open educational resources is a challenging task for a learner. Starting with a needs analysis of lifelong learners and learning designers, we introduce two wayfinding services that are currently researched and developed in the framework of the Integrated Project TENCompetence. Then we discuss the role of these services in supporting learners in finding and selecting open educational resources, and finally we give an outlook on future research.

  11. Mobile phone based SCADA for industrial automation.

    Science.gov (United States)

    Ozdemir, Engin; Karacor, Mevlut

    2006-01-01

    SCADA is the acronym for "Supervisory Control And Data Acquisition." SCADA systems are widely used in industry for supervisory control and data acquisition of industrial processes. Conventional SCADA systems use a PC, notebook, thin client, or PDA as the client. In this paper, a Java-enabled mobile phone has been used as a client in a sample SCADA application in order to display and supervise the position of a sample prototype crane. The paper presents an actual implementation of the on-line controlling of the prototype crane via mobile phone. The wireless communication between the mobile phone and the SCADA server is performed by means of a base station via general packet radio service (GPRS) and wireless application protocol (WAP). Test results have indicated that the mobile phone based SCADA integration using the GPRS or WAP transfer scheme could enhance the performance of the crane in a day without causing an increase in the response times of SCADA functions. The operator can visualize and modify the plant parameters using his mobile phone, without reaching the site. In this way maintenance costs are reduced and productivity is increased. PMID:16480111

  12. Mobile Phone Based Participatory Sensing in Hydrology

    Science.gov (United States)

    Lowry, C.; Fienen, M. N.; Böhlen, M.

    2014-12-01

    Although many observations in the hydrologic sciences are easy to obtain, requiring very little training or equipment, spatially and temporally distributed data collection is hindered by the associated personnel and telemetry costs. Lack of data increases uncertainty and can limit applications of both field and modeling studies. However, modern society is much more digitally connected than in the past, which presents new opportunities to collect real-time hydrologic data through participatory sensing. Participatory sensing in this usage refers to citizens contributing distributed observations of physical phenomena. Real-time data streams are possible as a direct result of the growth of mobile phone networks and high adoption rates among mobile users. In this research, we describe an example of the development, methodology, barriers to entry, data uncertainty, and results of mobile phone based participatory sensing applied to groundwater and surface water characterization. Results are presented from three participatory sensing experiments that focused on stream stage, surface water temperature, and water quality. Results demonstrate variability in the consistency and reliability across the types of data collected and the challenges of collecting research-grade data. These studies also point to needed improvements and future developments for widespread use of low-cost techniques for participatory sensing.
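
    A core practical step in such systems is parsing free-form citizen reports into quality-controlled readings. A minimal sketch, assuming a hypothetical message format (gauge ID followed by the staff-gauge reading in metres) that is not taken from the study:

```python
import re
from statistics import median

# Hypothetical report format: gauge ID, then reading in metres, e.g. "GAUGE-07 1.42".
PATTERN = re.compile(r"^(GAUGE-\d+)\s+(\d+(?:\.\d+)?)$")

def parse_report(text: str):
    """Return (gauge_id, reading_m), or None for an unparseable report."""
    m = PATTERN.match(text.strip())
    if m is None:
        return None
    return m.group(1), float(m.group(2))

reports = ["GAUGE-07 1.42", "GAUGE-07 1.40", "water looks high", "GAUGE-07 1.45"]
readings = []
for r in reports:
    parsed = parse_report(r)
    if parsed is not None:  # drop unparseable reports, a common QA step
        readings.append(parsed[1])

# A robust aggregate (median) dampens single bad readings from volunteers.
print(median(readings))  # → 1.42
```

    The median here stands in for the data-uncertainty handling the abstract mentions: no single observer's error dominates the published value.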

  13. Route complexity and simulated physical ageing negatively influence wayfinding.

    Science.gov (United States)

    Zijlstra, Emma; Hagedoorn, Mariët; Krijnen, Wim P; van der Schans, Cees P; Mobach, Mark P

    2016-09-01

    The aim of this age-simulation field experiment was to assess the influence of route complexity and physical ageing on wayfinding. Seventy-five people (aged 18-28) performed a total of 108 wayfinding tasks (i.e., 42 participants performed two wayfinding tasks and 33 performed one wayfinding task), of which 59 tasks were performed wearing gerontologic ageing suits. Outcome variables were wayfinding performance (i.e., efficiency and walking speed) and physiological outcomes (i.e., heart and respiratory rates). Analysis of covariance showed that persons on more complex routes (i.e., more floor and building changes) walked less efficiently than persons on less complex routes. In addition, simulated elderly participants performed worse in wayfinding than young participants in terms of speed (p < 0.001). Moreover, a linear mixed model showed that simulated elderly persons had higher heart rates and respiratory rates than young people during a wayfinding task, suggesting that the simulated elderly participants consumed more energy during this task. PMID:27184311

  14. Wayfinding in Healthcare Facilities: Contributions from Environmental Psychology

    OpenAIRE

    Ann Sloan Devlin

    2014-01-01

    The ability to successfully navigate in healthcare facilities is an important goal for patients, visitors, and staff. Despite the fundamental nature of such behavior, it is not infrequent for planners to consider wayfinding only after the fact, once the building or building complex is complete. This review argues that more recognition is needed for the pivotal role of wayfinding in healthcare facilities. First, to provide context, the review presents a brief overview of the relationship betwe...

  15. Coded illumination for motion-blur free imaging of cells on cell-phone based imaging flow cytometer

    Science.gov (United States)

    Saxena, Manish; Gorthi, Sai Siva

    2014-10-01

    Cell-phone based imaging flow cytometry can be realized by flowing cells through a microfluidic device and capturing their images with the optically enhanced camera of the cell-phone. Throughput in flow cytometers is usually enhanced by increasing the flow rate of cells. However, the maximum frame rate of the camera system limits the achievable flow rate; beyond it, the images become highly blurred due to motion smear. We propose to address this issue with coded illumination, which enables recovery of high-fidelity images of cells far beyond their motion-blur limit. This paper presents simulation results of deblurring synthetically generated cell/bead images under such coded illumination.
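
    The idea can be illustrated with a toy 1-D deconvolution: a binary illumination code produces a broadband blur kernel whose Fourier spectrum has no zeros, so the blur is invertible, unlike the box kernel of constant illumination. The code word below is arbitrary, not the one used by the authors:

```python
import numpy as np

n = 64
rng = np.random.default_rng(0)
row = rng.random(n)  # stand-in for one image row of a moving cell

code = np.array([1, 0, 1, 1, 0, 1, 0, 1], dtype=float)  # illumination on/off
kernel = code / code.sum()  # normalized motion-blur kernel under coded light

K = np.fft.fft(kernel, n)
blurred = np.real(np.fft.ifft(np.fft.fft(row) * K))       # circular convolution
restored = np.real(np.fft.ifft(np.fft.fft(blurred) / K))  # Fourier deconvolution

print(np.max(np.abs(restored - row)))  # near machine precision, noiseless case
```

    With real sensor noise the plain Fourier division would be replaced by a regularized inverse (e.g. Wiener filtering), but the invertibility argument is the same.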

  16. Swarm-based wayfinding support in open and distance learning

    NARCIS (Netherlands)

    Tattersall, Colin; Manderveld, Jocelyn; Van den Berg, Bert; Van Es, René; Janssen, José; Koper, Rob

    2005-01-01

    Please refer to the original source: Tattersall, C. Manderveld, J., Van den Berg, B., Van Es, R., Janssen, J., & Koper, R. (2005). Swarm-based wayfinding support in open and distance learning. In Alkhalifa, E.M. (Ed). Cognitively Informed Systems: Utilizing Practical Approaches to Enrich Information

  17. A Wayfinding Grammar Based on Reference System Transformations

    NARCIS (Netherlands)

    Kiefer, Peter; Scheider, Simon; Giannopoulos, Ioannis; Weiser, Paul

    2015-01-01

    Wayfinding models can be helpful in describing, understanding, and technologically supporting the processes involved in navigation. However, current models either lack a high degree of formalization, or they are not holistic and perceptually grounded, which impedes their use for cognitive engineering.

  18. Rapid Prototyping a Collections-Based Mobile Wayfinding Application

    Science.gov (United States)

    Hahn, Jim; Morales, Alaina

    2011-01-01

    This research presents the results of a project that investigated how students use a library developed mobile app to locate books in the library. The study employed a methodology of formative evaluation so that the development of the mobile app would be informed by user preferences for next generation wayfinding systems. A key finding is the…

  19. Lost in the Labyrinthine Library: A Multi-Method Case Study Investigating Public Library User Wayfinding Behavior

    Science.gov (United States)

    Mandel, Lauren Heather

    2012-01-01

    Wayfinding is the method by which humans orient and navigate in space, and particularly in built environments such as cities and complex buildings, including public libraries. In order to wayfind successfully in the built environment, humans need information provided by wayfinding systems and tools, for instance architectural cues, signs, and…

  20. Applicability of an exposure model for the determination of emissions from mobile phone base stations

    DEFF Research Database (Denmark)

    Breckenkamp, J; Neitzke, H P; Bornkessel, C;

    2008-01-01

    Applicability of a model to estimate radiofrequency electromagnetic field (RF-EMF) strength in households from mobile phone base stations was evaluated with technical data of mobile phone base stations available from the German Net Agency, and dosimetric measurements, performed in an epidemiologi...

  1. Autonomous indoor wayfinding for individuals with cognitive impairments

    Directory of Open Access Journals (Sweden)

    Chang Yao-Jen

    2010-09-01

    Full Text Available Abstract Background A challenge to individuals with cognitive impairments in wayfinding is how to remain oriented, recall routines, and travel in unfamiliar areas while relying on limited cognitive capacity. While people without disabilities often use maps or written directions as navigation tools or for remaining oriented, this cognitively-impaired population is very sensitive to issues of abstraction (e.g. icons on maps or signage) and presents the designer with a challenge to tailor navigation information specific to each user and context. Methods This paper describes an approach to providing distributed cognition support of travel guidance for persons with cognitive disabilities. A solution is proposed based on passive near-field RFID tags and scanning PDAs. A prototype was built and tested in field experiments with real subjects. The unique strength of the system is its ability to provide unique-to-the-user prompts that are triggered by context. The key to the approach is to spread the context awareness across the system: the context is flagged by the RFID tags, and the appropriate response is evoked by displaying the appropriate path guidance images indexed by the intersection of the specific end-user and the context ID embedded in the RFID tags. Results We found that passive RFIDs generally served as good context for triggering navigation prompts, although effectiveness varied across individuals. The results of controlled experiments provided further evidence on the applicability of the proposed autonomous indoor wayfinding method. Conclusions Our findings suggest that the ability to adapt indoor wayfinding devices for appropriate timing of directions and standing orientation will be particularly important.
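
    The prompt-selection logic described, where guidance is indexed by the intersection of end-user and context ID, reduces to a keyed lookup. A minimal sketch with hypothetical user, tag, and prompt values:

```python
# Hypothetical (user ID, RFID context ID) → prompt table. In the described
# system the value would be a user-specific guidance image rather than text.
prompts = {
    ("user_A", "tag_17"): "Turn left toward the blue door.",
    ("user_A", "tag_18"): "Take the elevator to floor 2.",
    ("user_B", "tag_17"): "Follow the red handrail.",
}

def prompt_for(user_id: str, tag_id: str) -> str:
    """Return the user- and context-specific prompt, with a safe fallback."""
    return prompts.get((user_id, tag_id), "Please wait for assistance.")

print(prompt_for("user_A", "tag_17"))  # → Turn left toward the blue door.
```

    Keying on the pair rather than on the tag alone is what lets the same physical location trigger different prompts for different users.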

  2. Dynamic Operations Wayfinding System (DOWS) for Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory; Ulrich, Thomas Anthony [Idaho National Laboratory; Lew, Roger Thomas [Idaho National Laboratory

    2015-08-01

    A novel software tool is proposed to aid reactor operators in responding to upset plant conditions. The purpose of the Dynamic Operations Wayfinding System (DOWS) is to diagnose faults, prioritize those faults, identify paths to resolve those faults, and deconflict the optimal path for the operator to follow. The objective of DOWS is to take the guesswork out of the best way to combine procedures to resolve compound faults, mitigate low threshold events, or respond to severe accidents. DOWS represents a uniquely flexible and dynamic computer-based procedure system for operators.

  3. Colour contribution to children's wayfinding in school environments

    Science.gov (United States)

    Helvacıoǧlu, Elif; Olguntürk, Nilgün

    2011-03-01

    The purpose of this study was to explore the contribution of colour to children's wayfinding ability in school environments and to examine the differences between colours in terms of their remembrance and usability in the route learning process. The experiment was conducted with three different sample groups for each of three experiment sets differentiated by their colour arrangement. The participants totalled 100 primary school children aged seven and eight years old. The study was conducted in four phases. In the first phase, the participants were tested for familiarity with the experiment site and also for colour vision deficiencies by using Ishihara's tests for colour-blindness. In the second phase, they were escorted on the experiment route by the tester one by one, from one starting point to one end point, and were asked to lead the tester back to the end point by the same route. In the third phase, they were asked to describe the route verbally. In the final phase, they were asked to remember the specific colours at their correct locations. It was found that colour has a significant effect on children's wayfinding performance in school environments. However, there were no differences between colours in terms of their remembrance in route-finding tasks. In addition, the correct identification of specific colours and landmarks depended on their specific locations. Contrary to the literature, gender differences were not found to be significant in the accuracy of route learning performance.

  4. The language of landmarks: the role of background knowledge in indoor wayfinding.

    Science.gov (United States)

    Frankenstein, Julia; Brüssow, Sven; Ruzzoli, Felix; Hölscher, Christoph

    2012-08-01

    To effectively wayfind through unfamiliar buildings, humans infer their relative position to target locations not only by interpreting geometric layouts, especially length of line of sight, but also by using background knowledge to evaluate landmarks with respect to their probable spatial relation to a target. Questionnaire results revealed that participants have consistent background knowledge about the relative position of target locations. Landmarks were rated significantly differently with respect to their spatial relation to targets. In addition, results from a forced-choice task comparing snapshots of a virtual environment revealed that background knowledge influenced wayfinding decisions. We suggest that landmarks are interpreted semantically with respect to their function and spatial relation to the target location and thereby influence wayfinding decisions. This indicates that background knowledge plays a role in wayfinding.

  5. Exposure to radio waves near mobile phone base stations

    International Nuclear Information System (INIS)

    Measurements of power density have been made at 17 sites where people were concerned about their exposure to radio waves from mobile phone base stations and where technical data, including the frequencies and radiated powers, have been obtained from the operators. Based on the technical data, the radiated power from antennas used with macrocellular base stations in the UK appears to range from a few watts to a few tens of watts, with typical maximum powers around 80 W. Calculations based on this power indicate that compliance distances would be expected to be no more than 3.1 m for the NRPB guidelines and no more than 8.4 m for the ICNIRP public guidelines. Microcellular base stations appear to use powers no more than a few watts and would not be expected to require compliance distances in excess of a few tens of centimetres. Power density from the base stations of interest was measured at 118 locations at the 17 sites and these data were compared with calculations assuming an inverse square law dependence of power density upon distance from the antennas. It was found that the calculations overestimated the measured power density by up to four orders of magnitude at locations that were either not exposed to the main beam from antennas, or shielded by building fabric. For all locations and for distances up to 250 m from the base stations, power density at the measurement positions did not show any trend to decrease with increasing distance. The signals from other sources were frequently found to be of similar strength to the signals from the base stations of interest. Spectral measurements were obtained over the 30 MHz to 2.9 GHz range at 73 of the locations so that total exposure to radio signals could be assessed. The geometric mean total exposure arising from all radio signals at the locations considered was 2 millionths of the NRPB investigation level, or 18 millionths of the lower ICNIRP public reference level; however, the data varied over several decades. 
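
    The far-field calculation the survey compares against is the inverse square law, S = EIRP / (4πd²). A minimal sketch with illustrative numbers (the 80 W figure comes from the abstract; treating the antenna as isotropic is the simplification that, as the measurements show, can overestimate off-beam exposure by orders of magnitude):

```python
import math

def power_density(eirp_w: float, distance_m: float) -> float:
    """Far-field power density S = EIRP / (4*pi*d^2), in W/m^2."""
    return eirp_w / (4 * math.pi * distance_m ** 2)

# ICNIRP (1998) public reference level at 900 MHz: f/200 W/m^2 with f in MHz.
icnirp_limit = 900 / 200  # 4.5 W/m^2

for d in (1.0, 10.0, 100.0):
    s = power_density(80.0, d)
    print(f"d = {d:6.1f} m: S = {s:.6f} W/m^2 ({s / icnirp_limit:.6f} of limit)")
```

    Each tenfold increase in distance cuts the predicted density a hundredfold; the measured lack of any distance trend out to 250 m is what makes the simple model a poor exposure predictor in practice.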

  6. A semiotic approach to blind wayfinding: some primary conceptual standpoints

    Directory of Open Access Journals (Sweden)

    Marcelo Santos

    2009-01-01

    Full Text Available Researchers from a wide variety of disciplines, such as philosophy, art, education and psychology, have over the years sustained the idea that blind persons are incapable or nearly incapable of formulating complex mental diagrammatic representations, which are schemas based on the similarities found within internal logical relations between sign and object. Contrary to this widely accepted opinion, we present an alternative approach in this paper. Our main idea is that blind and visually impaired people relying upon touch as a main knowledge source are quite capable of diagrammatic reasoning, but use a different method for this purpose, namely the method of inductive reasoning. Such a method can effectively provide the mind with the data necessary for the elaboration of mental maps. Therefore wayfinding, as a semiotic process in which a route is planned and executed from marks or navigation indexes, is also enabled by touch.

  7. Detecting Signage and Doors for Blind Navigation and Wayfinding.

    Science.gov (United States)

    Wang, Shuihua; Yang, Xiaodong; Tian, Yingli

    2013-07-01

    Signage plays a very important role in finding destinations in applications of navigation and wayfinding. In this paper, we propose a novel framework to detect doors and signage to help blind people access unfamiliar indoor environments. In order to eliminate interference and improve the accuracy of signage detection, we first extract the attended areas by using a saliency map. Then the signage is detected in the attended areas by using bipartite graph matching. The proposed method can handle detection of multiple signs. Furthermore, in order to provide more information for blind users to access the area associated with the detected signage, we develop a robust method to detect doors based on a geometric door-frame model which is independent of door appearance. Experimental results on our collected datasets of indoor signage and doors demonstrate the effectiveness and efficiency of the proposed method.

  8. MAGELLAN: a cognitive map-based model of human wayfinding.

    Science.gov (United States)

    Manning, Jeremy R; Lew, Timothy F; Li, Ningcheng; Sekuler, Robert; Kahana, Michael J

    2014-06-01

    In an unfamiliar environment, searching for and navigating to a target requires that spatial information be acquired, stored, processed, and retrieved. In a study encompassing all of these processes, participants acted as taxicab drivers who learned to pick up and deliver passengers in a series of small virtual towns. We used data from these experiments to refine and validate MAGELLAN, a cognitive map-based model of spatial learning and wayfinding. MAGELLAN accounts for the shapes of participants' spatial learning curves, which measure their experience-based improvement in navigational efficiency in unfamiliar environments. The model also predicts the ease (or difficulty) with which different environments are learned and, within a given environment, which landmarks will be easy (or difficult) to localize from memory. Using just 2 free parameters, MAGELLAN provides a useful account of how participants' cognitive maps evolve over time with experience, and how participants use the information stored in their cognitive maps to navigate and explore efficiently.

  9. Smart-Phone Based Magnetic Levitation for Measuring Densities.

    Directory of Open Access Journals (Sweden)

    Stephanie Knowlton

    Full Text Available Magnetic levitation, which uses a magnetic field to suspend objects in a fluid, is a powerful and versatile technology. We develop a compact magnetic levitation platform compatible with a smart-phone to separate micro-objects and estimate the density of the sample based on its levitation height. A 3D printed attachment is mechanically installed over the existing camera unit of a smart-phone. Micro-objects, which may be either spherical or irregular in shape, are suspended in a paramagnetic medium and loaded in a microcapillary tube which is then inserted between two permanent magnets. The micro-objects are levitated and confined in the microcapillary at an equilibrium height dependent on their volumetric mass densities (causing a buoyancy force toward the edge of the microcapillary) and magnetic susceptibilities (causing a magnetic force toward the center of the microcapillary) relative to the suspending medium. The smart-phone camera captures magnified images of the levitating micro-objects through an additional lens positioned between the sample and the camera lens cover. A custom-developed Android application then analyzes these images to determine the levitation height and estimate the density. Using this platform, we were able to separate microspheres with varying densities and calibrate their levitation heights to known densities to develop a technique for precise and accurate density estimation. We have also characterized the magnetic field, the optical imaging capabilities, and the thermal state over time of this platform.
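
    The calibrate-then-estimate procedure described above can be sketched as a simple linear fit: known-density beads give a height-to-density mapping, which is then applied to an unknown sample. The heights and densities below are hypothetical placeholder values, not data from the paper:

```python
import numpy as np

# Hypothetical calibration: levitation heights (mm) of reference beads with
# known densities (g/mL); over a limited range the relation is roughly linear.
heights = np.array([0.2, 0.5, 0.8, 1.1])        # measured heights, mm
densities = np.array([1.10, 1.05, 1.00, 0.95])  # known bead densities, g/mL

# Fit density = a * height + b by least squares.
a, b = np.polyfit(heights, densities, 1)

def density_from_height(h_mm: float) -> float:
    """Estimate a sample's density (g/mL) from its levitation height (mm)."""
    return a * h_mm + b

print(round(density_from_height(0.65), 3))  # → 1.025
```

    The actual relationship depends on the magnetic susceptibility contrast and field profile, so a per-device calibration like this is what the abstract's "calibrate their levitation heights to known densities" amounts to.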

  10. Smart-Phone Based Magnetic Levitation for Measuring Densities.

    Science.gov (United States)

    Knowlton, Stephanie; Yu, Chu Hsiang; Jain, Nupur; Ghiran, Ionita Calin; Tasoglu, Savas

    2015-01-01

    Magnetic levitation, which uses a magnetic field to suspend objects in a fluid, is a powerful and versatile technology. We develop a compact magnetic levitation platform compatible with a smart-phone to separate micro-objects and estimate the density of the sample based on its levitation height. A 3D printed attachment is mechanically installed over the existing camera unit of a smart-phone. Micro-objects, which may be either spherical or irregular in shape, are suspended in a paramagnetic medium and loaded in a microcapillary tube which is then inserted between two permanent magnets. The micro-objects are levitated and confined in the microcapillary at an equilibrium height dependent on their volumetric mass densities (causing a buoyancy force toward the edge of the microcapillary) and magnetic susceptibilities (causing a magnetic force toward the center of the microcapillary) relative to the suspending medium. The smart-phone camera captures magnified images of the levitating micro-objects through an additional lens positioned between the sample and the camera lens cover. A custom-developed Android application then analyzes these images to determine the levitation height and estimate the density. Using this platform, we were able to separate microspheres with varying densities and calibrate their levitation heights to known densities to develop a technique for precise and accurate density estimation. We have also characterized the magnetic field, the optical imaging capabilities, and the thermal state over time of this platform. PMID:26308615
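    The calibration step described above can be sketched as a line fit: beads of known density are levitated, their heights are regressed against density, and an unknown sample's density is read off the fit. The heights and densities below are hypothetical, and the linear model is an assumption (it holds when the magnetic field gradient is approximately linear over the capillary).

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit ys ~ a*xs + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical calibration beads: levitation height (mm) vs density (g/mL).
# Denser beads sit lower in this (assumed) field configuration.
heights = [0.2, 0.5, 0.8, 1.1]
densities = [1.10, 1.05, 1.00, 0.95]
slope, intercept = linear_fit(heights, densities)

def density_from_height(h_mm):
    """Estimate an unknown sample's density from its levitation height."""
    return slope * h_mm + intercept

print(round(density_from_height(0.65), 3))  # -> 1.025
```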

  11. Smart-Phone Based Magnetic Levitation for Measuring Densities

    OpenAIRE

    Stephanie Knowlton; Chu Hsiang Yu; Nupur Jain; Ionita Calin Ghiran; Savas Tasoglu

    2015-01-01

    Magnetic levitation, which uses a magnetic field to suspend objects in a fluid, is a powerful and versatile technology. We develop a compact magnetic levitation platform compatible with a smart-phone to separate micro-objects and estimate the density of the sample based on its levitation height. A 3D printed attachment is mechanically installed over the existing camera unit of a smart-phone. Micro-objects, which may be either spherical or irregular in shape, are suspended in a paramagnetic me...

  12. Smart phone-based Chemistry Instrumentation: Digitization of Colorimetric Measurements

    International Nuclear Information System (INIS)

    This report presents a mobile instrumentation platform based on a smart phone, using its built-in functions for colorimetric diagnosis. The color change resulting from detection is captured as a picture by the CCD camera built into the smart phone and evaluated as a hue value, giving a well-defined relationship between the color and the concentration. To prove the concept, proton concentration measurements were conducted on pH paper coupled with a smart phone for demonstration. This report shows the possibility of adapting a smart phone as a mobile analytical transducer, and more applications for bioanalysis are expected to be developed using other built-in functions of the smart phone
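    The hue-based readout described above can be sketched as follows. The RGB-to-hue conversion uses Python's standard colorsys module; the pH calibration table and the pixel value are hypothetical, not taken from the report.

```python
import colorsys

def hue_degrees(r, g, b):
    """Hue of an 8-bit RGB reading, in degrees [0, 360)."""
    h, _s, _v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h * 360

def interpolate(x, table):
    """Piecewise-linear lookup in a sorted (x, y) calibration table."""
    if x <= table[0][0]:
        return table[0][1]
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return table[-1][1]

# Hypothetical calibration: hue of pH paper photographed at known pH values.
calibration = [(10, 2.0), (35, 4.0), (60, 6.0), (90, 7.5)]  # (hue deg, pH)

rgb = (200, 150, 40)     # average pixel over the test strip (hypothetical)
hue = hue_degrees(*rgb)  # falls in the yellow-orange region, ~41 degrees
print(round(interpolate(hue, calibration), 2))
```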

  13. Determination of exposure due to mobile phone base stations in an epidemiological study

    International Nuclear Information System (INIS)

    To investigate a supposed relationship between exposure from mobile phone base stations and well-being, an epidemiological cross-sectional study is carried out within the German Mobile Telecommunication Research Program. In a parallel project, a method for the classification of electromagnetic exposure due to mobile phone base stations has been developed. It is based on the results of measurements of high-frequency immissions in the interior of more than 1100 rooms and at outdoor locations, the calculation of the emissions of mobile phone antennas under free-space propagation conditions, and empirically determined transmission factors both for the propagation of electromagnetic waves in different types of residential areas and for their passage through walls and windows. Standard tests (correlation test, kappa test, Bland-Altman plot, analysis of sensitivity and specificity) show that the method for computational exposure assessment developed in this project is applicable for a first classification of exposures due to mobile phone base stations in epidemiological studies. (authors)

  14. Mobile phone base stations and well-being--A meta-analysis.

    Science.gov (United States)

    Klaps, Armin; Ponocny, Ivo; Winker, Robert; Kundi, Michael; Auersperg, Felicitas; Barth, Alfred

    2016-02-15

    It is unclear whether electromagnetic fields emitted by mobile phone base stations affect well-being in adults. The existing studies on this topic are highly inconsistent. In the current paper we attempt to clarify this question by carrying out a meta-analysis which is based on the results of 17 studies. Double-blind studies found no effects on human well-being. By contrast, field or unblinded studies clearly showed that there were indeed effects. This provides evidence that at least some effects are based on a nocebo effect. Whether there is an influence of electromagnetic fields emitted by mobile phone base stations thus depends on a person's knowledge about the presence of the presumed cause. Taken together, the results of the meta-analysis show that the effects of mobile phone base stations seem to be rather unlikely. However, nocebo effects occur.

  15. Signage and wayfinding design a complete guide to creating environmental graphic design systems

    CERN Document Server

    Calori, Chris

    2015-01-01

    A new edition of the market-leading guide to signage and wayfinding design This new edition of Signage and Wayfinding Design: A Complete Guide to Creating Environmental Graphic Design Systems has been fully updated to offer you the latest, most comprehensive coverage of the environmental design process-from research and design development to project execution. Utilizing a cross-disciplinary approach that makes the information relevant to architects, interior designers, landscape architects, graphic designers, and industrial engineers alike, the book arms you with the skills needed to apply a

  16. Phone-based motivational interviewing to increase self-efficacy in individuals with phenylketonuria

    Directory of Open Access Journals (Sweden)

    Krista S. Viau

    2016-03-01

    Conclusion: These results demonstrate the feasibility of implementing phone-based dietary counseling for PKU using MI. This study also supports further investigation of MI as an intervention approach to improving self-efficacy and self-management behaviors in adolescents and adults with PKU.

  17. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    NARCIS (Netherlands)

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what

  18. Determinants and stability over time of perception of health risks related to mobile phone base stations

    DEFF Research Database (Denmark)

    Kowall, Bernd; Breckenkamp, Jürgen; Blettner, Maria;

    2012-01-01

    OBJECTIVE: Perception of possible health risks related to mobile phone base stations (MPBS) is an important factor in citizens' opposition against MPBS and is associated with health complaints. The aim of the present study is to assess whether risk perception of MPBS is associated with concerns...

  19. Indoor Signposting and Wayfinding through an Adaptation of the Dutch Cyclist Junction Network System

    NARCIS (Netherlands)

    Makri, A.; Verbree, E.

    2014-01-01

    Finding one's way in complex indoor settings can be a quite stressful and time-consuming task, especially for users unfamiliar with the environment. Several different approaches have been developed to provide wayfinding assistance in order to guide a person from a starting point to a destinat

  20. Auditory Cues Used for Wayfinding in Urban Environments by Individuals with Visual Impairments

    Science.gov (United States)

    Koutsoklenis, Athanasios; Papadopoulos, Konstantinos

    2011-01-01

    The study presented here examined which auditory cues individuals with visual impairments use more frequently and consider to be the most important for wayfinding in urban environments. It also investigated the ways in which these individuals use the most significant auditory cues. (Contains 1 table and 3 figures.)

  1. Haptic Cues Used for Outdoor Wayfinding by Individuals with Visual Impairments

    Science.gov (United States)

    Koutsoklenis, Athanasios; Papadopoulos, Konstantinos

    2014-01-01

    Introduction: The study presented here examines which haptic cues individuals with visual impairments use more frequently and determines which of these cues are deemed by these individuals to be the most important for way-finding in urban environments. It also investigates the ways in which these haptic cues are used by individuals with visual…

  2. Pilot study of a cell phone-based exercise persistence intervention post-rehabilitation for COPD

    OpenAIRE

    Nguyen, Huong Q.; Gill, Dawn P; Wolpin, Seth; Steele, Bonnie G; Benditt, Joshua O.

    2009-01-01

    Objective To determine the feasibility and efficacy of a six-month, cell phone-based exercise persistence intervention for patients with chronic obstructive pulmonary disease (COPD) following pulmonary rehabilitation. Methods Participants who completed a two-week run-in were randomly assigned to either MOBILE-Coached (n = 9) or MOBILE-Self-Monitored (n = 8). All participants met with a nurse to develop an individualized exercise plan, were issued a pedometer and exercise booklet, and instruct...

  3. Novel versatile smart phone based Microplate readers for on-site diagnoses.

    Science.gov (United States)

    Fu, Qiangqiang; Wu, Ze; Li, Xiuqing; Yao, Cuize; Yu, Shiting; Xiao, Wei; Tang, Yong

    2016-07-15

    Microplate readers are important diagnostic instruments, used intensively for various readout test kits (biochemical analysis kits and ELISA kits). However, due to their expense and non-portability, commercial microplate readers are unavailable for home testing and for community and rural hospitals, especially in developing countries. In this study, to provide a field-portable, cost-effective and versatile diagnostic tool, we report a novel smart phone based microplate reader. The basic principle of this device relies on a smart phone's optical sensor, which measures transmitted light intensities of liquid samples. To prove the validity of these devices, the developed smart phone based microplate readers were applied to read out results for various analytical targets. These targets included alanine aminotransferase (ALT; limit of detection (LOD) was 17.54 U/L), alkaline phosphatase (AKP; LOD was 15.56 U/L), creatinine (LOD was 1.35 μM), bovine serum albumin (BSA; LOD was 0.0041 mg/mL), prostate specific antigen (PSA; LOD was 0.76 pg/mL), and ractopamine (Rac; LOD was 0.31 ng/mL). The developed smart phone based microplate readers are versatile, portable, and inexpensive; they are unique because of their ability to perform under circumstances where resources and expertise are limited.
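    A transmitted-intensity readout of this kind lends itself to the standard Beer-Lambert conversion from transmittance to absorbance to concentration. This is a minimal sketch, assuming a linear absorbance-concentration calibration; the slope and sensor readings are hypothetical.

```python
import math

def absorbance(i_sample, i_blank):
    """A = -log10(T), with transmittance T = I_sample / I_blank."""
    return -math.log10(i_sample / i_blank)

def concentration(a, slope):
    """Beer-Lambert with fixed path length: A = slope * c, so c = A / slope."""
    return a / slope

# Hypothetical intensity readings from the phone's sensor (arbitrary units):
i_blank, i_sample = 1000.0, 250.0
a = absorbance(i_sample, i_blank)  # about 0.602 for 25% transmittance
print(round(concentration(a, slope=0.3), 3))  # hypothetical slope, mg/mL
```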

  5. Acoustic wayfinding: A method to measure the acoustic contrast of different paving materials for blind people.

    Science.gov (United States)

    Secchi, Simone; Lauria, Antonio; Cellai, Gianfranco

    2017-01-01

    Acoustic wayfinding involves using a variety of auditory cues to create a mental map of the surrounding environment. For blind people, these auditory cues become the primary substitute for visual information for understanding the features of the spatial context and orienting themselves. This can include creating sound waves, such as by tapping a cane. This paper reports the results of research on the "acoustic contrast" parameter between paving materials functioning as a cue and the surrounding or adjacent surface functioning as a background. A number of different materials were selected in order to create a test path, and a procedure was defined for verifying the ability of blind people to distinguish different acoustic contrasts. A method is proposed for measuring the acoustic contrast generated by the impact of a cane tip on the ground, to provide blind people with environmental information for spatial orientation and wayfinding in urban places. PMID:27633240
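    One plausible way to quantify such an acoustic contrast (the paper's exact measurement procedure may differ) is the RMS level difference, in decibels, between cane-tap recordings on the cue surface and on the background surface. The synthetic "recordings" below are hypothetical.

```python
import math

def level_db(samples, ref=1.0):
    """RMS level of a signal, in dB relative to `ref`."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms / ref)

def acoustic_contrast_db(cue_samples, background_samples):
    """Level difference between cue and background tap recordings."""
    return level_db(cue_samples) - level_db(background_samples)

# Hypothetical tap recordings at 8 kHz: the cue surface rings twice as loud.
cue = [0.2 * math.sin(2 * math.pi * 440 * t / 8000) for t in range(800)]
background = [0.1 * math.sin(2 * math.pi * 440 * t / 8000) for t in range(800)]
print(round(acoustic_contrast_db(cue, background), 2))  # ~6.02 dB (2x amplitude)
```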

  6. Pilot study of a cell phone-based exercise persistence intervention post-rehabilitation for COPD

    OpenAIRE

    Nguyen, Huong Q.; Gill, Dawn P; Seth Wolpin; et al.

    2009-01-01

    Huong Q Nguyen (1), Dawn P Gill (1), Seth Wolpin (1), Bonnie G Steele (2), Joshua O Benditt (1); (1) University of Washington, Seattle, WA, USA; (2) VA Puget Sound Health Care System, Seattle, WA, USA. Objective: To determine the feasibility and efficacy of a six-month, cell phone-based exercise persistence intervention for patients with chronic obstructive pulmonary disease (COPD) following pulmonary rehabilitation. Methods: Participants who completed a two-week run-in were randomly assigned to either MOBILE...

  7. Perceived externalities of cell phone base stations: the case of property prices in Hamburg, Germany

    OpenAIRE

    Brandt, Sebastian; Maennig, Wolfgang

    2012-01-01

    We examine the impact of cell phone base stations on prices of condominiums in Hamburg, Germany. This is the first hedonic study on this subject for housing prices in Europe and the first ever to examine the price impact of base stations within a whole metropolis. We distinguish between individual masts and groups of masts. On the basis of a dataset of over 1000 base stations set up in Hamburg, we find that only immediate proximity to groups of antenna masts is perceived as harmful by residen...

  8. Age-related wayfinding differences in real large-scale environments: detrimental motor control effects during spatial learning are mediated by executive decline?

    Directory of Open Access Journals (Sweden)

    Mathieu Taillade

    Full Text Available The aim of this study was to evaluate motor control activity (active vs. passive condition) with regard to wayfinding and spatial learning difficulties in large-scale spaces for older adults. We compared virtual reality (VR)-based wayfinding and spatial memory (survey and route knowledge) performances between 30 younger and 30 older adults. A significant effect of age was obtained on the wayfinding performances but not on the spatial memory performances. Specifically, the active condition deteriorated the survey measure in all of the participants and increased the age-related differences in the wayfinding performances. Importantly, the age-related differences in the wayfinding performances after an active condition were further mediated by the executive measures. All of the results relating to a detrimental effect of motor activity are discussed in terms of a dual-task effect as well as the executive decline associated with aging.

  9. Mobile Phone Based System Opportunities to Home-based Managing of Chemotherapy Side Effects

    Science.gov (United States)

    Davoodi, Somayeh; Mohammadzadeh, Zeinab; Safdari, Reza

    2016-01-01

    Objective: The application of mobile-based systems in cancer care, especially in chemotherapy management, has grown remarkably in recent decades. Because chemotherapy side effects significantly influence patients' lives, it is necessary to find ways to control them. This research reviews experiences of using mobile phone based systems for home-based monitoring of chemotherapy side effects in cancer. Methods: In this literature review study, a search was conducted in the Science Direct, Google Scholar and PubMed databases with keywords such as cancer, chemotherapy, mobile phone, information technology, side effects and self-management, covering the period since 2005. Results: Given the growing incidence of cancer, methods and innovations such as information technology are needed to manage and control it. Mobile phone based systems are solutions that provide cancer patients with quick access to monitoring of chemotherapy side effects at home. The investigated studies demonstrate that the use of mobile phones in chemotherapy management yields positive results and leads to patient and clinician satisfaction. Conclusion: This study shows that mobile phone systems for home-based monitoring of chemotherapy side effects work well. As a result, knowledge of cancer self-management and the rate of patients' effective participation in the care process improved. PMID:27482134

  11. Comparison of the effectiveness of three retinal camera technologies for malarial retinopathy detection in Malawi

    Science.gov (United States)

    Soliz, Peter; Nemeth, Sheila C.; Barriga, E. Simon; Harding, Simon P.; Lewallen, Susan; Taylor, Terrie E.; MacCormick, Ian J.; Joshi, Vinayak S.

    2016-03-01

    The purpose of this study was to test the suitability of three available camera technologies (desktop, portable, and i-phone based) for imaging comatose children who presented with clinical symptoms of malaria. Ultimately, the results of the project would form the basis for the design of a future camera to screen for malarial retinopathy (MR) in a resource-challenged environment. The desktop, portable, and i-phone based cameras were represented by the Topcon, Pictor Plus, and Peek cameras, respectively. These cameras were tested on N=23 children presenting with symptoms of cerebral malaria (CM) at a malaria clinic, Queen Elizabeth Teaching Hospital in Malawi, Africa. Each patient was dilated for a binocular indirect ophthalmoscopy (BIO) exam by an ophthalmologist, followed by imaging with all three cameras. Each of the cases was graded according to an internationally established protocol and compared to the BIO as the clinical ground truth. The reader used three principal retinal lesions as markers for MR: hemorrhages, retinal whitening, and vessel discoloration. The study found that the mid-priced Pictor Plus hand-held camera performed considerably better than the lower-priced mobile phone-based camera, and slightly better than the higher-priced desktop camera. When comparing the readings of digital images against the clinical reference standard (BIO), the Pictor Plus camera had sensitivity and specificity for MR of 100% and 87%, respectively. This compares to a sensitivity and specificity of 87% and 75% for the i-phone based camera and 100% and 75% for the desktop camera. The drawback of all the cameras was their limited field of view, which did not allow a complete view of the periphery where vessel discoloration occurs most frequently. The consequence was that vessel discoloration was not addressed in this study. None of the cameras offered real-time image quality assessment to ensure high quality images to afford the best possible opportunity for reading by a remotely located

  12. Proposing a Multi-Criteria Path Optimization Method in Order to Provide a Ubiquitous Pedestrian Wayfinding Service

    Science.gov (United States)

    Sahelgozin, M.; Sadeghi-Niaraki, A.; Dareshiri, S.

    2015-12-01

    A myriad of novel applications have emerged for different types of navigation systems. One of their most frequent applications is wayfinding. Since there are significant differences between the nature of pedestrian wayfinding problems and that of vehicle navigation, navigation services designed for vehicles are not appropriate for pedestrian wayfinding purposes. In addition, diversity in the environmental conditions of the users and in their preferences affects the process of pedestrian wayfinding with mobile devices. Therefore, a method is necessary that performs intelligent pedestrian routing with regard to this diversity. This intelligence can be achieved with the help of a ubiquitous service that is adapted to the contexts. Such a service possesses both Context-Awareness and User-Awareness capabilities. These capabilities are the main features of ubiquitous services that make them flexible in response to any user in any situation. In this paper, it is attempted to propose a multi-criteria path optimization method that provides a Ubiquitous Pedestrian Way Finding Service (UPWFS). The proposed method considers four criteria, summarized as the Length, Safety, Difficulty and Attraction of the path. A conceptual framework is proposed to show the factors that influence the criteria. Then, a mathematical model is developed on which the proposed path optimization method is based. Finally, data from a local district in Tehran is chosen as the case study in order to evaluate the performance of the proposed method in real situations. Results of the study show that the proposed method successfully captures the effects of the contexts in the wayfinding procedure. This demonstrates the efficiency of the proposed method in providing a ubiquitous pedestrian wayfinding service.
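    A common way to realize this kind of multi-criteria routing (a weighted-sum assumption; the paper's actual mathematical model may differ) is to fold the four criteria into a single edge cost and run a shortest-path search. The weights and the toy network below are hypothetical.

```python
import heapq

def combined_cost(edge, weights):
    """Weighted sum of the four criteria. Attraction is a benefit, so it
    enters with a negative sign; the cost is clamped to stay non-negative
    (Dijkstra requires non-negative edge costs)."""
    w_len, w_safe, w_diff, w_attr = weights
    cost = (w_len * edge["length"] + w_safe * edge["danger"]
            + w_diff * edge["difficulty"] - w_attr * edge["attraction"])
    return max(cost, 0.0)

def best_path(graph, start, goal, weights):
    """Dijkstra over the combined cost. graph: node -> list of (node, edge)."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, edge in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue,
                               (cost + combined_cost(edge, weights),
                                nxt, path + [nxt]))
    return float("inf"), []

# Toy network: a short but unsafe direct edge vs. a longer, safer detour.
graph = {
    "A": [("B", {"length": 100, "danger": 9, "difficulty": 1, "attraction": 0}),
          ("C", {"length": 80, "danger": 1, "difficulty": 1, "attraction": 2})],
    "C": [("B", {"length": 80, "danger": 1, "difficulty": 1, "attraction": 2})],
}
cost, path = best_path(graph, "A", "B", weights=(1.0, 20.0, 5.0, 3.0))
print(path)  # with safety weighted heavily, the detour via C wins
```

    Varying the weight vector per user is one simple way to express the context- and user-awareness the abstract describes.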

  13. Applicability of an exposure model for the determination of emissions from mobile phone base stations

    International Nuclear Information System (INIS)

    The applicability of a model to estimate radiofrequency electromagnetic field (RF-EMF) strength in households from mobile phone base stations was evaluated with technical data on mobile phone base stations available from the German Net Agency and with dosimetric measurements performed in an epidemiological study. Estimated exposure and exposure measured with dosemeters in 1322 participating households were compared. For that purpose, the upper 10th percentiles of both outcomes were defined as the 'higher exposed' groups. To assess the agreement of the defined 'higher exposed' groups, the kappa coefficient, sensitivity and specificity were calculated. The present results show only a weak agreement between calculations and measurements (kappa values between -0.03 and 0.28, sensitivity between 7.1 and 34.6). Only in some of the sub-analyses was a higher agreement found, e.g. when measured instead of interpolated geo-coordinates were used to calculate the distance between households and base stations, which is one important parameter in modelling exposure. During the development of the exposure model, more precise input data were available for its internal validation, which yielded kappa values between 0.41 and 0.68 and sensitivity between 55 and 76 for different types of housing areas. Contrary to this, the calculation of exposure on the basis of the available imprecise data from the epidemiological study is associated with a relatively high degree of uncertainty. Thus, the model can only be applied in epidemiological studies when the uncertainty of the input data is considerably reduced. Otherwise, the use of dosemeters to determine the exposure from RF-EMF in epidemiological studies is recommended. (authors)
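    The agreement statistics reported here (kappa, sensitivity, specificity) all follow from a 2x2 table of model classification versus dosemeter "ground truth". The counts below are hypothetical, chosen only so that the output falls within the reported ranges.

```python
def agreement_stats(tp, fp, fn, tn):
    """Cohen's kappa, sensitivity and specificity for a 2x2 table
    (model 'higher exposed' classification vs. dosemeter measurement)."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n  # observed agreement
    # chance agreement from the marginal totals:
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return kappa, sensitivity, specificity

# Hypothetical: of 1322 households, 132 are 'higher exposed' by dosemeter
# and 132 by the model, but only 30 households appear in both groups.
kappa, sens, spec = agreement_stats(tp=30, fp=102, fn=102, tn=1088)
print(round(kappa, 2), round(sens, 3), round(spec, 3))
```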

  14. The Effect of Gender, Wayfinding Strategy and Navigational Support on Wayfinding Behaviour%性别、寻路策略与导航方式对寻路行为的影响

    Institute of Scientific and Technical Information of China (English)

    房慧聪; 周琳

    2012-01-01

    The wayfinding strategy and the navigational support mode are two important factors in human wayfinding behavior. Although many lines of evidence have shown gender differences in the use of wayfinding strategies and in the effectiveness of some navigational support designs, the interaction of these two factors remains to be studied. The present study aimed to investigate the effect of gender, wayfinding strategy and navigational support mode on wayfinding behavior. 120 subjects were screened with the classic Wayfinding Strategy Scale developed by Lawton and then assigned to different navigational support modes in a VR maze program scripted with 3Dmax and Virtools. In the practice stage, the subjects were required to become familiar with the operation rules, such as moving forward or backward and turning left or right by pressing the cursor keys. The subjects then entered the formal test, in which they were asked to reach the exit of the maze as quickly as possible with the aid of a given navigational support mode. The navigation time and the route map were recorded when the subjects successfully completed the task. Firstly, our data showed that the navigation time for males with lower scores in orientation strategy was shortest under the guide-sign support condition in the VR maze and longest under the YAH map support condition, and the difference between the two conditions was significant. However, the effect of navigational support mode on wayfinding performance was not significant for males with higher scores in orientation strategy. These data indicate that orientation strategy is an important factor in predicting males' navigational performance. Secondly, our data also showed that the effect of navigational support mode on females' wayfinding performance was statistically significant. The navigation time was shortest under the guide-sign support condition, and it was

  15. Mapping Cyclists’ Experiences and Agent-Based Modelling of Their Wayfinding Behaviour

    DEFF Research Database (Denmark)

    Snizek, Bernhard

    This dissertation is about modelling cycling transport behaviour: partly about urban experiences as seen by the cyclist, and partly about modelling, more specifically the agent-based modelling of cyclists' wayfinding behaviour. The dissertation consists of three papers. The first deals with the development of an agent-based model of cycling transport behaviour using geodata, data from the Danish travel survey, as well as behavioural data extracted from trajectories recorded using GPS units. Mapping Bicyclists' Experiences in Copenhagen: This paper presents an approach to the collection, mapping and analysis of cyclists' experiences. By relating spatial experiences to urban indicators such as land-use, street characteristics, cycle infrastructure, centrality and other aspects of the urban environment, their influence on cyclists' experiences was analysed. 398 cyclists responded and plotted their most recent cycle route...

  16. Non-specific physical symptoms in relation to actual and perceived proximity to mobile phone base stations and powerlines.

    NARCIS (Netherlands)

    Baliatsas, C.; Kamp, I. van; Kelfkens, G.; Schipper, M.; Bolte, J.; Yzermans, J.; Lebret, E.

    2011-01-01

    BACKGROUND: Evidence about a possible causal relationship between non-specific physical symptoms (NSPS) and exposure to electromagnetic fields (EMF) emitted by sources such as mobile phone base stations (BS) and powerlines is insufficient. So far little epidemiological research has been published on

  17. Pilot study of a cell phone-based exercise persistence intervention post-rehabilitation for COPD

    Directory of Open Access Journals (Sweden)

    Huong Q Nguyen

    2009-08-01

    Full Text Available Huong Q Nguyen (1), Dawn P Gill (1), Seth Wolpin (1), Bonnie G Steele (2), Joshua O Benditt (1); (1) University of Washington, Seattle, WA, USA; (2) VA Puget Sound Health Care System, Seattle, WA, USA. Objective: To determine the feasibility and efficacy of a six-month, cell phone-based exercise persistence intervention for patients with chronic obstructive pulmonary disease (COPD) following pulmonary rehabilitation. Methods: Participants who completed a two-week run-in were randomly assigned to either MOBILE-Coached (n = 9) or MOBILE-Self-Monitored (n = 8). All participants met with a nurse to develop an individualized exercise plan, were issued a pedometer and exercise booklet, and instructed to continue to log their daily exercise and symptoms. MOBILE-Coached also received weekly reinforcement text messages on their cell phones; reports of worsening symptoms were automatically flagged for follow-up. Usability and satisfaction were assessed. Participants completed incremental cycle and six minute walk (6MW) tests, wore an activity monitor for 14 days, and reported their health-related quality of life (HRQL) at baseline, three, and six months. Results: The sample had a mean age of 68 ± 11 and forced expiratory volume in one second (FEV1) of 40 ± 18% predicted. Participants reported that logging their exercise and symptoms was easy and that keeping track of their exercise helped them remain active. There were no differences between groups over time in maximal workload, 6MW distance, or HRQL (p > 0.05); however, MOBILE-Self-Monitored increased total steps/day whereas MOBILE-Coached logged fewer steps over six months (p = 0.04). Conclusions: We showed that it is feasible to deliver a cell phone-based exercise persistence intervention to patients with COPD post-rehabilitation and that the addition of coaching appeared to be no better than self-monitoring. The latter finding needs to be interpreted with caution since this was a purely exploratory study. Trial registration: Clinical

  18. Study of variations of radiofrequency power density from mobile phone base stations with distance

    International Nuclear Information System (INIS)

    The variations of radiofrequency (RF) radiation power density with distance around some mobile phone base stations (BTSs) were studied at ten randomly selected locations in Ibadan, western Nigeria. Measurements were made with a calibrated hand-held spectrum analyser. The maximum Global System for Mobile communications (GSM) 1800 signal power density was 323.91 μW m-2 at a 250 m radius from one BTS, and that of GSM 900 was 1119.00 μW m-2 at a 200 m radius from another BTS. The estimated total maximum power density was 2972.00 μW m-2 at a 50 m radius from a third BTS. This study shows that the maximum carrier signal power density and the total maximum power density from a BTS may be observed, on average, at radii of 200 and 50 m, respectively. The results of this study demonstrate that exposure of people to RF radiation from phone BTSs in Ibadan city is far below the limits recommended by international scientific bodies. (authors)

  19. Measurement and analysis of radiofrequency radiations from some mobile phone base stations in Ghana

    International Nuclear Information System (INIS)

    A survey of the radiofrequency electromagnetic radiation at public access points in the vicinity of 50 cellular phone base stations has been carried out. The primary objective was to measure and analyse the electromagnetic field strength levels emitted by antennae installed and operated by the Ghana Telecommunications Company. At all sites, measurements were made using a hand-held spectrum analyser to determine the electric field levels within the 900 and 1800 MHz frequency bands. The results indicated that power densities at public access points varied from as low as 0.01 μW m-2 to as high as 10 μW m-2 at 900 MHz. At a transmission frequency of 1800 MHz, power densities varied from 0.01 to 100 μW m-2. The results were found to be in compliance with the International Commission on Non-Ionizing Radiation Protection guidance level but were 20 times higher than the results generally obtained for such a practice elsewhere. There is therefore a need to re-assess the situation to ensure a reduction from the present level, as an increase in mobile phone usage is envisaged within the next few years. (authors)

  20. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    Science.gov (United States)

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available. PMID:24472756
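Model-to-measurement agreement of the kind reported above (Spearman correlations > 0.6) can be checked on any paired dataset. A self-contained sketch follows; the field values are invented for illustration and are not the Amsterdam measurements:

```python
def rank(values):
    """Assign 1-based average ranks to values (ties share the mean rank)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over a run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Illustrative modelled vs measured field strengths (V/m), not real data.
modelled = [0.12, 0.45, 0.30, 0.08, 0.60, 0.25]
measured = [0.10, 0.50, 0.28, 0.11, 0.55, 0.20]
print(round(spearman(modelled, measured), 3))  # → 0.943
```

In practice one would use `scipy.stats.spearmanr`; the hand-rolled version above just makes the ranking step explicit.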

  2. Mobile Phone Based RIMS for Traffic Control a Case Study of Tanzania

    Directory of Open Access Journals (Sweden)

    Angela-Aida Karugila Runyoro

    2015-04-01

    Full Text Available Vehicle saturation of transportation infrastructure causes traffic congestion, accidents, transportation delays and environmental pollution. This problem can be addressed with proper management of traffic flow. Existing traffic management systems struggle to capture and process real-time road data from wide-area road networks. The main purpose of this study is to address this gap by implementing a mobile phone-based Road Information Management System. The proposed system integrates three modules for data collection, storage and information dissemination. The modules work together to enable real-time traffic control. Information disseminated from the system enables road users to adjust their travelling habits, and it allows the traffic lights to control traffic according to the real-time situation on the road. In this paper the system implementation and testing are described. The results indicated that it is possible to track traffic data using Global Positioning System-enabled mobile phones; after the collected data were processed, real-time traffic status was displayed on a web interface. This enabled road users to know in advance the situation on the roads and hence make proper travelling decisions. Further research should consider adapting the traffic light control system to use the disseminated real-time traffic information.

  3. Mobile Phone-Based Field Monitoring for Satsuma Mandarin and Its Application to Watering Advice System

    Science.gov (United States)

    Kamiya, Toshiyuki; Numano, Nagisa; Yagyu, Hiroyuki; Shimazu, Hideo

    This paper describes a mobile phone-based data logging system for monitoring the growing status of Satsuma mandarin, a type of citrus fruit, in the field. The system can provide various feedback to the farm producers from the collected data, such as visualization of related data as a timeline chart or advice on the necessity of watering crops. It is important to collect information on environmental conditions, plant status and product quality, to analyze it and to provide it as feedback to the farm producers to aid their operations. This paper proposes a novel framework of field monitoring and feedback for open-field farming. For field monitoring, it combines a low-cost plant status monitoring method using a simple apparatus with a Field Server for environmental condition monitoring. Each field worker has a simple apparatus to measure fruit firmness and records data with a mobile phone. The logged data are stored in the system's database on the server. The system analyzes the stored data for each field and is able to show the necessity of watering to the user on a five-level scale. The system is also able to show the various stored data in timeline chart form. The user and coach can compare or analyze these data via a web interface. A test site was built at a Satsuma mandarin field at Kumano in Mie Prefecture, Japan using the framework, and farm workers in the area used and evaluated the system.
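The five-level watering advice described above amounts to bucketing an analysed field value into discrete levels. A hypothetical sketch follows; the use of a normalised firmness score and the threshold values are invented, as the record does not give the system's actual model:

```python
def watering_level(firmness, thresholds=(0.2, 0.4, 0.6, 0.8)):
    """Map a normalised fruit-firmness score in [0, 1] to advice levels 1..5
    (1 = no watering needed, 5 = water urgently). Thresholds are invented."""
    level = 1
    for t in thresholds:
        if firmness > t:
            level += 1
    return level

# One invented reading per field, bucketed into the five advice levels.
print([watering_level(x) for x in (0.1, 0.35, 0.55, 0.75, 0.95)])  # → [1, 2, 3, 4, 5]
```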

  4. An iPhone-based digital image colorimeter for detecting tetracycline in milk.

    Science.gov (United States)

    Masawat, Prinya; Harfield, Antony; Namwong, Anan

    2015-10-01

    An iPhone-based digital image colorimeter (DIC) was fabricated as a portable tool for monitoring tetracycline (TC) in bovine milk. An application named ColorConc was developed for the iPhone that utilizes an image-matching algorithm to determine the TC concentration in a solution. The color values red (R), green (G), blue (B), hue (H), saturation (S), brightness (V), and gray (Gr) were measured from pictures of the TC standard solutions. TC solution extracted from milk samples using solid phase extraction (SPE) was photographed and the concentration was predicted by comparing its color values with those collected in a database. The amount of TC could be determined in the concentration range of 0.5-10 μg mL(-1). The proposed DIC-iPhone provides a limit of detection (LOD) of 0.5 μg mL(-1) and a limit of quantitation (LOQ) of 1.5 μg mL(-1). The enrichment factor was 70, and the extracted milk sample was a strong yellow solution after SPE. Therefore, the SPE-DIC-iPhone could be used for the assay of TC residues in milk at concentrations lower than the LOD and LOQ of the proposed technique.
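The matching step described above can be sketched as a nearest-neighbour lookup in color space. Everything below is illustrative: the calibration colors, the sample color, and the use of plain RGB Euclidean distance are assumptions, since the record does not specify ColorConc's actual matching algorithm or color channels:

```python
import math

# Hypothetical calibration database: mean (R, G, B) for TC standards (ug/mL).
calibration = {
    0.5: (250, 245, 180),
    2.0: (245, 235, 140),
    5.0: (240, 220, 100),
    10.0: (235, 205, 60),
}

def predict_concentration(rgb):
    """Return the standard concentration whose color is closest in RGB space."""
    def distance(c):
        return math.dist(rgb, calibration[c])
    return min(calibration, key=distance)

sample_rgb = (241, 222, 104)  # color of an SPE-extracted sample (invented)
print(predict_concentration(sample_rgb))  # → 5.0
```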

  5. Gamma camera

    International Nuclear Information System (INIS)

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position-sensitive radiation detector, the novel system can produce better images than conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  6. Way-finding during a fire emergency: an experimental study in a virtual environment.

    Science.gov (United States)

    Meng, Fanxing; Zhang, Wei

    2014-01-01

    The way-finding behaviour and response during a fire emergency in a virtual environment (VE) were experimentally investigated. Forty participants, divided into two groups, were required to find the emergency exit as quickly as possible in a virtual hotel building under a fire-escape scenario, in condition 1 (VE without virtual fire, control group) or condition 2 (VE with virtual fire, treatment group). Compared to the control group, the treatment group showed significantly higher skin conductivity and heart rate, experienced more stress, took longer to notice the evacuation signs, searched visually more quickly, and took longer to escape to the exit. These results indicate that the treatment condition induced higher physiological and psychological stress and influenced escape behaviour compared to the control condition. In practice, fire evacuation education and fire evacuation system design should consider these response characteristics in a fire emergency. PMID:24697193

  7. Natural perceptual wayfinding for urban accessibility of the elderly with early-stage AD

    Directory of Open Access Journals (Sweden)

    Giuliana Frau

    2015-04-01

    Full Text Available Population ageing and the increase in neurodegenerative diseases that lead to dementia, together with growing urbanisation, cause us to reflect on an important aspect of life in the city for elderly people: the ability to move around independently without getting lost and to find their way back home. By reviewing the existing literature on the theme of wayfinding and analysing some data on residual capacities in the early stages of Alzheimer’s Disease, the concept of ‘natural perceptual wayfinding’ is introduced, aimed, on the one hand, at improving urban accessibility of people with dementia and, on the other, at reconsidering a topic of vital importance, even if normally neglected in the dwelling design.

  8. Landmarks in nature to support wayfinding: the effects of seasons and experimental methods.

    Science.gov (United States)

    Kettunen, Pyry; Irvankoski, Katja; Krause, Christina M; Sarjakoski, L Tiina

    2013-08-01

    Landmarks constitute an essential basis for a structural understanding of the spatial environment. Therefore, they are crucial factors in external spatial representations such as maps and verbal route descriptions, which are used to support wayfinding. However, selecting landmarks for these representations is a difficult task, for which an understanding of how people perceive and remember landmarks in the environment is needed. We investigated the ways in which people perceive and remember landmarks in nature using the thinking aloud and sketch map methods during both the summer and the winter seasons. We examined the differences between methods to identify those landmarks that should be selected for external spatial representations, such as maps or route descriptions, in varying conditions. We found differences in the use of landmarks both in terms of the methods and also between the different seasons. In particular, the participants used passage and tree-related landmarks at significantly different frequencies with the thinking aloud and sketch map methods. The results are likely to reflect the different roles of the landmark groups when using the two methods, but also the differences in counting landmarks when using both methods. Seasonal differences in the use of landmarks occurred only with the thinking aloud method. Sketch maps were drawn similarly in summertime and wintertime; the participants remembered and selected landmarks similarly independent of the differences in their perceptions of the environment due to the season. The achieved results may guide the planning of external spatial representations within the context of wayfinding as well as when planning further experimental studies.

  9. Mobile phone-based asthma self-management aid for adolescents (mASMAA): a feasibility study

    OpenAIRE

    Rhee H; Allen J.; Mammen J; Swift M

    2014-01-01

    Hyekyun Rhee,1 James Allen,2 Jennifer Mammen,1 Mary Swift2 1School of Nursing, 2Department of Computer Science, University of Rochester, Rochester, NY, USA. Purpose: Adolescents report high asthma-related morbidity that can be prevented by adequate self-management of the disease. Therefore, there is a need for a developmentally appropriate strategy to promote effective asthma self-management. Mobile phone-based technology is portable, commonly accessible, and well received by adolescents. The pu...

  10. Mobile Phone-Based Lifestyle Intervention for Reducing Overall Cardiovascular Disease Risk in Guangzhou, China: A Pilot Study.

    Science.gov (United States)

    Liu, Zhiting; Chen, Songting; Zhang, Guanrong; Lin, Aihua

    2015-12-17

    With the rapid and widespread adoption of mobile devices, mobile phones offer an opportunity to deliver cardiovascular disease (CVD) interventions. This study evaluated the efficacy of a mobile phone-based lifestyle intervention aimed at reducing the overall CVD risk at a health management center in Guangzhou, China. We recruited 589 workers from eight work units. Based on a group-randomized design, work units were randomly assigned either to receive the mobile phone-based lifestyle intervention or usual care. The reduction in 10-year CVD risk at 1-year follow-up for the intervention group was not statistically significant (-1.05%, p = 0.096). However, the mean risk increased significantly by 1.77% (p = 0.047) for the control group. The difference of the changes between treatment arms in CVD risk was -2.83% (p = 0.001). In addition, there were statistically significant changes for the intervention group relative to the controls, from baseline to year 1, in systolic blood pressure (-5.55 vs. 6.89 mmHg). Mobile phone-based intervention may therefore be a potential solution for reducing CVD risk in China.
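The headline -2.83% above is the between-arm difference of the two within-group changes (a difference-in-differences). Recomputing it from the rounded group means gives -2.82%; the small discrepancy presumably reflects the abstract reporting rounded means:

```python
# Mean 10-year CVD risk changes reported in the abstract (percentage points).
intervention_change = -1.05  # intervention arm, baseline to year 1
control_change = 1.77        # control arm, baseline to year 1

# Between-arm difference of within-group changes.
difference = intervention_change - control_change
print(round(difference, 2))  # → -2.82
```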

  11. Cell Phone-Based and Adherence Device Technologies for HIV Care and Treatment in Resource-Limited Settings: Recent Advances.

    Science.gov (United States)

    Campbell, Jeffrey I; Haberer, Jessica E

    2015-12-01

    Numerous cell phone-based and adherence monitoring technologies have been developed to address barriers to effective HIV prevention, testing, and treatment. Because most people living with HIV and AIDS reside in resource-limited settings (RLS), it is important to understand the development and use of these technologies in RLS. Recent research on cell phone-based technologies has focused on HIV education, linkage to and retention in care, disease tracking, and antiretroviral therapy adherence reminders. Advances in adherence devices have focused on real-time adherence monitors, which have been used for both antiretroviral therapy and pre-exposure prophylaxis. Real-time monitoring has recently been combined with cell phone-based technologies to create real-time adherence interventions using short message service (SMS). New developments in adherence technologies are exploring ingestion monitoring and metabolite detection to confirm adherence. This article provides an overview of recent advances in these two families of technologies and includes research on their acceptability and cost-effectiveness when available. It additionally outlines key challenges and needed research as use of these technologies continues to expand and evolve. PMID:26439917

  12. Mobile phone base stations and adverse health effects: phase 1 of a population-based, cross-sectional study in Germany

    DEFF Research Database (Denmark)

    Blettner, M; Schlehofer, B; Breckenkamp, J;

    2009-01-01

    [...] cross-sectional study within the context of a large panel survey regularly carried out by a private research institute in Germany. In the initial phase, reported on in this paper, 30,047 persons from a total of 51,444 who took part in the nationwide survey also answered questions on how mobile phone base stations [...].7% of participants were concerned about adverse health effects of mobile phone base stations, while an additional 10.3% attributed their personal adverse health effects to the exposure from them. Participants who were concerned about or attributed adverse health effects to mobile phone base stations and those living [...]

  13. The feasibility of cell phone based electronic diaries for STI/HIV research

    Directory of Open Access Journals (Sweden)

    Hensel Devon J

    2012-06-01

    Full Text Available Abstract Background Self-reports of sensitive, socially stigmatized or illegal behavior are common in STI/HIV research, but can raise challenges in terms of data reliability and validity. The use of electronic data collection tools, including ecological momentary assessment (EMA), can increase the accuracy of this information by allowing a participant to self-administer a survey or diary entry, in their own environment, as close to the occurrence of the behavior as possible. In this paper, we evaluate the feasibility of using cell phone-based EMA as a tool for understanding sexual risk and STI among adult men and women. Methods As part of a larger prospective clinical study on sexual risk behavior and incident STI in clinically recruited adult men and women, participants (N = 243) completed thrice-daily EMA diaries on study-provided cell phones, monitoring individual and partner-specific emotional attributes, non-sexual activities, non-coital or coital sexual behaviors, and contraceptive behaviors. Using these data, we assess feasibility in terms of participant compliance, behavior reactivity, general method acceptability and method efficacy for capturing behaviors. Results Participants were highly compliant with the diary entry protocol and schedule: over the entire 12 study weeks, participants submitted 89.7% (54,914/61,236) of the expected diary entries, with an average of 18.86 of the 21 expected diaries (85.7%) each week. Submission did not differ substantially across gender, race/ethnicity and baseline sexually transmitted infection status. A sufficient volume and range of sexual behaviors were captured, with reporting trends in different legal and illegal behaviors showing small variation over time. Participants found the methodology acceptable, and enjoyed and felt comfortable participating in the study. Conclusion Achieving the correct medium of data collection can drastically improve, or degrade, the timeliness and quality of an
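The compliance figures above are internally consistent: three diaries a day over 12 weeks (84 days) for 243 participants gives exactly the 61,236 expected entries, and the submitted count reproduces the 89.7% rate. A quick check:

```python
# Figures from the abstract: 243 participants, thrice-daily diaries, 12 weeks.
participants = 243
diaries_per_day = 3
study_weeks = 12

expected = participants * diaries_per_day * 7 * study_weeks  # 61,236 entries
submitted = 54_914

compliance = submitted / expected
print(expected, round(100 * compliance, 1))  # → 61236 89.7
```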

  14. CCD Camera

    Science.gov (United States)

    Roth, Roger R.

    1983-01-01

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown, wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  15. Wayfinding: a quality factor in human design approach to healthcare facilities.

    Science.gov (United States)

    Del Nord, R

    1999-01-01

    The specific aim of this paper is the systematic analysis of interactions and reciprocal conditions existing between the physical space of hospital buildings and the different categories of individuals that come in contact with them. The physical and environmental facilities of hospital architecture often influence the therapeutic character of space and the employees. If the values of the individual are to be safeguarded in this context, priority needs to be given to such factors as communication, privacy, etc. This would mean the involvement of other professional groups such as psychologists, sociologists, ergonomists, etc. at the hospital building planning stage. This paper will outline the result of some research conducted at the University Research Center "TESIS" of Florence to provide better understanding of design strategies applied to reduce the pathology of spaces within the healthcare environment. The case studies will highlight the parameters and the possible architectural solutions to wayfinding and the humanization of spaces, with particular emphasis on lay-outs, technologies, furniture and finishing design. PMID:10622912

  17. Phases in development of an interactive mobile phone-based system to support self-management of hypertension

    Directory of Open Access Journals (Sweden)

    Hallberg I

    2014-05-01

    Full Text Available Inger Hallberg,1,11 Charles Taft,1,11 Agneta Ranerup,2,11 Ulrika Bengtsson,1,11 Mikael Hoffmann,3,10 Stefan Höfer,4 Dick Kasperowski,5 Åsa Mäkitalo,6 Mona Lundin,6 Lena Ring,7,8 Ulf Rosenqvist,9 Karin Kjellgren1,10,11 1Institute of Health and Care Sciences, 2Department of Applied Information Technology, University of Gothenburg, Gothenburg, 3The NEPI Foundation, Linköping, Sweden; 4Department of Medical Psychology, Innsbruck Medical University, Innsbruck, Austria; 5Department of Philosophy, Linguistics and Theory of Science, 6Department of Education, Communication and Learning, University of Gothenburg, Gothenburg, 7Centre for Research Ethics and Bioethics, Uppsala University, 8Department of Use of Medical Products, Medical Products Agency, Uppsala, 9Department of Medical Specialist and Department of Medical and Health Sciences, Linköping University, Motala, 10Department of Medical and Health Sciences, Linköping University, Linköping, 11Centre for Person-Centred Care, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden. Abstract: Hypertension is a significant risk factor for heart disease and stroke worldwide. Effective treatment regimens exist; however, treatment adherence rates are poor (30%–50%). Improving self-management may be a way to increase adherence to treatment. The purpose of this paper is to describe the phases in the development and preliminary evaluation of an interactive mobile phone-based system aimed at supporting patients in self-managing their hypertension. A person-centered and participatory framework emphasizing patient involvement was used. An interdisciplinary group of researchers, patients with hypertension, and health care professionals who were specialized in hypertension care designed and developed a set of questions and motivational messages for use in an interactive mobile phone-based system. Guided by the US Food and Drug Administration framework for the development of patient-reported outcome

  18. Assessment of radiofrequency/microwave radiation emitted by the antennas of rooftop-mounted mobile phone base stations

    International Nuclear Information System (INIS)

    Radiofrequency (RF) and microwave (MW) radiation exposure from the antennas of rooftop-mounted mobile telephone base stations has become a serious issue in recent years due to the rapidly evolving technologies in wireless telecommunication systems. In Malaysia, thousands of mobile telephone base stations have been erected all over the country, most of which are mounted on rooftops. In view of public concerns, measurements of the RF/MW levels emitted by the base stations were carried out in this study. The values were compared with the exposure limits set by several organisations and countries. Measurements were performed at 200 sites around 47 mobile phone base stations. It was found that the RF/MW radiation levels from these base stations were well below the maximum exposure limits set by the various agencies. (authors)

  19. Flying solo: A review of the literature on wayfinding for older adults experiencing visual or cognitive decline.

    Science.gov (United States)

    Bosch, Sheila J; Gharaveis, Arsalan

    2017-01-01

    Accessible tourism is a growing market within the travel industry, but little research has focused on travel barriers for older adults who may be experiencing visual and cognitive decline as part of the normal aging process, illness, or other disabling conditions. Travel barriers, such as difficulty finding one's way throughout an airport, may adversely affect older adults' travel experience, thereby reducing their desire to travel. This review of the literature investigates wayfinding strategies to ensure that older passengers who have planned to travel independently can do so with dignity. These include facility planning and design strategies (e.g., layout, signage) and technological solutions. Although technological approaches, such as smart phone apps, appear to offer the most promising new solutions for enhancing airport navigation, more traditional approaches, such as designing facilities with an intuitive building layout, are still heavily relied upon in the aviation industry. While there are many design guidelines for enhancing wayfinding for older adults, many are not based on scientific investigation.

  20. Color Targets: Fiducials to Help Visually Impaired People Find Their Way by Camera Phone

    Directory of Open Access Journals (Sweden)

    Manduchi Roberto

    2007-01-01

    Full Text Available A major challenge faced by the blind and visually impaired population is that of wayfinding—the ability of a person to find his or her way to a given destination. We propose a new wayfinding aid based on a camera cell phone, which is held by the user to find and read aloud specially designed machine-readable signs, which we call color targets, in indoor environments (labeling locations such as offices and restrooms). Our main technical innovation is that we have designed the color targets to be detected and located in fractions of a second on the cell phone CPU, even at a distance of several meters. Once the sign has been quickly detected, nearby information in the form of a barcode can be read, an operation that typically requires more computational time. An important contribution of this paper is a principled method for optimizing the design of the color targets and the color target detection algorithm based on training data, instead of relying on heuristic choices as in our previous work. We have implemented the system on a Nokia 7610 cell phone, and preliminary experiments with blind subjects demonstrate the feasibility of using the system as a real-time wayfinding aid.
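The core of the approach above is a detector cheap enough to run in fractions of a second on a phone CPU. A toy stand-in for the paper's trained detector is sketched below: it scans each scanline for a fixed sequence of saturated colors. The pattern, the dominance thresholds, and the synthetic image are all invented for illustration:

```python
def classify(pixel):
    """Crudely bucket a pixel into 'R', 'G', 'B' or None by channel dominance."""
    r, g, b = pixel
    if r > 2 * g and r > 2 * b:
        return "R"
    if g > 2 * r and g > 2 * b:
        return "G"
    if b > 2 * r and b > 2 * g:
        return "B"
    return None

def find_target(image, pattern=("R", "G", "B")):
    """Return (row, col) of the first left-to-right match of the color pattern."""
    for row, line in enumerate(image):
        labels = [classify(p) for p in line]
        for col in range(len(labels) - len(pattern) + 1):
            if tuple(labels[col:col + len(pattern)]) == pattern:
                return (row, col)
    return None

# Tiny synthetic "image": the second scanline contains a red-green-blue target.
grey = (120, 120, 120)
image = [
    [grey] * 6,
    [grey, (200, 30, 20), (20, 210, 30), (10, 40, 220), grey, grey],
]
print(find_target(image))  # → (1, 1)
```

A single pass over the pixels like this is O(width x height), which is what makes such tricolor fiducials attractive on constrained hardware.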

  1. Mobile phone base stations and adverse health effects: phase 2 of a cross-sectional study with measured radio frequency electromagnetic fields

    DEFF Research Database (Denmark)

    Berg-Beckhoff, Gabriele; Blettner, M; Kowall, B;

    2009-01-01

    [...] in urban regions were selected from a nationwide study in 2006. In total, 3526 persons responded to a questionnaire (response rate 85%). For the exposure assessment a dosimeter measuring different RF-EMF frequencies was used. Participants answered a postal questionnaire on how mobile phone base stations [...]

  2. A portable smart phone-based plasmonic nanosensor readout platform that measures transmitted light intensities of nanosubstrates using an ambient light sensor.

    Science.gov (United States)

    Fu, Qiangqiang; Wu, Ze; Xu, Fangxiang; Li, Xiuqing; Yao, Cuize; Xu, Meng; Sheng, Liangrong; Yu, Shiting; Tang, Yong

    2016-05-21

    Plasmonic nanosensors may be used as tools for diagnostic testing in the field of medicine. However, quantification of plasmonic nanosensors often requires complex and bulky readout instruments. Here, we report the development of a portable smart phone-based plasmonic nanosensor readout platform (PNRP) for accurate quantification of plasmonic nanosensors. This device operates by transmitting excitation light from an LED through a nanosubstrate and measuring the intensity of the transmitted light using the ambient light sensor of a smart phone. The device is a cylinder with a diameter of 14 mm, a length of 38 mm, and a gross weight of 3.5 g. We demonstrated the utility of this smart phone-based PNRP by measuring two well-established plasmonic nanosensors with this system. In the first experiment, the device measured the morphology changes of triangular silver nanoprisms (AgNPRs) in an immunoassay for the detection of carcinoembryonic antigen (CEA). In the second experiment, the device measured the aggregation of gold nanoparticles (AuNPs) in an aptamer-based assay for the detection of adenosine triphosphate (ATP). The results from the smart phone-based PNRP were consistent with those from commercial spectrophotometers, demonstrating that the smart phone-based PNRP enables accurate quantification of plasmonic nanosensors. PMID:27137512
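One plausible way a transmitted-light readout of this kind maps a raw sensor value to an analyte estimate is via absorbance and a linear calibration. This is a hedged sketch, not the paper's published procedure; the blank reading, sample reading, and calibration constants are all invented:

```python
import math

def absorbance(i_transmitted, i_blank):
    """A = -log10(I / I0): Beer-Lambert absorbance from raw light readings."""
    return -math.log10(i_transmitted / i_blank)

def concentration(a, slope=0.05, intercept=0.0):
    """Invert a hypothetical linear calibration A = slope * C + intercept."""
    return (a - intercept) / slope

i0 = 800.0   # ambient-light-sensor reading through a blank (arbitrary units)
i = 550.0    # reading through the nanosubstrate
a = absorbance(i, i0)
print(round(a, 4), round(concentration(a), 2))  # → 0.1627 3.25
```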

  3. A comparative study of radiofrequency emission from roof top mobile phone base station antennas and tower mobile phone base antennas located at some selected cell sites in Accra, Ghana

    International Nuclear Information System (INIS)

RF radiation exposure from antennas mounted on rooftop mobile phone base stations has become a serious issue in recent years due to rapidly developing wireless telecommunication technologies. The growing number of base stations and their closeness to the general public have led to possible health concerns about exposure to RF radiation. The primary objective of this study was to assess the level of RF radiation emitted from rooftop mobile phone base station antennas and compare the measured results with the guidelines set by the International Commission on Non-Ionizing Radiation Protection (ICNIRP). The maximum and minimum average power densities measured inside buildings at the rooftop sites were 2.46x10^-2 and 1.68x10^-3 W/m2, respectively, whereas those outside buildings at the same rooftop sites were 3.35x10^-3 and 7.44x10^-5 W/m2, respectively. The public exposure quotient ranged from 3.74x10^-10 to 1.31x10^-7 inside buildings, while outside it varied between 7.44x10^-10 and 1.65x10^-6. The occupational exposure quotient inside buildings varied between 1.66x10^-11 and 2.11x10^-9, whereas outside it ranged from 3.31x10^-9 to 3.30x10^-7, all at the rooftop sites. The results obtained for a typical tower base station indicated maximum and minimum average power densities of 4.57x10^-1 W/m2 and 7.13x10^-3 W/m2, respectively. The corresponding public exposure quotient varied between 1.58x10^-9 and 1.01x10^-7, while the occupational exposure quotient ranged from 3.17x10^-10 to 2.03x10^-8. The power density levels inside buildings at rooftop sites are low compared with those at tower sites, which could be due to high attenuation caused by thick concrete walls and ceilings. The results were found to comply with the ICNIRP and FCC guidance levels of 4.5 W/m2 and 6 W/m2, respectively. (au)
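The exposure quotients reported above follow the standard pattern of dividing a measured power density by the applicable guideline reference level, with quotients below 1 indicating compliance. A hedged sketch of that arithmetic; the limit values are the guidance levels cited in the abstract, and the helper function is illustrative:

```python
def exposure_quotient(measured_w_m2: float, limit_w_m2: float) -> float:
    """Ratio of a measured power density to a guideline reference level.

    Values below 1.0 indicate compliance with the guideline.
    """
    return measured_w_m2 / limit_w_m2

ICNIRP_LEVEL = 4.5  # W/m^2, guidance level cited in the study
FCC_LEVEL = 6.0     # W/m^2, guidance level cited in the study

# Worst case in the study: maximum average power density at the tower site
worst_case = 4.57e-1  # W/m^2
q = exposure_quotient(worst_case, ICNIRP_LEVEL)
print(q < 1.0)  # True: well below the guideline
```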

  4. Mobile phone-based asthma self-management aid for adolescents (mASMAA): a feasibility study

    Directory of Open Access Journals (Sweden)

    Rhee H

    2014-01-01

    Full Text Available Hyekyun Rhee,1 James Allen,2 Jennifer Mammen,1 Mary Swift2 (1School of Nursing, 2Department of Computer Science, University of Rochester, Rochester, NY, USA). Purpose: Adolescents report high asthma-related morbidity that can be prevented by adequate self-management of the disease. Therefore, there is a need for a developmentally appropriate strategy to promote effective asthma self-management. Mobile phone-based technology is portable, commonly accessible, and well received by adolescents. The purpose of this study was to develop and evaluate the feasibility and acceptability of a comprehensive mobile phone-based asthma self-management aid for adolescents (mASMAA) that was designed to facilitate symptom monitoring, treatment adherence, and adolescent–parent partnership. The system used state-of-the-art natural language-understanding technology that allowed teens to use unconstrained English in their texts, and to self-initiate interactions with the system. Materials and methods: mASMAA was developed based on an existing natural dialogue system that supports broad coverage of everyday natural conversation in English. Fifteen adolescent–parent dyads participated in a 2-week trial that involved adolescents' daily scheduled and unscheduled interactions with mASMAA and parents responding to daily reports on adolescents' asthma condition automatically generated by mASMAA. Subsequently, four focus groups were conducted to systematically obtain user feedback on the system. Frequency data on the daily usage of mASMAA over the 2-week period were tabulated, and content analysis was conducted for the focus group interview data. Results: Response rates for daily text messages were 81%–97% in adolescents. The average number of self-initiated messages to mASMAA was 19 per adolescent. Symptoms were the most common topic of teen-initiated messages. Participants concurred that use of mASMAA improved awareness of symptoms and triggers, promoted treatment adherence and

  5. Integrating mobile-phone based assessment for psychosis into people’s everyday lives and clinical care: a qualitative study

    Directory of Open Access Journals (Sweden)

    Palmier-Claus Jasper E

    2013-01-01

    Full Text Available Abstract. Background: Over the past decade policy makers have emphasised the importance of healthcare technology in the management of long-term conditions. Mobile phone-based assessment may be one method of facilitating clinically- and cost-effective intervention, and increasing the autonomy and independence of service users. Recently, text-message and smartphone interfaces have been developed for the real-time assessment of symptoms in individuals with schizophrenia. Little is currently understood about patients' perceptions of these systems, and how they might be implemented into their everyday routine and clinical care. Method: 24 community-based individuals with non-affective psychosis completed a randomised repeated-measures cross-over design study, in which they filled in self-report questions about their symptoms via text messages on their own phone, or via a purpose-designed software application for Android smartphones, for six days. Qualitative interviews were conducted in order to explore participants' perceptions and experiences of the devices, and thematic analysis was used to analyse the data. Results: Three themes emerged from the data: (i) the appeal of usability and familiarity, (ii) acceptability, validity and integration into domestic routines, and (iii) perceived impact on clinical care. Although participants generally found the technology non-stigmatising and well integrated into their everyday activities, the repetitiveness of the questions was identified as a likely barrier to long-term adoption. Potential benefits to the quality of care received were seen in terms of assisting clinicians, faster and more efficient data exchange, and aiding patient–clinician communication. However, patients often failed to see the relevance of the systems to their personal situations, and emphasised the threat to the person-centred element of their care. Conclusions: The feedback presented in this paper suggests that patients are conscious of the

  6. Effects of competing environmental variables and signage on route-choices in simulated everyday and emergency wayfinding situations.

    Science.gov (United States)

    Vilar, Elisângela; Rebelo, Francisco; Noriega, Paulo; Duarte, Emília; Mayhorn, Christopher B

    2014-01-01

    This study examined the relative influence of environmental variables (corridor width and brightness) and signage (directional and exit signs), when presented in competition, on participants' route choices in two situations (everyday vs. emergency) during indoor wayfinding in virtual environments. A virtual reality-based methodology was used: participants attempted to find a room in a virtual hotel (everyday situation), followed by a fire-related emergency egress (emergency situation). Different behaviours were observed. In the everyday situation, under the no-signs condition, participants mostly chose the wider and brighter corridors, suggesting a heavy reliance on environmental affordances. Conversely, under the signs condition, participants mostly complied with signage, suggesting a greater reliance on the signs than on the environmental cues. During the emergency, without signage, reliance on environmental affordances appeared to be affected by the intersection type. In the signs condition, the initially strong reliance on environmental affordances decreased along the egress route.

  7. Interpretation of way-finding healthcare symbols by a multicultural population: navigation signage design for global health.

    Science.gov (United States)

    Hashim, Muhammad Jawad; Alkaabi, Mariam Salem Khamis Matar; Bharwani, Sulaiman

    2014-05-01

    The interpretation of way-finding symbols for healthcare facilities in a multicultural community was assessed in a cross-sectional study. One hundred participants recruited from Al Ain city in the United Arab Emirates were asked to interpret 28 healthcare symbols developed at Hablamos Juntos (such as vaccinations and laboratory) as well as 18 general-purpose symbols (such as elevators and restrooms). Participants' mean age was 27.6 years (range 16-55 years), and 84 (84%) were female. Healthcare symbols were more difficult to comprehend than general-purpose signs. Symbols referring to abstract concepts were the most misinterpreted, including oncology, diabetes education, outpatient clinic, interpretive services, pharmacy, internal medicine, registration, social services, obstetrics and gynecology, pediatrics and infectious diseases. Interpretation rates varied across cultural backgrounds and increased with higher education and younger age. Signage within healthcare facilities should be tested among older persons, those with limited literacy, and across a wide range of cultures.

  8. Proactive PTZ Camera Control

    Science.gov (United States)

    Qureshi, Faisal Z.; Terzopoulos, Demetri

    We present a visual sensor network—comprising wide field-of-view (FOV) passive cameras and pan/tilt/zoom (PTZ) active cameras—capable of automatically capturing closeup video of selected pedestrians in a designated area. The passive cameras can track multiple pedestrians simultaneously and any PTZ camera can observe a single pedestrian at a time. We propose a strategy for proactive PTZ camera control where cameras plan ahead to select optimal camera assignment and handoff with respect to predefined observational goals. The passive cameras supply tracking information that is used to control the PTZ cameras.
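The camera-to-pedestrian assignment step described above can be illustrated with a simple greedy heuristic. The authors' planner is more sophisticated (it plans ahead to optimise assignments and handoffs), so treat this as a toy sketch of the one-pedestrian-per-PTZ-camera constraint only, with made-up names and a squared-distance cost:

```python
from itertools import product

def greedy_assign(cameras, pedestrians, cost):
    """Pair each PTZ camera with at most one pedestrian, cheapest pairs first."""
    pairs = sorted(product(cameras, pedestrians), key=lambda cp: cost(*cp))
    taken_cams, taken_peds, assignment = set(), set(), {}
    for cam, ped in pairs:
        if cam not in taken_cams and ped not in taken_peds:
            assignment[cam] = ped
            taken_cams.add(cam)
            taken_peds.add(ped)
    return assignment

# Toy example: camera and pedestrian positions (from the passive trackers)
cams = {"ptz1": (0, 0), "ptz2": (10, 0)}
peds = {"a": (1, 1), "b": (9, 1)}
dist2 = lambda c, p: (cams[c][0] - peds[p][0]) ** 2 + (cams[c][1] - peds[p][1]) ** 2
print(greedy_assign(cams, peds, dist2))  # {'ptz1': 'a', 'ptz2': 'b'}
```

A real system would recompute this assignment as tracks evolve and add handoff costs so a camera keeps its current target unless a switch is clearly better.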

  9. Putting prevention in their pockets: developing mobile phone-based HIV interventions for black men who have sex with men.

    Science.gov (United States)

    Muessig, Kathryn E; Pike, Emily C; Fowler, Beth; LeGrand, Sara; Parsons, Jeffrey T; Bull, Sheana S; Wilson, Patrick A; Wohl, David A; Hightow-Weidman, Lisa B

    2013-04-01

    Young black men who have sex with men (MSM) bear a disproportionate burden of HIV. Rapid expansion of mobile technologies, including smartphone applications (apps), provides a unique opportunity for outreach and tailored health messaging. We collected electronic daily journals and conducted surveys and focus groups with 22 black MSM (age 18-30) at three sites in North Carolina to inform the development of a mobile phone-based intervention. Qualitative data was analyzed thematically using NVivo. Half of the sample earned under $11,000 annually. All participants owned smartphones and had unlimited texting and many had unlimited data plans. Phones were integral to participants' lives and were a primary means of Internet access. Communication was primarily through text messaging and Internet (on-line chatting, social networking sites) rather than calls. Apps were used daily for entertainment, information, productivity, and social networking. Half of participants used their phones to find sex partners; over half used phones to find health information. For an HIV-related app, participants requested user-friendly content about test site locators, sexually transmitted diseases, symptom evaluation, drug and alcohol risk, safe sex, sexuality and relationships, gay-friendly health providers, and connection to other gay/HIV-positive men. For young black MSM in this qualitative study, mobile technologies were a widely used, acceptable means for HIV intervention. Future research is needed to measure patterns and preferences of mobile technology use among broader samples.

  10. Encouraging 5-year olds to attend to landmarks: A way to improve children’s wayfinding strategies in a virtual environment.

    Directory of Open Access Journals (Sweden)

    Jamie eLingwood

    2015-03-01

    Full Text Available Wayfinding can be defined as the ability to learn and remember a route through an environment. Previous researchers have shown that young children have difficulties remembering routes. However, very few researchers have considered how to improve young children's wayfinding abilities. Therefore, we investigated ways to help children increase their wayfinding skills. In two studies, a total of 72 5-year-olds were shown a route in a six-turn virtual environment and were then asked to retrace this route by themselves. A unique landmark was positioned at each junction, and each junction was made up of two paths: a correct choice and an incorrect choice. Two different strategies improved route learning performance. In Experiment 1, verbally labelling landmarks at junctions during the first walk reduced children's errors at turns, and the number of trials they needed to reach the learning criterion. In Experiment 2, encouraging children to attend to landmarks at junctions on the first walk reduced the children's errors when making a turn. This is the first study to show that very young children can be taught effective route learning skills.

  11. Are people living next to mobile phone base stations more strained? Relationship of health concerns, self-estimated distance to base station, and psychological parameters

    OpenAIRE

    Augner Christoph; Hacker Gerhard

    2009-01-01

    Background and Aims: Coeval with the expansion of mobile phone technology and the associated obvious presence of mobile phone base stations, some people living close to these masts reported symptoms they attributed to electromagnetic fields (EMF). Public and scientific discussions arose with regard to whether these symptoms were due to EMF or were nocebo effects. The aim of this study was to find out if people who believe that they live close to base stations show psychological or psychobiol...

  12. A web- and mobile phone-based intervention to prevent obesity in 4-year-olds (MINISTOP): a population-based randomized controlled trial

    OpenAIRE

    Delisle, Christine; Sandin, Sven; Forsum, Elisabet; Henriksson, Hanna; Trolle-Lagerros, Ylva; Larsson, Christel; Maddison, Ralph; Ortega Porcel, Francisco B.; Ruiz, Jonatan R.; Silfvernagel, Kristin; Timpka, Toomas; Löf, Marie

    2015-01-01

    Background: Childhood obesity is an increasing health problem globally. Overweight and obesity may be established as early as 2-5 years of age, highlighting the need for evidence-based effective prevention and treatment programs early in life. In adults, mobile phone based interventions for weight management (mHealth) have demonstrated positive effects on body mass, however, their use in child populations has yet to be examined. The aim of this paper is to report the study design and methodol...

  13. Optimization of measurement methods for a multi-frequency electromagnetic field from mobile phone base station using broadband EMF meter

    Directory of Open Access Journals (Sweden)

    Paweł Bieńkowski

    2015-10-01

    Full Text Available Background: This paper presents the characteristics of the mobile phone base station (BS) as an electromagnetic field (EMF) source. The most common system configurations and their construction are described. The parameters of the radiated EMF are discussed in the context of the access methods and other parameters of the radio transmission. Attention is also paid to the antennas used in this technology. Material and Methods: The influence of individual components of a multi-frequency EMF, most commonly found in BS surroundings, on the resultant EMF strength value indicated by popular broadband EMF meters was analyzed. Examples of the metrological characteristics of the most common EMF probes and 2 measurement scenarios for a multisystem base station, with and without microwave relays, are shown. Results: The presented method for measuring the multi-frequency EMF using 2 broadband probes allows for significant minimization of measurement uncertainty. Equations and formulas that can be used to calculate the actual EMF intensity from multi-frequency sources are shown. They have been verified under laboratory conditions on a standard setup as well as under real conditions in a survey of an existing base station with microwave relays. Conclusions: The presented measurement methodology for multi-frequency EMF from BS with microwave relays has been validated both in laboratory and real conditions. It has been proven to be the optimal approach to the evaluation of EMF exposure in BS surroundings. Alternative approaches with much greater uncertainty (the precaution method) or a more complex measuring procedure (the sources-exclusion method) are also presented. Med Pr 2015;66(5):701–712
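A broadband probe reports a single resultant value for all frequency components within its band; the resultant of several independent components is commonly taken as the root-sum-square of the individual field strengths. A minimal sketch of that combination, with illustrative values rather than the paper's data:

```python
import math

def resultant_field(components_v_per_m):
    """Root-sum-square of individual frequency-component field strengths (V/m)."""
    return math.sqrt(sum(e * e for e in components_v_per_m))

# e.g. separate GSM, UMTS and microwave-relay contributions (illustrative)
components = [1.2, 0.9, 0.4]  # V/m
print(round(resultant_field(components), 2))  # 1.55
```

Measuring with two probes of different frequency ranges, as the paper describes, lets one separate the microwave-relay contribution from the cellular bands before combining them.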

  14. Non-specific physical symptoms in relation to actual and perceived proximity to mobile phone base stations and powerlines

    Directory of Open Access Journals (Sweden)

    Bolte John

    2011-06-01

    Full Text Available Abstract. Background: Evidence about a possible causal relationship between non-specific physical symptoms (NSPS) and exposure to electromagnetic fields (EMF) emitted by sources such as mobile phone base stations (BS) and powerlines is insufficient. So far, little epidemiological research has been published on the contribution of psychological components to the occurrence of EMF-related NSPS. The primary objective of the current study is to explore the relative importance of actual and perceived proximity to base stations and psychological components as determinants of NSPS, adjusting for demographic, residency and area characteristics. Methods: Analysis was performed on data obtained in a cross-sectional study on environment and health conducted in 2006 in the Netherlands. In the current study, 3611 adult respondents (response rate: 37%) in twenty-two Dutch residential areas completed a questionnaire. Self-reported instruments included a symptom checklist and assessment of environmental and psychological characteristics. The computation of the distance between household addresses and the locations of base stations and powerlines was based on geo-coding. Multilevel regression models were used to test the hypotheses regarding the determinants related to the occurrence of NSPS. Results: After adjustment for demographic and residential characteristics, analyses yielded a number of statistically significant associations: increased report of NSPS was predominantly predicted by higher levels of self-reported environmental sensitivity; perceived proximity to base stations and powerlines, lower perceived control and increased avoidance (coping) behavior were also associated with NSPS. A trend towards a moderator effect of perceived environmental sensitivity on the relation between perceived proximity to BS and NSPS was verified (p = 0.055). There was no significant association between symptom occurrence and actual distance to BS or powerlines. Conclusions: Perceived proximity to BS

  15. Vacuum Camera Cooler

    Science.gov (United States)

    Laugen, Geoffrey A.

    2011-01-01

    Acquiring cheap, moving video was impossible in a vacuum environment, due to camera overheating. This overheating is brought on by the lack of cooling media in vacuum. A water-jacketed camera cooler enclosure machined and assembled from copper plate and tube has been developed. The camera cooler (see figure) is cup-shaped and cooled by circulating water or nitrogen gas through copper tubing. The camera, a store-bought "spy type," is not designed to work in a vacuum. With some modifications the unit can be thermally connected when mounted in the cup portion of the camera cooler. The thermal conductivity is provided by copper tape between parts of the camera and the cooled enclosure. During initial testing of the demonstration unit, the camera cooler kept the CPU (central processing unit) of this video camera at operating temperature. This development allowed video recording of an in-progress test, within a vacuum environment.

  16. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  17. Constrained space camera assembly

    Science.gov (United States)

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  18. Harpicon camera for HDTV

    Science.gov (United States)

    Tanada, Jun

    1992-08-01

    Ikegami has been involved in broadcast equipment ever since it was established as a company. In conjunction with NHK it has brought forth countless television cameras, from black-and-white cameras to color cameras, HDTV cameras, and special-purpose cameras. In the early days of HDTV (high-definition television, also known as "High Vision") the camera specifications differed from those of the present-day system, and cameras using all kinds of components, with different arrangements of components and different appearances, were developed into products, with time spent on experimentation, design, fabrication, adjustment, and inspection. But recently the know-how built up thus far in components, printed circuit boards, and wiring methods has been incorporated in camera fabrication, making it possible to make HDTV cameras by methods similar to the present system. In addition, more efficient production, lower costs, and better after-sales service are being achieved by using the same circuits, components, mechanism parts, and software for both HDTV cameras and cameras that operate by the present system.

  19. Digital Pinhole Camera

    Science.gov (United States)

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…

  20. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction completely depend on the camera, since the camera defines the player's point of view. Most research work in automatic camera control aims to take control of this aspect from the player to automatically generate cinematographic game experiences, reducing, however, the player's feeling of agency. We propose a methodology to integrate the player in the camera control loop that allows designers to create and generate personalised cinematographic experiences. Furthermore, we present an evaluation of the aforementioned methodology showing that the generated camera movements are positively perceived by novice and intermediate players.

  1. Telemonitoring and Mobile Phone-Based Health Coaching Among Finnish Diabetic and Heart Disease Patients: Randomized Controlled Trial

    Science.gov (United States)

    Karhula, Tuula; Rääpysjärvi, Katja; Pakanen, Mira; Itkonen, Pentti; Tepponen, Merja; Junno, Ulla-Maija; Jokinen, Tapio; van Gils, Mark; Lähteenmäki, Jaakko; Kohtamäki, Kari; Saranummi, Niilo

    2015-01-01

    Background There is a strong will and need to find alternative models of health care delivery driven by the ever-increasing burden of chronic diseases. Objective The purpose of this 1-year trial was to study whether a structured mobile phone-based health coaching program, which was supported by a remote monitoring system, could be used to improve the health-related quality of life (HRQL) and/or the clinical measures of type 2 diabetes and heart disease patients. Methods A randomized controlled trial was conducted among type 2 diabetes patients and heart disease patients of the South Karelia Social and Health Care District. Patients were recruited by sending invitations to randomly selected patients using the electronic health records system. Health coaches called patients every 4 to 6 weeks and patients were encouraged to self-monitor their weight, blood pressure, blood glucose (diabetics), and steps (heart disease patients) once per week. The primary outcome was HRQL measured by the Short Form (36) Health Survey (SF-36) and glycosylated hemoglobin (HbA1c) among diabetic patients. The clinical measures assessed were blood pressure, weight, waist circumference, and lipid levels. Results A total of 267 heart patients and 250 diabetes patients started in the trial, of which 246 and 225 patients concluded the end-point assessments, respectively. Withdrawal from the study was associated with the patients’ unfamiliarity with mobile phones—of the 41 dropouts, 85% (11/13) of the heart disease patients and 88% (14/16) of the diabetes patients were familiar with mobile phones, whereas the corresponding percentages were 97.1% (231/238) and 98.6% (208/211), respectively, among the rest of the patients (P=.02 and P=.004). Withdrawal was also associated with heart disease patients’ comorbidities—40% (8/20) of the dropouts had at least one comorbidity, whereas the corresponding percentage was 18.9% (47/249) among the rest of the patients (P=.02). The intervention showed

  2. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
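The auto-covariance function mentioned above, used to expose the twice-per-rev error in the inter-camera quaternion, can be estimated from any scalar time series. A plain estimator sketch, not the authors' actual processing chain:

```python
def autocovariance(x, lag):
    """Sample auto-covariance of series x at the given lag."""
    n = len(x)
    if not 0 <= lag < n:
        raise ValueError("lag must satisfy 0 <= lag < len(x)")
    mean = sum(x) / n
    return sum((x[i] - mean) * (x[i + lag] - mean)
               for i in range(n - lag)) / (n - lag)

# Lag 0 recovers the series variance about the sample mean
print(autocovariance([1.0, 2.0, 3.0, 4.0], 0))  # 1.25
```

Applied to a quaternion-derived attitude-error series sampled once per orbit position, a periodic error such as the twice-per-rev term shows up as oscillation in the auto-covariance as the lag sweeps through multiples of the period.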

  3. Microchannel plate streak camera

    Science.gov (United States)

    Wang, Ching L.

    1989-01-01

    An improved streak camera in which a microchannel plate electron multiplier is used in place of or in combination with the photocathode used in prior streak cameras. The improved streak camera is far more sensitive to photons (UV to gamma-rays) than the conventional x-ray streak camera which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1000 KeV x-rays.

  4. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  5. Polarization encoded color camera.

    Science.gov (United States)

    Schonbrun, Ethan; Möller, Guðfríður; Di Caprio, Giuseppe

    2014-03-15

    Digital cameras would be colorblind if they did not have pixelated color filters integrated into their image sensors. Integration of conventional fixed filters, however, comes at the expense of an inability to modify the camera's spectral properties. Instead, we demonstrate a micropolarizer-based camera that can reconfigure its spectral response. Color is encoded into a linear polarization state by a chiral dispersive element and then read out in a single exposure. The polarization encoded color camera is capable of capturing three-color images at wavelengths spanning the visible to the near infrared. PMID:24690806

  6. Ringfield lithographic camera

    Science.gov (United States)

    Sweatt, William C.

    1998-01-01

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D.sub.source .apprxeq.0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  7. The Application of Architecture Metaphor in Historical and Cultural Block Wayfinding Design%城市历史文化街区导识系统设计中的“建筑隐喻”※

    Institute of Scientific and Technical Information of China (English)

    张莉娜; 王宗雪

    2013-01-01

    Taking the influence of architectural metaphor on wayfinding systems for urban historical and cultural blocks as its point of departure, and using the wayfinding system designs of the Nanluoguxiang and Yandai Xie Street historical and cultural blocks in Beijing as examples, this paper analyses architectural metaphor in wayfinding system design from two aspects: architectural form and architectural decoration. It proposes that the value of architectural metaphor in wayfinding system design lies not in a simple "copy" of traditional visual symbols, but in the updating and renewal of the regional cultural image.

  8. Camera Operator and Videographer

    Science.gov (United States)

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  9. CCD Luminescence Camera

    Science.gov (United States)

    Janesick, James R.; Elliott, Tom

    1987-01-01

    New diagnostic tool used to understand performance and failures of microelectronic devices. Microscope integrated with low-noise charge-coupled-device (CCD) camera to produce new instrument for analyzing performance and failures of microelectronic devices that emit infrared light during operation. CCD camera also used to identify very clearly parts that have failed, where luminescence typically found.

  10. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped significantly, opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.
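The physics underlying this survey can be made concrete with the Stefan-Boltzmann law; the sketch below shows why a warm body stands out against cooler surroundings in a thermal image. The temperature and emissivity values are illustrative assumptions, not figures from the survey.

```python
# Total radiant exitance of a grey body via the Stefan-Boltzmann law,
# illustrating the contrast a thermal camera sees between objects at
# different temperatures.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance(temp_kelvin: float, emissivity: float = 1.0) -> float:
    """Power radiated per unit area (W/m^2) by a grey body."""
    return emissivity * SIGMA * temp_kelvin ** 4

# Assumed example values: human skin (~37 C) against a room-temperature wall.
skin = radiant_exitance(310.0, emissivity=0.98)
wall = radiant_exitance(293.0, emissivity=0.98)
print(f"skin: {skin:.0f} W/m^2, wall: {wall:.0f} W/m^2, "
      f"contrast: {skin - wall:.0f} W/m^2")
```

The fourth-power dependence on temperature is what makes even modest temperature differences readily detectable without any illumination.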

  11. Dry imaging cameras

    Directory of Open Access Journals (Sweden)

    I K Indrajit

    2011-01-01

    Dry imaging cameras are important hard-copy devices in radiology. Using a dry imaging camera, multiformat images of digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes drawing on diverse fields such as computing, mechanics, thermodynamics, optics, electricity, and radiography. Broadly, hard-copy devices are classified as laser-based or non-laser-based technology. Compared with the working knowledge and technical awareness of other modalities in radiology, the understanding of a dry imaging camera is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and the important issues that impact radiology workflow.

  12. Camera as Cultural Critique

    DEFF Research Database (Denmark)

    Suhr, Christian

    2015-01-01

    What does the use of cameras entail for the production of cultural critique in anthropology? Visual anthropological analysis and cultural critique starts at the very moment a camera is brought into the field or existing visual images are engaged. The framing, distances, and interactions between researchers, cameras, and filmed subjects already inherently comprise analytical decisions. It is these ethnographic qualities inherent in audiovisual and photographic imagery that make it of particular value to a participatory anthropological enterprise that seeks to resist analytic closure and seeks instead…

  13. Are people living next to mobile phone base stations more strained? Relationship of health concerns, self-estimated distance to base station, and psychological parameters

    Directory of Open Access Journals (Sweden)

    Augner Christoph

    2009-01-01

    Background and Aims: Coeval with the expansion of mobile phone technology and the associated obvious presence of mobile phone base stations, some people living close to these masts reported symptoms they attributed to electromagnetic fields (EMF). Public and scientific discussions arose over whether these symptoms were due to EMF or were nocebo effects. The aim of this study was to find out whether people who believe that they live close to base stations show psychological or psychobiological differences that would indicate more strain or stress. Furthermore, we wanted to detect the relevant connections linking self-estimated distance between home and the nearest mobile phone base station (DBS), daily use of mobile phone (MPU), EMF-related health concerns, electromagnetic hypersensitivity, and psychological strain parameters. Design, Materials and Methods: Fifty-seven participants completed standardized and non-standardized questionnaires that focused on the relevant parameters. In addition, saliva samples were used to assess psychobiological strain via concentrations of alpha-amylase, cortisol, immunoglobulin A (IgA), and substance P. Results: Self-declared base station neighbors (DBS ≤ 100 meters) had significantly higher concentrations of alpha-amylase in their saliva, and higher scores on symptom checklist subscales (SCL somatization, obsessive-compulsive, anxiety, phobic anxiety) and the global strain index PST (Positive Symptom Total). There were no differences in EMF-related health concern scales. Conclusions: We conclude that self-declared base station neighbors are more strained than others. EMF-related health concerns cannot explain these findings. Further research should identify whether actual EMF exposure or other factors are responsible for these results.

  14. The BCAM Camera

    CERN Document Server

    Hashemi, K S

    2000-01-01

    The BCAM, or Boston CCD Angle Monitor, is a camera looking at one or more light sources. We describe the application of the BCAM to the ATLAS forward muon detector alignment system. We show that the camera's performance is only weakly dependent upon the brightness, focus and diameter of the source image. Its resolution is dominated by turbulence along the external light path. The camera electronics is radiation-resistant. With a field of view of ± 10 mrad, it tracks the bearing of a light source 16 m away with better than 3 µrad accuracy, well within the ATLAS requirements.

  15. TARGETLESS CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    L. Barazzetti

    2012-09-01

    In photogrammetry a camera is considered calibrated if its interior orientation parameters are known. These encompass the principal distance, the principal point position, and some Additional Parameters used to model possible systematic errors. The current state of the art for automated camera calibration relies on the use of coded targets to accurately determine the image correspondences. This paper presents a new methodology for the efficient and rigorous photogrammetric calibration of digital cameras which no longer requires the use of targets. A set of images depicting a scene with good texture is sufficient for the extraction of natural corresponding image points. These are automatically matched with feature-based approaches and robust estimation techniques. The subsequent photogrammetric bundle adjustment retrieves the unknown camera parameters and their theoretical accuracies. Examples, considerations, and comparisons with real data and different case studies are illustrated to show the potentialities of the proposed methodology.

  16. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge-coupled device (CCD) camera systems are introduced and discussed, briefly describing the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  17. The MKID Camera

    Science.gov (United States)

    Maloney, P. R.; Czakon, N. G.; Day, P. K.; Duan, R.; Gao, J.; Glenn, J.; Golwala, S.; Hollister, M.; LeDuc, H. G.; Mazin, B.; Noroozian, O.; Nguyen, H. T.; Sayers, J.; Schlaerth, J.; Vaillancourt, J. E.; Vayonakis, A.; Wilson, P.; Zmuidzinas, J.

    2009-12-01

    The MKID Camera project is a collaborative effort of Caltech, JPL, the University of Colorado, and UC Santa Barbara to develop a large-format, multi-color millimeter and submillimeter-wavelength camera for astronomy using microwave kinetic inductance detectors (MKIDs). These are superconducting, micro-resonators fabricated from thin aluminum and niobium films. We couple the MKIDs to multi-slot antennas and measure the change in surface impedance produced by photon-induced breaking of Cooper pairs. The readout is almost entirely at room temperature and can be highly multiplexed; in principle hundreds or even thousands of resonators could be read out on a single feedline. The camera will have 576 spatial pixels that image simultaneously in four bands at 750, 850, 1100 and 1300 microns. It is scheduled for deployment at the Caltech Submillimeter Observatory in the summer of 2010. We present an overview of the camera design and readout and describe the current status of testing and fabrication.

  18. Gamma camera system

    International Nuclear Information System (INIS)

    A detailed description is given of a novel gamma camera which is designed to produce superior images to those of conventional cameras used in nuclear medicine. The detector consists of a solid-state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable two-dimensional position identification. Details of the electronic processing circuits are given, and the problems and limitations introduced by noise are discussed in full. (U.K.)

  19. Segment Based Camera Calibration

    Institute of Scientific and Technical Information of China (English)

    马颂德; 魏国庆; et al.

    1993-01-01

    The basic idea of calibrating a camera system in previous approaches is to determine camera parameters by using a set of known 3D points as calibration reference. In this paper, we present a method of camera calibration in which camera parameters are determined by a set of 3D lines. A set of constraints is derived on camera parameters in terms of perspective line mapping. From these constraints, the same perspective transformation matrix as that for point mapping can be computed linearly. The minimum number of calibration lines is 6. This result generalizes that of Liu, Huang and Faugeras [12] for camera location determination, in which at least 8 line correspondences are required for linear computation of camera location. Since line segments in an image can be located more easily and accurately than points, the use of lines as calibration reference tends to ease the computation in image preprocessing and to improve calibration accuracy. Experimental results on the calibration along with stereo reconstruction are reported.
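For context, the linear point-mapping computation that this line-based method generalizes is the classical Direct Linear Transform (DLT). The sketch below recovers a 3x4 projection matrix from six 3D-2D point correspondences; the synthetic camera matrix and points are illustrative assumptions, not data from the paper, and the line-based variant itself is not reproduced.

```python
import numpy as np

def dlt(points_3d: np.ndarray, points_2d: np.ndarray) -> np.ndarray:
    """Estimate P (3x4, up to scale) such that x ~ P @ [X, 1] via DLT.

    Each correspondence contributes two linear equations in the 12
    entries of P; the solution is the SVD null-space vector.
    """
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)  # right-singular vector of smallest sigma

def project(P: np.ndarray, point_3d) -> np.ndarray:
    """Project a 3D point through P and dehomogenize."""
    x = P @ np.append(point_3d, 1.0)
    return x[:2] / x[2]

# Assumed synthetic ground truth: simple intrinsics, camera shifted in z.
P_true = np.array([[800., 0., 320., 0.],
                   [0., 800., 240., 0.],
                   [0., 0., 1., 4.]])
pts3d = np.array([[0, 0, 1], [1, 0, 2], [0, 1, 2],
                  [1, 1, 1], [-1, 0.5, 1.5], [0.5, -1, 2.5]], dtype=float)
pts2d = np.array([project(P_true, p) for p in pts3d])

P_est = dlt(pts3d, pts2d)
err = max(np.linalg.norm(project(P_est, p) - q) for p, q in zip(pts3d, pts2d))
print(f"max reprojection error: {err:.2e} px")
```

With noiseless data and non-coplanar points the recovered matrix reproduces the projections to numerical precision; with real measurements one would add normalization and robust estimation.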

  20. Spacecraft camera image registration

    Science.gov (United States)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  1. Deployable Wireless Camera Penetrators

    Science.gov (United States)

    Badescu, Mircea; Jones, Jack; Sherrit, Stewart; Wu, Jiunn Jeng

    2008-01-01

    A lightweight, low-power camera dart has been designed and tested for context imaging of sampling sites and ground surveys from an aerobot or an orbiting spacecraft in a microgravity environment. The camera penetrators also can be used to image any line-of-sight surface, such as cliff walls, that is difficult to access. Tethered cameras to inspect the surfaces of planetary bodies use both power and signal transmission lines to operate. A tether adds the possibility of inadvertently anchoring the aerobot, and requires some form of station-keeping capability of the aerobot if extended examination time is required. The new camera penetrators are deployed without a tether, weigh less than 30 grams, and are disposable. They are designed to drop from any altitude with the boost in transmitting power currently demonstrated at approximately 100-m line-of-sight. The penetrators also can be deployed to monitor lander or rover operations from a distance, and can be used for surface surveys or for context information gathering from a touch-and-go sampling site. Thanks to wireless operation, the complexity of the sampling or survey mechanisms may be reduced. The penetrators may be battery powered for short-duration missions, or have solar panels for longer or intermittent duration missions. The imaging device is embedded in the penetrator, which is dropped or projected at the surface of a study site at 90° to the surface. Mirrors can be used in the design to image the ground or the horizon. Some of the camera features were tested using commercial "nanny" or "spy" camera components with the charge-coupled device (CCD) looking at a direction parallel to the ground. Figure 1 shows components of one camera that weighs less than 8 g and occupies a volume of 11 cm³. This camera could transmit a standard television signal, including sound, up to 100 m. Figure 2 shows the CAD models of a version of the penetrator.
A low-volume array of such penetrator cameras could be deployed from an

  2. CAOS-CMOS camera.

    Science.gov (United States)

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design, which combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photodetector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance, and multispectral military systems. PMID:27410361
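The dynamic-range figures quoted above can be related to linear light-intensity ratios using the 20·log10 convention common for image sensors (assumed here; the paper's exact convention is not restated in the abstract):

```python
import math

# Convert between a linear intensity ratio I_max / I_min and the
# dynamic range expressed in decibels, using DR_dB = 20 * log10(ratio).
def dynamic_range_db(ratio: float) -> float:
    return 20.0 * math.log10(ratio)

def ratio_from_db(db: float) -> float:
    return 10.0 ** (db / 20.0)

# The abstract's two figures, expressed as contrast ratios.
print(f"51.3 dB  -> about {ratio_from_db(51.3):.0f}:1")
print(f"82.06 dB -> about {ratio_from_db(82.06):.0f}:1")
```

Under this convention, the jump from 51.3 dB to 82.06 dB corresponds to roughly a 30 dB (about 35-fold) increase in the brightest-to-dimmest ratio the imager can capture in one scene.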

  3. The Dark Energy Camera

    Energy Technology Data Exchange (ETDEWEB)

    Flaugher, B. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)]; et al.

    2015-04-11

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250-μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The CCDs have 15μm x 15μm pixels with a plate scale of 0.263" per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  4. The Dark Energy Camera

    CERN Document Server

    Flaugher, B; Honscheid, K; Abbott, T M C; Alvarez, O; Angstadt, R; Annis, J T; Antonik, M; Ballester, O; Beaufore, L; Bernstein, G M; Bernstein, R A; Bigelow, B; Bonati, M; Boprie, D; Brooks, D; Buckley-Geer, E J; Campa, J; Cardiel-Sas, L; Castander, F J; Castilla, J; Cease, H; Cela-Ruiz, J M; Chappa, S; Chi, E; Cooper, C; da Costa, L N; Dede, E; Derylo, G; DePoy, D L; de Vicente, J; Doel, P; Drlica-Wagner, A; Eiting, J; Elliott, A E; Emes, J; Estrada, J; Neto, A Fausti; Finley, D A; Flores, R; Frieman, J; Gerdes, D; Gladders, M D; Gregory, B; Gutierrez, G R; Hao, J; Holland, S E; Holm, S; Huffman, D; Jackson, C; James, D J; Jonas, M; Karcher, A; Karliner, I; Kent, S; Kessler, R; Kozlovsky, M; Kron, R G; Kubik, D; Kuehn, K; Kuhlmann, S; Kuk, K; Lahav, O; Lathrop, A; Lee, J; Levi, M E; Lewis, P; Li, T S; Mandrichenko, I; Marshall, J L; Martinez, G; Merritt, K W; Miquel, R; Munoz, F; Neilsen, E H; Nichol, R C; Nord, B; Ogando, R; Olsen, J; Palio, N; Patton, K; Peoples, J; Plazas, A A; Rauch, J; Reil, K; Rheault, J -P; Roe, N A; Rogers, H; Roodman, A; Sanchez, E; Scarpine, V; Schindler, R H; Schmidt, R; Schmitt, R; Schubnell, M; Schultz, K; Schurter, P; Scott, L; Serrano, S; Shaw, T M; Smith, R C; Soares-Santos, M; Stefanik, A; Stuermer, W; Suchyta, E; Sypniewski, A; Tarle, G; Thaler, J; Tighe, R; Tran, C; Tucker, D; Walker, A R; Wang, G; Watson, M; Weaverdyck, C; Wester, W; Woods, R; Yanny, B

    2015-01-01

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250 micron thick fully-depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2kx4k CCDs for imaging and 12 2kx2k CCDs for guiding and focus. The CCDs have 15 micron x 15 micron pixels with a plate scale of 0.263 arcsec per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  5. The Dark Energy Camera

    Science.gov (United States)

    Flaugher, B.; Diehl, H. T.; Honscheid, K.; Abbott, T. M. C.; Alvarez, O.; Angstadt, R.; Annis, J. T.; Antonik, M.; Ballester, O.; Beaufore, L.; Bernstein, G. M.; Bernstein, R. A.; Bigelow, B.; Bonati, M.; Boprie, D.; Brooks, D.; Buckley-Geer, E. J.; Campa, J.; Cardiel-Sas, L.; Castander, F. J.; Castilla, J.; Cease, H.; Cela-Ruiz, J. M.; Chappa, S.; Chi, E.; Cooper, C.; da Costa, L. N.; Dede, E.; Derylo, G.; DePoy, D. L.; de Vicente, J.; Doel, P.; Drlica-Wagner, A.; Eiting, J.; Elliott, A. E.; Emes, J.; Estrada, J.; Fausti Neto, A.; Finley, D. A.; Flores, R.; Frieman, J.; Gerdes, D.; Gladders, M. D.; Gregory, B.; Gutierrez, G. R.; Hao, J.; Holland, S. E.; Holm, S.; Huffman, D.; Jackson, C.; James, D. J.; Jonas, M.; Karcher, A.; Karliner, I.; Kent, S.; Kessler, R.; Kozlovsky, M.; Kron, R. G.; Kubik, D.; Kuehn, K.; Kuhlmann, S.; Kuk, K.; Lahav, O.; Lathrop, A.; Lee, J.; Levi, M. E.; Lewis, P.; Li, T. S.; Mandrichenko, I.; Marshall, J. L.; Martinez, G.; Merritt, K. W.; Miquel, R.; Muñoz, F.; Neilsen, E. H.; Nichol, R. C.; Nord, B.; Ogando, R.; Olsen, J.; Palaio, N.; Patton, K.; Peoples, J.; Plazas, A. A.; Rauch, J.; Reil, K.; Rheault, J.-P.; Roe, N. A.; Rogers, H.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schindler, R. H.; Schmidt, R.; Schmitt, R.; Schubnell, M.; Schultz, K.; Schurter, P.; Scott, L.; Serrano, S.; Shaw, T. M.; Smith, R. C.; Soares-Santos, M.; Stefanik, A.; Stuermer, W.; Suchyta, E.; Sypniewski, A.; Tarle, G.; Thaler, J.; Tighe, R.; Tran, C.; Tucker, D.; Walker, A. R.; Wang, G.; Watson, M.; Weaverdyck, C.; Wester, W.; Woods, R.; Yanny, B.; DES Collaboration

    2015-11-01

    The Dark Energy Camera is a new imager with a 2.°2 diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.″263 pixel-1. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6-9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.
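The quoted 570-megapixel figure follows directly from the CCD counts in the abstract; a quick arithmetic check (taking "k" as 1024, the usual convention for CCD formats):

```python
# Consistency check of the DECam focal-plane pixel count:
# 62 imaging CCDs of 2k x 4k plus 12 guide/focus CCDs of 2k x 2k.
K = 1024
imaging_pixels = 62 * (2 * K) * (4 * K)   # 62 CCDs, 2048 x 4096 each
guide_pixels = 12 * (2 * K) * (2 * K)     # 12 CCDs, 2048 x 2048 each
total_pixels = imaging_pixels + guide_pixels
print(f"total: {total_pixels:,} pixels = {total_pixels / 1e6:.0f} Mpix")
```

The total of about 570 million pixels matches the "570 megapixel focal plane" stated in the abstract.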

  7. HIGH SPEED CAMERA

    Science.gov (United States)

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, as well as an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so a camera having this short a resolution time is thereby possible.

  8. Artificial human vision camera

    Science.gov (United States)

    Goudou, J.-F.; Maggio, S.; Fagno, M.

    2014-10-01

    In this paper we present a real-time vision system modeling the human vision system. Our purpose is to draw inspiration from the bio-mechanics of human vision to improve robotic capabilities for tasks such as object detection and tracking. This work first describes the bio-mechanical discrepancies between human vision and classic cameras, and the retinal processing stage that takes place in the eye before the optic nerve. The second part describes our implementation of these principles in a 3-camera optical, mechanical, and software model of the human eyes and an associated bio-inspired attention model.

  9. The Star Formation Camera

    OpenAIRE

    Scowen, Paul A.; Jansen, Rolf; Beasley, Matthew; Calzetti, Daniela; Desch, Steven; Fullerton, Alex; Gallagher, John; Lisman, Doug; Macenka, Steve; Malhotra, Sangeeta; McCaughrean, Mark; Nikzad, Shouleh; O'Connell, Robert; Oey, Sally; Padgett, Deborah

    2009-01-01

    The Star Formation Camera (SFC) is a wide-field (~15'×19', >280 arcmin^2), high-resolution (18×18 mas pixels) UV/optical dichroic camera designed for the Theia 4-m space-borne telescope concept. SFC will deliver diffraction-limited images at lambda > 300 nm in both a blue (190-517 nm) and a red (517-1075 nm) channel simultaneously. Our aim is to conduct a comprehensive and systematic study of the astrophysical processes and environments relevant for the births and life cycles of stars and ...

  10. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations — i.e., automatically controlling the virtual camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot…

  11. Underwater camera with depth measurement

    Science.gov (United States)

    Wang, Wei-Chih; Lin, Keng-Ren; Tsui, Chi L.; Schipf, David; Leang, Jonathan

    2016-04-01

    The objective of this study is to develop an RGB-D (video + depth) camera that provides three-dimensional image data for use in the haptic feedback of a robotic underwater ordnance recovery system. Two camera systems were developed and studied. The first depth camera relies on structured light (as used by the Microsoft Kinect), where the displacement of an object is determined by variations in the geometry of a projected pattern. The other camera system is based on a Time of Flight (ToF) depth camera. The results of the structured-light camera system show that it requires a stronger light source with a similar operating wavelength and bandwidth to achieve a desirable working distance in water. This approach might not be robust enough for our proposed underwater RGB-D camera system, as it would require a complete re-design of the light source component. The ToF camera system, instead, allows an arbitrary placement of light source and camera. The intensity output of the broadband LED light source in the ToF camera system can be increased by putting the LEDs into an array configuration, and the LEDs can be modulated comfortably with any waveform and frequency required by the ToF camera. In this paper, both cameras were evaluated and experiments were conducted to demonstrate the versatility of the ToF camera.

  12. Communities, Cameras, and Conservation

    Science.gov (United States)

    Patterson, Barbara

    2012-01-01

    Communities, Cameras, and Conservation (CCC) is the most exciting and valuable program the author has seen in her 30 years of teaching field science courses. In this citizen science project, students and community volunteers collect data on mountain lions ("Puma concolor") at four natural areas and public parks along the Front Range of Colorado.…

  13. Advanced Virgo phase cameras

    Science.gov (United States)

    van der Schaaf, L.; Agatsuma, K.; van Beuzekom, M.; Gebyehu, M.; van den Brand, J.

    2016-05-01

    A century after the prediction of gravitational waves, detectors have reached the sensitivity needed to proof their existence. One of them, the Virgo interferometer in Pisa, is presently being upgraded to Advanced Virgo (AdV) and will come into operation in 2016. The power stored in the interferometer arms raises from 20 to 700 kW. This increase is expected to introduce higher order modes in the beam, which could reduce the circulating power in the interferometer, limiting the sensitivity of the instrument. To suppress these higher-order modes, the core optics of Advanced Virgo is equipped with a thermal compensation system. Phase cameras, monitoring the real-time status of the beam constitute a critical component of this compensation system. These cameras measure the phases and amplitudes of the laser-light fields at the frequencies selected to control the interferometer. The measurement combines heterodyne detection with a scan of the wave front over a photodetector with pin-hole aperture. Three cameras observe the phase front of these laser sidebands. Two of them monitor the in-and output of the interferometer arms and the third one is used in the control of the aberrations introduced by the power recycling cavity. In this paper the working principle of the phase cameras is explained and some characteristic parameters are described.

  14. The world's fastest camera

    CERN Multimedia

    Piquepaille, Roland

    2006-01-01

    This image processor is not your typical digital camera. It took 20 people 6 years and $6 million to build the "Regional Calorimeter Trigger" (RCT), which will be a component of the Compact Muon Solenoid (CMS) experiment, one of the detectors on the Large Hadron Collider (LHC) in Geneva, Switzerland (1 page)

  15. Make a Pinhole Camera

    Science.gov (United States)

    Fisher, Diane K.; Novati, Alexander

    2009-01-01

    On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…
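When making a pinhole camera, the hole diameter matters: too small and diffraction blurs the image, too large and geometric blur does. A commonly cited rule of thumb attributed to Lord Rayleigh, d ≈ 1.9·√(f·λ), gives a good compromise; the focal length and wavelength below are illustrative assumptions.

```python
import math

def optimal_pinhole_diameter(focal_length_m: float,
                             wavelength_m: float = 550e-9) -> float:
    """Rayleigh's rule of thumb for pinhole size: d = 1.9 * sqrt(f * lambda).

    focal_length_m: pinhole-to-film distance in metres.
    wavelength_m: light wavelength (default ~green, mid-visible).
    """
    return 1.9 * math.sqrt(focal_length_m * wavelength_m)

# Assumed example: a 10 cm deep shoebox camera in green light.
d = optimal_pinhole_diameter(0.1)
print(f"optimal pinhole diameter: {d * 1000:.2f} mm")
```

For a shoebox-sized camera this works out to a hole just under half a millimetre across, which is roughly the size of a pin prick — consistent with the ordinary materials the article describes.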

  16. Phone-based intervention under nurse guidance after stroke: concept for lowering blood pressure after stroke in Sub-Saharan Africa.

    Science.gov (United States)

    Ovbiagele, Bruce

    2015-01-01

    Over the last 4 decades, rates of stroke occurrence in low- and middle-income countries (LMIC) have roughly doubled, whereas they have substantively decreased in high-income countries. Most of these LMIC are in Sub-Saharan Africa (SSA) where the burden of stroke will probably continue to rise over the next few decades because of an ongoing epidemiologic transition. Moreover, SSA is circumstantially distinct: socioeconomic obstacles, cultural barriers, underdiagnosis, uncoordinated care, and shortage of physicians impede the ability of SSA countries to implement cardiovascular disease prevention among people with diabetes mellitus in a timely and sustainable manner. Reducing the burden of stroke in SSA may necessitate an initial emphasis on high-risk individuals motivated to improve their health, multidisciplinary care coordination initiatives with clinical decision support, evidence-based interventions tailored for cultural relevance, task shifting from physicians to nurses and other health providers, use of novel patient-accessible tools, and a multilevel approach that incorporates individual- and system-level components. This article proposes a theory-based integrated blood pressure (BP) self-management intervention called Phone-based Intervention under Nurse Guidance after Stroke (PINGS) that could be tested among hospitalized stroke patients with poorly controlled hypertension encountered in SSA. PINGS would comprise the implementation of nurse-run BP control clinics and administration of health technology (personalized phone text messaging and home telemonitoring), aimed at boosting patient self-efficacy and intrinsic motivation for sustained adherence to antihypertensive medications. PMID:25440360

  17. Image Sensors Enhance Camera Technologies

    Science.gov (United States)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  18. Wayfinding in Social Networks

    Science.gov (United States)

    Liben-Nowell, David

    With the recent explosion of popularity of commercial social-networking sites like Facebook and MySpace, the size of social networks that can be studied scientifically has passed from the scale traditionally studied by sociologists and anthropologists to the scale of networks more typically studied by computer scientists. In this chapter, I will highlight a recent line of computational research into the modeling and analysis of the small-world phenomenon - the observation that typical pairs of people in a social network are connected by very short chains of intermediate friends - and the ability of members of a large social network to collectively find efficient routes to reach individuals in the network. I will survey several recent mathematical models of social networks that account for these phenomena, with an emphasis on both the provable properties of these social-network models and the empirical validation of the models against real large-scale social-network data.

  19. Linking Wayfinding and Wayfaring

    DEFF Research Database (Denmark)

    Lanng, Ditte Bendix; Jensen, Ole B.

    2016-01-01

    In this chapter we propose to expand and enhance the understanding of wayfinding beyond the strictly "instrumental" (i.e., getting from point A to point B), to include the qualities and multi-sensorial inputs that inform and shape people's movement through space. We take as a point of departure… of environmental information, which includes the embodied, multi-sensorial experience of moving through physical space. We base our examination in part on the classic positions of the wayfinding literature, for example Lynch's seminal study, The Image of the City (1960). However, we also examine the so-called mobilities turn, in which mobility is viewed as a complex, multilayered process that entails much more than simply getting from point A to point B (see Cresswell 2006; Jensen 2013; Urry 2007). The structure of the chapter is simple: We first introduce the concepts that are key to linking wayfinding…

  20. MISR radiometric camera-by-camera Cloud Mask V004

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the Radiometric camera-by-camera Cloud Mask dataset. It is used to determine whether a scene is classified as clear or cloudy. A new parameter...

  1. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation must be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
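
    The coordinate transformation and sub-pixel interpolation described above can be illustrated with a short sketch. The snippet below resamples a Cartesian frame onto a log-polar grid, a common model for retina-like pixel layouts (the actual sensor geometry is an assumption here), using bilinear interpolation, in Python rather than the VC++ of the original system:

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample an image at fractional (x, y) with bilinear interpolation."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot

def logpolar_to_cartesian(img, n_rings, n_sectors):
    """Resample a Cartesian frame onto an assumed log-polar (retina-like) grid."""
    h, w = img.shape
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    r_max = min(cx, cy)
    out = np.zeros((n_rings, n_sectors))
    for i in range(n_rings):
        # Ring radii grow exponentially, mimicking retina-like pixel density
        # (the central fovea is left unsampled in this simplified sketch).
        r = r_max ** ((i + 1) / n_rings)
        for j in range(n_sectors):
            theta = 2 * np.pi * j / n_sectors
            out[i, j] = bilinear_sample(img, cx + r * np.cos(theta),
                                        cy + r * np.sin(theta))
    return out
```

    The same sampling loop, run in reverse, would map retina-like pixels back onto a rectangular display grid for side-by-side viewing of the two cameras.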

  2. Combustion pinhole camera system

    Science.gov (United States)

    Witte, Arvel B.

    1984-02-21

    A pinhole camera system utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external, variable-density light filter which is coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly, which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  3. Camera Surveillance Quadrotor

    OpenAIRE

    Hjelm, Emil; Yousif, Robert

    2015-01-01

    A quadrotor is a helicopter with four rotors placed at equal distances from the craft's centre of gravity, controlled by letting the different rotors generate different amounts of thrust. It uses various sensors to stay stable in the air, so correct readings from these sensors are critical. By reducing vibrations, electromagnetic interference and external disturbances, the quadrotor's stability can be increased. The purpose of this project is to analyse the feasibility of a quadrotor camera su...

  4. The Star Formation Camera

    CERN Document Server

    Scowen, Paul A; Beasley, Matthew; Calzetti, Daniela; Desch, Steven; Fullerton, Alex; Gallagher, John; Lisman, Doug; Macenka, Steve; Malhotra, Sangeeta; McCaughrean, Mark; Nikzad, Shouleh; O'Connell, Robert; Oey, Sally; Padgett, Deborah; Rhoads, James; Roberge, Aki; Siegmund, Oswald; Shaklan, Stuart; Smith, Nathan; Stern, Daniel; Tumlinson, Jason; Windhorst, Rogier; Woodruff, Robert

    2009-01-01

    The Star Formation Camera (SFC) is a wide-field (~15'x19', >280 arcmin^2), high-resolution (18x18 mas pixels) UV/optical dichroic camera designed for the Theia 4-m space-borne telescope concept. SFC will deliver diffraction-limited images at lambda > 300 nm in both a blue (190-517 nm) and a red (517-1075 nm) channel simultaneously. Our aim is to conduct a comprehensive and systematic study of the astrophysical processes and environments relevant for the births and life cycles of stars and their planetary systems, and to investigate and understand the range of environments, feedback mechanisms, and other factors that most affect the outcome of the star and planet formation process. This program addresses the origins and evolution of stars, galaxies, and cosmic structure and has direct relevance for the formation and survival of planetary systems like our Solar System and planets like Earth. We present the design and performance specifications resulting from the implementation study of the camera, conducted ...

  5. Hemispherical Laue camera

    Science.gov (United States)

    Li, James C. M.; Chu, Sungnee G.

    1980-01-01

    A hemispherical Laue camera comprises a crystal sample mount for positioning a sample to be analyzed at the center of the sphere of a hemispherical, X-radiation-sensitive film cassette, a collimator, a stationary or rotating sample mount and a set of standard spherical projection spheres. X-radiation generated from an external source is directed through the collimator to impinge onto the single-crystal sample on the stationary mount. The diffracted beam is recorded on the hemispherical X-radiation-sensitive film mounted inside the hemispherical film cassette in either transmission or back-reflection geometry. The distances travelled by X-radiation diffracted from the crystal to the hemispherical film are the same for all crystal planes which satisfy Bragg's Law. The recorded diffraction spots or Laue spots on the film thereby preserve both the symmetry information of the crystal structure and the relative intensities, which are directly related to the relative structure factors of the crystal orientations. The diffraction pattern on the exposed film is compared with the known diffraction pattern on one of the standard spherical projection spheres for a specific crystal structure to determine the orientation of the crystal sample. By replacing the stationary sample support with a rotating sample mount, the hemispherical Laue camera can be used for crystal structure determination in a manner previously provided by conventional Debye-Scherrer cameras.

  6. Gamma ray camera

    Science.gov (United States)

    Perez-Mendez, Victor

    1997-01-01

    A gamma ray camera for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible-light crystal for converting incident gamma rays to a plurality of corresponding visible-light photons, and a photosensor array responsive to the visible-light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type and intermediate layers and the n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array.

  7. Adaptive compressive sensing camera

    Science.gov (United States)

    Hsu, Charles; Hsu, Ming K.; Cha, Jae; Iwamura, Tomo; Landa, Joseph; Nguyen, Charles; Szu, Harold

    2013-05-01

    We have embedded an Adaptive Compressive Sensing (ACS) algorithm in a Charge-Coupled-Device (CCD) camera, based on the simple observation that each pixel is a charge bucket whose charge comes from the Einstein photoelectric conversion effect. Following manufacturing design principles, we allow altering each working component by at most one step. We then simulated what such a camera could do for real-world persistent surveillance, taking into account diurnal, all-weather and seasonal variations. The savings in data storage are immense, and the order of magnitude of the saving is inversely proportional to target angular speed. We designed two new CCD camera components. Owing to mature CMOS (complementary metal-oxide-semiconductor) technology, the on-chip sample-and-hold (SAH) circuitry can be designed as a dual photon detector (PD) analog circuit for change detection that predicts skipping or going forward at a sufficient sampling frame rate. For an admitted frame, a purely random sparse matrix [Φ] is implemented at the bucket-pixel level, biasing the charge-transport voltage toward neighboring buckets or, if not, toward the ground drainage. Since a snapshot image is not video, we could not apply the usual MPEG video compression and Huffman entropy codec, or a powerful WaveNet wrapper, at the sensor level. We compare (i) pre-processing, consisting of an FFT, thresholding of the significant Fourier modes and an inverse FFT to check PSNR; and (ii) post-processing image recovery, done selectively by a CDT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii) the SAH circuitry must determine, when selecting new frames, the degree of information (d.o.i.) K(t), which dictates the purely random linear sparse combination of measurement data à la [Φ]M,N: M(t) = K(t) log N(t).
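
    The recovery step alluded to above (L1 minimization from random sparse measurements) can be sketched generically. The snippet uses ISTA (iterative soft thresholding) with a Gaussian measurement matrix; it is a standard compressive-sensing illustration, not the authors' CDT&D linear-programming implementation, and all sizes and the regularization weight are assumed for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse scene: N pixels, only k nonzero (the "degree of information" K above)
N, M, k = 256, 96, 5
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.uniform(1, 2, k)

# Random measurement matrix [Phi]: each of M buckets mixes many pixels
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x_true

def ista(Phi, y, lam=0.01, n_iter=2000):
    """Iterative soft thresholding for L1-regularized least-squares recovery."""
    L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        g = x - (Phi.T @ (Phi @ x - y)) / L      # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

x_rec = ista(Phi, y)
```

    With M well above k·log(N/k), the 5-sparse scene is recovered to small relative error from only 96 of 256 possible measurements, which is the storage saving the abstract refers to.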

  8. PAU camera: detectors characterization

    Science.gov (United States)

    Casas, Ricard; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; Jiménez, Jorge; Maiorino, Marino; Pío, Cristóbal; Sevilla, Ignacio; de Vicente, Juan

    2012-07-01

    The PAU Camera (PAUCam) [1,2] is a wide-field camera that will be mounted at the corrected prime focus of the William Herschel Telescope (Observatorio del Roque de los Muchachos, Canary Islands, Spain) in the coming months. The focal plane of PAUCam is composed of a mosaic of 18 CCD detectors of 2,048 x 4,176 pixels each, with a pixel size of 15 microns, manufactured by Hamamatsu Photonics K.K. This mosaic covers a field of view (FoV) of 60 arcmin (minutes of arc), 40 of which are unvignetted. The behaviour of these 18 devices, plus four spares, and their electronic response must be characterized and optimized for use in PAUCam. This work is being carried out in the laboratories of ICE/IFAE and CIEMAT. The electronic optimization of the CCD detectors is performed by means of an OG (Output Gate) scan, maximizing the CTE (Charge Transfer Efficiency) while minimizing the read-out noise. The device characterization itself is obtained with different tests: the photon transfer curve (PTC), which yields the electronic gain; the linearity vs. light stimulus; the full-well capacity and the cosmetic defects; and the read-out noise, the dark current, the stability vs. temperature and the light remanence.
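
    The photon transfer curve yields the electronic gain because, for a shot-noise-limited detector, the variance of the signal in digital numbers grows linearly with its mean, with slope 1/gain. A minimal simulated sketch (the gain value and exposure levels are assumptions for illustration, not PAUCam numbers):

```python
import numpy as np

rng = np.random.default_rng(1)
gain_true = 2.5  # e-/DN, an assumed system gain for this simulation

def ptc_gain(exposures, n_pix=100_000):
    """Estimate gain from the photon transfer curve: variance vs. mean in DN.

    For a shot-noise-limited detector var(DN) = mean(DN) / gain, so the
    slope of the variance-vs-mean line is 1/gain.
    """
    means, variances = [], []
    for n_e in exposures:                      # mean signal in electrons
        electrons = rng.poisson(n_e, n_pix)    # Poisson shot noise
        dn = electrons / gain_true             # conversion to digital numbers
        means.append(dn.mean())
        variances.append(dn.var())
    slope = np.polyfit(means, variances, 1)[0]
    return 1.0 / slope

gain_est = ptc_gain([500, 1000, 2000, 4000, 8000])
```

    In a real characterization the PTC is built from pairs of flat-field frames so that fixed-pattern noise cancels; the simulation above assumes pure shot noise for brevity.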

  9. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  10. Novel gamma cameras

    International Nuclear Information System (INIS)

    The gamma-ray cameras described are based on radiation imaging devices which permit the direct recording of the distribution of radioactive material from a radiative source, such as a human organ. They consist in principle of a collimator, a converter matrix converting gamma photons to electrons, an electron image multiplier producing a multiplied electron output, and means for reading out the information. The electron image multiplier is a device which produces a multiplied electron image; it can be, in principle, either a gas avalanche electron multiplier or a multi-channel plate. The multi-channel plate employed is a novel device, described elsewhere. The three described embodiments, in which the converter matrix can be either of the metal type or of the scintillation crystal type, were designed and are being developed.

  11. The Circular Camera Movement

    DEFF Research Database (Denmark)

    Hansen, Lennard Højbjerg

    2014-01-01

    It has been an accepted precept in film theory that specific stylistic features do not express specific content. Nevertheless, it is possible to find many examples in the history of film in which stylistic features do express specific content: for instance, the circular camera movement is used… repeatedly to convey the feeling of a man and a woman falling in love. This raises the question of why producers and directors choose certain stylistic features to narrate certain categories of content. Through the analysis of several short film and TV clips, this article explores whether… or not there are perceptual aspects related to specific stylistic features that enable them to be used for delimited narrational purposes. The article further attempts to reopen this particular stylistic debate by exploring the embodied aspects of visual perception in relation to specific stylistic features…

  12. Radiation camera motion correction system

    Science.gov (United States)

    Hoffer, P.B.

    1973-12-18

    The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)

  13. LISS-4 camera for Resourcesat

    Science.gov (United States)

    Paul, Sandip; Dave, Himanshu; Dewan, Chirag; Kumar, Pradeep; Sansowa, Satwinder Singh; Dave, Amit; Sharma, B. N.; Verma, Anurag

    2006-12-01

    The Indian Remote Sensing satellites use indigenously developed high-resolution cameras for generating data related to vegetation, landform/geomorphic and geological boundaries. Data from this camera are used for working out maps at 1:12500 scale for national-level policy development for town planning, vegetation etc. The LISS-4 camera was launched onboard the Resourcesat-1 satellite by ISRO in 2003. LISS-4 is a high-resolution multi-spectral camera with three spectral bands, a resolution of 5.8 m and a swath of 23 km from 817 km altitude. The panchromatic mode provides a swath of 70 km and a 5-day revisit. This paper briefly discusses the configuration of the LISS-4 camera of Resourcesat-1 and its onboard performance, as well as the changes in the camera being developed for Resourcesat-2. The LISS-4 camera images the earth in push-broom mode. It is designed around a three-mirror unobscured telescope, three linear 12-K CCDs and associated electronics for each band. The three spectral bands are realized by splitting the focal plane in the along-track direction using an isosceles prism. High-speed camera electronics is designed for each detector with 12-bit digitization and digital double sampling of the video. Seven-bit data selected from the 10 MSBs by telecommand are transmitted. The total dynamic range of the sensor covers up to 100% albedo. The camera structure has heritage from IRS-1C/D. The optical elements are precisely glued to specially designed flexure mounts. The camera is assembled onto a rotating deck on the spacecraft to facilitate +/- 26° steering in the pitch-yaw plane, and is held on the spacecraft in a stowed condition before deployment. The excellent imagery from the LISS-4 camera onboard Resourcesat-1 is routinely used worldwide. A second such camera is being developed for the Resourcesat-2 launch in 2007 with similar performance; the camera electronics is optimized and miniaturized, with the size and weight reduced to one third, and the power to half, of the Resourcesat-1 values.

  14. Camera sensitivity study

    Science.gov (United States)

    Schlueter, Jonathan; Murphey, Yi L.; Miller, John W. V.; Shridhar, Malayappan; Luo, Yun; Khairallah, Farid

    2004-12-01

    As the cost/performance ratio of vision systems improves with time, new classes of applications become feasible. One such area, automotive applications, is currently being investigated. Applications include occupant detection, collision avoidance and lane tracking. Interest in occupant detection has been spurred by federal automotive safety rules in response to injuries and fatalities caused by deployment of occupant-side air bags. In principle, a vision system could control airbag deployment to prevent this type of mishap. Employing vision technology here, however, presents a variety of challenges, which include controlling costs, the inability to control illumination, developing and training a reliable classification system, and loss of performance due to production variations arising from manufacturing tolerances and customer options. This paper describes the measures that have been developed to evaluate the sensitivity of an occupant detection system to these types of variations. Two procedures are described for evaluating how sensitive the classifier is to camera variations: the first is based on classification accuracy, while the second evaluates feature differences.

  15. Gamma camera system

    International Nuclear Information System (INIS)

    The invention provides a composite solid state detector for use in deriving a display, by spatial coordinate information, of the distribution of radiation emanating from a source within a region of interest, comprising several solid state detector components, each having a given surface arranged for exposure to impinging radiation and exhibiting discrete interactions therewith at given spatially definable locations. The surface of each component and the surface disposed opposite and substantially parallel thereto are associated with impedance means configured to provide for each opposed surface outputs for signals relating the given location of the interactions with one spatial coordinate parameter of one select directional sense. The detector components are arranged to provide groupings of adjacently disposed surfaces mutually linearly oriented to exhibit a common directional sense of the spatial coordinate parameter. Means interconnect at least two of the outputs associated with each of the surfaces within a given grouping for collecting the signals deriving therefrom. The invention also provides a camera system for imaging the distribution of a source of gamma radiation situated within a region of interest.

  16. Proportional counter radiation camera

    Science.gov (United States)

    Borkowski, C.J.; Kopp, M.K.

    1974-01-15

    A gas-filled proportional counter camera that images photon-emitting sources is described. A two-dimensional, position-sensitive proportional multiwire counter is provided as the detector. The counter consists of a high-voltage anode screen sandwiched between orthogonally disposed planar arrays of multiple parallel strung, resistively coupled cathode wires. Two terminals from each of the cathode arrays are connected to separate timing circuitry to obtain separate X and Y coordinate signal values from pulse shape measurements to define the position of an event within the counter arrays, which may be recorded by various means for data display. The counter is further provided with a linear drift field which effectively enlarges the active gas volume of the counter and constrains the recoil electrons produced from ionizing radiation entering the counter to drift perpendicularly toward the planar detection arrays. A collimator is interposed between a subject to be imaged and the counter to transmit only the radiation from the subject which has a perpendicular trajectory with respect to the planar cathode arrays of the detector. (Official Gazette)

  17. The framework and key technology of an alarm embedded in a mobile phone based on automatic photographing and transmission of photos

    Institute of Scientific and Technical Information of China (English)

    陈阵

    2013-01-01

    To design a personal alarm based on a mobile phone. Built on an ARM11 handset, the system uses the phone's camera module and MMS module as its main components. First, the camera module and the flash circuit module are wired in series with the phone's camera shortcut key. Then, on the S60 platform of the Symbian smartphone OS, Carbide.C++ code enables the MCU to take a fast interrupt (FIQ) when the hardware-level camera shortcut key is pressed, unlock the keypad and invoke a Java program. Finally, J2ME code on the WTK platform makes the phone automatically take a picture and send it by MMS to a designated phone. In an emergency, once the user presses the camera shortcut key, the phone immediately unlocks the keypad, fires the flash, automatically photographs the scene and sends the picture by MMS to relatives and friends, so that they can take timely measures and provide help to the user; the picture is also preserved as evidence. The alarm embedded in the mobile phone can effectively protect personal safety of body and property.

  18. Vision Sensors and Cameras

    Science.gov (United States)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access, global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer Mega-pixel resolution, and many have very high sensitivities equivalent to ASA 12.800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to 1, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies both because of their size as well as their higher speed.
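
    The dynamic-range comparison above can be made concrete with a little arithmetic. The numbers below (full-well capacity, read noise, photocurrent range, subthreshold slope) are illustrative assumptions, not data for any specific sensor; the log response follows the textbook subthreshold MOS model:

```python
import numpy as np

# Linear charge-integration pixel: range limited by full well vs. read noise
full_well_e = 20_000                    # electrons (assumed)
read_noise_e = 5                        # electrons rms (assumed)
dr_linear = full_well_e / read_noise_e  # 4000:1
dr_linear_db = 20 * np.log10(dr_linear)

# Logarithmic pixel: output compresses photocurrent over ~6 decades
i_min, i_max = 1e-15, 1e-9              # photocurrent range in amperes (assumed)
dr_log = i_max / i_min                  # 1,000,000:1
dr_log_db = 20 * np.log10(dr_log)

def log_pixel_out(i_photo, v0=0.5, n=1.3, kT_q=0.026, i0=1e-15):
    """Eye-like log response: V = V0 + n*(kT/q)*ln(I/I0), subthreshold MOS model."""
    return v0 + n * kT_q * np.log(i_photo / i0)
```

    The log response also exhibits the constant contrast sensitivity mentioned above: equal intensity ratios produce equal output-voltage steps anywhere in the range.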

  19. Classification, Production and Positioning of Hospital Signage and Way-finding Systems

    Institute of Scientific and Technical Information of China (English)

    吴培波

    2015-01-01

    The article introduces the background of the design and installation of the signage and wayfinding system of the Xinjiang Uygur Autonomous Region People's Hospital; the classification principles of the hospital signage and wayfinding system; the materials and fabrication technologies of each type of sign, covering both external and internal signage; and the key considerations in sign placement and positioning. It concludes by summarizing the practice and the experience gained.

  20. An Inexpensive Digital Infrared Camera

    Science.gov (United States)

    Mills, Allan

    2012-01-01

    Details are given for the conversion of an inexpensive webcam to a camera specifically sensitive to the near infrared (700-1000 nm). Some experiments and practical applications are suggested and illustrated. (Contains 9 figures.)

  1. The future of consumer cameras

    Science.gov (United States)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

    In the last two decades multimedia devices, and in particular imaging devices (camcorders, tablets, mobile phones, etc.), have diffused dramatically. Moreover, their increasing computational performance, combined with higher storage capacity, allows them to process large amounts of data. In this paper an overview of current trends in the consumer camera market and technology is given, with some details about the recent past (from the digital still camera up to today) and forthcoming key issues.

  2. Performance evaluation of CCD- and mobile-phone-based near-infrared fluorescence imaging systems with molded and 3D-printed phantoms

    Science.gov (United States)

    Wang, Bohan; Ghassemi, Pejhman; Wang, Jianting; Wang, Quanzeng; Chen, Yu; Pfefer, Joshua

    2016-03-01

    Increasing numbers of devices are emerging which involve biophotonic imaging on a mobile platform. Therefore, effective test methods are needed to ensure that these devices provide a high level of image quality. We have developed novel phantoms for performance assessment of near-infrared fluorescence (NIRF) imaging devices. Resin molding and 3D printing techniques were applied for phantom fabrication. Comparisons between two imaging approaches - a CCD-based scientific camera and an NIR-enabled mobile phone - were made based on evaluation of the contrast transfer function and penetration depth. Optical properties of the phantoms were evaluated, including absorption and scattering spectra and fluorescence excitation-emission matrices. The potential viability of contrast-enhanced biological NIRF imaging with a mobile phone is demonstrated, and color-channel-specific variations in image quality are documented. Our results provide evidence of the utility of novel phantom-based test methods for quantifying image quality in emerging NIRF devices.
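
    The contrast transfer function used for the comparison can be computed from a line profile across a bar target as (Imax - Imin)/(Imax + Imin). A minimal sketch with hypothetical profiles, not the paper's measurements:

```python
import numpy as np

def contrast_transfer(profile):
    """Michelson-style contrast of an imaged bar-target line profile:
    CTF = (Imax - Imin) / (Imax + Imin)."""
    i_max, i_min = float(np.max(profile)), float(np.min(profile))
    return (i_max - i_min) / (i_max + i_min)

# Hypothetical intensity profiles across a fluorescent bar target at two depths;
# scattering in the phantom blurs the bars and reduces contrast with depth.
shallow = np.array([10, 10, 90, 90, 10, 10, 90, 90], dtype=float)
deep = np.array([40, 40, 60, 60, 40, 40, 60, 60], dtype=float)
```

    Plotting CTF against bar spatial frequency and target depth is what allows the CCD camera and the phone camera to be ranked on a common scale.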

  3. Field-testing of a cost-effective mobile-phone based microscope for screening of Schistosoma haematobium infection (Conference Presentation)

    Science.gov (United States)

    Ceylan Koydemir, Hatice; Bogoch, Isaac I.; Tseng, Derek; Ephraim, Richard K. D.; Duah, Evans; Tee, Joseph; Andrews, Jason R.; Ozcan, Aydogan

    2016-03-01

    Schistosomiasis is a parasitic and neglected tropical disease. In our mobile-phone microscope, a custom-designed 3D-printed opto-mechanical attachment (~150 g) is placed in contact with the smartphone camera lens, creating an imaging system with a half-pitch resolution of ~0.87 µm. This unit includes an external lens (also taken from a mobile-phone camera), a sample tray, a z-stage to adjust the focus, two light-emitting diodes (LEDs) and two diffusers for uniform illumination of the sample. In our field testing, 60 urine samples collected from children were used, with an infection prevalence of 72.9%. After concentration of each sample by centrifugation, the sediment was placed on a glass slide and S. haematobium eggs were first identified and quantified using conventional benchtop microscopy by an expert diagnostician; a second expert, blinded to these results, then determined the presence or absence of eggs using our mobile-phone microscope. Compared to conventional microscopy, our mobile-phone microscope had a diagnostic sensitivity of 72.1%, a specificity of 100%, a positive predictive value of 100% and a negative predictive value of 57.1%. Furthermore, our mobile-phone platform demonstrated a sensitivity of 65.7% for low-intensity infections (≤50 eggs/10 mL urine) and 100% for high-intensity infections. This mobile-phone microscope may play an important role in the diagnosis of schistosomiasis and various other global health challenges.
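
    The four reported metrics follow from a standard 2x2 confusion table against the benchtop gold standard. The counts below are hypothetical values back-calculated to approximately reproduce the reported percentages; they are not taken from the paper:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening-test metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives among gold-standard positives
        "specificity": tn / (tn + fp),  # true negatives among gold-standard negatives
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts (assumed for illustration only)
m = diagnostic_metrics(tp=31, fp=0, tn=16, fn=12)
```

    With zero false positives, specificity and PPV are both 100% regardless of the other counts, which is why a high-prevalence field setting can still leave the NPV well below the sensitivity.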

  4. SUB-CAMERA CALIBRATION OF A PENTA-CAMERA

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-03-01

    Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern were determined by Pix4Dmapper and independently adjusted and analyzed with the program system BLUH. Dense matching by Pix4Dmapper provided 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centres are satisfactorily but not very strongly connected. This leads to very high values of the Student's t-test for the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration by IGI, but there are still radial symmetric distortions for the inclined cameras as well, with a size exceeding 5 µm, even if described as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors

  5. Association of Exposure to Radio-Frequency Electromagnetic Field Radiation (RF-EMFR) Generated by Mobile Phone Base Stations with Glycated Hemoglobin (HbA1c) and Risk of Type 2 Diabetes Mellitus.

    Science.gov (United States)

    Meo, Sultan Ayoub; Alsubaie, Yazeed; Almubarak, Zaid; Almutawa, Hisham; AlQasem, Yazeed; Hasanato, Rana Muhammed

    2015-11-01

    Installation of mobile phone base stations in residential areas has initiated public debate about possible adverse effects on human health. This study aimed to determine the association of exposure to radio frequency electromagnetic field radiation (RF-EMFR) generated by mobile phone base stations with glycated hemoglobin (HbA1c) and occurrence of type 2 diabetes mellitus. For this study, two different elementary schools (school-1 and school-2) were selected. We recruited 159 students in total; 96 male students from school-1, with age range 12-16 years, and 63 male students with age range 12-17 years from school-2. Mobile phone base stations with towers existed about 200 m away from the school buildings. RF-EMFR was measured inside both schools. In school-1, RF-EMFR was 9.601 nW/cm² at frequency of 925 MHz, and students had been exposed to RF-EMFR for a duration of 6 h daily, five days in a week. In school-2, RF-EMFR was 1.909 nW/cm² at frequency of 925 MHz and students had been exposed for 6 h daily, five days in a week. 5-6 mL blood was collected from all the students and HbA1c was measured by using a Dimension Xpand Plus Integrated Chemistry System, Siemens. The mean HbA1c for the students who were exposed to high RF-EMFR was significantly higher (5.44 ± 0.22) than the mean HbA1c for the students who were exposed to low RF-EMFR (5.32 ± 0.34) (p = 0.007). Moreover, students who were exposed to high RF-EMFR generated by MPBS had a significantly higher risk of type 2 diabetes mellitus (p = 0.016) relative to their counterparts who were exposed to low RF-EMFR. It is concluded that exposure to high RF-EMFR generated by MPBS is associated with elevated levels of HbA1c and risk of type 2 diabetes mellitus.
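    The group comparison reported above (5.44 ± 0.22, n = 96 vs. 5.32 ± 0.34, n = 63) can be sanity-checked from the summary statistics alone. The sketch below applies Welch's unequal-variance t-test to the reported means, standard deviations, and sample sizes; note that the abstract does not state which test the authors actually used, so Welch's test is an assumption here.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and degrees of freedom from summary statistics."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# HbA1c summary statistics as reported in the abstract
t, df = welch_t(5.44, 0.22, 96, 5.32, 0.34, 63)
print(f"t = {t:.2f}, df = {df:.1f}")
```

The resulting t statistic is comfortably in the significant range for ~96 degrees of freedom, consistent in direction with the reported result; reproducing the exact p-value would require the raw data.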

  6. Association of Exposure to Radio-Frequency Electromagnetic Field Radiation (RF-EMFR) Generated by Mobile Phone Base Stations with Glycated Hemoglobin (HbA1c) and Risk of Type 2 Diabetes Mellitus

    Science.gov (United States)

    Meo, Sultan Ayoub; Alsubaie, Yazeed; Almubarak, Zaid; Almutawa, Hisham; AlQasem, Yazeed; Muhammed Hasanato, Rana

    2015-01-01

    Installation of mobile phone base stations in residential areas has initiated public debate about possible adverse effects on human health. This study aimed to determine the association of exposure to radio frequency electromagnetic field radiation (RF-EMFR) generated by mobile phone base stations with glycated hemoglobin (HbA1c) and occurrence of type 2 diabetes mellitus. For this study, two different elementary schools (school-1 and school-2) were selected. We recruited 159 students in total; 96 male students from school-1, with age range 12–16 years, and 63 male students with age range 12–17 years from school-2. Mobile phone base stations with towers existed about 200 m away from the school buildings. RF-EMFR was measured inside both schools. In school-1, RF-EMFR was 9.601 nW/cm2 at frequency of 925 MHz, and students had been exposed to RF-EMFR for a duration of 6 h daily, five days in a week. In school-2, RF-EMFR was 1.909 nW/cm2 at frequency of 925 MHz and students had been exposed for 6 h daily, five days in a week. 5–6 mL blood was collected from all the students and HbA1c was measured by using a Dimension Xpand Plus Integrated Chemistry System, Siemens. The mean HbA1c for the students who were exposed to high RF-EMFR was significantly higher (5.44 ± 0.22) than the mean HbA1c for the students who were exposed to low RF-EMFR (5.32 ± 0.34) (p = 0.007). Moreover, students who were exposed to high RF-EMFR generated by MPBS had a significantly higher risk of type 2 diabetes mellitus (p = 0.016) relative to their counterparts who were exposed to low RF-EMFR. It is concluded that exposure to high RF-EMFR generated by MPBS is associated with elevated levels of HbA1c and risk of type 2 diabetes mellitus. PMID:26580639

  7. Association of Exposure to Radio-Frequency Electromagnetic Field Radiation (RF-EMFR) Generated by Mobile Phone Base Stations with Glycated Hemoglobin (HbA1c) and Risk of Type 2 Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    Sultan Ayoub Meo

    2015-11-01

    Full Text Available Installation of mobile phone base stations in residential areas has initiated public debate about possible adverse effects on human health. This study aimed to determine the association of exposure to radio frequency electromagnetic field radiation (RF-EMFR) generated by mobile phone base stations with glycated hemoglobin (HbA1c) and occurrence of type 2 diabetes mellitus. For this study, two different elementary schools (school-1 and school-2) were selected. We recruited 159 students in total; 96 male students from school-1, with age range 12–16 years, and 63 male students with age range 12–17 years from school-2. Mobile phone base stations with towers existed about 200 m away from the school buildings. RF-EMFR was measured inside both schools. In school-1, RF-EMFR was 9.601 nW/cm² at frequency of 925 MHz, and students had been exposed to RF-EMFR for a duration of 6 h daily, five days in a week. In school-2, RF-EMFR was 1.909 nW/cm² at frequency of 925 MHz and students had been exposed for 6 h daily, five days in a week. 5–6 mL blood was collected from all the students and HbA1c was measured by using a Dimension Xpand Plus Integrated Chemistry System, Siemens. The mean HbA1c for the students who were exposed to high RF-EMFR was significantly higher (5.44 ± 0.22) than the mean HbA1c for the students who were exposed to low RF-EMFR (5.32 ± 0.34) (p = 0.007). Moreover, students who were exposed to high RF-EMFR generated by MPBS had a significantly higher risk of type 2 diabetes mellitus (p = 0.016) relative to their counterparts who were exposed to low RF-EMFR. It is concluded that exposure to high RF-EMFR generated by MPBS is associated with elevated levels of HbA1c and risk of type 2 diabetes mellitus.

  8. The Clementine longwave infrared camera

    Energy Technology Data Exchange (ETDEWEB)

    Priest, R.E.; Lewis, I.T.; Sewall, N.R.; Park, H.S.; Shannon, M.J.; Ledebuhr, A.G.; Pleasance, L.D. [Lawrence Livermore National Lab., CA (United States); Massie, M.A. [Pacific Advanced Technology, Solvang, CA (United States); Metschuleit, K. [Amber/A Raytheon Co., Goleta, CA (United States)

    1995-04-01

    The Clementine mission provided the first ever complete, systematic surface mapping of the moon from the ultra-violet to the near-infrared regions. More than 1.7 million images of the moon, earth and space were returned from this mission. The longwave-infrared (LWIR) camera supplemented the UV/Visible and near-infrared mapping cameras providing limited strip coverage of the moon, giving insight to the thermal properties of the soils. This camera provided ~100 m spatial resolution at 400 km periselene, and a 7 km across-track swath. This 2.1 kg camera using a 128 x 128 Mercury-Cadmium-Telluride (MCT) FPA viewed thermal emission of the lunar surface and lunar horizon in the 8.0 to 9.5 µm wavelength region. A description of this light-weight, low power LWIR camera along with a summary of lessons learned is presented. Design goals and preliminary on-orbit performance estimates are addressed in terms of meeting the mission's primary objective for flight qualifying the sensors for future Department of Defense flights.

  9. Dark Energy Camera for Blanco

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  10. The GISMO-2 Bolometer Camera

    Science.gov (United States)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.; Moseley, Samuel H.; Sharp, Elemer H.; Wollack, Edward J.

    2012-01-01

    We present the concept for the GISMO-2 bolometer camera, which we are building for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 x 40 TES-based Backshort Under Grid (BUG) bolometer array; the 2 mm channel operates with a 16 x 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  11. BLAST Autonomous Daytime Star Cameras

    CERN Document Server

    Rex, Marie; Chapin, Edward; Devlin, Mark J.; Gundersen, Joshua; Klein, Jeff; Pascale, Enzo; Wiebe, Donald

    2006-01-01

    We have developed two redundant daytime star cameras to provide the fine pointing solution for the balloon-borne submillimeter telescope, BLAST. The cameras are capable of providing a reconstructed pointing solution with an absolute accuracy < 5 arcseconds. They are sensitive to stars down to magnitudes ~ 9 in daytime float conditions. Each camera combines a 1 megapixel CCD with a 200 mm f/2 lens to image a 2 degree x 2.5 degree field of the sky. The instruments are autonomous. An internal computer controls the temperature, adjusts the focus, and determines a real-time pointing solution at 1 Hz. The mechanical details and flight performance of these instruments are presented.

  12. EDICAM (Event Detection Intelligent Camera)

    International Nuclear Information System (INIS)

    Highlights: ► We present EDICAM's hardware modules. ► We present EDICAM's main design concepts. ► This paper will describe EDICAM firmware architecture. ► Operation principles description. ► Further developments. -- Abstract: A new type of fast framing camera has been developed for fusion applications by the Wigner Research Centre for Physics during the last few years. A new concept was designed for intelligent event driven imaging which is capable of focusing image readout to Regions of Interests (ROIs) where and when predefined events occur. At present these events mean intensity changes and external triggers but in the future more sophisticated methods might also be defined. The camera provides 444 Hz frame rate at full resolution of 1280 × 1024 pixels, but monitoring of smaller ROIs can be done in the 1–116 kHz range even during exposure of the full image. Keeping space limitations and the harsh environment in mind the camera is divided into a small Sensor Module and a processing card interconnected by a fast 10 Gbit optical link. This camera hardware has been used for passive monitoring of the plasma in different devices for example at ASDEX Upgrade and COMPASS with the first version of its firmware. The new firmware and software package is now available and ready for testing the new event processing features. This paper will present the operation principle and features of the Event Detection Intelligent Camera (EDICAM). The device is intended to be the central element in the 10-camera monitoring system of the Wendelstein 7-X stellarator
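    The "intensity change" events that trigger ROI readout can be illustrated with a minimal frame-differencing detector. This is a hedged sketch of the general idea only, not EDICAM's actual firmware logic; the threshold and the tiny frame size are invented for the example.

```python
def detect_roi(prev, curr, threshold):
    """Return the bounding box (r0, r1, c0, c1) of pixels whose intensity
    changed by more than `threshold` between frames, or None if no event."""
    hits = [(r, c)
            for r, row in enumerate(curr)
            for c, v in enumerate(row)
            if abs(v - prev[r][c]) > threshold]
    if not hits:
        return None
    rows = [r for r, _ in hits]
    cols = [c for _, c in hits]
    return (min(rows), max(rows), min(cols), max(cols))

# Two 4x4 frames; a bright transient appears near the lower-right corner
f0 = [[10] * 4 for _ in range(4)]
f1 = [row[:] for row in f0]
f1[2][3] = 200
f1[3][3] = 180
print(detect_roi(f0, f1, threshold=50))  # bounding box of the event
```

A real event-driven camera would run such a test in hardware per frame and then restrict subsequent readout to the returned box, which is what allows kHz-range ROI monitoring during a full-frame exposure.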

  13. Results of a cross-sectional study on the association of electromagnetic fields emitted from mobile phone base stations and health complaints; Ergebnisse einer Querschnittsstudie zum Zusammenhang von elektromagnetischen Feldern von Mobilfunksendeanlagen und unspezifischen gesundheitlichen Beschwerden

    Energy Technology Data Exchange (ETDEWEB)

    Breckenkamp, Juergen; Berg-Beckhoff, Gabriele [Bielefeld Univ. (Germany). Arbeitsgebiet Epidemiologie und International Public Health; Blettner, Maria [Mainz Univ. (Germany). Inst. fuer Medizinische Biometrie, Epidemiologie und Informatik; Kowall, Bernd [Duesseldorf Univ. (Germany). Deutsches Diabetes Zentrum; Schuez, Joachim [Institute of Cancer Epidemiology, Strandboulevarden (Denmark). Dept. of Biostatistics and Epidemiology; Schlehofer, Brigitte [Deutsches Krebsforschungszentrum Heidelberg (Germany). Arbeitsgebiet Umweltepidemiologie; Schmiedel, Sven [Mainz Univ. (Germany). Inst. fuer Medizinische Biometrie, Epidemiologie und Informatik; Institute of Cancer Epidemiology, Strandboulevarden (Denmark). Dept. of Biostatistics and Epidemiology; Bornkessel, Christian [Institut fuer Mobil- und Satellitenfunktechnik (IMST GmbH), Pruefzentrum EMV, Kamp-Lintfort (Germany); Reis, Ursula; Potthoff, Peter [TNS Healthcare GmbH, Muenchen (Germany)

    2010-07-01

    Background: Despite the fact that adverse health effects are not confirmed for exposure to radiofrequency electromagnetic field (RF-EMF) levels below the limit values, as defined in the guidelines of the International Commission on Non-Ionizing Radiation Protection, many persons are worried about possible adverse health effects caused by the RF-EMF emitted from mobile phone base stations, or they attribute their unspecific health complaints, like headache or sleep disturbances, to these fields. Method: In the framework of a cross-sectional study, a questionnaire was sent to 4,150 persons living in predominantly urban areas. Participants were asked whether base stations affected their health. Health complaints were measured with standardized health questionnaires for sleep disturbances, headache, health complaints, and mental and physical health. 3,526 persons (85%) responded to the questionnaire and 1,808 (51%) agreed to dosimetric measurements in their flats. Exposure was measured in 1,500 flats. Results: The measurements accomplished in the bedrooms in most cases showed very low exposure values, most often below the sensitivity limit of the dosimeter. An association of exposure with the occurrence of health complaints was not found, but there was an association between the attribution of adverse health effects to base stations and the occurrence of health complaints. Conclusions: Concerns about health and the attribution of adverse health effects to mobile phone base stations should nevertheless be taken seriously and require risk communication with the persons concerned. Future research should focus on the processes of perception and appraisal of RF-EMF risks, and ascertain the determinants of concerns and attributions in the context of RF-EMF. (orig.)

  14. Camera assisted multimodal user interaction

    Science.gov (United States)

    Hannuksela, Jari; Silvén, Olli; Ronkainen, Sami; Alenius, Sakari; Vehviläinen, Markku

    2010-01-01

    Since more processing power and new sensing and display technologies are now available in mobile devices, there has been increased interest in building systems that communicate via different modalities such as speech, gesture, expression, and touch. In context-identification-based user interfaces, these independent modalities are combined to create new ways for users to interact with hand-helds. While these are unlikely to completely replace traditional interfaces, they will considerably enrich and improve the user experience and task performance. We demonstrate a set of novel user interface concepts that rely on the built-in sensors of modern mobile devices for recognizing the context and sequences of actions. In particular, we use the camera to detect whether the user is watching the device, for instance, to make the decision to turn on the display backlight. In our approach the motion sensors are first employed for detecting the handling of the device. Then, based on ambient illumination information provided by a light sensor, the cameras are turned on. The frontal camera is used for face detection, while the back camera provides supplemental contextual information. The subsequent applications triggered by the context can be, for example, image capturing or bar code reading.
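    The sensing cascade described above (motion sensors, then ambient light, then frontal-camera face detection, then backlight decision) can be sketched as a simple decision chain. All function names and thresholds below are hypothetical stand-ins for device-specific sensor APIs, and the fallback behavior in darkness is an assumption, not the authors' design.

```python
def backlight_decision(accel_magnitude, lux, face_detected,
                       motion_threshold=1.5, dark_lux=20):
    """Decide whether to switch on the display backlight.

    Stage 1: motion sensors detect handling of the device.
    Stage 2: ambient light decides whether face detection is feasible.
    Stage 3: the frontal camera confirms the user is looking at the device.
    """
    if accel_magnitude < motion_threshold:
        return False            # device is lying still; keep backlight off
    if lux < dark_lux:
        return True             # too dark for face detection; default to on
    return face_detected        # camera confirms the user is watching

print(backlight_decision(2.0, 300, True))   # handled, bright, face seen
print(backlight_decision(0.5, 300, True))   # device not being handled
```

Ordering the cheap sensors first (accelerometer, light sensor) and the expensive one last (camera) is the point of the cascade: the camera is powered only when the earlier stages suggest it is worth it.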

  15. Response to Comments on Meo et al. Association of Exposure to Radio-Frequency Electromagnetic Field Radiation (RF-EMFR) Generated by Mobile Phone Base Stations with Glycated Hemoglobin (HbA1c) and Risk of Type 2 Diabetes Mellitus. Int. J. Environ. Res. Public Health, 2015, 12, 14519–14528

    Directory of Open Access Journals (Sweden)

    Sultan Ayoub Meo

    2016-02-01

    Full Text Available We highly appreciate the readers’ interest [1] in our article [2] titled “Association of Exposure to Radio-Frequency Electromagnetic Field Radiation (RF-EMFR) Generated by Mobile Phone Base Stations with Glycated Hemoglobin (HbA1c) and Risk of Type 2 Diabetes Mellitus” published in the International Journal of Environmental Research and Public Health [2]. [...]

  16. Response to Comments on Meo et al. Association of Exposure to Radio-Frequency Electromagnetic Field Radiation (RF-EMFR) Generated by Mobile Phone Base Stations with Glycated Hemoglobin (HbA1c) and Risk of Type 2 Diabetes Mellitus. Int. J. Environ. Res. Public Health, 2015, 12, 14519–14528

    OpenAIRE

    Sultan Ayoub Meo; Yazeed Alsubaie; Zaid Almubarak; Hisham Almutawa; Yazeed AlQasem; Rana Muhammed Hasanato

    2016-01-01

    We highly appreciate the readers’ interest [1] in our article [2] titled “Association of Exposure to Radio-Frequency Electromagnetic Field Radiation (RF-EMFR) Generated by Mobile Phone Base Stations with Glycated Hemoglobin (HbA1c) and Risk of Type 2 Diabetes Mellitus” published in the International Journal of Environmental Research and Public Health [2]. [...]

  17. Lytro camera technology: theory, algorithms, performance analysis

    Science.gov (United States)

    Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio

    2013-03-01

    The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, treating the camera as a black box and using our interpretation of the image data it saves. We present our findings based on our interpretation of the Lytro camera file structure, image calibration and image rendering; in this context, artifacts and final image resolution are discussed.

  18. Architectural Design Document for Camera Models

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Architecture of camera simulator models and data interface for the Maneuvering of Inspection/Servicing Vehicle (MIV) study.

  19. Electronographic cameras for space astronomy.

    Science.gov (United States)

    Carruthers, G. R.; Opal, C. B.

    1972-01-01

    Magnetically-focused electronographic cameras have been under development at the Naval Research Laboratory for use in far-ultraviolet imagery and spectrography, primarily in astronomical and optical-geophysical observations from sounding rockets and space vehicles. Most of this work has been with cameras incorporating internal optics of the Schmidt or wide-field all-reflecting types. More recently, we have begun development of electronographic spectrographs incorporating an internal concave grating, operating at normal or grazing incidence. We also are developing electronographic image tubes of the conventional end-window-photo-cathode type, for far-ultraviolet imagery at the focus of a large space telescope, with image formats up to 120 mm in diameter.

  20. The Dark Energy Survey Camera

    Science.gov (United States)

    Flaugher, Brenna

    2012-03-01

    The Dark Energy Survey Collaboration has built the Dark Energy Camera (DECam), a 3 square degree, 520 Megapixel CCD camera which is being mounted on the Blanco 4-meter telescope at CTIO. DECam will be used to carry out the 5000 sq. deg. Dark Energy Survey, using 30% of the telescope time over a 5 year period. During the remainder of the time, and after the survey, DECam will be available as a community instrument. Construction of DECam is complete. The final components were shipped to Chile in Dec. 2011 and post-shipping checkout is in progress in Dec-Jan. Installation and commissioning on the telescope are taking place in 2012. A summary of lessons learned and an update of the performance of DECam and the status of the DECam installation and commissioning will be presented.

  1. An optical metasurface planar camera

    CERN Document Server

    Arbabi, Amir; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-01-01

    Optical metasurfaces are 2D arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optical design by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked on top of each other and are integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here, we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has an f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with large transmission. The camera exhibits high image quality, which indicates the potential of this technology to produce a paradigm shift in future designs of imaging systems for microscopy, photograp...

  2. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    Just like art historians have focused on e.g. composition or lighting, this dissertation takes a single stylistic parameter as its object of study: camera movement. Within film studies this localized avenue of middle-level research has become increasingly viable under the aegis of a perspective known as ‘the poetics of cinema.’ The dissertation embraces two branches of research within this perspective: stylistics and historical poetics (stylistic history). The dissertation takes on three questions in relation to camera movement and is accordingly divided into three major sections. The first … cinematic poetics and interpretive criticism sensitive to style may gain from each other. There is no reason why stylistically informed interpretive criticism cannot be considered within a functional framework and there is no reason why one should not use a functional taxonomy as a basis on which to launch

  3. Combustion pinhole-camera system

    Science.gov (United States)

    Witte, A.B.

    1982-05-19

    A pinhole camera system is described utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy-conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external, variable-density light filter which is coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly, which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  4. ISO camera array development status

    Science.gov (United States)

    Sibille, F.; Cesarsky, C.; Agnese, P.; Rouan, D.

    1989-01-01

    A short outline is given of the Infrared Space Observatory Camera (ISOCAM), one of the 4 instruments onboard the Infrared Space Observatory (ISO), with the current status of its two 32x32 arrays, an InSb charge injection device (CID) and a Si:Ga direct read-out (DRO), and the results of the in orbit radiation simulation with gamma ray sources. A tentative technique for the evaluation of the flat fielding accuracy is also proposed.

  5. Graphic design of pinhole cameras

    Science.gov (United States)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.
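    A quick numeric counterpart to such a graphic optimization is Lord Rayleigh's classic rule of thumb for the pinhole diameter, d ≈ 1.9·√(f·λ), which balances geometric blur against diffraction blur. This is a standard textbook result used here for illustration, not the transfer-function construction described in the paper.

```python
import math

def rayleigh_pinhole_diameter(focal_length_m, wavelength_m):
    """Near-optimal pinhole diameter (m) via Rayleigh's rule d = 1.9*sqrt(f*lambda)."""
    return 1.9 * math.sqrt(focal_length_m * wavelength_m)

# Example: 100 mm focal length, green light (550 nm)
d = rayleigh_pinhole_diameter(0.100, 550e-9)
print(f"optimal diameter ~ {d * 1e3:.2f} mm")
```

For a 100 mm focal length in green light this gives a diameter of roughly 0.45 mm; a transfer-function analysis like the paper's refines this by showing how contrast falls off with spatial frequency around the optimum.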

  6. 21 CFR 886.1120 - Ophthalmic camera.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Ophthalmic camera. 886.1120 Section 886.1120 Food... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Ophthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding...

  7. 21 CFR 892.1110 - Positron camera.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Positron camera. 892.1110 Section 892.1110 Food... DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A positron camera is a device intended to image the distribution of positron-emitting radionuclides in the...

  8. 16 CFR 501.1 - Camera film.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Camera film. 501.1 Section 501.1 Commercial Practices FEDERAL TRADE COMMISSION RULES, REGULATIONS, STATEMENT OF GENERAL POLICY OR INTERPRETATION AND... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the...

  9. Solid-state array cameras.

    Science.gov (United States)

    Strull, G; List, W F; Irwin, E L; Farnsworth, D L

    1972-05-01

    Over the past few years there has been growing interest shown in the rapidly maturing technology of totally solid-state imaging. This paper presents a synopsis of developments made in this field at the Westinghouse ATL facilities, with emphasis on row-column organized monolithic arrays of diffused-junction phototransistors. The complete processing sequence applicable to the fabrication of modern high-density arrays is described, from wafer ingot preparation to final sensor testing. Special steps found necessary for high-yield processing, such as surface etching prior to both sawing and lapping, are discussed along with the rationale behind their adoption. Camera systems built around matrix array photosensors are presented in a historical time-wise progression, beginning with the first 50 x 50 element converter developed in 1965 and running through the most recent 400 x 500 element system delivered in 1972. The freedom of mechanical architecture made available to system designers by solid-state array cameras is noted from the description of a bare-chip packaged cubic-inch camera. Hybrid scan systems employing one-dimensional line arrays are cited, and the basic trade-offs of their use are listed. PMID:20119094

  10. Unassisted 3D camera calibration

    Science.gov (United States)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
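    The roll/scale/offset estimation step can be sketched with a closed-form least-squares fit of a 2-D similarity transform between matched keypoints; encoding points as complex numbers makes the algebra compact. This is an illustrative method consistent with the description above, not the authors' actual algorithm, and it recovers in-plane roll and scale only (pitch/yaw need a fuller model).

```python
import cmath

def fit_similarity(left, right):
    """Least-squares similarity transform right ≈ a*left + b, with 2-D
    points encoded as complex numbers. Returns (scale, roll_rad, offset)."""
    n = len(left)
    mz = sum(left) / n
    mw = sum(right) / n
    zc = [z - mz for z in left]
    wc = [w - mw for w in right]
    # a = sum(conj(zc)*wc) / sum(|zc|^2): rotation + scale as one complex gain
    a = sum(z.conjugate() * w for z, w in zip(zc, wc)) / \
        sum(abs(z) ** 2 for z in zc)
    b = mw - a * mz
    return abs(a), cmath.phase(a), b

# Synthetic check: rotate by 2 degrees, scale by 1.01, shift by (5, 3)
a_true = 1.01 * cmath.exp(1j * cmath.pi * 2 / 180)
b_true = 5 + 3j
left = [0 + 0j, 10 + 0j, 0 + 10j, 7 + 4j, 3 + 9j]
right = [a_true * z + b_true for z in left]
scale, roll, b = fit_similarity(left, right)
print(round(scale, 4), round(roll * 180 / cmath.pi, 2))
```

In a calibration pipeline the imaginary part of the residual offset is the vertical disparity of interest; frames whose matches fit this model poorly would be the ones discarded as erroneous.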

  11. Learning as way-finding

    DEFF Research Database (Denmark)

    Dau, Susanne

    This paper is based on case-study findings from studying undergraduate students’ perceptions of their navigation in a blended learning environment where different learning spaces are offered. In this paper, learning is regarded as a multi-level and complex concept. In this regard, the concept of learning used in this paper is inspired by the latest work of the Danish professor Illeris and the interwoven concept of knowledge development as revealed in the SECI model generated by the Japanese professors Nonaka and Takeuchi. The empirical investigation, which is the basis of the presented assumptions … to be “blurred ecotones” between studying, leisure, sociality, identity-seeking and daily life, which demands an extension of the concept of learning. It is stressed that learning is conditioned by contextual orientation processes in peripersonal spaces. Spaces of learning seem to guide how learning can

  12. HHEBBES! All sky camera system: status update

    Science.gov (United States)

    Bettonvil, F.

    2015-01-01

    A status update is given of the HHEBBES! all-sky camera system. HHEBBES!, an automatic camera for capturing bright meteor trails, is based on a DSLR camera and a liquid crystal chopper for measuring the angular velocity. The purposes of the system are to a) recover meteorites and b) identify origin/parental bodies. In 2015, two new cameras were rolled out: BINGO! (like HHEBBES!, also in the Netherlands) and POgLED, in Serbia. BINGO! is the first camera equipped with a longer-focal-length fisheye lens, to further increase the accuracy. Several minor improvements have been made, and the data-reduction pipeline was used to process two prominent Dutch fireballs.

  13. Mini gamma camera, camera system and method of use

    Science.gov (United States)

    Majewski, Stanislaw; Weisenberger, Andrew G.; Wojcik, Randolph F.

    2001-01-01

    A gamma camera comprising essentially and in order from the front outer or gamma ray impinging surface: 1) a collimator, 2) a scintillator layer, 3) a light guide, 4) an array of position sensitive, high resolution photomultiplier tubes, and 5) printed circuitry for receipt of the output of the photomultipliers. There is also described, a system wherein the output supplied by the high resolution, position sensitive photomultipiler tubes is communicated to: a) a digitizer and b) a computer where it is processed using advanced image processing techniques and a specific algorithm to calculate the center of gravity of any abnormality observed during imaging, and c) optional image display and telecommunications ports.
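The center-of-gravity computation mentioned above can be illustrated with an intensity-weighted centroid over the imaged counts. This NumPy sketch is an assumed, generic formulation for illustration, not the specific patented algorithm.

```python
import numpy as np

def center_of_gravity(img):
    """Intensity-weighted centroid (row, col) of a 2D counts image,
    e.g. a region of interest around an observed abnormality."""
    img = np.asarray(img, dtype=float)
    total = img.sum()
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total
```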

  14. Domestic and International Progress of Mobile Phone-based and Web-based Smoking Cessation Interventions

    Institute of Scientific and Technical Information of China (English)

    王立立; 王燕玲; 姜垣

    2011-01-01

    Providing smoking cessation assistance is a key part of tobacco control practice. This paper summarizes the progress of mobile phone-based and web-based smoking cessation interventions that have newly emerged internationally, which may serve as a reference for tobacco control work in China. The shared advantages of these two modes of intervention are: no time or geographic limitations, hence wider reach; no need for face-to-face communication, which protects the privacy of those seeking counseling; and relatively low cost. The two modes each have their own strengths and weaknesses in accessibility, communication effect, and cost-effectiveness.

  15. MIOTIC study: a prospective, multicenter, randomized study to evaluate the long-term efficacy of mobile phone-based Internet of Things in the management of patients with stable COPD.

    Science.gov (United States)

    Zhang, Jing; Song, Yuan-Lin; Bai, Chun-Xue

    2013-01-01

    Chronic obstructive pulmonary disease (COPD) is a common disease that leads to huge economic and social burden. Efficient and effective management of stable COPD is essential to improve quality of life and reduce medical expenditure. The Internet of Things (IoT), a recent breakthrough in communication technology, seems promising in improving health care delivery, but its potential strengths in COPD management remain poorly understood. We have developed a mobile phone-based IoT (mIoT) platform and initiated a randomized, multicenter, controlled trial entitled the 'MIOTIC study' to investigate the influence of mIoT among stable COPD patients. In the MIOTIC study, at least 600 patients with stable GOLD group C or D COPD and with a history of at least two moderate-to-severe exacerbations within the previous year will be randomly allocated to the control group, which receives routine follow-up, or the intervention group, which receives mIoT management. Endpoints of the study include (1) frequency and severity of acute exacerbation; (2) symptomatic evaluation; (3) pre- and post-bronchodilator forced expiratory volume in 1 second (FEV1) and FEV1/forced vital capacity (FVC) measurement; (4) exercise capacity; and (5) direct medical cost per year. Results from this study should provide direct evidence for the suitability of mIoT in stable COPD patient management. PMID:24082784

  17. Cryogenic mechanism for ISO camera

    Science.gov (United States)

    Luciano, G.

    1987-12-01

    The Infrared Space Observatory (ISO) camera configuration, architecture, materials, tribology, motorization, and development status are outlined. The operating temperature is 2 to 3 K, at 2.5 to 18 microns. Selected material is a titanium alloy, with MoS2/TiC lubrication. A stepping motor drives the ball-bearing mounted wheels to which the optical elements are fixed. Model test results are satisfactory, and also confirm the validity of the test facilities, particularly for vibration tests at 4K.

  18. Video clustering using camera motion

    OpenAIRE

    Tort Alsina, Laura

    2012-01-01

    [CATALAN] How the camera motion in a video clip can be useful for classifying it in semantic terms. [ENGLISH] This document contains the work done at INP Grenoble during the second semester of the academic year 2011-2012, completed in Barcelona during the first months of 2012-2013. The work presented consists of a camera motion study in different types of video in order to group fragments that have some similarity in content. The document explains how the data extr...

  19. Optimising camera traps for monitoring small mammals.

    Science.gov (United States)

    Glen, Alistair S; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  20. The Dark Energy Camera (DECam)

    CERN Document Server

    Honscheid, K; Abbott, T; Annis, J; Antonik, M; Barcel, M; Bernstein, R; Bigelow, B; Brooks, D; Buckley-Geer, E; Campa, J; Cardiel, L; Castander, F; Castilla, J; Cease, H; Chappa, S; Dede, E; Derylo, G; Diehl, T; Doel, P; De Vicente, J; Eiting, J; Estrada, J; Finley, D; Flaugher, B; Gaztañaga, E; Gerdes, D; Gladders, M; Guarino, V; Gutíerrez, G; Hamilton, J; Haney, M; Holland, S; Huffman, D; Karliner, I; Kau, D; Kent, S; Kozlovsky, M; Kubik, D; Kühn, K; Kuhlmann, S; Kuk, K; Leger, F; Lin, H; Martínez, G; Martínez, M; Merritt, W; Mohr, J; Moore, P; Moore, T; Nord, B; Ogando, R; Olsen, J; Onal, B; Peoples, J; Qian, T; Roe, N; Sánchez, E; Scarpine, V; Schmidt, R; Schmitt, R; Schubnell, M; Schultz, K; Selen, M; Shaw, T; Simaitis, V; Slaughter, J; Smith, C; Spinka, H; Stefanik, A; Stuermer, W; Talaga, R; Tarle, G; Thaler, J; Tucker, D; Walker, A; Worswick, S; Zhao, A

    2008-01-01

    In this paper we describe the Dark Energy Camera (DECam), which will be the primary instrument used in the Dark Energy Survey. DECam will be a 3 sq. deg. mosaic camera mounted at the prime focus of the Blanco 4m telescope at the Cerro Tololo Inter-American Observatory (CTIO). It consists of a large mosaic CCD focal plane, a five element optical corrector, five filters (g,r,i,z,Y), a modern data acquisition and control system and the associated infrastructure for operation in the prime focus cage. The focal plane includes 62 2K x 4K CCD modules (0.27"/pixel) arranged in a hexagon inscribed within the roughly 2.2 degree diameter field of view and 12 smaller 2K x 2K CCDs for guiding, focus and alignment. The CCDs will be 250 micron thick fully-depleted CCDs that have been developed at the Lawrence Berkeley National Laboratory (LBNL). Production of the CCDs and fabrication of the optics, mechanical structure, mechanisms, and control system for DECam are underway; delivery of the instrument to CTIO is scheduled ...

  1. Action selection for single-camera SLAM

    OpenAIRE

    Vidal-Calleja, Teresa A.; Sanfeliu, Alberto; Andrade-Cetto, J

    2010-01-01

    A method for evaluating, at video rate, the quality of actions for a single camera while mapping unknown indoor environments is presented. The strategy maximizes mutual information between measurements and states to help the camera avoid making ill-conditioned measurements, which arise from the lack of depth information in monocular vision systems. Our system prompts a user with the appropriate motion commands during 6-DOF visual simultaneous localization and mapping with a handheld camera. Additionall...

  2. Development of biostereometric experiments. [stereometric camera system

    Science.gov (United States)

    Herron, R. E.

    1978-01-01

    The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.

  3. Movement-based Interaction in Camera Spaces

    DEFF Research Database (Denmark)

    Eriksson, Eva; Riisgaard Hansen, Thomas; Lykke-Olesen, Andreas

    2006-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects, we present four selected applications and use them to ground our discussion and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems that use camera tracking, and we show how these three concepts can be used to analyse other camera tracking applications.

  4. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    Science.gov (United States)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

    The authors have been concentrating on developing convenient 3D measurement methods using consumer grade digital cameras, and have concluded that consumer grade digital cameras can be expected to become useful photogrammetric devices in various close range application fields. Meanwhile, mobile phone cameras with 10 megapixels have appeared on the Japanese market. In these circumstances, the question arises whether mobile phone cameras can take the place of consumer grade digital cameras in close range photogrammetric applications. In order to evaluate the potential of mobile phone cameras in close range photogrammetry, this paper presents a comparative evaluation between mobile phone cameras and consumer grade digital cameras with respect to lens distortion, reliability, stability and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer grade digital cameras were conducted indoors using a test target. Furthermore, the practicality of mobile phone cameras for close range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras are able to take the place of consumer grade digital cameras and to develop the market in digital photogrammetric fields.
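Lens distortion, the first evaluation criterion above, is commonly modeled with a two-coefficient radial polynomial (the standard Brown-style model). The sketch below is a textbook formulation for illustration, not the calibration code used in the study.

```python
def radial_distort(x, y, k1, k2=0.0):
    """Apply two-coefficient radial distortion to normalized image
    coordinates: x_d = x * (1 + k1*r^2 + k2*r^4), likewise for y."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor
```

Calibration estimates k1 and k2 (among other parameters) by minimizing reprojection error against a known test target, as in the indoor tests described.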

  5. True-color night vision cameras

    Science.gov (United States)

    Kriesel, Jason; Gat, Nahum

    2007-04-01

    This paper describes True-Color Night Vision cameras that are sensitive to the visible to near-infrared (V-NIR) portion of the spectrum allowing for the "true-color" of scenes and objects to be displayed and recorded under low-light-level conditions. As compared to traditional monochrome (gray or green) night vision imagery, color imagery has increased information content and has proven to enable better situational awareness, faster response time, and more accurate target identification. Urban combat environments, where rapid situational awareness is vital, and marine operations, where there is inherent information in the color of markings and lights, are example applications that can benefit from True-Color Night Vision technology. Two different prototype cameras, employing two different true-color night vision technological approaches, are described and compared in this paper. One camera uses a fast-switching liquid crystal filter in front of a custom Gen-III image intensified camera, and the second camera is based around an EMCCD sensor with a mosaic filter applied directly to the sensor. In addition to visible light, both cameras utilize NIR to (1) increase the signal and (2) enable the viewing of laser aiming devices. The performance of the true-color cameras, along with the performance of standard (monochrome) night vision cameras, are reported and compared under various operating conditions in the lab and the field. In addition to subjective criterion, figures of merit designed specifically for the objective assessment of such cameras are used in this analysis.

  6. Research of Camera Calibration Based on DSP

    OpenAIRE

    Zheng Zhang; Yukun Wan; Lixin Cai

    2013-01-01

    To take advantage of the high efficiency and stability of DSP in data processing, and of the functions of the OpenCV library, this study puts forward a scheme for camera calibration in a DSP embedded system. An algorithm for camera calibration based on OpenCV is designed by analyzing the camera model and lens distortion. The port of EMCV to the DSP is completed, and the camera calibration algorithm is migrated and optimized based on the CCS development environment and the ...

  7. Omnidirectional Underwater Camera Design and Calibration

    Directory of Open Access Journals (Sweden)

    Josep Bosch

    2015-03-01

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS, in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach.
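The refraction that breaks the pinhole model can be illustrated with Snell's law at a flat housing port. This is a minimal per-ray sketch (assuming water with refractive index 1.333), not the paper's full ray-tracing simulator.

```python
import math

def refract(theta_i, n1=1.0, n2=1.333):
    """Snell's law at a flat interface: n1*sin(theta_i) = n2*sin(theta_t).
    Returns the refracted angle (radians) for a ray entering medium n2."""
    return math.asin(n1 * math.sin(theta_i) / n2)
```

Because the bending grows with the incidence angle, wide-angle rays deviate strongly from the straight-line pinhole assumption, which is why each optical ray must be modeled explicitly.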

  8. Framework for Evaluating Camera Opinions

    Directory of Open Access Journals (Sweden)

    K.M. Subramanian

    2015-03-01

    Opinion mining plays an important role in text mining applications such as brand and product positioning, customer relationship management, consumer attitude detection and market research. These applications have led to a new generation of companies and products devoted to online market perception, online content monitoring and reputation management. The expansion of the web inspires users to contribute and express opinions via blogs, videos and social networking sites. Such platforms provide valuable information for the analysis of sentiment pertaining to a product or service. This study investigates the performance of various feature extraction methods and classification algorithms for opinion mining. Opinions expressed on the Amazon website about cameras were collected and used for evaluation. Features are extracted from the opinions using Term Frequency-Inverse Document Frequency (TF-IDF). Feature transformation is achieved through Principal Component Analysis (PCA) and kernel PCA. Naïve Bayes, K-Nearest Neighbor, and Classification and Regression Trees (CART) algorithms classify the extracted features.
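The TF-IDF feature-extraction step can be sketched in plain Python. The toy corpus and function below are illustrative assumptions, not the study's code; they show the weights computed before the PCA and classification stages.

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document TF-IDF weights for a whitespace-tokenized corpus.
    tf = term count / doc length, idf = ln(N / document frequency)."""
    n_docs = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    weights = []
    for toks in tokenized:
        tf = Counter(toks)
        weights.append({t: (tf[t] / len(toks)) * math.log(n_docs / df[t])
                        for t in tf})
    return weights
```

A rare, repeated term ("zoom" in one review) outweighs a term common across reviews ("battery"), which is exactly the discriminative behavior the classifiers rely on.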

  9. Illumination box and camera system

    Science.gov (United States)

    Haas, Jeffrey S.; Kelly, Fredrick R.; Bushman, John F.; Wiefel, Michael H.; Jensen, Wayne A.; Klunder, Gregory L.

    2002-01-01

    A hand portable, field-deployable thin-layer chromatography (TLC) unit and a hand portable, battery-operated unit for development, illumination, and data acquisition of the TLC plates contain many miniaturized features that permit a large number of samples to be processed efficiently. The TLC unit includes a solvent tank, a holder for TLC plates, and a variety of tool chambers for storing TLC plates, solvent, and pipettes. After processing in the TLC unit, a TLC plate is positioned in a collapsible illumination box, where the box and a CCD camera are optically aligned for optimal pixel resolution of the CCD images of the TLC plate. The TLC system includes an improved development chamber for chemical development of TLC plates that prevents solvent overflow.

  10. HRSC: High resolution stereo camera

    Science.gov (United States)

    Neukum, G.; Jaumann, R.; Basilevsky, A.T.; Dumke, A.; Van Gasselt, S.; Giese, B.; Hauber, E.; Head, J. W.; Heipke, C.; Hoekzema, N.; Hoffmann, H.; Greeley, R.; Gwinner, K.; Kirk, R.; Markiewicz, W.; McCord, T.B.; Michael, G.; Muller, Jan-Peter; Murray, J.B.; Oberst, J.; Pinet, P.; Pischel, R.; Roatsch, T.; Scholten, F.; Willner, K.

    2009-01-01

    The High Resolution Stereo Camera (HRSC) on Mars Express has delivered a wealth of image data, amounting to over 2.5 TB from the start of the mapping phase in January 2004 to September 2008. In that time, more than a third of Mars was covered at a resolution of 10-20 m/pixel in stereo and colour. After five years in orbit, HRSC is still in excellent shape, and it could continue to operate for many more years. HRSC has proven its ability to close the gap between the low-resolution Viking image data and the high-resolution Mars Orbiter Camera images, leading to a global picture of the geological evolution of Mars that is now much clearer than ever before. Derived highest-resolution terrain model data have closed major gaps and provided an unprecedented insight into the shape of the surface, which is paramount not only for surface analysis and geological interpretation, but also for combination with and analysis of data from other instruments, as well as in planning for future missions. This chapter presents the scientific output from data analysis and high-level data processing, complemented by a summary of how the experiment is conducted by the HRSC team members working in geoscience, atmospheric science, photogrammetry and spectrophotometry. Many of these contributions have been or will be published in peer-reviewed journals and special issues. They form a cross-section of the scientific output, either by summarising the new geoscientific picture of Mars provided by HRSC or by detailing some of the topics of data analysis concerning photogrammetry, cartography and spectral data analysis.

  11. MISR FIRSTLOOK radiometric camera-by-camera Cloud Mask V001

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the FIRSTLOOK Radiometric camera-by-camera Cloud Mask (RCCM) dataset produced using ancillary inputs (RCCT) from the previous time period. It is...

  12. MIOTIC study: a prospective, multicenter, randomized study to evaluate the long-term efficacy of mobile phone-based Internet of Things in the management of patients with stable COPD

    Directory of Open Access Journals (Sweden)

    Zhang J

    2013-09-01

    Jing Zhang, Yuan-lin Song, Chun-xue Bai, Department of Pulmonary Medicine, Zhongshan Hospital, Fudan University, Shanghai, People's Republic of China. Abstract: Chronic obstructive pulmonary disease (COPD) is a common disease that leads to huge economic and social burden. Efficient and effective management of stable COPD is essential to improve quality of life and reduce medical expenditure. The Internet of Things (IoT), a recent breakthrough in communication technology, seems promising in improving health care delivery, but its potential strengths in COPD management remain poorly understood. We have developed a mobile phone-based IoT (mIoT) platform and initiated a randomized, multicenter, controlled trial entitled the ‘MIOTIC study’ to investigate the influence of mIoT among stable COPD patients. In the MIOTIC study, at least 600 patients with stable GOLD group C or D COPD and with a history of at least two moderate-to-severe exacerbations within the previous year will be randomly allocated to the control group, which receives routine follow-up, or the intervention group, which receives mIoT management. Endpoints of the study include (1) frequency and severity of acute exacerbation; (2) symptomatic evaluation; (3) pre- and post-bronchodilator forced expiratory volume in 1 second (FEV1) and FEV1/forced vital capacity (FVC) measurement; (4) exercise capacity; and (5) direct medical cost per year. Results from this study should provide direct evidence for the suitability of mIoT in stable COPD patient management. Keywords: Internet of Things, mobile phone, chronic obstructive pulmonary disease, efficacy

  13. Research on Smart-phone Based Active Safety Warning Technology

    Institute of Scientific and Technical Information of China (English)

    金茂菁

    2012-01-01

    Vehicle active safety systems have proven effective in saving lives and reducing traffic accidents. However, these systems are more expensive and less widespread than smart phones, whose multi-sensor functions and information processing abilities have improved greatly and can be exploited further. This study first introduces the parameters and functions of these sensors. The framework of a smart-phone based active safety system is then proposed, and the safety warning functions are designed and explained using forward collision warning and lane departure warning as examples. Finally, a field experiment using two typical smart-phone systems and a professional system was conducted for function and accuracy analysis. The forward collision and lane departure warning results indicate that the accuracy of the smart-phone based systems is acceptable, and that highly equipped smart phones can realize, and further extend, professional active safety warning functions.

  14. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many ca

  15. Improving Situational Awareness in camera surveillance by combining top-view maps with camera images

    NARCIS (Netherlands)

    Kooi, F.L.; Zeeders, R.

    2009-01-01

    The goal of the experiment described is to improve today's camera surveillance in public spaces. Three designs with the camera images combined on a top-view map were compared to each other and to the current situation in camera surveillance. The goal was to test which design makes spatial relationsh

  16. Camera self-calibration from translation by referring to a known camera.

    Science.gov (United States)

    Zhao, Bin; Hu, Zhaozheng

    2015-09-01

    This paper presents a novel linear method for camera self-calibration by referring to a known (or calibrated) camera. The method requires at least three images, with two images generated by the uncalibrated camera from pure translation and one image generated by the known reference camera. We first propose a method to compute the infinite homography from scene depths. Based on this, we use two images generated by translating the uncalibrated camera to recover scene depths, which are further utilized to linearly compute the infinite homography between an arbitrary uncalibrated image, and the image from the known camera. With the known camera as reference, the computed infinite homography is readily decomposed for camera calibration. The proposed self-calibration method has been tested with simulation and real image data. Experimental results demonstrate that the method is practical and accurate. This paper proposes using a "known reference camera" for camera calibration. The pure translation, as required in the method, is much more maneuverable, compared with some strict motions in the literature, such as pure rotation. The proposed self-calibration method has good potential for solving online camera calibration problems, which has important applications, especially for multicamera and zooming camera systems.
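The transfer of calibration from the known camera can be illustrated numerically. Assuming the infinite homography H maps the uncalibrated image to the reference image as H = K_ref * R * inv(K_unc), the image of the absolute conic transfers exactly as omega_unc = H^T * omega_ref * H with omega = inv(K K^T), and K_unc follows from an upper-triangular (Cholesky-style) factorization. This NumPy sketch illustrates that relation under the stated assumptions; it is not the paper's algorithm for estimating H itself.

```python
import numpy as np

def recover_K_from_infinite_homography(H, K_ref):
    """Recover the uncalibrated camera's intrinsics K_unc given the
    infinite homography H (uncalibrated -> reference image) and the
    known reference intrinsics K_ref."""
    omega_ref = np.linalg.inv(K_ref @ K_ref.T)
    omega_unc = H.T @ omega_ref @ H          # transferred absolute conic image
    A = np.linalg.inv(omega_unc)             # A = K_unc @ K_unc.T
    P = np.fliplr(np.eye(3))                 # index-reversal permutation
    L = np.linalg.cholesky(P @ A @ P)        # lower-triangular factor
    K = P @ L @ P                            # upper-triangular factor of A
    return K / K[2, 2]                       # fix the projective scale
```

The reversal trick converts NumPy's lower-triangular Cholesky factor into the upper-triangular factor required for an intrinsics matrix.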

  17. Centering mount for a gamma camera

    International Nuclear Information System (INIS)

    A device for centering a γ-camera detector during radionuclide diagnosis is described. It permits the use of available medical couches instead of a table with a transparent top. The device can be used for centering a detector (when it is fixed at the lower end of a γ-camera) on a required area of the patient's body

  18. A BASIC CAMERA UNIT FOR MEDICAL PHOTOGRAPHY.

    Science.gov (United States)

    SMIALOWSKI, A; CURRIE, D J

    1964-08-22

    A camera unit suitable for most medical photographic purposes is described. The unit comprises a single-lens reflex camera, an electronic flash unit and supplementary lenses. Simple instructions for use of this basic unit are presented. The unit is entirely suitable for taking fine-quality photographs of most medical subjects by persons who have had little photographic training.

  19. Cameras Monitor Spacecraft Integrity to Prevent Failures

    Science.gov (United States)

    2014-01-01

    The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained while working with NASA into an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.

  20. CCD Color Camera Characterization for Image Measurements

    NARCIS (Netherlands)

    Withagen, P.J.; Groen, F.C.A.; Schutte, K.

    2007-01-01

    In this article, we analyze a range of different types of cameras for use in measurements. We verify a general model of a charge-coupled device camera using experiments. This model includes gain and offset, additive and multiplicative noise, and gamma correction. It is shown that for sever
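The camera model named above (gain, offset, additive and multiplicative noise, gamma) can be written out as a small simulator. The parameter names and the exact placement of the gamma term are assumptions for illustration, not necessarily the precise model verified in the article.

```python
def ccd_model(irradiance, gain=1.0, offset=0.0, gamma=1.0,
              mult_noise=0.0, add_noise=0.0):
    """Toy CCD response: linear stage with multiplicative and additive
    noise plus offset, followed by a gamma nonlinearity."""
    linear = gain * irradiance * (1.0 + mult_noise) + add_noise + offset
    return max(linear, 0.0) ** gamma
```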

  1. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.
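The integrity, authenticity, and timestamping goals can be sketched in software by binding each frame to its capture time with an HMAC. This is a simplified, software-only illustration; the prototype described in the paper roots its keys and status reporting in a Trusted Platform Module, which is not reproduced here.

```python
import hashlib
import hmac

def sign_frame(key: bytes, frame: bytes, timestamp: float) -> str:
    """Bind a video frame to its capture timestamp with HMAC-SHA256."""
    msg = timestamp.hex().encode() + b"|" + frame
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_frame(key: bytes, frame: bytes, timestamp: float, tag: str) -> bool:
    """Check that neither the frame nor its timestamp was altered."""
    return hmac.compare_digest(sign_frame(key, frame, timestamp), tag)
```

Any change to the frame bytes or the timestamp invalidates the tag, giving integrity and authenticity for whoever holds the shared key.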

  2. Depth Estimation Using a Sliding Camera.

    Science.gov (United States)

    Ge, Kailin; Hu, Han; Feng, Jianjiang; Zhou, Jie

    2016-02-01

    Image-based 3D reconstruction technology is widely used in different fields. The conventional algorithms are mainly based on stereo matching between two or more fixed cameras, and high accuracy can only be achieved using a large camera array, which is very expensive and inconvenient in many applications. Another popular choice is utilizing structure-from-motion methods for arbitrarily placed camera(s). However, due to too many degrees of freedom, its computational cost is heavy and its accuracy is rather limited. In this paper, we propose a novel depth estimation algorithm using a sliding camera system. By analyzing the geometric properties of the camera system, we design a camera pose initialization algorithm that can work satisfyingly with only a small number of feature points and is robust to noise. For pixels corresponding to different depths, an adaptive iterative algorithm is proposed to choose optimal frames for stereo matching, which can take advantage of continuously pose-changing imaging and save the time consumption amazingly too. The proposed algorithm can also be easily extended to handle less constrained situations (such as using a camera mounted on a moving robot or vehicle). Experimental results on both synthetic and real-world data have illustrated the effectiveness of the proposed algorithm.
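The adaptive frame choice rests on the stereo relation disparity = f * B / Z: for a pixel at depth Z, a frame whose baseline B yields a moderate disparity gives the best-conditioned match. A toy baseline selector under that relation (the target disparity of 30 px is a hypothetical value for illustration, not from the paper) could look like:

```python
def pick_baseline(depth, focal_px, baselines, target_disp=30.0):
    """Choose the baseline whose predicted disparity f*B/Z is closest
    to a target disparity suitable for stereo matching."""
    return min(baselines, key=lambda B: abs(focal_px * B / depth - target_disp))
```

In the sliding-camera setting, each candidate baseline corresponds to a frame captured at a different position along the slide, so this amounts to choosing the matching frame per depth hypothesis.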

  3. Depth Estimation Using a Sliding Camera.

    Science.gov (United States)

    Ge, Kailin; Hu, Han; Feng, Jianjiang; Zhou, Jie

    2016-02-01

    Image-based 3D reconstruction technology is widely used in different fields. The conventional algorithms are mainly based on stereo matching between two or more fixed cameras, and high accuracy can only be achieved using a large camera array, which is expensive and inconvenient in many applications. Another popular choice is utilizing structure-from-motion methods for arbitrarily placed camera(s). However, due to too many degrees of freedom, its computational cost is heavy and its accuracy is rather limited. In this paper, we propose a novel depth estimation algorithm using a sliding camera system. By analyzing the geometric properties of the camera system, we design a camera pose initialization algorithm that works well with only a small number of feature points and is robust to noise. For pixels corresponding to different depths, an adaptive iterative algorithm is proposed to choose optimal frames for stereo matching, which takes advantage of the continuously changing camera pose and also saves considerable computation time. The proposed algorithm can also be easily extended to handle less constrained situations, such as a camera mounted on a moving robot or vehicle. Experimental results on both synthetic and real-world data illustrate the effectiveness of the proposed algorithm. PMID:26685238

  4. Rosetta Star Tracker and Navigation Camera

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Proposal in response to the Invitation to Tender (ITT) issued by Matra Marconi Space (MMS) for the procurement of the ROSETTA Star Tracker and Navigation Camera.

  5. Fazendo 3d com uma camera so

    CERN Document Server

    Lunazzi, J J

    2010-01-01

    A simple system for making stereo photographs or videos, based on just two mirrors that split the image field, was built in 1989 and recently adapted to a digital camera setup.

  6. Laser Dazzling of Focal Plane Array Cameras

    NARCIS (Netherlands)

    Schleijpen, H.M.A.; Dimmeler, A.; Eberle, B; Heuvel, J.C. van den; Mieremet, A.L.; Bekman, H.H.P.T.; Mellier, B.

    2007-01-01

    Laser countermeasures against infrared focal plane array cameras aim to saturate the full camera image. In this paper we will discuss the results of dazzling experiments performed with MWIR lasers. In the “low energy” pulse regime we observe an increasing saturated area with increasing power. The si…

  7. Laser Dazzling of Focal Plane Array Cameras

    NARCIS (Netherlands)

    Schleijpen, H.M.A.; Heuvel, J.C. van den; Mieremet, A.J.; Mellier, B.; Putten, F.J.M. van

    2007-01-01

    Laser countermeasures against infrared focal plane array cameras aim to saturate the full camera image. In this paper we will discuss the results of three different dazzling experiments performed with MWIR lasers and show that the obtained results are independent of the read-out mechanism of the cam…

  8. Creating and Using a Camera Obscura

    Science.gov (United States)

    Quinnell, Justin

    2012-01-01

    The camera obscura (Latin for "darkened room") is the earliest optical device and goes back over 2500 years. The small pinhole or lens at the front of the room allows light to enter and this is then "projected" onto a screen inside the room. This differs from a camera, which projects its image onto light-sensitive material. Originally images were…

  9. Flow visualization by mobile phone cameras

    Science.gov (United States)

    Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.

    2016-06-01

    Mobile smart phones have completely changed people's communication within the last ten years. However, these devices do not only offer communication through different channels but also devices and applications for fun and recreation. In this respect, mobile phones now include relatively fast (up to 240 Hz) cameras to capture high-speed videos of sport events or other fast processes. The article therefore explores the possibility to make use of this development and the widespread availability of these cameras for velocity measurements in industrial or technical applications and in fluid dynamics education in high schools and at universities. The requirements for a simple PIV (particle image velocimetry) system are discussed. A model experiment of a free water jet was used to prove the concept, shed some light on the achievable quality and determine bottlenecks by comparing the results obtained with a mobile phone camera with data taken by a high-speed camera suited for scientific experiments.
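The correlation step at the heart of PIV can be illustrated in one dimension. This is a toy sketch of the principle, not the authors' processing chain; real PIV cross-correlates 2-D interrogation windows between successive frames, and all values below are invented for illustration.

```python
# PIV principle in 1-D: the particle displacement between two exposures is
# the lag that maximizes their cross-correlation; velocity follows from the
# pixel scale and the inter-frame time.

def best_lag(a, b, max_lag):
    """Lag (pixels) that maximizes the cross-correlation of traces a and b."""
    def corr(lag):
        return sum(a[i] * b[i + lag] for i in range(len(a) - max_lag))
    return max(range(max_lag + 1), key=corr)

def velocity(lag_px, metres_per_px, dt_s):
    """Convert a correlation-peak displacement into a velocity estimate."""
    return lag_px * metres_per_px / dt_s
```

With a 240 Hz phone camera, dt_s is 1/240 s, so a known pixel scale turns the peak lag between consecutive frames directly into a jet velocity.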

  10. Gamma camera performance: technical assessment protocol

    Energy Technology Data Exchange (ETDEWEB)

    Bolster, A.A. [West Glasgow Hospitals NHS Trust, London (United Kingdom). Dept. of Clinical Physics; Waddington, W.A. [University College London Hospitals NHS Trust, London (United Kingdom). Inst. of Nuclear Medicine

    1996-12-31

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author).

  11. Adapting virtual camera behaviour through player modelling

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Research in virtual camera control has focused primarily on finding methods to allow designers to place cameras effectively and efficiently in dynamic and unpredictable environments, and to generate complex and dynamic plans for cinematography in virtual environments. In this article, we propose a novel approach to virtual camera control, which builds upon camera control and player modelling to provide the user with an adaptive point-of-view. To achieve this goal, we propose a methodology to model the player’s preferences on virtual camera movements and we employ the resulting models to tailor the viewpoint movements to the player type and her game-play style. Ultimately, the methodology is applied to a 3D platform game and is evaluated through a controlled experiment; the results suggest that the resulting adaptive cinematographic experience is favoured by some player types and it can generate…

  12. Modelling Virtual Camera Behaviour Through Player Gaze

    DEFF Research Database (Denmark)

    Picardi, Andrea; Burelli, Paolo; Yannakakis, Georgios N.

    2012-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction largely depend on the placement and animation of the virtual camera. Therefore, virtual camera control plays a critical role in player experience and, thereby, in the overall quality of a computer game. Both game industry and game AI research focus on the development of increasingly sophisticated systems to automate the control of the virtual camera, integrating artificial intelligence algorithms within physical simulations. However, in both industry and academia little research has been carried out on the relationship between virtual camera, game-play and player behaviour. We run a game user experiment to shed some light on this relationship and identify relevant differences between camera behaviours through different game sessions, playing behaviours and player gaze patterns. Results show that users can…

  13. Airborne Digital Camera. A digital view from above; Airborne Digital Camera. Der digitale Blick von oben

    Energy Technology Data Exchange (ETDEWEB)

    Roeser, H.P. [DLR Deutsches Zentrum fuer Luft- und Raumfahrt e.V., Berlin (Germany). Inst. fuer Weltraumsensorik und Planetenerkundung

    1999-09-01

    The Airborne Digital Camera (ADC) project has its roots in the MARS-96 mission: the WAOSS Mars camera designed for that mission provided the basis for the innovative concept of a digital airborne camera, which is putting airborne photogrammetry and remote sensing on a completely new technological footing. The goal of the ADC project is the development of the first commercial digital aerial camera.

  14. NIR Camera/spectrograph: TEQUILA

    Science.gov (United States)

    Ruiz, E.; Sohn, E.; Cruz-Gonzalez, I.; Salas, L.; Parraga, A.; Torres, R.; Perez, M.; Cobos, F.; Tejada, C.; Iriarte, A.

    1998-11-01

    We describe the configuration and operation modes of the IR camera/spectrograph called TEQUILA, based on a 1024×1024 HgCdTe FPA (HAWAII). The optical system will allow three possible modes of operation: direct imaging, low and medium resolution spectroscopy and polarimetry. The basic system is being designed to consist of the following: 1) An LN2 dewar that houses the FPA together with the preamplifiers and a 24-position filter cylinder. 2) Control and readout electronics based on DSP modules linked to a workstation through fiber optics. 3) An optomechanical assembly cooled to -30 °C that provides an efficient operation of the instrument in its various modes. 4) A control module for the moving parts of the instrument. The opto-mechanical assembly will have the necessary provisions to install a scanning Fabry-Perot interferometer and an adaptive optics correction system. The final image acquisition and control of the whole instrument is carried out in a workstation to provide the observer with a friendly environment. The system will operate at the 2.1 m telescope at the Observatorio Astronomico Nacional in San Pedro Martir, B.C. (Mexico), and is intended to be a first-light instrument for the new 7.8 m Mexican Infrared-Optical Telescope (TIM).

  15. Cloud Computing with Context Cameras

    Science.gov (United States)

    Pickles, A. J.; Rosing, W. E.

    2016-05-01

    We summarize methods and plans to monitor and calibrate photometric observations with our autonomous, robotic network of 2m, 1m and 40cm telescopes. These are sited globally to optimize our ability to observe time-variable sources. Wide field "context" cameras are aligned with our network telescopes and cycle every ~2 minutes through BVr'i'z' filters, spanning our optical range. We measure instantaneous zero-point offsets and transparency (throughput) against calibrators in the 5-12m range from the all-sky Tycho2 catalog, and periodically against primary standards. Similar measurements are made for all our science images, with typical fields of view of ~0.5 degrees. These are matched against Landolt, Stetson and Sloan standards, and against calibrators in the 10-17m range from the all-sky APASS catalog. Such measurements provide pretty good instantaneous flux calibration, often to better than 5%, even in cloudy conditions. Zero-point and transparency measurements can be used to characterize, monitor and inter-compare sites and equipment. When accurate calibrations of Target against Standard fields are required, monitoring measurements can be used to select truly photometric periods when accurate calibrations can be automatically scheduled and performed.

  16. Cloud Computing with Context Cameras

    CERN Document Server

    Pickles, A J

    2013-01-01

    We summarize methods and plans to monitor and calibrate photometric observations with our autonomous, robotic network of 2m, 1m and 40cm telescopes. These are sited globally to optimize our ability to observe time-variable sources. Wide field "context" cameras are aligned with our network telescopes and cycle every 2 minutes through BVriz filters, spanning our optical range. We measure instantaneous zero-point offsets and transparency (throughput) against calibrators in the 5-12m range from the all-sky Tycho2 catalog, and periodically against primary standards. Similar measurements are made for all our science images, with typical fields of view of 0.5 degrees. These are matched against Landolt, Stetson and Sloan standards, and against calibrators in the 10-17m range from the all-sky APASS catalog. Such measurements provide pretty good instantaneous flux calibration, often to better than 5%, even in cloudy conditions. Zero-point and transparency measurements can be used to characterize, monitor and inter-comp...

  17. Smart Camera Technology Increases Quality

    Science.gov (United States)

    2004-01-01

    When it comes to real-time image processing, everyone is an expert. People begin processing images at birth and rapidly learn to control their responses through the real-time processing of the human visual system. The human eye captures an enormous amount of information in the form of light images. In order to keep the brain from becoming overloaded with all the data, portions of an image are processed at a higher resolution than others, such as a traffic light changing colors. In the same manner, image processing products strive to extract the information stored in light in the most efficient way possible. Digital cameras available today capture millions of pixels worth of information from incident light. However, at frame rates more than a few per second, existing digital interfaces are overwhelmed. All the user can do is store several frames to memory until that memory is full and then subsequent information is lost. New technology pairs existing digital interface technology with an off-the-shelf complementary metal oxide semiconductor (CMOS) imager to provide more than 500 frames per second of specialty image processing. The result is a cost-effective detection system unlike any other.

  18. True three-dimensional camera

    Science.gov (United States)

    Kornreich, Philipp; Farell, Bart

    2013-01-01

    An imager that can measure the distance from each pixel to the point on the object that is in focus at the pixel is described. This is accomplished by short photo-conducting lightguides at each pixel. In the eye the rods and cones are the fiber-like lightguides. The device uses ambient light that is only coherent in spherical shell-shaped light packets of thickness of one coherence length. Modern semiconductor technology permits the construction of lightguides shorter than a coherence length of ambient light. Each of the frequency components of the broad band light arriving at a pixel has a phase proportional to the distance from an object point to its image pixel. Light frequency components in the packet arriving at a pixel through a convex lens add constructively only if the light comes from the object point in focus at this pixel. The light in packets from all other object points cancels. Thus the pixel receives light from one object point only. The lightguide has contacts along its length. The lightguide charge carriers are generated by the light patterns. These light patterns, and thus the photocurrent, shift in response to the phase of the input signal. Thus, the photocurrent is a function of the distance from the pixel to its object point. Applications include autonomous vehicle navigation and robotic vision. Another application is a crude teleportation system consisting of a camera and a three-dimensional printer at a remote location.

  19. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2-deg deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables
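The kinematic idea above can be sketched compactly. This is a hedged illustration, not ORNL's code: once the target position has been expressed in the camera mount's frame (in the paper, via chains of 4 x 4 transforms built from manipulator joint sensors), pan and tilt are just spherical-coordinate angles, and a deadband suppresses the continuous small corrections that cause operator seasickness. Axis conventions and the deadband width are the only assumptions here.

```python
import math

DEADBAND_DEG = 2.0  # +/- 2 degrees, as in the paper

def pan_tilt(x, y, z):
    """Pan/tilt (degrees) that centre a target at (x, y, z) in the mount
    frame; x points forward along the camera boresight, z up."""
    pan = math.degrees(math.atan2(y, x))
    tilt = math.degrees(math.atan2(z, math.hypot(x, y)))
    return pan, tilt

def correction(current_deg, desired_deg):
    """Bang-bang style: issue a move only when the error leaves the deadband."""
    error = desired_deg - current_deg
    return error if abs(error) > DEADBAND_DEG else 0.0
```

A target dead ahead and slightly left, for instance, demands a 45° pan but no tilt, and sub-2° errors are deliberately ignored.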

  20. Sky camera geometric calibration using solar observations

    Science.gov (United States)

    Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan

    2016-09-01

    A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. Calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
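The equisolid-angle projection named above maps a ray's zenith angle θ to image radius r = 2f sin(θ/2). A sketch of the forward and inverse mapping that such a calibration would fit is below; the focal length is an assumed placeholder, and the principal point and distortion terms the paper's full model includes are omitted.

```python
import math

def equisolid_radius(f_px, theta_rad):
    """Image radius (pixels) of a ray at zenith angle theta: r = 2 f sin(theta/2)."""
    return 2.0 * f_px * math.sin(theta_rad / 2.0)

def equisolid_theta(f_px, r_px):
    """Inverse mapping: zenith angle of the ray imaged at radius r."""
    return 2.0 * math.asin(r_px / (2.0 * f_px))
```

Calibration then adjusts f (plus centre and distortion parameters) until sun positions predicted from the solar position algorithm agree with those detected on the image plane.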

  1. Thermal characterization of a NIR hyperspectral camera

    Science.gov (United States)

    Parra, Francisca; Meza, Pablo; Pezoa, Jorge E.; Torres, Sergio N.

    2011-11-01

    The accuracy achieved by applications employing hyperspectral data collected by hyperspectral cameras depends heavily on a proper estimation of the true spectral signal. Beyond question, a proper knowledge about the sensor response is key in this process. It is argued here that the common first order representation for hyperspectral NIR sensors does not represent accurately their thermal wavelength-dependent response, hence calling for more sophisticated and precise models. In this work, a wavelength-dependent, nonlinear model for a near infrared (NIR) hyperspectral camera is proposed based on its experimental characterization. Experiments have shown that when temperature is used as the input signal, the camera response is almost linear at low wavelengths, while as the wavelength increases the response becomes exponential. This wavelength-dependent behavior is attributed to the nonlinear responsivity of the sensors in the NIR spectrum. As a result, the proposed model considers different nonlinear input/output responses, at different wavelengths. To complete the representation, both the nonuniform response of neighboring detectors in the camera and the time varying behavior of the input temperature have also been modeled. The experimental characterization and the proposed model assessment have been conducted using a NIR hyperspectral camera in the range of 900 to 1700 [nm] and a black body radiator source. The proposed model was utilized to successfully compensate for both: (i) the nonuniformity noise inherent to the NIR camera, and (ii) the striping noise induced by the nonuniformity and the scanning process of the camera while rendering hyperspectral images.

  2. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a +-2-deg deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables

  3. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off ("bang-bang") closed-loop control with a +-2-deg deadband. The deadband area is desirable to avoid operator "seasickness" caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator System SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system

  4. A solid state streak camera

    Science.gov (United States)

    Kleinfelder, Stuart; Kwiatkowski, Kris; Shah, Ashish

    2005-03-01

    A monolithic solid-state streak camera has been designed and fabricated in a standard 0.35 μm, 3.3V, thin-oxide digital CMOS process. It consists of a 1-D linear array of 150 integrated photodiodes, followed by fast analog buffers and on-chip, 150-deep analog frame storage. Each pixel's front-end consists of an n-diffusion / p-well photodiode, with fast complementary reset transistors, and a source-follower buffer. Each buffer drives a line of 150 sample circuits per pixel, with each sample circuit consisting of an n-channel sample switch, a 0.1 pF double-polysilicon sample capacitor, a reset switch to definitively clear the capacitor, and a multiplexed source-follower readout buffer. Fast on-chip sample clock generation was designed using a self-timed break-before-make operation that ensures the maximum time for sample settling. The electrical analog bandwidth of each channel's buffer and sampling circuits was designed to exceed 1 GHz. Sampling speeds of 400 M-frames/s have been achieved using electrical input signals. Operation with optical input signals has been demonstrated at 100 MHz sample rates. Sample output multiplexing allows the readout of all 22,500 samples (150 pixels times 150 samples per pixel) in about 3 ms. The chip's output range was a maximum of 1.48 V on a 3.3V supply voltage, corresponding to a maximum 2.55 V swing at the photodiode. Time-varying output noise was measured to be 0.51 mV, rms, at 100 MHz, for a dynamic range of ~11.5 bits, rms. Circuit design details are presented, along with the results of electrical measurements and optical experiments with fast pulsed laser light sources at several wavelengths.

  5. Determining camera parameters for round glassware measurements

    Science.gov (United States)

    Baldner, F. O.; Costa, P. B.; Gomes, J. F. S.; Filho, D. M. E. S.; Leta, F. R.

    2015-01-01

    Nowadays there are many types of accessible cameras, including digital single lens reflex ones. Although these cameras are not usually employed in machine vision applications, they can be an interesting choice. However, these cameras have many available parameters to be chosen by the user and it may be difficult to select the best of these in order to acquire images with the needed metrological quality. This paper proposes a methodology to select a set of parameters that will supply a machine vision system with the needed image quality, considering the measurements required of laboratory glassware.

  6. Uncertainty of temperature measurement with thermal cameras

    Science.gov (United States)

    Chrzanowski, Krzysztof; Matyszkiel, Robert; Fischer, Joachim; Barela, Jaroslaw

    2001-06-01

    All main international metrological organizations propose a parameter called uncertainty as a measure of the accuracy of measurements. A mathematical model that enables the calculation of the uncertainty of temperature measurement with thermal cameras is presented. The standard uncertainty or the expanded uncertainty of temperature measurement of the tested object can be calculated when the bounds within which the real object effective emissivity εr, the real effective background temperature Tba(r), and the real effective atmospheric transmittance τa(r) are located can be estimated, and when the intrinsic uncertainty of the thermal camera and the relative spectral sensitivity of the thermal camera are known.
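A minimal GUM-style sketch shows how such bounds feed a combined standard uncertainty: an influence quantity known only to lie within ±a around its estimate is treated as rectangularly distributed, contributing u = c·a/√3 through its sensitivity coefficient c, and the contributions combine in quadrature with the camera's intrinsic uncertainty. This follows the general uncertainty framework, not the authors' specific camera model, and all numbers below are illustrative.

```python
import math

def combined_uncertainty(bounds_and_sensitivities, u_intrinsic):
    """Combine rectangular-distribution contributions with the camera's
    intrinsic uncertainty in quadrature.

    bounds_and_sensitivities: (half_width, sensitivity) pairs, e.g. the
    half-width of the interval bounding the object emissivity and the
    partial derivative of indicated temperature w.r.t. emissivity.
    """
    u_sq = u_intrinsic ** 2
    for half_width, sensitivity in bounds_and_sensitivities:
        u_sq += (sensitivity * half_width / math.sqrt(3.0)) ** 2
    return math.sqrt(u_sq)
```

Multiplying by a coverage factor (typically k = 2) then yields the expanded uncertainty the abstract mentions.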

  7. Close-range photogrammetry with video cameras

    Science.gov (United States)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1985-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  8. Screen-Camera Calibration Using Gray Codes

    OpenAIRE

    FRANCKEN, Yannick; Hermans, Chris; Bekaert, Philippe

    2009-01-01

    In this paper we present a method for efficient calibration of a screen-camera setup, in which the camera is not directly facing the screen. A spherical mirror is used to make the screen visible to the camera. Using Gray code illumination patterns, we can uniquely identify the reflection of each screen pixel on the imaged spherical mirror. This allows us to compute a large set of 2D-3D correspondences, using only two sphere locations. Compared to previous work, this means we require less manu...

  9. Fuzzy logic control for camera tracking system

    Science.gov (United States)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
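The fuzzy-control pattern described above can be illustrated with a toy rule base: the target's horizontal pixel error is fuzzified into triangular "left", "centred", and "right" sets, each rule maps a set to a pan-rate output, and a weighted average defuzzifies them into one command. The membership shapes, rule outputs, and all numbers are invented for illustration and are not the paper's actual controller.

```python
def tri(x, a, b, c):
    """Triangular membership with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def pan_rate(error_px, span=100.0, max_rate=5.0):
    """Defuzzified pan-rate command (deg/s) from the horizontal pixel error."""
    mu_left = tri(error_px, -span, -span / 2, 0.0)
    mu_centre = tri(error_px, -span / 2, 0.0, span / 2)
    mu_right = tri(error_px, 0.0, span / 2, span)
    # Rules: left -> -max_rate, centred -> 0, right -> +max_rate
    num = -max_rate * mu_left + 0.0 * mu_centre + max_rate * mu_right
    den = mu_left + mu_centre + mu_right
    return num / den if den else 0.0
```

The attraction of this scheme, as the abstract notes, is that the rule evaluation is simple enough to run on a dedicated fuzzy chip at the sensor.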

  10. Neural network method for characterizing video cameras

    Science.gov (United States)

    Zhou, Shuangquan; Zhao, Dazun

    1998-08-01

    This paper presents a neural network method for characterizing a color video camera. A multilayer feedforward network, trained with the error back-propagation learning rule, is used as a nonlinear transformer to model the camera, realizing a mapping from the CIELAB color space to RGB color space. With a SONY video camera, a D65 illuminant, a Pritchard spectroradiometer, 410 JIS color charts as training data and 36 charts as testing data, results show that the mean error on the training data is 2.9 and that on the testing data is 4.0 in a 256³ RGB space.

  11. Task Panel Sensing with a Movable Camera

    Science.gov (United States)

    Wolfe, William J.; Mathis, Donald W.; Magee, Michael; Hoff, William A.

    1990-03-01

    This paper discusses the integration of model based computer vision with a robot planning system. The vision system deals with structured objects with several movable parts (the "Task Panel"). The robot planning system controls a T3-746 manipulator that has a gripper and a wrist mounted camera. There are two control functions: move the gripper into position for manipulating the panel fixtures (doors, latches, etc.), and move the camera into positions preferred by the vision system. This paper emphasizes the issues related to repositioning the camera for improved viewpoints.

  12. Camera Based Navigation System with Augmented Reality

    Directory of Open Access Journals (Sweden)

    M. Marcu

    2012-06-01

    Full Text Available Nowadays smart mobile devices have enough processing power, memory, storage and always-connected wireless communication bandwidth that makes them available for any type of application. Augmented reality (AR) proposes a new type of applications that tries to enhance the real world by superimposing or combining virtual objects or computer generated information with it. In this paper we present a camera based navigation system with augmented reality integration. The proposed system aims at the following: the user points the camera of the smartphone towards a point of interest, like a building or any other place, and the application searches for relevant information about that specific place and superimposes the data over the video feed on the display. When the user moves the camera away, changing its orientation, the data changes as well, in real-time, with the proper information about the place that is now in the camera view.

  13. Action selection for single-camera SLAM.

    Science.gov (United States)

    Vidal-Calleja, Teresa A; Sanfeliu, Alberto; Andrade-Cetto, Juan

    2010-12-01

    A method for evaluating, at video rate, the quality of actions for a single camera while mapping unknown indoor environments is presented. The strategy maximizes mutual information between measurements and states to help the camera avoid making ill-conditioned measurements, which arise from the lack of depth information in monocular vision systems. Our system prompts a user with the appropriate motion commands during 6-DOF visual simultaneous localization and mapping with a handheld camera. Additionally, the system has been ported to a mobile robotic platform, thus closing the control-estimation loop. To show the viability of the approach, simulations and experiments are presented for the unconstrained motion of a handheld camera and for the motion of a mobile robot with nonholonomic constraints. When combined with a path planner, the technique safely drives to a marked goal while, at the same time, producing an optimal estimated map. PMID:20350845
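For a linear-Gaussian measurement model the mutual-information criterion above has a closed form, I = ½ ln det(S) − ½ ln det(R) with S = H P Hᵀ + R, so candidate actions can be ranked by the information their measurements would carry. The sketch below is a minimal illustration under that assumption; the covariances and measurement Jacobians are invented, not taken from the paper.

```python
import numpy as np

def info_gain(P_prior, H, R):
    """Mutual information (nats) between a linear-Gaussian measurement
    z = H x + v, v ~ N(0, R), and the state x ~ N(mu, P_prior):
    I = 0.5 * (ln det(H P H^T + R) - ln det(R))."""
    S = H @ P_prior @ H.T + R
    _, logdet_s = np.linalg.slogdet(S)
    _, logdet_r = np.linalg.slogdet(R)
    return 0.5 * (logdet_s - logdet_r)

P = np.diag([1.0, 1.0, 4.0])   # hypothetical state covariance; depth is uncertain
R = 0.1 * np.eye(2)            # measurement noise

# Two candidate motions, exposing different state components to the camera:
H_sideways = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])  # observes depth
H_forward  = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # no depth info

gains = {"sideways": info_gain(P, H_sideways, R),
         "forward":  info_gain(P, H_forward, R)}
best = max(gains, key=gains.get)
```

The sideways motion wins because it observes the poorly known depth coordinate, which is exactly the ill-conditioning the paper's strategy steers the camera away from.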

  14. Research of Camera Calibration Based on DSP

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2013-09-01

    Full Text Available To take advantage of the efficiency and stability of DSPs in data processing and of the functions of the OpenCV library, this study puts forward a scheme for camera calibration in a DSP embedded system. An algorithm for camera calibration based on OpenCV is designed by analyzing the camera model and lens distortion. EMCV is ported to the DSP, and the calibration algorithm is migrated and optimized in the CCS development environment under the DSP/BIOS system. While realizing the calibration function, the algorithm improves program execution efficiency and calibration precision, laying the foundation for further research on visual localization in DSP embedded systems.
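The "camera model and lens distortion" the record refers to is the standard pinhole model with radial distortion terms, the same parameterization OpenCV's calibrateCamera fits. A minimal NumPy sketch of the forward model (hypothetical intrinsics and distortion coefficients, no OpenCV dependency):

```python
import numpy as np

def project(points_cam, fx, fy, cx, cy, k1, k2):
    """Pinhole projection with two radial distortion terms (Brown model).
    points_cam: (N, 3) points in the camera frame; returns (N, 2) pixels."""
    x = points_cam[:, 0] / points_cam[:, 2]
    y = points_cam[:, 1] / points_cam[:, 2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2   # radial distortion factor
    u = fx * x * d + cx
    v = fy * y * d + cy
    return np.stack([u, v], axis=1)

pts = np.array([[0.0, 0.0, 2.0],       # on the optical axis
                [0.5, -0.2, 2.0]])     # off-axis, so distortion applies
uv = project(pts, fx=800, fy=800, cx=320, cy=240, k1=-0.1, k2=0.01)
```

Calibration is the inverse problem: given many chessboard-corner observations, solve for fx, fy, cx, cy, k1, k2 (and the poses) that minimize reprojection error under this model.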

  15. Calibration Procedures on Oblique Camera Setups

    Science.gov (United States)

    Kemper, G.; Melykuti, B.; Yu, C.

    2016-06-01

    Beside the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires a precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and a 50 mm lens while the oblique ones capture 50 MPix images using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU, which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms. These had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots of all 5 cameras and registered GPS/IMU data. This specific mission was designed with two different altitudes and additional cross lines at each flying height. The five images from each exposure position have no overlap, but within the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration. 
In a first step with the help of
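The antenna-IMU lever-arm registration mentioned above amounts to rotating a body-frame offset by the current attitude and adding it to the IMU position. A minimal sketch with a roll/pitch/yaw rotation (the frame conventions and numbers are illustrative assumptions, not the sensor's actual geometry):

```python
import numpy as np

def antenna_position(p_imu, rpy, lever_arm):
    """GNSS antenna position from the IMU position and a body-frame
    lever arm, rotated by the current roll/pitch/yaw attitude (Rz Ry Rx)."""
    r, p, y = rpy
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r),  np.cos(r)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0],
                   [np.sin(y),  np.cos(y), 0],
                   [0, 0, 1]])
    return p_imu + (Rz @ Ry @ Rx) @ lever_arm

arm = np.array([0.0, 0.0, 1.0])                       # 1 m above the IMU
p_level  = antenna_position(np.zeros(3), (0.0, 0.0, 0.0), arm)
p_rolled = antenna_position(np.zeros(3), (np.pi / 2, 0.0, 0.0), arm)
```

With a gyro-stabilized mount the attitude of the camera head and the aircraft differ continuously, which is why the lever arm "floats" and must be estimated alongside the raw GNSS-IMU data rather than fixed once.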

  16. CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    G. Kemper

    2016-06-01

    Full Text Available Beside the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires a precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and a 50 mm lens while the oblique ones capture 50 MPix images using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU, which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms. These had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots of all 5 cameras and registered GPS/IMU data. This specific mission was designed with two different altitudes and additional cross lines at each flying height. The five images from each exposure position have no overlap, but within the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration. In a first

  17. Increase in the Array Television Camera Sensitivity

    Science.gov (United States)

    Shakhrukhanov, O. S.

    A simple adder circuit for successive television frames, which makes it possible to considerably increase the sensitivity of such radiation detectors, is suggested, using the array television camera QN902K as an example.
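Accumulating successive frames of a static scene is a standard sensitivity trick: the signal adds coherently while uncorrelated read noise averages out, improving SNR by roughly √N for N frames. A small simulation of that effect (synthetic frames; nothing here is specific to the QN902K):

```python
import numpy as np

rng = np.random.default_rng(1)

signal = np.full((64, 64), 10.0)   # static scene, constant brightness
n_frames = 64
sigma = 5.0                        # per-frame additive read noise

# Noisy frames from the sensor; successive-frame accumulation as in the
# adder circuit, normalized to a mean so levels stay comparable.
frames = signal + rng.normal(0.0, sigma, size=(n_frames, 64, 64))
single = frames[0]
summed = frames.mean(axis=0)

snr_single = signal.mean() / single.std()
snr_summed = signal.mean() / summed.std()   # expected ~sqrt(64) = 8x better
```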

  18. Traffic Cameras, MDTA Cameras, Camera locations at MDTA, Camera location inside the tunnel (SENSITIVE), Published in 2010, 1:1200 (1in=100ft) scale, Maryland Transportation Authority.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Traffic Cameras dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Field Survey/GPS information as of 2010. It is described as...

  19. Ge Quantum Dot Infrared Imaging Camera Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Luna Innovations Incorporated proposes to develop a high performance Ge quantum dots-based infrared (IR) imaging camera on Si substrate. The high sensitivity, large...

  20. Portable mini gamma camera for medical applications

    CERN Document Server

    Porras, E; Benlloch, J M; El-Djalil-Kadi-Hanifi, M; López, S; Pavon, N; Ruiz, J A; Sánchez, F; Sebastiá, A

    2002-01-01

    A small, portable and low-cost gamma camera for medical applications has been developed and clinically tested. This camera, based on a scintillator crystal and a Position Sensitive Photo-Multiplier Tube, has a useful field of view of 4.6 cm diameter and provides 2.2 mm of intrinsic spatial resolution. Its mobility and light weight allow it to reach the patient from any desired direction. This camera images small organs with high efficiency and so addresses the demand for devices for specific clinical applications. In this paper, we present the camera and briefly describe the procedures that have led us to choose its configuration and the image reconstruction method. The clinical tests and diagnostic capability are also presented and discussed.

  1. A Survey of Catadioptric Omnidirectional Camera Calibration

    Directory of Open Access Journals (Sweden)

    Yan Zhang

    2013-02-01

    Full Text Available For a dozen years computer vision has become more and more popular, and omnidirectional cameras, with their larger field of view, are widely used in many fields, such as robot navigation, visual surveillance, virtual reality and three-dimensional reconstruction. Camera calibration is an essential step in obtaining three-dimensional geometric information from a two-dimensional image. Meanwhile, catadioptric omnidirectional images suffer from catadioptric distortion, which needs to be corrected in many applications, so the study of calibration methods for such cameras has important theoretical significance and practical applications. This paper first introduces the research status of catadioptric omnidirectional imaging systems; then the image formation process of the catadioptric omnidirectional imaging system is given; finally, a simple classification of omnidirectional calibration methods is given, and the advantages and disadvantages of these methods are discussed.

  2. Compact stereo endoscopic camera using microprism arrays.

    Science.gov (United States)

    Yang, Sung-Pyo; Kim, Jae-Jun; Jang, Kyung-Won; Song, Weon-Kook; Jeong, Ki-Hun

    2016-03-15

    This work reports a microprism array (MPA) based compact stereo endoscopic camera with a single image sensor. The MPAs were monolithically fabricated by using two-step photolithography and geometry-guided resist reflow to form an appropriate prism angle for stereo image pair formation. The fabricated MPAs were transferred onto a glass substrate with a UV curable resin replica by using polydimethylsiloxane (PDMS) replica molding and then successfully integrated in front of a single camera module. The stereo endoscopic camera with MPA splits an image into two stereo images and successfully demonstrates the binocular disparities between the stereo image pairs for objects with different distances. This stereo endoscopic camera can serve as a compact and 3D imaging platform for medical, industrial, or military uses.
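The binocular disparity the stereo endoscope measures maps to depth through the classic relation Z = f·B/d (focal length times baseline over disparity): nearer tissue produces larger disparity between the two half-images. A minimal sketch with invented numbers (the actual MPA baseline and optics are not given in the record):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Classic stereo relation Z = f * B / d. f_px: focal length in pixels,
    baseline_m: separation of the two virtual viewpoints in meters,
    disparity_px: horizontal shift of a feature between the image pair."""
    return f_px * baseline_m / disparity_px

# Hypothetical endoscope-scale values: 4 mm effective baseline.
near = depth_from_disparity(f_px=500.0, baseline_m=0.004, disparity_px=20.0)
far  = depth_from_disparity(f_px=500.0, baseline_m=0.004, disparity_px=5.0)
```

The tiny baseline achievable with a microprism array on a single sensor is what limits the usable depth range compared with a dual-camera rig.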

  3. Selecting the Right Camera for Your Desktop.

    Science.gov (United States)

    Rhodes, John

    1997-01-01

    Provides an overview of camera options and selection criteria for desktop videoconferencing. Key factors in image quality are discussed, including lighting, resolution, and signal-to-noise ratio; and steps to improve image quality are suggested. (LRW)

  4. Vacuum compatible miniature CCD camera head

    Science.gov (United States)

    Conder, Alan D.

    2000-01-01

    A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and is versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close(0.04" for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military industrial, and medical imaging applications.

  5. AUTOMATIC THEFT SECURITY SYSTEM (SMART SURVEILLANCE CAMERA)

    OpenAIRE

    Veena G.S; Chandrika Prasad; Khaleel K

    2013-01-01

    The proposed work aims to create a smart application camera, with the intention of eliminating the need for a human presence to detect any unwanted sinister activities, such as theft in this case. Spread across the campus are certain valuable biometric identification systems at arbitrary locations. The application monitors these systems (hereafter referred to as "object") using our smart camera system based on an OpenCV platform. By using OpenCV Haar Training, employing the Vio...

  6. The Large APEX Bolometer Camera LABOCA

    CERN Document Server

    Siringo, G; Kovács, A; Schuller, F; Weiss, A; Esch, W; Gemuend, H P; Jethava, N; Lundershausen, G; Colin, A; Guesten, R; Menten, K M; Beelen, A; Bertoldi, F; Beeman, J W; Haller, E E

    2009-01-01

    The Large APEX Bolometer Camera, LABOCA, has been commissioned for operation as a new facility instrument at the Atacama Pathfinder Experiment 12 m submillimeter telescope. This new 295-bolometer total power camera, operating in the 870 micron atmospheric window, combined with the high efficiency of APEX and the excellent atmospheric transmission at the site, offers unprecedented capability in mapping submillimeter continuum emission for a wide range of astronomical purposes.

  7. CMOS Camera Array With Onboard Memory

    Science.gov (United States)

    Gat, Nahum

    2009-01-01

    A compact CMOS (complementary metal oxide semiconductor) camera system has been developed with high resolution (1.3 Megapixels), a USB (universal serial bus) 2.0 interface, and an onboard memory. Exposure times, and other operating parameters, are sent from a control PC via the USB port. Data from the camera can be received via the USB port and the interface allows for simple control and data capture through a laptop computer.

  8. Imaging camera with multiwire proportional chamber

    International Nuclear Information System (INIS)

    The camera for imaging radioisotope dislocations for use in nuclear medicine or for other applications, claimed in the patent, is provided by two multiwire lattices for the x-coordinate connected to a first coincidence circuit, and by two multiwire lattices for the y-coordinate connected to a second coincidence circuit. This arrangement eliminates the need of using a collimator and increases camera sensitivity while reducing production cost. (Ha)

  9. An imaging system for a gamma camera

    International Nuclear Information System (INIS)

    A detailed description is given of a novel gamma camera which is designed to produce superior images to those of conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given and the problems and limitations introduced by noise are discussed in full. (U.K.)

  10. Image noise induced errors in camera positioning

    OpenAIRE

    G. Chesi; Hung, YS

    2007-01-01

    The problem of evaluating worst-case camera positioning error induced by unknown-but-bounded (UBB) image noise for a given object-camera configuration is considered. Specifically, it is shown that upper bounds to the rotation and translation worst-case error for a certain image noise intensity can be obtained through convex optimizations. These upper bounds, contrary to lower bounds provided by standard optimization tools, allow one to design robust visual servo systems. © 2007 IEEE.

  11. A stereoscopic lens for digital cinema cameras

    Science.gov (United States)

    Lipton, Lenny; Rupkalvis, John

    2015-03-01

    Live-action stereoscopic feature films are, for the most part, produced using a costly post-production process to convert planar cinematography into stereo-pair images and are only occasionally shot stereoscopically using bulky dual-cameras that are adaptations of the Ramsdell rig. The stereoscopic lens design described here might very well encourage more live-action image capture because it uses standard digital cinema cameras and workflow to save time and money.

  12. A comparison of colour micrographs obtained with a charge-coupled device (CCD) camera and a 35-mm camera

    DEFF Research Database (Denmark)

    Pedersen, Mads Møller; Smedegaard, Jesper; Jensen, Peter Koch;

    2005-01-01

    ophthalmology, colour CCD camera, colour film, digital imaging, resolution, micrographs, histopathology, light microscopy

  13. Lag Camera: A Moving Multi-Camera Array for Scene-Acquisition

    Directory of Open Access Journals (Sweden)

    Yi Xu

    2007-04-01

    Full Text Available Many applications, such as telepresence, virtual reality, and interactive walkthroughs, require a three-dimensional (3D) model of real-world environments. Methods such as lightfields, geometric reconstruction and computer vision use cameras to acquire visual samples of the environment and construct a model. Unfortunately, obtaining models of real-world locations is a challenging task. In particular, important environments are often actively in use, containing moving objects, such as people entering and leaving the scene. The methods previously listed have difficulty in capturing the color and structure of the environment while in the presence of moving and temporary occluders. We describe a class of cameras called lag cameras. The main concept is to generalize a camera to take samples over space and time. Such a camera can easily and interactively detect moving objects while continuously moving through the environment. Moreover, since both the lag camera and occluder are moving, the scene behind the occluder is captured by the lag camera even from viewpoints where the occluder lies in between the lag camera and the hidden scene. We demonstrate an implementation of a lag camera, complete with analysis and captured environments.
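The core lag-camera idea, many temporal samples of each (spatially aligned) scene point so a moving occluder corrupts only a minority of them, can be illustrated with a per-pixel temporal median. This toy sketch ignores the real alignment problem and just shows the occlusion-removal step on synthetic samples:

```python
import numpy as np

# Aligned temporal samples of a 4-pixel scene strip; a moving occluder
# (value 99) passes over pixel 1 and then pixel 2 in different frames.
background = np.array([10.0, 20.0, 30.0, 40.0])
samples = np.tile(background, (7, 1))   # 7 temporal samples per pixel
samples[2, 1] = 99.0
samples[3, 1] = 99.0
samples[4, 2] = 99.0

# Because the occluder covers each pixel in a minority of samples, the
# per-pixel temporal median recovers the hidden background.
recovered = np.median(samples, axis=0)
```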

  14. New developments to improve SO2 cameras

    Science.gov (United States)

    Luebcke, P.; Bobrowski, N.; Hoermann, C.; Kern, C.; Klein, A.; Kuhn, J.; Vogel, L.; Platt, U.

    2012-12-01

    The SO2 camera is a remote sensing instrument that measures the two-dimensional distribution of SO2 column densities in volcanic plumes using scattered solar radiation as a light source. From these data SO2 fluxes can be derived. The high time resolution, of the order of 1 Hz, allows correlating SO2 flux measurements with other traditional volcanological measurement techniques, i.e., seismology. In recent years the application of SO2 cameras has increased; however, there is still potential to improve the instrumentation. First of all, the influence of aerosols and ash in the volcanic plume can lead to large errors in the calculated SO2 flux if not accounted for. We present two different concepts to deal with the influence of ash and aerosols. The first approach uses a co-axial DOAS system that was added to a two-filter SO2 camera. The camera uses Filter A (peak transmission centred around 315 nm) to measure the optical density of SO2 and Filter B (centred around 330 nm) to correct for the influence of ash and aerosol. The DOAS system simultaneously performs spectroscopic measurements in a small area of the camera's field of view and gives additional information to correct for these effects. Comparing the optical densities for the two filters with the SO2 column density from the DOAS allows not only a much more precise calibration, but also drawing conclusions about the influence of ash and aerosol scattering. Measurement examples from Popocatépetl, Mexico in 2011 are shown and interpreted. Another approach combines the SO2 camera measurement principle with the extremely narrow and periodic transmission of a Fabry-Pérot interferometer. The narrow transmission window allows selecting individual SO2 absorption bands (or series of bands) as a substitute for Filter A. Measurements are therefore more selective to SO2. Instead of Filter B, as in classical SO2 cameras, the correction for aerosol can be performed by shifting the transmission window of the Fabry
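The two-filter correction boils down to Beer-Lambert optical densities: τ = −ln(I/I₀) per filter, with the off-band (Filter B) density subtracted from the on-band (Filter A) density to remove the broadband aerosol/ash contribution. A minimal sketch with hypothetical intensities:

```python
import numpy as np

def optical_density(I, I0):
    """Apparent absorbance tau = -ln(I / I0) from plume (I) and
    plume-free background (I0) intensities through one filter."""
    return -np.log(I / I0)

# Hypothetical single-pixel intensities: Filter A (~315 nm, SO2 + aerosol)
# and Filter B (~330 nm, aerosol only).
tau_a = optical_density(I=np.array([80.0]), I0=np.array([100.0]))
tau_b = optical_density(I=np.array([95.0]), I0=np.array([100.0]))

tau_so2 = tau_a - tau_b   # aerosol-corrected SO2 optical density
```

Converting τ_so2 to a column density then requires a calibration (cells or, as in the first approach above, the co-axial DOAS).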

  15. Development of a Mobile Phone-Based Weight Loss Lifestyle Intervention for Filipino Americans with Type 2 Diabetes: Protocol and Early Results From the PilAm Go4Health Randomized Controlled Trial

    Science.gov (United States)

    2016-01-01

    Background Filipino Americans are the second largest Asian subgroup in the United States, and have been found to have the highest prevalence of obesity and type 2 diabetes (T2D) of all Asian subgroups and non-Hispanic whites. In addition to genetic factors, risk factors for Filipinos that contribute to this health disparity include high sedentary rates and high-fat diets. However, Filipinos are seriously underrepresented in preventive health research. Research is needed to identify effective interventions to reduce Filipino diabetes risks, subsequent comorbidities, and premature death. Objective The overall goal of this project is to assess the feasibility and potential efficacy of the Filipino Americans Go4Health Weight Loss Program (PilAm Go4Health). This program is a culturally adapted weight loss lifestyle intervention, using digital technology for Filipinos with T2D, to reduce their risk for metabolic syndrome. Methods This study was a 3-month mobile phone-based pilot randomized controlled trial (RCT) weight loss intervention with a wait-list active control, followed by a 3-month maintenance phase, for 45 overweight Filipinos with T2D. Participants were randomized to an intervention group (n=22) or active control group (n=23), and analyses of the results are underway. The primary outcome will be percent weight change of the participants, and secondary outcomes will include changes in waist circumference, fasting plasma glucose, glycated hemoglobin A1c, physical activity, fat intake, and sugar-sweetened beverage intake. Data analyses will include descriptive statistics to describe sample characteristics and a feasibility assessment based on recruitment, adherence, and retention. Chi-square, Fisher's exact tests, t-tests, and nonparametric rank tests will be used to assess characteristics of randomized groups. 
Primary analyses will use analysis of covariance and linear mixed models to compare primary and secondary outcomes at 3 months, compared by arm

  16. How to Build Your Own Document Camera for around $100

    Science.gov (United States)

    Van Orden, Stephen

    2010-01-01

    Document cameras can have great utility in second language classrooms. However, entry-level consumer document cameras start at around $350. This article describes how the author built three document cameras and offers suggestions for how teachers can successfully build their own quality document camera using a webcam for around $100.

  17. 16 CFR 1025.45 - In camera materials.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false In camera materials. 1025.45 Section 1025.45... PROCEEDINGS Hearings § 1025.45 In camera materials. (a) Definition. In camera materials are documents... excluded from the public record. (b) In camera treatment of documents and testimony. The Presiding...

  18. Design of Endoscopic Capsule With Multiple Cameras.

    Science.gov (United States)

    Gu, Yingke; Xie, Xiang; Li, Guolin; Sun, Tianjia; Wang, Dan; Yin, Zheng; Zhang, Pengfei; Wang, Zhihua

    2015-08-01

    In order to reduce the miss rate of wireless capsule endoscopy, in this paper we propose a new endoscopic capsule system with multiple cameras. A master-slave architecture, including an efficient bus architecture and a four-level clock management architecture, is applied to the Multiple Cameras Endoscopic Capsule (MCEC). To cover more area of the gastrointestinal tract wall at low power, multiple cameras with a smart image capture strategy, including movement-sensitive control and camera selection, are used in the MCEC. To reduce the data transfer bandwidth and power consumption and so prolong the MCEC's working life, a low-complexity image compressor with PSNR 40.7 dB and compression rate 86% is implemented. A chipset is designed and implemented for the MCEC, and a six-camera endoscopic capsule prototype is implemented using the chipset. With the smart image capture strategy, the coverage rate of the MCEC prototype can achieve 98% and its power consumption is only about 7.1 mW. PMID:25376042
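The PSNR figure quoted for the capsule's image compressor is the standard peak-signal-to-noise-ratio metric, 10·log₁₀(peak²/MSE) in dB. A minimal sketch of how it is computed (synthetic image and distortion; nothing here reproduces the paper's compressor):

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB between an original frame and
    its reconstruction after lossy compression."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64))
# Stand-in for compression artifacts: small additive error, clipped to range.
noisy = np.clip(img + rng.normal(0.0, 2.0, size=img.shape), 0, 255)
value = psnr(img, noisy)
```

An error of ~2 gray levels on an 8-bit image lands in the low-40s dB, the same regime as the 40.7 dB reported above.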

  19. Calibration of action cameras for photogrammetric purposes.

    Science.gov (United States)

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide-angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images up to 12 Mp and video up to 8 Mp resolution.

  20. Calibration of Action Cameras for Photogrammetric Purposes

    Directory of Open Access Journals (Sweden)

    Caterina Balletti

    2014-09-01

    Full Text Available The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide-angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images up to 12 Mp and video up to 8 Mp resolution.

  1. Modulated CMOS camera for fluorescence lifetime microscopy.

    Science.gov (United States)

    Chen, Hongtao; Holst, Gerhard; Gratton, Enrico

    2015-12-01

    Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in construction of such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high frequency modulated CMOS image sensor, QMFLIM2. Here we tested and provide operational procedures to calibrate the camera and to improve the accuracy using corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and needed for every camera setting, e.g. modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed, and hence an additional calibration is needed for each pixel depending on the pixel intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post data acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring a large frame and high speed acquisition. PMID:26500051
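In the frequency domain, each pixel's measured phase shift φ and modulation depth m at angular frequency ω yield the phasor coordinates g = m·cos φ, s = m·sin φ, and, for a single-exponential decay, two lifetime estimates: τ_φ = tan(φ)/ω and τ_m = √(1/m² − 1)/ω. A minimal numeric sketch (frequency and lifetime values are illustrative, not the calibration settings discussed above):

```python
import numpy as np

omega = 2 * np.pi * 80e6   # 80 MHz modulation frequency (example value)
tau_true = 2.5e-9          # 2.5 ns single-exponential lifetime

# Frequency-domain response of a single-exponential decay:
# phi = atan(omega * tau), m = 1 / sqrt(1 + (omega * tau)^2).
phi = np.arctan(omega * tau_true)
m = 1.0 / np.sqrt(1.0 + (omega * tau_true) ** 2)

# Phasor coordinates and the two standard lifetime estimators.
g, s = m * np.cos(phi), m * np.sin(phi)
tau_phase = np.tan(phi) / omega
tau_mod = np.sqrt(1.0 / m ** 2 - 1.0) / omega
```

For single-exponential decays the phasor lies on the universal semicircle (g − ½)² + s² = ¼; deviations from it are what make the phasor plot a useful diagnostic for multi-exponential pixels.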

  2. Modulated CMOS camera for fluorescence lifetime microscopy.

    Science.gov (United States)

    Chen, Hongtao; Holst, Gerhard; Gratton, Enrico

    2015-12-01

    Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in construction of such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high frequency modulated CMOS image sensor, QMFLIM2. Here we tested and provide operational procedures to calibrate the camera and to improve the accuracy using corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and needed for every camera setting, e.g. modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed, and hence an additional calibration is needed for each pixel depending on the pixel intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post data acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring a large frame and high speed acquisition.

  3. Designing Camera Networks by Convex Quadratic Programming

    KAUST Repository

    Ghanem, Bernard

    2015-05-04

    In this paper, we study the problem of automatic camera placement for computer graphics and computer vision applications. We extend the problem formulations of previous work by proposing a novel way to incorporate visibility constraints and camera-to-camera relationships. For example, the placement solution can be encouraged to have cameras that image the same important locations from different viewing directions, which can enable reconstruction and surveillance tasks to perform better. We show that the general camera placement problem can be formulated mathematically as a convex binary quadratic program (BQP) under linear constraints. Moreover, we propose an optimization strategy with a favorable trade-off between speed and solution quality. Our solution is almost as fast as a greedy treatment of the problem, but the quality is significantly higher, so much so that it is comparable to exact solutions that take orders of magnitude more computation time. Because it is computationally attractive, our method also allows users to explore the space of solutions for variations in input parameters. To evaluate its effectiveness, we show a range of 3D results on real-world floorplans (garage, hotel, mall, and airport).
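The "greedy treatment" the abstract compares against is the natural baseline for coverage problems: repeatedly pick the candidate camera that covers the most still-uncovered important locations. A minimal sketch with invented candidates and visibility sets (the paper's BQP optimizes placement jointly rather than one camera at a time):

```python
# Hypothetical candidate cameras and the target locations each can see.
candidates = {
    "cam_a": {1, 2, 3},
    "cam_b": {3, 4},
    "cam_c": {4, 5, 6},
    "cam_d": {1, 6},
}
targets = {1, 2, 3, 4, 5, 6}
budget = 2  # number of cameras we may place

chosen, covered = [], set()
for _ in range(budget):
    # Greedy step: maximize marginal coverage of uncovered targets.
    best = max(candidates, key=lambda c: len(candidates[c] - covered))
    chosen.append(best)
    covered |= candidates[best]
```

Greedy is fast but myopic; encoding pairwise camera-to-camera terms (e.g. rewarding different viewing directions on the same location) is exactly what pushes the problem into the binary quadratic form described above.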

  4. Calibration of action cameras for photogrammetric purposes.

    Science.gov (United States)

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetric purposes is not widespread, because until recently the images provided by the sensors, in either still or video capture mode, were not large enough to support analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. Before the sensor of an action camera can be used in any photogrammetric procedure, a careful and reliable self-calibration must be applied, which is relatively difficult because of the camera's short focal length and the wide-angle lens used to obtain the maximum possible image resolution. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video capture modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution. PMID:25237898

  5. Camera Calibration with Radial Variance Component Estimation

    Science.gov (United States)

    Mélykuti, B.; Kruck, E. J.

    2014-11-01

    Camera calibration plays an increasingly important role. Besides true digital aerial survey cameras, the photogrammetric market is dominated by a large number of non-metric digital cameras mounted on UAVs or other lightweight flying platforms. In-flight calibration of those systems is significant for considerably enhancing the geometric accuracy of survey photos. Photo measurements are expected to be more precise in the center of images than along the edges or in the corners. Using statistical methods, the accuracy of photo measurements has been analyzed as a function of the distance of points from the image center. This test provides a curve of measurement precision as a function of photo radius. A large number of camera types have been tested with well-distributed point measurements in image space. The tests demonstrate a functional connection between accuracy and radial distance and yield a method to check and enhance the geometric capability of cameras in light of these results.

  6. Managing a large database of camera fingerprints

    Science.gov (United States)

    Goljan, Miroslav; Fridrich, Jessica; Filler, Tomáš

    2010-01-01

    A sensor fingerprint is a unique noise-like pattern caused by slightly varying pixel dimensions and inhomogeneity of the silicon wafer from which the sensor is made. The fingerprint can be used to prove that an image came from a specific digital camera. The presence of a camera fingerprint in an image is usually established using a detector that evaluates the cross-correlation between the fingerprint and the image noise. The complexity of the detector is thus proportional to the number of pixels in the image. Although computing the detector statistic for a few-megapixel image takes several seconds on a single-processor PC, the processing time becomes impractically large if a sizeable database of camera fingerprints needs to be searched through. In this paper, we present a fast searching algorithm that utilizes special "fingerprint digests" and sparse data structures to address several tasks that forensic analysts will find useful when deploying camera identification from fingerprints in practice. In particular, we develop fast algorithms for finding whether a given fingerprint already resides in the database and for determining whether a given image was taken by a camera whose fingerprint is in the database.
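As a hedged sketch of the two ingredients in the abstract (not the authors' code), the correlation detector and a toy "fingerprint digest" that keeps only the strongest fingerprint components for cheap pre-screening might look like:

```python
import numpy as np

def ncc(fingerprint, noise):
    """Normalized cross-correlation detector: a high value indicates the
    image noise likely contains the camera's fingerprint."""
    f = fingerprint - fingerprint.mean()
    w = noise - noise.mean()
    return float((f * w).sum() / (np.linalg.norm(f) * np.linalg.norm(w)))

def digest(fingerprint, k):
    """Toy digest: indices and values of the k largest-magnitude fingerprint
    components, so a database can be screened before any full correlation."""
    idx = np.argsort(-np.abs(fingerprint).ravel())[:k]
    return idx, fingerprint.ravel()[idx]
```

The detector cost is linear in pixel count, which is exactly why the paper's digests matter once the database grows.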

  7. Phase camera experiment for Advanced Virgo

    Science.gov (United States)

    Agatsuma, Kazuhiro; van Beuzekom, Martin; van der Schaaf, Laura; van den Brand, Jo

    2016-07-01

    We report on a study of the phase camera, which is a frequency-selective wave-front sensor for a laser beam. This sensor is used for monitoring sidebands produced by phase modulations in a gravitational wave (GW) detector. In the operation of GW detectors, the laser modulation/demodulation method is used to measure mirror displacements and for position controls. This plays a significant role because the quality of the controls affects the noise level of the GW detector. The phase camera is able to monitor each sideband separately, which is of great benefit for these delicate controls. Also, overcoming mirror aberrations will be an essential part of Advanced Virgo (AdV), a GW detector close to Pisa. Low-frequency sidebands in particular can be greatly affected by aberrations in one of the interferometer cavities. The phase cameras allow tracking of such changes because the state of the sidebands gives information on mirror aberrations. A prototype of the phase camera has been developed and is currently being tested. The performance checks are almost complete and the installation of the optics at the AdV site has started. After installation and commissioning, the phase camera will be combined with a thermal compensation system that consists of CO2 lasers and compensation plates. In this paper, we focus on the prototype and show some limitations arising from the scanner performance.

  8. Hidden cameras everything you need to know about covert recording, undercover cameras and secret filming

    CERN Document Server

    Plomin, Joe

    2016-01-01

    Providing authoritative information on the practicalities of using hidden cameras to expose abuse or wrongdoing, this book is vital reading for anyone who may use or encounter secret filming. It gives specific advice on using phones or covert cameras and unravels the complex legal and ethical issues that need to be considered.

  9. Mobile phone camera benchmarking: combination of camera speed and image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed; for example, ISO 15781 defines several measurements of various camera system delays. However, speed or rapidity metrics of the mobile phone's camera system have not been combined with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality metrics are collected from standards and papers. Second, the speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are also identified. Third, combinations of the quality and speed metrics are validated using mobile phones on the market, with measurements made through the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are made. This work gives detailed benchmarking results for mobile phone camera systems on the market. The paper also proposes a combined benchmarking metric that includes both quality and speed parameters.

  10. Calibration method for a central catadioptric-perspective camera system.

    Science.gov (United States)

    He, Bingwei; Chen, Zhipeng; Li, Youfu

    2012-11-01

    A central catadioptric-perspective camera system is widely used nowadays. A critical problem is that current calibration methods cannot determine the extrinsic parameters between the central catadioptric camera and a perspective camera effectively. We present a novel calibration method for a central catadioptric-perspective camera system, in which the central catadioptric camera has a hyperbolic mirror. Two cameras are used to capture images of one calibration pattern at different spatial positions. A virtual camera is constructed at the origin of the central catadioptric camera and faced toward the calibration pattern. The transformation between the virtual camera and the calibration pattern could be computed first and the extrinsic parameters between the central catadioptric camera and the calibration pattern could be obtained. Three-dimensional reconstruction results of the calibration pattern show a high accuracy and validate the feasibility of our method.

  11. The development of large-aperture test system of infrared camera and visible CCD camera

    Science.gov (United States)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Infrared camera and CCD camera dual-band imaging systems are widely used in many types of equipment and applications. If such a system is tested using a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared cameras and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame grabber, and computer, which reduces both the cost and the time for installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position as the environmental temperature changes, improving both the image quality of the wide-field collimator and the test accuracy. Its performance matches that of foreign counterparts at a much lower cost, so it should find a good market.

  12. Progress in gamma-camera quality control

    International Nuclear Information System (INIS)

    The latest developments in the art of quality control of gamma cameras are emphasized in a simple historical manner. The exhibit describes methods developed by the Bureau of Radiological Health (BRH) in comparison with previously accepted techniques for routine evaluation of gamma-camera performance. Gamma cameras require periodic testing of their performance parameters to ensure that their optimum imaging capability is maintained. Quality control parameters reviewed are field uniformity, spatial distortion, intrinsic and spatial resolution, and temporal resolution. The methods developed for the measurement of these parameters are simple, requiring no additional electronic equipment or computers. The data are arranged in six panels as follows: schematic diagrams of the most important test patterns used in nuclear medicine; field uniformity; regional displacements in the transmission pattern image; spatial resolution using the BRH line-source phantom; intrinsic resolution using the BRH Test Pattern; and temporal resolution and count losses at high counting rates

  13. Camera placement in integer lattices (extended abstract)

    Science.gov (United States)

    Pocchiola, Michel; Kranakis, Evangelos

    1990-09-01

    Techniques for studying an art gallery problem (the camera placement problem) in the infinite lattice L^d of d-tuples of integers are considered. A lattice point A is visible from a camera C positioned at a vertex of L^d if A does not equal C and the line segment joining A and C crosses no other lattice vertex. By using a combination of probabilistic, combinatorial optimization and algorithmic techniques, the positions the cameras must occupy in the lattice L^d in order to maximize their visibility can be determined in polynomial time, for any given number s of cameras with s <= 5^d. This improves previous results for s <= 3^d.
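The visibility condition in the abstract has a simple arithmetic form: a lattice point is visible from a camera vertex exactly when the segment between them contains no intermediate lattice vertex, i.e. when the gcd of the absolute coordinate differences is 1. A minimal sketch of that test (an illustration of the definition, not the paper's placement algorithm):

```python
from functools import reduce
from math import gcd

def visible(camera, point):
    """True iff `point` is visible from `camera` in the integer lattice:
    the points differ, and the segment joining them crosses no other
    lattice vertex (gcd of coordinate differences equals 1)."""
    diffs = [abs(a - c) for a, c in zip(point, camera)]
    if all(d == 0 for d in diffs):
        return False  # a camera does not "see" its own vertex
    return reduce(gcd, diffs) == 1
```

For example, (2, 4) is hidden from the origin because the lattice point (1, 2) lies on the segment, while (1, 2) itself is visible.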

  14. Results of the prototype camera for FACT

    Energy Technology Data Exchange (ETDEWEB)

    Anderhub, H. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Backes, M. [Technische Universitaet Dortmund, D-44221 Dortmund (Germany); Biland, A.; Boller, A.; Braun, I. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Bretz, T. [Ecole Polytechnique Federale de Lausanne, CH-1015 Lausanne (Switzerland); Commichau, S.; Commichau, V. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Dorner, D. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); INTEGRAL Science Data Center, CH-1290 Versoix (Switzerland); Gendotti, A.; Grimm, O.; Gunten, H. von; Hildebrand, D.; Horisberger, U. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Koehne, J.-H. [Technische Universitaet Dortmund, D-44221 Dortmund (Germany); Kraehenbuehl, T., E-mail: thomas.kraehenbuehl@phys.ethz.c [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Kranich, D.; Lorenz, E.; Lustermann, W. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Mannheim, K. [Universitaet Wuerzburg, D-97074 Wuerzburg (Germany)

    2011-05-21

    The maximization of the photon detection efficiency (PDE) is a key issue in the development of cameras for Imaging Atmospheric Cherenkov Telescopes. Geiger-mode Avalanche Photodiodes (G-APD) are a promising candidate to replace the commonly used photomultiplier tubes, offering a larger PDE and easier handling. The FACT (First G-APD Cherenkov Telescope) project evaluates the feasibility of this change by building a camera based on 1440 G-APDs for an existing small telescope. As a first step towards a full camera, a prototype module using 144 G-APDs was successfully built and tested. The strong temperature dependence of G-APDs is compensated using a feedback system, which keeps the gain of the G-APDs constant to within 0.5%.

  15. PEOPLE REIDENTIFICATION IN A DISTRIBUTED CAMERA NETWORK

    Directory of Open Access Journals (Sweden)

    Icaro Oliveira de Oliveira

    2010-06-01

    Full Text Available This paper presents an approach to the object reidentification problem in a distributed camera network system. The reidentification or reacquisition problem consists essentially of matching images acquired from different cameras. The work applies to environments monitored by cameras, which is important for modern security systems: identifying a target's presence in the environment expands the capacity of security agents to act in real time and provides important parameters such as the localization of each target. We used the targets' interest points and colors as features for reidentification. Satisfactory results were obtained from real experiments on public video datasets and on synthetic images with noise.

  16. Mechanical Design of the LSST Camera

    Energy Technology Data Exchange (ETDEWEB)

    Nordby, Martin; Bowden, Gordon; Foss, Mike; Guiffre, Gary; /SLAC; Ku, John; /Unlisted; Schindler, Rafe; /SLAC

    2008-06-13

    The LSST camera is a tightly packaged, hermetically sealed system that is cantilevered into the main beam of the LSST telescope. It comprises three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 gigapixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce impact on image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.

  17. Collimated trans-axial tomographic scintillation camera

    International Nuclear Information System (INIS)

    The objects of this invention are: first, to reduce the time required to obtain statistically significant data in trans-axial tomographic radioisotope scanning using a scintillation camera; second, to provide a scintillation camera system that increases the rate of acceptance of radioactive events contributing to the positional information obtainable from a known radiation source, without sacrificing spatial resolution; and third, to reduce the scanning time without loss of image clarity. The system described comprises a scintillation camera detector, means for moving it in orbit about a cranial-caudal axis relative to a patient, and a collimator having septa defining apertures such that gamma rays perpendicular to the axis are admitted with high spatial resolution, and those parallel to the axis with low resolution. The septa may be made of strips of lead. Detailed descriptions are given. (U.K.)

  18. HIGH SPEED KERR CELL FRAMING CAMERA

    Science.gov (United States)

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 x 10/sup -8/ seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  19. Ultra-fast framing camera tube

    Science.gov (United States)

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  20. Small Orbital Stereo Tracking Camera Technology Development

    Science.gov (United States)

    Bryan, Tom; MacLeod, Todd; Gagliano, Larry

    2016-01-01

    On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness necessary to safeguard orbital assets and crew, and the debris poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to the roadmap of NASA's Office of the Chief Technologist. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population need to be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations of a secondary payload on a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies extend to monitoring extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly, and the Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate orbital debris (OD) monitoring in proximity to the International Space Station, demonstrating on-orbit (in-situ) optical tracking of various sized objects against ground RADAR tracking and small OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and on military targeting cameras, and using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  1. Analysis of Brown camera distortion model

    Science.gov (United States)

    Nowakowski, Artur; Skarbek, Władysław

    2013-10-01

    Contemporary image acquisition devices introduce optical distortion into images, resulting in pixel displacement that must be compensated for in many computer vision applications. The distortion is usually modeled by the Brown distortion model, whose parameters can be included in the camera calibration task. In this paper we describe the original model and its dependencies, and analyze the orthogonality of its decentering distortion component with regard to radius. We also report experiments with the camera calibration algorithm included in the OpenCV library; in particular, the stability of the distortion parameter estimation is evaluated.
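For reference, the Brown model combines radial and decentering (tangential) terms. A sketch of the forward mapping on normalized image coordinates, using the common coefficient naming k1..k3 and p1, p2 (an illustration of the standard model, not the paper's code):

```python
def brown_distort(x, y, k=(0.0, 0.0, 0.0), p=(0.0, 0.0)):
    """Map normalized image coordinates (x, y) to their distorted positions
    using radial coefficients k1..k3 and decentering coefficients p1, p2."""
    r2 = x * x + y * y
    radial = 1.0 + k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3
    xd = x * radial + 2.0 * p[0] * x * y + p[1] * (r2 + 2.0 * x * x)
    yd = y * radial + p[0] * (r2 + 2.0 * y * y) + 2.0 * p[1] * x * y
    return xd, yd
```

With all coefficients zero the mapping is the identity; a positive k1 pushes points radially outward, growing with r^2, which is the pincushion/barrel behavior calibration must undo.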

  2. Camera-enabled techniques for organic synthesis

    Directory of Open Access Journals (Sweden)

    Steven V. Ley

    2013-05-01

    Full Text Available A great deal of time is spent within synthetic chemistry laboratories on non-value-adding activities such as sample preparation and work-up operations, and labour intensive activities such as extended periods of continued data collection. Using digital cameras connected to computer vision algorithms, camera-enabled apparatus can perform some of these processes in an automated fashion, allowing skilled chemists to spend their time more productively. In this review we describe recent advances in this field of chemical synthesis and discuss how they will lead to advanced synthesis laboratories of the future.

  3. Camera-enabled techniques for organic synthesis

    Science.gov (United States)

    Ingham, Richard J; O’Brien, Matthew; Browne, Duncan L

    2013-01-01

    Summary A great deal of time is spent within synthetic chemistry laboratories on non-value-adding activities such as sample preparation and work-up operations, and labour intensive activities such as extended periods of continued data collection. Using digital cameras connected to computer vision algorithms, camera-enabled apparatus can perform some of these processes in an automated fashion, allowing skilled chemists to spend their time more productively. In this review we describe recent advances in this field of chemical synthesis and discuss how they will lead to advanced synthesis laboratories of the future. PMID:23766820

  4. A multidetector scintillation camera with 254 channels

    DEFF Research Database (Denmark)

    Sveinsdottir, E; Larsen, B; Rommer, P;

    1977-01-01

    A computer-based scintillation camera has been designed for both dynamic and static radionuclide studies. The detecting head has 254 independent sodium iodide crystals, each with a photomultiplier and amplifier. In dynamic measurements simultaneous events can be recorded, and 1 million total counts per second can be accommodated with less than 0.5% loss in any one channel. This corresponds to a calculated deadtime of 5 nsec. The multidetector camera is being used for 133Xe dynamic studies of regional cerebral blood flow in man and for 99mTc and 197Hg static imaging of the brain.
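The quoted figures are mutually consistent: for a non-paralyzable dead time τ, the fraction of events lost at rate n is n·τ/(1 + n·τ) ≈ n·τ, and 10⁶ counts/s × 5 ns gives about 0.5%. A quick check (the function name is ours, not from the record):

```python
def count_loss_fraction(rate_hz, deadtime_s):
    """Fraction of events lost for a non-paralyzable dead time:
    loss = n*tau / (1 + n*tau)."""
    return 1.0 - 1.0 / (1.0 + rate_hz * deadtime_s)
```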

  5. Camera Trajectory from Wide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for reliable camera trajectory estimation, for instance the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, so occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes image feature matching very difficult (or impossible), and camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware we use in practice is a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second, and the combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, that links the radius of the image point r to the

  6. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged, and there are also existing methods to evaluate camera speed. However, speed or rapidity metrics of the mobile phone's camera system have not been combined with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are also identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market, with measurements made through the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are made. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updated for the latest mobile phone versions.
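How quality and speed metrics might collapse into a single benchmarking score can be sketched as follows; the normalization and weighting here are illustrative assumptions, not the paper's actual formula:

```python
def combined_score(quality, speed, w_quality=0.5):
    """Collapse normalized quality scores (each in [0, 1], higher is better)
    and speed metrics given as delays in seconds (lower is better) into one
    number. Each delay d is mapped to a rapidity score 1 / (1 + d)."""
    q = sum(quality.values()) / len(quality)
    s = sum(1.0 / (1.0 + d) for d in speed.values()) / len(speed)
    return w_quality * q + (1.0 - w_quality) * s
```

A phone with perfect quality and zero delays scores 1.0, and any added shutter lag pulls the score down monotonically.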

  7. Towards Adaptive Virtual Camera Control In Computer Games

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, we cluster gaze and camera information to identify camera behaviour profiles and we employ machine learning to build predictive models of the virtual camera behaviour. The performance of the models on unseen data reveals accuracies above 70% for all the player behaviour types identified. The characteristics of the generated models, their limits and their use for creating adaptive automatic...

  8. Lights, Camera, Read! Arizona Reading Program Manual.

    Science.gov (United States)

    Arizona State Dept. of Library, Archives and Public Records, Phoenix.

    This document is the manual for the Arizona Reading Program (ARP) 2003 entitled "Lights, Camera, Read!" This theme spotlights books that were made into movies, and allows readers to appreciate favorite novels and stories that have progressed to the movie screen. The manual consists of eight sections. The Introduction includes welcome letters from…

  9. FPS camera sync and reset chassis

    International Nuclear Information System (INIS)

    The sync and reset chassis provides all the circuitry required to synchronize an event to be studied, a remote free-running focus projection and scanning (FPS) data-acquisition TV camera, and a video signal recording system. The functions, design, and operation of this chassis are described in detail

  10. Increased Automation in Stereo Camera Calibration Techniques

    Directory of Open Access Journals (Sweden)

    Brandi House

    2006-08-01

    Full Text Available Robotic vision has become a very popular field in recent years due to the numerous promising applications it may enhance. However, errors within the cameras and in their perception of their environment can cause applications in robotics to fail. To help correct these internal and external imperfections, stereo camera calibrations are performed. There are currently many accurate methods of camera calibration available; however, most or all of them are time consuming and labor intensive. This research seeks to automate the most labor intensive aspects of a popular calibration technique developed by Jean-Yves Bouguet. His process requires manual selection of the extreme corners of a checkerboard pattern. The modified process uses embedded LEDs in the checkerboard pattern to act as active fiducials. Images are captured of the checkerboard with the LEDs on and off in rapid succession. The difference of the two images automatically highlights the location of the four extreme corners, and these corner locations take the place of the manual selections. With this modification to the calibration routine, upwards of eighty mouse clicks are eliminated per stereo calibration. Preliminary test results indicate that accuracy is not substantially affected by the modified procedure. Improved automation to camera calibration procedures may finally penetrate the barriers to the use of calibration in practice.
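The difference-image idea above can be sketched simply: subtract the LED-off frame from the LED-on frame, threshold, and take blob centroids as the corner locations. This is a simplified stand-in for the authors' procedure (the flood-fill labelling is naive, and real use would keep only the four largest blobs):

```python
import numpy as np

def find_fiducials(img_on, img_off, threshold=50):
    """Centroids of bright blobs in the LED-on minus LED-off difference
    image, returned as (row, col) tuples."""
    diff = img_on.astype(int) - img_off.astype(int)
    mask = diff > threshold
    labels = np.zeros(mask.shape, dtype=int)
    centers, current = [], 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue  # pixel already belongs to an earlier blob
        current += 1
        stack, pixels = [seed], []
        while stack:  # 4-connected flood fill
            r, c = stack.pop()
            if not (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]):
                continue
            if not mask[r, c] or labels[r, c]:
                continue
            labels[r, c] = current
            pixels.append((r, c))
            stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
        centers.append(tuple(np.mean(pixels, axis=0)))
    return centers
```

Because the LEDs are the only pixels that change between the two captures, the difference image isolates them regardless of the checkerboard's appearance, which is what replaces the manual corner clicks.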

  11. GAMPIX: A new generation of gamma camera

    Science.gov (United States)

    Gmar, M.; Agelou, M.; Carrel, F.; Schoepff, V.

    2011-10-01

    Gamma imaging is a technique of great interest in several fields, such as homeland security or the decommissioning and dismantling of nuclear facilities, for localizing hot spots of radioactivity. In the nineties, work led by CEA LIST resulted in the development of a first-generation gamma camera called CARTOGAM, now commercialized by AREVA CANBERRA. Even if its performance can be adapted to many applications, its weight of 15 kg can be an issue. For several years, CEA LIST has been developing a new generation of gamma camera, called GAMPIX. This system is mainly based on the Medipix2 chip, hybridized to a 1 mm thick CdTe substrate. A coded mask replaces the pinhole collimator in order to increase the sensitivity of the gamma camera. Hence, we obtained a very compact device (global weight less than 1 kg without any shielding), which is easy to handle and to use. In this article, we present the main characteristics of GAMPIX and the first experimental results illustrating the performance of this new generation of gamma camera.

  12. Parametrizable cameras for 3D computational steering

    NARCIS (Netherlands)

    Mulder, J.D.; Wijk, J.J. van

    1997-01-01

    We present a method for the definition of multiple views in 3D interfaces for computational steering. The method uses the concept of a point-based parametrizable camera object. This concept enables a user to create and configure multiple views on his custom 3D interface in an intuitive graphical manner.

  13. Camera! Action! Collaborate with Digital Moviemaking

    Science.gov (United States)

    Swan, Kathleen Owings; Hofer, Mark; Levstik, Linda S.

    2007-01-01

    Broadly defined, digital moviemaking integrates a variety of media (images, sound, text, video, narration) to communicate with an audience. There is near-ubiquitous access to the necessary software (MovieMaker and iMovie are bundled free with their respective operating systems) and hardware (computers with Internet access, digital cameras, etc.).…

  14. Camera Systems Rapidly Scan Large Structures

    Science.gov (United States)

    2013-01-01

    Needing a method to quickly scan large structures like an aircraft wing, Langley Research Center developed the line scanning thermography (LST) system. LST works in tandem with a moving infrared camera to capture how a material responds to changes in temperature. Princeton Junction, New Jersey-based MISTRAS Group Inc. now licenses the technology and uses it in power stations and industrial plants.

  15. Face identification in videos from mobile cameras

    NARCIS (Netherlands)

    Mu, Meiru; Spreeuwers, Luuk; Veldhuis, Raymond

    2014-01-01

    It is still challenging to recognize faces reliably in videos from mobile cameras, although mature automatic face recognition technology for still images has been available for quite some time. Suppose we want to be alerted when suspects appear in the recording of a police Body-Cam, even a good face

  16. Digital Camera Project Fosters Communication Skills

    Science.gov (United States)

    Fisher, Ashley; Lazaros, Edward J.

    2009-01-01

    This article details the many benefits of educators' use of digital camera technology and provides an activity in which students practice taking portrait shots of classmates, manipulate the resulting images, and add language arts practice by interviewing their subjects to produce a photo-illustrated Word document. This activity gives…

  17. Case on Camera--An Audience Verdict.

    Science.gov (United States)

    Wober, J. M.

    In July 1984, British Channel 4 began televising Case on Camera, a series based on genuine arbitration of civil cases carried out by a retired judge, recorded as it happened, and edited into half hour programs. Because of the Independent Broadcasting Authority's concern for the rights to privacy, a systematic study of public reaction to the series…

  18. Development of a multispectral camera system

    Science.gov (United States)

    Sugiura, Hiroaki; Kuno, Tetsuya; Watanabe, Norihiro; Matoba, Narihiro; Hayashi, Junichiro; Miyake, Yoichi

    2000-05-01

    A highly accurate multispectral camera and its application software have been developed as a practical system to capture digital images of artworks stored in galleries and museums. Instead of recording color data in the conventional three RGB primary colors, the newly developed camera and software carry out a pixel-wise estimation of spectral reflectance, the color data specific to the object, to enable practical multispectral imaging. To realize accurate multispectral imaging, the dynamic range of the camera is set to 14 bits or more and the output to 14 bits, so that capture is possible even when the difference in light quantity between channels is large. Further, a small rotary color filter was developed in parallel to keep the camera to a practical size. We have also developed software capable of selecting the optimum combination of color filters available on the market. Using this software, n types of color filter can be selected from m types so as to minimize the Euclidean distance, or the color difference in CIELAB color space, between actual and estimated spectral reflectance for 147 types of oil paint samples.
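The filter-selection search described above can be sketched as a brute-force scan over n-of-m combinations. A hedged illustration: the simple least-squares reflectance estimator and all names here are assumptions; the actual software scores candidates against the 147 oil-paint samples and can also use CIELAB color difference.

```python
import numpy as np
from itertools import combinations

def reflectance_error(T, R):
    """Mean Euclidean error between actual reflectances R (samples x bands)
    and their least-squares estimate from simulated camera responses under
    filter set T (n_filters x bands)."""
    C = R @ T.T                        # simulated camera responses
    R_hat = C @ np.linalg.pinv(T.T)    # linear reflectance estimate
    return np.mean(np.linalg.norm(R - R_hat, axis=1))

def best_filter_set(all_filters, n, samples):
    """Score every n-of-m filter combination and return the best index tuple."""
    return min(combinations(range(len(all_filters)), n),
               key=lambda idx: reflectance_error(all_filters[list(idx)], samples))
```

For realistic m this exhaustive search is feasible because the per-combination cost is a small least-squares problem.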

  19. Teaching Camera Calibration by a Constructivist Methodology

    Science.gov (United States)

    Samper, D.; Santolaria, J.; Pastor, J. J.; Aguilar, J. J.

    2010-01-01

    This article describes the Metrovisionlab simulation software and practical sessions designed to teach the most important machine vision camera calibration aspects in courses for senior undergraduate students. By following a constructivist methodology, having received introductory theoretical classes, students use the Metrovisionlab application to…

  20. Video Analysis with a Web Camera

    Science.gov (United States)

    Wyrembeck, Edward P.

    2009-01-01

    Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as…

  1. A novel fully integrated handheld gamma camera

    Science.gov (United States)

    Massari, R.; Ucci, A.; Campisi, C.; Scopinaro, F.; Soluri, A.

    2016-10-01

    In this paper, we present an innovative, fully integrated handheld gamma camera, designed to gather in the same device the gamma-ray detector, the display, and the embedded computing system. The low power consumption allows the prototype to be battery operated. To be useful in radioguided surgery, an intraoperative gamma camera must be very easy to handle, since it must be moved to find a suitable view. Consequently, we have developed the first prototype of a fully integrated, compact and lightweight gamma camera for fast radiopharmaceutical imaging. The device can operate without cables across the sterile field, so it may be easily used in the operating theater for radioguided surgery. The prototype consists of a Silicon Photomultiplier (SiPM) array coupled with a proprietary scintillation structure based on CsI(Tl) crystals. To read the SiPM output signals, we have developed very low power readout electronics and a dedicated analog-to-digital conversion system. One of the most critical aspects we faced in designing the prototype was the low power consumption, which is mandatory for a battery-operated device. We have applied this detection device to the lymphoscintigraphy technique (sentinel lymph node mapping), comparing the results obtained with those of a commercial gamma camera (Philips SKYLight). The results confirm a rapid response of the device and a spatial resolution adequate for scintigraphic imaging. This work confirms the feasibility of a small gamma camera with an integrated display. The device is designed for radioguided surgery and small-organ imaging, but it could easily be integrated into surgical navigation systems.

  2. Measuring rainfall with low-cost cameras

    Science.gov (United States)

    Allamano, Paola; Cavagnero, Paolo; Croci, Alberto; Laio, Francesco

    2016-04-01

    In Allamano et al. (2015), we propose to retrieve quantitative measures of rainfall intensity by relying on the acquisition and analysis of images captured from professional cameras (the SmartRAIN technique in the following). SmartRAIN is based on the fundamentals of camera optics and exploits the intensity changes due to drop passages in a picture. The main steps of the method include: i) drop detection, ii) blur effect removal, iii) estimation of drop velocities, iv) drop positioning in the control volume, and v) rain rate estimation. The method has been applied to real rain events with errors of the order of ±20%. This work aims to bridge the gap between the need to acquire images via professional cameras and the possibility of exporting the technique to low-cost webcams. We apply the image processing algorithm to frames registered with low-cost cameras both in the lab (i.e., controlled rain intensity) and in field conditions. The resulting images are characterized by lower resolutions and significant distortions with respect to professional camera pictures, and are acquired with fixed aperture and a rolling shutter. All these hardware limitations exert relevant effects on the readability of the resulting images and may affect the quality of the rainfall estimate. We demonstrate that proper knowledge of the image acquisition hardware allows one to fully explain the resulting artefacts and distortions, and that, by correcting these effects before applying the image processing algorithm, quantitative rain intensity measures are obtainable with good accuracy even with low-cost modules.
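Step (v), the rain-rate estimate, reduces to summing drop volumes over the sampled area and observation interval. The sketch below is a simplification that omits the drop-velocity and control-volume corrections of the full SmartRAIN method; the function name and units are assumptions.

```python
import numpy as np

def rain_rate_mm_per_h(diameters_mm, area_m2, interval_s):
    """Rain intensity from the drops seen crossing a sampling area.

    Each spherical drop of diameter D carries a water volume (pi/6) * D^3;
    dividing the total volume by area and time gives a depth rate,
    converted here from m/s to mm/h."""
    d_m = np.asarray(diameters_mm, dtype=float) / 1000.0   # mm -> m
    volume_m3 = np.sum(np.pi / 6.0 * d_m ** 3)             # total water volume
    depth_m_per_s = volume_m3 / (area_m2 * interval_s)
    return depth_m_per_s * 1000.0 * 3600.0                 # m/s -> mm/h
```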

  3. X-ray imaging using digital cameras

    Science.gov (United States)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.

  4. NEW VERSATILE CAMERA CALIBRATION TECHNIQUE BASED ON LINEAR RECTIFICATION

    Institute of Scientific and Technical Information of China (English)

    Pan Feng; Wang Xuanyin

    2004-01-01

    A new versatile camera calibration technique for machine vision using off-the-shelf cameras is described. To address the large distortion of off-the-shelf cameras, a new camera distortion rectification technology based on line-rectification is proposed. A full-camera-distortion model is introduced and a linear algorithm is provided to obtain the solution. After the camera rectification, intrinsic and extrinsic parameters are obtained based on the relationship between the homography and the absolute conic. This technology needs neither a high-accuracy three-dimensional calibration block nor a complicated translation or rotation platform. Both simulations and experiments show that this method is effective and robust.
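The line-rectification principle, that straight world lines must image to straight lines once distortion is removed, can be sketched with a single-coefficient division model and a grid search. Both are deliberate simplifications of the paper's full-camera-distortion model and linear algorithm, and all names are illustrative.

```python
import numpy as np

def undistort(points, k1, center):
    """Division-model undistortion: x_u = c + (x_d - c) / (1 + k1 * r^2)."""
    d = points - center
    r2 = np.sum(d ** 2, axis=1, keepdims=True)
    return center + d / (1.0 + k1 * r2)

def straightness(points):
    """RMS deviation of points from their best-fit line (smallest singular
    value of the mean-centered point cloud)."""
    p = points - points.mean(axis=0)
    return np.linalg.svd(p, compute_uv=False)[-1] / np.sqrt(len(p))

def estimate_k1(lines, center, k1_grid):
    """Pick the k1 that makes the imaged lines straightest after undistortion."""
    cost = [sum(straightness(undistort(ln, k1, center)) for ln in lines)
            for k1 in k1_grid]
    return k1_grid[int(np.argmin(cost))]
```

The paper replaces this kind of search with a closed-form linear solution; the sketch only shows the geometric criterion being optimized.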

  5. Analysis of RED ONE Digital Cinema Camera and RED Workflow

    OpenAIRE

    Foroughi Mobarakeh, Taraneh

    2009-01-01

    RED Digital Cinema is a rather new company whose RED One camera has shaken the world of the film industry. RED One is a digital cinema camera with the characteristics of a 35mm film camera. With a custom-made 12-megapixel CMOS sensor it offers images with a filmic look that cannot be achieved with many other digital cinema cameras. With a new camera comes a new set of media files to work with, which brings new software applications supporting them. RED Digital ...

  6. Control of the movement of a ROV camera; Controle de posicionamento da camera de um ROV

    Energy Technology Data Exchange (ETDEWEB)

    Lima, Alexandre S. de; Dutra, Max Suell [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE); Reis, Ney Robinson S. dos [PETROBRAS, Rio de Janeiro, RJ (Brazil). Centro de Pesquisas; Santos, Auderi V. dos [Pontificia Univ. Catolica do Rio de Janeiro, RJ (Brazil)

    2004-07-01

    ROVs (Remotely Operated Vehicles) are used for the installation and maintenance of underwater exploration systems in the oil industry. Because these systems operate in distant areas, a camera for visualizing the work area is essential. Synchronizing the movement of the camera with the operation of the manipulator is a complex task for the operator. To accomplish this synchronization, this work presents an analysis of the interconnection of the two systems. They are concatenated by interconnecting the electric signals of the proportional valves of the manipulator's actuators with the signals of the proportional valves of the camera's actuators. With this interconnection, the camera approximately tracks the movement of the manipulator, keeping the object of interest within the operator's field of vision. (author)

  7. National Guidelines for Digital Camera Systems Certification

    Science.gov (United States)

    Yaron, Yaron; Keinan, Eran; Benhamu, Moshe; Regev, Ronen; Zalmanzon, Garry

    2016-06-01

    Digital camera systems are a key component in the production of reliable, geometrically accurate, high-resolution geospatial products. These systems have replaced film imaging in photogrammetric data capturing. Today, we see a proliferation of imaging sensors collecting photographs in different ground resolutions, spectral bands, swath sizes, radiometric characteristics, accuracies and carried on different mobile platforms. In addition, these imaging sensors are combined with navigational tools (such as GPS and IMU), active sensors such as laser scanning and powerful processing tools to obtain high quality geospatial products. The quality (accuracy, completeness, consistency, etc.) of these geospatial products is based on the use of calibrated, high-quality digital camera systems. The new survey regulations of the state of Israel specify the quality requirements for each geospatial product including: maps at different scales and for different purposes, elevation models, orthophotographs, three-dimensional models at different levels of details (LOD) and more. In addition, the regulations require that digital camera systems used for mapping purposes should be certified using a rigorous mapping systems certification and validation process which is specified in the Director General Instructions. The Director General Instructions for digital camera systems certification specify a two-step process as follows: 1. Theoretical analysis of system components that includes: study of the accuracy of each component and an integrative error propagation evaluation, examination of the radiometric and spectral response curves for the imaging sensors, the calibration requirements, and the working procedures. 2. Empirical study of the digital mapping system that examines a typical project (product scale, flight height, number and configuration of ground control points and process). 
The study examines all aspects of the final product, including its accuracy and the product pixel size.

  8. Method for out-of-focus camera calibration.

    Science.gov (United States)

    Bell, Tyler; Xu, Jing; Zhang, Song

    2016-03-20

    State-of-the-art camera calibration methods assume that the camera is at least nearly in focus and thus fail if the camera is substantially defocused. This paper presents a method which enables the accurate calibration of an out-of-focus camera. Specifically, the proposed method uses a digital display (e.g., liquid crystal display monitor) to generate fringe patterns that encode feature points into the carrier phase; these feature points can be accurately recovered, even if the fringe patterns are substantially blurred (i.e., the camera is substantially defocused). Experiments demonstrated that the proposed method can accurately calibrate a camera regardless of the amount of defocusing: the focal length difference is approximately 0.2% when the camera is focused compared to when the camera is substantially defocused.
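The key property, that the carrier phase survives defocus, can be demonstrated in one dimension: a symmetric blur attenuates the fringe amplitude but not its phase. A simplified sketch, not the paper's implementation; names are illustrative.

```python
import numpy as np

def recover_phase(frames):
    """4-step phase shifting with I_k = A + B*cos(phi + k*pi/2):
    then I3 - I1 = 2B*sin(phi) and I0 - I2 = 2B*cos(phi)."""
    i0, i1, i2, i3 = frames
    return np.arctan2(i3 - i1, i0 - i2)

def box_blur(signal, width):
    """Symmetric moving-average blur standing in for camera defocus."""
    return np.convolve(signal, np.ones(width) / width, mode='same')
```

Because blur is linear and the carrier is sinusoidal, each blurred frame is A' + B'*cos(phi + k*pi/2) with the same phi, so the recovered phase, and hence the encoded feature points, survives severe defocus.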

  9. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    Science.gov (United States)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth estimated from the light-field image and the metric object distance. These two methods are compared to a well-known curve fitting approach. Both model-based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, which enhances the search for stereo correspondences.
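The Kalman-like update of a depth pixel amounts to variance-weighted fusion of two estimates. A minimal sketch with assumed names:

```python
def fuse_depth(d1, var1, d2, var2):
    """Fuse two depth estimates of one pixel, Kalman-style: the gain
    weights the new observation by the relative confidence (inverse
    variance) of the two estimates."""
    gain = var1 / (var1 + var2)
    d = d1 + gain * (d2 - d1)
    var = (1.0 - gain) * var1   # fused variance never exceeds either input
    return d, var
```

Repeating this update over the estimates from all micro-images yields the probabilistic depth map described above.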

  10. Calibrating a depth camera but ignoring it for SLAM

    OpenAIRE

    Castro, Daniel Herrera

    2014-01-01

    Recent improvements in resolution, accuracy, and cost have made depth cameras a very popular alternative for 3D reconstruction and navigation. Thus, accurate depth camera calibration is a very relevant aspect of many 3D pipelines. We explore the limits of a practical depth camera calibration algorithm: how to accurately calibrate a noisy depth camera without a precise calibration object and without using brightness or depth discontinuities. We present an algorithm that uses an external ...

  11. Calibration of omnidirectional cameras in practice: A comparison of methods

    OpenAIRE

    Puig, Luis; Bermúdez, Jesús; Sturm, Peter; Guerrero, Josechu

    2012-01-01

    Omnidirectional cameras are becoming increasingly popular in computer vision and robotics. Camera calibration is a step before performing any task involving metric scene measurement, required in nearly all robotics tasks. In recent years many different methods to calibrate central omnidirectional cameras have been developed, based on different camera models and often limited to a specific mirror shape. In this paper we review the existing methods designed to calibrat...

  12. Dynamic Vision Sensor Camera Based Bare Hand Gesture Recognition

    OpenAIRE

    kashmera ashish khedkkar safaya; Rekha Lathi

    2012-01-01

    This paper proposes a method to recognize bare hand gestures using a dynamic vision sensor (DVS) camera. A DVS camera, unlike a conventional camera, responds asynchronously only to pixels with temporal changes in intensity. This paper attempts to recognize three different hand gestures, rock, paper, and scissors, and uses those gestures to design a mouse-free interface. Keywords: Dynamic vision sensor camera, Hand gesture recognition

  13. Dynamic Vision Sensor Camera Based Bare Hand Gesture Recognition

    Directory of Open Access Journals (Sweden)

    kashmera ashish khedkkar safaya

    2012-05-01

    Full Text Available This paper proposes a method to recognize bare hand gestures using a dynamic vision sensor (DVS) camera. A DVS camera, unlike a conventional camera, responds asynchronously only to pixels with temporal changes in intensity. This paper attempts to recognize three different hand gestures, rock, paper, and scissors, and uses those gestures to design a mouse-free interface. Keywords: Dynamic vision sensor camera, Hand gesture recognition

  14. Situational Awareness from a Low-Cost Camera System

    Science.gov (United States)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. The newly developed system places its cameras along a combined power and data wire, forming a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased coverage without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.
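The cooperative localization step, reporting an event in Cartesian coordinates from observations correlated across neighboring cameras, can be illustrated as a bearing-ray intersection. The planar (2-D) simplification and all names are assumptions, not the system's actual algorithm.

```python
import numpy as np

def locate_event(p1, theta1, p2, theta2):
    """Intersect two bearing rays from cameras at p1 and p2 (angles in
    radians, shared ground frame) to recover the event position."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Solve p1 + t1*d1 = p2 + t2*d2 for the ray parameters t1, t2.
    A = np.column_stack([d1, -d2])
    t1, _ = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t1 * d1
```

Only the bearings and camera identities need to travel over the low-bandwidth network; the host performs the intersection.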

  15. ASSESSING THE PHOTOGRAMMETRIC POTENTIAL OF CAMERAS IN PORTABLE DEVICES

    OpenAIRE

    Smith, M J; Kokkas, N.

    2012-01-01

    In recent years, there has been an increasing number of portable devices, tablets, and Smartphones employing high-resolution digital cameras to satisfy consumer demand. In most cases, these cameras are designed primarily for capturing visually pleasing images, and the potential of using Smartphone and tablet cameras for metric applications remains uncertain. The compact nature of the host devices leads to very small cameras and therefore smaller geometric characteristics. This also makes th...

  16. Declarative camera control for automatic cinematography

    Energy Technology Data Exchange (ETDEWEB)

    Christianson, D.B.; Anderson, S.E.; Li-wei He [Univ. of Washington, Seattle, WA (United States)] [and others]

    1996-12-31

    Animations generated by interactive 3D computer graphics applications are typically portrayed either from a particular character's point of view or from a small set of strategically-placed viewpoints. By ignoring camera placement, such applications fail to realize important storytelling capabilities that have been explored by cinematographers for many years. In this paper, we describe several of the principles of cinematography and show how they can be formalized into a declarative language, called the Declarative Camera Control Language (DCCL). We describe the application of DCCL within the context of a simple interactive video game and argue that DCCL represents cinematic knowledge at the same level of abstraction as expert directors by encoding 16 idioms from a film textbook. These idioms produce compelling animations, as demonstrated on the accompanying videotape.

  17. SLAM using camera and IMU sensors.

    Energy Technology Data Exchange (ETDEWEB)

    Rothganger, Fredrick H.; Muguira, Maritza M.

    2007-01-01

    Visual simultaneous localization and mapping (VSLAM) is the problem of using video input to reconstruct the 3D world and the path of the camera in an 'on-line' manner. Since the data is processed in real time, one does not have access to all of the data at once. (Contrast this with structure from motion (SFM), which is usually formulated as an 'off-line' process on all the data seen, and is not time dependent.) A VSLAM solution is useful for mobile robot navigation or as an assistant for humans exploring an unknown environment. This report documents the design and implementation of a VSLAM system that consists of a small inertial measurement unit (IMU) and camera. The approach is based on a modified Extended Kalman Filter. This research was performed under a Laboratory Directed Research and Development (LDRD) effort.
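The modified Extended Kalman Filter at the core of the system follows the generic predict/update cycle sketched below. This is not the report's specific IMU-plus-camera state parameterization; the function names and arguments are illustrative.

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Propagate state x and covariance P through the motion model f,
    with Jacobian F and process noise Q."""
    return f(x), F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """Correct the prediction with measurement z, measurement model h,
    Jacobian H, and measurement noise R."""
    y = z - h(x)                            # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

In a VSLAM setting, the IMU drives the prediction step while camera feature observations drive the update step.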

  18. Blind identification of cellular phone cameras

    Science.gov (United States)

    Çeliktutan, Oya; Avcibas, Ismail; Sankur, Bülent

    2007-02-01

    In this paper, we focus on the blind source cell-phone identification problem. It is known that various artifacts in the image processing pipeline, such as pixel defects or unevenness of response in the CCD sensor, dark current noise, and the proprietary interpolation algorithms involved in the color filter array (CFA), leave telltale footprints. These artifacts, although often imperceptible, are statistically stable and can be considered a signature of the camera type or even of the individual device. For this purpose, we explore a set of forensic features, such as binary similarity measures, image quality measures, and higher-order wavelet statistics, in conjunction with an SVM classifier to identify the originating cell-phone type. We provide identification results for cell-phone cameras from 9 different brands. In addition to our initial results, we applied a set of geometrical operations to the original images in order to investigate how robust our proposed method is under these manipulations.
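The premise that pipeline artifacts form a statistically stable signature can be illustrated with a noise-residual correlation. This generic fixed-pattern-noise sketch is not the paper's feature set (binary similarity, image quality, and wavelet statistics fed to an SVM); all names are assumptions.

```python
import numpy as np

def noise_residual(img, k=3):
    """High-frequency residual: image minus a k x k local mean.  Averaging
    residuals over many images of one device approximates its fixed-pattern
    noise 'fingerprint'."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    smooth = np.zeros_like(img)
    for dr in range(k):
        for dc in range(k):
            smooth += padded[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    return img - smooth / (k * k)

def match_score(residual, fingerprint):
    """Normalized correlation between a residual and a device fingerprint."""
    a = residual - residual.mean()
    b = fingerprint - fingerprint.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

An image from the same device correlates noticeably better with that device's fingerprint than with another device's, even though the pattern is invisible in the image itself.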

  19. Camera Augmented Mobile C-arm

    Science.gov (United States)

    Wang, Lejing; Weidert, Simon; Traub, Joerg; Heining, Sandro Michael; Riquarts, Christian; Euler, Ekkehard; Navab, Nassir

    The Camera Augmented Mobile C-arm (CamC) system that extends a regular mobile C-arm by a video camera provides an X-ray and video image overlay. Thanks to the mirror construction and one time calibration of the device, the acquired X-ray images are co-registered with the video images without any calibration or registration during the intervention. It is very important to quantify and qualify the system before its introduction into the OR. In this communication, we extended the previously performed overlay accuracy analysis of the CamC system by another clinically important parameter, the applied radiation dose for the patient. Since the mirror of the CamC system will absorb and scatter radiation, we introduce a method for estimating the correct applied dose by using an independent dose measurement device. The results show that the mirror absorbs and scatters 39% of X-ray radiation.

  20. Cervical SPECT Camera for Parathyroid Imaging

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2012-08-31

    Primary hyperparathyroidism, characterized by one or more enlarged parathyroid glands, has become one of the most common endocrine diseases in the world, affecting about 1 per 1000 people in the United States. Standard treatment is a highly invasive exploratory neck surgery called parathyroidectomy. The surgery has a notable mortality rate because of the close proximity to vital structures. The move to minimally invasive parathyroidectomy is hampered by the lack of high-resolution pre-surgical imaging techniques that can accurately localize the parathyroid with respect to surrounding structures. We propose to develop a dedicated ultra-high-resolution (~1 mm), high-sensitivity (10x that of a conventional camera) cervical scintigraphic imaging device. It will be based on a multiple-pinhole SPECT system comprising a novel solid-state CZT detector that offers the required performance. The overall system will be configured to fit around the neck and comfortably image a patient.

  1. First polarised light with the NIKA camera

    CERN Document Server

    Ritacco, A; Adane, A; Ade, P; André, P; Beelen, A; Belier, B; Benoît, A; Bideaud, A; Billot, N; Bourrion, O; Calvo, M; Catalano, A; Coiffard, G; Comis, B; D'Addabbo, A; Désert, F -X; Doyle, S; Goupy, J; Kramer, C; Leclercq, S; Macías-Pérez, J F; Martino, J; Mauskopf, P; Maury, A; Mayet, F; Monfardini, A; Pajot, F; Pascale, E; Perotto, L; Pisano, G; Ponthieu, N; Rebolo-Iglesias, M; Réveret, V; Rodriguez, L; Savini, G; Schuster, K; Sievers, A; Thum, C; Triqueneaux, S; Tucker, C; Zylka, R

    2015-01-01

    NIKA is a dual-band camera operating with 315 frequency-multiplexed LEKIDs cooled to 100 mK. NIKA is designed to observe the sky in intensity and polarisation at 150 and 260 GHz from the IRAM 30-m telescope. It is a test bench for the final NIKA2 camera. The incoming linear polarisation is modulated at four times the mechanical rotation frequency by a warm rotating multi-layer Half Wave Plate (HWP). Then, the signal is analysed by a wire grid and finally absorbed by the LEKIDs. The small time constant (<1 ms) of the LEKID detectors combined with the modulation of the HWP enables the quasi-simultaneous measurement of the three Stokes parameters I, Q, U, representing linear polarisation. In this paper we present results of recent observational campaigns demonstrating the good performance of NIKA in detecting polarisation at mm wavelengths.

  2. Camera Raw Explained (1)

    Institute of Scientific and Technical Information of China (English)

    张恣宽

    2010-01-01

    Camera Raw, developed by Adobe, is a RAW-format conversion plug-in for Photoshop. Although major camera manufacturers such as Nikon and Canon each offer their own well-performing RAW conversion software, Adobe has leveraged its strength in Photoshop development to integrate RAW conversion into Photoshop, making the advantages of RAW conversion even more prominent and its functionality very powerful. This is especially true of Camera Raw 5 in Photoshop CS4.

  3. AUTOMATIC THEFT SECURITY SYSTEM (SMART SURVEILLANCE CAMERA

    Directory of Open Access Journals (Sweden)

    Veena G.S

    2013-12-01

    Full Text Available The proposed work aims to create a smart application camera, with the intention of eliminating the need for a human presence to detect any unwanted sinister activities, such as theft in this case. Spread among the campus are certain valuable biometric identification systems at arbitrary locations. The application monitors these systems (hereafter referred to as "object") using our smart camera system based on an OpenCV platform. By using OpenCV Haar training, employing the Viola-Jones algorithm implementation in OpenCV, we teach the machine to identify the object under varying environmental conditions. An added feature of face recognition is based on Principal Component Analysis (PCA) to generate eigenfaces, and the test images are verified against the eigenfaces using a distance-based algorithm, such as the Euclidean or Mahalanobis distance. If the object is misplaced, or an unauthorized user is in the extreme vicinity of the object, an alarm signal is raised.
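The eigenface verification step, PCA projection followed by a Euclidean nearest match, can be sketched in NumPy. This is a generic illustration, not the OpenCV-based implementation the paper describes; function names are assumptions.

```python
import numpy as np

def eigenfaces(faces, n_components):
    """PCA over flattened training faces: returns the mean face and the
    top principal components (the 'eigenfaces')."""
    X = np.asarray(faces, dtype=float).reshape(len(faces), -1)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def nearest_face(probe, gallery, mean, basis):
    """Project into eigenface space and return the index of the gallery
    face with the smallest Euclidean distance to the probe."""
    def project(v):
        return basis @ (np.asarray(v, dtype=float).reshape(-1) - mean)
    p = project(probe)
    return int(np.argmin([np.linalg.norm(p - project(g)) for g in gallery]))
```

A Mahalanobis-style variant would additionally whiten the projected components by their variances before taking the distance.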

  4. Advanced EVA Suit Camera System Development Project

    Science.gov (United States)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera currently being tested so that I could make improvements to the design. Because saving space, power, and weight on the EVA suit is highly beneficial, my job was to use Altium Designer to begin designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is one project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the camera placement options may be tested along with other future suit testing. Multiple teams work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary, and these comparisons will be used as further progress is made on the overall suit design. This prototype will not be finished in time for the scheduled Z2 suit testing, so my time was

  5. Slit-Drum Camera For Projectile Studies

    Science.gov (United States)

    Liangyi, Chen; Shaoxiang, Zhou; Guanhua, Cha; Yuxi, Hu

    1983-03-01

    The model XF-70 slit-drum camera has been developed to record projectiles in flight for observation and data acquisition. It has two operating modes: (1) synchro-ballistic photography, and (2) streak recording. The film is located on the inner surface of a rotating drum, which makes it travel. A folding mirror is arranged to reflect the light beam 90 degrees onto the film. The assembly of folding mirror and slit aperture can be rotated together about the optical axis of the objective, so that the camera can record a projectile launched at any angle, in either synchro-ballistic photography or streak recording, by pre-rotating the folding mirror assembly through an appropriate angle. A mechanical-electric shutter, which prevents the film from being re-exposed, is close to the slit aperture. The loading mechanism is designed for use in daylight. LED fiducial marks and timing marks are printed at the edges of the frame for accurate measurements.

  6. Using a portable holographic camera in cosmetology

    Science.gov (United States)

    Bakanas, R.; Gudaitis, G. A.; Zacharovas, S. J.; Ratcliffe, D. B.; Hirsch, S.; Frey, S.; Thelen, A.; Ladrière, N.; Hering, P.

    2006-07-01

    The HSF-MINI portable holographic camera is used to record holograms of the human face. The recorded holograms are analyzed using a unique three-dimensional measurement system that provides topometric data of the face with resolution less than or equal to 0.5 mm. The main advantages of this method over other, more traditional methods (such as laser triangulation and phase-measurement triangulation) are discussed.

  7. Delay in camera-to-display systems

    OpenAIRE

    2011-01-01

    Today we see an increasing number of time-dependent visual computer systems, ranging from interactive video installations, via high-definition teleconferencing, to the high-performance computer vision disciplines found, for example, in industry and robotics. Common to all of these is the requirement for low and predictable delays from the system itself and its components. In this thesis, we look into the delay of camera-to-display computer systems to understand the properties of their delay com...

  8. Fundus camera systems: a comparative analysis

    OpenAIRE

    DeHoog, Edward; Schwiegerling, James

    2009-01-01

    Retinal photography requires the use of a complex optical system, called a fundus camera, capable of illuminating and imaging the retina simultaneously. The patent literature shows two design forms but does not provide the specifics necessary for a thorough analysis of the designs to be performed. We have constructed our own designs based on the patent literature in optical design software and compared them for illumination efficiency, image quality, ability to accommodate for patient refract...

  9. Rank-based camera spectral sensitivity estimation.

    Science.gov (United States)

    Finlayson, Graham; Darrodi, Maryam Mohammadzadeh; Mackiewicz, Michal

    2016-04-01

    In order to accurately predict a digital camera response to spectral stimuli, the spectral sensitivity functions of its sensor need to be known. These functions can be determined by direct measurement in the lab - a difficult and lengthy procedure - or through simple statistical inference. Statistical inference methods are based on the observation that when a camera responds linearly to spectral stimuli, the device spectral sensitivities are linearly related to the camera rgb response values, and so can be found through regression. However, for rendered images, such as the JPEG images taken by a mobile phone, this assumption of linearity is violated. Even small departures from linearity can negatively impact the accuracy of the recovered spectral sensitivities when a regression method is used. In our work, we develop a novel camera spectral sensitivity estimation technique that can recover the linear device spectral sensitivities from linear images and the effective linear sensitivities from rendered images. According to our method, the rank order of a pair of responses imposes a constraint on the shape of the underlying spectral sensitivity curve (of the sensor). Technically, each rank-pair splits the space where the underlying sensor might lie into two parts (a feasible region and an infeasible region). By intersecting the feasible regions from all the rank-pairs, we can find a feasible region of sensor space. Experiments demonstrate that using rank orders delivers estimation accuracy equal to the prior art. However, the rank-based method delivers a step change in estimation performance when the data is not linear and, for the first time, allows for the estimation of the effective sensitivities of devices that may not even have "raw mode." Experiments validate our method. PMID:27140768
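    The half-space argument in this abstract translates directly into code: for a linear camera, a response ordering r_a > r_b forces (c_a - c_b) . s > 0 for the unknown sensor s, and the feasible region is the intersection of all such half-spaces. The NumPy sketch below uses a made-up Gaussian "sensor" and random stimuli (not the authors' data) and simply tests membership in the feasible region:

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands = 31
grid = np.arange(n_bands)
true_sensor = np.exp(-0.5 * ((grid - 8) / 4.0) ** 2)   # made-up smooth sensor curve

stimuli = rng.random((200, n_bands))                   # spectra of the stimuli
responses = stimuli @ true_sensor                      # linear camera model

# responses[a] > responses[b]  implies  (stimuli[a] - stimuli[b]) @ s > 0:
# each ranked pair contributes one half-space constraint on the sensor s.
order = np.argsort(responses)[::-1]                    # largest response first
A = stimuli[order[:-1]] - stimuli[order[1:]]           # one row per consecutive pair

def in_feasible_region(s):
    """True iff candidate sensor s satisfies every rank-pair half-space."""
    return bool(np.all(A @ s > 0))
```

    A full estimator would then pick a representative sensor inside the cone (e.g. by linear programming); here only the membership test is shown.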

  11. Worldview and route planning using live public cameras

    Science.gov (United States)

    Kaseb, Ahmed S.; Chen, Wenyi; Gingade, Ganesh; Lu, Yung-Hsiang

    2015-03-01

    Planning a trip requires considering many unpredictable factors along the route, such as traffic, weather, and accidents. People are interested in viewing the places they plan to visit and the routes they plan to take. This paper presents a system with an Android mobile application that allows users to: (i) watch the live feeds (videos or snapshots) from more than 65,000 geotagged public cameras around the world, selecting cameras on an interactive world map; (ii) search for and watch the live feeds from the cameras along the route between a starting point and a destination. The system consists of a server, which maintains a database with the cameras' information, and a mobile application that shows the camera map and communicates with the cameras. To evaluate the system, we compare it with existing systems in terms of the total number of cameras, the cameras' coverage, and the number of cameras on various routes. We also discuss the response time of loading the camera map, finding the cameras on a route, and communicating with the cameras.
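    Finding the cameras along a route reduces to a proximity query against the route's geometry. A simple hypothetical version, using great-circle distance to the route's vertices, might look like this; the dictionary field names and the 2 km radius are assumptions, not the paper's actual schema:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def cameras_on_route(cameras, route, radius_km=2.0):
    """Return cameras lying within radius_km of any vertex of the route polyline."""
    return [cam for cam in cameras
            if any(haversine_km(cam["lat"], cam["lon"], lat, lon) <= radius_km
                   for lat, lon in route)]
```

    For long route segments one would also test the distance to each segment, not only its endpoints; a spatial index would replace the linear scan at 65,000-camera scale.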

  12. CCD characterization for a range of color cameras

    NARCIS (Netherlands)

    Withagen, P.J.; Groen, F.C.A.; Schutte, K.

    2005-01-01

    CCD cameras are widely used for remote sensing and image processing applications. However, most cameras are produced to create nice images, not to do accurate measurements. Post processing operations such as gamma adjustment and automatic gain control are incorporated in the camera. When a (CCD) cam

  13. Camera Network Coverage Improving by Particle Swarm Optimization

    NARCIS (Netherlands)

    Xu, Y.-C.; Lei, B.; Hendriks, E.A.

    2011-01-01

    This paper studies how to improve the field of view (FOV) coverage of a camera network. We focus on a special but practical scenario where the cameras are randomly scattered in a wide area and each camera may adjust its orientation but cannot move in any direction. We propose a particle swarm optimi
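    The setup this abstract describes, fixed camera positions with adjustable orientations, maps naturally onto a particle whose coordinates are the pan angles of all cameras. The following is a generic global-best PSO sketch over a toy coverage objective; the random layout, 40-unit range and 60-degree FOV are invented parameters, and this is not the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

cam_pos = rng.random((5, 2)) * 100     # fixed, randomly scattered camera positions
targets = rng.random((200, 2)) * 100   # sample points we would like covered
RANGE, HALF_FOV = 40.0, np.radians(30)

def coverage(pans):
    """Fraction of target points seen by at least one camera with the given pan angles."""
    d = targets[None, :, :] - cam_pos[:, None, :]            # (cams, targets, 2)
    dist = np.linalg.norm(d, axis=2)
    bearing = np.arctan2(d[..., 1], d[..., 0])
    off = np.abs((bearing - pans[:, None] + np.pi) % (2 * np.pi) - np.pi)
    return ((dist <= RANGE) & (off <= HALF_FOV)).any(axis=0).mean()

# Global-best PSO over the vector of pan angles (positions stay fixed).
n_particles, n_iter, w, c1, c2 = 30, 60, 0.7, 1.5, 1.5
x = rng.uniform(-np.pi, np.pi, (n_particles, len(cam_pos)))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([coverage(p) for p in x])
init_best = pbest_f.max()
g = pbest[np.argmax(pbest_f)].copy()
for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = x + v
    f = np.array([coverage(p) for p in x])
    better = f > pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    g = pbest[np.argmax(pbest_f)].copy()
```

    The global best is monotone by construction, so the optimized pan vector `g` never covers fewer targets than the best random initialization.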

  14. Speed cameras : how they work and what effect they have.

    NARCIS (Netherlands)

    2011-01-01

    Much research has been carried out into the effects of speed cameras, and the research shows consistently positive results. International review studies report that speed cameras produce a reduction of approximately 20% in personal injury crashes on road sections where cameras are used. In the Nethe

  15. 16 CFR 3.45 - In camera orders.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false In camera orders. 3.45 Section 3.45... PRACTICE FOR ADJUDICATIVE PROCEEDINGS Hearings § 3.45 In camera orders. (a) Definition. Except as hereinafter provided, material made subject to an in camera order will be kept confidential and not placed...

  16. 39 CFR 3001.31a - In camera orders.

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false In camera orders. 3001.31a Section 3001.31a Postal... Applicability § 3001.31a In camera orders. (a) Definition. Except as hereinafter provided, documents and testimony made subject to in camera orders are not made a part of the public record, but are...

  17. 15 CFR 743.3 - Thermal imaging camera reporting.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Thermal imaging camera reporting. 743... REPORTING § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging cameras must be reported to BIS as provided in this section. (b) Transactions to be reported. Exports...

  18. 21 CFR 892.1100 - Scintillation (gamma) camera.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Scintillation (gamma) camera. 892.1100 Section 892...) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1100 Scintillation (gamma) camera. (a) Identification. A scintillation (gamma) camera is a device intended to image the distribution of radionuclides...

  19. 21 CFR 878.4160 - Surgical camera and accessories.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Surgical camera and accessories. 878.4160 Section... (CONTINUED) MEDICAL DEVICES GENERAL AND PLASTIC SURGERY DEVICES Surgical Devices § 878.4160 Surgical camera and accessories. (a) Identification. A surgical camera and accessories is a device intended to be...

  20. SPECT detectors: the Anger Camera and beyond.

    Science.gov (United States)

    Peterson, Todd E; Furenlid, Lars R

    2011-09-01

    The development of radiation detectors capable of delivering spatial information about gamma-ray interactions was one of the key enabling technologies for nuclear medicine imaging and, eventually, single-photon emission computed tomography (SPECT). The continuous sodium iodide scintillator crystal coupled to an array of photomultiplier tubes, almost universally referred to as the Anger Camera after its inventor, has long been the dominant SPECT detector system. Nevertheless, many alternative materials and configurations have been investigated over the years. Technological advances as well as the emerging importance of specialized applications, such as cardiac and preclinical imaging, have spurred innovation such that alternatives to the Anger Camera are now part of commercial imaging systems. Increased computing power has made it practical to apply advanced signal processing and estimation schemes to make better use of the information contained in the detector signals. In this review we discuss the key performance properties of SPECT detectors and survey developments in both scintillator and semiconductor detectors and their readouts with an eye toward some of the practical issues at least in part responsible for the continuing prevalence of the Anger Camera in the clinic. PMID:21828904

  1. Refocusing distance of a standard plenoptic camera.

    Science.gov (United States)

    Hahne, Christopher; Aggoun, Amar; Velisavljevic, Vladan; Fiebig, Susanne; Pesch, Matthias

    2016-09-19

    Recent developments in computational photography enabled variation of the optical focus of a plenoptic camera after image exposure, also known as refocusing. Existing ray models in the field simplify the camera's complexity for the purpose of image and depth map enhancement, but fail to satisfyingly predict the distance to which a photograph is refocused. By treating a pair of light rays as a system of linear functions, it will be shown in this paper that its solution yields an intersection indicating the distance to a refocused object plane. Experimental work is conducted with different lenses and focus settings while comparing distance estimates with a stack of refocused photographs for which a blur metric has been devised. Quantitative assessments over a 24 m distance range suggest that predictions deviate by less than 0.35 % in comparison to an optical design software. The proposed refocusing estimator assists in predicting object distances just as in the prototyping stage of plenoptic cameras and will be an essential feature in applications demanding high precision in synthetic focus or where depth map recovery is done by analyzing a stack of refocused photographs. PMID:27661891
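    The paper's central observation, that a pair of light rays written as linear functions intersects at the refocused object plane, can be reproduced in a few lines. This is a generic 2-D ray intersection under an assumed slope/intercept parametrisation, not the authors' full plenoptic ray model:

```python
import numpy as np

def refocus_depth(m1, c1, m2, c2):
    """Intersect rays y = m*z + c; the crossing depth z* marks the refocused plane."""
    if np.isclose(m1, m2):
        raise ValueError("parallel rays do not intersect")
    z = (c2 - c1) / (m1 - m2)        # solve m1*z + c1 = m2*z + c2
    return z, m1 * z + c1
```

    In the paper's setting the slopes and intercepts would be derived from the plenoptic camera geometry and the chosen synthetic-focus setting.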

  2. Terrain mapping camera for Chandrayaan-1

    Indian Academy of Sciences (India)

    A S Kiran Kumar; A Roy Chowdhury

    2005-12-01

    The Terrain Mapping Camera (TMC) on India's first satellite for lunar exploration, Chandrayaan-1, is for generating high-resolution 3-dimensional maps of the Moon. With this instrument, a complete topographic map of the Moon with 5 m spatial resolution and 10-bit quantization will be available for scientific studies. The TMC will image within the panchromatic spectral band of 0.4 to 0.9 μm with a stereo view in the fore, nadir and aft directions of the spacecraft movement, and will have a B/H ratio of 1. The swath coverage will be 20 km. The camera is configured for imaging in the push-broom mode with three linear detectors in the image plane. The camera will have four gain settings to cover the varying illumination conditions of the Moon. Additionally, a provision has been made for imaging with reduced resolution, to improve the Signal-to-Noise Ratio (SNR) in the polar regions, which have poor illumination conditions throughout. An SNR of better than 100 is expected in the ±60° latitude region for mature mare soil, which is one of the darkest regions on the lunar surface. This paper presents a brief description of the TMC instrument.

  3. Infrared stereo camera for human machine interface

    Science.gov (United States)

    Edmondson, Richard; Vaden, Justin; Chenault, David

    2012-06-01

    Improved situational awareness results not only from improved performance of imaging hardware, but also when the operator and human factors are considered. Situational awareness for IR imaging systems frequently depends on the contrast available. A significant improvement in effective contrast for the operator can result when depth perception is added to the display of IR scenes. Depth perception through flat-panel 3D displays is now possible due to the number of 3D displays entering the consumer market. Such displays require appropriate and human-friendly stereo IR video input in order to be effective in the dynamic military environment. We report on a stereo IR camera that has been developed for integration onto an unmanned ground vehicle (UGV). The camera has an auto-convergence capability that significantly reduces ill effects due to image doubling, minimizes focus-convergence mismatch, and eliminates the need for the operator to manually adjust camera properties. Discussion of the size, weight, and power requirements, as well as integration onto the robot platform, will be given, along with a description of stand-alone operation.

  5. Improvement of passive THz camera images

    Science.gov (United States)

    Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw

    2012-10-01

    Terahertz technology is one of the emerging technologies that has the potential to change our lives. There are many attractive applications in fields like security, astronomy, biology and medicine. Until recent years, terahertz (THz) waves were an undiscovered, or more importantly, an unexploited region of the electromagnetic spectrum. The reason was the difficulty of generating and detecting THz waves. Recent advances in hardware technology have started to open up the field to new applications such as THz imaging. THz waves can penetrate various materials. However, automated processing of THz images can be challenging. The THz frequency band is especially suited to penetrating clothing, and because this radiation has no harmful ionizing effects it is safe for human beings. Strong technology development in this band has produced a few interesting devices. Even though the development of THz cameras is an emerging topic, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. Therefore, THz image processing is a challenging and urgent topic. Digital THz image processing is a promising and cost-effective approach for demanding security and defense applications. In this article we demonstrate the results of image quality enhancement and image fusion applied to images captured by a commercially available passive THz camera, by means of various combined methods. Our research is focused on the detection of dangerous objects - guns, knives and bombs hidden under some popular types of clothing.

  6. Single eye or camera with depth perception

    Science.gov (United States)

    Kornreich, Philipp; Farell, Bart

    2012-10-01

    An imager that can measure the distance from each pixel to the point on the object that is in focus at that pixel is described. This is accomplished by a short photoconducting lossy lightguide section at each pixel. The eye or camera lens selects the object point whose range is to be determined at the pixel. Light arriving at an image point through a convex lens adds constructively only if it comes from the object point that is in focus at this pixel; light waves from all other object points cancel. Thus the lightguide at this pixel receives light from one object point only. This light signal has a phase component proportional to the range. The light intensity modes, and thus the photocurrent in the lightguides, shift in response to the phase of the incoming light. Contacts along the length of the lightguide collect the photocurrent signal containing the range information. Applications of this camera include autonomous vehicle navigation and robotic vision. An interesting application is as part of a crude teleportation system consisting of this camera and a three-dimensional printer at a remote location.

  7. Theory and applications of smart cameras

    CERN Document Server

    2016-01-01

    This book presents an overview of smart camera systems, considering practical applications but also reviewing fundamental aspects of the underlying technology. It introduces in a tutorial style the principles of sensing and signal processing, and also describes topics such as wireless connection to the Internet of Things (IoT), which is expected to be the biggest market for smart cameras. It is an excellent guide to the fundamentals of smart camera technology, and the chapters complement each other well, as the authors have worked as a team under the auspices of GFP (Global Frontier Project), the largest-scale funded research in Korea. This is the third of three books based on the Integrated Smart Sensors research project, which describe the development of innovative devices, circuits, and system-level enabling technologies. The aim of the project was to develop common platforms on which various devices and sensors can be loaded, and to create systems offering significant improvements in information processi...

  8. Stereo cameras on the International Space Station

    Science.gov (United States)

    Sabbatini, Massimo; Visentin, Gianfranco; Collon, Max; Ranebo, Hans; Sunderland, David; Fortezza, Raimondo

    2007-02-01

    Three-dimensional media is a unique and efficient means to virtually visit or observe objects that cannot easily be reached otherwise, like the International Space Station. The advent of auto-stereoscopic displays and stereo projection systems is making stereo media available to larger audiences than the traditional communities of scientists and design engineers. It is foreseen that a major demand for 3D content will come from the entertainment area. Taking advantage of the six-month stay on the International Space Station of a fellow European astronaut, Thomas Reiter, the Erasmus Centre uploaded to the ISS a newly developed, fully digital stereo camera, the Erasmus Recording Binocular (ERB). Testing the camera and its human interfaces in weightlessness, as well as accurately mapping the interior of the ISS, are the main objectives of the experiment, which has just been completed at the time of writing. The intent of this paper is to share with the readers the design challenges tackled in the development and operation of the ERB camera and to highlight some of the future plans the Erasmus Centre team has in the pipeline.

  9. Modeling and simulation of gamma camera

    International Nuclear Information System (INIS)

    Simulation techniques play a vital role in the design of sophisticated instruments and also in the training of operating and maintenance staff. Gamma camera systems have been used for functional imaging in nuclear medicine. Functional images are derived from the external counting of a gamma-emitting radioactive tracer that, after introduction into the body, mimics the behavior of a native biochemical compound. The position-sensitive detector yields the coordinates of the gamma-ray interaction with the detector, which are used to estimate the point of gamma-ray emission within the tracer distribution space. This advanced imaging device is thus dependent on the performance of algorithms for coordinate computation, estimation of the point of emission, generation of the image and display of the image data. Contemporary systems also have protocols for quality control and clinical evaluation of imaging studies. Simulation of this processing leads to an understanding of the basic camera design problems. This report describes a PC-based package for the design and simulation of a gamma camera, along with options for simulating data acquisition and quality control of imaging studies. Image display and data processing, the other options implemented in SIMCAM, will be described in separate reports (under preparation). Gamma camera modeling and simulation in SIMCAM has preset configurations of the design parameters for various sizes of crystal detector, with the option to pack the PMTs on a hexagonal or square lattice. Different algorithms for coordinate computation and spatial distortion removal are allowed, in addition to simulation of the energy correction circuit. The user can simulate different static, dynamic, MUGA and SPECT studies. The acquired/simulated data is processed for quality control and clinical evaluation of the imaging studies. Results show that the program can be used to assess these performances. Also the variations in performance parameters can be assessed due to the induced
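    The coordinate-computing step at the heart of such a simulator is classically Anger logic: the event position is estimated as the signal-weighted centroid of the photomultiplier-tube positions. A minimal sketch, with a hypothetical PMT layout rather than SIMCAM's actual configuration:

```python
import numpy as np

def anger_position(pmt_xy, signals):
    """Signal-weighted centroid of PMT positions (classic Anger logic)."""
    s = np.asarray(signals, dtype=float)    # one signal amplitude per PMT
    xy = np.asarray(pmt_xy, dtype=float)    # (n_pmt, 2) PMT centre coordinates
    return (s @ xy) / s.sum()               # weighted mean = event estimate
```

    Spatial-distortion correction would then be applied on top of this raw centroid, since the simple weighted mean is biased near the crystal edges.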

  10. Preliminary field evaluation of solid state cameras for security applications

    International Nuclear Information System (INIS)

    Recent developments in solid state imager technology have resulted in a series of compact, lightweight, all-solid-state closed circuit television (CCTV) cameras. Although it is widely known that the various solid state cameras have less light sensitivity and lower resolution than their vacuum tube counterparts, the potential for having a much longer Mean Time Between Failure (MTBF) for the all-solid-state cameras is generating considerable interest within the security community. Questions have been raised as to whether the newest and best of the solid state cameras are a viable alternative to the high maintenance vacuum tube cameras in exterior security applications. To help answer these questions, a series of tests were performed by Sandia National Laboratories at various test sites and under several lighting conditions. In general, all-solid-state cameras need to be improved in four areas before they can be used as wholesale replacements for tube cameras in exterior security applications: resolution, sensitivity, contrast, and smear. However, with careful design some of the higher performance cameras can be used for perimeter security systems, and all of the cameras have applications where they are uniquely qualified. Many of the cameras are well suited for interior assessment and surveillance uses, and several of the cameras are well designed as robotics and machine vision devices

  11. Presence capture cameras - a new challenge to the image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to the market, and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary a lot. Presence capture cameras still face the same quality issues as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system that can record 3D audio-visual reality as it is has to have several camera modules, several microphones and, especially, technology that can synchronize the output of several sources into a seamless and smooth virtual-reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range create the basis of virtual-reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors. New quality features must also be validated: for example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how is the stitching validated? This work describes the quality factors that remain valid in presence capture cameras and assesses their importance. Moreover, the new challenges of presence capture cameras are investigated from an image and video quality point of view. The work contains considerations of how well current measurement methods can be used with presence capture cameras.

  12. High-dimensional camera shake removal with given depth map.

    Science.gov (United States)

    Yue, Tao; Suo, Jinli; Dai, Qionghai

    2014-06-01

    Camera motion blur is drastically nonuniform for large depth-range scenes, and the nonuniformity caused by camera translation is depth dependent, unlike that caused by camera rotation. To restore the blurry images of large-depth-range scenes deteriorated by arbitrary camera motion, we build an image blur model considering 6 degrees of freedom (DoF) of camera motion with a given scene depth map. To make this 6D depth-aware model tractable, we propose a novel parametrization strategy to reduce the number of variables, as well as an effective method to estimate the high-dimensional camera motion. The number of variables is reduced by a temporally sampled motion function, which describes the 6-DoF camera motion by sampling the camera trajectory uniformly in the time domain. To effectively estimate the high-dimensional camera motion parameters, we construct a probabilistic motion density function (PMDF) to describe the probability distribution of camera poses during exposure, and apply it as a unified constraint to guide the convergence of the iterative deblurring algorithm. Specifically, the PMDF is computed through a back projection from 2D local blur kernels to the 6D camera motion parameter space and robust voting. We conduct a series of experiments on both synthetic and real captured data, and validate that our method achieves better performance than existing uniform and nonuniform methods on large-depth-range scenes.

  13. Qualification Tests of Micro-camera Modules for Space Applications

    Science.gov (United States)

    Kimura, Shinichi; Miyasaka, Akira

    Visual capability is very important for space-based activities, for which small, low-cost space cameras are desired. Although cameras for terrestrial applications are continually being improved, little progress has been made on cameras used in space, which must be extremely robust to withstand harsh environments. This study focuses on commercial off-the-shelf (COTS) CMOS digital cameras because they are very small and are based on an established mass-market technology. Radiation and ultrahigh-vacuum tests were conducted on a small COTS camera that weighs less than 100 mg (including optics). This paper presents the results of the qualification tests for COTS cameras and for a small, low-cost COTS-based space camera.

  14. Determining Vision Graphs for Distributed Camera Networks Using Feature Digests

    Directory of Open Access Journals (Sweden)

    Cheng Zhaolin

    2007-01-01

    Full Text Available We propose a decentralized method for obtaining the vision graph for a distributed, ad-hoc camera network, in which each edge of the graph represents two cameras that image a sufficiently large part of the same environment. Each camera encodes a spatially well-distributed set of distinctive, approximately viewpoint-invariant feature points into a fixed-length "feature digest" that is broadcast throughout the network. Each receiver camera robustly matches its own features with the decompressed digest and decides whether sufficient evidence exists to form a vision graph edge. We also show how a camera calibration algorithm that passes messages only along vision graph edges can recover accurate 3D structure and camera positions in a distributed manner. We analyze the performance of different message formation schemes, and show that high detection rates can be achieved while maintaining low false alarm rates using a simulated 60-node outdoor camera network.
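
The edge decision each receiver makes can be sketched as a descriptor-matching vote. This is a toy illustration with invented thresholds and synthetic descriptors, not the paper's digest compression scheme:

```python
import numpy as np

def vision_graph_edge(own_desc, digest_desc, ratio=0.8, min_matches=8):
    """Decide whether to form a vision-graph edge from a received digest.

    own_desc: (N, D) local feature descriptors; digest_desc: (M, D) the
    decompressed digest broadcast by another camera. A feature counts as a
    match if it passes Lowe's ratio test against the digest; an edge is
    declared once enough matches accumulate. Thresholds are illustrative."""
    matches = 0
    for d in own_desc:
        dist = np.linalg.norm(digest_desc - d, axis=1)
        i, j = np.argsort(dist)[:2]
        if dist[i] < ratio * dist[j]:
            matches += 1
    return matches >= min_matches, matches

rng = np.random.default_rng(0)
shared = rng.normal(size=(10, 32))                   # features both cameras see
cam_a = shared + 0.01 * rng.normal(size=shared.shape)  # noisy re-observations
cam_b = np.vstack([shared, rng.normal(size=(20, 32))])  # digest with extra features
edge, n = vision_graph_edge(cam_a, cam_b)
```

Because ten descriptors match unambiguously, the receiver declares an edge; a camera with no overlapping field of view would fail the ratio test on nearly every feature.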

  15. The GCT camera for the Cherenkov Telescope Array

    CERN Document Server

    Brown, Anthony M; Allan, D; Amans, J P; Armstrong, T P; Balzer, A; Berge, D; Boisson, C; Bousquet, J -J; Bryan, M; Buchholtz, G; Chadwick, P M; Costantini, H; Cotter, G; Daniel, M K; De Franco, A; De Frondat, F; Dournaux, J -L; Dumas, D; Fasola, G; Funk, S; Gironnet, J; Graham, J A; Greenshaw, T; Hervet, O; Hidaka, N; Hinton, J A; Huet, J -M; Jegouzo, I; Jogler, T; Kraus, M; Lapington, J S; Laporte, P; Lefaucheur, J; Markoff, S; Melse, T; Mohrmann, L; Molyneux, P; Nolan, S J; Okumura, A; Osborne, J P; Parsons, R D; Rosen, S; Ross, D; Rowell, G; Sato, Y; Sayede, F; Schmoll, J; Schoorlemmer, H; Servillat, M; Sol, H; Stamatescu, V; Stephan, M; Stuik, R; Sykes, J; Tajima, H; Thornhill, J; Tibaldo, L; Trichard, C; Vink, J; Watson, J J; White, R; Yamane, N; Zech, A; Zink, A; Zorn, J

    2016-01-01

    The Gamma-ray Cherenkov Telescope (GCT) is proposed for the Small-Sized Telescope component of the Cherenkov Telescope Array (CTA). GCT's dual-mirror Schwarzschild-Couder (SC) optical system allows the use of a compact camera with small form-factor photosensors. The GCT camera is ~0.4 m in diameter and has 2048 pixels; each pixel has a ~0.2 degree angular size, resulting in a wide field-of-view. The GCT camera is designed for high performance at low cost: it houses 32 front-end electronics modules providing full waveform information for all of the camera's 2048 pixels. The first GCT camera prototype, CHEC-M, was commissioned during 2015, culminating in the first Cherenkov images recorded by an SC telescope and the first light of a CTA prototype. In this contribution we give a detailed description of the GCT camera and present preliminary results from CHEC-M's commissioning.

  16. Simple method for calibrating omnidirectional stereo with multiple cameras

    Science.gov (United States)

    Ha, Jong-Eun; Choi, I.-Sak

    2011-04-01

    Cameras can give useful information for the autonomous navigation of a mobile robot. Typically, one or two cameras are used for this task. Recently, omnidirectional stereo vision systems that can cover the whole surrounding environment of a mobile robot have been adopted. They usually employ a mirror, which cannot offer uniform spatial resolution. In this paper, we deal with an omnidirectional stereo system consisting of eight cameras, where each pair of vertically arranged cameras constitutes one stereo system. Camera calibration is the first step needed to obtain 3D information. Calibration using a planar pattern requires many images acquired under different poses, so calibrating all eight cameras this way is tedious. In this paper, we present a simple calibration procedure using a cubic-type calibration structure that surrounds the omnidirectional stereo system. We can calibrate all the cameras of the omnidirectional stereo system in just one shot.
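
A calibration structure with known 3D geometry is what makes one-shot calibration possible: each camera sees enough known points in a single image to solve its projection matrix, e.g. via the classic Direct Linear Transform. The sketch below is a generic DLT, not the authors' exact procedure, and all numbers are invented:

```python
import numpy as np

def dlt_projection(X3d, x2d):
    """Direct Linear Transform: estimate a 3x4 projection matrix (up to
    scale) from >= 6 known 3D points and their 2D images -- the kind of
    data a calibration structure with known geometry yields in one shot."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X3d, x2d):
        P = [Xw, Yw, Zw, 1.0]
        A.append(P + [0.0] * 4 + [-u * p for p in P])
        A.append([0.0] * 4 + P + [-v * p for p in P])
    # The projection matrix is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

# Synthetic check: project cube corners with a known camera, then recover it.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P_true = K @ np.hstack([np.eye(3), [[0.0], [0.0], [5.0]]])  # camera 5 m away
pts = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)], float)
proj = (P_true @ np.hstack([pts, np.ones((8, 1))]).T).T
uv = proj[:, :2] / proj[:, 2:]
P_est = dlt_projection(pts, uv)
```

Reprojecting the cube corners through the recovered matrix reproduces the observed image points, confirming the single-image solve.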

  17. Calibration of asynchronous smart phone cameras from moving objects

    Science.gov (United States)

    Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel

    2015-04-01

    Calibrating multiple cameras is a fundamental prerequisite for many computer vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application using a pair of low-cost portable cameras with different parameters, as found in smart phones. It addresses the issues of acquisition, detection of moving objects, dynamic camera registration, and tracking of an arbitrary number of targets. The data are acquired using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of the cameras onto the same world reference frame is performed using a recently developed method for camera calibration using a disparity space parameterisation and the single-cluster PHD filter.

  18. Markerless Camera Pose Estimation - An Overview

    OpenAIRE

    Nöll, Tobias; Pagani, Alain; Stricker, Didier

    2011-01-01

    As human perception shows, a correct interpretation of a 3D scene on the basis of a 2D image is possible without markers. Solely by identifying natural features of different objects, their locations and orientations in the image can be determined. This allows a three-dimensional interpretation of a two-dimensional pictured scene. The key aspect of this interpretation is the correct estimation of the camera pose, i.e. the knowledge of the orientation and location a picture was recorded...

  19. A positron camera for industrial application

    International Nuclear Information System (INIS)

    A positron camera for application to flow tracing and measurement in mechanical subjects is described. It is based on two 300 x 600 mm2 hybrid multiwire detectors; the cathodes are in the form of lead strips planted onto printed-circuit board, and delay lines are used to determine the location of photon interactions. Measurements of the positron detection efficiency (30 Hz μCi-1 for a centred unshielded source), the maximum data logging rate (3 kHz) and the spatial resolving power (point source response = 5.7 mm fwhm) are presented and discussed, and results from initial demonstration experiments are shown. (orig.)

  20. Calibrating Images from the MINERVA Cameras

    Science.gov (United States)

    Mercedes Colón, Ana

    2016-01-01

    The MINiature Exoplanet Radial Velocity Array (MINERVA) consists of an array of robotic telescopes located on Mount Hopkins, Arizona, with the purpose of performing transit photometry and spectroscopy to find Earth-like planets around Sun-like stars. In order to make photometric observations, it is necessary to calibrate the CCD cameras of the telescopes to account for possible instrumental error in the data. In this project, we developed a pipeline that takes optical images and calibrates them using sky flats, darks, and biases to generate a transit light curve.
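
The calibration steps the abstract names (biases, darks, sky flats) follow a standard CCD reduction recipe. The sketch below shows that recipe with toy uniform frames; it is a generic illustration, not MINERVA's actual pipeline:

```python
import numpy as np

def calibrate_frame(raw, bias, dark, flat, exptime, dark_exptime):
    """Standard CCD reduction: subtract the bias, subtract the
    exposure-scaled dark current, then divide by the median-normalized
    flat field to remove pixel-to-pixel sensitivity variations."""
    dark_current = (dark - bias) * (exptime / dark_exptime)
    flat_field = flat - bias
    flat_norm = flat_field / np.median(flat_field)
    return (raw - bias - dark_current) / flat_norm

bias = np.full((4, 4), 100.0)
dark = bias + 10.0        # dark frame taken with dark_exptime = 1 s
flat = bias + 200.0       # uniform illumination -> normalized flat == 1
raw = bias + 10.0 + 50.0  # science frame: bias + dark + 50 counts of signal
sci = calibrate_frame(raw, bias, dark, flat, exptime=1.0, dark_exptime=1.0)
```

With these toy frames the calibrated science image is a flat 50 counts, i.e. only the astrophysical signal survives the reduction.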

  1. Development of a micro-PIXE camera

    International Nuclear Information System (INIS)

    We developed a system for μ-PIXE analysis at the division of the Takasaki Ion Accelerator for Advanced Radiation Application (TIARA) at the Japan Atomic Energy Research Institute (JAERI), which consists of a microbeam apparatus, a multi-parameter data acquisition system and a personal computer. Elemental analysis in a region of 500 μm x 500 μm can be performed with a spatial resolution of < 0.3 μm, and multi-elemental distributions are presented as images on a computer display even during measurement. We call this system a micro-PIXE camera. (author)

  2. Computational cameras for moving iris recognition

    Science.gov (United States)

    McCloskey, Scott; Venkatesha, Sharath

    2015-05-01

    Iris-based biometric identification is increasingly used for facility access and other security applications. Like all methods that exploit visual information, however, iris systems are limited by the quality of captured images. Optical defocus due to a small depth of field (DOF) is one such challenge, as is the acquisition of sharply-focused iris images from subjects in motion. This manuscript describes the application of computational motion-deblurring cameras to the problem of moving iris capture, from the underlying theory to system considerations and performance data.

  3. Disaster Response for Effective Mapping and Wayfinding

    NARCIS (Netherlands)

    Gunawan L.T.

    2013-01-01

    The research focuses on guiding the affected population towards a safe location in a disaster area by utilizing their self-help capacity with prevalent mobile technology. In contrast to the traditional centralized information management systems for disaster response, this research proposes a decen-

  4. Analysis of Camera Parameters Value in Various Object Distances Calibration

    International Nuclear Information System (INIS)

    In photogrammetric applications, good camera parameters are needed for mapping purposes, for example with an Unmanned Aerial Vehicle (UAV) equipped with a non-metric camera. Simple camera calibration is a common laboratory procedure for obtaining the camera parameters' values. In aerial mapping, the interior camera parameters obtained from close-range camera calibration are used to correct image error. However, the causes and effects of the calibration steps used to achieve accurate mapping need to be analyzed. This research therefore contributes an analysis of camera parameters obtained with a portable calibration frame of 1.5 × 1 meter dimension. Object distances of two, three, four, five, and six meters are the research focus. Results are analyzed to find the changes in the image and camera parameters' values. The calibration parameters of a camera are found to differ depending on the type of calibration parameter and the object distance.

  5. Online camera-gyroscope autocalibration for cell phones.

    Science.gov (United States)

    Jia, Chao; Evans, Brian L

    2014-12-01

    The gyroscope plays a key role in estimating 3D camera rotation for various vision applications on cell phones, including video stabilization and feature tracking. Successful fusion of gyroscope and camera data requires that the camera, the gyroscope, and their relative pose be calibrated. In addition, the timestamps of gyroscope readings and video frames are usually not well synchronized. Previous work performed camera-gyroscope calibration and synchronization offline, after the entire video sequence had been captured and with restrictions on the camera motion, which is unnecessarily restrictive for everyday users running apps that directly use the gyroscope. In this paper, we propose an online method that estimates all the necessary parameters while a user is capturing video. Our contributions are: 1) simultaneous online camera self-calibration and camera-gyroscope calibration based on an implicit extended Kalman filter and 2) generalization of the multiple-view coplanarity constraint on camera rotation to a rolling shutter camera model for cell phones. The proposed method is able to estimate the needed calibration and synchronization parameters online with all kinds of camera motion and can be embedded in gyro-aided applications such as video stabilization and feature tracking. Both Monte Carlo simulation and cell phone experiments show that the proposed online calibration and synchronization method converges quickly to the ground truth values.
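
The timestamp-synchronization problem the abstract describes can be illustrated with a much simpler batch technique than the paper's online EKF: cross-correlate the gyro angular-speed signal with the rotation speed derived from video frames and read off the lag. All signal parameters below are invented for illustration:

```python
import numpy as np

def estimate_time_offset(gyro_speed, cam_speed, dt):
    """Estimate the gyro-camera timestamp offset by cross-correlating the
    gyro angular-speed signal with the rotation speed derived from video
    frames. A simple batch alternative shown for intuition -- the paper
    itself estimates this online with an implicit extended Kalman filter."""
    g = gyro_speed - gyro_speed.mean()
    c = cam_speed - cam_speed.mean()
    corr = np.correlate(g, c, mode="full")
    # With g[n] ~ c[n + offset/dt], the peak sits at lag = -offset/dt.
    lag = np.argmax(corr) - (len(c) - 1)
    return -lag * dt

dt = 0.01                   # both signals resampled to 100 Hz (assumed)
t = np.arange(0, 5, dt)
true_offset = 0.07          # gyro timestamps lead the video by 70 ms
cam = np.sin(2 * np.pi * 1.3 * t)                   # camera-derived rotation speed
gyro = np.sin(2 * np.pi * 1.3 * (t + true_offset))  # same motion, shifted clock
est = estimate_time_offset(gyro, cam, dt)
```

The estimate recovers the 70 ms offset to within one sample; an online filter refines the same quantity continuously instead of in a batch.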

  6. Time-of-Flight Microwave Camera

    Science.gov (United States)

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-10-01

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
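
Two back-of-envelope relations connect the reported figures. These are standard FMCW rules of thumb, not taken from the paper's own derivation:

```python
c = 3.0e8              # speed of light, m/s

# FMCW depth resolution is set by the swept bandwidth B: delta_R = c / (2B).
B = 12e9 - 8e9         # full X-band sweep, Hz
range_res = c / (2 * B)  # ~3.7 cm round-trip range resolution

# A 200 ps time resolution corresponds to an optical path in free space of
# c * 200 ps = 6 cm, matching the figure quoted in the abstract.
path_res = c * 200e-12
```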

  7. Women's Creation of Camera Phone Culture

    Directory of Open Access Journals (Sweden)

    Dong-Hoo Lee

    2005-01-01

    Full Text Available A major aspect of the relationship between women and the media is the extent to which the new media environment is shaping how women live and perceive the world. It is necessary to understand, in a concrete way, how the new media environment is articulated to our gendered culture, how the symbolic or physical forms of the new media condition women’s experiences, and the degree to which a ‘post-gendered re-codification’ can be realized within a new media environment. This paper intends to provide an ethnographic case study of women’s experiences with camera phones, examining the extent to which these experiences recreate or reconstruct women’s subjectivity or identity. By taking a close look at the ways in which women utilize and appropriate the camera phone in their daily lives, it focuses not only on women’s cultural practices in making meanings but also on their possible effect in the deconstruction of gendered techno-culture.

  8. Scalable IC Platform for Smart Cameras

    Directory of Open Access Journals (Sweden)

    Harry Broers

    2005-08-01

    Full Text Available Smart cameras are among the emerging new fields of electronics. The points of interest are in the application areas, software and IC development. In order to reduce cost, it is worthwhile to invest in a single architecture that can be scaled in performance (and the resulting power consumption) for the various application areas. In this paper, we show that the combination of an SIMD (single-instruction multiple-data) processor and a general-purpose DSP is very advantageous for the image processing tasks encountered in smart cameras. While the SIMD processor delivers the very high performance necessary by exploiting the inherent data parallelism found in the pixel-crunching part of the algorithms, the DSP offers a friendly approach to the more complex tasks. The paper further argues that SIMD processors have very convenient scaling properties in silicon, making the complete SIMD-DSP architecture suitable for different application areas without changing the software suite. Analysis of the changes in power consumption due to scaling shows that, for typical image processing tasks, it is beneficial to scale the SIMD processor to use the maximum level of parallelism available in the algorithm if the IC supply voltage can be lowered. If silicon cost is of importance, the parallelism of the processor should instead be scaled to just reach the desired performance given the speed of the silicon.
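
The power argument rests on the first-order CMOS dynamic-power relation P = C·Vdd²·f. The numbers below are purely illustrative, not taken from the paper:

```python
def dynamic_power(c_switched, vdd, freq):
    """First-order CMOS dynamic power: P = C * Vdd^2 * f."""
    return c_switched * vdd ** 2 * freq

# Doubling the SIMD width roughly doubles the switched capacitance but halves
# the clock needed for the same pixel throughput; if the supply voltage can
# then be lowered as well, the quadratic Vdd term dominates and total power
# drops. (Illustrative numbers only.)
narrow = dynamic_power(1.0e-9, 1.2, 200e6)  # few lanes, fast clock
wide = dynamic_power(2.0e-9, 1.0, 100e6)    # 2x lanes, half clock, lower Vdd
```

Here the wider, slower configuration draws about 0.2 W against 0.29 W for the narrow one, which is the trade-off the abstract's scaling analysis formalizes.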

  9. Infrared Camera Analysis of Laser Hardening

    Directory of Open Access Journals (Sweden)

    J. Tesar

    2012-01-01

    Full Text Available The improvement of surface properties by processes such as laser hardening is becoming very important in present-day manufacturing. The resulting laser hardening depth and surface hardness can be affected by changes in the optical properties of the material surface, that is, by the absorptivity that gives the ratio between absorbed energy and incident laser energy. The surface changes on a tested sample of a steel block were made by an engraving laser with different scanning velocities and repetition frequencies. During laser hardening, the process was observed by an infrared (IR) camera system that measures infrared radiation from the heated sample and depicts it in the form of a temperature field. Images of the sample from the IR camera are shown, and the maximal temperatures of all engraved areas are evaluated and compared. The surface hardness was measured, and the hardening depth was estimated from the measured hardness profile in the sample cross-section. The correlation between the reached temperature, surface hardness, and hardening depth is shown: the highest temperatures correspond to the lowest hardness and the greatest hardening depth, and the lowest temperatures to the highest hardness and the shallowest hardening depth.

  10. Multi-band infrared camera systems

    Science.gov (United States)

    Davis, Tim; Lang, Frank; Sinneger, Joe; Stabile, Paul; Tower, John

    1994-12-01

    The program resulted in an IR camera system that utilizes a unique MOS addressable focal plane array (FPA) with full TV resolution, electronic control capability, and windowing capability. Two systems were delivered, each with two different camera heads: a Stirling-cooled 3-5 micron band head and a liquid nitrogen-cooled, filter-wheel-based, 1.5-5 micron band head. Signal processing features include averaging of up to 16 frames, flexible compensation modes, gain and offset control, and real-time dither. The primary digital interface is a Hewlett-Packard standard GPIB (IEEE-488) port that is used to upload and download data. The FPA employs an X-Y addressed PtSi photodiode array, CMOS horizontal and vertical scan registers, horizontal signal line (HSL) buffers followed by a high-gain preamplifier, and a depletion NMOS output amplifier. The 640 x 480 MOS X-Y addressed FPA has a high degree of flexibility in operational modes. By changing the digital data pattern applied to the vertical scan register, the FPA can be operated in either an interlaced or noninterlaced format. The thermal sensitivity performance of the second system's Stirling-cooled head was the best of the systems produced.

  11. The design of aerial camera focusing mechanism

    Science.gov (United States)

    Hu, Changchang; Yang, Hongtao; Niu, Haijun

    2015-10-01

    In order to ensure the imaging resolution of an aerial camera and to compensate for defocus caused by changes in atmospheric temperature, pressure, oblique photographing distance and other environmental factors [1,2], while meeting the overall design requirements of the camera for lower mass and smaller size, a linear focusing mechanism is designed. Through the target-surface support, the target-surface component is connected to the focusing drive mechanism. Using a precision ball screw, the focusing mechanism transforms the rotary motion of the motor into linear motion of the focal plane assembly. The motion is constrained by a linear guide, a magnetic encoder is adopted to detect the resulting displacement, and closed-loop control is used to realize accurate focusing. This paper illustrates the design scheme of the focusing mechanism and analyzes its error sources. The design has the advantages of low friction and a simple transmission chain, reducing the transmission error effectively. The paper also analyzes the target surface by finite element analysis and presents a lightweight design. Analysis shows that the focusing mechanism achieves a precision better than 3 μm over a focusing range of ±2 mm.

  12. Far-infrared cameras for automotive safety

    Science.gov (United States)

    Lonnoy, Jacques; Le Guilloux, Yann; Moreira, Raphael

    2005-02-01

    Far-infrared (FIR) cameras, initially used for driving military vehicles, are slowly entering the commercial (luxury) car market, where FIR imagery provides useful assistance for driving at night or in adverse conditions (fog, smoke, ...). However, this imagery requires some driver effort, as interpreting the images is not as natural as with visible or near-IR imagery. A developing field for FIR cameras is ADAS (Advanced Driver Assistance Systems), where processed FIR imagery, fused with data from other sensors (radar, ...), warns the driver when dangerous situations occur. This communication concentrates on processed FIR imagery for detecting objects or obstacles on or near the road. FIR imagery, which highlights hot spots, is a powerful detection tool, as it provides good contrast on some of the most common elements of road scenery (engines, wheels, gas exhaust pipes, pedestrians, two-wheelers, animals, ...). Moreover, FIR algorithms are much more robust than visible-light ones, as image contrast varies less over time (day/night, shadows, ...). Our detection algorithm is based on the one hand on the distinctive appearance of vehicles and pedestrians in FIR images, and on the other on the analysis of motion over time, which allows anticipation of future motion. We will show results obtained with processed FIR imagery within the PAROTO project, supported by the French Ministry of Research, which ended in spring 2004.

  13. FIDO Rover Retracted Arm and Camera

    Science.gov (United States)

    1999-01-01

    The Field Integrated Design and Operations (FIDO) rover extends the large mast that carries its panoramic camera. The FIDO is being used in ongoing NASA field tests to simulate driving conditions on Mars. FIDO is controlled from the mission control room at JPL's Planetary Robotics Laboratory in Pasadena. FIDO uses a robot arm to manipulate science instruments and it has a new mini-corer or drill to extract and cache rock samples. Several camera systems onboard allow the rover to collect science and navigation images by remote-control. The rover is about the size of a coffee table and weighs as much as a St. Bernard, about 70 kilograms (150 pounds). It is approximately 85 centimeters (about 33 inches) wide, 105 centimeters (41 inches) long, and 55 centimeters (22 inches) high. The rover moves up to 300 meters an hour (less than a mile per hour) over smooth terrain, using its onboard stereo vision systems to detect and avoid obstacles as it travels 'on-the-fly.' During these tests, FIDO is powered by both solar panels that cover the top of the rover and by replaceable, rechargeable batteries.

  14. Gamma camera based FDG PET in oncology

    International Nuclear Information System (INIS)

    Positron Emission Tomography (PET) was introduced as a research tool in the 1970s, and it took about 20 years before PET became a useful clinical imaging modality. In the USA, insurance coverage for PET procedures in the 1990s was, I believe, the turning point for this progress. Initially PET was used in neurology, but recently more than 80% of PET procedures are oncological applications. I firmly believe that in the 21st century one cannot manage cancer patients properly without PET, and that PET is a very important medical imaging modality in basic and clinical sciences. PET is grouped into two categories: conventional (c) and gamma camera based (CB) PET. CBPET, utilizing dual-head gamma cameras and commercially available FDG, is more readily available to many medical centers at low cost to patients. In fact, there are more CBPET than cPET systems in operation in the USA. CBPET is inferior to cPET in its performance, but clinical studies in oncology are feasible without expensive infrastructure such as staffing, rooms and equipment. At Ajou University Hospital, CBPET was installed in late 1997, for the first time in Korea as well as in Asia, and the system has been used successfully and effectively in oncological applications. Ours was the fourth PET operation in Korea, and I believe this may have been instrumental in getting other institutions interested in clinical PET. The following is a brief description of our clinical experience with FDG CBPET in oncology.

  15. The Mars NetLander panoramic camera

    Science.gov (United States)

    Jaumann, Ralf; Langevin, Yves; Hauber, Ernst; Oberst, Jürgen; Grothues, Hans-Georg; Hoffmann, Harald; Soufflot, Alain; Bertaux, Jean-Loup; Dimarellis, Emmanuel; Mottola, Stefano; Bibring, Jean-Pierre; Neukum, Gerhard; Albertz, Jörg; Masson, Philippe; Pinet, Patrick; Lamy, Philippe; Formisano, Vittorio

    2000-10-01

    The panoramic camera (PanCam) imaging experiment is designed to obtain high-resolution multispectral stereoscopic panoramic images from each of the four Mars NetLander 2005 sites. The main scientific objectives to be addressed by the PanCam experiment are (1) to locate the landing sites and support the NetLander network sciences, (2) to geologically investigate and map the landing sites, and (3) to study the properties of the atmosphere and of variable phenomena. To place in situ measurements at a landing site into a proper regional context, it is necessary to determine the lander orientation on the ground and to locate the position of the landing site exactly with respect to the available cartographic database. This is not possible by tracking alone due to the lack of on-ground orientation and the so-called map-tie problem. Images as provided by the PanCam allow accurate tilt and north directions to be determined for each lander and the lander locations to be identified based on landmarks that can also be recognized in appropriate orbiter imagery. With this information, it will further be possible to improve the Mars-wide geodetic control point network and the resulting geometric precision of global map products. The major geoscientific objectives of the PanCam lander images are the recognition of surface features like ripples, ridges and troughs, and the identification and characterization of different rock and surface units based on their morphology, distribution, spectral characteristics, and physical properties. The analysis of the PanCam imagery will finally result in the generation of precise map products for each of the landing sites. So far, comparative geologic studies of the Martian surface are restricted to the temporally separated Mars Pathfinder and the two Viking Lander missions. Further lander missions are in preparation (Beagle-2, Mars Surveyor 03). 
NetLander provides the unique opportunity to nearly double the number of accessible landing site data by providing

  16. Mars Cameras Make Panoramic Photography a Snap

    Science.gov (United States)

    2008-01-01

    If you wish to explore a Martian landscape without leaving your armchair, a few simple clicks around the NASA Web site will lead you to panoramic photographs taken from the Mars Exploration Rovers, Spirit and Opportunity. Many of the technologies that enable this spectacular Mars photography have also inspired advancements in photography here on Earth, including the panoramic camera (Pancam) and its housing assembly, designed by the Jet Propulsion Laboratory and Cornell University for the Mars missions. Mounted atop each rover, the Pancam mast assembly (PMA) can tilt a full 180 degrees and swivel 360 degrees, allowing for a complete, highly detailed view of the Martian landscape. The rover Pancams take small, 1 megapixel (1 million pixel) digital photographs, which are stitched together into large panoramas that sometimes measure 4 by 24 megapixels. The Pancam software performs some image correction and stitching after the photographs are transmitted back to Earth. Different lens filters and a spectrometer also assist scientists in their analyses of infrared radiation from the objects in the photographs. These photographs from Mars spurred developers to begin thinking in terms of larger and higher quality images: super-sized digital pictures, or gigapixels, which are images composed of 1 billion or more pixels. Gigapixel images are more than 200 times the size captured by today's standard 4 megapixel digital camera. Although originally created for the Mars missions, the detail provided by these large photographs allows for many purposes, not all of which are limited to extraterrestrial photography.

  17. The NectarCAM camera project

    CERN Document Server

    Glicenstein, J-F; Barrio, J-A; Blanch, O; Boix, J; Bolmont, J; Boutonnet, C; Cazaux, S; Chabanne, E; Champion, C; Chateau, F; Colonges, S; Corona, P; Couturier, S; Courty, B; Delagnes, E; Delgado, C; Ernenwein, J-P; Fegan, S; Ferreira, O; Fesquet, M; Fontaine, G; Fouque, N; Henault, F; Gascón, D; Herranz, D; Hermel, R; Hoffmann, D; Houles, J; Karkar, S; Khelifi, B; Knödlseder, J; Martinez, G; Lacombe, K; Lamanna, G; LeFlour, T; Lopez-Coto, R; Louis, F; Mathieu, A; Moulin, E; Nayman, P; Nunio, F; Olive, J-F; Panazol, J-L; Petrucci, P-O; Punch, M; Prast, J; Ramon, P; Riallot, M; Ribó, M; Rosier-Lees, S; Sanuy, A; Siero, J; Tavernet, J-P; Tejedor, L A; Toussenel, F; Vasileiadis, G; Voisin, V; Waegebert, V; Zurbach, C

    2013-01-01

    In the framework of the next generation of Cherenkov telescopes, the Cherenkov Telescope Array (CTA), NectarCAM is a camera designed for the medium size telescopes covering the central energy range of 100 GeV to 30 TeV. NectarCAM will be finely pixelated (~ 1800 pixels for a 8 degree field of view, FoV) in order to image atmospheric Cherenkov showers by measuring the charge deposited within a few nanoseconds time-window. It will have additional features like the capacity to record the full waveform with GHz sampling for every pixel and to measure event times with nanosecond accuracy. An array of a few tens of medium size telescopes, equipped with NectarCAMs, will achieve up to a factor of ten improvement in sensitivity over existing instruments in the energy range of 100 GeV to 10 TeV. The camera is made of roughly 250 independent read-out modules, each composed of seven photo-multipliers, with their associated high voltage base and control, a read-out board and a multi-service backplane board. The read-out b...

  18. Focal Plane Metrology for the LSST Camera

    Energy Technology Data Exchange (ETDEWEB)

    Rasmussen, Andrew P.; Hale, Layton; Kim, Peter; Lee, Eric; Perl, Martin; Schindler, Rafe; Takacs, Peter; Thurston, Timothy (SLAC)

    2007-01-10

    Meeting the science goals for the Large Synoptic Survey Telescope (LSST) translates into a demanding set of imaging performance requirements for the optical system over a wide (3.5°) field of view. In turn, meeting those imaging requirements necessitates maintaining precise control of the focal plane surface (10 μm P-V) over the entire field of view (640 mm diameter) at the operating temperature (T ≈ -100 °C) and over the operational elevation angle range. We briefly describe the hierarchical design approach for the LSST Camera focal plane and the baseline design for assembling the flat focal plane at room temperature. Preliminary results of gravity load and thermal distortion calculations are provided, and early metrological verification of candidate materials under cold thermal conditions is presented. A detailed, generalized method for stitching together sparse metrology data originating from differential, non-contact metrological data acquisition spanning multiple (non-continuous) sensor surfaces making up the focal plane is described and demonstrated. Finally, we describe some in situ alignment verification alternatives, some of which may be integrated into the camera's focal plane.

  19. Motorcycle detection and counting using stereo camera, IR camera, and microphone array

    Science.gov (United States)

    Ling, Bo; Gibson, David R. P.; Middleton, Dan

    2013-03-01

    Detection, classification, and characterization are the key to enhancing motorcycle safety, motorcycle operations and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of auto fatalities. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including a stereo camera, a thermal IR camera and a unidirectional microphone array. The IR thermal camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes, which often show as bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist, who can be easily windowed out in the stereo disparity map. If the motorcyclist is detected through his or her 3D body recognition, the motorcycle is detected. Microphones are used to detect motorcycles, which often produce low-frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize the interference of background noises from sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has an excellent performance.

  20. Lights, Camera, AG-Tion: Promoting Agricultural and Environmental Education on Camera

    Science.gov (United States)

    Fuhrman, Nicholas E.

    2016-01-01

    Viewing of online videos and television segments has become a popular and efficient way for Extension audiences to acquire information. This article describes a unique approach to teaching on camera that may help Extension educators communicate their messages with comfort and personality. The S.A.L.A.D. approach emphasizes using relevant teaching…

  1. Calibration of the Lunar Reconnaissance Orbiter Camera

    Science.gov (United States)

    Tschimmel, M.; Robinson, M. S.; Humm, D. C.; Denevi, B. W.; Lawrence, S. J.; Brylow, S.; Ravine, M.; Ghaemi, T.

    2008-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) onboard the NASA Lunar Reconnaissance Orbiter (LRO) spacecraft consists of three cameras: the Wide-Angle Camera (WAC) and two identical Narrow Angle Cameras (NAC-L, NAC-R). The WAC is a push-frame imager with 5 visible wavelength filters (415 to 680 nm) at a spatial resolution of 100 m/pixel and 2 UV filters (315 and 360 nm) with a resolution of 400 m/pixel. In addition to the multicolor imaging, the WAC can operate in monochrome mode to provide a global large-incidence-angle basemap and a time-lapse movie of the illumination conditions at both poles. The WAC has a highly linear response, a read noise of 72 e- and a full well capacity of 47,200 e-. The signal-to-noise ratio in each band is 140 in the worst case. There are no out-of-band leaks and the spectral response of each filter is well characterized. Each NAC is a monochrome pushbroom scanner, providing images with a resolution of 50 cm/pixel from a 50-km orbit. A single NAC image has a swath width of 2.5 km and a length of up to 26 km. The NACs are mounted to acquire side-by-side imaging for a combined swath width of 5 km. The NAC is designed to fully characterize future human and robotic landing sites in terms of topography and hazard risks. The North and South poles will be mapped on a 1-meter scale poleward of 85.5° latitude. Stereo coverage can be provided by pointing the NACs off-nadir. The NACs are also highly linear. Read noise is 71 e- for NAC-L and 74 e- for NAC-R and the full well capacity is 248,500 e- for NAC-L and 262,500 e- for NAC-R. The focal lengths are 699.6 mm for NAC-L and 701.6 mm for NAC-R; the system MTF is 28% for NAC-L and 26% for NAC-R. The signal-to-noise ratio is at least 46 (terminator scene) and can be higher than 200 (high sun scene). Both NACs exhibit a straylight feature, which is caused by out-of-field sources and is of a magnitude of 1-3%. However, as this feature is well understood it can be greatly reduced during ground

  2. Photometric Calibration of Consumer Video Cameras

    Science.gov (United States)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). 
To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  3. Camera systems in human motion analysis for biomedical applications

    Science.gov (United States)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    Human Motion Analysis (HMA) has been one of the major interests among researchers in the fields of computer vision, artificial intelligence and biomedical engineering and sciences. This is due to its wide and promising biomedical applications, namely, bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and biomedical signal and image processing for diagnosis and rehabilitation applications. This paper provides an extensive review of the camera system of HMA and its taxonomy, including camera types, camera calibration and camera configuration. The review focuses on evaluating the camera system considerations of the HMA system specifically for biomedical applications. This review is important as it provides guidelines and recommendations for researchers and practitioners in selecting a camera system of the HMA system for biomedical applications.

  4. A wide-angle camera module for disposable endoscopy

    Science.gov (United States)

    Shim, Dongha; Yeon, Jesun; Yi, Jason; Park, Jongwon; Park, Soo Nam; Lee, Nanhee

    2016-06-01

    A wide-angle miniaturized camera module for a disposable endoscope is demonstrated in this paper. A lens module with a 150° angle of view (AOV) is designed and manufactured. All-plastic injection-molded lenses and a commercial CMOS image sensor are employed to reduce the manufacturing cost. The image sensor and LED illumination unit are assembled with the lens module. The camera module does not include a camera processor, to further reduce its size and cost. The size of the camera module is 5.5 × 5.5 × 22.3 mm³. The diagonal field of view (FOV) of the camera module is measured to be 110°. A prototype of a disposable endoscope is implemented to perform pre-clinical animal testing. The esophagus of an adult beagle dog is observed. These results demonstrate the feasibility of a cost-effective and high-performance camera module for disposable endoscopy.

  5. A wide-angle camera module for disposable endoscopy

    Science.gov (United States)

    Shim, Dongha; Yeon, Jesun; Yi, Jason; Park, Jongwon; Park, Soo Nam; Lee, Nanhee

    2016-08-01

    A wide-angle miniaturized camera module for a disposable endoscope is demonstrated in this paper. A lens module with a 150° angle of view (AOV) is designed and manufactured. All-plastic injection-molded lenses and a commercial CMOS image sensor are employed to reduce the manufacturing cost. The image sensor and LED illumination unit are assembled with the lens module. The camera module does not include a camera processor, to further reduce its size and cost. The size of the camera module is 5.5 × 5.5 × 22.3 mm³. The diagonal field of view (FOV) of the camera module is measured to be 110°. A prototype of a disposable endoscope is implemented to perform pre-clinical animal testing. The esophagus of an adult beagle dog is observed. These results demonstrate the feasibility of a cost-effective and high-performance camera module for disposable endoscopy.

  6. Quality assessment of user-generated video using camera motion

    OpenAIRE

    Guo, Jinlin; Gurrin, Cathal; Hopfgartner, Frank; Zhang, ZhenXing; Lao, Songyang

    2013-01-01

    With user-generated video (UGV) becoming so popular on the Web, a reliable quality assessment (QA) measure of UGV is necessary for improving the users' quality of experience in video-based applications. In this paper, we explore QA of UGV based on how much irregular camera motion it contains, in a low-cost manner. A block-match-based optical flow approach has been employed to extract camera motion features in UGV, based on which, irregular camera motion is calculated and ...
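
    The block-match optical-flow step described above can be sketched as follows. This is a minimal exhaustive-search implementation; the block size, search range, and SAD cost are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Exhaustive SAD block matching: for each block in `prev`, find the
    displacement (dy, dx) into `curr` with the smallest sum of absolute
    differences. Block size and search range are illustrative."""
    h, w = prev.shape
    vectors = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = prev[by:by + block, bx:bx + block]
            best_sad, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        sad = np.abs(curr[y:y + block, x:x + block] - ref).sum()
                        if sad < best_sad:
                            best_sad, best_v = sad, (dy, dx)
            vectors.append(best_v)
    return vectors

# A frame translated down by 2 pixels yields a (2, 0) motion vector;
# erratic vectors across blocks would indicate irregular camera motion.
rng = np.random.default_rng(1)
frame = rng.random((12, 12))
shifted = np.roll(frame, 2, axis=0)
print(block_match(frame, shifted))  # → [(2, 0)]
```

    A QA score along the lines the abstract suggests could then be derived from the temporal smoothness of these per-block vectors.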

  7. IR Camera Report for the 7 Day Production Test

    Energy Technology Data Exchange (ETDEWEB)

    Holloway, Michael Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-22

    The following report gives a summary of the IR camera performance results and data for the 7-day production run that occurred from 10 Sep 2015 through 16 Sep 2015. During this production run, our goal was to see how well the camera performed its task of monitoring the target window temperature with our improved alignment procedure and emissivity measurements. We also wanted to see if the increased shielding would be effective in protecting the camera from damage and failure.

  8. Experimental demonstration of RGB LED-based optical camera communications

    OpenAIRE

    Luo, Pengfei; Min ZHANG; Ghassemlooy, Zabih; Minh, Hoa Le; Tsai, Hsin-Mu; Tang, Xuan; Png, Lih Chieh; Han, Dahai

    2015-01-01

    Red, green, and blue (RGB) light-emitting diodes (LEDs) are widely used in everyday illumination, particularly where color-changing lighting is required. On the other hand, digital cameras with color filter arrays over image sensors have been also extensively integrated in smart devices. Therefore, optical camera communications (OCC) using RGB LEDs and color cameras is a promising candidate for cost-effective parallel visible light communications (VLC). In this paper, a single RGB LED-based O...

  9. Abnormal Event Detection via Multikernel Learning for Distributed Camera Networks

    OpenAIRE

    Tian Wang; Jie Chen; Paul Honeine; Hichem Snoussi

    2015-01-01

    Distributed camera networks play an important role in public security surveillance. Analyzing video sequences from cameras set at different angles will provide enhanced performance for detecting abnormal events. In this paper, an abnormal-event detection algorithm is proposed to identify unusual events captured by multiple cameras. The visual event is summarized and represented by the histogram of the optical flow orientation descriptor, and then a multikernel strategy that takes the multiview scen...

  10. PHOTOGRAMMETRIC PROCESSING OF APOLLO 15 METRIC CAMERA OBLIQUE IMAGES

    OpenAIRE

    K. L. Edmundson; O. Alexandrov; Archinal, B. A.; Becker, K.J.; Becker, T. L.; Kirk, R L; Moratto, Z. M.; Nefian, A. V.; Richie, J. O.; Robinson, M S

    2016-01-01

    The integrated photogrammetric mapping system flown on the last three Apollo lunar missions (15, 16, and 17) in the early 1970s incorporated a Metric (mapping) Camera, a high-resolution Panoramic Camera, and a star camera and laser altimeter to provide support data. In an ongoing collaboration, the U.S. Geological Survey’s Astrogeology Science Center, the Intelligent Robotics Group of the NASA Ames Research Center, and Arizona State University are working to achieve the most complete...

  11. Integrating Scene Parallelism in Camera Auto-Calibration

    Institute of Scientific and Technical Information of China (English)

    LIU Yong (刘勇); WU ChengKe (吴成柯); Hung-Tat Tsui

    2003-01-01

    This paper presents an approach for camera auto-calibration from uncalibrated video sequences taken by a hand-held camera. The novelty of this approach lies in that line parallelism is transformed into constraints on the absolute quadric during camera auto-calibration. This makes some critical cases solvable and the reconstruction more Euclidean. The approach is implemented and validated using simulated data and real image data. The experimental results show the effectiveness of the approach.

  12. IR Camera Report for the 7 Day Production Test

    International Nuclear Information System (INIS)

    The following report gives a summary of the IR camera performance results and data for the 7-day production run that occurred from 10 Sep 2015 through 16 Sep 2015. During this production run, our goal was to see how well the camera performed its task of monitoring the target window temperature with our improved alignment procedure and emissivity measurements. We also wanted to see if the increased shielding would be effective in protecting the camera from damage and failure.

  13. On Pixel Detection Threshold in the Gigavision Camera

    OpenAIRE

    Yang, F.; Sbaiz, L.; Charbon, E.; Susstrunk, S.; Vetterli, M.

    2010-01-01

    Recently, we have proposed a new imaging device called the gigavision camera, whose most important characteristic is that its pixels have a binary response. The response function of a gigavision sensor is non-linear and similar to a logarithmic function, which makes the camera suitable for high dynamic range imaging. One important parameter in the gigavision camera is the threshold for generating binary pixels. Threshold T relates to the number of photo-electrons necessary for the pixel output to switch f...
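
    The binary-pixel behaviour described above can be illustrated with a short Monte Carlo sketch. This is our own toy model, assuming Poisson photo-electron statistics; the intensities and pixel count are arbitrary, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_pixel_response(intensity, threshold, n_pixels=100_000):
    """Fraction of binary pixels that fire at a given mean photo-electron
    count per pixel. Each pixel collects a Poisson-distributed number of
    photo-electrons and outputs 1 once the count reaches `threshold`
    (the parameter T discussed above)."""
    electrons = rng.poisson(intensity, size=n_pixels)
    return float(np.mean(electrons >= threshold))

# For threshold T = 1 the expected aggregate response is 1 - exp(-intensity):
# a compressive, log-like curve that never hard-clips, which is what makes
# the sensor attractive for high dynamic range imaging.
for lam in (0.5, 1, 2, 4, 8):
    print(lam, round(binary_pixel_response(lam, threshold=1), 3))
```

    Raising the threshold shifts the curve toward higher intensities, which is why the choice of T matters for the sensor's operating range.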

  14. Central Acceptance Testing for Camera Technologies for CTA

    OpenAIRE

    Bonardi, A.; T. Buanes; Chadwick, P.; Dazzi, F.; A. Förster(CERN, Geneva, Switzerland); Hörandel, J. R.; Punch, M.; Consortium, R. M. Wagner for the CTA

    2015-01-01

    The Cherenkov Telescope Array (CTA) is an international initiative to build the next-generation ground-based very-high-energy gamma-ray observatory. It will consist of telescopes of three different sizes, employing several different technologies for the cameras that detect the Cherenkov light from the observed air showers. In order to ensure the compliance of each camera technology with CTA requirements, CTA will perform central acceptance testing of each camera technology. To assist with thi...

  15. Movement-based interaction in camera spaces: a conceptual framework

    DEFF Research Database (Denmark)

    Eriksson, Eva; Hansen, Thomas Riisgaard; Lykke-Olesen, Andreas

    2007-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking, and we show how these three concepts can be used to analyse other camera tracking applications.

  16. Analysis of Camera Arrays Applicable to the Internet of Things

    OpenAIRE

    Jiachen Yang; Ru Xu; Zhihan Lv; Houbing Song

    2016-01-01

    The Internet of Things is built based on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on the camera modeling and analysis, which is very important for stereo display and helps with viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are...

  17. Calibration of a Stereo Radiation Detection Camera Using Planar Homography

    Directory of Open Access Journals (Sweden)

    Seung-Hae Baek

    2016-01-01

    This paper proposes a calibration technique for a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision society. However, little or no stereo calibration has been investigated in radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, it is impossible to use a general stereo calibration algorithm directly. In this paper, we develop a hybrid-type stereo system which is equipped with both radiation and vision cameras. To calibrate the stereo radiation cameras, stereo images of a calibration pattern captured from the vision cameras are transformed into the view of the radiation cameras. The homography transformation is calibrated based on the geometric relationship between the visual and radiation camera coordinates. The accuracy of the stereo parameters of the radiation camera is analyzed by distance measurements to both visual light and gamma sources. The experimental results show that the measurement error is about 3%.
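
    The central operation, mapping points from one camera's view into another's via a planar homography, can be sketched with a standard direct linear transform (DLT) estimate. The point correspondences below are invented for illustration; the paper's calibration pattern and coordinates are not reproduced here:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src via a plain DLT
    (no coordinate normalization). src, dst: (N, 2) arrays, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A, found via SVD.
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)

def apply_homography(H, pts):
    """Map (N, 2) points through H using homogeneous coordinates."""
    q = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return q[:, :2] / q[:, 2:]

# Hypothetical correspondences between the vision view (unit square) and
# the radiation view (an arbitrary quadrilateral in pixel coordinates):
vision_pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
radiation_pts = np.array([[10, 20], [110, 25], [105, 130], [5, 125]], float)

H = fit_homography(vision_pts, radiation_pts)
print(np.round(apply_homography(H, vision_pts), 1))  # recovers radiation_pts
```

    With four exact correspondences the fit is exact; in practice more points and a normalized DLT (or a library routine such as OpenCV's findHomography) would be used for robustness.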

  18. Testing and evaluation of thermal cameras for absolute temperature measurement

    Science.gov (United States)

    Chrzanowski, Krzysztof; Fischer, Joachim; Matyszkiel, Robert

    2000-09-01

    The accuracy of temperature measurement is the most important criterion for the evaluation of thermal cameras used in applications requiring absolute temperature measurement. All the main international metrological organizations currently propose a parameter called uncertainty as a measure of measurement accuracy. We propose a set of parameters for the characterization of thermal measurement cameras. It is shown that if these parameters are known, it is possible to determine the uncertainty of temperature measurement due only to the internal errors of these cameras. Values of this uncertainty can be used as an objective criterion for comparing different thermal measurement cameras.
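
    Combining independent error contributions into a single measurement uncertainty conventionally follows the GUM (ISO Guide to the Expression of Uncertainty in Measurement) root-sum-square rule. A minimal sketch with made-up component values, not the paper's parameter set:

```python
import math

def combined_standard_uncertainty(components):
    """Combine independent standard-uncertainty contributions in quadrature,
    as recommended by the GUM for uncorrelated error sources."""
    return math.sqrt(sum(u ** 2 for u in components))

# Illustrative (invented) internal error contributions of a thermal camera,
# all expressed as standard uncertainties in kelvin:
u_calibration = 0.5  # calibration-curve error
u_noise = 0.2        # temporal noise (NETD-related)
u_emissivity = 0.8   # emissivity-setting error
u_ambient = 0.3      # reflected ambient temperature error

u_c = combined_standard_uncertainty([u_calibration, u_noise, u_emissivity, u_ambient])
print(f"combined standard uncertainty: {u_c:.2f} K")  # → 1.01 K
print(f"expanded uncertainty (k=2):    {2 * u_c:.2f} K")
```

    The single combined figure is what makes objective comparison of different cameras possible, provided each camera's component uncertainties are characterized the same way.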

  19. 360 deg Camera Head for Unmanned Sea Surface Vehicles

    Science.gov (United States)

    Townsend, Julie A.; Kulczycki, Eric A.; Willson, Reginald G.; Huntsberger, Terrance L.; Garrett, Michael S.; Trebi-Ollennu, Ashitey; Bergh, Charles F.

    2012-01-01

    The 360 camera head consists of a set of six color cameras arranged in a circular pattern such that their overlapping fields of view give a full 360 view of the immediate surroundings. The cameras are enclosed in a watertight container along with support electronics and a power distribution system. Each camera views the world through a watertight porthole. To prevent overheating or condensation in extreme weather conditions, the watertight container is also equipped with an electrical cooling unit and a pair of internal fans for circulation.

  20. Mid-IR image acquisition using a standard CCD camera

    DEFF Research Database (Denmark)

    Dam, Jeppe Seidelin; Sørensen, Knud Palmelund; Pedersen, Christian;

    2010-01-01

    Direct image acquisition in the 3-5 µm range is realized using a standard CCD camera and a wavelength up-converter unit. The converter unit transfers the image information to the NIR range, where state-of-the-art cameras exist.

  1. Camera traps can be heard and seen by animals.

    Science.gov (United States)

    Meek, Paul D; Ballard, Guy-Anthony; Fleming, Peter J S; Schaefer, Michael; Williams, Warwick; Falzon, Greg

    2014-01-01

    Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing range (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals' hearing and produce illumination that can be seen by many species.

  2. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    Science.gov (United States)

    Shortis, Mark

    2015-12-07

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  3. Robust multi-camera view face recognition

    CERN Document Server

    Kisku, Dakshina Ranjan; Gupta, Phalguni; Sing, Jamuna Kanta

    2010-01-01

    This paper presents multi-appearance fusion of Principal Component Analysis (PCA) and a generalization of Linear Discriminant Analysis (LDA) for a multi-camera-view offline face recognition (verification) system. The generalization of LDA has been extended to establish correlations between the face classes in the transformed representation and this is called the canonical covariate. The proposed system uses Gabor filter banks for characterization of facial features by spatial frequency, spatial locality and orientation to compensate for the variations in face instances that occur due to illumination, pose and facial expression changes. Convolution of a Gabor filter bank with face images produces Gabor face representations with high-dimensional feature vectors. PCA and canonical covariate are then applied on the Gabor face representations to reduce the high-dimensional feature spaces into low-dimensional Gabor eigenfaces and Gabor canonical faces. Reduced eigenface vector and canonical face vector are fused together usi...

  4. Retinal oximetry with a multiaperture camera

    Science.gov (United States)

    Lemaillet, Paul; Lompado, Art; Ibrahim, Mohamed; Nguyen, Quan Dong; Ramella-Roman, Jessica C.

    2010-02-01

    Oxygen saturation measurement in the retina is essential in monitoring the eye health of diabetic patients. In this paper, preliminary results of oxygen saturation measurements for a healthy patient's retina are presented. The retinal oximeter used is based on a regular fundus camera to which was added an optimized optical train designed to perform aperture division, while a filter array helps select the requested wavelengths. Hence, nine equivalent wavelength-dependent sub-images are taken in a snapshot, which helps minimize the effects of eye movements. The setup is calibrated using a set of reflectance calibration phantoms and a lookup table (LUT) is computed. An inverse model based on the LUT is presented to extract the optical properties of a patient's fundus and further estimate the oxygen saturation in a retinal vessel.
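
    One simple LUT-inversion strategy of the kind the abstract describes is nearest-neighbour lookup: precompute forward-model spectra over a grid of saturations, then return the grid entry whose spectrum best matches the nine-wavelength measurement. Everything below (the endmember spectra, the linear mixing model, the grid) is a hypothetical toy forward model, not the paper's phantom-calibrated LUT:

```python
import numpy as np

# Two hypothetical endmember spectra over nine wavelengths (matching the
# nine sub-images of the multiaperture camera); values are made up.
n_wavelengths = 9
spec_hb = np.linspace(0.9, 0.3, n_wavelengths)    # deoxyhemoglobin-like
spec_hbo2 = np.linspace(0.4, 0.8, n_wavelengths)  # oxyhemoglobin-like

# Toy forward model: reflectance as a linear mix parameterized by saturation.
saturations = np.linspace(0.0, 1.0, 101)
lut = np.outer(1 - saturations, spec_hb) + np.outer(saturations, spec_hbo2)

def estimate_saturation(measured):
    """Nearest-neighbour LUT inversion: return the saturation whose
    forward-model spectrum best matches the measured spectrum."""
    errors = np.linalg.norm(lut - measured, axis=1)
    return float(saturations[np.argmin(errors)])

measured = 0.25 * spec_hb + 0.75 * spec_hbo2  # noiseless test spectrum
print(estimate_saturation(measured))  # → 0.75
```

    A real inverse model would use measured phantom reflectances rather than a linear mix, and interpolation or least-squares fitting rather than a pure nearest-neighbour search.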

  5. Relevance of ellipse eccentricity for camera calibration

    Science.gov (United States)

    Mordwinzew, W.; Tietz, B.; Boochs, F.; Paulus, D.

    2015-05-01

    Plane circular targets are widely used within calibrations of optical sensors through photogrammetric set-ups. Due to this popularity, their advantages and disadvantages are also well studied in the scientific community. One main disadvantage occurs when the projected target is not parallel to the image plane. In this geometric constellation, the target has an elliptic geometry with an offset between its geometric and its projected center. This difference is referred to as ellipse eccentricity and is a systematic error which, if not treated accordingly, has a negative impact on the overall achievable accuracy. The magnitude and direction of eccentricity errors are dependent on various factors. The most important one is the target size. The bigger an ellipse in the image is, the bigger the error will be. Although correction models dealing with eccentricity have been available for decades, it is mostly seen as a planning task in which the aim is to choose the target size small enough so that the resulting eccentricity error remains negligible. Besides the fact that advanced mathematical models are available and that the influence of this error on camera calibration results is still not completely investigated, there are various additional reasons why bigger targets can or should not be avoided. One of them is the growing image resolution as a by-product from advancements in the sensor development. Here, smaller pixels have a lower S/N ratio, necessitating more pixels to assure geometric quality. Another scenario might need bigger targets due to larger scale differences whereas distant targets should still contain enough information in the image. In general, bigger ellipses contain more contour pixels and therefore more information. This supports the target-detection algorithms to perform better even at non-optimal conditions such as data from sensors with a high noise level. 
In contrast to rather simple measuring situations in a stereo or multi-image mode, the impact

  6. Dark energy camera installation at CTIO: overview

    Science.gov (United States)

    Abbott, Timothy M.; Muñoz, Freddy; Walker, Alistair R.; Smith, Chris; Montane, Andrés.; Gregory, Brooke; Tighe, Roberto; Schurter, Patricio; van der Bliek, Nicole S.; Schumacher, German

    2012-09-01

    The Dark Energy Camera (DECam) has been installed on the V. M. Blanco telescope at Cerro Tololo Inter-American Observatory in Chile. This major upgrade to the facility has required numerous modifications to the telescope and improvements in observatory infrastructure. The telescope prime focus assembly has been entirely replaced, and the f/8 secondary change procedure radically revised. The heavier instrument means that telescope balance has been significantly modified. The telescope control system has been upgraded. NOAO has established a data transport system to efficiently move DECam's output to the NCSA for processing. The observatory has integrated the DECam high-pressure, two-phase cryogenic cooling system into its operations and converted the Coudé room into an environmentally-controlled instrument handling facility incorporating a high-quality cleanroom. New procedures to ensure the safety of personnel and equipment have been introduced.

  7. Camera Raw Explained (3)

    Institute of Scientific and Technical Information of China (English)

    张恣宽

    2010-01-01

    Continuing from the previous installment, this article introduces the Camera Raw adjustment panels. (2) The Tone Curve panel: clicking the Tone Curve button opens the Tone Curve options panel (shortcut Ctrl+Alt+2). This panel is used mainly for fine adjustment of an image's midtones. Since Photoshop CS3, the curve's background has included the histogram waveform previously found only in the Levels dialog, so the tonal changes in a photo before and after adjustment can be seen at a glance.

  8. Collimated trans-axial tomographic scintillation camera

    International Nuclear Information System (INIS)

    The principal problem in trans-axial tomographic radioisotope scanning is the length of time required to obtain meaningful data. Patient movement and radioisotope migration during the scanning period can cause distortion of the image. The object of this invention is to reduce the scanning time without degrading the images obtained. A system is described in which a scintillation camera detector is moved in an orbit about the cranial-caudal axis of the patient. A collimator is used in which lead septa are arranged so as to admit gamma rays travelling perpendicular to this axis with high spatial resolution and those travelling in the direction of the axis with low spatial resolution, thus increasing the rate of acceptance of radioactive events contributing to the positional information obtainable, without sacrificing spatial resolution. (author)

  9. Neutron camera employing row and column summations

    Science.gov (United States)

    Clonts, Lloyd G.; Diawara, Yacouba; Donahue, Jr, Cornelius; Montcalm, Christopher A.; Riedel, Richard A.; Visscher, Theodore

    2016-06-14

    For each photomultiplier tube in an Anger camera, an R×S array of preamplifiers is provided to detect electrons generated within the photomultiplier tube. The outputs of the preamplifiers are digitized to measure the magnitude of the signals from each preamplifier. For each photomultiplier tube, a corresponding summation circuitry including R row summation circuits and S column summation circuits numerically adds the magnitudes of the signals from the preamplifiers for each row and for each column to generate histograms. For a P×Q array of photomultiplier tubes, P×Q summation circuitries generate P×Q row histograms including R entries and P×Q column histograms including S entries. The total set of histograms includes P×Q×(R+S) entries, which can be analyzed by a position calculation circuit to determine the locations of events (detection of a neutron).
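
    The row/column summation, and one simple centroid-style position estimate, can be sketched as follows. The 4×4 preamplifier array and signal magnitudes are made up, and the centroid is just one plausible position calculation; the patent's actual circuit is not specified here:

```python
import numpy as np

def row_column_histograms(signals):
    """Collapse an R x S preamplifier magnitude array into R row sums and
    S column sums, as the summation circuitry described above does."""
    return signals.sum(axis=1), signals.sum(axis=0)

def centroid(hist):
    """Centroid position estimate from a 1-D histogram (one possible
    position calculation; an illustrative choice)."""
    idx = np.arange(len(hist))
    return float((idx * hist).sum() / hist.sum())

# A synthetic event on a 4x4 preamplifier array: charge concentrated
# around row 1, column 2, with some spread to the neighbours.
signals = np.zeros((4, 4))
signals[1, 2] = 10.0
signals[1, 1] = signals[1, 3] = signals[0, 2] = signals[2, 2] = 2.0

rows, cols = row_column_histograms(signals)
print(centroid(rows), centroid(cols))  # → 1.0 2.0
```

    Working with R+S sums instead of the full R×S array is what keeps the downstream position calculation cheap enough for per-event processing.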

  10. Fast Camera Imaging of Hall Thruster Ignition

    International Nuclear Information System (INIS)

    Hall thrusters provide efficient space propulsion by electrostatic acceleration of ions. Rotating electron clouds in the thruster overcome the space charge limitations of other methods. Images of the thruster startup, taken with a fast camera, reveal a bright ionization period which settles into steady-state operation over 50 µs. The cathode introduces azimuthal asymmetry, which persists for about 30 µs into the ignition. Plasma thrusters are used on satellites for repositioning, orbit correction and drag compensation. The advantage of plasma thrusters over conventional chemical thrusters is that the exhaust energies are not limited by chemical energy to about an electron volt. For xenon Hall thrusters, the ion exhaust velocity can be 15-20 km/s, compared to 5 km/s for a typical chemical thruster.

  11. Acceptance tests of a new gamma camera

    International Nuclear Information System (INIS)

    For best patient service, a QA programme is needed to produce quantitative and qualitative data and to keep records of the results and equipment faults. Gamma cameras must be checked against the manufacturer's specifications; the service manual is usually useful for achieving this goal. Acceptance tests are very important not only for accepting a new gamma camera system for routine clinical use but also as a reference for future measurements. In this study, acceptance tests were performed on a new gamma camera in our department. It is a General Electric MG system with two detectors and two collimators: low energy general purpose (LEGP) and medium energy general purpose (MEGP). All intrinsic calibrations and corrections were done by the service engineer at installation (PM tune, dynamic correction, energy calibration, geometric calibration, energy correction, linearity correction and second-order corrections). After installation, calibrations and corrections, a close physical inspection of the mechanical and electrical safety aspects of the cameras was done by the responsible physicist of the department. The planar tests comprised measurements of system uniformity, resolution/linearity and multiple window spatial registration. All test procedures were performed according to NEMA procedures developed by the manufacturer. Intrinsic uniformity: NEMA uniformity was measured first using the service manual, and then uniformity images were acquired with 99mTc, 131I, 201Tl and 67Ga. They were evaluated qualitatively and quantitatively, but non-uniformities were observed, especially for detector II. The service engineers repeated all tests and made the necessary corrections, after which we repeated all the intrinsic uniformity tests. 99mTc intrinsic images were also acquired at 'no correction', 'no energy correction', 'no linearity correction', 'all corrections' and '±10% off peak', and compared. Extrinsic uniformity: At the beginning, collimators were checked for defects

  12. First Light for World's Largest 'Thermometer Camera'

    Science.gov (United States)

    2007-08-01

    LABOCA in Service at APEX The world's largest bolometer camera for submillimetre astronomy is now in service at the 12-m APEX telescope, located on the 5100m high Chajnantor plateau in the Chilean Andes. LABOCA was specifically designed for the study of extremely cold astronomical objects and, with its large field of view and very high sensitivity, will open new vistas in our knowledge of how stars form and how the first galaxies emerged from the Big Bang. ESO PR Photo 35a/07 ESO PR Photo 35a/07 LABOCA on APEX "A large fraction of all the gas in the Universe has extremely cold temperatures of around minus 250 degrees Celsius, a mere 20 degrees above absolute zero," says Karl Menten, director at the Max Planck Institute for Radioastronomy (MPIfR) in Bonn, Germany, that built LABOCA. "Studying these cold clouds requires looking at the light they radiate in the submillimetre range, with very sophisticated detectors." Astronomers use bolometers for this task, which are, in essence, thermometers. They detect incoming radiation by registering the resulting rise in temperature. More specifically, a bolometer detector consists of an extremely thin foil that absorbs the incoming light. Any change of the radiation's intensity results in a slight change in temperature of the foil, which can then be registered by sensitive electronic thermometers. To be able to measure such minute temperature fluctuations requires the bolometers to be cooled down to less than 0.3 degrees above absolute zero, that is below minus 272.85 degrees Celsius. "Cooling to such low temperatures requires using liquid helium, which is no simple feat for an observatory located at 5100m altitude," says Carlos De Breuck, the APEX instrument scientist at ESO. Nor is it simple to measure the weak temperature radiation of astronomical objects. Millimetre and submillimetre radiation opens a window into the enigmatic cold Universe, but the signals from space are heavily absorbed by water vapour in the Earth

  13. Comment on ‘From the pinhole camera to the shape of a lens: the camera-obscura reloaded’

    Science.gov (United States)

    Grusche, Sascha

    2016-09-01

    In the article ‘From the pinhole camera to the shape of a lens: the camera-obscura reloaded’ (Phys. Educ. 50 706), the authors show that a prism array, or an equivalent lens, can be used to bring together multiple camera obscura images from a pinhole array. It should be pointed out that the size of the camera obscura images is conserved by a prism array, but changed by a lens. To avoid this discrepancy in image size, the prism array, or the lens, should be made to touch the pinhole array.

  14. A tiny VIS-NIR snapshot multispectral camera

    Science.gov (United States)

    Geelen, Bert; Blanch, Carolina; Gonzalez, Pilar; Tack, Nicolaas; Lambrechts, Andy

    2015-03-01

    Spectral imaging can reveal many hidden details about the world around us, but is currently confined to laboratory environments due to the need for complex, costly and bulky cameras. Imec has developed a unique spectral sensor concept in which the spectral unit is monolithically integrated on top of a standard CMOS image sensor at wafer level, enabling the design of compact, low-cost and high-acquisition-speed spectral cameras with high design flexibility. This flexibility has previously been demonstrated by imec in the form of three spectral camera architectures: firstly a high spatial and spectral resolution scanning camera, secondly a multichannel snapshot multispectral camera and thirdly a per-pixel mosaic snapshot spectral camera. These snapshot spectral cameras sense an entire multispectral data cube at one discrete point in time, extending the domain of spectral imaging towards dynamic, video-rate applications. This paper describes the integration of our per-pixel mosaic snapshot spectral sensors inside a tiny, portable and extremely user-friendly camera. Our prototype demonstrator cameras can acquire multispectral image cubes, either of 272x512 pixels over 16 bands in the VIS (470-620 nm) or of 217x409 pixels over 25 bands in the VNIR (600-900 nm), at 170 cubes per second for normal machine vision illumination levels. The cameras themselves, based on Ximea xiQ cameras, are extremely compact, measuring only 26x26x30 mm, and can be operated from a laptop-based USB3 connection, making them easily deployable in very diverse environments.

  15. Wildlife speed cameras: measuring animal travel speed and day range using camera traps

    OpenAIRE

    Rowcliffe, J. M.; Jansen, P A; Kays, R.; Kranstauber, B.; C. Carbone

    2016-01-01

    Travel speed (average speed of travel while active) and day range (average speed over the daily activity cycle) are behavioural metrics that influence processes including energy use, foraging success, disease transmission and human-wildlife interactions, and which can therefore be applied to a range of questions in ecology and conservation. These metrics are usually derived from telemetry or direct observations. Here, we describe and validate an entirely new alternative approach, using camera...

  16. Thermal analysis of the ultraviolet imager camera and electronics

    Science.gov (United States)

    Dirks, Gregory J.

    1991-01-01

    The Ultraviolet Imaging experiment has undergone design changes that necessitate updating the reduced thermal models (RTMs) for both the Camera and Electronics. In addition, there are several mission scenarios that need to be evaluated in terms of the thermal response of the instruments. The impact of these design changes and mission scenarios on the thermal performance of the Camera and Electronics assemblies is discussed.

  17. The Camera Never Lies? Photographic Research Methods in Human Geography

    Science.gov (United States)

    Hall, Tim

    2009-01-01

    A camera is an essential tool for human geography students. Most students come back from an overseas fieldtrip, for example, with their camera crammed with images captured on the hoof around their destination. Many of these will find their way into essays, reports and presentations. Photographs are also typically a key element of many human…

  18. A focal plane camera for celestial XUV sources

    International Nuclear Information System (INIS)

    This thesis describes the development and performance of a new type of X-ray camera for the 2-2500 Å wavelength range (XUV). The camera features high position resolution (FWHM approximately 0.2 mm at 2 Å, -13 erg/cm2s in a one year mission. (Auth.)

  19. Calibration of Low Cost RGB and NIR Uav Cameras

    Science.gov (United States)

    Fryskowska, A.; Kedzierski, M.; Grochala, A.; Braula, A.

    2016-06-01

    Non-metric digital cameras are widely used in photogrammetric studies. The increase in resolution and quality of images obtained by non-metric cameras allows their use in low-cost UAV and terrestrial photogrammetry. Imagery acquired with non-metric cameras can be used in 3D modelling of objects or landscapes, reconstruction of historical sites, generation of digital elevation models (DEM) and orthophotos, or in the assessment of accidents. Non-metric digital cameras are characterized by instability and unknown interior orientation parameters; their use therefore requires prior calibration. Calibration research was conducted using a non-metric camera, different calibration test fields and various software. The first part of the paper contains a brief theoretical introduction, including basic definitions such as the construction of non-metric cameras and a description of different optical distortions. The second part of the paper describes the camera calibration process and details of the calibration methods and models that were used. Calibration of a Sony NEX-5 camera was done using the following software: Image Master Calib, the Matlab Camera Calibrator application and Agisoft Lens. 2D test fields were used for the study. As part of the research, a comparative analysis of the results was carried out.
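
    Abstracts like this one refer to modelling "different optical distortions" during calibration; for consumer lenses the dominant term is radial distortion. The sketch below is a generic illustration of the first two radial coefficients of the Brown-Conrady model, not the specific models implemented in the software packages named above:

    ```python
    def apply_radial_distortion(x, y, k1, k2):
        """Map ideal normalized image coordinates (x, y) to radially
        distorted coordinates using the first two terms of the
        Brown-Conrady model: x_d = x * (1 + k1*r^2 + k2*r^4)."""
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        return x * factor, y * factor

    # With zero coefficients the mapping is the identity:
    print(apply_radial_distortion(0.5, -0.25, 0.0, 0.0))  # (0.5, -0.25)
    ```

    Calibration software estimates k1 and k2 (along with focal length, principal point and, usually, tangential terms) by minimizing the reprojection error over the measured test-field points.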

  20. Camera Ready: Capturing a Digital History of Chester

    Science.gov (United States)

    Lehman, Kathy

    2008-01-01

    Armed with digital cameras, voice recorders, and movie cameras, students from Thomas Dale High School in Chester, Virginia, have been exploring neighborhoods, interviewing residents, and collecting memories of their hometown. In this article, the author describes "Digital History of Chester", a project for creating a commemorative DVD. This…

  1. Augmenting camera images for operators of Unmanned Aerial Vehicles

    NARCIS (Netherlands)

    Veltman, J.A.; Oving, A.B.

    2003-01-01

    The manual control of the camera of an unmanned aerial vehicle (UAV) can be difficult due to several factors such as 1) time delays between steering input and changes of the monitor content, 2) low update rates of the camera images and 3) lack of situation awareness due to the remote position of the

  2. Camera Layout Design for the Upper Stage Thrust Cone

    Science.gov (United States)

    Wooten, Tevin; Fowler, Bart

    2010-01-01

    Engineers in the Integrated Design and Analysis Division (EV30) use a variety of different tools to aid in the design and analysis of the Ares I vehicle. One primary tool in use is Pro-Engineer, a computer-aided design (CAD) package that allows designers to create computer-generated structural models of vehicle structures. For the Upper Stage thrust cone, Pro-Engineer was used to assist in the design of a layout for two camera housings. These cameras observe the separation between the first and second stage of the Ares I vehicle. For the Ares I-X, one standard-speed camera was used. The Ares I design calls for two separate housings, three cameras, and a lighting system. With previous design concepts and verification strategies in mind, a new layout for the two-camera design concept was developed with members of the EV32 team. With the new design, Pro-Engineer was used to draw the layout and observe how the two camera housings fit with the thrust cone assembly. Future analysis of the camera housing design will verify the stability and clearance of the cameras with other hardware present on the thrust cone.

  3. Assessing the Photogrammetric Potential of Cameras in Portable Devices

    Science.gov (United States)

    Smith, M. J.; Kokkas, N.

    2012-07-01

    In recent years, an increasing number of portable devices, tablets and smartphones have employed high-resolution digital cameras to satisfy consumer demand. In most cases, these cameras are designed primarily for capturing visually pleasing images, and the potential of using smartphone and tablet cameras for metric applications remains uncertain. The compact nature of the host devices leads to very small cameras and therefore smaller geometric characteristics. It also makes them extremely portable, and their integration into a multi-function device as part of the basic unit cost often makes them readily available. Many application specialists may find them an attractive proposition where some modest photogrammetric capability would be useful. This paper investigates the geometric potential of these cameras for close-range photogrammetric applications by: • investigating their geometric characteristics using the self-calibration method of camera calibration and comparing results with those from a state-of-the-art digital SLR camera; • investigating their capability for 3D building modelling, again comparing the findings with results obtained from a digital SLR camera. The early results presented show that the iPhone has greater potential for photogrammetric use than the iPad.

  4. Three-Dimensional Particle Image Velocimetry Using a Plenoptic Camera

    NARCIS (Netherlands)

    Lynch, K.P.; Fahringer, T.; Thurow, B.

    2012-01-01

    A novel 3-D, 3-C PIV technique is described, based on volume illumination and a plenoptic camera to measure a velocity field. The technique is based on plenoptic photography, which uses a dense microlens array mounted near a camera sensor to sample the spatial and angular distribution of light enter

  5. Microprocessor-controlled, wide-range streak camera

    International Nuclear Information System (INIS)

    Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  6. Microprocessor-controlled wide-range streak camera

    Science.gov (United States)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  7. 28 CFR 68.42 - In camera and protective orders.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false In camera and protective orders. 68.42 Section 68.42 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) RULES OF PRACTICE AND PROCEDURE... In camera and protective orders. (a) Privileged communications. Upon application of any person,...

  8. 32 CFR 813.4 - Combat camera operations.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Combat camera operations. 813.4 Section 813.4 National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE SALES AND SERVICES VISUAL INFORMATION DOCUMENTATION PROGRAM § 813.4 Combat camera operations. (a) Air Force COMCAM forces document...

  9. 24 CFR 180.640 - In camera and protective orders.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false In camera and protective orders. 180.640 Section 180.640 Housing and Urban Development Regulations Relating to Housing and Urban... at Hearing § 180.640 In camera and protective orders. The ALJ may limit discovery or the...

  10. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2016-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculation of the meteor's photometric flux within the camera bandpass without requiring any assumptions about its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics.
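
    Zero-point calibration of the kind described above reduces to comparing instrumental fluxes against reference-star magnitudes. The helper below is a simplified sketch with noiseless, illustrative numbers; the function names are assumptions and do not represent the MEO pipeline:

    ```python
    import math

    def zero_point(ref_mags, instr_fluxes):
        """Mean photometric zero-point over reference stars:
        for each star, ZP = m_ref + 2.5 * log10(instrumental flux)."""
        zps = [m + 2.5 * math.log10(f) for m, f in zip(ref_mags, instr_fluxes)]
        return sum(zps) / len(zps)

    def calibrated_mag(flux, zp):
        """Object magnitude from its instrumental flux and the zero-point."""
        return zp - 2.5 * math.log10(flux)

    # Two reference stars that happen to agree on the zero-point exactly:
    zp = zero_point([10.0, 12.5], [1000.0, 100.0])
    print(zp)                          # 17.5
    print(calibrated_mag(1000.0, zp))  # 10.0
    ```

    The paper's improvement is that the reference magnitudes fed into such a fit are synthetic magnitudes computed in the camera's own bandpass, which removes the bandpass-transformation systematics.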

  11. 29 CFR 18.46 - In camera and protective orders.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true In camera and protective orders. 18.46 Section 18.46 Labor Office of the Secretary of Labor RULES OF PRACTICE AND PROCEDURE FOR ADMINISTRATIVE HEARINGS BEFORE THE OFFICE OF ADMINISTRATIVE LAW JUDGES General § 18.46 In camera and protective orders. (a) Privileges....

  12. 49 CFR 511.45 - In camera materials.

    Science.gov (United States)

    2010-10-01

    ... excluded from the public record. Pursuant to 49 CFR part 512, the Chief Counsel of the NHTSA is responsible... 49 Transportation 6 2010-10-01 2010-10-01 false In camera materials. 511.45 Section 511.45... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION ADJUDICATIVE PROCEDURES Hearings § 511.45 In camera materials....

  13. Demonstrations of Optical Spectra with a Video Camera

    Science.gov (United States)

    Kraftmakher, Yaakov

    2012-01-01

    The use of a video camera may markedly improve demonstrations of optical spectra. First, the output electrical signal from the camera, which provides full information about a picture to be transmitted, can be used for observing the radiant power spectrum on the screen of a common oscilloscope. Second, increasing the magnification by the camera…

  14. Holographic motion picture camera with Doppler shift compensation

    Science.gov (United States)

    Kurtz, R. L. (Inventor)

    1976-01-01

    A holographic motion picture camera for producing three-dimensional images by employing an elliptical optical system is reported. A motion compensator provided in one of the beam paths (the object or reference beam path) enables the camera to photograph faster-moving objects.

  15. Imaging Emission Spectra with Handheld and Cellphone Cameras

    Science.gov (United States)

    Sitar, David

    2012-01-01

    As point-and-shoot digital camera technology advances, it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon…

  16. A Simple Spectrophotometer Using Common Materials and a Digital Camera

    Science.gov (United States)

    Widiatmoko, Eko; Widayani; Budiman, Maman; Abdullah, Mikrajuddin; Khairurrijal

    2011-01-01

    A simple spectrophotometer was designed using cardboard, a DVD, a pocket digital camera, a tripod and a computer. The DVD was used as a diffraction grating and the camera as a light sensor. The spectrophotometer was calibrated using a reference light prior to use. The spectrophotometer was capable of measuring optical wavelengths with a…
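
    The DVD-grating design above converts a diffraction angle into a wavelength through the grating equation d·sin(θ) = m·λ. The sketch below assumes a DVD track pitch of about 740 nm as the groove spacing d; the numbers are illustrative, not the paper's calibration:

    ```python
    import math

    def wavelength_nm(theta_deg, groove_spacing_nm=740.0, order=1):
        """Solve the grating equation d*sin(theta) = m*lambda for lambda,
        with d taken from a DVD's ~740 nm track pitch."""
        return groove_spacing_nm * math.sin(math.radians(theta_deg)) / order

    # A line diffracted at ~46 degrees lands near 532 nm (green):
    print(round(wavelength_nm(46.0), 1))
    ```

    In practice the camera records pixel positions rather than angles, so a reference light of known wavelengths is used first to fit the pixel-to-angle mapping, as the abstract describes.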

  17. Preliminary field evaluation of solid state cameras for security applications

    International Nuclear Information System (INIS)

    Recent developments in solid state imager technology have resulted in a series of compact, lightweight, all-solid-state closed circuit television (CCTV) cameras. Although it is widely known that the various solid state cameras have less light sensitivity and lower resolution than their vacuum tube counterparts, the potential for having a much longer Mean Time Between Failure (MTBF) for the all-solid-state cameras is generating considerable interest within the security community. Questions have been raised as to whether the newest and best of the solid state cameras are a viable alternative to the high maintenance vacuum tube cameras in exterior security applications. To help answer these questions, a series of tests were performed by Sandia National Laboratories at various test sites and under several lighting conditions. The results of these tests as well as a description of the test equipment, test sites, and procedures are presented in this report

  18. Accuracy testing of a new intraoral 3D camera.

    Science.gov (United States)

    Mehl, A; Ender, A; Mörmann, W; Attin, T

    2009-01-01

    Surveying intraoral structures by optical means has reached the stage where it is being discussed as a serious clinical alternative to conventional impression taking. Ease of handling and, more importantly, accuracy are important criteria for the clinical suitability of these systems. This article presents a new intraoral camera for the Cerec procedure. It reports on a study investigating the accuracy of this camera and its potential clinical indications. Single-tooth and quadrant images were taken with the camera and the results compared to those obtained with a reference scanner and with the previous 3D camera model. Differences were analyzed by superimposing the data records. Accuracy was higher with the new camera than with the previous model, reaching up to 19 μm in single-tooth images. Quadrant images can also be taken with sufficient accuracy (ca. 35 μm) and are simple to perform in clinical practice, thanks to built-in shake detection in automatic capture mode.

  19. Spectral Camera based on Ghost Imaging via Sparsity Constraints.

    Science.gov (United States)

    Liu, Zhentao; Tan, Shiyu; Wu, Jianrong; Li, Enrong; Shen, Xia; Han, Shensheng

    2016-05-16

    The image information acquisition ability of a conventional camera is usually much lower than the Shannon Limit since it does not make use of the correlation between pixels of image data. Applying a random phase modulator to code the spectral images and combining with compressive sensing (CS) theory, a spectral camera based on true thermal light ghost imaging via sparsity constraints (GISC spectral camera) is proposed and demonstrated experimentally. GISC spectral camera can acquire the information at a rate significantly below the Nyquist rate, and the resolution of the cells in the three-dimensional (3D) spectral images data-cube can be achieved with a two-dimensional (2D) detector in a single exposure. For the first time, GISC spectral camera opens the way of approaching the Shannon Limit determined by Information Theory in optical imaging instruments.

  20. Spectral Camera based on Ghost Imaging via Sparsity Constraints

    CERN Document Server

    Liu, Zhentao; Wu, Jianrong; Li, Enrong; Shen, Xia; Han, Shensheng

    2015-01-01

    The information acquisition ability of a conventional camera is far lower than the Shannon Limit because it does not exploit the correlation between pixels of image data. Applying sparse representation of images to reduce the redundancy of image data, combined with compressive sensing theory, a spectral camera based on ghost imaging via sparsity constraints (GISC spectral camera) is proposed and demonstrated experimentally. GISC spectral camera can acquire the information at a rate significantly below Nyquist, and the resolution of the cells in the three-dimensional (3D) spectral image data-cube can be achieved with a two-dimensional (2D) detector in a single exposure. For the first time, GISC spectral camera opens the way of approaching the Shannon Limit determined by Information Theory in optical imaging instruments.

  1. Spectral Camera based on Ghost Imaging via Sparsity Constraints

    Science.gov (United States)

    Liu, Zhentao; Tan, Shiyu; Wu, Jianrong; Li, Enrong; Shen, Xia; Han, Shensheng

    2016-05-01

    The image information acquisition ability of a conventional camera is usually much lower than the Shannon Limit since it does not make use of the correlation between pixels of image data. Applying a random phase modulator to code the spectral images and combining with compressive sensing (CS) theory, a spectral camera based on true thermal light ghost imaging via sparsity constraints (GISC spectral camera) is proposed and demonstrated experimentally. GISC spectral camera can acquire the information at a rate significantly below the Nyquist rate, and the resolution of the cells in the three-dimensional (3D) spectral images data-cube can be achieved with a two-dimensional (2D) detector in a single exposure. For the first time, GISC spectral camera opens the way of approaching the Shannon Limit determined by Information Theory in optical imaging instruments.

  2. Calibration of line-scan cameras for precision measurement.

    Science.gov (United States)

    Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Niu, Zhiyuan

    2016-09-01

    Calibration of line-scan cameras for precision measurement requires a large calibration volume and flexibility in the actual measurement field. In this paper, we present a high-precision calibration method. Instead of using a large 3D pattern, we use a small planar pattern and a precalibrated matrix camera to obtain plenty of points with a suitable distribution, which ensures the precision of the calibration results. The matrix camera removes the necessity of precise adjustment and movement and links the line-scan camera to the world easily, both of which enhance flexibility in the measurement field. The method has been verified by experiments. The experimental results demonstrated that the proposed method gives a practical solution for calibrating line-scan cameras for precision measurement. PMID:27607257

  3. Central Acceptance Testing for Camera Technologies for CTA

    CERN Document Server

    Bonardi, A; Chadwick, P; Dazzi, F; Förster, A; Hörandel, J R; Punch, M

    2015-01-01

    The Cherenkov Telescope Array (CTA) is an international initiative to build the next generation ground based very-high energy gamma-ray observatory. It will consist of telescopes of three different sizes, employing several different technologies for the cameras that detect the Cherenkov light from the observed air showers. In order to ensure the compliance of each camera technology with CTA requirements, CTA will perform central acceptance testing of each camera technology. To assist with this, the Camera Test Facilities (CTF) work package is developing a detailed test program covering the most important performance, stability, and durability requirements, including setting up the necessary equipment. Performance testing will include a wide range of tests like signal amplitude, time resolution, dead-time determination, trigger efficiency, performance testing under temperature and humidity variations and several others. These tests can be performed on fully-integrated cameras using a portable setup at the came...

  4. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.

    2011-01-01

    The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5 μm) or long-wave infrared (LWIR) radiation (8-12 μm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24 hour water and 12 hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.

  5. Unmanned ground vehicle perception using thermal infrared cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.

    2011-05-01

    The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive-only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5 μm) or long-wave infrared (LWIR) radiation (7-14 μm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24-hour water and 12-hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.

  6. True RGB line scan camera for color machine vision applications

    Science.gov (United States)

    Lemstrom, Guy F.

    1994-11-01

    In this paper a true RGB 3-chip color line scan camera is described. The camera was developed mainly for accurate color measurement in industrial applications. Due to the camera's modularity it is also possible to use it as a B/W camera. The color separation is made with an RGB beam splitter. The CCD linear arrays are fixed with high accuracy to the beam splitter's outputs so that the pixels of the three CCDs are aligned with each other. This makes color analysis simple compared to color line arrays, where line or pixel matching has to be done. The beam splitter can be custom made to separate spectral components other than standard RGB; the spectral range is from 200 to 1000 nm for most CCDs, and two or three spectral bands can be measured separately with the beam splitter. The camera is fully digital and has a 16-bit parallel computer interface for communicating with a signal processing board. Because of the camera's open architecture, the customer can design a board with special functions for preprocessing the data (for example, RGB-to-HSI conversion). The camera can also be equipped with a high-speed CPU board with enough local memory to do some image processing inside the camera before sending the data onward. The camera has been used in real industrial applications and has proven that its high resolution and high dynamic range can be used to measure small color differences in order to separate or grade objects such as minerals, food, or other materials that cannot be measured with a black-and-white camera.
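
    The RGB-to-HSI conversion mentioned above as an example of in-camera preprocessing can be sketched for a single normalised pixel using the standard geometric HSI formulas (the function name and the test values are our own, not from the paper):

    ```python
    import numpy as np

    def rgb_to_hsi(r, g, b):
        """Convert one normalised RGB pixel (components in 0..1) to HSI.

        Intensity is the channel mean, saturation measures distance from
        gray, and hue is the angle of the color vector in degrees (0..360).
        """
        i = (r + g + b) / 3.0
        s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
        num = 0.5 * ((r - g) + (r - b))
        den = np.sqrt((r - g) ** 2 + (r - b) * (g - b))
        # Achromatic pixels have an undefined hue; report 0 by convention.
        h = 0.0 if den == 0 else float(np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))))
        if b > g:                      # angles below the R-G axis wrap around
            h = 360.0 - h
        return h, s, i
    ```

    For example, pure red maps to hue 0 and pure blue to hue 240, matching the usual color-wheel convention.
    
    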

  7. The Camera Is Not a Methodology: Towards a Framework for Understanding Young Children's Use of Video Cameras

    Science.gov (United States)

    Bird, Jo; Colliver, Yeshe; Edwards, Susan

    2014-01-01

    Participatory research methods argue that young children should be enabled to contribute their perspectives on research seeking to understand their worldviews. Visual research methods, including the use of still and video cameras with young children, have been viewed as particularly suited to this aim because cameras have been considered easy and…

  8. WIDE-FIELD ASTRONOMICAL MULTISCALE CAMERAS

    Energy Technology Data Exchange (ETDEWEB)

    Marks, Daniel L.; Brady, David J., E-mail: dbrady@ee.duke.edu [Department of Electrical and Computer Engineering and Fitzpatrick Institute for Photonics, Box 90291, Duke University, Durham, NC 27708 (United States)

    2013-05-15

    In order to produce sufficiently low aberrations with a large aperture, telescopes have a limited field of view. Because of this narrow field, large areas of the sky are unobserved at any given time. We propose several telescopes based on monocentric reflective, catadioptric, and refractive objectives that may be scaled to wide fields of view and achieve 1.1 arcsecond resolution, which in most locations is the practical seeing limit of the atmosphere. The reflective and Schmidt catadioptric objectives have relatively simple configurations and enable large fields to be captured at the expense of the obscuration of the mirror by secondary optics, a defect that may be managed by image-plane design. The refractive telescope design does not have an obscuration, but the objective has substantial bulk. The refractive design is a 38 gigapixel camera consisting of a single monocentric objective and 4272 microcameras. Monocentric multiscale telescopes, with their wide fields of view, may observe phenomena that might otherwise go unnoticed, such as supernovae, glint from orbital space debris, and near-Earth objects.

  9. STRAY DOG DETECTION IN WIRED CAMERA NETWORK

    Directory of Open Access Journals (Sweden)

    C. Prashanth

    2013-08-01

    Full Text Available Existing surveillance systems impose a high level of security on humans but lack attention to animals. Stray dogs could be used as an alternative to humans to carry explosive material. It is therefore imperative to ensure the detection of stray dogs for necessary corrective action. In this paper, a novel composite approach to detect the presence of stray dogs is proposed. The captured frame from the surveillance camera is initially pre-processed using a Gaussian filter to remove noise. The foreground object of interest is extracted utilizing the ViBe algorithm. The Histogram of Oriented Gradients (HOG) algorithm is used as the shape descriptor, which derives the shape and size information of the extracted foreground object. Finally, stray dogs are classified from humans using a polynomial Support Vector Machine (SVM) of order 3. The proposed composite approach is simulated in MATLAB and OpenCV. Further, it is validated with real-time video feeds taken from an existing surveillance system. From the results obtained, it is found that a classification accuracy of about 96% is achieved. This encourages the utilization of the proposed composite algorithm in real-time surveillance systems.
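
    The shape-descriptor stage of the pipeline above can be illustrated with a simplified HOG computation: finite-difference gradients and per-cell, magnitude-weighted orientation histograms. This is a minimal sketch, not the paper's implementation; a full HOG (e.g. OpenCV's `HOGDescriptor`) adds block normalisation, and the cell size here is illustrative:

    ```python
    import numpy as np

    def hog_descriptor(patch, cell=8, bins=9):
        """Simplified HOG for a grayscale patch: per-cell histograms of
        unsigned gradient orientation weighted by gradient magnitude,
        concatenated and L2-normalised into one feature vector."""
        gy, gx = np.gradient(patch.astype(float))      # image gradients
        mag = np.hypot(gx, gy)
        ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
        h, w = patch.shape
        feats = []
        for i in range(0, h - cell + 1, cell):
            for j in range(0, w - cell + 1, cell):
                cell_ang = ang[i:i + cell, j:j + cell].ravel()
                cell_mag = mag[i:i + cell, j:j + cell].ravel()
                hist, _ = np.histogram(cell_ang, bins=bins,
                                       range=(0.0, 180.0), weights=cell_mag)
                feats.append(hist)
        v = np.concatenate(feats)
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    ```

    A descriptor like this, computed on each extracted foreground blob, is what the polynomial SVM would consume as its feature vector.
    
    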

  10. Optimal conception of an IR camera

    Energy Technology Data Exchange (ETDEWEB)

    Papini, F.; Petit, J.L.; David, J.P. [Universite d'Aix-Marseille Centre Scientifique de Saint Jerome, 13397 Marseille, Cedex 13 (FR)

    1990-12-31

    This paper deals with the conclusions drawn from infrared thermal analysis experiments carried out over a period of several years. In these experiments the authors analyzed the aptitude of a system to switch between two functions: imaging and measuring thermal flux. Temperature measurements were not dealt with in this analysis, as temperature readings introduce numerical values associated with material properties and radiative balance that are in no way characteristic of infrared analysis. The authors' analysis deals with single-detector motion-picture cameras fitted with a line/column scanning system and with signal sampling on the amplified output of the detector. The image was thus reconstituted on a micro-computer from the sampled pixels, with a numerical depth determined by the digital converter. This analysis was conducted within the constraints imposed by calibration procedures. These constraints are particularly severe when calibrating the spatial frequency response function (within the frequency range). This calibration leads to a study of the image's structure and of its ability to produce output values of the same order as those produced by a measuring device.

  11. Cooling the dark energy camera instrument

    Energy Technology Data Exchange (ETDEWEB)

    Schmitt, R.L.; Cease, H.; /Fermilab; DePoy, D.; /Ohio State U.; Diehl, H.T.; Estrada, J.; Flaugher, B.; /Fermilab; Kuhlmann, S.; /Ohio State U.; Onal, Birce; Stefanik, A.; /Fermilab

    2008-06-01

    DECam, the camera for the Dark Energy Survey (DES), is undergoing general design and component testing. For an overview see DePoy et al. in these proceedings; for a description of the imager, see Cease et al. in these proceedings. The CCD instrument will be mounted at the prime focus of the CTIO Blanco 4 m telescope. The instrument temperature will be 173 K with a heat load of 113 W. In similar applications, cooling CCD instruments at the prime focus has been accomplished by three general methods: liquid nitrogen reservoirs constructed to operate in any orientation, pulse tube cryocoolers when tilt angles are limited, and Joule-Thompson or Stirling cryocoolers for smaller heat loads. Gifford-McMahon cooling has been used at the Cassegrain focus but not at the prime focus. For DES, the combined requirements of high heat load, temperature stability, low vibration, operation in any orientation, liquid nitrogen cost, and limited space led to the design of a pumped, closed-loop, circulating nitrogen system. At zenith the instrument will be twelve meters above the pump/cryocooler station. This cooling system is expected to have a 10,000 hour maintenance interval. This paper describes the engineering basis, including the thermal model, unbalanced forces, cooldown time, and the single- and two-phase flow models.
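
    One consequence of the twelve-metre height difference between instrument and pump station is the static head the circulating-nitrogen pump must overcome. A back-of-the-envelope estimate (the LN2 density below is our assumption, not a figure from the paper):

    ```python
    # Static pressure head of a 12 m liquid-nitrogen column: dP = rho * g * h.
    rho_ln2 = 807.0   # liquid nitrogen density near 77 K, kg/m^3 (assumed)
    g = 9.81          # gravitational acceleration, m/s^2
    h = 12.0          # instrument height above the pump station at zenith, m

    delta_p = rho_ln2 * g * h        # pressure head in Pa
    print(round(delta_p / 1e3, 1))   # head in kPa, roughly one atmosphere
    ```

    The pump therefore works against close to a bar of hydrostatic pressure before any flow losses are counted, which is one reason a closed-loop pumped design needs careful single- and two-phase flow modelling.
    
    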

  12. Depth perception camera for autonomous vehicle applications

    Science.gov (United States)

    Kornreich, Philipp

    2013-05-01

    An imager that can measure the distance from each pixel to the point on the object that is in focus at that pixel is described. Since it provides numeric information on the distance from the camera to all points in its field of view, it is ideally suited for autonomous vehicle navigation and robotic vision, eliminating the LIDAR conventionally used for range measurements. The light arriving at a pixel through a convex lens adds constructively only if it comes from the object point in focus at this pixel; the light from all other object points cancels. Thus, the lens selects the point on the object whose range is to be determined. The range measurement is accomplished by short light guides at each pixel. Each light guide contains a p-n junction and a pair of contacts along its length, as well as light-sensing elements. The device uses ambient light, which is coherent only within spherical-shell-shaped light packets one coherence length thick. Each frequency component of the broadband light arriving at a pixel has a phase proportional to the distance from an object point to its image pixel.

  13. High Resolution Camera for Mapping Titan Surface

    Science.gov (United States)

    Reinhardt, Bianca

    2011-01-01

    Titan, Saturn's largest moon, has a dense atmosphere and is the only object besides Earth to have stable liquids at its surface. The Cassini/Huygens mission has revealed the extraordinary breadth of geological processes shaping its surface. Further study requires high-resolution imaging of the surface, which is constrained by light absorption by methane and scattering from aerosols. The Visual and Infrared Mapping Spectrometer (VIMS) onboard the Cassini spacecraft has demonstrated that Titan's surface can be observed within several windows in the near infrared, allowing us to process several regions in order to create a geological map and to determine the morphology. Specular reflections monitored on the north polar lakes show little scattering at 5 microns, which, combined with the present study of Titan's north polar area, refutes the paradigm that only radar can achieve high-resolution mapping of the surface. The present data allowed us to monitor the evolution of lakes, to identify additional lakes at the north pole, to examine the hypothesis of Titan's non-synchronous rotation, and to analyze the albedo of the north polar surface. Future missions to Titan could carry a camera with 5-micron detectors and a carbon-fiber radiator for weight reduction.

  14. An ISPA-camera for gamma rays

    CERN Document Server

    Puertolas, D; Pani, R; Leutz, H; Gys, Thierry; De Notaristefani, F; D'Ambrosio, C

    1995-01-01

    With the recently developed ISPA (Imaging Silicon Pixel Array)-tube attached either to a planar YAlO3(Ce) (YAP) disc (1mm thick) or to a matrix of optically-separated YAP-crystals (5mm high, 0.6 x 0.6 mm2 cross-section) we achieved high spatial resolution of 57Co-122 keV photons. The vacuum-sealed ISPA-tube is only 4 cm long with 3.5 cm diameter and consists of a photocathode viewed at 3 cm distance by a silicon pixel chip, directly detecting the photoelectrons. The chip-anode consists of 1024 rectangular pixels with 75 µm x 500 µm edges, each bump-bonded to their individual front-end electronics. The total pixel array read-out time is 10 µs. The measured intrinsic spatial resolutions (FWHM) of this ISPA-camera are 700 µm (planar YAP) and 310 µm (YAP-matrix). Apart from its already demonstrated application for particle tracking with scintillating fibres, the ISPA-tube provides also an excellent tool in medicine, biology and chemistry.

  15. A miniature VGA SWIR camera using MT6415CA ROIC

    Science.gov (United States)

    Eminoglu, Selim; Yilmaz, S. Gokhan; Kocak, Serhat

    2014-06-01

    This paper reports the development of a new miniature VGA SWIR camera called NanoCAM-6415, which is developed to demonstrate the key features of the MT6415CA ROIC, such as high integration level, low noise, and low power in a small volume. The NanoCAM-6415 uses an InGaAs Focal Plane Array (FPA) with a format of 640 × 512 and a pixel pitch of 15 μm built using the MT6415CA ROIC. MT6415CA is a low-noise CTIA ROIC with a system-on-chip architecture that generates all the required timing and biases on-chip without requiring any external components or inputs, thus enabling the development of compact and low-noise SWIR cameras with reduced size, weight, and power (SWaP). The NanoCAM-6415 supports snapshot operation using Integrate-Then-Read (ITR) and Integrate-While-Read (IWR) modes. The camera has three gain settings enabled by the ROIC through programmable Full-Well-Capacity (FWC) values of 10,000 e-, 20,000 e-, and 350,000 e- in the very-high-gain (VHG), high-gain (HG), and low-gain (LG) modes, respectively. The camera has an input-referred noise level of 10 e- rms in the VHG mode at 1 ms integration time, suitable for low-noise SWIR imaging applications. In order to reduce the size and power of the camera, only 2 of the 8 ROIC outputs are connected to the external Analog-to-Digital Converters (ADCs) in the camera electronics, providing a maximum frame rate of 50 fps through a 26-pin SDR-type Camera Link connector. The NanoCAM-6415 SWIR camera without optics measures 32 mm × 32 mm × 35 mm, weighs 45 g, and dissipates less than 1.8 W using a 5 V supply. These results show that the MT6415CA ROIC can successfully be used to develop cameras for SWIR imaging applications where SWaP is a concern. Mikro-Tasarim has also developed new imaging software to demonstrate the functionality of this miniature VGA camera. Mikro-Tasarim provides tested ROIC wafers and also offers compact and easy-to-use test electronics, demo cameras, and hardware

  16. Next-generation digital camera integration and software development issues

    Science.gov (United States)

    Venkataraman, Shyam; Peters, Ken; Hecht, Richard

    1998-04-01

    This paper investigates the complexities associated with the development of next-generation digital cameras due to requirements in connectivity and interoperability. Each successive generation of digital camera improves drastically in cost, performance, resolution, image quality and interoperability features. This is being accomplished by advancements in a number of areas: research, silicon, standards, etc. As the capabilities of these cameras increase, so do the requirements for both hardware and software. Today, there are two single-chip camera solutions on the market: the Motorola MPC 823 and the LSI DCAM-101. Real-time constraints for a digital camera may be defined by the maximum time allowable between captures of images. Constraints in the design of an embedded digital camera include processor architecture, memory, processing speed and the real-time operating system. This paper will present the LSI DCAM-101, a single-chip digital camera solution. It will present an overview of the architecture and the challenges in hardware and software for supporting streaming video in such a complex device. Issues presented include the development of the data-flow software architecture, testing and integration on this complex silicon device. The strategy for optimizing performance on the architecture will also be presented.

  17. Analysis of Camera Arrays Applicable to the Internet of Things.

    Science.gov (United States)

    Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing

    2016-03-22

    The Internet of Things is built on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on camera modeling and analysis, which is very important for stereo display and viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are used and analyzed in various applications, there has been little direct comparison between them. Therefore, we make a detailed analysis of their performance over different shooting distances. From our analysis, we find that the threshold shooting distance for converged cameras is 7 m. In addition, we design a camera array that can be used both as a parallel and as a converged camera array, and take images and videos with it to verify the threshold.
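
    The vertical-parallax difference between the two rigs can be reproduced with a toy pinhole model: a parallel pair produces zero vertical parallax, while toeing the cameras in toward a convergence point introduces a small keystone-induced vertical disparity. The focal length, baseline, and test point below are our own choices; the paper's 7 m threshold comes from its fuller analysis:

    ```python
    import numpy as np

    def project(point, cam_pos, yaw, f=0.05):
        """Project a world point into a pinhole camera rotated by `yaw`
        (radians) about the vertical axis; returns image-plane (x, y)."""
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, 0.0, -s],
                      [0.0, 1.0, 0.0],
                      [s, 0.0, c]])          # world-to-camera rotation
        p = R @ (point - cam_pos)
        return f * p[0] / p[2], f * p[1] / p[2]

    baseline, conv_dist = 0.1, 5.0
    target = np.array([0.3, 0.2, 5.0])       # off-axis scene point, metres
    left = np.array([-baseline / 2, 0.0, 0.0])
    right = np.array([baseline / 2, 0.0, 0.0])

    # Parallel rig: both cameras look straight down +z, so the y image
    # coordinates of any point are identical -> no vertical parallax.
    (_, yl), (_, yr) = project(target, left, 0.0), project(target, right, 0.0)
    dv_parallel = abs(yl - yr)

    # Converged rig: each camera is toed in toward (0, 0, conv_dist); the
    # differing depths in each camera frame create vertical parallax.
    toe = np.arctan2(baseline / 2, conv_dist)
    (_, yl), (_, yr) = project(target, left, toe), project(target, right, -toe)
    dv_converged = abs(yl - yr)
    ```

    Sweeping `conv_dist` in a model like this is one way to explore how the vertical parallax of a converged rig decays with shooting distance.
    
    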

  18. Global Calibration of Multiple Cameras Based on Sphere Targets

    Science.gov (United States)

    Sun, Junhua; He, Huabin; Zeng, Debing

    2016-01-01

    Global calibration methods for a multi-camera system are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method has simple operation and good flexibility, especially for onsite multiple cameras without a common field of view. PMID:26761007
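
    The sphere-centre reconstruction at the heart of such methods can be sketched with the classic linear least-squares sphere fit (this generic fit is our illustration, not the paper's parameter-equation projection model):

    ```python
    import numpy as np

    def fit_sphere(points):
        """Least-squares sphere fit to an (n, 3) array of surface points.

        Expanding |p - c|^2 = r^2 gives 2 p . c + k = |p|^2 with
        k = r^2 - |c|^2, which is linear in the unknowns (c, k)."""
        A = np.hstack([2.0 * points, np.ones((len(points), 1))])
        b = (points ** 2).sum(axis=1)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        center, k = sol[:3], sol[3]
        return center, np.sqrt(k + center @ center)
    ```

    Given 3D points sampled on each sphere target, a fit like this recovers the centre, and the set of centres then anchors the global extrinsic calibration.
    
    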

  19. Prism-based single-camera system for stereo display

    Science.gov (United States)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principles of binocular vision we deduce the relationship between binocular viewing and the dual-camera system. We can thus establish the relationship between the prism single-camera system and binocular vision and obtain the positional relation of prism, camera, and object that gives the best stereo display. Finally, using NVIDIA active-shutter stereo glasses, we realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various observation manners of the eyes. The stereo imaging system designed by the proposed method can faithfully restore the 3-D shape of the photographed object.

  20. A Scalable Clustered Camera System for Multiple Object Tracking

    Directory of Open Access Journals (Sweden)

    Schlessman Jason

    2008-01-01

    Full Text Available Reliable and efficient tracking of objects by multiple cameras is an important and challenging problem, which finds wide-ranging application areas. Most existing systems assume that data from multiple cameras is processed on a single processing unit or by a centralized server. However, these approaches are neither scalable nor fault tolerant. We propose multicamera algorithms that operate on peer-to-peer computing systems. Peer-to-peer vision systems require codesign of image processing and distributed computing algorithms as well as sophisticated communication protocols, which should be carefully designed and verified to avoid deadlocks and other problems. This paper introduces the scalable clustered camera system, which is a peer-to-peer multicamera system for multiple object tracking. Instead of transferring control of tracking jobs from one camera to another, each camera in the presented system performs its own tracking, keeping its own trajectories for each target object, which provides fault tolerance. A fast and robust tracking algorithm is proposed to perform tracking on each camera view, while maintaining consistent labeling. In addition, a novel communication protocol is introduced, which can handle the problems caused by communication delays and different processor loads and speeds, and incorporates variable synchronization capabilities, so as to allow flexibility with accuracy tradeoffs. This protocol was exhaustively verified by using the SPIN verification tool. The success of the proposed system is demonstrated on different scenarios captured by multiple cameras placed in different setups. Also, simulation and verification results for the protocol are presented.

  1. A Scalable Clustered Camera System for Multiple Object Tracking

    Directory of Open Access Journals (Sweden)

    Jaswinder P. Singh

    2008-09-01

    Full Text Available Reliable and efficient tracking of objects by multiple cameras is an important and challenging problem, which finds wide-ranging application areas. Most existing systems assume that data from multiple cameras is processed on a single processing unit or by a centralized server. However, these approaches are neither scalable nor fault tolerant. We propose multicamera algorithms that operate on peer-to-peer computing systems. Peer-to-peer vision systems require codesign of image processing and distributed computing algorithms as well as sophisticated communication protocols, which should be carefully designed and verified to avoid deadlocks and other problems. This paper introduces the scalable clustered camera system, which is a peer-to-peer multicamera system for multiple object tracking. Instead of transferring control of tracking jobs from one camera to another, each camera in the presented system performs its own tracking, keeping its own trajectories for each target object, which provides fault tolerance. A fast and robust tracking algorithm is proposed to perform tracking on each camera view, while maintaining consistent labeling. In addition, a novel communication protocol is introduced, which can handle the problems caused by communication delays and different processor loads and speeds, and incorporates variable synchronization capabilities, so as to allow flexibility with accuracy tradeoffs. This protocol was exhaustively verified by using the SPIN verification tool. The success of the proposed system is demonstrated on different scenarios captured by multiple cameras placed in different setups. Also, simulation and verification results for the protocol are presented.

  2. Determining Vision Graphs for Distributed Camera Networks Using Feature Digests

    Directory of Open Access Journals (Sweden)

    Richard J. Radke

    2007-01-01

    Full Text Available We propose a decentralized method for obtaining the vision graph for a distributed, ad-hoc camera network, in which each edge of the graph represents two cameras that image a sufficiently large part of the same environment. Each camera encodes a spatially well-distributed set of distinctive, approximately viewpoint-invariant feature points into a fixed-length “feature digest” that is broadcast throughout the network. Each receiver camera robustly matches its own features with the decompressed digest and decides whether sufficient evidence exists to form a vision graph edge. We also show how a camera calibration algorithm that passes messages only along vision graph edges can recover accurate 3D structure and camera positions in a distributed manner. We analyze the performance of different message formation schemes, and show that high detection rates (>0.8) can be achieved while maintaining low false alarm rates (<0.05) using a simulated 60-node outdoor camera network.
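
    The edge decision can be sketched as a descriptor-matching vote: a receiver counts how many broadcast digest descriptors pass a nearest/second-nearest ratio test against its own features and declares a vision-graph edge when the count is high enough. The descriptor size, thresholds, and brute-force search below are illustrative; the paper additionally compresses the digest and uses viewpoint-invariant features:

    ```python
    import numpy as np

    def match_count(digest, own, ratio=0.8):
        """Count digest descriptors whose nearest neighbour among `own`
        is clearly closer than the second nearest (Lowe-style ratio test).
        `digest` and `own` are (n, d) arrays of feature descriptors."""
        n = 0
        for d in digest:
            dists = np.linalg.norm(own - d, axis=1)
            nearest, second = np.partition(dists, 1)[:2]
            if nearest < ratio * second:
                n += 1
        return n

    def has_vision_edge(digest, own, min_matches=8):
        """Form a vision-graph edge when enough digest features match."""
        return match_count(digest, own) >= min_matches
    ```

    Raising `min_matches` trades detection rate against false alarms, which is the tradeoff the paper quantifies across message-formation schemes.
    
    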

  3. The Alfred Nobel rocket camera. An early aerial photography attempt

    Science.gov (United States)

    Ingemar Skoog, A.

    2010-02-01

    Alfred Nobel (1833-1896), mainly known for his invention of dynamite and the creation of the Nobel Prizes, was an engineer and inventor active in many fields of science and engineering, e.g. chemistry, medicine, mechanics, metallurgy, optics, armoury and rocketry. Amongst his inventions in rocketry was the smokeless solid propellant ballistite (i.e. cordite) patented for the first time in 1887. As a very wealthy person he actively supported many Swedish inventors in their work. One of them was W.T. Unge, who was devoted to the development of rockets and their applications. Nobel and Unge held several rocket patents together and also jointly worked on various rocket applications. In mid-1896 Nobel applied for patents in England and France for "An Improved Mode of Obtaining Photographic Maps and Earth or Ground Measurements" using a photographic camera carried by a "…balloon, rocket or missile…". During the remainder of 1896 the mechanical design of the camera mechanism was pursued and cameras were manufactured. In April 1897 (after the death of Alfred Nobel) the first aerial photos were taken by these cameras. These photos might be the first documented aerial photos taken by a rocket-borne camera. Cameras and photos from 1897 have been preserved. Nobel not only developed the rocket-borne camera but also proposed methods for using the photographs taken for ground measurements and preparing maps.

  4. Analysis of Camera Arrays Applicable to the Internet of Things.

    Science.gov (United States)

    Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing

    2016-01-01

    The Internet of Things is built on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on camera modeling and analysis, which is very important for stereo display and viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are used and analyzed in various applications, there has been little direct comparison between them. Therefore, we make a detailed analysis of their performance over different shooting distances. From our analysis, we find that the threshold shooting distance for converged cameras is 7 m. In addition, we design a camera array that can be used both as a parallel and as a converged camera array, and take images and videos with it to verify the threshold. PMID:27011189

  5. Global Calibration of Multiple Cameras Based on Sphere Targets

    Directory of Open Access Journals (Sweden)

    Junhua Sun

    2016-01-01

    Full Text Available Global calibration methods for a multi-camera system are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method has simple operation and good flexibility, especially for onsite multiple cameras without a common field of view.

  6. Analysis of Camera Arrays Applicable to the Internet of Things

    Directory of Open Access Journals (Sweden)

    Jiachen Yang

    2016-03-01

    Full Text Available The Internet of Things is built on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on camera modeling and analysis, which is very important for stereo display and viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are used and analyzed in various applications, there has been little direct comparison between them. Therefore, we make a detailed analysis of their performance over different shooting distances. From our analysis, we find that the threshold shooting distance for converged cameras is 7 m. In addition, we design a camera array that can be used both as a parallel and as a converged camera array, and take images and videos with it to verify the threshold.

  7. THE FLY’S EYE CAMERA SYSTEM

    Directory of Open Access Journals (Sweden)

    László Mészáros

    2014-01-01

    Full Text Available We introduce the Fly's Eye Camera System, an all-sky monitoring device intended for time-domain astronomy. This camera-system design will provide data sets complementary to other synoptic surveys such as LSST or Pan-STARRS. The effective field of view is obtained with 19 cameras arranged in a spherical mosaic. The cameras are supported by a hexapod mount that is fully capable of sidereal tracking during consecutive exposures. This platform has many advantages. First, it requires only a single moving component and includes no unique parts; the design therefore not only avoids the problems caused by one-of-a-kind elements, but the redundancy of the hexapod also allows trouble-free operation even if one or two of the legs become stuck. Another advantage is that the system can calibrate itself from the observed stars, independently of its geographic location and of the polar alignment of the mount. All mechanical and electronic elements were designed at our institute, Konkoly Observatory. Currently, the instrument is in its testing phase, with an operational hexapod and a reduced number of cameras.

  8. Kinect Fusion improvement using depth camera calibration

    Science.gov (United States)

    Pagliari, D.; Menna, F.; Roncella, R.; Remondino, F.; Pinto, L.

    2014-06-01

    3D modelling of scenes, gesture recognition and motion tracking are fields in rapid and continuous development, driven by growing demand for interactivity in the video-game and e-entertainment markets. The Microsoft Kinect device was created from the idea of a sensor that allows users to play without having to hold any remote controller. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions so that the device can be used not only as a game controller but as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow the device to be used as a 3D scanner, producing polygonal meshes of a static scene simply by moving the Kinect around it. A drawback of this sensor is the geometric quality of the delivered data and its low repeatability. For this reason the authors carried out an investigation to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented, and the 3D data are then corrected accordingly. By integrating the depth-correction algorithm and correcting the interior and exterior orientation parameters of the IR camera, the Fusion libraries are corrected and new reconstruction software is created to produce more accurate models.
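
    The shape such a depth-correction step might take can be sketched as a per-pixel polynomial applied to the raw depth map before fusion. The quadratic error model and its coefficients below are invented for illustration; the paper's fitted correction is not given in the abstract.

    ```python
    import numpy as np

    # Hypothetical systematic-error model for raw depth d (metres):
    # error(d) = a*d^2 + b*d + c. Coefficient values are illustrative only.
    COEFFS = np.array([-2.3e-5, 1.8e-3, -4.0e-3])

    def correct_depth(raw_depth_m):
        """Subtract the modelled systematic error from a raw depth map."""
        error = np.polyval(COEFFS, raw_depth_m)
        corrected = raw_depth_m - error
        corrected[raw_depth_m <= 0] = 0.0    # keep invalid (zero) pixels invalid
        return corrected

    raw = np.array([[0.0, 1.5],
                    [2.0, 3.0]])             # a tiny 2x2 "depth map" in metres
    print(correct_depth(raw))
    ```

    A real pipeline would fit the coefficients against reference planes at known distances, then apply the correction to every frame before handing it to the Fusion libraries.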

  9. NEOCam: The Near-Earth Object Camera

    Science.gov (United States)

    Mainzer, Amy K.; NEOCam Science Team

    2016-10-01

    The Near-Earth Object Camera (NEOCam) is a Discovery mission in Phase A study designed to carry out a large-scale survey of the inner solar system's minor planets. Its primary science objectives are to understand the origins of the solar system's small bodies and the processes that evolved them into their present state. The mission will also characterize the impact hazard from near-Earth objects as well as rare populations such as Earth Trojans and interior-to-Earth objects. In the process, NEOCam can identify targets for future robotic or human exploration. Using a 50 cm telescope operating in two infrared wavelengths (4-5.2 and 6-10 um), the mission is expected to detect and characterize close to 100,000 NEOs and thousands of comets. By achieving high survey completeness in the main belt down to kilometer-scale objects, NEOCam-derived size and albedo distributions can be directly compared to those of the NEOs. The hypotheses that small, dark NEOs and comets are preferentially disrupted at low perihelia can be tested by searching for correlations between size, orbital elements, and albedos. NEOCam's Sun-Earth L1 Lagrange point halo orbit enables a large instantaneous field of regard with a view of low solar elongations, high data rates, and a cold thermal environment. Like its predecessor, WISE/NEOWISE, candidate minor planet detections will be rapidly disseminated to the community via the Minor Planet Center. NEOCam images, source databases, and tables of derived physical properties will be delivered to the community via NASA's Infrared Science Archive and PDS.

  10. Heterogeneous Preferences and Demand-Side Lifecycle Theory in Camera Industry: Take 35mm SLR and Medium Format Cameras as Examples

    OpenAIRE

    CHOU, YU-CHIEH

    2012-01-01

    As an essential tool, the camera is the crucial medium that assists photographers in completing their photographic work or making a record. General users commonly use small-format cameras in their daily lives. The medium format camera, on the other hand, is a camera type that is mentioned far less often. This dissertation therefore adopts Windrum's (2005) approach in order to retest hypotheses of distinct market niches in both 35mm SLR and medium format cameras. Meanwhile, this research employed dem...

  11. An evolution of image source camera attribution approaches.

    Science.gov (United States)

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing evidence of, and distinguishing characteristics of, the origin of a digital image. It allows the forensic analyser to find the possible source camera that captured the image under investigation. In real-world applications, however, these approaches face many challenges due to the large set of multimedia data publicly available through photo-sharing and social-network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system accepts forensic analysis of digital image evidence only if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews source camera attribution techniques more comprehensively in the domain of image forensics, together with a classification of ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts, and the methods used to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics
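
    The best-known hardware artifact for source attribution is the sensor's photo-response non-uniformity (PRNU). As a hedged illustration of the idea (not the paper's method), the sketch below simulates two sensors with fixed multiplicative noise patterns, builds a fingerprint from averaged noise residuals, and attributes a query image by normalized correlation. The 3x3 mean filter stands in for the wavelet denoiser used in real PRNU work.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def residual(img):
        """Noise residual: image minus a crude 3x3 mean-filter denoising."""
        smooth = sum(np.roll(np.roll(img, dy, 0), dx, 1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
        return img - smooth

    def ncc(a, b):
        """Normalized cross-correlation between two residuals."""
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

    def fingerprint(shots):
        """Camera fingerprint: average residual over several images."""
        return np.mean([residual(s) for s in shots], axis=0)

    def flat_shot(prnu):
        """Simulate a flat-field exposure: uniform scene times the PRNU."""
        return 128.0 * (1.0 + prnu) + rng.normal(0.0, 1.0, prnu.shape)

    prnu_a = rng.normal(0.0, 0.02, (64, 64))   # camera A's fixed sensor pattern
    prnu_b = rng.normal(0.0, 0.02, (64, 64))   # camera B's fixed sensor pattern

    fp_a = fingerprint([flat_shot(prnu_a) for _ in range(8)])
    fp_b = fingerprint([flat_shot(prnu_b) for _ in range(8)])

    query = residual(flat_shot(prnu_a))        # new image from camera A
    print("corr with A:", ncc(query, fp_a))    # high -> attributed to camera A
    print("corr with B:", ncc(query, fp_b))    # near zero
    ```

    Real images, compression, and post-processing make the correlation far noisier than this flat-field simulation, which is exactly the challenge the survey discusses.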

  12. A universal method for camera calibration in UITS scenes

    Institute of Scientific and Technical Information of China (English)

    Zhaoxue Chen; Pengfei Shi

    2005-01-01

    A universal approach to camera calibration based on features of representative lines on the traffic ground plane is presented. It uses only a set of three parallel edges with known intervals, plus one line intersecting them with known slope, to obtain the focal length and orientation parameters of a camera. A set of equations that computes the related camera parameters has been derived from the geometric properties of the calibration pattern. With an exact analytical implementation, the precision of the approach is determined only by the accuracy with which the calibration targets are selected. Final experimental results on a snapshot from real automatic visual traffic surveillance (AVTS) scenes have shown its validity.
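
    The abstract does not reproduce the paper's equations, but a closely related classical construction recovers the focal length from the vanishing points of two orthogonal ground directions (e.g., the road direction and a crossing line), assuming the principal point is at the image centre. The scene coordinates below are invented for illustration.

    ```python
    import numpy as np

    def cross2(a, b):
        """2D cross product (z-component)."""
        return a[0] * b[1] - a[1] * b[0]

    def line_intersection(p1, p2, p3, p4):
        """Intersection of the line through p1,p2 with the line through p3,p4."""
        a1 = np.subtract(p2, p1)
        a2 = np.subtract(p4, p3)
        t = cross2(np.subtract(p3, p1), a2) / cross2(a1, a2)
        return np.asarray(p1, dtype=float) + t * a1

    def focal_from_orthogonal_vps(vp1, vp2):
        """Focal length (pixels) from the vanishing points of two orthogonal
        ground directions, in principal-point-centred image coordinates."""
        return float(np.sqrt(-np.dot(vp1, vp2)))

    # Two image edges of parallel road markings (principal-point-centred pixels).
    vp_road = line_intersection((-300, -500), (-150, 150), (200, -500), (100, 150))
    vp_cross = np.array([-500.0, -200.0])   # vanishing point of a crossing line

    print("road vanishing point:", vp_road)                               # [0, 800]
    print("focal length:", focal_from_orthogonal_vps(vp_road, vp_cross))  # 400.0
    ```

    With the focal length fixed, the camera's tilt and pan follow from the direction of the road vanishing point, which is the spirit of the calibration the abstract describes.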

  13. Integrated radar-camera security system: range test

    Science.gov (United States)

    Zyczkowski, M.; Szustakowski, M.; Ciurapinski, W.; Karol, M.; Markowski, P.

    2012-06-01

    The paper presents the test results of a mobile system for the protection of large-area objects, which consists of a radar and thermal and visual cameras. Radar is used for early detection and localization of an intruder and the cameras with narrow field of view are used for identification and tracking of a moving object. The range evaluation of an integrated system is presented as well as the probability of human detection as a function of the distance from radar-camera unit.

  14. Integrated mobile radar-camera system in airport perimeter security

    Science.gov (United States)

    Zyczkowski, M.; Szustakowski, M.; Ciurapinski, W.; Dulski, R.; Kastek, M.; Trzaskawka, P.

    2011-11-01

    The paper presents the test results of a mobile system for the protection of large-area objects, which consists of a radar and thermal and visual cameras. Radar is used for early detection and localization of an intruder, and the cameras with narrow field of view are used for identification and tracking of a moving object. The range evaluation of the integrated system is presented, as well as the probability of human detection as a function of the distance from the radar-camera unit.

  15. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellut, Paolo; Sherwin, Gary

    2011-01-01

    TIR cameras can be used for day/night Unmanned Ground Vehicle (UGV) autonomous navigation when stealth is required. The quality of uncooled TIR cameras has significantly improved over the last decade, making them a viable option at low speed. Limiting factors for stereo ranging with uncooled LWIR cameras are image blur and low-texture scenes. TIR perception capabilities JPL has explored include: (1) single- and dual-band TIR terrain classification, (2) obstacle detection (pedestrians, vehicles, tree trunks, ditches, and water), and (3) perception through obscurants.

  16. Weed detection by UAV with camera guided landing sequence

    DEFF Research Database (Denmark)

    Dyrmann, Mads

    UAVs are gaining more and more currency in agriculture, as they allow for inspection of even remote areas of farmland. Measurement of weed occurrence in fields is one branch of this growing field of research. A problem with UAVs is their limited energy capacity: consequently, after a short...... the built-in GPS, allows the UAV to be navigated within the field of view of a camera mounted on the landing platform. The camera on the platform determines the UAV's position and orientation from markers printed on the UAV, whereby it can be guided in its landing. The UAV has a camera mounted...

  17. BUNDLE ADJUSTMENTS CCD CAMERA CALIBRATION BASED ON COLLINEARITY EQUATION

    Institute of Scientific and Technical Information of China (English)

    Liu Changying; Yu Zhijing; Che Rensheng; Ye Dong; Huang Qingcheng; Yang Dingning

    2004-01-01

    A solid-template CCD camera calibration method using bundle adjustment based on the collinearity equation is presented, considering the characteristics of large-dimension on-line measurement in space. The method adopts a comprehensive camera model, based on the pinhole model extended with distortion corrections. During calibration, precision is improved by imaging at different locations throughout the measurement space, by multiple imaging at the same location, and by bundle-adjustment optimization. A calibration experiment proves that the method fulfils the calibration requirements of CCD cameras applied to vision measurement.
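
    The core of such a method is the collinearity (extended pinhole) projection that bundle adjustment repeatedly evaluates as its residual. A minimal sketch, assuming a two-term radial distortion model and illustrative parameter values (the paper's actual model and numbers are not given in the abstract):

    ```python
    import numpy as np

    def project(X, R, t, f, c, k1, k2):
        """Collinearity projection of world point X through an extended pinhole:
        rotation R, translation t, focal f (px), principal point c, radial k1, k2."""
        x_cam = R @ X + t                                 # world -> camera frame
        x, y = x_cam[0] / x_cam[2], x_cam[1] / x_cam[2]   # ideal image coords
        r2 = x * x + y * y
        d = 1.0 + k1 * r2 + k2 * r2 * r2                  # radial distortion factor
        return f * d * np.array([x, y]) + c               # pixel coordinates

    X = np.array([0.1, -0.2, 2.0])        # a world point 2 m in front of the camera
    R, t = np.eye(3), np.zeros(3)         # identity pose for illustration
    uv = project(X, R, t, f=800.0, c=np.array([320.0, 240.0]), k1=-0.05, k2=0.01)
    print(uv)
    ```

    Bundle adjustment then minimizes the sum of squared differences between such projections and the measured image points, jointly over the poses, the focal length, and the distortion terms.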

  18. The Camera of the MAGIC-II Telescope

    CERN Document Server

    Hsu, C C; Fink, D; Göbel, F; Haberer, W; Hose, J; Maier, R; Mirzoyan, R; Pimpl, W; Reimann, O; Rudert, A; Sawallisch, P; Schlammer, J; Schmidl, S; Stipp, A; Teshima, M

    2007-01-01

    The MAGIC 17 m diameter Cherenkov telescope will be upgraded with a second telescope within the year 2007. The camera of MAGIC-II will include several new features compared to the MAGIC-I camera. Photomultipliers with the highest available photon collection efficiency have been selected. A modular design allows easier access and the flexibility to test new photodetector technologies. The camera will be uniformly equipped with 0.1 degree diameter pixels, which allows the use of an increased trigger area. Finally, the overall signal chain features a large bandwidth to retain the shape of the very fast Cherenkov signals.

  19. Epipolar geometry comparison of SAR and optical camera

    Science.gov (United States)

    Li, Dong; Zhang, Yunhua

    2016-03-01

    In computer vision, the optical camera is often used as the eyes of the computer. If we replace the camera with a synthetic aperture radar (SAR), we enter a microwave vision of the world. This paper compares SAR imaging and camera imaging from the viewpoint of epipolar geometry. The imaging model and epipolar geometry of the two sensors are analyzed in detail. Their difference is illustrated, and their unification is particularly demonstrated. We hope this may benefit researchers in the fields of computer vision or SAR image processing in constructing a computer SAR vision, dedicated to complementing and improving human vision by electromagnetically perceiving and understanding images.
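
    For the optical-camera side of this comparison, the epipolar constraint has a compact algebraic form: corresponding normalized image points of a two-camera rig satisfy x2' E x1 = 0 with the essential matrix E = [t]x R. The sketch below uses the standard computer-vision formulation with invented pose values; SAR's range-Doppler epipolar geometry, the paper's actual subject of comparison, is different.

    ```python
    import numpy as np

    def skew(v):
        """3x3 skew-symmetric matrix: skew(v) @ w == np.cross(v, w)."""
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])

    # Relative pose of camera 2 w.r.t. camera 1 (illustrative values):
    # a point X1 in camera-1 coordinates appears as X2 = R @ X1 + t in camera 2.
    R = np.eye(3)
    t = np.array([0.2, 0.0, 0.0])          # pure horizontal baseline
    E = skew(t) @ R                        # essential matrix

    X1 = np.array([0.3, -0.1, 4.0])        # a point in camera-1 coordinates
    X2 = R @ X1 + t
    x1 = X1 / X1[2]                        # normalized homogeneous image points
    x2 = X2 / X2[2]

    print(x2 @ E @ x1)                     # epipolar constraint: ~0
    ```

    In a camera, this constraint confines the search for a match to a line; the paper's point is to work out what the analogous constraint looks like when the sensor is a SAR.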
