WorldWideScience

Sample records for camera phone-based wayfinding

  1. Use of a smart phone based thermo camera for skin prick allergy testing: a feasibility study (Conference Presentation)

    Science.gov (United States)

    Barla, Lindi; Verdaasdonk, Rudolf M.; Rustemeyer, Thomas; Klaessens, John; van der Veen, Albert

    2016-02-01

Allergy testing is usually performed by exposing the skin to small quantities of potential allergens on the inner forearm and scratching the protective epidermis to increase exposure. After 15 minutes the dermatologist performs a visual check for swelling and erythema, which is subjective and difficult for e.g. dark skin types. A small smart phone based thermo camera (FLIR One) was used to obtain quantitative images in a feasibility study of 17 patients. Directly after allergen exposure on the forearm, thermal images were captured at 30 second intervals and processed into a time-lapse movie over 15 minutes. Considering the 'subjective' reading of the dermatologist as the gold standard, in 11/17 patients (65%) the evaluation of the dermatologist was confirmed by the thermo camera, including 5 of 6 patients without an allergic response. In 7 patients thermal imaging showed additional spots. Of the 342 sites tested, the dermatologist detected 47 allergies, of which 28 (60%) were confirmed by thermal imaging, while thermal imaging showed 12 additional spots. The method can be improved with dedicated acquisition software and better registration between normal and thermal images. The lymphatic reaction seems to shift from the original puncture site. The interpretation of the thermal images is still subjective, since collecting quantitative data is difficult due to patient motion during the 15 minutes. Although not yet conclusive, thermal imaging appears promising for improving the sensitivity and selectivity of allergy testing using a smart phone based camera.

  2. Diversified Wayfinding Design

    Directory of Open Access Journals (Sweden)

    Lu LIU

    2015-07-01

Full Text Available The present study illustrates diversified approaches within current wayfinding system design. With the advancement of science and technology, urban construction, close communication and cross-boundary trends, wayfinding design is no longer an isolated design, but one integrated with environment, architecture, space, information, emotion, technology, culture and life. The features of wayfinding design were analyzed from three perspectives: the signage system, crossing two dimensions, and crossing boundaries. Diversified wayfinding design not only provides a sense of order in a particular environment or space, but also beautifies the environment and sets the mood. Moreover, it can offer people pleasure and delight, bringing a healthier and more reasonable lifestyle and integrating design with life in a harmonious way.

  3. Indoor wayfinding and navigation

    CERN Document Server

    2015-01-01

    Due to the widespread use of navigation systems for wayfinding and navigation in the outdoors, researchers have devoted their efforts in recent years to designing navigation systems that can be used indoors. This book is a comprehensive guide to designing and building indoor wayfinding and navigation systems. It covers all types of feasible sensors (for example, Wi-Fi, A-GPS), discussing the level of accuracy, the types of map data needed, the data sources, and the techniques for providing routes and directions within structures.

  4. Influence of Motivation on Wayfinding

    Science.gov (United States)

    Srinivas, Samvith

    2010-01-01

This research explores the role of affect in the domain of human wayfinding by asking if increased motivation will alter performance across various routes of increasing complexity. Participants were asked to perform certain navigation tasks within an indoor Virtual Reality (VR) environment under either motivated or not-motivated instructions.…

  5. A wayfinding aid to increase navigator independence

    Directory of Open Access Journals (Sweden)

    Wilfred Waters

    2011-12-01

    Full Text Available Wayfinding aids are of great benefit because users do not have to rely on their learned geographic knowledge or orientation skills alone for successful navigation. Additionally, cognitive resources usually captured by this activity can be spent elsewhere. A challenge, however, remains for wayfinding aid developers. Due to the automation of wayfinding aids, navigator independence may be decreasing via the use of these aids. In order to address this, wayfinding aids might be improved additionally to perform a training role. Since the most versatile wayfinders appear to deploy a dual strategy for geographic orientation, it is proposed that wayfinding aids be improved to foster such an approach. This paper presents the results of an experimental study testing a portion of the suggested enhancement.

  6. Cell phone based balance trainer

    Directory of Open Access Journals (Sweden)

    Lee Beom-Chan

    2012-02-01

Full Text Available Abstract Background In their current laboratory-based form, existing vibrotactile sensory augmentation technologies that provide cues of body motion are impractical for home-based rehabilitation use due to their size, weight, complexity, calibration procedures, cost, and fragility. Methods We have designed and developed a cell phone based vibrotactile feedback system for potential use in balance rehabilitation training in clinical and home environments. It comprises an iPhone with an embedded tri-axial linear accelerometer, custom software to estimate body tilt, a "tactor bud" accessory that plugs into the headphone jack to provide vibrotactile cues of body tilt, and a battery. Five young healthy subjects (24 ± 2.8 yrs, 3 females and 2 males) and four subjects with vestibular deficits (42.25 ± 13.5 yrs, 2 females and 2 males) participated in a proof-of-concept study to evaluate the effectiveness of the system. Healthy subjects used the system with eyes closed during Romberg, semi-tandem Romberg, and tandem Romberg stances. Subjects with vestibular deficits used the system with both eyes-open and eyes-closed conditions during semi-tandem Romberg stance. Vibrotactile feedback was provided when the subject exceeded either an anterior-posterior (A/P) or a medial-lateral (M/L) body tilt threshold. Subjects were instructed to move away from the vibration. Results The system was capable of providing real-time vibrotactile cues that informed corrective postural responses. When feedback was available, both healthy subjects and those with vestibular deficits significantly reduced their A/P or M/L RMS sway (depending on the direction of feedback), had significantly smaller elliptical area fits to their sway trajectory, spent a significantly greater mean percentage time within the no feedback zone, and showed a significantly greater A/P or M/L mean power frequency. Conclusion The results suggest that the real-time feedback provided by this system can be used
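The tilt-estimation and thresholding logic described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the axis conventions, mounting orientation, and the 2° threshold below are assumptions for the example.

```python
import math

def tilt_angles(ax, ay, az):
    """Estimate anterior-posterior (pitch) and medial-lateral (roll) body
    tilt, in degrees, from a tri-axial accelerometer reading (m/s^2).
    Assumes the phone is mounted on the torso with z pointing outward."""
    pitch = math.degrees(math.atan2(ay, math.sqrt(ax ** 2 + az ** 2)))
    roll = math.degrees(math.atan2(ax, math.sqrt(ay ** 2 + az ** 2)))
    return pitch, roll

def feedback_direction(pitch, roll, threshold_deg=2.0):
    """Return which tactor should vibrate, or None inside the no-feedback zone.
    The subject is instructed to move away from the vibrating side."""
    if abs(pitch) > threshold_deg:
        return "front" if pitch > 0 else "back"
    if abs(roll) > threshold_deg:
        return "right" if roll > 0 else "left"
    return None
```

An upright stance (gravity entirely on the z axis) yields no cue; a forward lean beyond the threshold triggers the front tactor.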

  7. Smart phone based bacterial detection using bio functionalized fluorescent nanoparticles

    International Nuclear Information System (INIS)

We describe immunochromatographic test strips with smart phone-based fluorescence readout. They are intended for use in the detection of the foodborne bacterial pathogens Salmonella spp. and Escherichia coli O157. Silica nanoparticles (SiNPs) were doped with FITC and Ru(bpy), conjugated to the respective antibodies, and then used in a conventional lateral flow immunoassay (LFIA). Fluorescence was recorded by inserting the nitrocellulose strip into a smart phone-based fluorimeter consisting of a lightweight (40 g) optical module containing an LED light source, a fluorescence filter set and a lens attached to the integrated camera of the cell phone in order to acquire high-resolution fluorescence images. The images were analysed by exploiting the quick image processing application of the cell phone, enabling the detection of pathogens within a few minutes. This LFIA is capable of detecting pathogens in concentrations as low as 10^5 cfu mL^-1 directly from test samples without pre-enrichment. The detection is one order of magnitude better compared to gold nanoparticle-based LFIAs under similar conditions. The successful combination of fluorescent nanoparticle-based pathogen detection by LFIAs with a smart phone-based detection platform has resulted in a portable device with improved diagnostic features and potential applications in diagnostics and environmental monitoring. (author)
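The readout step, quantifying the fluorescent test line against its local background, can be illustrated with a minimal sketch. The row indices and intensity values are hypothetical; the record does not describe the app's actual processing pipeline.

```python
def line_signal(image, line_rows, bg_rows):
    """Background-corrected test-line signal from a grayscale strip image.

    image: 2D list of pixel intensities (rows x columns).
    line_rows: row indices covering the fluorescent test line.
    bg_rows: nearby row indices used as local background.
    Returns mean line intensity minus mean background intensity."""
    mean = lambda rows: sum(sum(image[r]) for r in rows) / (len(rows) * len(image[0]))
    return mean(line_rows) - mean(bg_rows)
```

A brighter background-corrected signal corresponds to more captured pathogen; a calibration curve would map this signal to cfu/mL.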

  8. Learning as way-finding

    DEFF Research Database (Denmark)

    Dau, Susanne

embodied, emotionally and/or cognitive. Way-finding, it is argued, is a concept for learning processes, knowledge development and identity-shaping, where humans learn through motion, feeling and thinking in a world in motion and through combined actions of human and non-human agencies. Furthermore...... of learning used in this paper is inspired by the latest work of the Danish professor Illeris and the interwoven concept of knowledge development as revealed in the SECI-model generated by the Japanese professors Nonaka and Takeuchi. The empirical investigation, which is the basis of the presented assumptions...

  9. Way-Finding Assistance System for Underground Facilities Using Augmented Reality

    Science.gov (United States)

    Yokoi, K.; Yabuki, N.; Fukuda, T.; Michikawa, T.; Motamedi, A.

    2015-05-01

Way-finding is one of the main challenges for pedestrians in large subterranean spaces with complex networks of connected labyrinths. This problem is caused by the loss of a sense of direction and orientation due to the lack of landmarks, which are occluded by ceilings, walls, and skyscrapers. This paper introduces an assistance system for the way-finding problem in large subterranean spaces using Augmented Reality (AR). It suggests displaying known landmarks that are invisible in indoor environments on tablet/handheld devices to assist users with relative positioning and indoor way-finding. The location and orientation of the users can be estimated by the indoor positioning systems and sensors available in common tablet or smartphone devices. The constructed 3D model of a chosen landmark that is in the field of view of the handheld's camera is augmented onto the camera's video feed. A prototype system has been implemented to demonstrate the efficiency of the proposed system for way-finding.
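The core AR step, projecting a known landmark's world coordinates into the handheld camera's image given the user's estimated position and heading, can be sketched with a simple pinhole model. The coordinate conventions, focal length, and image center below are assumptions for illustration, not the authors' implementation.

```python
import math

def project_landmark(cam_pos, heading_deg, landmark, f_px=1000.0, cx=540.0, cy=960.0):
    """Project a landmark's world position (x east, y north, z up) into pixel
    coordinates of a level camera at cam_pos facing heading_deg (compass
    degrees, 0 = north). Returns None if the landmark is behind the camera."""
    dx = landmark[0] - cam_pos[0]
    dy = landmark[1] - cam_pos[1]
    dz = landmark[2] - cam_pos[2]
    h = math.radians(heading_deg)
    # Camera frame: x_c right, y_c down, z_c forward along the heading.
    z_c = dx * math.sin(h) + dy * math.cos(h)
    x_c = dx * math.cos(h) - dy * math.sin(h)
    y_c = -dz
    if z_c <= 0:
        return None
    return (cx + f_px * x_c / z_c, cy + f_px * y_c / z_c)
```

The returned pixel is where the occluded landmark's 3D model would be drawn over the video feed; indoor positioning supplies `cam_pos` and the compass/gyro supplies `heading_deg`.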

  10. Wayfinding : embedding knowledge in hospital environments

    OpenAIRE

    Rooke, Clementinah Ndhlovu; Tzortzopoulos, Patricia; Koskela, Lauri; Rooke, John

    2009-01-01

    The traditional use of signs has failed to overcome the problem of wayfinding in hospitals. As wayfinding problems are clearly linked to healthcare outcomes there is need to find a more integrated approach to solving the problem. In this paper it is shown that it is possible to embed forms of knowledge that make it easier for people to find their way with little need for signs. Evidence from literature and from fieldwork supports this assertion. Methods used for our research in...

  11. Public Space Design - linking wayfinding and wayfaring

    DEFF Research Database (Denmark)

    Lanng, Ditte Bendix; Jensen, Ole B.

    2016-01-01

This chapter, written from an urban design perspective, concerns the relation between daily life traveling and public space design. While travelers are sometimes set on traveling as fast as possible between point A and point B in smooth and unambiguous ways, at other times they may be inclined to engage in a wide range of social encounters and sensorial experiences while on the way. The concepts of wayfinding and wayfaring, respectively, embrace diverse considerations for public space design to ‘stage’ such diverse mobile situations. Wayfinding, as defined by Lynch in 1960, emphasizes the...... framework the chapter addresses key concepts to approach how wayfinding and wayfaring are linked in ordinary mobile situations in public space. The diverse design considerations that begin to materialize in this conceptual linkage can be integrated within the emerging field of ‘mobilities design’, which......

  12. Mobile phone based SCADA for industrial automation.

    Science.gov (United States)

    Ozdemir, Engin; Karacor, Mevlut

    2006-01-01

SCADA is the acronym for "Supervisory Control And Data Acquisition." SCADA systems are widely used in industry for supervisory control and data acquisition of industrial processes. Conventional SCADA systems use a PC, notebook, thin client, or PDA as a client. In this paper, a Java-enabled mobile phone has been used as a client in a sample SCADA application in order to display and supervise the position of a sample prototype crane. The paper presents an actual implementation of the on-line control of the prototype crane via mobile phone. The wireless communication between the mobile phone and the SCADA server is performed by means of a base station via general packet radio service (GPRS) and wireless application protocol (WAP). Test results have indicated that the mobile phone based SCADA integration using the GPRS or WAP transfer scheme could enhance the performance of the crane without causing an increase in the response times of SCADA functions. The operator can visualize and modify the plant parameters using his mobile phone, without reaching the site. In this way, maintenance costs are reduced and productivity is increased. PMID:16480111

  13. Route complexity and simulated physical ageing negatively influence wayfinding.

    Science.gov (United States)

    Zijlstra, Emma; Hagedoorn, Mariët; Krijnen, Wim P; van der Schans, Cees P; Mobach, Mark P

    2016-09-01

The aim of this age-simulation field experiment was to assess the influence of route complexity and physical ageing on wayfinding. Seventy-five people (aged 18-28) performed a total of 108 wayfinding tasks (i.e., 42 participants performed two wayfinding tasks and 33 performed one wayfinding task), of which 59 tasks were performed wearing gerontologic ageing suits. Outcome variables were wayfinding performance (i.e., efficiency and walking speed) and physiological outcomes (i.e., heart and respiratory rates). Analysis of covariance showed that persons on more complex routes (i.e., more floor and building changes) walked less efficiently than persons on less complex routes. In addition, simulated elderly participants performed worse in wayfinding than young participants in terms of speed (p < 0.001). Moreover, a linear mixed model showed that simulated elderly persons had higher heart rates and respiratory rates than young people during a wayfinding task, suggesting that the simulated elderly consumed more energy during this task. PMID:27184311

  14. Wayfinding in Healthcare Facilities: Contributions from Environmental Psychology

    OpenAIRE

    Ann Sloan Devlin

    2014-01-01

    The ability to successfully navigate in healthcare facilities is an important goal for patients, visitors, and staff. Despite the fundamental nature of such behavior, it is not infrequent for planners to consider wayfinding only after the fact, once the building or building complex is complete. This review argues that more recognition is needed for the pivotal role of wayfinding in healthcare facilities. First, to provide context, the review presents a brief overview of the relationship betwe...

  15. Mobile phone based mini-spectrometer for rapid screening of skin cancer

    Science.gov (United States)

    Das, Anshuman; Swedish, Tristan; Wahi, Akshat; Moufarrej, Mira; Noland, Marie; Gurry, Thomas; Aranda-Michel, Edgar; Aksel, Deniz; Wagh, Sneha; Sadashivaiah, Vijay; Zhang, Xu; Raskar, Ramesh

    2015-06-01

We demonstrate a highly sensitive mobile phone based spectrometer that has potential to detect cancerous skin lesions in a rapid, non-invasive manner. Earlier reports of low-cost spectrometers utilize the camera of the mobile phone to image light after it passes through a diffraction grating. These approaches are inherently limited by the closed nature of mobile phone image sensors and built-in optical elements. The system presented uses a novel integrated grating and sensor that is compact, accurate and calibrated. Resolutions of about 10 nm can be achieved. Additionally, UV and visible LED excitation sources are built into the device. Data collection and analysis are simplified using the wireless interfaces and logical control on the smart phone. Furthermore, by utilizing an external sensor, the mobile phone camera can be used in conjunction with spectral measurements. We are exploring ways to use this device to measure endogenous fluorescence of skin in order to distinguish cancerous from non-cancerous lesions with a mobile phone based dermatoscope.
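Making such a spectrometer "accurate and calibrated" typically means fitting a linear (or low-order) map from detector pixel position to wavelength using known emission lines. A minimal least-squares sketch, with hypothetical calibration points:

```python
def fit_wavelength(pixels, wavelengths):
    """Least-squares linear calibration: wavelength = a * pixel + b,
    fitted to known emission lines observed at known pixel positions."""
    n = len(pixels)
    sx = sum(pixels)
    sy = sum(wavelengths)
    sxx = sum(p * p for p in pixels)
    sxy = sum(p * w for p, w in zip(pixels, wavelengths))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b
```

With the fitted `(a, b)`, every pixel column of a captured spectrum image is assigned a wavelength; the quoted ~10 nm resolution then reflects how many pixels a spectral line spreads across.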

  16. Coded illumination for motion-blur free imaging of cells on cell-phone based imaging flow cytometer

    Science.gov (United States)

    Saxena, Manish; Gorthi, Sai Siva

    2014-10-01

Cell-phone based imaging flow cytometry can be realized by flowing cells through microfluidic devices and capturing their images with an optically enhanced camera of the cell-phone. Throughput in flow cytometers is usually enhanced by increasing the flow rate of cells. However, the maximum frame rate of the camera system limits the achievable flow rate. Beyond this, the images become highly blurred due to motion smear. We propose to address this issue with coded illumination, which enables recovery of high-fidelity images of cells far beyond their motion-blur limit. This paper presents simulation results of deblurring synthetically generated cell/bead images under such coded illumination.
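The benefit of coded illumination is easiest to see in the frequency domain: a uniform (constant-on) exposure produces a box blur whose spectrum contains exact zeros, so those frequencies are unrecoverable, while a well-chosen on/off illumination code keeps all frequencies bounded away from zero and makes deblurring well-posed. A small sketch (the 8-slot code below is illustrative, not the authors' sequence):

```python
import cmath

def min_dft_magnitude(kernel, n=32):
    """Smallest DFT magnitude of a 1D blur kernel zero-padded to length n.
    Near-zero values mark spatial frequencies destroyed by the blur."""
    padded = kernel + [0.0] * (n - len(kernel))
    mags = []
    for k in range(n):
        s = sum(padded[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        mags.append(abs(s))
    return min(mags)

box = [1 / 8] * 8                             # ordinary shutter: uniform blur
coded = [c / 4 for c in (1, 0, 1, 1, 0, 0, 1, 1)]  # illustrative coded sequence
```

The box kernel's spectrum hits (numerically) zero, whereas the coded kernel's minimum stays strictly positive, which is what allows high-fidelity recovery of the moving cells.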

  17. Rapid Prototyping a Collections-Based Mobile Wayfinding Application

    Science.gov (United States)

    Hahn, Jim; Morales, Alaina

    2011-01-01

    This research presents the results of a project that investigated how students use a library developed mobile app to locate books in the library. The study employed a methodology of formative evaluation so that the development of the mobile app would be informed by user preferences for next generation wayfinding systems. A key finding is the…

  18. Seeing the Axial Line: Evidence from Wayfinding Experiments

    Directory of Open Access Journals (Sweden)

    Beatrix Emo

    2014-07-01

    Full Text Available Space-geometric measures are proposed to explain the location of fixations during wayfinding. Results from an eye tracking study based on real-world stimuli are analysed; the gaze bias shows that attention is paid to structural elements in the built environment. Three space-geometric measures are used to explain the data: sky area, floor area and longest line of sight. Together with the finding that participants choose the more connected street, a relationship is proposed between the individual cognitive processes that occur during wayfinding, relative street connectivity measured through space syntactic techniques and the spatial geometry of the environment. The paper adopts an egocentric approach to gain a greater understanding on how individuals process the axial map.

  19. Lost in the Labyrinthine Library: A Multi-Method Case Study Investigating Public Library User Wayfinding Behavior

    Science.gov (United States)

    Mandel, Lauren Heather

    2012-01-01

    Wayfinding is the method by which humans orient and navigate in space, and particularly in built environments such as cities and complex buildings, including public libraries. In order to wayfind successfully in the built environment, humans need information provided by wayfinding systems and tools, for instance architectural cues, signs, and…

  20. Navigation Assistance: A Trade-Off between Wayfinding Support and Configural Learning Support

    Science.gov (United States)

    Munzer, Stefan; Zimmer, Hubert D.; Baus, Jorg

    2012-01-01

    Current GPS-based mobile navigation assistance systems support wayfinding, but they do not support learning about the spatial configuration of an environment. The present study examined effects of visual presentation modes for navigation assistance on wayfinding accuracy, route learning, and configural learning. Participants (high-school students)…

  1. Applicability of an exposure model for the determination of emissions from mobile phone base stations

    DEFF Research Database (Denmark)

    Breckenkamp, J; Neitzke, H P; Bornkessel, C;

    2008-01-01

    Applicability of a model to estimate radiofrequency electromagnetic field (RF-EMF) strength in households from mobile phone base stations was evaluated with technical data of mobile phone base stations available from the German Net Agency, and dosimetric measurements, performed in an...

  2. Dynamic Operations Wayfinding System (DOWS) for Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory; Ulrich, Thomas Anthony [Idaho National Laboratory; Lew, Roger Thomas [Idaho National Laboratory

    2015-08-01

A novel software tool is proposed to aid reactor operators in responding to upset plant conditions. The purpose of the Dynamic Operations Wayfinding System (DOWS) is to diagnose faults, prioritize those faults, identify paths to resolve those faults, and deconflict the optimal path for the operator to follow. The objective of DOWS is to take the guesswork out of the best way to combine procedures to resolve compound faults, mitigate low threshold events, or respond to severe accidents. DOWS represents a uniquely flexible and dynamic computer-based procedure system for operators.
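Deconflicting "the optimal path" through candidate procedure steps is, at its core, a shortest-path problem. A generic sketch using Dijkstra's algorithm over a hypothetical step graph (the graph, node names, and weights are invented for illustration; DOWS's actual prioritization logic is not described in this record):

```python
import heapq

def best_procedure_path(graph, start, goal):
    """Dijkstra over a directed graph of procedure steps.
    graph maps node -> list of (next_node, cost); cost might encode
    step duration or risk. Returns (total_cost, step sequence)."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    done = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in done:
            continue
        done.add(node)
        if node == goal:  # reconstruct the cheapest path back to start
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(pq, (nd, nxt))
    return float("inf"), []
```

Given two routes from a fault to a stable state, the cheaper combined procedure sequence wins, which is the "take the guesswork out" behavior the abstract describes.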

  3. Colour contribution to children's wayfinding in school environments

    Science.gov (United States)

    Helvacıoǧlu, Elif; Olguntürk, Nilgün

    2011-03-01

    The purpose of this study was to explore the contribution of colour to children's wayfinding ability in school environments and to examine the differences between colours in terms of their remembrance and usability in route learning process. The experiment was conducted with three different sample groups for each of three experiment sets differentiated by their colour arrangement. The participants totalled 100 primary school children aged seven and eight years old. The study was conducted in four phases. In the first phase, the participants were tested for familiarity with the experiment site and also for colour vision deficiencies by using Ishihara's tests for colour-blindness. In the second phase, they were escorted on the experiment route by the tester one by one, from one starting point to one end point and were asked to lead the tester to the end point by the same route. In the third phase, they were asked to describe verbally the route. In the final phase, they were asked to remember the specific colours at their correct locations. It was found that colour has a significant effect on children's wayfinding performances in school environments. However, there were no differences between different colours in terms of their remembrances in route finding tasks. In addition, the correct identifications of specific colours and landmarks were dependent on their specific locations. Contrary to the literature, gender differences were not found to be significant in the accuracy of route learning performances.

  4. Achieving a lean wayfinding system in complex hospital environments: Design and Through-life Management

    OpenAIRE

    Rooke, Clementinah Ndhlovu; Koskela, Lauri; Tzortzopoulos, Patricia

    2010-01-01

    Complex products, such as buildings and other infrastructure, should aim to provide value to the customer over all stages of the product life-cycle. This paper considers some of the challenges associated with maximising customer value when designing, producing, implementing and maintaining a wayfinding system for complex hospital environments. The hypothesis of this paper is that the tri-partite conception of knowledge flow provides a robust evaluative framework for the problems of wayfind...

  5. Exposure to radio waves near mobile phone base stations

    International Nuclear Information System (INIS)

    Measurements of power density have been made at 17 sites where people were concerned about their exposure to radio waves from mobile phone base stations and where technical data, including the frequencies and radiated powers, have been obtained from the operators. Based on the technical data, the radiated power from antennas used with macrocellular base stations in the UK appears to range from a few watts to a few tens of watts, with typical maximum powers around 80 W. Calculations based on this power indicate that compliance distances would be expected to be no more than 3.1 m for the NRPB guidelines and no more than 8.4 m for the ICNIRP public guidelines. Microcellular base stations appear to use powers no more than a few watts and would not be expected to require compliance distances in excess of a few tens of centimetres. Power density from the base stations of interest was measured at 118 locations at the 17 sites and these data were compared with calculations assuming an inverse square law dependence of power density upon distance from the antennas. It was found that the calculations overestimated the measured power density by up to four orders of magnitude at locations that were either not exposed to the main beam from antennas, or shielded by building fabric. For all locations and for distances up to 250 m from the base stations, power density at the measurement positions did not show any trend to decrease with increasing distance. The signals from other sources were frequently found to be of similar strength to the signals from the base stations of interest. Spectral measurements were obtained over the 30 MHz to 2.9 GHz range at 73 of the locations so that total exposure to radio signals could be assessed. The geometric mean total exposure arising from all radio signals at the locations considered was 2 millionths of the NRPB investigation level, or 18 millionths of the lower ICNIRP public reference level; however, the data varied over several decades. 
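The "calculations based on this power" follow the free-space inverse-square law S = P·G / (4·pi·d^2). A sketch reproducing the order of magnitude of the quoted 8.4 m ICNIRP compliance distance, assuming a typical 80 W macrocell, an antenna gain around 17 dBi (about 50x), and a public reference level of roughly 4.5 W/m^2 at 900 MHz; the gain and limit values are assumptions for illustration, not data from the record:

```python
import math

def power_density(p_watts, gain_lin, d_m):
    """Free-space (inverse-square) power density S = P*G / (4*pi*d^2), in W/m^2."""
    return p_watts * gain_lin / (4 * math.pi * d_m ** 2)

def compliance_distance(p_watts, gain_lin, s_limit):
    """Distance beyond which S falls below the exposure limit s_limit (W/m^2)."""
    return math.sqrt(p_watts * gain_lin / (4 * math.pi * s_limit))
```

As the abstract notes, this idealized model can overestimate real exposure by orders of magnitude off the main beam or behind building fabric, so it bounds rather than predicts measured values.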

  6. Mobile Phone-Based Telemonitoring for Heart Failure Management: A Randomized Controlled Trial

    OpenAIRE

    Seto, Emily; Leonard, Kevin J.; Cafazzo, Joseph A.; Barnsley, Jan; Masino, Caterina; Ross, Heather J

    2012-01-01

    Background Previous trials of telemonitoring for heart failure management have reported inconsistent results, largely due to diverse intervention and study designs. Mobile phones are becoming ubiquitous and economical, but the feasibility and efficacy of a mobile phone-based telemonitoring system have not been determined. Objective The objective of this trial was to investigate the effects of a mobile phone-based telemonitoring system on heart failure management and outcomes. Methods One hund...

  7. Phone-based Metric as a Predictor for Basic Personality Traits

    OpenAIRE

    Mønsted, Bjarke; Mollgaard, Anders; Mathiesen, Joachim

    2016-01-01

Basic personality traits are typically assessed through questionnaires. Here we consider phone-based metrics as a way to assess personality traits. We use data from smartphones with custom data-collection software distributed to 730 individuals. The data includes information about location, physical motion, face-to-face contacts, online social network friends, text messages and calls. The data is further complemented by questionnaire-based data on basic personality traits. From the phone-based...

  8. Smart-Phone Based Magnetic Levitation for Measuring Densities.

    Directory of Open Access Journals (Sweden)

    Stephanie Knowlton

Full Text Available Magnetic levitation, which uses a magnetic field to suspend objects in a fluid, is a powerful and versatile technology. We develop a compact magnetic levitation platform compatible with a smart-phone to separate micro-objects and estimate the density of the sample based on its levitation height. A 3D printed attachment is mechanically installed over the existing camera unit of a smart-phone. Micro-objects, which may be either spherical or irregular in shape, are suspended in a paramagnetic medium and loaded in a microcapillary tube which is then inserted between two permanent magnets. The micro-objects are levitated and confined in the microcapillary at an equilibrium height dependent on their volumetric mass densities (causing a buoyancy force toward the edge of the microcapillary) and magnetic susceptibilities (causing a magnetic force toward the center of the microcapillary) relative to the suspending medium. The smart-phone camera captures magnified images of the levitating micro-objects through an additional lens positioned between the sample and the camera lens cover. A custom-developed Android application then analyzes these images to determine the levitation height and estimate the density. Using this platform, we were able to separate microspheres with varying densities and calibrate their levitation heights to known densities to develop a technique for precise and accurate density estimation. We have also characterized the magnetic field, the optical imaging capabilities, and the thermal state over time of this platform.
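The final step the abstract describes, calibrating levitation heights against beads of known density and then reading off an unknown sample's density, amounts to interpolating along that calibration. A minimal sketch (the heights and densities below are hypothetical, and whether density rises or falls with height depends on the magnet configuration):

```python
def density_from_height(h, calibration):
    """Estimate density (g/mL) from levitation height h (mm) by linear
    interpolation between calibration beads of known density.
    calibration: list of (height_mm, density_g_per_mL) pairs."""
    pts = sorted(calibration)  # order by increasing levitation height
    for (h0, d0), (h1, d1) in zip(pts, pts[1:]):
        if h0 <= h <= h1:
            return d0 + (d1 - d0) * (h - h0) / (h1 - h0)
    raise ValueError("height outside calibrated range")
```

In the real device the Android app would supply `h` from the magnified image; here a measured height between two calibration beads yields an intermediate density.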

  9. Smart-Phone Based Magnetic Levitation for Measuring Densities.

    Science.gov (United States)

    Knowlton, Stephanie; Yu, Chu Hsiang; Jain, Nupur; Ghiran, Ionita Calin; Tasoglu, Savas

    2015-01-01

    Magnetic levitation, which uses a magnetic field to suspend objects in a fluid, is a powerful and versatile technology. We develop a compact magnetic levitation platform compatible with a smart-phone to separate micro-objects and estimate the density of the sample based on its levitation height. A 3D printed attachment is mechanically installed over the existing camera unit of a smart-phone. Micro-objects, which may be either spherical or irregular in shape, are suspended in a paramagnetic medium and loaded in a microcapillary tube which is then inserted between two permanent magnets. The micro-objects are levitated and confined in the microcapillary at an equilibrium height dependent on their volumetric mass densities (causing a buoyancy force toward the edge of the microcapillary) and magnetic susceptibilities (causing a magnetic force toward the center of the microcapillary) relative to the suspending medium. The smart-phone camera captures magnified images of the levitating micro-objects through an additional lens positioned between the sample and the camera lens cover. A custom-developed Android application then analyzes these images to determine the levitation height and estimate the density. Using this platform, we were able to separate microspheres with varying densities and calibrate their levitation heights to known densities to develop a technique for precise and accurate density estimation. We have also characterized the magnetic field, the optical imaging capabilities, and the thermal state over time of this platform. PMID:26308615

  10. Smart-Phone Based Magnetic Levitation for Measuring Densities

    OpenAIRE

    Stephanie Knowlton; Chu Hsiang Yu; Nupur Jain; Ionita Calin Ghiran; Savas Tasoglu

    2015-01-01

    Magnetic levitation, which uses a magnetic field to suspend objects in a fluid, is a powerful and versatile technology. We develop a compact magnetic levitation platform compatible with a smart-phone to separate micro-objects and estimate the density of the sample based on its levitation height. A 3D printed attachment is mechanically installed over the existing camera unit of a smart-phone. Micro-objects, which may be either spherical or irregular in shape, are suspended in a paramagnetic me...

  11. Smart phone-based Chemistry Instrumentation: Digitization of Colorimetric Measurements

    International Nuclear Information System (INIS)

This report presents a mobile instrumentation platform that uses a smart phone's built-in functions for colorimetric diagnosis. The color change resulting from detection is captured as a picture by the CCD camera built into the smart phone and is evaluated as a hue value, giving a well-defined relationship between color and concentration. As a proof of concept, proton concentration measurements were conducted on pH paper coupled with a smart phone. This report shows the possibility of adapting a smart phone into a mobile analytical transducer, and more bioanalysis applications are expected to be developed using other built-in functions of the smart phone
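The hue-based readout described above can be sketched with the standard RGB-to-HSV conversion from Python's standard library; the sampled RGB values below are hypothetical, not the report's data:

```python
import colorsys

def hue_degrees(r, g, b):
    """Convert an 8-bit RGB reading from a photographed test strip to hue (degrees)."""
    h, _s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0

# Hypothetical average RGB values sampled from photos of pH paper.
print(round(hue_degrees(255, 0, 0)))    # red    -> 0
print(round(hue_degrees(255, 255, 0)))  # yellow -> 60
print(round(hue_degrees(0, 128, 0)))    # green  -> 120
```

A calibration curve of hue against known concentrations then gives the "well-defined relationship" the report refers to.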

  12. Determination of exposure due to mobile phone base stations in an epidemiological study

    International Nuclear Information System (INIS)

To investigate a supposed relationship between exposure from mobile phone base stations and well-being, an epidemiological cross-sectional study is carried out within the German Mobile Telecommunication Research Program. In a parallel project, a method for the classification of electromagnetic exposure due to mobile phone base stations has been developed. This method is based on the results of measurements of high-frequency immissions in the interior of more than 1100 rooms and at outdoor locations, on the calculation of the emissions of mobile phone antennas under free-space propagation conditions, and on empirically determined transmission factors both for the propagation of electromagnetic waves in different types of residential areas and for the passage through walls and windows. Standard tests (correlation test, kappa test, Bland-Altman plot, analysis of sensitivity and specificity) show that the method for computational exposure assessment developed in this project is applicable for a first classification of exposures due to mobile phone base stations in epidemiological studies. (authors)

  13. Wayfinding and Navigation for People with Disabilities Using Social Navigation Networks

    Directory of Open Access Journals (Sweden)

    Hassan A. Karimi

    2014-10-01

    Full Text Available To achieve safe and independent mobility, people usually depend on published information, prior experience, the knowledge of others, and/or technology to navigate unfamiliar outdoor and indoor environments. Today, due to advances in various technologies, wayfinding and navigation systems and services are commonplace and are accessible on desktop, laptop, and mobile devices. However, despite their popularity and widespread use, current wayfinding and navigation solutions often fail to address the needs of people with disabilities (PWDs. We argue that these shortcomings are primarily due to the ubiquity of the compute-centric approach adopted in these systems and services, where they do not benefit from the experience-centric approach. We propose that following a hybrid approach of combining experience-centric and compute-centric methods will overcome the shortcomings of current wayfinding and navigation solutions for PWDs.

  14. Signage and wayfinding design a complete guide to creating environmental graphic design systems

    CERN Document Server

    Calori, Chris

    2015-01-01

    A new edition of the market-leading guide to signage and wayfinding design This new edition of Signage and Wayfinding Design: A Complete Guide to Creating Environmental Graphic Design Systems has been fully updated to offer you the latest, most comprehensive coverage of the environmental design process-from research and design development to project execution. Utilizing a cross-disciplinary approach that makes the information relevant to architects, interior designers, landscape architects, graphic designers, and industrial engineers alike, the book arms you with the skills needed to apply a

  15. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    NARCIS (Netherlands)

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what

  16. Auditory Cues Used for Wayfinding in Urban Environments by Individuals with Visual Impairments

    Science.gov (United States)

    Koutsoklenis, Athanasios; Papadopoulos, Konstantinos

    2011-01-01

    The study presented here examined which auditory cues individuals with visual impairments use more frequently and consider to be the most important for wayfinding in urban environments. It also investigated the ways in which these individuals use the most significant auditory cues. (Contains 1 table and 3 figures.)

  17. Design and Development of a Mobile Sensor Based the Blind Assistance Wayfinding System

    Science.gov (United States)

    Barati, F.; Delavar, M. R.

    2015-12-01

Blind and visually impaired people face a number of challenges in their daily life. One of the major challenges is finding their way, both indoors and outdoors. For this reason, independent routing and navigation, especially in urban areas, are important for the blind. Most of the blind undertake route finding and navigation with the help of a guide. In addition, other aids such as a cane, a guide dog or electronic devices are used by the blind. However, in some cases these aids are not efficient enough for wayfinding around obstacles and dangerous areas. As a result, effective non-visual decision-support methods are needed to improve the blind's quality of life through increased mobility and independence. In this study, we designed and implemented an outdoor mobile sensor-based wayfinding system for the blind. The objectives of this study are to guide the blind in obstacle recognition and to design and implement a wayfinding and navigation mobile sensor system for them. An ultrasonic sensor is used to detect obstacles, and GPS is employed for positioning and navigation. This type of ultrasonic sensor measures the interval between sending waves and receiving the echo signals and, using the speed of sound in the environment, estimates the distance to the obstacles. The coordinates and characteristics of all the obstacles in the study area are stored in advance in a GIS database, and all of these obstacles are labeled on the map. The ultrasonic sensor designed and constructed in this study detects obstacles at distances of 2 cm to 400 cm. The implementation, together with interviews with a number of blind persons who used the sensor, showed that the designed mobile wayfinding sensor system was very satisfactory.

  18. DESIGN AND DEVELOPMENT OF A MOBILE SENSOR BASED THE BLIND ASSISTANCE WAYFINDING SYSTEM

    Directory of Open Access Journals (Sweden)

    F. Barati

    2015-12-01

Full Text Available Blind and visually impaired people face a number of challenges in their daily life. One of the major challenges is finding their way, both indoors and outdoors. For this reason, independent routing and navigation, especially in urban areas, are important for the blind. Most of the blind undertake route finding and navigation with the help of a guide. In addition, other aids such as a cane, a guide dog or electronic devices are used by the blind. However, in some cases these aids are not efficient enough for wayfinding around obstacles and dangerous areas. As a result, effective non-visual decision-support methods are needed to improve the blind's quality of life through increased mobility and independence. In this study, we designed and implemented an outdoor mobile sensor-based wayfinding system for the blind. The objectives of this study are to guide the blind in obstacle recognition and to design and implement a wayfinding and navigation mobile sensor system for them. An ultrasonic sensor is used to detect obstacles, and GPS is employed for positioning and navigation. This type of ultrasonic sensor measures the interval between sending waves and receiving the echo signals and, using the speed of sound in the environment, estimates the distance to the obstacles. The coordinates and characteristics of all the obstacles in the study area are stored in advance in a GIS database, and all of these obstacles are labeled on the map. The ultrasonic sensor designed and constructed in this study detects obstacles at distances of 2 cm to 400 cm. The implementation, together with interviews with a number of blind persons who used the sensor, showed that the designed mobile wayfinding sensor system was very satisfactory.
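The echo-interval measurement both records describe amounts to time-of-flight ranging: distance = speed of sound × echo time / 2, halved because the pulse travels out and back. A minimal sketch, assuming 343 m/s for air at about 20 °C (the function names are illustrative):

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at ~20 °C

def echo_to_distance_cm(echo_time_s):
    """Convert an ultrasonic echo round-trip time (s) to obstacle distance (cm)."""
    distance_m = SPEED_OF_SOUND_M_S * echo_time_s / 2.0  # halved: out and back
    return distance_m * 100.0

def within_sensor_range(distance_cm, min_cm=2.0, max_cm=400.0):
    """The sensor in the study detects obstacles between 2 cm and 400 cm."""
    return min_cm <= distance_cm <= max_cm

d = echo_to_distance_cm(0.01)  # a 10 ms echo
print(round(d, 1))             # -> 171.5 (cm)
print(within_sensor_range(d))  # -> True
```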

  19. Mobile Phone-Based Unobtrusive Ecological Momentary Assessment of Day-to-Day Mood: An Explorative Study

    Science.gov (United States)

    Ruwaard, Jeroen; Ejdys, Michal; Schrader, Niels; Sijbrandij, Marit; Riper, Heleen

    2016-01-01

    Background Ecological momentary assessment (EMA) is a useful method to tap the dynamics of psychological and behavioral phenomena in real-world contexts. However, the response burden of (self-report) EMA limits its clinical utility. Objective The aim was to explore mobile phone-based unobtrusive EMA, in which mobile phone usage logs are considered as proxy measures of clinically relevant user states and contexts. Methods This was an uncontrolled explorative pilot study. Our study consisted of 6 weeks of EMA/unobtrusive EMA data collection in a Dutch student population (N=33), followed by a regression modeling analysis. Participants self-monitored their mood on their mobile phone (EMA) with a one-dimensional mood measure (1 to 10) and a two-dimensional circumplex measure (arousal/valence, –2 to 2). Meanwhile, with participants’ consent, a mobile phone app unobtrusively collected (meta) data from six smartphone sensor logs (unobtrusive EMA: calls/short message service (SMS) text messages, screen time, application usage, accelerometer, and phone camera events). Through forward stepwise regression (FSR), we built personalized regression models from the unobtrusive EMA variables to predict day-to-day variation in EMA mood ratings. The predictive performance of these models (ie, cross-validated mean squared error and percentage of correct predictions) was compared to naive benchmark regression models (the mean model and a lag-2 history model). Results A total of 27 participants (81%) provided a mean 35.5 days (SD 3.8) of valid EMA/unobtrusive EMA data. The FSR models accurately predicted 55% to 76% of EMA mood scores. However, the predictive performance of these models was significantly inferior to that of naive benchmark models. Conclusions Mobile phone-based unobtrusive EMA is a technically feasible and potentially powerful EMA variant. The method is young and positive findings may not replicate. At present, we do not recommend the application of FSR-based mood

  20. Mobile phone-based clinical guidance for rural health providers in India.

    Science.gov (United States)

    Gautham, Meenakshi; Iyengar, M Sriram; Johnson, Craig W

    2015-12-01

    There are few tried and tested mobile technology applications to enhance and standardize the quality of health care by frontline rural health providers in low-resource settings. We developed a media-rich, mobile phone-based clinical guidance system for management of fevers, diarrhoeas and respiratory problems by rural health providers. Using a randomized control design, we field tested this application with 16 rural health providers and 128 patients at two rural/tribal sites in Tamil Nadu, Southern India. Protocol compliance for both groups, phone usability, acceptability and patient feedback for the experimental group were evaluated. Linear mixed-model analyses showed statistically significant improvements in protocol compliance in the experimental group. Usability and acceptability among patients and rural health providers were very high. Our results indicate that mobile phone-based, media-rich procedural guidance applications have significant potential for achieving consistently standardized quality of care by diverse frontline rural health providers, with patient acceptance. PMID:24621929

  1. Compliance to Cell Phone-Based EMA Among Latino Youth in Outpatient Treatment

    OpenAIRE

Comulada, WS; Lightfoot, M; Swendeman, D; Grella, C; Wu, N.

    2015-01-01

© 2015 Taylor & Francis Group, LLC. Outpatient treatment practices for adolescent substance users utilize retrospective self-report to monitor drug use. Cell phone-based ecological momentary assessment (CEMA) overcomes retrospective self-report biases and can enhance outpatient treatment, particularly among Latino adolescents, who have been understudied with regard to CEMA. This study explores compliance to text message-based CEMA with youth (n = 28; 93% Latino) in outpatient trea...

  2. Effect of electromagnetic radiations from mobile phone base stations on general health and salivary function

    OpenAIRE

    Singh, Kushpal; Nagaraj, Anup; Yousuf, Asif; Ganta, Shravani; Pareek, Sonia; Vishnani, Preeti

    2016-01-01

    Objective: Cell phones use electromagnetic, nonionizing radiations in the microwave range, which some believe may be harmful to human health. The present study aimed to determine the effect of electromagnetic radiations (EMRs) on unstimulated/stimulated salivary flow rate and other health-related problems between the general populations residing in proximity to and far away from mobile phone base stations. Materials and Methods: A total of four mobile base stations were randomly selected from...

  3. Novel versatile smart phone based Microplate readers for on-site diagnoses.

    Science.gov (United States)

    Fu, Qiangqiang; Wu, Ze; Li, Xiuqing; Yao, Cuize; Yu, Shiting; Xiao, Wei; Tang, Yong

    2016-07-15

Microplate readers are important diagnostic instruments, used intensively for various readout test kits (biochemical analysis kits and ELISA kits). However, because they are expensive and non-portable, commercial microplate readers are unavailable for home testing and for community and rural hospitals, especially in developing countries. In this study, to provide a field-portable, cost-effective and versatile diagnostic tool, we report a novel smart phone based microplate reader. The basic principle of this device relies on the smart phone's optical sensor, which measures the transmitted light intensities of liquid samples. To prove the validity of the device, the developed smart phone based microplate readers were applied to read out results for various analytical targets. These targets included alanine aminotransferase (ALT; limit of detection (LOD) 17.54 U/L), alkaline phosphatase (AKP; LOD 15.56 U/L), creatinine (LOD 1.35 μM), bovine serum albumin (BSA; LOD 0.0041 mg/mL), prostate specific antigen (PSA; LOD 0.76 pg/mL), and ractopamine (Rac; LOD 0.31 ng/mL). The developed smart phone based microplate readers are versatile, portable, and inexpensive; they are unique in their ability to perform under circumstances where resources and expertise are limited. PMID:27019031
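The abstract does not detail the signal processing, but a transmitted-light readout of this kind is typically converted to absorbance via the Beer-Lambert relation A = -log10(I/I0), with I0 taken from a blank well. A generic sketch with hypothetical intensities:

```python
import math

def absorbance(transmitted, blank):
    """Beer-Lambert readout: A = -log10(I / I0), I0 from a blank (no-analyte) well."""
    return -math.log10(transmitted / blank)

# Hypothetical intensity readings from the phone's optical sensor.
blank_intensity = 200.0
sample_intensity = 100.0
print(round(absorbance(sample_intensity, blank_intensity), 3))  # -> 0.301
```

Absorbance is then mapped to concentration with a standard curve, from which detection limits like those quoted above are derived.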

  4. Using Technology to Overcome the Tyranny of Space: Information Provision and Wayfinding

    OpenAIRE

    Julian Hine; Derek Swan; Judith Scott; David Binnie; John Sharp

    2000-01-01

Urban wayfinding technology offers many possibilities by which older people and mobility-impaired users can overcome the barriers encountered on everyday journeys in the built environment. Previous work has highlighted the extent to which personal mobility and independence are significant determinants of quality of life among both elderly and visually impaired groups. The paper outlines the development of the auditory location finder (ALF), which is a beacon-based local information sys...

  5. The Asovi System: Towards a solution for indoor orientation and wayfinding for the visually impaired

    OpenAIRE

    Saffery, Frank

    2012-01-01

    Wireless communication technology is currently an expanding resource from which solutions into indoor orientation and wayfinding for the visually impaired can be explored. However, as a technology in its infancy a prevalent system in the field has yet to be established. Further to this, the potential of combining wireless communication technology with a commercially viable interface capable of providing feedback for the end user is as yet unexplored. Research in current wireless and mobile te...

  6. An Approach for Indoor Wayfinding Replicating Main Principles of an Outdoor Navigation System for Cyclists

    Science.gov (United States)

    Makri, A.; Zlatanova, S.; Verbree, E.

    2015-05-01

This work presents an approach to enhancing navigation in indoor environments based on a landmark concept. Empirical research has already shown that using landmarks can significantly simplify the wayfinding task. Navigation based on landmarks relies on the presence of landmarks at each point along a route where wayfinders might need assistance. The approach presented here is based on the Dutch system for the navigation of cyclists. The landmarks used in the proposed approach are special signposts containing the necessary directional information to guide the wayfinder through the space. The system is simple, efficient and satisfactory in providing navigational assistance in indoor space. An important contribution of this research is an approach to automatically determine decision points in indoor environments, which makes it possible to apply the system to navigational assistance in any building. The proposed system was verified by placing numbered landmark signs in a specific building. Several tests were performed and the results analysed. The findings of the experiment are very promising, showing that participants reached the destinations without detours.

  7. Perceived externalities of cell phone base stations: the case of property prices in Hamburg, Germany

    OpenAIRE

    Brandt, Sebastian; Maennig, Wolfgang

    2012-01-01

    We examine the impact of cell phone base stations on prices of condominiums in Hamburg, Germany. This is the first hedonic study on this subject for housing prices in Europe and the first ever to examine the price impact of base stations within a whole metropolis. We distinguish between individual masts and groups of masts. On the basis of a dataset of over 1000 base stations set up in Hamburg, we find that only immediate proximity to groups of antenna masts is perceived as harmful by residen...

  8. Age-related wayfinding differences in real large-scale environments: detrimental motor control effects during spatial learning are mediated by executive decline?

    Directory of Open Access Journals (Sweden)

    Mathieu Taillade

Full Text Available The aim of this study was to evaluate motor control activity (active vs. passive condition) with regard to wayfinding and spatial learning difficulties in large-scale spaces for older adults. We compared virtual reality (VR)-based wayfinding and spatial memory (survey and route knowledge) performances between 30 younger and 30 older adults. A significant effect of age was obtained on the wayfinding performances but not on the spatial memory performances. Specifically, the active condition deteriorated the survey measure in all of the participants and increased the age-related differences in the wayfinding performances. Importantly, the age-related differences in the wayfinding performances after an active condition were further mediated by the executive measures. All of the results relative to a detrimental effect of motor activity are discussed in terms of a dual-task effect as well as the executive decline associated with aging.

  9. A cell-phone-based brain-computer interface for communication in daily life

    Science.gov (United States)

    Wang, Yu-Te; Wang, Yijun; Jung, Tzyy-Ping

    2011-04-01

    Moving a brain-computer interface (BCI) system from a laboratory demonstration to real-life applications still poses severe challenges to the BCI community. This study aims to integrate a mobile and wireless electroencephalogram (EEG) system and a signal-processing platform based on a cell phone into a truly wearable and wireless online BCI. Its practicality and implications in a routine BCI are demonstrated through the realization and testing of a steady-state visual evoked potential (SSVEP)-based BCI. This study implemented and tested online signal processing methods in both time and frequency domains for detecting SSVEPs. The results of this study showed that the performance of the proposed cell-phone-based platform was comparable, in terms of the information transfer rate, with other BCI systems using bulky commercial EEG systems and personal computers. To the best of our knowledge, this study is the first to demonstrate a truly portable, cost-effective and miniature cell-phone-based platform for online BCIs.
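SSVEP detection of the kind described is often reduced to picking, among the candidate stimulation frequencies, the one with the largest spectral power in the recorded EEG. A toy illustration with a synthetic sinusoid standing in for real EEG (a generic frequency-domain sketch, not the study's actual pipeline):

```python
import cmath
import math

def dft_power(signal, fs, freq):
    """Power of `signal` at `freq` (Hz) from a single DFT-bin evaluation."""
    s = sum(x * cmath.exp(-2j * math.pi * freq * k / fs)
            for k, x in enumerate(signal))
    return abs(s) ** 2 / len(signal)

def detect_ssvep(signal, fs, candidate_freqs):
    """Return the candidate stimulation frequency with the highest power."""
    return max(candidate_freqs, key=lambda f: dft_power(signal, fs, f))

fs = 256                                  # sampling rate (Hz)
t = [k / fs for k in range(fs * 2)]       # 2 s of samples
signal = [math.sin(2 * math.pi * 12.0 * x) for x in t]  # subject attends 12 Hz
print(detect_ssvep(signal, fs, [8.0, 10.0, 12.0, 15.0]))  # -> 12.0
```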

  10. Mobile Phone Based System Opportunities to Home-based Managing of Chemotherapy Side Effects

    Science.gov (United States)

    Davoodi, Somayeh; Mohammadzadeh, Zeinab; Safdari, Reza

    2016-01-01

Objective: The application of mobile-based systems in cancer care, and especially in chemotherapy management, has grown remarkably in recent decades. Because chemotherapy side effects have a significant influence on patients' lives, it is necessary to find ways to control them. This research reviews experiences of using mobile phone based systems for home-based monitoring of chemotherapy side effects in cancer. Methods: In this literature review study, a search was conducted with keywords such as cancer, chemotherapy, mobile phone, information technology, side effects and self-managing in the Science Direct, Google Scholar and PubMed databases, covering publications since 2005. Results: Today, because of the growing incidence of cancer, we need methods and innovations such as information technology to manage and control it. Mobile phone based systems are solutions that provide quick access to the monitoring of chemotherapy side effects for cancer patients at home. The investigated studies demonstrate that the use of mobile phones in chemotherapy management has positive results and leads to patient and clinician satisfaction. Conclusion: This study shows that mobile phone systems for home-based monitoring of chemotherapy side effects work well. As a result, knowledge of cancer self-management and the rate of patients' effective participation in the care process improved. PMID:27482134

  11. Proposing a Multi-Criteria Path Optimization Method in Order to Provide a Ubiquitous Pedestrian Wayfinding Service

    Science.gov (United States)

    Sahelgozin, M.; Sadeghi-Niaraki, A.; Dareshiri, S.

    2015-12-01

A myriad of novel applications for different types of navigation systems have emerged in recent years. One of the most frequent is wayfinding. Since the nature of pedestrian wayfinding problems differs significantly from that of vehicle navigation, navigation services designed for vehicles are not appropriate for pedestrian wayfinding purposes. In addition, diversity in users' environmental conditions and preferences affects the process of pedestrian wayfinding with mobile devices. Therefore, a method is needed that performs intelligent pedestrian routing with regard to this diversity. This intelligence can be achieved with a ubiquitous service that adapts to context. Such a service possesses both context-awareness and user-awareness capabilities, the main features of ubiquitous services that make them flexible in response to any user in any situation. In this paper, we propose a multi-criteria path optimization method that provides a Ubiquitous Pedestrian Way Finding Service (UPWFS). The proposed method considers four criteria: the length, safety, difficulty and attraction of the path. A conceptual framework is proposed to show the factors that influence these criteria. A mathematical model is then developed on which the proposed path optimization method is based. Finally, data from a local district in Tehran are chosen as the case study to evaluate the performance of the proposed method in real situations. Results show that the proposed method successfully captures the effects of context in the wayfinding procedure, demonstrating its efficiency in providing a ubiquitous pedestrian wayfinding service.
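A common way to realize this kind of multi-criteria routing is to collapse the four criteria into a single weighted edge cost, with the weights adapted to the user's context, and then run a standard shortest-path search. A minimal sketch with Dijkstra's algorithm on a toy graph (the weights, attribute names and network are illustrative, not the paper's model):

```python
import heapq

def edge_cost(attrs, w):
    """Weighted sum of criteria; attraction is a benefit, so it reduces the cost."""
    return (w["length"] * attrs["length"]
            + w["safety"] * attrs["danger"]        # higher danger -> higher cost
            + w["difficulty"] * attrs["difficulty"]
            - w["attraction"] * attrs["attraction"])

def best_path(graph, start, goal, w):
    """Dijkstra over the combined cost (assumes combined edge costs stay >= 0)."""
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, attrs in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + edge_cost(attrs, w), nxt, path + [nxt]))
    return None

# Toy network: a short but unsafe direct edge vs. a longer, safer detour.
graph = {
    "A": {"B": {"length": 2, "danger": 0, "difficulty": 1, "attraction": 0},
          "C": {"length": 3, "danger": 5, "difficulty": 1, "attraction": 0}},
    "B": {"C": {"length": 2, "danger": 0, "difficulty": 1, "attraction": 0}},
}
weights = {"length": 1.0, "safety": 1.0, "difficulty": 0.5, "attraction": 0.5}
print(best_path(graph, "A", "C", weights))  # -> (5.0, ['A', 'B', 'C'])
```

Raising the safety weight steers the route further from dangerous edges, which is how per-user context can change the recommended path.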

  12. Where is my car? Examining wayfinding behavior in a parking lot

    Directory of Open Access Journals (Sweden)

    Rodrigo Mora

    2014-08-01

Full Text Available This article examines wayfinding behavior in an extended parking lot belonging to one of the largest shopping malls in Santiago, Chile. About 500 people were followed while going to the mall and returning from it, and their trajectories were mapped and analyzed. The results indicate that inbound paths were, on average, 10% shorter than outbound paths, and that people stopped three times more frequently when leaving the mall than when entering it. It is argued that these results are in line with previous research on the subject, which stresses the importance of environmental information in shaping people's behavior.

  13. Determinants and stability over time of perception of health risks related to mobile phone base stations

    DEFF Research Database (Denmark)

    Kowall, Bernd; Breckenkamp, Jürgen; Blettner, Maria;

    2012-01-01

OBJECTIVE: Perception of possible health risks related to mobile phone base stations (MPBS) is an important factor in citizens' opposition against MPBS and is associated with health complaints. The aim of the present study is to assess whether risk perception of MPBS is associated with concerns about other environmental and health risks, is associated with psychological strain, and is stable on the individual level over time. METHODS: Self-administered questionnaires filled in by 3,253 persons aged 15-69 years in 2004 and 2006 in Germany. RESULTS: Risk perception of MPBS was strongly ... 2004 expressed these concerns again 2 years later; the corresponding figure for attribution of health complaints to MPBS was 31.3%. CONCLUSION: Risk perception of MPBS is strongly associated with general concern, anxiety, depression, and stress, and rather unstable over time.

  14. Implicit attitudes toward nuclear power and mobile phone base stations: support for the affect heuristic.

    Science.gov (United States)

    Siegrist, Michael; Keller, Carmen; Cousin, Marie-Eve

    2006-08-01

The implicit association test (IAT) measures automatic associations. In the present research, the IAT was adapted to measure implicit attitudes toward technological hazards. In Study 1, implicit and explicit attitudes toward nuclear power were examined. Implicit measures (i.e., the IAT) revealed negative attitudes toward nuclear power that were not detected by explicit measures (i.e., a questionnaire). In Study 2, implicit attitudes toward EMF (electromagnetic field) hazards were examined. Results showed that cell phone base stations and power lines are judged to be similarly risky and, further, that base stations are more closely related to risk concepts than home appliances are. No differences between experts and lay people were observed. Results of the present studies are in line with the affect heuristic proposed by Slovic and colleagues. Affect seems to be an important factor in risk perception. PMID:16948694

  15. Applicability of an exposure model for the determination of emissions from mobile phone base stations

    International Nuclear Information System (INIS)

Applicability of a model to estimate radiofrequency electromagnetic field (RF-EMF) strength in households from mobile phone base stations was evaluated with technical data of mobile phone base stations available from the German Net Agency, and with dosimetric measurements performed in an epidemiological study. Estimated exposure and exposure measured with dosemeters in 1322 participating households were compared. For that purpose, the upper 10th percentiles of both outcomes were defined as the 'higher exposed' groups. To assess the agreement of the defined 'higher exposed' groups, the kappa coefficient, sensitivity and specificity were calculated. The present results show only a weak agreement between calculations and measurements (kappa values between -0.03 and 0.28, sensitivity between 7.1 and 34.6). Only in some of the sub-analyses was a higher agreement found, e.g. when measured instead of interpolated geo-coordinates were used to calculate the distance between households and base stations, which is one important parameter in modelling exposure. During the development of the exposure model, more precise input data were available for its internal validation, which yielded kappa values between 0.41 and 0.68 and sensitivity between 55 and 76 for different types of housing areas. Contrary to this, the calculation of exposure on the basis of the available imprecise data from the epidemiological study is associated with a relatively high degree of uncertainty. Thus, the model can only be applied in epidemiological studies when the uncertainty of the input data is considerably reduced. Otherwise, the use of dosemeters to determine the exposure from RF-EMF in epidemiological studies is recommended. (authors)
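The agreement statistics used above (kappa, sensitivity, specificity) for two binary 'higher exposed' classifications follow from a 2x2 table; a sketch with made-up counts (not the study's data), taking the dosemeter measurement as the reference:

```python
def agreement_stats(tp, fp, fn, tn):
    """Cohen's kappa, sensitivity and specificity from a 2x2 agreement table.

    tp: both methods say 'higher exposed'; tn: both say 'lower';
    fp/fn: disagreements, with the measurement taken as the reference.
    """
    n = tp + fp + fn + tn
    po = (tp + tn) / n  # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return kappa, sensitivity, specificity

# Made-up counts for 1322 households, upper decile vs. the rest.
k, se, sp = agreement_stats(tp=40, fp=92, fn=92, tn=1098)
print(round(k, 2), round(se, 2), round(sp, 2))  # kappa ~0.23, within the reported range
```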

  16. The Effect of Gender, Wayfinding Strategy and Navigational Support on Wayfinding Behaviour%性别、寻路策略与导航方式对寻路行为的影响

    Institute of Scientific and Technical Information of China (English)

    房慧聪; 周琳

    2012-01-01

The wayfinding strategy and the navigational support mode are two important factors in human wayfinding behavior. Although many lines of evidence have shown gender differences in the use of wayfinding strategies and in the effectiveness of some navigational support designs, the interaction of these two factors remains to be studied. The present study investigated the effect of gender, wayfinding strategy and navigational support mode on wayfinding behavior. 120 subjects were screened with the classic Wayfinding Strategy Scale developed by Lawton and then assigned to different navigational support modes in a VR maze program scripted with 3Dmax and Virtools. In the practice stage, the subjects were required to become familiar with the operation rules, such as moving forward or backward and turning left or right by pressing the cursor keys. The subjects then entered the formal test, in which they were asked to reach the exit of the maze as quickly as possible with the aid of a given navigational support mode. The navigation time and the route map were recorded when the subjects successfully completed the task. Firstly, our data showed that the navigation time of males with lower scores in orientation strategy was shortest under the guide-sign support condition in the VR maze and longest under the YAH map support condition, and the difference between the two conditions was significant. However, the effect of navigational support mode on wayfinding performance was not significant for males with higher scores in orientation strategy. These data indicate that orientation strategy is an important factor in predicting males' navigational performance. Secondly, our data also showed that the effect of navigational support mode on females' wayfinding performance was statistically significant. The navigation time was the shortest under the condition of the guide sign support, and it was

  17. Comparison of the effectiveness of three retinal camera technologies for malarial retinopathy detection in Malawi

    Science.gov (United States)

    Soliz, Peter; Nemeth, Sheila C.; Barriga, E. Simon; Harding, Simon P.; Lewallen, Susan; Taylor, Terrie E.; MacCormick, Ian J.; Joshi, Vinayak S.

    2016-03-01

    The purpose of this study was to test the suitability of three available camera technologies (desktop, portable, and iPhone-based) for imaging comatose children who presented with clinical symptoms of malaria. Ultimately, the results of the project would form the basis for the design of a future camera to screen for malarial retinopathy (MR) in a resource-challenged environment. The desktop, portable, and iPhone-based cameras were represented by the Topcon, Pictor Plus, and Peek cameras, respectively. These cameras were tested on N=23 children presenting with symptoms of cerebral malaria (CM) at a malaria clinic, Queen Elizabeth Teaching Hospital in Malawi, Africa. Each patient was dilated for a binocular indirect ophthalmoscopy (BIO) exam by an ophthalmologist, followed by imaging with all three cameras. Each case was graded according to an internationally established protocol and compared to the BIO as the clinical ground truth. The reader used three principal retinal lesions as markers for MR: hemorrhages, retinal whitening, and vessel discoloration. The study found that the mid-priced Pictor Plus hand-held camera performed considerably better than the lower-priced mobile phone-based camera, and slightly better than the higher-priced tabletop camera. When comparing the readings of digital images against the clinical reference standard (BIO), the Pictor Plus camera had sensitivity and specificity for MR of 100% and 87%, respectively. This compares to a sensitivity and specificity of 87% and 75% for the iPhone-based camera and 100% and 75% for the desktop camera. The drawback of all the cameras was their limited field of view, which did not allow a complete view of the periphery, where vessel discoloration occurs most frequently. The consequence was that vessel discoloration was not addressed in this study. None of the cameras offered real-time image quality assessment to ensure high-quality images to afford the best possible opportunity for reading by a remotely located
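The sensitivity and specificity figures quoted above are standard confusion-matrix quantities (sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)); a minimal sketch with hypothetical counts, not the study's raw tallies:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a 23-patient series in which 15 had MR on BIO:
# the camera flags all 15 true positives and 7 of 8 true negatives.
sens, spec = sensitivity_specificity(tp=15, fn=0, tn=7, fp=1)
print(f"sensitivity={sens:.0%} specificity={spec:.0%}")  # sensitivity=100% specificity=88%
```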

  18. Non-specific physical symptoms in relation to actual and perceived proximity to mobile phone base stations and powerlines.

    NARCIS (Netherlands)

    Baliatsas, C.; Kamp, I. van; Kelfkens, G.; Schipper, M.; Bolte, J.; Yzermans, J.; Lebret, E.

    2011-01-01

    BACKGROUND: Evidence about a possible causal relationship between non-specific physical symptoms (NSPS) and exposure to electromagnetic fields (EMF) emitted by sources such as mobile phone base stations (BS) and powerlines is insufficient. So far little epidemiological research has been published on

  19. Pilot study of a cell phone-based exercise persistence intervention post-rehabilitation for COPD

    Directory of Open Access Journals (Sweden)

    Huong Q Nguyen

    2009-08-01

    Full Text Available Huong Q Nguyen1, Dawn P Gill1, Seth Wolpin1, Bonnie G Steele2, Joshua O Benditt1; 1University of Washington, Seattle, WA, USA; 2VA Puget Sound Health Care System, Seattle, WA, USA. Objective: To determine the feasibility and efficacy of a six-month, cell phone-based exercise persistence intervention for patients with chronic obstructive pulmonary disease (COPD) following pulmonary rehabilitation. Methods: Participants who completed a two-week run-in were randomly assigned to either MOBILE-Coached (n = 9) or MOBILE-Self-Monitored (n = 8). All participants met with a nurse to develop an individualized exercise plan, were issued a pedometer and exercise booklet, and were instructed to continue to log their daily exercise and symptoms. MOBILE-Coached also received weekly reinforcement text messages on their cell phones; reports of worsening symptoms were automatically flagged for follow-up. Usability and satisfaction were assessed. Participants completed incremental cycle and six-minute walk (6MW) tests, wore an activity monitor for 14 days, and reported their health-related quality of life (HRQL) at baseline, three, and six months. Results: The sample had a mean age of 68 ± 11 and forced expiratory volume in one second (FEV1) of 40 ± 18% predicted. Participants reported that logging their exercise and symptoms was easy and that keeping track of their exercise helped them remain active. There were no differences between groups over time in maximal workload, 6MW distance, or HRQL (p > 0.05); however, MOBILE-Self-Monitored increased total steps/day whereas MOBILE-Coached logged fewer steps over six months (p = 0.04). Conclusions: We showed that it is feasible to deliver a cell phone-based exercise persistence intervention to patients with COPD post-rehabilitation and that the addition of coaching appeared to be no better than self-monitoring. The latter finding needs to be interpreted with caution since this was a purely exploratory study. Trial registration: Clinical

  20. An iPhone-based digital image colorimeter for detecting tetracycline in milk.

    Science.gov (United States)

    Masawat, Prinya; Harfield, Antony; Namwong, Anan

    2015-10-01

    An iPhone-based digital image colorimeter (DIC) was fabricated as a portable tool for monitoring tetracycline (TC) in bovine milk. An application named ColorConc was developed for the iPhone that utilizes an image-matching algorithm to determine the TC concentration in a solution. The color values red (R), green (G), blue (B), hue (H), saturation (S), brightness (V), and gray (Gr) were measured from each picture of the TC standard solutions. TC solution extracted from milk samples using solid-phase extraction (SPE) was photographed, and the concentration was predicted by comparing its color values with those collected in a database. The amount of TC could be determined in the concentration range of 0.5-10 μg mL(-1). The proposed DIC-iPhone is able to provide a limit of detection (LOD) of 0.5 μg mL(-1) and a limit of quantitation (LOQ) of 1.5 μg mL(-1). The enrichment factor was 70, and the extracted milk sample was a strong yellow solution after SPE. Therefore, the SPE-DIC-iPhone could be used for the assay of TC residues in milk at concentrations lower than the LOD and LOQ of the proposed technique. PMID:25872422
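The matching step described above amounts to a nearest-neighbour lookup in colour space: the colour values of an unknown sample are compared against a calibration database of known TC concentrations. A minimal sketch, assuming Euclidean distance over RGB triplets only (the app's actual algorithm and its use of the other colour channels are not specified here, and the calibration values below are invented for illustration):

```python
import math

# Hypothetical calibration database: mean RGB of photographed TC standards (ug/mL -> RGB).
calibration = {
    0.5:  (212, 198, 120),
    2.0:  (205, 188,  95),
    5.0:  (196, 172,  60),
    10.0: (184, 150,  30),
}

def predict_concentration(rgb, db):
    """Return the concentration whose stored colour is closest to `rgb` (Euclidean)."""
    return min(db, key=lambda conc: math.dist(rgb, db[conc]))

print(predict_concentration((198, 175, 64), calibration))  # prints 5.0 (closest standard)
```

In practice a real implementation would interpolate between neighbouring standards rather than snap to the single closest one, and would average colour over a region of interest to suppress pixel noise.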

  1. Mobile Phone Based RIMS for Traffic Control a Case Study of Tanzania

    Directory of Open Access Journals (Sweden)

    Angela-Aida Karugila Runyoro

    2015-04-01

    Full Text Available Vehicle saturation of transportation infrastructure causes traffic congestion, accidents, transportation delays and environmental pollution. This problem can be resolved with proper management of traffic flow. Existing traffic management systems are challenged in capturing and processing real-time road data from wide-area road networks. The main purpose of this study is to address this gap by implementing a mobile phone-based Road Information Management System. The proposed system integrates three modules for data collection, storage and information dissemination. The modules work together to enable real-time traffic control. Information disseminated from the system enables road users to adjust their travelling habits, and it allows the traffic lights to control the traffic according to the real-time situation on the road. In this paper the system implementation and testing were performed. The results indicated that it is possible to track traffic data using Global Positioning System-enabled mobile phones and that, after processing the collected data, real-time traffic status was displayed on a web interface. This enabled road users to know in advance the situation on the roads and hence make proper travelling decisions. Further research should consider adapting the traffic light control system to act on the disseminated real-time traffic information.

  2. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    Science.gov (United States)

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available. PMID:24472756
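The Spearman correlations used above to compare modelled with measured RF-EMF are simply Pearson correlations computed on ranks; a self-contained sketch with illustrative field strengths (not the study's data):

```python
def rank(values):
    """1-based ranks; ties receive the average rank of the tied block."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average 1-based rank of the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

measured = [0.12, 0.45, 0.08, 0.91, 0.33]   # illustrative measured levels
modelled = [0.10, 0.50, 0.11, 0.80, 0.30]   # illustrative modelled levels
print(round(spearman(measured, modelled), 6))  # prints 0.9 (> 0.6, "good agreement")
```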

  3. Study of variations of radiofrequency power density from mobile phone base stations with distance

    International Nuclear Information System (INIS)

    The variations of radiofrequency (RF) radiation power density with distance around some mobile phone base stations (BTSs), in ten randomly selected locations in Ibadan, western Nigeria, were studied. Measurements were made with a calibrated hand-held spectrum analyser. The maximum Global System for Mobile Communications (GSM) 1800 signal power density was 323.91 μW m⁻² at a 250 m radius from one BTS, and that of GSM 900 was 1119.00 μW m⁻² at a 200 m radius from another BTS. The estimated total maximum power density was 2972.00 μW m⁻² at a 50 m radius from a third BTS. This study shows that the maximum carrier-signal power density and the total maximum power density from a BTS may be observed, on average, at 200 and 50 m from it, respectively. The result of this study demonstrates that the exposure of people to RF radiation from phone BTSs in Ibadan city is far below the limits recommended by international scientific bodies. (authors)
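As a rough plausibility check, not the paper's method, far-field power density at distance d from an antenna of effective isotropic radiated power (EIRP) P can be estimated with the free-space relation S = P / (4πd²); a sketch with an assumed 20 W EIRP:

```python
import math

def power_density_uW_m2(eirp_w, d_m):
    """Free-space far-field power density S = EIRP / (4*pi*d^2), in uW/m^2."""
    return eirp_w / (4 * math.pi * d_m ** 2) * 1e6

# Assumed 20 W EIRP carrier; density falls off as 1/d^2 in free space.
for d in (50, 200, 250):
    print(f"{d:>4} m: {power_density_uW_m2(20, d):.1f} uW/m^2")
```

Real base-station antennas are strongly directional, so the main lobe reaches ground level some distance from the mast; this is why measured maxima can occur at 50-250 m rather than at the base of the tower, as the study observed.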

  4. Measurement and analysis of radiofrequency radiations from some mobile phone base stations in Ghana

    International Nuclear Information System (INIS)

    A survey of the radiofrequency electromagnetic radiation at public access points in the vicinity of 50 cellular phone base stations has been carried out. The primary objective was to measure and analyse the electromagnetic field strength levels emitted by antennae installed and operated by the Ghana Telecommunications Company. At all the sites, measurements were made using a hand-held spectrum analyser to determine the electric field level within the 900 and 1800 MHz frequency bands. The results indicated that power densities at public access points varied from as low as 0.01 μW m⁻² to as high as 10 μW m⁻² for the frequency of 900 MHz. At a transmission frequency of 1800 MHz, the power densities varied from 0.01 to 100 μW m⁻². The results were found to be in compliance with the International Commission on Non-Ionizing Radiation Protection guidance level, but were 20 times higher than the results generally obtained for such a practice elsewhere. There is therefore a need to re-assess the situation to ensure a reduction in the present level, as an increase in mobile phone usage is envisaged within the next few years. (authors)

  5. Way-finding during a fire emergency: an experimental study in a virtual environment.

    Science.gov (United States)

    Meng, Fanxing; Zhang, Wei

    2014-01-01

    Way-finding behaviour and responses during a fire emergency in a virtual environment (VE) were experimentally investigated. Forty participants, divided into two groups, were required to find the emergency exit as soon as possible in a virtual hotel building, under condition 1 (VE without virtual fire, control group) or condition 2 (VE with virtual fire, treatment group). Compared to the control group, the treatment group showed significantly higher skin conductivity and heart rate, experienced more stress, took longer to notice the evacuation signs, searched visually more quickly, and took longer to escape to the exit. These results indicate that the treatment condition induced higher physiological and psychological stress and influenced escape behaviour compared to the control condition. In practice, fire evacuation education and fire evacuation system design should consider these response characteristics in a fire emergency. PMID:24697193

  6. Natural perceptual wayfinding for urban accessibility of the elderly with early-stage AD

    Directory of Open Access Journals (Sweden)

    Giuliana Frau

    2015-04-01

    Full Text Available Population ageing and the increase in neurodegenerative diseases that lead to dementia, together with growing urbanisation, cause us to reflect on an important aspect of life in the city for elderly people: the ability to move around independently without getting lost and to find their way back home. By reviewing the existing literature on the theme of wayfinding and analysing some data on residual capacities in the early stages of Alzheimer’s Disease, the concept of ‘natural perceptual wayfinding’ is introduced, aimed, on the one hand, at improving urban accessibility of people with dementia and, on the other, at reconsidering a topic of vital importance, even if normally neglected in the dwelling design.

  7. Mapping Cyclists’ Experiences and Agent-Based Modelling of Their Wayfinding Behaviour

    DEFF Research Database (Denmark)

    Snizek, Bernhard

    This dissertation is about modelling cycling transport behaviour. It is partly about urban experiences seen by the cyclist and about modelling, more specifically the agent-based modelling of cyclists' wayfinding behaviour. The dissertation consists of three papers. The first deals … into consideration. The resulting routes' overlap with routes taken from the real world was calculated and used as a qualifier for the capacity of the model to explain the real-world phenomenon. The analyses and the conclusions from these model results are discussed at the end of the paper. CopenhagenABM: An Agent-based … with spatial agents and model calibration data creation. This paper has two objectives, which are to develop and present a method for simulating single GPS-based trajectories by applying an agent-based model, and to acquire parameter values for CopenhagenABM, an agent-based model of cyclists' behaviour. The core …

  8. Structural hippocampal anomalies in a schizophrenia population correlate with navigation performance on a wayfinding task

    Directory of Open Access Journals (Sweden)

    Andrée-Anne Ledoux

    2014-03-01

    Full Text Available Episodic memory, which depends on the hippocampus, has been found to be impaired in schizophrenia. Furthermore, hippocampal anomalies have also been observed in schizophrenia. This study investigated whether average hippocampal grey matter (GM) would differentiate performance on a hippocampus-dependent memory task in patients with schizophrenia and healthy controls. Twenty-one patients with schizophrenia and twenty-two control participants were scanned with an MRI while being tested on a wayfinding task in a virtual town (e.g., find the grocery store from the school). Regressions were performed for both groups, individually and together, using GM and performance on the wayfinding task. Results indicate that controls successfully completed the task more often than patients, took less time, and made fewer errors. Additionally, controls had significantly more hippocampal GM than patients. Poor performance was associated with a GM decrease in the right hippocampus for both groups. Within-group regressions found an association between right hippocampal GM and performance in controls, and an association between left hippocampal GM and performance in patients. A second analysis revealed that different anatomical GM regions known to be associated with the hippocampus, such as the parahippocampal cortex, amygdala, and medial and orbital prefrontal cortices, covaried with the hippocampus in the control group. Interestingly, the cuneus and cingulate gyrus also covaried with the hippocampus in the patient group, but the orbital frontal cortex did not, supporting the hypothesis of impaired connectivity between the hippocampus and the frontal cortex in schizophrenia. These results have important implications for creating intervention programs aimed at measuring functional and structural changes in the hippocampus in schizophrenia.

  9. Mobile phone-based asthma self-management aid for adolescents (mASMAA): a feasibility study

    OpenAIRE

    Rhee H; Allen J.; Mammen J; Swift M

    2014-01-01

    Hyekyun Rhee,1 James Allen,2 Jennifer Mammen,1 Mary Swift2; 1School of Nursing, 2Department of Computer Science, University of Rochester, Rochester, NY, USA. Purpose: Adolescents report high asthma-related morbidity that can be prevented by adequate self-management of the disease. Therefore, there is a need for a developmentally appropriate strategy to promote effective asthma self-management. Mobile phone-based technology is portable, commonly accessible, and well received by adolescents. The pu...

  10. Cell Phone-Based and Adherence Device Technologies for HIV Care and Treatment in Resource-Limited Settings: Recent Advances.

    Science.gov (United States)

    Campbell, Jeffrey I; Haberer, Jessica E

    2015-12-01

    Numerous cell phone-based and adherence monitoring technologies have been developed to address barriers to effective HIV prevention, testing, and treatment. Because most people living with HIV and AIDS reside in resource-limited settings (RLS), it is important to understand the development and use of these technologies in RLS. Recent research on cell phone-based technologies has focused on HIV education, linkage to and retention in care, disease tracking, and antiretroviral therapy adherence reminders. Advances in adherence devices have focused on real-time adherence monitors, which have been used for both antiretroviral therapy and pre-exposure prophylaxis. Real-time monitoring has recently been combined with cell phone-based technologies to create real-time adherence interventions using short message service (SMS). New developments in adherence technologies are exploring ingestion monitoring and metabolite detection to confirm adherence. This article provides an overview of recent advances in these two families of technologies and includes research on their acceptability and cost-effectiveness when available. It additionally outlines key challenges and needed research as use of these technologies continues to expand and evolve. PMID:26439917

  11. The feasibility of cell phone based electronic diaries for STI/HIV research

    Directory of Open Access Journals (Sweden)

    Hensel Devon J

    2012-06-01

    Full Text Available Abstract Background Self-reports of sensitive, socially stigmatized or illegal behavior are common in STI/HIV research, but can raise challenges in terms of data reliability and validity. The use of electronic data collection tools, including ecological momentary assessment (EMA), can increase the accuracy of this information by allowing a participant to self-administer a survey or diary entry, in their own environment, as close to the occurrence of the behavior as possible. In this paper, we evaluate the feasibility of using cell phone-based EMA as a tool for understanding sexual risk and STI among adult men and women. Methods As part of a larger prospective clinical study on sexual risk behavior and incident STI in clinically recruited adult men and women, participants (N = 243) used study-provided cell phones to complete thrice-daily EMA diaries monitoring individual and partner-specific emotional attributes, non-sexual activities, non-coital or coital sexual behaviors, and contraceptive behaviors. Using these data, we assess feasibility in terms of participant compliance, behavior reactivity, general method acceptability and method efficacy for capturing behaviors. Results Participants were highly compliant with the diary entry protocol and schedule: over the entire 12 study weeks, participants submitted 89.7% (54,914/61,236) of the expected diary entries, with an average of 18.86 of the 21 expected diaries (85.7%) each week. Submission did not differ substantially across gender, race/ethnicity and baseline sexually transmitted infection status. A sufficient volume and range of sexual behaviors were captured, with reporting trends in different legal and illegal behaviors showing small variation over time. Participants found the methodology acceptable, enjoyed the study and felt comfortable participating. Conclusion Achieving the correct medium of data collection can drastically improve, or degrade, the timeliness and quality of an

  12. Mobile phone base stations and adverse health effects: phase 1 of a population-based, cross-sectional study in Germany

    DEFF Research Database (Denmark)

    Blettner, M; Schlehofer, B; Breckenkamp, J; Kowall, B; Schmiedel, S; Reis, U; Potthoff, P; Schüz, J; Berg-Beckhoff, Gabriele

    2009-01-01

    OBJECTIVE: The aim of this first phase of a cross-sectional study from Germany was to investigate whether proximity of residence to mobile phone base stations, as well as risk perception, is associated with health complaints. METHODS: The researchers conducted a population-based, multi-phase, cross-sectional study within the context of a large panel survey regularly carried out by a private research institute in Germany. In the initial phase, reported on in this paper, 30,047 persons from a total of 51,444 who took part in the nationwide survey also answered questions on how mobile phone base stations … participants were concerned about adverse health effects of mobile phone base stations, while an additional 10.3% attributed their personal adverse health effects to the exposure from them. Participants who were concerned about or attributed adverse health effects to mobile phone base stations and those living …

  13. Camera calibration

    OpenAIRE

    Andrade-Cetto, J.

    2001-01-01

    This report is a tutorial on pattern based camera calibration for computer vision. The methods presented here allow for the computation of the intrinsic and extrinsic parameters of a camera. These methods are widely available in the literature, and they are only summarized here as an easy and comprehensive reference for researchers at the Institute and their collaborators.

  14. Gamma camera

    International Nuclear Information System (INIS)

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a two-dimensional position-sensitive radiation detector, the novel system can produce better images than conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  15. Wayfinding: a quality factor in human design approach to healthcare facilities.

    Science.gov (United States)

    Del Nord, R

    1999-01-01

    The specific aim of this paper is the systematic analysis of the interactions and reciprocal conditions existing between the physical space of hospital buildings and the different categories of individuals that come in contact with them. The physical and environmental facilities of hospital architecture often influence both the therapeutic character of the space and the employees. If the values of the individual are to be safeguarded in this context, priority needs to be given to such factors as communication, privacy, etc. This means involving other professional groups, such as psychologists, sociologists and ergonomists, at the hospital building planning stage. This paper outlines the results of research conducted at the University Research Center "TESIS" of Florence to provide a better understanding of design strategies applied to reduce the pathology of spaces within the healthcare environment. The case studies highlight the parameters and possible architectural solutions for wayfinding and the humanization of spaces, with particular emphasis on layouts, technologies, furniture and finishing design. PMID:10622912

  16. Measurements of RF/MW radiation emitted from selected mobile-phone base-stations in Sudan

    International Nuclear Information System (INIS)

    The siting of mobile-phone base stations within populated areas is a source of discomfort to many people. As there is no single, globally agreed safety level for the maximum permissible exposure (MPE) to RF/MW radiation, measurement of the radiation emitted from base stations is a necessity. In this work we surveyed some mobile-phone base stations inside and outside Khartoum city in Sudan. Measurements were taken indoors and outdoors, up to a maximum horizontal distance of about 300 m from the base of each station. The results obtained were then compared to the maximum and minimum MPE values admitted in different countries of the world. The maximum MPE value (i.e. 0.57 mW/cm2) considers only the thermal effects of RF/MW radiation, while other values tend to reduce the exposure limits to the minimum possible for safety considerations (considering non-thermal effects). Some of the values obtained were consistent with some reported biological effects. We recommend the removal of some base stations from sensitive areas such as schools, kindergartens, hostels and hospitals. (author)

  17. Scintillation camera and positron camera

    International Nuclear Information System (INIS)

    A short description is given of earlier forms of the gamma-ray camera. The principle of operation of the scintillation camera is reviewed. Here the locations of scintillations occurring in a flat thallium-activated sodium iodide crystal are determined from the amount of light picked up by a number of phototubes simultaneously viewing the crystal. The signals from the phototubes are fed to a deflection computer circuit which reproduces the scintillations on a cathode-ray tube screen. There they are photographed by a conventional scope camera. Examples are shown of the resolution now obtained, as demonstrated with test phantoms. A discussion is presented of the camera's use in visualizing the thyroid in clinical practice. (author)

  18. Phases in development of an interactive mobile phone-based system to support self-management of hypertension

    Directory of Open Access Journals (Sweden)

    Hallberg I

    2014-05-01

    Full Text Available Inger Hallberg,1,11 Charles Taft,1,11 Agneta Ranerup,2,11 Ulrika Bengtsson,1,11 Mikael Hoffmann,3,10 Stefan Höfer,4 Dick Kasperowski,5 Åsa Mäkitalo,6 Mona Lundin,6 Lena Ring,7,8 Ulf Rosenqvist,9 Karin Kjellgren1,10,11 1Institute of Health and Care Sciences, 2Department of Applied Information Technology, University of Gothenburg, Gothenburg, 3The NEPI Foundation, Linköping, Sweden; 4Department of Medical Psychology, Innsbruck Medical University, Innsbruck, Austria; 5Department of Philosophy, Linguistics and Theory of Science, 6Department of Education, Communication and Learning, University of Gothenburg, Gothenburg, 7Centre for Research Ethics and Bioethics, Uppsala University, 8Department of Use of Medical Products, Medical Products Agency, Uppsala, 9Department of Medical Specialist and Department of Medical and Health Sciences, Linköping University, Motala, 10Department of Medical and Health Sciences, Linköping University, Linköping, 11Centre for Person-Centred Care, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden Abstract: Hypertension is a significant risk factor for heart disease and stroke worldwide. Effective treatment regimens exist; however, treatment adherence rates are poor (30%–50%. Improving self-management may be a way to increase adherence to treatment. The purpose of this paper is to describe the phases in the development and preliminary evaluation of an interactive mobile phone-based system aimed at supporting patients in self-managing their hypertension. A person-centered and participatory framework emphasizing patient involvement was used. An interdisciplinary group of researchers, patients with hypertension, and health care professionals who were specialized in hypertension care designed and developed a set of questions and motivational messages for use in an interactive mobile phone-based system. Guided by the US Food and Drug Administration framework for the development of patient-reported outcome

  19. Assessment of radiofrequency/microwave radiation emitted by the antennas of rooftop-mounted mobile phone base stations

    International Nuclear Information System (INIS)

    Radiofrequency (RF) and microwave (MW) radiation exposures from the antennas of rooftop-mounted mobile telephone base stations have become a serious issue in recent years due to the rapidly evolving technologies in wireless telecommunication systems. In Malaysia, thousands of mobile telephone base stations have been erected all over the country, most of which are mounted on the rooftops. In view of public concerns, measurements of the RF/MW levels emitted by the base stations were carried out in this study. The values were compared with the exposure limits set by several organisations and countries. Measurements were performed at 200 sites around 47 mobile phone base stations. It was found that the RF/MW radiation from these base stations were well below the maximum exposure limits set by various agencies. (authors)

  20. CCD Camera

    Science.gov (United States)

    Roth, Roger R.

    1983-01-01

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown, wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  1. A portable smart phone-based plasmonic nanosensor readout platform that measures transmitted light intensities of nanosubstrates using an ambient light sensor.

    Science.gov (United States)

    Fu, Qiangqiang; Wu, Ze; Xu, Fangxiang; Li, Xiuqing; Yao, Cuize; Xu, Meng; Sheng, Liangrong; Yu, Shiting; Tang, Yong

    2016-05-21

    Plasmonic nanosensors may be used as tools for diagnostic testing in the field of medicine. However, quantification of plasmonic nanosensors often requires complex and bulky readout instruments. Here, we report the development of a portable smart phone-based plasmonic nanosensor readout platform (PNRP) for accurate quantification of plasmonic nanosensors. This device operates by transmitting excitation light from an LED through a nanosubstrate and measuring the intensity of the transmitted light using the ambient light sensor of a smart phone. The device is a cylinder with a diameter of 14 mm, a length of 38 mm, and a gross weight of 3.5 g. We demonstrated the utility of this smart phone-based PNRP by measuring two well-established plasmonic nanosensors with this system. In the first experiment, the device measured the morphology changes of triangular silver nanoprisms (AgNPRs) in an immunoassay for the detection of carcinoembryonic antigen (CEA). In the second experiment, the device measured the aggregation of gold nanoparticles (AuNPs) in an aptamer-based assay for the detection of adenosine triphosphate (ATP). The results from the smart phone-based PNRP were consistent with those from commercial spectrophotometers, demonstrating that the smart phone-based PNRP enables accurate quantification of plasmonic nanosensors. PMID:27137512

  2. A comparative study of radiofrequency emission from roof top mobile phone base station antennas and tower mobile phone base antennas located at some selected cell sites in Accra, Ghana

    International Nuclear Information System (INIS)

    RF radiation exposure from antennas mounted on rooftop mobile phone base stations has become a serious issue in recent years due to rapidly developing wireless telecommunication technologies. The growing number of base stations and their closeness to the general public have led to possible health concerns as a result of exposure to RF radiation. The primary objective of this study was to assess the level of RF radiation emitted from rooftop mobile phone base station antennas and compare the measured results with the guidelines set by the International Commission on Non-Ionizing Radiation Protection (ICNIRP). The maximum and minimum average power densities measured inside buildings at the rooftop sites were 2.46x10-2 and 1.68x10-3 W/m2 respectively, whereas those measured outside buildings at the same rooftop sites were 3.35x10-3 and 7.44x10-5 W/m2 respectively. The public exposure quotient ranged from 3.74x10-10 to 1.31x10-7 inside buildings, whilst that for outside varied from 7.44x10-10 to 1.65x10-6. The occupational exposure quotient inside buildings varied from 1.66x10-11 to 2.11x10-9, whereas that for outside ranged from 3.31x10-9 to 3.30x10-7, all at the rooftop sites. The results obtained for a typical tower base station indicated maximum and minimum average power densities of 4.57x10-1 W/m2 and 7.13x10-3 W/m2 respectively. The public exposure quotient varied from 1.58x10-9 to 1.01x10-7, whilst the occupational exposure quotient ranged from 3.17x10-10 to 2.03x10-8. The power density levels inside buildings at rooftop sites are low compared to those at tower sites. This could be due to high attenuation caused by thick concrete walls and ceilings. The results obtained were found to be in compliance with the ICNIRP and FCC guidance levels of 4.5 W/m2 and 6 W/m2 respectively. (au)
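The exposure-quotient idea used throughout this abstract (measured power density divided by a guideline limit, compliant when below 1) can be illustrated with a small sketch. This is not the study's code, and the study's published quotients were presumably computed against frequency-specific limits, so the numbers here are purely illustrative; only the 4.5 W/m2 ICNIRP public guidance level is taken from the text.

```python
# Illustrative sketch: an exposure quotient is the ratio of a measured power
# density to a guideline limit; values below 1 indicate compliance.
# The default limit is the 4.5 W/m2 ICNIRP public level quoted in the abstract.

def exposure_quotient(power_density_w_m2: float, limit_w_m2: float = 4.5) -> float:
    """Ratio of measured power density to the guideline limit."""
    return power_density_w_m2 / limit_w_m2

# Maximum average indoor rooftop reading reported above: 2.46x10-2 W/m2
q = exposure_quotient(2.46e-2)   # far below 1, i.e. compliant
```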

  3. Mobile phone-based asthma self-management aid for adolescents (mASMAA): a feasibility study

    Directory of Open Access Journals (Sweden)

    Rhee H

    2014-01-01

    Full Text Available Hyekyun Rhee,1 James Allen,2 Jennifer Mammen,1 Mary Swift2 (1School of Nursing, 2Department of Computer Science, University of Rochester, Rochester, NY, USA) Purpose: Adolescents report high asthma-related morbidity that can be prevented by adequate self-management of the disease. Therefore, there is a need for a developmentally appropriate strategy to promote effective asthma self-management. Mobile phone-based technology is portable, commonly accessible, and well received by adolescents. The purpose of this study was to develop and evaluate the feasibility and acceptability of a comprehensive mobile phone-based asthma self-management aid for adolescents (mASMAA) that was designed to facilitate symptom monitoring, treatment adherence, and adolescent–parent partnership. The system used state-of-the-art natural language-understanding technology that allowed teens to use unconstrained English in their texts, and to self-initiate interactions with the system. Materials and methods: mASMAA was developed based on an existing natural dialogue system that supports broad coverage of everyday natural conversation in English. Fifteen adolescent–parent dyads participated in a 2-week trial that involved adolescents' daily scheduled and unscheduled interactions with mASMAA, and parents responding to daily reports on adolescents' asthma condition automatically generated by mASMAA. Subsequently, four focus groups were conducted to systematically obtain user feedback on the system. Frequency data on the daily usage of mASMAA over the 2-week period were tabulated, and content analysis was conducted for the focus group interview data. Results: Response rates for daily text messages were 81%–97% in adolescents. The average number of self-initiated messages to mASMAA was 19 per adolescent. Symptoms were the most common topic of teen-initiated messages. Participants concurred that use of mASMAA improved awareness of symptoms and triggers, promoted treatment adherence and

  4. Integrating mobile-phone based assessment for psychosis into people’s everyday lives and clinical care: a qualitative study

    Directory of Open Access Journals (Sweden)

    Palmier-Claus Jasper E

    2013-01-01

    Full Text Available Abstract Background: Over the past decade policy makers have emphasised the importance of healthcare technology in the management of long-term conditions. Mobile phone-based assessment may be one method of facilitating clinically- and cost-effective intervention, and increasing the autonomy and independence of service users. Recently, text-message and smartphone interfaces have been developed for the real-time assessment of symptoms in individuals with schizophrenia. Little is currently understood about patients’ perceptions of these systems, and how they might be implemented into their everyday routine and clinical care. Method: 24 community-based individuals with non-affective psychosis completed a randomised repeated-measure cross-over design study, in which they filled in self-report questions about their symptoms via text messages on their own phone, or via a purpose-designed software application for Android smartphones, for six days. Qualitative interviews were conducted in order to explore participants’ perceptions and experiences of the devices, and thematic analysis was used to analyse the data. Results: Three themes emerged from the data: (i) the appeal of usability and familiarity, (ii) acceptability, validity and integration into domestic routines, and (iii) perceived impact on clinical care. Although participants generally found the technology non-stigmatising and well integrated into their everyday activities, the repetitiveness of the questions was identified as a likely barrier to long-term adoption. Potential benefits to the quality of care received were seen in terms of assisting clinicians, faster and more efficient data exchange, and aiding patient–clinician communication. However, patients often failed to see the relevance of the systems to their personal situations, and emphasised the threat to the person-centred element of their care. Conclusions: The feedback presented in this paper suggests that patients are conscious of the

  5. Subjective symptoms reported by people living in the vicinity of cellular phone base stations: A review of the studies

    International Nuclear Information System (INIS)

    The problem of the health effects of electromagnetic fields (EMF) emitted by cellular phone base stations evokes much interest, given that people living in their vicinity are subject to continuous exposure to EMF. None of the studies carried out throughout the world have revealed values exceeding the standards adopted by the International Commission on Non-Ionizing Radiation Protection (ICNIRP). A questionnaire was used as the study tool. The results of the questionnaire surveys reveal that people living in the vicinity of base stations report various complaints, mostly of the circulatory system, but also sleep disturbances, irritability, depression, blurred vision, concentration difficulties, nausea, lack of appetite, headache and vertigo. The performed studies showed a relationship between the incidence of individual symptoms, the level of exposure, and the distance between a residential area and a base station. This association was observed in both groups of persons: those who linked their complaints with the presence of the base station and those who did not notice such a relation. Further studies, clinical and questionnaire-based, are needed to explain the background of the reported complaints. (author)

  6. Effect of electromagnetic fields from cellular phone base stations on some physiological and biophysical properties of rats

    International Nuclear Information System (INIS)

    Although the hazards of exposure to EMFs have been observed in different tissues, the mechanism by which EMFs produce such effects still needs to be delineated. The present study aims to monitor possible modulation of different physiological and biophysical properties of organs after exposure to microwaves produced by a mobile phone base station at a frequency of 900 MHz. One hundred and ten pregnant rats were exposed for periods of 5 and 12 weeks at distances of 8, 15 and 25 meters from the station antenna (0.01, 0.05 and 0.036 mW/cm2). The groups exposed for 5 weeks were divided into two halves: one half was used for direct-effect studies and the other for delayed-effect studies (45 days post-irradiation). Haematological investigations demonstrated non-significant changes in red blood cells (RBCs), haemoglobin (Hb), packed cell volume (PCV) and mean corpuscular volume (MCV) of rats exposed for 5 or 12 weeks and of the delayed groups. The young of exposed rats showed no considerable increase in RBCs, Hb and PCV. Significant increases were observed in serum total protein, albumin and globulin levels in rats exposed for 5 and 12 weeks, with a more significant increase in the delayed rats.

  7. Supporting the self-management of hypertension: Patients' experiences of using a mobile phone-based system.

    Science.gov (United States)

    Hallberg, I; Ranerup, A; Kjellgren, K

    2016-02-01

    Globally, hypertension is poorly controlled and its treatment consists mainly of preventive behavior, adherence to treatment and risk-factor management. The aim of this study was to explore patients' experiences of an interactive mobile phone-based system designed to support the self-management of hypertension. Forty-nine patients were interviewed about their experiences of using the self-management system for 8 weeks regarding: (i) daily answers on self-report questions concerning lifestyle, well-being, symptoms, medication intake and side effects; (ii) results of home blood-pressure measurements; (iii) reminders and motivational messages; and (iv) access to a web-based platform for visualization of the self-reports. The audio-recorded interviews were analyzed using qualitative thematic analysis. The patients considered the self-management system relevant for the follow-up of hypertension and found it easy to use, but some provided insight into issues for improvement. They felt that using the system offered benefits, for example, increasing their participation during follow-up consultations; they further perceived that it helped them gain understanding of the interplay between blood pressure and daily life, which resulted in increased motivation to follow treatment. Increased awareness of the importance of adhering to prescribed treatment may be a way to minimize the cardiovascular risks of hypertension. PMID:25903164

  8. Short on camera geometry and camera calibration

    OpenAIRE

    Magnusson, Maria

    2010-01-01

    We will present the basic theory for the camera geometry. Our goal is camera calibration and the tools necessary for this. We start with homogeneous matrices that can be used to describe geometric transformations in a simple manner. Then we consider the pinhole camera model, the simplified camera model that we will show how to calibrate. A camera matrix describes the mapping from the 3D world to a camera image. The camera matrix can be determined through a number of corresponding points measu...
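The mapping the abstract describes, from 3D world points to 2D image points via a 3x4 camera matrix in homogeneous coordinates, can be sketched in a few lines. The intrinsic values below (focal length, principal point) are invented for illustration, not taken from the lecture notes.

```python
# Minimal pinhole-projection sketch: a 3x4 camera matrix P maps homogeneous
# 3D points to homogeneous 2D image points, which are then dehomogenized.

def project(P, X):
    """Apply x = P X in homogeneous coordinates and dehomogenize."""
    x = [sum(P[i][j] * X[j] for j in range(4)) for i in range(3)]
    return (x[0] / x[2], x[1] / x[2])

# Assumed intrinsics: P = K [I | 0] with focal length 800 px and
# principal point (320, 240); values are illustrative only.
P = [[800, 0, 320, 0],
     [0, 800, 240, 0],
     [0,   0,   1, 0]]

# A point 4 m in front of the camera and 1 m to the right projects to:
u, v = project(P, [1.0, 0.0, 4.0, 1.0])   # -> (520.0, 240.0)
```

Camera calibration then amounts to recovering the entries of P from measured point correspondences.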

  9. Optimization of measurement methods for a multi-frequency electromagnetic field from mobile phone base station using broadband EMF meter

    Directory of Open Access Journals (Sweden)

    Paweł Bieńkowski

    2015-10-01

    Full Text Available Background: This paper presents the characteristics of the mobile phone base station (BS) as an electromagnetic field (EMF) source. The most common system configurations and their construction are described. The parameters of the radiated EMF are discussed in the context of access methods and other parameters of the radio transmission. Attention is also paid to the antennas used in this technology. Material and Methods: The influence of individual components of a multi-frequency EMF, most commonly found in BS surroundings, on the resultant EMF strength value indicated by popular broadband EMF meters was analyzed. Examples of the metrological characteristics of the most common EMF probes and 2 measurement scenarios for a multisystem base station, with and without microwave relays, are shown. Results: The presented method for measuring multi-frequency EMF using 2 broadband probes allows for a significant reduction of measurement uncertainty. Equations and formulas that can be used to calculate the actual EMF intensity from multi-frequency sources are shown. They have been verified under laboratory conditions on a standard setup as well as under real conditions in a survey of an existing base station with microwave relays. Conclusions: The presented measurement methodology for multi-frequency EMF from BS with microwave relays was validated both in laboratory and real conditions. It has been proven that the described methodology is the optimal approach to the evaluation of EMF exposure in BS surroundings. Alternative approaches with much greater uncertainty (the precaution method) or a more complex measuring procedure (the source-exclusion method) are also presented. Med Pr 2015;66(5):701–712
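The two-probe idea behind the method can be sketched as follows: each broadband probe responds with a different, known sensitivity to the two frequency components (e.g. the BS bands and a microwave relay), so two readings give a solvable 2x2 linear system for the per-band intensities. This is a hedged illustration of the principle only; the sensitivity and reading values are invented, not the paper's equations.

```python
# Hypothetical two-probe sketch: solve m1 = a11*S1 + a12*S2 and
# m2 = a21*S1 + a22*S2 for the band intensities S1, S2 by Cramer's rule.
# The a_ij are the probes' (assumed known) sensitivities in each band.

def solve_two_bands(m1, m2, a11, a12, a21, a22):
    """Recover two band intensities from two broadband probe readings."""
    det = a11 * a22 - a12 * a21
    s1 = (m1 * a22 - m2 * a12) / det
    s2 = (a11 * m2 - a21 * m1) / det
    return s1, s2

# Probe A: full sensitivity in both bands; probe B: 1.0 in band 1, 0.2 in band 2.
s1, s2 = solve_two_bands(m1=3.0, m2=1.8, a11=1.0, a12=1.0, a21=1.0, a22=0.2)
# s1 = 1.5, s2 = 1.5 (illustrative component intensities)
```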

  10. Clinically defined non-specific symptoms in the vicinity of mobile phone base stations: A retrospective before-after study.

    Science.gov (United States)

    Baliatsas, Christos; van Kamp, Irene; Bolte, John; Kelfkens, Gert; van Dijk, Christel; Spreeuwenberg, Peter; Hooiveld, Mariette; Lebret, Erik; Yzermans, Joris

    2016-09-15

    The number of mobile phone base stations (MPBS) has been increasing to meet rapid technological changes and growing needs for mobile communication. The primary objective of the present study was to test possible changes in the prevalence and number of non-specific symptoms (NSS) in relation to MPBS exposure before and after an increase in installed MPBS antennas. A retrospective cohort study was conducted, comparing two time periods with high contrast in terms of the number of installed MPBS. Symptom data were based on electronic health records from 1069 adult participants, registered in 9 general practices in different regions of the Netherlands. All participants were living within 500 m of the nearest base station. Among them, 55 participants reported being sensitive to MPBS at T1. A propagation model combined with a questionnaire was used to assess indoor exposure to RF-EMF from MPBS at T1. Estimation of exposure at T0 was based on the number of antennas at T0 relative to T1. At T1, there was a >30% increase in the total number of MPBS antennas. A higher prevalence of most NSS was observed in the MPBS-sensitive group at T1 compared to baseline. Exposure estimates were not associated with GP-registered NSS in the total sample. Some significant interactions were observed between MPBS sensitivity and exposure estimates on the risk of symptoms. Using clinically defined outcomes and a time difference of >6 years, it was demonstrated that RF-EMF exposure from MPBS was not associated with the development of NSS. Nonetheless, there was some indication of a higher risk of NSS for the MPBS-sensitive group, mainly in relation to exposure to UMTS, but this should be interpreted with caution. Results have to be verified by future longitudinal studies with a particular focus on potentially susceptible population subgroups, a large sample size and integrated exposure assessment. PMID:27219506

  11. Non-specific physical symptoms in relation to actual and perceived proximity to mobile phone base stations and powerlines

    Directory of Open Access Journals (Sweden)

    Bolte John

    2011-06-01

    Full Text Available Abstract Background: Evidence about a possible causal relationship between non-specific physical symptoms (NSPS) and exposure to electromagnetic fields (EMF) emitted by sources such as mobile phone base stations (BS) and powerlines is insufficient. So far, little epidemiological research has been published on the contribution of psychological components to the occurrence of EMF-related NSPS. The primary objective of the current study is to explore the relative importance of actual and perceived proximity to base stations and of psychological components as determinants of NSPS, adjusting for demographic, residency and area characteristics. Methods: Analysis was performed on data obtained in a cross-sectional study on environment and health conducted in 2006 in the Netherlands. In the current study, 3611 adult respondents (response rate: 37%) in twenty-two Dutch residential areas completed a questionnaire. Self-reported instruments included a symptom checklist and assessments of environmental and psychological characteristics. The computation of the distance between household addresses and the locations of base stations and powerlines was based on geo-coding. Multilevel regression models were used to test the hypotheses regarding the determinants related to the occurrence of NSPS. Results: After adjustment for demographic and residential characteristics, the analyses yielded a number of statistically significant associations: increased report of NSPS was predominantly predicted by higher levels of self-reported environmental sensitivity; perceived proximity to base stations and powerlines, lower perceived control and increased avoidance (coping) behavior were also associated with NSPS. A trend towards a moderator effect of perceived environmental sensitivity on the relation between perceived proximity to BS and NSPS was verified (p = 0.055). There was no significant association between symptom occurrence and actual distance to BS or powerlines. Conclusions: Perceived proximity to BS

  12. A web- and mobile phone-based intervention to prevent obesity in 4-year-olds (MINISTOP): a population-based randomized controlled trial

    OpenAIRE

    Delisle, Christine; Sandin, Sven; Forsum, Elisabet; Henriksson, Hanna; Trolle-Lagerros, Ylva; Larsson, Christel; Maddison, Ralph; Ortega Porcel, Francisco B.; Ruiz, Jonatan R.; Silfvernagel, Kristin; Timpka, Toomas; Löf, Marie

    2015-01-01

    Background: Childhood obesity is an increasing health problem globally. Overweight and obesity may be established as early as 2-5 years of age, highlighting the need for evidence-based effective prevention and treatment programs early in life. In adults, mobile phone-based interventions for weight management (mHealth) have demonstrated positive effects on body mass; however, their use in child populations has yet to be examined. The aim of this paper is to report the study design and methodol...

  13. E-Rehabilitation – an Internet and mobile phone based tailored intervention to enhance self-management of Cardiovascular Disease: study protocol for a randomized controlled trial

    OpenAIRE

    Antypas Konstantinos; Wangberg Silje C

    2012-01-01

    Abstract Background Cardiac rehabilitation is very important for the recovery and the secondary prevention of cardiovascular disease, and one of its main strategies is to increase the level of physical activity. Internet and mobile phone based interventions have been successfully used to help people to achieve this. One of the components that are related to the efficacy of these interventions is tailoring of content to the individual. This trial is studying the effect of a longitudinally tail...

  14. Are people living next to mobile phone base stations more strained? Relationship of health concerns, self-estimated distance to base station, and psychological parameters

    OpenAIRE

    Augner Christoph; Hacker Gerhard

    2009-01-01

    Background and Aims: Coeval with the expansion of mobile phone technology and the associated obvious presence of mobile phone base stations, some people living close to these masts reported symptoms they attributed to electromagnetic fields (EMF). Public and scientific discussions arose with regard to whether these symptoms were due to EMF or were nocebo effects. The aim of this study was to find out if people who believe that they live close to base stations show psychological or psychobiol...

  15. Protocol and Recruitment Results from a Randomized Controlled Trial Comparing Group Phone-Based versus Newsletter Interventions for Weight Loss Maintenance among Rural Breast Cancer Survivors

    OpenAIRE

    Befort, Christie A; Klemp, Jennifer R.; Fabian, Carol; Perri, Michael G; Sullivan, Debra K.; Schmitz, Kathryn H; Diaz, Francisco J.; Shireman, Theresa

    2014-01-01

    Obesity is a risk factor for breast cancer recurrence and death. Women who reside in rural areas have higher obesity prevalence and suffer from breast cancer treatment-related disparities compared to urban women. The objective of this 5-year randomized controlled trial is to compare methods for delivering extended care for weight loss maintenance among rural breast cancer survivors. Group phone-based counseling via conference calls addresses access barriers, is more cost-effective than indivi...

  16. Proactive PTZ Camera Control

    Science.gov (United States)

    Qureshi, Faisal Z.; Terzopoulos, Demetri

    We present a visual sensor network—comprising wide field-of-view (FOV) passive cameras and pan/tilt/zoom (PTZ) active cameras—capable of automatically capturing closeup video of selected pedestrians in a designated area. The passive cameras can track multiple pedestrians simultaneously and any PTZ camera can observe a single pedestrian at a time. We propose a strategy for proactive PTZ camera control where cameras plan ahead to select optimal camera assignment and handoff with respect to predefined observational goals. The passive cameras supply tracking information that is used to control the PTZ cameras.
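The assignment step described above (each PTZ camera can observe only one pedestrian at a time, so cameras must be paired with targets) can be sketched with a toy greedy matcher. This is not the authors' planning algorithm, which reasons about future states and handoffs; the cost values and names below are invented for illustration.

```python
# Toy sketch of one-to-one camera/pedestrian assignment: pick the cheapest
# remaining (camera, pedestrian) pair until no camera or pedestrian is free.
# Costs could encode predicted slew distance or handoff penalty.

def assign_ptz(costs):
    """costs[(cam, ped)] -> cost; returns a greedy one-to-one cam->ped map."""
    assignment, used_cams, used_peds = {}, set(), set()
    for (cam, ped), _ in sorted(costs.items(), key=lambda kv: kv[1]):
        if cam not in used_cams and ped not in used_peds:
            assignment[cam] = ped
            used_cams.add(cam)
            used_peds.add(ped)
    return assignment

costs = {("ptz1", "p1"): 2.0, ("ptz1", "p2"): 5.0,
         ("ptz2", "p1"): 1.0, ("ptz2", "p2"): 4.0}
# Greedy picks ptz2->p1 (cost 1.0) first, leaving ptz1->p2.
assignment = assign_ptz(costs)   # {'ptz2': 'p1', 'ptz1': 'p2'}
```

An optimal (rather than greedy) pairing would use the Hungarian algorithm, but the greedy version keeps the idea visible in a few lines.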

  17. Telemonitoring and Mobile Phone-Based Health Coaching Among Finnish Diabetic and Heart Disease Patients: Randomized Controlled Trial

    Science.gov (United States)

    Karhula, Tuula; Rääpysjärvi, Katja; Pakanen, Mira; Itkonen, Pentti; Tepponen, Merja; Junno, Ulla-Maija; Jokinen, Tapio; van Gils, Mark; Lähteenmäki, Jaakko; Kohtamäki, Kari; Saranummi, Niilo

    2015-01-01

    Background There is a strong will and need to find alternative models of health care delivery driven by the ever-increasing burden of chronic diseases. Objective The purpose of this 1-year trial was to study whether a structured mobile phone-based health coaching program, which was supported by a remote monitoring system, could be used to improve the health-related quality of life (HRQL) and/or the clinical measures of type 2 diabetes and heart disease patients. Methods A randomized controlled trial was conducted among type 2 diabetes patients and heart disease patients of the South Karelia Social and Health Care District. Patients were recruited by sending invitations to randomly selected patients using the electronic health records system. Health coaches called patients every 4 to 6 weeks and patients were encouraged to self-monitor their weight, blood pressure, blood glucose (diabetics), and steps (heart disease patients) once per week. The primary outcome was HRQL measured by the Short Form (36) Health Survey (SF-36) and glycosylated hemoglobin (HbA1c) among diabetic patients. The clinical measures assessed were blood pressure, weight, waist circumference, and lipid levels. Results A total of 267 heart patients and 250 diabetes patients started in the trial, of which 246 and 225 patients concluded the end-point assessments, respectively. Withdrawal from the study was associated with the patients’ unfamiliarity with mobile phones—of the 41 dropouts, 85% (11/13) of the heart disease patients and 88% (14/16) of the diabetes patients were familiar with mobile phones, whereas the corresponding percentages were 97.1% (231/238) and 98.6% (208/211), respectively, among the rest of the patients (P=.02 and P=.004). Withdrawal was also associated with heart disease patients’ comorbidities—40% (8/20) of the dropouts had at least one comorbidity, whereas the corresponding percentage was 18.9% (47/249) among the rest of the patients (P=.02). The intervention showed

  18. Mobile phone base stations and adverse health effects: phase 2 of a cross-sectional study with measured radio frequency electromagnetic fields

    DEFF Research Database (Denmark)

    Berg-Beckhoff, Gabriele; Blettner, M; Kowall, B;

    2009-01-01

    OBJECTIVE: The aim of the cross-sectional study was to test the hypothesis that exposure to continuous low-level radio frequency electromagnetic fields (RF-EMFs) emitted from mobile phone base stations was related to various health disturbances. METHODS: For the investigation people living mainly...... stations affected their health and they gave information on sleep disturbances, headaches, health complaints and mental and physical health using standardised health questionnaires. Information on stress was also collected. Multiple linear regression models were used with health outcomes as dependent...

  19. Vacuum Camera Cooler

    Science.gov (United States)

    Laugen, Geoffrey A.

    2011-01-01

    Acquiring cheap, moving video was impossible in a vacuum environment, due to camera overheating. This overheating is brought on by the lack of cooling media in vacuum. A water-jacketed camera cooler enclosure machined and assembled from copper plate and tube has been developed. The camera cooler (see figure) is cup-shaped and cooled by circulating water or nitrogen gas through copper tubing. The camera, a store-bought "spy type," is not designed to work in a vacuum. With some modifications the unit can be thermally connected when mounted in the cup portion of the camera cooler. The thermal conductivity is provided by copper tape between parts of the camera and the cooled enclosure. During initial testing of the demonstration unit, the camera cooler kept the CPU (central processing unit) of this video camera at operating temperature. This development allowed video recording of an in-progress test, within a vacuum environment.

  20. Harpicon camera for HDTV

    Science.gov (United States)

    Tanada, Jun

    1992-08-01

    Ikegami has been involved in broadcast equipment ever since it was established as a company. In conjunction with NHK it has brought forth countless television cameras, from black-and-white cameras to color cameras, HDTV cameras, and special-purpose cameras. In the early days of HDTV (high-definition television, also known as "High Vision") cameras, the specifications were different from those for the cameras of the present-day system, and cameras using all kinds of components, having different arrangements of components, and having different appearances were developed into products, with time spent on experimentation, design, fabrication, adjustment, and inspection. But recently the know-how built up thus far in components, printed circuit boards, and wiring methods has been incorporated in camera fabrication, making it possible to make HDTV cameras by methods similar to the present system. In addition, more efficient production, lower costs, and better after-sales service are being achieved by using the same circuits, components, mechanism parts, and software for both HDTV cameras and cameras that operate by the present system.

  1. Digital Pinhole Camera

    Science.gov (United States)

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…

  2. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction completely depend on the camera, since the camera defines the player’s point of view. Most research works in automatic camera control aim to take control of this aspect from the player to automatically generate cinematographic game experiences, reducing, however, the player’s feeling of agency. We propose a methodology to integrate the player into the camera control loop that allows designing and generating personalised cinematographic experiences. Furthermore, we present an evaluation of the aforementioned methodology showing that the generated camera movements are positively perceived by novice and intermediate players.

  3. Automated Camera Calibration

    Science.gov (United States)

    Chen, Siqi; Cheng, Yang; Willson, Reg

    2006-01-01

    Automated Camera Calibration (ACAL) is a computer program that automates the generation of calibration data for camera models used in machine vision systems. Machine vision camera models describe the mapping between points in three-dimensional (3D) space in front of the camera and the corresponding points in two-dimensional (2D) space in the camera's image. Calibrating a camera model requires a set of calibration data containing known 3D-to-2D point correspondences for the given camera system. Generating calibration data typically involves taking images of a calibration target where the 3D locations of the target's fiducial marks are known, and then measuring the 2D locations of the fiducial marks in the images. ACAL automates the analysis of calibration target images and greatly speeds the overall calibration process.
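The calibration step that consumes these 3D-to-2D correspondences can be illustrated with a minimal Direct Linear Transform (DLT) sketch. This is not ACAL's code: production calibrators use coordinate normalisation, SVD, distortion models and many more fiducials; the camera matrix and fiducial points below are synthetic, and the matrix is normalised by fixing its bottom-right entry to 1.

```python
# DLT sketch: estimate a 3x4 camera matrix P (with P[2][3] fixed to 1) from
# known 3D->2D correspondences by solving the resulting linear system.

def dlt(points3d, points2d):
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(points3d, points2d):
        # From u = (row1.Xh)/(row3.Xh): row1.Xh - u*(row3.Xh) = 0, p34 = 1
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z]); b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z]); b.append(v)
    p = lstsq(A, b)  # 11 unknowns
    return [p[0:4], p[4:8], p[8:11] + [1.0]]

def lstsq(A, b):
    """Solve the normal equations (A^T A) x = A^T b by Gauss-Jordan elimination."""
    n, m = len(A[0]), len(A)
    M = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
         + [sum(A[k][i] * b[k] for k in range(m))] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def proj(P, X):
    """Project a 3D point with camera matrix P (homogeneous, then dehomogenize)."""
    h = [sum(P[i][j] * (list(X) + [1])[j] for j in range(4)) for i in range(3)]
    return (h[0] / h[2], h[1] / h[2])

# Synthetic ground truth: project invented 3D fiducials with a known camera.
P_true = [[16, 0, 6.4, 2], [0, 16, 4.8, 4], [0, 0, 0.2, 1]]
pts3d = [(0, 0, 5), (1, 0, 5), (0, 1, 6), (1, 1, 4), (2, 1, 7), (1, 2, 6)]
pts2d = [proj(P_true, X) for X in pts3d]
P_est = dlt(pts3d, pts2d)  # should recover P_true up to numerical error
```

With noisy 2D measurements the same least-squares machinery still applies; it then returns the matrix minimising the algebraic residual rather than an exact fit.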

  4. Results of a cross-sectional study on the association of electromagnetic fields emitted from mobile phone base stations and health complaints

    International Nuclear Information System (INIS)

    Background: Despite the fact that adverse health effects have not been confirmed for exposure to radiofrequency electromagnetic field (RF-EMF) levels below the limit values defined in the guidelines of the International Commission on Non-Ionizing Radiation Protection, many persons are worried about possible adverse health effects caused by the RF-EMF emitted from mobile phone base stations, or they attribute unspecific health complaints such as headache or sleep disturbances to these fields. Method: In the framework of a cross-sectional study, a questionnaire was sent to 4150 persons living in predominantly urban areas. Participants were asked whether base stations affected their health. Health complaints were measured with standardized health questionnaires for sleep disturbances, headache, general health complaints and mental and physical health. 3,526 persons (85%) responded to the questionnaire and 1,808 (51%) agreed to dosimetric measurements in their flats. Exposure was measured in 1,500 flats. Results: The measurements in the bedrooms in most cases showed very low exposure values, most often below the sensitivity limit of the dosimeter. No association was found between exposure and the occurrence of health complaints, but there was an association between the attribution of adverse health effects to base stations and the occurrence of health complaints. Conclusions: Concerns about health and the attribution of adverse health effects to mobile phone base stations should nevertheless be taken seriously and require risk communication with the concerned persons. Future research should focus on the processes of perception and appraisal of RF-EMF risks, and ascertain the determinants of concerns and attributions in the context of RF-EMF. (orig.)

  5. Are people living next to mobile phone base stations more strained? Relationship of health concerns, self-estimated distance to base station, and psychological parameters

    Directory of Open Access Journals (Sweden)

    Augner Christoph

    2009-01-01

    Full Text Available Background and Aims: Coeval with the expansion of mobile phone technology and the associated obvious presence of mobile phone base stations, some people living close to these masts reported symptoms they attributed to electromagnetic fields (EMF). Public and scientific discussions arose with regard to whether these symptoms were due to EMF or were nocebo effects. The aim of this study was to find out whether people who believe that they live close to base stations show psychological or psychobiological differences that would indicate more strain or stress. Furthermore, we wanted to detect the relevant connections linking self-estimated distance between home and the nearest mobile phone base station (DBS), daily mobile phone use (MPU), EMF-related health concerns, electromagnetic hypersensitivity, and psychological strain parameters. Design, Materials and Methods: Fifty-seven participants completed standardized and non-standardized questionnaires that focused on the relevant parameters. In addition, saliva samples were used to assess psychobiological strain via concentrations of alpha-amylase, cortisol, immunoglobulin A (IgA), and substance P. Results: Self-declared base station neighbors (DBS ≤ 100 meters) had significantly higher concentrations of alpha-amylase in their saliva, and higher scores on symptom checklist (SCL) subscales (somatization, obsessive-compulsive, anxiety, phobic anxiety) and the global strain index PST (Positive Symptom Total). There were no differences in EMF-related health concern scales. Conclusions: We conclude that self-declared base station neighbors are more strained than others. EMF-related health concerns cannot explain these findings. Further research should identify whether actual EMF exposure or other factors are responsible for these results.

  6. Protocol and recruitment results from a randomized controlled trial comparing group phone-based versus newsletter interventions for weight loss maintenance among rural breast cancer survivors.

    Science.gov (United States)

    Befort, Christie A; Klemp, Jennifer R; Fabian, Carol; Perri, Michael G; Sullivan, Debra K; Schmitz, Kathryn H; Diaz, Francisco J; Shireman, Theresa

    2014-03-01

    Obesity is a risk factor for breast cancer recurrence and death. Women who reside in rural areas have higher obesity prevalence and suffer from breast cancer treatment-related disparities compared to urban women. The objective of this 5-year randomized controlled trial is to compare methods for delivering extended care for weight loss maintenance among rural breast cancer survivors. Group phone-based counseling via conference calls addresses access barriers, is more cost-effective than individual phone counseling, and provides group support which may be ideal for rural breast cancer survivors who are more likely to have unmet support needs. Women (n=210) diagnosed with Stage 0 to III breast cancer in the past 10 years who are ≥ 3 months out from initial cancer treatments, have a BMI 27-45 kg/m², and have physician clearance were enrolled from multiple cancer centers. During Phase I (months 0 to 6), all women receive a behavioral weight loss intervention delivered through group phone sessions. Women who successfully lose 5% of weight enter Phase II (months 6 to 18) and are randomized to one of two extended care arms: continued group phone-based treatment or a mail-based newsletter. During Phase III, no contact is made (months 18 to 24). The primary outcome is weight loss maintenance from 6 to 18 months. Secondary outcomes include quality of life, serum biomarkers, and cost-effectiveness. This study will provide essential information on how to reach rural survivors in future efforts to establish weight loss support for breast cancer survivors as a standard of care. PMID:24486636

  7. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
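The inter-camera quaternion analyzed above is the relative rotation between the two star camera heads; for a rigid mount it should be constant, so its fluctuations expose measurement noise. A minimal sketch of forming it from two attitude quaternions (the `[w, x, y, z]` storage convention and function names are illustrative assumptions, not GRACE Level-1B conventions):

```python
import numpy as np

def quat_mul(p, q):
    # Hamilton product of quaternions stored as [w, x, y, z].
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def quat_conj(q):
    # Conjugate (inverse, for a unit quaternion).
    return np.array([q[0], -q[1], -q[2], -q[3]])

def inter_camera(q_a, q_b):
    # Relative rotation from camera head A to camera head B;
    # noise analysis looks at how this varies over time.
    return quat_mul(quat_conj(q_a), q_b)
```

For identical attitudes the inter-camera quaternion reduces to the identity rotation `[1, 0, 0, 0]`, which is the baseline any noise or bias signature is measured against.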

  8. Analytical multicollimator camera calibration

    Science.gov (United States)

    Tayman, W.P.

    1978-01-01

    Calibration with the U.S. Geological Survey multicollimator determines the calibrated focal length, the point of symmetry, the radial distortion referred to the point of symmetry, and the asymmetric characteristics of the camera lens. For this project, two cameras were calibrated, a Zeiss RMK A 15/23 and a Wild RC 8. Four test exposures were made with each camera. Results are tabulated for each exposure and averaged for each set. Copies of the standard USGS calibration reports are included. © 1978.
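The radial distortion referred to the point of symmetry is conventionally modeled as an odd polynomial in the radial distance. A minimal sketch of applying such a model (the Brown polynomial form; the coefficients `k1`, `k2` are illustrative, not values from the USGS reports):

```python
import numpy as np

def radial_distortion(x, y, k1, k2, x0=0.0, y0=0.0):
    """Apply symmetric radial distortion about the point of
    symmetry (x0, y0); returns distorted image coordinates."""
    dx, dy = x - x0, y - y0
    r2 = dx*dx + dy*dy               # squared radial distance
    scale = 1.0 + k1*r2 + k2*r2*r2   # polynomial radial scale factor
    return x0 + dx*scale, y0 + dy*scale
```

With both coefficients zero the mapping is the identity, and the displacement grows with distance from the point of symmetry, which is why the calibration reports distortion as a function of radial position.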

  9. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  10. Polarization encoded color camera.

    Science.gov (United States)

    Schonbrun, Ethan; Möller, Guðfríður; Di Caprio, Giuseppe

    2014-03-15

    Digital cameras would be colorblind if they did not have pixelated color filters integrated into their image sensors. Integration of conventional fixed filters, however, comes at the expense of an inability to modify the camera's spectral properties. Instead, we demonstrate a micropolarizer-based camera that can reconfigure its spectral response. Color is encoded into a linear polarization state by a chiral dispersive element and then read out in a single exposure. The polarization encoded color camera is capable of capturing three-color images at wavelengths spanning the visible to the near infrared. PMID:24690806

  11. LSST Camera Optics Design

    Energy Technology Data Exchange (ETDEWEB)

    Riot, V J; Olivier, S; Bauman, B; Pratuch, S; Seppala, L; Gilmore, D; Ku, J; Nordby, M; Foss, M; Antilogus, P; Morgado, N

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel three-mirror telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. The optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting, and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  12. Rapid imaging, detection and quantification of Giardia lamblia cysts using mobile-phone based fluorescent microscopy and machine learning.

    Science.gov (United States)

    Koydemir, Hatice Ceylan; Gorocs, Zoltan; Tseng, Derek; Cortazar, Bingen; Feng, Steve; Chan, Raymond Yan Lok; Burbano, Jordi; McLeod, Euan; Ozcan, Aydogan

    2015-03-01

    Rapid and sensitive detection of waterborne pathogens in drinkable and recreational water sources is crucial for treating and preventing the spread of water related diseases, especially in resource-limited settings. Here we present a field-portable and cost-effective platform for detection and quantification of Giardia lamblia cysts, one of the most common waterborne parasites, which has a thick cell wall that makes it resistant to most water disinfection techniques including chlorination. The platform consists of a smartphone coupled with an opto-mechanical attachment weighing ~205 g, which utilizes a hand-held fluorescence microscope design aligned with the camera unit of the smartphone to image custom-designed disposable water sample cassettes. Each sample cassette is composed of absorbent pads and mechanical filter membranes; a membrane with 8 μm pore size is used as a porous spacing layer to prevent the backflow of particles to the upper membrane, while the top membrane with 5 μm pore size is used to capture the individual Giardia cysts that are fluorescently labeled. A fluorescence image of the filter surface (field-of-view: ~0.8 cm²) is captured and wirelessly transmitted via the mobile phone to our servers for rapid processing using a machine learning algorithm that is trained on statistical features of Giardia cysts to automatically detect and count the cysts captured on the membrane. The results are then transmitted back to the mobile phone in less than 2 minutes and are displayed through a smart application running on the phone. This mobile platform, along with our custom-developed sample preparation protocol, enables analysis of large volumes of water (e.g., 10-20 mL) for automated detection and enumeration of Giardia cysts in ~1 hour, including all the steps of sample preparation and analysis. We evaluated the performance of this approach using flow-cytometer-enumerated Giardia-contaminated water samples, demonstrating an average cyst capture
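The automated counting step can be illustrated, in greatly simplified form, by thresholding the fluorescence image and counting connected bright regions. This is only a sketch of the counting idea; the platform described above uses a machine learning classifier trained on statistical features of the cysts, which this stand-in does not attempt to reproduce:

```python
import numpy as np

def count_bright_spots(img, thresh):
    """Count connected bright regions (candidate labeled cysts) in a
    grayscale fluorescence image: threshold, then 4-connected flood
    fill to merge touching pixels into single detections."""
    mask = img > thresh
    seen = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                count += 1                      # new connected region
                stack = [(i, j)]
                seen[i, j] = True
                while stack:                    # flood fill the region
                    a, b = stack.pop()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if (0 <= na < h and 0 <= nb < w
                                and mask[na, nb] and not seen[na, nb]):
                            seen[na, nb] = True
                            stack.append((na, nb))
    return count
```

In practice a real pipeline would also filter regions by size and shape before counting, since debris can fluoresce as well.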

  13. Camera Operator and Videographer

    Science.gov (United States)

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  14. The Circular Camera Movement

    DEFF Research Database (Denmark)

    Hansen, Lennard Højbjerg

    2014-01-01

    It has been an accepted precept in film theory that specific stylistic features do not express specific content. Nevertheless, it is possible to find many examples in the history of film in which stylistic features do express specific content: for instance, the circular camera movement is used...... circular camera movement. Keywords: embodied perception, embodied style, explicit narration, interpretation, style pattern, television style...

  15. CCD Luminescence Camera

    Science.gov (United States)

    Janesick, James R.; Elliott, Tom

    1987-01-01

    New diagnostic tool used to understand performance and failures of microelectronic devices. Microscope integrated with low-noise charge-coupled-device (CCD) camera to produce new instrument for analyzing performance and failures of microelectronic devices that emit infrared light during operation. CCD camera also used to identify very clearly parts that have failed where luminescence is typically found.

  16. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped significantly, opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.

  17. Structured light camera calibration

    Science.gov (United States)

    Garbat, P.; Skarbek, W.; Tomaszewski, M.

    2013-03-01

    The structured light camera being designed through the joint effort of the Institute of Radioelectronics and the Institute of Optoelectronics (both large units of the Warsaw University of Technology within the Faculty of Electronics and Information Technology) combines various contemporary hardware and software technologies. In hardware, it integrates a high-speed stripe projector and a stripe camera together with a standard high-definition video camera. In software, it is supported by sophisticated calibration techniques which enable the development of advanced applications such as a real-time 3D viewer of moving objects with a free viewpoint, or a 3D modeller for still objects.

  18. Camera as Cultural Critique

    DEFF Research Database (Denmark)

    Suhr, Christian

    2015-01-01

    What does the use of cameras entail for the production of cultural critique in anthropology? Visual anthropological analysis and cultural critique start at the very moment a camera is brought into the field or existing visual images are engaged. The framing, distances, and interactions between researchers, cameras, and filmed subjects already inherently comprise analytical decisions. It is these ethnographic qualities inherent in audiovisual and photographic imagery that make it of particular value to a participatory anthropological enterprise that seeks to resist analytic closure and seeks instead...

  19. Streak camera time calibration procedures

    Science.gov (United States)

    Long, J.; Jackson, I.

    1978-01-01

    Time calibration procedures for streak cameras utilizing a modulated laser beam are described. The time calibration determines a writing rate accuracy of 0.15% with a rotating mirror camera and 0.3% with an image converter camera.

  20. PDA-phone-based instant transmission of radiological images over a CDMA network by combining the PACS screen with a Bluetooth-interfaced local wireless link.

    Science.gov (United States)

    Kim, Dong Keun; Yoo, Sun K; Park, Jeong Jin; Kim, Sun Ho

    2007-06-01

    Remote teleconsultation by specialists is important for timely, correct, and specialized emergency surgical and medical decision making. In this paper, we designed a new personal digital assistant (PDA)-phone-based emergency teleradiology system by combining cellular communication with Bluetooth-interfaced local wireless links. The mobility and portability resulting from the use of PDAs and wireless communication can provide a more effective means of emergency teleconsultation without requiring the user to be limited to a fixed location. Moreover, it enables synchronized radiological image sharing between the attending physician in the emergency room and the remote specialist on picture archiving and communication system terminals without distorted image acquisition. To enable rapid and fine-quality radiological image transmission over a cellular network in a secure manner, progressive compression and security mechanisms have been incorporated. The proposed system is tested over a Code Division Multiple Access (CDMA) 1x Evolution-Data Only network to evaluate the performance and to demonstrate the feasibility of this system in a real-world setting. PMID:17505870
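Progressive compression, as used above for rapid transmission over a cellular link, sends a coarse rendition of the image first and refines it as more data arrives. A minimal sketch of the idea using a block-averaging image pyramid, coarsest level transmitted first (the paper's actual codec is not specified here; this is an illustrative stand-in):

```python
import numpy as np

def build_pyramid(img, levels=3):
    """Coarse-to-fine pyramid for progressive transmission: each
    level halves the resolution by 2x2 block averaging. The list is
    returned coarsest-first, i.e. in transmission order."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        a = pyr[-1]
        h, w = a.shape[0] // 2 * 2, a.shape[1] // 2 * 2
        a = a[:h, :w]  # crop to even dimensions before averaging
        coarse = (a[0::2, 0::2] + a[1::2, 0::2]
                  + a[0::2, 1::2] + a[1::2, 1::2]) / 4.0
        pyr.append(coarse)
    return pyr[::-1]
```

The receiver can display the first (coarsest) level immediately and replace it as finer levels arrive, which is the behavior that makes progressive schemes attractive on slow or unreliable links.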

  1. Wayfinding in Social Networks

    Science.gov (United States)

    Liben-Nowell, David

    With the recent explosion of popularity of commercial social-networking sites like Facebook and MySpace, the size of social networks that can be studied scientifically has passed from the scale traditionally studied by sociologists and anthropologists to the scale of networks more typically studied by computer scientists. In this chapter, I will highlight a recent line of computational research into the modeling and analysis of the small-world phenomenon - the observation that typical pairs of people in a social network are connected by very short chains of intermediate friends - and the ability of members of a large social network to collectively find efficient routes to reach individuals in the network. I will survey several recent mathematical models of social networks that account for these phenomena, with an emphasis on both the provable properties of these social-network models and the empirical validation of the models against real large-scale social-network data.
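The routing ability described above is often illustrated with greedy forwarding on a grid augmented with long-range contacts, in the spirit of Kleinberg's small-world model: each node forwards the message to whichever acquaintance is closest to the target. A minimal sketch (the grid, the long-link dictionary, and the Manhattan distance are illustrative assumptions):

```python
def greedy_route(src, dst, n, long_links):
    """Greedy forwarding on an n x n grid: at each step move to the
    neighbour (grid neighbour or long-range contact) closest to dst
    in Manhattan distance. Returns the path of visited nodes."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    path = [src]
    cur = src
    while cur != dst:
        x, y = cur
        nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < n and 0 <= y + dy < n]
        nbrs += long_links.get(cur, [])      # long-range contacts, if any
        nxt = min(nbrs, key=lambda v: dist(v, dst))
        if dist(nxt, dst) >= dist(cur, dst):
            break  # no progress possible (cannot happen on a plain grid)
        path.append(nxt)
        cur = nxt
    return path
```

On a bare grid the route takes one step per unit of Manhattan distance; a single well-placed long-range contact shortens it dramatically, which is the small-world effect in miniature.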

  2. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, briefly describing the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  3. The BCAM Camera

    CERN Document Server

    Hashemi, K S

    2000-01-01

    The BCAM, or Boston CCD Angle Monitor, is a camera looking at one or more light sources. We describe the application of the BCAM to the ATLAS forward muon detector alignment system. We show that the camera's performance is only weakly dependent upon the brightness, focus and diameter of the source image. Its resolution is dominated by turbulence along the external light path. The camera electronics is radiation-resistant. With a field of view of ± 10 mrad, it tracks the bearing of a light source 16 m away with better than 3 µrad accuracy, well within the ATLAS requirements.

  4. TARGETLESS CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    L. Barazzetti

    2012-09-01

    Full Text Available In photogrammetry a camera is considered calibrated if its interior orientation parameters are known. These encompass the principal distance, the principal point position and some Additional Parameters used to model possible systematic errors. The current state of the art for automated camera calibration relies on the use of coded targets to accurately determine the image correspondences. This paper presents a new methodology for the efficient and rigorous photogrammetric calibration of digital cameras which no longer requires the use of targets. A set of images depicting a scene with good texture is sufficient for the extraction of natural corresponding image points. These are automatically matched with feature-based approaches and robust estimation techniques. The successive photogrammetric bundle adjustment retrieves the unknown camera parameters and their theoretical accuracies. Examples, considerations and comparisons with real data and different case studies are illustrated to show the potentialities of the proposed methodology.
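The bundle adjustment mentioned above refines the camera parameters by minimizing the reprojection error of the matched points. A minimal sketch of computing that error for a pinhole model with principal distance `f` and principal point `(cx, cy)` (no Additional Parameters for distortion; all values illustrative):

```python
import numpy as np

def project(points_3d, f, cx, cy):
    """Pinhole projection with principal distance f and principal
    point (cx, cy); points are given in the camera frame (Z > 0)."""
    X, Y, Z = points_3d.T
    return np.stack([f * X / Z + cx, f * Y / Z + cy], axis=1)

def reprojection_rmse(points_3d, observed_2d, f, cx, cy):
    # The scalar a bundle adjustment drives down by jointly refining
    # the camera parameters and the 3D point coordinates.
    residuals = project(points_3d, f, cx, cy) - observed_2d
    return float(np.sqrt((residuals ** 2).mean()))
```

A full adjustment would optimize this residual over all images simultaneously (typically with a sparse Levenberg-Marquardt solver) and propagate the covariance to obtain the theoretical accuracies the abstract refers to.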

  5. Gamma ray camera

    International Nuclear Information System (INIS)

    An improved Anger-type gamma ray camera utilizes a proximity-type image intensifier tube. It has a greater capability for distinguishing between incident and scattered radiation, and greater spatial resolution capabilities

  6. Camera Calibration Using Silhouettes

    OpenAIRE

    Boyer, Edmond

    2005-01-01

    This report addresses the problem of estimating camera parameters from images where only object silhouettes are known. Several modeling applications make use of silhouettes, and while calibration methods are well known when considering points or lines matched along image sequences, the problem appears to be more difficult when considering silhouettes. However, such primitives also encode information on camera parameters, since their associated viewing cones should present a common i...

  7. TOUCHSCREEN USING WEB CAMERA

    Directory of Open Access Journals (Sweden)

    Kuntal B. Adak

    2015-10-01

    Full Text Available In this paper we present a web camera based touchscreen system which uses a simple technique to detect and locate a finger. We have used a camera and a regular screen to achieve our goal. By capturing the video and calculating the position of the finger on the screen, we can determine the touch position and perform some function at that location. Our method is very easy and simple to implement, and our system requirements are less expensive compared to other techniques.

  8. Gamma camera system

    International Nuclear Information System (INIS)

    A detailed description is given of a novel gamma camera which is designed to produce superior images than conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given and the problems and limitations introduced by noise are discussed in full. (U.K.)

  9. Spacecraft camera image registration

    Science.gov (United States)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  10. CAOS-CMOS camera.

    Science.gov (United States)

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point-photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three different brightness simultaneously viewed targets, that is not possible by the CMOS sensor, is achieved by the CAOS-CMOS camera demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems. PMID:27410361
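The dynamic range figures quoted above follow the 20·log10 convention used for image sensors, so 82.06 dB corresponds to a brightness ratio of roughly 12,700:1, versus about 370:1 for the CMOS sensor's rated 51.3 dB. A minimal sketch of the conversion:

```python
import math

def dynamic_range_db(i_max, i_min):
    """Optical dynamic range in dB, using the 20*log10 amplitude
    convention quoted for image sensors."""
    return 20.0 * math.log10(i_max / i_min)
```

For example, `dynamic_range_db(1000.0, 1.0)` gives 60 dB, and inverting the formula, a camera rated at 82.06 dB spans a max-to-min ratio of `10 ** (82.06 / 20)`, about 12,677:1.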

  11. The Dark Energy Camera

    Energy Technology Data Exchange (ETDEWEB)

    Flaugher, B. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States). et al.

    2015-04-11

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250-μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The CCDs have 15μm x 15μm pixels with a plate scale of 0.263" per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.
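The quoted pixel size and plate scale can be cross-checked against each other with the small-angle relation between pixel size and effective focal length. A minimal sketch (a consistency check on the published numbers, not part of the camera's design analysis):

```python
import math

ARCSEC = math.pi / (180 * 3600)  # radians per arcsecond

def effective_focal_length(pixel_size_m, plate_scale_arcsec):
    """Small-angle relation: angular plate scale per pixel equals
    pixel size divided by effective focal length."""
    return pixel_size_m / (plate_scale_arcsec * ARCSEC)
```

With 15 μm pixels at 0.263 arcsec per pixel this gives an effective focal length of about 11.8 m, consistent with the prime focus of a 4-meter telescope at roughly f/3.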

  12. The Dark Energy Camera

    CERN Document Server

    Flaugher, B; Honscheid, K; Abbott, T M C; Alvarez, O; Angstadt, R; Annis, J T; Antonik, M; Ballester, O; Beaufore, L; Bernstein, G M; Bernstein, R A; Bigelow, B; Bonati, M; Boprie, D; Brooks, D; Buckley-Geer, E J; Campa, J; Cardiel-Sas, L; Castander, F J; Castilla, J; Cease, H; Cela-Ruiz, J M; Chappa, S; Chi, E; Cooper, C; da Costa, L N; Dede, E; Derylo, G; DePoy, D L; de Vicente, J; Doel, P; Drlica-Wagner, A; Eiting, J; Elliott, A E; Emes, J; Estrada, J; Neto, A Fausti; Finley, D A; Flores, R; Frieman, J; Gerdes, D; Gladders, M D; Gregory, B; Gutierrez, G R; Hao, J; Holland, S E; Holm, S; Huffman, D; Jackson, C; James, D J; Jonas, M; Karcher, A; Karliner, I; Kent, S; Kessler, R; Kozlovsky, M; Kron, R G; Kubik, D; Kuehn, K; Kuhlmann, S; Kuk, K; Lahav, O; Lathrop, A; Lee, J; Levi, M E; Lewis, P; Li, T S; Mandrichenko, I; Marshall, J L; Martinez, G; Merritt, K W; Miquel, R; Munoz, F; Neilsen, E H; Nichol, R C; Nord, B; Ogando, R; Olsen, J; Palio, N; Patton, K; Peoples, J; Plazas, A A; Rauch, J; Reil, K; Rheault, J -P; Roe, N A; Rogers, H; Roodman, A; Sanchez, E; Scarpine, V; Schindler, R H; Schmidt, R; Schmitt, R; Schubnell, M; Schultz, K; Schurter, P; Scott, L; Serrano, S; Shaw, T M; Smith, R C; Soares-Santos, M; Stefanik, A; Stuermer, W; Suchyta, E; Sypniewski, A; Tarle, G; Thaler, J; Tighe, R; Tran, C; Tucker, D; Walker, A R; Wang, G; Watson, M; Weaverdyck, C; Wester, W; Woods, R; Yanny, B

    2015-01-01

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250 micron thick fully-depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2kx4k CCDs for imaging and 12 2kx2k CCDs for guiding and focus. The CCDs have 15 micron x 15 micron pixels with a plate scale of 0.263 arc sec per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construct...

  13. Commercialization of radiation tolerant camera

    International Nuclear Information System (INIS)

    In this project, a radiation tolerant camera which tolerates a total dose of 10^6 - 10^8 rad was developed. In order to develop the radiation tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was performed. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controller needed for the CCTV camera system (lens, light, and pan/tilt control) was designed on the concept of remote control. Two types of radiation tolerant camera were fabricated, for use in underwater or normal environments. (author)

  14. Camera Calibration: a USU Implementation

    OpenAIRE

    Ma, Lili; Chen, YangQuan; Moore, Kevin L.

    2003-01-01

    The task of camera calibration is to estimate the intrinsic and extrinsic parameters of a camera model. Though there are some restricted techniques to infer the 3-D information about the scene from uncalibrated cameras, effective camera calibration procedures will open up the possibility of using a wide range of existing algorithms for 3-D reconstruction and recognition. The applications of camera calibration include vision-based metrology, robust visual platooning and visual docking of mobil...

  15. Extrinsic recalibration in camera networks

    OpenAIRE

    Hermans, Chris; Dumont, Maarten; Bekaert, Philippe

    2007-01-01

    This work addresses the practical problem of keeping a camera network calibrated during a recording session. When dealing with real-time applications, a robust calibration of the camera network needs to be assured, without the burden of a full system recalibration at every (un)intended camera displacement. In this paper we present an efficient algorithm to detect when the extrinsic parameters of a camera are no longer valid, and reintegrate the displaced camera into the previously calibrated ...

  16. Selective-imaging camera

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at firmware level. The design is consistent with physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.

  17. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations, i.e. automatically controlling the virtual camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot...

  18. Artificial human vision camera

    Science.gov (United States)

    Goudou, J.-F.; Maggio, S.; Fagno, M.

    2014-10-01

    In this paper we present a real-time vision system modeling the human vision system. Our purpose is to draw inspiration from the bio-mechanics of human vision to improve robotic capabilities for tasks such as object detection and tracking. This work first describes the bio-mechanical discrepancies between human vision and classic cameras, and the retinal processing stage that takes place in the eye before the optic nerve. The second part describes our implementation of these principles on a 3-camera optical, mechanical and software model of the human eyes and associated bio-inspired attention model.

  19. The Star Formation Camera

    OpenAIRE

    Scowen, Paul A.; Jansen, Rolf; Beasley, Matthew; Calzetti, Daniela; Desch, Steven; Fullerton, Alex; Gallagher, John; Lisman, Doug; Macenka, Steve; Malhotra, Sangeeta; McCaughrean, Mark; Nikzad, Shouleh; O'Connell, Robert; Oey, Sally; Padgett, Deborah

    2009-01-01

    The Star Formation Camera (SFC) is a wide-field (~15'x19', >280 arcmin^2), high-resolution (18x18 mas pixels) UV/optical dichroic camera designed for the Theia 4-m space-borne space telescope concept. SFC will deliver diffraction-limited images at lambda > 300 nm in both a blue (190-517nm) and a red (517-1075nm) channel simultaneously. Our aim is to conduct a comprehensive and systematic study of the astrophysical processes and environments relevant for the births and life cycles of stars and ...

  20. Advanced Virgo phase cameras

    Science.gov (United States)

    van der Schaaf, L.; Agatsuma, K.; van Beuzekom, M.; Gebyehu, M.; van den Brand, J.

    2016-05-01

    A century after the prediction of gravitational waves, detectors have reached the sensitivity needed to prove their existence. One of them, the Virgo interferometer in Pisa, is presently being upgraded to Advanced Virgo (AdV) and will come into operation in 2016. The power stored in the interferometer arms rises from 20 to 700 kW. This increase is expected to introduce higher order modes in the beam, which could reduce the circulating power in the interferometer, limiting the sensitivity of the instrument. To suppress these higher-order modes, the core optics of Advanced Virgo is equipped with a thermal compensation system. Phase cameras, monitoring the real-time status of the beam, constitute a critical component of this compensation system. These cameras measure the phases and amplitudes of the laser-light fields at the frequencies selected to control the interferometer. The measurement combines heterodyne detection with a scan of the wave front over a photodetector with pin-hole aperture. Three cameras observe the phase front of these laser sidebands. Two of them monitor the input and output of the interferometer arms and the third one is used in the control of the aberrations introduced by the power recycling cavity. In this paper the working principle of the phase cameras is explained and some characteristic parameters are described.

  1. Make a Pinhole Camera

    Science.gov (United States)

    Fisher, Diane K.; Novati, Alexander

    2009-01-01

    On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…

  2. Photogrammetric camera calibration

    Science.gov (United States)

    Tayman, W.P.; Ziemann, H.

    1984-01-01

    Section 2 (Calibration) of the document "Recommended Procedures for Calibrating Photogrammetric Cameras and Related Optical Tests" from the International Archives of Photogrammetry, Vol. XIII, Part 4, is reviewed in the light of recent practical work, and suggestions for changes are made. These suggestions are intended as a basis for a further discussion. ?? 1984.

  3. Communities, Cameras, and Conservation

    Science.gov (United States)

    Patterson, Barbara

    2012-01-01

    Communities, Cameras, and Conservation (CCC) is the most exciting and valuable program the author has seen in her 30 years of teaching field science courses. In this citizen science project, students and community volunteers collect data on mountain lions ("Puma concolor") at four natural areas and public parks along the Front Range of Colorado.…

  4. The LSST Camera Overview

    Energy Technology Data Exchange (ETDEWEB)

    Gilmore, Kirk; Kahn, Steven A.; Nordby, Martin; Burke, David; O'Connor, Paul; Oliver, John; Radeka, Veljko; Schalk, Terry; Schindler, Rafe; /SLAC

    2007-01-10

    The LSST camera is a wide-field optical (0.35-1um) imager designed to provide a 3.5 degree FOV with better than 0.2 arcsecond sampling. The detector format will be a circular mosaic providing approximately 3.2 Gigapixels per image. The camera includes a filter mechanism and shuttering capability. It is positioned in the middle of the telescope where cross-sectional area is constrained by optical vignetting and heat dissipation must be controlled to limit thermal gradients in the optical beam. The fast, f/1.2 beam will require tight tolerances on the focal plane mechanical assembly. The focal plane array operates at a temperature of approximately -100 C to achieve desired detector performance. The focal plane array is contained within an evacuated cryostat, which incorporates detector front-end electronics and thermal control. The cryostat lens serves as an entrance window and vacuum seal for the cryostat. Similarly, the camera body lens serves as an entrance window and gas seal for the camera housing, which is filled with a suitable gas to provide the operating environment for the shutter and filter change mechanisms. The filter carousel can accommodate 5 filters, each 75 cm in diameter, for rapid exchange without external intervention.

  5. The world's fastest camera

    CERN Multimedia

    Piquepaille, Roland

    2006-01-01

    This image processor is not your typical digital camera. It took 20 people 6 years and $6 million to build the "Regional Calorimeter Trigger" (RCT), which will be a component of the Compact Muon Solenoid (CMS) experiment, one of the detectors on the Large Hadron Collider (LHC) in Geneva, Switzerland (1 page)

  6. Image Sensors Enhance Camera Technologies

    Science.gov (United States)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  7. MISR radiometric camera-by-camera Cloud Mask V004

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the Radiometric camera-by-camera Cloud Mask dataset. It is used to determine whether a scene is classified as clear or cloudy. A new parameter...

  8. Efficacy and External Validity of Electronic and Mobile Phone-Based Interventions Promoting Vegetable Intake in Young Adults: A Systematic Review Protocol

    Science.gov (United States)

    Chen, Juliana; Allman-Farinelli, Margaret

    2015-01-01

    Background Despite social marketing campaigns and behavior change interventions, young adults remain among the lowest consumers of vegetables. The digital era offers potential new avenues for both social marketing and individually tailored programs, through texting, web, and mobile applications. The effectiveness and generalizability of such programs have not been well documented. Objective The aim of this systematic review is to evaluate the efficacy and external validity of social marketing, electronic, and mobile phone-based (mHealth) interventions aimed at increasing vegetable intake in young adults. Methods The Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) protocol will be used to conduct this systematic review. The search strategy will be executed across eleven electronic databases using combinations of the following search terms: “online intervention”, “computer-assisted therapy”, “internet”, “website”, “cell phones”, “cyber”, “telemedicine”, “email”, “social marketing”, “social media”, “mass media”, “young adult”, and “fruit and vegetables”. The reference lists of included studies will also be searched for additional citations. Titles and abstracts will be screened against inclusion criteria and full texts of potentially eligible papers will be assessed by two independent reviewers. Data from eligible papers will be extracted. Quality and risk of bias will be assessed using the Effective Public Health Practice Project (EPHPP) Quality Assessment Tool for Quantitative Studies and The Cochrane Collaboration Risk of Bias assessment tool respectively. The external validity of the studies will be determined based on components such as reach, adoption, and representativeness of participants; intervention implementation and adaption; and program maintenance and institutionalization. Results will be reported quantitatively and qualitatively. Results Our research is in progress. A draft

  9. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
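    Remapping the retina-like (log-polar style) pixel layout onto a rectangular display grid generally lands between pixel centres, which is why sub-pixel interpolation is needed. A minimal bilinear-sampling sketch (the paper's implementation is in VC++; this Python function and its names are illustrative):

```python
def bilinear_sample(img, x, y):
    """Sample a 2D grayscale image (list of rows) at fractional coords (x, y)
    using bilinear interpolation of the four surrounding pixels."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)  # clamp at the right/bottom edges
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

img = [[0, 10], [20, 30]]
val = bilinear_sample(img, 0.5, 0.5)  # midpoint of the four corner values
```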

  10. Gamma ray camera

    International Nuclear Information System (INIS)

    An Anger gamma ray camera is improved by the substitution of a gamma ray sensitive, proximity type image intensifier tube for the scintillator screen in the Anger camera. The image intensifier tube has a negatively charged flat scintillator screen, a flat photocathode layer, and a grounded, flat output phosphor display screen, all of which have the same dimension to maintain unit image magnification; all components are contained within a grounded metallic tube, with a metallic, inwardly curved input window between the scintillator screen and a collimator. The display screen can be viewed by an array of photomultipliers or solid state detectors. There are two photocathodes and two phosphor screens to give a two stage intensification, the two stages being optically coupled by a light guide. (author)

  11. Automated Camera Array Fine Calibration

    Science.gov (United States)

    Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang

    2008-01-01

    Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.

  12. Camera Surveillance Quadrotor

    OpenAIRE

    Hjelm, Emil; Yousif, Robert

    2015-01-01

    A quadrotor is a helicopter with four rotors placed at equal distances from the craft's centre of gravity, controlled by letting the different rotors generate different amounts of thrust. It uses various sensors to stay stable in the air, so correct readings from these sensors are critical. By reducing vibrations, electromagnetic interference and external disturbances, the quadrotor's stability can be increased. The purpose of this project is to analyse the feasibility of a quadrotor camera su...

  13. The DRAGO gamma camera

    International Nuclear Information System (INIS)

    In this work, we present the results of the experimental characterization of the DRAGO (DRift detector Array-based Gamma camera for Oncology), a detection system developed for high-spatial resolution gamma-ray imaging. This camera is based on a monolithic array of 77 silicon drift detectors (SDDs), with a total active area of 6.7 cm2, coupled to a single 5-mm-thick CsI(Tl) scintillator crystal. The use of an array of SDDs provides a high quantum efficiency for the detection of the scintillation light together with a very low electronics noise. A very compact detection module based on the use of integrated readout circuits was developed. The performances achieved in gamma-ray imaging using this camera are reported here. When imaging a 0.2 mm collimated 57Co source (122 keV) over different points of the active area, a spatial resolution ranging from 0.25 to 0.5 mm was measured. The depth-of-interaction capability of the detector, thanks to the use of a Maximum Likelihood reconstruction algorithm, was also investigated by imaging a collimated beam tilted to an angle of 45 deg. with respect to the scintillator surface. Finally, the imager was characterized with in vivo measurements on mice, in a real preclinical environment.

  14. The Star Formation Camera

    CERN Document Server

    Scowen, Paul A; Beasley, Matthew; Calzetti, Daniela; Desch, Steven; Fullerton, Alex; Gallagher, John; Lisman, Doug; Macenka, Steve; Malhotra, Sangeeta; McCaughrean, Mark; Nikzad, Shouleh; O'Connell, Robert; Oey, Sally; Padgett, Deborah; Rhoads, James; Roberge, Aki; Siegmund, Oswald; Shaklan, Stuart; Smith, Nathan; Stern, Daniel; Tumlinson, Jason; Windhorst, Rogier; Woodruff, Robert

    2009-01-01

    The Star Formation Camera (SFC) is a wide-field (~15'x19', >280 arcmin^2), high-resolution (18x18 mas pixels) UV/optical dichroic camera designed for the Theia 4-m space-borne space telescope concept. SFC will deliver diffraction-limited images at lambda > 300 nm in both a blue (190-517nm) and a red (517-1075nm) channel simultaneously. Our aim is to conduct a comprehensive and systematic study of the astrophysical processes and environments relevant for the births and life cycles of stars and their planetary systems, and to investigate and understand the range of environments, feedback mechanisms, and other factors that most affect the outcome of the star and planet formation process. This program addresses the origins and evolution of stars, galaxies, and cosmic structure and has direct relevance for the formation and survival of planetary systems like our Solar System and planets like Earth. We present the design and performance specifications resulting from the implementation study of the camera, conducted ...

  15. The DRAGO gamma camera

    Science.gov (United States)

    Fiorini, C.; Gola, A.; Peloso, R.; Longoni, A.; Lechner, P.; Soltau, H.; Strüder, L.; Ottobrini, L.; Martelli, C.; Lui, R.; Madaschi, L.; Belloli, S.

    2010-04-01

    In this work, we present the results of the experimental characterization of the DRAGO (DRift detector Array-based Gamma camera for Oncology), a detection system developed for high-spatial resolution gamma-ray imaging. This camera is based on a monolithic array of 77 silicon drift detectors (SDDs), with a total active area of 6.7 cm2, coupled to a single 5-mm-thick CsI(Tl) scintillator crystal. The use of an array of SDDs provides a high quantum efficiency for the detection of the scintillation light together with a very low electronics noise. A very compact detection module based on the use of integrated readout circuits was developed. The performances achieved in gamma-ray imaging using this camera are reported here. When imaging a 0.2 mm collimated 57Co source (122 keV) over different points of the active area, a spatial resolution ranging from 0.25 to 0.5 mm was measured. The depth-of-interaction capability of the detector, thanks to the use of a Maximum Likelihood reconstruction algorithm, was also investigated by imaging a collimated beam tilted to an angle of 45° with respect to the scintillator surface. Finally, the imager was characterized with in vivo measurements on mice, in a real preclinical environment.

  16. Carbohydrate Estimation by a Mobile Phone-Based System Versus Self-Estimations of Individuals With Type 1 Diabetes Mellitus: A Comparative Study

    Science.gov (United States)

    Dehais, Joachim; Anthimopoulos, Marios; Shevchik, Sergey; Botwey, Ransford Henry; Duke, David; Stettler, Christoph; Diem, Peter

    2016-01-01

    Background Diabetes mellitus is spreading throughout the world and diabetic individuals have been shown to often assess their food intake inaccurately; therefore, it is a matter of urgency to develop automated diet assessment tools. The recent availability of mobile phones with enhanced capabilities, together with the advances in computer vision, have permitted the development of image analysis apps for the automated assessment of meals. GoCARB is a mobile phone-based system designed to support individuals with type 1 diabetes during daily carbohydrate estimation. In a typical scenario, the user places a reference card next to the dish and acquires two images using a mobile phone. A series of computer vision modules detect the plate and automatically segment and recognize the different food items, while their 3D shape is reconstructed. Finally, the carbohydrate content is calculated by combining the volume of each food item with the nutritional information provided by the USDA Nutrient Database for Standard Reference. Objective The main objective of this study is to assess the accuracy of the GoCARB prototype when used by individuals with type 1 diabetes and to compare it to their own performance in carbohydrate counting. In addition, the user experience and usability of the system is evaluated by questionnaires. Methods The study was conducted at the Bern University Hospital, “Inselspital” (Bern, Switzerland) and involved 19 adult volunteers with type 1 diabetes, each participating once. Each study day, a total of six meals of broad diversity were taken from the hospital’s restaurant and presented to the participants. The food items were weighed on a standard balance and the true amount of carbohydrate was calculated from the USDA nutrient database. Participants were asked to count the carbohydrate content of each meal independently and then by using GoCARB. At the end of each session, a questionnaire was completed to assess the user’s experience with Go
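    The final GoCARB step combines each segmented item's reconstructed volume with per-food nutritional data. A minimal sketch of that combination; the food labels, densities and carbohydrate values below are illustrative placeholders, not figures from the USDA database:

```python
# Illustrative per-food data; real GoCARB draws carbohydrate values from the
# USDA Nutrient Database for Standard Reference.
FOOD_DB = {
    "rice":  {"density_g_per_ml": 0.75, "carb_g_per_100g": 28.0},
    "beans": {"density_g_per_ml": 0.80, "carb_g_per_100g": 14.0},
}

def carb_content(segments):
    """segments: list of (food_label, estimated_volume_ml) pairs, as produced
    by the segmentation/recognition and 3D reconstruction stages.
    Returns total carbohydrate in grams."""
    total = 0.0
    for label, volume_ml in segments:
        item = FOOD_DB[label]
        grams = volume_ml * item["density_g_per_ml"]      # volume -> mass
        total += grams * item["carb_g_per_100g"] / 100.0  # mass -> carbs
    return total

meal = [("rice", 200.0), ("beans", 100.0)]
# rice: 150 g of food -> 42 g carbs; beans: 80 g -> 11.2 g carbs
```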

  17. PAU camera: detectors characterization

    Science.gov (United States)

    Casas, Ricard; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; Jiménez, Jorge; Maiorino, Marino; Pío, Cristóbal; Sevilla, Ignacio; de Vicente, Juan

    2012-07-01

    The PAU Camera (PAUCam) [1,2] is a wide field camera that will be mounted at the corrected prime focus of the William Herschel Telescope (Observatorio del Roque de los Muchachos, Canary Islands, Spain) in the coming months. The focal plane of PAUCam is composed of a mosaic of 18 CCD detectors of 2,048 x 4,176 pixels, each with a pixel size of 15 microns, manufactured by Hamamatsu Photonics K.K. This mosaic covers a field of view (FoV) of 60 arcmin (minutes of arc), 40 of which are unvignetted. The behaviour of these 18 devices, plus four spares, and their electronic response must be characterized and optimized for use in PAUCam. This work is being carried out in the laboratories of the ICE/IFAE and the CIEMAT. The electronic optimization of the CCD detectors is performed by means of an OG (Output Gate) scan, maximizing the CTE (Charge Transfer Efficiency) while minimizing the read-out noise. The device characterization itself is obtained with different tests: the photon transfer curve (PTC), which yields the electronic gain, the linearity vs. light stimulus, the full-well capacity and the cosmetic defects; the read-out noise; the dark current; the stability vs. temperature; and the light remanence.
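    In the shot-noise-limited region of the photon transfer curve, signal variance grows linearly with mean level, and the inverse slope gives the electronic gain. A minimal sketch with synthetic flat-field statistics (the actual PAUCam procedure also has to handle read noise and full-well roll-off, omitted here):

```python
def ptc_gain(means, variances):
    """Estimate CCD gain (e-/ADU) from the shot-noise region of a photon
    transfer curve: variance ~ mean / gain, so fit the slope of variance
    vs. mean (least squares through the origin) and invert it."""
    slope = sum(m * v for m, v in zip(means, variances)) / sum(m * m for m in means)
    return 1.0 / slope

# Synthetic flat-field data for a detector with a true gain of 2 e-/ADU:
means = [100.0, 500.0, 1000.0, 5000.0]
variances = [m / 2.0 for m in means]
gain = ptc_gain(means, variances)
```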

  18. Stereoscopic camera design

    Science.gov (United States)

    Montgomery, David J.; Jones, Christopher K.; Stewart, James N.; Smith, Alan

    2002-05-01

    It is clear from the literature that the majority of work in stereoscopic imaging is directed towards the development of modern stereoscopic displays. As costs come down, wider public interest in this technology is expected to increase. This new technology would require new methods of image formation. Advances in stereo computer graphics will of course lead to the creation of new stereo computer games, graphics in films etc. However, the consumer would also like to see real-world stereoscopic images, pictures of family, holiday snaps etc. Such scenery would have wide ranges of depth to accommodate and would need also to cope with moving objects, such as cars, and in particular other people. Thus, the consumer acceptance of auto/stereoscopic displays and 3D in general would be greatly enhanced by the existence of a quality stereoscopic camera. This paper will cover an analysis of existing stereoscopic camera designs and show that they can be categorized into four different types, with inherent advantages and disadvantages. A recommendation is then made with regard to 3D consumer still and video photography. The paper will go on to discuss this recommendation and describe its advantages and how it can be realized in practice.

  19. Novel gamma cameras

    International Nuclear Information System (INIS)

    The gamma-ray cameras described are based on radiation imaging devices which permit the direct recording of the distribution of radioactive material from a radiative source, such as a human organ. They consist in principle of a collimator, a converter matrix converting gamma photons to electrons, an electron image multiplier producing a multiplied electron output, and means for reading out the information. The electron image multiplier is a device which produces a multiplied electron image. It can be, in principle, either a gas avalanche electron multiplier or a multi-channel plate. The multi-channel plate employed is a novel device, described elsewhere. The three described embodiments, in which the converter matrix can be either of metal type or of scintillation crystal type, were designed and are being developed

  20. Neutron Imaging Camera

    Science.gov (United States)

    Hunter, Stanley D.; DeNolfo, Georgia; Floyd, Sam; Krizmanic, John; Link, Jason; Son, Seunghee; Guardala, Noel; Skopec, Marlene; Stark, Robert

    2008-01-01

    We describe the Neutron Imaging Camera (NIC) being developed for DTRA applications by NASA/GSFC and NSWC/Carderock. The NIC is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3-DTI, a large volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident directions of fast neutrons, E_N > 0.5 MeV, are reconstructed from the momenta and energies of the proton and triton fragments resulting from 3He(n,p)3H interactions in the 3-DTI volume. We present angular and energy resolution performance of the NIC derived from accelerator tests.
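    Momentum conservation makes the direction reconstruction simple in principle: with the 3He target taken to be at rest, the incident neutron momentum equals the vector sum of the proton and triton momenta. A minimal sketch (arbitrary momentum units; the numerical values are illustrative, not accelerator data):

```python
def neutron_direction(p_proton, p_triton):
    """Unit vector of the incident neutron momentum, reconstructed as the
    vector sum of the proton and triton fragment momenta from 3He(n,p)3H
    (momentum conservation; the 3He target is assumed at rest)."""
    px, py, pz = (a + b for a, b in zip(p_proton, p_triton))
    norm = (px * px + py * py + pz * pz) ** 0.5
    return (px / norm, py / norm, pz / norm)

# Fragments whose transverse momenta cancel reconstruct a +z neutron:
d = neutron_direction((0.3, 0.1, 1.0), (-0.3, -0.1, 1.0))
```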

  1. Focussed radiographic camera

    International Nuclear Information System (INIS)

    A radiographic camera of the type employing a scintillator to produce optical photons in response to incident gamma and X-radiation is described. A collimator positioned between a subject emitting such radiation and the scintillator guides the radiation to the scintillator, and a detector of optical photons signals the positions of points of impingement of quanta of the incident radiation upon the scintillator to produce an image of the subject. A Fresnel focussing means located alongside the scintillator directs the optical photons to the detector. The Fresnel focussing means takes the form of a segmented mirror at the front surface of the scintillator and a Fresnel lens at the back surface of the scintillator.

  2. Performance evaluation of CCD- and mobile-phone-based near-infrared fluorescence imaging systems with molded and 3D-printed phantoms

    Science.gov (United States)

    Wang, Bohan; Ghassemi, Pejhman; Wang, Jianting; Wang, Quanzeng; Chen, Yu; Pfefer, Joshua

    2016-03-01

    Increasing numbers of devices are emerging which involve biophotonic imaging on a mobile platform. Therefore, effective test methods are needed to ensure that these devices provide a high level of image quality. We have developed novel phantoms for performance assessment of near infrared fluorescence (NIRF) imaging devices. Resin molding and 3D printing techniques were applied for phantom fabrication. Comparisons between two imaging approaches - a CCD-based scientific camera and an NIR-enabled mobile phone - were made based on evaluation of the contrast transfer function and penetration depth. Optical properties of the phantoms were evaluated, including absorption and scattering spectra and fluorescence excitation-emission matrices. The potential viability of contrast-enhanced biological NIRF imaging with a mobile phone is demonstrated, and color-channel-specific variations in image quality are documented. Our results provide evidence of the utility of novel phantom-based test methods for quantifying image quality in emerging NIRF devices.
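    For each bar-target spatial frequency, the contrast transfer function used to compare the two devices reduces to a Michelson-style contrast of the imaged intensity profile. A minimal sketch with illustrative profiles (the intensity values are invented, not measurements from the paper):

```python
def contrast_transfer(profile):
    """Michelson-style contrast of an intensity profile across a bar-target
    phantom: (I_max - I_min) / (I_max + I_min), in [0, 1]."""
    i_max, i_min = max(profile), min(profile)
    return (i_max - i_min) / (i_max + i_min)

sharp = [10, 200, 10, 200]    # well-resolved bars
blurred = [80, 120, 80, 120]  # same bars through a blurrier imaging chain
```

Plotting this contrast against bar frequency gives the CTF curve used to rank the CCD camera against the phone camera.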

  3. Field-testing of a cost-effective mobile-phone based microscope for screening of Schistosoma haematobium infection (Conference Presentation)

    Science.gov (United States)

    Ceylan Koydemir, Hatice; Bogoch, Isaac I.; Tseng, Derek; Ephraim, Richard K. D.; Duah, Evans; Tee, Joseph; Andrews, Jason R.; Ozcan, Aydogan

    2016-03-01

    Schistosomiasis is a parasitic, neglected tropical disease that disproportionately affects school-aged children. Here we present field-testing results of a handheld and cost-effective smartphone-based microscope in rural Ghana, Africa, for point-of-care diagnosis of S. haematobium infection. In this mobile-phone microscope, a custom-designed 3D-printed opto-mechanical attachment (~150 g) is placed in contact with the smartphone camera lens, creating an imaging system with a half-pitch resolution of ~0.87 µm. This unit includes an external lens (also taken from a mobile-phone camera), a sample tray, a z-stage to adjust the focus, two light-emitting diodes (LEDs) and two diffusers for uniform illumination of the sample. In our field-testing, 60 urine samples collected from children were used, where the prevalence of the infection was 72.9%. After concentration of the sample with centrifugation, the sediment was placed on a glass slide and S. haematobium eggs were first identified and quantified using conventional benchtop microscopy by an expert diagnostician; a second expert, blinded to these results, then determined the presence or absence of eggs using our mobile-phone microscope. Compared to conventional microscopy, our mobile-phone microscope had a diagnostic sensitivity of 72.1%, specificity of 100%, positive predictive value of 100%, and negative predictive value of 57.1%. Furthermore, our mobile-phone platform demonstrated a sensitivity of 65.7% and 100% for low-intensity infections (≤50 eggs/10 mL urine) and high-intensity infections (>50 eggs/10 mL urine), respectively. We believe that this cost-effective and field-portable mobile-phone microscope may play an important role in the diagnosis of schistosomiasis and various other global health challenges.
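    The reported figures follow from standard confusion-matrix definitions. The counts below are back-solved from the published rates (assuming 59 evaluable samples) and are illustrative, not the paper's raw 2x2 table:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts
    (tp = true positives, fp = false positives, etc.)."""
    return {
        "sensitivity": tp / (tp + fn),  # detected among truly infected
        "specificity": tn / (tn + fp),  # cleared among truly uninfected
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Counts consistent with the reported 72.1% sensitivity, 100% specificity,
# 100% PPV and 57.1% NPV at 72.9% prevalence (illustrative assumption):
m = diagnostic_metrics(tp=31, fp=0, tn=16, fn=12)
```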

  4. LISS-4 camera for Resourcesat

    Science.gov (United States)

    Paul, Sandip; Dave, Himanshu; Dewan, Chirag; Kumar, Pradeep; Sansowa, Satwinder Singh; Dave, Amit; Sharma, B. N.; Verma, Anurag

    2006-12-01

    The Indian Remote Sensing Satellites use indigenously developed high-resolution cameras for generating data related to vegetation, landform/geomorphic and geological boundaries. Data from this camera are used for working out maps at 1:12500 scale for national-level policy development for town planning, vegetation etc. The LISS-4 Camera was launched onboard the Resourcesat-1 satellite by ISRO in 2003. LISS-4 is a high-resolution multi-spectral camera with three spectral bands, a resolution of 5.8 m and a swath of 23 km from 817 km altitude. The panchromatic mode provides a swath of 70 km and 5-day revisit. This paper briefly discusses the configuration of the LISS-4 Camera of Resourcesat-1, its onboard performance and also the changes in the camera being developed for Resourcesat-2. The LISS-4 camera images the earth in push-broom mode. It is designed around a three-mirror unobscured telescope, three linear 12-K CCDs and associated electronics for each band. Three spectral bands are realized by splitting the focal plane in the along-track direction using an isosceles prism. High-speed camera electronics with 12-bit digitization and digital double sampling of video is designed for each detector. Seven-bit data, selected from the 10 MSBs by telecommand, are transmitted. The total dynamic range of the sensor covers up to 100% albedo. The camera structure has heritage from IRS-1C/D. The optical elements are precisely glued to specially designed flexure mounts. The camera is assembled onto a rotating deck on the spacecraft to facilitate +/- 26° steering in the pitch-yaw plane. The camera is held on the spacecraft in a stowed condition before deployment. The excellent imagery from the LISS-4 Camera onboard Resourcesat-1 is routinely used worldwide. A second such camera is being developed for the Resourcesat-2 launch in 2007 with similar performance. The camera electronics is optimized and miniaturized; the size and weight are reduced to one third and the power to half of the values in Resourcesat-1.

  5. Association of Exposure to Radio-Frequency Electromagnetic Field Radiation (RF-EMFR) Generated by Mobile Phone Base Stations with Glycated Hemoglobin (HbA1c) and Risk of Type 2 Diabetes Mellitus

    Science.gov (United States)

    Meo, Sultan Ayoub; Alsubaie, Yazeed; Almubarak, Zaid; Almutawa, Hisham; AlQasem, Yazeed; Muhammed Hasanato, Rana

    2015-01-01

    Installation of mobile phone base stations in residential areas has initiated public debate about possible adverse effects on human health. This study aimed to determine the association of exposure to radio frequency electromagnetic field radiation (RF-EMFR) generated by mobile phone base stations with glycated hemoglobin (HbA1c) and occurrence of type 2 diabetes mellitus. For this study, two different elementary schools (school-1 and school-2) were selected. We recruited 159 students in total; 96 male students from school-1, with age range 12–16 years, and 63 male students with age range 12–17 years from school-2. Mobile phone base stations with towers existed about 200 m away from the school buildings. RF-EMFR was measured inside both schools. In school-1, RF-EMFR was 9.601 nW/cm2 at frequency of 925 MHz, and students had been exposed to RF-EMFR for a duration of 6 h daily, five days in a week. In school-2, RF-EMFR was 1.909 nW/cm2 at frequency of 925 MHz and students had been exposed for 6 h daily, five days in a week. 5–6 mL blood was collected from all the students and HbA1c was measured by using a Dimension Xpand Plus Integrated Chemistry System, Siemens. The mean HbA1c for the students who were exposed to high RF-EMFR was significantly higher (5.44 ± 0.22) than the mean HbA1c for the students who were exposed to low RF-EMFR (5.32 ± 0.34) (p = 0.007). Moreover, students who were exposed to high RF-EMFR generated by MPBS had a significantly higher risk of type 2 diabetes mellitus (p = 0.016) relative to their counterparts who were exposed to low RF-EMFR. It is concluded that exposure to high RF-EMFR generated by MPBS is associated with elevated levels of HbA1c and risk of type 2 diabetes mellitus. PMID:26580639

  6. Association of Exposure to Radio-Frequency Electromagnetic Field Radiation (RF-EMFR) Generated by Mobile Phone Base Stations with Glycated Hemoglobin (HbA1c) and Risk of Type 2 Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    Sultan Ayoub Meo

    2015-11-01

    Full Text Available Installation of mobile phone base stations in residential areas has initiated public debate about possible adverse effects on human health. This study aimed to determine the association of exposure to radio-frequency electromagnetic field radiation (RF-EMFR) generated by mobile phone base stations with glycated hemoglobin (HbA1c) and occurrence of type 2 diabetes mellitus. For this study, two different elementary schools (school-1 and school-2) were selected. We recruited 159 students in total: 96 male students aged 12–16 years from school-1, and 63 male students aged 12–17 years from school-2. Mobile phone base station towers stood about 200 m away from the school buildings, and RF-EMFR was measured inside both schools. In school-1, RF-EMFR was 9.601 nW/cm2 at a frequency of 925 MHz, and students had been exposed to this RF-EMFR for 6 h daily, five days a week. In school-2, RF-EMFR was 1.909 nW/cm2 at a frequency of 925 MHz, with the same exposure schedule. A 5–6 mL blood sample was collected from each student, and HbA1c was measured using a Dimension Xpand Plus Integrated Chemistry System (Siemens). The mean HbA1c of the students exposed to high RF-EMFR was significantly higher (5.44 ± 0.22) than that of the students exposed to low RF-EMFR (5.32 ± 0.34) (p = 0.007). Moreover, students exposed to high RF-EMFR generated by mobile phone base stations had a significantly higher risk of type 2 diabetes mellitus (p = 0.016) than their counterparts exposed to low RF-EMFR. It is concluded that exposure to high RF-EMFR generated by mobile phone base stations is associated with elevated levels of HbA1c and risk of type 2 diabetes mellitus.

  7. Gamma camera system

    International Nuclear Information System (INIS)

    The invention provides a composite solid state detector for use in deriving a display, by spatial coordinate information, of the distribution of radiation emanating from a source within a region of interest, comprising several solid state detector components, each having a given surface arranged for exposure to impinging radiation and exhibiting discrete interactions therewith at given spatially definable locations. The surface of each component and the surface disposed opposite and substantially parallel thereto are associated with impedance means configured to provide for each opposed surface outputs for signals relating the given location of the interactions with one spatial coordinate parameter of one select directional sense. The detector components are arranged to provide groupings of adjacently disposed surfaces mutually linearly oriented to exhibit a common directional sense of the spatial coordinate parameter. Means interconnect at least two of the outputs associated with each of the surfaces within a given grouping for collecting the signals deriving therefrom. The invention also provides a camera system for imaging the distribution of a source of gamma radiation situated within a region of interest

  8. Results of a cross-sectional study on the association of electromagnetic fields emitted from mobile phone base stations and health complaints; Ergebnisse einer Querschnittsstudie zum Zusammenhang von elektromagnetischen Feldern von Mobilfunksendeanlagen und unspezifischen gesundheitlichen Beschwerden

    Energy Technology Data Exchange (ETDEWEB)

    Breckenkamp, Juergen; Berg-Beckhoff, Gabriele [Bielefeld Univ. (Germany). Arbeitsgebiet Epidemiologie und International Public Health; Blettner, Maria [Mainz Univ. (Germany). Inst. fuer Medizinische Biometrie, Epidemiologie und Informatik; Kowall, Bernd [Duesseldorf Univ. (Germany). Deutsches Diabetes Zentrum; Schuez, Joachim [Institute of Cancer Epidemiology, Strandboulevarden (Denmark). Dept. of Biostatistics and Epidemiology; Schlehofer, Brigitte [Deutsches Krebsforschungszentrum Heidelberg (Germany). Arbeitsgebiet Umweltepidemiologie; Schmiedel, Sven [Mainz Univ. (Germany). Inst. fuer Medizinische Biometrie, Epidemiologie und Informatik; Institute of Cancer Epidemiology, Strandboulevarden (Denmark). Dept. of Biostatistics and Epidemiology; Bornkessel, Christian [Institut fuer Mobil- und Satellitenfunktechnik (IMST GmbH), Pruefzentrum EMV, Kamp-Lintfort (Germany); Reis, Ursula; Potthoff, Peter [TNS Healthcare GmbH, Muenchen (Germany)

    2010-07-01

    Background: Although adverse health effects have not been confirmed for exposure to radio-frequency electromagnetic field (RF-EMF) levels below the limit values defined in the guidelines of the International Commission on Non-Ionizing Radiation Protection, many persons are worried about possible adverse health effects of the RF-EMF emitted by mobile phone base stations, or they attribute unspecific health complaints such as headache or sleep disturbances to these fields. Method: In the framework of a cross-sectional study, a questionnaire was sent to 4,150 persons living in predominantly urban areas. Participants were asked whether base stations affected their health. Health complaints were measured with standardized health questionnaires for sleep disturbances, headache, general health complaints, and mental and physical health. 3,526 persons (85%) responded to the questionnaire and 1,808 (51%) agreed to dosimetric measurements in their flats; exposure was measured in 1,500 flats. Results: The measurements taken in the bedrooms mostly showed very low exposure values, often below the sensitivity limit of the dosimeter. No association was found between exposure and the occurrence of health complaints, but there was an association between the attribution of adverse health effects to base stations and the occurrence of health complaints. Conclusions: Concerns about health and the attribution of adverse health effects to mobile phone base stations should be taken seriously and require risk communication with the persons concerned. Future research should focus on the processes of perception and appraisal of RF-EMF risks, and ascertain the determinants of concerns and attributions in the context of RF-EMF. (orig.)

  9. Development of underwater camera using high-definition camera

    International Nuclear Information System (INIS)

    In order to reduce the time needed for core verification or visual inspection of BWR fuels, an underwater camera using a high-definition camera has been developed. The resulting underwater camera has two lights, dimensions of 370 x 400 x 328 mm and a weight of 20.5 kg. Using the camera, about six spent-fuel IDs can be identified at a time from a distance of 1 to 1.5 m, and a 0.3 mm diameter pin-hole can be recognized at a distance of 1.5 m with 20x zoom. The images are not affected by radiation noise at dose rates below 15 Gy/h. (author)

  10. A liquid xenon radioisotope camera.

    Science.gov (United States)

    Zaklad, H.; Derenzo, S. E.; Muller, R. A.; Smadja, G.; Smits, R. G.; Alvarez, L. W.

    1972-01-01

    A new type of gamma-ray camera, currently under development, is discussed that makes use of electron avalanches in liquid xenon. It is shown that such a radioisotope camera promises many advantages over existing gamma-ray cameras. Spatial resolution better than 1 mm and counting rates higher than one million counts/sec are possible. An energy resolution of 11% FWHM has recently been achieved with a collimated Hg-203 source using a parallel-plate ionization chamber containing a Frisch grid.

  11. Exposure interlock for oscilloscope cameras

    Science.gov (United States)

    Spitzer, C. R.; Stainback, J. D. (Inventor)

    1973-01-01

    An exposure interlock has been developed for oscilloscope cameras which cuts off ambient light from the oscilloscope screen before the shutter of the camera is tripped. A flap is provided which may be selectively positioned either to an open position, which enables viewing of the oscilloscope screen, or to a closed position, which cuts off the oscilloscope screen from view and simultaneously cuts off ambient light from it. A mechanical interlock requires the flap to be in its closed position before the camera shutter can be tripped, thereby preventing overexposure of the film.

  12. On Single-scanline Camera Calibration

    OpenAIRE

    Horaud, Radu; Mohr, Roger; Lorecki, Boguslaw

    1993-01-01

    A method for calibrating single scanline CCD cameras is described. It is shown that the more classical 2D camera calibration techniques are necessary but not sufficient for solving the 1D camera calibration problem. A model for single scanline cameras is proposed, and a two-step procedure for estimating its parameters is provided. It is also shown how the extrinsic camera parameters can be determined geometrically without making explicit the intrinsic camera parameters. The accuracy of the ca...

  13. An Inexpensive Digital Infrared Camera

    Science.gov (United States)

    Mills, Allan

    2012-01-01

    Details are given for the conversion of an inexpensive webcam to a camera specifically sensitive to the near infrared (700-1000 nm). Some experiments and practical applications are suggested and illustrated. (Contains 9 figures.)

  14. Wide Dynamic Range CCD Camera

    Science.gov (United States)

    Younse, J. M.; Gove, R. J.; Penz, P. A.; Russell, D. E.

    1984-11-01

    A liquid crystal attenuator (LCA) operated as a variable neutral density filter has been attached to a charge-coupled device (CCD) imager to extend the dynamic range of a solid-state TV camera by an order of magnitude. Many applications are best served by a camera with a dynamic range of several thousand. For example, outside security systems must operate unattended with "dawn-to-dusk" lighting conditions. Although this can be achieved with available auto-iris lens assemblies, more elegant solutions which provide the small size, low power, high reliability advantages of solid state technology are now available. This paper will describe one such unique way of achieving these dynamic ranges using standard optics by making the CCD imager's glass cover a controllable neutral density filter. The liquid crystal attenuator's structure and theoretical properties for this application will be described along with measured transmittance. A small integrated TV camera which utilizes a "virtual-phase" CCD sensor coupled to a LCA will be described and test results for a number of the camera's optical and electrical parameters will be given. These include the following camera parameters: dynamic range, Modulation Transfer Function (MTF), spectral response, and uniformity. Also described will be circuitry which senses the ambient scene illuminance and automatically provides feedback signals to appropriately adjust the transmittance of the LCA. Finally, image photographs using this camera, under various scene illuminations, will be shown.

  15. Response to Comments on Meo et al. Association of Exposure to Radio-Frequency Electromagnetic Field Radiation (RF-EMFR) Generated by Mobile Phone Base Stations with Glycated Hemoglobin (HbA1c) and Risk of Type 2 Diabetes Mellitus. Int. J. Environ. Res. Public Health, 2015, 12, 14519–14528

    Directory of Open Access Journals (Sweden)

    Sultan Ayoub Meo

    2016-02-01

    Full Text Available We highly appreciate the readers’ interest [1] in our article [2] titled “Association of Exposure to Radio-Frequency Electromagnetic Field Radiation (RF-EMFR) Generated by Mobile Phone Base Stations with Glycated Hemoglobin (HbA1c) and Risk of Type 2 Diabetes Mellitus” published in the International Journal of Environmental Research and Public Health [2].[...

  16. Response to Comments on Meo et al. Association of Exposure to Radio-Frequency Electromagnetic Field Radiation (RF-EMFR) Generated by Mobile Phone Base Stations with Glycated Hemoglobin (HbA1c) and Risk of Type 2 Diabetes Mellitus. Int. J. Environ. Res. Public Health, 2015, 12, 14519–14528

    OpenAIRE

    Sultan Ayoub Meo; Yazeed Alsubaie; Zaid Almubarak; Hisham Almutawa; Yazeed AlQasem; Rana Muhammed Hasanato

    2016-01-01

    We highly appreciate the readers’ interest [1] in our article [2] titled “Association of Exposure to Radio-Frequency Electromagnetic Field Radiation (RF-EMFR) Generated by Mobile Phone Base Stations with Glycated Hemoglobin (HbA1c) and Risk of Type 2 Diabetes Mellitus” published in the International Journal of Environmental Research and Public Health [2].[...

  17. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern were determined by Pix4Dmapper and were independently adjusted and analyzed by the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively, dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high number of images per object point is concentrated in the block centres, while the inclined images outside the block centre are satisfactorily but not very strongly connected. This leads to very high values of the Student test (T-test) for the finally used additional parameters, or in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras, with a size exceeding 5 μm, even if described as negligible in the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With exception of the angular affinity the systematic image errors for corresponding

  18. The GISMO-2 Bolometer Camera

    Science.gov (United States)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.; Moseley, Samuel H.; Sharp, Elemer H.; Wollack, Edward J.

    2012-01-01

    We present the concept for the GISMO-2 bolometer camera, which we are building for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 x 40 TES-based Backshort Under Grid (BUG) bolometer array; the 2 mm channel operates with a 16 x 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  19. Cameras for semiconductor process control

    Science.gov (United States)

    Porter, W. A.; Parker, D. L.

    1977-01-01

    The application of X-ray topography to semiconductor process control is described, considering the novel features of the high speed camera and the difficulties associated with this technique. The most significant results on the effects of material defects on device performance are presented, including results obtained using wafers processed entirely within this institute. Defects were identified using the X-ray camera and correlations made with probe data. Also included are temperature dependent effects of material defects. Recent applications and improvements of X-ray topographs of silicon-on-sapphire and gallium arsenide are presented with a description of a real time TV system prototype and of the most recent vacuum chuck design. Discussion is included of our promotion of the use of the camera by various semiconductor manufacturers.

  20. Dark Energy Camera for Blanco

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  1. Aerial camera auto focusing system

    Science.gov (United States)

    Wang, Xuan; Lan, Gongpu; Gao, Xiaodong; Liang, Wei

    2012-10-01

    Before an aerial photographic task, the camera must first be focused to compensate for the defocus caused by changes in temperature, pressure, etc. A new method of aerial camera auto focusing is proposed, combining traditional photoelectric self-collimation with image processing. Firstly, the basic principles of optical self-collimation and image processing are introduced. Secondly, the limitations of the two are illustrated and the benefits of the new method are detailed. Then the basic principle, the system composition and the implementation of this new method are presented. Finally, the data collection platform is set up and the focus evaluation function curve is drawn. The results show that the method can be used in the aerial camera focusing field and suits the aviation equipment trends of miniaturization and light weight. This paper is helpful to further work on accurate and automatic focusing.
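The abstract mentions a focus evaluation function curve without specifying the metric. A common choice for such a curve (an assumption here, not necessarily the function used by the authors) is a gradient-energy sharpness measure such as Tenengrad, which peaks at best focus:

```python
def tenengrad(img):
    """Gradient-energy focus measure on a 2D grayscale image given as a
    list of rows; higher values indicate sharper focus."""
    h, w = len(img), len(img[0])
    score = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Sobel gradients in x and y
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            score += gx * gx + gy * gy
    return score

# A sharp vertical edge scores higher than a featureless (defocused) patch
sharp = [[0, 0, 1, 1] for _ in range(4)]
flat = [[1, 1, 1, 1] for _ in range(4)]
print(tenengrad(sharp) > tenengrad(flat))  # → True
```

Sweeping the focus mechanism and picking the position that maximizes such a measure yields the curve's peak, i.e. the in-focus setting.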

  2. EDICAM (Event Detection Intelligent Camera)

    International Nuclear Information System (INIS)

    Highlights: ► We present EDICAM's hardware modules. ► We present EDICAM's main design concepts. ► This paper describes EDICAM's firmware architecture. ► Operation principles are described. ► Further developments. -- Abstract: A new type of fast framing camera has been developed for fusion applications by the Wigner Research Centre for Physics during the last few years. A new concept was designed for intelligent, event-driven imaging, capable of focusing image readout on Regions of Interest (ROIs) where and when predefined events occur. At present these events mean intensity changes and external triggers, but in the future more sophisticated methods might also be defined. The camera provides a 444 Hz frame rate at the full resolution of 1280 × 1024 pixels, but smaller ROIs can be monitored in the 1–116 kHz range even during exposure of the full image. Keeping space limitations and the harsh environment in mind, the camera is divided into a small Sensor Module and a processing card interconnected by a fast 10 Gbit optical link. This camera hardware has been used for passive monitoring of the plasma in different devices, for example at ASDEX Upgrade and COMPASS, with the first version of its firmware. The new firmware and software package is now available and ready for testing the new event processing features. This paper presents the operation principle and features of the Event Detection Intelligent Camera (EDICAM). The device is intended to be the central element of the 10-camera monitoring system of the Wendelstein 7-X stellarator

  3. MIOTIC study: a prospective, multicenter, randomized study to evaluate the long-term efficacy of mobile phone-based Internet of Things in the management of patients with stable COPD.

    Science.gov (United States)

    Zhang, Jing; Song, Yuan-Lin; Bai, Chun-Xue

    2013-01-01

    Chronic obstructive pulmonary disease (COPD) is a common disease that leads to a huge economic and social burden. Efficient and effective management of stable COPD is essential to improve quality of life and reduce medical expenditure. The Internet of Things (IoT), a recent breakthrough in communication technology, seems promising in improving health care delivery, but its potential strengths in COPD management remain poorly understood. We have developed a mobile phone-based IoT (mIoT) platform and initiated a randomized, multicenter, controlled trial entitled the 'MIOTIC study' to investigate the influence of mIoT among stable COPD patients. In the MIOTIC study, at least 600 patients with stable GOLD group C or D COPD and with a history of at least two moderate-to-severe exacerbations within the previous year will be randomly allocated to the control group, which receives routine follow-up, or the intervention group, which receives mIoT management. Endpoints of the study include (1) frequency and severity of acute exacerbation; (2) symptomatic evaluation; (3) pre- and post-bronchodilator forced expiratory volume in 1 second (FEV1) and FEV1/forced vital capacity (FVC) measurement; (4) exercise capacity; and (5) direct medical cost per year. Results from this study should provide direct evidence for the suitability of mIoT in stable COPD patient management. PMID:24082784

  4. Full Stokes polarization imaging camera

    Science.gov (United States)

    Vedel, M.; Breugnot, S.; Lechocinski, N.

    2011-10-01

    Objective and background: We present a new version of Bossa Nova Technologies' passive polarization imaging camera. The previous version performed live measurement of the linear Stokes parameters (S0, S1, S2) and their derivatives. The new version presented in this paper performs live measurement of the full Stokes parameters, i.e. including the fourth parameter S3, related to the amount of circular polarization. Dedicated software was developed to provide live images of any Stokes-related parameter such as the Degree Of Linear Polarization (DOLP), the Degree Of Circular Polarization (DOCP) and the Angle Of Polarization (AOP). Results: We first give a brief description of the camera and its technology. It is a division-of-time polarimeter using a custom ferroelectric liquid crystal cell. A description is given of the method used to calculate the Data Reduction Matrix (DRM) linking intensity measurements and the Stokes parameters. The calibration was developed in order to optimize the condition number of the DRM. It also allows very efficient post-processing of the acquired images. A complete evaluation of the precision of standard polarization parameters is described. We further present the standard features of the dedicated software developed to operate the camera. It provides live images of the Stokes vector components and the usual associated parameters. Finally, some tests already conducted are presented, including indoor laboratory and outdoor measurements. This new camera will be a useful tool for many applications such as biomedical imaging, remote sensing, metrology and material studies.
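The derived quantities named in the abstract follow directly from the measured Stokes vector via the standard textbook definitions (these formulas are general, not taken from the camera's software): DOLP = √(S1² + S2²)/S0, DOCP = |S3|/S0, AOP = ½·atan2(S2, S1).

```python
from math import atan2, degrees, hypot

def polarization_params(s0, s1, s2, s3):
    """Standard polarization descriptors from a Stokes vector (S0..S3)."""
    dolp = hypot(s1, s2) / s0             # degree of linear polarization
    docp = abs(s3) / s0                   # degree of circular polarization
    aop = 0.5 * degrees(atan2(s2, s1))    # angle of polarization, degrees
    return dolp, docp, aop

# Fully horizontally polarized light: S = (1, 1, 0, 0)
print(polarization_params(1.0, 1.0, 0.0, 0.0))  # → (1.0, 0.0, 0.0)
```

For example, purely circularly polarized light, S = (1, 0, 0, ±1), gives DOLP = 0 and DOCP = 1, which only a full-Stokes instrument such as the one described can distinguish from unpolarized light.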

  5. Camera assisted multimodal user interaction

    Science.gov (United States)

    Hannuksela, Jari; Silvén, Olli; Ronkainen, Sami; Alenius, Sakari; Vehviläinen, Markku

    2010-01-01

    Since more processing power and new sensing and display technologies are already available in mobile devices, there has been increased interest in building systems that communicate via different modalities such as speech, gesture, expression, and touch. In context-identification-based user interfaces, these independent modalities are combined to create new ways for users to interact with hand-helds. While these are unlikely to completely replace traditional interfaces, they will considerably enrich and improve the user experience and task performance. We demonstrate a set of novel user interface concepts that rely on the built-in sensors of modern mobile devices for recognizing the context and sequences of actions. In particular, we use the camera to detect whether the user is watching the device, for instance to decide whether to turn on the display backlight. In our approach the motion sensors are first employed to detect handling of the device. Then, based on ambient illumination information provided by a light sensor, the cameras are turned on. The frontal camera is used for face detection, while the back camera provides supplemental contextual information. The subsequent applications triggered by the context can be, for example, image capturing or bar code reading.
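The sensing sequence described (motion sensors gate the light sensor, which gates camera-based face detection) can be sketched as a simple decision chain. The function, thresholds and the dark-environment fallback below are illustrative assumptions, not taken from the paper:

```python
def should_enable_backlight(motion_detected, lux, face_in_frontal_camera):
    """Illustrative decision chain for turning on the display backlight:
    motion gates the light sensor, which gates the frontal camera."""
    if not motion_detected:        # device is not being handled at all
        return False
    if lux < 5.0:                  # too dark for camera-based face detection
        return True                # assumed fallback: handling alone suffices
    return face_in_frontal_camera  # user is actually looking at the device

print(should_enable_backlight(True, 120.0, True))   # → True
print(should_enable_backlight(False, 120.0, True))  # → False
```

Ordering the checks from cheapest sensor (accelerometer) to most expensive (camera) mirrors the power-saving rationale of the approach: the camera is only powered when the earlier, cheaper cues justify it.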

  6. Gamma camera with reflectivity mask

    International Nuclear Information System (INIS)

    A gamma camera is described with a plurality of photodetectors arranged for locating flashes of light produced by a scintillator in response to incident radiation. Masking material is arranged in a radially symmetric pattern on the front face of the scintillator about the axis of each photodetector to reduce the amount of internal reflection of optical photons induced by gamma ray photons

  7. Gamma camera with reflectivity mask

    International Nuclear Information System (INIS)

    In accordance with the present invention there is provided a radiographic camera comprising: a scintillator; a plurality of photodetectors positioned to face said scintillator; a plurality of masked regions formed upon a face of said scintillator opposite said photodetectors and positioned coaxially with respective ones of said photodetectors for decreasing the amount of internal reflection of optical photons generated within said scintillator. (auth)

  8. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    Just like art historians have focused on e.g. composition or lighting, this dissertation takes a single stylistic parameter as its object of study: camera movement. Within film studies this localized avenue of middle-level research has become increasingly viable under the aegis of a perspective k...

  9. Replacing 16-mm film cameras with high-definition digital cameras

    Science.gov (United States)

    Balch, Kris S.

    1995-09-01

    For many years 16 mm film cameras have been used in severe environments: on Hy-G automotive sleds, as airborne gun cameras, in range tracking and other hazardous applications. The companies and government agencies using these cameras need to replace them with a more cost-effective solution. Film-based cameras still produce the best resolving capability; however, film development time, chemical disposal, recurring media cost, and faster digital analysis are factors driving the desire for a 16 mm film camera replacement. This paper describes a new camera from Kodak that has been designed to replace 16 mm high-speed film cameras.

  10. Architectural Design Document for Camera Models

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Architecture of camera simulator models and data interface for the Maneuvering of Inspection/Servicing Vehicle (MIV) study.

  11. Lytro camera technology: theory, algorithms, performance analysis

    Science.gov (United States)

    Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio

    2013-03-01

    The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization, aided by the increase in computational power, that characterizes mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, treating the Lytro camera as a black box and using our interpretation of the Lytro image data saved by the camera. We present our findings based on our interpretation of the Lytro camera file structure, image calibration and image rendering; in this context, artifacts and final image resolution are discussed.
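The refocusing that plenoptic data makes possible is commonly rendered by shift-and-add over the sub-aperture images extracted from behind the microlens array. The sketch below shows that generic algorithm with integer shifts; it is not Lytro's proprietary rendering pipeline, and the light-field layout `lf[u][v]` is an assumed convention:

```python
def refocus(lf, alpha):
    """Shift-and-add refocusing of a 4D light field.
    lf[u][v] is a 2D sub-aperture image (list of rows); alpha selects
    the synthetic focal plane (0 keeps the captured focus)."""
    U, V = len(lf), len(lf[0])
    H, W = len(lf[0][0]), len(lf[0][0][0])
    uc, vc = U // 2, V // 2                 # central (reference) view
    out = [[0.0] * W for _ in range(H)]
    for u in range(U):
        for v in range(V):
            # disparity of this view relative to the central view
            dy = round(alpha * (u - uc))
            dx = round(alpha * (v - vc))
            for y in range(H):
                for x in range(W):
                    # integer shift with wrap-around for simplicity
                    out[y][x] += lf[u][v][(y + dy) % H][(x + dx) % W]
    n = U * V
    return [[p / n for p in row] for row in out]
```

Scene points whose per-view disparity matches `alpha` are aligned and reinforce each other; everything else is averaged out as synthetic defocus blur.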

  12. An optical metasurface planar camera

    CERN Document Server

    Arbabi, Amir; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-01-01

    Optical metasurfaces are 2D arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optical design by enabling complex low cost systems where multiple metasurfaces are lithographically stacked on top of each other and are integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here, we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has an f-number of 0.9, an angle of view larger than 60° × 60°, and operates at 850 nm wavelength with large transmission. The camera exhibits high image quality, which indicates the potential of this technology to produce a paradigm shift in future designs of imaging systems for microscopy, photograp...

  13. Electronographic cameras for space astronomy.

    Science.gov (United States)

    Carruthers, G. R.; Opal, C. B.

    1972-01-01

    Magnetically focused electronographic cameras have been under development at the Naval Research Laboratory for use in far-ultraviolet imagery and spectrography, primarily in astronomical and optical-geophysical observations from sounding rockets and space vehicles. Most of this work has been with cameras incorporating internal optics of the Schmidt or wide-field all-reflecting types. More recently, we have begun development of electronographic spectrographs incorporating an internal concave grating, operating at normal or grazing incidence. We are also developing electronographic image tubes of the conventional end-window photocathode type, for far-ultraviolet imagery at the focus of a large space telescope, with image formats up to 120 mm in diameter.

  14. The Dark Energy Survey Camera

    Science.gov (United States)

    Flaugher, Brenna

    2012-03-01

    The Dark Energy Survey Collaboration has built the Dark Energy Camera (DECam), a 3 square degree, 520 Megapixel CCD camera which is being mounted on the Blanco 4-meter telescope at CTIO. DECam will be used to carry out the 5000 sq. deg. Dark Energy Survey, using 30% of the telescope time over a 5 year period. During the remainder of the time, and after the survey, DECam will be available as a community instrument. Construction of DECam is complete. The final components were shipped to Chile in Dec. 2011 and post-shipping checkout is in progress in Dec-Jan. Installation and commissioning on the telescope are taking place in 2012. A summary of lessons learned and an update of the performance of DECam and the status of the DECam installation and commissioning will be presented.

  15. Sky camera geometric calibration using solar observations

    OpenAIRE

    Urquhart, B.; Kurtz, B; J. Kleissl

    2016-01-01

    A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun positio...
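
    A toy version of the sun-based calibration idea can make it concrete. The sketch below is a simplified assumption on my part (real sky imagers also need distortion and orientation parameters): it fits the focal length of an ideal equidistant fisheye lens, r = f·θ, from known solar zenith angles and the radii at which the sun is detected on the image plane.

```python
import math

def fit_equidistant_focal(zenith_angles_deg, radii_px):
    """Least-squares fit of f in the equidistant fisheye model r = f * theta,
    given sun zenith angles and the corresponding image radii of the sun."""
    num = den = 0.0
    for z_deg, r in zip(zenith_angles_deg, radii_px):
        theta = math.radians(z_deg)
        num += r * theta
        den += theta * theta
    return num / den

# Synthetic check: data generated by an f = 500 px lens recovers f exactly.
zeniths = [10, 30, 50, 70]
radii = [500.0 * math.radians(z) for z in zeniths]
print(round(fit_equidistant_focal(zeniths, radii), 1))  # 500.0
```

The appeal of the paper's approach is that the sun's position is known from an ephemeris, so such fits need no calibration pattern or special equipment.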

  16. Securing Embedded Smart Cameras with Trusted Computing

    OpenAIRE

    Thomas Winkler; Bernhard Rinner

    2011-01-01

    Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only ...

  17. Filter characterization in digital cameras

    OpenAIRE

    Solli, Martin

    2004-01-01

    The use of spectrophotometers for color measurements on printed substrates is widespread among paper producers as well as within the printing industry. Spectrophotometer measurements are precise but time-consuming, and faster methods are desirable. Previously presented work on color calibration of flatbed scanners has shown that they can be used for fast color measurements with acceptable results. Furthermore, the rapid development of digital cameras has made it possible to tran...

  18. Graphic design of pinhole cameras

    Science.gov (United States)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.
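
    The optimization the paper performs graphically balances geometric blur against diffraction blur. For context only, one widely quoted closed-form shortcut is Rayleigh's rule of thumb d ≈ 1.9·√(f·λ); the constant varies between authors, and this shortcut is not the graphic transfer-function technique of the paper itself:

```python
import math

def optimal_pinhole_diameter(focal_length_mm, wavelength_nm=550):
    """Rayleigh's rule of thumb d ~= 1.9 * sqrt(f * lambda): larger holes
    increase geometric blur, smaller holes increase diffraction blur."""
    wavelength_mm = wavelength_nm * 1e-6
    return 1.9 * math.sqrt(focal_length_mm * wavelength_mm)

# A 100 mm focal-length pinhole camera in green light (550 nm):
print(f"{optimal_pinhole_diameter(100):.3f} mm")  # 0.446 mm
```

The graphic method in the paper goes further than this scalar optimum, since the transfer function also shows how contrast falls off with spatial frequency.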

  19. Solid-state array cameras.

    Science.gov (United States)

    Strull, G; List, W F; Irwin, E L; Farnsworth, D L

    1972-05-01

    Over the past few years there has been growing interest in the rapidly maturing technology of totally solid-state imaging. This paper presents a synopsis of developments made in this field at the Westinghouse ATL facilities with emphasis on row-column organized monolithic arrays of diffused junction phototransistors. The complete processing sequence applicable to the fabrication of modern high-density arrays is described from wafer ingot preparation to final sensor testing. Special steps found necessary for high yield processing, such as surface etching prior to both sawing and lapping, are discussed along with the rationale behind their adoption. Camera systems built around matrix array photosensors are presented in a historical time-wise progression beginning with the first 50 x 50 element converter developed in 1965 and running through the most recent 400 x 500 element system delivered in 1972. The freedom of mechanical architecture made available to system designers by solid-state array cameras is noted from the description of a bare-chip packaged cubic-inch camera. Hybrid scan systems employing one-dimensional line arrays are cited, and the basic tradeoffs to their use are listed. PMID:20119094

  20. Unassisted 3D camera calibration

    Science.gov (United States)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real-life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
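
    The frame-screening step described above can be sketched in a few lines. The helper names and the 5-pixel threshold are illustrative assumptions, not the authors' implementation:

```python
def median_vertical_disparity(matches):
    """matches: list of ((xL, yL), (xR, yR)) keypoint pairs matched between
    the left and right frames; returns the median |yL - yR| in pixels."""
    diffs = sorted(abs(left[1] - right[1]) for left, right in matches)
    n = len(diffs)
    mid = n // 2
    return diffs[mid] if n % 2 else 0.5 * (diffs[mid - 1] + diffs[mid])

def frame_is_acceptable(matches, max_vertical_disparity_px=5.0, min_matches=8):
    """Discard frames with too few keypoints (a poor constellation) or with
    a median vertical disparity beyond the tolerable margin."""
    return (len(matches) >= min_matches and
            median_vertical_disparity(matches) <= max_vertical_disparity_px)

pairs = [((10.0, 20.0 + 0.1 * i), (12.0, 20.5 + 0.1 * i)) for i in range(10)]
print(frame_is_acceptable(pairs))  # True: median vertical disparity is 0.5 px
```

In a real system the matches would come from a keypoint detector/matcher; the median is used here because it is robust to a few erroneous matches.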

  1. 16 CFR 501.1 - Camera film.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Camera film. 501.1 Section 501.1 Commercial Practices FEDERAL TRADE COMMISSION RULES, REGULATIONS, STATEMENT OF GENERAL POLICY OR INTERPRETATION AND... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the...

  2. 21 CFR 892.1110 - Positron camera.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Positron camera. 892.1110 Section 892.1110 Food... DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A positron camera is a device intended to image the distribution of positron-emitting radionuclides in the...

  3. 21 CFR 886.1120 - Opthalmic camera.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Opthalmic camera. 886.1120 Section 886.1120 Food... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Opthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding...

  4. MIOTIC study: a prospective, multicenter, randomized study to evaluate the long-term efficacy of mobile phone-based Internet of Things in the management of patients with stable COPD

    Directory of Open Access Journals (Sweden)

    Zhang J

    2013-09-01

    Jing Zhang, Yuan-lin Song, Chun-xue Bai, Department of Pulmonary Medicine, Zhongshan Hospital, Fudan University, Shanghai, People's Republic of China. Abstract: Chronic obstructive pulmonary disease (COPD) is a common disease that leads to a huge economic and social burden. Efficient and effective management of stable COPD is essential to improve quality of life and reduce medical expenditure. The Internet of Things (IoT), a recent breakthrough in communication technology, seems promising in improving health care delivery, but its potential strengths in COPD management remain poorly understood. We have developed a mobile phone-based IoT (mIoT) platform and initiated a randomized, multicenter, controlled trial entitled the 'MIOTIC study' to investigate the influence of mIoT among stable COPD patients. In the MIOTIC study, at least 600 patients with stable GOLD group C or D COPD and with a history of at least two moderate-to-severe exacerbations within the previous year will be randomly allocated to the control group, which receives routine follow-up, or the intervention group, which receives mIoT management. Endpoints of the study include (1) frequency and severity of acute exacerbation; (2) symptomatic evaluation; (3) pre- and post-bronchodilator forced expiratory volume in 1 second (FEV1) and FEV1/forced vital capacity (FVC) measurement; (4) exercise capacity; and (5) direct medical cost per year. Results from this study should provide direct evidence for the suitability of mIoT in stable COPD patient management. Keywords: Internet of Things, mobile phone, chronic obstructive pulmonary disease, efficacy

  5. Research on Smart-phone Based Active Safety Warning Technology

    Institute of Scientific and Technical Information of China (English)

    金茂菁

    2012-01-01

    Vehicle active safety systems have proven effective in saving lives and reducing traffic accidents. However, these systems are more expensive and less widespread than smart-phones, whose multi-sensor capabilities and information-processing power have improved greatly and now support such applications. This study first introduces the parameters and functions of these sensors, then proposes a framework for a smart-phone based active safety system. The safety warning functions are designed and explained using forward collision warning and lane departure warning systems as examples. Finally, a field experiment comparing two typical smart-phone systems with a professional system was conducted to analyze functionality and accuracy. The results indicate that the accuracy of the smart-phone based system is acceptable, and that well-equipped smart-phones can realize even more active safety warning functions. (Chinese abstract, translated: The sensor types available in smart-phones are introduced, a framework for a smart-phone based warning system is proposed, and the safety warning functions are designed. Road experiments benchmarked two smart-phone safety warning applications against one professional device on functionality and reliability; the forward collision and lane departure warning results show that smart-phones can realize, and further extend, the active safety warning functions of professional systems, with warning accuracy within an acceptable range.)
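
    Forward collision warning, used above as an example function, is commonly built on a time-to-collision (TTC) estimate from the range and closing speed available to the phone. The sketch below is generic; the 2.5 s threshold is a typical textbook value, not a parameter from this study:

```python
def time_to_collision(range_m, closing_speed_mps):
    """Simple TTC: gap to the lead vehicle divided by the closing speed.
    Returns None when the gap is opening (no collision course)."""
    if closing_speed_mps <= 0:
        return None
    return range_m / closing_speed_mps

def forward_collision_warning(range_m, closing_speed_mps, ttc_threshold_s=2.5):
    """Raise a warning when the estimated TTC drops below the threshold."""
    ttc = time_to_collision(range_m, closing_speed_mps)
    return ttc is not None and ttc < ttc_threshold_s

print(forward_collision_warning(30.0, 15.0))  # True: TTC = 2.0 s
print(forward_collision_warning(60.0, 15.0))  # False: TTC = 4.0 s
```

Production systems also account for sensor noise, driver reaction time, and braking capability, which this illustration omits.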

  6. Single Camera Calibration in 3D Vision

    OpenAIRE

    Caius SULIMAN; Puiu, Dan; Moldoveanu, Florin

    2009-01-01

    Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when the parameters of the camera are known (i.e., principal distance, focal length, lens distortion, etc.). In this paper we deal with a single camera calibration method, and with the help of this method we try to find the intrinsic and extrinsic camera parameters. The method was implemented with success in the programming and simulation environment M...

  7. HHEBBES! All sky camera system: status update

    Science.gov (United States)

    Bettonvil, F.

    2015-01-01

    A status update is given of the HHEBBES! All sky camera system. HHEBBES!, an automatic camera for capturing bright meteor trails, is based on a DSLR camera and a Liquid Crystal chopper for measuring the angular velocity. The purpose of the system is to (a) recover meteorites and (b) identify their origin/parent bodies. In 2015, two new cameras were rolled out: BINGO! (like HHEBBES!, also in The Netherlands) and POgLED, in Serbia. BINGO! is the first camera equipped with a longer-focal-length fisheye lens, to further increase the accuracy. Several minor improvements have been made, and the data reduction pipeline was used for processing two prominent Dutch fireballs.

  8. Modelling Virtual Camera Behaviour Through Player Gaze

    DEFF Research Database (Denmark)

    Picardi, Andrea; Burelli, Paolo; Yannakakis, Georgios N.

    2012-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction largely depend on the placement and animation of the virtual camera. Therefore, virtual camera control plays a critical role in player experience and, thereby, in the overall quality of a computer game. Both game... on the relationship between virtual camera, game-play and player behaviour. We run a game user experiment to shed some light on this relationship and identify relevant differences between camera behaviours through different game sessions, playing behaviours and player gaze patterns. Results show that users can... be efficiently profiled in dissimilar clusters according to camera control as part of their game-play behaviour...

  9. 3D camera tracking from disparity images

    Science.gov (United States)

    Kim, Kiyoung; Woo, Woontack

    2005-07-01

    In this paper, we propose a robust camera tracking method that uses disparity images computed from known parameters of a 3D camera and multiple epipolar constraints. We assume that the baselines between lenses in the 3D camera and the intrinsic parameters are known. The proposed method reduces camera motion uncertainty encountered during camera tracking. Specifically, we first obtain corresponding feature points between initial lenses using a normalized correlation method. In conjunction with matching features, we get disparity images. When the camera moves, the corresponding feature points, obtained from each lens of the 3D camera, are robustly tracked via the Kanade-Lucas-Tomasi (KLT) tracking algorithm. Secondly, relative pose parameters of each lens are calculated via Essential matrices. Essential matrices are computed from the Fundamental matrix calculated using the normalized 8-point algorithm with a RANSAC scheme. Then, we determine the scale factor of the translation matrix by d-motion. This is required because the camera motion obtained from the Essential matrix is only determined up to scale. Finally, we optimize camera motion using multiple epipolar constraints between lenses and d-motion constraints computed from disparity images. The proposed method can be widely adopted in Augmented Reality (AR) applications, 3D reconstruction using 3D cameras, and surveillance systems that need not only depth information but also camera motion parameters in real-time.
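
    One step in the pipeline above, recovering the Essential matrix from the Fundamental matrix, reduces to a product with the known intrinsics: E = K2^T F K1. A dependency-free sketch of that relation (illustrative, not the authors' code):

```python
def matmul3(a, b):
    """3x3 matrix product, lists-of-lists representation."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose3(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def essential_from_fundamental(F, K1, K2):
    """E = K2^T * F * K1, where K1 and K2 are the intrinsic matrices
    of the two lenses."""
    return matmul3(transpose3(K2), matmul3(F, K1))

# With identity intrinsics (already-normalized image coordinates), E equals F:
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
F = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
print(essential_from_fundamental(F, I3, I3) == F)  # True
```

In practice F would come from the normalized 8-point algorithm with RANSAC, and E would then be decomposed into rotation and (scale-ambiguous) translation.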

  10. Characterization of the Series 1000 Camera System

    Energy Technology Data Exchange (ETDEWEB)

    Kimbrough, J; Moody, J; Bell, P; Landen, O

    2004-04-07

    The National Ignition Facility requires a compact network addressable scientific grade CCD camera for use in diagnostics ranging from streak cameras to gated x-ray imaging cameras. Due to the limited space inside the diagnostic, an analog and digital input/output option in the camera controller permits control of both the camera and the diagnostic by a single Ethernet link. The system consists of a Spectral Instruments Series 1000 camera, a PC104+ controller, and power supply. The 4k by 4k CCD camera has a dynamic range of 70 dB with less than 14 electron read noise at a 1MHz readout rate. The PC104+ controller includes 16 analog inputs, 4 analog outputs and 16 digital input/output lines for interfacing to diagnostic instrumentation. A description of the system and performance characterization is reported.

  11. Video clustering using camera motion

    OpenAIRE

    Tort Alsina, Laura

    2012-01-01

    How the camera motion in a video clip can be useful for classifying it in semantic terms. This document contains the work done at INP Grenoble during the second semester of the academic year 2011-2012, completed in Barcelona during the first months of 2012-2013. The work presented consists of a camera motion study in different types of video in order to group fragments that have some similarity in content. The document explains how the data extr...

  12. Jacques : Your underwater camera companion

    OpenAIRE

    Edlund, Martin

    2014-01-01

    300 million pictures are uploaded every day on Facebook alone. We live in a society where photography, filming and self-documentation are a natural part of our lives. But how does it affect our experiences when we are always considering camera angles, filters and compositions? We might very well ruin the experiences we so badly want to save. Scuba diving is a special experience. We enter a world with another space of movement, surroundings and animal life. An experience that can only be ex...

  13. Lens assemblies for multispectral camera

    Science.gov (United States)

    Lepretre, Francois

    1994-09-01

    In the framework of a contract with the Indian Space Research Organization (ISRO), MATRA DEFENSE - DOD/UAO have developed, produced, and tested 36 LISS-1 and LISS-2 type lenses and 12 LISS-3 lenses equipped with their interference filters. These lenses are intended to form the optical systems of multispectral imaging sensors aboard the Indian earth observation satellites IRS 1A, 1B, 1C, and 1D. It should be noted that the multispectral cameras of the IRS 1A and 1B satellites have been in operation for two years and have given very satisfactory results according to ISRO. Each of these multispectral LISS-3 cameras consists of lenses, each working in a different spectral band (B2: 520-590 nm; B3: 620-680 nm; B4: 770-860 nm; B5: 1550-1700 nm). In order to superimpose the images of each spectral band without digital processing, the image formats (60 mm) of the lenses are registered to better than 2 micrometers and remain so throughout all the environmental tests. Similarly, owing to the absence of precise thermal control aboard the satellite, the lenses are as athermal as possible.

  14. The Dark Energy Camera (DECam)

    CERN Document Server

    Honscheid, K; Abbott, T; Annis, J; Antonik, M; Barcel, M; Bernstein, R; Bigelow, B; Brooks, D; Buckley-Geer, E; Campa, J; Cardiel, L; Castander, F; Castilla, J; Cease, H; Chappa, S; Dede, E; Derylo, G; Diehl, T; Doel, P; De Vicente, J; Eiting, J; Estrada, J; Finley, D; Flaugher, B; Gaztañaga, E; Gerdes, D; Gladders, M; Guarino, V; Gutíerrez, G; Hamilton, J; Haney, M; Holland, S; Huffman, D; Karliner, I; Kau, D; Kent, S; Kozlovsky, M; Kubik, D; Kühn, K; Kuhlmann, S; Kuk, K; Leger, F; Lin, H; Martínez, G; Martínez, M; Merritt, W; Mohr, J; Moore, P; Moore, T; Nord, B; Ogando, R; Olsen, J; Onal, B; Peoples, J; Qian, T; Roe, N; Sánchez, E; Scarpine, V; Schmidt, R; Schmitt, R; Schubnell, M; Schultz, K; Selen, M; Shaw, T; Simaitis, V; Slaughter, J; Smith, C; Spinka, H; Stefanik, A; Stuermer, W; Talaga, R; Tarle, G; Thaler, J; Tucker, D; Walker, A; Worswick, S; Zhao, A

    2008-01-01

    In this paper we describe the Dark Energy Camera (DECam), which will be the primary instrument used in the Dark Energy Survey. DECam will be a 3 sq. deg. mosaic camera mounted at the prime focus of the Blanco 4m telescope at the Cerro Tololo Inter-American Observatory (CTIO). It consists of a large mosaic CCD focal plane, a five-element optical corrector, five filters (g,r,i,z,Y), a modern data acquisition and control system, and the associated infrastructure for operation in the prime focus cage. The focal plane includes 62 2K x 4K CCD modules (0.27"/pixel) arranged in a hexagon inscribed within the roughly 2.2 degree diameter field of view and 12 smaller 2K x 2K CCDs for guiding, focus and alignment. The CCDs will be 250 micron thick fully-depleted CCDs that have been developed at the Lawrence Berkeley National Laboratory (LBNL). Production of the CCDs and fabrication of the optics, mechanical structure, mechanisms, and control system for DECam are underway; delivery of the instrument to CTIO is scheduled ...

  15. Laboratory calibration and characterization of video cameras

    Science.gov (United States)

    Burner, A. W.; Snow, W. L.; Shortis, M. R.; Goad, W. K.

    1990-01-01

    Some techniques for laboratory calibration and characterization of video cameras used with frame grabber boards are presented. A laser-illuminated displaced reticle technique (with camera lens removed) is used to determine the camera/grabber effective horizontal and vertical pixel spacing as well as the angle of nonperpendicularity of the axes. The principal point of autocollimation and point of symmetry are found by illuminating the camera with an unexpanded laser beam, either aligned with the sensor or lens. Lens distortion and the principal distance are determined from images of a calibration plate suitably aligned with the camera. Calibration and characterization results for several video cameras are presented. Differences between these laboratory techniques and test range and plumb line calibration are noted.
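
    The lens distortion recovered from the calibration-plate images is usually expressed with a polynomial radial model. The sketch below shows that common convention; the coefficients are made up, and the model choice is my assumption rather than a detail from this paper:

```python
def apply_radial_distortion(x, y, k1, k2=0.0):
    """Map ideal normalized image coordinates (x, y) to distorted ones with
    the polynomial radial model r' = r * (1 + k1*r^2 + k2*r^4)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Barrel distortion (k1 < 0) pulls off-axis points towards the image centre:
print(apply_radial_distortion(1.0, 0.0, k1=-0.1))  # (0.9, 0.0)
```

Calibration inverts this mapping: k1 and k2 (and the principal distance) are chosen so that imaged plate points, once undistorted, fall on straight lines.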

  16. MAGIC-II Camera Slow Control Software

    CERN Document Server

    Steinke, B; Tridon, D Borla

    2009-01-01

    The Imaging Atmospheric Cherenkov Telescope MAGIC I has recently been extended to a stereoscopic system by adding a second 17 m telescope, MAGIC-II. One of the major improvements of the second telescope is an improved camera. The Camera Control Program is embedded in the telescope control software as an independent subsystem; it is effective software for monitoring and controlling the camera values and their settings, and is written in the visual programming language LabVIEW. The two main parts, the Central Variables File, which stores all information about the pixels and other camera parameters, and the Comm Control Routine, which controls changes in settings, provide reliable operation. A safety routine protects the camera from misuse through accidental commands, from bad weather conditions, and from hardware errors by automatic reactions.

  17. Action selection for single-camera SLAM

    OpenAIRE

    Vidal-Calleja, Teresa A.; Sanfeliu, Alberto; Andrade-Cetto, J

    2010-01-01

    A method for evaluating, at video rate, the quality of actions for a single camera while mapping unknown indoor environments is presented. The strategy maximizes the mutual information between measurements and states to help the camera avoid making ill-conditioned measurements, which arise from the lack of depth information in monocular vision systems. Our system prompts a user with the appropriate motion commands during 6-DOF visual simultaneous localization and mapping with a handheld camera. Additionall...

  18. Movement-based Interaction in Camera Spaces

    DEFF Research Database (Denmark)

    Eriksson, Eva; Riisgaard Hansen, Thomas; Lykke-Olesen, Andreas

    2006-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion and to describe our three main concepts: space..., relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking, and we show how these three concepts can be used to analyse other camera tracking applications...

  19. Omnidirectional Underwater Camera Design and Calibration

    OpenAIRE

    Josep Bosch; Nuno Gracias; Pere Ridao; David Ribas

    2015-01-01

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land or water applications. For underwater use, a customized housing i...

  20. Camera calibration from road lane markings

    OpenAIRE

    Fung, GSK; Yung, NHC; Pang, GKH

    2003-01-01

    Three-dimensional computer vision techniques have been actively studied for the purpose of visual traffic surveillance. To determine the 3-D environment, camera calibration is a crucial step to resolve the relationship between the 3-D world coordinates and their corresponding image coordinates. A novel camera calibration using the geometry properties of road lane markings is proposed. A set of equations that computes the camera parameters from the image coordinates of the road lane markings a...
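
    The geometry such methods exploit rests on the fact that parallel lane markings meet at a vanishing point in the image, which constrains the camera parameters. A sketch of the first step, intersecting two imaged lane edges (the coordinates are made up for illustration, and this is not the paper's full equation set):

```python
def vanishing_point(line_a, line_b):
    """Intersect two image lines, each given as a pair of points
    ((x1, y1), (x2, y2)). Returns None for (near-)parallel lines."""
    (x1, y1), (x2, y2) = line_a
    (x3, y3), (x4, y4) = line_b
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(den) < 1e-12:
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

# Two converging lane edges in a 640x480 image meet near (320, 224):
print(vanishing_point(((0, 480), (300, 240)), ((640, 480), (340, 240))))
```

From the vanishing point (and the known lane-marking geometry), the camera's orientation and height above the road can then be solved for.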

  1. Camera calibration from surfaces of revolution

    OpenAIRE

    Wong, KYK; Mendonça, PRS; Cipolla, R.

    2003-01-01

    This paper addresses the problem of calibrating a pinhole camera from images of a surface of revolution. Camera calibration is the process of determining the intrinsic or internal parameters (i.e., aspect ratio, focal length, and principal point) of a camera, and it is important for both motion estimation and metric reconstruction of 3D models. In this paper, a novel and simple calibration technique is introduced, which is based on exploiting the symmetry of images of surfaces of revolution. ...

  2. Increased Automation in Stereo Camera Calibration Techniques

    OpenAIRE

    Brandi House; Kevin Nickels

    2006-01-01

    Robotic vision has become a very popular field in recent years due to the numerous promising applications it may enhance. However, errors within the cameras and in their perception of their environment can cause applications in robotics to fail. To help correct these internal and external imperfections, stereo camera calibrations are performed. There are currently many accurate methods of camera calibration available; however, most or all of them are time-consuming and labor-intensive. This r...

  3. Decision about buying a gamma camera

    International Nuclear Information System (INIS)

    A large part of the referral to a nuclear medicine department is usually for imaging studies. Sooner or later, the nuclear medicine specialist will be called upon to make a decision about when and what type of gamma camera to buy. There is no longer an option of choosing between a rectilinear scanner and a gamma camera as the former is virtually out of the market. The decision that one has to make is when to invest in a gamma camera, and then on what basis to select the gamma camera

  4. Advanced High-Definition Video Cameras

    Science.gov (United States)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  5. High-speed cameras at Los Alamos

    Science.gov (United States)

    Brixner, Berlyn

    1997-05-01

    In 1943, there was no camera with the microsecond resolution needed for research in Atomic Bomb development. We had the Mitchell camera (100 fps), the Fastax (10 000), the Marley (100 000), the drum streak (moving slit image) with 10^-5 s resolution, and electro-optical shutters for 10^-6 s. Julian Mack invented a rotating-mirror camera for 10^-7 s, which was in use by 1944. Small rotating mirror changes secured a resolution of 10^-8 s. Photography of oscilloscope traces soon recorded 10^-6 s resolution, which was later improved to 10^-8 s. Mack also invented two time resolving spectrographs for studying the radiation of the first atomic explosion. Much later, he made a large aperture spectrograph for shock wave spectra. An image dissecting drum camera running at 10^7 frames per second (fps) was used for studying high velocity jets. Brixner invented a simple streak camera which gave 10^-8 s resolution. Using a moving film camera, an interferometer pressure gauge was developed for measuring shock-front pressures up to 100 000 psi. An existing Bowen 76-lens frame camera was speeded up by our turbine driven mirror to make 1 500 000 fps. Several streak cameras were made with writing arms from 4 1/2 to 40 in. and apertures from f/2.5 to f/20. We made framing cameras with top speeds of 50 000, 1 000 000, 3 500 000, and 14 000 000 fps.

  6. Omnidirectional Underwater Camera Design and Calibration

    Directory of Open Access Journals (Sweden)

    Josep Bosch

    2015-03-01

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure a complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach.
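
    The refraction modeling mentioned above starts from Snell's law at each interface. A minimal single-interface sketch (the full simulator traces rays through air, glass, and water, which this illustration omits):

```python
import math

def refract_angle(theta_incident_deg, n1=1.0, n2=1.33):
    """Snell's law n1*sin(t1) = n2*sin(t2) for a ray crossing a flat
    interface, e.g. air (n1) to water (n2). Returns the refracted angle
    in degrees, or None for total internal reflection."""
    s = n1 * math.sin(math.radians(theta_incident_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# A ray hitting a flat port at 45 deg in air bends to about 32 deg in water,
# which is why a flat-port housing narrows the camera's effective FOV:
print(round(refract_angle(45.0), 1))  # 32.1
```

This single-ray view already explains the paper's point that wide-angle lenses break the pinhole model underwater: the bending grows nonlinearly with incidence angle, so it cannot be absorbed into a single focal-length change.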

  7. Research of Camera Calibration Based on DSP

    OpenAIRE

    Zheng Zhang; Yukun Wan; Lixin Cai

    2013-01-01

    To take advantage of the high efficiency and stability of DSP in data processing and of the functions of the OpenCV library, this study puts forward a scheme for camera calibration in a DSP embedded system. A camera calibration algorithm based on OpenCV is designed by analyzing the camera model and lens distortion. The transplantation of EMCV to DSP is completed, and the calibration algorithm is migrated and optimized based on the CCS development environment and the ...

  8. Explosive Transient Camera (ETC) Program

    Science.gov (United States)

    Ricker, George

    1991-01-01

    Since the inception of the ETC program, a wide range of new technologies was developed to support this astronomical instrument. The prototype unit was installed at ETC Site 1. The first partially automated observations were made and some major renovations were later added to the ETC hardware. The ETC was outfitted with new thermoelectrically-cooled CCD cameras and a sophisticated vacuum manifold, which, together, made the ETC a much more reliable unit than the prototype. The ETC instrumentation and building were placed under full computer control, allowing the ETC to operate as an automated, autonomous instrument with virtually no human intervention necessary. The first fully-automated operation of the ETC was performed, during which the ETC monitored the error region of the repeating soft gamma-ray burster SGR 1806-21.

  9. Framework for Evaluating Camera Opinions

    Directory of Open Access Journals (Sweden)

    K.M. Subramanian

    2015-03-01

    Opinion mining plays a major role in text mining applications such as brand and product positioning, customer relationship management, consumer attitude detection, and market research. These applications lead to a new generation of companies and products meant for online market perception, online content monitoring, and reputation management. Expansion of the web inspires users to contribute and express opinions via blogs, videos, and social networking sites. Such platforms provide valuable information for analysis of sentiment pertaining to a product or service. This study investigates the performance of various feature extraction methods and classification algorithms for opinion mining. Opinions expressed on the Amazon website for cameras are collected and used for evaluation. Features are extracted from the opinions using Term Document Frequency and Inverse Document Frequency (TDFIDF). Feature transformation is achieved through Principal Component Analysis (PCA) and kernel PCA. Naïve Bayes, K-Nearest Neighbor, and Classification and Regression Trees (CART) algorithms classify the extracted features.
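
    The feature-weighting step referred to above can be illustrated with a minimal term frequency-inverse document frequency computation (a generic sketch; the study's exact weighting and preprocessing are not specified here):

```python
import math
from collections import Counter

def tfidf(docs):
    """Term frequency * inverse document frequency for a list of tokenized
    documents; returns one {term: weight} dict per document."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document freq.
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (c / len(doc)) * math.log(n / df[t])
                    for t, c in tf.items()})
    return out

docs = [["great", "camera", "lens"],
        ["blurry", "lens"],
        ["great", "battery"]]
weights = tfidf(docs)
# "camera" appears in only one of the three reviews, so it is weighted
# higher than "lens", which appears in two:
print(weights[0]["camera"] > weights[0]["lens"])  # True
```

After such weighting, each opinion becomes a numeric vector suitable for PCA and for classifiers like Naïve Bayes or k-NN.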

  10. HRSC: High resolution stereo camera

    Science.gov (United States)

    Neukum, G.; Jaumann, R.; Basilevsky, A.T.; Dumke, A.; Van Gasselt, S.; Giese, B.; Hauber, E.; Head, J. W., III; Heipke, C.; Hoekzema, N.; Hoffmann, H.; Greeley, R.; Gwinner, K.; Kirk, R.; Markiewicz, W.; McCord, T.B.; Michael, G.; Muller, Jan-Peter; Murray, J.B.; Oberst, J.; Pinet, P.; Pischel, R.; Roatsch, T.; Scholten, F.; Willner, K.

    2009-01-01

    The High Resolution Stereo Camera (HRSC) on Mars Express has delivered a wealth of image data, amounting to over 2.5 TB from the start of the mapping phase in January 2004 to September 2008. In that time, more than a third of Mars was covered at a resolution of 10-20 m/pixel in stereo and colour. After five years in orbit, HRSC is still in excellent shape, and it could continue to operate for many more years. HRSC has proven its ability to close the gap between the low-resolution Viking image data and the high-resolution Mars Orbiter Camera images, leading to a global picture of the geological evolution of Mars that is now much clearer than ever before. Derived highest-resolution terrain model data have closed major gaps and provided an unprecedented insight into the shape of the surface, which is paramount not only for surface analysis and geological interpretation, but also for combination with and analysis of data from other instruments, as well as in planning for future missions. This chapter presents the scientific output from data analysis and highlevel data processing, complemented by a summary of how the experiment is conducted by the HRSC team members working in geoscience, atmospheric science, photogrammetry and spectrophotometry. Many of these contributions have been or will be published in peer-reviewed journals and special issues. They form a cross-section of the scientific output, either by summarising the new geoscientific picture of Mars provided by HRSC or by detailing some of the topics of data analysis concerning photogrammetry, cartography and spectral data analysis.

  11. MISR FIRSTLOOK radiometric camera-by-camera Cloud Mask V001

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the FIRSTLOOK Radiometric camera-by-camera Cloud Mask (RCCM) dataset produced using ancillary inputs (RCCT) from the previous time period. It is...

  12. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras …

  13. Improving Situational Awareness in camera surveillance by combining top-view maps with camera images

    NARCIS (Netherlands)

    Kooi, F.L.; Zeeders, R.

    2009-01-01

The goal of the experiment described is to improve today's camera surveillance in public spaces. Three designs with the camera images combined on a top-view map were compared to each other and to the current situation in camera surveillance. The goal was to test which design makes spatial relationships …

  14. Camera self-calibration from translation by referring to a known camera.

    Science.gov (United States)

    Zhao, Bin; Hu, Zhaozheng

    2015-09-01

    This paper presents a novel linear method for camera self-calibration by referring to a known (or calibrated) camera. The method requires at least three images, with two images generated by the uncalibrated camera from pure translation and one image generated by the known reference camera. We first propose a method to compute the infinite homography from scene depths. Based on this, we use two images generated by translating the uncalibrated camera to recover scene depths, which are further utilized to linearly compute the infinite homography between an arbitrary uncalibrated image, and the image from the known camera. With the known camera as reference, the computed infinite homography is readily decomposed for camera calibration. The proposed self-calibration method has been tested with simulation and real image data. Experimental results demonstrate that the method is practical and accurate. This paper proposes using a "known reference camera" for camera calibration. The pure translation, as required in the method, is much more maneuverable, compared with some strict motions in the literature, such as pure rotation. The proposed self-calibration method has good potential for solving online camera calibration problems, which has important applications, especially for multicamera and zooming camera systems. PMID:26368906
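The final decomposition step can be illustrated numerically. Assuming the infinite homography H between an image from the unknown camera and one from the reference camera satisfies the standard relation H ~ K R inv(K_ref) with R a rotation (a textbook relation, not code from the paper), the unknown intrinsics K follow from a triangular factorisation:

```python
import numpy as np

def intrinsics_from_infinite_homography(H, K_ref):
    """Recover unknown intrinsics K from the infinite homography H to a
    calibrated reference camera: H ~ K @ R @ inv(K_ref) with R a rotation,
    hence H @ K_ref @ K_ref.T @ H.T = s^2 * K @ K.T."""
    A = H @ K_ref @ K_ref.T @ H.T
    P = np.fliplr(np.eye(3))            # anti-diagonal permutation
    L = np.linalg.cholesky(P @ A @ P)   # lower-triangular Cholesky factor
    K = P @ L @ P                       # upper-triangular with K @ K.T = A
    return K / K[2, 2]                  # remove the projective scale s

def rot_z(a):
    """Rotation about the optical axis, enough for a synthetic check."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Synthetic data: build H from known ground truth, then recover K_true.
K_ref = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
K_true = np.array([[1000.0, 0.0, 300.0], [0.0, 950.0, 260.0], [0.0, 0.0, 1.0]])
H = 1.7 * K_true @ rot_z(0.2) @ np.linalg.inv(K_ref)   # arbitrary scale 1.7
print(intrinsics_from_infinite_homography(H, K_ref).round(3))
```

The flip-Cholesky-flip trick produces the upper-triangular factor of K K^T directly; normalising by K[2,2] removes the unknown projective scale.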

  15. Centering mount for a gamma camera

    International Nuclear Information System (INIS)

    A device for centering a γ-camera detector in case of radionuclide diagnosis is described. It permits the use of available medical coaches instead of a table with a transparent top. The device can be used for centering a detector (when it is fixed at the low end of a γ-camera) on a required area of the patient's body

  16. Case Camera obscura 1995–2014

    OpenAIRE

    Inkinen, Ari

    2015-01-01

In 1995, Sininauhaliitto developed Camera obscura, an experiential values and substance-abuse education programme. The operating concept of the model and its content are unique. The model, which is based on social empowerment, was integrated into the school curriculum and implemented in cooperation with local youth-work actors. The model, built on interaction, experiential learning and encountering the young person, has been implemented and developed through various projects. Camera obscura...

  17. Creating and Using a Camera Obscura

    Science.gov (United States)

    Quinnell, Justin

    2012-01-01

    The camera obscura (Latin for "darkened room") is the earliest optical device and goes back over 2500 years. The small pinhole or lens at the front of the room allows light to enter and this is then "projected" onto a screen inside the room. This differs from a camera, which projects its image onto light-sensitive material. Originally images were…

  18. Rosetta Star Tracker and Navigation Camera

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

Proposal in response to the Invitation to Tender (ITT) issued by Matra Marconi Space (MSS) for the procurement of the ROSETTA Star Tracker and Navigation Camera.

  19. Matching image color from different cameras

    Science.gov (United States)

    Fairchild, Mark D.; Wyble, David R.; Johnson, Garrett M.

    2008-01-01

    Can images from professional digital SLR cameras be made equivalent in color using simple colorimetric characterization? Two cameras were characterized, these characterizations were implemented on a variety of images, and the results were evaluated both colorimetrically and psychophysically. A Nikon D2x and a Canon 5D were used. The colorimetric analyses indicated that accurate reproductions were obtained. The median CIELAB color differences between the measured ColorChecker SG and the reproduced image were 4.0 and 6.1 for the Canon (chart and spectral respectively) and 5.9 and 6.9 for the Nikon. The median differences between cameras were 2.8 and 3.4 for the chart and spectral characterizations, near the expected threshold for reliable image difference perception. Eight scenes were evaluated psychophysically in three forced-choice experiments in which a reference image from one of the cameras was shown to observers in comparison with a pair of images, one from each camera. The three experiments were (1) a comparison of the two cameras with the chart-based characterizations, (2) a comparison with the spectral characterizations, and (3) a comparison of chart vs. spectral characterization within and across cameras. The results for the three experiments are 64%, 64%, and 55% correct respectively. Careful and simple colorimetric characterization of digital SLR cameras can result in visually equivalent color reproduction.
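The CIELAB colour differences quoted above are distances between measured and reproduced patch values; a sketch using the CIE 1976 Delta E*ab (the simple Euclidean form, assumed here) with invented patch values:

```python
import numpy as np

def delta_e_76(lab1, lab2):
    """Delta E*ab (CIE 1976): Euclidean distance between CIELAB values."""
    return np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float),
                          axis=-1)

# Measured chart patches vs. the characterized camera's reproduction
# (three invented patches, not the ColorChecker SG data).
measured   = np.array([[52.0, 21.0, -18.0], [71.0, -3.0, 40.0], [35.0, 10.0, 5.0]])
reproduced = np.array([[54.0, 24.0, -16.0], [70.0, -1.0, 43.0], [35.0, 10.0, 5.0]])

diffs = delta_e_76(measured, reproduced)
print(diffs.round(2), "median:", round(float(np.median(diffs)), 2))
```

Reporting the median over all patches, as the study does, makes the summary robust to a few badly reproduced colours.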

  20. Fazendo 3d com uma camera so

    CERN Document Server

    Lunazzi, J J

    2010-01-01

A simple system for making stereo photographs or videos, based on just two mirrors that split the image field, was created in 1989 and recently adapted to a digital camera setup.

  1. Thermal Cameras in School Laboratory Activities

    Science.gov (United States)

    Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.

    2015-01-01

    Thermal cameras offer real-time visual access to otherwise invisible thermal phenomena, which are conceptually demanding for learners during traditional teaching. We present three studies of students' conduction of laboratory activities that employ thermal cameras to teach challenging thermal concepts in grades 4, 7 and 10-12. Visualization of…

  2. CCD Color Camera Characterization for Image Measurements

    NARCIS (Netherlands)

    Withagen, P.J.; Groen, F.C.A.; Schutte, K.

    2007-01-01

In this article, we analyze a range of different types of cameras for their use in measurements. We verify a general model of a charge-coupled device camera using experiments. This model includes gain and offset, additive and multiplicative noise, and gamma correction. It is shown that for several …

  3. AIM: Ames Imaging Module Spacecraft Camera

    Science.gov (United States)

    Thompson, Sarah

    2015-01-01

    The AIM camera is a small, lightweight, low power, low cost imaging system developed at NASA Ames. Though it has imaging capabilities similar to those of $1M plus spacecraft cameras, it does so on a fraction of the mass, power and cost budget.

  4. Cameras Monitor Spacecraft Integrity to Prevent Failures

    Science.gov (United States)

    2014-01-01

The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained from working with NASA to develop an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.

  5. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.

  6. Adapting virtual camera behaviour through player modelling

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Research in virtual camera control has focused primarily on finding methods to allow designers to place cameras effectively and efficiently in dynamic and unpredictable environments, and to generate complex and dynamic plans for cinematography in virtual environments. In this article, we propose a...

  7. Flow visualization by mobile phone cameras

    Science.gov (United States)

    Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.

    2016-06-01

Mobile smart phones have completely changed people's communication within the last ten years. However, these devices do not only offer communication through different channels but also devices and applications for fun and recreation. In this respect, mobile phones now include relatively fast (up to 240 Hz) cameras to capture high-speed videos of sport events or other fast processes. The article therefore explores the possibility of making use of this development and the widespread availability of these cameras for velocity measurements in industrial or technical applications and in fluid dynamics education in high schools and at universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment of a free water jet was used to prove the concept, shed some light on the achievable quality and determine bottlenecks by comparing the results obtained with a mobile phone camera with data taken by a high-speed camera suited for scientific experiments.
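The core of such a simplistic PIV system is a cross-correlation of interrogation windows between consecutive frames. A minimal sketch on synthetic data (a real phone-camera pipeline would add seeding, calibration and subpixel peak fitting):

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Integer-pixel shift of win_b relative to win_a via FFT correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Circular cross-correlation: corr[m] = sum_n a[n] * b[n + m].
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map FFT indices to signed shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, a.shape))

rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))                           # synthetic particles
frame_b = np.roll(frame_a, shift=(3, -5), axis=(0, 1))   # known motion
print(window_displacement(frame_a, frame_b))             # -> (3, -5)
```

Dividing each frame into many such windows yields a vector field; dividing displacement by the inter-frame time (e.g. 1/240 s) gives velocity.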

  8. New two-dimensional photon camera

    Science.gov (United States)

    Papaliolios, C.; Mertz, L.

    1982-01-01

A photon-sensitive camera, applicable to speckle imaging of astronomical sources, high-resolution spectroscopy of faint galaxies in a crossed-dispersion spectrograph, or narrow-band direct imaging of galaxies, is presented. The camera is shown to supply 8-bit by 8-bit photon positions (256 x 256 pixels) for as many as 10^6 photons/s with a maximum linear resolution of approximately 10 microns. The sequence of photon positions is recorded digitally with a VHS-format video tape recorder or formed into an immediate image via a microcomputer. The four basic elements of the camera are described in detail: a high-gain image intensifier with fast-decay output phosphor, a glass-prism optical-beam splitter, a set of Gray-coded masks, and a photomultiplier tube for each mask. The characteristics of the camera are compared to those of other photon cameras.

  9. Airborne Digital Camera. A digital view from above; Airborne Digital Camera. Der digitale Blick von oben

    Energy Technology Data Exchange (ETDEWEB)

    Roeser, H.P. [DLR Deutsches Zentrum fuer Luft- und Raumfahrt e.V., Berlin (Germany). Inst. fuer Weltraumsensorik und Planetenerkundung

    1999-09-01

The Airborne Digital Camera (ADC) is based on the WAOSS camera of the MARS-96 mission: the Mars camera conceived for that mission supplied the foundation for the innovative concept of a digital airborne camera, which is set to place airborne photogrammetry and remote sensing on a completely new technological basis. The goal of the ADC project is the development of the first commercial digital aerial camera. (orig.)

  10. True three-dimensional camera

    Science.gov (United States)

    Kornreich, Philipp; Farell, Bart

    2013-01-01

    An imager that can measure the distance from each pixel to the point on the object that is in focus at the pixel is described. This is accomplished by short photo-conducting lightguides at each pixel. In the eye the rods and cones are the fiber-like lightguides. The device uses ambient light that is only coherent in spherical shell-shaped light packets of thickness of one coherence length. Modern semiconductor technology permits the construction of lightguides shorter than a coherence length of ambient light. Each of the frequency components of the broad band light arriving at a pixel has a phase proportional to the distance from an object point to its image pixel. Light frequency components in the packet arriving at a pixel through a convex lens add constructively only if the light comes from the object point in focus at this pixel. The light in packets from all other object points cancels. Thus the pixel receives light from one object point only. The lightguide has contacts along its length. The lightguide charge carriers are generated by the light patterns. These light patterns, and thus the photocurrent, shift in response to the phase of the input signal. Thus, the photocurrent is a function of the distance from the pixel to its object point. Applications include autonomous vehicle navigation and robotic vision. Another application is a crude teleportation system consisting of a camera and a three-dimensional printer at a remote location.

  11. Cloud Computing with Context Cameras

    Science.gov (United States)

    Pickles, A. J.; Rosing, W. E.

    2016-05-01

    We summarize methods and plans to monitor and calibrate photometric observations with our autonomous, robotic network of 2m, 1m and 40cm telescopes. These are sited globally to optimize our ability to observe time-variable sources. Wide field "context" cameras are aligned with our network telescopes and cycle every ˜2 minutes through BVr'i'z' filters, spanning our optical range. We measure instantaneous zero-point offsets and transparency (throughput) against calibrators in the 5-12m range from the all-sky Tycho2 catalog, and periodically against primary standards. Similar measurements are made for all our science images, with typical fields of view of ˜0.5 degrees. These are matched against Landolt, Stetson and Sloan standards, and against calibrators in the 10-17m range from the all-sky APASS catalog. Such measurements provide pretty good instantaneous flux calibration, often to better than 5%, even in cloudy conditions. Zero-point and transparency measurements can be used to characterize, monitor and inter-compare sites and equipment. When accurate calibrations of Target against Standard fields are required, monitoring measurements can be used to select truly photometric periods when accurate calibrations can be automatically scheduled and performed.
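The zero-point measurement described reduces to a robust offset between instrumental and catalog magnitudes of field calibrators. A sketch with illustrative numbers (not Tycho2/APASS data):

```python
import numpy as np

def zero_point(inst_mag, cat_mag):
    """Median offset that maps instrumental onto catalog magnitudes."""
    return float(np.median(np.asarray(cat_mag) - np.asarray(inst_mag)))

# Instrumental magnitudes of four calibrators in a science frame and their
# catalog values (illustrative numbers, not real catalog entries).
inst = np.array([-8.31, -7.95, -9.10, -8.70])
cat  = np.array([12.70, 13.02, 11.93, 12.28])

zp = zero_point(inst, cat)
calibrated = inst + zp                 # calibrated magnitudes for the frame
scatter = np.std(cat - calibrated)     # rough quality / transparency check
print(round(zp, 3), round(float(scatter), 3))
```

Tracking the zero point and its scatter over time is what lets such a network flag non-photometric periods and still deliver useful calibration through thin cloud.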

  12. New camera systems for fuel services

    International Nuclear Information System (INIS)

    AREVA NP Fuel Services have many years of experience in visual examination and measurements on fuel assemblies and associated core components by using state of the art cameras and measuring technologies. The used techniques allow the surface and dimensional characterization of materials and shapes by visual examination. New enhanced and sophisticated technologies for fuel services f. e. are two shielded color camera systems for use under water and close inspection of a fuel assembly. Nowadays the market requirements for detecting and characterization of small defects (lower than the 10th of one mm) or cracks and analyzing surface appearances on an irradiated fuel rod cladding or fuel assembly structure parts have increased. Therefore it is common practice to use movie cameras with higher resolution. The radiation resistance of high resolution CCD cameras is in general very low and it is not possible to use them unshielded close to a fuel assembly. By extending the camera with a mirror system and shielding around the sensitive parts, the movie camera can be utilized for fuel assembly inspection. AREVA NP Fuel Services is now equipped with such kind of movie cameras. (orig.)

  13. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a +-2-deg deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables
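The kinematic approach above can be sketched as follows: transform the target into the camera frame with a 4 x 4 homogeneous matrix, convert to pan/tilt angles, and suppress commands inside the +-2-deg deadband. Frames and numbers are hypothetical, not the ORNL implementation:

```python
import numpy as np

def pan_tilt(T_world_cam, target_world, deadband_deg=2.0):
    """Pan/tilt (degrees) that centre the target; zero inside the deadband."""
    T_cam_world = np.linalg.inv(T_world_cam)      # world -> camera transform
    x, y, z, _ = T_cam_world @ np.append(target_world, 1.0)
    pan = np.degrees(np.arctan2(x, z))            # about the vertical axis
    tilt = np.degrees(np.arctan2(-y, np.hypot(x, z)))
    pan = pan if abs(pan) > deadband_deg else 0.0
    tilt = tilt if abs(tilt) > deadband_deg else 0.0
    return pan, tilt

# Camera at the world origin looking along +z; target ahead and to the right,
# with a small vertical offset that falls inside the deadband.
T = np.eye(4)
print(pan_tilt(T, np.array([1.0, 0.02, 1.0])))    # pan ~45 deg, tilt 0.0
```

In the actual system the manipulator's joint sensors supply the target pose and the camera-positioning system supplies T_world_cam; the deadband keeps the camera still for small errors, avoiding the continuous motion that causes operator "seasickness".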

  16. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different … For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of...

  17. Multi-Camera Calibration Using a Globe

    OpenAIRE

    Shen, Rui; Cheng, Irene; Basu, Anup

    2008-01-01

The need for calibration of multiple cameras working together in a network, or for the acquisition of free viewpoint video for 3D TV, is becoming increasingly important in recent years. In this paper we present a novel approach for calibrating multiple cameras using an ordinary globe that is usually available in every household. This method makes it possible to reduce multi-camera calibration to a level that is attainable by non-technical users. Our technique requires only one view of the globe …

  18. Calibration of detector sensitivity in positron cameras

    International Nuclear Information System (INIS)

    An improved method for calibrating detector sensitivities in a positron camera has been developed. The calibration phantom is a cylinder of activity placed near the center of the camera and fully within the field of view. The calibration data is processed in such a manner that the following two important properties are achieved. The estimate of a detector sensitivity is unaffected by the sensitivities of the other detectors. The estimates are insensitive to displacements of the calibrating phantom from the camera center. Both of these properties produce a more accurate detector calibration

  19. Uncertainty of temperature measurement with thermal cameras

    Science.gov (United States)

    Chrzanowski, Krzysztof; Matyszkiel, Robert; Fischer, Joachim; Barela, Jaroslaw

    2001-06-01

All main international metrological organizations propose a parameter called uncertainty as a measure of the accuracy of measurements. A mathematical model that enables the calculation of the uncertainty of temperature measurement with thermal cameras is presented. The standard uncertainty or the expanded uncertainty of the temperature measurement of the tested object can be calculated when the bounds within which the real effective object emissivity εr, the real effective background temperature Tba(r) and the real effective atmospheric transmittance τa(r) lie can be estimated, and when the intrinsic uncertainty and the relative spectral sensitivity of the thermal camera are known.
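The GUM-style propagation can be illustrated with a deliberately simplified radiometric model (a grey-body T^4 balance rather than the camera's true spectral response); the propagation itself, summing squared sensitivity coefficients times input uncertainties, is the standard combined-uncertainty formula:

```python
import numpy as np

def t_obj(S, eps, tau, T_bg, T_atm):
    """Object temperature from the simplified signal model
    S = eps*tau*T_obj^4 + (1-eps)*tau*T_bg^4 + (1-tau)*T_atm^4."""
    return ((S - (1.0 - eps) * tau * T_bg**4 - (1.0 - tau) * T_atm**4)
            / (eps * tau)) ** 0.25

def combined_uncertainty(f, x, u, h=1e-6):
    """GUM combined standard uncertainty: sqrt(sum((df/dx_i * u_i)^2)),
    with the partial derivatives taken by central differences."""
    x = np.asarray(x, float)
    total = 0.0
    for i, ui in enumerate(u):
        dx = np.zeros_like(x)
        dx[i] = h * max(1.0, abs(x[i]))
        dfdx = (f(*(x + dx)) - f(*(x - dx))) / (2.0 * dx[i])
        total += (dfdx * ui) ** 2
    return total ** 0.5

# A 310 K object with assumed (illustrative) input uncertainties.
eps, tau, T_bg, T_atm = 0.90, 0.95, 293.0, 288.0
S = eps * tau * 310.0**4 + (1 - eps) * tau * T_bg**4 + (1 - tau) * T_atm**4
u_c = combined_uncertainty(t_obj, [S, eps, tau, T_bg, T_atm],
                           [0.01 * S, 0.05, 0.02, 2.0, 2.0])
print(f"T_obj = {t_obj(S, eps, tau, T_bg, T_atm):.1f} K, u_c = {u_c:.2f} K")
```

Multiplying the combined standard uncertainty by a coverage factor (typically k = 2) gives the expanded uncertainty mentioned in the abstract.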

  20. Fuzzy logic control for camera tracking system

    Science.gov (United States)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
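A toy Mamdani-style fuzzy rule base for the pan channel illustrates the idea; the membership functions and gains below are invented for illustration, not the flight system's rules:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def pan_rate(offset):
    """offset: normalized horizontal target error in [-1, 1]; returns deg/s."""
    # Fuzzification: how strongly is the target left / centred / right?
    left   = tri(offset, -1.5, -1.0, 0.0)
    center = tri(offset, -0.5,  0.0, 0.5)
    right  = tri(offset,  0.0,  1.0, 1.5)
    # Rules with singleton consequents: pan toward the target, hold if centred.
    num = left * (-30.0) + center * 0.0 + right * 30.0
    den = left + center + right
    return num / den if den else 0.0    # weighted-average defuzzification

print(pan_rate(0.0))    # centred target: no motion
print(pan_rate(0.6))    # target right of centre: positive pan rate
```

A full controller would add a symmetric rule base for tilt and further inputs (e.g. error rate) so the camera keeps the target centred without overshooting.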

  1. Close-range photogrammetry with video cameras

    Science.gov (United States)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1985-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  2. Neural network method for characterizing video cameras

    Science.gov (United States)

    Zhou, Shuangquan; Zhao, Dazun

    1998-08-01

This paper presents a neural network method for characterizing a color video camera. A multilayer feedforward network, trained with the error back-propagation learning rule, is used as a nonlinear transformer to model a camera, realizing a mapping from the CIELAB color space to RGB color space. With a SONY video camera, D65 illuminant, Pritchard spectroradiometer, 410 JIS color charts as training data and 36 charts as testing data, results show that the mean error on the training data is 2.9 and that on the testing data is 4.0 in a 256^3 RGB space.
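The approach, a small feedforward network trained by error back-propagation to map CIELAB to RGB, can be sketched with NumPy. The "device response" below is a made-up smooth nonlinearity standing in for the SONY camera data, and the architecture is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
lab = rng.uniform([0.0, -50.0, -50.0], [100.0, 50.0, 50.0], size=(256, 3))
M = np.array([[0.02, 0.01, 0.00],
              [0.01, 0.03, 0.01],
              [0.00, 0.01, 0.02]])
rgb = 1.0 / (1.0 + np.exp(-(lab @ M - 1.0)))     # hypothetical device response

X = (lab - lab.mean(0)) / lab.std(0)             # normalised network input
W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 3)); b2 = np.zeros(3)

losses, lr = [], 0.5
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                     # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid keeps RGB in [0, 1]
    err = out - rgb
    losses.append(float((err ** 2).mean()))
    g = err * out * (1.0 - out) / len(X)         # gradient at output pre-activation
    gh = (g @ W2.T) * (1.0 - h ** 2)             # back-propagated to hidden layer
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)

print(f"MSE {losses[0]:.4f} -> {losses[-1]:.4f}")  # training error decreases
```

The paper trains on measured chart colorimetry rather than a synthetic map, and evaluates the error in RGB counts; the loop above only shows the mechanics of the back-propagation fit.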

  3. Screen-Camera Calibration Using Gray Codes

    OpenAIRE

    FRANCKEN, Yannick; Hermans, Chris; Bekaert, Philippe

    2009-01-01

In this paper we present a method for efficient calibration of a screen-camera setup in which the camera is not directly facing the screen. A spherical mirror is used to make the screen visible to the camera. Using Gray code illumination patterns, we can uniquely identify the reflection of each screen pixel on the imaged spherical mirror. This allows us to compute a large set of 2D-3D correspondences using only two sphere locations. Compared to previous work, this means we require less manual …
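The Gray-code identification step rests on the reflected binary code, in which successive indices differ in exactly one bit, so a one-pixel misregistration corrupts at most one bit of the decoded coordinate. A minimal encode/decode sketch (pattern count and resolution are illustrative; a real setup projects both column and row codes):

```python
def to_gray(n):
    """Reflected binary Gray code of n."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray code by XOR-folding the higher bits down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# One illumination pattern per bit: the camera records, for every mirror
# pixel, whether the corresponding screen region is lit in each frame.
width = 1024
bits = (width - 1).bit_length()          # 10 patterns for 1024 columns
column = 637
code = to_gray(column)
observed = [(code >> i) & 1 for i in reversed(range(bits))]  # MSB first
decoded = from_gray(int("".join(map(str, observed)), 2))
print(decoded)  # -> 637
```

Decoding the observed bit sequence at every camera pixel yields the screen pixel it reflects, which is exactly the dense 2D-3D correspondence set the calibration uses.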

  4. Self-calibration of Large Scale Camera Networks

    OpenAIRE

    Goorts, Patrik; MAESEN, Steven; Liu, Yunjun; Dumont, Maarten; Bekaert, Philippe; Lafruit, Gauthier

    2014-01-01

    In this paper, we present a method to calibrate large scale camera networks for multi-camera computer vision applications in sport scenes. The calibration process determines precise camera parameters, both within each camera (focal length, principal point, etc) and inbetween the cameras (their relative position and orientation). To this end, we first extract candidate image correspondences over adjacent cameras, without using any calibration object, solely relying on existing feature matching...

  5. CALIBRATION AND EPIPOLAR GEOMETRY OF GENERIC HETEROGENOUS CAMERA SYSTEMS

    OpenAIRE

    Luber, A.; Rueß, D; Manthey, K.; Reulke, R.

    2012-01-01

The application of perspective camera systems in photogrammetry and computer vision is state of the art. In recent years non-perspective and especially omnidirectional camera systems have been increasingly used in close-range photogrammetry tasks. The general perspective camera model, i.e. the pinhole model, cannot be applied when using non-perspective camera systems. However, several camera models for different omnidirectional camera systems are proposed in the literature. Using different types o...

  6. Towards Adaptive Virtual Camera Control In Computer Games

    OpenAIRE

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platf...

  7. Action selection for single-camera SLAM.

    Science.gov (United States)

    Vidal-Calleja, Teresa A; Sanfeliu, Alberto; Andrade-Cetto, Juan

    2010-12-01

    A method for evaluating, at video rate, the quality of actions for a single camera while mapping unknown indoor environments is presented. The strategy maximizes mutual information between measurements and states to help the camera avoid making ill-conditioned measurements, which arise from the lack of depth information in monocular vision systems. Our system prompts a user with the appropriate motion commands during 6-DOF visual simultaneous localization and mapping with a handheld camera. Additionally, the system has been ported to a mobile robotic platform, thus closing the control-estimation loop. To show the viability of the approach, simulations and experiments are presented for the unconstrained motion of a handheld camera and for the motion of a mobile robot with nonholonomic constraints. When combined with a path planner, the technique safely drives to a marked goal while, at the same time, producing an optimal estimated map. PMID:20350845
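For Gaussian state estimates, as in filter-based monocular SLAM, the mutual information between a measurement and the state reduces to half the log-ratio of prior to posterior covariance determinants. The 2x2 helper below is a toy sketch of that action-selection criterion, not the paper's code:

```python
import math

def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def info_gain(P_prior, P_post):
    """Mutual information I = 0.5 * ln(det(P_prior) / det(P_post)):
    the entropy reduction of a Gaussian state after a measurement."""
    return 0.5 * math.log(det2(P_prior) / det2(P_post))
```

An action whose predicted measurement shrinks the covariance most yields the largest gain and would be the one suggested to the user.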

  8. Traffic Cameras, MDTA Cameras, Camera locations at MDTA, Camera location inside the tunnel (SENSITIVE), Published in 2010, 1:1200 (1in=100ft) scale, Maryland Transportation Authority.

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — This Traffic Cameras dataset, published at 1:1200 (1in=100ft) scale, was produced all or in part from Field Survey/GPS information as of 2010. It is described as...

  9. The twisted cubic and camera calibration

    OpenAIRE

    Buchanan, Thomas

    1988-01-01

    We state a uniqueness theorem for camera calibration in terms of the twisted cubic. The theorem assumes the general linear model and is essentially a reformulation of Seydewitz's star generation theorem.

  10. Camera Based Navigation System with Augmented Reality

    Directory of Open Access Journals (Sweden)

    M. Marcu

    2012-06-01

    Full Text Available Nowadays smart mobile devices have enough processing power, memory, storage and always-connected wireless communication bandwidth to support any type of application. Augmented reality (AR) proposes a new type of application that tries to enhance the real world by superimposing or combining virtual objects or computer-generated information with it. In this paper we present a camera-based navigation system with augmented reality integration. The proposed system works as follows: the user points the camera of the smartphone towards a point of interest, like a building or any other place, and the application searches for relevant information about that specific place and superimposes the data over the video feed on the display. When the user moves the camera away, changing its orientation, the data changes as well, in real time, with the proper information about the place that is now in the camera view.
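Matching the camera orientation to a point of interest can be illustrated by computing the great-circle bearing from the phone to the POI and comparing it with the compass heading. This is a generic formulation, not the paper's code; the field-of-view check and its parameter are our assumptions:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing (degrees clockwise from north)
    from (lat1, lon1) to (lat2, lon2), all in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

def in_view(heading_deg, poi_bearing_deg, half_fov=30.0):
    """True if the POI bearing falls within the camera's horizontal FOV."""
    diff = abs((poi_bearing_deg - heading_deg + 180) % 360 - 180)
    return diff <= half_fov
```

A POI due east of the user has bearing 90; it would be overlaid only while the compass heading stays within the assumed 60-degree horizontal field of view.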

  11. Calibration Procedures on Oblique Camera Setups

    Science.gov (United States)

    Kemper, G.; Melykuti, B.; Yu, C.

    2016-06-01

    Beside the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and is equipped with a 50 mm lens, while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms; these had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 exposures of all 5 cameras and registered GPS/IMU data. This specific mission was designed at two different altitudes with additional cross lines at each flying height. The five images from each exposure position have no overlaps, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration. In a first step with the help of

  12. Research of Camera Calibration Based on DSP

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2013-09-01

    Full Text Available To take advantage of the high efficiency and stability of DSP in data processing and of the functions of the OpenCV library, this study brings forward a scheme for camera calibration in a DSP embedded system. An algorithm for camera calibration based on OpenCV is designed by analyzing the camera model and lens distortion. The transplantation of EMCV to DSP is completed, and the calibration algorithm is migrated and optimized based on the CCS development environment and the DSP/BIOS system. While realizing the calibration function, this algorithm improves the efficiency of program execution and the precision of calibration, and lays the foundation for further research on visual localization based on DSP embedded systems.
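The lens-distortion part of the camera model mentioned above is, in OpenCV's convention, a polynomial in the squared radius of the normalized image coordinates. A two-term radial sketch of that standard model, not the study's DSP code:

```python
def distort_radial(x, y, k1, k2):
    """Apply two-term radial distortion (the k1, k2 terms of the standard
    OpenCV model) to normalized image coordinates (x, y):
    x_d = x * (1 + k1*r^2 + k2*r^4), and likewise for y."""
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor
```

Calibration estimates k1 and k2 (together with the intrinsics) so that this forward model best explains the observed checkerboard corners; undistortion inverts it numerically.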

  13. Ge Quantum Dot Infrared Imaging Camera Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Luna Innovations Incorporated proposes to develop a high performance Ge quantum dots-based infrared (IR) imaging camera on Si substrate. The high sensitivity, large...

  14. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    Science.gov (United States)

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently polar shadowed and sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.
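The quoted meter-scale pixel sizes follow from the usual ground-sample-distance relation GSD = altitude × detector pixel pitch / focal length. The numbers below are illustrative placeholders, not LROC's actual optical prescription:

```python
def ground_sample_distance(altitude_m, pixel_pitch_m, focal_length_m):
    """Ground footprint of one detector pixel for a nadir-pointing camera."""
    return altitude_m * pixel_pitch_m / focal_length_m

# E.g. a hypothetical 7-micron pixel behind a 0.7 m focal length at 50 km
# altitude gives a 0.5 m ground sample distance.
```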

  15. A Survey of Catadioptric Omnidirectional Camera Calibration

    Directory of Open Access Journals (Sweden)

    Yan Zhang

    2013-02-01

    Full Text Available For a dozen years computer vision has become increasingly popular, and the omnidirectional camera, with its larger field of view, has been widely used in many fields, such as robot navigation, visual surveillance, virtual reality, three-dimensional reconstruction, and so on. Camera calibration is an essential step in obtaining three-dimensional geometric information from a two-dimensional image. Meanwhile, omnidirectional camera images exhibit catadioptric distortion, which needs to be corrected in many applications; thus the study of such camera calibration methods has important theoretical significance and practical applications. This paper first introduces the research status of catadioptric omnidirectional imaging systems; then the image formation process of such systems is described; finally a simple classification of omnidirectional imaging methods is given, and the advantages and disadvantages of these methods are discussed.

  16. Contrail study with ground-based cameras

    Directory of Open Access Journals (Sweden)

    U. Schumann

    2013-08-01

    Full Text Available Photogrammetric methods and analysis results for contrails observed with wide-angle cameras are described. Four cameras of two different types (view angle ... are used. With this information, the aircraft causing the contrails are identified by comparison to traffic waypoint data. The observations are compared with synthetic camera pictures of contrails simulated with the contrail prediction model CoCiP, a Lagrangian model using air traffic movement data and numerical weather prediction (NWP) data as input. The results provide tests for the NWP and contrail models. The cameras show spreading and thickening contrails, suggesting ice-supersaturation in the ambient air. The ice-supersaturated layer is found to be thicker and more humid in this case than predicted by the NWP model used. The simulated and observed contrail positions agree up to differences caused by uncertain wind data. The contrail widths, which depend on wake vortex spreading, ambient shear and turbulence, were partly wider than simulated.

  17. Portable mini gamma camera for medical applications

    CERN Document Server

    Porras, E; Benlloch, J M; El-Djalil-Kadi-Hanifi, M; López, S; Pavon, N; Ruiz, J A; Sánchez, F; Sebastiá, A

    2002-01-01

    A small, portable and low-cost gamma camera for medical applications has been developed and clinically tested. This camera, based on a scintillator crystal and a Position Sensitive Photo-Multiplier Tube, has a useful field of view of 4.6 cm diameter and provides 2.2 mm of intrinsic spatial resolution. Its mobility and light weight allow to reach the patient from any desired direction. This camera images small organs with high efficiency and so addresses the demand for devices of specific clinical applications. In this paper, we present the camera and briefly describe the procedures that have led us to choose its configuration and the image reconstruction method. The clinical tests and diagnostic capability are also presented and discussed.

  18. Portable mini gamma camera for medical applications

    International Nuclear Information System (INIS)

    A small, portable and low-cost gamma camera for medical applications has been developed and clinically tested. This camera, based on a scintillator crystal and a Position Sensitive Photo-Multiplier Tube, has a useful field of view of 4.6 cm diameter and provides 2.2 mm of intrinsic spatial resolution. Its mobility and light weight allow to reach the patient from any desired direction. This camera images small organs with high efficiency and so addresses the demand for devices of specific clinical applications. In this paper, we present the camera and briefly describe the procedures that have led us to choose its configuration and the image reconstruction method. The clinical tests and diagnostic capability are also presented and discussed

  19. Aviation spectral camera infinity target simulation system

    Science.gov (United States)

    Liu, Xinyue; Ming, Xing; Liu, Jiu; Guo, Wenji; Lv, Gunbo

    2014-11-01

    With the development of science and technology, applications of aviation spectral cameras are becoming more widespread, so developing a dynamic-target test system is increasingly important. An aviation spectral camera infinity target simulation system can be used to test the resolution and the modulation transfer function of the camera. The construction and working principle of the infinity target simulation system are introduced in detail. A dynamic target generator based on a digital micromirror device (DMD) and the required performance of the collimation system are analyzed and reported. The DMD-based dynamic target generator has the advantages of convenient image replacement, small size and flexibility. According to the requirements of the tested camera, by rotating and moving the mirror, a full-field infinity dynamic target test plan was completed.

  20. Color correction algorithms for digital cameras

    OpenAIRE

    Bianco,

    2010-01-01

    The image recorded by a digital camera mainly depends on three factors: the physical content of the scene, the illumination incident on the scene, and the characteristics of the camera. This leads to a problem for many applications where the main interest is in the color rendition accuracy of the scene acquired. It is known that the color reproduction accuracy of a digital imaging acquisition device is a key factor to the overall perceived image quality, and that there are mainly two modules ...
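Color rendition correction of the kind discussed above is commonly implemented as a 3×3 matrix applied to each RGB triple, mapping the camera's raw response toward a target color space. A generic sketch; the swap matrix below is a placeholder, not a calibrated result:

```python
def apply_ccm(rgb, M):
    """Multiply an (R, G, B) triple by a 3x3 color correction matrix M."""
    return tuple(sum(M[i][j] * rgb[j] for j in range(3)) for i in range(3))

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

A common convention is for each row of M to sum to 1 so that neutral (gray) inputs stay neutral after correction.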

  1. Imaging camera with multiwire proportional chamber

    International Nuclear Information System (INIS)

    The camera for imaging radioisotope dislocations for use in nuclear medicine or for other applications, claimed in the patent, is provided by two multiwire lattices for the x-coordinate connected to a first coincidence circuit, and by two multiwire lattices for the y-coordinate connected to a second coincidence circuit. This arrangement eliminates the need of using a collimator and increases camera sensitivity while reducing production cost. (Ha)

  2. Adaptive visual servoing by simultaneous camera calibration

    OpenAIRE

    Pomares, J.; Chaumette, François; Torres, F.

    2007-01-01

    Calibration techniques allow the estimation of the intrinsic parameters of a camera. This paper describes an adaptive visual servoing scheme which employs the visual data measured during the task to determine the camera intrinsic parameters. This approach is based on the virtual visual servoing approach. However, in order to increase the robustness of the calibration several aspects have been introduced in this approach with respect to the previous developed virtual vi...

  3. Compact Optical Technique for Streak Camera Calibration

    International Nuclear Information System (INIS)

    The National Ignition Facility is under construction at the Lawrence Livermore National Laboratory for the U.S. Department of Energy Stockpile Stewardship Program. Optical streak cameras are an integral part of the experimental diagnostics instrumentation. To accurately reduce data from the streak cameras a temporal calibration is required. This article describes a technique for generating trains of precisely timed short-duration optical pulses that are suitable for temporal calibrations

  4. Calibration of multi-camera photogrammetric systems

    OpenAIRE

    I. Detchev; M. Mazaheri; Rondeel, S.; Habib, A

    2014-01-01

    Due to the low-cost and off-the-shelf availability of consumer grade cameras, multi-camera photogrammetric systems have become a popular means for 3D reconstruction. These systems can be used in a variety of applications such as infrastructure monitoring, cultural heritage documentation, biomedicine, mobile mapping, as-built architectural surveys, etc. In order to ensure that the required precision is met, a system calibration must be performed prior to the data collection campaign. ...

  5. Mercuric iodide X-ray camera

    Science.gov (United States)

    Patt, B. E.; del Duca, A.; Dolin, R.; Ortale, C.

    1986-02-01

    A prototype X-ray camera utilizing a 1.5- by 1.5-in., 1024-element, thin mercuric iodide detector array has been tested and evaluated. The microprocessor-based camera is portable and operates at room temperature. Events can be localized within 1-2 mm at energies below 60 keV and within 5-6 mm at energies on the order of 600 keV.

  6. Mercuric iodide x-ray camera

    International Nuclear Information System (INIS)

    A prototype x-ray camera utilizing a 1.5- by 1.5-in., 1024-element, thin mercuric iodide detector array has been tested and evaluated. The microprocessor-based camera is portable and operates at room temperature. Events can be localized within 1 to 2 mm at energies below 60 keV and within 5 to 6 mm at energies on the order of 600 keV. 5 refs., 7 figs

  7. Mercuric iodide X-ray camera

    Energy Technology Data Exchange (ETDEWEB)

    Patt, B.E.; Del Duca, A.; Dolin, R.; Ortale, C.

    1986-02-01

    A prototype x-ray camera utilizing a 1.5- by 1.5-inch, 1024-element, thin mercuric iodide detector array has been tested and evaluated. The microprocessor-based camera is portable and operates at room temperature. Events can be localized within 1-2 mm at energies below 60 keV and within 5-6 mm at energies on the order of 600 keV.

  8. Mercuric iodide x-ray camera

    Energy Technology Data Exchange (ETDEWEB)

    Patt, B.E.; Del Duca, A.; Dolin, R.; Ortale, C.

    1985-01-01

    A prototype x-ray camera utilizing a 1.5- by 1.5-in., 1024-element, thin mercuric iodide detector array has been tested and evaluated. The microprocessor-based camera is portable and operates at room temperature. Events can be localized within 1 to 2 mm at energies below 60 keV and within 5 to 6 mm at energies on the order of 600 keV. 5 refs., 7 figs.

  9. Mercuric iodide X-ray camera

    International Nuclear Information System (INIS)

    A prototype x-ray camera utilizing a 1.5- by 1.5-inch, 1024-element, thin mercuric iodide detector array has been tested and evaluated. The microprocessor-based camera is portable and operates at room temperature. Events can be localized within 1-2 mm at energies below 60 keV and within 5-6 mm at energies on the order of 600 keV

  10. CMOS Camera Array With Onboard Memory

    Science.gov (United States)

    Gat, Nahum

    2009-01-01

    A compact CMOS (complementary metal oxide semiconductor) camera system has been developed with high resolution (1.3 Megapixels), a USB (universal serial bus) 2.0 interface, and an onboard memory. Exposure times, and other operating parameters, are sent from a control PC via the USB port. Data from the camera can be received via the USB port and the interface allows for simple control and data capture through a laptop computer.

  11. AUTOMATIC THEFT SECURITY SYSTEM (SMART SURVEILLANCE CAMERA)

    OpenAIRE

    Veena G.S; Chandrika Prasad; Khaleel K

    2013-01-01

    The proposed work aims to create a smart application camera, with the intention of eliminating the need for a human presence to detect any unwanted sinister activities, such as theft in this case. Spread across the campus are certain valuable biometric identification systems at arbitrary locations. The application monitors these systems (hereafter referred to as "object") using our smart camera system based on an OpenCV platform. By using OpenCV Haar Training, employing the Vio...

  12. Image noise induced errors in camera positioning

    OpenAIRE

    G. Chesi; Hung, YS

    2007-01-01

    The problem of evaluating worst-case camera positioning error induced by unknown-but-bounded (UBB) image noise for a given object-camera configuration is considered. Specifically, it is shown that upper bounds to the rotation and translation worst-case error for a certain image noise intensity can be obtained through convex optimizations. These upper bounds, contrary to lower bounds provided by standard optimization tools, allow one to design robust visual servo systems. © 2007 IEEE.

  13. Camera identification with deep convolutional networks

    OpenAIRE

    Baroffio, Luca; Bondi, Luca; Bestagini, Paolo; Tubaro, Stefano

    2016-01-01

    The possibility of detecting which camera has been used to shoot a specific picture is of paramount importance for many forensics tasks. This is extremely useful for copyright infringement cases, ownership attribution, as well as for detecting the authors of distributed illicit material (e.g., pedo-pornographic shots). Due to its importance, the forensics community has developed a series of robust detectors that exploit characteristic traces left by each camera on the acquired images during t...

  14. The TNG Near Infrared Camera Spectrometer

    OpenAIRE

    Baffa, C.; Comoretto, G.; Gennari, S.; F. Lisi; Oliva, E; Biliotti, V.; Checcucci, A.; Gavrioussev, V.; Giani, E; Ghinassi, F.; Hunt, L. K.; Maiolino, R.; Mannuci, F.; Marcucci, G.; Sozzi, M.

    2001-01-01

    NICS (acronym for Near Infrared Camera Spectrometer) is the near-infrared cooled camera-spectrometer that has been developed by the Arcetri Infrared Group at the Arcetri Astrophysical Observatory, in collaboration with the CAISMI-CNR for the TNG (the Italian National Telescope Galileo at La Palma, Canary Islands, Spain). As NICS is in its scientific commissioning phase, we report its observing capabilities in the near-infrared bands at the TNG, along with the measured performance and the limi...

  15. An imaging system for a gamma camera

    International Nuclear Information System (INIS)

    A detailed description is given of a novel gamma camera which is designed to produce superior images to those of conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given, and the problems and limitations introduced by noise are discussed in full. (U.K.)

  16. Localization and Optimization Problems for Camera Networks

    OpenAIRE

    Borra, Domenica

    2013-01-01

    In the framework of networked control systems, we focus on networks of autonomous PTZ cameras. A large set of cameras communicating each other through a network is a widely used architecture in application areas like video surveillance, tracking and motion. First, we consider relative localization in sensor networks, and we tackle the issue of investigating the error propagation, in terms of the mean error on each component of the optimal estimator of the position vector. The relative error i...

  17. The Large APEX Bolometer Camera LABOCA

    OpenAIRE

    Siringo, G.; Kreysa, E.; Kovacs, A.; Schuller, F.; Weiss, A; Esch, W.; Gemuend, H. P.; Jethava, N.; Lundershausen, G.; Colin, A.; Guesten, R.; Menten, K. M.; Beelen, A; Bertoldi, F.; Beeman, J.W.

    2009-01-01

    The Large APEX Bolometer Camera, LABOCA, has been commissioned for operation as a new facility instrument at the Atacama Pathfinder Experiment 12 m submillimeter telescope. This new 295-bolometer total power camera, operating in the 870 micron atmospheric window, combined with the high efficiency of APEX and the excellent atmospheric transmission at the site, offers unprecedented capability in mapping submillimeter continuum emission for a wide range of astronomical purposes.

  18. A stereoscopic lens for digital cinema cameras

    Science.gov (United States)

    Lipton, Lenny; Rupkalvis, John

    2015-03-01

    Live-action stereoscopic feature films are, for the most part, produced using a costly post-production process to convert planar cinematography into stereo-pair images and are only occasionally shot stereoscopically using bulky dual-cameras that are adaptations of the Ramsdell rig. The stereoscopic lens design described here might very well encourage more live-action image capture because it uses standard digital cinema cameras and workflow to save time and money.

  19. Performance comparison of streak camera recording systems

    International Nuclear Information System (INIS)

    Streak camera based diagnostics are vital to the inertial confinement fusion program at Sandia National Laboratories. Performance characteristics of various readout systems coupled to an EGG-AVO streak camera were analyzed and compared to scaling estimates. The purpose of the work was to determine the limits of streak camera performance and the optimal fielding conditions for the Amador Valley Operations (AVO) streak camera systems. The authors measured streak camera limitations in spatial resolution and sensitivity. Streak camera limits on spatial resolution are greater than 18 lp/mm at 4% contrast; however, it will be difficult to make use of any resolution greater than this because of high spatial frequency variation in the photocathode sensitivity. They measured a signal-to-noise ratio of 3,000 with 0.3 mW/cm2 of 830 nm light at a 10 ns/mm sweep speed. They compared lens coupling systems with and without micro-channel plate intensifiers, and systems using film or charge coupled device (CCD) readout. There were no conditions where film was found to be an improvement over the CCD readout. Systems utilizing a CCD readout without an intensifier have comparable resolution for these source sizes, at a nominal cost in signal-to-noise of 3, compared with those using an intensifier. Estimates of the signal-to-noise for different light coupling methods show how performance can be improved
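The "18 lp/mm at 4% contrast" figure uses the standard modulation (Michelson contrast) definition, computed from the intensity extrema of the imaged line-pair pattern:

```python
def modulation(i_max, i_min):
    """Michelson modulation (contrast) of a periodic test pattern:
    (Imax - Imin) / (Imax + Imin)."""
    return (i_max - i_min) / (i_max + i_min)
```

For instance, a pattern imaged with peaks of 104 counts and troughs of 96 counts has 4% modulation, the threshold quoted above.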

  20. Lag Camera: A Moving Multi-Camera Array for Scene-Acquisition

    Directory of Open Access Journals (Sweden)

    Yi Xu

    2007-04-01

    Full Text Available Many applications, such as telepresence, virtual reality, and interactive walkthroughs, require a three-dimensional (3D) model of real-world environments. Methods such as lightfields, geometric reconstruction and computer vision use cameras to acquire visual samples of the environment and construct a model. Unfortunately, obtaining models of real-world locations is a challenging task. In particular, important environments are often actively in use, containing moving objects, such as people entering and leaving the scene. The methods listed above have difficulty in capturing the color and structure of the environment in the presence of moving and temporary occluders. We describe a class of cameras called lag cameras. The main concept is to generalize a camera to take samples over space and time. Such a camera can easily and interactively detect moving objects while continuously moving through the environment. Moreover, since both the lag camera and the occluder are moving, the scene behind the occluder is captured by the lag camera even from viewpoints where the occluder lies between the lag camera and the hidden scene. We demonstrate an implementation of a lag camera, complete with analysis and captured environments.

  1. Traffic monitoring with distributed smart cameras

    Science.gov (United States)

    Sidla, Oliver; Rosner, Marcin; Ulm, Michael; Schwingshackl, Gert

    2012-01-01

    The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. Today the automated analysis of traffic situations is still in its infancy--the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully captured and interpreted by a vision system. In this work we present steps towards a visual monitoring system which is designed to detect potentially dangerous traffic situations around a pedestrian crossing at a street intersection. The camera system is specifically designed to detect incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system has been field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in a weatherproof housing. Two cameras run vehicle detection and tracking software; one camera runs a pedestrian detection and tracking module based on the HOG detection principle. All 3 cameras use sparse optical flow computation on a low-resolution video stream in order to estimate the motion path and speed of objects. Geometric calibration of the cameras allows us to estimate the real-world coordinates of detected objects and to link the cameras together into one common reference system. This work describes the foundation for the different object detection modalities (pedestrians, vehicles), and explains the system setup, its design, and the evaluation results we have achieved so far.
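Once the cameras are geometrically calibrated, object speed can be estimated from the optical-flow displacement. The sketch below assumes a known ground-plane scale in meters per pixel at the object's location, a simplification of the full homography-based mapping such a system would actually use:

```python
def speed_kmh(displacement_px, meters_per_px, fps):
    """Object speed in km/h from frame-to-frame pixel displacement,
    assuming a calibrated ground-plane scale at the object's position."""
    return displacement_px * meters_per_px * fps * 3.6
```

For example, 10 pixels of flow per frame at 0.05 m/pixel and 25 fps corresponds to 12.5 m/s, i.e. 45 km/h.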

  2. 16 CFR 1025.45 - In camera materials.

    Science.gov (United States)

    2010-01-01

    16 Commercial Practices, § 1025.45 In camera materials (PROCEEDINGS, Hearings). (a) Definition. In camera materials are documents... excluded from the public record. (b) In camera treatment of documents and testimony. The Presiding...

  3. Plenoptic processing methods for distributed camera arrays

    Science.gov (United States)

    Boyle, Frank A.; Yancey, Jerry W.; Maleh, Ray; Deignan, Paul

    2011-05-01

    Recent advances in digital photography have enabled the development and demonstration of plenoptic cameras with impressive capabilities. They function by recording sub-aperture images that can be combined to refocus images or to generate stereoscopic pairs. Plenoptic methods are being explored for fusing images from distributed arrays of cameras, with a view toward applications in which hardware resources are limited (e.g. size, weight, power constraints). Through computer simulation and experimental studies, the influence of non-idealities such as camera position uncertainty is being considered, and component image rescaling and balancing methods are being explored to compensate. Of interest is the impact on precision passive ranging and super-resolution. In a preliminary experiment, a set of images from a camera array was recorded and merged to form a 3D representation of a scene. Conventional plenoptic refocusing was demonstrated, and techniques were explored for balancing the images. Nonlinear methods for combining the images were explored to limit the ghosting caused by sub-sampling. Plenoptic processing was also explored as a means of determining 3D information from airborne video: successive frames were processed as camera array elements to extract the heights of structures. Practical means were considered for rendering the 3D information in color.
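The conventional plenoptic refocusing mentioned above is typically shift-and-add: each sub-aperture image is translated in proportion to its camera offset and the stack is averaged, so scene points at the chosen depth align and everything else blurs. A toy 1-D sketch; the function names and the `alpha` focus parameter are ours:

```python
def refocus_1d(views, offsets, alpha):
    """Shift-and-add refocus for 1-D sub-aperture 'images' (lists of floats).
    Each view is shifted by round(alpha * offset) pixels, and the shifted
    stack is averaged; alpha selects the virtual focal plane."""
    n = len(views[0])
    out = []
    for i in range(n):
        samples = []
        for view, off in zip(views, offsets):
            j = i + round(alpha * off)
            if 0 <= j < n:  # ignore samples shifted outside the image
                samples.append(view[j])
        out.append(sum(samples) / len(samples) if samples else 0.0)
    return out
```

With `alpha = 0` the views are averaged as-is (focus at infinity for parallel cameras); sweeping `alpha` brings other depths into focus.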

  4. Phase camera experiment for Advanced Virgo

    Science.gov (United States)

    Agatsuma, Kazuhiro; van Beuzekom, Martin; van der Schaaf, Laura; van den Brand, Jo

    2016-07-01

    We report on a study of the phase camera, which is a frequency-selective wave-front sensor for a laser beam. This sensor is utilized for monitoring sidebands produced by phase modulations in a gravitational wave (GW) detector. In the operation of GW detectors, the laser modulation/demodulation method is used to measure mirror displacements and for position control. This plays a significant role because the quality of the controls affects the noise level of the GW detector. The phase camera is able to monitor each sideband separately, which is a great benefit for the manipulation of these delicate controls. Also, overcoming mirror aberrations will be an essential part of Advanced Virgo (AdV), a GW detector close to Pisa. Low-frequency sidebands in particular can be affected greatly by aberrations in one of the interferometer cavities. The phase cameras allow tracking such changes because the state of the sidebands gives information on mirror aberrations. A prototype of the phase camera has been developed and is currently being tested. The performance checks are almost completed and the installation of the optics at the AdV site has started. After installation and commissioning, the phase camera will be combined with a thermal compensation system that consists of CO2 lasers and compensation plates. In this paper, we focus on the prototype and show some limitations arising from the scanner performance.

  5. Testing of capsules used in radiography cameras

    International Nuclear Information System (INIS)

    The C-182 non-radioactive (dummy) radiography capsules manufactured by Atomic Energy of Canada Limited were mechanically tested by performing a prescribed number of cycles under preset conditions in a Model 100-3 Pneumat-A-Ray radiography camera. The capsules were observed throughout the cycling trials and tested for changes in dimension, weight, and leakage. After completion of the prescribed cycling trials, each capsule was further tested for potential leakage by dye penetrant examination, sectioned at the equator with each half tested by dye penetrant examination, then sectioned again longitudinally and metallurgically examined. The results indicate that capsules cycled under typical field conditions can become significantly deformed, and that deformation is generally related to the number of cycles that the capsules undergo. The deformation occurs almost exclusively on the end of the capsule entering the camera first. When the headhose cushion is removed, the deformation occurs on both ends of the capsule. The deformation is related only to the pneumatic operating mode of the camera; there was no evidence of deformation when the camera was used in the pipeline mode of operation. The only leak observed in this series of tests was not related to the deformed end of the capsule, but rather to the weld end of the capsule when the non-weld end was deformed on entering the camera. The leak was shown by dye penetrant examination and by photomicrographs of the cross section of the affected capsule.

  6. Advanced system for Gamma Cameras modernization

    International Nuclear Information System (INIS)

    Analog and digital gamma cameras are still largely used in developing countries. Many of them rely on old hardware electronics, which in many cases limits their use in actual nuclear medicine diagnostic studies. Consequently, there are worldwide companies producing medical equipment engaged in partial or total Gamma Camera modernization. The present work has demonstrated the possibility of substituting almost the entire signal-processing electronics inside a Gamma Camera detector head with a digitizer PCI card. This card includes four 12-bit Analog-to-Digital Converters with a speed of 50 MHz. It has been installed in a PC and controlled through software developed in LabVIEW. In addition, some changes were made to the hardware inside the detector head, including a redesign of the Orientation Display Block (ODA card). A new electronic design was also added to the Microprocessor Control Block (MPA card), comprising a PIC microcontroller acting as a tuning system for individual Photomultiplier Tubes. The images, obtained by measurement of a 99mTc point radioactive source using the modernized camera head, demonstrate its overall performance. The system was developed and tested on an old Gamma Camera ORBITER II SIEMENS GAMMASONIC at the National Institute of Oncology and Radiobiology (INOR) under the CAMELUD project, supported by the National Program PNOULU and the IAEA. (Author)

  7. Calibration of Action Cameras for Photogrammetric Purposes

    Directory of Open Access Journals (Sweden)

    Caterina Balletti

    2014-09-01

    Full Text Available The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images up to 12 Mp and video up to 8 Mp resolution.

  8. Calibration of action cameras for photogrammetric purposes.

    Science.gov (United States)

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images up to 12 Mp and video up to 8 Mp resolution. PMID:25237898

  9. Designing Camera Networks by Convex Quadratic Programming

    KAUST Repository

    Ghanem, Bernard

    2015-05-04

    In this paper, we study the problem of automatic camera placement for computer graphics and computer vision applications. We extend the problem formulations of previous work by proposing a novel way to incorporate visibility constraints and camera-to-camera relationships. For example, the placement solution can be encouraged to have cameras that image the same important locations from different viewing directions, which can enable reconstruction and surveillance tasks to perform better. We show that the general camera placement problem can be formulated mathematically as a convex binary quadratic program (BQP) under linear constraints. Moreover, we propose an optimization strategy with a favorable trade-off between speed and solution quality. Our solution is almost as fast as a greedy treatment of the problem, but the quality is significantly higher, so much so that it is comparable to exact solutions that take orders of magnitude more computation time. Because it is computationally attractive, our method also allows users to explore the space of solutions for variations in input parameters. To evaluate its effectiveness, we show a range of 3D results on real-world floorplans (garage, hotel, mall, and airport).
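    The greedy treatment the authors benchmark against can be sketched as a set-coverage loop: repeatedly pick the candidate that covers the most not-yet-covered locations. The `coverage` abstraction and function name below are assumptions for illustration; this is the greedy baseline, not the paper's BQP formulation.

```python
def greedy_placement(coverage, k):
    # Greedy baseline for camera placement: repeatedly choose the
    # candidate camera that covers the largest number of not-yet-
    # covered locations.  `coverage` maps a candidate camera id to
    # the set of important locations it can see.
    chosen, covered = [], set()
    remaining = set(coverage)
    for _ in range(min(k, len(remaining))):
        best = max(remaining, key=lambda c: len(coverage[c] - covered))
        chosen.append(best)
        covered |= coverage[best]
        remaining.remove(best)
    return chosen, covered
```

    Each step is fast but myopic, which is why such greedy solutions can fall well short of the exact (or BQP-relaxed) optimum on constrained floorplans.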

  10. Modulated CMOS camera for fluorescence lifetime microscopy.

    Science.gov (United States)

    Chen, Hongtao; Holst, Gerhard; Gratton, Enrico

    2015-12-01

    Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in construction of such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high frequency modulated CMOS image sensor, QMFLIM2. Here we tested and provide operational procedures to calibrate the camera and to improve the accuracy using corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and needed for every camera setting, e.g. modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed, and hence an additional calibration is needed for each pixel depending on its intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post-acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring large-frame and high-speed acquisition. PMID:26500051
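    The phasor approach mentioned above reduces each stack of modulated images to two per-pixel coordinates (g, s). A minimal numpy sketch, assuming an evenly phase-stepped stack; this is not SimFCS itself, and the function name is illustrative:

```python
import numpy as np

def phasor(stack):
    # Per-pixel phasor coordinates from a stack of N images recorded
    # at modulation phases 2*pi*k/N, k = 0..N-1.  For a signal
    # I_k = A*(1 + m*cos(theta_k - phi)) this returns
    # g = m*cos(phi) and s = m*sin(phi).
    n = stack.shape[0]
    th = 2 * np.pi * np.arange(n) / n
    dc = stack.mean(axis=0)                       # per-pixel DC level
    g = 2 * np.tensordot(np.cos(th), stack, axes=1) / (n * dc)
    s = 2 * np.tensordot(np.sin(th), stack, axes=1) / (n * dc)
    return g, s
```

    For a single-exponential decay at angular modulation frequency ω, the ratio s/g equals ωτ, so the lifetime can be read off directly; the per-pixel and per-harmonic calibrations described in the abstract correct g and s before this step.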

  11. Gamma cameras - a method of evaluation

    International Nuclear Information System (INIS)

    Full text: With the sophistication and longevity of the modern gamma camera it is not often that the need arises to evaluate a gamma camera for purchase. We have recently been placed in the position of retiring our two single headed cameras of some vintage and replacing them with a state of the art dual head variable angle gamma camera. The process used for the evaluation consisted of five parts: (1) Evaluation of the technical specification as expressed in the tender document; (2) A questionnaire adapted from the British Society of Nuclear Medicine; (3) Site visits to assess gantry configuration, movement, patient access and occupational health, welfare and safety considerations; (4) Evaluation of the processing systems offered; (5) Whole of life costing based on equally configured systems. The results of each part of the evaluation were expressed using a weighted matrix analysis with each of the criteria assessed being weighted in accordance with their importance to the provision of an effective nuclear medicine service for our centre and the particular importance to paediatric nuclear medicine. This analysis provided an objective assessment of each gamma camera system from which a purchase recommendation was made. Copyright (2000) The Australian and New Zealand Society of Nuclear Medicine Inc
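    The weighted matrix analysis described above can be sketched as a simple weighted sum; the criteria names and numbers below are hypothetical, not the centre's actual weightings.

```python
def weighted_matrix(weights, ratings):
    # Weighted matrix analysis: each candidate's rating on each
    # criterion is multiplied by that criterion's importance weight
    # and summed, giving one objective score per camera system.
    return {cam: sum(weights[c] * score[c] for c in weights)
            for cam, score in ratings.items()}
```

    The purchase recommendation then simply falls out of ranking the scores, which is what makes the method auditable.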

  12. Camera Calibration with Radial Variance Component Estimation

    Science.gov (United States)

    Mélykuti, B.; Kruck, E. J.

    2014-11-01

    Camera calibration plays a more and more important role in recent times. Besides real digital aerial survey cameras, the photogrammetric market is dominated by a large number of non-metric digital cameras mounted on UAVs or other low-weight flying platforms. The in-flight calibration of those systems plays a significant role in enhancing the geometric accuracy of survey photos considerably. Photo measurements are expected to be more precise in the center of images than along the edges or in the corners. With statistical methods, the accuracy of photo measurements as a function of the distance of points from the image center has been analyzed. This test provides a curve for the measurement precision as a function of the photo radius. A high number of camera types have been tested with well-penetrated point measurements in image space. The tests demonstrate a functional connection between accuracy and radial distance and yield a method to check and enhance the geometric capability of the cameras with respect to these results.

  13. Design of Endoscopic Capsule With Multiple Cameras.

    Science.gov (United States)

    Gu, Yingke; Xie, Xiang; Li, Guolin; Sun, Tianjia; Wang, Dan; Yin, Zheng; Zhang, Pengfei; Wang, Zhihua

    2015-08-01

    In order to reduce the miss rate of wireless capsule endoscopy, in this paper, we propose a new endoscopic capsule system with multiple cameras. A master-slave architecture, including an efficient bus architecture and a four-level clock management architecture, is applied for the Multiple Cameras Endoscopic Capsule (MCEC). For covering more area of the gastrointestinal tract wall with low power, multiple cameras with a smart image capture strategy, including movement-sensitive control and camera selection, are used in the MCEC. To reduce the data transfer bandwidth and power consumption and prolong the MCEC's working life, a low-complexity image compressor with a PSNR of 40.7 dB and a compression rate of 86% is implemented. A chipset is designed and implemented for the MCEC, and a six-camera endoscopic capsule prototype is implemented using the chipset. With the smart image capture strategy, the coverage rate of the MCEC prototype can achieve 98% and its power consumption is only about 7.1 mW. PMID:25376042

  14. Hidden cameras everything you need to know about covert recording, undercover cameras and secret filming

    CERN Document Server

    Plomin, Joe

    2016-01-01

    Providing authoritative information on the practicalities of using hidden cameras to expose abuse or wrongdoing, this book is vital reading for anyone who may use or encounter secret filming. It gives specific advice on using phones or covert cameras and unravels the complex legal and ethical issues that need to be considered.

  15. Mobile phone camera benchmarking: combination of camera speed and image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. For example, ISO 15781 defines several measurements to evaluate various camera system delays. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important performance feature. There are several tasks in this work. Firstly, the most important image quality metrics are collected from the standards and papers. Secondly, the speed-related metrics of a mobile phone's camera system are collected from the standards and papers, and novel speed metrics are also identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are done through the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are made. The result of this work gives detailed benchmarking results of mobile phone camera systems on the market. The paper also proposes combined benchmarking metrics, which include both quality and speed parameters.

  16. Development of broad-view camera unit for laparoscopic surgery.

    Science.gov (United States)

    Kawahara, Tomohiro; Takaki, Takeshi; Ishii, Idaku; Okajima, Masazumi

    2009-01-01

    A disadvantage of laparoscopic surgery is the narrow operative field provided by the endoscope camera. This paper describes a newly developed broad-view camera unit for use with the Broad-View Camera System, which is capable of providing a wider view of the internal organs during laparoscopic surgery. The developed camera unit is composed of a miniature color CMOS camera, an indwelling needle, and an extra-thin connector. The specific design of the camera unit and the method for positioning it are shown. The performance of the camera unit has been confirmed through basic and animal experiments. PMID:19963983

  17. The development of large-aperture test system of infrared camera and visible CCD camera

    Science.gov (United States)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Infrared camera and CCD camera dual-band imaging systems are widely used in many kinds of equipment and applications. If such a system is tested using the traditional infrared camera test system and the visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared cameras and visible CCD cameras uses a common large-aperture reflection collimator, target wheel, frame-grabber, and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the change in focal length of the collimator when the environmental temperature changes, which also improves the image quality of the large-field-of-view collimator and the test accuracy. Its performance matches that of foreign counterparts at a much lower cost. It will have a good market.

  18. Acceptance/Operational Test Report for Tank 241-AN-104 camera and camera purge control system

    International Nuclear Information System (INIS)

    This Acceptance/Operational Test Procedure (ATP/OTP) will document the satisfactory operation of the camera purge panel, purge control panel, color camera system, and associated control components destined for installation. The final acceptance of the complete system will be performed in the field. The purge panel and purge control panel will be tested for their safety interlock, which shuts down the camera and pan-and-tilt inside the tank vapor space during loss of purge pressure, and to verify that the correct purge volume exchanges are performed as required by NFPA 496. This procedure is separated into seven sections. This Acceptance/Operational Test Report documents the successful acceptance and operability testing of the 241-AN-104 camera system and camera purge control system.

  19. Global Calibration of Multiple Cameras Based on Sphere Targets

    OpenAIRE

    Junhua Sun; Huabin He; Debing Zeng

    2016-01-01

    Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphe...

  20. Mobile Camera Array Calibration for Light Field Acquisition

    OpenAIRE

    Xu, Yichao; Maeno, Kazuki; Nagahara, Hajime; Taniguchi, Rin-ichiro

    2014-01-01

    The light field camera is useful for computer graphics and vision applications. Calibration is an essential step for these applications. After calibration, we can rectify the captured image by using the calibrated camera parameters. However, the large camera array calibration method, which assumes that all cameras are on the same plane, ignores the orientation and intrinsic parameters. The multi-camera calibration technique usually assumes that the working volume and viewpoints are fixed. In ...

  1. Results of the prototype camera for FACT

    International Nuclear Information System (INIS)

    The maximization of the photon detection efficiency (PDE) is a key issue in the development of cameras for Imaging Atmospheric Cherenkov Telescopes. Geiger-mode Avalanche Photodiodes (G-APD) are a promising candidate to replace the commonly used photomultiplier tubes, offering a larger PDE and easier handling. The FACT (First G-APD Cherenkov Telescope) project evaluates the feasibility of this change by building a camera based on 1440 G-APDs for an existing small telescope. As a first step towards a full camera, a prototype module using 144 G-APDs was successfully built and tested. The strong temperature dependence of the G-APDs is compensated using a feedback system, which keeps the gain of the G-APDs constant to within 0.5%.
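    The core idea of such a feedback system is that a G-APD's breakdown voltage rises roughly linearly with temperature, so shifting the bias voltage by the same amount holds the overvoltage, and hence the gain, constant. The sketch below illustrates only that arithmetic; the coefficient and voltages are assumed example values, not FACT's, and the real system closes the loop on measured gain rather than temperature alone.

```python
def bias_correction(temp_c, temp_ref_c, v_ref, dvdt=0.056):
    # Temperature-feedback sketch: track the (assumed linear) drift
    # of the breakdown voltage with temperature so that the
    # overvoltage, and therefore the G-APD gain, stays constant.
    # dvdt is an illustrative coefficient in V/K.
    return v_ref + dvdt * (temp_c - temp_ref_c)
```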

  2. The Calibration of the FACT Camera

    International Nuclear Information System (INIS)

    Full text: The First G-APD Cherenkov Telescope (FACT) collaboration builds a camera for an Imaging Atmospheric Cherenkov Telescope which is based on G-APDs and a readout using the Domino Ring Sampling (DRS4) chip. The amplitude calibration of the readout chain must account for a wide variety of effects specific to this design of the camera, e.g. the strong temperature dependence of the G-APDs, the quality of the gluing between the optical components, as well as the characteristics of the DRS4 chip. The basis for this calibration is an online feedback system to stabilize the gain of the G-APDs, laboratory measurements, and special runs during data taking. In this talk, the calibration system for FACT is presented, including the current experience with the camera in laboratory measurements. (author)

  3. PEOPLE REIDENTIFICATION IN A DISTRIBUTED CAMERA NETWORK

    Directory of Open Access Journals (Sweden)

    Icaro Oliveira de Oliveira

    2010-06-01

    Full Text Available This paper presents an approach to the object reidentification problem in a distributed camera network system. The reidentification or reacquisition problem consists essentially of the matching process of images acquired from different cameras. This work is applied in an environment monitored by cameras. This application is important to modern security systems, in which identifying the presence of targets in the environment expands the capacity of security agents to act in real time and provides important parameters, such as localization, for each target. We used the targets' interest points and color as features for reidentification. Satisfactory results were obtained from real experiments on public video datasets and synthetic images with noise.

  4. Camera placement in integer lattices (extended abstract)

    Science.gov (United States)

    Pocchiola, Michel; Kranakis, Evangelos

    1990-09-01

    Techniques for studying an art gallery problem (the camera placement problem) in the infinite lattice L^d of d-tuples of integers are considered. A lattice point A is visible from a camera C positioned at a vertex of L^d if A does not equal C and the line segment joining A and C crosses no other lattice vertex. By using a combination of probabilistic, combinatorial optimization, and algorithmic techniques, the positions the cameras must occupy in the lattice L^d in order to maximize their visibility can be determined in polynomial time, for any given number s less than or equal to 5^d of cameras. This improves previous results for s less than or equal to 3^d.
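    The visibility condition above has a classical arithmetic form: a lattice point is visible from a camera at another lattice vertex exactly when the coordinate differences are coprime, since any common factor g > 1 would place another lattice vertex on the segment. A short sketch (function name illustrative):

```python
from math import gcd
from functools import reduce

def visible(camera, point):
    # A lattice point is visible from a camera at another lattice
    # vertex iff the coordinate differences are coprime: a common
    # factor g > 1 would put g - 1 other lattice vertices on the
    # open segment between them.
    if point == camera:
        return False
    return reduce(gcd, (abs(a - c) for a, c in zip(point, camera))) == 1
```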

  5. Collimated trans-axial tomographic scintillation camera

    International Nuclear Information System (INIS)

    The objects of this invention are: first, to reduce the time required to obtain statistically significant data in trans-axial tomographic radioisotope scanning using a scintillation camera; second, to provide a scintillation camera system that increases the rate of acceptance of radioactive events contributing to the positional information obtainable from a known radiation source, without sacrificing spatial resolution; and third, to reduce the scanning time without loss of image clarity. The system described comprises a scintillation camera detector, means for moving it in orbit about a cranial-caudal axis relative to a patient, and a collimator having septa defining apertures such that gamma rays perpendicular to the axis are admitted with high spatial resolution, and those parallel to the axis with low resolution. The septa may be made of strips of lead. Detailed descriptions are given. (U.K.)

  6. Progress in gamma-camera quality control

    International Nuclear Information System (INIS)

    The latest developments in the art of quality control of gamma cameras are emphasized in a simple historical manner. The exhibit describes methods developed by the Bureau of Radiological Health (BRH) in comparison with previously accepted techniques for routine evaluation of gamma-camera performance. Gamma cameras require periodic testing of their performance parameters to ensure that their optimum imaging capability is maintained. Quality control parameters reviewed are field uniformity, spatial distortion, intrinsic and spatial resolution, and temporal resolution. The methods developed for the measurement of these parameters are simple, not requiring additional electronic equipment or computers. The data have been arranged in six panels as follows: schematic diagrams of the most important test patterns used in nuclear medicine; field uniformity; regional displacements in the transmission pattern image; spatial resolution using the BRH line-source phantom; intrinsic resolution using the BRH Test Pattern; and temporal resolution and count losses at high counting rates.

  7. Small Orbital Stereo Tracking Camera Technology Development

    Science.gov (United States)

    Bryan, Tom; MacLeod, Todd; Gagliano, Larry

    2016-01-01

    On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness necessary to safeguard orbital assets and crew. This poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of the Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be known in order to design the proper level of MOD impact shielding and appropriate mission design restrictions. The debris flux and size population also need to be verified against ground RADAR tracking. Use of the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations imposed on a secondary payload aboard a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, and could enhance safety on and around the ISS. Some technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate Orbital Debris (OD) monitoring in proximity of the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various sized objects versus ground RADAR tracking and small OD models. The cameras are based on flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras. By using twin cameras, stereo images can be provided for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  8. Lightweight, Compact, Long Range Camera Design

    Science.gov (United States)

    Shafer, Donald V.

    1983-08-01

    The model 700 camera is the latest in a 30-year series of LOROP cameras developed by McDonnell Douglas Astronautics Company (MDAC) and their predecessor companies. The design achieves minimum size and weight and is optimized for low-contrast performance. The optical system includes a 66-inch focal length, f/5.6, apochromatic lens and three folding mirrors imaging on a 4.5-inch square format. A three-axis active stabilization system provides the capability for long exposure time and, hence, fine grain films can be used. The optical path forms a figure "4" behind the lens. In front of the lens is a 45° pointing mirror. This folded configuration contributed greatly to the lightweight and compact design. This sequential autocycle frame camera has three modes of operation with one, two, and three step positions to provide a choice of swath widths within the range of lateral coverage. The magazine/shutter assembly rotates in relationship with the pointing mirror and aircraft drift angle to maintain film format alignment with the flight path. The entire camera is angular-rate stabilized in roll, pitch, and yaw. It also employs a lightweight, electro-magnetically damped, low-natural-frequency spring suspension for passive isolation from aircraft vibration inputs. The combined film transport and forward motion compensation (FMC) mechanism, which is operated by a single motor, is contained in a magazine that can be changed in flight, depending on accessibility, which is installation dependent. The design also stresses thermal control, focus control, structural stiffness, and maintainability. The camera is operated from a remote control panel. This paper describes the leading particulars and features of the camera as related to weight and configuration.

  9. Scintillating track image camera-SCITIC

    CERN Document Server

    Sato, Akira; Ieiri, Masaharu; Iwata, Soma; Kadowaki, Tetsuhito; Kurosawa, Maki; Nagae, Tomohumi; Nakai, Kozi

    2004-01-01

    A new type of track detector, the scintillating track image camera (SCITIC), has been developed. Scintillating track images of particles in a scintillator are focused by an optical lens system on a photocathode of an image intensifier tube (IIT). The image signals are amplified by an IIT-cascade and stored by a CCD camera. The performance of the detector has been tested with cosmic-ray muons and with pion- and proton-beams from the KEK 12-GeV proton synchrotron. Data of the test experiments have shown promising features of SCITIC as a triggerable track detector with a variety of possibilities. 7 Refs.

  10. A multidetector scintillation camera with 254 channels

    DEFF Research Database (Denmark)

    Sveinsdottir, E; Larsen, B; Rommer, P;

    1977-01-01

    A computer-based scintillation camera has been designed for both dynamic and static radionuclide studies. The detecting head has 254 independent sodium iodide crystals, each with a photomultiplier and amplifier. In dynamic measurements simultaneous events can be recorded, and 1 million total counts per second can be accommodated with less than 0.5% loss in any one channel. This corresponds to a calculated deadtime of 5 nsec. The multidetector camera is being used for 133Xe dynamic studies of regional cerebral blood flow in man and for 99mTc and 197Hg static imaging of the brain.

  11. Analysis of Brown camera distortion model

    Science.gov (United States)

    Nowakowski, Artur; Skarbek, Władysław

    2013-10-01

    Contemporary image acquisition devices introduce optical distortion into the image. It results in pixel displacement and therefore needs to be compensated in many computer vision applications. The distortion is usually modeled by the Brown distortion model, whose parameters can be included in the camera calibration task. In this paper we describe the original model and its dependencies, and analyze orthogonality with regard to radius for its decentering distortion component. We also report experiments with the camera calibration algorithm included in the OpenCV library; in particular, the stability of distortion parameter estimation is evaluated.
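    Applying the Brown model to normalized image coordinates can be sketched directly from its radial and decentering terms. The sketch follows the common (k1, k2, k3, p1, p2) coefficient convention (as used, e.g., by OpenCV); the function name is illustrative.

```python
def brown_distort(x, y, k, p):
    # Brown distortion model on normalized coordinates (x, y):
    # radial terms k = (k1, k2, k3) scale the point by a polynomial
    # in r^2, and decentering (tangential) terms p = (p1, p2) model
    # lens-assembly misalignment.
    r2 = x * x + y * y
    radial = 1 + k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3
    xd = x * radial + 2 * p[0] * x * y + p[1] * (r2 + 2 * x * x)
    yd = y * radial + p[0] * (r2 + 2 * y * y) + 2 * p[1] * x * y
    return xd, yd
```

    Calibration fits these coefficients by minimizing reprojection error; undistortion inverts the mapping numerically, since the polynomial has no closed-form inverse.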

  12. Performance assessment of gamma cameras. Part 1

    International Nuclear Information System (INIS)

    The Dept. of Health and Social Security and the Scottish Home and Health Dept. have sponsored a programme of measurements of the important performance characteristics of 15 leading types of gamma cameras providing a routine radionuclide imaging service in hospitals throughout the UK. Measurements have been made of intrinsic resolution, system resolution, non-uniformity, spatial distortion, count rate performance, sensitivity, energy resolution, and shield leakage. The main aim of this performance assessment was to provide sound information to the NHS to ease the task of those responsible for the purchase of gamma cameras. (U.K.)

  13. Scintillating track image camera-SCITIC

    International Nuclear Information System (INIS)

    A new type of track detector, the scintillating track image camera (SCITIC), has been developed. Scintillating track images of particles in a scintillator are focused by an optical lens system onto the photocathode of an image intensifier tube (IIT). The image signals are amplified by an IIT cascade and stored by a CCD camera. The performance of the detector has been tested with cosmic-ray muons and with pion and proton beams from the KEK 12-GeV proton synchrotron. Data from the test experiments show promising features of SCITIC as a triggerable track detector with a variety of possibilities. (author)

  14. Compact Optical Technique for Streak Camera Calibration

    International Nuclear Information System (INIS)

    The National Ignition Facility (NIF) is under construction at the Lawrence Livermore National Laboratory (LLNL) for the U.S. Department of Energy Stockpile Stewardship Program. Optical streak cameras are an integral part of the experimental diagnostics instrumentation. To accurately reduce data from the streak cameras a temporal calibration is required. This article describes a technique for generating trains of precisely timed short-duration optical pulses (optical comb generators) that are suitable for temporal calibrations. These optical comb generators (Figure 1) are used with the LLNL optical streak cameras. They are small, portable light sources that produce a series of temporally short, uniformly spaced optical pulses. Comb generators have been produced with 0.1, 0.5, 1, 3, 6, and 10-GHz pulse trains of 780-nm wavelength light with individual pulse durations of ∼25-ps FWHM. Signal output is via a fiber-optic connector. Signal is transported from comb generator to streak camera through multi-mode, graded-index optical fibers. At the NIF, ultra-fast streak cameras are used by the Laser Fusion Program experimentalists to record fast transient optical signals. Their temporal resolution is unmatched by any other transient recorder. Their ability to spatially discriminate an image along the input slit allows them to function as a one-dimensional image recorder, time-resolved spectrometer, or multichannel transient recorder. Depending on the choice of photocathode, they can be made sensitive to photon energies from 1.1 eV to 30 keV and beyond. Comb generators perform two important functions for LLNL streak-camera users. First, comb generators are used as precision time-mark generators for calibrating streak camera sweep rates. Accuracy is achieved by averaging many streak camera images of comb generator signals. Time-base calibrations with portable comb generators are easily done in both the calibration laboratory and in situ. Second, comb signals are applied
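    The sweep-rate calibration described above reduces to simple arithmetic: the comb's pulse period is known exactly, so dividing it by the measured pixel spacing of the pulses in a streak image gives time per pixel. A minimal sketch (a real calibration averages many images and fits sweep nonlinearity; the function name and single-image approach are this sketch's simplifications):

```python
import numpy as np

def sweep_rate_ps_per_px(pulse_px, comb_ghz):
    """Estimate a streak camera's sweep rate from one comb-generator image.

    pulse_px: pixel positions of successive comb pulses along the sweep
    axis; comb_ghz: comb repetition rate in GHz. The pulse period in
    picoseconds is 1000 / comb_ghz; dividing by the mean pixel spacing
    yields picoseconds per pixel.
    """
    positions = np.sort(np.asarray(pulse_px, dtype=float))
    spacing = np.diff(positions)            # pixel gaps between pulses
    period_ps = 1000.0 / comb_ghz           # e.g. 1 GHz -> 1000 ps
    return period_ps / spacing.mean()
```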

  15. Camera Trajectory from Wide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to the structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self localization, and object recognition. There are essential issues for a reliable camera trajectory estimation, for instance, the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes the image feature matching very difficult (or impossible) and the camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, the image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the

  16. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, speed (rapidity) metrics of the mobile phone's camera system have not been combined with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
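    The abstract does not give the paper's actual combination formula, so the following is only a hypothetical sketch of the general idea: normalize each heterogeneous metric into [0, 1], invert the "lower is better" ones (e.g. shutter lag), and take a weighted sum. All metric names, ranges, and weights below are placeholders.

```python
def benchmark_score(metrics, weights, higher_is_better):
    """Combine heterogeneous camera metrics into one score in [0, 100].

    metrics: {name: (value, lo, hi)} where lo/hi bound the expected range.
    weights: {name: weight}, assumed to sum to 1.
    higher_is_better: {name: bool}, e.g. True for an MTF figure,
    False for shot-to-shot delay. All names here are illustrative.
    """
    score = 0.0
    for name, (value, lo, hi) in metrics.items():
        norm = (value - lo) / (hi - lo)      # map into [0, 1]
        norm = min(max(norm, 0.0), 1.0)      # clamp outliers
        if not higher_is_better[name]:
            norm = 1.0 - norm                # invert "lower is better"
        score += weights[name] * norm
    return 100.0 * score
```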

  17. Towards Adaptive Virtual Camera Control In Computer Games

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, we cluster gaze and camera information to identify camera behaviour profiles and we employ machine learning to build predictive models of the virtual camera behaviour. The performance of the models on unseen data reveals accuracies above 70% for all the player behaviour types identified. The characteristics of the generated models, their limits and their use for creating adaptive automatic...

  18. Digital Camera Project Fosters Communication Skills

    Science.gov (United States)

    Fisher, Ashley; Lazaros, Edward J.

    2009-01-01

    This article details the many benefits of educators' use of digital camera technology and provides an activity in which students practice taking portrait shots of classmates, manipulate the resulting images, and add language arts practice by interviewing their subjects to produce a photo-illustrated Word document. This activity gives…

  19. Teaching Camera Calibration by a Constructivist Methodology

    Science.gov (United States)

    Samper, D.; Santolaria, J.; Pastor, J. J.; Aguilar, J. J.

    2010-01-01

    This article describes the Metrovisionlab simulation software and practical sessions designed to teach the most important machine vision camera calibration aspects in courses for senior undergraduate students. By following a constructivist methodology, having received introductory theoretical classes, students use the Metrovisionlab application to…

  20. Camera Systems Rapidly Scan Large Structures

    Science.gov (United States)

    2013-01-01

    Needing a method to quickly scan large structures like an aircraft wing, Langley Research Center developed the line scanning thermography (LST) system. LST works in tandem with a moving infrared camera to capture how a material responds to changes in temperature. Princeton Junction, New Jersey-based MISTRAS Group Inc. now licenses the technology and uses it in power stations and industrial plants.

  1. Video Analysis with a Web Camera

    Science.gov (United States)

    Wyrembeck, Edward P.

    2009-01-01

    Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as…

  2. Solutions to the linear camera calibration problem

    Science.gov (United States)

    Grosky, William I.; Tamburino, Louis A.

    1987-01-01

    The general linear camera calibration problem is formulated and several classification schemes for various subcases of this problem are developed. For each subcase, simple solutions are found that satisfy all necessary constraints. The results improve those already in the literature with respect to simplicity, efficiency, and coverage. However, the classification scheme is not exhaustive.

  3. EOD Facilities Manual. Camera Calibration Laboratory Capabilities

    Science.gov (United States)

    1972-01-01

    The tests and equipment are described for measuring the exact performance characteristics of camera systems for earth resources, space, and other applications. The tests discussed include: modulation transfer function, field irradiance, veiling glare, T-number tests, shutter speed, spectral transmission, and focal length.

  4. Camera! Action! Collaborate with Digital Moviemaking

    Science.gov (United States)

    Swan, Kathleen Owings; Hofer, Mark; Levstik, Linda S.

    2007-01-01

    Broadly defined, digital moviemaking integrates a variety of media (images, sound, text, video, narration) to communicate with an audience. There is near-ubiquitous access to the necessary software (MovieMaker and iMovie are bundled free with their respective operating systems) and hardware (computers with Internet access, digital cameras, etc.).…

  5. A novel super-resolution camera model

    Science.gov (United States)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

    Aiming to achieve super-resolution (SR) reconstruction of single images and video, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this, a driving device such as a piezoelectric ceramic actuator is placed in the camera. By controlling the driving device, a set of continuous low-resolution (LR) images can be obtained and stored in real time, reflecting the randomness of the displacements. The low-resolution image sequences carry different redundant information and particular prior information, which makes it possible to restore a super-resolution image accurately and effectively. A sampling analysis is used to derive the reconstruction principle of super resolution and to analyze the theoretically possible improvement in resolution. A learning-based super-resolution algorithm is used to reconstruct single images, and a variational Bayesian algorithm is simulated to reconstruct the low-resolution images with random displacements; it models the unknown high-resolution image, motion parameters and unknown model parameters in one hierarchical Bayesian framework. Using sub-pixel registration, a super-resolution image of the scene can be reconstructed. Reconstruction results for 16 images show that this camera model can increase the image resolution by a factor of 2, obtaining higher-resolution images at currently available hardware levels.
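    The paper's reconstruction uses learning-based and variational Bayesian methods, which are beyond a short sketch. As a much simpler stand-in that still shows why sub-pixel displacements add information, here is a naive shift-and-add: each LR frame is dropped onto a finer grid at its (assumed known) sub-pixel offset, and overlapping samples are averaged. Function name and the nearest-neighbour placement are this sketch's simplifications.

```python
import numpy as np

def shift_and_add(lr_frames, shifts, scale=2):
    """Naive shift-and-add super-resolution sketch.

    lr_frames: list of HxW arrays; shifts: matching list of (dy, dx)
    sub-pixel offsets in LR-pixel units, assumed known from registration.
    Returns a (scale*H, scale*W) estimate; HR pixels never hit by any
    frame stay zero.
    """
    h, w = lr_frames[0].shape
    hr_sum = np.zeros((scale * h, scale * w))
    hr_cnt = np.zeros_like(hr_sum)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        # map each LR sample to its nearest HR grid position
        ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int),
                     0, scale * h - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int),
                     0, scale * w - 1)
        hr_sum[np.ix_(ys, xs)] += frame
        hr_cnt[np.ix_(ys, xs)] += 1
    return hr_sum / np.maximum(hr_cnt, 1)   # average overlapping samples
```

    Frames shifted by half an LR pixel fill the HR grid positions that integer-aligned frames miss, which is exactly the extra information the camera's controlled displacements provide.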

  6. Lights, Camera, Read! Arizona Reading Program Manual.

    Science.gov (United States)

    Arizona State Dept. of Library, Archives and Public Records, Phoenix.

    This document is the manual for the Arizona Reading Program (ARP) 2003 entitled "Lights, Camera, Read!" This theme spotlights books that were made into movies, and allows readers to appreciate favorite novels and stories that have progressed to the movie screen. The manual consists of eight sections. The Introduction includes welcome letters from…

  7. Face identification in videos from mobile cameras

    NARCIS (Netherlands)

    Mu, Meiru; Spreeuwers, Luuk; Veldhuis, Raymond

    2014-01-01

    It is still challenging to recognize faces reliably in videos from mobile cameras, although mature automatic face recognition technology for still images has been available for quite some time. Suppose we want to be alerted when suspects appear in the recording of a police Body-Cam; even a good face

  8. GAMPIX: A new generation of gamma camera

    Science.gov (United States)

    Gmar, M.; Agelou, M.; Carrel, F.; Schoepff, V.

    2011-10-01

    Gamma imaging is a technique of great interest in several fields such as homeland security or decommissioning/dismantling of nuclear facilities in order to localize hot spots of radioactivity. In the nineties, work led by CEA LIST resulted in the development of a first-generation gamma camera called CARTOGAM, now commercialized by AREVA CANBERRA. Even if its performance can be adapted to many applications, its weight of 15 kg can be an issue. For several years, CEA LIST has been developing a new generation of gamma camera, called GAMPIX. This system is mainly based on the Medipix2 chip, hybridized to a 1 mm thick CdTe substrate. A coded mask replaces the pinhole collimator in order to increase the sensitivity of the gamma camera. Hence, we obtained a very compact device (global weight less than 1 kg without any shielding) which is easy to handle and to use. In this article, we present the main characteristics of GAMPIX and report the first experimental results illustrating the performance of this new generation of gamma camera.

  9. Case on Camera--An Audience Verdict.

    Science.gov (United States)

    Wober, J. M.

    In July 1984, British Channel 4 began televising Case on Camera, a series based on genuine arbitration of civil cases carried out by a retired judge, recorded as it happened, and edited into half hour programs. Because of the Independent Broadcasting Authority's concern for the rights to privacy, a systematic study of public reaction to the series…

  10. Development of a multispectral camera system

    Science.gov (United States)

    Sugiura, Hiroaki; Kuno, Tetsuya; Watanabe, Norihiro; Matoba, Narihiro; Hayashi, Junichiro; Miyake, Yoichi

    2000-05-01

    A highly accurate multispectral camera and its application software have been developed as a practical system to capture digital images of the artworks stored in galleries and museums. Instead of recording color data in the conventional three RGB primary colors, the newly developed camera and software carry out a pixel-wise estimation of spectral reflectance, the color data specific to the object, to enable practical multispectral imaging. In order to realize accurate multispectral imaging, the dynamic range of the camera is set to 14 bits or more and the output to 14 bits, so as to allow capture even when the difference in light quantity between channels is large. Further, a small rotary color filter was developed in parallel to keep the camera at a practical size. We have developed software capable of selecting the optimum combination of color filters available on the market. Using this software, n types of color filter can be selected from m candidate types, giving a minimum Euclidean distance or minimum color difference in CIELAB color space between actual and estimated spectral reflectance for 147 types of oil paint samples.
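    The filter-selection step is a combinatorial search: choose the n-filter subset of the m candidates that minimizes an estimation-error criterion over the paint samples. A generic exhaustive-search sketch (the error function here is a caller-supplied placeholder standing in for the paper's Euclidean/CIELAB criterion):

```python
from itertools import combinations

def best_filter_set(filters, n, estimation_error):
    """Exhaustively pick the n-filter combination with the lowest error.

    filters: list of candidate filter identifiers.
    estimation_error: callable mapping a filter subset to a scalar, e.g.
    the mean Euclidean distance between measured and estimated spectral
    reflectance over the sample set (placeholder for the paper's data).
    """
    best, best_err = None, float("inf")
    for combo in combinations(filters, n):
        err = estimation_error(combo)
        if err < best_err:
            best, best_err = combo, err
    return best, best_err
```

    Exhaustive search is feasible only for modest m and n (C(m, n) subsets); larger problems would need greedy or branch-and-bound selection.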

  11. Lightweight Electronic Camera for Research on Clouds

    Science.gov (United States)

    Lawson, Paul

    2006-01-01

    "Micro-CPI" (wherein "CPI" signifies "cloud-particle imager") is the name of a small, lightweight electronic camera that has been proposed for use in research on clouds. It would acquire and digitize high-resolution (3-μm-pixel) images of ice particles and water drops at a rate up to 1,000 particles (and/or drops) per second.

  12. Fog camera to visualize ionizing charged particles

    International Nuclear Information System (INIS)

    Human beings cannot perceive the different types of ionizing radiation, natural or artificial, present in nature, so appropriate detection systems have been developed, each sensitive to a certain type of radiation and a certain energy range. The objective of this work was to build a fog chamber to visualize the traces, and identify the trajectories, produced by high-energy charged particles, coming mainly from cosmic rays. Cosmic rays originate partly in solar radiation generated by solar eruptions, of which protons compose the largest part, and partly in galactic radiation, composed mainly of charged particles and gamma rays coming from outside the solar system. These radiation types have energies millions of times higher than those detected at the earth's surface, becoming more significant as the height above sea level increases. In their interactions these particles produce secondary particles that are detectable by means of this type of chamber. The chamber operates by means of an atmosphere saturated with alcohol vapor. When a charged particle crosses the cold region of this atmosphere, the medium is ionized and the particle acts as a condensation nucleus for the alcohol vapor, leaving a visible trace of its trajectory. The chamber built was very stable, allowing continuous detection and the observation of diverse events. (Author)

  13. FPS camera sync and reset chassis

    International Nuclear Information System (INIS)

    The sync and reset chassis provides all the circuitry required to synchronize an event to be studied, a remote free-running focus projection and scanning (FPS) data-acquisition TV camera, and a video signal recording system. The functions, design, and operation of this chassis are described in detail.

  14. Increased Automation in Stereo Camera Calibration Techniques

    Directory of Open Access Journals (Sweden)

    Brandi House

    2006-08-01

    Robotic vision has become a very popular field in recent years due to the numerous promising applications it may enhance. However, errors within the cameras and in their perception of their environment can cause applications in robotics to fail. To help correct these internal and external imperfections, stereo camera calibrations are performed. There are currently many accurate methods of camera calibration available; however, most or all of them are time-consuming and labor-intensive. This research seeks to automate the most labor-intensive aspects of a popular calibration technique developed by Jean-Yves Bouguet. His process requires manual selection of the extreme corners of a checkerboard pattern. The modified process uses LEDs embedded in the checkerboard pattern to act as active fiducials. Images of the checkerboard are captured with the LEDs on and off in rapid succession. The difference of the two images automatically highlights the locations of the four extreme corners, and these corner locations take the place of the manual selections. With this modification to the calibration routine, upwards of eighty mouse clicks are eliminated per stereo calibration. Preliminary test results indicate that accuracy is not substantially affected by the modified procedure. Improved automation of camera calibration procedures may finally overcome the barriers to the use of calibration in practice.
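    The on/off difference trick above can be sketched in a few lines of numpy: subtract the LED-off frame from the LED-on frame, threshold, and take the centroid of bright pixels in each image quadrant (one LED per quadrant is this sketch's assumption; the threshold value is illustrative).

```python
import numpy as np

def led_corners(img_on, img_off, thresh=50):
    """Locate four LED fiducials from an LED-on / LED-off image pair.

    Subtracts the LED-off frame from the LED-on frame, thresholds the
    difference, and returns the centroid of bright pixels in each image
    quadrant as (y, x), ordered TL, TR, BL, BR.
    """
    diff = img_on.astype(float) - img_off.astype(float)
    mask = diff > thresh
    h, w = mask.shape
    corners = []
    for rows in (slice(0, h // 2), slice(h // 2, h)):
        for cols in (slice(0, w // 2), slice(w // 2, w)):
            ys, xs = np.nonzero(mask[rows, cols])
            # centroid of bright pixels, shifted back to full-image coords
            corners.append((ys.mean() + rows.start, xs.mean() + cols.start))
    return corners
```

    These four centroids then replace Bouguet's manually clicked extreme corners as the calibration routine's input.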

  15. Shadowgraph illumination techniques for framing cameras

    Energy Technology Data Exchange (ETDEWEB)

    Malone, R.M.; Flurer, R.L.; Frogget, B.C. [Bechtel Nevada, Los Alamos, NM (United States). Los Alamos Operations; Sorenson, D.S.; Holmes, V.H.; Obst, A.W. [Los Alamos National Lab., NM (United States)

    1997-06-01

    Many pulse power applications in use at the Pegasus facility at the Los Alamos National Laboratory require specialized imaging techniques. Due to the short event duration times, visible images are recorded by high speed electronic framing cameras. Framing cameras provide the advantages of high speed movies of back light experiments. These high speed framing cameras require bright illumination sources to record images with 10 ns integration times. High power lasers offer sufficient light for back illuminating the target assemblies; however, laser speckle noise lowers the contrast in the image. Laser speckle noise also limits the effective resolution. This discussion focuses on the use of telescopes to collect images 50 feet away. Both light field and dark field illumination techniques are compared. By adding relay lenses between the assembly target and the telescope, a high resolution magnified image can be recorded. For dark field illumination, these relay lenses can be used to separate the object field from the illumination laser. The illumination laser can be made to focus onto the opaque secondary of a Schmidt telescope. Thus, the telescope only collects scattered light from the target assembly. This dark field illumination eliminates the laser speckle noise and allows high resolution images to be recorded. Using the secondary of the telescope to block the illumination laser makes dark field illumination an ideal choice for the framing camera.

  16. Parametrizable cameras for 3D computational steering

    NARCIS (Netherlands)

    Mulder, J.D.; Wijk, J.J. van

    1997-01-01

    We present a method for the definition of multiple views in 3D interfaces for computational steering. The method uses the concept of a point-based parametrizable camera object. This concept enables a user to create and configure multiple views on his custom 3D interface in an intuitive graphical manner.

  17. New nuclear medicine gamma camera systems

    International Nuclear Information System (INIS)

    The acquisition of the Open E.CAM and DIACAM gamma cameras by Makati Medical Center is expected to enhance the capabilities of its nuclear medicine facilities. When used as an aid to diagnosis, nuclear medicine entails the introduction of a minute amount of radioactive material into the patient; thus, no reaction or side-effect is expected. When it reaches the particular target organ, depending on the radiopharmaceutical, a lesion will appear as a decreased (cold) area or increased (hot) area in the radioactive distribution as recorded by the gamma cameras. Gamma camera images in slices, or SPECT (Single Photon Emission Computed Tomography), increase the sensitivity and accuracy in detecting smaller and deeply seated lesions, which otherwise may not be detected in the regular single planar images. Due to the 'open' design of the equipment, claustrophobic patients will no longer feel enclosed during the procedure. These new gamma cameras yield improved resolution and superb image quality, and the higher photon sensitivity shortens image acquisition time. The E.CAM, the latest-generation gamma camera, features a variable-angle dual-head system, the only one available in the Philippines, and is an excellent choice for Myocardial Perfusion Imaging (MPI). From the usual 45 minutes, the acquisition time for gated SPECT imaging of the heart has been remarkably reduced to 12 minutes. 'Gated' refers to snapshots of the heart in selected phases of its contraction and relaxation as triggered by ECG. The DIACAM is installed in a room with access from outside the main entrance of the department, intended specially for bed-borne patients. Both systems are equipped with a network of high-performance Macintosh ICON acquisition and processing computers. Added to the hardware is the ICON processing software which allows total simultaneous acquisition and processing capabilities in the same operator's terminal. Video film and color printers are also provided. Together

  18. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 and a GPS device is discussed. The number of panoramas is much higher than needed for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and the measurements from panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  19. X-ray imaging using digital cameras

    Science.gov (United States)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.

  20. Measuring rainfall with low-cost cameras

    Science.gov (United States)

    Allamano, Paola; Cavagnero, Paolo; Croci, Alberto; Laio, Francesco

    2016-04-01

    In Allamano et al. (2015), we proposed to retrieve quantitative measures of rainfall intensity by relying on the acquisition and analysis of images captured with professional cameras (the SmartRAIN technique in the following). SmartRAIN is based on the fundamentals of camera optics and exploits the intensity changes due to drop passages in a picture. The main steps of the method include: i) drop detection, ii) blur effect removal, iii) estimation of drop velocities, iv) drop positioning in the control volume, and v) rain rate estimation. The method has been applied to real rain events with errors of the order of ±20%. This work aims to bridge the gap between the need to acquire images with professional cameras and the possibility of exporting the technique to low-cost webcams. We apply the image processing algorithm to frames registered with low-cost cameras both in the lab (i.e., with controlled rain intensity) and in field conditions. The resulting images are characterized by lower resolution and significant distortion with respect to professional camera pictures, and are acquired with a fixed aperture and a rolling shutter. All these hardware limitations have significant effects on the readability of the resulting images and may affect the quality of the rainfall estimate. We demonstrate that a proper knowledge of the image acquisition hardware allows one to fully explain the artefacts and distortions it introduces, and that, by correcting these effects before applying the image processing algorithm, quantitative rain intensity measures can be obtained with good accuracy also with low-cost modules.
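    Step iii), drop velocity estimation, rests on elementary camera optics: a drop at distance D tracing a streak of L pixels during exposure T moved L·pitch·(D/f) metres, so its velocity is that length over T. A hypothetical single-drop sketch (parameter values are illustrative; the actual method also removes blur and positions drops in the 3D control volume):

```python
def drop_velocity(streak_px, exposure_s, pixel_pitch_m, focal_m, distance_m):
    """Estimate a raindrop's fall velocity from its motion streak.

    streak_px: streak length in pixels; exposure_s: exposure time;
    pixel_pitch_m: sensor pixel pitch; focal_m: focal length;
    distance_m: drop-to-camera distance. Uses the pinhole model to
    scale the on-sensor streak back to object space.
    """
    streak_sensor_m = streak_px * pixel_pitch_m
    streak_object_m = streak_sensor_m * distance_m / focal_m
    return streak_object_m / exposure_s
```

    For example, a 100-pixel streak on a 5 µm-pitch sensor with a 50 mm lens, for a drop 2 m away and a 1/250 s exposure, corresponds to about 5 m/s, a plausible terminal velocity for a medium raindrop.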

  1. Analysis of RED ONE Digital Cinema Camera and RED Workflow

    OpenAIRE

    Foroughi Mobarakeh, Taraneh

    2009-01-01

    RED Digital Cinema is a rather new company that has developed a camera that has shaken the world of the film industry, the RED One camera. RED One is a digital cinema camera with the characteristics of a 35mm film camera. With a custom made 12 megapixel CMOS sensor it offers images with a filmic look that cannot be achieved with many other digital cinema cameras. With a new camera comes a new set of media files to work with, which brings new software applications supporting them. RED Digital ...

  2. Disaster Response for Effective Mapping and Wayfinding

    NARCIS (Netherlands)

    Gunawan L.T.

    2013-01-01

    The research focuses on guiding the affected population towards a safe location in a disaster area by utilizing their self-help capacity with prevalent mobile technology. In contrast to the traditional centralized information management systems for disaster response, this research proposes a decentralized...

  3. Control of the movement of a ROV camera; Controle de posicionamento da camera de um ROV

    Energy Technology Data Exchange (ETDEWEB)

    Lima, Alexandre S. de; Dutra, Max Suell [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE); Reis, Ney Robinson S. dos [PETROBRAS, Rio de Janeiro, RJ (Brazil). Centro de Pesquisas; Santos, Auderi V. dos [Pontificia Univ. Catolica do Rio de Janeiro, RJ (Brazil)

    2004-07-01

    ROVs (Remotely Operated Vehicles) are used for the installation and maintenance of underwater exploration systems in the oil industry. These systems operate in distant areas, so the use of cameras to visualize the work area is of essential importance. The synchronization required when the operator simultaneously drives the manipulator and moves the camera is a complex task. To achieve this synchronization, this work presents an analysis of the interconnection of the two systems. The systems are concatenated by interconnecting the electric signals of the proportional valves of the manipulator actuators with the signals of the proportional valves of the camera actuators. With this interconnection, the camera approximately tracks the movement of the manipulator, keeping the object of interest within the operator's field of vision. (author)

  4. National Guidelines for Digital Camera Systems Certification

    Science.gov (United States)

    Yaron, Yaron; Keinan, Eran; Benhamu, Moshe; Regev, Ronen; Zalmanzon, Garry

    2016-06-01

    Digital camera systems are a key component in the production of reliable, geometrically accurate, high-resolution geospatial products. These systems have replaced film imaging in photogrammetric data capturing. Today, we see a proliferation of imaging sensors collecting photographs in different ground resolutions, spectral bands, swath sizes, radiometric characteristics, accuracies and carried on different mobile platforms. In addition, these imaging sensors are combined with navigational tools (such as GPS and IMU), active sensors such as laser scanning and powerful processing tools to obtain high quality geospatial products. The quality (accuracy, completeness, consistency, etc.) of these geospatial products is based on the use of calibrated, high-quality digital camera systems. The new survey regulations of the state of Israel specify the quality requirements for each geospatial product including: maps at different scales and for different purposes, elevation models, orthophotographs, three-dimensional models at different levels of details (LOD) and more. In addition, the regulations require that digital camera systems used for mapping purposes should be certified using a rigorous mapping systems certification and validation process which is specified in the Director General Instructions. The Director General Instructions for digital camera systems certification specify a two-step process as follows: 1. Theoretical analysis of system components that includes: study of the accuracy of each component and an integrative error propagation evaluation, examination of the radiometric and spectral response curves for the imaging sensors, the calibration requirements, and the working procedures. 2. Empirical study of the digital mapping system that examines a typical project (product scale, flight height, number and configuration of ground control points and process). 
The study examines all aspects of the final product, including its accuracy and the product pixel size.

  5. World's fastest and most sensitive astronomical camera

    Science.gov (United States)

    2009-06-01

    The next generation of instruments for ground-based telescopes took a leap forward with the development of a new ultra-fast camera that can take 1500 finely exposed images per second even when observing extremely faint objects. The first 240x240 pixel images with the world's fastest high precision faint light camera were obtained through a collaborative effort between ESO and three French laboratories from the French Centre National de la Recherche Scientifique/Institut National des Sciences de l'Univers (CNRS/INSU). Cameras such as this are key components of the next generation of adaptive optics instruments of Europe's ground-based astronomy flagship facility, the ESO Very Large Telescope (VLT). ESO PR Photo 22a/09 The CCD220 detector ESO PR Photo 22b/09 The OCam camera ESO PR Video 22a/09 OCam images "The performance of this breakthrough camera is without an equivalent anywhere in the world. The camera will enable great leaps forward in many areas of the study of the Universe," says Norbert Hubin, head of the Adaptive Optics department at ESO. OCam will be part of the second-generation VLT instrument SPHERE. To be installed in 2011, SPHERE will take images of giant exoplanets orbiting nearby stars. A fast camera such as this is needed as an essential component for the modern adaptive optics instruments used on the largest ground-based telescopes. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights poets, but frustrates astronomers, since it blurs the finest details of the images. Adaptive optics techniques overcome this major drawback, so that ground-based telescopes can produce images that are as sharp as if taken from space. Adaptive optics is based on real-time corrections computed from images obtained by a special camera working at very high speeds. Nowadays, this means many hundreds of times each second. The new generation instruments require these

  6. Principle of some gamma cameras (efficiencies, limitations, development)

    International Nuclear Information System (INIS)

    The quality of scintigraphic images is shown to depend on the efficiency of both the input collimator and the detector. Methods are described by which the quality of these images may be improved by adaptations to either the collimator (Fresnel zone camera, Compton effect camera) or the detector (Anger camera, image amplification camera). The Anger camera and image amplification camera are at present the two main instruments whereby acceptable space and energy resolutions may be obtained. A theoretical comparative study of their efficiencies is carried out, independently of their technological differences, after which the instruments designed or under study at the LETI are presented: these include the image amplification camera, the electron amplifier tube camera using a semi-conductor target CdTe and HgI2 detector

  7. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    Science.gov (United States)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide it into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth estimated from the light-field image and the metric object distance. These two methods are compared to a well-known curve fitting approach, and both show significant advantages over it. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, and thus finding stereo correspondences is enhanced.
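The "Kalman-like" updating of per-pixel depth estimates can be pictured as inverse-variance fusion: each micro-image contributes an estimate weighted by its confidence. A minimal toy sketch of that idea (our own illustration, not the paper's implementation):

```python
def fuse_depth(d1, var1, d2, var2):
    """Inverse-variance (Kalman-like) fusion of two virtual-depth
    estimates with their variances. Returns the fused depth and the
    reduced fused variance."""
    fused_var = 1.0 / (1.0 / var1 + 1.0 / var2)
    fused_d = fused_var * (d1 / var1 + d2 / var2)
    return fused_d, fused_var
```

Fusing two equally confident estimates averages them and halves the variance; a low-variance estimate dominates a high-variance one.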

  8. Enhancement of document images from cameras

    Science.gov (United States)

    Taylor, Michael J.; Dance, Christopher R.

    1998-04-01

    As digital cameras become cheaper and more powerful, driven by the consumer digital photography market, we anticipate significant value in extending their utility as a general office peripheral by adding a paper scanning capability. The main technical challenges in realizing this new scanning interface are insufficient resolution, blur and lighting variations. We have developed an efficient technique for the recovery of text from digital camera images, which simultaneously treats these three problems, unlike other local thresholding algorithms which do not cope with blur and resolution enhancement. The technique first performs deblurring by deconvolution, and then resolution enhancement by linear interpolation. We compare the performance of a threshold derived from the local mean and variance of all pixel values within a neighborhood with a threshold derived from the local mean of just those pixels with high gradient. We assess performance using OCR error scores.
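The class of local thresholding the abstract compares against can be sketched with a Niblack-style rule T = mean + k*std computed over a sliding window; pixels above T are kept as background (paper), the rest as text. This is a generic illustration with assumed parameter values, not the authors' algorithm.

```python
import numpy as np

def local_threshold(img, win=15, k=-0.2):
    """Binarize a grey-level document image with a local threshold
    T = mean + k*std over a win x win neighbourhood (Niblack-style sketch).
    Returns a boolean map: True = background, False = text."""
    h, w = img.shape
    r = win // 2
    out = np.zeros_like(img, dtype=bool)
    for y in range(h):
        for x in range(w):
            patch = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            t = patch.mean() + k * patch.std()
            out[y, x] = img[y, x] > t
    return out
```

The naive double loop keeps the sketch readable; a practical version would precompute windowed means and variances with integral images.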

  9. Camera Raw Explained (1)

    Institute of Scientific and Technical Information of China (English)

    张恣宽

    2010-01-01

    Camera Raw, developed by Adobe, is a plug-in in Photoshop for converting RAW-format files. Although major camera manufacturers such as Nikon and Canon each have their own RAW conversion software with good performance, Adobe has used the strength of its Photoshop development to integrate RAW conversion into Photoshop, making its advantages even more pronounced and its functionality very powerful. Camera Raw 5 in Photoshop CS4 in particular is more powerful still.

  10. SLAM using camera and IMU sensors.

    Energy Technology Data Exchange (ETDEWEB)

    Rothganger, Fredrick H.; Muguira, Maritza M.

    2007-01-01

    Visual simultaneous localization and mapping (VSLAM) is the problem of using video input to reconstruct the 3D world and the path of the camera in an 'on-line' manner. Since the data is processed in real time, one does not have access to all of the data at once. (Contrast this with structure from motion (SFM), which is usually formulated as an 'off-line' process on all the data seen, and is not time dependent.) A VSLAM solution is useful for mobile robot navigation or as an assistant for humans exploring an unknown environment. This report documents the design and implementation of a VSLAM system that consists of a small inertial measurement unit (IMU) and camera. The approach is based on a modified Extended Kalman Filter. This research was performed under a Laboratory Directed Research and Development (LDRD) effort.
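The modified Extended Kalman Filter at the heart of such a system alternates a motion prediction driven by the IMU with a correction from the camera measurement. A deliberately tiny 1D sketch of that predict/update cycle (toy state and noise values, not the report's implementation):

```python
def kf_step(x, P, u, z, q=1e-3, r=1e-2, dt=0.01):
    """One predict/update cycle of a scalar Kalman filter: an IMU-style
    velocity input u drives the prediction, a camera-style position
    measurement z corrects it. Returns the new state and variance."""
    # Predict: integrate the velocity input; process noise q inflates P
    x_pred = x + u * dt
    P_pred = P + q
    # Update: Kalman gain blends prediction and measurement (H = 1)
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new
```

With a large prior variance and a precise measurement, one update pulls the state almost entirely to the measurement while collapsing the variance.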

  11. First polarised light with the NIKA camera

    CERN Document Server

    Ritacco, A; Adane, A; Ade, P; André, P; Beelen, A; Belier, B; Benoît, A; Bideaud, A; Billot, N; Bourrion, O; Calvo, M; Catalano, A; Coiffard, G; Comis, B; D'Addabbo, A; Désert, F -X; Doyle, S; Goupy, J; Kramer, C; Leclercq, S; Macías-Pérez, J F; Martino, J; Mauskopf, P; Maury, A; Mayet, F; Monfardini, A; Pajot, F; Pascale, E; Perotto, L; Pisano, G; Ponthieu, N; Rebolo-Iglesias, M; Réveret, V; Rodriguez, L; Savini, G; Schuster, K; Sievers, A; Thum, C; Triqueneaux, S; Tucker, C; Zylka, R

    2015-01-01

    NIKA is a dual-band camera operating with 315 frequency multiplexed LEKIDs cooled at 100 mK. NIKA is designed to observe the sky in intensity and polarisation at 150 and 260 GHz from the IRAM 30-m telescope. It is a test-bench for the final NIKA2 camera. The incoming linear polarisation is modulated at four times the mechanical rotation frequency by a warm rotating multi-layer Half Wave Plate. Then, the signal is analysed by a wire grid and finally absorbed by the LEKIDs. The small time constant (< 1 ms) of the LEKID detectors combined with the modulation of the HWP enables the quasi-simultaneous measurement of the three Stokes parameters I, Q, U, representing linear polarisation. In this paper we present results of recent observational campaigns demonstrating the good performance of NIKA in detecting polarisation at mm wavelengths.

  12. Cervical SPECT Camera for Parathyroid Imaging

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2012-08-31

    Primary hyperparathyroidism characterized by one or more enlarged parathyroid glands has become one of the most common endocrine diseases in the world affecting about 1 per 1000 in the United States. Standard treatment is highly invasive exploratory neck surgery called Parathyroidectomy. The surgery has a notable mortality rate because of the close proximity to vital structures. The move to minimally invasive parathyroidectomy is hampered by the lack of high resolution pre-surgical imaging techniques that can accurately localize the parathyroid with respect to surrounding structures. We propose to develop a dedicated ultra-high resolution (~ 1 mm) and high sensitivity (10x conventional camera) cervical scintigraphic imaging device. It will be based on a multiple pinhole-camera SPECT system comprising a novel solid state CZT detector that offers the required performance. The overall system will be configured to fit around the neck and comfortably image a patient.

  13. Non-iterative method for camera calibration.

    Science.gov (United States)

    Hong, Yuzhen; Ren, Guoqiang; Liu, Enhai

    2015-09-01

    This paper presents a new and effective technique to calibrate a camera without nonlinear iterative optimization. To this end, the centre of distortion is accurately estimated first. Based on the radial distortion division model, point correspondences between the model plane and its image are then used to compute the homography and distortion coefficients. Once the homographies of the calibration images are obtained, the camera intrinsic parameters are solved for analytically. All the solution techniques applied in this calibration process are non-iterative and do not need any initial guess, with no risk of local minima. Moreover, the estimation of the distortion coefficients and intrinsic parameters can be successfully decoupled, yielding a more stable and reliable result. Both simulated and real experiments have been carried out to show that the proposed method is reliable and effective. Without nonlinear iterative optimization, the proposed method is computationally efficient and can be applied to real-time online calibration. PMID:26368490
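The one-parameter radial division model the method builds on maps a distorted point back toward the centre of distortion by dividing by 1 + k1*r^2. A minimal sketch of that mapping (illustrative interface and values; the paper's estimation of k1 and the homographies is not shown):

```python
def undistort_division(xd, yd, cx, cy, k1):
    """Undistort an image point under the one-parameter radial division
    model: x_u = c + (x_d - c) / (1 + k1 * r^2), where r is the distance
    of the distorted point from the centre of distortion c."""
    dx, dy = xd - cx, yd - cy
    r2 = dx * dx + dy * dy
    s = 1.0 / (1.0 + k1 * r2)
    return cx + dx * s, cy + dy * s
```

With k1 = 0 the mapping is the identity; a positive k1 moves peripheral points toward the centre, which is the usual correction for barrel distortion.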

  14. Advanced EVA Suit Camera System Development Project

    Science.gov (United States)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA Suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera that is currently being tested to be able to make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Design to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the options of the camera placement may be tested along with other future suit testing. There are multiple teams that work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary and these comparisons will be used as further progress is made for the overall suit design. This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was

  15. AUTOMATIC THEFT SECURITY SYSTEM (SMART SURVEILLANCE CAMERA

    Directory of Open Access Journals (Sweden)

    Veena G.S

    2013-12-01

    Full Text Available The proposed work aims to create a smart application camera, with the intention of eliminating the need for a human presence to detect any unwanted sinister activities, such as theft in this case. Spread among the campus are certain valuable biometric identification systems at arbitrary locations. The application monitors these systems (hereafter referred to as the "object") using our smart camera system based on an OpenCV platform. By using OpenCV Haar training, employing the Viola-Jones algorithm implementation in OpenCV, we teach the machine to identify the object under environmental conditions. An added feature of face recognition is based on Principal Component Analysis (PCA) to generate eigenfaces, and the test images are verified against the eigenfaces using a distance-based algorithm, such as the Euclidean distance or the Mahalanobis distance. If the object is misplaced, or an unauthorized user is in the extreme vicinity of the object, an alarm signal is raised.
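The PCA-plus-distance step can be sketched independently of OpenCV: build an eigenface basis from flattened training images via SVD, project gallery and probe images into that space, and pick the nearest gallery face by Euclidean distance. This is a generic illustration of the technique, not the authors' code.

```python
import numpy as np

def eigenfaces(train, n_components=2):
    """PCA (eigenface) basis from rows of flattened training images.
    Returns (mean image, basis); rows of the basis are eigenfaces."""
    mean = train.mean(axis=0)
    centered = train - mean
    # Rows of vt are the principal directions, strongest first
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def nearest_face(mean, basis, gallery, probe):
    """Project gallery and probe into eigenface space and return the index
    of the gallery face with the smallest Euclidean distance to the probe."""
    g = (gallery - mean) @ basis.T
    p = (probe - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(g - p, axis=1)))
```

Swapping the Euclidean norm for a Mahalanobis distance would weight each component by its variance, as the abstract mentions.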

  16. Blind identification of cellular phone cameras

    Science.gov (United States)

    Çeliktutan, Oya; Avcibas, Ismail; Sankur, Bülent

    2007-02-01

    In this paper, we focus on the blind source cell-phone identification problem. It is known that various artifacts in the image processing pipeline, such as pixel defects or unevenness of the responses in the CCD sensor, dark current noise, and the proprietary interpolation algorithms involved in the color filter array (CFA), leave telltale footprints. These artifacts, although often imperceptible, are statistically stable and can be considered a signature of the camera type or even of the individual device. For this purpose, we explore a set of forensic features, such as binary similarity measures, image quality measures and higher-order wavelet statistics, in conjunction with an SVM classifier to identify the originating cell-phone type. We provide identification results among 9 different brands of cell-phone cameras. In addition to our initial results, we applied a set of geometrical operations to the original images in order to investigate how robust our proposed method is under these manipulations.

  17. Toward the characterization of infrared cameras

    Science.gov (United States)

    Tzannes, Alexis P.; Mooney, Jonathan M.

    1993-11-01

    This work focuses on characterizing the performance of various staring PtSi infrared cameras, based on estimating their spatial frequency response. Applying a modified knife edge technique, we arrive at an estimate of the edge spread function (ESF), which is used to obtain a profile through the center of the two-dimensional Modulation Transfer Function (MTF). The MTF of various cameras in the horizontal and vertical direction is measured and compared to the ideal system MTF. The influence of charge transfer efficiency (CTE) on the knife edge measurement and resulting MTF is also modeled and discussed. An estimate of the CTE can actually be obtained from the shape of the ESF in the horizontal direction. The effect of pixel fill factor on the estimated MTF in the horizontal and vertical directions is compared and explained.
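The knife-edge technique described here has a compact numerical core: differentiate the edge spread function to obtain the line spread function, then take the magnitude of its Fourier transform and normalise so that MTF(0) = 1. The sketch below shows that standard chain (not the paper's exact processing, which also handles CTE and fill-factor effects):

```python
import numpy as np

def mtf_from_esf(esf):
    """Knife-edge MTF estimate: ESF -> LSF by differentiation,
    LSF -> MTF by |FFT|, normalised to unity at zero frequency."""
    lsf = np.diff(esf)              # line spread function
    mtf = np.abs(np.fft.rfft(lsf))  # one-sided frequency response
    return mtf / mtf[0]             # MTF(0) = 1 by convention
```

For an ideal step edge the LSF is an impulse and the estimated MTF is flat at 1; any blur in the ESF shows up as roll-off at high spatial frequencies.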

  18. A detector for submillimeter gamma cameras

    International Nuclear Information System (INIS)

    Anger cameras (SPECT etc.) presently used in nuclear medicine employ as active detector NaI crystals, obtaining intrinsic spatial resolutions ≥3 mm. Arrays made of optically isolated single crystal elements of YAP:Ce, having sub-millimeter aperture size, read out by position sensitive photomultipliers, allow to build active detectors to employ in SPECT systems, with intrinsic spatial resolution below the millimeter, and with time resolution of the order of tens of nanoseconds. In this paper preliminary results of measurements carried out on different kinds of YAP:Ce arrays are reported. The measurements have been performed aiming to optimize the geometrical and physical parameters of the crystals in order to accomplish a SPEM (single photon emission mammography) camera detector. (orig.)

  19. Declarative camera control for automatic cinematography

    Energy Technology Data Exchange (ETDEWEB)

    Christianson, D.B.; Anderson, S.E.; Li-wei He [Univ. of Washington, Seattle, WA (United States)] [and others]

    1996-12-31

    Animations generated by interactive 3D computer graphics applications are typically portrayed either from a particular character's point of view or from a small set of strategically-placed viewpoints. By ignoring camera placement, such applications fail to realize important storytelling capabilities that have been explored by cinematographers for many years. In this paper, we describe several of the principles of cinematography and show how they can be formalized into a declarative language, called the Declarative Camera Control Language (DCCL). We describe the application of DCCL within the context of a simple interactive video game and argue that DCCL represents cinematic knowledge at the same level of abstraction as expert directors by encoding 16 idioms from a film textbook. These idioms produce compelling animations, as demonstrated on the accompanying videotape.

  20. Calibrating a depth camera but ignoring it for SLAM

    OpenAIRE

    Castro, Daniel Herrera

    2014-01-01

    Recent improvements in resolution, accuracy, and cost have made depth cameras a very popular alternative for 3D reconstruction and navigation. Thus, accurate depth camera calibration is a very relevant aspect of many 3D pipelines. We explore the limits of a practical depth camera calibration algorithm: how to accurately calibrate a noisy depth camera without a precise calibration object and without using brightness or depth discontinuities. We present an algorithm that uses an external ...

  1. Dynamic Vision Sensor Camera Based Bare Hand Gesture Recognition

    OpenAIRE

    kashmera ashish khedkkar safaya; Rekha Lathi

    2012-01-01

    This paper proposes a method to recognize bare hand gestures using a dynamic vision sensor (DVS) camera. A DVS camera responds only asynchronously to pixels that have temporal changes in intensity, which differs from a conventional camera. This paper attempts to recognize three different hand gestures (rock, paper and scissors) and to use those gestures to design a mouse-free interface. Keywords: Dynamic vision sensor camera, Hand gesture recognition

  2. Dynamic Vision Sensor Camera Based Bare Hand Gesture Recognition

    Directory of Open Access Journals (Sweden)

    kashmera ashish khedkkar safaya

    2012-05-01

    Full Text Available This paper proposes a method to recognize bare hand gestures using a dynamic vision sensor (DVS) camera. A DVS camera responds only asynchronously to pixels that have temporal changes in intensity, which differs from a conventional camera. This paper attempts to recognize three different hand gestures (rock, paper and scissors) and to use those gestures to design a mouse-free interface. Keywords: Dynamic vision sensor camera, Hand gesture recognition

  3. Situational Awareness from a Low-Cost Camera System

    Science.gov (United States)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing coverage without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.

  4. Accurate calibration of stereo cameras for machine vision

    OpenAIRE

    Li, Liangfu; Feng, Zuren; Feng, Yuanjing

    2004-01-01

    Camera calibration is an important task for machine vision, whose goal is to obtain the internal and external parameters of each camera. With these parameters, the 3D positions of a scene point, which is identified and matched in two stereo images, can be determined by the triangulation theory. This paper presents a new accurate estimation of CCD camera parameters for machine vision. We present a fast technique to estimate the camera center with special arrangement of calibration target and t...
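The "triangulation theory" step the abstract relies on reduces, for a rectified stereo pair, to the classic relation Z = f * B / d between focal length, baseline, and disparity. A one-line sketch with toy numbers (not the paper's calibration method itself):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Rectified-stereo triangulation: depth Z = f * B / d, where f is the
    focal length in pixels, B the baseline in metres, and d the disparity
    in pixels between the matched points in the two images."""
    return f_px * baseline_m / disparity_px
```

For example, a 700 px focal length, 12 cm baseline, and 14 px disparity put the point 6 m from the cameras; halving the disparity doubles the depth.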

  5. Euclidean Reconstruction and Affine Camera Calibration Using Controlled Robot Motions

    OpenAIRE

    Horaud, Radu; Christy, Stéphane; Mohr, Roger

    1997-01-01

    We are addressing the problem of Euclidean reconstruction with an uncalibrated affine camera and the calibration of this camera. We investigate constraints under which the Euclidean shape and motion problem becomes linear. The theoretical study described in this paper leads us to impose some practical constraints that the camera is mounted onto a robot arm and that the robot is executing controlled motions whose parameters are known. The affine camera model considered here is just an approxim...

  6. Indoor PTZ Camera Calibration with Concurrent PT Axes

    OpenAIRE

    Sanchez-Riera, Jordi; Salvador, Jordi; Casas, Josep R.

    2009-01-01

    The introduction of active (pan-tilt-zoom or PTZ) cameras in Smart Rooms in addition to fixed static cameras allows to improve resolution in volumetric reconstruction, adding the capability to track smaller objects with higher precision in actual 3D world coordinates. To accomplish this goal, precise camera calibration data should be available for any pan, tilt, and zoom settings of each PTZ camera. The PTZ calibration method proposed in this paper introduces a novel solution to the problem o...

  7. Sparse Camera Network for Visual Surveillance -- A Comprehensive Survey

    OpenAIRE

    Song, Mingli; Tao, Dacheng; Maybank, Stephen J.

    2013-01-01

    Technological advances in sensor manufacture, communication, and computing are stimulating the development of new applications that are transforming traditional vision systems into pervasive intelligent camera networks. The analysis of visual cues in multi-camera networks enables a wide range of applications, from smart home and office automation to large area surveillance and traffic surveillance. While dense camera networks - in which most cameras have large overlapping fields of view - are...

  8. Super-Resolution in Plenoptic Cameras Using FPGAs

    OpenAIRE

    Joel Pérez; Eduardo Magdaleno; Fernando Pérez; Manuel Rodríguez; David Hernández; Jaime Corrales

    2014-01-01

    Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field programmable graphic array (FPGA) devices using VHDL (very high speed integrated circuit (VHSI...

  9. A multi-camera framework for interactive video games

    OpenAIRE

    Cuypers, Tom; VANAKEN, Cedric; FRANCKEN, Yannick; Van Reeth, Frank; Bekaert, Philippe

    2008-01-01

    We present a framework that allows for a straightforward development of multi-camera controlled interactive video games. Compared to traditional gaming input devices, cameras provide players with many degrees of freedom and a natural kind of interaction. The use of cameras can even obsolete the need for special clothing or other tracking devices. This partly accounted for the success of the currently popular single-camera video games like the Sony Eyetoy. However, these games are fairly limit...

  10. A useful tool for intraoperative photography: underwater camera case.

    Science.gov (United States)

    Tatlidede, Soner; Egemen, Onur; Bas, Lutfu

    2008-03-01

    The use of cameras in the operating room is increasing. However, there is not always a free person or an assistant available who is familiar with your camera. In order to take faster and higher-quality photographs in the operating room, we use underwater camera cases. These cases are produced for each type of camera and can be gas-sterilized prior to the operation. PMID:18443501

  11. Calibration of omnidirectional cameras in practice: A comparison of methods

    OpenAIRE

    Puig, Luis; Bermúdez, Jesús; Sturm, Peter; Guerrero, Josechu

    2012-01-01

    Omnidirectional cameras are becoming increasingly popular in computer vision and robotics. Camera calibration is a step before performing any task involving metric scene measurement, required in nearly all robotics tasks. In recent years many different methods to calibrate central omnidirectional cameras have been developed, based on different camera models and often limited to a specific mirror shape. In this paper we review the existing methods designed to calibrat...

  12. Fundus camera systems: a comparative analysis

    OpenAIRE

    DeHoog, Edward; Schwiegerling, James

    2009-01-01

    Retinal photography requires the use of a complex optical system, called a fundus camera, capable of illuminating and imaging the retina simultaneously. The patent literature shows two design forms but does not provide the specifics necessary for a thorough analysis of the designs to be performed. We have constructed our own designs based on the patent literature in optical design software and compared them for illumination efficiency, image quality, ability to accommodate for patient refract...

  13. Delay in camera-to-display systems

    OpenAIRE

    2011-01-01

    Today we see an increasing number of time dependent visual computer systems, ranging from interactive video installations, via high definition teleconferencing to the high performance computer vision disciplines for example in industry and robotics. Common for all of these are the requirement for low and predictable delays from the system itself and its components. In this thesis, we look into the delay of camera-to-display computer systems to understand the properties of their delay com...

  14. User tracking using a wearable camera

    OpenAIRE

    Redzic, Milan; Brennan, Conor; O'Connor, Noel E.

    2012-01-01

    Abstract—This paper addresses automatic indoor user tracking based on the fusion of WLAN and image sensing. Our motivation is the increasing prevalence of wearable cameras, some of which can also capture WLAN data. We propose a novel tracking method that can be employed with image-based, WLAN-based and fusion-based approaches. The effectiveness of combining the strengths of these two complementary modalities is demonstrated on very challenging data.

  15. Using a portable holographic camera in cosmetology

    Science.gov (United States)

    Bakanas, R.; Gudaitis, G. A.; Zacharovas, S. J.; Ratcliffe, D. B.; Hirsch, S.; Frey, S.; Thelen, A.; Ladrière, N.; Hering, P.

    2006-07-01

    The HSF-MINI portable holographic camera is used to record holograms of the human face. The recorded holograms are analyzed using a unique three-dimensional measurement system that provides topometric data of the face with resolution less than or equal to 0.5 mm. The main advantages of this method over other, more traditional methods (such as laser triangulation and phase-measurement triangulation) are discussed.

  16. Toward standardising gamma camera quality control procedures

    Science.gov (United States)

    Alkhorayef, M. A.; Alnaaimi, M. A.; Alduaij, M. A.; Mohamed, M. O.; Ibahim, S. Y.; Alkandari, F. A.; Bradley, D. A.

    2015-11-01

    Attaining high standards of efficiency and reliability in the practice of nuclear medicine requires appropriate quality control (QC) programs. For instance, the regular evaluation and comparison of extrinsic and intrinsic flood-field uniformity enables the quick correction of many gamma camera problems. Whereas QC tests for uniformity are usually performed by exposing the gamma camera crystal to a uniform flux of gamma radiation from a source of known activity, such protocols can vary significantly. Thus, there is a need for optimization and standardization, in part to allow direct comparison between gamma cameras from different vendors. In the present study, intrinsic uniformity was examined as a function of source distance, source activity, source volume and number of counts. The extrinsic uniformity and spatial resolution were also examined. Proper standard QC procedures need to be implemented because of the continual development of nuclear medicine imaging technology and the rapid expansion and increasing complexity of hybrid imaging system data. The present work seeks to promote a set of standard testing procedures to contribute to the delivery of safe and effective nuclear medicine services.
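
The uniformity figures compared in such QC protocols can be illustrated with a minimal NEMA-style integral-uniformity computation on a simulated flood-field image (a sketch only; the smoothing kernel and acceptance limits of real vendor QC software may differ):

```python
import numpy as np

def integral_uniformity(flood):
    """NEMA-style integral uniformity (%) of a flood-field image.

    flood: 2D array of counts from a uniform-source exposure.
    """
    # Smooth with the NEMA nine-point weighted kernel to suppress counting noise.
    kernel = np.array([[1, 2, 1],
                       [2, 4, 2],
                       [1, 2, 1]], dtype=float)
    kernel /= kernel.sum()
    padded = np.pad(flood.astype(float), 1, mode="edge")
    smooth = sum(kernel[i, j] * padded[i:i + flood.shape[0], j:j + flood.shape[1]]
                 for i in range(3) for j in range(3))
    hi, lo = smooth.max(), smooth.min()
    return 100.0 * (hi - lo) / (hi + lo)

# Simulated flood acquisition: Poisson counts around a uniform mean of 10k/pixel.
rng = np.random.default_rng(0)
flood = rng.poisson(10000, size=(64, 64))
print(round(integral_uniformity(flood), 2))  # a few percent for pure counting noise
```

A crystal defect or a drifting PMT shows up as a localized hot or cold region, which inflates this figure well beyond the counting-statistics floor.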

  17. SPECT detectors: the Anger Camera and beyond.

    Science.gov (United States)

    Peterson, Todd E; Furenlid, Lars R

    2011-09-01

    The development of radiation detectors capable of delivering spatial information about gamma-ray interactions was one of the key enabling technologies for nuclear medicine imaging and, eventually, single-photon emission computed tomography (SPECT). The continuous sodium iodide scintillator crystal coupled to an array of photomultiplier tubes, almost universally referred to as the Anger Camera after its inventor, has long been the dominant SPECT detector system. Nevertheless, many alternative materials and configurations have been investigated over the years. Technological advances as well as the emerging importance of specialized applications, such as cardiac and preclinical imaging, have spurred innovation such that alternatives to the Anger Camera are now part of commercial imaging systems. Increased computing power has made it practical to apply advanced signal processing and estimation schemes to make better use of the information contained in the detector signals. In this review we discuss the key performance properties of SPECT detectors and survey developments in both scintillator and semiconductor detectors and their readouts with an eye toward some of the practical issues at least in part responsible for the continuing prevalence of the Anger Camera in the clinic. PMID:21828904

  18. SPECT detectors: the Anger Camera and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, Todd E [Institute of Imaging Science, Department of Radiology and Radiological Sciences, Department of Physics, and Program in Chemical and Physical Biology, Vanderbilt University, Nashville, TN (United States); Furenlid, Lars R, E-mail: todd.e.peterson@vanderbilt.edu [Center for Gamma-Ray Imaging, Department of Radiology, and College of Optical Sciences, University of Arizona, Tucson, AZ (United States)

    2011-09-07

    The development of radiation detectors capable of delivering spatial information about gamma-ray interactions was one of the key enabling technologies for nuclear medicine imaging and, eventually, single-photon emission computed tomography (SPECT). The continuous sodium iodide scintillator crystal coupled to an array of photomultiplier tubes, almost universally referred to as the Anger Camera after its inventor, has long been the dominant SPECT detector system. Nevertheless, many alternative materials and configurations have been investigated over the years. Technological advances as well as the emerging importance of specialized applications, such as cardiac and preclinical imaging, have spurred innovation such that alternatives to the Anger Camera are now part of commercial imaging systems. Increased computing power has made it practical to apply advanced signal processing and estimation schemes to make better use of the information contained in the detector signals. In this review we discuss the key performance properties of SPECT detectors and survey developments in both scintillator and semiconductor detectors and their readouts with an eye toward some of the practical issues at least in part responsible for the continuing prevalence of the Anger Camera in the clinic. (topical review)

  19. Terrain mapping camera for Chandrayaan-1

    Indian Academy of Sciences (India)

    A S Kiran Kumar; A Roy Chowdhury

    2005-12-01

    The Terrain Mapping Camera (TMC) on India's first satellite for lunar exploration, Chandrayaan-1, is for generating high-resolution 3-dimensional maps of the Moon. With this instrument, a complete topographic map of the Moon with 5 m spatial resolution and 10-bit quantization will be available for scientific studies. The TMC will image within the panchromatic spectral band of 0.4 to 0.9 μm with a stereo view in the fore, nadir and aft directions of the spacecraft movement and have a B/H ratio of 1. The swath coverage will be 20 km. The camera is configured for imaging in the push-broom mode with three linear detectors in the image plane. The camera will have four gain settings to cover the varying illumination conditions of the Moon. Additionally, a provision for imaging with reduced resolution, to improve the Signal-to-Noise Ratio (SNR) in the polar regions, which have poor illumination conditions throughout, has been made. An SNR of better than 100 is expected in the ±60° latitude region for mature mare soil, which is one of the darkest regions on the lunar surface. This paper presents a brief description of the TMC instrument.
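
The quoted B/H ratio ties the stereo geometry to height accuracy through the standard parallax relation Δh ≈ (H/B)·Δp, a textbook photogrammetry result rather than anything taken from the TMC paper; a minimal illustration:

```python
def height_precision(b_over_h, parallax_precision_m):
    """Stereo height precision from the parallax relation dh = (H/B) * dp."""
    return parallax_precision_m / b_over_h

# With B/H = 1 and parallax measurable to roughly one 5 m ground sample,
# the expected height precision is on the same ~5 m order.
print(height_precision(1.0, 5.0))  # → 5.0
print(height_precision(0.5, 5.0))  # → 10.0 (halving B/H doubles the height error)
```

This is why a B/H ratio near 1 is attractive for topographic mapping: the height error is no worse than the planimetric sampling distance.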

  20. Theory and applications of smart cameras

    CERN Document Server

    2016-01-01

    This book presents an overview of smart camera systems, considering practical applications but also reviewing fundamental aspects of the underlying technology. It introduces in a tutorial style the principles of sensing and signal processing, and also describes topics such as wireless connection to the Internet of Things (IoT), which is expected to be the biggest market for smart cameras. It is an excellent guide to the fundamentals of smart camera technology, and the chapters complement each other well, as the authors have worked as a team under the auspices of the GFP (Global Frontier Project), the largest-scale funded research in Korea. This is the third of three books based on the Integrated Smart Sensors research project, which describe the development of innovative devices, circuits, and system-level enabling technologies. The aim of the project was to develop common platforms on which various devices and sensors can be loaded, and to create systems offering significant improvements in information processi...

  1. Improvement of passive THz camera images

    Science.gov (United States)

    Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw

    2012-10-01

    Terahertz technology is one of the emerging technologies that has the potential to change our lives. It has many attractive applications in fields such as security, astronomy, biology and medicine. Until recent years, terahertz (THz) waves were an unexplored, or most importantly an unexploited, region of the electromagnetic spectrum, owing to the difficulty of generating and detecting THz waves. Recent advances in hardware technology have started to open up the field to new applications such as THz imaging. THz waves can penetrate various materials; however, automated processing of THz images can be challenging. The THz frequency band is especially well suited to imaging through clothing because this radiation has no harmful ionizing effects and is therefore safe for human beings. Strong technological development in this band has produced a few interesting devices. Even though the development of THz cameras is an emerging topic, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. THz image processing is therefore a challenging and pressing topic, and digital THz image processing is a promising, cost-effective approach for demanding security and defense applications. In the article we demonstrate the results of image quality enhancement and image fusion applied to images captured by a commercially available passive THz camera, by means of various combined methods. Our research is focused on the detection of dangerous objects - guns, knives and bombs hidden under popular types of clothing.
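
The enhancement-plus-fusion pipeline described above can be sketched, in a much simplified form, as contrast stretching followed by pixel-wise maximum fusion of co-registered frames (function names are illustrative; the paper's combined methods are considerably more elaborate):

```python
import numpy as np

def stretch(img, lo_pct=1, hi_pct=99):
    """Percentile contrast-stretch a low-dynamic-range frame into [0, 1]."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / (hi - lo + 1e-12), 0.0, 1.0)

def fuse_max(frames):
    """Pixel-wise maximum fusion of co-registered, stretched frames,
    so a hidden object visible in any one frame survives into the result."""
    return np.max([stretch(f) for f in frames], axis=0)

# Three dim, low-contrast synthetic frames stand in for passive THz captures.
rng = np.random.default_rng(3)
frames = [rng.random((16, 16)) * 0.3 + 0.1 for _ in range(3)]
fused = fuse_max(frames)
print(fused.shape)  # → (16, 16)
```

Real pipelines would also denoise and register the frames first; maximum fusion is just one of several fusion rules (averaging and wavelet fusion are common alternatives).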

  2. Auto convergence for stereoscopic 3D cameras

    Science.gov (United States)

    Zhang, Buyue; Kothandaraman, Sreenivas; Batur, Aziz Umit

    2012-03-01

    Viewing comfort is an important concern for 3-D capable consumer electronics such as 3-D cameras and TVs. Consumer-generated content is typically viewed at a close distance, which makes the vergence-accommodation conflict particularly pronounced, causing discomfort and eye fatigue. In this paper, we present a Stereo Auto Convergence (SAC) algorithm for consumer 3-D cameras that reduces the vergence-accommodation conflict on the 3-D display by adjusting the depth of the scene automatically. Our algorithm processes stereo video in real time and shifts each stereo frame horizontally by an appropriate amount to converge on the chosen object in that frame. The algorithm starts by estimating disparities between the left and right image pairs using correlations of the vertical projections of the image data. The estimated disparities are then analyzed by the algorithm to select a point of convergence. The current and target disparities of the chosen convergence point determine how much horizontal shift is needed. A disparity safety check is then performed to determine whether the maximum and minimum disparity limits would be exceeded after auto convergence; if so, further adjustments are made to satisfy the safety limits. Finally, the desired convergence is achieved by shifting the left and the right frames accordingly. Our algorithm runs in real time at 30 fps on a TI OMAP4 processor. It was tested using an OMAP4 embedded prototype stereo 3-D camera and significantly improves 3-D viewing comfort.
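
The projection-correlation step of such an algorithm can be sketched as follows (a simplified global-disparity estimate assuming rectified grayscale frames; not TI's implementation):

```python
import numpy as np

def projection_disparity(left, right, max_shift=32):
    """Estimate a single global horizontal disparity between a stereo pair
    by correlating the vertical projections (column sums) of the two frames."""
    pl = left.astype(float).sum(axis=0)
    pr = right.astype(float).sum(axis=0)
    pl -= pl.mean()
    pr -= pr.mean()
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        # Correlate the overlapping parts of the two projections at shift s.
        if s >= 0:
            a, b = pl[s:], pr[:len(pr) - s]
        else:
            a, b = pl[:len(pl) + s], pr[-s:]
        score = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift

# Synthetic rectified pair: the right frame is the left shifted by 7 columns.
rng = np.random.default_rng(1)
left = rng.random((48, 96))
right = np.roll(left, -7, axis=1)
print(projection_disparity(left, right))  # → 7
```

Collapsing each frame to a 1-D projection before correlating makes the search cheap enough for real-time use; a full per-object convergence algorithm would then pick a convergence point and clamp the shift against disparity safety limits, as the abstract describes.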

  3. Single eye or camera with depth perception

    Science.gov (United States)

    Kornreich, Philipp; Farell, Bart

    2012-10-01

    An imager that can measure the distance from each pixel to the point on the object that is in focus at that pixel is described. This is accomplished by a short photoconducting lossy lightguide section at each pixel. The eye or camera lens selects the object point whose range is to be determined at the pixel. Light arriving at an image point through a convex lens adds constructively only if it comes from the object point that is in focus at this pixel; light waves from all other object points cancel. Thus the lightguide at this pixel receives light from one object point only. This light signal has a phase component proportional to the range. The light intensity modes, and thus the photocurrent in the lightguides, shift in response to the phase of the incoming light. Contacts along the length of the lightguide collect the photocurrent signal containing the range information. Applications of this camera include autonomous vehicle navigation and robotic vision. An interesting application is as part of a crude teleportation system consisting of this camera and a three-dimensional printer at a remote location.

  4. Imaging performances of the DRAGO gamma camera

    International Nuclear Information System (INIS)

    In this work, we present the results of the experimental characterization of the DRAGO gamma camera. This camera is based on a monolithic array of 77 Silicon Drift Detectors (SDDs), with a total active area of 6.7 cm2, coupled to a single CsI(Tl) scintillator crystal, 5 mm thick. The use of an array of SDDs provides high quantum efficiency for the detection of the scintillation light together with very low electronics noise. A very compact detection module based on the use of integrated readout circuits has been developed. The performance achieved in gamma-ray imaging using this camera is reported here. When imaging a 0.2 mm collimated 57Co source (122 keV) over different points of the active area, a spatial resolution ranging between 0.25 and 0.5 mm has been measured. The depth-of-interaction capability of the detector has also been investigated, thanks to the maximum-likelihood reconstruction algorithm adopted here, by imaging a collimated beam tilted at an angle of 45 deg. with respect to the scintillator surface.

  5. Imaging of gamma emitters using scintillation cameras

    Science.gov (United States)

    Ricard, Marcel

    2004-07-01

    Since their introduction by Hal Anger in the late 1950s, gamma cameras have been widely used in the field of nuclear medicine. The original concept is based on a large field-of-view scintillator optically coupled to an array of photomultiplier tubes (PMTs) in order to locate the position of interactions inside the crystal. Using a dedicated accessory, such as a parallel-hole collimator, to restrict the field of view to a predefined direction, it is possible to build up an image of the radioactive distribution. In terms of imaging performance, three main characteristics are commonly considered: uniformity, spatial resolution and energy resolution. Major improvements have come mainly from progress in industrial processes for analog electronics, crystal growing and PMT manufacturing. Today's gamma camera is highly digital, from the PMTs to the display, and all corrections are applied "on the fly" using up-to-date signal processing techniques. At the same time, significant progress has been achieved in the field of collimators. Finally, two new technologies have been implemented: solid-state detectors such as CdTe or CdZnTe, and pixellated scintillators coupled to photodiodes or position-sensitive photomultiplier tubes. These solutions are particularly well suited to building dedicated gamma cameras for breast or intraoperative imaging.

  6. Imaging of gamma emitters using scintillation cameras

    Energy Technology Data Exchange (ETDEWEB)

    Ricard, Marcel E-mail: ricard@igr.fr

    2004-07-11

    Since their introduction by Hal Anger in the late 1950s, gamma cameras have been widely used in the field of nuclear medicine. The original concept is based on a large field-of-view scintillator optically coupled to an array of photomultiplier tubes (PMTs) in order to locate the position of interactions inside the crystal. Using a dedicated accessory, such as a parallel-hole collimator, to restrict the field of view to a predefined direction, it is possible to build up an image of the radioactive distribution. In terms of imaging performance, three main characteristics are commonly considered: uniformity, spatial resolution and energy resolution. Major improvements have come mainly from progress in industrial processes for analog electronics, crystal growing and PMT manufacturing. Today's gamma camera is highly digital, from the PMTs to the display, and all corrections are applied 'on the fly' using up-to-date signal processing techniques. At the same time, significant progress has been achieved in the field of collimators. Finally, two new technologies have been implemented: solid-state detectors such as CdTe or CdZnTe, and pixellated scintillators coupled to photodiodes or position-sensitive photomultiplier tubes. These solutions are particularly well suited to building dedicated gamma cameras for breast or intraoperative imaging.

  7. Imaging of gamma emitters using scintillation cameras

    International Nuclear Information System (INIS)

    Since their introduction by Hal Anger in the late 1950s, gamma cameras have been widely used in the field of nuclear medicine. The original concept is based on a large field-of-view scintillator optically coupled to an array of photomultiplier tubes (PMTs) in order to locate the position of interactions inside the crystal. Using a dedicated accessory, such as a parallel-hole collimator, to restrict the field of view to a predefined direction, it is possible to build up an image of the radioactive distribution. In terms of imaging performance, three main characteristics are commonly considered: uniformity, spatial resolution and energy resolution. Major improvements have come mainly from progress in industrial processes for analog electronics, crystal growing and PMT manufacturing. Today's gamma camera is highly digital, from the PMTs to the display, and all corrections are applied 'on the fly' using up-to-date signal processing techniques. At the same time, significant progress has been achieved in the field of collimators. Finally, two new technologies have been implemented: solid-state detectors such as CdTe or CdZnTe, and pixellated scintillators coupled to photodiodes or position-sensitive photomultiplier tubes. These solutions are particularly well suited to building dedicated gamma cameras for breast or intraoperative imaging

  8. BAE systems' SMART chip camera FPA development

    Science.gov (United States)

    Sengupta, Louise; Auroux, Pierre-Alain; McManus, Don; Harris, D. Ahmasi; Blackwell, Richard J.; Bryant, Jeffrey; Boal, Mihir; Binkerd, Evan

    2015-06-01

    BAE Systems' SMART (Stacked Modular Architecture High-Resolution Thermal) Chip Camera provides very compact long-wave infrared (LWIR) solutions by combining a 12 μm wafer-level packaged focal plane array (FPA) with multichip-stack, application-specific integrated circuit (ASIC) and wafer-level optics. The key innovations that enabled this include a single-layer 12 μm pixel bolometer design and robust fabrication process, as well as wafer-level lid packaging. We used advanced packaging techniques to achieve an extremely small-form-factor camera, with a complete volume of 2.9 cm3 and a thermal core weight of 5.1g. The SMART Chip Camera supports up to 60 Hz frame rates, and requires less than 500 mW of power. This work has been supported by the Defense Advanced Research Projects Agency's (DARPA) Low Cost Thermal Imager - Manufacturing (LCTI-M) program, and BAE Systems' internal research and development investment.

  9. Scintillating array gamma camera for clinical use

    International Nuclear Information System (INIS)

    Dedicated gamma cameras for specific clinical applications represent a new trend in nuclear medicine. They are based on position-sensitive photomultiplier tubes (PSPMTs). The main intrinsic limitation of large-area PSPMTs (5'' diameter) is the photocathode glass window: coupling to a planar scintillation crystal strongly reduces the useful active area and the intrinsic spatial resolution. To overcome this limitation, the first 5''-diameter gamma camera consisting of a Hamamatsu R3292 PSPMT coupled to a 50 x 50 YAP:Ce scintillating array was developed at the University of Rome ''La Sapienza''. The array pixel size is 2 x 2 mm2 and the overall dimension of the multi-crystal is 10 x 10 x 1 cm3. Resistive chains were used to calculate the centroid. The scintillating array produces a focused light spot, minimising the spread introduced by the PSPMT glass window. The intrinsic spatial resolution varied between 2 and 2.7 mm. The position linearity and useful active area were in good agreement with the intrinsic values obtained by light-spot irradiation. The real limitations were the poor energy resolution of an individual crystal (40%) and the poor uniformity of the PSPMT response (within ±15%). A correction matrix was therefore derived, with which a total energy resolution of 57% was obtained for the whole matrix. The camera currently operates as a single photon emission mammography (SPEM) system and is producing breast functional images for malignant tumour detection using the same geometry as standard X-ray mammography. (orig.)
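
The resistive-chain centroid readout amounts to a charge-weighted mean (Anger logic); a minimal sketch with illustrative numbers, not the actual Rome readout circuit:

```python
def resistive_chain_centroid(signals, positions):
    """Charge-weighted centroid (Anger logic): each entry of `signals` is the
    charge collected at the anode whose coordinate is given in `positions`;
    the estimated interaction point is the weighted mean of the positions."""
    total = sum(signals)
    if total == 0:
        raise ValueError("no charge collected")
    return sum(s * p for s, p in zip(signals, positions)) / total

# A light spot centred near x = 2.0 mm spreads charge over neighbouring anodes.
signals = [5, 20, 50, 20, 5]
positions = [0.0, 1.0, 2.0, 3.0, 4.0]
print(resistive_chain_centroid(signals, positions))  # → 2.0
```

In hardware the division is performed implicitly by comparing the currents at the two ends of the resistive chain; pixellating the scintillator, as in the abstract, keeps the light spot narrow so this centroid stays close to the true pixel centre.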

  10. 21 CFR 892.1100 - Scintillation (gamma) camera.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Scintillation (gamma) camera. 892.1100 Section 892...) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1100 Scintillation (gamma) camera. (a) Identification. A scintillation (gamma) camera is a device intended to image the distribution of radionuclides...

  11. 15 CFR 743.3 - Thermal imaging camera reporting.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Thermal imaging camera reporting. 743... REPORTING § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging cameras must be reported to BIS as provided in this section. (b) Transactions to be reported. Exports...

  12. 39 CFR 3001.31a - In camera orders.

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false In camera orders. 3001.31a Section 3001.31a Postal... Applicability § 3001.31a In camera orders. (a) Definition. Except as hereinafter provided, documents and testimony made subject to in camera orders are not made a part of the public record, but are...

  13. 16 CFR 3.45 - In camera orders.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false In camera orders. 3.45 Section 3.45... PRACTICE FOR ADJUDICATIVE PROCEEDINGS Hearings § 3.45 In camera orders. (a) Definition. Except as hereinafter provided, material made subject to an in camera order will be kept confidential and not placed...

  14. LINEAR AND NON-LINEAR CAMERA CALIBRATION TECHNIQUES

    OpenAIRE

    Manoj Gupta

    2011-01-01

    This paper deals with calibrating a camera to find the intrinsic and extrinsic camera parameters, which are necessary to recover the depth estimate of an object in a stereovision system. Keywords: Camera Calibration, Tsai's algorithm, Stereovision, Linear Calibration, Non-Linear Calibration, Depth estimation

  15. CCD characterization for a range of color cameras

    NARCIS (Netherlands)

    Withagen, P.J.; Groen, F.C.A.; Schutte, K.

    2005-01-01

    CCD cameras are widely used for remote sensing and image processing applications. However, most cameras are produced to create nice images, not to do accurate measurements. Post processing operations such as gamma adjustment and automatic gain control are incorporated in the camera. When a (CCD) cam

  16. Modeling and simulation of gamma camera

    International Nuclear Information System (INIS)

    Simulation techniques play a vital role in the design of sophisticated instruments and also in the training of operating and maintenance staff. Gamma camera systems have been used for functional imaging in nuclear medicine. Functional images are derived from the external counting of a gamma-emitting radioactive tracer that, after introduction into the body, mimics the behavior of a native biochemical compound. The position-sensitive detector yields the coordinates of the gamma-ray interaction with the detector, which are used to estimate the point of gamma-ray emission within the tracer distribution space. This advanced imaging device is thus dependent on the performance of algorithms for coordinate computation, estimation of the point of emission, image generation and display of the image data. Contemporary systems also have protocols for quality control and clinical evaluation of imaging studies. Simulating this processing leads to an understanding of the basic camera design problems. This report describes a PC-based package for the design and simulation of a gamma camera, along with options for simulating data acquisition and the quality control of imaging studies. Image display and data processing, the other options implemented in SIMCAM, will be described in separate reports (under preparation). Gamma camera modeling and simulation in SIMCAM has preset configurations of the design parameters for various sizes of crystal detector, with the option to pack the PMTs on a hexagonal or square lattice. Different algorithms for coordinate computation and spatial distortion removal are allowed, in addition to simulation of the energy correction circuit. The user can simulate different static, dynamic, MUGA and SPECT studies. The acquired/simulated data is processed for quality control and clinical evaluation of the imaging studies. Results show that the program can be used to assess these performances. Also the variations in performance parameters can be assessed due to the induced
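
The hexagonal-versus-square PMT packing option mentioned above can be illustrated by generating PMT centre coordinates on an offset-row hexagonal lattice (the function name and parameters are illustrative, not SIMCAM's actual interface):

```python
import math

def hex_lattice(rows, cols, pitch):
    """PMT centre coordinates on a hexagonal lattice: odd rows are offset by
    half a pitch and rows are spaced pitch*sqrt(3)/2 apart, giving the dense
    packing used for round PMTs on a gamma camera crystal."""
    points = []
    for r in range(rows):
        for c in range(cols):
            x = c * pitch + (pitch / 2 if r % 2 else 0.0)
            y = r * pitch * math.sqrt(3) / 2
            points.append((x, y))
    return points

pmts = hex_lattice(3, 4, 60.0)  # 3 rows of 4 PMTs, 60 mm pitch
print(len(pmts))  # → 12
```

A square lattice is the same loop without the half-pitch offset and with row spacing equal to the pitch; the choice changes which PMTs see a given scintillation flash and hence the coordinate-computation geometry the simulator must model.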

  17. Methods for identification of images acquired with digital cameras

    Science.gov (United States)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a camera: defects in CCDs, file formats that are used, noise introduced by the pixel arrays and watermarking in images used by the camera manufacturer.
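
The pixel-array-noise approach mentioned above is, in essence, correlating a noise residual from a questioned image against a per-camera fingerprint. A toy sketch of that idea follows; all names and the box-filter denoiser are illustrative simplifications of the forensic methods the paper examines:

```python
import numpy as np

def noise_residual(img, k=3):
    """Residual after subtracting a local-mean (box-filtered) version of the
    image; a crude stand-in for the fixed pattern noise left by the pixel array."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    smooth = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            smooth += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    smooth /= k * k
    return img - smooth

def fingerprint(images):
    """Average the residuals of several images from one camera, so scene
    content averages out and the sensor's fixed pattern remains."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def matches(img, fp):
    """Normalized correlation between an image's residual and a fingerprint."""
    r = noise_residual(img).ravel()
    f = fp.ravel().copy()
    r -= r.mean()
    f -= f.mean()
    return float(np.dot(r, f) / (np.linalg.norm(r) * np.linalg.norm(f) + 1e-12))

# Simulate two cameras, each with its own fixed pattern noise.
rng = np.random.default_rng(2)
pattern_a = rng.normal(0, 2, (32, 32))
pattern_b = rng.normal(0, 2, (32, 32))
shots_a = [rng.normal(128, 5, (32, 32)) + pattern_a for _ in range(8)]
fp_a = fingerprint(shots_a)
same = matches(rng.normal(128, 5, (32, 32)) + pattern_a, fp_a)
other = matches(rng.normal(128, 5, (32, 32)) + pattern_b, fp_a)
print(same > other)  # the camera's own fingerprint correlates more strongly
```

Forensic practice uses far better denoisers (wavelet-based) and statistical thresholds, but the principle is the same: the pixel array imprints a stable, camera-specific pattern on every image it produces.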

  18. Readout electronics of physics of accelerating universe camera

    Science.gov (United States)

    de Vicente, Juan; Castilla, Javier; Jiménez, Jorge; Cardiel-Sas, L.; Illa, José M.

    2014-08-01

    The Physics of Accelerating Universe Camera (PAUCam) is a new camera for dark energy studies that will be installed in the William Herschel telescope. The main characteristic of the camera is the capacity for high precision photometric redshift measurement. The camera is composed of eighteen Hamamatsu Photonics CCDs providing a wide field of view covering a diameter of one degree. Unlike the common five optical filters of other similar surveys, PAUCam has forty optical narrow band filters which will provide higher resolution in photometric redshifts. In this paper a general description of the electronics of the camera and its status is presented.

  19. Preliminary field evaluation of solid state cameras for security applications

    International Nuclear Information System (INIS)

    Recent developments in solid state imager technology have resulted in a series of compact, lightweight, all-solid-state closed circuit television (CCTV) cameras. Although it is widely known that the various solid state cameras have less light sensitivity and lower resolution than their vacuum tube counterparts, the potential for having a much longer Mean Time Between Failure (MTBF) for the all-solid-state cameras is generating considerable interest within the security community. Questions have been raised as to whether the newest and best of the solid state cameras are a viable alternative to the high maintenance vacuum tube cameras in exterior security applications. To help answer these questions, a series of tests were performed by Sandia National Laboratories at various test sites and under several lighting conditions. In general, all-solid-state cameras need to be improved in four areas before they can be used as wholesale replacements for tube cameras in exterior security applications: resolution, sensitivity, contrast, and smear. However, with careful design some of the higher performance cameras can be used for perimeter security systems, and all of the cameras have applications where they are uniquely qualified. Many of the cameras are well suited for interior assessment and surveillance uses, and several of the cameras are well designed as robotics and machine vision devices

  20. MEDIUM-FORMAT CAMERAS AND THEIR USE IN TOPOGRAPHIC MAPPING

    Directory of Open Access Journals (Sweden)

    J. Höhle

    2012-07-01

    Based on practical experiences with large-format aerial cameras, the impact of new medium-format digital cameras on topographic mapping tasks is discussed. Two new medium-format cameras are investigated with respect to elevation accuracy, area coverage and image quality. The produced graphs and tables show the potential of these cameras for general mapping tasks. Special attention is given to the image quality of the selected cameras. Applications for the medium-format cameras are discussed, and the necessary tools for selected applications are described. The impact of sensors for georeferencing, multi-spectral images, and new matching algorithms is also dealt with. Practical investigations are carried out for the production of digital elevation models, and a comparison with large-format frame cameras is carried out. It is concluded that medium-format cameras have potential for mapping smaller areas and will in future be used in true orthoimage production, corridor mapping, and the updating of maps. Their small dimensions and low weight allow installation in small airplanes, helicopters, and high-end UAVs. The two investigated medium-format cameras are low-cost alternatives for standard mapping tasks and special applications. The detection of changes in topographic databases and DTMs can be carried out by means of those medium-format cameras which can image the same area in four bands of the visible and invisible spectrum of light. Medium-format cameras will play an important role in future mapping tasks.

  1. The GCT camera for the Cherenkov Telescope Array

    CERN Document Server

    Brown, Anthony M; Allan, D; Amans, J P; Armstrong, T P; Balzer, A; Berge, D; Boisson, C; Bousquet, J -J; Bryan, M; Buchholtz, G; Chadwick, P M; Costantini, H; Cotter, G; Daniel, M K; De Franco, A; De Frondat, F; Dournaux, J -L; Dumas, D; Fasola, G; Funk, S; Gironnet, J; Graham, J A; Greenshaw, T; Hervet, O; Hidaka, N; Hinton, J A; Huet, J -M; Jegouzo, I; Jogler, T; Kraus, M; Lapington, J S; Laporte, P; Lefaucheur, J; Markoff, S; Melse, T; Mohrmann, L; Molyneux, P; Nolan, S J; Okumura, A; Osborne, J P; Parsons, R D; Rosen, S; Ross, D; Rowell, G; Sato, Y; Sayede, F; Schmoll, J; Schoorlemmer, H; Servillat, M; Sol, H; Stamatescu, V; Stephan, M; Stuik, R; Sykes, J; Tajima, H; Thornhill, J; Tibaldo, L; Trichard, C; Vink, J; Watson, J J; White, R; Yamane, N; Zech, A; Zink, A; Zorn, J

    2016-01-01

    The Gamma-ray Cherenkov Telescope (GCT) is proposed for the Small-Sized Telescope component of the Cherenkov Telescope Array (CTA). GCT's dual-mirror Schwarzschild-Couder (SC) optical system allows the use of a compact camera with small form-factor photosensors. The GCT camera is ~0.4 m in diameter and has 2048 pixels; each pixel has a ~0.2 degree angular size, resulting in a wide field-of-view. The GCT camera is designed for high performance at low cost: it houses 32 front-end electronics modules providing full waveform information for all of the camera's 2048 pixels. The first GCT camera prototype, CHEC-M, was commissioned during 2015, culminating in the first Cherenkov images recorded by a SC telescope and the first light of a CTA prototype. In this contribution we give a detailed description of the GCT camera and present preliminary results from CHEC-M's commissioning.

  2. Development of a calibration target for infrared thermal imaging cameras

    International Nuclear Information System (INIS)

    Camera calibration is an indispensable process for improving measurement accuracy in industrial fields such as machine vision. However, existing calibration targets cannot be used to calibrate mid-wave and long-wave infrared cameras. Recently, with the growing use of infrared thermal cameras that can measure defects from thermal properties, the development of an applicable calibration target has become necessary. Thus, based on heat conduction analysis using finite element analysis, we developed a calibration target that can be used with both existing visible cameras and infrared thermal cameras, by implementing optimal design conditions with consideration of factors such as thermal conductivity, emissivity, colors and materials. We performed comparative experiments on calibration target images from infrared thermal cameras and visible cameras. The results demonstrated the effectiveness of the proposed calibration target.

  3. Calibration Tests of Industrial and Scientific CCD Cameras

    Science.gov (United States)

    Shortis, M. R.; Burner, A. W.; Snow, W. L.; Goad, W. K.

    1991-01-01

    Small format, medium resolution CCD cameras are at present widely used for industrial metrology applications. Large format, high resolution CCD cameras are primarily in use for scientific applications, but in due course should increase both the range of applications and the object space accuracy achievable by close range measurement. Slow scan, cooled scientific CCD cameras provide the further benefit of additional quantisation levels, which enables improved radiometric resolution. The calibration of all types of CCD cameras is necessary in order to characterize the geometry of the sensors and lenses. A number of different types of CCD cameras have been calibrated at the NASA Langley Research Center using self calibration and a small test object. The results of these calibration tests are described, with particular emphasis on the differences between standard CCD video cameras and scientific slow scan CCD cameras.

  4. Determining Vision Graphs for Distributed Camera Networks Using Feature Digests

    Directory of Open Access Journals (Sweden)

    Cheng Zhaolin

    2007-01-01

    Full Text Available We propose a decentralized method for obtaining the vision graph for a distributed, ad-hoc camera network, in which each edge of the graph represents two cameras that image a sufficiently large part of the same environment. Each camera encodes a spatially well-distributed set of distinctive, approximately viewpoint-invariant feature points into a fixed-length "feature digest" that is broadcast throughout the network. Each receiver camera robustly matches its own features with the decompressed digest and decides whether sufficient evidence exists to form a vision graph edge. We also show how a camera calibration algorithm that passes messages only along vision graph edges can recover accurate 3D structure and camera positions in a distributed manner. We analyze the performance of different message formation schemes, and show that high detection rates can be achieved while maintaining low false alarm rates using a simulated 60-node outdoor camera network.
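The edge-decision step described above can be sketched as follows: a receiver camera matches its own descriptors against a broadcast digest and declares a vision-graph edge when enough distinctive matches survive a nearest-neighbor ratio test. This is a toy illustration with synthetic descriptors; the descriptor length, ratio threshold and match count are assumptions, not the paper's parameters.

```python
import numpy as np

def vision_graph_edge(desc_a, desc_b, ratio=0.7, min_matches=8):
    """True if desc_b has >= min_matches ratio-test matches in desc_a."""
    matches = 0
    for d in desc_b:
        # Distances from one receiver descriptor to every digest descriptor
        dists = np.sort(np.linalg.norm(desc_a - d, axis=1))
        # Ratio test: nearest match must be clearly better than second-nearest
        if len(dists) > 1 and dists[0] < ratio * dists[1]:
            matches += 1
    return matches >= min_matches

rng = np.random.default_rng(0)
shared = rng.normal(size=(20, 32))               # features seen by both cameras
desc_a = np.vstack([shared, rng.normal(size=(30, 32))])
desc_b = shared + rng.normal(scale=0.01, size=shared.shape)  # noisy re-detections
unrelated = rng.normal(size=(20, 32))            # a camera with no overlap

overlap = vision_graph_edge(desc_a, desc_b)      # True: shared scene content
no_overlap = vision_graph_edge(desc_a, unrelated)  # False: no common features
print(overlap, no_overlap)
```

In the actual method the digest is compressed to a fixed length before broadcast; the sketch skips that step and matches raw descriptors.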

  5. Simple method for calibrating omnidirectional stereo with multiple cameras

    Science.gov (United States)

    Ha, Jong-Eun; Choi, I.-Sak

    2011-04-01

    Cameras can give useful information for the autonomous navigation of a mobile robot. Typically, one or two cameras are used for this task. Recently, omnidirectional stereo vision systems that can cover the whole surrounding environment of a mobile robot have been adopted. They usually rely on a mirror, which cannot offer uniform spatial resolution. In this paper, we deal with an omnidirectional stereo system which consists of eight cameras, where each pair of vertical cameras constitutes one stereo system. Camera calibration is the first necessary step to obtain 3D information. Calibration using a planar pattern requires many images acquired under different poses, so calibrating all eight cameras is tedious. In this paper, we present a simple calibration procedure using a cubic-type calibration structure that surrounds the omnidirectional stereo system. We can calibrate all the cameras of an omnidirectional stereo system in just one shot.

  6. Calibration of asynchronous smart phone cameras from moving objects

    Science.gov (United States)

    Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel

    2015-04-01

    Calibrating multiple cameras is a fundamental prerequisite for many computer vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application using a pair of low-cost portable cameras with different parameters, as found in smart phones. It addresses the issues of acquisition, detection of moving objects, dynamic camera registration and tracking of an arbitrary number of targets. The acquisition of data is performed using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of the cameras onto the same world reference frame is performed using a recently developed method for camera calibration using a disparity space parameterisation and the single-cluster PHD filter.

  7. Precision Multiband Photometry with a DSLR Camera

    Science.gov (United States)

    Zhang, M.; Bakos, G. Á.; Penev, K.; Csubry, Z.; Hartman, J. D.; Bhatti, W.; de Val-Borro, M.

    2016-03-01

    Ground-based exoplanet surveys such as SuperWASP, HAT Network of Telescopes (HATNet), and KELT have discovered close to two hundred transiting extrasolar planets in the past several years. The strategy of these surveys is to observe a large field of view and measure the brightnesses of its bright stars to around half a percent precision per point, which is adequate for detecting hot Jupiters. Typically, these surveys use CCD detectors to achieve high precision photometry. These CCDs, however, are expensive relative to other consumer-grade optical imaging devices, such as digital single-lens reflex cameras (DSLRs). We look at the possibility of using a DSLR camera for precision photometry. Specifically, we used a Canon EOS 60D camera that records light in three colors simultaneously. The DSLR was integrated into the HATNet survey and collected observations for a month, after which photometry was extracted for 6600 stars in a selected stellar field. We found that the DSLR achieves a best-case median absolute deviation of 4.6 mmag per 180 s exposure when the DSLR color channels are combined, and 1000 stars are measured to better than 10 mmag (1%). We also achieve 10 mmag or better photometry in the individual colors. This is good enough to detect transiting hot Jupiters. We performed a candidate search on all stars and found four candidates, one of which is KELT-3b, the only known transiting hot Jupiter in our selected field. We conclude that the Canon 60D is a cheap, lightweight device capable of useful photometry in multiple colors.
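The per-exposure scatter quoted above is a median absolute deviation (MAD). A minimal sketch of how such a statistic is computed from a light curve, using a synthetic Gaussian light curve rather than survey data:

```python
import numpy as np

def mad_mmag(mags: np.ndarray) -> float:
    """Median absolute deviation of a magnitude series, in millimagnitudes."""
    return 1000.0 * float(np.median(np.abs(mags - np.median(mags))))

# Synthetic light curve: constant 12th-magnitude star with 4.6 mmag
# Gaussian noise (an assumed stand-in for real survey photometry).
rng = np.random.default_rng(1)
lightcurve = 12.0 + rng.normal(scale=0.0046, size=5000)

# For Gaussian noise, MAD ~ 0.6745 * sigma, so expect about 3.1 mmag here.
print(round(mad_mmag(lightcurve), 2))
```

MAD is preferred over the standard deviation in transit surveys because a handful of outlier exposures (clouds, cosmic rays) barely move it.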

  8. Active control for single camera SLAM

    OpenAIRE

    Vidal-Calleja, Teresa A.; Davison, Andrew J.; Andrade-Cetto, J.; Murray, David W

    2006-01-01

    In this paper we consider a single hand-held camera performing SLAM at video rate with generic 6DOF motion. The aim is to optimise both the localisation of the sensor and building of the feature map by computing the most appropriate control actions or movements. The actions belong to a discrete set (e.g. go forward, go left, go up, turn right, etc), and are chosen so as to maximise the mutual information gain between posterior states and measurements. Maximising the mutual information helps t...

  9. Thermal imaging cameras characteristics and performance

    CERN Document Server

    Williams, Thomas

    2009-01-01

    The ability to see through smoke and mist and the ability to use the variances in temperature to differentiate between targets and their backgrounds are invaluable in military applications and have become major motivators for the further development of thermal imagers. As the potential of thermal imaging is more clearly understood and the cost decreases, the number of industrial and civil applications being exploited is growing quickly. In order to evaluate the suitability of particular thermal imaging cameras for particular applications, it is important to have the means to specify and measur

  10. Markerless Camera Pose Estimation - An Overview

    OpenAIRE

    Nöll, Tobias; Pagani, Alain; Stricker, Didier

    2011-01-01

    As shown by the human perception, a correct interpretation of a 3D scene on the basis of a 2D image is possible without markers. Solely by identifying natural features of different objects, their locations and orientations on the image can be identified. This allows a three dimensional interpretation of a two dimensional pictured scene. The key aspect for this interpretation is the correct estimation of the camera pose, i.e. the knowledge of the orientation and location a picture was recorded...

  11. Development of a micro-PIXE camera

    International Nuclear Information System (INIS)

    We developed a system for μ-PIXE analysis at the division of Takasaki Ion Accelerators for Advanced Radiation Application (TIARA) of the Japan Atomic Energy Research Institute (JAERI), which consists of a microbeam apparatus, a multi-parameter data acquisition system and a personal computer. Elemental analysis in a region of 500 μm x 500 μm can be performed with a spatial resolution of < 0.3 μm, and multi-elemental distributions are presented as images on a computer display even during measurement. We call this system a micro-PIXE camera. (author)

  12. A positron camera for industrial application

    International Nuclear Information System (INIS)

    A positron camera for application to flow tracing and measurement in mechanical subjects is described. It is based on two 300 x 600 mm2 hybrid multiwire detectors; the cathodes are in the form of lead strips planted onto printed-circuit board, and delay lines are used to determine the location of photon interactions. Measurements of the positron detection efficiency (30 Hz μCi-1 for a centred unshielded source), the maximum data logging rate (3 kHz) and the spatial resolving power (point source response = 5.7 mm fwhm) are presented and discussed, and results from initial demonstration experiments are shown. (orig.)

  13. Compact optical technique for streak camera calibration

    International Nuclear Information System (INIS)

    To produce accurate data from optical streak cameras requires accurate temporal calibration sources. We have reproduced an older technology for generating optical timing marks that had been lost due to component availability. Many improvements have been made which allow the modern units to serve a much larger need. Optical calibrators are now available that produce optical pulse trains of 780 nm wavelength light at frequencies ranging from 0.1 to 10 GHz, with individual pulse widths of approximately 25 ps full width half maximum. Future plans include the development of single units that produce multiple frequencies to cover a wide temporal range, and that are fully controllable via an RS232 interface.

  14. Calibrating Images from the MINERVA Cameras

    Science.gov (United States)

    Mercedes Colón, Ana

    2016-01-01

    The MINiature Exoplanet Radial Velocity Array (MINERVA) consists of an array of robotic telescopes located on Mount Hopkins, Arizona with the purpose of performing transit photometry and spectroscopy to find Earth-like planets around Sun-like stars. In order to make photometric observations, it is necessary to perform calibrations on the CCD cameras of the telescopes to take into account possible instrument error on the data. In this project, we developed a pipeline that takes optical images, calibrates them using sky flats, darks, and biases to generate a transit light curve.
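The calibration steps named above (biases, darks, sky flats) follow the standard CCD reduction recipe: subtract the bias, remove scaled dark current, and divide by a normalized flat field. A minimal numpy sketch under assumed synthetic frame values, not the MINERVA pipeline itself:

```python
import numpy as np

def calibrate(raw, bias, dark, flat, exptime, dark_exptime):
    """Return a bias-, dark- and flat-corrected science frame."""
    dark_current = (dark - bias) / dark_exptime   # counts per second
    flat_corr = flat - bias
    flat_norm = flat_corr / np.median(flat_corr)  # unit-median flat field
    return (raw - bias - dark_current * exptime) / flat_norm

# Synthetic 4x4 frames: 100-count bias, 0.1 counts/s dark current, and a
# flat field with +/-10% column-to-column sensitivity variation.
pattern = np.array([[0.9, 1.0, 1.1, 1.0]] * 4)
bias = np.full((4, 4), 100.0)
dark = bias + 10.0                                  # 100 s dark frame
flat = bias + 2000.0 * pattern
raw = bias + 5.0 + 500.0 * pattern                  # 50 s science exposure

sci = calibrate(raw, bias, dark, flat, exptime=50.0, dark_exptime=100.0)
print(sci)  # uniformly 500 counts: the sensitivity pattern divides out
```

After calibration the column pattern is gone, which is exactly what makes the per-star flux measurements of a transit light curve comparable from frame to frame.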

  15. HPD camera development for the MAGIC project

    International Nuclear Information System (INIS)

    Today the Hybrid Photon Detector (HPD) is one of the few low light level sensors that can provide an excellent single and multiple photoelectron amplitude resolution. We developed HPDs with a GaAsP photocathode, namely the R9792U-40, together with Hamamatsu Photonics. The peak quantum efficiency (QE) exceeds 50% and the pulse width is 2 ns. In addition, the afterpulsing rate of these tubes is ~300 times lower than that of conventional photomultiplier tubes (PMTs). Here we report on the recent progress of the HPD camera development. We also discuss the prospects of using it in the MAGIC telescope project.

  16. Computational cameras for moving iris recognition

    Science.gov (United States)

    McCloskey, Scott; Venkatesha, Sharath

    2015-05-01

    Iris-based biometric identification is increasingly used for facility access and other security applications. Like all methods that exploit visual information, however, iris systems are limited by the quality of captured images. Optical defocus due to a small depth of field (DOF) is one such challenge, as is the acquisition of sharply-focused iris images from subjects in motion. This manuscript describes the application of computational motion-deblurring cameras to the problem of moving iris capture, from the underlying theory to system considerations and performance data.

  17. Analysis of Camera Parameters Value in Various Object Distances Calibration

    International Nuclear Information System (INIS)

    In photogrammetric applications, good camera parameters are needed for mapping purposes, for example on an Unmanned Aerial Vehicle (UAV) equipped with a non-metric camera. Simple camera calibration is a common laboratory procedure for obtaining the camera parameter values. In aerial mapping, the interior camera parameters obtained from close-range camera calibration are used to correct image errors. However, the causes and effects of the calibration steps used to achieve accurate mapping need to be analyzed. This research therefore contributes an analysis of camera parameters obtained with a portable calibration frame of 1.5 × 1 meter. Object distances of two, three, four, five, and six meters are the research focus. The results are analyzed to determine the changes in the image and camera parameter values. The camera calibration parameters are found to differ depending on the type of calibration parameter and the object distance.
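As an illustration of how interior orientation parameters are used to correct image error, the widely used Brown radial distortion model relates ideal and distorted image coordinates about the principal point. The coefficient values below are hypothetical, not results from this study:

```python
import numpy as np

def undistorted_to_distorted(xy, pp, k1, k2):
    """Brown radial model: x_d = pp + (x_u - pp) * (1 + k1*r^2 + k2*r^4)."""
    d = xy - pp                                   # offsets from principal point
    r2 = np.sum(d**2, axis=1, keepdims=True)      # squared radial distance
    return pp + d * (1.0 + k1 * r2 + k2 * r2**2)

pp = np.array([0.0, 0.0])                         # principal point (mm), assumed
pts = np.array([[10.0, 0.0],                      # ideal image points (mm)
                [0.0, 5.0]])

# Hypothetical coefficients: k1 = 1e-4 mm^-2, no higher-order term.
out = undistorted_to_distorted(pts, pp, k1=1e-4, k2=0.0)
print(out)  # points pushed radially outward: (10.1, 0) and (0, 5.0125)
```

In practice the calibration estimates k1, k2 (and the focal length and principal point) from many observations, and mapping software applies the inverse of this mapping to correct measured image coordinates.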

  18. Design of a variable field prototype PET camera

    International Nuclear Information System (INIS)

    A prototype PET camera has been designed and is being constructed to test the concept, and develop the engineering design and production methodology for a variable field PET camera. The long term goal of the design is to develop a lower cost, high resolution PET camera. The camera has eight detector heads which form a closely packed octagon detector ring with an average diameter of 44cm for brain/breast and animal model imaging. The heads can be translated radially to a maximum ring diameter of 70cm for whole body imaging. In the larger diameter modes, the camera rotates 45 degree during imaging. The camera heads can be set to intermediate positions to fit the camera to the subject size to maximize detection sensitivity and sampling uniformity. Special design features for imaging the breast and the axillary metastases have been incorporated. The detector design implemented is the quadrant sharing photomultiplier (PMT) design using circular 19mm PMT. The BGO detector pitch size is 2.7 x 2.7mm. The prototype camera images 27 slices simultaneously with an axial field of view (FOV) of 39mm. The prototype's limited axial FOV, which is appropriate for testing the camera concept, would be expanded in a next-generation clinical camera implementation. Preliminary simulation studies have been performed to evaluate the resolution, sensitivity, and sampling uniformity

  19. The NectarCAM camera project

    CERN Document Server

    Glicenstein, J-F; Barrio, J-A; Blanch, O; Boix, J; Bolmont, J; Boutonnet, C; Cazaux, S; Chabanne, E; Champion, C; Chateau, F; Colonges, S; Corona, P; Couturier, S; Courty, B; Delagnes, E; Delgado, C; Ernenwein, J-P; Fegan, S; Ferreira, O; Fesquet, M; Fontaine, G; Fouque, N; Henault, F; Gascón, D; Herranz, D; Hermel, R; Hoffmann, D; Houles, J; Karkar, S; Khelifi, B; Knödlseder, J; Martinez, G; Lacombe, K; Lamanna, G; LeFlour, T; Lopez-Coto, R; Louis, F; Mathieu, A; Moulin, E; Nayman, P; Nunio, F; Olive, J-F; Panazol, J-L; Petrucci, P-O; Punch, M; Prast, J; Ramon, P; Riallot, M; Ribó, M; Rosier-Lees, S; Sanuy, A; Siero, J; Tavernet, J-P; Tejedor, L A; Toussenel, F; Vasileiadis, G; Voisin, V; Waegebert, V; Zurbach, C

    2013-01-01

    In the framework of the next generation of Cherenkov telescopes, the Cherenkov Telescope Array (CTA), NectarCAM is a camera designed for the medium size telescopes covering the central energy range of 100 GeV to 30 TeV. NectarCAM will be finely pixelated (~ 1800 pixels for a 8 degree field of view, FoV) in order to image atmospheric Cherenkov showers by measuring the charge deposited within a few nanoseconds time-window. It will have additional features like the capacity to record the full waveform with GHz sampling for every pixel and to measure event times with nanosecond accuracy. An array of a few tens of medium size telescopes, equipped with NectarCAMs, will achieve up to a factor of ten improvement in sensitivity over existing instruments in the energy range of 100 GeV to 10 TeV. The camera is made of roughly 250 independent read-out modules, each composed of seven photo-multipliers, with their associated high voltage base and control, a read-out board and a multi-service backplane board. The read-out b...

  20. The Dark Energy Survey Camera (DECam)

    International Nuclear Information System (INIS)

    The Dark Energy Survey (DES) is a next generation optical survey aimed at understanding the expansion rate of the Universe using four complementary methods: weak gravitational lensing, galaxy cluster counts, baryon acoustic oscillations, and Type Ia supernovae. To perform the survey, the DES Collaboration is building the Dark Energy Camera (DECam), a 3 square degree, 570 Megapixel CCD camera that will be mounted at the prime focus of the Blanco 4-meter telescope at the Cerro Tololo Inter-American Observatory. CCD production has finished, yielding roughly twice the required 62 2k x 4k detectors. The construction of DECam is nearly finished. Integration and commissioning on a 'telescope simulator' of the major hardware and software components, except for the optics, recently concluded at Fermilab. Final assembly of the optical corrector has started at University College, London. Some components have already been received at CTIO. 'First-light' will be sometime in 2012. This oral presentation concentrates on the technical challenges involved in building DECam (and how we overcame them), and the present status of the instrument.

  1. Mars Cameras Make Panoramic Photography a Snap

    Science.gov (United States)

    2008-01-01

    If you wish to explore a Martian landscape without leaving your armchair, a few simple clicks around the NASA Web site will lead you to panoramic photographs taken from the Mars Exploration Rovers, Spirit and Opportunity. Many of the technologies that enable this spectacular Mars photography have also inspired advancements in photography here on Earth, including the panoramic camera (Pancam) and its housing assembly, designed by the Jet Propulsion Laboratory and Cornell University for the Mars missions. Mounted atop each rover, the Pancam mast assembly (PMA) can tilt a full 180 degrees and swivel 360 degrees, allowing for a complete, highly detailed view of the Martian landscape. The rover Pancams take small, 1 megapixel (1 million pixel) digital photographs, which are stitched together into large panoramas that sometimes measure 4 by 24 megapixels. The Pancam software performs some image correction and stitching after the photographs are transmitted back to Earth. Different lens filters and a spectrometer also assist scientists in their analyses of infrared radiation from the objects in the photographs. These photographs from Mars spurred developers to begin thinking in terms of larger and higher quality images: super-sized digital pictures, or gigapixels, which are images composed of 1 billion or more pixels. Gigapixel images are more than 200 times the size captured by today's standard 4 megapixel digital camera. Although originally created for the Mars missions, the detail provided by these large photographs allows for many purposes, not all of which are limited to extraterrestrial photography.

  2. FIDO Rover Retracted Arm and Camera

    Science.gov (United States)

    1999-01-01

    The Field Integrated Design and Operations (FIDO) rover extends the large mast that carries its panoramic camera. The FIDO is being used in ongoing NASA field tests to simulate driving conditions on Mars. FIDO is controlled from the mission control room at JPL's Planetary Robotics Laboratory in Pasadena. FIDO uses a robot arm to manipulate science instruments and it has a new mini-corer or drill to extract and cache rock samples. Several camera systems onboard allow the rover to collect science and navigation images by remote-control. The rover is about the size of a coffee table and weighs as much as a St. Bernard, about 70 kilograms (150 pounds). It is approximately 85 centimeters (about 33 inches) wide, 105 centimeters (41 inches) long, and 55 centimeters (22 inches) high. The rover moves up to 300 meters an hour (less than a mile per hour) over smooth terrain, using its onboard stereo vision systems to detect and avoid obstacles as it travels 'on-the-fly.' During these tests, FIDO is powered by both solar panels that cover the top of the rover and by replaceable, rechargeable batteries.

  3. Gamma camera based FDG PET in oncology

    International Nuclear Information System (INIS)

    Positron Emission Tomography (PET) was introduced as a research tool in the 1970s and it took about 20 years before PET became a useful clinical imaging modality. In the USA, insurance coverage for PET procedures in the 1990s was, I believe, the turning point for this progress. Initially PET was used in neurology, but recently more than 80% of PET procedures are in oncological applications. I firmly believe that in the 21st century one cannot manage cancer patients properly without PET, and that PET is a very important medical imaging modality in basic and clinical sciences. PET is grouped into 2 categories: conventional (c) and gamma camera based (CB) PET. CBPET is more readily available to many medical centers, utilizing dual-head gamma cameras and commercially available FDG, at low cost to patients. In fact there are more CBPET than cPET systems in operation in the USA. CBPET is inferior to cPET in its performance, but clinical studies in oncology are feasible without expensive infrastructure such as staffing, rooms and equipment. At Ajou University Hospital, CBPET was installed in late 1997, for the first time in Korea as well as in Asia, and the system has been used successfully and effectively in oncological applications. Ours was the fourth PET operation in Korea, and I believe this may have been instrumental in getting other institutions interested in clinical PET. The following is a brief description of our clinical experience of FDG CBPET in oncology.

  4. Time-of-Flight Microwave Camera

    Science.gov (United States)

    Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh

    2015-10-01

    Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
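The quoted depth figures follow from standard FMCW relations: range resolution is c/(2B) for a sweep bandwidth B, and the 200 ps time resolution corresponds to about 6 cm of free-space path. A quick back-of-envelope check:

```python
# FMCW range resolution and time-of-flight path, using the band quoted in
# the record (8-12 GHz, so B = 4 GHz) and the 200 ps time resolution.

C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(bandwidth_hz: float) -> float:
    """FMCW range resolution: dR = c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

B = 4e9  # X-band sweep, 12 GHz - 8 GHz

print(range_resolution_m(B) * 100)  # ~3.75 cm depth resolution
print(C * 200e-12 * 100)            # ~6.0 cm path for 200 ps, as quoted
```

The factor of two in c/(2B) accounts for the round trip to the target and back.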

  5. The design of aerial camera focusing mechanism

    Science.gov (United States)

    Hu, Changchang; Yang, Hongtao; Niu, Haijun

    2015-10-01

    To ensure the imaging resolution of an aerial camera, to compensate for defocusing caused by changes in atmospheric temperature, pressure, oblique photographing distance and other environmental factors [1,2], and to meet the camera's overall design requirements for lower mass and smaller size, a linear focusing mechanism is designed. Through the target surface support, the target surface component is connected to the focusing drive mechanism. Using precision ball screws, the focusing mechanism transforms the rotary motion of the motor into linear motion of the focal plane assembly. Motion is constrained by a linear guide, a magnetic encoder detects the displacement response, and closed-loop control achieves accurate focusing. This paper presents the design scheme of the focusing mechanism and analyzes its error sources; the design offers low friction and a simple transmission chain, reducing the transmission error effectively. The target surface is also analyzed by finite element analysis and given a lightweight design. The results show that the focusing mechanism achieves a precision better than 3 μm over a focusing range of ±2 mm.

  6. Far-infrared cameras for automotive safety

    Science.gov (United States)

    Lonnoy, Jacques; Le Guilloux, Yann; Moreira, Raphael

    2005-02-01

    Far-infrared (FIR) cameras, initially used for driving military vehicles, are slowly entering the commercial (luxury) car market, where FIR imagery provides useful assistance for driving at night or in adverse conditions (fog, smoke, ...). However, this imagery requires some driver effort, as image understanding is not as natural as with visible or near-IR imagery. A developing field for FIR cameras is ADAS (Advanced Driver Assistance Systems), where processed FIR imagery fused with data from other sensors (radar, ...) warns the driver when dangerous situations occur. This communication concentrates on processed FIR imagery for detecting objects or obstacles on or near the road. FIR imagery, which highlights hot spots, is a powerful detection tool, as it provides good contrast on some of the most common elements of road scenery (engines, wheels, gas exhaust pipes, pedestrians, two-wheelers, animals, ...). Moreover, FIR algorithms are much more robust than visible-light ones, as image contrast varies less over time (day/night, shadows, ...). Our detection algorithm is based on the distinctive appearance of vehicles and pedestrians in FIR images on the one hand, and on the analysis of motion over time, which allows anticipation of future motion, on the other. We will show results obtained with processed FIR imagery within the PAROTO project, supported by the French Ministry of Research, which ended in spring 2004.

  7. Real-time holographic camera system

    Science.gov (United States)

    Bazhenov, Mikhail Y.; Grabovski, Vitaly V.; Stolyarenko, Alexandr V.; Zahaykevich, George A.

    1997-04-01

    The holographic camera system for multiple reversible registration of surface-relief holograms is presented. The photosensitive medium is a single-layer photothermoplastic polymer on a glass substrate with a conductive layer. This excludes charge accumulation in the polymer volume and permits efficient enhancement of the latent electrostatic image and its fast pulse-heating development. The processes of charging, photogeneration, carrier transport, fast development and erasing, and image enhancement were studied in detail and optimized. To remedy some defects of photothermoplastic recording that originate from environmental influences and recording conditions, several new processes were developed: (1) fast charging with pulsed corona in a closed dielectric volume, (2) optoelectronic enhancement of the electrostatic image, and (3) fast pulsed development with an automatically controlled temperature rate. The dust-proof recording camera, with built-in high-voltage power supply and thermo- and photosensors, was designed to meet the needs of real-time or multiple-exposure interferometry, holographic training recording, holographic storage systems, correlation investigations and pattern recognition.

  8. The Mars NetLander panoramic camera

    Science.gov (United States)

    Jaumann, Ralf; Langevin, Yves; Hauber, Ernst; Oberst, Jürgen; Grothues, Hans-Georg; Hoffmann, Harald; Soufflot, Alain; Bertaux, Jean-Loup; Dimarellis, Emmanuel; Mottola, Stefano; Bibring, Jean-Pierre; Neukum, Gerhard; Albertz, Jörg; Masson, Philippe; Pinet, Patrick; Lamy, Philippe; Formisano, Vittorio

    2000-10-01

    The panoramic camera (PanCam) imaging experiment is designed to obtain high-resolution multispectral stereoscopic panoramic images from each of the four Mars NetLander 2005 sites. The main scientific objectives to be addressed by the PanCam experiment are (1) to locate the landing sites and support the NetLander network sciences, (2) to geologically investigate and map the landing sites, and (3) to study the properties of the atmosphere and of variable phenomena. To place in situ measurements at a landing site into a proper regional context, it is necessary to determine the lander orientation on ground and to exactly locate the position of the landing site with respect to the available cartographic database. This is not possible by tracking alone due to the lack of on-ground orientation and the so-called map-tie problem. Images provided by the PanCam make it possible to determine accurate tilt and north directions for each lander and to identify the lander locations based on landmarks, which can also be recognized in appropriate orbiter imagery. With this information, it will be further possible to improve the Mars-wide geodetic control point network and the resulting geometric precision of global map products. The major geoscientific objectives of the PanCam lander images are the recognition of surface features like ripples, ridges and troughs, and the identification and characterization of different rock and surface units based on their morphology, distribution, spectral characteristics, and physical properties. The analysis of the PanCam imagery will finally result in the generation of precise map products for each of the landing sites. So far comparative geologic studies of the Martian surface are restricted to the temporally separated Mars Pathfinder and the two Viking Lander Missions. Further lander missions are in preparation (Beagle-2, Mars Surveyor 03).
NetLander provides the unique opportunity to nearly double the number of accessible landing site data by providing

  9. Women's Creation of Camera Phone Culture

    Directory of Open Access Journals (Sweden)

    Dong-Hoo Lee

    2005-01-01

    Full Text Available A major aspect of the relationship between women and the media is the extent to which the new media environment is shaping how women live and perceive the world. It is necessary to understand, in a concrete way, how the new media environment is articulated to our gendered culture, how the symbolic or physical forms of the new media condition women’s experiences, and the degree to which a ‘post-gendered re-codification’ can be realized within a new media environment. This paper intends to provide an ethnographic case study of women’s experiences with camera phones, examining the extent to which these experiences recreate or reconstruct women’s subjectivity or identity. By taking a close look at the ways in which women utilize and appropriate the camera phone in their daily lives, it focuses not only on women’s cultural practices in making meanings but also on their possible effect in the deconstruction of gendered techno-culture.

  10. Focal Plane Metrology for the LSST Camera

    Energy Technology Data Exchange (ETDEWEB)

    Rasmussen, Andrew P.; Hale, Layton; Kim, Peter; Lee, Eric; Perl, Martin; Schindler, Rafe; Takacs, Peter; Thurston, Timothy (SLAC)

    2007-01-10

    Meeting the science goals for the Large Synoptic Survey Telescope (LSST) translates into a demanding set of imaging performance requirements for the optical system over a wide (3.5°) field of view. In turn, meeting those imaging requirements necessitates maintaining precise control of the focal plane surface (10 µm P-V) over the entire field of view (640 mm diameter) at the operating temperature (T ≈ -100 °C) and over the operational elevation angle range. We briefly describe the hierarchical design approach for the LSST Camera focal plane and the baseline design for assembling the flat focal plane at room temperature. Preliminary results of gravity load and thermal distortion calculations are provided, and early metrological verification of candidate materials under cold thermal conditions is presented. A detailed, generalized method for stitching together sparse metrology data originating from differential, non-contact metrological data acquisition spanning multiple (non-continuous) sensor surfaces making up the focal plane is described and demonstrated. Finally, we describe some in situ alignment verification alternatives, some of which may be integrated into the camera's focal plane.

  11. Lights, Camera, AG-Tion: Promoting Agricultural and Environmental Education on Camera

    Science.gov (United States)

    Fuhrman, Nicholas E.

    2016-01-01

    Viewing of online videos and television segments has become a popular and efficient way for Extension audiences to acquire information. This article describes a unique approach to teaching on camera that may help Extension educators communicate their messages with comfort and personality. The S.A.L.A.D. approach emphasizes using relevant teaching…

  12. Motorcycle detection and counting using stereo camera, IR camera, and microphone array

    Science.gov (United States)

    Ling, Bo; Gibson, David R. P.; Middleton, Dan

    2013-03-01

    Detection, classification, and characterization are the key to enhancing motorcycle safety, motorcycle operations and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of auto fatalities. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including stereo camera, thermal IR camera and unidirectional microphone array. The IR thermal camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes that often show bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist who can be easily windowed out in the stereo disparity map. If the motorcyclist is detected through his or her 3D body recognition, motorcycle is detected. Microphones are used to detect motorcycles that often produce low frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize the interferences of background noises from sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has an excellent performance.
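
The exhaust-pipe cue described above (bright elongated blobs in the IR image) can be sketched as a threshold-and-measure step. Below is a minimal illustration in Python/NumPy that assumes a single blob per frame; the threshold value, array sizes, and function name are illustrative and not taken from the paper:

```python
import numpy as np

def hot_blob_elongation(ir_image, threshold):
    """Threshold a thermal image and return the bounding-box aspect
    ratio (long side / short side) of the hot region, or None if no
    pixel exceeds the threshold. Elongated blobs (high ratio) are a
    crude cue for exhaust-pipe thermal signatures."""
    mask = ir_image > threshold
    if not mask.any():
        return None
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    h = rows[-1] - rows[0] + 1
    w = cols[-1] - cols[0] + 1
    return max(h, w) / min(h, w)

# Synthetic "exhaust pipe": a 20 x 4 pixel hot stripe on a cool background.
frame = np.zeros((64, 64))
frame[30:34, 10:30] = 80.0          # hot stripe, e.g. degrees above ambient
elongation = hot_blob_elongation(frame, threshold=50.0)
```

A real detector would of course segment multiple blobs and combine this cue with the stereo and acoustic channels described in the record.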

  13. Calibration of the Lunar Reconnaissance Orbiter Camera

    Science.gov (United States)

    Tschimmel, M.; Robinson, M. S.; Humm, D. C.; Denevi, B. W.; Lawrence, S. J.; Brylow, S.; Ravine, M.; Ghaemi, T.

    2008-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) onboard the NASA Lunar Reconnaissance Orbiter (LRO) spacecraft consists of three cameras: the Wide-Angle Camera (WAC) and two identical Narrow Angle Cameras (NAC-L, NAC-R). The WAC is a push-frame imager with 5 visible wavelength filters (415 to 680 nm) at a spatial resolution of 100 m/pixel and 2 UV filters (315 and 360 nm) with a resolution of 400 m/pixel. In addition to multicolor imaging, the WAC can operate in monochrome mode to provide a global large-incidence-angle basemap and a time-lapse movie of the illumination conditions at both poles. The WAC has a highly linear response, a read noise of 72 e- and a full well capacity of 47,200 e-. The signal-to-noise ratio in each band is 140 in the worst case. There are no out-of-band leaks and the spectral response of each filter is well characterized. Each NAC is a monochrome pushbroom scanner, providing images with a resolution of 50 cm/pixel from a 50-km orbit. A single NAC image has a swath width of 2.5 km and a length of up to 26 km. The NACs are mounted to acquire side-by-side imaging for a combined swath width of 5 km. The NAC is designed to fully characterize future human and robotic landing sites in terms of topography and hazard risks. The North and South poles will be mapped on a 1-meter scale poleward of 85.5° latitude. Stereo coverage can be provided by pointing the NACs off-nadir. The NACs are also highly linear. Read noise is 71 e- for NAC-L and 74 e- for NAC-R and the full well capacity is 248,500 e- for NAC-L and 262,500 e- for NAC-R. The focal lengths are 699.6 mm for NAC-L and 701.6 mm for NAC-R; the system MTF is 28% for NAC-L and 26% for NAC-R. The signal-to-noise ratio is at least 46 (terminator scene) and can be higher than 200 (high sun scene). Both NACs exhibit a straylight feature, which is caused by out-of-field sources and is of a magnitude of 1-3%. However, as this feature is well understood it can be greatly reduced during ground

  14. Photometric Calibration of Consumer Video Cameras

    Science.gov (United States)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). 
To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  15. A wide-angle camera module for disposable endoscopy

    Science.gov (United States)

    Shim, Dongha; Yeon, Jesun; Yi, Jason; Park, Jongwon; Park, Soo Nam; Lee, Nanhee

    2016-08-01

    A wide-angle miniaturized camera module for disposable endoscopes is demonstrated in this paper. A lens module with a 150° angle of view (AOV) is designed and manufactured. All-plastic injection-molded lenses and a commercial CMOS image sensor are employed to reduce the manufacturing cost. The image sensor and LED illumination unit are assembled with the lens module. The camera module does not include a camera processor, to further reduce its size and cost. The size of the camera module is 5.5 × 5.5 × 22.3 mm³. The diagonal field of view (FOV) of the camera module is measured to be 110°. A prototype of a disposable endoscope is implemented to perform pre-clinical animal testing. The esophagus of an adult beagle dog is observed. These results demonstrate the feasibility of a cost-effective and high-performance camera module for disposable endoscopy.
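
The measured 110° diagonal FOV can be related to the lens geometry through the ideal rectilinear relation FOV = 2·atan(d/2f). The sketch below uses hypothetical sensor-diagonal and focal-length values (the record does not state them); real wide-angle endoscope lenses deviate substantially from this distortion-free model:

```python
import math

def diagonal_fov_deg(sensor_diag_mm, focal_mm):
    """Diagonal field of view of an ideal (rectilinear, distortion-free)
    lens: FOV = 2 * atan(d / 2f)."""
    return math.degrees(2.0 * math.atan(sensor_diag_mm / (2.0 * focal_mm)))

# Hypothetical numbers: a ~2 mm sensor diagonal with a 0.7 mm effective
# focal length gives roughly the 110 deg diagonal FOV measured here.
fov = diagonal_fov_deg(sensor_diag_mm=2.0, focal_mm=0.7)
```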

  17. On Pixel Detection Threshold in the Gigavision Camera

    OpenAIRE

    Yang, F.; Sbaiz, L.; Charbon, E.; Susstrunk, S.; Vetterli, M.

    2010-01-01

    Recently, we have proposed a new image device called gigavision camera whose most important characteristic is that pixels have binary response. The response function of a gigavision sensor is non-linear and similar to a logarithmic function, which makes the camera suitable for high dynamic range imaging. One important parameter in the gigavision camera is the threshold for generating binary pixels. Threshold T relates to the number of photo-electrons necessary for the pixel output to switch f...
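
The binary response can be illustrated with a simple Poisson photon model: a pixel outputs 1 only if its photo-electron count reaches the threshold T, and the mean over many pixels yields the nonlinear, log-like response curve mentioned in the abstract. A minimal simulation sketch (all parameters are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_pixel_response(mean_photons, threshold, n_pixels=100_000):
    """Fraction of binary pixels that fire when each receives a
    Poisson(mean_photons) number of photo-electrons and a pixel
    outputs 1 iff its count reaches the threshold T."""
    counts = rng.poisson(mean_photons, size=n_pixels)
    return float(np.mean(counts >= threshold))

# Sweeping the exposure shows the nonlinear, slowly saturating curve.
exposures = [0.5, 1, 2, 4, 8, 16]
curve = [binary_pixel_response(m, threshold=2) for m in exposures]
```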

  18. Abnormal Event Detection via Multikernel Learning for Distributed Camera Networks

    OpenAIRE

    Tian Wang; Jie Chen; Paul Honeine; Hichem Snoussi

    2015-01-01

    Distributed camera networks play an important role in public security surveillance. Analyzing video sequences from cameras set at different angles will provide enhanced performance for detecting abnormal events. In this paper, an abnormal detection algorithm is proposed to identify unusual events captured by multiple cameras. The visual event is summarized and represented by the histogram of the optical flow orientation descriptor, and then a multikernel strategy that takes the multiview scen...
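
The histogram-of-optical-flow-orientation descriptor mentioned above can be sketched as follows; the bin count, magnitude cutoff, and normalization choice are illustrative assumptions, not details from the paper:

```python
import numpy as np

def flow_orientation_histogram(flow_u, flow_v, n_bins=8, min_mag=1e-3):
    """Histogram of optical-flow orientations (HOFO-style descriptor).
    Vectors shorter than min_mag are treated as static background and
    ignored; the histogram is L1-normalised so frames of different
    sizes are comparable."""
    mag = np.hypot(flow_u, flow_v)
    ang = np.arctan2(flow_v, flow_u)[mag >= min_mag]   # angles in (-pi, pi]
    hist, _ = np.histogram(ang, bins=n_bins, range=(-np.pi, np.pi))
    total = hist.sum()
    return hist / total if total else hist.astype(float)

# Toy flow field in which everything moves to the right (+u direction):
u = np.ones((4, 4))
v = np.zeros((4, 4))
h = flow_orientation_histogram(u, v, n_bins=8)
```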

  19. IR Camera Report for the 7 Day Production Test

    International Nuclear Information System (INIS)

    The following report gives a summary of the IR camera performance results and data for the 7 day production run that occurred from 10 Sep 2015 thru 16 Sep 2015. During this production run our goal was to see how well the camera performed its task of monitoring the target window temperature with our improved alignment procedure and emissivity measurements. We also wanted to see if the increased shielding would be effective in protecting the camera from damage and failure.

  20. Movement-based interaction in camera spaces: a conceptual framework

    DEFF Research Database (Denmark)

    Eriksson, Eva; Hansen, Thomas Riisgaard; Lykke-Olesen, Andreas

    2007-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking, and we show how these three concepts can be used to analyse other camera tracking applications.

  1. Extrinsic Calibration of Camera Networks Using a Sphere

    OpenAIRE

    Junzhi Guan; Francis Deboeverie; Maarten Slembrouck; Dirk Van Haerenborgh; Dimitri van Cauwelaert; Peter Veelaert; Wilfried Philips

    2015-01-01

    In this paper, we propose a novel extrinsic calibration method for camera networks using a sphere as the calibration object. First of all, we propose an easy and accurate method to estimate the 3D positions of the sphere center w.r.t. the local camera coordinate system. Then, we propose to use orthogonal procrustes analysis to pairwise estimate the initial camera relative extrinsic parameters based on the aforementioned estimation of 3D positions. Finally, an optimization routine is applied t...
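
The orthogonal Procrustes step, estimating the relative pose of two cameras from matched 3D sphere-center positions, can be sketched with the standard SVD solution. A minimal illustration (function names and test data are mine, not the authors'):

```python
import numpy as np

def procrustes_rigid(P, Q):
    """Orthogonal-Procrustes estimate of the rigid transform (R, t)
    with Q ~= R @ P + t, from matched 3-D points (3 x N arrays).
    Here P and Q would be sphere-centre positions expressed in two
    different camera coordinate systems."""
    p0, q0 = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (Q - q0) @ (P - p0).T
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections (det = -1 solutions).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    t = q0 - R @ p0
    return R, t

# Sanity check with a known rotation about z and a translation.
rng = np.random.default_rng(1)
P = rng.standard_normal((3, 6))
c, s = np.cos(0.4), np.sin(0.4)
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])
t_true = np.array([[0.3], [-1.2], [2.0]])
Q = R_true @ P + t_true
R_est, t_est = procrustes_rigid(P, Q)
```

In the paper's setting, these pairwise estimates would then seed the joint optimization over the whole camera network.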

  2. Calibration of a Stereo Radiation Detection Camera Using Planar Homography

    OpenAIRE

    Seung-Hae Baek; Pathum Rathnayaka; Soon-Yong Park

    2016-01-01

    This paper proposes a calibration technique for a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision society. However, little or no stereo calibration has been investigated in radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, it is impossible to use a general stereo calibration algorithm directly. In this paper, we develop ...

  3. More Accurate Pinhole Camera Calibration with Imperfect Planar Target

    OpenAIRE

    Strobl, Klaus H.; Hirzinger, Gerd

    2011-01-01

    This paper presents a novel approach to camera calibration that improves final accuracy with respect to standard methods using precision planar targets, even if now inaccurate, unmeasured, roughly planar targets can be used. The work builds on a recent trend in camera calibration, namely concurrent optimization of scene structure together with the intrinsic camera parameters. A novel formulation is presented that allows maximum likelihood estimation in the case of inaccurate targets, as it ex...

  4. Reading Between the Pixels: Photographic Steganography for Camera Display Messaging

    OpenAIRE

    Wengrowski, Eric; Dana, Kristin; Gruteser, Marco; Mandayam, Narayan

    2016-01-01

    We exploit human color metamers to send light-modulated messages less visible to the human eye, but recoverable by cameras. These messages are a key component to camera-display messaging, such as handheld smartphones capturing information from electronic signage. Each color pixel in the display image is modified by a particular color gradient vector. The challenge is to find the color gradient that maximizes camera response, while minimizing human response. The mismatch in human spectral and ...

  5. Quality assessment of user-generated video using camera motion

    OpenAIRE

    Guo, Jinlin; Gurrin, Cathal; Hopfgartner, Frank; Zhang, ZhenXing; Lao, Songyang

    2013-01-01

    With user-generated video (UGV) becoming so popular on the Web, a reliable quality assessment (QA) measure for UGV is necessary for improving users' quality of experience in video-based applications. In this paper, we explore QA of UGV based on how much irregular camera motion it contains, in a low-cost manner. A block-match-based optical flow approach has been employed to extract camera motion features in UGV, based on which irregular camera motion is calculated and ...
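
Block-matching motion estimation of the kind referred to above can be sketched with an exhaustive SAD search; block size, search range, and the synthetic test are illustrative assumptions, and the spread of the resulting vectors would serve as a crude irregular-motion cue:

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Per-block motion vectors between two grayscale frames using
    exhaustive SAD (sum of absolute differences) block matching.
    Only blocks far enough from the border that every candidate
    window stays inside the frame are considered."""
    H, W = prev.shape
    vectors = []
    for by in range(search, H - block - search + 1, block):
        for bx in range(search, W - block - search + 1, block):
            ref = prev[by:by + block, bx:bx + block]
            best, best_v = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    sad = np.abs(curr[y:y + block, x:x + block] - ref).sum()
                    if best is None or sad < best:
                        best, best_v = sad, (dx, dy)
            vectors.append(best_v)
    return vectors

# A frame shifted right by 2 pixels yields (2, 0) for interior blocks.
rng = np.random.default_rng(2)
prev = rng.random((24, 24))
curr = np.roll(prev, 2, axis=1)
vecs = block_match(prev, curr, block=8, search=4)
```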

  6. PHOTOGRAMMETRIC PROCESSING OF APOLLO 15 METRIC CAMERA OBLIQUE IMAGES

    OpenAIRE

    Edmundson, K. L.; Alexandrov, O.; Archinal, B. A.; Becker, K. J.; Becker, T. L.; Kirk, R. L.; Moratto, Z. M.; Nefian, A. V.; Richie, J. O.; Robinson, M. S.

    2016-01-01

    The integrated photogrammetric mapping system flown on the last three Apollo lunar missions (15, 16, and 17) in the early 1970s incorporated a Metric (mapping) Camera, a high-resolution Panoramic Camera, and a star camera and laser altimeter to provide support data. In an ongoing collaboration, the U.S. Geological Survey’s Astrogeology Science Center, the Intelligent Robotics Group of the NASA Ames Research Center, and Arizona State University are working to achieve the most complete...

  7. Central Acceptance Testing for Camera Technologies for CTA

    OpenAIRE

    Bonardi, A.; Buanes, T.; Chadwick, P.; Dazzi, F.; Förster, A. (CERN, Geneva, Switzerland); Hörandel, J. R.; Punch, M.; Wagner, R. M., for the CTA Consortium

    2015-01-01

    The Cherenkov Telescope Array (CTA) is an international initiative to build the next generation ground based very-high energy gamma-ray observatory. It will consist of telescopes of three different sizes, employing several different technologies for the cameras that detect the Cherenkov light from the observed air showers. In order to ensure the compliance of each camera technology with CTA requirements, CTA will perform central acceptance testing of each camera technology. To assist with thi...

  8. IR Camera Report for the 7 Day Production Test

    Energy Technology Data Exchange (ETDEWEB)

    Holloway, Michael Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-22

    The following report gives a summary of the IR camera performance results and data for the 7 day production run that occurred from 10 Sep 2015 thru 16 Sep 2015. During this production run our goal was to see how well the camera performed its task of monitoring the target window temperature with our improved alignment procedure and emissivity measurements. We also wanted to see if the increased shielding would be effective in protecting the camera from damage and failure.

  9. Analysis of Camera Arrays Applicable to the Internet of Things

    OpenAIRE

    Jiachen Yang; Ru Xu; Zhihan Lv; Houbing Song

    2016-01-01

    The Internet of Things is built based on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on the camera modeling and analysis, which is very important for stereo display and helps with viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are...

  10. 360 deg Camera Head for Unmanned Sea Surface Vehicles

    Science.gov (United States)

    Townsend, Julie A.; Kulczycki, Eric A.; Willson, Reginald G.; Huntsberger, Terrance L.; Garrett, Michael S.; Trebi-Ollennu, Ashitey; Bergh, Charles F.

    2012-01-01

    The 360° camera head consists of a set of six color cameras arranged in a circular pattern such that their overlapping fields of view give a full 360° view of the immediate surroundings. The cameras are enclosed in a watertight container along with support electronics and a power distribution system. Each camera views the world through a watertight porthole. To prevent overheating or condensation in extreme weather conditions, the watertight container is also equipped with an electrical cooling unit and a pair of internal fans for circulation.
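
With six cameras in an evenly spaced ring, each camera must cover at least 360°/6 = 60° horizontally for the fields of view to overlap. A trivial sketch of that coverage arithmetic (the 80° per-camera FOV is an assumed value, not stated in the record):

```python
def ring_overlap_deg(n_cameras, horizontal_fov_deg):
    """Angular overlap between adjacent cameras in an evenly spaced
    ring. A negative result means there are gaps in the 360-degree
    coverage."""
    return horizontal_fov_deg - 360.0 / n_cameras

# Six cameras with an assumed 80-degree horizontal FOV each:
overlap = ring_overlap_deg(6, 80.0)   # 20 degrees of overlap per seam
```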

  11. Geometric Stability and Lens Decentering in Compact Digital Cameras

    Directory of Open Access Journals (Sweden)

    María Flor Álvarez Taboada

    2010-03-01

    Full Text Available A study on the geometric stability and decentering present in sensor-lens systems of six identical compact digital cameras has been conducted. With regard to geometrical stability, the variation of internal geometry parameters (principal distance, principal point position and distortion parameters was considered. With regard to lens decentering, the amount of radial and tangential displacement resulting from decentering distortion was related with the precision of the camera and with the offset of the principal point from the geometric center of the sensor. The study was conducted with data obtained after 372 calibration processes (62 per camera. The tests were performed for each camera in three situations: during continuous use of the cameras, after camera power off/on and after the full extension and retraction of the zoom-lens. Additionally, 360 new calibrations were performed in order to study the variation of the internal geometry when the camera is rotated. The aim of this study was to relate the level of stability and decentering in a camera with the precision and quality that can be obtained. An additional goal was to provide practical recommendations about photogrammetric use of such cameras.

  12. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    Science.gov (United States)

    Shortis, Mark

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172

  13. The new camera calibration system at the US Geological Survey

    Science.gov (United States)

    Light, D.L.

    1992-01-01

    Modern computerized photogrammetric instruments are capable of utilizing both radial and decentering camera calibration parameters which can increase plotting accuracy over that of older analog instrumentation technology from previous decades. Also, recent design improvements in aerial cameras have minimized distortions and increased the resolving power of camera systems, which should improve the performance of the overall photogrammetric process. In concert with these improvements, the Geological Survey has adopted the rigorous mathematical model for camera calibration developed by Duane Brown. An explanation of the Geological Survey's calibration facility and the additional calibration parameters now being provided in the USGS calibration certificate are reviewed. -Author
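
Brown's model, referred to above, expresses radial and decentering (tangential) distortion of normalized image coordinates with a small set of coefficients. A minimal sketch of the forward model, keeping two radial and two tangential terms (higher-order terms omitted):

```python
def brown_distort(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Apply the Brown radial (k1, k2) and decentering/tangential
    (p1, p2) distortion model to normalized image coordinates (x, y),
    returning the distorted coordinates."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

# With all coefficients zero the mapping is the identity; pure radial
# (barrel) distortion moves points inward along the radius.
xd, yd = brown_distort(0.5, 0.0, k1=-0.1)
```

Calibration inverts this: the coefficients are estimated by least squares from images of a known target, which is what the certificate parameters describe.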

  14. Calibration and accuracy analysis of a focused plenoptic camera

    OpenAIRE

    Zeller, N.; F. Quint; U. Stilla

    2014-01-01

    In this article we introduce new methods for the calibration of depth images from focused plenoptic cameras and validate the results. We start with a brief description of the concept of a focused plenoptic camera and how from the recorded raw image a depth map can be estimated. For this camera, an analytical expression of the depth accuracy is derived for the first time. In the main part of the paper, methods to calibrate a focused plenoptic camera are developed and evaluated. The op...

  15. Testing and evaluation of thermal cameras for absolute temperature measurement

    Science.gov (United States)

    Chrzanowski, Krzysztof; Fischer, Joachim; Matyszkiel, Robert

    2000-09-01

    The accuracy of temperature measurement is the most important criterion for the evaluation of thermal cameras used in applications requiring absolute temperature measurement. All the main international metrological organizations currently propose a parameter called uncertainty as a measure of measurement accuracy. We propose a set of parameters for the characterization of thermal measurement cameras. It is shown that if these parameters are known, then it is possible to determine the uncertainty of temperature measurement due only to the internal errors of these cameras. Values of this uncertainty can be used as an objective criterion for comparisons of different thermal measurement cameras.
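
When the component errors are independent, the standard way to turn a set of characterization parameters into a single measurement uncertainty is to combine them in quadrature, following the GUM convention. A minimal sketch (the component values are hypothetical):

```python
import math

def combined_standard_uncertainty(components):
    """Combine independent standard-uncertainty components u_i
    (e.g. noise-equivalent temperature difference, emissivity-setting
    error, ambient-temperature error of a thermal camera) in
    quadrature: u_c = sqrt(sum u_i^2)."""
    return math.sqrt(sum(u * u for u in components))

def expanded_uncertainty(components, k=2.0):
    """Expanded uncertainty U = k * u_c; k = 2 gives roughly 95 %
    coverage for an approximately normal error distribution."""
    return k * combined_standard_uncertainty(components)

# Hypothetical component uncertainties, in kelvin:
u_c = combined_standard_uncertainty([0.3, 0.4])   # ~0.5 K
U95 = expanded_uncertainty([0.3, 0.4])            # ~1.0 K
```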

  16. Design and tests of a portable mini gamma camera

    International Nuclear Information System (INIS)

    Design optimization, manufacturing, and tests, both laboratory and clinical, of a portable gamma camera for medical applications are presented. This camera, based on a continuous scintillation crystal and a position-sensitive photomultiplier tube, has an intrinsic spatial resolution of ≅2 mm, an energy resolution of 13% at 140 keV, and linearities of 0.28 mm (absolute) and 0.15 mm (differential), with a useful field of view of 4.6 cm diameter. Our camera can image small organs with high efficiency, and so it can address the demand for devices for specific clinical applications like thyroid and sentinel node scintigraphy as well as scintimammography and radio-guided surgery. The main advantages of the gamma camera with respect to those previously reported in the literature are high portability, low cost, and low weight (2 kg), with no significant loss of sensitivity or spatial resolution. All the electronic components are packed inside the mini gamma camera, and no external electronic devices are required. The camera is connected only through the universal serial bus port to a portable personal computer (PC), where specific software allows control of both the camera parameters and the measuring process, displaying the acquired image on the PC in real time. In this article, we present the camera and describe the procedures that have led us to choose its configuration. Laboratory and clinical tests are presented together with diagnostic capabilities of the gamma camera

  17. Calibration of a Stereo Radiation Detection Camera Using Planar Homography

    Directory of Open Access Journals (Sweden)

    Seung-Hae Baek

    2016-01-01

    Full Text Available This paper proposes a calibration technique for a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision society. However, little or no stereo calibration has been investigated in radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, it is impossible to use a general stereo calibration algorithm directly. In this paper, we develop a hybrid-type stereo system which is equipped with both radiation and vision cameras. To calibrate the stereo radiation cameras, stereo images of a calibration pattern captured from the vision cameras are transformed into the view of the radiation cameras. The homography transformation is calibrated based on the geometric relationship between the visual and radiation camera coordinates. The accuracy of the stereo parameters of the radiation camera is analyzed by distance measurements to both visible-light and gamma sources. The experimental results show that the measurement error is about 3%.
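
The homography calibration step, mapping points between visual and radiation camera coordinates, can be sketched with the standard DLT (direct linear transform) estimate from point correspondences. A minimal illustration (the matrix and points are synthetic, not data from the paper):

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT estimate of the 3x3 homography H with dst ~ H @ src (up to
    scale), from N >= 4 point correspondences given as N x 2 arrays."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right singular vector of the smallest
    # singular value (null space of A).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map N x 2 points through H with the homogeneous divide."""
    pts_h = np.c_[pts, np.ones(len(pts))] @ H.T
    return pts_h[:, :2] / pts_h[:, 2:3]

# Round trip: points mapped by a known H are recovered by the estimate.
H_true = np.array([[1.2, 0.1, 5.0], [-0.2, 0.9, -3.0], [1e-3, 2e-3, 1.0]])
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.25]], dtype=float)
dst = apply_homography(H_true, src)
H_est = estimate_homography(src, dst)
```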

  18. Neutron emissivity profile camera diagnostics considering present and future tokamaks

    International Nuclear Information System (INIS)

    This thesis describes the neutron profile camera situated at JET. The profile camera is one of the most important neutron emission diagnostic devices operating at JET. It gives useful information not only about the total neutron yield rate but also about the neutron emissivity distribution. Data analysis was performed in order to compare three different calibration methods. The data were collected from the deuterium campaign, C4, in the beginning of 2001. The thesis also includes a section on the implications of a neutron profile camera for ITER, focusing on interface difficulties. The ITER JCT (Joint Central Team) proposal of a neutron camera for ITER is studied in some detail.

  19. Plenoptic camera image simulation for reconstruction algorithm verification

    Science.gov (United States)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to yield a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.

  20. Camera Raw Explained (3)

    Institute of Scientific and Technical Information of China (English)

    张恣宽

    2010-01-01

    Continuing from the previous issue, this article introduces the Camera Raw adjustment panels. (2) The Tone Curve panel: clicking the Tone Curve button opens the Tone Curve options panel (shortcut Ctrl+Alt+2). This panel is mainly used for fine adjustment of an image's midtones. Starting with Photoshop CS3, the histogram waveform previously found only in the Levels dialog was added to the curve background, so that the tonal changes before and after adjustment can be seen at a glance.

  1. Camera array based light field microscopy.

    Science.gov (United States)

    Lin, Xing; Wu, Jiamin; Zheng, Guoan; Dai, Qionghai

    2015-09-01

    This paper proposes a novel approach for high-resolution light field microscopy imaging by using a camera array. In this approach, we apply a two-stage relay system for expanding the aperture plane of the microscope into the size of an imaging lens array, and utilize a sensor array for acquiring different sub-apertures images formed by corresponding imaging lenses. By combining the rectified and synchronized images from 5 × 5 viewpoints with our prototype system, we successfully recovered color light field videos for various fast-moving microscopic specimens with a spatial resolution of 0.79 megapixels at 30 frames per second, corresponding to an unprecedented data throughput of 562.5 MB/s for light field microscopy. We also demonstrated the use of the reported platform for different applications, including post-capture refocusing, phase reconstruction, 3D imaging, and optical metrology. PMID:26417490

  2. Passive MMW camera for low visibility landings

    Science.gov (United States)

    Shoucri, Merit

    1994-01-01

    A passive, millimeter wave imaging sensor for aircraft landing in low or poor visibility conditions is described. The sensor can be incorporated in a camera for future enhanced/synthetic vision systems. Contrast is provided by differences in material reflectivities, temperature, and sky illumination of the scene being imaged. Photographic images of the system's fog penetration capabilities are presented. A combinatorial geometry technique is used to construct the scene geometries. This technique uses eight basic geometric shapes which are used as building blocks for 3-D complex-shaped objects. The building blocks are then combined via union, intersection and exclusion operations to form 3-D scene objects and the combinatorial geometry package determines ray intercepts with scene objects, providing the specific surfaces and propagation distance for the scene.

  3. Neutron camera employing row and column summations

    Science.gov (United States)

    Clonts, Lloyd G.; Diawara, Yacouba; Donahue, Jr, Cornelius; Montcalm, Christopher A.; Riedel, Richard A.; Visscher, Theodore

    2016-06-14

    For each photomultiplier tube in an Anger camera, an R×S array of preamplifiers is provided to detect electrons generated within the photomultiplier tube. The outputs of the preamplifiers are digitized to measure the magnitude of the signal from each preamplifier. For each photomultiplier tube, corresponding summation circuitry comprising R row summation circuits and S column summation circuits numerically adds the signal magnitudes from the preamplifiers in each row and each column to generate histograms. For a P×Q array of photomultiplier tubes, P×Q summation circuitries generate P×Q row histograms with R entries and P×Q column histograms with S entries. The total set of histograms includes P×Q×(R+S) entries, which can be analyzed by a position calculation circuit to determine the locations of events (detections of neutrons).
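The row/column reduction is easy to sketch in software; it collapses R×S raw values to R+S histogram entries per tube. The centroid estimator below is a simplified stand-in for the patent's position calculation circuit, not its actual algorithm:

```python
import numpy as np

def row_column_histograms(signals):
    """Collapse an R x S grid of preamplifier magnitudes into the
    R-entry row histogram and S-entry column histogram described
    above, reducing R*S values to R+S per photomultiplier tube."""
    signals = np.asarray(signals, dtype=float)
    row_hist = signals.sum(axis=1)   # one entry per row
    col_hist = signals.sum(axis=0)   # one entry per column
    return row_hist, col_hist

def centroid(hist):
    """Estimate the event position along one axis as the weighted
    mean of a summation histogram (a simple illustrative position
    estimator)."""
    hist = np.asarray(hist, dtype=float)
    idx = np.arange(hist.size)
    return float((idx * hist).sum() / hist.sum())
```

A bright spot centered on one preamplifier yields row and column centroids at that preamplifier's indices.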

  4. Operational experience with a CID camera system

    CERN Document Server

    Welsch, Carsten P; Burel, Bruno; Lefèvre, Thibaut

    2006-01-01

    In future high-intensity, high-energy accelerators, particle losses must be minimized, as activation of the vacuum chambers or other components makes maintenance and upgrade work time-consuming and costly. It is imperative to have a clear understanding of the mechanisms that can lead to halo formation, and to be able to test available theoretical models with an adequate experimental setup. Measurements based on optical transition radiation (OTR) provide an interesting opportunity for analyzing the transverse beam profile due to the fast time response and very good linearity of the signal with respect to the beam intensity. On the other hand, the dynamic range of typical acquisition systems such as those used in the CLIC test facility (CTF3) is limited and must be improved before these systems can be applied to halo measurements. One possibility for high dynamic range measurements is an innovative camera system based on charge injection device (CID) technology. With possible future measureme...

  5. Robust multi-camera view face recognition

    CERN Document Server

    Kisku, Dakshina Ranjan; Gupta, Phalguni; Sing, Jamuna Kanta

    2010-01-01

    This paper presents multi-appearance fusion of Principal Component Analysis (PCA) and a generalization of Linear Discriminant Analysis (LDA) for a multi-camera view offline face recognition (verification) system. The generalization of LDA has been extended to establish correlations between the face classes in the transformed representation; this is called the canonical covariate. The proposed system uses Gabor filter banks to characterize facial features by spatial frequency, spatial locality and orientation, compensating for variations across face instances due to illumination, pose and facial expression changes. Convolving the Gabor filter bank with face images produces Gabor face representations with high-dimensional feature vectors. PCA and canonical covariates are then applied to the Gabor face representations to reduce the high-dimensional feature spaces into low-dimensional Gabor eigenfaces and Gabor canonical faces. The reduced eigenface vector and canonical face vector are fused together usi...

  6. DAWN Framing Camera results from Ceres orbit

    Science.gov (United States)

    Nathues, A.; Hoffmann, M.; Schäfer, M.; Le Corre, L.; Reddy, V.; Platz, T.; Russell, C. T.; Li, J.-Y.; Ammannito, E.; Buettner, I.; Christensen, U.; Hall, I.; Kelley, M.; Gutiérrez Marqués, P.; McCord, T. B.; McFadden, L. A.; Mengel, K.; Mottola, S.; O'Brien, D.; Pieters, C.

    2015-10-01

    Having completed its investigation of Vesta in late 2012, the NASA Dawn mission [1] reached its second target, the dwarf planet Ceres on March 6, 2015. During its operational phase, Dawn is scheduled to fly four polar orbits, each with a different distance to the target. The Framing Cameras (FCs) onboard the Dawn spacecraft are mapping the dwarf planet Ceres in seven colors and a clear filter [2], covering the wavelength range between 0.4 and 1.0 μm. The FCs also conduct a number of sequences for purposes of navigation, instrument calibration, and have already performed satellite searches and three early rotational characterizations (RCs) of Ceres in February and May 2015. During the EPSC conference we intend to present the most intriguing results obtained from the Survey orbit (resolution ~400 m/pixel) as well as the first results from HAMO orbit (~140 m/pixel) focusing on the analysis of FC color data.

  7. Fast Camera Imaging of Hall Thruster Ignition

    International Nuclear Information System (INIS)

    Hall thrusters provide efficient space propulsion by electrostatic acceleration of ions. Rotating electron clouds in the thruster overcome the space charge limitations of other methods. Images of the thruster startup, taken with a fast camera, reveal a bright ionization period which settles into steady state operation over 50 μs. The cathode introduces azimuthal asymmetry, which persists for about 30 μs into the ignition. Plasma thrusters are used on satellites for repositioning, orbit correction and drag compensation. The advantage of plasma thrusters over conventional chemical thrusters is that the exhaust energies are not limited by chemical energy to about an electron volt. For xenon Hall thrusters, the ion exhaust velocity can be 15-20 km/s, compared to 5 km/s for a typical chemical thruster.

  8. Collimated trans-axial tomographic scintillation camera

    International Nuclear Information System (INIS)

    The principal problem in trans-axial tomographic radioisotope scanning is the length of time required to obtain meaningful data. Patient movement and radioisotope migration during the scanning period can cause distortion of the image. The object of this invention is to reduce the scanning time without degrading the images obtained. A system is described in which a scintillation camera detector is moved in an orbit about the cranial-caudal axis of the patient. A collimator is used in which the lead septa are arranged so as to admit gamma rays travelling perpendicular to this axis with high spatial resolution, and those travelling in the direction of the axis with low spatial resolution, thus increasing the rate of acceptance of radioactive events contributing to the positional information obtainable, without sacrificing spatial resolution. (author)

  9. Smart Cameras for Remote Science Survey

    Science.gov (United States)

    Thompson, David R.; Abbey, William; Allwood, Abigail; Bekker, Dmitriy; Bornstein, Benjamin; Cabrol, Nathalie A.; Castano, Rebecca; Estlin, Tara; Fuchs, Thomas; Wagstaff, Kiri L.

    2012-01-01

    Communication with remote exploration spacecraft is often intermittent and bandwidth is highly constrained. Future missions could use onboard science data understanding to prioritize downlink of critical features [1], draft summary maps of visited terrain [2], or identify targets of opportunity for followup measurements [3]. We describe a generic approach to classify geologic surfaces for autonomous science operations, suitable for parallelized implementations in FPGA hardware. We map these surfaces with texture channels - distinctive numerical signatures that differentiate properties such as roughness, pavement coatings, regolith characteristics, sedimentary fabrics and differential outcrop weathering. This work describes our basic image analysis approach and reports an initial performance evaluation using surface images from the Mars Exploration Rovers. Future work will incorporate these methods into camera hardware for real-time processing.
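Texture channels of the kind described can be illustrated with two simple per-pixel statistics. These are generic stand-ins (local variance as a roughness proxy, gradient energy as an edge/fabric proxy), not the actual channels used in the paper or on the rovers:

```python
import numpy as np

def texture_channels(img, win=3):
    """Compute two illustrative texture channels for a grayscale
    image: local variance over a win x win window and gradient
    energy. Both are assumptions for illustration; the paper's
    texture signatures are more elaborate."""
    img = np.asarray(img, dtype=float)
    pad = win // 2
    padded = np.pad(img, pad, mode='edge')
    var = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            var[i, j] = patch.var()   # roughness proxy
    gy, gx = np.gradient(img)
    grad_energy = gx**2 + gy**2       # edge/fabric proxy
    return var, grad_energy
```

Per-pixel channels like these feed naturally into a pixel-wise classifier, and because each pixel is computed independently the scheme parallelizes well, which is what makes an FPGA implementation attractive.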

  10. New high-dynamic-range camera architecture

    Science.gov (United States)

    Cernasov, Andrei

    2006-05-01

    The need for high (wide) dynamic range cameras in the security and defense sectors is self-evident, yet the development of a cost-effective and viable system has proven an elusive goal. To this end we take a new approach that meets a number of requirements, most notably a high "fill" factor for the associated APS (active pixel sensor) array and a minimal technology development curve. The approach can be used with any sensor array technology that supports, at a granular level, random pixel access. To achieve high dynamic range, one of the presented camera systems classifies image pixels according to their probable brightness levels. It then scans the pixels in that order, with the pixels most likely to be the brightest scanned first and the pixels most likely to be the darkest scanned last. Periodically the system re-adjusts the scanning strategy based on collected data or operator inputs. The overall exposure time is dictated by the sensitivity of the selected array and by the content and frame rate of the image; the local exposure time is determined by the predicted pixel brightness levels. The prediction method used in this paper is simple duplication, i.e. the brightness of the vast majority of pixels is assumed to change little from frame to frame. This allows us to dedicate resources only to the few pixels undergoing large output excursions. Such an approach was found to require only minimal modifications to standard APS array architectures and fewer "off-sensor" resources than CAMs (content-addressable memory) or other DSP-intensive methods.
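The brightest-first scan with simple-duplication prediction can be sketched as a sorting problem over the previous frame. This is only a software illustration of the scheduling idea; the actual system does this with minimal on-array hardware:

```python
import numpy as np

def scan_order(prev_frame):
    """Order pixel indices brightest-first based on the previous
    frame, implementing the 'simple duplication' prediction: each
    pixel's brightness is assumed to change little frame to frame,
    so the predicted-brightest pixels are read out first and the
    predicted-darkest last."""
    arr = np.asarray(prev_frame, dtype=float)
    order = np.argsort(-arr.ravel(), kind='stable')   # descending
    return [np.unravel_index(k, arr.shape) for k in order]
```

Re-running this every few frames corresponds to the periodic re-adjustment of the scanning strategy; only pixels whose brightness changed sharply end up mis-scheduled for one frame.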

  11. Acceptance tests of a new gamma camera

    International Nuclear Information System (INIS)

    For best patient service, a QA programme is needed to produce quantitative and qualitative data and to keep records of the results and equipment faults. Gamma cameras must be checked against the manufacturer's specifications; the service manual is usually helpful in achieving this goal. Acceptance tests are very important, not only to accept a new gamma camera system for routine clinical use but also to serve as a reference for future measurements. In this study, acceptance tests were performed on a new gamma camera in our department. It is a General Electric MG system with two detectors and two collimators: low energy general purpose (LEGP) and medium energy general purpose (MEGP). All intrinsic calibrations and corrections were done by the service engineer at installation (PM tune, dynamic correction, energy calibration, geometric calibration, energy correction, linearity correction and second order corrections). After installation, calibrations and corrections, a close physical inspection of the mechanical and electrical safety aspects of the cameras was done by the responsible physicist of the department. The planar assessment is based on measurements of system uniformity, resolution/linearity and multiple window spatial registration. All test procedures were performed according to NEMA procedures developed by the manufacturer. Intrinsic uniformity: NEMA uniformity was measured first using the service manual, and uniformity images were then acquired with 99mTc, 131I, 201Tl and 67Ga. They were evaluated qualitatively and quantitatively, but non-uniformities were observed, especially for detector II. The service engineers repeated all tests and made the necessary corrections, and we then repeated all the intrinsic uniformity tests. 99mTc intrinsic images were also acquired at 'no correction', 'no energy correction', 'no linearity correction', 'all corrections' and '±10% off peak', and compared. Extrinsic uniformity: At the beginning, collimators were checked for defects
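The quantitative side of a uniformity test typically reduces a flood image to a single NEMA-style figure of merit. A minimal sketch, assuming the counts have already been masked to the field of view and smoothed as the NEMA procedure requires:

```python
def integral_uniformity(counts):
    """NEMA-style integral uniformity over a masked, smoothed flood
    image: 100 * (max - min) / (max + min). Acceptance thresholds
    (e.g. a few percent over the useful field of view) are
    site- and vendor-specific and are not hard-coded here."""
    flat = [c for row in counts for c in row]
    hi, lo = max(flat), min(flat)
    return 100.0 * (hi - lo) / (hi + lo)
```

A perfectly flat flood gives 0%; comparing this number across isotopes and correction settings is what reveals problems like the detector II non-uniformity described above.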

  12. Range Camera Self-Calibration Based on Integrated Bundle Adjustment via Joint Setup with a 2D Digital Camera

    OpenAIRE

    Mehran Sattari; Mohammad Saadatseresht; Mozhdeh Shahbazi; Saeid Homayouni

    2011-01-01

    Time-of-flight cameras, based on Photonic Mixer Device (PMD) technology, are capable of measuring distances to objects at high frame rates; however, the measured ranges and the intensity data contain systematic errors that need to be corrected. In this paper, a new integrated range camera self-calibration method via a joint setup with a digital (RGB) camera is presented. This method can simultaneously estimate the systematic range error parameters as well as the interior and external orientatio...

  13. Comment on ‘From the pinhole camera to the shape of a lens: the camera-obscura reloaded’

    Science.gov (United States)

    Grusche, Sascha

    2016-09-01

    In the article ‘From the pinhole camera to the shape of a lens: the camera-obscura reloaded’ (Phys. Educ. 50 706), the authors show that a prism array, or an equivalent lens, can be used to bring together multiple camera obscura images from a pinhole array. It should be pointed out that the size of the camera obscura images is conserved by a prism array, but changed by a lens. To avoid this discrepancy in image size, the prism array, or the lens, should be made to touch the pinhole array.

  14. Los Alamos Pinhole Camera (LAPC): A new flexible x-ray pinhole camera

    International Nuclear Information System (INIS)

    We have recently designed, built and fielded a versatile, multi-channel x-ray pinhole camera. The LAPC was designed to fit into any six-inch manipulator (SIM), a standardized target chamber diagnostic tube; compatible SIMs are currently available at the Trident, Omega, and NOVA laser systems. The camera uses 9 pinholes in a 3x3 array to produce images at the film plane. The film housing is designed to hold multiple sheets of stacked x-ray film and uses a dark-slide to protect the film before exposure. Magnifications of 12, 8, 4 and 2X are selected by slip-on nosecones, which support the pinholes, collimators, and blast shields. Individual channel filtering is provided by a 3x3 filterpack containing 9 separate filter sub-packs. Spatial resolution is limited by the pinhole diffraction limit, and the field of view depends on magnification and filterpack diameter

  15. Camera Self-Calibration in the AUV Monocular Vision Navigation and Positioning

    OpenAIRE

    GAO Jun-chai; LIU Ming-yong

    2013-01-01

    Camera calibration is essential for obtaining 3D information from 2D images. For underwater camera self-calibration, because a linear model cannot accurately describe the imaging geometry of real cameras, a nonlinear underwater camera model and its calibration method are studied. The underwater camera imaging geometry is modeled according to the relationships between the underwater and in-air focal length, principal point and skew factors. Underwater camera nonlinear imaging ge...

  16. Wildlife speed cameras: measuring animal travel speed and day range using camera traps

    OpenAIRE

    Rowcliffe, J. M.; Jansen, P A; Kays, R.; Kranstauber, B.; C. Carbone

    2016-01-01

    Travel speed (average speed of travel while active) and day range (average speed over the daily activity cycle) are behavioural metrics that influence processes including energy use, foraging success, disease transmission and human-wildlife interactions, and which can therefore be applied to a range of questions in ecology and conservation. These metrics are usually derived from telemetry or direct observations. Here, we describe and validate an entirely new alternative approach, using camera...
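The link between the two metrics in the abstract is simple to state: day range is travel speed scaled by the fraction of the day spent active. The function below is only that definitional relationship, not the paper's camera-trap estimators, which handle detection biases:

```python
def day_range_km(travel_speed_kmh, activity_level):
    """Day range (km/day) as travel speed while active (km/h) times
    the proportion of the 24 h cycle spent active (activity_level
    in [0, 1]). A definitional sketch; the paper's estimators
    derive both quantities from camera-trap data."""
    if not 0.0 <= activity_level <= 1.0:
        raise ValueError("activity_level must be in [0, 1]")
    return travel_speed_kmh * activity_level * 24.0
```

For example, an animal moving at 1 km/h while active and active half the day covers 12 km/day.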

  17. Radiation level survey of a mobile phone base station

    International Nuclear Information System (INIS)

    Electromagnetic field (EMF) evaluations were carried out in the surroundings of a roof-top mobile-phone radio-base station (RBS). Four of its sector-panel antennas are installed on two parallel vertical masts, each supporting two panels in a vertical collinear array. The geometry is such that the vertical plane containing both masts is about 10 meters from, and parallel to, the back of an educational institution. This proximity provoked great anxiety among local community members regarding potential health hazards. 1. Introduction: To keep up with the expansion of mobile-phone services, the number of radio-base station installations is increasing tremendously in Brazil. Efficient control and radiation monitoring to assess RBS compliance with existing regulations are still lacking, and particularly in big cities clearly non-compliant RBSs can be seen, which represent potentially hazardous EMF sources for the nearby population. This first survey of an irregular RBS revealed significant E-field strengths outside, as well as inside, a classroom of an educational building where a prolonged stay is usual. These results confirm that this problem deserves further attention, the more so if one considers that the public and occupational exposure limits set by ICNIRP (also adopted in Brazil) are based exclusively on the immediate thermal effects of acute exposure, disregarding any potential health risk from prolonged exposure to lower-level radiation. Research activities focusing on quantitative aspects of electromagnetic radiation from RBSs, as well as on biological and adverse health effects, are still at a very incipient level, urging immediate action to improve this scenario in our country. 2. Material, methods and results: Measurements were carried out with a broadband field strength monitor, EMR-300 (W and G), coupled to an isotropic E-field probe (100 kHz to 3 GHz). Preliminary measurements helped locate critical points where prolonged monitoring was carried out. By connecting the field monitor to a notebook computer running specific data acquisition software, this monitoring was remote controlled. The measured E-field intensities are lower than the ICNIRP reference values but comparable to, and even higher than, the more restrictive limits adopted in some countries. 3. Conclusions: Significant E-field intensities were measured around a non-compliant RBS installation, highlighting the importance of implementing further regulatory and control mechanisms in the mobile-phone sector. Discussion of adopting additional restrictions on public exposure to radiation, as already observed in some European countries, seems relevant as one considers that the long-term risks associated with non-thermal effects still represent an area of scientific uncertainty affecting an ever-increasing number of people. (authors)

  18. Cellular Phone Base Stations: Technology and Exposures (invited paper)

    International Nuclear Information System (INIS)

    The principles and practice of cellular radio systems for mobile communications are presented using GSM 1800 as a reference system. In particular, the concepts of small cells and frequency re-use and the components of radio base station technology are described. Public and cellular broadcasting have been widely available for many years, and a brief history is given to indicate the length of exposure from these sources. National and international guidelines for safe exposure to non-ionising radiation are used by cellular operators to define exclusion zones around transmitting antennas. A methodology for calculating an exclusion zone is described, together with an example for a typical antenna configuration. Estimated levels of exposure near practical base stations are given and comparisons made with other sources of RF radiation. Finally, the digital nature of today's cellular radio systems, such as GSM, is explained and its implications described. (author)
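The core of an exclusion-zone calculation is the far-field relation between radiated power and power density. A minimal sketch along an antenna's boresight, assuming an ideal free-space model (real methodologies also account for antenna patterns, near-field behaviour and reflections); the input values are illustrative, not taken from the paper:

```python
import math

def exclusion_radius_m(eirp_w, s_limit_w_m2):
    """Solve the free-space far-field relation S = EIRP / (4*pi*r^2)
    for the radius r at which the power density falls to the
    reference level s_limit (W/m^2). EIRP is transmit power times
    antenna gain; the limit value is a guideline-dependent input."""
    return math.sqrt(eirp_w / (4.0 * math.pi * s_limit_w_m2))
```

Inside this radius the guideline level would be exceeded on boresight, so access is excluded; high-gain sector panels concentrate power in a narrow beam, which is why exclusion zones extend mainly in front of the antenna.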

  19. Game Development for Smart Phones Based on Local Heritage

    Directory of Open Access Journals (Sweden)

    Oras F. Baker

    2011-01-01

    Full Text Available The market for mobile games is expanding rapidly, and game sales in several advanced markets are booming. This led us to use the latest mobile application development tools to modify and enhance an existing traditional game and make it available to mobile users: the Congkak game was developed for the latest mobile phones on the market. This version has been modified to eight holes, one store, and one player for a start. Each player hole contains four seeds, which are sown continuously until the seeds are exhausted. The main objectives are to make the game available on mobile phones and to extend and modify its functionality. The major development tools used were the Sony Ericsson KToolBar, the NetBeans IDE, the SciTE text editor, and Adobe Photoshop. UML class diagrams were also modeled for the classes. The game is written in Java on the Micro Edition platform (J2ME).

  20. A secure mobile phone-based interactive logon in Windows

    OpenAIRE

    Bodriagov, Oleksandr

    2010-01-01

    Password-based logon schemes have many security weaknesses. Smart-card and biometric authentication solutions are available as replacements for standard password-based schemes in security-sensitive environments. However, the cost of deploying and maintaining these systems is quite high. On the other hand, mobile network operators have a huge base of deployed smart cards that can be reused to provide authentication in other areas, significantly reducing costs. This master's thesis ...