WorldWideScience

Sample records for camera phone-based wayfinding

  1. Use of a smart phone based thermo camera for skin prick allergy testing: a feasibility study (Conference Presentation)

    Science.gov (United States)

    Barla, Lindi; Verdaasdonk, Rudolf M.; Rustemeyer, Thomas; Klaessens, John; van der Veen, Albert

    2016-02-01

    Allergy testing is usually performed by exposing the skin to small quantities of potential allergens on the inner forearm and scratching the protective epidermis to increase exposure. After 15 minutes the dermatologist performs a visual check for swelling and erythema, which is subjective and difficult for, e.g., dark skin types. A small smart phone based thermal camera (FLIR One) was used to obtain quantitative images in a feasibility study of 17 patients. Directly after allergen exposure on the forearm, thermal images were captured at 30-second intervals and processed into a time-lapse movie over 15 minutes. Taking the 'subjective' reading of the dermatologist as the gold standard, in 11/17 patients (65%) the dermatologist's evaluation was confirmed by the thermal camera, including 5 of 6 patients without an allergic response. In 7 patients thermal imaging showed additional spots. Of the 342 sites tested, the dermatologist detected 47 allergies, of which 28 (60%) were confirmed by thermal imaging, while thermal imaging showed 12 additional spots. The method can be improved with dedicated acquisition software and better registration between visible-light and thermal images. The lymphatic reaction seems to shift from the original puncture site. The interpretation of the thermal images is still subjective, since collecting quantitative data is difficult due to patient motion during the 15 minutes. Although not yet conclusive, thermal imaging with a smart phone based camera appears promising for improving the sensitivity and selectivity of allergy testing.

  2. A camera-phone based study reveals erratic eating pattern and disrupted daily eating-fasting cycle among adults in India

    Science.gov (United States)

    Gupta, Neelu Jain; Kumar, Vinod

    2017-01-01

    The daily rhythm of feeding-fasting and meal timing is emerging as an important determinant of health. Circadian rhythm research in animal models and retrospective analyses of human nutrition data have shown that a reduced length of overnight fasting or increased late-night eating increases the risk for metabolic diseases, including obesity and diabetes. However, the daily rhythm in human eating patterns is rarely measured. Traditional methods of collecting nutrition information through food diaries and food logs pay little attention to the timing of eating, which may also change from day to day. We adopted a novel cell-phone based approach to longitudinally record all events of food and beverage intake in adults. In a feasibility study, the daily food-eating patterns of 93 healthy individuals were recorded for 21 days using camera phones. Analysis of the daily eating patterns of these individuals indicates deviation from the conventional assumption that people eat three meals a day within a 12 h interval. We found that eating events are spread throughout the day, with 30% consumed in the evening and late-night hours. There was little difference in eating pattern between weekdays and weekends. In this cohort, more than 50% of people spread their caloric intake events over 15 h or longer. One decile of the cohort, who were spouses of shift-workers or had flexible work schedules, spread their caloric intake over 20 h. Although the nutritional quality and diversity of food consumed differ between South-East Asian and Western countries, the overall disruption of the daily eating-fasting rhythm is similar. Therefore, in view of the hypothesis that a disrupted daily eating pattern may contribute to the global increase in metabolic diseases and that modification of the daily eating pattern is a potentially modifiable behavior for containing these diseases, monitoring eating patterns is an important aspect of lifestyle. PMID:28264001

  3. Indoor wayfinding and navigation

    CERN Document Server

    2015-01-01

    Due to the widespread use of navigation systems for wayfinding and navigation in the outdoors, researchers have devoted their efforts in recent years to designing navigation systems that can be used indoors. This book is a comprehensive guide to designing and building indoor wayfinding and navigation systems. It covers all types of feasible sensors (for example, Wi-Fi, A-GPS), discussing the level of accuracy, the types of map data needed, the data sources, and the techniques for providing routes and directions within structures.

  4. Wayfinding Design for Amherst Senior Center.

    Science.gov (United States)

    Kim, Karen

    2016-01-01

    This paper presents a wayfinding design case for a senior center located in Amherst, New York. The design proposed a new signage system and color coding scheme to enhance the wayfinding experience of seniors, visitors, and staff members at the Amherst Senior Center.

  5. Use of gestalt in wayfinding design and analysis of wayfinding process

    Institute of Scientific and Technical Information of China (English)

    Li NIU; Leiqing XU; Zhong TANG

    2008-01-01

    The authors put forward the definition of "Gestalt space" and showed that this kind of space can be easily cognized. Three experiments showed that "classification" and "grouping" are the strategies humans use to solve wayfinding problems, and that "similarity" and "legibility" of the space help people complete wayfinding tasks. The designer should provide the essential "legibility" in Gestalt space, using techniques such as "break" and "accession", to address the wayfinding problem.

  6. Way-Finding Assistance System for Underground Facilities Using Augmented Reality

    Science.gov (United States)

    Yokoi, K.; Yabuki, N.; Fukuda, T.; Michikawa, T.; Motamedi, A.

    2015-05-01

    Way-finding is one of the main challenges for pedestrians in large subterranean spaces with complex networks of connected labyrinths. The problem is caused by the loss of the sense of direction and orientation due to the lack of landmarks, which are occluded by ceilings, walls, and skyscrapers. This paper introduces an assistance system for the way-finding problem in large subterranean spaces using Augmented Reality (AR). It displays known landmarks that are invisible from indoor environments on tablet/handheld devices to assist users with relative positioning and indoor way-finding. The location and orientation of the users can be estimated by indoor positioning systems and the sensors available in common tablets or smartphones. The constructed 3D model of a chosen landmark that is in the field of view of the handheld's camera is augmented onto the camera's video feed. A prototype system has been implemented to demonstrate the efficiency of the proposed approach for way-finding.

  7. Cell phone based balance trainer

    Directory of Open Access Journals (Sweden)

    Lee Beom-Chan

    2012-02-01

    Abstract Background In their current laboratory-based form, existing vibrotactile sensory augmentation technologies that provide cues of body motion are impractical for home-based rehabilitation use due to their size, weight, complexity, calibration procedures, cost, and fragility. Methods We have designed and developed a cell phone based vibrotactile feedback system for potential use in balance rehabilitation training in clinical and home environments. It comprises an iPhone with an embedded tri-axial linear accelerometer, custom software to estimate body tilt, a "tactor bud" accessory that plugs into the headphone jack to provide vibrotactile cues of body tilt, and a battery. Five young healthy subjects (24 ± 2.8 yrs, 3 females and 2 males) and four subjects with vestibular deficits (42.25 ± 13.5 yrs, 2 females and 2 males) participated in a proof-of-concept study to evaluate the effectiveness of the system. Healthy subjects used the system with eyes closed during Romberg, semi-tandem Romberg, and tandem Romberg stances. Subjects with vestibular deficits used the system under both eyes-open and eyes-closed conditions during the semi-tandem Romberg stance. Vibrotactile feedback was provided when the subject exceeded either an anterior-posterior (A/P) or a medial-lateral (M/L) body tilt threshold. Subjects were instructed to move away from the vibration. Results The system was capable of providing real-time vibrotactile cues that informed corrective postural responses. When feedback was available, both healthy subjects and those with vestibular deficits significantly reduced their A/P or M/L RMS sway (depending on the direction of feedback), had significantly smaller elliptical area fits to their sway trajectory, spent a significantly greater mean percentage of time within the no-feedback zone, and showed a significantly greater A/P or M/L mean power frequency. Conclusion The results suggest that the real-time feedback provided by this system can be used
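The tilt-estimation and threshold logic described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the threshold value, axis conventions, and function names are all assumptions.

```python
import math

def tilt_angles(ax, ay, az):
    """Estimate anterior-posterior (pitch) and medial-lateral (roll)
    tilt in degrees from a tri-axial accelerometer reading (in g).
    Assumes the device is worn on the torso, z-axis pointing outward."""
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay**2 + az**2)))
    roll = math.degrees(math.atan2(ay, math.sqrt(ax**2 + az**2)))
    return pitch, roll

def feedback_cue(pitch, roll, threshold_deg=1.0):
    """Return which tactor to vibrate, or None inside the no-feedback
    zone. Subjects are instructed to move away from the vibration."""
    if abs(pitch) >= threshold_deg and abs(pitch) >= abs(roll):
        return "front" if pitch > 0 else "back"
    if abs(roll) >= threshold_deg:
        return "right" if roll > 0 else "left"
    return None
```

In a real system the accelerometer signal would also be low-pass filtered to separate slow body tilt from transient motion before thresholding.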

  8. Wayfinding Services for Open Educational Practices

    Directory of Open Access Journals (Sweden)

    M. Kalz

    2008-06-01

    To choose suitable resources for personal competence development from the vast amount of open educational resources is a challenging task for a learner. Starting with a needs analysis of lifelong learners and learning designers, we introduce two wayfinding services that are currently being researched and developed in the framework of the Integrated Project TENCompetence. We then discuss the role of these services in supporting learners in finding and selecting open educational resources, and finally we give an outlook on future research.

  9. Mobile Phone Based Participatory Sensing in Hydrology

    Science.gov (United States)

    Lowry, C.; Fienen, M. N.; Böhlen, M.

    2014-12-01

    Although many observations in the hydrologic sciences are easy to obtain, requiring very little training or equipment, spatially and temporally distributed data collection is hindered by the associated personnel and telemetry costs. Lack of data increases uncertainty and can limit applications of both field and modeling studies. However, modern society is much more digitally connected than in the past, which presents new opportunities to collect real-time hydrologic data through participatory sensing. Participatory sensing in this usage refers to citizens contributing distributed observations of physical phenomena. Real-time data streams are possible as a direct result of the growth of mobile phone networks and high adoption rates among mobile users. In this research, we describe an example of the development, methodology, barriers to entry, data uncertainty, and results of mobile phone based participatory sensing applied to groundwater and surface water characterization. Results are presented from three participatory sensing experiments that focused on stream stage, surface water temperature, and water quality. The results demonstrate variability in the consistency and reliability across the types of data collected and the challenges of collecting research-grade data. These studies also point to needed improvements and future developments for widespread use of low-cost techniques for participatory sensing.

  10. Mobile phone based SCADA for industrial automation.

    Science.gov (United States)

    Ozdemir, Engin; Karacor, Mevlut

    2006-01-01

    SCADA is the acronym for "Supervisory Control And Data Acquisition." SCADA systems are widely used in industry for supervisory control and data acquisition of industrial processes. Conventional SCADA systems use a PC, notebook, thin client, or PDA as a client. In this paper, a Java-enabled mobile phone has been used as a client in a sample SCADA application in order to display and supervise the position of a sample prototype crane. The paper presents an actual implementation of on-line control of the prototype crane via mobile phone. The wireless communication between the mobile phone and the SCADA server is performed by means of a base station via general packet radio service (GPRS) and wireless application protocol (WAP). Test results indicated that mobile phone based SCADA integration using the GPRS or WAP transfer scheme could enhance the performance of the crane without causing an increase in the response times of SCADA functions. The operator can visualize and modify plant parameters using a mobile phone, without traveling to the site. In this way maintenance costs are reduced and productivity is increased.

  11. Coded illumination for motion-blur free imaging of cells on cell-phone based imaging flow cytometer

    Science.gov (United States)

    Saxena, Manish; Gorthi, Sai Siva

    2014-10-01

    Cell-phone based imaging flow cytometry can be realized by flowing cells through microfluidic devices and capturing their images with an optically enhanced cell-phone camera. Throughput in flow cytometers is usually enhanced by increasing the flow rate of cells. However, the maximum frame rate of the camera system limits the achievable flow rate; beyond it, the images become highly blurred due to motion smear. We propose to address this issue with coded illumination, which enables recovery of high-fidelity images of cells far beyond their motion-blur limit. This paper presents simulation results of deblurring synthetically generated cell/bead images under such coded illumination.
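The motion-blur model and its inversion can be illustrated with a toy 1-D simulation in the spirit of flutter-shutter/coded-illumination imaging. This is only a sketch under invented values: the binary code, the signal, and the least-squares reconstruction are assumptions, not the paper's actual pattern or pipeline.

```python
import numpy as np

def conv_matrix(kernel, n):
    """Matrix form of 1-D linear convolution with a fixed kernel."""
    m = len(kernel) + n - 1
    A = np.zeros((m, n))
    for i in range(n):
        A[i:i + len(kernel), i] = kernel
    return A

# Toy 1-D intensity profile of a cell crossing the field of view.
signal = np.zeros(64)
signal[28:36] = 1.0

# Hypothetical on/off illumination code; a broadband code keeps the
# blur system far better conditioned than a plain open shutter.
code = np.array([1, 0, 1, 1, 0, 1, 1, 1], dtype=float)

# Motion blur = convolution of the moving scene with the exposure code.
A = conv_matrix(code, signal.size)
blurred = A @ signal

# Least-squares deconvolution recovers the sharp profile.
recovered, *_ = np.linalg.lstsq(A, blurred, rcond=None)
```

In practice the observation also contains noise; the point of a coded exposure is that its inverse amplifies that noise much less than the inverse of an ordinary box blur.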

  12. Rapid Prototyping a Collections-Based Mobile Wayfinding Application

    Science.gov (United States)

    Hahn, Jim; Morales, Alaina

    2011-01-01

    This research presents the results of a project that investigated how students use a library developed mobile app to locate books in the library. The study employed a methodology of formative evaluation so that the development of the mobile app would be informed by user preferences for next generation wayfinding systems. A key finding is the…

  13. Swarm-based wayfinding support in open and distance learning

    NARCIS (Netherlands)

    Tattersall, Colin; Manderveld, Jocelyn; Van den Berg, Bert; Van Es, René; Janssen, José; Koper, Rob

    2005-01-01

    Please refer to the original source: Tattersall, C. Manderveld, J., Van den Berg, B., Van Es, R., Janssen, J., & Koper, R. (2005). Swarm-based wayfinding support in open and distance learning. In Alkhalifa, E.M. (Ed). Cognitively Informed Systems: Utilizing Practical Approaches to Enrich Information

  14. Wayfinding Behaviour in Down Syndrome: A Study with Virtual Environments

    Science.gov (United States)

    Courbois, Yannick; Farran, Emily K.; Lemahieu, Axelle; Blades, Mark; Mengue-Topio, Hursula; Sockeel, Pascal

    2013-01-01

    The aim of this study was to assess wayfinding abilities in individuals with Down syndrome (DS). The ability to learn routes though a virtual environment (VE) and to make a novel shortcut between two locations was assessed in individuals with DS (N = 10) and control participants individually matched on mental age (MA) or chronological age (CA).…

  15. Seeing the Axial Line: Evidence from Wayfinding Experiments

    Directory of Open Access Journals (Sweden)

    Beatrix Emo

    2014-07-01

    Space-geometric measures are proposed to explain the location of fixations during wayfinding. Results from an eye-tracking study based on real-world stimuli are analysed; the gaze bias shows that attention is paid to structural elements in the built environment. Three space-geometric measures are used to explain the data: sky area, floor area, and longest line of sight. Together with the finding that participants choose the more connected street, a relationship is proposed between the individual cognitive processes that occur during wayfinding, relative street connectivity measured through space-syntactic techniques, and the spatial geometry of the environment. The paper adopts an egocentric approach to gain a greater understanding of how individuals process the axial map.

  16. Lost in the Labyrinthine Library: A Multi-Method Case Study Investigating Public Library User Wayfinding Behavior

    Science.gov (United States)

    Mandel, Lauren Heather

    2012-01-01

    Wayfinding is the method by which humans orient and navigate in space, and particularly in built environments such as cities and complex buildings, including public libraries. In order to wayfind successfully in the built environment, humans need information provided by wayfinding systems and tools, for instance architectural cues, signs, and…

  17. Mobile Phone Base Station Radiation Study for Addressing Public Concern

    Directory of Open Access Journals (Sweden)

    Aiman Ismail

    2010-01-01

    Problem statement: The proliferation of mobile phone base stations has increased public concern about the radio frequency radiation hazards that might come from them. The worldwide public concern involves health risks due to radio frequency radiation. In Malaysia public interest has also increased; although not as intense as in other parts of the world, it has resulted in the tearing down of a few base stations. Due to this growing concern, a study was conducted to evaluate the radio frequency radiation levels near several mobile phone base stations in two major cities in Malaysia. Approach: Measurements of electric field strength, power density, and specific absorption rate were made to check the exposure level at public locations. Broadband meters were first used to survey the sites near the base stations. From the survey, spots with relatively higher readings were further investigated using narrowband measurements. The measured values were then compared with the recommended international maximum permissible exposure limit. Results: The study showed that the measured values were less than 1% of the maximum permissible exposure. Conclusion: The amount of radio frequency radiation from the selected base stations in the two major cities adheres to the international limits, although the physical radio base station infrastructure appearing everywhere in these areas may give the reverse impression.
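To make the "less than 1% of the limit" comparison concrete: the ICNIRP (1998) general-public reference level for power density between 400 and 2000 MHz is S = f/200 W/m² (f in MHz). The helper below applies that formula; the measured value in the example is invented for illustration and is not from the study.

```python
def icnirp_limit_w_m2(freq_mhz):
    """ICNIRP (1998) general-public power-density reference level,
    S = f/200 W/m^2, valid for 400-2000 MHz."""
    if not 400 <= freq_mhz <= 2000:
        raise ValueError("formula applies to 400-2000 MHz only")
    return freq_mhz / 200.0

def percent_of_limit(measured_w_m2, freq_mhz):
    """Express a measured power density as a percentage of the limit."""
    return 100.0 * measured_w_m2 / icnirp_limit_w_m2(freq_mhz)

# A hypothetical reading of 0.02 W/m^2 near a GSM-900 site is
# 0.02 / 4.5, i.e. well under 1% of the permissible exposure.
```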

  18. Autonomous indoor wayfinding for individuals with cognitive impairments

    Directory of Open Access Journals (Sweden)

    Chang Yao-Jen

    2010-09-01

    Abstract Background A challenge for individuals with cognitive impairments in wayfinding is how to remain oriented, recall routines, and travel in unfamiliar areas while relying on limited cognitive capacity. While people without disabilities often use maps or written directions as navigation tools or for remaining oriented, this cognitively impaired population is very sensitive to issues of abstraction (e.g. icons on maps or signage) and presents the designer with the challenge of tailoring navigation information to each user and context. Methods This paper describes an approach to providing distributed-cognition support of travel guidance for persons with cognitive disabilities. A solution is proposed based on passive near-field RFID tags and scanning PDAs. A prototype was built and tested in field experiments with real subjects. The unique strength of the system is its ability to provide unique-to-the-user prompts that are triggered by context. The key to the approach is to spread context awareness across the system: context is flagged by the RFID tags, and the appropriate response is evoked by displaying the path guidance image indexed by the intersection of the specific end-user and the context ID embedded in the RFID tag. Results We found that passive RFIDs generally served as good context for triggering navigation prompts, although effectiveness varied between individuals. The results of controlled experiments provided further evidence regarding the applicability of the proposed autonomous indoor wayfinding method. Conclusions Our findings suggest that the ability to adapt indoor wayfinding devices for appropriate timing of directions and standing orientation will be particularly important.
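The indexing scheme described above, a guidance image selected by the intersection of end-user ID and the context ID embedded in each RFID tag, amounts to a simple keyed lookup. All IDs and file names below are hypothetical placeholders, not values from the paper.

```python
# Hypothetical mapping: (user ID, tag context ID) -> guidance image.
prompts = {
    ("user_07", "tag_lobby"):    "turn_left_at_elevator.png",
    ("user_07", "tag_corridor"): "go_straight_to_blue_door.png",
    ("user_12", "tag_lobby"):    "wait_here_for_escort.png",
}

def on_tag_scanned(user_id, tag_id):
    """Return the user-specific guidance image for a scanned tag,
    falling back to a generic prompt when no route step is defined."""
    return prompts.get((user_id, tag_id), "please_ask_for_help.png")
```

Because the tag only flags context, the same physical tag can trigger different prompts for different users, which is the "unique-to-the-user" property the paper emphasizes.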

  19. Dynamic Operations Wayfinding System (DOWS) for Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Laboratory; Ulrich, Thomas Anthony [Idaho National Laboratory; Lew, Roger Thomas [Idaho National Laboratory

    2015-08-01

    A novel software tool is proposed to aid reactor operators in responding to upset plant conditions. The purpose of the Dynamic Operations Wayfinding System (DOWS) is to diagnose faults, prioritize those faults, identify paths to resolve those faults, and deconflict the optimal path for the operator to follow. The objective of DOWS is to take the guesswork out of the best way to combine procedures to resolve compound faults, mitigate low threshold events, or respond to severe accidents. DOWS represents a uniquely flexible and dynamic computer-based procedure system for operators.

  20. The language of landmarks: the role of background knowledge in indoor wayfinding.

    Science.gov (United States)

    Frankenstein, Julia; Brüssow, Sven; Ruzzoli, Felix; Hölscher, Christoph

    2012-08-01

    To effectively wayfind through unfamiliar buildings, humans infer their relative position to target locations not only by interpreting geometric layouts, especially length of line of sight, but also by using background knowledge to evaluate landmarks with respect to their probable spatial relation to a target. Questionnaire results revealed that participants have consistent background knowledge about the relative position of target locations. Landmarks were rated significantly differently with respect to their spatial relation to targets. In addition, results from a forced-choice task comparing snapshots of a virtual environment revealed that background knowledge influenced wayfinding decisions. We suggest that landmarks are interpreted semantically with respect to their function and spatial relation to the target location and thereby influence wayfinding decisions. This indicates that background knowledge plays a role in wayfinding.

  1. MAGELLAN: a cognitive map-based model of human wayfinding.

    Science.gov (United States)

    Manning, Jeremy R; Lew, Timothy F; Li, Ningcheng; Sekuler, Robert; Kahana, Michael J

    2014-06-01

    In an unfamiliar environment, searching for and navigating to a target requires that spatial information be acquired, stored, processed, and retrieved. In a study encompassing all of these processes, participants acted as taxicab drivers who learned to pick up and deliver passengers in a series of small virtual towns. We used data from these experiments to refine and validate MAGELLAN, a cognitive map-based model of spatial learning and wayfinding. MAGELLAN accounts for the shapes of participants' spatial learning curves, which measure their experience-based improvement in navigational efficiency in unfamiliar environments. The model also predicts the ease (or difficulty) with which different environments are learned and, within a given environment, which landmarks will be easy (or difficult) to localize from memory. Using just 2 free parameters, MAGELLAN provides a useful account of how participants' cognitive maps evolve over time with experience, and how participants use the information stored in their cognitive maps to navigate and explore efficiently.

  2. A semiotic approach to blind wayfinding: some primary conceptual standpoints

    Directory of Open Access Journals (Sweden)

    Marcelo Santos

    2009-01-01

    Researchers from a wide variety of disciplines, such as philosophy, art, education, and psychology, have over the years sustained the idea that blind persons are incapable or nearly incapable of formulating complex mental diagrammatic representations, which are schemata based on the similarities found within internal logical relations between sign and object. Contrary to this widely accepted opinion, we present an alternative approach in this paper: our main idea is that blind and visually impaired people relying on touch as a main knowledge source are quite capable of diagrammatic reasoning, but use a different method for this purpose, namely inductive reasoning. Such a method can effectively provide the mind with the data necessary for the elaboration of mental maps. Therefore wayfinding, as a semiotic process in which a route is planned and executed from marks or navigation indexes, is also enabled by touch.

  3. Detecting Signage and Doors for Blind Navigation and Wayfinding.

    Science.gov (United States)

    Wang, Shuihua; Yang, Xiaodong; Tian, Yingli

    2013-07-01

    Signage plays a very important role in finding destinations in navigation and wayfinding applications. In this paper, we propose a novel framework to detect doors and signage to help blind people access unfamiliar indoor environments. In order to eliminate interfering information and improve the accuracy of signage detection, we first extract attended areas by using a saliency map. Signage is then detected within the attended areas by using bipartite graph matching. The proposed method can handle detection of multiple signs. Furthermore, in order to provide more information for blind users to access the area associated with the detected signage, we develop a robust method to detect doors based on a geometric door-frame model that is independent of door appearance. Experimental results on our collected datasets of indoor signage and doors demonstrate the effectiveness and efficiency of the proposed method.

  4. Mobile phone base stations-Effects on wellbeing and health.

    Science.gov (United States)

    Kundi, Michael; Hutter, Hans-Peter

    2009-08-01

    Studying the effects of mobile phone base station signals on health has been discouraged by authoritative bodies like the WHO International EMF Project and COST 281. WHO recommended studies around base stations in 2003 but stated again in 2006 that studies on cancer in relation to base station exposure are of low priority. As a result, only a few investigations of the effects of base station exposure on health and wellbeing exist. Cross-sectional investigations of subjective health as a function of distance or measured field strength, despite differences in methods and robustness of study design, found indications of an effect of exposure that is likely independent of concerns and attributions. Experimental studies applying short-term exposure to base station signals gave varying results, but there is weak evidence that UMTS and, to a lesser degree, GSM signals reduce wellbeing in persons who report being sensitive to such exposures. Two ecological studies of cancer in the vicinity of base stations both report a strong increase of incidence within a radius of 350 and 400 m, respectively. Due to the limitations inherent in this design no firm conclusions can be drawn, but the results underline the urgent need for a comprehensive investigation of this issue. Animal and in vitro studies are inconclusive to date. An increased incidence of DMBA-induced mammary tumors in rats at a SAR of 1.4 W/kg in one experiment could not be replicated in a second trial. Indications of oxidative stress after low-level in vivo exposure of rats could not be supported by in vitro studies of human fibroblasts and glioblastoma cells. From the available evidence it is impossible to delineate a threshold below which no effect occurs; however, given the fact that studies reporting low exposure were invariably negative, it is suggested that power densities of around 0.5-1 mW/m² must be exceeded in order to observe an effect. The meager database must be extended in the coming years. The difficulties of investigating

  5. Radiofrequency radiation injures trees around mobile phone base stations.

    Science.gov (United States)

    Waldmann-Selsam, Cornelia; Balmori-de la Puente, Alfonso; Breunig, Helmut; Balmori, Alfonso

    2016-12-01

    In the last two decades, the deployment of phone masts around the world has taken place and, for many years, there has been a discussion in the scientific community about the possible environmental impact of mobile phone base stations. Trees have several advantages over animals as experimental subjects, and the aim of this study was to verify whether there is a connection between unusual (generally unilateral) tree damage and radiofrequency exposure. To achieve this, a detailed long-term (2006-2015) field monitoring study was performed in the cities of Bamberg and Hallstadt (Germany). During monitoring, observations and photographic recordings of unusual or unexplainable tree damage were taken, alongside the measurement of electromagnetic radiation. In 2015, measurements of RF-EMF (Radiofrequency Electromagnetic Fields) were carried out. A polygon spanning both cities was chosen as the study site, where 144 measurements of radiofrequency electromagnetic fields were taken at a height of 1.5 m in streets and parks at different locations. By interpolation of the 144 measurement points, we were able to compile an electromagnetic map of the power flux density in Bamberg and Hallstadt. We selected 60 damaged trees, in addition to 30 randomly selected trees and 30 trees in low-radiation areas (n=120) within this polygon. The measurements of all trees revealed significant differences between the damaged side facing a phone mast and the opposite side, as well as differences between the exposed side of damaged trees and all other groups of trees on both sides. Thus, we found that side differences in measured values of power flux density corresponded to side differences in damage. The 30 selected trees in low-radiation areas (no visual contact with any phone mast and power flux density under 50 μW/m²) showed no damage. Statistical analysis demonstrated that electromagnetic radiation from mobile phone masts is harmful to trees. These results are consistent with the fact

  6. Smart-Phone Based Magnetic Levitation for Measuring Densities.

    Directory of Open Access Journals (Sweden)

    Stephanie Knowlton

    Magnetic levitation, which uses a magnetic field to suspend objects in a fluid, is a powerful and versatile technology. We develop a compact magnetic levitation platform, compatible with a smart-phone, to separate micro-objects and estimate the density of a sample based on its levitation height. A 3D-printed attachment is mechanically installed over the existing camera unit of a smart-phone. Micro-objects, which may be either spherical or irregular in shape, are suspended in a paramagnetic medium and loaded into a microcapillary tube, which is then inserted between two permanent magnets. The micro-objects are levitated and confined in the microcapillary at an equilibrium height dependent on their volumetric mass densities (causing a buoyancy force toward the edge of the microcapillary) and magnetic susceptibilities (causing a magnetic force toward the center of the microcapillary) relative to the suspending medium. The smart-phone camera captures magnified images of the levitating micro-objects through an additional lens positioned between the sample and the camera lens cover. A custom-developed Android application then analyzes these images to determine the levitation height and estimate the density. Using this platform, we were able to separate microspheres with varying densities and calibrate their levitation heights against known densities to develop a technique for precise and accurate density estimation. We have also characterized the magnetic field, the optical imaging capabilities, and the thermal state of the platform over time.
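The calibration step, mapping levitation heights of beads of known density to a density estimate for an unknown sample, can be sketched as a linear fit. The numbers below are invented; the real height-density relationship, its linear range, and the medium's magnetic susceptibility all depend on the actual setup.

```python
import numpy as np

# Hypothetical calibration beads: levitation height (mm) vs. known
# density (g/mL). Over a limited range, equilibrium height varies
# approximately linearly with the density offset from the medium.
heights = np.array([0.5, 1.1, 1.8, 2.4])
densities = np.array([1.10, 1.05, 1.00, 0.95])

# Least-squares line through the calibration points.
slope, intercept = np.polyfit(heights, densities, 1)

def estimate_density(h_mm):
    """Estimate sample density (g/mL) from its levitation height."""
    return slope * h_mm + intercept
```

Denser objects settle closer to the bottom magnet, so the fitted slope is negative; an image-analysis step (as in the Android application described) would supply `h_mm` for the unknown sample.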

  7. Wayfinding and Navigation for People with Disabilities Using Social Navigation Networks

    Directory of Open Access Journals (Sweden)

    Hassan A. Karimi

    2014-10-01

    Full Text Available To achieve safe and independent mobility, people usually depend on published information, prior experience, the knowledge of others, and/or technology to navigate unfamiliar outdoor and indoor environments. Today, due to advances in various technologies, wayfinding and navigation systems and services are commonplace and are accessible on desktop, laptop, and mobile devices. However, despite their popularity and widespread use, current wayfinding and navigation solutions often fail to address the needs of people with disabilities (PWDs. We argue that these shortcomings are primarily due to the ubiquity of the compute-centric approach adopted in these systems and services, where they do not benefit from the experience-centric approach. We propose that following a hybrid approach of combining experience-centric and compute-centric methods will overcome the shortcomings of current wayfinding and navigation solutions for PWDs.

  8. Signage and wayfinding design a complete guide to creating environmental graphic design systems

    CERN Document Server

    Calori, Chris

    2015-01-01

    A new edition of the market-leading guide to signage and wayfinding design. This new edition of Signage and Wayfinding Design: A Complete Guide to Creating Environmental Graphic Design Systems has been fully updated to offer you the latest, most comprehensive coverage of the environmental design process, from research and design development to project execution. Utilizing a cross-disciplinary approach that makes the information relevant to architects, interior designers, landscape architects, graphic designers, and industrial engineers alike, the book arms you with the skills needed to apply a

  9. Mobile phone base stations and well-being--A meta-analysis.

    Science.gov (United States)

    Klaps, Armin; Ponocny, Ivo; Winker, Robert; Kundi, Michael; Auersperg, Felicitas; Barth, Alfred

    2016-02-15

    It is unclear whether electromagnetic fields emitted by mobile phone base stations affect well-being in adults, and the existing studies on this topic are highly inconsistent. In the current paper we attempt to clarify this question with a meta-analysis based on the results of 17 studies. Double-blind studies found no effects on human well-being; by contrast, field or unblinded studies clearly showed effects. This provides evidence that at least some effects are based on a nocebo effect: whether electromagnetic fields emitted by mobile phone base stations have an influence depends on a person's knowledge about the presence of the presumed cause. Taken together, the results of the meta-analysis suggest that direct effects of mobile phone base stations are rather unlikely; nocebo effects, however, do occur.

  10. Haptic Cues Used for Outdoor Wayfinding by Individuals with Visual Impairments

    Science.gov (United States)

    Koutsoklenis, Athanasios; Papadopoulos, Konstantinos

    2014-01-01

    Introduction: The study presented here examines which haptic cues individuals with visual impairments use most frequently and determines which of these cues they deem most important for wayfinding in urban environments. It also investigates the ways in which these haptic cues are used by individuals with visual…

  11. [Dementia-friendly architecture. Environments that facilitate wayfinding in nursing homes].

    Science.gov (United States)

    Marquardt, G; Schmieg, P

    2009-10-01

    Spatial disorientation is among the first manifestations of dementia and a prime reason for institutionalization. However, the autonomy of residents and their quality of life are strongly linked with their ability to reach certain places within the nursing home. Also affected are the efficiency of the institutions and the quality of care provided. The physical environment has great potential for supporting residents' residual wayfinding abilities, yet until now little systematic research has been carried out to identify supportive architectural characteristics. For this exploratory study, extensive data on residents' spatial capabilities were collected in 30 German nursing homes, and the architectural structure of the buildings was also analyzed. Within the nursing homes, five identical, ADL-related wayfinding tasks were identified. Skilled nurses rated the residents' ability to perform those tasks on a three-point scale, and the impact of the different architectural characteristics on the resulting scores was tested for statistical significance. Results show that people with advancing dementia are increasingly dependent on a compensating environment. Significant influencing factors are the number of residents per living area, the layout of the circulation system, and the characteristics of the living/dining room. Smaller units facilitate wayfinding, but larger entities may also provide good results if they feature a straight circulation system without any changes in direction. Repetitive elements, such as several living/dining rooms, interfere with residents' wayfinding abilities. These and further results were transformed into architectural policies and guidelines which can be used in the planning and remodelling of nursing homes.

  12. Indoor Signposting and Wayfinding through an Adaptation of the Dutch Cyclist Junction Network System

    NARCIS (Netherlands)

    Makri, A.; Verbree, E.

    2014-01-01

    Finding one's way in complex indoor settings can be a quite stressful and time-consuming task, especially for users unfamiliar with the environment. Several different approaches have been developed to provide wayfinding assistance in order to guide a person from a starting point to a destination

  13. User Interface Preferences in the Design of a Camera-Based Navigation and Wayfinding Aid

    Science.gov (United States)

    Arditi, Aries; Tian, YingLi

    2013-01-01

    Introduction: Development of a sensing device that can provide a sufficient perceptual substrate for persons with visual impairments to orient themselves and travel confidently has been a persistent rehabilitation technology goal, with the user interface posing a significant challenge. In the study presented here, we enlist the advice and ideas of…

  14. Mobile phone based clinical microscopy for global health applications.

    Directory of Open Access Journals (Sweden)

    David N Breslauer

    Full Text Available Light microscopy provides a simple, cost-effective, and vital method for the diagnosis and screening of hematologic and infectious diseases. In many regions of the world, however, the required equipment is either unavailable or insufficiently portable, and operators may not possess adequate training to make full use of the images obtained. Counterintuitively, these same regions are often well served by mobile phone networks, suggesting the possibility of leveraging portable, camera-enabled mobile phones for diagnostic imaging and telemedicine. Toward this end we have built a mobile phone-mounted light microscope and demonstrated its potential for clinical use by imaging P. falciparum-infected and sickle red blood cells in brightfield and M. tuberculosis-infected sputum samples in fluorescence with LED excitation. In all cases resolution exceeded that necessary to detect blood cell and microorganism morphology, and with the tuberculosis samples we took further advantage of the digitized images to demonstrate automated bacillus counting via image analysis software. We expect such a telemedicine system for global healthcare via mobile phone -- offering inexpensive brightfield and fluorescence microscopy integrated with automated image analysis -- to provide an important tool for disease diagnosis and screening, particularly in the developing world and rural areas where laboratory facilities are scarce but mobile phone infrastructure is extensive.
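    The abstract mentions automated bacillus counting via image analysis but does not describe the software. One common approach is to threshold the fluorescence image and count connected bright regions; the sketch below illustrates that idea on a tiny binary grid with a plain flood fill, as a stand-in for a real image-processing pipeline. All details here are assumptions for illustration.

    ```python
    # Illustrative connected-component counting: after thresholding, each
    # bright blob (candidate bacillus) is one 4-connected region of 1s.
    # A real system would operate on camera frames via an image library.

    def count_objects(grid):
        """Count 4-connected regions of 1s in a binary image (list of lists)."""
        rows, cols = len(grid), len(grid[0])
        seen = [[False] * cols for _ in range(rows)]
        count = 0
        for r in range(rows):
            for c in range(cols):
                if grid[r][c] and not seen[r][c]:
                    count += 1
                    stack = [(r, c)]  # flood-fill the whole region
                    while stack:
                        y, x = stack.pop()
                        if 0 <= y < rows and 0 <= x < cols and grid[y][x] and not seen[y][x]:
                            seen[y][x] = True
                            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
        return count

    image = [[0, 1, 1, 0, 0],
             [0, 1, 0, 0, 1],
             [0, 0, 0, 0, 1]]
    print(count_objects(image))  # two separate bright regions -> 2
    ```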

  15. DESIGN AND DEVELOPMENT OF A MOBILE SENSOR BASED THE BLIND ASSISTANCE WAYFINDING SYSTEM

    Directory of Open Access Journals (Sweden)

    F. Barati

    2015-12-01

    Full Text Available Blind and visually impaired people face a number of challenges in their daily life. One of the major challenges is finding their way, both indoors and outdoors, so independent routing and navigation, especially in urban areas, are important for the blind. Most blind people find routes and navigate with the help of a guide; in addition, other aids such as a cane, guide dog or electronic devices are used. However, in some cases these aids are not efficient enough for wayfinding around obstacles and dangerous areas. As a result, effective non-visual decision-support methods are needed to improve the quality of life of the blind through increased mobility and independence. In this study, we designed and implemented an outdoor mobile sensor-based wayfinding system for the blind. The objectives of this study are to guide the blind in obstacle recognition and to design and implement a wayfinding and navigation mobile sensor system for them. An ultrasonic sensor is used to detect obstacles, and GPS is employed for positioning and navigation. The ultrasonic sensor measures the interval between sending waves and receiving the echo signals and, using the speed of sound in the environment, estimates the distance to the obstacles. The coordinates and characteristics of all the obstacles in the study area were stored in advance in a GIS database, and all obstacles were labeled on the map. The ultrasonic sensor designed and constructed in this study can detect obstacles at distances of 2 cm to 400 cm. The implementation, and interviews with a number of blind persons who used the sensor, verified that the designed mobile sensor wayfinding system was very satisfactory.
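    The echo-ranging principle the abstract describes (elapsed time between emitted pulse and returned echo, scaled by the speed of sound) reduces to a one-line formula, sketched below. The speed value is the standard textbook approximation, not a figure from the paper.

    ```python
    # Ultrasonic ranging: distance is half the round-trip time of the pulse
    # multiplied by the speed of sound (the pulse travels out and back).

    SPEED_OF_SOUND_CM_PER_S = 34300  # ~343 m/s in air at 20 °C (approximation)

    def echo_distance_cm(round_trip_s):
        """Distance to an obstacle from the measured echo round-trip time."""
        return round_trip_s * SPEED_OF_SOUND_CM_PER_S / 2

    # A 0.01 s round trip corresponds to about 171.5 cm.
    print(echo_distance_cm(0.01))  # -> 171.5
    ```

    The sensor's stated 2-400 cm range corresponds to round-trip times of roughly 0.12 ms to 23 ms, which is why timing resolution dominates the achievable accuracy.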

  16. Design and Development of a Mobile Sensor Based the Blind Assistance Wayfinding System

    Science.gov (United States)

    Barati, F.; Delavar, M. R.

    2015-12-01

    Blind and visually impaired people face a number of challenges in their daily life. One of the major challenges is finding their way, both indoors and outdoors, so independent routing and navigation, especially in urban areas, are important for the blind. Most blind people find routes and navigate with the help of a guide; in addition, other aids such as a cane, guide dog or electronic devices are used. However, in some cases these aids are not efficient enough for wayfinding around obstacles and dangerous areas. As a result, effective non-visual decision-support methods are needed to improve the quality of life of the blind through increased mobility and independence. In this study, we designed and implemented an outdoor mobile sensor-based wayfinding system for the blind. The objectives of this study are to guide the blind in obstacle recognition and to design and implement a wayfinding and navigation mobile sensor system for them. An ultrasonic sensor is used to detect obstacles, and GPS is employed for positioning and navigation. The ultrasonic sensor measures the interval between sending waves and receiving the echo signals and, using the speed of sound in the environment, estimates the distance to the obstacles. The coordinates and characteristics of all the obstacles in the study area were stored in advance in a GIS database, and all obstacles were labeled on the map. The ultrasonic sensor designed and constructed in this study can detect obstacles at distances of 2 cm to 400 cm. The implementation, and interviews with a number of blind persons who used the sensor, verified that the designed mobile sensor wayfinding system was very satisfactory.

  17. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    NARCIS (Netherlands)

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what

  18. Phone-based motivational interviewing to increase self-efficacy in individuals with phenylketonuria

    Directory of Open Access Journals (Sweden)

    Krista S. Viau

    2016-03-01

    Conclusion: These results demonstrate the feasibility of implementing phone-based dietary counseling for PKU using MI. This study also supports further investigation of MI as an intervention approach to improving self-efficacy and self-management behaviors in adolescents and adults with PKU.

  19. [Study on mobile phone based wireless ECG monitoring technology system realization and performance test].

    Science.gov (United States)

    Yu, Yang; Liu, Jing

    2010-11-01

    This paper introduces a novel mobile phone based wireless real-time ECG monitoring system. And experiments were conducted to demonstrate the reliability and stability of the device. This novel system not only addresses the contradiction between continuous monitoring and device cost, but also represents advanced concepts of low cost medicine and personal health management.

  20. Novel versatile smart phone based Microplate readers for on-site diagnoses.

    Science.gov (United States)

    Fu, Qiangqiang; Wu, Ze; Li, Xiuqing; Yao, Cuize; Yu, Shiting; Xiao, Wei; Tang, Yong

    2016-07-15

    Microplate readers are important diagnostic instruments, used intensively for various readout test kits (biochemical analysis kits and ELISA kits). However, because they are expensive and non-portable, commercial microplate readers are unavailable for home testing and for community and rural hospitals, especially in developing countries. In this study, to provide a field-portable, cost-effective and versatile diagnostic tool, we report a novel smart phone based microplate reader. The basic principle of this device relies on a smart phone's optical sensor, which measures the transmitted light intensities of liquid samples. To prove the validity of these devices, the developed smart phone based microplate readers were applied to read out results for various analytical targets. These targets included alanine aminotransferase (ALT; limit of detection (LOD) 17.54 U/L), alkaline phosphatase (AKP; LOD 15.56 U/L), creatinine (LOD 1.35 μM), bovine serum albumin (BSA; LOD 0.0041 mg/mL), prostate specific antigen (PSA; LOD 0.76 pg/mL), and ractopamine (Rac; LOD 0.31 ng/mL). The developed smart phone based microplate readers are versatile, portable, and inexpensive; they are unique in their ability to perform under circumstances where resources and expertise are limited.
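    A transmitted-intensity readout like the one described is typically converted to absorbance via the Beer-Lambert law, A = -log10(I / I0), with a blank well as the reference. The sketch below shows that conversion; it is a generic illustration, and the sample values are invented, not the study's data.

    ```python
    # Generic microplate readout step (assumed, not the paper's code):
    # convert measured transmitted intensity to absorbance against a blank.
    import math

    def absorbance(transmitted, blank):
        """Beer-Lambert absorbance from sample intensity vs. a blank well."""
        return -math.log10(transmitted / blank)

    # A well transmitting 10% of the blank's intensity has absorbance 1.
    print(absorbance(10.0, 100.0))  # -> 1.0
    ```

    Concentrations then follow from a standard curve relating absorbance to known analyte amounts, which is how per-analyte detection limits such as those quoted above are established.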

  1. Proposing a Multi-Criteria Path Optimization Method in Order to Provide a Ubiquitous Pedestrian Wayfinding Service

    Science.gov (United States)

    Sahelgozin, M.; Sadeghi-Niaraki, A.; Dareshiri, S.

    2015-12-01

    A myriad of novel applications have emerged for different types of navigation systems, and one of the most frequent is wayfinding. Since the nature of pedestrian wayfinding problems differs significantly from that of vehicle navigation, services designed for vehicles are not appropriate for pedestrian wayfinding. In addition, diversity in the environmental conditions of users and in their preferences affects the process of pedestrian wayfinding with mobile devices. Therefore, a method is needed that performs intelligent pedestrian routing with regard to this diversity. This intelligence can be achieved with a ubiquitous service that adapts to context. Such a service possesses both context-awareness and user-awareness capabilities, the main features of ubiquitous services that make them flexible in response to any user in any situation. In this paper, we propose a multi-criteria path optimization method that provides a Ubiquitous Pedestrian Way Finding Service (UPWFS). The proposed method considers four criteria, summarized as the length, safety, difficulty and attraction of the path. A conceptual framework is proposed to show the factors that influence these criteria, and a mathematical model is developed on which the proposed path optimization method is based. Finally, data from a local district in Tehran were chosen as a case study to evaluate the performance of the proposed method in real situations. The results show that the proposed method successfully captured the effects of context in the wayfinding procedure, demonstrating its efficiency in providing a ubiquitous pedestrian wayfinding service.
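    One common way to realize multi-criteria path optimization of this kind is to collapse the per-edge criteria into a single weighted cost and run a shortest-path search; the abstract does not specify its model, so the sketch below is an assumption for illustration, with an invented graph and user weights.

    ```python
    # Hypothetical weighted-sum multi-criteria routing: each edge carries
    # (length, safety, difficulty, attraction) scores (higher = worse),
    # combined into one scalar cost, then searched with Dijkstra.
    import heapq

    def edge_cost(scores, weights):
        """Combine per-criterion scores into one scalar cost."""
        return sum(w * s for w, s in zip(weights, scores))

    def shortest_path_cost(graph, start, goal, weights):
        """Dijkstra over edges annotated with criterion score tuples."""
        dist = {start: 0.0}
        heap = [(0.0, start)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == goal:
                return d
            if d > dist.get(node, float("inf")):
                continue
            for nxt, scores in graph.get(node, []):
                nd = d + edge_cost(scores, weights)
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    heapq.heappush(heap, (nd, nxt))
        return None  # goal unreachable

    # Scores per edge: (length, safety, difficulty, attraction).
    graph = {
        "A": [("B", (1.0, 0.2, 0.1, 0.3)), ("C", (2.0, 0.1, 0.1, 0.1))],
        "B": [("D", (1.0, 0.5, 0.4, 0.2))],
        "C": [("D", (0.5, 0.1, 0.1, 0.1))],
    }
    weights = (1.0, 2.0, 1.0, 0.5)  # a pedestrian who prioritizes safety
    print(round(shortest_path_cost(graph, "A", "D", weights), 2))
    ```

    Context-awareness then amounts to adjusting the weight vector per user and situation, so the same network yields different optimal paths for different pedestrians.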

  2. A mobile phone-based Communication Support System for elderly persons.

    Science.gov (United States)

    Ogawa, Hidekuni; Yonezawa, Yoshiharu; Maki, Hiromichi; Caldwell, W Morton

    2007-01-01

    A mobile phone-based communication support system has been developed to assist elderly people in communicating by mobile phone. The system consists of a low-power mobile phone (PHS phone) with a large liquid crystal screen. To place a call, an elderly person chooses a contact from the registered support personnel pictures displayed on the screen, and the PHS phone dials that person automatically. The elderly person can therefore easily recognize and verify the person being called. The newly developed communication support system assists the significant percentage of elderly people whose poor eyesight and memory frequently cause communication problems, such as dialing a wrong number.

  3. Comparison of the effectiveness of three retinal camera technologies for malarial retinopathy detection in Malawi

    Science.gov (United States)

    Soliz, Peter; Nemeth, Sheila C.; Barriga, E. Simon; Harding, Simon P.; Lewallen, Susan; Taylor, Terrie E.; MacCormick, Ian J.; Joshi, Vinayak S.

    2016-03-01

    The purpose of this study was to test the suitability of three available camera technologies (desktop, portable, and iPhone-based) for imaging comatose children who presented with clinical symptoms of malaria. Ultimately, the results of the project would form the basis for the design of a future camera to screen for malaria retinopathy (MR) in a resource-challenged environment. The desktop, portable, and iPhone-based cameras were represented by the Topcon, Pictor Plus, and Peek cameras, respectively. These cameras were tested on N=23 children presenting with symptoms of cerebral malaria (CM) at a malaria clinic, Queen Elizabeth Teaching Hospital in Malawi, Africa. Each patient was dilated for a binocular indirect ophthalmoscopy (BIO) exam by an ophthalmologist, followed by imaging with all three cameras. Each case was graded according to an internationally established protocol and compared to the BIO as the clinical ground truth. The reader used three principal retinal lesions as markers for MR: hemorrhages, retinal whitening, and vessel discoloration. The study found that the mid-priced Pictor Plus hand-held camera performed considerably better than the lower-priced mobile phone-based camera, and slightly better than the higher-priced desktop camera. When comparing the readings of digital images against the clinical reference standard (BIO), the Pictor Plus camera had sensitivity and specificity for MR of 100% and 87%, respectively. This compares to a sensitivity and specificity of 87% and 75% for the iPhone-based camera and 100% and 75% for the desktop camera. A drawback of all the cameras was their limited field of view, which did not allow a complete view of the periphery, where vessel discoloration occurs most frequently; as a consequence, vessel discoloration was not addressed in this study. None of the cameras offered real-time image quality assessment to ensure high quality images to afford the best possible opportunity for reading by a remotely located
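    Sensitivity and specificity figures like those quoted above come directly from a 2x2 confusion table against the clinical reference standard. The sketch below shows the computation; the counts are illustrative, not the study's actual data.

    ```python
    # Standard screening metrics from a 2x2 confusion table
    # (counts below are invented for illustration).

    def sensitivity(true_pos, false_neg):
        """Fraction of reference-positive cases the camera detected."""
        return true_pos / (true_pos + false_neg)

    def specificity(true_neg, false_pos):
        """Fraction of reference-negative cases correctly ruled out."""
        return true_neg / (true_neg + false_pos)

    # e.g. 13 true positives, 0 false negatives, 7 true negatives, 1 false positive
    print(sensitivity(13, 0))           # -> 1.0
    print(round(specificity(7, 1), 3))  # -> 0.875
    ```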

  4. Where is my car? Examining wayfinding behavior in a parking lot

    Directory of Open Access Journals (Sweden)

    Rodrigo Mora

    2014-08-01

    Full Text Available This article examines wayfinding behavior in an extended parking lot belonging to one of the largest shopping malls in Santiago, Chile. About 500 people were followed while going to the mall and returning from it, and their trajectories were mapped and analyzed. The results indicate that inbound paths were, on average, 10% shorter than outbound paths, and that people stopped three times more frequently when leaving the mall than when entering it. It is argued that these results are in line with previous research on the subject, which stresses the importance of environmental information in shaping people's behavior.

  5. Mobile Phone Based System Opportunities to Home-based Managing of Chemotherapy Side Effects

    Science.gov (United States)

    Davoodi, Somayeh; Mohammadzadeh, Zeinab; Safdari, Reza

    2016-01-01

    Objective: The application of mobile-based systems in cancer care, especially in chemotherapy management, has grown remarkably in recent decades. Because chemotherapy side effects significantly influence patients' lives, it is necessary to find ways to control them. This research reviews experiences of using mobile phone based systems for home-based monitoring of chemotherapy side effects in cancer. Methods: In this literature review, a search was conducted with keywords such as cancer, chemotherapy, mobile phone, information technology, side effects and self-management, in the Science Direct, Google Scholar and PubMed databases, covering publications since 2005. Results: Because of the growing incidence of cancer, methods and innovations such as information technology are needed to manage and control it. Mobile phone based systems are solutions that give cancer patients quick access to monitoring of chemotherapy side effects at home. The investigated studies demonstrate that the use of mobile phones in chemotherapy management has positive results and leads to patient and clinician satisfaction. Conclusion: This study shows that mobile phone systems for home-based monitoring of chemotherapy side effects work well. As a result, knowledge of cancer self-management and the rate of patients' effective participation in the care process improved. PMID:27482134

  6. The Effect of Gender, Wayfinding Strategy and Navigational Support on Wayfinding Behaviour%性别、寻路策略与导航方式对寻路行为的影响

    Institute of Scientific and Technical Information of China (English)

    房慧聪; 周琳

    2012-01-01

    The wayfinding strategy and the navigational support mode are two important factors in human wayfinding behavior. Although many lines of evidence have shown gender differences in the use of wayfinding strategies and in the effectiveness of some navigational support designs, the interaction of these two factors remains to be studied. The present study investigated the effects of gender, wayfinding strategy and navigational support mode on wayfinding behavior. 120 subjects were screened with the classic Wayfinding Strategy Scale developed by Lawton and then assigned to different navigational support modes in a VR maze program scripted with 3Dmax and Virtools. In the practice stage, the subjects were required to become familiar with the operation rules, such as moving forward or backward and turning left or right by pressing the cursor keys. The subjects then entered the formal test, in which they were asked to reach the exit of the maze as quickly as possible with the aid of a given navigational support mode. The navigation time and the route map were recorded when the subjects successfully completed the task. First, our data showed that the navigation time of males with lower scores in orientation strategy was shortest under the guide-sign support condition in the VR maze and longest under the YAH (you-are-here) map condition, and the difference between the two treatments was significant. However, the effect of navigational support mode on wayfinding performance was not significant in males with higher scores in orientation strategy. These data indicate that orientation strategy is an important factor in predicting males' navigational performance. Second, our data also showed that the effect of navigational support mode on females' wayfinding performance was statistically significant. The navigation time was shortest under the guide-sign support condition, and it was

  7. Precise observation of C. elegans dynamic behaviours under controlled thermal stimulus using a mobile phone-based microscope.

    Science.gov (United States)

    Yoon, T; Shin, D-M; Kim, S; Lee, S; Lee, T G; Kim, K

    2017-04-01

    We investigated the temperature-dependent locomotion of Caenorhabditis elegans using a mobile phone-based microscope. We developed a customized imaging system with a mini incubator and a smartphone to effectively control thermal stimulation for precise observation of the temperature-dependent locomotory behaviours of C. elegans. Using the mobile phone-based microscope, we successfully followed the long-term progress of C. elegans specimens in real time as they hatched and explored their temperature-dependent locomotory behaviour. We are convinced that the mobile phone-based microscope is a useful device for real-time, long-term observation of biological samples during incubation, and that it makes live observation possible via wireless communications regardless of location. In addition, this microscope has the potential for widespread use owing to its low cost and compact design.

  8. Color Targets: Fiducials to Help Visually Impaired People Find Their Way by Camera Phone

    Directory of Open Access Journals (Sweden)

    Roberto Manduchi

    2007-08-01

    Full Text Available A major challenge faced by the blind and visually impaired population is that of wayfinding—the ability of a person to find his or her way to a given destination. We propose a new wayfinding aid based on a camera cell phone, which is held by the user to find and read aloud specially designed machine-readable signs, which we call color targets, in indoor environments (labeling locations such as offices and restrooms). Our main technical innovation is that we have designed the color targets to be detected and located in fractions of a second on the cell phone CPU, even at a distance of several meters. Once the sign has been quickly detected, nearby information in the form of a barcode can be read, an operation that typically requires more computational time. An important contribution of this paper is a principled method for optimizing the design of the color targets and the color target detection algorithm based on training data, instead of relying on heuristic choices as in our previous work. We have implemented the system on a Nokia 7610 cell phone, and preliminary experiments with blind subjects demonstrate the feasibility of using the system as a real-time wayfinding aid.
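    One basic ingredient of color-target detection is deciding whether a pixel is close enough to one of the target's reference colors. The paper optimizes its detector from training data; the sketch below shows only the generic nearest-color test, with invented reference colors and an invented threshold.

    ```python
    # Hypothetical nearest-reference-color test (illustrative only; the
    # paper's actual detector is optimized from training data).

    def sq_dist(c1, c2):
        """Squared Euclidean distance between two RGB triples."""
        return sum((a - b) ** 2 for a, b in zip(c1, c2))

    def matches_target(pixel, reference_colors, threshold=2500):
        """True if the pixel is close enough to any reference color."""
        return min(sq_dist(pixel, ref) for ref in reference_colors) <= threshold

    targets = [(220, 40, 40), (40, 220, 40)]  # e.g. a red and a green patch
    print(matches_target((210, 55, 50), targets))    # near red -> True
    print(matches_target((128, 128, 128), targets))  # gray -> False
    ```

    Fixed RGB thresholds are fragile under changing illumination, which is one reason the paper trains the detector rather than hand-picking such constants.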

  9. Determinants and stability over time of perception of health risks related to mobile phone base stations

    DEFF Research Database (Denmark)

    Kowall, Bernd; Breckenkamp, Jürgen; Blettner, Maria;

    2012-01-01

    OBJECTIVE: Perception of possible health risks related to mobile phone base stations (MPBS) is an important factor in citizens' opposition against MPBS and is associated with health complaints. The aim of the present study is to assess whether risk perception of MPBS is associated with concerns about other environmental and health risks, is associated with psychological strain, and is stable on the individual level over time. METHODS: Self-administered questionnaires filled in by 3,253 persons aged 15-69 years in 2004 and 2006 in Germany. RESULTS: Risk perception of MPBS was strongly associated with concerns about various other risks like side effects of medications, air pollution or electric power lines. Persons showing more anxiety, depression, or stress were more often concerned about MPBS and also more often attributed health complaints to MPBS. 46.7% of those concerned about MPBS

  10. Implicit attitudes toward nuclear power and mobile phone base stations: support for the affect heuristic.

    Science.gov (United States)

    Siegrist, Michael; Keller, Carmen; Cousin, Marie-Eve

    2006-08-01

    The implicit association test (IAT) measures automatic associations. In the present research, the IAT was adapted to measure implicit attitudes toward technological hazards. In Study 1, implicit and explicit attitudes toward nuclear power were examined. Implicit measures (i.e., the IAT) revealed negative attitudes toward nuclear power that were not detected by explicit measures (i.e., a questionnaire). In Study 2, implicit attitudes toward EMF (electro-magnetic field) hazards were examined. Results showed that cell phone base stations and power lines are judged to be similarly risky and, further, that base stations are more closely related to risk concepts than home appliances are. No differences between experts and lay people were observed. Results of the present studies are in line with the affect heuristic proposed by Slovic and colleagues. Affect seems to be an important factor in risk perception.

  11. Cell-phone based assistance for waterworks/sewage plant maintenance.

    Science.gov (United States)

    Kawada, T; Nakamichi, K; Hisano, N; Kitamura, M; Miyahara, K

    2006-01-01

    Cell-phones now incorporate the functions necessary for use as mobile IT devices. In this paper, we present the results of our evaluation of cell-phones as mobile IT devices to assist workers in industrial plants, using waterworks and sewage plants as examples. By employing techniques to fit the SCADA screen from a CRT onto a small cell-phone LCD, we have made it easier for a plant's field workers to access the information needed for effective maintenance, regardless of location. An approach to linking SCADA information and plant facility information on the cell-phone is also presented. Should an accident or emergency situation arise, these cell-phone-based IT systems can efficiently deliver the latest plant information, so that the worker out in the field can respond to and resolve the emergency.

  12. Compliance to Cell Phone-Based EMA Among Latino Youth in Outpatient Treatment.

    Science.gov (United States)

    Comulada, W Scott; Lightfoot, Marguerita; Swendeman, Dallas; Grella, Christine; Wu, Nancy

    2015-01-01

    Outpatient treatment practices for adolescent substance users utilize retrospective self-report to monitor drug use. Cell phone-based ecological momentary assessment (CEMA) overcomes retrospective self-report biases and can enhance outpatient treatment, particularly among Latino adolescents, who have been understudied with regard to CEMA. This study explores compliance to text message-based CEMA with youth (n = 28; 93% Latino) in outpatient treatment. Participants were rotated through daily, random, and event-based CEMA strategies for 1-month periods. Overall compliance was high (>80%). Compliance decreased slightly over the study period and was less during random versus daily strategies and on days when alcohol use was retrospectively reported. Findings suggest that CEMA is a viable monitoring tool for Latino youth in outpatient treatment, but further study is needed to determine optimal CEMA strategies, monitoring time periods, and the appropriateness of CEMA for differing levels of substance use.

  13. Psychotherapeutic Applications of Mobile Phone-based Technologies: A Systematic Review of Current Research and Trends.

    Science.gov (United States)

    Menon, Vikas; Rajan, Tess Maria; Sarkar, Siddharth

    2017-01-01

    There is a growing interest in using mobile phone technology to offer real-time psychological interventions and support. However, questions remain on the clinical effectiveness and feasibility of such approaches in psychiatric populations. Our aim was to systematically review the published literature on mobile phone apps and other mobile phone-based technology for psychotherapy in mental health disorders. To achieve this, electronic searches of PubMed, ScienceDirect, and Google Scholar were carried out in January 2016. Generated abstracts were systematically screened for eligibility for inclusion in the review. Studies employing psychotherapy in any form, delivered through mobile-based technology and reporting core mental health outcomes in mental illness, were included. We also included trials in progress with published protocols reporting at least some outcome measures of such interventions. From a total of 1563 search results, 24 eligible articles were identified and reviewed. These included trials in anxiety disorders (8), substance use disorders (5), depression (4), bipolar disorders (3), schizophrenia and psychotic disorders (3), and attempted suicide (1). Of these, eight studies involved the use of smartphone apps; the others involved personalized text messages or automated programs, or delivered empirically supported treatments. Trial lengths varied from 6 weeks to 1 year. Good overall retention rates indicated that the treatments were feasible and largely acceptable. Benefits were reported on core mental health outcomes, indicating the efficacy of such approaches, though sample sizes were small. To conclude, mobile phone-based psychotherapies are a feasible and acceptable treatment option for patients with mental disorders. However, there remains a paucity of data on their effectiveness in real-world settings, especially from low- and middle-income countries.

  14. Field portable mobile phone based fluorescence microscopy for detection of Giardia lamblia cysts in water samples

    Science.gov (United States)

    Ceylan Koydemir, Hatice; Gorocs, Zoltan; McLeod, Euan; Tseng, Derek; Ozcan, Aydogan

    2015-03-01

    Giardia lamblia is a waterborne parasite that causes an intestinal infection known as giardiasis, and it is found not only in countries with inadequate sanitation and unsafe water but also in the streams and lakes of developed countries. Simple, sensitive, and rapid detection of this pathogen is important for the monitoring of drinking water. Here we present a cost-effective and field-portable mobile phone-based fluorescence microscopy platform designed for automated detection of Giardia lamblia cysts in large-volume water samples (i.e., 10 ml), to be used in low-resource field settings. This fluorescence microscope is integrated with a disposable water-sampling cassette, which is based on a flow-through porous polycarbonate membrane and provides a wide surface area for fluorescence imaging and enumeration of the captured Giardia cysts on the membrane. The water sample of interest, containing fluorescently labeled Giardia cysts, is drawn by capillary action into absorbent pads that are in contact with the membrane in the cassette, which eliminates the need for electrically driven flow during sample processing. Our fluorescence microscope weighs ~170 grams in total and has all the components of a regular microscope, capable of detecting individual fluorescently labeled cysts under light-emitting-diode (LED) based excitation. Including all the sample preparation, labeling, and imaging steps, the entire measurement takes less than one hour for a sample volume of 10 ml. This mobile phone-based, compact, and cost-effective fluorescence imaging platform, together with its machine learning-based cyst counting interface, is easy to use and can work even in resource-limited field settings for the spatio-temporal monitoring of water quality.

  16. Mapping Cyclists’ Experiences and Agent-Based Modelling of Their Wayfinding Behaviour

    DEFF Research Database (Denmark)

    Snizek, Bernhard

    This dissertation is about modelling cycling transport behaviour. It is partly about urban experiences as seen by the cyclist and partly about modelling, more specifically the agent-based modelling of cyclists' wayfinding behaviour. The dissertation consists of three papers. The first deals with the development and application of a method for collecting experiential data via an internet-based questionnaire and statistically relating them to physical features of the city as well as the characteristics of the cyclists' routes. The other two papers explain methods for building, calibrating and validating an agent-based model. The model was implemented in rePAST, a state-of-the-art agent-based modelling toolkit. The behavioural parameter estimates were generated from GPS tracks. A road network was taken from OpenStreetMap and enriched with information about the traffic environment and public register-based source and destination data.

  17. Kilohoku Ho`okele Wa`a : Astronomy of the Modern Hawaiian Wayfinders

    Science.gov (United States)

    Ha`o, Celeste; Dye, Ahia G.; Slater, Stephanie J.; Slater, Timothy F.; Baybayan, Kalepa

    2015-08-01

    This paper provides an introduction to Kilohoku Ho`okele Wa`a, the astronomy of the Hawaiian wayfinders. Rooted in a legacy of navigation across the Polynesian triangle, wayfinding astronomy has for thousands of years been part of a suite of skills that allows navigators to deliberately hop between the small islands of the Pacific. Forty years ago, in one manifestation of the Hawaiian Renaissance, our teachers demonstrated that ancient Hawaiians were capable of traversing the wide Pacific to settle and trade on islands separated by thousands of miles. Today those same mentors train a new generation of navigators, making Hawaiian voyaging a living, evolving, sustainable endeavor. This paper presents two components of astronomical knowledge that all crew members, but particularly those training to become navigators, learn early in their training. Na Ohana Hoku, the Hawaiian Star Families, constitute the basic units of the Hawaiian sky. In contrast to the Western system of 88 constellations, Na Ohana Hoku divides the sky into four sections that each run from the northern to the southern pole. This configuration reduces cognitive load, allowing the navigator to preserve working memory for other complex tasks. In addition, these configurations of stars help the navigator find and generatively use hundreds of individual stars and navigationally important star pairs. The Hawaiian Star Compass divides the celestial sphere into a directional system that uses 32 rather than 8 cardinal points. Within the tropics, the rising and setting points of celestial objects are consistent within the Hawaiian Star Compass, providing extremely reliable direction finding. Together, Na Ohana Hoku and the Hawaiian Star Compass give the tropical navigator astronomical assistance that is not available to, and would have been unknown to, Western navigators trained at higher latitudes.
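The 32-point star compass described above lends itself to a simple computational sketch: the horizon is divided into 32 equal houses of 11.25° each, so any rising or setting azimuth maps to exactly one house. The zero-based numbering starting at north is an illustrative assumption here; the actual Hawaiian Star Compass uses named houses, which are not reproduced.

```python
# Sketch: quantize an azimuth (degrees clockwise from north) into one of the
# 32 equal "houses" of a star-compass-style directional system.
# Assumption: houses are indexed 0..31 starting at north; the real Hawaiian
# Star Compass uses named houses rather than numbers.

HOUSE_WIDTH = 360.0 / 32  # 11.25 degrees per house

def compass_house(azimuth_deg: float) -> int:
    """Return the index (0-31) of the house containing the given azimuth."""
    return int((azimuth_deg % 360.0) / HOUSE_WIDTH) % 32

def opposite_house(house: int) -> int:
    """House directly opposite on the horizon (the back-bearing)."""
    return (house + 16) % 32
```

With 32 sectors instead of 8 cardinal points, two azimuths must differ by less than 11.25° to fall in the same house, which illustrates the finer directional resolution the abstract attributes to the system.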

  18. Structural hippocampal anomalies in a schizophrenia population correlate with navigation performance on a wayfinding task

    Directory of Open Access Journals (Sweden)

    Andrée-Anne eLedoux

    2014-03-01

    Episodic memory, which depends on the hippocampus, has been found to be impaired in schizophrenia, and hippocampal anomalies have also been observed in the disorder. This study investigated whether average hippocampal grey matter (GM) would differentiate performance on a hippocampus-dependent memory task between patients with schizophrenia and healthy controls. Twenty-one patients with schizophrenia and twenty-two control participants were scanned with MRI while being tested on a wayfinding task in a virtual town (e.g., find the grocery store from the school). Regressions were performed for both groups individually and together using GM and performance on the wayfinding task. Results indicate that controls completed the task successfully more often than patients, took less time, and made fewer errors. Additionally, controls had significantly more hippocampal GM than patients. Poor performance was associated with a GM decrease in the right hippocampus for both groups. Within-group regressions found an association between right hippocampal GM and performance in controls and an association between left hippocampal GM and performance in patients. A second analysis revealed that different anatomical GM regions known to be associated with the hippocampus, such as the parahippocampal cortex, amygdala, and medial and orbital prefrontal cortices, covaried with the hippocampus in the control group. Interestingly, the cuneus and cingulate gyrus also covaried with the hippocampus in the patient group, but the orbital frontal cortex did not, supporting the hypothesis of impaired connectivity between the hippocampus and the frontal cortex in schizophrenia. These results have important implications for intervention programs aimed at measuring functional and structural changes in the hippocampus in schizophrenia.

  19. Landmarks in nature to support wayfinding: the effects of seasons and experimental methods.

    Science.gov (United States)

    Kettunen, Pyry; Irvankoski, Katja; Krause, Christina M; Sarjakoski, L Tiina

    2013-08-01

    Landmarks constitute an essential basis for a structural understanding of the spatial environment. Therefore, they are crucial factors in external spatial representations such as maps and verbal route descriptions, which are used to support wayfinding. However, selecting landmarks for these representations is a difficult task, for which an understanding of how people perceive and remember landmarks in the environment is needed. We investigated the ways in which people perceive and remember landmarks in nature using the thinking aloud and sketch map methods during both the summer and the winter seasons. We examined the differences between methods to identify those landmarks that should be selected for external spatial representations, such as maps or route descriptions, in varying conditions. We found differences in the use of landmarks both in terms of the methods and also between the different seasons. In particular, the participants used passage and tree-related landmarks at significantly different frequencies with the thinking aloud and sketch map methods. The results are likely to reflect the different roles of the landmark groups when using the two methods, but also the differences in counting landmarks when using both methods. Seasonal differences in the use of landmarks occurred only with the thinking aloud method. Sketch maps were drawn similarly in summertime and wintertime; the participants remembered and selected landmarks similarly independent of the differences in their perceptions of the environment due to the season. The achieved results may guide the planning of external spatial representations within the context of wayfinding as well as when planning further experimental studies.

  20. Pilot study of a cell phone-based exercise persistence intervention post-rehabilitation for COPD

    Directory of Open Access Journals (Sweden)

    Huong Q Nguyen

    2009-08-01

    Huong Q Nguyen(1), Dawn P Gill(1), Seth Wolpin(1), Bonnie G Steele(2), Joshua O Benditt(1). (1)University of Washington, Seattle, WA, USA; (2)VA Puget Sound Health Care System, Seattle, WA, USA. Objective: To determine the feasibility and efficacy of a six-month, cell phone-based exercise persistence intervention for patients with chronic obstructive pulmonary disease (COPD) following pulmonary rehabilitation. Methods: Participants who completed a two-week run-in were randomly assigned to either MOBILE-Coached (n = 9) or MOBILE-Self-Monitored (n = 8). All participants met with a nurse to develop an individualized exercise plan, were issued a pedometer and exercise booklet, and were instructed to continue logging their daily exercise and symptoms. MOBILE-Coached participants also received weekly reinforcement text messages on their cell phones; reports of worsening symptoms were automatically flagged for follow-up. Usability and satisfaction were assessed. Participants completed incremental cycle and six-minute walk (6MW) tests, wore an activity monitor for 14 days, and reported their health-related quality of life (HRQL) at baseline, three, and six months. Results: The sample had a mean age of 68 ± 11 years and forced expiratory volume in one second (FEV1) of 40 ± 18% predicted. Participants reported that logging their exercise and symptoms was easy and that keeping track of their exercise helped them remain active. There were no differences between groups over time in maximal workload, 6MW distance, or HRQL (p > 0.05); however, MOBILE-Self-Monitored increased total steps/day whereas MOBILE-Coached logged fewer steps over six months (p = 0.04). Conclusions: We showed that it is feasible to deliver a cell phone-based exercise persistence intervention to patients with COPD post-rehabilitation and that the addition of coaching appeared to be no better than self-monitoring. The latter finding needs to be interpreted with caution since this was a purely exploratory study. Trial registration: Clinical

  1. Mobile Phone Based RIMS for Traffic Control a Case Study of Tanzania

    Directory of Open Access Journals (Sweden)

    Angela-Aida Karugila Runyoro

    2015-04-01

    Vehicle saturation of transportation infrastructure causes traffic congestion, accidents, transportation delays and environmental pollution. This problem can be addressed with proper management of traffic flow. Existing traffic management systems struggle to capture and process real-time road data from wide-area road networks. The main purpose of this study is to address this gap by implementing a mobile phone-based Road Information Management System. The proposed system integrates three modules for data collection, storage and information dissemination. The modules work together to enable real-time traffic control. Information disseminated from the system enables road users to adjust their travelling habits, and it allows the traffic lights to control traffic in relation to the real-time situation on the road. In this paper the system implementation and testing are described. The results indicated that it is possible to track traffic data using Global Positioning System-enabled mobile phones and, after processing the collected data, to display real-time traffic status on a web interface. This enables road users to know in advance the situation on the roads and hence make better travelling decisions. Further research should consider adjusting the traffic light control system to act on the disseminated real-time traffic information.
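The core of such GPS-based traffic tracking is estimating speed from consecutive position fixes and mapping it to a congestion level. The sketch below illustrates the idea; the haversine distance is standard, but the speed thresholds are invented placeholders, not values from the study.

```python
import math

# Sketch: estimate a vehicle's speed from two consecutive GPS fixes and map it
# to a congestion level, as a mobile phone-based traffic monitor might.
# The classification thresholds below are illustrative assumptions only.

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def congestion_level(fix_a, fix_b, elapsed_s):
    """Classify traffic from two GPS fixes ((lat, lon)) and the seconds between them."""
    speed_kmh = haversine_km(*fix_a, *fix_b) / (elapsed_s / 3600.0)
    if speed_kmh < 10:      # hypothetical threshold
        return "congested"
    if speed_kmh < 40:      # hypothetical threshold
        return "slow"
    return "free_flow"
```

In a full system, such per-vehicle estimates would be aggregated per road segment on the server before being shown on the web interface or fed to the traffic lights.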

  2. An iPhone-based digital image colorimeter for detecting tetracycline in milk.

    Science.gov (United States)

    Masawat, Prinya; Harfield, Antony; Namwong, Anan

    2015-10-01

    An iPhone-based digital image colorimeter (DIC) was fabricated as a portable tool for monitoring tetracycline (TC) in bovine milk. An application named ColorConc was developed for the iPhone that utilizes an image matching algorithm to determine the TC concentration in a solution. The color values red (R), green (G), blue (B), hue (H), saturation (S), brightness (V), and gray (Gr) were measured from each picture of the TC standard solutions. TC extracted from milk samples using solid phase extraction (SPE) was captured, and the concentration was predicted by comparing color values with those collected in a database. The amount of TC could be determined in the concentration range of 0.5-10 μg mL(-1). The proposed DIC-iPhone is able to provide a limit of detection (LOD) of 0.5 μg mL(-1) and a limit of quantitation (LOQ) of 1.5 μg mL(-1). The enrichment factor was 70, and the extracted milk sample was a strong yellow solution after SPE. Therefore, thanks to the enrichment step, the SPE-DIC-iPhone could be used for the assay of TC residues in milk at concentrations lower than the LOD and LOQ of the proposed technique.
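The image-matching idea behind an app like ColorConc can be sketched as a nearest-color lookup: measure the mean color of the captured solution and return the concentration of the closest standard in the calibration database. The database entries and the use of plain Euclidean RGB distance below are illustrative assumptions, not the paper's actual calibration data or matching algorithm.

```python
import math

# Sketch: predict a concentration by finding the nearest color in a database
# of standards. STANDARDS holds made-up placeholder values; a real calibration
# would also use the H, S, V, and gray channels mentioned in the abstract.

STANDARDS = [
    # (concentration in ug/mL, mean (R, G, B) of the standard's image)
    (0.5, (250, 250, 200)),
    (5.0, (200, 200, 100)),
    (10.0, (150, 150, 50)),
]

def predict_concentration(rgb, standards=STANDARDS):
    """Return the concentration whose standard color is closest in RGB space."""
    return min(standards, key=lambda entry: math.dist(rgb, entry[1]))[0]
```

A denser set of standards (or interpolation between the two nearest ones) would be needed to resolve concentrations between calibration points.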

  3. A mobile phone based telemonitoring concept for the simultaneous acquisition of biosignals and physiological parameters.

    Science.gov (United States)

    Kumpusch, Hannes; Hayn, Dieter; Kreiner, Karl; Falgenhauer, Markus; Mor, Jürgen; Schreier, Günter

    2010-01-01

    Congestive heart failure (CHF) is a common chronic heart disease with high socioeconomic impact. Conventional treatment of CHF is often ineffective and inefficient, since self-management is complex and patients are insufficiently involved in therapy management. With telemedical concepts, continuous monitoring of the health status can be ensured, and therapy management can consequently be adapted to the requirements of each individual patient. To this end, a mobile phone based patient terminal for the concurrent acquisition of biosignals (e.g. ECG) and bioparameters (e.g. blood pressure) from patients with CHF has been developed and prototypically implemented. Usability and interoperability were especially considered by using Bluetooth and Near Field Communication (NFC) technology for data acquisition and standardized data formats for transmission of the data to a central monitoring centre. Results indicated that even complicated measurements, such as the acquisition of ECG signals, could be accomplished autonomously by the patients in an intuitive and easy-to-use way. Through the use of IHE-conformant HL7 messages, self-measured data could easily be integrated into a higher-level eHealth infrastructure.

  4. Study of variations of radiofrequency power density from mobile phone base stations with distance.

    Science.gov (United States)

    Ayinmode, B O; Farai, I P

    2013-10-01

    The variations of radiofrequency (RF) radiation power density with distance around mobile phone base transceiver stations (BTSs) were studied at ten randomly selected locations in Ibadan, western Nigeria. Measurements were made with a calibrated hand-held spectrum analyser. The maximum Global System for Mobile Communications (GSM) 1800 signal power density was 323.91 µW m(-2) at a 250 m radius from one BTS, and that of GSM 900 was 1119.00 µW m(-2) at a 200 m radius from another. The estimated total maximum power density was 2972.00 µW m(-2) at a 50 m radius from a third BTS. This study shows that the maximum carrier signal power density and the total maximum power density from a BTS may be observed, on average, at 200 m and 50 m from it, respectively. The results demonstrate that exposure of people to RF radiation from phone BTSs in Ibadan city is far below the limits recommended by international scientific bodies.

  5. Time Averaged Transmitter Power and Exposure to Electromagnetic Fields from Mobile Phone Base Stations

    Directory of Open Access Journals (Sweden)

    Alfred Bürgi

    2014-08-01

    Models for exposure assessment of high-frequency electromagnetic fields from mobile phone base stations need the technical data of the base stations as input. One of these parameters, the Equivalent Radiated Power (ERP), is a time-varying quantity that depends on communication traffic. In order to determine temporal averages of the exposure, corresponding averages of the ERP have to be available. These can be determined as duty factors: the ratios of the time-averaged power to the maximum output power according to the transmitter setting. We determine duty factors for UMTS from the data of 37 base stations in the Swisscom network. The UMTS base station sample contains sites from different regions of Switzerland and different site types (rural/suburban/urban/hotspot). Averaged over all regions and site types, a UMTS duty factor for the 24 h-average is obtained, i.e., the average output power corresponds to about a third of the maximum power. We also give duty factors for GSM based on simple approximations and a lower limit for LTE estimated from the base load on the signalling channels.

  6. Time averaged transmitter power and exposure to electromagnetic fields from mobile phone base stations.

    Science.gov (United States)

    Bürgi, Alfred; Scanferla, Damiano; Lehmann, Hugo

    2014-08-07

    Models for exposure assessment of high frequency electromagnetic fields from mobile phone base stations need the technical data of the base stations as input. One of these parameters, the Equivalent Radiated Power (ERP), is a time-varying quantity, depending on communication traffic. In order to determine temporal averages of the exposure, corresponding averages of the ERP have to be available. These can be determined as duty factors, the ratios of the time-averaged power to the maximum output power according to the transmitter setting. We determine duty factors for UMTS from the data of 37 base stations in the Swisscom network. The UMTS base stations sample contains sites from different regions of Switzerland and also different site types (rural/suburban/urban/hotspot). Averaged over all regions and site types, a UMTS duty factor for the 24 h-average is obtained, i.e., the average output power corresponds to about a third of the maximum power. We also give duty factors for GSM based on simple approximations and a lower limit for LTE estimated from the base load on the signalling channels.
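The duty factor defined in the abstract is simply the time-averaged transmitted power divided by the maximum output power, which can be written directly. The sample values below are invented for illustration; the paper derives its factors from measured traffic data on 37 real UMTS base stations.

```python
# Sketch: duty factor = time-averaged power / maximum output power.
# Power samples would in practice come from logged transmitter output over
# a 24 h period; the numbers used here are placeholders.

def duty_factor(power_samples, p_max):
    """Time-averaged power divided by the maximum output power."""
    if p_max <= 0:
        raise ValueError("maximum output power must be positive")
    return sum(power_samples) / len(power_samples) / p_max
```

A station whose logged output averages a third of its maximum power over the day has a duty factor of about 0.33, consistent with the UMTS result quoted in the abstract.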

  7. What input data are needed to accurately model electromagnetic fields from mobile phone base stations?

    Science.gov (United States)

    Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel

    2015-01-01

    The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available.

  8. Measurement and analysis of radiofrequency radiations from some mobile phone base stations in Ghana.

    Science.gov (United States)

    Amoako, J K; Fletcher, J J; Darko, E O

    2009-08-01

    A survey of the radiofrequency electromagnetic radiation at public access points in the vicinity of 50 cellular phone base stations has been carried out. The primary objective was to measure and analyse the electromagnetic field strength levels emitted by antennae installed and operated by the Ghana Telecommunications Company. At all sites, measurements were made using a hand-held spectrum analyser to determine the electric field levels within the 900 and 1800 MHz frequency bands. The results indicated that power densities at public access points varied from as low as 0.01 µW m(-2) to as high as 10 µW m(-2) at 900 MHz. At a transmission frequency of 1800 MHz, power densities varied from 0.01 to 100 µW m(-2). The results were found to be in compliance with the International Commission on Non-Ionizing Radiation Protection guidance levels but were 20 times higher than the results generally obtained for such a practice elsewhere. There is therefore a need to reassess the situation to ensure a reduction from the present levels, as an increase in mobile phone usage is envisaged within the next few years.

  9. Multidimensional control using a mobile-phone based brain-muscle-computer interface.

    Science.gov (United States)

    Vernon, Scott; Joshi, Sanjay S

    2011-01-01

    Many well-known brain-computer interfaces measure signals at the brain and then rely on the brain's ability to learn via operant conditioning in order to control objects in the environment. In our lab, we have been developing brain-muscle-computer interfaces, which measure signals at a single muscle and then rely on the brain's ability to learn neuromuscular skills via operant conditioning. Here, we report a new mobile phone-based brain-muscle-computer interface prototype for severely paralyzed persons, based on previous results from our group showing that humans can actively produce specified power levels in two separate frequency bands of a single sEMG signal. Electromyographic activity on the surface of a single face muscle (Auricularis superior) is recorded with a standard electrode. This analog electrical signal is imported into an Android-based mobile phone. User-modulated power in two separate frequency bands serves as two separate and simultaneous control channels for machine control. After signal processing, the Android phone sends commands to external devices via Bluetooth. Users are trained to use the device via biofeedback, with simple cursor-to-target activities on the phone screen.
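The two-channel idea above amounts to measuring signal power in two separate frequency bands of one sEMG recording. The sketch below does this with a naive DFT for clarity; the band edges and the synthetic test signal are illustrative assumptions, not the study's actual parameters or processing chain.

```python
import math

# Sketch: derive two simultaneous control channels from one signal by
# measuring power in two separate frequency bands. A naive DFT is used for
# readability; a real implementation would use an FFT. Band edges (60-100 Hz
# and 130-170 Hz) are hypothetical choices.

def band_power(signal, fs, f_lo, f_hi):
    """Sum of squared DFT magnitudes over the bins covering [f_lo, f_hi] Hz."""
    n = len(signal)
    total = 0.0
    for k in range(int(f_lo * n / fs), int(f_hi * n / fs) + 1):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        im = -sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
        total += re * re + im * im
    return total

def control_channels(signal, fs):
    """Hypothetical mapping: power in a low and a high band as two channels."""
    return band_power(signal, fs, 60, 100), band_power(signal, fs, 130, 170)

# Demo signal: an 80 Hz component at unit amplitude plus a 150 Hz component
# at half amplitude, sampled at 1 kHz for 1 s.
FS, N = 1000, 1000
DEMO = [math.sin(2 * math.pi * 80 * i / FS) + 0.5 * math.sin(2 * math.pi * 150 * i / FS)
        for i in range(N)]
LOW, HIGH = control_channels(DEMO, FS)
```

Because power scales with amplitude squared, the half-amplitude 150 Hz component yields one quarter of the low band's power, so the two channels can be modulated independently by shifting energy between bands.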

  10. Mobile Phone-Based Field Monitoring for Satsuma Mandarin and Its Application to Watering Advice System

    Science.gov (United States)

    Kamiya, Toshiyuki; Numano, Nagisa; Yagyu, Hiroyuki; Shimazu, Hideo

    This paper describes a mobile phone-based data logging system for monitoring the growing status of Satsuma mandarin, a type of citrus fruit, in the field. The system can provide various kinds of feedback to farm producers from the collected data, such as visualization of related data as a timeline chart or advice on the necessity of watering crops. It is important to collect information on environmental conditions, plant status and product quality, to analyze it, and to provide it as feedback to farm producers to aid their operations. This paper proposes a novel framework of field monitoring and feedback for open-field farming. For field monitoring, it combines a low-cost plant status monitoring method using a simple apparatus with a Field Server for environmental condition monitoring. Each field worker has a simple apparatus to measure fruit firmness and records the data with a mobile phone. The logged data are stored in the system's database on the server. The system analyzes the stored data for each field and is able to indicate the necessity of watering to the user on a five-level scale. The system is also able to show various stored data in timeline chart form. The user and coach can compare or analyze these data via a web interface. A test site was built at a Satsuma mandarin field at Kumano in Mie Prefecture, Japan, using this framework, and farm workers in the area used and evaluated the system.

  11. Mobile phone base stations and adverse health effects: phase 1 of a population-based, cross-sectional study in Germany

    DEFF Research Database (Denmark)

    Blettner, M; Schlehofer, B; Breckenkamp, J;

    2009-01-01

    -sectional study within the context of a large panel survey regularly carried out by a private research institute in Germany. In the initial phase, reported on in this paper, 30,047 persons from a total of 51,444 who took part in the nationwide survey also answered questions on how mobile phone base stations.......7% of participants were concerned about adverse health effects of mobile phone base stations, while an additional 10.3% attributed their personal adverse health effects to the exposure from them. Participants who were concerned about or attributed adverse health effects to mobile phone base stations and those living...

  12. Mobile Phone-Based Lifestyle Intervention for Reducing Overall Cardiovascular Disease Risk in Guangzhou, China: A Pilot Study.

    Science.gov (United States)

    Liu, Zhiting; Chen, Songting; Zhang, Guanrong; Lin, Aihua

    2015-12-17

    With the rapid and widespread adoption of mobile devices, mobile phones offer an opportunity to deliver cardiovascular disease (CVD) interventions. This study evaluated the efficacy of a mobile phone-based lifestyle intervention aimed at reducing the overall CVD risk at a health management center in Guangzhou, China. We recruited 589 workers from eight work units. Based on a group-randomized design, work units were randomly assigned either to receive the mobile phone-based lifestyle interventions or usual care. The reduction in 10-year CVD risk at 1-year follow-up for the intervention group was not statistically significant (-1.05%, p = 0.096). However, the mean risk increased significantly by 1.77% (p = 0.047) for the control group. The difference of the changes between treatment arms in CVD risk was -2.83% (p = 0.001). In addition, there were statistically significant changes for the intervention group relative to the controls from baseline to year 1 in systolic blood pressure (-5.55 vs. 6.89 mmHg; p …). Mobile phone-based intervention may therefore be a potential solution for reducing CVD risk in China.
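The "difference of the changes between treatment arms" reported above is a difference-in-differences on 10-year CVD risk. Reproducing it from the rounded figures quoted in the abstract (-1.05% for the intervention arm, +1.77% for the control arm) gives -2.82%; the published -2.83% presumably reflects rounding of the unrounded arm-level changes.

```python
# Sketch: the between-arm effect is the change in the intervention arm minus
# the change in the control arm (a difference-in-differences). Inputs are the
# rounded percentage-point changes quoted in the abstract.

def difference_in_differences(change_intervention, change_control):
    """Intervention-arm change minus control-arm change."""
    return change_intervention - change_control

DID = difference_in_differences(-1.05, 1.77)  # percentage points of 10-year CVD risk
```

The sign convention matters: a negative result means risk moved in the intervention arm's favor relative to control, even though the intervention arm's own change was not itself statistically significant.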

  13. Phases in development of an interactive mobile phone-based system to support self-management of hypertension.

    Science.gov (United States)

    Hallberg, Inger; Taft, Charles; Ranerup, Agneta; Bengtsson, Ulrika; Hoffmann, Mikael; Höfer, Stefan; Kasperowski, Dick; Mäkitalo, Åsa; Lundin, Mona; Ring, Lena; Rosenqvist, Ulf; Kjellgren, Karin

    2014-01-01

    Hypertension is a significant risk factor for heart disease and stroke worldwide. Effective treatment regimens exist; however, treatment adherence rates are poor (30%-50%). Improving self-management may be a way to increase adherence to treatment. The purpose of this paper is to describe the phases in the development and preliminary evaluation of an interactive mobile phone-based system aimed at supporting patients in self-managing their hypertension. A person-centered and participatory framework emphasizing patient involvement was used. An interdisciplinary group of researchers, patients with hypertension, and health care professionals who were specialized in hypertension care designed and developed a set of questions and motivational messages for use in an interactive mobile phone-based system. Guided by the US Food and Drug Administration framework for the development of patient-reported outcome measures, the development and evaluation process comprised three major development phases (1, defining; 2, adjusting; 3, confirming the conceptual framework and delivery system) and two evaluation and refinement phases (4, collecting, analyzing, interpreting data; 5, evaluating the self-management system in clinical practice). Evaluation of new mobile health systems in a structured manner is important to understand how various factors affect the development process from both a technical and human perspective. Forthcoming analyses will evaluate the effectiveness and utility of the mobile phone-based system in supporting the self-management of hypertension.

  14. Tower Camera

    Data.gov (United States)

    Oak Ridge National Laboratory — The tower camera in Barrow provides hourly images of the ground surrounding the tower. These images may be used to determine fractional snow cover as winter arrives, for...

  15. Hierarchical approaches to estimate energy expenditure using phone-based accelerometers.

    Science.gov (United States)

    Vathsangam, Harshvardhan; Schroeder, E Todd; Sukhatme, Gaurav S

    2014-07-01

    Physical inactivity is linked with an increased risk of cancer, heart disease, stroke, and diabetes. Walking is an easily available activity for reducing sedentary time. Objective methods to accurately assess energy expenditure from walking, normalized to an individual, would allow tailored interventions. Current techniques rely on normalization by weight scaling or on fitting a polynomial function of weight and speed. Using the example of steady-state treadmill walking, we present a set of algorithms that extend previous work to include an arbitrary number of anthropometric descriptors. We specifically focus on predicting energy expenditure using movement measured by mobile phone-based accelerometers. The models tested include nearest-neighbor models, weight-scaled models, a set of hierarchical linear models, multivariate models, and speed-based approaches. These are compared for prediction accuracy as measured by normalized average root mean-squared error across all participants. Nearest-neighbor models showed the highest errors. Feature combinations corresponding to sedentary energy expenditure, sedentary heart rate, and sex alone resulted in errors that were higher than those of speed-based and nearest-neighbor models. Size-based features such as BMI, weight, and height produced lower errors. Hierarchical models performed better than multivariate models when size-based features were used. We used the hierarchical linear model to determine the best individual feature to describe a person. Weight was the best individual descriptor, followed by height. We also tested models for their ability to predict energy expenditure with limited training data. Hierarchical models outperformed personal models when only a small amount of training data was available. Speed-based models showed poor interpolation capability, whereas hierarchical models interpolated uniformly well across speeds.
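The two-stage idea behind such hierarchical models can be sketched briefly: fit a per-person linear model of energy expenditure against speed, then regress the per-person slopes on an anthropometric descriptor (here, weight) so a new person's model can be predicted from that descriptor alone. This is an illustrative sketch on synthetic data, not the authors' algorithm; the coefficients, noise level, and single-descriptor choice are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: each person's energy expenditure (EE) grows roughly
# linearly with walking speed, with a slope that scales with body weight.
weights = np.array([55.0, 70.0, 85.0, 100.0])  # kg (person-level descriptor)
speeds = np.linspace(0.8, 2.0, 10)             # m/s (within-person covariate)

def true_ee(w, v):
    return 1.2 + 0.04 * w * v                  # toy ground truth

# Stage 1: fit one linear model (intercept + slope) per person.
person_params = []
for w in weights:
    ee = true_ee(w, speeds) + rng.normal(0, 0.05, speeds.size)
    X = np.column_stack([np.ones_like(speeds), speeds])
    beta, *_ = np.linalg.lstsq(X, ee, rcond=None)
    person_params.append(beta)
person_params = np.array(person_params)        # shape (n_people, 2)

# Stage 2: regress the per-person slopes on weight, linking the
# within-person models through the anthropometric descriptor.
Z = np.column_stack([np.ones_like(weights), weights])
slope_coef, *_ = np.linalg.lstsq(Z, person_params[:, 1], rcond=None)

def predict_ee(weight, speed):
    """Predict EE for an unseen person from weight alone."""
    slope = slope_coef[0] + slope_coef[1] * weight
    intercept = person_params[:, 0].mean()     # pooled intercept
    return intercept + slope * speed

print(round(predict_ee(70.0, 1.5), 2))  # ≈ 5.4 (toy ground truth 1.2 + 0.04*70*1.5)
```

The hierarchy is what lets the model generalize to a person with little or no training data, which matches the record's finding that hierarchical models beat personal models when training data are scarce.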

  16. Cardiac cameras.

    Science.gov (United States)

    Travin, Mark I

    2011-05-01

    Cardiac imaging with radiotracers plays an important role in patient evaluation, and the development of suitable imaging instruments has been crucial. While initially performed with the rectilinear scanner that slowly transmitted, in a row-by-row fashion, cardiac count distributions onto various printing media, the Anger scintillation camera allowed electronic determination of tracer energies and of the distribution of radioactive counts in 2D space. Increased sophistication of cardiac cameras and development of powerful computers to analyze, display, and quantify data has been essential to making radionuclide cardiac imaging a key component of the cardiac work-up. Newer processing algorithms and solid state cameras, fundamentally different from the Anger camera, show promise to provide higher counting efficiency and resolution, leading to better image quality, more patient comfort and potentially lower radiation exposure. While the focus has been on myocardial perfusion imaging with single-photon emission computed tomography, increased use of positron emission tomography is broadening the field to include molecular imaging of the myocardium and of the coronary vasculature. Further advances may require integrating cardiac nuclear cameras with other imaging devices, ie, hybrid imaging cameras. The goal is to image the heart and its physiological processes as accurately as possible, to prevent and cure disease processes.

  17. Toward a Computer Vision-based Wayfinding Aid for Blind Persons to Access Unfamiliar Indoor Environments.

    Science.gov (United States)

    Tian, Yingli; Yang, Xiaodong; Yi, Chucai; Arditi, Aries

    2013-04-01

    Independent travel is a well known challenge for blind and visually impaired persons. In this paper, we propose a proof-of-concept computer vision-based wayfinding aid for blind people to independently access unfamiliar indoor environments. In order to find different rooms (e.g. an office, a lab, or a bathroom) and other building amenities (e.g. an exit or an elevator), we incorporate object detection with text recognition. First we develop a robust and efficient algorithm to detect doors, elevators, and cabinets based on their general geometric shape, by combining edges and corners. The algorithm is general enough to handle large intra-class variations of objects with different appearances among different indoor environments, as well as small inter-class differences between different objects such as doors and door-like cabinets. Next, in order to distinguish intra-class objects (e.g. an office door from a bathroom door), we extract and recognize text information associated with the detected objects. For text recognition, we first extract text regions from signs with multiple colors and possibly complex backgrounds, and then apply character localization and topological analysis to filter out background interference. The extracted text is recognized using off-the-shelf optical character recognition (OCR) software products. The object type, orientation, location, and text information are presented to the blind traveler as speech.
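The record's detector identifies doors and door-like cabinets from their general geometric shape using edges and corners. As an illustration of the kind of geometric filtering such a detector might apply once four candidate corners are found, the toy check below accepts tall, roughly upright quadrilaterals and rejects squarer ones. The aspect-ratio and tilt thresholds are assumptions for illustration, not values from the paper.

```python
import math

def dist(p, q):
    """Euclidean distance between two 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def is_door_like(corners, min_aspect=1.8, max_aspect=3.0, tilt_tol_deg=10.0):
    """Heuristic check: do four corners (clockwise from top-left) outline
    a tall, near-upright rectangle, as an interior door would?"""
    tl, tr, br, bl = corners
    width = (dist(tl, tr) + dist(bl, br)) / 2
    height = (dist(tl, bl) + dist(tr, br)) / 2
    if width == 0:
        return False
    # Doors are much taller than wide; cabinets tend to be squarer.
    aspect = height / width
    if not (min_aspect <= aspect <= max_aspect):
        return False
    # The left edge should be near-vertical in the image.
    dx, dy = bl[0] - tl[0], bl[1] - tl[1]
    tilt = abs(math.degrees(math.atan2(dx, dy)))
    return tilt <= tilt_tol_deg

door = [(100, 50), (180, 52), (182, 260), (98, 258)]      # tall upright quad
cabinet = [(100, 50), (260, 50), (260, 200), (100, 200)]  # squarer quad
print(is_door_like(door), is_door_like(cabinet))  # True False
```

In the paper this geometric stage is only the first filter; text extracted near an accepted candidate (and recognized with off-the-shelf OCR) then disambiguates, e.g., an office door from a bathroom door.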

  18. Wayfinding: a quality factor in human design approach to healthcare facilities.

    Science.gov (United States)

    Del Nord, R

    1999-01-01

    The specific aim of this paper is the systematic analysis of the interactions and reciprocal conditions existing between the physical space of hospital buildings and the different categories of individuals who come in contact with them. The physical and environmental features of hospital architecture often influence the therapeutic character of a space and the people who work in it. If the values of the individual are to be safeguarded in this context, priority needs to be given to such factors as communication, privacy, etc. This would mean involving other professional groups, such as psychologists, sociologists, and ergonomists, at the hospital building planning stage. This paper outlines the results of research conducted at the University Research Center "TESIS" of Florence to provide a better understanding of design strategies applied to reduce the pathology of spaces within the healthcare environment. The case studies highlight the parameters and possible architectural solutions for wayfinding and the humanization of spaces, with particular emphasis on layouts, technologies, furniture, and finishing design.

  19. The feasibility of cell phone based electronic diaries for STI/HIV research

    Directory of Open Access Journals (Sweden)

    Hensel Devon J

    2012-06-01

    Full Text Available Abstract Background Self-reports of sensitive, socially stigmatized or illegal behavior are common in STI/HIV research, but can raise challenges in terms of data reliability and validity. The use of electronic data collection tools, including ecological momentary assessment (EMA), can increase the accuracy of this information by allowing a participant to self-administer a survey or diary entry, in their own environment, as close to the occurrence of the behavior as possible. In this paper, we evaluate the feasibility of using cell phone-based EMA as a tool for understanding sexual risk and STI among adult men and women. Methods As part of a larger prospective clinical study on sexual risk behavior and incident STI in clinically recruited adult men and women, participants (N = 243) used study-provided cell phones to complete thrice-daily EMA diaries monitoring individual and partner-specific emotional attributes, non-sexual activities, non-coital or coital sexual behaviors, and contraceptive behaviors. Using these data, we assess feasibility in terms of participant compliance, behavior reactivity, general method acceptability and method efficacy for capturing behaviors. Results Participants were highly compliant with the diary entry protocol and schedule: over the entire 12 study weeks, participants submitted 89.7% (54,914/61,236) of the expected diary entries, with an average of 18.86 of the 21 expected diaries (85.7%) each week. Submission did not differ substantially across gender, race/ethnicity and baseline sexually transmitted infection status. A sufficient volume and range of sexual behaviors were captured, with reporting trends in different legal and illegal behaviors showing small variation over time. Participants found the methodology acceptable, enjoyed the study, and felt comfortable participating. Conclusion Achieving the correct medium of data collection can drastically improve, or degrade, the timeliness and quality of an

  20. CCD Camera

    Science.gov (United States)

    Roth, Roger R.

    1983-01-01

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  1. Flying solo: A review of the literature on wayfinding for older adults experiencing visual or cognitive decline.

    Science.gov (United States)

    Bosch, Sheila J; Gharaveis, Arsalan

    2017-01-01

    Accessible tourism is a growing market within the travel industry, but little research has focused on travel barriers for older adults who may be experiencing visual and cognitive decline as part of the normal aging process, illness, or other disabling conditions. Travel barriers, such as difficulty finding one's way throughout an airport, may adversely affect older adults' travel experience, thereby reducing their desire to travel. This review of the literature investigates wayfinding strategies to ensure that older passengers who have planned to travel independently can do so with dignity. These include facility planning and design strategies (e.g., layout, signage) and technological solutions. Although technological approaches, such as smart phone apps, appear to offer the most promising new solutions for enhancing airport navigation, more traditional approaches, such as designing facilities with an intuitive building layout, are still heavily relied upon in the aviation industry. While there are many design guidelines for enhancing wayfinding for older adults, many are not based on scientific investigation.

  2. A mobile phone-based context-aware video management application

    Science.gov (United States)

    Lahti, Janne; Palola, Marko; Korva, Jari; Westermann, Utz; Pentikousis, Kostas; Pietarila, Paavo

    2006-02-01

    We present a video management system comprising a video server and a mobile camera-phone application called MobiCon, which allows users to capture videos, annotate them with metadata, specify digital rights management (DRM) settings, upload the videos over the cellular network, and share them with others. Once stored in the video server, users can then search their personal video collection via a web interface, and watch the video clips using a wide range of terminals. We describe the MobiCon architecture, compare it with related work, provide an overview of the video server, and illustrate a typical user scenario from the point of capture to video sharing and video searching. Our work takes steps forward in advancing the mobile camera-phone from a video playback device to a video production tool. We summarize field trial results conducted in the area of Oulu, Finland, which demonstrate that users can master the application quickly, but are unwilling to perform extensive manual annotations. Based on the user trial results and our own experience, we present future development directions for MobiCon, in particular, and the video management architecture, in general.

  3. Mobile phone-based biosensing: An emerging "diagnostic and communication" technology.

    Science.gov (United States)

    Quesada-González, Daniel; Merkoçi, Arben

    2017-06-15

    In this review we discuss recent developments on the use of mobile phones and similar devices for biosensing applications in which diagnostics and communications are coupled. Owing to the capabilities of mobile phones (their cameras, connectivity, portability, etc.) and to advances in biosensing, the coupling of these two technologies is enabling portable and user-friendly analytical devices. Any user can now perform quick, robust and easy (bio)assays anywhere and at any time. Among the most widely reported of such devices are paper-based platforms. Herein we provide an overview of a broad range of biosensing possibilities, from optical to electrochemical measurements; explore the various reported designs for adapters; and consider future opportunities for this technology in fields such as health diagnostics, safety & security, and environment monitoring.

  4. Color Targets: Fiducials to Help Visually Impaired People Find Their Way by Camera Phone

    Directory of Open Access Journals (Sweden)

    Manduchi Roberto

    2007-01-01

    Full Text Available A major challenge faced by the blind and visually impaired population is that of wayfinding—the ability of a person to find his or her way to a given destination. We propose a new wayfinding aid based on a camera cell phone, which is held by the user to find and read aloud specially designed machine-readable signs, which we call color targets, in indoor environments (labeling locations such as offices and restrooms). Our main technical innovation is that we have designed the color targets to be detected and located in fractions of a second on the cell phone CPU, even at a distance of several meters. Once the sign has been quickly detected, nearby information in the form of a barcode can be read, an operation that typically requires more computational time. An important contribution of this paper is a principled method for optimizing the design of the color targets and the color target detection algorithm based on training data, instead of relying on heuristic choices as in our previous work. We have implemented the system on a Nokia 7610 cell phone, and preliminary experiments with blind subjects demonstrate the feasibility of using the system as a real-time wayfinding aid.
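The core trick behind color targets, a per-pixel color test cheap enough to run in real time on a phone CPU, can be illustrated with a toy scanner that looks for a fixed vertical color sequence. The paper optimizes the target colors and detector from training data, so the red/white/green pattern, thresholds, and spacing below are purely hypothetical.

```python
import numpy as np

# Per-pixel color predicates: cheap comparisons, no floating point needed.
def is_red(p):   return p[0] > 200 and p[1] < 80 and p[2] < 80
def is_white(p): return p.min() > 200
def is_green(p): return p[1] > 200 and p[0] < 80 and p[2] < 80

def find_targets(img, step=4):
    """Return (row, col) positions where a red/white/green vertical
    sequence appears, sampling pixels `step` rows apart.
    img: HxWx3 uint8 RGB array."""
    hits = []
    h, w, _ = img.shape
    for r in range(0, h - 2 * step):
        for c in range(w):
            top, mid, bot = img[r, c], img[r + step, c], img[r + 2 * step, c]
            if is_red(top) and is_white(mid) and is_green(bot):
                hits.append((r, c))
    return hits

# Synthetic 40x40 scene with one target: colored bands at rows 10-21, cols 18-22.
img = np.zeros((40, 40, 3), np.uint8)
img[10:14, 18:23] = (255, 0, 0)      # red band
img[14:18, 18:23] = (255, 255, 255)  # white band
img[18:22, 18:23] = (0, 255, 0)      # green band
print(len(find_targets(img)) > 0)  # True
```

Because each test is a handful of comparisons, the whole frame can be scanned quickly even on modest hardware; in the actual system, a detected target then cues the slower barcode-reading step nearby.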

  5. Phases in development of an interactive mobile phone-based system to support self-management of hypertension

    Directory of Open Access Journals (Sweden)

    Hallberg I

    2014-05-01

    Full Text Available Inger Hallberg,1,11 Charles Taft,1,11 Agneta Ranerup,2,11 Ulrika Bengtsson,1,11 Mikael Hoffmann,3,10 Stefan Höfer,4 Dick Kasperowski,5 Åsa Mäkitalo,6 Mona Lundin,6 Lena Ring,7,8 Ulf Rosenqvist,9 Karin Kjellgren1,10,11 1Institute of Health and Care Sciences, 2Department of Applied Information Technology, University of Gothenburg, Gothenburg, 3The NEPI Foundation, Linköping, Sweden; 4Department of Medical Psychology, Innsbruck Medical University, Innsbruck, Austria; 5Department of Philosophy, Linguistics and Theory of Science, 6Department of Education, Communication and Learning, University of Gothenburg, Gothenburg, 7Centre for Research Ethics and Bioethics, Uppsala University, 8Department of Use of Medical Products, Medical Products Agency, Uppsala, 9Department of Medical Specialist and Department of Medical and Health Sciences, Linköping University, Motala, 10Department of Medical and Health Sciences, Linköping University, Linköping, 11Centre for Person-Centred Care, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden Abstract: Hypertension is a significant risk factor for heart disease and stroke worldwide. Effective treatment regimens exist; however, treatment adherence rates are poor (30%–50%). Improving self-management may be a way to increase adherence to treatment. The purpose of this paper is to describe the phases in the development and preliminary evaluation of an interactive mobile phone-based system aimed at supporting patients in self-managing their hypertension. A person-centered and participatory framework emphasizing patient involvement was used. An interdisciplinary group of researchers, patients with hypertension, and health care professionals who were specialized in hypertension care designed and developed a set of questions and motivational messages for use in an interactive mobile phone-based system. Guided by the US Food and Drug Administration framework for the development of patient-reported outcome

  6. Mobile phone base stations and adverse health effects: phase 2 of a cross-sectional study with measured radio frequency electromagnetic fields

    DEFF Research Database (Denmark)

    Berg-Beckhoff, Gabriele; Blettner, M; Kowall, B

    2009-01-01

    affected their health and they gave information on sleep disturbances, headaches, health complaints and mental and physical health using standardised health questionnaires. Information on stress was also collected. Multiple linear regression models were used with health outcomes as dependent variables (n...... not report more headaches or less mental and physical health. Individuals concerned about mobile phone base stations did not have different well-being scores compared with those who were not concerned. CONCLUSIONS: In this large population-based study, measured RF-EMFs emitted from mobile phone base stations...

  7. Effects of competing environmental variables and signage on route-choices in simulated everyday and emergency wayfinding situations.

    Science.gov (United States)

    Vilar, Elisângela; Rebelo, Francisco; Noriega, Paulo; Duarte, Emília; Mayhorn, Christopher B

    2014-01-01

    This study examined the relative influence of environmental variables (corridor width and brightness) and signage (directional and exit signs), when presented in competition, on participants' route choices in two situations (everyday vs. emergency) during indoor wayfinding in virtual environments. A virtual reality-based methodology was used: participants attempted to find a room in a virtual hotel (everyday situation), followed by a fire-related emergency egress (emergency situation). Different behaviours were observed. In the everyday situation, in the no-signs condition, participants mostly chose the wider and brighter corridors, suggesting a heavy reliance on environmental affordances. Conversely, in the signs condition, participants mostly complied with signage, suggesting a greater reliance on the signs than on environmental cues. During the emergency, without signage, reliance on environmental affordances seemed to be affected by the intersection type. In the signs condition, the initially strong reliance on environmental affordances decreased along the egress route.

  8. Interpretation of way-finding healthcare symbols by a multicultural population: navigation signage design for global health.

    Science.gov (United States)

    Hashim, Muhammad Jawad; Alkaabi, Mariam Salem Khamis Matar; Bharwani, Sulaiman

    2014-05-01

    The interpretation of way-finding symbols for healthcare facilities in a multicultural community was assessed in a cross-sectional study. One hundred participants recruited from Al Ain city in the United Arab Emirates were asked to interpret 28 healthcare symbols developed at Hablamos Juntos (such as vaccinations and laboratory) as well as 18 general-purpose symbols (such as elevators and restrooms). The mean age was 27.6 years (range 16-55 years), and 84 participants (84%) were female. Healthcare symbols were more difficult to comprehend than general-purpose signs. Symbols referring to abstract concepts were the most misinterpreted, including oncology, diabetes education, outpatient clinic, interpretive services, pharmacy, internal medicine, registration, social services, obstetrics and gynecology, pediatrics and infectious diseases. Interpretation rates varied across cultural backgrounds and increased with higher education and younger age. Signage within healthcare facilities should be tested among older persons, those with limited literacy, and across a wide range of cultures.

  9. Validity of at home model predictions as a proxy for personal exposure to radiofrequency electromagnetic fields from mobile phone base stations

    NARCIS (Netherlands)

    Martens, Astrid L; Bolte, John F B; Beekhuizen, Johan; Kromhout, Hans; Smid, Tjabe; Vermeulen, Roel C H

    2015-01-01

    BACKGROUND: Epidemiological studies on the potential health effects of RF-EMF from mobile phone base stations require efficient and accurate exposure assessment methods. Previous studies have demonstrated that the 3D geospatial model NISMap is able to rank locations by indoor and outdoor RF-EMF expo

  10. Integrating mobile-phone based assessment for psychosis into people’s everyday lives and clinical care: a qualitative study

    Directory of Open Access Journals (Sweden)

    Palmier-Claus Jasper E

    2013-01-01

    Full Text Available Abstract Background Over the past decade policy makers have emphasised the importance of healthcare technology in the management of long-term conditions. Mobile-phone based assessment may be one method of facilitating clinically- and cost-effective intervention, and increasing the autonomy and independence of service users. Recently, text-message and smartphone interfaces have been developed for the real-time assessment of symptoms in individuals with schizophrenia. Little is currently understood about patients’ perceptions of these systems, and how they might be implemented into their everyday routine and clinical care. Method 24 community-based individuals with non-affective psychosis completed a randomised repeated-measure cross-over design study, where they filled in self-report questions about their symptoms via text-messages on their own phone, or via a purpose-designed software application for Android smartphones, for six days. Qualitative interviews were conducted in order to explore participants’ perceptions and experiences of the devices, and thematic analysis was used to analyse the data. Results Three themes emerged from the data: (i) the appeal of usability and familiarity, (ii) acceptability, validity and integration into domestic routines, and (iii) perceived impact on clinical care. Although participants generally found the technology non-stigmatising and well integrated into their everyday activities, the repetitiveness of the questions was identified as a likely barrier to long-term adoption. Potential benefits to the quality of care received were seen in terms of assisting clinicians, faster and more efficient data exchange, and aiding patient-clinician communication. However, patients often failed to see the relevance of the systems to their personal situations, and emphasised the threat to the person-centred element of their care. Conclusions The feedback presented in this paper suggests that patients are conscious of the

  11. Mobile phone-based asthma self-management aid for adolescents (mASMAA): a feasibility study

    Directory of Open Access Journals (Sweden)

    Rhee H

    2014-01-01

    Full Text Available Hyekyun Rhee,1 James Allen,2 Jennifer Mammen,1 Mary Swift2 1School of Nursing, 2Department of Computer Science, University of Rochester, Rochester, NY, USA. Purpose: Adolescents report high asthma-related morbidity that can be prevented by adequate self-management of the disease. Therefore, there is a need for a developmentally appropriate strategy to promote effective asthma self-management. Mobile phone-based technology is portable, commonly accessible, and well received by adolescents. The purpose of this study was to develop and evaluate the feasibility and acceptability of a comprehensive mobile phone-based asthma self-management aid for adolescents (mASMAA) that was designed to facilitate symptom monitoring, treatment adherence, and adolescent–parent partnership. The system used state-of-the-art natural language-understanding technology that allowed teens to use unconstrained English in their texts, and to self-initiate interactions with the system. Materials and methods: mASMAA was developed based on an existing natural dialogue system that supports broad coverage of everyday natural conversation in English. Fifteen adolescent–parent dyads participated in a 2-week trial that involved adolescents' daily scheduled and unscheduled interactions with mASMAA and parents responding to daily reports on adolescents' asthma condition automatically generated by mASMAA. Subsequently, four focus groups were conducted to systematically obtain user feedback on the system. Frequency data on the daily usage of mASMAA over the 2-week period were tabulated, and content analysis was conducted for focus group interview data. Results: Response rates for daily text messages were 81%–97% in adolescents. The average number of self-initiated messages to mASMAA was 19 per adolescent. Symptoms were the most common topic of teen-initiated messages. Participants concurred that use of mASMAA improved awareness of symptoms and triggers, promoted treatment adherence and

  12. Encouraging 5-year-olds to attend to landmarks: A way to improve children’s wayfinding strategies in a virtual environment.

    Directory of Open Access Journals (Sweden)

    Jamie Lingwood

    2015-03-01

    Full Text Available Wayfinding can be defined as the ability to learn and remember a route through an environment. Previous researchers have shown that young children have difficulties remembering routes. However, very few researchers have considered how to improve young children’s wayfinding abilities. Therefore, we investigated ways to help children increase their wayfinding skills. In two studies, a total of 72 5-year-olds were shown a route in a six-turn virtual environment and were then asked to retrace this route by themselves. A unique landmark was positioned at each junction, and each junction was made up of two paths: a correct choice and an incorrect choice. Two different strategies improved route learning performance. In Experiment 1, verbally labelling landmarks at junctions during the first walk reduced children’s errors at turns, and the number of trials they needed to reach the learning criterion. In Experiment 2, encouraging children to attend to landmarks at junctions on the first walk reduced the children’s errors when making a turn. This is the first study to show that very young children can be taught effective route learning skills.

  13. An Efficient Power Harvesting Mobile Phone-Based Electrochemical Biosensor for Point-of-Care Health Monitoring.

    Science.gov (United States)

    Sun, Alexander C; Yao, Chengyang; Venkatesh, A G; Hall, Drew A

    2016-11-01

    Cellular phone penetration has grown continually over the past two decades, with the number of connected devices rapidly approaching the total world population. Leveraging the worldwide ubiquity and connectivity of these devices, we developed a mobile phone-based electrochemical biosensor platform for point-of-care (POC) diagnostics and wellness tracking. The platform consists of an inexpensive electronic module (a power-harvesting potentiostat) that interfaces with and efficiently harvests power from a wide variety of phones through the audio jack. Active impedance matching improves the harvesting efficiency to 79%. Excluding losses from supply rectification and regulation, the module consumes 6.9 mW peak power, stays within the power budget set by mobile devices, and produces data that matches well with that of an expensive laboratory-grade instrument. We demonstrate that the platform can be used to track the concentration of secretory leukocyte protease inhibitor (SLPI), a biomarker for monitoring lung infections in cystic fibrosis patients, in its physiological range via an electrochemical sandwich assay on disposable screen-printed electrodes with a 1 nM limit of detection.

  14. Supporting the self-management of hypertension: Patients' experiences of using a mobile phone-based system.

    Science.gov (United States)

    Hallberg, I; Ranerup, A; Kjellgren, K

    2016-02-01

    Globally, hypertension is poorly controlled and its treatment consists mainly of preventive behavior, adherence to treatment and risk-factor management. The aim of this study was to explore patients' experiences of an interactive mobile phone-based system designed to support the self-management of hypertension. Forty-nine patients were interviewed about their experiences of using the self-management system for 8 weeks regarding: (i) daily answers on self-report questions concerning lifestyle, well-being, symptoms, medication intake and side effects; (ii) results of home blood-pressure measurements; (iii) reminders and motivational messages; and (iv) access to a web-based platform for visualization of the self-reports. The audio-recorded interviews were analyzed using qualitative thematic analysis. The patients considered the self-management system relevant for the follow-up of hypertension and found it easy to use, but some provided insight into issues for improvement. They felt that using the system offered benefits, for example, increasing their participation during follow-up consultations; they further perceived that it helped them gain understanding of the interplay between blood pressure and daily life, which resulted in increased motivation to follow treatment. Increased awareness of the importance of adhering to prescribed treatment may be a way to minimize the cardiovascular risks of hypertension.

  15. Putting prevention in their pockets: developing mobile phone-based HIV interventions for black men who have sex with men.

    Science.gov (United States)

    Muessig, Kathryn E; Pike, Emily C; Fowler, Beth; LeGrand, Sara; Parsons, Jeffrey T; Bull, Sheana S; Wilson, Patrick A; Wohl, David A; Hightow-Weidman, Lisa B

    2013-04-01

    Young black men who have sex with men (MSM) bear a disproportionate burden of HIV. Rapid expansion of mobile technologies, including smartphone applications (apps), provides a unique opportunity for outreach and tailored health messaging. We collected electronic daily journals and conducted surveys and focus groups with 22 black MSM (age 18-30) at three sites in North Carolina to inform the development of a mobile phone-based intervention. Qualitative data was analyzed thematically using NVivo. Half of the sample earned under $11,000 annually. All participants owned smartphones and had unlimited texting and many had unlimited data plans. Phones were integral to participants' lives and were a primary means of Internet access. Communication was primarily through text messaging and Internet (on-line chatting, social networking sites) rather than calls. Apps were used daily for entertainment, information, productivity, and social networking. Half of participants used their phones to find sex partners; over half used phones to find health information. For an HIV-related app, participants requested user-friendly content about test site locators, sexually transmitted diseases, symptom evaluation, drug and alcohol risk, safe sex, sexuality and relationships, gay-friendly health providers, and connection to other gay/HIV-positive men. For young black MSM in this qualitative study, mobile technologies were a widely used, acceptable means for HIV intervention. Future research is needed to measure patterns and preferences of mobile technology use among broader samples.

  16. Non-specific physical symptoms in relation to actual and perceived proximity to mobile phone base stations and powerlines

    Directory of Open Access Journals (Sweden)

    Bolte John

    2011-06-01

    Full Text Available Abstract Background: Evidence about a possible causal relationship between non-specific physical symptoms (NSPS) and exposure to electromagnetic fields (EMF) emitted by sources such as mobile phone base stations (BS) and powerlines is insufficient. So far, little epidemiological research has been published on the contribution of psychological components to the occurrence of EMF-related NSPS. The primary objective of the current study is to explore the relative importance of actual and perceived proximity to base stations, and of psychological components, as determinants of NSPS, adjusting for demographic, residency and area characteristics. Methods: Analysis was performed on data obtained in a cross-sectional study on environment and health in 2006 in the Netherlands. In the current study, 3611 adult respondents (response rate: 37%) in twenty-two Dutch residential areas completed a questionnaire. Self-reported instruments included a symptom checklist and an assessment of environmental and psychological characteristics. The computation of the distance between household addresses and the locations of base stations and powerlines was based on geo-coding. Multilevel regression models were used to test the hypotheses regarding the determinants related to the occurrence of NSPS. Results: After adjustment for demographic and residential characteristics, the analyses yielded a number of statistically significant associations: increased report of NSPS was predominantly predicted by higher levels of self-reported environmental sensitivity; perceived proximity to base stations and powerlines, lower perceived control and increased avoidance (coping) behavior were also associated with NSPS. A trend towards a moderator effect of perceived environmental sensitivity on the relation between perceived proximity to BS and NSPS was observed (p = 0.055). There was no significant association between symptom occurrence and actual distance to BS or powerlines. Conclusions: Perceived proximity to BS
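
    The multilevel regression approach described in the methods can be sketched in miniature. The following is a hedged illustration on synthetic data: the variable names, effect sizes, and the use of per-area fixed effects (a simplified stand-in for a true random-intercept multilevel model) are all assumptions, not the study's actual analysis.

```python
# Simplified stand-in for a multilevel model of symptom counts: residents
# nested in residential areas, with per-area intercepts (dummy variables)
# plus individual-level predictors, fit by ordinary least squares.
# All data below are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_areas, n_per_area = 22, 50
area = np.repeat(np.arange(n_areas), n_per_area)
sensitivity = rng.normal(size=area.size)             # self-reported env. sensitivity
perceived_near = rng.integers(0, 2, size=area.size)  # perceived proximity to BS (0/1)
area_effect = rng.normal(scale=0.5, size=n_areas)[area]
symptoms = 2.0 + 0.8 * sensitivity + 0.5 * perceived_near + area_effect \
           + rng.normal(size=area.size)

# Design matrix: one intercept column per area, then the two predictors.
X = np.column_stack([np.eye(n_areas)[area], sensitivity, perceived_near])
beta, *_ = np.linalg.lstsq(X, symptoms, rcond=None)
print(round(beta[-2], 2))   # estimated coefficient on sensitivity; true value is 0.8
```

    With roughly a thousand synthetic respondents, the recovered sensitivity coefficient lands close to the value used to generate the data, which is the basic logic behind the adjusted associations the abstract reports.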

  17. Optimization of measurement methods for a multi-frequency electromagnetic field from mobile phone base station using broadband EMF meter

    Directory of Open Access Journals (Sweden)

    Paweł Bieńkowski

    2015-10-01

    Full Text Available Background: This paper presents the characteristics of the mobile phone base station (BS) as an electromagnetic field (EMF) source. The most common system configurations with their construction are described. The parameters of radiated EMF in the context of the access methods and other parameters of the radio transmission are discussed. Attention was also paid to antennas that are used in this technology. Material and Methods: The influence of individual components of a multi-frequency EMF, most commonly found in the BS surroundings, on the resultant EMF strength value indicated by popular broadband EMF meters was analyzed. The examples of metrological characteristics of the most common EMF probes and 2 measurement scenarios of the multisystem base station, with and without microwave relays, are shown. Results: The presented method for measuring the multi-frequency EMF using 2 broadband probes allows for the significant minimization of measurement uncertainty. Equations and formulas that can be used to calculate the actual EMF intensity from multi-frequency sources are shown. They have been verified in laboratory conditions on a dedicated standard setup as well as in real conditions in a survey of an existing base station with microwave relays. Conclusions: The presented measurement methodology for multi-frequency EMF from BS with microwave relays was validated both in laboratory and in real conditions. It has been proven that the described measurement methodology is the optimal approach to the evaluation of EMF exposure in BS surroundings. Alternative approaches with much greater uncertainty (precaution method) or a more complex measuring procedure (source-exclusion method) are also presented. Med Pr 2015;66(5):701–712
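
    The paper's own equations are not reproduced in the abstract, but the standard way a broadband meter combines multi-frequency components can be sketched. The root-sum-square rule and the example field and limit values below are common-practice assumptions, not figures from the paper.

```python
# Combining per-frequency E-field components into a resultant value, as a
# broadband probe effectively does, plus a frequency-weighted compliance
# quotient for limits that differ per band. All numbers are illustrative.
import math

def resultant_field(e_components):
    """Root-sum-square of per-frequency E-field strengths (V/m)."""
    return math.sqrt(sum(e * e for e in e_components))

def exposure_quotient(e_and_limits):
    """Sum of (E_i / limit_i)**2; compliance requires the total <= 1."""
    return sum((e / lim) ** 2 for e, lim in e_and_limits)

# Assumed readings (V/m): e.g. GSM900, UMTS2100, and a microwave-relay band.
fields = [1.2, 0.8, 0.3]
print(round(resultant_field(fields), 3))   # -> 1.473
# ICNIRP-style reference levels (V/m) per band -- illustrative values only.
print(exposure_quotient([(1.2, 41.25), (0.8, 61.0), (0.3, 61.0)]) <= 1.0)
```

    The quotient form matters when components fall in bands with different exposure limits, which is exactly the multi-frequency situation near a base station with microwave relays.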

  18. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  19. Vacuum Camera Cooler

    Science.gov (United States)

    Laugen, Geoffrey A.

    2011-01-01

    Acquiring cheap, moving video was impossible in a vacuum environment, due to camera overheating. This overheating is brought on by the lack of cooling media in vacuum. A water-jacketed camera cooler enclosure machined and assembled from copper plate and tube has been developed. The camera cooler (see figure) is cup-shaped and cooled by circulating water or nitrogen gas through copper tubing. The camera, a store-bought "spy type," is not designed to work in a vacuum. With some modifications the unit can be thermally connected when mounted in the cup portion of the camera cooler. The thermal conductivity is provided by copper tape between parts of the camera and the cooled enclosure. During initial testing of the demonstration unit, the camera cooler kept the CPU (central processing unit) of this video camera at operating temperature. This development allowed video recording of an in-progress test, within a vacuum environment.

  20. Constrained space camera assembly

    Science.gov (United States)

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  1. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gen...

  2. Digital Pinhole Camera

    Science.gov (United States)

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…
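
    The exposure and image-quality trade-offs the article lets students explore can be made quantitative. A minimal sketch follows, assuming Rayleigh's rule of thumb for the sharpest pinhole; the 1.9·sqrt(f·λ) formula is an addition here, not taken from the article.

```python
# Diffraction-limited "best" pinhole size and the resulting f-number, which
# drives the long exposure times students observe. Rayleigh's rule of thumb
# (d = 1.9 * sqrt(f * lambda)) is a standard-optics assumption.
import math

def optimal_pinhole_diameter(focal_m, wavelength_m=550e-9):
    """Sharpest pinhole diameter for a given pinhole-to-sensor distance."""
    return 1.9 * math.sqrt(focal_m * wavelength_m)

def f_number(focal_m, diameter_m):
    """f-number sets exposure: halving the aperture roughly quadruples it."""
    return focal_m / diameter_m

d = optimal_pinhole_diameter(0.05)   # pinhole 50 mm from the sensor
print(round(d * 1e3, 2), "mm")       # -> 0.32 mm
print(round(f_number(0.05, d)))      # -> 159
```

    At roughly f/159, even bright daylight scenes need exposures orders of magnitude longer than a lensed camera at f/8, which is why digital pinhole photography makes the exposure trade-off so tangible.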

  3. Efficacy of a randomized cell phone-based counseling intervention in postponing subsequent pregnancy among teen mothers.

    Science.gov (United States)

    Katz, Kathy S; Rodan, Margaret; Milligan, Renee; Tan, Sylvia; Courtney, Lauren; Gantz, Marie; Blake, Susan M; McClain, Lenora; Davis, Maurice; Kiely, Michele; Subramanian, Siva

    2011-12-01

    considerable challenges to treatment success. Individual, social, and contextual factors are all important to consider in the prevention of repeat teen pregnancy. Cell phone-based approaches to counseling may not be the most ideal for addressing complex, socially-mediated behaviors such as this, except for selective subgroups. A lack of resources within the community for older teens may interfere with program success.

  4. Clinically defined non-specific symptoms in the vicinity of mobile phone base stations: A retrospective before-after study

    Energy Technology Data Exchange (ETDEWEB)

    Baliatsas, Christos, E-mail: c.baliatsas@nivel.nl [Netherlands Institute for Health Services Research (NIVEL), Utrecht (Netherlands); Kamp, Irene van, E-mail: irene.van.kamp@rivm.nl [National Institute for Public Health and the Environment (RIVM), Bilthoven (Netherlands); Bolte, John, E-mail: john.bolte@rivm.nl [National Institute for Public Health and the Environment (RIVM), Bilthoven (Netherlands); Kelfkens, Gert, E-mail: gert.kelfkens@rivm.nl [National Institute for Public Health and the Environment (RIVM), Bilthoven (Netherlands); Dijk, Christel van, E-mail: Christel.Van.Dijk@amsterdam.nl [Department of Research, Information and Statistics (OIS), Municipality of Amsterdam, Amsterdam (Netherlands); Spreeuwenberg, Peter, E-mail: p.spreeuwenberg@nivel.nl [Netherlands Institute for Health Services Research (NIVEL), Utrecht (Netherlands); Hooiveld, Mariette, E-mail: m.hooiveld@nivel.nl [Netherlands Institute for Health Services Research (NIVEL), Utrecht (Netherlands); Lebret, Erik, E-mail: erik.lebret@rivm.nl [National Institute for Public Health and the Environment (RIVM), Bilthoven (Netherlands); Institute for Risk Assessment Sciences (IRAS), Utrecht University, Utrecht (Netherlands); Yzermans, Joris, E-mail: J.Yzermans@nivel.nl [Netherlands Institute for Health Services Research (NIVEL), Utrecht (Netherlands)

    2016-09-15

    The number of mobile phone base stations (MPBS) has been increasing to meet rapid technological changes and growing needs for mobile communication. The primary objective of the present study was to test possible changes in the prevalence and number of NSS in relation to MPBS exposure before and after an increase in installed MPBS antennas. A retrospective cohort study was conducted, comparing two time periods with high contrast in terms of the number of installed MPBS. Symptom data were based on electronic health records from 1069 adult participants, registered in 9 general practices in different regions in the Netherlands. All participants were living within 500 m of the nearest base station. Among them, 55 participants reported being sensitive to MPBS at T1. A propagation model combined with a questionnaire was used to assess indoor exposure to RF-EMF from MPBS at T1. Estimation of exposure at T0 was based on the number of antennas at T0 relative to T1. At T1, there was a > 30% increase in the total number of MPBS antennas. A higher prevalence for most NSS was observed in the MPBS-sensitive group at T1 compared to baseline. Exposure estimates were not associated with GP-registered NSS in the total sample. Some significant interactions were observed between MPBS-sensitivity and exposure estimates on the risk of symptoms. Using clinically defined outcomes and a time difference of > 6 years, it was demonstrated that RF-EMF exposure from MPBS was not associated with the development of NSS. Nonetheless, there was some indication of a higher risk of NSS for the MPBS-sensitive group, mainly in relation to exposure to UMTS, but this should be interpreted with caution. Results have to be verified by future longitudinal studies with a particular focus on potentially susceptible population subgroups of large sample size and integrated exposure assessment. - Highlights: • There was an important increase in the total number of MPBS at T1 compared to T0. • Prevalence of NSS was

  5. The Application of Architecture Metaphor in Historical and Cultural Block Wayfinding Design%城市历史文化街区导识系统设计中的“建筑隐喻”※

    Institute of Scientific and Technical Information of China (English)

    张莉娜; 王宗雪

    2013-01-01

    Taking the influence of architectural metaphor on the wayfinding systems of urban historical and cultural blocks as its starting point, and using the wayfinding system designs of the Nanluoguxiang and Yandai Xiejie historical blocks in Beijing as examples, this paper analyzes architectural metaphor in wayfinding design from two aspects: architectural form and architectural decoration. It argues that the significance of architectural metaphor for wayfinding design lies not in the simple "copying" of traditional visual symbols, but in the renewal and re-presentation of the regional cultural image.

  6. Microchannel plate streak camera

    Science.gov (United States)

    Wang, Ching L.

    1989-01-01

    An improved streak camera in which a microchannel plate electron multiplier is used in place of or in combination with the photocathode used in prior streak cameras. The improved streak camera is far more sensitive to photons (UV to gamma-rays) than the conventional x-ray streak camera which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1000 keV x-rays.

  7. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  8. LSST Camera Optics Design

    Energy Technology Data Exchange (ETDEWEB)

    Riot, V J; Olivier, S; Bauman, B; Pratuch, S; Seppala, L; Gilmore, D; Ku, J; Nordby, M; Foss, M; Antilogus, P; Morgado, N

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror, telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. Optical design of the camera lenses and filters is integrated in with the optical design of telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  9. Ringfield lithographic camera

    Science.gov (United States)

    Sweatt, William C.

    1998-01-01

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D.sub.source .apprxeq.0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  10. A cross-sectional case control study on genetic damage in individuals residing in the vicinity of a mobile phone base station.

    Science.gov (United States)

    Gandhi, Gursatej; Kaur, Gurpreet; Nisar, Uzma

    2015-01-01

    Mobile phone base stations facilitate good communication, but the continuously emitting radiations from these stations have raised health concerns. Hence in this study, genetic damage was assessed using the single cell gel electrophoresis (comet) assay in peripheral blood leukocytes of individuals residing in the vicinity of a mobile phone base station, and compared to that in healthy controls. The power density in the area within 300 m from the base station exceeded the permissible limits and was significantly (p = 0.000) higher compared to the area from where control samples were collected. The study participants comprised 63 persons with residences near a mobile phone tower, and 28 healthy controls matched for gender, age, alcohol drinking and occupational sub-groups. Genetic damage parameters of DNA migration length, damage frequency (DF) and damage index were significantly (p = 0.000) elevated in the sample group compared to respective values in healthy controls. The female residents (n = 25) of the sample group had significantly (p = 0.004) elevated DF compared with the male residents (n = 38). The linear regression analysis further revealed daily mobile phone usage, location of residence and power density as significant predictors of genetic damage. The genetic damage evident in the participants of this study needs to be addressed against future disease-risk, which, in addition to neurodegenerative disorders, may lead to cancer.

  11. Camera as Cultural Critique

    DEFF Research Database (Denmark)

    Suhr, Christian

    2015-01-01

    What does the use of cameras entail for the production of cultural critique in anthropology? Visual anthropological analysis and cultural critique starts at the very moment a camera is brought into the field or existing visual images are engaged. The framing, distances, and interactions between...... to establish analysis as a continued, iterative movement of transcultural dialogue and critique....

  12. Camera Operator and Videographer

    Science.gov (United States)

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  13. Dry imaging cameras

    Directory of Open Access Journals (Sweden)

    I K Indrajit

    2011-01-01

    Full Text Available Dry imaging cameras are important hard copy devices in radiology. Using a dry imaging camera, multiformat images of digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes in areas of diverse sciences like computing, mechanics, thermal science, optics, electricity and radiography. Broadly, hard copy devices are classified as laser-based and non-laser-based technologies. When compared with the working knowledge and technical awareness of different modalities in radiology, the understanding of a dry imaging camera is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and the important issues that impact radiology workflow.

  14. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up...... a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection......, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras....
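
    The passive-sensing principle the survey describes follows from basic radiometry. A minimal sketch using the Stefan-Boltzmann law (standard physics, not code from the survey; the emissivity and temperatures are illustrative assumptions):

```python
# Power radiated by a surface via the Stefan-Boltzmann law. A thermal camera
# images exactly this kind of contrast between a warm body and its background,
# with no illumination required.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(temp_k, emissivity=0.98, area_m2=1.0):
    """Total power radiated by a grey-body surface (W)."""
    return emissivity * SIGMA * area_m2 * temp_k ** 4

# A person (~305 K skin) against a ~293 K indoor background.
person = radiated_power(305.0)
background = radiated_power(293.0)
print(round(person - background, 1), "W/m^2")   # -> 71.3 W/m^2
```

    The strong fourth-power dependence on temperature is what makes even a 12 K difference easily detectable, and it explains why the survey highlights human detection and tracking as a natural application.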

  15. Do Speed Cameras Reduce Collisions?

    OpenAIRE

    Skubic, Jeffrey; Johnson, Steven B.; Salvino, Chris; Vanhoy, Steven; Hu, Chengcheng

    2013-01-01

    We investigated the effects of speed cameras along a 26 mile segment in metropolitan Phoenix, Arizona. Motor vehicle collisions were retrospectively identified according to three time periods – before cameras were placed, while cameras were in place and after cameras were removed. A 14 mile segment in the same area without cameras was used for control purposes. Five confounding variables were eliminated. In this study, the placement or removal of interstate highway speed cameras did not indepe...

  16. Do speed cameras reduce collisions?

    Science.gov (United States)

    Skubic, Jeffrey; Johnson, Steven B; Salvino, Chris; Vanhoy, Steven; Hu, Chengcheng

    2013-01-01

    We investigated the effects of speed cameras along a 26 mile segment in metropolitan Phoenix, Arizona. Motor vehicle collisions were retrospectively identified according to three time periods - before cameras were placed, while cameras were in place and after cameras were removed. A 14 mile segment in the same area without cameras was used for control purposes. Five confounding variables were eliminated. In this study, the placement or removal of interstate highway speed cameras did not independently affect the incidence of motor vehicle collisions.

  17. Are people living next to mobile phone base stations more strained? Relationship of health concerns, self-estimated distance to base station, and psychological parameters

    Directory of Open Access Journals (Sweden)

    Augner Christoph

    2009-01-01

    Full Text Available Background and Aims: Coeval with the expansion of mobile phone technology and the associated obvious presence of mobile phone base stations, some people living close to these masts reported symptoms they attributed to electromagnetic fields (EMF). Public and scientific discussions arose with regard to whether these symptoms were due to EMF or were nocebo effects. The aim of this study was to find out if people who believe that they live close to base stations show psychological or psychobiological differences that would indicate more strain or stress. Furthermore, we wanted to detect the relevant connections linking self-estimated distance between home and the nearest mobile phone base station (DBS), daily use of mobile phone (MPU), EMF-health concerns, electromagnetic hypersensitivity, and psychological strain parameters. Design, Materials and Methods: Fifty-seven participants completed standardized and non-standardized questionnaires that focused on the relevant parameters. In addition, saliva samples were used to assess psychobiological strain via concentrations of alpha-amylase, cortisol, immunoglobulin A (IgA), and substance P. Results: Self-declared base station neighbors (DBS ≤ 100 meters) had significantly higher concentrations of alpha-amylase in their saliva, higher scores on the symptom checklist (SCL) subscales somatization, obsessive-compulsive, anxiety and phobic anxiety, and a higher global strain index PST (Positive Symptom Total). There were no differences on the EMF-related health concern scales. Conclusions: We conclude that self-declared base station neighbors are more strained than others. EMF-related health concerns cannot explain these findings. Further research should identify whether actual EMF exposure or other factors are responsible for these results.

  18. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, briefly describing the hardware involved and the data obtained in their various applications. The Advanced Development Group, Defense Sciences Engineering Division, has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  19. TARGETLESS CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    L. Barazzetti

    2012-09-01

    Full Text Available In photogrammetry a camera is considered calibrated if its interior orientation parameters are known. These encompass the principal distance, the principal point position and some Additional Parameters used to model possible systematic errors. The current state of the art for automated camera calibration relies on the use of coded targets to accurately determine the image correspondences. This paper presents a new methodology for the efficient and rigorous photogrammetric calibration of digital cameras which no longer requires the use of targets. A set of images depicting a scene with good texture is sufficient for the extraction of natural corresponding image points. These are automatically matched with feature-based approaches and robust estimation techniques. The subsequent photogrammetric bundle adjustment retrieves the unknown camera parameters and their theoretical accuracies. Examples, considerations and comparisons with real data and different case studies are illustrated to show the potential of the proposed methodology.

  20. TOUCHSCREEN USING WEB CAMERA

    Directory of Open Access Journals (Sweden)

    Kuntal B. Adak

    2015-10-01

    Full Text Available In this paper we present a web camera based touchscreen system which uses a simple technique to detect and locate a finger. We have used a camera and a regular screen to achieve our goal. By capturing the video and calculating the position of the finger on the screen, we can determine the touch position and trigger a function at that location. Our method is very easy and simple to implement, and our system requirements are less expensive compared to other techniques.

  1. The Circular Camera Movement

    DEFF Research Database (Denmark)

    Hansen, Lennard Højbjerg

    2014-01-01

    It has been an accepted precept in film theory that specific stylistic features do not express specific content. Nevertheless, it is possible to find many examples in the history of film in which stylistic features do express specific content: for instance, the circular camera movement is used...... such as the circular camera movement. Keywords: embodied perception, embodied style, explicit narration, interpretation, style pattern, television style...

  2. Segment Based Camera Calibration

    Institute of Scientific and Technical Information of China (English)

    马颂德; 魏国庆; 等

    1993-01-01

    The basic idea of calibrating a camera system in previous approaches is to determine camera parameters by using a set of known 3D points as calibration reference. In this paper, we present a method of camera calibration in which camera parameters are determined by a set of 3D lines. A set of constraints is derived on camera parameters in terms of perspective line mapping. From these constraints, the same perspective transformation matrix as that for point mapping can be computed linearly. The minimum number of calibration lines is 6. This result generalizes that of Liu, Huang and Faugeras [12] for camera location determination, in which at least 8 line correspondences are required for linear computation of camera location. Since line segments in an image can be located easily and more accurately than points, the use of lines as calibration reference tends to ease the computation in image preprocessing and to improve calibration accuracy. Experimental results on the calibration along with stereo reconstruction are reported.
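
    The perspective transformation matrix the paper recovers from line correspondences is the same 3x4 matrix classically estimated from point correspondences. As a hedged point of comparison, here is a sketch of the point-based direct linear transform (DLT); the synthetic camera and point values are illustrative assumptions, not data from the paper.

```python
# Direct linear transform: recover the 3x4 projection matrix P from point
# correspondences by solving a homogeneous linear system via SVD, then
# check it against the ground-truth camera (up to the usual scale ambiguity).
import numpy as np

def dlt_projection_matrix(points_3d, points_2d):
    """Estimate P (3x4) from >= 6 point correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)   # null vector = flattened P

# Synthetic ground-truth camera (illustrative intrinsics/extrinsics).
P_true = np.array([[800.0, 0, 320, 10],
                   [0, 800.0, 240, 20],
                   [0, 0, 1, 2.0]])
rng = np.random.default_rng(1)
pts3 = rng.uniform(-1, 1, size=(8, 3))
proj = (P_true @ np.c_[pts3, np.ones(8)].T).T
pts2 = proj[:, :2] / proj[:, 2:]          # perspective division

P_est = dlt_projection_matrix(pts3, pts2)
P_est /= P_est[2, 3]                      # remove the scale ambiguity
print(np.allclose(P_est, P_true / P_true[2, 3], atol=1e-6))   # -> True
```

    The line-based method of the paper produces linear constraints of an analogous form, which is why the same matrix falls out of a linear solve there as well.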

  3. Wayfinding in Social Networks

    Science.gov (United States)

    Liben-Nowell, David

    With the recent explosion of popularity of commercial social-networking sites like Facebook and MySpace, the size of social networks that can be studied scientifically has passed from the scale traditionally studied by sociologists and anthropologists to the scale of networks more typically studied by computer scientists. In this chapter, I will highlight a recent line of computational research into the modeling and analysis of the small-world phenomenon - the observation that typical pairs of people in a social network are connected by very short chains of intermediate friends - and the ability of members of a large social network to collectively find efficient routes to reach individuals in the network. I will survey several recent mathematical models of social networks that account for these phenomena, with an emphasis on both the provable properties of these social-network models and the empirical validation of the models against real large-scale social-network data.
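
    The routing question the chapter surveys (can members find short chains using only local knowledge?) can be sketched on a toy network. The following is a ring lattice with uniform random shortcuts, a deliberate simplification of the Kleinberg-style models the chapter discusses, with all parameters chosen for illustration.

```python
# Greedy wayfinding on a ring of n people: each person knows their two ring
# neighbours plus one random long-range contact, and always forwards the
# message to the known contact closest to the target.
import random

def build_ring(n, seed=0):
    """Each node's contacts: both ring neighbours plus one uniform shortcut."""
    rng = random.Random(seed)
    return {i: {(i - 1) % n, (i + 1) % n, rng.randrange(n)} - {i}
            for i in range(n)}

def ring_dist(a, b, n):
    """Shortest distance between two positions on the ring."""
    return min((a - b) % n, (b - a) % n)

def greedy_route(contacts, src, dst, n):
    """Hop count of greedy forwarding; a ring neighbour is always closer,
    so the distance strictly decreases and the loop terminates."""
    hops, cur = 0, src
    while cur != dst:
        cur = min(contacts[cur], key=lambda c: ring_dist(c, dst, n))
        hops += 1
    return hops

n = 1000
contacts = build_ring(n)
hops = greedy_route(contacts, 0, n // 2, n)   # route to the far side of the ring
print(hops <= n // 2)                         # -> True
```

    In Kleinberg's analysis, the shortcut distribution matters enormously: uniform shortcuts like these do not give polylogarithmic greedy routes, whereas distance-weighted shortcuts at the critical exponent do, which is the heart of the results the chapter surveys.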

  4. Linking Wayfinding and Wayfaring

    DEFF Research Database (Denmark)

    Lanng, Ditte Bendix; Jensen, Ole B.

    2016-01-01

    …the so-called mobilities turn, in which mobility is viewed as a complex, multilayered process that entails much more than simply getting from point A to point B (see Cresswell 2006; Jensen 2013; Urry 2007). The structure of the chapter is simple: we first introduce the concepts that are key to linking wayfinding

  5. Neutron counting with cameras

    Energy Technology Data Exchange (ETDEWEB)

    Van Esch, Patrick; Crisanti, Marta; Mutti, Paolo [Institut Laue Langevin, Grenoble (France)

    2015-07-01

    A research project is presented in which we aim at counting individual neutrons with CCD-like cameras. We explore theoretically a technique that allows us to use imaging detectors as counting detectors at lower counting rates, transiting smoothly to continuous imaging at higher counting rates. As such, the hope is to combine the good background rejection properties of standard neutron counting detectors with the absence of dead time of integrating neutron imaging cameras, as well as their very good spatial resolution. Compared to X-ray detection, the essence of thermal neutron detection is the nuclear conversion reaction. The released energies involved are of the order of a few MeV, while X-ray detection releases energies of the order of the photon energy, which is in the 10 keV range. Thanks to advances in camera technology which have resulted in increased quantum efficiency, lower noise, and frame rates up to 100 fps for CMOS-type cameras, this more than 100-fold higher available detection energy implies that the individual neutron detection light signal can be significantly above the noise level, thus allowing for discrimination and individual counting, which is hard to achieve with X-rays. The time scale of CMOS-type cameras doesn't allow one to consider time-of-flight measurements, but kinetic experiments in the 10 ms range are possible. The theory is then confronted with the first experimental results. (authors)
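    The counting idea can be sketched simply: at low rates each neutron's conversion light appears in a frame as an isolated bright cluster well above the camera noise, so per-frame connected-component counting recovers individual events. The threshold and 4-connectivity below are illustrative assumptions, not the project's actual discrimination scheme:

```python
import numpy as np

def count_events(frame, threshold):
    """Count isolated bright spots (candidate neutron events) in one frame.
    Pixels above `threshold` are grouped into 4-connected components via a
    simple flood fill; each component counts as one event."""
    mask = frame > threshold
    visited = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    events = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                events += 1
                stack = [(i, j)]
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
    return events
```

    At higher rates clusters begin to overlap, which is where such a scheme would transit to plain integrated imaging.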

  6. The Dark Energy Camera

    Energy Technology Data Exchange (ETDEWEB)

    Flaugher, B. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States). et al.

    2015-04-11

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250-μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The CCDs have 15μm x 15μm pixels with a plate scale of 0.263" per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  7. CAOS-CMOS camera.

    Science.gov (United States)

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
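    The quoted figures (51.3 dB, 82.06 dB) follow the usual optical convention of 20·log10 of the bright-to-dim irradiance ratio; a one-line helper makes the conversion concrete (the convention is an assumption inferred from standard imaging practice, not stated in the record):

```python
import math

def dynamic_range_db(i_max, i_min):
    """Optical dynamic range in decibels: DR = 20 * log10(I_max / I_min)."""
    return 20.0 * math.log10(i_max / i_min)

# Under this convention, 82.06 dB corresponds to a brightness ratio
# of roughly 1.27e4 : 1 between the strongest and weakest targets.
ratio = 10 ** (82.06 / 20)
```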

  8. The Dark Energy Camera

    CERN Document Server

    Flaugher, B; Honscheid, K; Abbott, T M C; Alvarez, O; Angstadt, R; Annis, J T; Antonik, M; Ballester, O; Beaufore, L; Bernstein, G M; Bernstein, R A; Bigelow, B; Bonati, M; Boprie, D; Brooks, D; Buckley-Geer, E J; Campa, J; Cardiel-Sas, L; Castander, F J; Castilla, J; Cease, H; Cela-Ruiz, J M; Chappa, S; Chi, E; Cooper, C; da Costa, L N; Dede, E; Derylo, G; DePoy, D L; de Vicente, J; Doel, P; Drlica-Wagner, A; Eiting, J; Elliott, A E; Emes, J; Estrada, J; Neto, A Fausti; Finley, D A; Flores, R; Frieman, J; Gerdes, D; Gladders, M D; Gregory, B; Gutierrez, G R; Hao, J; Holland, S E; Holm, S; Huffman, D; Jackson, C; James, D J; Jonas, M; Karcher, A; Karliner, I; Kent, S; Kessler, R; Kozlovsky, M; Kron, R G; Kubik, D; Kuehn, K; Kuhlmann, S; Kuk, K; Lahav, O; Lathrop, A; Lee, J; Levi, M E; Lewis, P; Li, T S; Mandrichenko, I; Marshall, J L; Martinez, G; Merritt, K W; Miquel, R; Munoz, F; Neilsen, E H; Nichol, R C; Nord, B; Ogando, R; Olsen, J; Palio, N; Patton, K; Peoples, J; Plazas, A A; Rauch, J; Reil, K; Rheault, J -P; Roe, N A; Rogers, H; Roodman, A; Sanchez, E; Scarpine, V; Schindler, R H; Schmidt, R; Schmitt, R; Schubnell, M; Schultz, K; Schurter, P; Scott, L; Serrano, S; Shaw, T M; Smith, R C; Soares-Santos, M; Stefanik, A; Stuermer, W; Suchyta, E; Sypniewski, A; Tarle, G; Thaler, J; Tighe, R; Tran, C; Tucker, D; Walker, A R; Wang, G; Watson, M; Weaverdyck, C; Wester, W; Woods, R; Yanny, B

    2015-01-01

    The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250 micron thick fully-depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The CCDs have 15 micron x 15 micron pixels with a plate scale of 0.263 arc sec per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construct...

  9. Rapid imaging, detection and quantification of Giardia lamblia cysts using mobile-phone based fluorescent microscopy and machine learning.

    Science.gov (United States)

    Koydemir, Hatice Ceylan; Gorocs, Zoltan; Tseng, Derek; Cortazar, Bingen; Feng, Steve; Chan, Raymond Yan Lok; Burbano, Jordi; McLeod, Euan; Ozcan, Aydogan

    2015-03-07

    Rapid and sensitive detection of waterborne pathogens in drinkable and recreational water sources is crucial for treating and preventing the spread of water-related diseases, especially in resource-limited settings. Here we present a field-portable and cost-effective platform for detection and quantification of Giardia lamblia cysts, one of the most common waterborne parasites, which has a thick cell wall that makes it resistant to most water disinfection techniques including chlorination. The platform consists of a smartphone coupled with an opto-mechanical attachment weighing ~205 g, which utilizes a hand-held fluorescence microscope design aligned with the camera unit of the smartphone to image custom-designed disposable water sample cassettes. Each sample cassette is composed of absorbent pads and mechanical filter membranes; a membrane with 8 μm pore size is used as a porous spacing layer to prevent the backflow of particles to the upper membrane, while the top membrane with 5 μm pore size is used to capture the individual Giardia cysts that are fluorescently labeled. A fluorescence image of the filter surface (field of view: ~0.8 cm^2) is captured and wirelessly transmitted via the mobile phone to our servers for rapid processing using a machine learning algorithm that is trained on statistical features of Giardia cysts to automatically detect and count the cysts captured on the membrane. The results are then transmitted back to the mobile phone in less than 2 minutes and are displayed through a smart application running on the phone. This mobile platform, along with our custom-developed sample preparation protocol, enables analysis of large volumes of water (e.g., 10-20 mL) for automated detection and enumeration of Giardia cysts in ~1 hour, including all the steps of sample preparation and analysis. We evaluated the performance of this approach using flow-cytometer-enumerated Giardia-contaminated water samples, demonstrating an average cyst capture
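    The counting stage can be sketched with a simple stand-in classifier: segmented spots are reduced to statistical features and assigned to the nearest class centroid. The feature choice, labels, and nearest-centroid rule below are illustrative assumptions; the paper's actual trained classifier is not specified in this record:

```python
import numpy as np

def train_centroids(features, labels):
    """Per-class mean of per-spot feature vectors (e.g. area, mean intensity):
    a nearest-centroid stand-in for the trained cyst/debris classifier."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def count_cysts(candidate_features, centroids, cyst_label="cyst"):
    """Count candidate spots whose features lie nearest the cyst centroid."""
    def nearest(f):
        return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))
    return sum(1 for f in candidate_features if nearest(f) == cyst_label)
```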

  10. HIGH SPEED CAMERA

    Science.gov (United States)

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, as well as an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so a camera with such a short resolution time is thereby possible.

  11. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations, i.e. automatically controlling the virtual...... camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot...

  12. Underwater camera with depth measurement

    Science.gov (United States)

    Wang, Wei-Chih; Lin, Keng-Ren; Tsui, Chi L.; Schipf, David; Leang, Jonathan

    2016-04-01

    The objective of this study is to develop an RGB-D (video + depth) camera that provides three-dimensional image data for use in the haptic feedback of a robotic underwater ordnance recovery system. Two camera systems were developed and studied. The first depth camera relies on structured light (as used by the Microsoft Kinect), where the displacement of an object is determined from variations in the geometry of a projected pattern. The other camera system is based on a Time of Flight (ToF) depth camera. The results for the structured light camera system show that it requires a stronger light source with a similar operating wavelength and bandwidth to achieve a desirable working distance in water. This approach might not be robust enough for our proposed underwater RGB-D camera system, as it would require a complete re-design of the light source component. The ToF camera system, instead, allows arbitrary placement of the light source and camera. The intensity output of the broadband LED light source in the ToF camera system can be increased by arranging the LEDs in an array configuration, and the LEDs can be modulated comfortably with any waveform and frequency required by the ToF camera. In this paper, both cameras were evaluated and experiments were conducted to demonstrate the versatility of the ToF camera.

  13. Communities, Cameras, and Conservation

    Science.gov (United States)

    Patterson, Barbara

    2012-01-01

    Communities, Cameras, and Conservation (CCC) is the most exciting and valuable program the author has seen in her 30 years of teaching field science courses. In this citizen science project, students and community volunteers collect data on mountain lions ("Puma concolor") at four natural areas and public parks along the Front Range of Colorado.…

  14. Make a Pinhole Camera

    Science.gov (United States)

    Fisher, Diane K.; Novati, Alexander

    2009-01-01

    On Earth, using ordinary visible light, one can create a single image of light recorded over time. Of course a movie or video is light recorded over time, but it is a series of instantaneous snapshots, rather than light and time both recorded on the same medium. A pinhole camera, which is simple to make out of ordinary materials and using ordinary…

  15. The PAU Camera

    Science.gov (United States)

    Casas, R.; Ballester, O.; Cardiel-Sas, L.; Carretero, J.; Castander, F. J.; Castilla, J.; Crocce, M.; de Vicente, J.; Delfino, M.; Fernández, E.; Fosalba, P.; García-Bellido, J.; Gaztañaga, E.; Grañena, F.; Jiménez, J.; Madrid, F.; Maiorino, M.; Martí, P.; Miquel, R.; Neissner, C.; Ponce, R.; Sánchez, E.; Serrano, S.; Sevilla, I.; Tonello, N.; Troyano, I.

    2011-11-01

    The PAU Camera (PAUCam) is a wide-field camera designed to be mounted at the William Herschel Telescope (WHT) prime focus, located at the Observatorio del Roque de los Muchachos on the island of La Palma (Canary Islands). Its primary function is to carry out a cosmological survey, the PAU Survey, covering an area of several hundred square degrees of sky. Its purpose is to determine positions and distances using photometric redshift techniques. To achieve accurate photo-z's, PAUCam will be equipped with 40 narrow-band filters covering the range from 450 to 850 nm, and six broad-band filters, those of the SDSS system plus the Y band. To fully cover the focal plane delivered by the telescope optics, 18 CCDs of 2k x 4k are needed. The pixels are square, of 15 μm size. The optical characteristics of the prime focus corrector deliver a field of view where eight of these CCDs will have an illumination of more than 95%, covering a field of 40 arc minutes. The rest of the CCDs will occupy the vignetted region, extending the field diameter to one degree. Two of the CCDs will be devoted to auto-guiding. This camera has some innovative features. Firstly, both the broad-band and the narrow-band filters will be placed in mobile trays, hosting at most 16 such filters, located inside the cryostat a few millimeters in front of the CCDs when observing. Secondly, a pressurized liquid nitrogen tank outside the camera will feed a boiler inside the cryostat with a controlled mass flow. The read-out electronics will use the Monsoon architecture, originally developed by NOAO, modified and manufactured by our team in the frame of the DECam project (the camera used in the DES Survey). PAUCam will also be available to the astronomical community of the WHT.

  16. Image Sensors Enhance Camera Technologies

    Science.gov (United States)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  17. MISR radiometric camera-by-camera Cloud Mask V004

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the Radiometric camera-by-camera Cloud Mask dataset. It is used to determine whether a scene is classified as clear or cloudy. A new parameter has...

  18. PDA-phone-based instant transmission of radiological images over a CDMA network by combining the PACS screen with a Bluetooth-interfaced local wireless link.

    Science.gov (United States)

    Kim, Dong Keun; Yoo, Sun K; Park, Jeong Jin; Kim, Sun Ho

    2007-06-01

    Remote teleconsultation by specialists is important for timely, correct, and specialized emergency surgical and medical decision making. In this paper, we designed a new personal digital assistant (PDA)-phone-based emergency teleradiology system by combining cellular communication with Bluetooth-interfaced local wireless links. The mobility and portability resulting from the use of PDAs and wireless communication can provide a more effective means of emergency teleconsultation without requiring the user to be limited to a fixed location. Moreover, it enables synchronized radiological image sharing between the attending physician in the emergency room and the remote specialist on picture archiving and communication system terminals without distorted image acquisition. To enable rapid and fine-quality radiological image transmission over a cellular network in a secure manner, progressive compression and security mechanisms have been incorporated. The proposed system was tested over a Code Division Multiple Access 1x Evolution-Data Only network to evaluate its performance and to demonstrate the feasibility of the system in a real-world setting.

  19. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like senor camera and a traditional rectangular sensor camera, dual cameras acquisition and display system need to be built. We introduce the principle and the development of retina-like senor. Image coordinates transformation and interpolation based on sub-pixel interpolation need to be realized for our retina-like sensor's special pixels distribution. The hardware platform is composed of retina-like senor camera, rectangular sensor camera, image grabber and PC. Combined the MIL and OpenCV library, the software program is composed in VC++ on VS 2010. Experience results show that the system can realizes two cameras' acquisition and display.

  20. Combustion pinhole camera system

    Science.gov (United States)

    Witte, Arvel B.

    1984-02-21

    A pinhole camera system utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, an external, variable density light filter which is coupled electronically to the vidicon automatic gain control (agc). The key component of this system is the focused-purge pinhole optical port assembly which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  1. Gamma ray camera

    Science.gov (United States)

    Perez-Mendez, V.

    1997-01-21

    A gamma ray camera is disclosed for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer and said n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array. 6 figs.

  2. The Star Formation Camera

    CERN Document Server

    Scowen, Paul A; Beasley, Matthew; Calzetti, Daniela; Desch, Steven; Fullerton, Alex; Gallagher, John; Lisman, Doug; Macenka, Steve; Malhotra, Sangeeta; McCaughrean, Mark; Nikzad, Shouleh; O'Connell, Robert; Oey, Sally; Padgett, Deborah; Rhoads, James; Roberge, Aki; Siegmund, Oswald; Shaklan, Stuart; Smith, Nathan; Stern, Daniel; Tumlinson, Jason; Windhorst, Rogier; Woodruff, Robert

    2009-01-01

    The Star Formation Camera (SFC) is a wide-field (~15'x19', >280 arcmin^2), high-resolution (18x18 mas pixels) UV/optical dichroic camera designed for the Theia 4-m space-borne telescope concept. SFC will deliver diffraction-limited images at lambda > 300 nm in both a blue (190-517 nm) and a red (517-1075 nm) channel simultaneously. Our aim is to conduct a comprehensive and systematic study of the astrophysical processes and environments relevant for the births and life cycles of stars and their planetary systems, and to investigate and understand the range of environments, feedback mechanisms, and other factors that most affect the outcome of the star and planet formation process. This program addresses the origins and evolution of stars, galaxies, and cosmic structure and has direct relevance for the formation and survival of planetary systems like our Solar System and planets like Earth. We present the design and performance specifications resulting from the implementation study of the camera, conducted ...

  3. Hemispherical Laue camera

    Science.gov (United States)

    Li, James C. M.; Chu, Sungnee G.

    1980-01-01

    A hemispherical Laue camera comprises a crystal sample mount for positioning a sample to be analyzed at the center of sphere of a hemispherical, X-radiation sensitive film cassette, a collimator, a stationary or rotating sample mount and a set of standard spherical projection spheres. X-radiation generated from an external source is directed through the collimator to impinge onto the single crystal sample on the stationary mount. The diffracted beam is recorded on the hemispherical X-radiation sensitive film mounted inside the hemispherical film cassette in either transmission or back-reflection geometry. The distances travelled by X-radiation diffracted from the crystal to the hemispherical film are the same for all crystal planes which satisfy Bragg's Law. The recorded diffraction spots or Laue spots on the film thereby preserve both the symmetry information of the crystal structure and the relative intensities which are directly related to the relative structure factors of the crystal orientations. The diffraction pattern on the exposed film is compared with the known diffraction pattern on one of the standard spherical projection spheres for a specific crystal structure to determine the orientation of the crystal sample. By replacing the stationary sample support with a rotating sample mount, the hemispherical Laue camera can be used for crystal structure determination in a manner previously provided in conventional Debye-Scherrer cameras.

  4. Gamma ray camera

    Science.gov (United States)

    Perez-Mendez, Victor

    1997-01-01

    A gamma ray camera for detecting rays emanating from a radiation source such as an isotope. The gamma ray camera includes a sensor array formed of a visible light crystal for converting incident gamma rays to a plurality of corresponding visible light photons, and a photosensor array responsive to the visible light photons in order to form an electronic image of the radiation therefrom. The photosensor array is adapted to record an integrated amount of charge proportional to the incident gamma rays closest to it, and includes a transparent metallic layer and a photodiode consisting of a p-i-n structure formed on one side of the transparent metallic layer, comprising an upper p-type layer, an intermediate layer and a lower n-type layer. In the preferred mode, the scintillator crystal is composed essentially of a cesium iodide (CsI) crystal, preferably doped with a predetermined amount of impurity, and the upper p-type layer, the intermediate layer and said n-type layer are essentially composed of hydrogenated amorphous silicon (a-Si:H). The gamma ray camera further includes a collimator interposed between the radiation source and the sensor array, and a readout circuit formed on one side of the photosensor array.

  5. Adaptive compressive sensing camera

    Science.gov (United States)

    Hsu, Charles; Hsu, Ming K.; Cha, Jae; Iwamura, Tomo; Landa, Joseph; Nguyen, Charles; Szu, Harold

    2013-05-01

    We have embedded an Adaptive Compressive Sensing (ACS) algorithm on a Charge-Coupled-Device (CCD) camera, based on the simple concept that each pixel is a charge bucket whose charge comes from the Einstein photoelectric conversion effect. Applying the manufacturing design principle, we allow altering each working component by at most one step. We then simulated what such a camera could do for real-world persistent surveillance, taking into account diurnal, all-weather, and seasonal variations. The saving in data storage is immense, and its order of magnitude is inversely proportional to the target angular speed. We designed two new CCD camera components. Owing to mature CMOS (complementary metal-oxide-semiconductor) technology, the on-chip Sample and Hold (SAH) circuitry can be designed as a dual Photon Detector (PD) analog circuit for change detection that predicts skipping or going forward at a sufficient sampling frame rate. For an admitted frame, a purely random sparse matrix [Φ] is implemented at the bucket-pixel level, biasing the charge-transport voltage toward neighboring buckets or, if not, to the ground drainage. Since the snapshot image is not a video, we could not apply the usual MPEG video compression and Huffman entropy codec, nor a powerful WaveNet wrapper, at the sensor level. We compare (i) pre-processing: an FFT, a threshold on the significant Fourier mode components, and an inverse FFT to check PSNR; and (ii) post-processing image recovery, performed selectively by a CDT&D adaptive version of linear programming with L1 minimization and L2 similarity. For (ii), the SAH circuitry must determine, in new-frame selection, (i) the degree of information (d.o.i.) K(t), which dictates the purely random linear sparse combination of measurement data a la [Φ]_{M,N}: M(t) = K(t) log N(t).
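    The sparse-recovery step can be illustrated with a simple greedy stand-in, orthogonal matching pursuit (OMP), recovering a k-sparse signal from M ~ K log N random measurements y = Φx. This sketch is an assumption for illustration, not the authors' CDT&D linear-programming solver:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick the column of Phi most
    correlated with the residual, then re-fit the coefficients on the chosen
    support by least squares; repeat k times for a k-sparse estimate."""
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x
```

    With a Gaussian measurement matrix and enough measurements, OMP recovers the support exactly with high probability, mirroring the role of L1 minimization here.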

  6. Mars Science Laboratory Engineering Cameras

    Science.gov (United States)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.
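    The quoted pixel scales can be sanity-checked by dividing the FOV by the detector width; the quotient is only approximate because the real optics include distortion that the quoted values account for:

```python
import math

def approx_pixel_scale_mrad(fov_deg, n_pixels):
    """Rough pixel scale in mrad/pixel for a detector of n_pixels spanning
    fov_deg. Ignores lens distortion, so it only approximates quoted specs."""
    return math.radians(fov_deg) / n_pixels * 1000.0
```

    For the Navcams this gives about 0.77 mrad/pixel against the quoted 0.82, and for the Hazcams about 2.11 against the quoted 2.1.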

  7. PAU camera: detectors characterization

    Science.gov (United States)

    Casas, Ricard; Ballester, Otger; Cardiel-Sas, Laia; Castilla, Javier; Jiménez, Jorge; Maiorino, Marino; Pío, Cristóbal; Sevilla, Ignacio; de Vicente, Juan

    2012-07-01

    The PAU Camera (PAUCam) [1,2] is a wide field camera that will be mounted at the corrected prime focus of the William Herschel Telescope (Observatorio del Roque de los Muchachos, Canary Islands, Spain) in the coming months. The focal plane of PAUCam is composed of a mosaic of 18 CCD detectors of 2,048 x 4,176 pixels each, with a pixel size of 15 microns, manufactured by Hamamatsu Photonics K. K. This mosaic covers a field of view (FoV) of 60 arcmin (minutes of arc), 40 of which are unvignetted. The behaviour of these 18 devices, plus four spares, and their electronic response should be characterized and optimized for use in PAUCam. This job is being carried out in the laboratories of the ICE/IFAE and the CIEMAT. The electronic optimization of the CCD detectors is being carried out by means of an OG (Output Gate) scan, maximizing its CTE (Charge Transfer Efficiency) while minimizing the read-out noise. The device characterization itself is obtained with different tests: the photon transfer curve (PTC), which allows one to obtain the electronic gain, the linearity vs. light stimulus, the full-well capacity and the cosmetic defects; the read-out noise; the dark current; the stability vs. temperature; and the light remanence.
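    The photon transfer curve yields the electronic gain from shot-noise statistics: for a shot-noise-limited detector, signal variance vs. mean signal is a straight line of slope 1/gain. A minimal sketch with synthetic data (the numbers below are illustrative, not PAUCam measurements):

```python
import numpy as np

def ptc_gain(means, variances):
    """Conversion gain (e-/ADU) from a photon transfer curve: under shot-noise
    statistics, var = mean/gain + read_noise^2 (all in ADU), so a straight-line
    fit of variance vs. mean has slope 1/gain."""
    slope, _ = np.polyfit(means, variances, 1)
    return 1.0 / slope
```

    In practice the means and variances come from pairs of flat-field frames at increasing exposure, with the fit restricted to the linear region below full well.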

  8. Stereoscopic camera design

    Science.gov (United States)

    Montgomery, David J.; Jones, Christopher K.; Stewart, James N.; Smith, Alan

    2002-05-01

    It is clear from the literature that the majority of work in stereoscopic imaging is directed towards the development of modern stereoscopic displays. As costs come down, wider public interest in this technology is expected to increase. This new technology would require new methods of image formation. Advances in stereo computer graphics will of course lead to the creation of new stereo computer games, graphics in films etc. However, the consumer would also like to see real-world stereoscopic images, pictures of family, holiday snaps etc. Such scenery would have wide ranges of depth to accommodate and would need also to cope with moving objects, such as cars, and in particular other people. Thus, the consumer acceptance of auto/stereoscopic displays and 3D in general would be greatly enhanced by the existence of a quality stereoscopic camera. This paper will cover an analysis of existing stereoscopic camera designs and show that they can be categorized into four different types, with inherent advantages and disadvantages. A recommendation is then made with regard to 3D consumer still and video photography. The paper will go on to discuss this recommendation and describe its advantages and how it can be realized in practice.

  9. HONEY -- The Honeywell Camera

    Science.gov (United States)

    Clayton, C. A.; Wilkins, T. N.

    The Honeywell model 3000 colour graphic recorder system (hereafter referred to simply as Honeywell) has been bought by Starlink for producing publishable quality photographic hardcopy from the IKON image displays. Full colour and black & white images can be recorded on positive or negative 35mm film. The Honeywell consists of a built-in high resolution flat-faced monochrome video monitor, a red/green/blue colour filter mechanism and a 35mm camera. The device works on the direct video signals from the IKON. This means that changing the brightness or contrast on the IKON monitor will not affect any photographs that you take. The video signals from the IKON consist of separate red, green and blue signals. When you take a picture, the Honeywell takes the red, green and blue signals in turn and displays three pictures consecutively on its internal monitor. It takes an exposure through each of three filters (red, green and blue) onto the film in the camera. This builds up the complete colour picture on the film. Honeywell systems are installed at nine Starlink sites, namely Belfast (locally funded), Birmingham, Cambridge, Durham, Leicester, Manchester, Rutherford, ROE and UCL.

  10. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  11. Classification, Production and Positioning of the Hospital Signage and Way-Finding System

    Institute of Scientific and Technical Information of China (English)

    吴培波

    2015-01-01

    The article introduces the background of the design and installation of the signage and way-finding system at the Xinjiang Uygur Autonomous Region People's Hospital; explains the classification principles of hospital signage and way-finding systems; details the materials and fabrication techniques of the various signage boards, covering both external and internal signage; discusses the key considerations in signage distribution and positioning; and summarizes the practical experience gained.

  12. Camera artifacts in IUE spectra

    Science.gov (United States)

    Bruegman, O. W.; Crenshaw, D. M.

    1994-01-01

    This study of emission-line-mimicking features in the IUE cameras has produced an atlas of artifacts in high-dispersion images, with accompanying tables of prominent artifacts in the processed and raw images, along with a median image of the sky background for each IUE camera.

  13. Radiation camera motion correction system

    Science.gov (United States)

    Hoffer, P.B.

    1973-12-18

    The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)
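
    The ratio-based correction described above can be sketched in a few lines of Python. This is a minimal illustration of the principle only; the function names, frame counts and the multiplicative form of the correction are assumptions, not taken from the patent.

```python
def correction_factor(intensity_a, intensity_b, reference_ratio):
    """Correction needed to restore the reference intensity ratio.

    The device holds the ratio of radiation received from two portions
    of the object at a substantially constant value; any drift of the
    measured ratio away from the reference is attributed to motion.
    """
    measured_ratio = intensity_a / intensity_b
    return reference_ratio / measured_ratio

# Calibrate on the first frame, then correct subsequent frames.
frames = [(100.0, 50.0), (110.0, 50.0), (90.0, 60.0)]  # (portion A, portion B) counts
ref = frames[0][0] / frames[0][1]                      # reference ratio = 2.0

corrected = []
for a, b in frames:
    k = correction_factor(a, b, ref)
    corrected.append((a * k) / b)                      # ratio after correction

print(corrected)  # every corrected ratio equals the reference, 2.0
```

    Applying the factor to the camera signal drives each frame's ratio back to the calibration value, which is the constancy condition the patent describes.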

  14. Coherent infrared imaging camera (CIRIC)

    Energy Technology Data Exchange (ETDEWEB)

    Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.; Richards, R.K.; Emery, M.S.; Crutcher, R.I.; Sitter, D.N. Jr.; Wachter, E.A.; Huston, M.A.

    1995-07-01

    New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.

  15. Camera sensitivity study

    Science.gov (United States)

    Schlueter, Jonathan; Murphey, Yi L.; Miller, John W. V.; Shridhar, Malayappan; Luo, Yun; Khairallah, Farid

    2004-12-01

    As the cost/performance ratio of vision systems improves with time, new classes of applications become feasible. One such area, automotive applications, is currently being investigated. Applications include occupant detection, collision avoidance and lane tracking. Interest in occupant detection has been spurred by federal automotive safety rules in response to injuries and fatalities caused by deployment of occupant-side air bags. In principle, a vision system could control airbag deployment to prevent this type of mishap. Employing vision technology here, however, presents a variety of challenges, which include controlling costs, inability to control illumination, developing and training a reliable classification system, and loss of performance due to production variations arising from manufacturing tolerances and customer options. This paper describes the measures that have been developed to evaluate the sensitivity of an occupant detection system to these types of variations. Two procedures are described for evaluating how sensitive the classifier is to camera variations. The first procedure is based on classification accuracy while the second evaluates feature differences.
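
    The first, accuracy-based procedure can be sketched as follows. The toy threshold classifier, the gain/offset model of a camera variation, and all numbers below are assumptions standing in for the real occupant-detection system, which is not described in detail in the abstract.

```python
def classify(image, threshold=100.0):
    """Toy occupant detector: mean brightness above threshold => occupied (1).
    Stands in for the real trained classifier."""
    return 1 if sum(image) / len(image) > threshold else 0

def apply_camera_variation(image, gain, offset):
    """Model a production variation as a per-pixel gain/offset change."""
    return [gain * p + offset for p in image]

# Nominal labelled test set: (image, label) pairs.
dataset = [
    ([120.0] * 16, 1),   # bright scene, occupied
    ([80.0] * 16, 0),    # dark scene, empty
    ([105.0] * 16, 1),
    ([95.0] * 16, 0),
]

def accuracy(gain, offset):
    hits = sum(classify(apply_camera_variation(img, gain, offset)) == label
               for img, label in dataset)
    return hits / len(dataset)

nominal = accuracy(1.0, 0.0)    # accuracy with the nominal camera
perturbed = accuracy(1.0, 8.0)  # same classifier, +8 grey-level offset
print(nominal, perturbed)       # the drop quantifies camera sensitivity
```

    The accuracy drop under the simulated variation is the sensitivity measure; the second procedure would instead compare the feature vectors extracted from nominal and perturbed images.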

  16. Proportional counter radiation camera

    Science.gov (United States)

    Borkowski, C.J.; Kopp, M.K.

    1974-01-15

    A gas-filled proportional counter camera that images photon emitting sources is described. A two-dimensional, positionsensitive proportional multiwire counter is provided as the detector. The counter consists of a high- voltage anode screen sandwiched between orthogonally disposed planar arrays of multiple parallel strung, resistively coupled cathode wires. Two terminals from each of the cathode arrays are connected to separate timing circuitry to obtain separate X and Y coordinate signal values from pulse shape measurements to define the position of an event within the counter arrays which may be recorded by various means for data display. The counter is further provided with a linear drift field which effectively enlarges the active gas volume of the counter and constrains the recoil electrons produced from ionizing radiation entering the counter to drift perpendicularly toward the planar detection arrays. A collimator is interposed between a subject to be imaged and the counter to transmit only the radiation from the subject which has a perpendicular trajectory with respect to the planar cathode arrays of the detector. (Official Gazette)
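
    The position read-out from a resistively coupled cathode array can be illustrated with a charge-division sketch. Note the substitution: the Borkowski-Kopp camera actually encodes position in pulse rise-time (shape) measurements, but the underlying ratio principle is the same; the numbers and function names here are illustrative.

```python
def position_from_charge_division(q_left, q_right, length=1.0):
    """Fractional event position along a resistive cathode array.

    In a resistively coupled array, the signal divides between the two
    read-out terminals in proportion to the resistance (hence distance)
    to each end, so the coordinate follows from the terminal ratio.
    """
    return length * q_right / (q_left + q_right)

# An avalanche at 30% of the way across: more charge reaches the nearer end.
x = position_from_charge_division(q_left=7.0, q_right=3.0)
y = position_from_charge_division(q_left=4.0, q_right=6.0)
print(x, y)  # 0.3 0.6
```

    One such ratio per cathode plane yields the X and Y coordinates of the event, which are then histogrammed for display.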

  17. The framework and the key technology of the alarm embedded in a mobile phone based on automatic photographing and transmitting the photos

    Institute of Scientific and Technical Information of China (English)

    陈阵

    2013-01-01

    To design a personal alarm using a mobile phone. Based on an ARM11 mobile phone, the phone's camera module and MMS module are the main components. First, the camera module and the flash circuit module were wired in series with the phone's camera shortcut key. Then, on the S60 platform of the Symbian smartphone OS, Carbide.C++ code was written so that pressing the hardware camera shortcut key raises an FIQ interrupt on the MCU, unlocks the keypad and invokes a Java program. Finally, J2ME code developed on the WTK platform makes the phone automatically take a picture and send it by MMS to a designated phone. In an emergency, once the user presses the camera shortcut key, the phone immediately unlocks the keypad, fires the flash, automatically photographs the scene and sends the picture by MMS to relatives and friends, so that they can take timely measures and provide help, while also preserving the image as evidence. The alarm embedded in the mobile phone can effectively protect personal and property safety.
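
    The trigger-to-MMS event flow can be sketched as a simple handler. The real device runs Symbian C++ and J2ME; this Python version is a hypothetical sketch in which every hardware and OS call is a stub that only records the action sequence.

```python
class AlarmPhone:
    """Hypothetical sketch of the alarm's event flow; all hardware
    services below are stubs standing in for platform APIs."""

    def __init__(self, emergency_contacts):
        self.emergency_contacts = emergency_contacts
        self.log = []  # records the action sequence for inspection

    # --- stubbed hardware/OS services -------------------------------
    def unlock_keypad(self):   self.log.append("unlock")
    def fire_flash(self):      self.log.append("flash")
    def take_photo(self):      self.log.append("photo"); return b"jpeg-bytes"
    def send_mms(self, to, _): self.log.append("mms:" + to)

    # --- handler for the camera shortcut key interrupt --------------
    def on_camera_key(self):
        self.unlock_keypad()           # free the keypad first
        self.fire_flash()
        picture = self.take_photo()    # capture the scene
        for contact in self.emergency_contacts:
            self.send_mms(contact, picture)

phone = AlarmPhone(["+86-555-0100"])
phone.on_camera_key()
print(phone.log)  # ['unlock', 'flash', 'photo', 'mms:+86-555-0100']
```

    The ordering matters: the keypad must be unlocked before the capture-and-send chain runs, mirroring the interrupt-then-unlock-then-Java sequence described in the abstract.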

  18. Vision Sensors and Cameras

    Science.gov (United States)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access, global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer megapixel resolution, and many have very high sensitivities equivalent to ASA 12,800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to 1, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies both because of their size as well as their higher speed.
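
    The constant contrast sensitivity of a log-converting pixel follows directly from the logarithmic transfer curve, as the sketch below shows. The luminance limits, the 8-bit code range and the function name are assumptions chosen for illustration, not parameters of any particular sensor.

```python
import math

def log_pixel(luminance, l_min=1e-2, l_max=1e4, codes=255):
    """Map luminance to a digital code with a logarithmic (eye-like)
    response covering a 10^6:1 range; limits here are assumptions."""
    l = min(max(luminance, l_min), l_max)
    return codes * math.log(l / l_min) / math.log(l_max / l_min)

# A fixed *contrast* step produces the same code step anywhere in the
# range, which is what "constant contrast sensitivity" means.
step_dark   = log_pixel(0.2) - log_pixel(0.1)        # 2:1 step in the shadows
step_bright = log_pixel(2000.0) - log_pixel(1000.0)  # same 2:1 step, highlights
print(round(step_dark, 3), round(step_bright, 3))    # identical code steps
```

    A linear charge-integration pixel with the same 8-bit output would devote almost all of its codes to the highlights and lose the shadow step entirely, which is why the linear designs need multiple exposures to approach this range.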

  19. An Inexpensive Digital Infrared Camera

    Science.gov (United States)

    Mills, Allan

    2012-01-01

    Details are given for the conversion of an inexpensive webcam to a camera specifically sensitive to the near infrared (700-1000 nm). Some experiments and practical applications are suggested and illustrated. (Contains 9 figures.)

  20. The future of consumer cameras

    Science.gov (United States)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

    In the last two decades, multimedia devices, and in particular imaging devices (camcorders, tablets, mobile phones, etc.), have spread dramatically. Moreover, their increasing computational performance, combined with higher storage capacity, allows them to process large amounts of data. In this paper an overview of the current trends of the consumer camera market and technology will be given, providing also some details about the recent past (from the digital still camera up to today) and forthcoming key issues.

  1. Field-testing of a cost-effective mobile-phone based microscope for screening of Schistosoma haematobium infection (Conference Presentation)

    Science.gov (United States)

    Ceylan Koydemir, Hatice; Bogoch, Isaac I.; Tseng, Derek; Ephraim, Richard K. D.; Duah, Evans; Tee, Joseph; Andrews, Jason R.; Ozcan, Aydogan

    2016-03-01

    Schistosomiasis is a parasitic, neglected tropical disease that affects Ghana, Africa. We field-tested a cost-effective mobile-phone based microscope for point-of-care diagnosis of S. haematobium infection. In this mobile-phone microscope, a custom-designed 3D-printed opto-mechanical attachment (~150 g) is placed in contact with the smartphone camera lens, creating an imaging system with a half-pitch resolution of ~0.87 µm. This unit includes an external lens (also taken from a mobile-phone camera), a sample tray, a z-stage to adjust the focus, two light-emitting diodes (LEDs) and two diffusers for uniform illumination of the sample. In our field testing, 60 urine samples collected from children were used, where the prevalence of the infection was 72.9%. After concentration of the sample by centrifugation, the sediment was placed on a glass slide and S. haematobium eggs were first identified and quantified using conventional benchtop microscopy by an expert diagnostician; a second expert, blinded to these results, then determined the presence or absence of eggs using our mobile-phone microscope. Compared to conventional microscopy, our mobile-phone microscope had a diagnostic sensitivity of 72.1%, specificity of 100%, positive predictive value of 100%, and negative predictive value of 57.1%. Furthermore, our mobile-phone platform demonstrated sensitivities of 65.7% and 100% for low-intensity infections (≤50 eggs/10 mL urine) and high-intensity infections, respectively.
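
    The four reported figures all derive from a 2×2 confusion table, as sketched below. The counts are hypothetical values chosen only to mirror the pattern in the abstract (no false positives pin specificity and PPV at 100%, while missed eggs pull sensitivity and NPV down); they are not the study's actual data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),  # detected fraction of true positives
        "specificity": tn / (tn + fp),  # correctly cleared fraction of negatives
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts for illustration only.
m = diagnostic_metrics(tp=31, fp=0, fn=12, tn=16)
print({k: round(v, 3) for k, v in m.items()})
```

    Note the asymmetry this table produces: because fp = 0, every positive call is trusted (PPV = 1.0), but the false negatives make a negative call much weaker evidence (NPV well below 1.0), exactly the trade-off the field test reports.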

  2. SUB-CAMERA CALIBRATION OF A PENTA-CAMERA

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-03-01

    Full Text Available Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the used IGI Penta DigiCAM has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by the program system BLUH. Dense matching by Pix4Dmapper provided 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centres are satisfactorily but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters, or in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras with a size exceeding 5 µm, even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors
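
    The radial symmetric distortion estimated for each sub-camera is conventionally modelled with a low-order polynomial in the radial distance, as sketched below. The polynomial form is the standard self-calibration parameterization; the coefficient values and function name are illustrative assumptions, not IGI's calibration data.

```python
def radial_distortion(x, y, k1, k2=0.0):
    """Apply a radial symmetric distortion r' = r * (1 + k1*r^2 + k2*r^4).

    (x, y) are image coordinates relative to the principal point, in
    normalized units; k1, k2 are the radial distortion coefficients
    typically estimated as additional parameters in a bundle adjustment.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# A point near the image corner (r = 1 in normalized units) shifts by a
# few parts per million of the image radius for this illustrative k1.
xd, yd = radial_distortion(0.6, 0.8, k1=5e-6)
print(xd - 0.6, yd - 0.8)  # small outward radial shifts
```

    Residual distortions "exceeding 5 µm" in the abstract are exactly what remains after subtracting such a fitted model from the laboratory calibration.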

  3. Research on the Space Design of Underground Commercial Street Based on Wayfinding Behavior

    Institute of Scientific and Technical Information of China (English)

    吴叶红; 田香

    2015-01-01

    With the rapid development of urbanization, urban population growth and increasingly scarce land in city centres, more and more attention is being paid to the development and utilization of underground space. However, since underground space is relatively enclosed and lacks natural lighting, people easily lose their sense of direction. Existing underground commercial streets show obvious deficiencies in using spatial environmental factors for wayfinding-oriented design, which to a large extent hampers people's wayfinding behavior. From the perspective of wayfinding, this article investigates and analyzes the spatial environment of underground commercial streets in Chongqing's five major business districts and in Shanghai, Shenzhen and Hong Kong, using field investigation and questionnaires. It identifies the causes of the problems and the factors influencing wayfinding in underground commercial street space. Combining these with the characteristics of human psychology and behavior, it analyzes three aspects, namely spatial orientation design, signage system design and the creation of emotional atmosphere, and puts forward corresponding design strategies to provide a reference for the future design of underground commercial streets.

  4. Traditional gamma cameras are preferred.

    Science.gov (United States)

    DePuey, E Gordon

    2016-08-01

    Although the new solid-state dedicated cardiac cameras provide excellent spatial and energy resolution and allow for markedly reduced SPECT acquisition times and/or injected radiopharmaceutical activity, they have some distinct disadvantages compared to traditional sodium iodide SPECT cameras. They are expensive. Attenuation correction is not available. Cardio-focused collimation, advantageous to increase depth-dependent resolution and myocardial count density, accentuates diaphragmatic attenuation and scatter from subdiaphragmatic structures. Although supplemental prone imaging is therefore routinely advised, many patients cannot tolerate it. Moreover, very large patients cannot be accommodated in the solid-state camera gantries. Since data are acquired simultaneously with an arc of solid-state detectors around the chest, no temporally dependent "rotating" projection images are obtained. Therefore, patient motion can be neither detected nor corrected. In contrast, traditional sodium iodide SPECT cameras provide rotating projection images to allow technologists and physicians to detect and correct patient motion and to accurately detect the position of soft tissue attenuators and to anticipate associated artifacts. Very large patients are easily accommodated. Low-dose x-ray attenuation correction is widely available. Also, relatively inexpensive low-count density software is provided by many vendors, allowing shorter SPECT acquisition times and reduced injected activity approaching that achievable with solid-state cameras.

  5. Association of Exposure to Radio-Frequency Electromagnetic Field Radiation (RF-EMFR) Generated by Mobile Phone Base Stations with Glycated Hemoglobin (HbA1c) and Risk of Type 2 Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    Sultan Ayoub Meo

    2015-11-01

    Full Text Available Installation of mobile phone base stations in residential areas has initiated public debate about possible adverse effects on human health. This study aimed to determine the association of exposure to radio frequency electromagnetic field radiation (RF-EMFR) generated by mobile phone base stations with glycated hemoglobin (HbA1c) and occurrence of type 2 diabetes mellitus. For this study, two different elementary schools (school-1 and school-2) were selected. We recruited 159 students in total; 96 male students from school-1, with age range 12–16 years, and 63 male students with age range 12–17 years from school-2. Mobile phone base stations with towers existed about 200 m away from the school buildings. RF-EMFR was measured inside both schools. In school-1, RF-EMFR was 9.601 nW/cm² at a frequency of 925 MHz, and students had been exposed to RF-EMFR for a duration of 6 h daily, five days in a week. In school-2, RF-EMFR was 1.909 nW/cm² at a frequency of 925 MHz and students had been exposed for 6 h daily, five days in a week. 5–6 mL of blood was collected from all the students and HbA1c was measured by using a Dimension Xpand Plus Integrated Chemistry System, Siemens. The mean HbA1c for the students who were exposed to high RF-EMFR was significantly higher (5.44 ± 0.22) than the mean HbA1c for the students who were exposed to low RF-EMFR (5.32 ± 0.34) (p = 0.007). Moreover, students who were exposed to high RF-EMFR generated by MPBS had a significantly higher risk of type 2 diabetes mellitus (p = 0.016) relative to their counterparts who were exposed to low RF-EMFR. It is concluded that exposure to high RF-EMFR generated by MPBS is associated with elevated levels of HbA1c and risk of type 2 diabetes mellitus.

  6. Association of Exposure to Radio-Frequency Electromagnetic Field Radiation (RF-EMFR) Generated by Mobile Phone Base Stations with Glycated Hemoglobin (HbA1c) and Risk of Type 2 Diabetes Mellitus.

    Science.gov (United States)

    Meo, Sultan Ayoub; Alsubaie, Yazeed; Almubarak, Zaid; Almutawa, Hisham; AlQasem, Yazeed; Hasanato, Rana Muhammed

    2015-11-13

    Installation of mobile phone base stations in residential areas has initiated public debate about possible adverse effects on human health. This study aimed to determine the association of exposure to radio frequency electromagnetic field radiation (RF-EMFR) generated by mobile phone base stations with glycated hemoglobin (HbA1c) and occurrence of type 2 diabetes mellitus. For this study, two different elementary schools (school-1 and school-2) were selected. We recruited 159 students in total; 96 male students from school-1, with age range 12-16 years, and 63 male students with age range 12-17 years from school-2. Mobile phone base stations with towers existed about 200 m away from the school buildings. RF-EMFR was measured inside both schools. In school-1, RF-EMFR was 9.601 nW/cm² at frequency of 925 MHz, and students had been exposed to RF-EMFR for a duration of 6 h daily, five days in a week. In school-2, RF-EMFR was 1.909 nW/cm² at frequency of 925 MHz and students had been exposed for 6 h daily, five days in a week. 5-6 mL blood was collected from all the students and HbA1c was measured by using a Dimension Xpand Plus Integrated Chemistry System, Siemens. The mean HbA1c for the students who were exposed to high RF-EMFR was significantly higher (5.44 ± 0.22) than the mean HbA1c for the students who were exposed to low RF-EMFR (5.32 ± 0.34) (p = 0.007). Moreover, students who were exposed to high RF-EMFR generated by MPBS had a significantly higher risk of type 2 diabetes mellitus (p = 0.016) relative to their counterparts who were exposed to low RF-EMFR. It is concluded that exposure to high RF-EMFR generated by MPBS is associated with elevated levels of HbA1c and risk of type 2 diabetes mellitus.

  7. Perceptual Color Characterization of Cameras

    Directory of Open Access Journals (Sweden)

    Javier Vazquez-Corral

    2014-12-01

    Full Text Available Color camera characterization, mapping outputs from the camera sensors to an independent color space, such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measures.
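
    The search the paper describes, i.e. evaluating candidate 3 × 3 matrices against a perceptual error and keeping the minimizer, can be sketched as below. Simplifications to note: a plain Euclidean XYZ error stands in for the ΔE / S-CIELAB / CID measures, and the two hand-picked candidates stand in for Finlayson et al.'s spherical sampling of candidates around the least-squares solution; the training triples are invented.

```python
def apply_matrix(m, rgb):
    """Map a camera RGB triple to XYZ with a 3x3 matrix."""
    return tuple(sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3))

def mean_error(m, samples):
    """Mean Euclidean error in XYZ; a stand-in for the perceptual
    Delta-E / S-CIELAB / CID measures used in the paper."""
    total = 0.0
    for rgb, xyz_ref in samples:
        xyz = apply_matrix(m, rgb)
        total += sum((a - b) ** 2 for a, b in zip(xyz, xyz_ref)) ** 0.5
    return total / len(samples)

# Toy training data: here the "true" camera-to-XYZ mapping is the identity.
samples = [((0.2, 0.5, 0.1), (0.2, 0.5, 0.1)),
           ((0.7, 0.3, 0.9), (0.7, 0.3, 0.9))]

# Candidate matrices; the paper samples these on spheres around the
# least-squares solution, here we just enumerate a tiny fixed set.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
scaled   = [[1.1, 0, 0], [0, 1.1, 0], [0, 0, 1.1]]
best = min([identity, scaled], key=lambda m: mean_error(m, samples))
print(best is identity)  # True: the identity reproduces the references
```

    Swapping `mean_error` for a true perceptual metric is the whole point of the paper: the least-squares matrix is no longer guaranteed to be the minimizer once the error is measured perceptually.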

  8. Dark Energy Camera for Blanco

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  9. The GISMO-2 Bolometer Camera

    Science.gov (United States)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.; Moseley, Samuel H.; Sharp, Elemer H.; Wollack, Edward J.

    2012-01-01

    We present the concept for the GISMO-2 bolometer camera, which we are building for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 × 40 TES-based Backshort Under Grid (BUG) bolometer array; the 2 mm channel operates with a 16 × 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  10. EDICAM (Event Detection Intelligent Camera)

    Energy Technology Data Exchange (ETDEWEB)

    Zoletnik, S. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Szabolics, T., E-mail: szabolics.tamas@wigner.mta.hu [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Kocsis, G.; Szepesi, T.; Dunai, D. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary)

    2013-10-15

    Highlights: ► We present EDICAM's hardware modules. ► We present EDICAM's main design concepts. ► This paper will describe EDICAM firmware architecture. ► Operation principles description. ► Further developments. -- Abstract: A new type of fast framing camera has been developed for fusion applications by the Wigner Research Centre for Physics during the last few years. A new concept was designed for intelligent event driven imaging which is capable of focusing image readout to Regions of Interests (ROIs) where and when predefined events occur. At present these events mean intensity changes and external triggers but in the future more sophisticated methods might also be defined. The camera provides 444 Hz frame rate at full resolution of 1280 × 1024 pixels, but monitoring of smaller ROIs can be done in the 1–116 kHz range even during exposure of the full image. Keeping space limitations and the harsh environment in mind the camera is divided into a small Sensor Module and a processing card interconnected by a fast 10 Gbit optical link. This camera hardware has been used for passive monitoring of the plasma in different devices for example at ASDEX Upgrade and COMPASS with the first version of its firmware. The new firmware and software package is now available and ready for testing the new event processing features. This paper will present the operation principle and features of the Event Detection Intelligent Camera (EDICAM). The device is intended to be the central element in the 10-camera monitoring system of the Wendelstein 7-X stellarator.
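
    EDICAM's published event type, an intensity change that triggers focused ROI readout, can be sketched as a frame-by-frame check on the mean intensity inside a region of interest. The frame layout, threshold and function name below are illustrative assumptions; the real firmware performs this on-camera at kHz rates.

```python
def detect_events(frames, roi, threshold):
    """Flag frames whose mean intensity inside the ROI changes by more
    than `threshold` relative to the previous frame.

    frames: list of 2D lists of pixel values;
    roi: (row0, row1, col0, col1) half-open slice bounds.
    """
    r0, r1, c0, c1 = roi
    means, events = [], []
    for idx, frame in enumerate(frames):
        pixels = [frame[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        mean = sum(pixels) / len(pixels)
        if means and abs(mean - means[-1]) > threshold:
            events.append(idx)   # here the camera would start fast ROI readout
        means.append(mean)
    return events

flat   = [[10] * 4 for _ in range(4)]                                  # quiet plasma
bright = [[10] * 4 for _ in range(2)] + [[40] * 4 for _ in range(2)]   # event in lower half
events = detect_events([flat, flat, bright, flat], roi=(2, 4, 0, 4), threshold=5)
print(events)  # [2, 3]: the intensity jump and the return to baseline
```

    Restricting the statistic to the ROI is what lets the camera monitor small regions at 1–116 kHz even while the full 1280 × 1024 frame is still being exposed.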

  11. The Camera Comes to Court.

    Science.gov (United States)

    Floren, Leola

    After the Lindbergh kidnapping trial in 1935, the American Bar Association sought to eliminate electronic equipment from courtroom proceedings. Eventually, all but two states adopted regulations applying that ban to some extent, and a 1965 Supreme Court decision encouraged the banning of television cameras at trials as well. Currently, some states…

  12. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    Just like art historians have focused on e.g. composition or lighting, this dissertation takes a single stylistic parameter as its object of study: camera movement. Within film studies this localized avenue of middle-level research has become increasingly viable under the aegis of a perspective k...

  13. OSIRIS camera barrel optomechanical design

    Science.gov (United States)

    Farah, Alejandro; Tejada, Carlos; Gonzalez, Jesus; Cobos, Francisco J.; Sanchez, Beatriz; Fuentes, Javier; Ruiz, Elfego

    2004-09-01

    A Camera Barrel, located in the OSIRIS imager/spectrograph for the Gran Telescopio Canarias (GTC), is described in this article. The barrel design has been developed by the Institute for Astronomy of the University of Mexico (IA-UNAM), in collaboration with the Institute for Astrophysics of Canarias (IAC), Spain. The barrel is being manufactured by the Engineering Center for Industrial Development (CIDESI) at Queretaro, Mexico. The Camera Barrel includes a set of eight lenses (three doublets and two singlets), with their respective supports and cells, as well as two subsystems: the Focusing Unit, which is a mechanism that modifies the first doublet relative position; and the Passive Displacement Unit (PDU), which uses the third doublet as thermal compensator to maintain the camera focal length and image quality when the ambient temperature changes. This article includes a brief description of the scientific instrument; describes the design criteria related with performance justification; and summarizes the specifications related with misalignment errors and generated stresses. The Camera Barrel components are described and analytical calculations, FEA simulations and error budgets are also included.

  14. Learning as way-finding

    DEFF Research Database (Denmark)

    Dau, Susanne

    2014-01-01

    Based on empirical case-study findings and the theoretical framework of learning by Illeris, coupled with Nonaka & Takeuchi's perspectives on knowledge creation, it is stressed that learning is conditioned by contextual orientation processes in spaces near the body (peripersonal spaces) through...

  15. Learning as way-finding

    DEFF Research Database (Denmark)

    Dau, Susanne

    This paper is based on case study findings from studying undergraduate students' perceptions of their navigation in a blended learning environment where different learning spaces are offered. In this paper learning is regarded as a multi-level and multi-complex concept. The concept of learning used in this paper is inspired by the latest work of the Danish professor Illeris and the interwoven concept of knowledge development as revealed in the SECI model generated by the Japanese professors Nonaka and Takeuchi. The empirical investigation, which is the basis of the presented assumptions, is based on findings from research on the implementation of blended learning in two undergraduate programmes at University College North in Denmark. The data collection method is based on eighteen focus-group interviews collected over the first two years of the students' enrolment in radiography...

  16. Dynamic Camera Positioning and Reconfiguration for Multi-Camera Networks

    OpenAIRE

    Konda, Krishna Reddy

    2015-01-01

    The large availability of different types of cameras and lenses, together with the reduction in price of video sensors, has contributed to the widespread use of video surveillance systems, which have become a widely adopted tool to enforce security and safety, detecting and preventing crimes and dangerous events. The possibility for personalization of such systems is generally very high, letting the user customize the sensing infrastructure and deploy ad-hoc solutions based on the curren...

  17. Architectural Design Document for Camera Models

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Architecture of camera simulator models and data interface for the Maneuvering of Inspection/Servicing Vehicle (MIV) study.

  18. Automated Placement of Multiple Stereo Cameras

    OpenAIRE

    Malik, Rahul; Bajcsy, Peter

    2008-01-01

    International audience; This paper presents a simulation framework for multiple stereo camera placement. Multiple stereo camera systems are becoming increasingly popular these days. Applications of multiple stereo camera systems such as tele-immersive systems enable cloning of dynamic scenes in real-time and delivering 3D information from multiple geographic locations to everyone for viewing it in virtual (immersive) 3D spaces. In order to make such multi stereo camera systems ubiquitous, sol...

  19. Mirrored Light Field Video Camera Adapter

    OpenAIRE

    Tsai, Dorian; Dansereau, Donald G.; Martin, Steve; Corke, Peter

    2016-01-01

    This paper proposes the design of a custom mirror-based light field camera adapter that is cheap, simple in construction, and accessible. Mirrors of different shape and orientation reflect the scene into an upwards-facing camera to create an array of virtual cameras with overlapping field of view at specified depths, and deliver video frame rate light fields. We describe the design, construction, decoding and calibration processes of our mirror-based light field camera adapter in preparation ...

  20. An optical metasurface planar camera

    CERN Document Server

    Arbabi, Amir; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-01-01

    Optical metasurfaces are 2D arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optical design by enabling complex low cost systems where multiple metasurfaces are lithographically stacked on top of each other and are integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here, we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has an f-number of 0.9, an angle-of-view larger than 60°×60°, and operates at 850 nm wavelength with large transmission. The camera exhibits high image quality, which indicates the potential of this technology to produce a paradigm shift in future designs of imaging systems for microscopy, photograp...

  1. Combustion pinhole-camera system

    Science.gov (United States)

    Witte, A.B.

    1982-05-19

    A pinhole camera system is described utilizing a sealed optical-purge assembly which provides optical access into a coal combustor or other energy conversion reactors. The camera system basically consists of a focused-purge pinhole optical port assembly, a conventional TV vidicon receiver, and an external, variable density light filter which is coupled electronically to the vidicon automatic gain control (AGC). The key component of this system is the focused-purge pinhole optical port assembly, which utilizes a purging inert gas to keep debris from entering the port and a lens arrangement which transfers the pinhole to the outside of the port assembly. One additional feature of the port assembly is that it is not flush with the interior of the combustor.

  2. SPEIR: A Ge Compton Camera

    Energy Technology Data Exchange (ETDEWEB)

    Mihailescu, L; Vetter, K M; Burks, M T; Hull, E L; Craig, W W

    2004-02-11

    The SPEctroscopic Imager for γ-Rays (SPEIR) is a new concept for a compact γ-ray imaging system of high efficiency and spectroscopic resolution with a 4π field-of-view. The system behind this concept employs double-sided segmented planar Ge detectors, accompanied by the use of list-mode photon reconstruction methods, to create a sensitive, compact Compton scatter camera.

  3. Graphic design of pinhole cameras

    Science.gov (United States)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.

  4. Image Based Camera Localization: an Overview

    OpenAIRE

    Wu, Yihong

    2016-01-01

    Recently, virtual reality, augmented reality, robotics, self-driving cars, and related fields have attracted much attention from industry, and image based camera localization is a key task in all of them. An overview of image based camera localization is therefore timely. In this paper, such an overview is presented. It will be useful not only to researchers but also to engineers.

  5. 21 CFR 886.1120 - Opthalmic camera.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Opthalmic camera. 886.1120 Section 886.1120 Food... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Opthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding...

  6. 21 CFR 892.1110 - Positron camera.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Positron camera. 892.1110 Section 892.1110 Food... DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A positron camera is a device intended to image the distribution of positron-emitting radionuclides in the...

  7. 16 CFR 501.1 - Camera film.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Camera film. 501.1 Section 501.1 Commercial Practices FEDERAL TRADE COMMISSION RULES, REGULATIONS, STATEMENT OF GENERAL POLICY OR INTERPRETATION AND... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the...

  8. Coaxial fundus camera for ophthalmology

    Science.gov (United States)

    de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.

    2015-09-01

    A fundus camera for ophthalmology is a high definition device which needs to provide low light illumination of the human retina, high resolution in the retina, and reflection-free imaging. Those constraints make its optical design very sophisticated, but the most difficult to comply with are the reflection-free illumination and the final alignment, due to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and poor alignment makes the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic fundus camera. The illumination is performed by a LED ring, coaxial with the optical system and composed of IR or visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD + capture lens module is a CCTV camera with built-in autofocus and zoom, added to a 175 mm focal length doublet corrected for infinity, making the system easily operated and very compact.

  9. Unassisted 3D camera calibration

    Science.gov (United States)

    Atanassov, Kalin; Ramachandra, Vikas; Nash, James; Goma, Sergio R.

    2012-03-01

    With the rapid growth of 3D technology, 3D image capture has become a critical part of the 3D feature set on mobile phones. 3D image quality is affected by the scene geometry as well as on-the-device processing. An automatic 3D system usually assumes known camera poses accomplished by factory calibration using a special chart. In real life settings, pose parameters estimated by factory calibration can be negatively impacted by movements of the lens barrel due to shaking, focusing, or camera drop. If any of these factors displaces the optical axes of either or both cameras, vertical disparity might exceed the maximum tolerable margin and the 3D user may experience eye strain or headaches. To make 3D capture more practical, one needs to consider unassisted (on arbitrary scenes) calibration. In this paper, we propose an algorithm that relies on detection and matching of keypoints between left and right images. Frames containing erroneous matches, along with frames with insufficiently rich keypoint constellations, are detected and discarded. Roll, pitch, yaw, and scale differences between left and right frames are then estimated. The algorithm performance is evaluated in terms of the remaining vertical disparity as compared to the maximum tolerable vertical disparity.
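    The frame-screening idea in this abstract can be sketched in a few lines. The code below is a minimal illustration, not the authors' implementation: it assumes matched keypoint arrays are already available, fits a scale and vertical offset between the two views by least squares, and flags a frame whose residual vertical disparity exceeds a hypothetical threshold.

```python
import numpy as np

def residual_vertical_disparity(left_pts, right_pts, threshold=2.0):
    """Fit y_r = s * y_l + t (scale + vertical offset) by least squares,
    then report the worst remaining vertical disparity in pixels.
    Frames whose residual exceeds `threshold` would be discarded."""
    yl, yr = left_pts[:, 1], right_pts[:, 1]
    A = np.stack([yl, np.ones_like(yl)], axis=1)   # columns: [y_l, 1]
    (s, t), *_ = np.linalg.lstsq(A, yr, rcond=None)
    residual = float(np.max(np.abs(A @ np.array([s, t]) - yr)))
    return s, t, residual, residual <= threshold

# Toy example: the right camera is shifted 3 px down and scaled 1.01 in y.
rng = np.random.default_rng(0)
left = rng.uniform(0, 480, size=(50, 2))
right = left.copy()
right[:, 1] = 1.01 * left[:, 1] + 3.0
s, t, res, ok = residual_vertical_disparity(left, right)
```

    In practice the matched points would come from a keypoint detector run on both frames; here the correspondence is synthetic so the fit is essentially exact.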

  10. Spectrometry with consumer-quality CMOS cameras.

    Science.gov (United States)

    Scheeline, Alexander

    2015-01-01

    Many modern spectrometric instruments use diode arrays, charge-coupled arrays, or CMOS cameras for detection and measurement. As portable or point-of-use instruments are desirable, one would expect that instruments using the cameras in cellular telephones and tablet computers would be the basis of numerous instruments. However, no mass market for such devices has yet developed. The difficulties in using megapixel CMOS cameras for scientific measurements are discussed, and promising avenues for instrument development reviewed. Inexpensive alternatives to use of the built-in camera are also mentioned, as the long-term question is whether it is better to overcome the constraints of CMOS cameras or to bypass them.

  11. Single Camera Calibration in 3D Vision

    Directory of Open Access Journals (Sweden)

    Caius SULIMAN

    2009-12-01

    Full Text Available Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when its parameters are known (i.e., principal distance, lens distortion, focal length, etc.). In this paper we deal with a single camera calibration method and, with its help, we try to find the intrinsic and extrinsic camera parameters. The method was implemented with success in the Matlab programming and simulation environment.
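    For intuition about what calibration recovers, the pinhole relation it inverts can be sketched directly. This toy example is not the paper's method: it assumes zero distortion and a known principal point (both simplifications) and recovers only the focal length from known 3D–2D correspondences by least squares.

```python
import numpy as np

def project(points_3d, f, cx, cy):
    """Ideal pinhole projection: u = f*X/Z + cx, v = f*Y/Z + cy."""
    X, Y, Z = points_3d.T
    return np.stack([f * X / Z + cx, f * Y / Z + cy], axis=1)

def estimate_focal(points_3d, pixels, cx, cy):
    """Least-squares focal length from 3D points and their pixel images,
    assuming no distortion and a known principal point (cx, cy)."""
    X, Y, Z = points_3d.T
    a = np.concatenate([X / Z, Y / Z])   # model: (pixel - c) = f * (coord / Z)
    b = np.concatenate([pixels[:, 0] - cx, pixels[:, 1] - cy])
    return float(a @ b / (a @ a))        # closed-form 1D least squares

rng = np.random.default_rng(1)
pts = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))   # points in front of camera
img = project(pts, f=800.0, cx=320.0, cy=240.0)
f_hat = estimate_focal(pts, img, cx=320.0, cy=240.0)      # recovers f = 800
```

    A full calibration additionally estimates the principal point, distortion coefficients, and the extrinsic pose, typically from images of a planar target rather than known 3D points.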

  12. Mini gamma camera, camera system and method of use

    Science.gov (United States)

    Majewski, Stanislaw; Weisenberger, Andrew G.; Wojcik, Randolph F.

    2001-01-01

    A gamma camera comprising essentially and in order from the front outer or gamma ray impinging surface: 1) a collimator, 2) a scintillator layer, 3) a light guide, 4) an array of position sensitive, high resolution photomultiplier tubes, and 5) printed circuitry for receipt of the output of the photomultipliers. There is also described a system wherein the output supplied by the high resolution, position sensitive photomultiplier tubes is communicated to: a) a digitizer and b) a computer where it is processed using advanced image processing techniques and a specific algorithm to calculate the center of gravity of any abnormality observed during imaging, and c) optional image display and telecommunications ports.

  13. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    OpenAIRE

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P. T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short conf...

  14. Characterization of the Series 1000 Camera System

    Energy Technology Data Exchange (ETDEWEB)

    Kimbrough, J; Moody, J; Bell, P; Landen, O

    2004-04-07

    The National Ignition Facility requires a compact network addressable scientific grade CCD camera for use in diagnostics ranging from streak cameras to gated x-ray imaging cameras. Due to the limited space inside the diagnostic, an analog and digital input/output option in the camera controller permits control of both the camera and the diagnostic by a single Ethernet link. The system consists of a Spectral Instruments Series 1000 camera, a PC104+ controller, and power supply. The 4k by 4k CCD camera has a dynamic range of 70 dB with less than 14 electron read noise at a 1MHz readout rate. The PC104+ controller includes 16 analog inputs, 4 analog outputs and 16 digital input/output lines for interfacing to diagnostic instrumentation. A description of the system and performance characterization is reported.
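    As a quick sanity check on the quoted figures, sensor dynamic range in decibels is 20·log10(full-well capacity / read noise). The full-well value below is a hypothetical assumption chosen to match the report's 70 dB and 14-electron numbers, not a figure taken from the report.

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Dynamic range of an image sensor in decibels."""
    return 20.0 * math.log10(full_well_e / read_noise_e)

# With 14 e- read noise (the quoted upper bound), 70 dB of dynamic range
# implies a full-well capacity of roughly 14 * 10**(70/20) ≈ 44,000 e-.
dr = dynamic_range_db(full_well_e=44000, read_noise_e=14)
```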

  15. Automatic calibration method for plenoptic camera

    Science.gov (United States)

    Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao

    2016-04-01

    An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images on the white image are searched and recognized automatically based on digital morphology. Then, the center points of the microlens images are rearranged according to their relative positions. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated without prior knowledge of the camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, even the multifocus plenoptic camera, the plenoptic camera with arbitrarily arranged microlenses, or the plenoptic camera with different sizes of microlenses. Finally, we verify our method on the raw data of Lytro. The experiments show that our method is more automated than previously published methods.
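    The first step, finding microlens image centers on a white image, can be illustrated with a simplified stand-in for the morphology-based search: threshold the image, label connected bright blobs, and take each blob's centroid as a candidate microlens center. This sketch is our own simplification, not the paper's algorithm, and the synthetic white image below is an assumption.

```python
import numpy as np
from collections import deque

def microlens_centers(white_img, thresh=0.5):
    """Threshold, label connected bright blobs (4-connectivity BFS),
    and return each blob's centroid as a candidate microlens center."""
    mask = white_img > thresh
    labels = np.zeros(mask.shape, dtype=int)
    centers, current = [], 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        labels[start] = current
        queue, pixels = deque([start]), []
        while queue:
            r, c = queue.popleft()
            pixels.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
        centers.append(np.mean(pixels, axis=0))
    return np.array(centers)

# Synthetic "white image": a 3x3 grid of bright 5x5 spots on a dark background.
img = np.zeros((40, 40))
for r in (8, 20, 32):
    for c in (8, 20, 32):
        img[r - 2:r + 3, c - 2:c + 3] = 1.0
centers = microlens_centers(img)   # nine centroids at the spot centers
```

    A real white image has vignetting and slightly hexagonal, overlapping spots, which is why the paper resorts to digital morphology rather than a plain threshold.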

  16. Radiometric calibration for MWIR cameras

    Science.gov (United States)

    Yang, Hyunjin; Chun, Joohwan; Seo, Doo Chun; Yang, Jiyeon

    2012-06-01

    Korean Multi-purpose Satellite-3A (KOMPSAT-3A), which weighs about 1,000 kg, is scheduled to be launched in 2013 and will be located in a sun-synchronous orbit (SSO) at 530 km altitude. This is Korea's first satellite to orbit with a mid-wave infrared (MWIR) image sensor, which is currently being developed at the Korea Aerospace Research Institute (KARI). The missions envisioned include forest fire surveillance, measurement of the ocean surface temperature, national defense, and crop harvest estimation. In this paper, we explain the MWIR scene generation software and atmospheric compensation techniques for the infrared (IR) camera that we are currently developing. The MWIR scene generation software we have developed takes into account sky thermal emission, path emission, target emission, sky solar scattering, and ground reflection, based on MODTRAN data. This software will be used for generating the radiation image in the satellite camera, which requires an atmospheric compensation algorithm and validation of the accuracy of the obtained temperature. The image visibility restoration algorithm is a method for removing the effect of the atmosphere between the camera and an object. This algorithm works between the satellite and the Earth, to predict object temperature degraded by the Earth's atmosphere and solar radiation. Commonly, to compensate for the atmospheric effect, software such as MODTRAN is used for modeling the atmosphere. Our algorithm does not require additional software to obtain the surface temperature. However, it needs adjustment of the visibility restoration parameters, and the precision of the result still needs to be studied.

  17. The Flutter Shutter Camera Simulator

    Directory of Open Access Journals (Sweden)

    Yohann Tendero

    2012-10-01

    Full Text Available The proposed method simulates an embedded flutter shutter camera implemented either analogically or numerically, and computes its performance. The goal of the flutter shutter is to make motion blur invertible, by a "fluttering" shutter that opens and closes on a well chosen sequence of time intervals. In the simulations the motion is assumed uniform, and the user can choose its velocity. Several types of flutter shutter codes are tested and evaluated: the original ones considered by the inventors, the classic motion blur, and finally several analog or numerical optimal codes proposed recently. In all cases the exact SNR of the deconvolved result is also computed.
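    The invertibility claim behind the flutter shutter can be checked numerically: a constant (box) shutter has exact zeros in its frequency response, so uniform motion blur destroys those frequencies, while a well-chosen binary on/off code keeps the response bounded away from zero. The 8-tap code below is purely illustrative, not one of the codes evaluated in the paper.

```python
import numpy as np

def mtf_min(code, n=64):
    """Minimum magnitude of the shutter code's frequency response
    (zero-padded DFT). A zero here means the blur is not invertible."""
    return float(np.abs(np.fft.fft(code, n)).min())

box = np.ones(8)                                         # conventional shutter: box blur
code = np.array([1, 1, 0, 1, 0, 1, 1, 0], dtype=float)   # illustrative flutter code

box_min = mtf_min(box)    # box blur: exact spectral zeros -> deconvolution ill-posed
code_min = mtf_min(code)  # flutter code: no zeros -> motion blur is invertible
```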

  18. Cryogenic mechanism for ISO camera

    Science.gov (United States)

    Luciano, G.

    1987-12-01

    The Infrared Space Observatory (ISO) camera configuration, architecture, materials, tribology, motorization, and development status are outlined. The operating temperature is 2 to 3 K, at 2.5 to 18 microns. Selected material is a titanium alloy, with MoS2/TiC lubrication. A stepping motor drives the ball-bearing mounted wheels to which the optical elements are fixed. Model test results are satisfactory, and also confirm the validity of the test facilities, particularly for vibration tests at 4K.

  19. Light field panorama by a plenoptic camera

    Science.gov (United States)

    Xue, Zhou; Baboulaz, Loic; Prandoni, Paolo; Vetterli, Martin

    2013-03-01

    Consumer-grade plenoptic camera Lytro draws a lot of interest from both academic and industrial world. However its low resolution in both spatial and angular domain prevents it from being used for fine and detailed light field acquisition. This paper proposes to use a plenoptic camera as an image scanner and perform light field stitching to increase the size of the acquired light field data. We consider a simplified plenoptic camera model comprising a pinhole camera moving behind a thin lens. Based on this model, we describe how to perform light field acquisition and stitching under two different scenarios: by camera translation or by camera translation and rotation. In both cases, we assume the camera motion to be known. In the case of camera translation, we show how the acquired light fields should be resampled to increase the spatial range and ultimately obtain a wider field of view. In the case of camera translation and rotation, the camera motion is calculated such that the light fields can be directly stitched and extended in the angular domain. Simulation results verify our approach and demonstrate the potential of the motion model for further light field applications such as registration and super-resolution.

  20. Computational cameras: convergence of optics and processing.

    Science.gov (United States)

    Zhou, Changyin; Nayar, Shree K

    2011-12-01

    A computational camera uses a combination of optics and processing to produce images that cannot be captured with traditional cameras. In the last decade, computational imaging has emerged as a vibrant field of research. A wide variety of computational cameras has been demonstrated to encode more useful visual information in the captured images, as compared with conventional cameras. In this paper, we survey computational cameras from two perspectives. First, we present a taxonomy of computational camera designs according to the coding approaches, including object side coding, pupil plane coding, sensor side coding, illumination coding, camera arrays and clusters, and unconventional imaging systems. Second, we use the abstract notion of light field representation as a general tool to describe computational camera designs, where each camera can be formulated as a projection of a high-dimensional light field to a 2-D image sensor. We show how individual optical devices transform light fields and use these transforms to illustrate how different computational camera designs (collections of optical devices) capture and encode useful visual information.
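    The light field formulation used in the survey can be made concrete in a few lines: a conventional camera projects the 4D light field L(u, v, s, t) to a 2D image by integrating over the aperture coordinates (u, v), while a pinhole camera samples a single aperture position. The array shapes below are arbitrary assumptions for illustration.

```python
import numpy as np

# Hypothetical 4D light field: aperture coords (u, v), sensor coords (s, t).
rng = np.random.default_rng(2)
L = rng.uniform(size=(5, 5, 32, 32))   # L[u, v, s, t]

# A conventional (full-aperture) camera integrates over the aperture:
# each pixel sums all rays arriving through the lens.
image = L.sum(axis=(0, 1))

# A pinhole camera instead samples one aperture position.
pinhole = L[2, 2]
```

    Different computational camera designs correspond to different weightings and rearrangements of this projection, e.g. coded apertures weight the (u, v) sum, and camera arrays keep the samples separate.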

  1. A Unifying Theory for Camera Calibration.

    Science.gov (United States)

    Ramalingam, SriKumar; Sturm, Peter

    2016-07-19

    This paper proposes a unified theory for calibrating a wide variety of camera models such as pinhole, fisheye, catadioptric, and multi-camera networks. We model any camera as a set of image pixels and their associated camera rays in space. Every pixel measures the light traveling along a (half-) ray in 3-space associated with that pixel. By this definition, calibration simply refers to the computation of the mapping between pixels and the associated 3D rays. Such a mapping can be computed using images of calibration grids, which are objects with known 3D geometry, taken from unknown positions. This general camera model allows us to represent non-central cameras; we also consider two special subclasses, namely central and axial cameras. In a central camera, all rays intersect in a single point, whereas the rays are completely arbitrary in a non-central one. Axial cameras are an intermediate case: the camera rays intersect a single line. In this work, we show the theory for calibrating central, axial and non-central models using calibration grids, which can be either three-dimensional or planar.
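    The "camera = set of pixel rays" abstraction is easy to prototype. The sketch below is our own illustration, not code from the paper: it stores a ray per pixel and tests the central-camera property in a simplified way, by checking that all ray origins coincide (a general test would intersect the ray lines themselves).

```python
import numpy as np

class RayCamera:
    """Generic camera model: each pixel owns a 3D ray (origin, direction)."""
    def __init__(self):
        self.rays = {}   # (row, col) -> (origin, unit direction)

    def add_pixel(self, pixel, origin, direction):
        d = np.asarray(direction, dtype=float)
        self.rays[pixel] = (np.asarray(origin, dtype=float),
                            d / np.linalg.norm(d))

    def is_central(self, tol=1e-9):
        """Simplified centrality test: all ray origins coincide."""
        origins = [o for o, _ in self.rays.values()]
        return all(np.linalg.norm(o - origins[0]) < tol for o in origins)

def pinhole_camera(f, cx, cy, shape):
    """Central camera: every pixel's ray starts at the projection center
    and points along the back-projected pinhole direction."""
    cam = RayCamera()
    for r in range(shape[0]):
        for c in range(shape[1]):
            cam.add_pixel((r, c), origin=[0.0, 0.0, 0.0],
                          direction=[(c - cx) / f, (r - cy) / f, 1.0])
    return cam

cam = pinhole_camera(f=100.0, cx=2.0, cy=2.0, shape=(4, 4))
```

    A non-central camera (e.g. a catadioptric rig) would simply store a different origin per pixel, and calibration in the paper's sense amounts to filling in this pixel-to-ray table from grid images.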

  2. Domestic and International Progress of Mobile Phone-based and Web-based Smoking Cessation Interventions

    Institute of Scientific and Technical Information of China (English)

    王立立; 王燕玲; 姜垣

    2011-01-01

    Providing smoking cessation assistance is a key part of tobacco control practice. This paper summarizes the progress of the newly emerging mobile phone-based and web-based smoking cessation interventions internationally, which could serve as a reference for tobacco control work in China. The shared advantages of these two types of intervention are: no time or geographic limitations, hence broader reach; no need for face-to-face communication, which protects the privacy of those seeking counseling; and relatively low cost. The two differ in accessibility, communication effect, cost-effectiveness, and other respects, each with its own strengths and weaknesses.

  3. On a Mobile Phone-based Flipped Classroom Model for English Class in Higher Vocational College

    Institute of Scientific and Technical Information of China (English)

    查静

    2015-01-01

    The paper reviews the two typical flipped classroom models proposed by Clayton Christensen and Robert Talbert, as well as the basic conception of the audio-lingual method for second language acquisition. On that basis, it puts forward a mobile phone-based flipped classroom model for English class in higher vocational colleges, elaborates on how to design a suitable M-learning environment and learning resources for English learners in such colleges, and presents ideas on designing face-to-face classroom activities. Finally, it analyzes the practical difficulties encountered in implementing the model, in the hope of providing a reference for related research.

  4. MIOTIC study: a prospective, multicenter, randomized study to evaluate the long-term efficacy of mobile phone-based Internet of Things in the management of patients with stable COPD.

    Science.gov (United States)

    Zhang, Jing; Song, Yuan-Lin; Bai, Chun-Xue

    2013-01-01

    Chronic obstructive pulmonary disease (COPD) is a common disease that leads to huge economic and social burden. Efficient and effective management of stable COPD is essential to improve quality of life and reduce medical expenditure. The Internet of Things (IoT), a recent breakthrough in communication technology, seems promising in improving health care delivery, but its potential strengths in COPD management remain poorly understood. We have developed a mobile phone-based IoT (mIoT) platform and initiated a randomized, multicenter, controlled trial entitled the 'MIOTIC study' to investigate the influence of mIoT among stable COPD patients. In the MIOTIC study, at least 600 patients with stable GOLD group C or D COPD and with a history of at least two moderate-to-severe exacerbations within the previous year will be randomly allocated to the control group, which receives routine follow-up, or the intervention group, which receives mIoT management. Endpoints of the study include (1) frequency and severity of acute exacerbation; (2) symptomatic evaluation; (3) pre- and post-bronchodilator forced expiratory volume in 1 second (FEV1) and FEV1/forced vital capacity (FVC) measurement; (4) exercise capacity; and (5) direct medical cost per year. Results from this study should provide direct evidence for the suitability of mIoT in stable COPD patient management.

  5. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  6. Optimising camera traps for monitoring small mammals.

    Science.gov (United States)

    Glen, Alistair S; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  7. The Zwicky Transient Facility Camera

    Science.gov (United States)

    Dekany, Richard; Smith, Roger M.; Belicki, Justin; Delacroix, Alexandre; Duggan, Gina; Feeney, Michael; Hale, David; Kaye, Stephen; Milburn, Jennifer; Murphy, Patrick; Porter, Michael; Reiley, Daniel J.; Riddle, Reed L.; Rodriguez, Hector; Bellm, Eric C.

    2016-08-01

    The Zwicky Transient Facility Camera (ZTFC) is a key element of the ZTF Observing System, the integrated system of optoelectromechanical instrumentation tasked to acquire the wide-field, high-cadence time-domain astronomical data at the heart of the Zwicky Transient Facility. The ZTFC consists of a compact cryostat with large vacuum window protecting a mosaic of 16 large, wafer-scale science CCDs and 4 smaller guide/focus CCDs, a sophisticated vacuum interface board which carries data as electrical signals out of the cryostat, an electromechanical window frame for securing externally inserted optical filter selections, and associated cryo-thermal/vacuum system support elements. The ZTFC provides an instantaneous 47 deg² field of view, limited by primary mirror vignetting in its Schmidt telescope prime focus configuration. We report here on the design and performance of the ZTF CCD camera cryostat and report results from extensive Joule-Thomson cryocooler tests that may be of broad interest to the instrumentation community.

  8. MAGIC-II Camera Slow Control Software

    CERN Document Server

    Steinke, B; Tridon, D Borla

    2009-01-01

    The Imaging Atmospheric Cherenkov Telescope MAGIC I has recently been extended to a stereoscopic system by adding a second 17 m telescope, MAGIC-II. One of the major upgrades of the second telescope is an improved camera. The Camera Control Program is embedded in the telescope control software as an independent subsystem. It is effective software for monitoring and controlling the camera values and their settings, and is written in the visual programming language LabVIEW. The two main parts, the Central Variables File, which stores all information on the pixel and other camera parameters, and the Comm Control Routine, which controls changes in possible settings, provide reliable operation. A safety routine protects the camera from misuse through accidental commands, from bad weather conditions, and from hardware errors via automatic reactions.

  9. Movement-based Interaction in Camera Spaces

    DEFF Research Database (Denmark)

    Eriksson, Eva; Riisgaard Hansen, Thomas; Lykke-Olesen, Andreas

    2006-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking, and we show how these three concepts can be used to analyse other camera tracking applications.

  10. Development of biostereometric experiments. [stereometric camera system

    Science.gov (United States)

    Herron, R. E.

    1978-01-01

    The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.

  11. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    Science.gov (United States)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

    The authors have been concentrating on developing convenient 3D measurement methods using consumer grade digital cameras, and have concluded that consumer grade digital cameras can be expected to become useful photogrammetric devices in various close range application fields. Meanwhile, mobile phone cameras with 10 megapixel sensors have appeared on the Japanese market. In these circumstances, an epoch-making question arises: can mobile phone cameras take the place of consumer grade digital cameras in close range photogrammetric applications? In order to evaluate the potential of mobile phone cameras in close range photogrammetry, this paper compares mobile phone cameras and consumer grade digital cameras with respect to lens distortion, reliability, stability, and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer grade digital cameras were conducted indoors using a test target. Furthermore, the practicability of mobile phone cameras for close range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras are able to take the place of consumer grade digital cameras and to develop the market in digital photogrammetric fields.
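
    Lens distortion, the first of the evaluation criteria above, is conventionally modeled with radial polynomial terms estimated during calibration. A minimal sketch of that idea, using the common Brown radial model with two hypothetical coefficients k1 and k2 (illustrative only, not the authors' calibration code):

```python
def distort(x, y, k1, k2):
    """Apply the Brown radial distortion model to normalized image
    coordinates (x, y); k1 and k2 are radial distortion coefficients."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

def undistort(xd, yd, k1, k2, iters=10):
    """Invert the model by fixed-point iteration: repeatedly divide the
    distorted point by the radial factor evaluated at the current guess."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / factor, yd / factor
    return x, y
```

    Calibration tests of the kind described fit k1 and k2 (and usually further terms) per camera from images of a test target.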

  12. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions that could leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in its direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
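
    The registration step described above, locating a moving object's position across simultaneous frames after adjusting for perspective differences, can be sketched as a gated nearest-neighbour association. Everything here (the `map_b_to_a` transform and the pixel gate) is an illustrative assumption, not the authors' implementation:

```python
import math

def associate(objects_a, objects_b, map_b_to_a, gate=50.0):
    """Greedily associate object centroids seen by camera B with those
    seen by camera A in the time-correlated frame. map_b_to_a adjusts
    for the perspective difference between views; gate is the maximum
    pixel distance allowed for a match (hypothetical value)."""
    pairs = []
    unmatched_a = dict(objects_a)          # id -> (x, y) centroid
    for bid, pt in objects_b.items():
        px, py = map_b_to_a(pt)
        best, best_d = None, gate
        for aid, (ax, ay) in unmatched_a.items():
            d = math.hypot(ax - px, ay - py)
            if d < best_d:
                best, best_d = aid, d
        if best is not None:
            pairs.append((best, bid))
            del unmatched_a[best]          # each A object matched once
    return pairs
```

    With the views registered, an object occluded in one camera can keep its tag through the other camera's detections.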

  13. Omnidirectional Underwater Camera Design and Calibration

    Directory of Open Access Journals (Sweden)

    Josep Bosch

    2015-03-01

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach.
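
    The refraction that invalidates the pinhole model at a flat housing port follows Snell's law, which is the core of any such ray-tracing FOV simulator. A minimal sketch under simplifying assumptions (a flat port and air/water indices only; the paper's simulator also models the housing geometry):

```python
import math

def refract(theta_i, n1=1.0, n2=1.33):
    """Snell's law: return the refraction angle (radians) for a ray
    crossing from medium n1 into medium n2, or None past the critical
    angle (total internal reflection)."""
    s = n1 / n2 * math.sin(theta_i)
    if abs(s) > 1.0:
        return None
    return math.asin(s)

def in_water_half_fov(air_half_fov_deg, n_air=1.0, n_water=1.33):
    """Effective half field of view of a flat-port camera once submerged:
    the in-air marginal ray is bent toward the surface normal in water,
    shrinking the FOV."""
    theta = math.radians(air_half_fov_deg)
    return math.degrees(refract(theta, n_air, n_water))
```

    A 90 degree in-air FOV shrinks to roughly 64 degrees behind a flat port, which is why hemispheric coverage requires careful housing and optics choices.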

  14. Camera processing with chromatic aberration.

    Science.gov (United States)

    Korneliussen, Jan Tore; Hirakawa, Keigo

    2014-10-01

    Since the refractive index of materials commonly used for lenses depends on the wavelength of light, practical camera optics fail to converge light to a single point on the image plane. Known as chromatic aberration, this phenomenon distorts image details by introducing magnification error, defocus blur, and color fringes. Though achromatic and apochromatic lens designs reduce chromatic aberration to a degree, they are complex and expensive, and they do not offer a perfect correction. In this paper, we propose a new postcapture processing scheme designed to overcome these problems computationally. Specifically, the proposed solution comprises a chromatic aberration-tolerant demosaicking algorithm and post-demosaicking chromatic aberration correction. Experiments with simulated and real sensor data verify that the chromatic aberration is effectively corrected.
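
    Lateral chromatic aberration appears as a slightly different magnification per color plane, so one ingredient of a computational correction is a per-channel radial rescale about the optical center. A toy nearest-neighbour sketch (the scale factor is hypothetical, and the paper's method is more sophisticated, folding correction into demosaicking):

```python
def rescale_channel(channel, scale):
    """Nearest-neighbour radial rescale of one color plane about the
    image center. `channel` is a list of rows; `scale` > 1 magnifies.
    Aligning the R/G/B planes this way reduces color fringing."""
    h, w = len(channel), len(channel[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # sample the source at the radially scaled position
            sy = round(cy + (y - cy) / scale)
            sx = round(cx + (x - cx) / scale)
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = channel[sy][sx]
    return out
```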

  15. Gesture recognition on smart cameras

    Science.gov (United States)

    Dziri, Aziz; Chevobbe, Stephane; Darouich, Mehdi

    2013-02-01

    Gesture recognition is a feature in human-machine interaction that allows more natural interaction without the use of complex devices. For this reason, several methods of gesture recognition have been developed in recent years. However, most real-time methods are designed to run on a personal computer with ample computing resources and memory. In this paper, we analyze relevant methods from the literature in order to investigate the ability of smart cameras to execute gesture recognition algorithms, and we elaborate two hand gesture recognition pipelines: the first based on invariant moment extraction and the second on fingertip detection. The hand detection method used in both pipelines is based on skin color segmentation. The results show that the un-optimized versions of the invariant moments method and the fingertip detection method can reach 10 fps on an embedded processor while using about 200 kB of memory.
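
    The invariant-moments pipeline rests on moments of the segmented hand mask that do not change under translation (and, for continuous shapes, scale). A minimal pure-Python sketch of the first two Hu moments; the paper's actual feature set and implementation may differ:

```python
def hu_first_two(mask):
    """First two Hu invariant moments of a binary mask given as a list
    of rows of 0/1 values. Built from scale-normalized central moments,
    so the features describe hand shape independently of its position."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    m00 = float(len(pts))
    xc = sum(x for x, _ in pts) / m00
    yc = sum(y for _, y in pts) / m00
    def mu(p, q):                       # central moment
        return sum((x - xc) ** p * (y - yc) ** q for x, y in pts)
    def eta(p, q):                      # scale-normalized central moment
        return mu(p, q) / m00 ** ((p + q) / 2 + 1)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2
```

    On an embedded smart camera the attraction of such features is that they reduce a whole mask to a handful of numbers that are cheap to classify.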

  16. Framework for Evaluating Camera Opinions

    Directory of Open Access Journals (Sweden)

    K.M. Subramanian

    2015-03-01

    Opinion mining plays a central role in text mining applications such as brand and product positioning, customer relationship management, consumer attitude detection, and market research. These applications lead to a new generation of companies and products aimed at online market perception, online content monitoring, and reputation management. The expansion of the web inspires users to contribute and express opinions via blogs, videos, and social networking sites. Such platforms provide valuable information for sentiment analysis pertaining to a product or service. This study investigates the performance of various feature extraction methods and classification algorithms for opinion mining. Opinions expressed on the Amazon website for cameras are collected and used for evaluation. Features are extracted from the opinions using Term Document Frequency and Inverse Document Frequency (TDFIDF). Feature transformation is achieved through Principal Component Analysis (PCA) and kernel PCA. Naïve Bayes, K Nearest Neighbor, and Classification and Regression Trees (CART) algorithms classify the extracted features.
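
    The TDFIDF weighting above multiplies a term's in-document frequency by the log inverse of its document frequency, so terms common to every review (e.g. "camera") are down-weighted. A minimal sketch; the tokenization and the exact normalization used in the study are assumptions here:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Weight each term of each tokenized document by term frequency
    times inverse document frequency. Returns one {term: weight} dict
    per document, suitable as input to PCA or a classifier."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (c / len(doc)) * math.log(n / df[t])
                    for t, c in tf.items()})
    return out
```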

  17. Illumination box and camera system

    Science.gov (United States)

    Haas, Jeffrey S.; Kelly, Fredrick R.; Bushman, John F.; Wiefel, Michael H.; Jensen, Wayne A.; Klunder, Gregory L.

    2002-01-01

    A hand portable, field-deployable thin-layer chromatography (TLC) unit and a hand portable, battery-operated unit for development, illumination, and data acquisition of the TLC plates contain many miniaturized features that permit a large number of samples to be processed efficiently. The TLC unit includes a solvent tank, a holder for TLC plates, and a variety of tool chambers for storing TLC plates, solvent, and pipettes. After processing in the TLC unit, a TLC plate is positioned in a collapsible illumination box, where the box and a CCD camera are optically aligned for optimal pixel resolution of the CCD images of the TLC plate. The TLC system includes an improved development chamber for chemical development of TLC plates that prevents solvent overflow.

  18. LROC - Lunar Reconnaissance Orbiter Camera

    Science.gov (United States)

    Robinson, M. S.; Eliason, E.; Hiesinger, H.; Jolliff, B. L.; McEwen, A.; Malin, M. C.; Ravine, M. A.; Thomas, P. C.; Turtle, E. P.

    2009-12-01

    The Lunar Reconnaissance Orbiter (LRO) went into lunar orbit on 23 June 2009. The LRO Camera (LROC) acquired its first lunar images on June 30 and commenced full scale testing and commissioning on July 10. The LROC consists of two narrow-angle cameras (NACs) that provide 0.5 m scale panchromatic images over a combined 5 km swath, and a wide-angle camera (WAC) to provide images at a scale of 100 m per pixel in five visible wavelength bands (415, 566, 604, 643, and 689 nm) and 400 m per pixel in two ultraviolet bands (321 nm and 360 nm) from the nominal 50 km orbit. Early operations were designed to test the performance of the cameras under all nominal operating conditions and provided a baseline for future calibrations. Test sequences included off-nadir slews to image stars and the Earth, 90° yaw sequences to collect flat field calibration data, night imaging for background characterization, and systematic mapping to test performance. LRO initially was placed into a terminator orbit resulting in images acquired under low signal conditions. Over the next three months the incidence angle at the spacecraft’s equator crossing gradually decreased towards high noon, providing a range of illumination conditions. Several hundred south polar images were collected in support of impact site selection for the LCROSS mission; details can be seen in many of the shadows. Commissioning phase images not only proved the instruments’ overall performance was nominal, but also that many geologic features of the lunar surface are well preserved at the meter-scale. Of particular note is the variety of impact-induced morphologies preserved in a near pristine state in and around kilometer-scale and larger young Copernican age impact craters that include: abundant evidence of impact melt of a variety of rheological properties, including coherent flows with surface textures and planimetric properties reflecting supersolidus (e.g., liquid melt) emplacement, blocks delicately perched on

  19. HRSC: High resolution stereo camera

    Science.gov (United States)

    Neukum, G.; Jaumann, R.; Basilevsky, A.T.; Dumke, A.; Van Gasselt, S.; Giese, B.; Hauber, E.; Head, J. W.; Heipke, C.; Hoekzema, N.; Hoffmann, H.; Greeley, R.; Gwinner, K.; Kirk, R.; Markiewicz, W.; McCord, T.B.; Michael, G.; Muller, Jan-Peter; Murray, J.B.; Oberst, J.; Pinet, P.; Pischel, R.; Roatsch, T.; Scholten, F.; Willner, K.

    2009-01-01

    The High Resolution Stereo Camera (HRSC) on Mars Express has delivered a wealth of image data, amounting to over 2.5 TB from the start of the mapping phase in January 2004 to September 2008. In that time, more than a third of Mars was covered at a resolution of 10-20 m/pixel in stereo and colour. After five years in orbit, HRSC is still in excellent shape, and it could continue to operate for many more years. HRSC has proven its ability to close the gap between the low-resolution Viking image data and the high-resolution Mars Orbiter Camera images, leading to a global picture of the geological evolution of Mars that is now much clearer than ever before. Derived highest-resolution terrain model data have closed major gaps and provided an unprecedented insight into the shape of the surface, which is paramount not only for surface analysis and geological interpretation, but also for combination with and analysis of data from other instruments, as well as in planning for future missions. This chapter presents the scientific output from data analysis and high-level data processing, complemented by a summary of how the experiment is conducted by the HRSC team members working in geoscience, atmospheric science, photogrammetry and spectrophotometry. Many of these contributions have been or will be published in peer-reviewed journals and special issues. They form a cross-section of the scientific output, either by summarising the new geoscientific picture of Mars provided by HRSC or by detailing some of the topics of data analysis concerning photogrammetry, cartography and spectral data analysis.

  20. MISR FIRSTLOOK radiometric camera-by-camera Cloud Mask V001

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the FIRSTLOOK Radiometric camera-by-camera Cloud Mask (RCCM) dataset produced using ancillary inputs (RCCT) from the previous time period. It is...

  1. Trajectory association across multiple airborne cameras.

    Science.gov (United States)

    Sheikh, Yaser Ajmal; Shah, Mubarak

    2008-02-01

    A camera mounted on an aerial vehicle provides an excellent means for monitoring large areas of a scene. Utilizing several such cameras on different aerial vehicles allows further flexibility, in terms of increased visual scope and in the pursuit of multiple targets. In this paper, we address the problem of associating objects across multiple airborne cameras. Since the cameras are moving and often widely separated, direct appearance-based or proximity-based constraints cannot be used. Instead, we exploit geometric constraints on the relationship between the motion of each object across cameras, to test multiple association hypotheses, without assuming any prior calibration information. Given our scene model, we propose a likelihood function for evaluating a hypothesized association between observations in multiple cameras that is geometrically motivated. Since multiple cameras exist, ensuring coherency in association is an essential requirement, e.g. that transitive closure is maintained between more than two cameras. To ensure such coherency we pose the problem of maximizing the likelihood function as a k-dimensional matching and use an approximation to find the optimal assignment of association. Using the proposed error function, canonical trajectories of each object and optimal estimates of inter-camera transformations (in a maximum likelihood sense) are computed. Finally, we show that as a result of associating objects across the cameras, a concurrent visualization of multiple aerial video streams is possible and that, under special conditions, trajectories interrupted due to occlusion or missing detections can be repaired. Results are shown on a number of real and controlled scenarios with multiple objects observed by multiple cameras, validating our qualitative models, and through simulation quantitative performance is also reported.

  2. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many ca

  3. Improving Situational Awareness in camera surveillance by combining top-view maps with camera images

    NARCIS (Netherlands)

    Kooi, F.L.; Zeeders, R.

    2009-01-01

    The goal of the experiment described is to improve today's camera surveillance in public spaces. Three designs with the camera images combined on a top-view map were compared to each other and to the current situation in camera surveillance. The goal was to test which design makes spatial relationsh

  4. Camera self-calibration from translation by referring to a known camera.

    Science.gov (United States)

    Zhao, Bin; Hu, Zhaozheng

    2015-09-01

    This paper presents a novel linear method for camera self-calibration by referring to a known (or calibrated) camera. The method requires at least three images, with two images generated by the uncalibrated camera from pure translation and one image generated by the known reference camera. We first propose a method to compute the infinite homography from scene depths. Based on this, we use two images generated by translating the uncalibrated camera to recover scene depths, which are further utilized to linearly compute the infinite homography between an arbitrary uncalibrated image, and the image from the known camera. With the known camera as reference, the computed infinite homography is readily decomposed for camera calibration. The proposed self-calibration method has been tested with simulation and real image data. Experimental results demonstrate that the method is practical and accurate. This paper proposes using a "known reference camera" for camera calibration. The pure translation, as required in the method, is much more maneuverable, compared with some strict motions in the literature, such as pure rotation. The proposed self-calibration method has good potential for solving online camera calibration problems, which has important applications, especially for multicamera and zooming camera systems.

  5. MIOTIC study: a prospective, multicenter, randomized study to evaluate the long-term efficacy of mobile phone-based Internet of Things in the management of patients with stable COPD

    Directory of Open Access Journals (Sweden)

    Zhang J

    2013-09-01

    Jing Zhang, Yuan-lin Song, Chun-xue Bai; Department of Pulmonary Medicine, Zhongshan Hospital, Fudan University, Shanghai, People's Republic of China. Abstract: Chronic obstructive pulmonary disease (COPD) is a common disease that leads to a huge economic and social burden. Efficient and effective management of stable COPD is essential to improve quality of life and reduce medical expenditure. The Internet of Things (IoT), a recent breakthrough in communication technology, seems promising in improving health care delivery, but its potential strengths in COPD management remain poorly understood. We have developed a mobile phone-based IoT (mIoT) platform and initiated a randomized, multicenter, controlled trial entitled the 'MIOTIC study' to investigate the influence of mIoT among stable COPD patients. In the MIOTIC study, at least 600 patients with stable GOLD group C or D COPD and with a history of at least two moderate-to-severe exacerbations within the previous year will be randomly allocated to the control group, which receives routine follow-up, or the intervention group, which receives mIoT management. Endpoints of the study include (1) frequency and severity of acute exacerbation; (2) symptomatic evaluation; (3) pre- and post-bronchodilator forced expiratory volume in 1 second (FEV1) and FEV1/forced vital capacity (FVC) measurement; (4) exercise capacity; and (5) direct medical cost per year. Results from this study should provide direct evidence for the suitability of mIoT in stable COPD patient management. Keywords: Internet of Things, mobile phone, chronic obstructive pulmonary disease, efficacy

  6. DESIGN AND IMPLEMENTATION OF ANDROID SMART PHONE-BASED PRIVACY MANAGEMENT SYSTEM

    Institute of Scientific and Technical Information of China (English)

    谷琼; 李杰; 龚雄兴

    2014-01-01

    In order to improve the management of privacy and security on smart phones, we design, develop, and implement a privacy and security management system for smart phones based on the Android platform, which integrates desktop-loaded user login authentication, permission assignment, communication management, mobile application management, and call interception with password shielding. The system protects data through login authentication; manages communication with blacklisted and private contacts by interception and shielding; and makes secure backups of system contacts, call records, and text messages through local encrypted storage. It offers safety and reliability, flexible management, high confidentiality, and powerful functionality.

  7. Research on Smart-Phone Based Active Safety Warning Technology

    Institute of Scientific and Technical Information of China (English)

    金茂菁

    2012-01-01

    Vehicle active safety systems have proved effective in saving lives and reducing traffic accidents. However, these systems are more expensive and less widespread than smartphones, whose multi-sensor functions and information processing abilities have improved greatly and now permit such applications. This study first introduces the parameters and functions of these sensors, then proposes a framework for a smartphone-based active safety system and designs its safety warning functions, explained using forward collision warning and lane departure warning as examples. Finally, a field experiment comparing two typical smartphone systems with a professional system was conducted to analyze function and accuracy. The results indicate that the accuracy of the smartphone-based systems is acceptable and that well-equipped smartphones can realize the active safety warning functions of professional systems and extend them further.
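
    Forward collision warning, one of the two functions above, reduces at its core to a time-to-collision test. A minimal sketch; the 2.5 s threshold and the idea of estimating the gap from the camera and speeds from GPS are illustrative assumptions, not the evaluated apps' actual logic:

```python
def time_to_collision(gap_m, ego_speed_mps, lead_speed_mps):
    """Time to collision with the vehicle ahead, or None when the gap
    is opening. gap_m would come from camera-based range estimation,
    the speeds from GPS/IMU (hypothetical inputs)."""
    closing = ego_speed_mps - lead_speed_mps
    if closing <= 0:
        return None                     # not closing: no collision course
    return gap_m / closing

def should_warn(gap_m, ego_speed_mps, lead_speed_mps, threshold_s=2.5):
    """Raise a forward collision warning when TTC drops below threshold."""
    ttc = time_to_collision(gap_m, ego_speed_mps, lead_speed_mps)
    return ttc is not None and ttc < threshold_s
```

    A field test like the one described compares when and how reliably such warnings fire against a professional reference system.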

  8. Optimal Camera Placement for Motion Capture Systems.

    Science.gov (United States)

    Rahimian, Pooya; Kearney, Joseph K

    2017-03-01

    Optical motion capture is based on estimating the three-dimensional positions of markers by triangulation from multiple cameras. Successful performance depends on points being visible from at least two cameras and on the accuracy of the triangulation. Triangulation accuracy is strongly related to the positions and orientations of the cameras. Thus, the configuration of the camera network has a critical impact on performance. A poor camera configuration may result in a low quality three-dimensional (3D) estimation and consequently low quality of tracking. This paper introduces and compares two methods for camera placement. The first method is based on a metric that computes target point visibility in the presence of dynamic occlusion from cameras with "good" views. The second method is based on the distribution of views of target points. Efficient algorithms, based on simulated annealing, are introduced for estimating the optimal configuration of cameras for the two metrics and a given distribution of target points. The accuracy and robustness of the algorithms are evaluated through both simulation and empirical measurement. Implementations of the two methods are available for download as tools for the community.
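
    The simulated-annealing search over camera configurations can be sketched with a toy coverage metric: targets count as captured when within range of at least two cameras, since triangulation needs two views. The metric, the 10x10 workspace, and the linear cooling schedule are simplified stand-ins for the visibility and view-distribution metrics in the paper:

```python
import math, random

def coverage(cams, targets, radius=4.0):
    """Score a configuration: number of targets within range of at
    least two cameras (triangulation requires two views)."""
    return sum(
        sum(math.dist(c, t) <= radius for c in cams) >= 2
        for t in targets)

def anneal(targets, n_cams=2, steps=2000, seed=0):
    """Minimal simulated annealing over camera positions on a 10x10
    plane: perturb one camera, accept improvements always and
    regressions with a temperature-dependent probability."""
    rng = random.Random(seed)
    cams = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(n_cams)]
    best, best_score = list(cams), coverage(cams, targets)
    for step in range(steps):
        temp = 1.0 - step / steps                  # linear cooling
        i = rng.randrange(n_cams)
        cand = list(cams)
        cand[i] = (min(10, max(0, cams[i][0] + rng.gauss(0, 1))),
                   min(10, max(0, cams[i][1] + rng.gauss(0, 1))))
        delta = coverage(cand, targets) - coverage(cams, targets)
        if delta >= 0 or rng.random() < math.exp(delta / max(temp, 1e-9)):
            cams = cand
            if coverage(cams, targets) > best_score:
                best, best_score = list(cams), coverage(cams, targets)
    return best, best_score
```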

  9. New camera tube improves ultrasonic inspection system

    Science.gov (United States)

    Berger, H.; Collis, W. J.; Jacobs, J. E.

    1968-01-01

    Electron multiplier, incorporated into the camera tube of an ultrasonic imaging system, improves resolution, effectively shields low level circuits, and provides a high level signal input to the television camera. It is effective for inspection of metallic materials for bonds, voids, and homogeneity.

  10. Thermal Cameras in School Laboratory Activities

    Science.gov (United States)

    Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.

    2015-01-01

    Thermal cameras offer real-time visual access to otherwise invisible thermal phenomena, which are conceptually demanding for learners during traditional teaching. We present three studies of students' conduction of laboratory activities that employ thermal cameras to teach challenging thermal concepts in grades 4, 7 and 10-12. Visualization of…

  11. Solid State Replacement of Rotating Mirror Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Frank, A M; Bartolick, J M

    2006-08-25

    Rotating mirror cameras have been the mainstay of mega-frame-per-second imaging for decades. There is still no electronic camera that can match a film-based rotary mirror camera for the combination of frame count, speed, resolution, and dynamic range. The rotary mirror cameras are predominantly used in the range of 0.1 to 100 microseconds per frame, for 25 to more than a hundred frames. Electron tube gated cameras dominate the sub-microsecond regime but are frame-count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all solid-state architecture, dubbed the "In-situ Storage Image Sensor" or "ISIS" by Prof. Goji Etoh, has made its first appearance on the market, and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluating the presently available technologies and on exploring the capabilities of the ISIS architecture. It is clear that, although there is presently no single-chip camera that can simultaneously match the rotary mirror cameras, the ISIS architecture has the potential to approach their performance.

  12. Depth Estimation Using a Sliding Camera.

    Science.gov (United States)

    Ge, Kailin; Hu, Han; Feng, Jianjiang; Zhou, Jie

    2016-02-01

    Image-based 3D reconstruction technology is widely used in different fields. The conventional algorithms are mainly based on stereo matching between two or more fixed cameras, and high accuracy can only be achieved using a large camera array, which is very expensive and inconvenient in many applications. Another popular choice is utilizing structure-from-motion methods for arbitrarily placed camera(s). However, due to too many degrees of freedom, the computational cost is heavy and the accuracy is rather limited. In this paper, we propose a novel depth estimation algorithm using a sliding camera system. By analyzing the geometric properties of the camera system, we design a camera pose initialization algorithm that works satisfactorily with only a small number of feature points and is robust to noise. For pixels corresponding to different depths, an adaptive iterative algorithm is proposed to choose optimal frames for stereo matching, which can take advantage of the continuously changing camera pose and also greatly reduce computation time. The proposed algorithm can also be easily extended to handle less constrained situations (such as using a camera mounted on a moving robot or vehicle). Experimental results on both synthetic and real-world data have illustrated the effectiveness of the proposed algorithm.
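
    The stereo-matching core of such a system triangulates depth from disparity via Z = f*b/d, and the sliding camera's freedom to choose the baseline b for each depth is what the adaptive frame selection exploits: distant points want a long slide, nearby points a short one. A minimal sketch with illustrative values (the target disparity is a hypothetical tuning parameter):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulated depth for a rectified stereo pair: Z = f * b / d,
    with focal length in pixels, baseline in meters, disparity in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

def pick_baseline(depth_guess_m, focal_px, target_disparity_px=20.0):
    """Choose a slide distance that would place a point at the guessed
    depth at a comfortable disparity for matching (illustrative)."""
    return target_disparity_px * depth_guess_m / focal_px
```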

  13. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Camera systems are used in many applications, including video surveillance for crime prevention and investigation, traffic monitoring on highways, and building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.
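
    One building block of integrity and authenticity for streamed video is a keyed MAC over each frame, bound to its index and timestamp so frames cannot be reordered or substituted. A minimal sketch with a plain in-memory key for illustration; in the paper the key material is protected by the Trusted Computing hardware, and the actual protocol is richer (reboot detection, status reporting, confidentiality):

```python
import hashlib, hmac

def sign_frame(key, frame_bytes, frame_index, timestamp):
    """Authenticate one video frame: the MAC covers the pixel data plus
    its index and capture time, so tampering with any of them is
    detectable."""
    msg = (frame_index.to_bytes(8, "big")
           + timestamp.to_bytes(8, "big")
           + frame_bytes)
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_frame(key, frame_bytes, frame_index, timestamp, tag):
    """Constant-time check of a frame's authentication tag."""
    return hmac.compare_digest(
        sign_frame(key, frame_bytes, frame_index, timestamp), tag)
```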

  14. Cameras Monitor Spacecraft Integrity to Prevent Failures

    Science.gov (United States)

    2014-01-01

    The Jet Propulsion Laboratory contracted Malin Space Science Systems Inc. to outfit Curiosity with four of its cameras using the latest commercial imaging technology. The company parlayed the knowledge gained under working with NASA to develop an off-the-shelf line of cameras, along with a digital video recorder, designed to help troubleshoot problems that may arise on satellites in space.

  15. CCD Color Camera Characterization for Image Measurements

    NARCIS (Netherlands)

    Withagen, P.J.; Groen, F.C.A.; Schutte, K.

    2007-01-01

    In this article, we will analyze a range of different types of cameras for its use in measurements. We verify a general model of a charged coupled device camera using experiments. This model includes gain and offset, additive and multiplicative noise, and gamma correction. It is shown that for sever
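
    The camera model named above, gain and offset, additive and multiplicative noise, and gamma correction, can be sketched for a single pixel. All parameter values here are illustrative, not the calibrated values from the article:

```python
import random

def ccd_response(photons, gain=2.0, offset=10.0, read_noise=1.5,
                 prnu=0.01, gamma=0.45, seed=0):
    """Simulate one pixel of the CCD model from the abstract: a linear
    stage with gain, offset, multiplicative (photo-response
    non-uniformity) and additive (read) noise, followed by gamma
    correction."""
    rng = random.Random(seed)
    mult = 1.0 + rng.gauss(0.0, prnu)      # multiplicative noise
    add = rng.gauss(0.0, read_noise)       # additive noise
    linear = gain * photons * mult + offset + add
    return max(0.0, linear) ** gamma       # gamma correction
```

    Verifying such a model experimentally, as the article does, means estimating each of these parameters from flat-field and dark-frame measurements.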

  16. A BASIC CAMERA UNIT FOR MEDICAL PHOTOGRAPHY.

    Science.gov (United States)

    SMIALOWSKI, A; CURRIE, D J

    1964-08-22

    A camera unit suitable for most medical photographic purposes is described. The unit comprises a single-lens reflex camera, an electronic flash unit and supplementary lenses. Simple instructions for use of this basic unit are presented. The unit is entirely suitable for taking fine-quality photographs of most medical subjects by persons who have had little photographic training.

  17. AIM: Ames Imaging Module Spacecraft Camera

    Science.gov (United States)

    Thompson, Sarah

    2015-01-01

    The AIM camera is a small, lightweight, low power, low cost imaging system developed at NASA Ames. Though it has imaging capabilities similar to those of $1M plus spacecraft cameras, it does so on a fraction of the mass, power and cost budget.

  18. Creating and Using a Camera Obscura

    Science.gov (United States)

    Quinnell, Justin

    2012-01-01

    The camera obscura (Latin for "darkened room") is the earliest optical device and goes back over 2500 years. The small pinhole or lens at the front of the room allows light to enter and this is then "projected" onto a screen inside the room. This differs from a camera, which projects its image onto light-sensitive material.…

  19. Rosetta Star Tracker and Navigation Camera

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

Proposal in response to the Invitation to Tender (ITT) issued by Matra Marconi Space (MSS) for the procurement of the ROSETTA Star Tracker and Navigation Camera.

  20. Fazendo 3d com uma camera so

    CERN Document Server

    Lunazzi, J J

    2010-01-01

A simple system for making stereo photographs or videos, based on just two mirrors that split the image field, was built in 1989 and recently adapted to a digital camera setup.

  1. Creating and Using a Camera Obscura

    Science.gov (United States)

    Quinnell, Justin

    2012-01-01

    The camera obscura (Latin for "darkened room") is the earliest optical device and goes back over 2500 years. The small pinhole or lens at the front of the room allows light to enter and this is then "projected" onto a screen inside the room. This differs from a camera, which projects its image onto light-sensitive material. Originally images were…
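The projection in a camera obscura follows from similar triangles: image size scales with the screen-to-pinhole distance over the object-to-pinhole distance. A one-line sketch (function name and sample values are illustrative only):

```python
def pinhole_image_height(object_height, object_distance, screen_distance):
    """Similar-triangles relation for a pinhole projection: the inverted
    image's height is the object height scaled by the ratio of the
    screen distance to the object distance (same units throughout)."""
    return object_height * screen_distance / object_distance

# A 2 m tall subject 10 m from the pinhole, projected onto a screen 1 m behind it:
h = pinhole_image_height(2.0, 10.0, 1.0)   # 0.2 m, inverted on the screen
```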

  2. Active spectral imaging nondestructive evaluation (SINDE) camera

    Energy Technology Data Exchange (ETDEWEB)

    Simova, E.; Rochefort, P.A., E-mail: eli.simova@cnl.ca [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada)

    2016-06-15

    A proof-of-concept video camera for active spectral imaging nondestructive evaluation has been demonstrated. An active multispectral imaging technique has been implemented in the visible and near infrared by using light emitting diodes with wavelengths spanning from 400 to 970 nm. This shows how the camera can be used in nondestructive evaluation to inspect surfaces and spectrally identify materials and corrosion. (author)

  3. Adapting virtual camera behaviour through player modelling

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

Research in virtual camera control has focused primarily on finding methods to allow designers to place cameras effectively and efficiently in dynamic and unpredictable environments, and to generate complex and dynamic plans for cinematography in virtual environments. In this article, we propose a novel approach to virtual camera control, which builds upon camera control and player modelling to provide the user with an adaptive point-of-view. To achieve this goal, we propose a methodology to model the player's preferences on virtual camera movements and we employ the resulting models to tailor the viewpoint movements to the player type and her game-play style. Ultimately, the methodology is applied to a 3D platform game and is evaluated through a controlled experiment; the results suggest that the resulting adaptive cinematographic experience is favoured by some player types and it can generate…

  4. Incremental activity modeling in multiple disjoint cameras.

    Science.gov (United States)

    Loy, Chen Change; Xiang, Tao; Gong, Shaogang

    2012-09-01

    Activity modeling and unusual event detection in a network of cameras is challenging, particularly when the camera views are not overlapped. We show that it is possible to detect unusual events in multiple disjoint cameras as context-incoherent patterns through incremental learning of time delayed dependencies between distributed local activities observed within and across camera views. Specifically, we model multicamera activities using a Time Delayed Probabilistic Graphical Model (TD-PGM) with different nodes representing activities in different decomposed regions from different views and the directed links between nodes encoding their time delayed dependencies. To deal with visual context changes, we formulate a novel incremental learning method for modeling time delayed dependencies that change over time. We validate the effectiveness of the proposed approach using a synthetic data set and videos captured from a camera network installed at a busy underground station.
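A toy stand-in for the time-delayed dependency idea above: estimate the lag at which activity in one camera region best follows activity in another by scanning cross-correlation over candidate delays. The paper learns a full probabilistic graphical model incrementally; this sketch only illustrates the delay-estimation ingredient:

```python
import numpy as np

def best_delay(a, b, max_lag):
    """Return the lag (0..max_lag) at which activity signal b best follows
    activity signal a, by maximising the inner product of a against a
    lag-shifted b. Signals are 1-D arrays of per-frame activity levels."""
    best_score, best_lag = -np.inf, 0
    for lag in range(max_lag + 1):
        score = np.dot(a, b) if lag == 0 else np.dot(a[:-lag], b[lag:])
        if score > best_score:
            best_score, best_lag = score, lag
    return best_lag
```

In the paper's setting, such delays become the edge parameters of the time-delayed graphical model; activity that violates the learned delays is flagged as context-incoherent.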

  5. Flow visualization by mobile phone cameras

    Science.gov (United States)

    Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.

    2016-06-01

Mobile smart phones have completely changed people's communication within the last ten years. However, these devices do not only offer communication through different channels but also applications for fun and recreation. In this respect, mobile phone cameras now include relatively fast (up to 240 Hz) modes to capture high-speed videos of sports events or other fast processes. The article therefore explores the possibility of making use of this development and the widespread availability of these cameras for velocity measurements in industrial or technical applications and for fluid dynamics education in high schools and at universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment of a free water jet was used to prove the concept, shed some light on the achievable quality and determine bottlenecks by comparing the results obtained with a mobile phone camera with data taken by a high-speed camera suited for scientific experiments.
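The core of a simplistic PIV system like the one discussed above is cross-correlating small interrogation windows between two consecutive frames to find the mean particle displacement. A minimal FFT-based sketch (not the authors' implementation; window handling and sub-pixel peak fitting are omitted):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the integer-pixel displacement between two interrogation
    windows via FFT-based circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap so a negative shift is not reported as N - shift
    dy = peak[0] if peak[0] <= a.shape[0] // 2 else peak[0] - a.shape[0]
    dx = peak[1] if peak[1] <= a.shape[1] // 2 else peak[1] - a.shape[1]
    return dy, dx
```

Dividing each frame pair into a grid of such windows and converting pixel shifts to velocities via the magnification and frame interval yields the velocity field.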

  6. Gamma camera performance: technical assessment protocol

    Energy Technology Data Exchange (ETDEWEB)

    Bolster, A.A. [West Glasgow Hospitals NHS Trust, London (United Kingdom). Dept. of Clinical Physics; Waddington, W.A. [University College London Hospitals NHS Trust, London (United Kingdom). Inst. of Nuclear Medicine

    1996-12-31

This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author).

  7. Modelling Virtual Camera Behaviour Through Player Gaze

    DEFF Research Database (Denmark)

    Picardi, Andrea; Burelli, Paolo; Yannakakis, Georgios N.

    2012-01-01

In a three-dimensional virtual environment, aspects such as narrative and interaction largely depend on the placement and animation of the virtual camera. Therefore, virtual camera control plays a critical role in player experience and, thereby, in the overall quality of a computer game. Both game industry and game AI research focus on the development of increasingly sophisticated systems to automate the control of the virtual camera, integrating artificial intelligence algorithms within physical simulations. However, in both industry and academia little research has been carried out on the relationship between virtual camera, game-play and player behaviour. We run a game user experiment to shed some light on this relationship and identify relevant differences between camera behaviours through different game sessions, playing behaviours and player gaze patterns. Results show that users can…

  8. [Mobile phone based wireless microscopy imaging technology].

    Science.gov (United States)

    Yuan, Yucheng; Liu, Jing

    2011-03-01

This article proposes a new device named "Wireless Cellscope" that combines a mobile phone and an optical microscope. The established wireless microscope platform consists of a mobile phone, a network monitor, and a miniaturized or high-resolution microscope. A series of conceptual experiments were performed on microscopic observation of ordinary objects and mouse tumor tissue slices. It was demonstrated that the new method can acquire microscopy images wirelessly, in a spatially independent way. With its small size and low cost, the device has wide applicability in non-disturbing investigation of cell/tissue cultures, long-distance observation of dangerous biological samples, and similar tasks.

  9. A Mobile Phone based Speech Therapist

    OpenAIRE

    Pandey, Vinod K.; Pande, Arun; Kopparapu, Sunil Kumar

    2016-01-01

    Patients with articulatory disorders often have difficulty in speaking. These patients need several speech therapy sessions to enable them speak normally. These therapy sessions are conducted by a specialized speech therapist. The goal of speech therapy is to develop good speech habits as well as to teach how to articulate sounds the right way. Speech therapy is critical for continuous improvement to regain normal speech. Speech therapy sessions require a patient to travel to a hospital or a ...

  10. Cloud Computing with Context Cameras

    CERN Document Server

    Pickles, A J

    2013-01-01

    We summarize methods and plans to monitor and calibrate photometric observations with our autonomous, robotic network of 2m, 1m and 40cm telescopes. These are sited globally to optimize our ability to observe time-variable sources. Wide field "context" cameras are aligned with our network telescopes and cycle every 2 minutes through BVriz filters, spanning our optical range. We measure instantaneous zero-point offsets and transparency (throughput) against calibrators in the 5-12m range from the all-sky Tycho2 catalog, and periodically against primary standards. Similar measurements are made for all our science images, with typical fields of view of 0.5 degrees. These are matched against Landolt, Stetson and Sloan standards, and against calibrators in the 10-17m range from the all-sky APASS catalog. Such measurements provide pretty good instantaneous flux calibration, often to better than 5%, even in cloudy conditions. Zero-point and transparency measurements can be used to characterize, monitor and inter-comp...

  11. Practical intraoperative stereo camera calibration.

    Science.gov (United States)

    Pratt, Philip; Bergeles, Christos; Darzi, Ara; Yang, Guang-Zhong

    2014-01-01

    Many of the currently available stereo endoscopes employed during minimally invasive surgical procedures have shallow depths of field. Consequently, focus settings are adjusted from time to time in order to achieve the best view of the operative workspace. Invalidating any prior calibration procedure, this presents a significant problem for image guidance applications as they typically rely on the calibrated camera parameters for a variety of geometric tasks, including triangulation, registration and scene reconstruction. While recalibration can be performed intraoperatively, this invariably results in a major disruption to workflow, and can be seen to represent a genuine barrier to the widespread adoption of image guidance technologies. The novel solution described herein constructs a model of the stereo endoscope across the continuum of focus settings, thereby reducing the number of degrees of freedom to one, such that a single view of reference geometry will determine the calibration uniquely. No special hardware or access to proprietary interfaces is required, and the method is ready for evaluation during human cases. A thorough quantitative analysis indicates that the resulting intrinsic and extrinsic parameters lead to calibrations as accurate as those derived from multiple pattern views.

  12. Smart Camera Technology Increases Quality

    Science.gov (United States)

    2004-01-01

When it comes to real-time image processing, everyone is an expert. People begin processing images at birth and rapidly learn to control their responses through the real-time processing of the human visual system. The human eye captures an enormous amount of information in the form of light images. In order to keep the brain from becoming overloaded with all the data, portions of an image are processed at a higher resolution than others, such as a traffic light changing colors. In the same manner, image processing products strive to extract the information stored in light in the most efficient way possible. Digital cameras available today capture millions of pixels worth of information from incident light. However, at frame rates of more than a few per second, existing digital interfaces are overwhelmed. All the user can do is store several frames to memory until that memory is full; subsequent information is then lost. New technology pairs existing digital interface technology with an off-the-shelf complementary metal oxide semiconductor (CMOS) imager to provide more than 500 frames per second of specialty image processing. The result is a cost-effective detection system unlike any other.

  13. True three-dimensional camera

    Science.gov (United States)

    Kornreich, Philipp; Farell, Bart

    2013-01-01

    An imager that can measure the distance from each pixel to the point on the object that is in focus at the pixel is described. This is accomplished by short photo-conducting lightguides at each pixel. In the eye the rods and cones are the fiber-like lightguides. The device uses ambient light that is only coherent in spherical shell-shaped light packets of thickness of one coherence length. Modern semiconductor technology permits the construction of lightguides shorter than a coherence length of ambient light. Each of the frequency components of the broad band light arriving at a pixel has a phase proportional to the distance from an object point to its image pixel. Light frequency components in the packet arriving at a pixel through a convex lens add constructively only if the light comes from the object point in focus at this pixel. The light in packets from all other object points cancels. Thus the pixel receives light from one object point only. The lightguide has contacts along its length. The lightguide charge carriers are generated by the light patterns. These light patterns, and thus the photocurrent, shift in response to the phase of the input signal. Thus, the photocurrent is a function of the distance from the pixel to its object point. Applications include autonomous vehicle navigation and robotic vision. Another application is a crude teleportation system consisting of a camera and a three-dimensional printer at a remote location.

  14. NIR Camera/spectrograph: TEQUILA

    Science.gov (United States)

    Ruiz, E.; Sohn, E.; Cruz-Gonzalez, I.; Salas, L.; Parraga, A.; Torres, R.; Perez, M.; Cobos, F.; Tejada, C.; Iriarte, A.

    1998-11-01

We describe the configuration and operation modes of the IR camera/spectrograph called TEQUILA, based on a 1024 × 1024 HgCdTe FPA (HAWAII). The optical system will allow three possible modes of operation: direct imaging, low- and medium-resolution spectroscopy, and polarimetry. The basic system is designed to consist of the following: 1) an LN2 dewar that houses the FPA together with the preamplifiers and a cylinder with 24 filter positions; 2) control and readout electronics based on DSP modules linked to a workstation through fiber optics; 3) an optomechanical assembly cooled to −30 °C that provides efficient operation of the instrument in its various modes; 4) a control module for the moving parts of the instrument. The opto-mechanical assembly will have the necessary provisions to install a scanning Fabry-Perot interferometer and an adaptive optics correction system. The final image acquisition and control of the whole instrument is carried out on a workstation to provide the observer with a friendly environment. The system will operate at the 2.1 m telescope at the Observatorio Astronomico Nacional in San Pedro Martir, B.C. (Mexico), and is intended to be a first-light instrument for the new 7.8 m Mexican Infrared-Optical Telescope (TIM).

  15. Autonomous Multicamera Tracking on Embedded Smart Cameras

Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    Full Text Available There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.

  16. Sky camera geometric calibration using solar observations

    Science.gov (United States)

    Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan

    2016-09-01

    A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. Calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
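The equisolid-angle projection underlying this calibration maps the angle θ from the optical axis to a radial image distance r = 2f sin(θ/2). A sketch of the model and of the RMS residual a calibration would minimise, assuming a hypothetical focal length in pixels and ignoring camera pose and lens distortion:

```python
import numpy as np

def equisolid_radius(theta, f):
    """Equisolid-angle fisheye projection: radial distance (pixels) on the
    image plane for a ray at angle theta (radians) from the optical axis,
    with focal length f in pixels."""
    return 2.0 * f * np.sin(theta / 2.0)

def calibration_residual(theta_obs, r_obs, f):
    """RMS residual between observed sun positions (angle, radius pairs)
    and the projection model; a calibration minimises this over f and,
    in practice, over the full camera pose as well."""
    return float(np.sqrt(np.mean((r_obs - equisolid_radius(theta_obs, f)) ** 2)))
```

With synthetic sun observations generated at, say, f = 600 px, the residual vanishes at the true focal length and grows for any other value, which is the behaviour a solver exploits.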

  17. Automatic camera tracking for remote manipulators

    Energy Technology Data Exchange (ETDEWEB)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-04-01

The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables.
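The pan/tilt solution and the bang-bang control with a ±2° deadband described above can be sketched as follows. The axis convention and the slew rate are assumptions for illustration, not values from the report:

```python
import numpy as np

def pan_tilt(target_cam):
    """Pan and tilt angles (degrees) that point a camera at a target given
    in the camera-base frame; axes assumed x right, y forward, z up. In the
    paper's kinematic approach, the target position would first be mapped
    into this frame via 4 x 4 homogeneous transformation matrices."""
    x, y, z = target_cam
    pan = np.degrees(np.arctan2(x, y))
    tilt = np.degrees(np.arctan2(z, np.hypot(x, y)))
    return pan, tilt

def command(error_deg, deadband=2.0, rate=5.0):
    """Bang-bang command with a +/-2 degree deadband: no motion inside the
    deadband (avoiding continuous camera jitter), fixed-rate slew outside."""
    if abs(error_deg) <= deadband:
        return 0.0
    return rate if error_deg > 0 else -rate
```

Because the angles come from joint sensors rather than vision feedback, the same two functions run regardless of what the camera actually sees.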

  18. Electronic cameras for low-light microscopy.

    Science.gov (United States)

    Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith

    2013-01-01

This chapter introduces electronic cameras, discusses the various parameters considered when evaluating their performance, and describes some of the key features of different camera formats. The chapter also explains the basic functioning of electronic cameras and how their properties can be exploited to optimize image quality under low-light conditions. Although there are many types of cameras available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of the signal-to-noise ratio and their spatial resolution. Slow-scan cameras are thus the first choice for experiments using fixed specimens, such as measurements using immunofluorescence and fluorescence in situ hybridization. However, if video-rate imaging is required, one need not evaluate slow-scan CCD cameras. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high-intensity illumination. When video-rate imaging is required for very dim specimens, the electron-multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. Variable-integration-time video cameras are very attractive options if one needs to acquire images at video rate as well as with longer integration times for less bright samples. This flexibility can facilitate many diverse applications with highly varied light levels.

  19. Intelligent Camera for Surface Defect Inspection

    Institute of Scientific and Technical Information of China (English)

    CHENG Wan-sheng; ZHAO Jie; WANG Ke-cheng

    2007-01-01

An intelligent camera for surface defect inspection is presented which can pre-process the surface image of a rolled strip and pick out defective areas at a speed of 1600 meters per minute. The camera is made up of a high-speed line CCD, a 60 Mb/s CCD digitizer with a correlated double sampling function, and a field programmable gate array (FPGA), which can quickly distinguish defective areas using a perceptron embedded in the FPGA, thus dramatically reducing the data to be further processed. Experiments show that the camera can keep up with high production speeds, and reduce the cost and complexity of automated surface inspection systems.

  20. Multi-digital Still Cameras with CCD

    Institute of Scientific and Technical Information of China (English)

    LIU Wen-jing; LONG Zai-chuan; XIONG Ping; HUAN Yao-xiong

    2006-01-01

A digital still camera is the typical tool for capturing digital images. With the development of IC technology and optimization algorithms, the performance of digital still cameras (DSCs) will become more and more powerful. But can more and better information be obtained by combining the images from multiple digital still cameras? Experiments show that the answer is yes. By using multiple DSCs at different angles, various kinds of 3-D information about the object are obtained.

  1. Fuzzy logic control for camera tracking system

    Science.gov (United States)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.

  2. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties, i.e. visual composition, while smoothly moving through the environment and avoiding obstacles. A large number of different… For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios…

  3. Close-range photogrammetry with video cameras

    Science.gov (United States)

    Burner, A. W.; Snow, W. L.; Goad, W. K.

    1985-01-01

    Examples of photogrammetric measurements made with video cameras uncorrected for electronic and optical lens distortions are presented. The measurement and correction of electronic distortions of video cameras using both bilinear and polynomial interpolation are discussed. Examples showing the relative stability of electronic distortions over long periods of time are presented. Having corrected for electronic distortion, the data are further corrected for lens distortion using the plumb line method. Examples of close-range photogrammetric data taken with video cameras corrected for both electronic and optical lens distortion are presented.

  4. Task Panel Sensing with a Movable Camera

    Science.gov (United States)

    Wolfe, William J.; Mathis, Donald W.; Magee, Michael; Hoff, William A.

    1990-03-01

    This paper discusses the integration of model based computer vision with a robot planning system. The vision system deals with structured objects with several movable parts (the "Task Panel"). The robot planning system controls a T3-746 manipulator that has a gripper and a wrist mounted camera. There are two control functions: move the gripper into position for manipulating the panel fixtures (doors, latches, etc.), and move the camera into positions preferred by the vision system. This paper emphasizes the issues related to repositioning the camera for improved viewpoints.

  5. Towards Adaptive Virtual Camera Control In Computer Games

    OpenAIRE

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platf…

  6. Camera vibration measurement using blinking light-emitting diode array.

    Science.gov (United States)

    Nishi, Kazuki; Matsuda, Yuichi

    2017-01-23

    We present a new method for measuring camera vibrations such as camera shake and shutter shock. This method successfully detects the vibration trajectory and transient waveforms from the camera image itself. We employ a time-varying pattern as the camera test chart over the conventional static pattern. This pattern is implemented using a specially developed blinking light-emitting-diode array. We describe the theoretical framework and pattern analysis of the camera image for measuring camera vibrations. Our verification experiments show that our method has a detection accuracy and sensitivity of 0.1 pixels, and is robust against image distortion. Measurement results of camera vibrations in commercial cameras are also demonstrated.

  7. Contrail study with ground-based cameras

    Directory of Open Access Journals (Sweden)

    U. Schumann

    2013-08-01

Full Text Available Photogrammetric methods and analysis results for contrails observed with wide-angle cameras are described. Four cameras of two different types (view angle […]. With this information, the aircraft causing the contrails are identified by comparison to traffic waypoint data. The observations are compared with synthetic camera pictures of contrails simulated with the contrail prediction model CoCiP, a Lagrangian model using air traffic movement data and numerical weather prediction (NWP) data as input. The results provide tests for the NWP and contrail models. The cameras show spreading and thickening contrails, suggesting ice-supersaturation in the ambient air. The ice-supersaturated layer is found to be thicker and more humid in this case than predicted by the NWP model used. The simulated and observed contrail positions agree up to differences caused by uncertain wind data. The contrail widths, which depend on wake vortex spreading, ambient shear and turbulence, were partly wider than simulated.

  8. Planetary camera control improves microfiche production

    Science.gov (United States)

    Chesterton, W. L.; Lewis, E. B.

    1965-01-01

Microfiche is prepared using an automatic control system for a planetary camera. The system provides blank end-of-row exposures and signals card completion so the legend of the next card may be photographed.

  9. Calibration Procedures on Oblique Camera Setups

    Science.gov (United States)

    Kemper, G.; Melykuti, B.; Yu, C.

    2016-06-01

Besides the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires a precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration, in order to detect variations in the single camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and a 50 mm lens while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms that had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed on a special calibration flight with 351 shots from all 5 cameras and registered GPS/IMU data. This specific mission was flown at two different altitudes with additional cross lines at each flying height. The five images from each exposure position have no overlap, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration. In a first step, with the help of …

  10. Research of Camera Calibration Based on DSP

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2013-09-01

    Full Text Available To take advantage of the high efficiency and stability of DSP in data processing, and of the functions of the OpenCV library, this study puts forward a scheme for camera calibration on a DSP embedded system. An algorithm for camera calibration based on OpenCV is designed by analyzing the camera model and lens distortion. The port of EMCV to DSP is completed, and the calibration algorithm is migrated and optimized based on the CCS development environment and the DSP/BIOS system. While realizing the calibration function, this algorithm improves the efficiency of program execution and the precision of calibration, and lays the foundation for further research on DSP-embedded visual localization.
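    The OpenCV calibration model referred to above corrects lens distortion with a polynomial radial model. A minimal pure-Python sketch of the common two-term radial form (the coefficients here are made-up illustrative values, not from the paper):

```python
def apply_radial_distortion(x, y, k1, k2):
    """Map ideal normalized image coords (x, y) to distorted coords
    using the two-term radial model x_d = x * (1 + k1*r^2 + k2*r^4)."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# Hypothetical coefficients for a mildly barrel-distorting lens:
# negative k1 pulls points toward the image centre
k1, k2 = -0.12, 0.01
xd, yd = apply_radial_distortion(0.5, 0.25, k1, k2)
```

Calibration estimates k1 and k2 (plus intrinsics) by minimizing the reprojection error of known pattern points under this model.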

  11. Portable mini gamma camera for medical applications

    CERN Document Server

    Porras, E; Benlloch, J M; El-Djalil-Kadi-Hanifi, M; López, S; Pavon, N; Ruiz, J A; Sánchez, F; Sebastiá, A

    2002-01-01

    A small, portable and low-cost gamma camera for medical applications has been developed and clinically tested. This camera, based on a scintillator crystal and a Position Sensitive Photo-Multiplier Tube, has a useful field of view of 4.6 cm diameter and provides 2.2 mm of intrinsic spatial resolution. Its mobility and light weight allow it to reach the patient from any desired direction. This camera images small organs with high efficiency and thus addresses the demand for devices tailored to specific clinical applications. In this paper, we present the camera and briefly describe the procedures that led us to choose its configuration and the image reconstruction method. The clinical tests and diagnostic capability are also presented and discussed.

  12. A Survey of Catadioptric Omnidirectional Camera Calibration

    Directory of Open Access Journals (Sweden)

    Yan Zhang

    2013-02-01

    Full Text Available For over a dozen years, computer vision has grown more popular; within it, the omnidirectional camera, with its larger field of view, has been widely used in many fields such as robot navigation, visual surveillance, virtual reality, and three-dimensional reconstruction. Camera calibration is an essential step in obtaining three-dimensional geometric information from a two-dimensional image. Meanwhile, omnidirectional camera images suffer catadioptric distortion, which needs to be corrected in many applications; thus the study of calibration methods for such cameras has important theoretical significance and practical applications. This paper first introduces the research status of catadioptric omnidirectional imaging systems; then the image formation process of the catadioptric omnidirectional imaging system is described; finally a simple classification of omnidirectional imaging methods is given, and the advantages and disadvantages of these methods are discussed.

  13. High-performance digital color video camera

    Science.gov (United States)

    Parulski, Kenneth A.; D'Luna, Lionel J.; Benamati, Brian L.; Shelley, Paul R.

    1992-01-01

    Typical one-chip color cameras use analog video processing circuits. An improved digital camera architecture has been developed using a dual-slope A/D conversion technique and two full-custom CMOS digital video processing integrated circuits, the color filter array (CFA) processor and the RGB postprocessor. The system used a 768 X 484 active element interline transfer CCD with a new field-staggered 3G color filter pattern and a lenslet overlay, which doubles the sensitivity of the camera. The industrial-quality digital camera design offers improved image quality, reliability, and manufacturability, while meeting aggressive size, power, and cost constraints. The CFA processor digital VLSI chip includes color filter interpolation processing, an optical black clamp, defect correction, white balance, and gain control. The RGB postprocessor digital integrated circuit includes a color correction matrix, gamma correction, 2D edge enhancement, and circuits to control the black balance, lens aperture, and focus.
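    Some of the pipeline stages listed above (black clamp, white balance, gamma correction) can be illustrated as simple per-pixel operations. The following is an illustrative simplification with assumed black level, gains, and gamma, not the chips' actual arithmetic:

```python
def black_clamp(value, black_level):
    """Subtract the optical black level, clipping at zero."""
    return max(0, value - black_level)

def white_balance(r, g, b, gain_r, gain_b):
    """Scale red and blue relative to green to neutralize the illuminant."""
    return r * gain_r, g, b * gain_b

def gamma_correct(value, gamma=2.2, max_val=255):
    """Apply display gamma to a linear sensor value."""
    return max_val * (value / max_val) ** (1.0 / gamma)

# Hypothetical pixel: raw RGB after CFA interpolation
r, g, b = (120, 200, 90)
r, g, b = (black_clamp(v, 16) for v in (r, g, b))
r, g, b = white_balance(r, g, b, gain_r=1.4, gain_b=1.8)
rgb = tuple(round(gamma_correct(v)) for v in (r, g, b))
```

In the real hardware these steps run in fixed-point VLSI logic, but the order and intent of the operations are the same.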

  14. Increase in the Array Television Camera Sensitivity

    Science.gov (United States)

    Shakhrukhanov, O. S.

    A simple adder circuit for successive television frames, which makes it possible to considerably increase the sensitivity of such radiation detectors, is suggested, taking the QN902K array television camera as an example.
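    Summing or averaging successive frames raises sensitivity because uncorrelated noise grows only as the square root of the frame count while the signal grows linearly. A small simulation sketch of that effect (assumed Gaussian read noise; this models the principle, not the adder circuit itself):

```python
import random

def average_frames(frames):
    """Pixel-wise mean of successive frames; noise falls roughly as 1/sqrt(N)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

random.seed(42)
true_signal = 50.0
# 64 noisy one-pixel "frames" with Gaussian read noise (sigma = 10)
frames = [[true_signal + random.gauss(0, 10)] for _ in range(64)]
averaged = average_frames(frames)[0]
# The averaged value sits much closer to the true signal than a single
# noisy frame typically does (std of the mean is sigma / 8 here).
```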

  15. POLICE BODY CAMERAS: SEEING MAY BE BELIEVING

    Directory of Open Access Journals (Sweden)

    Noel Otu

    2016-11-01

    Full Text Available While the concept of body-mounted cameras (BMC) worn by police officers is a controversial issue, it is not new. Since the early 2000s, police departments across the United States, England, Brazil, and Australia have been implementing wearable cameras. Like all devices used in policing, body-mounted cameras can create a sense of increased power, but also additional responsibilities for both the agencies and individual officers. This paper examines the public debate regarding body-mounted cameras. The conclusions drawn show that while these devices can provide information about incidents relating to police–citizen encounters, and can deter citizen and police misbehavior, they can also violate a citizen’s privacy rights. This paper outlines several ramifications for practice as well as implications for policy.

  16. Selecting the Right Camera for Your Desktop.

    Science.gov (United States)

    Rhodes, John

    1997-01-01

    Provides an overview of camera options and selection criteria for desktop videoconferencing. Key factors in image quality are discussed, including lighting, resolution, and signal-to-noise ratio; and steps to improve image quality are suggested. (LRW)

  17. CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    G. Kemper

    2016-06-01

    Full Text Available Besides the creation of virtual animated 3D city models and analyses for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, reducing the number of control points forces the use of direct referencing devices. This requires a precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix with a 50 mm lens while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna–IMU lever arms; these had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots of all 5 cameras and registered GPS/IMU data. This mission was designed with two different altitudes and additional cross lines at each flying height. The five images from each exposure position have no overlap, but within the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration. In a first

  18. Compact stereo endoscopic camera using microprism arrays.

    Science.gov (United States)

    Yang, Sung-Pyo; Kim, Jae-Jun; Jang, Kyung-Won; Song, Weon-Kook; Jeong, Ki-Hun

    2016-03-15

    This work reports a microprism array (MPA) based compact stereo endoscopic camera with a single image sensor. The MPAs were monolithically fabricated by using two-step photolithography and geometry-guided resist reflow to form an appropriate prism angle for stereo image pair formation. The fabricated MPAs were transferred onto a glass substrate with a UV curable resin replica by using polydimethylsiloxane (PDMS) replica molding and then successfully integrated in front of a single camera module. The stereo endoscopic camera with MPA splits an image into two stereo images and successfully demonstrates the binocular disparities between the stereo image pairs for objects with different distances. This stereo endoscopic camera can serve as a compact and 3D imaging platform for medical, industrial, or military uses.
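    The binocular disparities mentioned above map to depth through the usual rectified-stereo relation Z = f·B/d: a larger disparity means a closer object. A sketch with assumed endoscope-scale numbers (focal length and baseline below are illustrative, not the paper's):

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Depth Z = f * B / d for a rectified stereo pair.
    focal_px: focal length in pixels; baseline_mm: camera separation."""
    return focal_px * baseline_mm / disparity_px

# Hypothetical endoscope-scale numbers: 500 px focal length, 2 mm baseline
near = depth_from_disparity(500, 2.0, 50)  # large disparity -> close object
far = depth_from_disparity(500, 2.0, 10)   # small disparity -> distant object
```

This is why the MPA prism angle matters: it fixes the effective baseline, and hence the disparity available at working distances.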

  19. Ge Quantum Dot Infrared Imaging Camera Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Luna Innovations Incorporated proposes to develop a high performance Ge quantum dots-based infrared (IR) imaging camera on Si substrate. The high sensitivity, large...

  20. Vacuum compatible miniature CCD camera head

    Science.gov (United States)

    Conder, Alan D.

    2000-01-01

    A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, capable of operating both in and out of a vacuum environment, and versatile. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04", for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, and for a variety of military, industrial, and medical imaging applications.

  1. CMOS Camera Array With Onboard Memory

    Science.gov (United States)

    Gat, Nahum

    2009-01-01

    A compact CMOS (complementary metal oxide semiconductor) camera system has been developed with high resolution (1.3 Megapixels), a USB (universal serial bus) 2.0 interface, and an onboard memory. Exposure times, and other operating parameters, are sent from a control PC via the USB port. Data from the camera can be received via the USB port and the interface allows for simple control and data capture through a laptop computer.

  2. A stereoscopic lens for digital cinema cameras

    Science.gov (United States)

    Lipton, Lenny; Rupkalvis, John

    2015-03-01

    Live-action stereoscopic feature films are, for the most part, produced using a costly post-production process to convert planar cinematography into stereo-pair images and are only occasionally shot stereoscopically using bulky dual-cameras that are adaptations of the Ramsdell rig. The stereoscopic lens design described here might very well encourage more live-action image capture because it uses standard digital cinema cameras and workflow to save time and money.

  3. Analyzing storage media of digital camera

    OpenAIRE

    Chow, KP; Tse, KWH; Law, FYW; Ieong, RSC; Kwan, MYK; Tse, H.; Lai, PKY

    2009-01-01

    Digital photography has become popular in recent years. Photographs have become common tools for people to record every tiny part of their daily life. By analyzing the storage media of a digital camera, crime investigators may extract a lot of useful information to reconstruct the events. In this work, we discuss a few approaches to analyzing these kinds of storage media of digital cameras. A hypothetical crime case is used as a case study to demonstrate the concepts. © 2009 IEEE.

  4. Single camera stereo using structure from motion

    Science.gov (United States)

    McBride, Jonah; Snorrason, Magnus; Goodsell, Thomas; Eaton, Ross; Stevens, Mark R.

    2005-05-01

    Mobile robot designers frequently look to computer vision to solve navigation, obstacle avoidance, and object detection problems such as those encountered in parking lot surveillance. Stereo reconstruction is a useful technique in this domain and can be done in two ways. The first requires a fixed stereo camera rig to provide two side-by-side images; the second uses a single camera in motion to provide the images. While stereo rigs can be accurately calibrated in advance, they rely on a fixed baseline distance between the two cameras. The advantage of a single-camera method is the flexibility to change the baseline distance to best match each scenario. This directly increases the robustness of the stereo algorithm and increases the effective range of the system. The challenge comes from accurately rectifying the images into an ideal stereo pair. Structure from motion (SFM) can be used to compute the camera motion between the two images, but its accuracy is limited and small errors can cause rectified images to be misaligned. We present a single-camera stereo system that incorporates a Levenberg-Marquardt minimization of rectification parameters to bring the rectified images into alignment.

  5. The Use of Camera Traps in Wildlife

    Directory of Open Access Journals (Sweden)

    Yasin Uçarlı

    2013-11-01

    Full Text Available Camera traps are increasingly used in abundance and density estimates of wildlife species. Camera traps are a very good alternative to direct observation, particularly in steep terrain, in areas covered with dense vegetation, or for nocturnal species. The main reason for using camera traps is that they eliminate losses of money, personnel, and time while monitoring different points continuously and simultaneously. Camera traps are motion- and heat-sensitive and, depending on the model, can take photos or video. Crossing points and the feeding or mating areas of the focal species are priority locations for setting camera traps. The population size can be estimated from the images combined with capture-recapture methods. The population density is then the population size divided by the effective sampling area. The mating and breeding seasons, habitat choice, group structures, and survival rates of the focal species can also be derived from the images. Camera traps are thus a very useful and economical way to obtain the necessary data about particularly elusive species for planning and conservation efforts.
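    The capture-recapture estimate mentioned above is, in its simplest two-session (Lincoln-Petersen) form, N = M·C/R, and density is N divided by the effective sampling area. A sketch with hypothetical survey numbers:

```python
def lincoln_petersen(marked_first, caught_second, recaptured):
    """Two-session population estimate N = (M * C) / R:
    M individuals identified in session one, C photographed in session two,
    R of which were already known from session one."""
    return marked_first * caught_second / recaptured

def density(population, effective_area_km2):
    """Population density over the effective sampling area."""
    return population / effective_area_km2

# Hypothetical camera-trap survey
n_hat = lincoln_petersen(20, 25, 10)          # estimated population size
d = density(n_hat, effective_area_km2=25.0)   # individuals per km^2
```

In practice, more robust estimators (e.g. with closure and detectability corrections) are used, but the arithmetic above is the core idea.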

  6. A comparison of colour micrographs obtained with a charge-coupled device (CCD) camera and a 35-mm camera

    DEFF Research Database (Denmark)

    Pedersen, Mads Møller; Smedegaard, Jesper; Jensen, Peter Koch

    2005-01-01

    ophthalmology, colour CCD camera, colour film, digital imaging, resolution, micrographs, histopathology, light microscopy

  7. Lag Camera: A Moving Multi-Camera Array for Scene-Acquisition

    Directory of Open Access Journals (Sweden)

    Yi Xu

    2007-04-01

    Full Text Available Many applications, such as telepresence, virtual reality, and interactive walkthroughs, require a three-dimensional (3D) model of real-world environments. Methods such as lightfields, geometric reconstruction, and computer vision use cameras to acquire visual samples of the environment and construct a model. Unfortunately, obtaining models of real-world locations is a challenging task. In particular, important environments are often actively in use, containing moving objects, such as people entering and leaving the scene. The methods listed above have difficulty capturing the color and structure of the environment in the presence of moving and temporary occluders. We describe a class of cameras called lag cameras. The main concept is to generalize a camera to take samples over space and time. Such a camera can easily and interactively detect moving objects while continuously moving through the environment. Moreover, since both the lag camera and the occluder are moving, the scene behind the occluder is captured by the lag camera even from viewpoints where the occluder lies between the lag camera and the hidden scene. We demonstrate an implementation of a lag camera, complete with analysis and captured environments.

  8. Camera Calibration Accuracy at Different Uav Flying Heights

    Science.gov (United States)

    Yusoff, A. R.; Ariff, M. F. M.; Idris, K. M.; Majid, Z.; Chong, A. K.

    2017-02-01

    Unmanned Aerial Vehicles (UAVs) can be used to acquire highly accurate data in deformation surveys, whereby low-cost digital cameras are commonly used in UAV mapping. Camera calibration is therefore considered important for obtaining high-accuracy UAV mapping with low-cost digital cameras. The main focus of this study was to calibrate the UAV camera at different camera distances and check the measurement accuracy. The scope of this study included camera calibration in the laboratory and in the field, and the UAV image mapping accuracy assessment used calibration parameters from different camera distances. The camera distances used for the calibration image acquisition and mapping accuracy assessment were 1.5 metres in the laboratory, and 15 and 25 metres in the field, using a Sony NEX6 digital camera. A large calibration field and a portable calibration frame were used as the tools for the camera calibration and for checking the accuracy of the measurement at different camera distances. The bundle adjustment concept was applied in the Australis software to perform the camera calibration and accuracy assessment. The results showed that a camera distance of 25 metres is the optimum object distance, as it gave the best accuracy in both the laboratory and the outdoor mapping. In conclusion, camera calibration at several camera distances should be applied to acquire better accuracy in mapping, and the best camera parameters should be selected for highly accurate mapping measurement.
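    One reason camera distance matters for mapping accuracy is the ground sample distance (GSD), which scales linearly with object distance. A sketch of the standard relation, using assumed sensor values (the pixel pitch and focal length below are illustrative, not taken from the study):

```python
def ground_sample_distance(pixel_pitch_mm, focal_mm, height_m):
    """GSD (metres per pixel) = pixel pitch * object distance / focal length."""
    return pixel_pitch_mm * height_m / focal_mm

# Assumed APS-C-like sensor: ~0.0048 mm pixel pitch, 16 mm lens
gsd_15 = ground_sample_distance(0.0048, 16.0, 15.0)  # at 15 m
gsd_25 = ground_sample_distance(0.0048, 16.0, 25.0)  # at 25 m
# Flying higher coarsens the GSD proportionally, so calibration parameters
# estimated at one distance need not be optimal at another.
```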

  9. How to Build Your Own Document Camera for around $100

    Science.gov (United States)

    Van Orden, Stephen

    2010-01-01

    Document cameras can have great utility in second language classrooms. However, entry-level consumer document cameras start at around $350. This article describes how the author built three document cameras and offers suggestions for how teachers can successfully build their own quality document camera using a webcam for around $100.

  10. 16 CFR 1025.45 - In camera materials.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false In camera materials. 1025.45 Section 1025.45... PROCEEDINGS Hearings § 1025.45 In camera materials. (a) Definition. In camera materials are documents... excluded from the public record. (b) In camera treatment of documents and testimony. The Presiding...

  11. Hidden cameras everything you need to know about covert recording, undercover cameras and secret filming

    CERN Document Server

    Plomin, Joe

    2016-01-01

    Providing authoritative information on the practicalities of using hidden cameras to expose abuse or wrongdoing, this book is vital reading for anyone who may use or encounter secret filming. It gives specific advice on using phones or covert cameras and unravels the complex legal and ethical issues that need to be considered.

  12. Mobile phone camera benchmarking: combination of camera speed and image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed; for example, ISO 15781 defines several measurements to evaluate various camera system delays. However, speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. Firstly, the most important image quality metrics are collected from standards and papers. Secondly, the speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are also identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are done via the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are drawn. The result of this work gives detailed benchmarking results of mobile phone camera systems on the market. The paper also defines a proposal for combined benchmarking metrics, which includes both quality and speed parameters.
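    One way such a combined benchmark could be structured is a weighted sum of normalized quality and speed scores. The weighting scheme below is my assumption for illustration only, not the metric the paper proposes:

```python
def combined_score(quality_scores, speed_scores, w_quality=0.6, w_speed=0.4):
    """Weighted sum of mean normalized quality and speed metrics (each 0..1).
    The weights are illustrative, not taken from the paper."""
    q = sum(quality_scores) / len(quality_scores)
    s = sum(speed_scores) / len(speed_scores)
    return w_quality * q + w_speed * s

# Hypothetical phone: good image quality, middling shot-to-shot speed
score = combined_score([0.9, 0.8, 0.85], [0.5, 0.6])
```

The design point is simply that a phone with excellent stills but long shutter lag should not outrank a slightly worse but much faster camera by quality metrics alone.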

  13. Modulated CMOS camera for fluorescence lifetime microscopy.

    Science.gov (United States)

    Chen, Hongtao; Holst, Gerhard; Gratton, Enrico

    2015-12-01

    Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in the construction of such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high-frequency modulated CMOS image sensor, QMFLIM2. Here we tested the camera and provide operational procedures to calibrate it and to improve the accuracy using corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and needed for every camera setting, e.g. modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed, and hence an additional calibration is needed for each pixel depending on its intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post-acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring a large frame and high-speed acquisition.
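    In the phasor approach mentioned above, each pixel's measured modulation m and phase φ map to phasor coordinates g = m·cos φ, s = m·sin φ, and for a single-exponential decay the phase lifetime is τ = tan φ / (2πf). A sketch with an assumed measurement (the numbers are illustrative, not from the paper):

```python
import math

def phasor(modulation, phase_rad):
    """Phasor coordinates (g, s) of a frequency-domain FLIM measurement."""
    return modulation * math.cos(phase_rad), modulation * math.sin(phase_rad)

def lifetime_from_phase(phase_rad, mod_freq_hz):
    """Single-exponential phase lifetime: tau = tan(phi) / (2 * pi * f)."""
    return math.tan(phase_rad) / (2 * math.pi * mod_freq_hz)

# Hypothetical pixel at 20 MHz modulation with a 45 degree phase shift
g, s = phasor(0.7, math.radians(45))
tau = lifetime_from_phase(math.radians(45), 20e6)  # roughly 8 ns
```

The per-pixel calibrations described in the abstract correct m and φ before this mapping, which is why they directly affect lifetime accuracy.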

  14. Design of Endoscopic Capsule With Multiple Cameras.

    Science.gov (United States)

    Gu, Yingke; Xie, Xiang; Li, Guolin; Sun, Tianjia; Wang, Dan; Yin, Zheng; Zhang, Pengfei; Wang, Zhihua

    2015-08-01

    In order to reduce the miss rate of wireless capsule endoscopy, in this paper we propose a new system: the endoscopic capsule with multiple cameras. A master-slave architecture, including an efficient bus architecture and a four-level clock management architecture, is applied for the Multiple Cameras Endoscopic Capsule (MCEC). To cover more area of the gastrointestinal tract wall with low power, multiple cameras with a smart image capture strategy, including movement-sensitive control and camera selection, are used in the MCEC. To reduce the data transfer bandwidth and power consumption and to prolong the MCEC's working life, a low-complexity image compressor with PSNR 40.7 dB and compression rate 86% is implemented. A chipset is designed and implemented for the MCEC, and a six-camera endoscopic capsule prototype is implemented using the chipset. With the smart image capture strategy, the coverage rate of the MCEC prototype can achieve 98% and its power consumption is only about 7.1 mW.
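    The two compressor figures quoted above (PSNR in dB, compression rate as a fraction of data removed) follow from standard definitions. A sketch of both; the byte counts and MSE value below are chosen for illustration, not taken from the paper:

```python
import math

def psnr(mse, max_val=255):
    """Peak signal-to-noise ratio in dB for a given mean squared error."""
    return 10 * math.log10(max_val ** 2 / mse)

def compression_rate(original_bytes, compressed_bytes):
    """Fraction of data removed by the compressor."""
    return 1 - compressed_bytes / original_bytes

# Hypothetical frame: an 86% compression rate leaves 14% of the original data
rate = compression_rate(100_000, 14_000)
quality = psnr(5.5)  # an MSE of ~5.5 corresponds to roughly 40.7 dB
```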

  15. Calibration of action cameras for photogrammetric purposes.

    Science.gov (United States)

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing mode of a novel action camera, the GoPro Hero 3 camera that can provide still images up to 12 Mp and video up 8 Mp resolution.

  16. Calibration of Action Cameras for Photogrammetric Purposes

    Directory of Open Access Journals (Sweden)

    Caterina Balletti

    2014-09-01

    Full Text Available The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing mode of a novel action camera, the GoPro Hero 3 camera that can provide still images up to 12 Mp and video up 8 Mp resolution.

  17. Designing Camera Networks by Convex Quadratic Programming

    KAUST Repository

    Ghanem, Bernard

    2015-05-04

    In this paper, we study the problem of automatic camera placement for computer graphics and computer vision applications. We extend the problem formulations of previous work by proposing a novel way to incorporate visibility constraints and camera-to-camera relationships. For example, the placement solution can be encouraged to have cameras that image the same important locations from different viewing directions, which can enable reconstruction and surveillance tasks to perform better. We show that the general camera placement problem can be formulated mathematically as a convex binary quadratic program (BQP) under linear constraints. Moreover, we propose an optimization strategy with a favorable trade-off between speed and solution quality. Our solution is almost as fast as a greedy treatment of the problem, but the quality is significantly higher, so much so that it is comparable to exact solutions that take orders of magnitude more computation time. Because it is computationally attractive, our method also allows users to explore the space of solutions for variations in input parameters. To evaluate its effectiveness, we show a range of 3D results on real-world floorplans (garage, hotel, mall, and airport).
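    The greedy treatment the paper compares itself against can be sketched as repeatedly choosing the candidate camera that covers the most not-yet-covered target locations. The toy floorplan data below is hypothetical; the paper's BQP method trades a little of this speed for much better solutions:

```python
def greedy_placement(candidate_views, budget):
    """Greedy baseline for camera placement: candidate_views maps each
    candidate camera to the set of target locations it can see."""
    covered, chosen = set(), []
    for _ in range(budget):
        best = max(candidate_views, key=lambda c: len(candidate_views[c] - covered))
        gain = candidate_views[best] - covered
        if not gain:
            break  # no candidate adds coverage; stop early
        chosen.append(best)
        covered |= gain
    return chosen, covered

# Hypothetical floorplan: cameras A-C each see a subset of targets 1-5
views = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}}
chosen, covered = greedy_placement(views, budget=2)
```

Note the greedy choice ignores camera-to-camera relationships (e.g. viewing the same location from different directions), which is exactly what the BQP formulation adds.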

  18. Camera Calibration with Radial Variance Component Estimation

    Science.gov (United States)

    Mélykuti, B.; Kruck, E. J.

    2014-11-01

    Camera calibration plays an increasingly important role in recent times. Besides real digital aerial survey cameras, the photogrammetric market is dominated by a large number of non-metric digital cameras mounted on UAVs or other low-weight flying platforms. The in-flight calibration of those systems plays a significant role in considerably enhancing the geometric accuracy of survey photos. Photo measurements are expected to be more precise in the center of images than along the edges or in the corners. With statistical methods, the accuracy of photo measurements has been analyzed as a function of the distance of points from the image center. This test provides a curve of measurement precision as a function of the photo radius. A large number of camera types have been tested with well-penetrated point measurements in image space. The results of the tests lead to the general conclusion that there is a functional connection between accuracy and radial distance, and yield a method to check and enhance the geometric capability of cameras with respect to these results.

  19. Calibration method for a central catadioptric-perspective camera system.

    Science.gov (United States)

    He, Bingwei; Chen, Zhipeng; Li, Youfu

    2012-11-01

    A central catadioptric-perspective camera system is widely used nowadays. A critical problem is that current calibration methods cannot determine the extrinsic parameters between the central catadioptric camera and a perspective camera effectively. We present a novel calibration method for a central catadioptric-perspective camera system, in which the central catadioptric camera has a hyperbolic mirror. Two cameras are used to capture images of one calibration pattern at different spatial positions. A virtual camera is constructed at the origin of the central catadioptric camera and faced toward the calibration pattern. The transformation between the virtual camera and the calibration pattern could be computed first and the extrinsic parameters between the central catadioptric camera and the calibration pattern could be obtained. Three-dimensional reconstruction results of the calibration pattern show a high accuracy and validate the feasibility of our method.
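    The extrinsic chain described above composes rigid transforms: once the pattern's pose in the virtual camera frame is known, applying the virtual-to-catadioptric transform gives the pattern's pose in the catadioptric frame. A sketch with hypothetical 4x4 homogeneous transforms (the translations are made-up values):

```python
def mat_mul(a, b):
    """4x4 matrix product (row-major nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# T_vp: pose of the calibration pattern in the virtual camera frame.
# T_cv: transform from virtual to catadioptric camera frame (here a pure
# translation along z, purely for illustration).
T_cv = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0.1], [0, 0, 0, 1]]
T_vp = [[1, 0, 0, 0.5], [0, 1, 0, 0], [0, 0, 1, 2.0], [0, 0, 0, 1]]
T_cp = mat_mul(T_cv, T_vp)  # pattern pose in the catadioptric frame
# For pure translations the offsets simply add: (0.5, 0, 2.1)
```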

  20. Speed cameras : how they work and what effect they have.

    OpenAIRE

    2011-01-01

    Much research has been carried out into the effects of speed cameras, and the research shows consistently positive results. International review studies report that speed cameras produce a reduction of approximately 20% in personal injury crashes on road sections where cameras are used. In the Netherlands, research also indicates positive effects on speed behaviour and road safety. Dutch drivers find speed cameras in fixed pole-mounted positions more acceptable than cameras in hidden police c...

  1. Development of a Mobile Phone-Based Weight Loss Lifestyle Intervention for Filipino Americans with Type 2 Diabetes: Protocol and Early Results From the PilAm Go4Health Randomized Controlled Trial

    Science.gov (United States)

    2016-01-01

    Background Filipino Americans are the second largest Asian subgroup in the United States, and were found to have the highest prevalence of obesity and type 2 diabetes (T2D) compared to all Asian subgroups and non-Hispanic whites. In addition to genetic factors, risk factors for Filipinos that contribute to this health disparity include high sedentary rates and high fat diets. However, Filipinos are seriously underrepresented in preventive health research. Research is needed to identify effective interventions to reduce Filipino diabetes risks, subsequent comorbidities, and premature death. Objective The overall goal of this project is to assess the feasibility and potential efficacy of the Filipino Americans Go4Health Weight Loss Program (PilAm Go4Health). This program is a culturally adapted weight loss lifestyle intervention, using digital technology for Filipinos with T2D, to reduce their risk for metabolic syndrome. Methods This study was a 3-month mobile phone-based pilot randomized controlled trial (RCT) weight loss intervention with a wait list active control, followed by a 3-month maintenance phase design for 45 overweight Filipinos with T2D. Participants were randomized to an intervention group (n=22) or active control group (n=23), and analyses of the results are underway. The primary outcome will be percent weight change of the participants, and secondary outcomes will include changes in waist circumference, fasting plasma glucose, glycated hemoglobin A1c, physical activity, fat intake, and sugar-sweetened beverage intake. Data analyses will include descriptive statistics to describe sample characteristics and a feasibility assessment based on recruitment, adherence, and retention. Chi-square, Fisher's exact tests, t-tests, and nonparametric rank tests will be used to assess characteristics of randomized groups. 
Primary analyses will use analysis of covariance and linear mixed models to compare primary and secondary outcomes between arms at 3 months.

  2. Ultra-fast framing camera tube

    Science.gov (United States)

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  3. Results of the prototype camera for FACT

    Energy Technology Data Exchange (ETDEWEB)

    Anderhub, H. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Backes, M. [Technische Universitaet Dortmund, D-44221 Dortmund (Germany); Biland, A.; Boller, A.; Braun, I. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Bretz, T. [Ecole Polytechnique Federale de Lausanne, CH-1015 Lausanne (Switzerland); Commichau, S.; Commichau, V. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Dorner, D. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); INTEGRAL Science Data Center, CH-1290 Versoix (Switzerland); Gendotti, A.; Grimm, O.; Gunten, H. von; Hildebrand, D.; Horisberger, U. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Koehne, J.-H. [Technische Universitaet Dortmund, D-44221 Dortmund (Germany); Kraehenbuehl, T., E-mail: thomas.kraehenbuehl@phys.ethz.c [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Kranich, D.; Lorenz, E.; Lustermann, W. [ETH Zurich, Institute for Particle Physics, CH-8093 Zurich (Switzerland); Mannheim, K. [Universitaet Wuerzburg, D-97074 Wuerzburg (Germany)

    2011-05-21

    The maximization of the photon detection efficiency (PDE) is a key issue in the development of cameras for Imaging Atmospheric Cherenkov Telescopes. Geiger-mode Avalanche Photodiodes (G-APDs) are a promising candidate to replace the commonly used photomultiplier tubes, offering a larger PDE and simpler handling. The FACT (First G-APD Cherenkov Telescope) project evaluates the feasibility of this change by building a camera based on 1440 G-APDs for an existing small telescope. As a first step towards a full camera, a prototype module using 144 G-APDs was successfully built and tested. The strong temperature dependence of G-APDs is compensated using a feedback system, which keeps the gain of the G-APDs constant to within 0.5%.
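
    The gain-stabilization idea above can be sketched in a few lines: a G-APD's gain tracks its overvoltage, so a feedback loop can shift the bias voltage with temperature to hold the gain steady. The function below is an illustrative sketch only; the voltage values, temperature coefficient `dv_dt`, and function name are hypothetical and not taken from the FACT design.

```python
def corrected_bias(v_nominal, t_nominal, t_measured, dv_dt):
    """Shift the bias voltage by dv_dt * dT so that the overvoltage,
    and hence the G-APD gain, stays constant as temperature drifts.

    All parameter values are hypothetical illustrations."""
    return v_nominal + dv_dt * (t_measured - t_nominal)
```

    In a real feedback system this correction would be recomputed continuously from a temperature sensor near the photodiode array.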

  4. Mechanical Design of the LSST Camera

    Energy Technology Data Exchange (ETDEWEB)

    Nordby, Martin; Bowden, Gordon; Foss, Mike; Guiffre, Gary; /SLAC; Ku, John; /Unlisted; Schindler, Rafe; /SLAC

    2008-06-13

    The LSST camera is a tightly packaged, hermetically-sealed system that is cantilevered into the main beam of the LSST telescope. It is comprised of three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 giga-pixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce impact on the image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.

  5. Generating Stereoscopic Television Images With One Camera

    Science.gov (United States)

    Coan, Paul P.

    1996-01-01

    Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possibly for medical purposes.
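
    The delayed-signal scheme described above amounts to pairing each frame with the frame captured a fixed number of steps earlier, once the camera has translated between the two eye positions. A minimal sketch follows; the function name, frame representation, and delay length are illustrative assumptions, not details of the original technique.

```python
from collections import deque

def paired_stereo_frames(frames, delay):
    """Pair each frame with the frame captured `delay` steps earlier,
    emulating the delayed-video-signal scheme: the earlier frame acts
    as the left-eye image, the current frame as the right-eye image."""
    buffer = deque(maxlen=delay)
    pairs = []
    for frame in frames:
        if len(buffer) == delay:
            pairs.append((buffer[0], frame))  # (left-eye, right-eye)
        buffer.append(frame)
    return pairs
```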

  6. HIGH SPEED KERR CELL FRAMING CAMERA

    Science.gov (United States)

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 x 10^-8 seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  7. Phase camera experiment for Advanced Virgo

    Energy Technology Data Exchange (ETDEWEB)

    Agatsuma, Kazuhiro, E-mail: agatsuma@nikhef.nl [National Institute for Subatomic Physics, Amsterdam (Netherlands); Beuzekom, Martin van; Schaaf, Laura van der [National Institute for Subatomic Physics, Amsterdam (Netherlands); Brand, Jo van den [National Institute for Subatomic Physics, Amsterdam (Netherlands); VU University, Amsterdam (Netherlands)

    2016-07-11

    We report on a study of the phase camera, a frequency-selective wave-front sensor for a laser beam. This sensor is used to monitor sidebands produced by phase modulation in a gravitational wave (GW) detector. In the operation of GW detectors, the laser modulation/demodulation method is used to measure mirror displacements and for position control. This plays a significant role because the quality of the controls affects the noise level of the GW detector. The phase camera can monitor each sideband separately, which is of great benefit for these delicate controls. Overcoming mirror aberrations will also be an essential part of Advanced Virgo (AdV), a GW detector close to Pisa. Low-frequency sidebands in particular can be strongly affected by aberrations in one of the interferometer cavities. The phase cameras allow such changes to be tracked, because the state of the sidebands gives information on mirror aberrations. A prototype of the phase camera has been developed and is currently being tested. The performance checks are almost complete and the installation of the optics at the AdV site has started. After installation and commissioning, the phase camera will be combined with a thermal compensation system consisting of CO2 lasers and compensation plates. In this paper, we focus on the prototype and show some limitations arising from the scanner performance. - Highlights: • The phase camera is being developed for a gravitational wave detector. • The scanner performance limits the operation speed and layout design of the system. • An operation range was found by measuring the frequency response of the scanner.

  8. Small Orbital Stereo Tracking Camera Technology Development

    Science.gov (United States)

    Bryan, Tom; MacLeod, Todd; Gagliano, Larry

    2016-01-01

    On-orbit small debris tracking and characterization is a technical gap in current National Space Situational Awareness, necessary to safeguard orbital assets and crew; small debris poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population need to be verified against ground RADAR tracking. Use of the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations of a secondary payload on a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate orbital debris (OD) monitoring in proximity to the International Space Station. It will demonstrate on-orbit (in situ) optical tracking of various sized objects versus ground RADAR tracking and small OD models. The cameras are based on flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras, and using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.
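
    The ranging role of the twin cameras rests on standard stereo triangulation: for parallel cameras, range equals focal length times baseline divided by image disparity. A minimal sketch of that relation follows; the parameter names and values are illustrative, and the actual flight algorithms (based on the Advanced Video Guidance Sensor) are not described in this abstract.

```python
def stereo_range(focal_px, baseline_m, disparity_px):
    """Classic parallel-camera stereo triangulation:
    range = focal length (pixels) * baseline (m) / disparity (pixels)."""
    return focal_px * baseline_m / disparity_px
```

    A longer baseline or longer focal length improves range resolution for distant debris, at the cost of a narrower common field of view.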

  9. Virtual camera synthesis for soccer game replays

    Directory of Open Access Journals (Sweden)

    S. Sagas

    2013-07-01

    In this paper, we present a set of tools developed during the creation of a platform that allows the automatic generation of virtual views in a live soccer game production. Observing the scene through a multi-camera system, a 3D approximation of the players is computed and used for the synthesis of virtual views. The system is suitable both for static scenes, to create bullet time effects, and for video applications, where the virtual camera moves as the game plays.

  10. Nitrogen camera: detection of antipersonnel mines

    Science.gov (United States)

    Trower, W. Peter; Saunders, Anna W.; Shvedunov, Vasiliy I.

    1997-01-01

    We describe a nuclear technique, the nitrogen camera, with which we have produced images of elemental nitrogen in concentrations and with surface densities typical of buried plastic anti-personnel mines. We have, under laboratory conditions, obtained images of nitrogen in amounts substantially less than in these small 200 g mines. We report our progress in creating the enabling technology to make the nitrogen camera a field deployable instrument: a mobile 70 MeV electron racetrack microtron and scintillator/semiconductor materials and the detectors based on them.

  11. Camera-enabled techniques for organic synthesis

    Directory of Open Access Journals (Sweden)

    Steven V. Ley

    2013-05-01

    A great deal of time is spent within synthetic chemistry laboratories on non-value-adding activities such as sample preparation and work-up operations, and labour intensive activities such as extended periods of continued data collection. Using digital cameras connected to computer vision algorithms, camera-enabled apparatus can perform some of these processes in an automated fashion, allowing skilled chemists to spend their time more productively. In this review we describe recent advances in this field of chemical synthesis and discuss how they will lead to advanced synthesis laboratories of the future.

  12. Camera-enabled techniques for organic synthesis

    Science.gov (United States)

    Ingham, Richard J; O’Brien, Matthew; Browne, Duncan L

    2013-01-01

    Summary A great deal of time is spent within synthetic chemistry laboratories on non-value-adding activities such as sample preparation and work-up operations, and labour intensive activities such as extended periods of continued data collection. Using digital cameras connected to computer vision algorithms, camera-enabled apparatus can perform some of these processes in an automated fashion, allowing skilled chemists to spend their time more productively. In this review we describe recent advances in this field of chemical synthesis and discuss how they will lead to advanced synthesis laboratories of the future. PMID:23766820

  13. Analysis of Brown camera distortion model

    Science.gov (United States)

    Nowakowski, Artur; Skarbek, Władysław

    2013-10-01

    Contemporary image acquisition devices introduce optical distortion into the image. This results in pixel displacement and therefore needs to be compensated for in many computer vision applications. The distortion is usually modeled by the Brown distortion model, whose parameters can be included in the camera calibration task. In this paper we describe the original model and its dependencies, and analyze the orthogonality with regard to radius of its decentering distortion component. We also report experiments with the camera calibration algorithm included in the OpenCV library; in particular, the stability of the distortion parameter estimation is evaluated.
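
    For reference, the Brown model combines a radial polynomial term with a decentering (tangential) term. The sketch below applies the commonly cited form of the model to normalized image coordinates; the coefficient names k1, k2, k3, p1, p2 follow the usual convention, and this is an illustration rather than the authors' code.

```python
def brown_distort(x, y, k1, k2, k3, p1, p2):
    """Apply Brown radial + decentering distortion to a normalized
    image point (x, y), returning the distorted coordinates."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd
```

    With all coefficients zero the mapping is the identity; calibration consists of estimating the coefficients so that this forward model explains the observed pixel displacements.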

  14. Vasomotor assessment by camera-based photoplethysmography

    Directory of Open Access Journals (Sweden)

    Trumpp Alexander

    2016-09-01

    Camera-based photoplethysmography (cbPPG) is a novel technique that allows the contactless acquisition of cardio-respiratory signals. Previous works on cbPPG most often focused on heart rate extraction. This contribution is directed at the assessment of vasomotor activity by means of cameras. In an experimental study, we show that vasodilation and vasoconstriction both lead to significant changes in cbPPG signals. Our findings underline the potential of cbPPG to monitor vasomotor functions in real-life applications.
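
    A cbPPG signal is typically obtained by averaging a color channel (most often green) over a skin region of interest in each frame; cardiac pulsation and vasomotor activity then appear as variations of that trace. A toy sketch under that assumption follows, with plain nested lists standing in for real video frames; the function name and data layout are hypothetical.

```python
def cbppg_signal(frames, roi):
    """frames: list of frames, each a list of rows of (r, g, b) tuples.
    roi: (top, left, height, width) region assumed to cover skin.
    Returns the mean green value per frame, i.e. the raw cbPPG trace."""
    top, left, h, w = roi
    signal = []
    for frame in frames:
        total = 0
        for row in frame[top:top + h]:
            for (_, g, _) in row[left:left + w]:
                total += g
        signal.append(total / (h * w))
    return signal
```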

  15. A multidetector scintillation camera with 254 channels

    DEFF Research Database (Denmark)

    Sveinsdottir, E; Larsen, B; Rommer, P

    1977-01-01

    A computer-based scintillation camera has been designed for both dynamic and static radionuclide studies. The detecting head has 254 independent sodium iodide crystals, each with a photomultiplier and amplifier. In dynamic measurements simultaneous events can be recorded, and 1 million total counts...... per second can be accommodated with less than 0.5% loss in any one channel. This corresponds to a calculated deadtime of 5 nsec. The multidetector camera is being used for 133Xe dynamic studies of regional cerebral blood flow in man and for 99mTc and 197 Hg static imaging of the brain....
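
    The quoted dead-time figure can be checked with the first-order relation that the count-loss fraction is approximately the count rate times the dead time: 10^6 counts per second with a 5 ns dead time gives the stated 0.5% loss. A one-line sketch of that arithmetic (the function name is illustrative):

```python
def deadtime_loss(rate_cps, deadtime_s):
    """First-order (non-paralyzable) count-loss fraction:
    loss ~= rate * dead time, valid when the product is small."""
    return rate_cps * deadtime_s
```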

  16. Digital Camera as Gloss Measurement Device

    Directory of Open Access Journals (Sweden)

    Mihálik A.

    2016-05-01

    Nowadays, digital cameras with both high resolution and high dynamic range (HDR) can be considered as multiple parallel sensors producing multiple measurements at once. In this paper we describe a technique for processing the captured HDR data and then fitting them to theoretical surface reflection models in the form of a bidirectional reflectance distribution function (BRDF). Finally, the tabular BRDF can be used to calculate the gloss reflection of the surface. We compare the glossiness captured by the digital camera with gloss measured with an industry device and conclude that the results fit well in our experiments.
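
    One crude way to reduce a tabulated BRDF to a single gloss number is to compare the specular peak against the off-peak (diffuse) level. The sketch below is a simplistic illustration of that idea, not the authors' method; the sample format and function name are assumptions.

```python
def gloss_ratio(brdf_samples):
    """brdf_samples: list of (angle_deg, reflectance) pairs measured
    around the mirror direction. A crude gloss indicator: the peak
    reflectance divided by the mean off-peak (diffuse) reflectance."""
    values = [v for _, v in brdf_samples]
    peak = max(values)
    off_peak = [v for v in values if v != peak]
    return peak / (sum(off_peak) / len(off_peak))
```

    A glossy surface concentrates reflectance near the mirror direction and yields a large ratio; a matte surface reflects nearly uniformly and yields a ratio close to one.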

  17. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are done against the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are made. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
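
    A single benchmarking score of the kind described can be illustrated as a weighted combination of normalized metrics. The sketch below assumes each metric has already been normalized to [0, 1] with higher meaning better; the metric names, weights, and function name are hypothetical and not the paper's actual formula.

```python
def benchmark_score(metrics, weights):
    """Combine normalized quality and speed metrics (each in [0, 1],
    higher = better) into one score via a weighted average."""
    total_w = sum(weights[name] for name in metrics)
    return sum(metrics[name] * weights[name] for name in metrics) / total_w
```

    For speed metrics such as shutter lag, the normalization step must invert the raw value so that faster still maps to a higher score.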

  18. Towards Adaptive Virtual Camera Control In Computer Games

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user...... model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, we cluster gaze and camera information to identify camera behaviour profiles and we employ...... machine learning to build predictive models of the virtual camera behaviour. The performance of the models on unseen data reveals accuracies above 70% for all the player behaviour types identified. The characteristics of the generated models, their limits and their use for creating adaptive automatic...

  19. Multimodal sensing-based camera applications

    Science.gov (United States)

    Bordallo López, Miguel; Hannuksela, Jari; Silvén, J. Olli; Vehviläinen, Markku

    2011-02-01

    The increased sensing and computing capabilities of mobile devices can provide for enhanced mobile user experience. Integrating the data from different sensors offers a way to improve application performance in camera-based applications. A key advantage of using cameras as an input modality is that it enables recognizing the context. Therefore, computer vision has been traditionally utilized in user interfaces to observe and automatically detect the user actions. The imaging applications can also make use of various sensors for improving the interactivity and the robustness of the system. In this context, two applications fusing the sensor data with the results obtained from video analysis have been implemented on a Nokia Nseries mobile device. The first solution is a real-time user interface that can be used for browsing large images. The solution enables the display to be controlled by the motion of the user's hand using the built-in sensors as complementary information. The second application is a real-time panorama builder that uses the device's accelerometers to improve the overall quality, providing also instructions during the capture. The experiments show that fusing the sensor data improves camera-based applications especially when the conditions are not optimal for approaches using camera data alone.

  20. Mapping large environments with an omnivideo camera

    NARCIS (Netherlands)

    Esteban, I.; Booij, O.; Zivkovic, Z.; Krose, B.

    2009-01-01

    We study the problem of mapping a large indoor environment using an omnivideo camera. Local features from omnivideo images and epipolar geometry are used to compute the relative pose between pairs of images. These poses are then used in an Extended Information Filter using a trajectory based representation.

  1. Parametrizable cameras for 3D computational steering

    NARCIS (Netherlands)

    Mulder, J.D.; Wijk, J.J. van

    1997-01-01

    We present a method for the definition of multiple views in 3D interfaces for computational steering. The method uses the concept of a point-based parametrizable camera object. This concept enables a user to create and configure multiple views on his custom 3D interface in an intuitive graphical manner.

  2. Increased Automation in Stereo Camera Calibration Techniques

    Directory of Open Access Journals (Sweden)

    Brandi House

    2006-08-01

    Robotic vision has become a very popular field in recent years due to the numerous promising applications it may enhance. However, errors within the cameras and in their perception of their environment can cause applications in robotics to fail. To help correct these internal and external imperfections, stereo camera calibrations are performed. There are currently many accurate methods of camera calibration available; however, most or all of them are time-consuming and labor-intensive. This research seeks to automate the most labor-intensive aspects of a popular calibration technique developed by Jean-Yves Bouguet. His process requires manual selection of the extreme corners of a checkerboard pattern. The modified process uses embedded LEDs in the checkerboard pattern to act as active fiducials. Images are captured of the checkerboard with the LEDs on and off in rapid succession. The difference of the two images automatically highlights the location of the four extreme corners, and these corner locations take the place of the manual selections. With this modification to the calibration routine, upwards of eighty mouse clicks are eliminated per stereo calibration. Preliminary test results indicate that accuracy is not substantially affected by the modified procedure. Improved automation of camera calibration procedures may finally penetrate the barriers to the use of calibration in practice.
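
    The LED-on/LED-off trick reduces corner finding to thresholding a difference image. A toy sketch of that step follows, with images represented as nested lists of brightness values; the threshold value and function name are illustrative, not taken from the published procedure.

```python
def led_positions(img_on, img_off, threshold):
    """Return (x, y) coordinates of pixels whose brightness differs
    strongly between the LED-on and LED-off captures; each such
    cluster marks an active fiducial (an embedded LED)."""
    hits = []
    for y, (row_on, row_off) in enumerate(zip(img_on, img_off)):
        for x, (a, b) in enumerate(zip(row_on, row_off)):
            if abs(a - b) > threshold:
                hits.append((x, y))
    return hits
```

    In practice each LED produces a small blob of hits, so a clustering or centroid step would follow to obtain one corner coordinate per fiducial.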

  3. The Legal Implications of Surveillance Cameras

    Science.gov (United States)

    Steketee, Amy M.

    2012-01-01

    The nature of school security has changed dramatically over the last decade. Schools employ various measures, from metal detectors to identification badges to drug testing, to promote the safety and security of staff and students. One of the increasingly prevalent measures is the use of security cameras. In fact, the U.S. Department of Education…

  4. Autofocus method for scanning remote sensing cameras.

    Science.gov (United States)

    Lv, Hengyi; Han, Chengshan; Xue, Xucheng; Hu, Changhong; Yao, Cheng

    2015-07-10

    Autofocus methods are conventionally based on capturing the same scene from a series of positions of the focal plane. As a result, it has been difficult to apply this technique to scanning remote sensing cameras, where the scene changes continuously. In order to realize autofocus in scanning remote sensing cameras, a novel autofocus method is investigated in this paper. Instead of introducing additional mechanisms or optics, the overlapped pixels of the adjacent CCD sensors on the focal plane are employed. Two images, corresponding to the same scene on the ground, can be captured at different times. Further, one step of focusing is done during the time interval, so that the two images are obtained at different focal plane positions. Subsequently, the direction of the next step of focusing is calculated from the two images. The analysis shows that the method operates without restrictions on the time consumption of the algorithm, and carries the general focus measures and algorithms of digital still cameras over to scanning remote sensing cameras. The experimental results show that the proposed method is applicable to the entire focus measure family; the error ratio is, on average, no more than 0.2% and drops to 0% with the reliability improvement, which is lower than that of prevalent approaches (12%). The proposed method is demonstrated to be effective and has potential in other scanning imaging applications.
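
    The core of the method is comparing a focus measure between the two overlap images taken at different focal-plane positions and stepping toward the sharper one. A minimal sketch with a gradient-energy focus measure follows; this is only one member of the focus measure family, and the function names and image format are illustrative.

```python
def focus_measure(img):
    """Gradient-energy sharpness measure on a 2D grayscale image
    (list of rows): sum of squared horizontal intensity differences."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in img for i in range(len(row) - 1))

def next_step_direction(img_before, img_after):
    """+1: keep moving the focal plane the same way (image got
    sharper); -1: reverse direction (image got blurrier)."""
    return 1 if focus_measure(img_after) > focus_measure(img_before) else -1
```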

  5. Lights, Camera, Read! Arizona Reading Program Manual.

    Science.gov (United States)

    Arizona State Dept. of Library, Archives and Public Records, Phoenix.

    This document is the manual for the Arizona Reading Program (ARP) 2003 entitled "Lights, Camera, Read!" This theme spotlights books that were made into movies, and allows readers to appreciate favorite novels and stories that have progressed to the movie screen. The manual consists of eight sections. The Introduction includes welcome letters from…

  6. Camera! Action! Collaborate with Digital Moviemaking

    Science.gov (United States)

    Swan, Kathleen Owings; Hofer, Mark; Levstik, Linda S.

    2007-01-01

    Broadly defined, digital moviemaking integrates a variety of media (images, sound, text, video, narration) to communicate with an audience. There is near-ubiquitous access to the necessary software (MovieMaker and iMovie are bundled free with their respective operating systems) and hardware (computers with Internet access, digital cameras, etc.).…

  7. Metasurface lens: Shrinking the camera size

    Science.gov (United States)

    Sun, Cheng

    2017-01-01

    A miniaturized camera has been developed by integrating a planar metasurface lens doublet with a CMOS image sensor. The metasurface lens doublet corrects the monochromatic aberration and thus delivers nearly diffraction-limited image quality over a wide field of view.

  8. Camera shutter is actuated by electric signal

    Science.gov (United States)

    Neff, J. E.

    1964-01-01

    Rotary solenoid energized by an electric signal opens a camera shutter, and when the solenoid is de-energized a spring closes it. By the use of a microswitch, the shutter may be opened and closed in one continuous, rapid operation when the solenoid is actuated.

  9. Digital Camera Control for Faster Inspection

    Science.gov (United States)

    Brown, Katharine; Siekierski, James D.; Mangieri, Mark L.; Dekome, Kent; Cobarruvias, John; Piplani, Perry J.; Busa, Joel

    2009-01-01

    Digital Camera Control Software (DCCS) is a computer program for controlling a boom and a boom-mounted camera used to inspect the external surface of a space shuttle in orbit around the Earth. Running in a laptop computer in the space-shuttle crew cabin, DCCS commands integrated displays and controls. By means of a simple one-button command, a crewmember can view low-resolution images to quickly spot problem areas and can then cause a rapid transition to high-resolution images. The crewmember can command that camera settings apply to a specific small area of interest within the field of view of the camera so as to maximize image quality within that area. DCCS also provides critical high-resolution images to a ground screening team, which analyzes the images to assess damage (if any); in so doing, DCCS enables the team to clear initially suspect areas more quickly than would otherwise be possible and further saves time by minimizing the probability of re-imaging of areas already inspected. On the basis of experience with a previous version (2.0) of the software, the present version (3.0) incorporates a number of advanced imaging features that optimize crewmember capability and efficiency.

  10. Video Analysis with a Web Camera

    Science.gov (United States)

    Wyrembeck, Edward P.

    2009-01-01

    Recent advances in technology have made video capture and analysis in the introductory physics lab even more affordable and accessible. The purchase of a relatively inexpensive web camera is all you need if you already have a newer computer and Vernier's Logger Pro 3 software. In addition to Logger Pro 3, other video analysis tools such as…

  11. Teaching Camera Calibration by a Constructivist Methodology

    Science.gov (United States)

    Samper, D.; Santolaria, J.; Pastor, J. J.; Aguilar, J. J.

    2010-01-01

    This article describes the Metrovisionlab simulation software and practical sessions designed to teach the most important machine vision camera calibration aspects in courses for senior undergraduate students. By following a constructivist methodology, having received introductory theoretical classes, students use the Metrovisionlab application to…

  12. Camera Systems Rapidly Scan Large Structures

    Science.gov (United States)

    2013-01-01

    Needing a method to quickly scan large structures like an aircraft wing, Langley Research Center developed the line scanning thermography (LST) system. LST works in tandem with a moving infrared camera to capture how a material responds to changes in temperature. Princeton Junction, New Jersey-based MISTRAS Group Inc. now licenses the technology and uses it in power stations and industrial plants.

  13. Measuring rainfall with low-cost cameras

    Science.gov (United States)

    Allamano, Paola; Cavagnero, Paolo; Croci, Alberto; Laio, Francesco

    2016-04-01

    In Allamano et al. (2015), we propose to retrieve quantitative measures of rainfall intensity by relying on the acquisition and analysis of images captured by professional cameras (the SmartRAIN technique in the following). SmartRAIN is based on the fundamentals of camera optics and exploits the intensity changes due to drop passages in a picture. The main steps of the method include: i) drop detection, ii) blur effect removal, iii) estimation of drop velocities, iv) drop positioning in the control volume, and v) rain rate estimation. The method has been applied to real rain events with errors of the order of ±20%. This work aims to bridge the gap between the need to acquire images via professional cameras and the possibility of exporting the technique to low-cost webcams. We apply the image processing algorithm to frames registered with low-cost cameras both in the lab (i.e., under controlled rain intensity) and in field conditions. The resulting images are characterized by lower resolutions and significant distortions with respect to professional camera pictures, and are acquired with a fixed aperture and a rolling shutter. All these hardware limitations exert relevant effects on the readability of the resulting images and may affect the quality of the rainfall estimate. We demonstrate that a proper knowledge of the image acquisition hardware allows one to fully explain the artefacts and distortions it causes, and that, by correcting these effects before applying the image processing algorithm, quantitative rain intensity measures are obtainable with good accuracy with low-cost modules as well.
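
    Step v), rain rate estimation, can be illustrated by converting the drops detected in one frame interval into an equivalent water depth per unit time. The sketch below assumes spherical drops of known diameter falling through a known observed area; the names and the simplified geometry are assumptions for illustration, not the SmartRAIN implementation.

```python
import math

def rain_rate_mm_per_h(drop_diameters_mm, control_area_m2, interval_s):
    """Convert drops detected in the control volume during one frame
    interval into a rain rate: total spherical drop volume, spread
    over the observed area, scaled to mm of water per hour."""
    volume_mm3 = sum(math.pi / 6.0 * d ** 3 for d in drop_diameters_mm)
    area_mm2 = control_area_m2 * 1e6   # 1 m^2 = 10^6 mm^2
    depth_mm = volume_mm3 / area_mm2
    return depth_mm * 3600.0 / interval_s
```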

  14. A novel fully integrated handheld gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Massari, R.; Ucci, A.; Campisi, C. [Biostructure and Bioimaging Institute (IBB), National Research Council of Italy (CNR), Rome (Italy); Scopinaro, F. [University of Rome “La Sapienza”, S. Andrea Hospital, Rome (Italy); Soluri, A., E-mail: alessandro.soluri@ibb.cnr.it [Biostructure and Bioimaging Institute (IBB), National Research Council of Italy (CNR), Rome (Italy)

    2016-10-01

    In this paper, we present an innovative, fully integrated handheld gamma camera, designed to combine in a single device the gamma-ray detector, the display, and the embedded computing system. Its low power consumption allows the prototype to be battery operated. To be useful in radioguided surgery, an intraoperative gamma camera must be very easy to handle, since it must be moved around to find a suitable view. We have therefore developed the first prototype of a fully integrated, compact and lightweight gamma camera for fast imaging of radiopharmaceuticals. The device can operate without cables across the sterile field, so it may easily be used in the operating theater for radioguided surgery. The proposed prototype consists of a Silicon Photomultiplier (SiPM) array coupled with a proprietary scintillation structure based on CsI(Tl) crystals. To read the SiPM output signals, we developed very low-power readout electronics and a dedicated analog-to-digital conversion system. One of the most critical aspects we faced in designing the prototype was the low power consumption, which is mandatory for a battery-operated device. We applied this detection device to the lymphoscintigraphy technique (sentinel lymph node mapping), comparing the results with those of a commercial gamma camera (Philips SKYLight). The results confirm a rapid response of the device and a spatial resolution adequate for scintigraphic imaging. This work confirms the feasibility of a small gamma camera with an integrated display. The device is designed for radioguided surgery and small-organ imaging, but it could easily be integrated into surgical navigation systems.

  15. NEW VERSATILE CAMERA CALIBRATION TECHNIQUE BASED ON LINEAR RECTIFICATION

    Institute of Scientific and Technical Information of China (English)

    Pan Feng; Wang Xuanyin

    2004-01-01

    A new versatile camera calibration technique for machine vision using off-the-shelf cameras is described. To cope with the large distortion of off-the-shelf cameras, a new camera distortion rectification technology based on line rectification is proposed. A full camera-distortion model is introduced, and a linear algorithm is provided to obtain its solution. After camera rectification, the intrinsic and extrinsic parameters are obtained from the relationship between the homography and the absolute conic. This technique requires neither a high-accuracy three-dimensional calibration block nor a complicated translation or rotation platform. Both simulations and experiments show that the method is effective and robust.
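    The homography/absolute-conic relationship is the core of Zhang-style plane-based calibration: each homography H = K [r1 r2 t] yields two linear constraints on B = K^-T K^-1, from which the intrinsics follow in closed form. A self-contained numpy sketch on synthetic, noise-free homographies (a generic illustration, not the authors' implementation; the line-rectification step is omitted):

```python
import numpy as np

def rodrigues(axis, theta):
    """Rotation matrix from axis-angle (Rodrigues' formula)."""
    k = np.asarray(axis, float)
    k = k / np.linalg.norm(k)
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def v_ij(H, i, j):
    """Constraint row such that v_ij . b = h_i^T B h_j (b = 6 params of B)."""
    h = H
    return np.array([h[0, i] * h[0, j],
                     h[0, i] * h[1, j] + h[1, i] * h[0, j],
                     h[1, i] * h[1, j],
                     h[2, i] * h[0, j] + h[0, i] * h[2, j],
                     h[2, i] * h[1, j] + h[1, i] * h[2, j],
                     h[2, i] * h[2, j]])

def intrinsics_from_homographies(Hs):
    """Recover K from >= 3 plane homographies (Zhang 2000, closed form)."""
    V = []
    for H in Hs:
        V.append(v_ij(H, 0, 1))                  # h1^T B h2 = 0
        V.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))  # h1^T B h1 = h2^T B h2
    _, _, Vt = np.linalg.svd(np.asarray(V))
    b = Vt[-1]
    if b[0] < 0:                                 # B is positive definite
        b = -b
    B11, B12, B22, B13, B23, B33 = b
    v0 = (B12 * B13 - B11 * B23) / (B11 * B22 - B12 ** 2)
    lam = B33 - (B13 ** 2 + v0 * (B12 * B13 - B11 * B23)) / B11
    alpha = np.sqrt(lam / B11)
    beta = np.sqrt(lam * B11 / (B11 * B22 - B12 ** 2))
    gamma = -B12 * alpha ** 2 * beta / lam
    u0 = gamma * v0 / beta - B13 * alpha ** 2 / lam
    return np.array([[alpha, gamma, u0], [0, beta, v0], [0, 0, 1]])

# Synthetic check: build homographies from a known K and recover it.
K = np.array([[800.0, 0.0, 320.0], [0.0, 820.0, 240.0], [0.0, 0.0, 1.0]])
Hs = []
for axis, theta, t in [((1, 0.2, 0), 0.4, (0.1, 0.0, 2.0)),
                       ((0, 1, 0.3), -0.5, (-0.2, 0.1, 2.5)),
                       ((0.5, 0.5, 1), 0.7, (0.0, -0.1, 3.0))]:
    R = rodrigues(axis, theta)
    Hs.append(K @ np.column_stack([R[:, 0], R[:, 1], t]))
K_est = intrinsics_from_homographies(Hs)
```

    With noise-free data the recovery is essentially exact; with real detections the same system is solved in a least-squares sense.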

  16. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    Science.gov (United States)

    Champey, Patrick R.; Kobayashi, Ken; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512x512 detector, dual-channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20 C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and greater than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-alpha wavelength. A vacuum ultra-violet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to run several tests intended to verify the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.
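    The QE measurement itself reduces to a ratio of rates: electrons detected by the CCD versus photons incident on a pixel, the latter inferred from the NIST-calibrated photodiode current. A schematic calculation (all numeric values below are hypothetical except the ~2.0 e-/DN gain quoted in the abstract):

```python
ELEM_CHARGE = 1.602176634e-19   # elementary charge [C]

def photon_flux(i_pd_amps, qe_pd, area_pd_cm2):
    """Photons per second per cm^2 at the detector plane, inferred
    from the calibrated photodiode current i = q * QE_pd * flux * A."""
    return i_pd_amps / (ELEM_CHARGE * qe_pd * area_pd_cm2)

def ccd_qe(signal_dn, gain_e_per_dn, t_exp_s, flux_ph_cm2_s, pixel_area_cm2):
    """Camera QE: electrons detected per photon incident on a pixel."""
    electrons = signal_dn * gain_e_per_dn
    photons = flux_ph_cm2_s * pixel_area_cm2 * t_exp_s
    return electrons / photons

# Illustrative numbers: 1 pA diode current, 20% diode QE, 1 cm^2 diode,
# a 13 um pixel (1.69e-6 cm^2), 50 DN signal in a 10 s exposure.
flux = photon_flux(1e-12, 0.2, 1.0)
qe = ccd_qe(50, 2.0, 10.0, flux, 1.69e-6)   # ~0.19 for these numbers
```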

  17. Registration of Sub-Sequence and Multi-Camera Reconstructions for Camera Motion Estimation

    Directory of Open Access Journals (Sweden)

    Michael Wand

    2010-08-01

    Full Text Available This paper presents different application scenarios for which the registration of sub-sequence reconstructions or multi-camera reconstructions is essential for successful camera motion estimation and 3D reconstruction from video. The registration is achieved by merging unconnected feature point tracks between the reconstructions. One application is drift removal for sequential camera motion estimation of long sequences. The state-of-the-art in drift removal is to apply a RANSAC approach to find unconnected feature point tracks. In this paper an alternative spectral algorithm for pairwise matching of unconnected feature point tracks is used. It is then shown that the algorithms can be combined and applied to novel scenarios where independent camera motion estimations must be registered into a common global coordinate system. In the first scenario multiple moving cameras, which capture the same scene simultaneously, are registered. A second new scenario occurs in situations where the tracking of feature points during sequential camera motion estimation fails completely, e.g., due to large occluding objects in the foreground, and the unconnected tracks of the independent reconstructions must be merged. In the third scenario image sequences of the same scene, which are captured under different illuminations, are registered. Several experiments with challenging real video sequences demonstrate that the presented techniques work in practice.
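    Registering two reconstructions into a common global coordinate system, once unconnected tracks have been matched, amounts to estimating a similarity transform from corresponding 3D points. A minimal numpy sketch of the closed-form (Umeyama-style) alignment, with synthetic inputs (illustrative; the paper's spectral track-matching step is not reproduced):

```python
import numpy as np

def umeyama(src, dst):
    """Closed-form least-squares similarity transform (s, R, t)
    minimizing ||dst - (s * R @ src + t)||, given corresponding 3D
    points, e.g. feature tracks shared by two reconstructions."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                      # guard against reflections
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / xs.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t
```

    In practice the correspondences come from the merged feature-point tracks, and the estimate would be wrapped in a robust loop to reject mismatches.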

  18. The AOTF-based NO2 camera

    Science.gov (United States)

    Dekemper, Emmanuel; Vanhamel, Jurgen; Van Opstal, Bert; Fussen, Didier

    2016-12-01

    The abundance of NO2 in the boundary layer relates to air quality and pollution source monitoring. Observing the spatiotemporal distribution of NO2 above well-delimited (flue gas stacks, volcanoes, ships) or more extended sources (cities) allows for applications such as monitoring emission fluxes or studying the plume's dynamic chemistry and transport. So far, most attempts to map the NO2 field from the ground have been made with visible-light scanning grating spectrometers. While benefiting from a high retrieval accuracy, they achieve only a relatively low spatiotemporal resolution that hampers the detection of dynamic features. We present a new type of passive remote sensing instrument aimed at measuring 2-D distributions of NO2 slant column densities (SCDs) with a high spatiotemporal resolution. The measurement principle has strong similarities to the popular filter-based SO2 camera, as it relies on spectral images taken at wavelengths where the molecule's absorption cross section differs. Contrary to the SO2 camera, the spectral selection is performed by an acousto-optical tunable filter (AOTF) capable of resolving the target molecule's spectral features. The NO2 camera's capabilities are demonstrated by imaging the NO2 abundance in the plume of a coal-fired power plant. During this experiment, the 2-D distribution of the NO2 SCD was retrieved with a temporal resolution of 3 min and a spatial sampling of 50 cm (over a 250 × 250 m^2 area). The detection limit was close to 5 × 10^16 molecules cm^-2, with a maximum detected SCD of 4 × 10^17 molecules cm^-2. Illustrating the added value of the NO2 camera measurements, the data reveal the dynamics of the NO-to-NO2 conversion in the early plume with an unprecedented resolution: from its release into the air, and for 100 m upwards, the observed NO2 plume concentration increased at a rate of 0.75-1.25 g s^-1. In joint campaigns with SO2 cameras, the NO2 camera could also help in removing the bias introduced by the
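    As with the SO2 camera, the retrieval rests on Beer-Lambert absorption evaluated at an absorbing ("on") and a weakly absorbing ("off") wavelength; the differential optical depth divided by the differential cross section gives the SCD. A schematic numpy version (cross-section values and images are synthetic placeholders, not values from the paper):

```python
import numpy as np

def no2_scd(img_on, img_off, bg_on, bg_off, sigma_on, sigma_off):
    """NO2 slant column density map from two spectral images.

    img_*  : plume images at the 'on' (absorbing) and 'off' bands
    bg_*   : clear-sky background images at the same bands
    sigma_*: NO2 absorption cross sections [cm^2 / molecule]"""
    tau_on = -np.log(img_on / bg_on)     # apparent optical depths
    tau_off = -np.log(img_off / bg_off)
    return (tau_on - tau_off) / (sigma_on - sigma_off)

# Synthetic round trip: a uniform plume of 4e17 molecules/cm^2.
sigma_on, sigma_off = 5e-19, 1e-19       # hypothetical cross sections
bg = np.full((4, 4), 1000.0)
scd_true = 4e17
img_on = bg * np.exp(-sigma_on * scd_true)
img_off = bg * np.exp(-sigma_off * scd_true)
scd = no2_scd(img_on, img_off, bg, bg, sigma_on, sigma_off)
```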

  19. National Guidelines for Digital Camera Systems Certification

    Science.gov (United States)

    Yaron, Yaron; Keinan, Eran; Benhamu, Moshe; Regev, Ronen; Zalmanzon, Garry

    2016-06-01

    Digital camera systems are a key component in the production of reliable, geometrically accurate, high-resolution geospatial products. These systems have replaced film imaging in photogrammetric data capturing. Today, we see a proliferation of imaging sensors collecting photographs in different ground resolutions, spectral bands, swath sizes, radiometric characteristics, accuracies and carried on different mobile platforms. In addition, these imaging sensors are combined with navigational tools (such as GPS and IMU), active sensors such as laser scanning and powerful processing tools to obtain high quality geospatial products. The quality (accuracy, completeness, consistency, etc.) of these geospatial products is based on the use of calibrated, high-quality digital camera systems. The new survey regulations of the state of Israel specify the quality requirements for each geospatial product including: maps at different scales and for different purposes, elevation models, orthophotographs, three-dimensional models at different levels of details (LOD) and more. In addition, the regulations require that digital camera systems used for mapping purposes should be certified using a rigorous mapping systems certification and validation process which is specified in the Director General Instructions. The Director General Instructions for digital camera systems certification specify a two-step process as follows: 1. Theoretical analysis of system components that includes: study of the accuracy of each component and an integrative error propagation evaluation, examination of the radiometric and spectral response curves for the imaging sensors, the calibration requirements, and the working procedures. 2. Empirical study of the digital mapping system that examines a typical project (product scale, flight height, number and configuration of ground control points and process). 
The study examines all aspects of the final product, including its accuracy, the product pixel size
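    The "integrative error propagation evaluation" of step 1 typically combines independent component errors in quadrature. A toy illustration (the component values below are hypothetical, not from the regulations):

```python
import math

def combined_error(*sigmas):
    """Root-sum-square combination of independent component errors,
    the usual first-order error-propagation rule."""
    return math.sqrt(sum(s * s for s in sigmas))

# e.g. GNSS position (5 cm), IMU orientation at ground scale (3 cm)
# and sensor calibration (2 cm) -- hypothetical values:
sigma_total = combined_error(0.05, 0.03, 0.02)   # ~0.062 m
```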

  20. Method for out-of-focus camera calibration.

    Science.gov (United States)

    Bell, Tyler; Xu, Jing; Zhang, Song

    2016-03-20

    State-of-the-art camera calibration methods assume that the camera is at least nearly in focus and thus fail if the camera is substantially defocused. This paper presents a method which enables the accurate calibration of an out-of-focus camera. Specifically, the proposed method uses a digital display (e.g., liquid crystal display monitor) to generate fringe patterns that encode feature points into the carrier phase; these feature points can be accurately recovered, even if the fringe patterns are substantially blurred (i.e., the camera is substantially defocused). Experiments demonstrated that the proposed method can accurately calibrate a camera regardless of the amount of defocusing: the focal length difference is approximately 0.2% when the camera is focused compared to when the camera is substantially defocused.
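    The carrier-phase encoding works because defocus blur attenuates the fringe amplitude but, to first order, leaves the phase untouched, so feature points encoded in the phase survive heavy blurring. A standard N-step phase-shifting recovery (a generic formulation, not necessarily the authors' exact one):

```python
import numpy as np

def wrapped_phase(images):
    """Recover the carrier phase from N phase-shifted fringe images
    I_k = A + B * cos(phi + 2*pi*k/N).  Defocus shrinks B but leaves
    phi intact, which is the property the calibration method exploits."""
    n = len(images)
    deltas = 2 * np.pi * np.arange(n) / n
    num = sum(I * np.sin(d) for I, d in zip(images, deltas))
    den = sum(I * np.cos(d) for I, d in zip(images, deltas))
    return np.arctan2(-num, den)
```

    Feature points are then located at known phase values of the recovered (unwrapped) carrier.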

  1. World's fastest and most sensitive astronomical camera

    Science.gov (United States)

    2009-06-01

    The next generation of instruments for ground-based telescopes took a leap forward with the development of a new ultra-fast camera that can take 1500 finely exposed images per second even when observing extremely faint objects. The first 240x240 pixel images with the world's fastest high precision faint light camera were obtained through a collaborative effort between ESO and three French laboratories from the French Centre National de la Recherche Scientifique/Institut National des Sciences de l'Univers (CNRS/INSU). Cameras such as this are key components of the next generation of adaptive optics instruments of Europe's ground-based astronomy flagship facility, the ESO Very Large Telescope (VLT). (ESO PR Photo 22a/09: the CCD220 detector; ESO PR Photo 22b/09: the OCam camera; ESO PR Video 22a/09: OCam images.) "The performance of this breakthrough camera is without an equivalent anywhere in the world. The camera will enable great leaps forward in many areas of the study of the Universe," says Norbert Hubin, head of the Adaptive Optics department at ESO. OCam will be part of the second-generation VLT instrument SPHERE. To be installed in 2011, SPHERE will take images of giant exoplanets orbiting nearby stars. A fast camera such as this is needed as an essential component for the modern adaptive optics instruments used on the largest ground-based telescopes. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights poets, but frustrates astronomers, since it blurs the finest details of the images. Adaptive optics techniques overcome this major drawback, so that ground-based telescopes can produce images that are as sharp as if taken from space. Adaptive optics is based on real-time corrections computed from images obtained by a special camera working at very high speeds. Nowadays, this means many hundreds of times each second. The new generation instruments require these

  2. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    Science.gov (United States)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide it into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera consists of two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated with a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth estimated from the light-field image and the metric object distance. These two methods are compared to a well-known curve-fitting approach. Both model-based methods show significant advantages compared to the curve-fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera.
These images can be synthesized totally focused, and thus finding stereo correspondences is enhanced.
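    The Kalman-like update of a probabilistic depth pixel reduces to inverse-variance weighting of two Gaussian estimates. A minimal sketch of that update (illustrative, not the authors' code):

```python
def fuse_depth(d1, var1, d2, var2):
    """Inverse-variance (Kalman-like) fusion of two virtual-depth
    estimates of the same pixel from different micro-images."""
    k = var1 / (var1 + var2)           # Kalman gain
    d = d1 + k * (d2 - d1)             # fused depth estimate
    var = (1 - k) * var1               # reduced uncertainty
    return d, var
```

    Applying this repeatedly over all micro-images observing a pixel yields the probabilistic depth map described above.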

  3. Situational Awareness from a Low-Cost Camera System

    Science.gov (United States)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view present low-bandwidth information to the host rather than the high-bandwidth bitmap data constantly generated by the cameras. This approach offers greater flexibility than conventional systems without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, because a single cable is shared for power and data, installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.

  4. Pile volume measurement by range imaging camera in indoor environment

    OpenAIRE

    C. Altuntas

    2014-01-01

    Range imaging (RIM) cameras are a recent technology for 3D location measurement. New study areas have emerged in measurement and data processing together with the RIM camera. It offers a low-cost and fast measurement technique compared to current measurement techniques. However, its measurement accuracy varies according to effects originating from the device and the environment. Direct sunlight affects the measurement accuracy of the camera. Thus, the RIM camera should be used for indoor ...

  5. Camera Augmented Mobile C-arm

    Science.gov (United States)

    Wang, Lejing; Weidert, Simon; Traub, Joerg; Heining, Sandro Michael; Riquarts, Christian; Euler, Ekkehard; Navab, Nassir

    The Camera Augmented Mobile C-arm (CamC) system that extends a regular mobile C-arm by a video camera provides an X-ray and video image overlay. Thanks to the mirror construction and one time calibration of the device, the acquired X-ray images are co-registered with the video images without any calibration or registration during the intervention. It is very important to quantify and qualify the system before its introduction into the OR. In this communication, we extended the previously performed overlay accuracy analysis of the CamC system by another clinically important parameter, the applied radiation dose for the patient. Since the mirror of the CamC system will absorb and scatter radiation, we introduce a method for estimating the correct applied dose by using an independent dose measurement device. The results show that the mirror absorbs and scatters 39% of X-ray radiation.

  6. First polarised light with the NIKA camera

    CERN Document Server

    Ritacco, A; Adane, A; Ade, P; André, P; Beelen, A; Belier, B; Benoît, A; Bideaud, A; Billot, N; Bourrion, O; Calvo, M; Catalano, A; Coiffard, G; Comis, B; D'Addabbo, A; Désert, F -X; Doyle, S; Goupy, J; Kramer, C; Leclercq, S; Macías-Pérez, J F; Martino, J; Mauskopf, P; Maury, A; Mayet, F; Monfardini, A; Pajot, F; Pascale, E; Perotto, L; Pisano, G; Ponthieu, N; Rebolo-Iglesias, M; Réveret, V; Rodriguez, L; Savini, G; Schuster, K; Sievers, A; Thum, C; Triqueneaux, S; Tucker, C; Zylka, R

    2015-01-01

    NIKA is a dual-band camera operating with 315 frequency-multiplexed LEKIDs cooled to 100 mK. NIKA is designed to observe the sky in intensity and polarisation at 150 and 260 GHz from the IRAM 30-m telescope. It is a test bench for the final NIKA2 camera. The incoming linear polarisation is modulated at four times the mechanical rotation frequency by a warm rotating multi-layer half-wave plate (HWP). The signal is then analysed by a wire grid and finally absorbed by the LEKIDs. The small time constant (< 1 ms) of the LEKID detectors, combined with the modulation of the HWP, enables the quasi-simultaneous measurement of the three Stokes parameters I, Q, U representing linear polarisation. In this paper we present results of recent observational campaigns demonstrating the good performance of NIKA in detecting polarisation at mm wavelengths.
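    With the HWP spinning at frequency f, the detected power is modulated as m(t) = (I + Q cos 4ωt + U sin 4ωt)/2, so the three Stokes parameters can be demodulated by least squares. A schematic numpy sketch of this idealized model (ignoring instrumental polarisation and detector time constants, which the real pipeline must handle):

```python
import numpy as np

def demodulate_stokes(t, m, f_hwp):
    """Least-squares demodulation of an HWP-modulated total-power
    signal m(t) = 0.5 * (I + Q*cos(4*w*t) + U*sin(4*w*t)) into the
    Stokes parameters I, Q, U (w = 2*pi*f_hwp)."""
    w = 2 * np.pi * f_hwp
    A = 0.5 * np.column_stack([np.ones_like(t),
                               np.cos(4 * w * t),
                               np.sin(4 * w * t)])
    (I, Q, U), *_ = np.linalg.lstsq(A, m, rcond=None)
    return I, Q, U
```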

  7. SLAM using camera and IMU sensors.

    Energy Technology Data Exchange (ETDEWEB)

    Rothganger, Fredrick H.; Muguira, Maritza M.

    2007-01-01

    Visual simultaneous localization and mapping (VSLAM) is the problem of using video input to reconstruct the 3D world and the path of the camera in an 'on-line' manner. Since the data is processed in real time, one does not have access to all of the data at once. (Contrast this with structure from motion (SFM), which is usually formulated as an 'off-line' process on all the data seen, and is not time dependent.) A VSLAM solution is useful for mobile robot navigation or as an assistant for humans exploring an unknown environment. This report documents the design and implementation of a VSLAM system that consists of a small inertial measurement unit (IMU) and camera. The approach is based on a modified Extended Kalman Filter. This research was performed under a Laboratory Directed Research and Development (LDRD) effort.
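    The modified extended Kalman filter at the core of such a system follows the usual predict/update cycle, with the IMU driving the process model and tracked image features providing the measurement. A generic EKF step as a sketch (the report's actual state parameterization and modifications are not reproduced here):

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One predict/update cycle of an extended Kalman filter.
    x, P : state mean and covariance
    u, z : control input (e.g. IMU reading) and measurement
           (e.g. tracked image-feature positions)
    f, F : process model and its Jacobian
    h, H : measurement model and its Jacobian
    Q, R : process and measurement noise covariances."""
    # Predict with the (nonlinear) process model, linearized via F.
    x_pred = f(x, u)
    Fk = F(x, u)
    P_pred = Fk @ P @ Fk.T + Q
    # Update with the measurement model, linearized via H.
    Hk = H(x_pred)
    y = z - h(x_pred)                        # innovation
    S = Hk @ P_pred @ Hk.T + R
    K = P_pred @ Hk.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new
```

    In VSLAM the state additionally carries landmark positions, so the Jacobians are sparse and the update is applied as features come into view.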

  8. The large APEX bolometer camera LABOCA

    Science.gov (United States)

    Siringo, Giorgio; Kreysa, Ernst; Kovacs, Attila; Schuller, Frederic; Weiß, Axel; Esch, Walter; Gemünd, Hans-Peter; Jethava, Nikhil; Lundershausen, Gundula; Güsten, Rolf; Menten, Karl M.; Beelen, Alexandre; Bertoldi, Frank; Beeman, Jeffrey W.; Haller, Eugene E.; Colin, Angel

    2008-07-01

    A new facility instrument, the Large APEX Bolometer Camera (LABOCA), developed by the Max-Planck-Institut für Radioastronomie (MPIfR, Bonn, Germany), has been commissioned in May 2007 for operation on the Atacama Pathfinder Experiment telescope (APEX), a 12 m submillimeter radio telescope located at 5100 m altitude on Llano de Chajnantor in northern Chile. For mapping, this 295-bolometer camera for the 870 micron atmospheric window operates in total power mode without wobbling the secondary mirror. One LABOCA beam is 19 arcsec FWHM and the field of view of the complete array covers 100 square arcmin. Combined with the high efficiency of APEX and the excellent atmospheric transmission at the site, LABOCA offers unprecedented capability in large scale mapping of submillimeter continuum emission. Details of design and operation are presented.

  9. First Polarised Light with the NIKA Camera

    Science.gov (United States)

    Ritacco, A.; Adam, R.; Adane, A.; Ade, P.; André, P.; Beelen, A.; Belier, B.; Benoît, A.; Bideaud, A.; Billot, N.; Bourrion, O.; Calvo, M.; Catalano, A.; Coiffard, G.; Comis, B.; D'Addabbo, A.; Désert, F.-X.; Doyle, S.; Goupy, J.; Kramer, C.; Leclercq, S.; Macías-Pérez, J. F.; Martino, J.; Mauskopf, P.; Maury, A.; Mayet, F.; Monfardini, A.; Pajot, F.; Pascale, E.; Perotto, L.; Pisano, G.; Ponthieu, N.; Rebolo-Iglesias, M.; Revéret, V.; Rodriguez, L.; Savini, G.; Schuster, K.; Sievers, A.; Thum, C.; Triqueneaux, S.; Tucker, C.; Zylka, R.

    2016-08-01

    NIKA is a dual-band camera operating with 315 frequency-multiplexed LEKIDs cooled to 100 mK. NIKA is designed to observe the sky in intensity and polarisation at 150 and 260 GHz from the IRAM 30-m telescope. It is a test bench for the final NIKA2 camera. The incoming linear polarisation is modulated at four times the mechanical rotation frequency by a warm rotating multi-layer half-wave plate (HWP). The signal is then analyzed by a wire grid and finally absorbed by the lumped-element kinetic inductance detectors (LEKIDs). The small time constant (< 1 ms) of the LEKIDs, combined with the modulation of the HWP, enables the quasi-simultaneous measurement of the three Stokes parameters I, Q, U, representing linear polarisation. In this paper, we present the results of recent observational campaigns demonstrating the good performance of NIKA in detecting polarisation at millimeter wavelengths.

  10. Cervical SPECT Camera for Parathyroid Imaging

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2012-08-31

    Primary hyperparathyroidism, characterized by one or more enlarged parathyroid glands, has become one of the most common endocrine diseases in the world, affecting about 1 per 1000 people in the United States. The standard treatment is a highly invasive exploratory neck surgery called parathyroidectomy. The surgery has a notable mortality rate because of the close proximity to vital structures. The move to minimally invasive parathyroidectomy is hampered by the lack of high-resolution pre-surgical imaging techniques that can accurately localize the parathyroid with respect to surrounding structures. We propose to develop a dedicated ultra-high-resolution (~1 mm), high-sensitivity (10× that of a conventional camera) cervical scintigraphic imaging device. It will be based on a multiple-pinhole-camera SPECT system comprising a novel solid-state CZT detector that offers the required performance. The overall system will be configured to fit around the neck and comfortably image a patient.

  11. AUTOMATIC THEFT SECURITY SYSTEM (SMART SURVEILLANCE CAMERA

    Directory of Open Access Journals (Sweden)

    Veena G.S

    2013-12-01

    Full Text Available The proposed work aims to create a smart camera application, with the intention of eliminating the need for a human presence to detect any unwanted sinister activities, such as theft in this case. Spread across the campus are certain valuable biometric identification systems at arbitrary locations. The application monitors these systems (hereafter referred to as “objects”) using our smart camera system based on the OpenCV platform. Using OpenCV Haar training, employing the Viola-Jones algorithm implementation in OpenCV, we teach the machine to identify the object under its environmental conditions. An added face-recognition feature is based on Principal Component Analysis (PCA) to generate eigenfaces, and the test images are verified against the eigenfaces using a distance-based algorithm, such as the Euclidean or Mahalanobis distance. If the object is misplaced, or an unauthorized user is in the immediate vicinity of the object, an alarm signal is raised.
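    The eigenface verification step projects each image onto a PCA basis and compares distances in that subspace. A toy numpy sketch with synthetic "faces" (illustrative only; the deployed system's OpenCV Haar detection stage is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: 6 training "faces" of 64 pixels, two identities A and B.
base_a, base_b = rng.normal(size=64), rng.normal(size=64)
train = np.array([base_a + 0.1 * rng.normal(size=64) for _ in range(3)] +
                 [base_b + 0.1 * rng.normal(size=64) for _ in range(3)])
mean_face = train.mean(axis=0)
# Eigenfaces: principal components of the centred training set.
_, S, Vt = np.linalg.svd(train - mean_face, full_matrices=False)
eigenfaces = Vt[:4]                        # keep 4 components
eigenvalues = S[:4] ** 2 / len(train)      # variance along each

def project(x):
    """Coordinates of a face in the eigenface subspace."""
    return (x - mean_face) @ eigenfaces.T

def euclidean(a, b):
    return np.linalg.norm(project(a) - project(b))

def mahalanobis(a, b):
    """Eigenspace distance with each component scaled by its variance."""
    d = project(a) - project(b)
    return np.sqrt(np.sum(d * d / eigenvalues))

# A probe of identity A should match A's gallery, not B's.
probe = base_a + 0.1 * rng.normal(size=64)
d_a = min(euclidean(probe, g) for g in train[:3])
d_b = min(euclidean(probe, g) for g in train[3:])
```

    Authorization then reduces to thresholding the smallest gallery distance for the claimed identity.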

  12. Advanced EVA Suit Camera System Development Project

    Science.gov (United States)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA Suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera that is currently being tested to be able to make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Design to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the options of the camera placement may be tested along with other future suit testing. There are multiple teams that work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary and these comparisons will be used as further progress is made for the overall suit design. This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was

  13. Continuous Graph Partitioning for Camera Network Surveillance

    Science.gov (United States)

    2012-07-23


  14. Task-based automatic camera placement

    OpenAIRE

    Kabak, Mustafa

    2010-01-01

    Ankara : The Department of Computer Engineering and the Institute of Engineering and Science of Bilkent Univ, 2010. Thesis (Master's) -- Bilkent University, 2010. Includes bibliographical references 56-57. Placing cameras to view an animation that takes place in a virtual 3D environment is a difficult task. Correctly placing an object in space and orienting it, and furthermore animating it to follow the action in the scene, is an activity that requires considerable expertise. ...

  15. Using a portable holographic camera in cosmetology

    Science.gov (United States)

    Bakanas, R.; Gudaitis, G. A.; Zacharovas, S. J.; Ratcliffe, D. B.; Hirsch, S.; Frey, S.; Thelen, A.; Ladrière, N.; Hering, P.

    2006-07-01

    The HSF-MINI portable holographic camera is used to record holograms of the human face. The recorded holograms are analyzed using a unique three-dimensional measurement system that provides topometric data of the face with resolution less than or equal to 0.5 mm. The main advantages of this method over other, more traditional methods (such as laser triangulation and phase-measurement triangulation) are discussed.

  16. VIRUS-P: camera design and performance

    Science.gov (United States)

    Tufts, Joseph R.; MacQueen, Phillip J.; Smith, Michael P.; Segura, Pedro R.; Hill, Gary J.; Edmonston, Robert D.

    2008-07-01

    We present the design and performance of the prototype Visible Integral-field Replicable Unit Spectrograph (VIRUS-P) camera. Commissioned in 2007, VIRUS-P is the prototype for 150+ identical fiber-fed integral field spectrographs for the Hobby-Eberly Telescope Dark Energy Experiment. With minimal complexity, the gimbal-mounted, double-Schmidt design achieves high on-sky throughput, image quality, contrast, and stability with novel optics, coatings, baffling, and minimization of obscuration. The system corrector, working for both the collimator and the f/1.33 vacuum Schmidt camera, serves as the cryostat window, while a 49 mm square aspheric field flattener sets the central obscuration. The mount, electronics, and cooling of the 2k × 2k Fairchild Imaging CCD3041-BI fit in the field-flattener footprint. Ultra-black knife-edge baffles at the corrector, spider, and adjustable mirror, and a detector mask, match the optical footprints at each location and help maximize the 94% contrast between 245 spectra. An optimally stiff and light symmetric four-vane stainless steel spider supports the CCD, which is thermally isolated with an equally stiff Ultem-1000 structure. The detector/field-flattener spacing is maintained to 1 μm for all camera orientations and is repeatably reassembled to 12 μm. Invar rods in tension hold the camera focus to ±4 μm over a −5 to 25 °C temperature range. Delivering a read noise of 4.2 e- RMS, an sCTE of 1 - 10^-5, and a pCTE of 1 - 10^-6 at 100 kpix/s, the McDonald V2 controller also helps to achieve a 38 hr hold time with 3 L of LN2 while maintaining the detector temperature setpoint to 150 μK (5σ RMS).

  17. Camera Development for the Cherenkov Telescope Array

    Science.gov (United States)

    Moncada, Roberto Jose

    2017-01-01

    With the Cherenkov Telescope Array (CTA), the very-high-energy gamma-ray universe, between 30 GeV and 300 TeV, will be probed at an unprecedented resolution, allowing deeper studies of known gamma-ray emitters and the possible discovery of new ones. This exciting project could also confirm the particle nature of dark matter by looking for the gamma rays produced by self-annihilating weakly interacting massive particles (WIMPs). The telescopes will use the imaging atmospheric Cherenkov technique (IACT) to record Cherenkov photons that are produced by the gamma-ray induced extensive air shower. One telescope design features dual-mirror Schwarzschild-Couder (SC) optics that allows the light to be finely focused on the high-resolution silicon photomultipliers of the camera modules starting from a 9.5-meter primary mirror. Each camera module will consist of a focal plane module and front-end electronics, and will have four TeV Array Readout with GSa/s Sampling and Event Trigger (TARGET) chips, giving them 64 parallel input channels. The TARGET chip has a self-trigger functionality for readout that can be used in higher logic across camera modules as well as across individual telescopes, which will each have 177 camera modules. There will be two sites, one in the northern and the other in the southern hemisphere, for full sky coverage, each spanning at least one square kilometer. A prototype SC telescope is currently under construction at the Fred Lawrence Whipple Observatory in Arizona. This work was supported by the National Science Foundation's REU program through NSF award AST-1560016.

  18. Rank-based camera spectral sensitivity estimation.

    Science.gov (United States)

    Finlayson, Graham; Darrodi, Maryam Mohammadzadeh; Mackiewicz, Michal

    2016-04-01

    In order to accurately predict a digital camera's response to spectral stimuli, the spectral sensitivity functions of its sensor need to be known. These functions can be determined by direct measurement in the lab (a difficult and lengthy procedure) or through simple statistical inference. Statistical inference methods are based on the observation that when a camera responds linearly to spectral stimuli, the device spectral sensitivities are linearly related to the camera RGB response values, and so can be found through regression. However, for rendered images, such as the JPEG images taken by a mobile phone, this assumption of linearity is violated. Even small departures from linearity can negatively impact the accuracy of the recovered spectral sensitivities when a regression method is used. In our work, we develop a novel camera spectral sensitivity estimation technique that can recover the linear device spectral sensitivities from linear images and the effective linear sensitivities from rendered images. According to our method, the rank order of a pair of responses imposes a constraint on the shape of the underlying spectral sensitivity curve of the sensor. Technically, each rank-pair splits the space where the underlying sensor might lie into two parts (a feasible region and an infeasible region). By intersecting the feasible regions from all the rank-pairs, we can find a feasible region of sensor space. Experiments demonstrate that using rank orders delivers estimation accuracy equal to the prior art. However, the rank-based method delivers a step change in estimation performance when the data are not linear and, for the first time, allows for the estimation of the effective sensitivities of devices that may not even have a "raw mode." Experiments validate our method.
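
    The rank-order constraint described in this abstract can be sketched in a few lines: each ranked response pair defines a half-space in sensor space, and a candidate sensitivity curve is feasible only if it satisfies every such constraint. A minimal NumPy illustration (all names and toy data are hypothetical, not the authors' implementation):

```python
import numpy as np

def feasible(sensor, spectra, responses):
    """Check a candidate sensitivity curve against the rank-order constraints.

    For each consecutive ranked pair (i, j) with responses[i] > responses[j],
    the sensor must satisfy sensor . (spectra[i] - spectra[j]) > 0; each such
    inequality is one half-space, and their intersection is the feasible region."""
    order = np.argsort(responses)[::-1]       # stimuli from largest to smallest response
    for i, j in zip(order[:-1], order[1:]):
        if np.dot(sensor, spectra[i] - spectra[j]) <= 0:
            return False
    return True

# Toy data: three spectral stimuli sampled at three wavelengths, and a sensor
# peaked in the middle band; responses follow the linear camera model.
spectra = np.array([[1.0, 3.0, 1.0],
                    [1.0, 2.0, 1.0],
                    [1.0, 1.0, 1.0]])
sensor = np.array([0.1, 1.0, 0.1])
responses = spectra @ sensor
```

    The true sensor always lies in the feasible region by construction, while a curve that cannot reproduce the rank order (e.g. one sensitive only to the first band here) is rejected.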

  19. Tracking Using Peer-to-Peer Smart Infrared Cameras

    Science.gov (United States)

    2008-11-05

    calibration and gesture recognition from multi-spectral camera setups, including infrared and visible cameras. Result: We developed new object models for...work on single-camera gesture recognition. We partnered with Yokogawa Electric to develop new architectures for embedded computer vision. We developed

  20. Speed cameras : how they work and what effect they have.

    NARCIS (Netherlands)

    2011-01-01

    Much research has been carried out into the effects of speed cameras, and the research shows consistently positive results. International review studies report that speed cameras produce a reduction of approximately 20% in personal injury crashes on road sections where cameras are used. In the Nethe

  1. CCD characterization for a range of color cameras

    NARCIS (Netherlands)

    Withagen, P.J.; Groen, F.C.A.; Schutte, K.

    2005-01-01

    CCD cameras are widely used for remote sensing and image processing applications. However, most cameras are produced to create nice images, not to do accurate measurements. Post processing operations such as gamma adjustment and automatic gain control are incorporated in the camera. When a (CCD) cam

  2. 16 CFR 3.45 - In camera orders.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false In camera orders. 3.45 Section 3.45... PRACTICE FOR ADJUDICATIVE PROCEEDINGS Hearings § 3.45 In camera orders. (a) Definition. Except as hereinafter provided, material made subject to an in camera order will be kept confidential and not placed...

  3. 21 CFR 892.1100 - Scintillation (gamma) camera.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Scintillation (gamma) camera. 892.1100 Section 892...) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1100 Scintillation (gamma) camera. (a) Identification. A scintillation (gamma) camera is a device intended to image the distribution of radionuclides...

  4. Weed detection by UAV with camera guided landing sequence

    DEFF Research Database (Denmark)

    Dyrmann, Mads

    the built-in GPS, allows for the UAV to be navigated within the field of view of a camera, which is mounted on the landing platform. The camera on the platform determines the UAV's position and orientation from markers printed on the UAV, whereby it can be guided in its landing. The UAV has a camera mounted...

  5. 39 CFR 3001.31a - In camera orders.

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false In camera orders. 3001.31a Section 3001.31a Postal... Applicability § 3001.31a In camera orders. (a) Definition. Except as hereinafter provided, documents and testimony made subject to in camera orders are not made a part of the public record, but are...

  6. 15 CFR 743.3 - Thermal imaging camera reporting.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Thermal imaging camera reporting. 743... REPORTING § 743.3 Thermal imaging camera reporting. (a) General requirement. Exports of thermal imaging cameras must be reported to BIS as provided in this section. (b) Transactions to be reported. Exports...

  7. 21 CFR 878.4160 - Surgical camera and accessories.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Surgical camera and accessories. 878.4160 Section... (CONTINUED) MEDICAL DEVICES GENERAL AND PLASTIC SURGERY DEVICES Surgical Devices § 878.4160 Surgical camera and accessories. (a) Identification. A surgical camera and accessories is a device intended to be...

  8. Single eye or camera with depth perception

    Science.gov (United States)

    Kornreich, Philipp; Farell, Bart

    2012-10-01

    An imager that can measure the distance from each pixel to the point on the object that is in focus at the pixel is described. This is accomplished by a short photoconducting lossy lightguide section at each pixel. The eye or camera lens selects the object point whose range is to be determined at the pixel. Light arriving at an image point through a convex lens adds constructively only if it comes from the object point that is in focus at this pixel. Light waves from all other object points cancel. Thus the lightguide at this pixel receives light from one object point only. This light signal has a phase component proportional to the range. The light intensity modes, and thus the photocurrent in the lightguides, shift in response to the phase of the incoming light. Contacts along the length of the lightguide collect the photocurrent signal containing the range information. Applications of this camera include autonomous vehicle navigation and robotic vision. An interesting application is as part of a crude teleportation system consisting of this camera and a three-dimensional printer at a remote location.

  9. Auto convergence for stereoscopic 3D cameras

    Science.gov (United States)

    Zhang, Buyue; Kothandaraman, Sreenivas; Batur, Aziz Umit

    2012-03-01

    Viewing comfort is an important concern for 3-D capable consumer electronics such as 3-D cameras and TVs. Consumer-generated content is typically viewed at a close distance, which makes the vergence-accommodation conflict particularly pronounced, causing discomfort and eye fatigue. In this paper, we present a Stereo Auto Convergence (SAC) algorithm for consumer 3-D cameras that reduces the vergence-accommodation conflict on the 3-D display by adjusting the depth of the scene automatically. Our algorithm processes stereo video in real time and shifts each stereo frame horizontally by an appropriate amount to converge on the chosen object in that frame. The algorithm starts by estimating disparities between the left and right image pairs using correlations of the vertical projections of the image data. The estimated disparities are then analyzed by the algorithm to select a point of convergence. The current and target disparities of the chosen convergence point determine how much horizontal shift is needed. A disparity safety check is then performed to determine whether or not the maximum and minimum disparity limits would be exceeded after auto convergence. If the limits would be exceeded, further adjustments are made to satisfy the safety limits. Finally, the desired convergence is achieved by shifting the left and the right frames accordingly. Our algorithm runs in real time at 30 fps on a TI OMAP4 processor. It was tested using an OMAP4 embedded prototype stereo 3-D camera and significantly improves 3-D viewing comfort.
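
    The disparity-estimation step this abstract describes, correlating the vertical projections of the left and right images, can be illustrated with a small sketch (a hedged approximation: the function name, normalization, and search range are assumptions, not the authors' implementation):

```python
import numpy as np

def disparity_from_projections(left, right, max_shift=32):
    """Estimate a global horizontal disparity between a stereo pair by
    correlating the vertical projections (column sums) of the two images.
    Returns the shift, in pixels, that best aligns `right` with `left`."""
    pl = left.sum(axis=0).astype(float)
    pr = right.sum(axis=0).astype(float)
    pl -= pl.mean()
    pr -= pr.mean()
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = pl[s:], pr[:pr.size - s]
        else:
            a, b = pl[:s], pr[-s:]
        score = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if score > best_score:
            best, best_score = s, score
    return best
```

    In a full SAC pipeline the estimated disparity would then be compared with the target disparity of the chosen convergence point to decide the horizontal frame shift.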

  10. Stereo cameras on the International Space Station

    Science.gov (United States)

    Sabbatini, Massimo; Visentin, Gianfranco; Collon, Max; Ranebo, Hans; Sunderland, David; Fortezza, Raimondo

    2007-02-01

    Three-dimensional media is a unique and efficient means to virtually visit or observe objects that cannot easily be reached otherwise, like the International Space Station. The advent of auto-stereoscopic displays and stereo projection systems is making stereo media available to audiences larger than the traditional scientific and design-engineering communities. It is foreseen that a major demand for 3D content will come from the entertainment area. Taking advantage of European astronaut Thomas Reiter's six-month stay on the International Space Station, the Erasmus Centre uploaded to the ISS a newly developed, fully digital stereo camera, the Erasmus Recording Binocular. Testing the camera and its human interfaces in weightlessness, as well as accurately mapping the interior of the ISS, are the main objectives of the experiment, which has just been completed at the time of writing. The intent of this paper is to share with the readers the design challenges tackled in the development and operation of the ERB camera and to highlight some of the future plans the Erasmus Centre team has in the pipeline.

  11. Infrared stereo camera for human machine interface

    Science.gov (United States)

    Edmondson, Richard; Vaden, Justin; Chenault, David

    2012-06-01

    Improved situational awareness results not only from improved performance of imaging hardware, but also when the operator and human factors are considered. Situational awareness for IR imaging systems frequently depends on the contrast available. A significant improvement in effective contrast for the operator can result when depth perception is added to the display of IR scenes. Depth perception through flat-panel 3D displays is now possible due to the number of 3D displays entering the consumer market. Such displays require appropriate and human-friendly stereo IR video input in order to be effective in the dynamic military environment. We report on a stereo IR camera that has been developed for integration onto an unmanned ground vehicle (UGV). The camera has an auto-convergence capability that significantly reduces ill effects due to image doubling, minimizes focus-convergence mismatch, and eliminates the need for the operator to manually adjust camera properties. Discussion of the size, weight, and power requirements as well as integration onto the robot platform will be given, along with a description of stand-alone operation.

  12. Imaging characteristics of photogrammetric camera systems

    Science.gov (United States)

    Welch, R.; Halliday, J.

    1973-01-01

    In view of the current interest in high-altitude and space photographic systems for photogrammetric mapping, the United States Geological Survey (U.S.G.S.) undertook a comprehensive research project designed to explore the practical aspects of applying the latest image quality evaluation techniques to the analysis of such systems. The project had two direct objectives: (1) to evaluate the imaging characteristics of current U.S.G.S. photogrammetric camera systems; and (2) to develop methodologies for predicting the imaging capabilities of photogrammetric camera systems, comparing conventional systems with new or different types of systems, and analyzing the image quality of photographs. Image quality was judged in terms of a number of evaluation factors including response functions, resolving power, and the detectability and measurability of small detail. The limiting capabilities of the U.S.G.S. 6-inch and 12-inch focal length camera systems were established by analyzing laboratory and aerial photographs in terms of these evaluation factors. In the process, the contributing effects of relevant parameters such as lens aberrations, lens aperture, shutter function, image motion, film type, and target contrast were assessed, yielding procedures for analyzing image quality and for predicting and comparing performance capabilities. © 1973.

  13. Remote hardware-reconfigurable robotic camera

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.

  14. Color camera pyrometry for high explosive detonations

    Science.gov (United States)

    Densmore, John; Biss, Matthew; Homan, Barrie; McNesby, Kevin

    2011-06-01

    Temperature measurements of high-explosive and combustion processes are difficult because of the speed and environment of the events. We have characterized and calibrated a digital high-speed color camera that may be used as an optical pyrometer to overcome these challenges. The camera provides both high temporal and spatial resolution. The color filter array of the sensor uses three color filters to measure the spectral distribution of the imaged light. A two-color ratio method is used to calculate a temperature from the color-filter-array raw image data under a gray-body assumption. If the raw image data are not available, temperatures may be calculated from processed images or movies, depending on proper analysis of the digital color imaging pipeline. We analyze three transformations within the pipeline (demosaicing, white balance, and gamma correction) to determine their effect on the calculated temperature. Using this technique with a Vision Research Phantom color camera, we have measured the temperature of exploded C-4 charges. The surface temperature of the resulting fireball rapidly increases after detonation and then decays to a constant value of approximately 1980 K. Processed images indicate that the temperature remains constant until the light intensity decreases below the background value.
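
    The two-color ratio method with a gray-body assumption can be illustrated under the Wien approximation; the channel wavelengths below are illustrative stand-ins for the camera's color-filter-array bands, not values from the paper:

```python
import math

C2 = 1.4388e-2  # second radiation constant (m*K)

def wien_intensity(lam, T):
    """Gray-body spectral intensity in the Wien approximation (arbitrary scale)."""
    return lam ** -5 * math.exp(-C2 / (lam * T))

def ratio_temperature(I1, I2, lam1, lam2):
    """Invert the two-color Wien ratio I1/I2 for temperature (gray body:
    emissivity cancels in the ratio)."""
    R = I1 / I2
    return C2 * (1.0 / lam1 - 1.0 / lam2) / (5.0 * math.log(lam2 / lam1) - math.log(R))

# Round trip near the ~1980 K fireball temperature quoted in the abstract,
# with illustrative red/green channel center wavelengths.
lam_r, lam_g = 620e-9, 540e-9
I_r, I_g = wien_intensity(lam_r, 1980.0), wien_intensity(lam_g, 1980.0)
T_rec = ratio_temperature(I_r, I_g, lam_r, lam_g)
```

    Because only the ratio of the two channels enters, a wavelength-independent emissivity (the gray-body assumption) drops out of the calculation.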

  15. Refocusing distance of a standard plenoptic camera.

    Science.gov (United States)

    Hahne, Christopher; Aggoun, Amar; Velisavljevic, Vladan; Fiebig, Susanne; Pesch, Matthias

    2016-09-19

    Recent developments in computational photography enabled variation of the optical focus of a plenoptic camera after image exposure, also known as refocusing. Existing ray models in the field simplify the camera's complexity for the purpose of image and depth map enhancement, but fail to satisfyingly predict the distance to which a photograph is refocused. By treating a pair of light rays as a system of linear functions, it will be shown in this paper that its solution yields an intersection indicating the distance to a refocused object plane. Experimental work is conducted with different lenses and focus settings while comparing distance estimates with a stack of refocused photographs for which a blur metric has been devised. Quantitative assessments over a 24 m distance range suggest that predictions deviate by less than 0.35% in comparison to an optical design software. The proposed refocusing estimator assists in predicting object distances as early as the prototyping stage of plenoptic cameras and will be an essential feature in applications demanding high precision in synthetic focus or where depth map recovery is done by analyzing a stack of refocused photographs.
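
    Treating a pair of light rays as a system of linear functions, as the abstract describes, the refocusing distance is simply the z at which the two lines intersect. A minimal sketch (the notation is assumed, not the authors'):

```python
def refocus_distance(m1, b1, m2, b2):
    """Intersect two light rays modelled as linear functions y = m*z + b
    (z along the optical axis, y the ray height). The returned z is the
    distance of the refocused object plane; y is the ray height there."""
    if m1 == m2:
        raise ValueError("parallel rays do not intersect")
    z = (b2 - b1) / (m1 - m2)
    return z, m1 * z + b1

# Two rays converging symmetrically meet on the axis at z = 2.
z, y = refocus_distance(-0.5, 1.0, 0.5, -1.0)
```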

  16. Theory and applications of smart cameras

    CERN Document Server

    2016-01-01

    This book presents an overview of smart camera systems, considering practical applications but also reviewing fundamental aspects of the underlying technology. It introduces in a tutorial style the principles of sensing and signal processing, and also describes topics such as wireless connection to the Internet of Things (IoT), which is expected to be the biggest market for smart cameras. It is an excellent guide to the fundamentals of smart camera technology, and the chapters complement each other well, as the authors have worked as a team under the auspices of the GFP (Global Frontier Project), the largest-scale funded research in Korea. This is the third of three books based on the Integrated Smart Sensors research project, which describe the development of innovative devices, circuits, and system-level enabling technologies. The aim of the project was to develop common platforms on which various devices and sensors can be loaded, and to create systems offering significant improvements in information processi...

  17. SPECT detectors: the Anger Camera and beyond

    Science.gov (United States)

    Peterson, Todd E.; Furenlid, Lars R.

    2011-09-01

    The development of radiation detectors capable of delivering spatial information about gamma-ray interactions was one of the key enabling technologies for nuclear medicine imaging and, eventually, single-photon emission computed tomography (SPECT). The continuous sodium iodide scintillator crystal coupled to an array of photomultiplier tubes, almost universally referred to as the Anger Camera after its inventor, has long been the dominant SPECT detector system. Nevertheless, many alternative materials and configurations have been investigated over the years. Technological advances as well as the emerging importance of specialized applications, such as cardiac and preclinical imaging, have spurred innovation such that alternatives to the Anger Camera are now part of commercial imaging systems. Increased computing power has made it practical to apply advanced signal processing and estimation schemes to make better use of the information contained in the detector signals. In this review we discuss the key performance properties of SPECT detectors and survey developments in both scintillator and semiconductor detectors and their readouts with an eye toward some of the practical issues at least in part responsible for the continuing prevalence of the Anger Camera in the clinic.

  18. Terrain mapping camera for Chandrayaan-1

    Indian Academy of Sciences (India)

    A S Kiran Kumar; A Roy Chowdhury

    2005-12-01

    The Terrain Mapping Camera (TMC) on India's first satellite for lunar exploration, Chandrayaan-1, is for generating high-resolution 3-dimensional maps of the Moon. With this instrument, a complete topographic map of the Moon with 5 m spatial resolution and 10-bit quantization will be available for scientific studies. The TMC will image within the panchromatic spectral band of 0.4 to 0.9 μm with a stereo view in the fore, nadir and aft directions of the spacecraft movement, and will have a B/H ratio of 1. The swath coverage will be 20 km. The camera is configured for imaging in the push broom-mode with three linear detectors in the image plane. The camera will have four gain settings to cover the varying illumination conditions of the Moon. Additionally, a provision of imaging with reduced resolution, for improving the Signal-to-Noise Ratio (SNR) in polar regions, which have poor illumination conditions throughout, has been made. An SNR of better than 100 is expected in the ±60° latitude region for mature mare soil, which is one of the darkest regions on the lunar surface. This paper presents a brief description of the TMC instrument.

  19. Observed inter-camera variability of clinically relevant performance characteristics for Siemens Symbia gamma cameras.

    Science.gov (United States)

    Kappadath, S Cheenu; Erwin, William D; Wendt, Richard E

    2006-11-28

    We conducted an evaluation of the intercamera (i.e., between cameras) variability in clinically relevant performance characteristics for Symbia gamma cameras (Siemens Medical Solutions, Malvern, PA) based on measurements made using nine separate systems. The significance of the observed intercamera variability was determined by comparing it to the intracamera (i.e., within a single camera) variability. Measurements of performance characteristics were based on the standards of the National Electrical Manufacturers Association and reports 6, 9, 22, and 52 from the American Association of Physicists in Medicine. All measurements were performed using 99mTc (except 57Co, used for extrinsic resolution) and low-energy, high-resolution collimation. Of the nine cameras, four have crystals 3/8 in. thick and five have crystals 5/8 in. thick. We evaluated intrinsic energy resolution, intrinsic and extrinsic spatial resolution, intrinsic integral and differential flood uniformity over the useful field-of-view, count rate at 20% count loss, planar sensitivity, single-photon emission computed tomography (SPECT) resolution, and SPECT integral uniformity. The intracamera variability was estimated by repeated measurements of the performance characteristics on a single system. The significance of the observed intercamera variability was evaluated using the two-tailed F distribution. The planar sensitivity of the gamma cameras tested was found to be variable at the 99.8% confidence level for both the 3/8-in. and 5/8-in. crystal systems. The integral uniformity and energy resolution were found to be variable only for the 5/8-in. crystal systems, at the 98% and 90% confidence levels, respectively. All other performance characteristics tested exhibited no significant variability between camera systems. The measured variability reported here could perhaps be used to define nominal performance values of Symbia gamma cameras for planar and SPECT imaging.
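
    The significance test this abstract describes, comparing inter-camera to intra-camera variability with a two-tailed F distribution, can be sketched as follows, assuming SciPy is available (a generic variance-ratio test, not the authors' exact procedure):

```python
import numpy as np
from scipy.stats import f as f_dist

def two_tailed_f_test(sample_a, sample_b):
    """Two-tailed F test for equality of two variances.

    Returns the variance ratio F = s_a^2 / s_b^2 and a two-tailed p-value;
    a small p-value means the two samples' variabilities differ significantly."""
    a = np.asarray(sample_a, dtype=float)
    b = np.asarray(sample_b, dtype=float)
    F = a.var(ddof=1) / b.var(ddof=1)
    dfn, dfd = a.size - 1, b.size - 1
    tail = f_dist.sf(F, dfn, dfd) if F > 1.0 else f_dist.cdf(F, dfn, dfd)
    return F, min(1.0, 2.0 * tail)
```

    In the study's setting, `sample_a` would hold a performance characteristic measured across the nine cameras and `sample_b` the repeated measurements on a single camera.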

  20. Disaster Response for Effective Mapping and Wayfinding

    NARCIS (Netherlands)

    Gunawan L.T.

    2013-01-01

    The research focuses on guiding the affected population towards a safe location in a disaster area by utilizing their self-help capacity with prevalent mobile technology. In contrast to the traditional centralized information management systems for disaster response, this research proposes a decentralized…

  1. Evaluation of a scientific CMOS camera for astronomical observations

    Institute of Scientific and Technical Information of China (English)

    Peng Qiu; Yong-Na Mao; Xiao-Meng Lu; E Xiang; Xiao-Jun Jiang

    2013-01-01

    We evaluate the performance of the first-generation scientific CMOS (sCMOS) camera used for astronomical observations. The sCMOS camera was attached to a 25 cm telescope at Xinglong Observatory, in order to estimate its photometric capabilities. We further compared the capabilities of the sCMOS camera with those of full-frame and electron-multiplying CCD cameras in laboratory tests and observations. The results indicate the sCMOS camera is capable of performing photometry of bright sources, especially when high spatial resolution or temporal resolution is desired.

  2. High-dimensional camera shake removal with given depth map.

    Science.gov (United States)

    Yue, Tao; Suo, Jinli; Dai, Qionghai

    2014-06-01

    Camera motion blur is drastically nonuniform for large depth-range scenes; the nonuniformity caused by camera translation is depth dependent, but that caused by camera rotation is not. To restore the blurry images of large-depth-range scenes deteriorated by arbitrary camera motion, we build an image blur model considering the 6 degrees of freedom (DoF) of camera motion with a given scene depth map. To make this 6D depth-aware model tractable, we propose a novel parametrization strategy to reduce the number of variables and an effective method to estimate high-dimensional camera motion as well. The number of variables is reduced by a temporal sampling motion function, which describes the 6-DoF camera motion by sampling the camera trajectory uniformly in the time domain. To effectively estimate the high-dimensional camera motion parameters, we construct a probabilistic motion density function (PMDF) to describe the probability distribution of camera poses during exposure, and apply it as a unified constraint to guide the convergence of the iterative deblurring algorithm. Specifically, the PMDF is computed through a back projection from 2D local blur kernels to the 6D camera motion parameter space and robust voting. We conduct a series of experiments on both synthetic and real captured data, and validate that our method achieves better performance than existing uniform and nonuniform methods on large-depth-range scenes.
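
    The temporal sampling idea, describing the 6-DoF camera motion by pose samples taken uniformly in time, can be sketched as a simple linear interpolation of pose vectors (a hedged illustration; the paper's actual parametrization may differ):

```python
import numpy as np

def sampled_motion(pose_samples, t):
    """Temporal sampling motion function: the 6-DoF camera pose during
    exposure is stored as K samples taken uniformly in time, and the pose
    at normalized time t in [0, 1] is recovered by linear interpolation.
    Each pose is a 6-vector (3 rotations, 3 translations)."""
    K = len(pose_samples)
    x = t * (K - 1)            # continuous sample index
    i = min(int(x), K - 2)     # left sample (clamped at the end of exposure)
    w = x - i                  # interpolation weight
    return (1.0 - w) * pose_samples[i] + w * pose_samples[i + 1]
```

    The optimization then only has to recover the K sampled poses rather than a dense, per-instant 6-DoF trajectory.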

  3. Presence capture cameras - a new challenge to the image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to market, and a new era of visual entertainment is starting to take shape. Since true presence capturing is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary considerably. Presence capture cameras still face the same quality issues as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system that can record 3D audio-visual reality as it is has to have several camera modules, several microphones, and, in particular, technology that can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution, and dynamic range form the basis of virtual reality stream quality. However, the co-operation of several cameras adds a new dimension to these quality factors, and new quality features can be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how should the stitching be validated? The work describes the quality factors that remain valid for presence capture cameras and assesses their importance. Moreover, new challenges of presence capture cameras are investigated from the image and video quality point of view. The work also considers how well current measurement methods can be applied to presence capture cameras.

  4. Qualification Tests of Micro-camera Modules for Space Applications

    Science.gov (United States)

    Kimura, Shinichi; Miyasaka, Akira

    Visual capability is very important for space-based activities, for which small, low-cost space cameras are desired. Although cameras for terrestrial applications are continually being improved, little progress has been made on cameras used in space, which must be extremely robust to withstand harsh environments. This study focuses on commercial off-the-shelf (COTS) CMOS digital cameras because they are very small and are based on an established mass-market technology. Radiation and ultrahigh-vacuum tests were conducted on a small COTS camera that weighs less than 100 mg (including optics). This paper presents the results of the qualification tests for COTS cameras and for a small, low-cost COTS-based space camera.

  5. The GCT camera for the Cherenkov Telescope Array

    CERN Document Server

    Brown, Anthony M; Allan, D; Amans, J P; Armstrong, T P; Balzer, A; Berge, D; Boisson, C; Bousquet, J -J; Bryan, M; Buchholtz, G; Chadwick, P M; Costantini, H; Cotter, G; Daniel, M K; De Franco, A; De Frondat, F; Dournaux, J -L; Dumas, D; Fasola, G; Funk, S; Gironnet, J; Graham, J A; Greenshaw, T; Hervet, O; Hidaka, N; Hinton, J A; Huet, J -M; Jegouzo, I; Jogler, T; Kraus, M; Lapington, J S; Laporte, P; Lefaucheur, J; Markoff, S; Melse, T; Mohrmann, L; Molyneux, P; Nolan, S J; Okumura, A; Osborne, J P; Parsons, R D; Rosen, S; Ross, D; Rowell, G; Sato, Y; Sayede, F; Schmoll, J; Schoorlemmer, H; Servillat, M; Sol, H; Stamatescu, V; Stephan, M; Stuik, R; Sykes, J; Tajima, H; Thornhill, J; Tibaldo, L; Trichard, C; Vink, J; Watson, J J; White, R; Yamane, N; Zech, A; Zink, A; Zorn, J

    2016-01-01

    The Gamma-ray Cherenkov Telescope (GCT) is proposed for the Small-Sized Telescope component of the Cherenkov Telescope Array (CTA). GCT's dual-mirror Schwarzschild-Couder (SC) optical system allows the use of a compact camera with small form-factor photosensors. The GCT camera is ~0.4 m in diameter and has 2048 pixels; each pixel has a ~0.2 degree angular size, resulting in a wide field-of-view. The GCT camera is designed for high performance at low cost, housing 32 front-end electronics modules that provide full waveform information for all of the camera's 2048 pixels. The first GCT camera prototype, CHEC-M, was commissioned during 2015, culminating in the first Cherenkov images recorded by an SC telescope and the first light of a CTA prototype. In this contribution we give a detailed description of the GCT camera and present preliminary results from CHEC-M's commissioning.

  6. Simple method for calibrating omnidirectional stereo with multiple cameras

    Science.gov (United States)

    Ha, Jong-Eun; Choi, I.-Sak

    2011-04-01

    Cameras can give useful information for the autonomous navigation of a mobile robot. Typically, one or two cameras are used for this task. Recently, omnidirectional stereo vision systems that can cover the whole surrounding environment of a mobile robot have been adopted. Such systems usually rely on a mirror, which cannot offer uniform spatial resolution. In this paper, we deal with an omnidirectional stereo system that consists of eight cameras, in which each vertical pair of cameras constitutes one stereo system. Camera calibration is the first step necessary to obtain 3D information. Calibration using a planar pattern requires many images acquired under different poses, so it is tedious to calibrate all eight cameras. In this paper, we present a simple calibration procedure using a cubic calibration structure that surrounds the omnidirectional stereo system, allowing all the cameras to be calibrated in just one shot.

  7. Determining Vision Graphs for Distributed Camera Networks Using Feature Digests

    Directory of Open Access Journals (Sweden)

    Cheng Zhaolin

    2007-01-01

    Full Text Available We propose a decentralized method for obtaining the vision graph for a distributed, ad-hoc camera network, in which each edge of the graph represents two cameras that image a sufficiently large part of the same environment. Each camera encodes a spatially well-distributed set of distinctive, approximately viewpoint-invariant feature points into a fixed-length "feature digest" that is broadcast throughout the network. Each receiver camera robustly matches its own features with the decompressed digest and decides whether sufficient evidence exists to form a vision graph edge. We also show how a camera calibration algorithm that passes messages only along vision graph edges can recover accurate 3D structure and camera positions in a distributed manner. We analyze the performance of different message formation schemes, and show that high detection rates can be achieved while maintaining low false alarm rates using a simulated 60-node outdoor camera network.
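The edge-decision step can be illustrated with a minimal sketch. The brute-force ratio-test matcher and the `min_matches` threshold below are illustrative assumptions, not the paper's actual digest compression or decision rule:

```python
import numpy as np

def match_features(digest_a, digest_b, ratio=0.8):
    """Count descriptor matches between two cameras' feature sets using a
    Lowe-style nearest/second-nearest ratio test (a simplification of the
    paper's digest-matching step)."""
    matches = 0
    for d in digest_a:
        dists = np.linalg.norm(digest_b - d, axis=1)
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches += 1
    return matches

def has_vision_graph_edge(digest_a, digest_b, min_matches=8):
    """Declare a vision graph edge when enough matches survive the ratio test."""
    return match_features(digest_a, digest_b) >= min_matches
```

Two cameras whose digests describe the same scene points accumulate many ratio-test matches and receive an edge; unrelated digests do not.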

  8. Waterproof camera case for intraoperative photographs.

    Science.gov (United States)

    Raigosa, Mauricio; Benito-Ruiz, Jesús; Fontdevila, Joan; Ballesteros, José R

    2008-03-01

    Accurate photographic documentation has become essential in reconstructive and cosmetic surgery for both clinical and scientific purposes. Intraoperative photographs are important not only for record purposes, but also for teaching, publications, and presentations. Communication using images proves to be the superior way to persuade audiences. This article presents a simple and easy method for taking intraoperative photographs that uses a presterilized waterproof camera case. This method allows the user to take very good quality pictures with the photographic angle matching the surgeon's view, minimal interruption of the operative procedure, and minimal risk of contaminating the operative field.

  9. Thermal imaging cameras characteristics and performance

    CERN Document Server

    Williams, Thomas

    2009-01-01

    The ability to see through smoke and mist and the ability to use the variances in temperature to differentiate between targets and their backgrounds are invaluable in military applications and have become major motivators for the further development of thermal imagers. As the potential of thermal imaging is more clearly understood and the cost decreases, the number of industrial and civil applications being exploited is growing quickly. In order to evaluate the suitability of particular thermal imaging cameras for particular applications, it is important to have the means to specify and measure their performance.

  10. Online camera-gyroscope autocalibration for cell phones.

    Science.gov (United States)

    Jia, Chao; Evans, Brian L

    2014-12-01

    The gyroscope plays a key role in estimating 3D camera rotation for various vision applications on cell phones, including video stabilization and feature tracking. Successful fusion of gyroscope and camera data requires that the camera, the gyroscope, and their relative pose be calibrated. In addition, the timestamps of gyroscope readings and video frames are usually not well synchronized. Previous work performed camera-gyroscope calibration and synchronization offline, after the entire video sequence had been captured and with restrictions on the camera motion, which is unnecessarily restrictive for everyday users running apps that directly use the gyroscope. In this paper, we propose an online method that estimates all the necessary parameters while a user is capturing video. Our contributions are: 1) simultaneous online camera self-calibration and camera-gyroscope calibration based on an implicit extended Kalman filter and 2) generalization of the multiple-view coplanarity constraint on camera rotation in a rolling shutter camera model for cell phones. The proposed method is able to estimate the needed calibration and synchronization parameters online with all kinds of camera motion and can be embedded in gyro-aided applications such as video stabilization and feature tracking. Both Monte Carlo simulation and cell phone experiments show that the proposed online calibration and synchronization method converges quickly to the ground-truth values.
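The online estimation idea can be illustrated with a toy sketch: a scalar Kalman filter tracking a single constant parameter, here the camera-gyroscope timestamp offset, from noisy per-frame measurements. This stands in for, and is far simpler than, the paper's implicit extended Kalman filter; the variable names and noise values are illustrative assumptions:

```python
def estimate_time_offset(measurements, meas_var=1e-4, init_var=1.0):
    """Scalar Kalman filter for a constant state (no process noise):
    each noisy measurement of the camera-gyroscope timestamp offset
    refines the running estimate and shrinks its variance."""
    x, p = 0.0, init_var              # state estimate and its variance
    for z in measurements:
        k = p / (p + meas_var)        # Kalman gain
        x = x + k * (z - x)           # measurement update
        p = (1.0 - k) * p             # variance update
    return x
```

Because the filter updates with every new measurement, the estimate is available online, while the capture is still running, rather than after the whole sequence.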

  11. Mars Cameras Make Panoramic Photography a Snap

    Science.gov (United States)

    2008-01-01

    If you wish to explore a Martian landscape without leaving your armchair, a few simple clicks around the NASA Web site will lead you to panoramic photographs taken from the Mars Exploration Rovers, Spirit and Opportunity. Many of the technologies that enable this spectacular Mars photography have also inspired advancements in photography here on Earth, including the panoramic camera (Pancam) and its housing assembly, designed by the Jet Propulsion Laboratory and Cornell University for the Mars missions. Mounted atop each rover, the Pancam mast assembly (PMA) can tilt a full 180 degrees and swivel 360 degrees, allowing for a complete, highly detailed view of the Martian landscape. The rover Pancams take small, 1 megapixel (1 million pixel) digital photographs, which are stitched together into large panoramas that sometimes measure 4 by 24 megapixels. The Pancam software performs some image correction and stitching after the photographs are transmitted back to Earth. Different lens filters and a spectrometer also assist scientists in their analyses of infrared radiation from the objects in the photographs. These photographs from Mars spurred developers to begin thinking in terms of larger and higher quality images: super-sized digital pictures, or gigapixels, which are images composed of 1 billion or more pixels. Gigapixel images are more than 200 times the size captured by today's standard 4 megapixel digital camera. Although originally created for the Mars missions, the detail provided by these large photographs allows for many purposes, not all of which are limited to extraterrestrial photography.

  12. Multi-band infrared camera systems

    Science.gov (United States)

    Davis, Tim; Lang, Frank; Sinneger, Joe; Stabile, Paul; Tower, John

    1994-12-01

    The program resulted in an IR camera system that utilizes a unique MOS addressable focal plane array (FPA) with full TV resolution, electronic control capability, and windowing capability. Two systems were delivered, each with two different camera heads: a Stirling-cooled 3-5 micron band head and a liquid nitrogen-cooled, filter-wheel-based, 1.5-5 micron band head. Signal processing features include averaging up to 16 frames, flexible compensation modes, gain and offset control, and real-time dither. The primary digital interface is a Hewlett-Packard standard GPIB (IEEE-488) port that is used to upload and download data. The FPA employs an X-Y addressed PtSi photodiode array, CMOS horizontal and vertical scan registers, horizontal signal line (HSL) buffers followed by a high-gain preamplifier, and a depletion NMOS output amplifier. The 640 x 480 MOS X-Y addressed FPA has a high degree of flexibility in operational modes. By changing the digital data pattern applied to the vertical scan register, the FPA can be operated in either an interlaced or noninterlaced format. The thermal sensitivity performance of the second system's Stirling-cooled head was the best of the systems produced.

  13. Women's Creation of Camera Phone Culture

    Directory of Open Access Journals (Sweden)

    Dong-Hoo Lee

    2005-01-01

    Full Text Available A major aspect of the relationship between women and the media is the extent to which the new media environment is shaping how women live and perceive the world. It is necessary to understand, in a concrete way, how the new media environment is articulated to our gendered culture, how the symbolic or physical forms of the new media condition women’s experiences, and the degree to which a ‘post-gendered re-codification’ can be realized within a new media environment. This paper intends to provide an ethnographic case study of women’s experiences with camera phones, examining the extent to which these experiences recreate or reconstruct women’s subjectivity or identity. By taking a close look at the ways in which women utilize and appropriate the camera phone in their daily lives, it focuses not only on women’s cultural practices in making meanings but also on their possible effect in the deconstruction of gendered techno-culture.

  14. Process simulation in digital camera system

    Science.gov (United States)

    Toadere, Florin

    2012-06-01

    The goal of this paper is to simulate the functionality of a digital camera system. The simulations cover the conversion from light to numerical signal as well as the color processing and rendering. We consider the image acquisition system to be linear, shift invariant, and axial; light propagation is orthogonal to the system. We use a spectral image processing algorithm in order to simulate the radiometric properties of a digital camera. The algorithm takes into consideration the transmittances of the light source, lenses, and filters, and the quantum efficiency of a CMOS (complementary metal oxide semiconductor) sensor. The optical part is characterized by multiple convolutions between the point spread functions of the optical components. We use a Cooke triplet, the aperture, the light fall-off, and the optical part of the CMOS sensor. The electrical part consists of Bayer sampling, interpolation, signal-to-noise ratio, dynamic range, analog-to-digital conversion, and JPEG compression. We reconstruct the noisy blurred image by blending differently exposed images in order to reduce the photon shot noise; we also filter the fixed-pattern noise and sharpen the image. Then come the color processing blocks: white balancing, color correction, gamma correction, and conversion from the XYZ color space to the RGB color space. For the reproduction of color we use an OLED (organic light emitting diode) monitor. The analysis can be useful to assist students and engineers in image quality evaluation and imaging system design. Many other configurations of blocks can be used in our analysis.
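Two of the color-processing blocks mentioned above, white balancing and gamma correction, can be sketched in a few lines; the gain values and the gamma of 2.2 below are common illustrative defaults, not the paper's calibrated parameters:

```python
import numpy as np

def white_balance(img, gains=(1.0, 1.0, 1.0)):
    """Apply per-channel white-balance gains to a linear RGB image in [0, 1]."""
    return np.clip(img * np.asarray(gains), 0.0, 1.0)

def gamma_correct(img, gamma=2.2):
    """Encode a linear image for display with a simple power-law gamma curve."""
    return np.power(np.clip(img, 0.0, 1.0), 1.0 / gamma)
```

In a full pipeline these would sit after demosaicing and color correction, immediately before conversion to the display color space.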

  15. Infrared Camera Analysis of Laser Hardening

    Directory of Open Access Journals (Sweden)

    J. Tesar

    2012-01-01

    Full Text Available Surface-improvement processes such as laser hardening are becoming very important in modern manufacturing. The resulting hardening depth and surface hardness can be affected by changes in the optical properties of the material surface, that is, by the absorptivity, which gives the ratio of absorbed to incident laser energy. The surface changes on a tested steel block were made by an engraving laser with different scanning velocities and repetition frequencies. During laser hardening, the process was observed by an infrared (IR) camera system that measures infrared radiation from the heated sample and renders it as a temperature field. Images of the sample from the IR camera are shown, and the maximal temperatures of all engraved areas are evaluated and compared. The surface hardness was measured, and the hardening depth was estimated from the measured hardness profile in the sample cross-section. The correlation between reached temperature, surface hardness, and hardening depth is shown. The highest and lowest temperatures correspond to the lowest and highest hardness, respectively, and conversely to the highest and lowest hardening depth.

  16. FIDO Rover Retracted Arm and Camera

    Science.gov (United States)

    1999-01-01

    The Field Integrated Design and Operations (FIDO) rover extends the large mast that carries its panoramic camera. The FIDO is being used in ongoing NASA field tests to simulate driving conditions on Mars. FIDO is controlled from the mission control room at JPL's Planetary Robotics Laboratory in Pasadena. FIDO uses a robot arm to manipulate science instruments and it has a new mini-corer or drill to extract and cache rock samples. Several camera systems onboard allow the rover to collect science and navigation images by remote-control. The rover is about the size of a coffee table and weighs as much as a St. Bernard, about 70 kilograms (150 pounds). It is approximately 85 centimeters (about 33 inches) wide, 105 centimeters (41 inches) long, and 55 centimeters (22 inches) high. The rover moves up to 300 meters an hour (less than a mile per hour) over smooth terrain, using its onboard stereo vision systems to detect and avoid obstacles as it travels 'on-the-fly.' During these tests, FIDO is powered by both solar panels that cover the top of the rover and by replaceable, rechargeable batteries.

  17. Oil spill detection using hyperspectral infrared camera

    Science.gov (United States)

    Yu, Hui; Wang, Qun; Zhang, Zhen; Zhang, Zhi-jie; Tang, Wei; Tang, Xin; Yue, Song; Wang, Chen-sheng

    2016-11-01

    Oil spill pollution is a severe environmental problem that persists in the marine environment and in inland water systems around the world. Remote sensing is an important part of oil spill response. Hyperspectral images provide not only spatial but also spectral information. Pixels of interest generally incorporate information from disparate components, which requires quantitative decomposition of these pixels to extract the desired information. Oil spill detection can be implemented with a hyperspectral camera that collects the hyperspectral data of the oil. By extracting the desired spectral signature from hundreds of bands, one can detect and identify oil spill areas over vast geographical regions. Numerous hyperspectral image processing algorithms have been developed for target detection. In this paper, we investigate several of the most widely used target detection algorithms for the identification of surface oil spills in the ocean environment. In the experiments, we applied a hyperspectral camera to image real-life oil spills. The experimental results show the feasibility of oil spill detection using hyperspectral imaging, and the performance of the hyperspectral image processing algorithms was also validated.
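One classic example of the target-detection algorithms referred to above is the spectral matched filter, sketched below; the covariance regularization term and the (rows, cols, bands) cube layout are assumptions of this sketch, not details from the paper:

```python
import numpy as np

def spectral_matched_filter(cube, target):
    """Matched-filter detection score for each pixel of a (rows, cols,
    bands) hyperspectral cube against a known target spectrum: whiten by
    the background covariance, then project onto the target direction."""
    pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(pixels.shape[1])
    w = np.linalg.solve(cov, target - mu)      # matched-filter weights
    scores = (pixels - mu) @ w
    return scores.reshape(cube.shape[:2])
```

Pixels whose spectra resemble the target (e.g. an oil signature) score high, so thresholding the score map segments candidate spill regions.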

  18. The NectarCAM camera project

    CERN Document Server

    Glicenstein, J-F; Barrio, J-A; Blanch, O; Boix, J; Bolmont, J; Boutonnet, C; Cazaux, S; Chabanne, E; Champion, C; Chateau, F; Colonges, S; Corona, P; Couturier, S; Courty, B; Delagnes, E; Delgado, C; Ernenwein, J-P; Fegan, S; Ferreira, O; Fesquet, M; Fontaine, G; Fouque, N; Henault, F; Gascón, D; Herranz, D; Hermel, R; Hoffmann, D; Houles, J; Karkar, S; Khelifi, B; Knödlseder, J; Martinez, G; Lacombe, K; Lamanna, G; LeFlour, T; Lopez-Coto, R; Louis, F; Mathieu, A; Moulin, E; Nayman, P; Nunio, F; Olive, J-F; Panazol, J-L; Petrucci, P-O; Punch, M; Prast, J; Ramon, P; Riallot, M; Ribó, M; Rosier-Lees, S; Sanuy, A; Siero, J; Tavernet, J-P; Tejedor, L A; Toussenel, F; Vasileiadis, G; Voisin, V; Waegebert, V; Zurbach, C

    2013-01-01

    In the framework of the next generation of Cherenkov telescopes, the Cherenkov Telescope Array (CTA), NectarCAM is a camera designed for the medium-size telescopes covering the central energy range of 100 GeV to 30 TeV. NectarCAM will be finely pixelated (~1800 pixels for an 8 degree field of view, FoV) in order to image atmospheric Cherenkov showers by measuring the charge deposited within a few-nanosecond time window. It will have additional features such as the capacity to record the full waveform with GHz sampling for every pixel and to measure event times with nanosecond accuracy. An array of a few tens of medium-size telescopes, equipped with NectarCAMs, will achieve up to a factor of ten improvement in sensitivity over existing instruments in the energy range of 100 GeV to 10 TeV. The camera is made of roughly 250 independent read-out modules, each composed of seven photo-multipliers, with their associated high voltage base and control, a read-out board and a multi-service backplane board. The read-out b...

  19. Foreground extraction for moving RGBD cameras

    Science.gov (United States)

    Junejo, Imran N.; Ahmed, Naveed

    2017-02-01

    In this paper, we propose a simple method to perform foreground extraction for a moving RGBD camera. These cameras have been available for quite some time; their popularity is primarily due to their low cost and ease of availability. Although foreground extraction, or background subtraction, has long been explored by computer vision researchers, depth-based subtraction is relatively new and has not yet been extensively addressed. Most current methods make heavy use of geometric reconstruction, making the solutions quite restrictive. In this paper, we make novel use of RGB and RGBD data: from the RGB frame, we extract corner features (FAST) and then represent these features with the histogram of oriented gradients (HoG) descriptor. We train a non-linear SVM on these descriptors. During the test phase, we make use of the fact that the foreground object has a distinct depth ordering with respect to the rest of the scene. That is, we use the positively classified FAST features on the test frame to initiate a region growing that obtains an accurate segmentation of the foreground object from just the RGBD data. We demonstrate the proposed method on synthetic datasets and report encouraging quantitative and qualitative results.

  20. From the Pinhole Camera to the Shape of a Lens: The Camera-Obscura Reloaded

    Science.gov (United States)

    Ziegler, Max; Priemer, Burkhard

    2015-01-01

    We demonstrate how the form of a plano-convex lens and a derivation of the thin lens equation can be understood through simple physical considerations. The basic principle is the extension of the pinhole camera using additional holes. The resulting images are brought into coincidence through the deflection of light with an arrangement of prisms.…

  1. Lights, Camera, AG-Tion: Promoting Agricultural and Environmental Education on Camera

    Science.gov (United States)

    Fuhrman, Nicholas E.

    2016-01-01

    Viewing of online videos and television segments has become a popular and efficient way for Extension audiences to acquire information. This article describes a unique approach to teaching on camera that may help Extension educators communicate their messages with comfort and personality. The S.A.L.A.D. approach emphasizes using relevant teaching…

  2. Calibration of the Lunar Reconnaissance Orbiter Camera

    Science.gov (United States)

    Tschimmel, M.; Robinson, M. S.; Humm, D. C.; Denevi, B. W.; Lawrence, S. J.; Brylow, S.; Ravine, M.; Ghaemi, T.

    2008-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) onboard the NASA Lunar Reconnaissance Orbiter (LRO) spacecraft consists of three cameras: the Wide-Angle Camera (WAC) and two identical Narrow Angle Cameras (NAC-L, NAC-R). The WAC is a push-frame imager with 5 visible wavelength filters (415 to 680 nm) at a spatial resolution of 100 m/pixel and 2 UV filters (315 and 360 nm) with a resolution of 400 m/pixel. In addition to the multicolor imaging, the WAC can operate in monochrome mode to provide a global large-incidence-angle basemap and a time-lapse movie of the illumination conditions at both poles. The WAC has a highly linear response, a read noise of 72 e- and a full well capacity of 47,200 e-. The signal-to-noise ratio in each band is 140 in the worst case. There are no out-of-band leaks and the spectral response of each filter is well characterized. Each NAC is a monochrome pushbroom scanner, providing images with a resolution of 50 cm/pixel from a 50-km orbit. A single NAC image has a swath width of 2.5 km and a length of up to 26 km. The NACs are mounted to acquire side-by-side imaging for a combined swath width of 5 km. The NAC is designed to fully characterize future human and robotic landing sites in terms of topography and hazard risks. The North and South poles will be mapped on a 1-meter scale poleward of 85.5° latitude. Stereo coverage can be provided by pointing the NACs off-nadir. The NACs are also highly linear. Read noise is 71 e- for NAC-L and 74 e- for NAC-R, and the full well capacity is 248,500 e- for NAC-L and 262,500 e- for NAC-R. The focal lengths are 699.6 mm for NAC-L and 701.6 mm for NAC-R; the system MTF is 28% for NAC-L and 26% for NAC-R. The signal-to-noise ratio is at least 46 (terminator scene) and can be higher than 200 (high sun scene). Both NACs exhibit a straylight feature, which is caused by out-of-field sources and is of a magnitude of 1-3%. However, as this feature is well understood it can be greatly reduced during ground
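The quoted read-noise, full-well, and SNR figures are related by a standard quadrature noise model; as a rough illustration (assuming purely Poisson shot noise and ignoring dark current and quantization, which the actual LROC radiometric model accounts for):

```python
import math

def shot_limited_snr(signal_e, read_noise_e):
    """SNR of a pixel whose noise combines photon shot noise (sqrt of the
    signal in electrons) and read noise, added in quadrature."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)
```

For example, with the WAC's quoted read noise of 72 e-, a near-full-well signal of 47,200 e- would give an SNR of roughly 206 under this simplified model.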

  3. Camera systems in human motion analysis for biomedical applications

    Science.gov (United States)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    Human Motion Analysis (HMA) has been one of the major interests among researchers in the fields of computer vision, artificial intelligence, and biomedical engineering and sciences. This is due to its wide and promising biomedical applications, namely, bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and biomedical signal and image processing for diagnosis and rehabilitation applications. This paper provides an extensive review of the camera systems used in HMA and their taxonomy, including camera types, camera calibration, and camera configuration. The review focuses on evaluating camera system considerations for HMA systems specifically in biomedical applications. This review is important as it provides guidelines and recommendations for researchers and practitioners in selecting a camera system for an HMA system for biomedical applications.

  4. A wide-angle camera module for disposable endoscopy

    Science.gov (United States)

    Shim, Dongha; Yeon, Jesun; Yi, Jason; Park, Jongwon; Park, Soo Nam; Lee, Nanhee

    2016-08-01

    A wide-angle miniaturized camera module for disposable endoscope is demonstrated in this paper. A lens module with 150° angle of view (AOV) is designed and manufactured. All plastic injection-molded lenses and a commercial CMOS image sensor are employed to reduce the manufacturing cost. The image sensor and LED illumination unit are assembled with a lens module. The camera module does not include a camera processor to further reduce its size and cost. The size of the camera module is 5.5 × 5.5 × 22.3 mm3. The diagonal field of view (FOV) of the camera module is measured to be 110°. A prototype of a disposable endoscope is implemented to perform a pre-clinical animal testing. The esophagus of an adult beagle dog is observed. These results demonstrate the feasibility of a cost-effective and high-performance camera module for disposable endoscopy.

  5. Disposition of camera parameters in vehicle navigation system

    Science.gov (United States)

    Yu, Houyun; Zhang, Weigong

    2010-10-01

    To solve the calibration of an onboard camera in a vehicle navigation system based on machine vision, a method for separately handling the intrinsic and extrinsic parameters of the camera is presented. Since the intrinsic parameters are essentially invariant while the car is moving, they can be calibrated once with a planar pattern as soon as the camera is installed. The installation pose of the onboard camera can then be adjusted in real time according to the slope and vanishing point of the lane lines in the image, driving extrinsic parameters such as the direction angle, incline angle, and lateral translation to zero. This separate treatment of the camera parameters is applied to lane-departure detection on structured roads, simplifying camera calibration and decreasing the measurement error due to the extrinsic parameters. The correctness and feasibility of the method are proved by theoretical calculation and practical experiment.
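The vanishing-point check can be sketched with simple line geometry; the slope/intercept line representation and the yaw formula below are illustrative assumptions, not the paper's exact model:

```python
import math

def vanishing_point(line1, line2):
    """Intersect two image lines given as (slope, intercept) pairs; for
    straight lane lines this intersection is the lane vanishing point."""
    (m1, b1), (m2, b2) = line1, line2
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1

def direction_angle_deg(vp_x, principal_x, focal_px):
    """Approximate camera yaw (direction angle) from the horizontal offset
    of the vanishing point relative to the principal point, in degrees."""
    return math.degrees(math.atan2(vp_x - principal_x, focal_px))
```

When the camera is aligned with the road, the vanishing point sits at the principal point and the estimated direction angle is zero, which is the condition the installation adjustment drives toward.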

  6. Two-Phase Algorithm for Optimal Camera Placement

    OpenAIRE

    Jun-Woo Ahn; Tai-Woo Chang; Sung-Hee Lee; Yong Won Seo

    2016-01-01

    As markers for visual sensor networks have become larger, interest in the optimal camera placement problem has continued to increase. The most featured solution for the optimal camera placement problem is based on binary integer programming (BIP). Due to the NP-hard characteristic of the optimal camera placement problem, however, it is difficult to find a solution for a complex, real-world problem using BIP. Many approximation algorithms have been developed to solve this problem. In this pape...

  7. Integrating Scene Parallelism in Camera Auto-Calibration

    Institute of Scientific and Technical Information of China (English)

    LIU Yong (刘勇); WU ChengKe (吴成柯); Hung-Tat Tsui

    2003-01-01

    This paper presents an approach for camera auto-calibration from uncalibrated video sequences taken by a hand-held camera. The novelty of this approach lies in that the line parallelism is transformed to the constraints on the absolute quadric during camera autocalibration. This makes some critical cases solvable and the reconstruction more Euclidean. The approach is implemented and validated using simulated data and real image data. The experimental results show the effectiveness of the approach.

  8. IR Camera Report for the 7 Day Production Test

    Energy Technology Data Exchange (ETDEWEB)

    Holloway, Michael Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-22

    The following report gives a summary of the IR camera performance results and data for the 7 day production run that occurred from 10 Sep 2015 through 16 Sep 2015. During this production run our goal was to see how well the camera performed its task of monitoring the target window temperature with our improved alignment procedure and emissivity measurements. We also wanted to see whether the increased shielding would be effective in protecting the camera from damage and failure.

  9. Nuclear probes and intraoperative gamma cameras.

    Science.gov (United States)

    Heller, Sherman; Zanzonico, Pat

    2011-05-01

    Gamma probes are now an important, well-established technology in the management of cancer, particularly in the detection of sentinel lymph nodes. Intraoperative sentinel lymph node as well as tumor detection may be improved under some circumstances by the use of beta (negatron or positron), rather than gamma detection, because the very short range (∼ 1 mm or less) of such particulate radiations eliminates the contribution of confounding counts from activity other than in the immediate vicinity of the detector. This has led to the development of intraoperative beta probes. Gamma camera imaging also benefits from short source-to-detector distances and minimal overlying tissue, and intraoperative small field-of-view gamma cameras have therefore been developed as well. Radiation detectors for intraoperative probes can generally be characterized as either scintillation or ionization detectors. Scintillators used in scintillation-detector probes include thallium-doped sodium iodide, thallium- and sodium-doped cesium iodide, and cerium-doped lutetium oxyorthosilicate. Alternatives to inorganic scintillators are plastic scintillators, solutions of organic scintillation compounds dissolved in an organic solvent that is subsequently polymerized to form a solid. Their combined high counting efficiency for beta particles and low counting efficiency for 511-keV annihilation γ-rays make plastic scintillators well-suited as intraoperative beta probes in general and positron probes in particular. Semiconductors used in ionization-detector probes include cadmium telluride, cadmium zinc telluride, and mercuric iodide. Clinical studies directly comparing scintillation and semiconductor intraoperative probes have not provided a clear choice between scintillation and ionization detector-based probes. The earliest small field-of-view intraoperative gamma camera systems were hand-held devices having fields of view of only 1.5-2.5 cm in diameter that used conventional thallium

  10. Design and Field Test of a Galvanometer Deflected Streak Camera

    Energy Technology Data Exchange (ETDEWEB)

    Lai, C C; Goosman, D R; Wade, J T; Avara, R

    2002-11-08

    We have developed a compact, fieldable, optically deflected streak camera, first reported at the 20th HSPP Congress. Using a triggerable galvanometer that scans the optical signal, the imaging and streaking function is an all-optical process that incurs no photon-electron-photon conversion or photoelectronic deflection. As such, the achievable imaging quality is limited mainly by the optical design, rather than by multiple conversions of the signal carrier and high-voltage electron-optics effects. All core elements of the camera are packaged into a 12 inch x 24 inch footprint box, a size similar to that of a conventional electronic streak camera. At LLNL's Site-300 Test Site, we have conducted a Fabry-Perot interferometer measurement of fast object velocity using this all-optical camera side-by-side with an intensified electronic streak camera. The two cameras were configured as independent instruments, each recording one branch of a 50/50 split of the same incoming signal. Given the same signal characteristics, the test results clearly demonstrated superior imaging performance for the all-optical streak camera: higher signal sensitivity, wider linear dynamic range, better spatial contrast, finer temporal resolution, and larger data capacity than its electronic counterpart. The camera also demonstrated structural robustness and functional consistency compatible with the field environment. This paper presents the camera design and the test results in both pictorial records and post-processed graphic summaries.

  11. Calibration of a Stereo Radiation Detection Camera Using Planar Homography

    Directory of Open Access Journals (Sweden)

    Seung-Hae Baek

    2016-01-01

    Full Text Available This paper proposes a calibration technique for a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision community. However, little or no stereo calibration has been investigated in radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, it is impossible to apply a general stereo calibration algorithm directly. In this paper, we develop a hybrid stereo system equipped with both radiation and vision cameras. To calibrate the stereo radiation cameras, stereo images of a calibration pattern captured by the vision cameras are transformed into the view of the radiation cameras. The homography transformation is calibrated based on the geometric relationship between the visual and radiation camera coordinates. The accuracy of the stereo parameters of the radiation camera is analyzed by distance measurements to both visible-light and gamma sources. The experimental results show that the measurement error is about 3%.
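The core homography step, re-expressing calibration points seen by the vision cameras in the radiation cameras' view, amounts to mapping 2D points through a 3x3 matrix in homogeneous coordinates; the translation-only matrix in the example is purely illustrative:

```python
import numpy as np

def apply_homography(H, pts):
    """Map an array of 2D points through a 3x3 planar homography by
    lifting to homogeneous coordinates and dividing out the scale."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

In the paper's setup, `H` would be estimated from correspondences between the visual and radiation camera coordinate frames; here a pure translation serves as a sanity check.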

  12. Do speed cameras reduce speeding in urban areas?

    Science.gov (United States)

    Oliveira, Daniele Falci de; Friche, Amélia Augusta de Lima; Costa, Dário Alves da Silva; Mingoti, Sueli Aparecida; Caiaffa, Waleska Teixeira

    2015-11-01

    This observational study aimed to estimate the prevalence of speeding on urban roadways and to analyze associated factors. The sample consisted of 8,565 vehicles circulating in areas with and without fixed speed cameras in operation. We found that 40% of vehicles observed 200 meters after the fixed cameras and 33.6% of vehicles observed on roadways without speed cameras were moving over the speed limit (p < 0.05). In areas with speed cameras, more women drivers were talking on their cell phones and wearing seatbelts when compared to men (p < 0.05 for both comparisons), independently of speed limits. The results suggest that compliance with speed limits requires more than structural interventions.

  13. Heterogeneous treatment effects of speed cameras on road safety.

    Science.gov (United States)

    Li, Haojie; Graham, Daniel J

    2016-12-01

    This paper analyses how the effects of fixed speed cameras on road casualties vary across sites with different characteristics and evaluates the criteria for selecting camera sites. A total of 771 camera sites and 4787 potential control sites are observed for a period of 9 years across England. Site characteristics such as road class, crash history and site length are combined into a single index, referred to as a propensity score. We first estimate the average effect at each camera site using propensity score matching. The effects are then estimated as a function of propensity scores using local polynomial regression. The results show that the reduction in personal injury collisions ranges from 10% to 40% whilst the average effect is 25.9%, indicating that the effects of speed cameras are not uniform across camera sites and are dependent on site characteristics, as measured by propensity scores. We further evaluate the criteria for selecting camera sites in the UK by comparing the effects at camera sites meeting and not meeting the criteria. The results show that camera sites which meet the criteria perform better in reducing casualties, implying the current site selection criteria are rational.
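
    The two-stage approach, fitting a propensity model and then matching each treated site to the control with the nearest score, can be sketched as below. The logistic fit and synthetic data are purely illustrative of the method, not the paper's dataset or exact estimator:

```python
import numpy as np

def propensity_scores(X, treated, iters=2000, lr=0.1):
    """Fit a logistic model P(treated | X) by gradient descent; return scores."""
    Xb = np.hstack([np.ones((len(X), 1)), X])   # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (treated - p) / len(X)  # ascend the log-likelihood
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def matched_effect(scores, treated, outcome):
    """Average treated-minus-control outcome over nearest-score matches."""
    t_idx = np.where(treated == 1)[0]
    c_idx = np.where(treated == 0)[0]
    diffs = []
    for i in t_idx:
        j = c_idx[np.argmin(np.abs(scores[c_idx] - scores[i]))]
        diffs.append(outcome[i] - outcome[j])
    return float(np.mean(diffs))
```

    The paper goes further, regressing the matched effects on the propensity score itself (local polynomial regression) to expose how the treatment effect varies across site types.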

  14. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    Directory of Open Access Journals (Sweden)

    Mark Shortis

    2015-12-01

    Full Text Available Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  15. Camera traps can be heard and seen by animals.

    Science.gov (United States)

    Meek, Paul D; Ballard, Guy-Anthony; Fleming, Peter J S; Schaefer, Michael; Williams, Warwick; Falzon, Greg

    2014-01-01

    Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps; in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing ranges (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine whether animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive hearing range of most mammals and produce illumination that can be seen by many species.

  16. Mid-IR image acquisition using a standard CCD camera

    DEFF Research Database (Denmark)

    Dam, Jeppe Seidelin; Sørensen, Knud Palmelund; Pedersen, Christian

    2010-01-01

    Direct image acquisition in the 3-5 µm range is realized using a standard CCD camera and a wavelength up-converter unit. The converter unit transfers the image information to the NIR range, where state-of-the-art cameras exist.

  17. 360 deg Camera Head for Unmanned Sea Surface Vehicles

    Science.gov (United States)

    Townsend, Julie A.; Kulczycki, Eric A.; Willson, Reginald G.; Huntsberger, Terrance L.; Garrett, Michael S.; Trebi-Ollennu, Ashitey; Bergh, Charles F.

    2012-01-01

    The 360 camera head consists of a set of six color cameras arranged in a circular pattern such that their overlapping fields of view give a full 360 view of the immediate surroundings. The cameras are enclosed in a watertight container along with support electronics and a power distribution system. Each camera views the world through a watertight porthole. To prevent overheating or condensation in extreme weather conditions, the watertight container is also equipped with an electrical cooling unit and a pair of internal fans for circulation.

  18. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    Science.gov (United States)

    Shortis, Mark

    2015-12-07

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  19. Smart Cameras for Remote Science Survey

    Science.gov (United States)

    Thompson, David R.; Abbey, William; Allwood, Abigail; Bekker, Dmitriy; Bornstein, Benjamin; Cabrol, Nathalie A.; Castano, Rebecca; Estlin, Tara; Fuchs, Thomas; Wagstaff, Kiri L.

    2012-01-01

    Communication with remote exploration spacecraft is often intermittent and bandwidth is highly constrained. Future missions could use onboard science data understanding to prioritize downlink of critical features [1], draft summary maps of visited terrain [2], or identify targets of opportunity for follow-up measurements [3]. We describe a generic approach to classify geologic surfaces for autonomous science operations, suitable for parallelized implementations in FPGA hardware. We map these surfaces with texture channels - distinctive numerical signatures that differentiate properties such as roughness, pavement coatings, regolith characteristics, sedimentary fabrics and differential outcrop weathering. This work describes our basic image analysis approach and reports an initial performance evaluation using surface images from the Mars Exploration Rovers. Future work will incorporate these methods into camera hardware for real-time processing.
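
    As a toy stand-in for such texture channels, consider per-pixel local statistics: local mean as a brightness channel and local standard deviation as a crude roughness channel. The real system uses richer signatures, so this is only a sketch:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def texture_channels(img, win=5):
    """Two simple per-pixel texture channels over win x win neighborhoods:
    local mean (brightness) and local standard deviation (crude roughness)."""
    w = sliding_window_view(img, (win, win))   # shape (H-win+1, W-win+1, win, win)
    return w.mean(axis=(2, 3)), w.std(axis=(2, 3))
```

    Channels like these are cheap enough to evaluate per pixel in an FPGA pipeline, which is the deployment target the abstract mentions.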

  20. Relevance of ellipse eccentricity for camera calibration

    Science.gov (United States)

    Mordwinzew, W.; Tietz, B.; Boochs, F.; Paulus, D.

    2015-05-01

    Plane circular targets are widely used in the calibration of optical sensors through photogrammetric set-ups. Due to this popularity, their advantages and disadvantages are well studied in the scientific community. One main disadvantage occurs when the projected target is not parallel to the image plane. In this geometric constellation, the target has an elliptic geometry with an offset between its geometric and its projected center. This difference is referred to as ellipse eccentricity and is a systematic error which, if not treated accordingly, has a negative impact on the overall achievable accuracy. The magnitude and direction of eccentricity errors depend on various factors, the most important one being target size: the bigger an ellipse is in the image, the bigger the error will be. Although correction models dealing with eccentricity have been available for decades, it is mostly treated as a planning task in which the aim is to choose the target size small enough that the resulting eccentricity error remains negligible. Besides the fact that advanced mathematical models are available and that the influence of this error on camera calibration results is still not completely investigated, there are various additional reasons why bigger targets cannot or should not be avoided. One of them is the growing image resolution as a by-product of advancements in sensor development. Here, smaller pixels have a lower S/N ratio, necessitating more pixels to assure geometric quality. Another scenario might need bigger targets due to larger scale differences, where distant targets should still contain enough information in the image. In general, bigger ellipses contain more contour pixels and therefore more information. This helps target-detection algorithms perform better even under non-optimal conditions, such as data from sensors with a high noise level.
    In contrast to rather simple measuring situations in a stereo or multi-image mode, the impact
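
    The size dependence of the eccentricity error can be reproduced numerically: project a tilted circle through an ideal pinhole, fit a conic to the contour, and compare the fitted ellipse center with the projection of the true circle center. A sketch, with all geometry (focal length, distance, tilt) hypothetical:

```python
import numpy as np

def project_circle(radius, tilt_deg, dist=100.0, f=1000.0, n=360):
    """Project a tilted planar circle through an ideal pinhole camera.
    The circle is centered on the optical axis, tilted about the x-axis,
    so its 3D center projects exactly to the principal point (0, 0)."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    a = np.radians(tilt_deg)
    X = radius * np.cos(t)
    Y = radius * np.sin(t) * np.cos(a)
    Z = dist + radius * np.sin(t) * np.sin(a)
    return np.column_stack([f * X / Z, f * Y / Z])

def ellipse_center(pts):
    """Fit a conic a x^2 + b x y + c y^2 + d x + e y + g = 0 by SVD and
    return the conic's center (where the gradient vanishes)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(A)
    a, b, c, d, e, _ = Vt[-1]
    return np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])

# fitted center vs. projected true center (0, 0): the offset is the
# eccentricity error, and it grows with target size at fixed tilt
small = np.linalg.norm(ellipse_center(project_circle(2.0, 45)))
large = np.linalg.norm(ellipse_center(project_circle(10.0, 45)))
```

    At 45 degrees tilt the offset for the 10-unit target is markedly larger than for the 2-unit one, matching the planning rule of thumb discussed above.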

  1. CHAMP (Camera, Handlens, and Microscope Probe)

    Science.gov (United States)

    Mungas, Greg S.; Boynton, John E.; Balzer, Mark A.; Beegle, Luther; Sobel, Harold R.; Fisher, Ted; Klein, Dan; Deans, Matthew; Lee, Pascal; Sepulveda, Cesar A.

    2005-01-01

    CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution, from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As a robotic arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision rangefinding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. CHAMP was originally developed through the Mars Instrument Development Program (MIDP) in support of robotic field investigations, but may also find application in new areas such as robotic in-orbit servicing and maintenance operations associated with spacecraft and human operations. We overview CHAMP's instrument performance and basic design considerations below.

  2. Evryscope Robotilter automated camera / ccd alignment system

    Science.gov (United States)

    Ratzloff, Jeff K.; Law, Nicholas M.; Fors, Octavi; Ser, Daniel d.; Corbett, Henry T.

    2016-08-01

    We have deployed a new class of telescope, the Evryscope, which opens a new parameter space in optical astronomy: the ability to detect short-timescale events across the entire sky simultaneously. The system is a gigapixel-scale array camera with an 8000 sq. deg. field of view, 13 arcsec per pixel sampling, and the ability to detect objects brighter than g = 16 in each 2-minute exposure. The Evryscope is designed to find transiting exoplanets around exotic stars, as well as to detect nearby supernovae and provide continuous records of distant relativistic explosions like gamma-ray bursts. The Evryscope uses commercially available CCDs and optics; the machine and assembly tolerances inherent in the mass production of these parts introduce problematic variations in the lens/CCD alignment, which degrade image quality. We have built an automated alignment system (Robotilters) to solve this challenge. In this paper we describe the Robotilter system, its mechanical and software design, the image quality improvement, and its current status.

  3. Retinal oximetry with a multiaperture camera

    Science.gov (United States)

    Lemaillet, Paul; Lompado, Art; Ibrahim, Mohamed; Nguyen, Quan Dong; Ramella-Roman, Jessica C.

    2010-02-01

    Oxygen saturation measurement in the retina is essential in monitoring the eye health of diabetic patients. In this paper, preliminary oxygen saturation measurements for a healthy patient's retina are presented. The retinal oximeter used is based on a regular fundus camera to which an optimized optical train was added, designed to perform aperture division, while a filter array helps select the requested wavelengths. Hence, nine equivalent wavelength-dependent sub-images are taken in a single snapshot, which helps minimize the effects of eye movements. The setup is calibrated using a set of reflectance calibration phantoms, and a lookup table (LUT) is computed. An inverse model based on the LUT is presented to extract the optical properties of a patient's fundus and further estimate the oxygen saturation in a retinal vessel.
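
    At its simplest, a LUT-based inverse model is a nearest-entry search: find the tabulated spectrum closest to the measured nine-band reflectance and read off its saturation value. A sketch with entirely hypothetical endmember spectra (the real LUT comes from the calibrated reflectance phantoms):

```python
import numpy as np

def invert_lut(measured, lut_spectra, lut_so2):
    """Return the oxygen-saturation value whose tabulated spectrum is
    closest (least squares) to the measured multi-band reflectance."""
    errs = np.sum((lut_spectra - measured) ** 2, axis=1)
    return lut_so2[np.argmin(errs)]

# toy forward model (hypothetical): reflectance over the nine sub-image
# bands is a linear mix of oxy- and deoxy-hemoglobin endmember spectra
hbo2 = np.array([0.9, 0.7, 0.5, 0.4, 0.35, 0.3, 0.28, 0.27, 0.26])
hb = np.array([0.6, 0.65, 0.7, 0.6, 0.5, 0.45, 0.4, 0.35, 0.3])
so2_grid = np.linspace(0.0, 1.0, 101)
lut = np.array([s * hbo2 + (1 - s) * hb for s in so2_grid])
```

    A finer grid or interpolation between neighboring entries trades memory for saturation resolution; the nearest-entry search is the simplest choice.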

  4. 3D Capturing with Monoscopic Camera

    Directory of Open Access Journals (Sweden)

    M. Galabov

    2014-12-01

    Full Text Available This article presents a new concept of using the auto-focus function of a monoscopic camera sensor to estimate depth-map information, which avoids both the use of auxiliary equipment or human interaction and the computational complexity introduced by SfM or depth analysis. The system architecture that supports stereo image and video data capturing, processing and display is discussed. A novel stereo-image-pair generation algorithm using Z-buffer-based 3D surface recovery is proposed. Based on the depth map, we are able to calculate the disparity map (the distance in pixels between corresponding image points in the two views). The presented algorithm uses a single image with depth information (e.g. a z-buffer) as input and produces two images, one for the left and one for the right eye.
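
    Generating the two views from a single image plus depth reduces to shifting each pixel horizontally in proportion to its disparity (larger for nearer pixels). A minimal depth-image-based rendering sketch, with nearest-neighbor splatting, no occlusion or hole filling, and an arbitrary disparity scale:

```python
import numpy as np

def stereo_pair_from_depth(image, depth, max_disp=8):
    """Generate a left/right view pair from one image and its depth map.
    Nearer pixels (small depth) get larger disparity; the two views shift
    the same pixel in opposite directions. Unfilled pixels stay zero."""
    h, w = depth.shape
    disp = (max_disp * (depth.max() - depth) / np.ptp(depth)).astype(int)
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            d = disp[y, x]
            if x - d >= 0:
                left[y, x - d] = image[y, x]
            if x + d < w:
                right[y, x + d] = image[y, x]
    return left, right
```

    A production renderer would also inpaint the disocclusion holes and splat in depth order; this sketch only illustrates the disparity-shift core.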

  5. Robust multi-camera view face recognition

    CERN Document Server

    Kisku, Dakshina Ranjan; Gupta, Phalguni; Sing, Jamuna Kanta

    2010-01-01

    This paper presents multi-appearance fusion of Principal Component Analysis (PCA) and a generalization of Linear Discriminant Analysis (LDA) for a multi-camera view offline face recognition (verification) system. The generalization of LDA has been extended to establish correlations between the face classes in the transformed representation; this is called the canonical covariate. The proposed system uses Gabor filter banks to characterize facial features by spatial frequency, spatial locality and orientation, to compensate for the variations of face instances caused by illumination, pose and facial expression changes. Convolution of the Gabor filter bank with face images produces Gabor face representations with high-dimensional feature vectors. PCA and canonical covariates are then applied to the Gabor face representations to reduce the high-dimensional feature spaces into low-dimensional Gabor eigenfaces and Gabor canonical faces. Reduced eigenface vectors and canonical face vectors are fused together usi...
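
    The front of that pipeline, a Gabor filter bank followed by PCA dimensionality reduction, can be sketched as below. Kernel parameters and image sizes are arbitrary illustrations, and the canonical covariate step is replaced by plain PCA:

```python
import numpy as np

def gabor_kernel(ksize=9, theta=0.0, lam=4.0, sigma=2.0):
    """Real part of a Gabor kernel at orientation theta:
    a Gaussian envelope times a cosine carrier along the rotated x-axis."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Concatenate valid-mode filter responses over several orientations."""
    feats = []
    for th in thetas:
        k = gabor_kernel(theta=th)
        windows = np.lib.stride_tricks.sliding_window_view(img, k.shape)
        feats.append(np.tensordot(windows, k, axes=([2, 3], [0, 1])).ravel())
    return np.concatenate(feats)

def pca_project(X, n_components=5):
    """Project rows of X onto the leading principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T
```

    In the paper, one such feature vector per face image forms a row of X, and the PCA (or canonical covariate) projection yields the low-dimensional representation that is finally fused and classified.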

  6. Dark energy camera installation at CTIO: overview

    Science.gov (United States)

    Abbott, Timothy M.; Muñoz, Freddy; Walker, Alistair R.; Smith, Chris; Montane, Andrés.; Gregory, Brooke; Tighe, Roberto; Schurter, Patricio; van der Bliek, Nicole S.; Schumacher, German

    2012-09-01

    The Dark Energy Camera (DECam) has been installed on the V. M. Blanco telescope at Cerro Tololo Inter-American Observatory in Chile. This major upgrade to the facility has required numerous modifications to the telescope and improvements in observatory infrastructure. The telescope prime focus assembly has been entirely replaced, and the f/8 secondary change procedure radically changed. The heavier instrument means that telescope balance has been significantly modified. The telescope control system has been upgraded. NOAO has established a data transport system to efficiently move DECam's output to the NCSA for processing. The observatory has integrated the DECam highpressure, two-phase cryogenic cooling system into its operations and converted the Coudé room into an environmentally-controlled instrument handling facility incorporating a high quality cleanroom. New procedures to ensure the safety of personnel and equipment have been introduced.

  7. Neutron camera employing row and column summations

    Science.gov (United States)

    Clonts, Lloyd G.; Diawara, Yacouba; Donahue, Jr, Cornelius; Montcalm, Christopher A.; Riedel, Richard A.; Visscher, Theodore

    2016-06-14

    For each photomultiplier tube in an Anger camera, an R×S array of preamplifiers is provided to detect electrons generated within the photomultiplier tube. The outputs of the preamplifiers are digitized to measure the magnitude of the signals from each preamplifier. For each photomultiplier tube, corresponding summation circuitry including R row summation circuits and S column summation circuits numerically adds the magnitudes of the signals from the preamplifiers for each row and for each column to generate histograms. For a P×Q array of photomultiplier tubes, P×Q summation circuitries generate P×Q row histograms of R entries and P×Q column histograms of S entries. The total set of histograms includes P×Q×(R+S) entries, which can be analyzed by a position calculation circuit to determine the locations of events (detection of a neutron).
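
    The readout scheme, row and column sums feeding a position calculation, can be sketched for a single tube's R×S preamplifier array. The frame values and the centroid position estimator are illustrative, not the patent's exact circuit:

```python
import numpy as np

def event_position(frame):
    """Estimate an event location from row/column sum histograms:
    sum the R x S preamplifier magnitudes along rows and columns,
    then take the centroid of each 1-D histogram."""
    rows = frame.sum(axis=1)   # R-entry row histogram
    cols = frame.sum(axis=0)   # S-entry column histogram
    r = np.arange(len(rows)) @ rows / rows.sum()
    c = np.arange(len(cols)) @ cols / cols.sum()
    return r, c
```

    The appeal of the summation readout is data reduction: R+S histogram entries per tube instead of R×S raw samples, while the two 1-D centroids still recover the 2-D event position.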

  8. Comment on ‘From the pinhole camera to the shape of a lens: the camera-obscura reloaded’

    Science.gov (United States)

    Grusche, Sascha

    2016-09-01

    In the article ‘From the pinhole camera to the shape of a lens: the camera-obscura reloaded’ (Phys. Educ. 50 706), the authors show that a prism array, or an equivalent lens, can be used to bring together multiple camera obscura images from a pinhole array. It should be pointed out that the size of the camera obscura images is conserved by a prism array, but changed by a lens. To avoid this discrepancy in image size, the prism array, or the lens, should be made to touch the pinhole array.

  9. GHz modulation detection using a streak camera: Suitability of streak cameras in the AWAKE experiment

    Science.gov (United States)

    Rieger, K.; Caldwell, A.; Reimann, O.; Muggli, P.

    2017-02-01

    Using frequency mixing, a modulated light pulse of ns duration is created. We show that, with a ps-resolution streak camera usually used for single short-pulse measurements, we can detect, via an FFT-based approach, modulation up to 450 GHz in a pulse in a single measurement. This work is performed in the context of the AWAKE plasma wakefield experiment, where modulation frequencies in the range of 80-280 GHz are expected.
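
    The FFT approach amounts to taking a line-out along the streak (time) axis and picking the dominant non-DC spectral peak. A sketch with an illustrative 200 GHz test signal, sampled at the camera's ps-scale resolution:

```python
import numpy as np

def detect_modulation(signal, dt):
    """Return the dominant modulation frequency of a streak-camera line-out
    via FFT peak picking (the mean is removed so DC does not win)."""
    spec = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), dt)
    return freqs[np.argmax(spec)]

# a 1 ns record sampled at 1 ps carrying 200 GHz intensity modulation
dt = 1e-12
t = np.arange(0.0, 1e-9, dt)
trace = 1.0 + 0.3 * np.sin(2 * np.pi * 200e9 * t)
```

    With 1 ps sampling the Nyquist limit is 500 GHz, which is consistent with the 450 GHz detection reported above; the frequency resolution is set by the record length (here 1 GHz for a 1 ns window).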

  10. Do it yourself smartphone fundus camera – DIYretCAM

    Directory of Open Access Journals (Sweden)

    Biju Raju

    2016-01-01

    Full Text Available This article describes a method to make a do-it-yourself smartphone-based fundus camera which can image the central retina as well as the peripheral retina up to the pars plana. It is a cost-effective alternative to a conventional fundus camera.

  11. Calibration of Low Cost RGB and NIR Uav Cameras

    Science.gov (United States)

    Fryskowska, A.; Kedzierski, M.; Grochala, A.; Braula, A.

    2016-06-01

    Non-metric digital cameras are widely used for photogrammetric studies. The increase in resolution and quality of images obtained by non-metric cameras allows them to be used in low-cost UAV and terrestrial photogrammetry. Imagery acquired with non-metric cameras can be used in 3D modeling of objects or landscapes, reconstruction of historical sites, generation of digital terrain models (DTM) and orthophotos, or in the assessment of accidents. Non-metric digital cameras are characterized by unstable and unknown interior orientation parameters. Therefore, the use of these devices requires prior calibration. The calibration research was conducted using a non-metric camera, different calibration test fields and various software packages. The first part of the paper contains a brief theoretical introduction with basic definitions, such as the construction of non-metric cameras and descriptions of different optical distortions. The second part of the paper describes the camera calibration process, with details of the calibration methods and models that were used. The Sony NEX-5 camera was calibrated using the software Image Master Calib, the Matlab Camera Calibrator application and Agisoft Lens. 2D test fields were used for the study. As part of the research, a comparative analysis of the results was carried out.
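
    One core piece of such a calibration, the radial distortion model, is linear in its coefficients once ideal/distorted point correspondences are known, so (k1, k2) can be recovered by least squares. A sketch using the Brown radial model on synthetic points, not the paper's test-field data:

```python
import numpy as np

def apply_distortion(xy, k1, k2):
    """Brown radial model on normalized image points:
    x_d = x * (1 + k1*r^2 + k2*r^4), same for y."""
    r2 = np.sum(xy**2, axis=1, keepdims=True)
    return xy * (1 + k1 * r2 + k2 * r2**2)

def estimate_radial(xy_ideal, xy_dist):
    """Least-squares estimate of (k1, k2): each coordinate contributes one
    linear equation  x_d - x = x * (k1*r^2 + k2*r^4)."""
    r2 = np.sum(xy_ideal**2, axis=1, keepdims=True)
    A = np.column_stack([(xy_ideal * r2).ravel(), (xy_ideal * r2**2).ravel()])
    b = (xy_dist - xy_ideal).ravel()
    k, *_ = np.linalg.lstsq(A, b, rcond=None)
    return k
```

    Full calibration packages such as those named above additionally estimate focal length, principal point and tangential distortion in a joint bundle adjustment; the radial term shown here usually dominates for consumer lenses.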

  12. Mobile phone camera benchmarking in low light environment

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2015-01-01

    High noise values and poor signal-to-noise ratio are traditionally associated with low light imaging. Still, there are several other camera quality features which may suffer in a low light environment. For example, what happens to color accuracy and resolution, or how does the camera speed behave in low light? Furthermore, how do low light environments affect camera benchmarking, and which metrics are the critical ones? The work contains standards-based image quality measurements, including noise, color, and resolution measurements, in three different light environments: 1000, 100, and 30 lux. Moreover, camera speed measurements are done. Detailed measurement results of each quality and speed category are revealed and compared. Also, a suitable benchmark algorithm is evaluated and a corresponding score is calculated to find an appropriate metric which characterizes the camera performance in different environments. The result of this work introduces detailed image quality and camera speed measurements of mobile phone camera systems in three different light environments. The paper concludes how different light environments influence the metrics and which metrics should be measured in a low light environment. Finally, a benchmarking score is calculated using the measurement data from each environment, and the mobile phone cameras are compared accordingly.

  13. Students' Framing of Laboratory Exercises Using Infrared Cameras

    Science.gov (United States)

    Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.

    2015-01-01

    Thermal science is challenging for students due to its largely imperceptible nature. Handheld infrared cameras offer a pedagogical opportunity for students to see otherwise invisible thermal phenomena. In the present study, a class of upper secondary technology students (N = 30) partook in four IR-camera laboratory activities, designed around the…

  14. Easy-to-use Software Toolkit for IR Cameras

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    CEDIP Infrared Systems, specialists in thermal IR camera systems, have announced a new toolkit for use with their range of cameras that enables simple set-up and control of a wide range of parameters using the National Instruments LabVIEW programming environment.

  15. Augmenting camera images for operators of Unmanned Aerial Vehicles

    NARCIS (Netherlands)

    Veltman, J.A.; Oving, A.B.

    2003-01-01

    The manual control of the camera of an unmanned aerial vehicle (UAV) can be difficult due to several factors such as 1) time delays between steering input and changes of the monitor content, 2) low update rates of the camera images and 3) lack of situation awareness due to the remote position of the

  16. Detection, Deterrence, Docility: Techniques of Control by Surveillance Cameras

    NARCIS (Netherlands)

    Balamir, S.

    2013-01-01

    In spite of the growing omnipresence of surveillance cameras, not much is known by the general public about their background. While many disciplines have scrutinised the techniques and effects of surveillance, the object itself remains somewhat of a mystery. A design typology of surveillance cameras

  17. Imaging Emission Spectra with Handheld and Cellphone Cameras

    Science.gov (United States)

    Sitar, David

    2012-01-01

    As point-and-shoot digital camera technology advances, it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon…

  18. Enhanced Engineering Cameras (EECAMs) for the Mars 2020 Rover

    Science.gov (United States)

    Maki, J. N.; McKinney, C. M.; Sellar, R. G.; Copley-Woods, D. S.; Gruel, D. C.; Nuding, D. L.; Valvo, M.; Goodsall, T.; McGuire, J.; Litwin, T. E.

    2016-10-01

    The Mars 2020 Rover will be equipped with a next-generation engineering camera imaging system that represents an upgrade over the previous Mars rover engineering cameras flown on the Mars Exploration Rover (MER) mission and the Mars Science Laboratory (MSL) rover mission.

  19. Three-Dimensional Particle Image Velocimetry Using a Plenoptic Camera

    NARCIS (Netherlands)

    Lynch, K.P.; Fahringer, T.; Thurow, B.

    2012-01-01

    A novel 3-D, 3-C PIV technique is described, based on volume illumination and a plenoptic camera to measure a velocity field. The technique is based on plenoptic photography, which uses a dense microlens array mounted near a camera sensor to sample the spatial and angular distribution of light enter

  20. Camera Ready: Capturing a Digital History of Chester

    Science.gov (United States)

    Lehman, Kathy

    2008-01-01

    Armed with digital cameras, voice recorders, and movie cameras, students from Thomas Dale High School in Chester, Virginia, have been exploring neighborhoods, interviewing residents, and collecting memories of their hometown. In this article, the author describes "Digital History of Chester", a project for creating a commemorative DVD. This…

  1. Microprocessor-controlled wide-range streak camera

    Science.gov (United States)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available in streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OS X) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple-module access with a standard browser. The entire user interface can be customized.

  2. A Simple Spectrophotometer Using Common Materials and a Digital Camera

    Science.gov (United States)

    Widiatmoko, Eko; Widayani; Budiman, Maman; Abdullah, Mikrajuddin; Khairurrijal

    2011-01-01

    A simple spectrophotometer was designed using cardboard, a DVD, a pocket digital camera, a tripod and a computer. The DVD was used as a diffraction grating and the camera as a light sensor. The spectrophotometer was calibrated using a reference light prior to use. The spectrophotometer was capable of measuring optical wavelengths with a…

  3. Seeing elements by visible-light digital camera.

    Science.gov (United States)

    Zhao, Wenyang; Sakurai, Kenji

    2017-03-31

    A visible-light digital camera is used for taking ordinary photos, but with new operational procedures it can measure photon energies in the X-ray wavelength region and therefore see chemical elements. This report describes how one can observe X-rays by means of such an ordinary camera: the front cover of the camera is replaced by an opaque X-ray window to block visible light and allow X-rays to pass; the camera takes many snapshots (called single-photon-counting mode) to record every photon event individually; and a newly proposed integrated-filtering method correctly retrieves the photon energies from the raw camera images. The retrieved X-ray energy-dispersive spectra show fine energy resolution and great accuracy in energy calibration, so the visible-light digital camera can be applied to routine X-ray fluorescence measurement to analyze the elemental composition of unknown samples. In addition, the visible-light digital camera is promising as a position-sensitive X-ray energy detector. It may become able to measure elemental maps or chemical diffusion in a multi-element system if combined with external X-ray optics. Owing to the camera's low cost and fine pixel size, the present method should find wide application in the analysis of chemical elements as well as imaging.
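
    The single-photon-counting idea, many short dark exposures in which each isolated above-threshold pixel is one photon whose pixel value estimates its energy, can be sketched as below. The threshold, frame sizes and event values are made up; the paper's integrated-filtering step for recovering split charge is omitted:

```python
import numpy as np

def photon_events(frames, threshold):
    """Extract isolated photon events from a stack of dark snapshots:
    pixels above threshold are events, and their values serve as
    (uncalibrated) photon-energy estimates."""
    energies = []
    for frame in frames:
        hits = frame > threshold
        energies.extend(frame[hits].tolist())
    return np.array(energies)

def spectrum(energies, bins=50):
    """Energy-dispersive histogram (counts, bin centers) of the events."""
    counts, edges = np.histogram(energies, bins=bins)
    return counts, 0.5 * (edges[:-1] + edges[1:])
```

    Accumulating the histogram over many frames yields the energy-dispersive spectrum; an energy calibration (ADU per eV) then labels the axis in physical units.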

  4. Analyzing Gait Using a Time-of-Flight Camera

    DEFF Research Database (Denmark)

    Jensen, Rasmus Ramsbøl; Paulsen, Rasmus Reinhold; Larsen, Rasmus

    2009-01-01

    An algorithm is created which performs human gait analysis using spatial data and amplitude images from a Time-of-Flight camera. For each frame in a sequence, the camera supplies Cartesian coordinates in space for every pixel. By using an articulated model, the subject pose is estimated in the depth...

  5. Demonstrations of Optical Spectra with a Video Camera

    Science.gov (United States)

    Kraftmakher, Yaakov

    2012-01-01

    The use of a video camera may markedly improve demonstrations of optical spectra. First, the output electrical signal from the camera, which provides full information about a picture to be transmitted, can be used for observing the radiant power spectrum on the screen of a common oscilloscope. Second, increasing the magnification by the camera…

  6. Camera Layout Design for the Upper Stage Thrust Cone

    Science.gov (United States)

    Wooten, Tevin; Fowler, Bart

    2010-01-01

    Engineers in the Integrated Design and Analysis Division (EV30) use a variety of tools to aid in the design and analysis of the Ares I vehicle. One primary tool in use is Pro-Engineer, computer-aided design (CAD) software that allows designers to create computer-generated structural models of vehicle structures. For the Upper Stage thrust cone, Pro-Engineer was used to assist in the design of a layout for two camera housings. These cameras observe the separation between the first and second stages of the Ares I vehicle. For Ares I-X, one standard-speed camera was used. The Ares I design calls for two separate housings, three cameras, and a lighting system. With previous design concepts and verification strategies in mind, a new layout for the two-camera design concept was developed with members of the EV32 team. With the new design, Pro-Engineer was used to draw the layout and check how the two camera housings fit with the thrust cone assembly. Future analysis of the camera housing design will verify the stability and clearance of the cameras with the other hardware present on the thrust cone.

  7. Holographic motion picture camera with Doppler shift compensation

    Science.gov (United States)

    Kurtz, R. L. (Inventor)

    1976-01-01

    A holographic motion picture camera that produces three-dimensional images by employing an elliptical optical system is reported. A motion compensator in one of the beam paths (the object or reference beam path) enables the camera to photograph faster-moving objects.

  8. 28 CFR 68.42 - In camera and protective orders.

    Science.gov (United States)

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false In camera and protective orders. 68.42 Section 68.42 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) RULES OF PRACTICE AND PROCEDURE... In camera and protective orders. (a) Privileged communications. Upon application of any person,...

  9. 32 CFR 813.4 - Combat camera operations.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Combat camera operations. 813.4 Section 813.4 National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE SALES AND SERVICES VISUAL INFORMATION DOCUMENTATION PROGRAM § 813.4 Combat camera operations. (a) Air Force COMCAM forces document...

  10. 24 CFR 180.640 - In camera and protective orders.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false In camera and protective orders. 180.640 Section 180.640 Housing and Urban Development Regulations Relating to Housing and Urban... at Hearing § 180.640 In camera and protective orders. The ALJ may limit discovery or the...

  11. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2016-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics.
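
The zero-point determination described above reduces to comparing instrumental magnitudes, m_inst = -2.5 log10(flux), against reference-star magnitudes in the camera bandpass. A minimal numpy sketch of that fit; all fluxes and catalog magnitudes below are invented for illustration, not MEO data:

```python
import numpy as np

# Hypothetical data: linearity-corrected instrumental fluxes (ADU) and
# synthetic catalog magnitudes for a few reference stars in the camera band.
flux_adu = np.array([15200.0, 8400.0, 31000.0, 4700.0, 22500.0])
catalog_mag = np.array([7.45, 8.10, 6.68, 8.73, 7.03])

# Instrumental magnitude from the measured flux.
inst_mag = -2.5 * np.log10(flux_adu)

# The zero-point is the mean offset between catalog and instrumental
# magnitudes; the scatter of the residuals estimates the systematic floor.
zp = np.mean(catalog_mag - inst_mag)
zp_scatter = np.std(catalog_mag - inst_mag, ddof=1)

print(f"zero-point = {zp:.3f} mag, scatter = {zp_scatter:.3f} mag")
```

In practice the MEO fit would use many more stars and weight by the per-star magnitude uncertainty; this sketch only shows the arithmetic.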

  12. 29 CFR 18.46 - In camera and protective orders.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 1 2010-07-01 2010-07-01 true In camera and protective orders. 18.46 Section 18.46 Labor Office of the Secretary of Labor RULES OF PRACTICE AND PROCEDURE FOR ADMINISTRATIVE HEARINGS BEFORE THE OFFICE OF ADMINISTRATIVE LAW JUDGES General § 18.46 In camera and protective orders. (a) Privileges....

  13. 49 CFR 511.45 - In camera materials.

    Science.gov (United States)

    2010-10-01

    ... excluded from the public record. Pursuant to 49 CFR part 512, the Chief Counsel of the NHTSA is responsible... 49 Transportation 6 2010-10-01 2010-10-01 false In camera materials. 511.45 Section 511.45... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION ADJUDICATIVE PROCEDURES Hearings § 511.45 In camera materials....

  14. Data filtering with support vector machines in geometric camera calibration.

    Science.gov (United States)

    Ergun, B; Kavzoglu, T; Colkesen, I; Sahin, C

    2010-02-01

    The use of non-metric digital cameras in close-range photogrammetric applications and machine vision has become a popular research agenda. Being an essential component of photogrammetric evaluation, camera calibration is a crucial stage for non-metric cameras. Accurate camera calibration and orientation procedures have therefore become prerequisites for the extraction of precise and reliable 3D metric information from images. The lack of accurate inner orientation parameters can lead to unreliable results in the photogrammetric process. A camera can be well defined with its principal distance, principal point offset and lens distortion parameters. Different camera models have been formulated and used in close-range photogrammetry, but generally sensor orientation and calibration are performed with a perspective geometrical model by means of the bundle adjustment. In this study, support vector machines (SVMs) with a radial basis function kernel are employed to model the distortions measured for an Olympus E10 camera system with an aspherical zoom lens, which are later used in the geometric calibration process. The intention is to introduce an alternative approach for the on-the-job photogrammetric calibration stage. Experimental results for the DSLR camera with three focal length settings (9, 18 and 36 mm) were estimated using bundle adjustment with additional parameters, and analyses were conducted based on object point discrepancies and standard errors. Results show the robustness of the SVM approach for the correction of image coordinates by modelling total distortions in the on-the-job calibration process using a limited number of images.
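
The core idea, learning a smooth map from image coordinates to distortion corrections with an RBF kernel, can be sketched without the paper's exact pipeline. The following uses kernel ridge regression as a simplified stand-in for the SVM (same Gaussian kernel, squared loss instead of the epsilon-insensitive loss), on synthetic radial distortion; every value here is invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: normalized image points and the radial
# distortion residual measured at each (e.g. from bundle adjustment).
pts = rng.uniform(-1.0, 1.0, size=(200, 2))
r2 = np.sum(pts**2, axis=1)
residual = 0.05 * r2 + 0.01 * r2**2        # synthetic k1/k2-style distortion

def rbf_kernel(a, b, gamma=4.0):
    """Gaussian RBF kernel matrix between point sets a and b."""
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-gamma * d2)

# Kernel ridge fit: solve (K + lam*I) alpha = residual.
lam = 1e-6
K = rbf_kernel(pts, pts)
alpha = np.linalg.solve(K + lam * np.eye(len(pts)), residual)

# Predict the distortion correction at a new image point.
query = np.array([[0.5, 0.5]])
pred = rbf_kernel(query, pts) @ alpha
true = 0.05 * 0.5 + 0.01 * 0.25            # r2 = 0.5 at (0.5, 0.5)
print(pred[0], true)
```

The learned correction can then be applied to image coordinates before the bundle adjustment, which is the role the SVM plays in the study.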

  15. Spectral Camera based on Ghost Imaging via Sparsity Constraints

    Science.gov (United States)

    Liu, Zhentao; Tan, Shiyu; Wu, Jianrong; Li, Enrong; Shen, Xia; Han, Shensheng

    2016-05-01

    The image information acquisition ability of a conventional camera is usually much lower than the Shannon Limit, since it does not make use of the correlation between pixels of image data. By applying a random phase modulator to code the spectral images and combining this with compressive sensing (CS) theory, a spectral camera based on true thermal light ghost imaging via sparsity constraints (GISC spectral camera) is proposed and demonstrated experimentally. The GISC spectral camera can acquire information at a rate significantly below the Nyquist rate, and the resolution of the cells in the three-dimensional (3D) spectral image data-cube can be achieved with a two-dimensional (2D) detector in a single exposure. For the first time, the GISC spectral camera opens the way to approaching the Shannon Limit determined by information theory in optical imaging instruments.

  16. Accuracy testing of a new intraoral 3D camera.

    Science.gov (United States)

    Mehl, A; Ender, A; Mörmann, W; Attin, T

    2009-01-01

    Surveying intraoral structures by optical means has reached the stage where it is being discussed as a serious clinical alternative to conventional impression taking. Ease of handling and, more importantly, accuracy are important criteria for the clinical suitability of these systems. This article presents a new intraoral camera for the Cerec procedure. It reports on a study investigating the accuracy of this camera and its potential clinical indications. Single-tooth and quadrant images were taken with the camera and the results compared to those obtained with a reference scanner and with the previous 3D camera model. Differences were analyzed by superimposing the data records. Accuracy was higher with the new camera than with the previous model, reaching up to 19 microm in single-tooth images. Quadrant images can also be taken with sufficient accuracy (ca 35 microm) and are simple to perform in clinical practice, thanks to built-in shake detection in automatic capture mode.

  18. Spectral Camera based on Ghost Imaging via Sparsity Constraints

    CERN Document Server

    Liu, Zhentao; Wu, Jianrong; Li, Enrong; Shen, Xia; Han, Shensheng

    2015-01-01

    The information acquisition ability of a conventional camera is far lower than the Shannon Limit because of the correlation between pixels of image data. By applying a sparse representation of images to reduce the redundancy of image data, combined with compressive sensing theory, a spectral camera based on ghost imaging via sparsity constraints (GISC spectral camera) is proposed and demonstrated experimentally. The GISC spectral camera can acquire information at a rate significantly below Nyquist, and the resolution of the cells in the three-dimensional (3D) spectral image data-cube can be achieved with a two-dimensional (2D) detector in a single exposure. For the first time, the GISC spectral camera opens the way to approaching the Shannon Limit determined by information theory in optical imaging instruments.

  19. Principle of Coordinates Acquisition Based on Single Camera

    Institute of Scientific and Technical Information of China (English)

    HUANG Guiping; YE Shenghua

    2005-01-01

    The principle and accuracy of 3D coordinate acquisition using a single camera and an Aided Measuring Probe (AMP) are discussed in this paper. Using one camera and one AMP, which carries several embedded targets and a tip with known coordinates, the camera's orientation and location can be calculated. After orientation, the global coordinate system is obtained. During measurement, the camera is first fixed, then the AMP is held and the feature point is touched, and finally the camera is triggered. The position and orientation of the AMP are calculated from the size and position of its image on the sensor. Since the tip of the AMP has a known relation to the embedded targets, the feature point can be measured. Tests show that the accuracy of length measurement is 0.2 mm and the accuracy of flatness measurement in the XY-plane is 0.1 mm.
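
The geometric core of this scheme, recovering the probe pose from the images of its embedded targets and then locating the tip, can be sketched with a direct linear transform (DLT) resection. This is a generic pure-numpy illustration, not the authors' algorithm; the target layout, intrinsics, and pose below are all invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical AMP: six non-coplanar embedded targets and the probe tip,
# all expressed in the probe's local frame (millimetres, illustrative).
targets = np.array([[0, 0, 0], [40, 0, 0], [0, 40, 0], [40, 40, 5],
                    [20, 10, 15], [10, 30, 10]], float)
tip = np.array([20.0, 20.0, -60.0])

# Ground-truth probe pose and an ideal pinhole camera (assumed intrinsics).
K = np.array([[1200, 0, 640], [0, 1200, 480], [0, 0, 1]], float)
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(R) < 0:
    R = -R
t = np.array([5.0, -3.0, 500.0])

world = targets @ R.T + t                 # targets in the camera frame
uv = world @ K.T
uv = uv[:, :2] / uv[:, 2:3]               # projected pixel coordinates

# DLT: recover the 3x4 projection matrix from the 6 correspondences.
rows = []
for (X, Y, Z), (u, v) in zip(targets, uv):
    rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
    rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
P = np.linalg.svd(np.asarray(rows))[2][-1].reshape(3, 4)

# Locate the tip by projecting it with the recovered matrix.
tip_proj = P @ np.append(tip, 1.0)
tip_uv = tip_proj[:2] / tip_proj[2]

# Cross-check against direct projection with the true pose.
truth = K @ (R @ tip + t)
print(tip_uv, truth[:2] / truth[2])
```

With exact correspondences the recovered projection matches the true pose to machine precision; the real system additionally inverts this to obtain the tip's 3D position in the global frame.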

  20. Central Acceptance Testing for Camera Technologies for CTA

    CERN Document Server

    Bonardi, A; Chadwick, P; Dazzi, F; Förster, A; Hörandel, J R; Punch, M

    2015-01-01

    The Cherenkov Telescope Array (CTA) is an international initiative to build the next-generation ground-based very-high-energy gamma-ray observatory. It will consist of telescopes of three different sizes, employing several different technologies for the cameras that detect the Cherenkov light from the observed air showers. In order to ensure the compliance of each camera technology with CTA requirements, CTA will perform central acceptance testing of each camera technology. To assist with this, the Camera Test Facilities (CTF) work package is developing a detailed test program covering the most important performance, stability, and durability requirements, including setting up the necessary equipment. Performance testing will include a wide range of tests such as signal amplitude, time resolution, dead-time determination, trigger efficiency, and performance under temperature and humidity variations, among others. These tests can be performed on fully-integrated cameras using a portable setup at the came...

  1. Calibration of line-scan cameras for precision measurement.

    Science.gov (United States)

    Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Niu, Zhiyuan

    2016-09-01

    Calibration of line-scan cameras for precision measurement should offer a large calibration volume and be flexible in the actual measurement field. In this paper, we present a high-precision calibration method. Instead of using a large 3D pattern, we use a small planar pattern and a precalibrated matrix camera to obtain plenty of points with a suitable distribution, which ensures the precision of the calibration results. The matrix camera removes the need for precise adjustment and movement and links the line-scan camera to the world easily, both of which enhance flexibility in the measurement field. The method has been verified by experiments. The experimental results demonstrate that the proposed method gives a practical solution for calibrating line-scan cameras for precision measurement.

  2. A distributed topological camera network representation for tracking applications.

    Science.gov (United States)

    Lobaton, Edgar; Vasudevan, Ramanarayan; Bajcsy, Ruzena; Sastry, Shankar

    2010-10-01

    Sensor networks have been widely used for surveillance, monitoring, and tracking. Camera networks, in particular, provide a large amount of information that has traditionally been processed in a centralized manner employing a priori knowledge of camera location and of the physical layout of the environment. Unfortunately, these conventional requirements are far too demanding for ad-hoc distributed networks. In this article, we present a simplicial representation of a camera network called the camera network complex (CN-complex), which accurately captures topological information about the visual coverage of the network. This representation provides a coordinate-free calibration of the sensor network and demands no localization of the cameras or objects in the environment. A distributed, robust algorithm, validated via two experimental setups, is presented for the construction of the representation using only binary detection information. We demonstrate the utility of this representation in capturing holes in the coverage, performing tracking of agents, and identifying homotopic paths.

  3. Reconstructing spectral reflectance from digital camera through samples selection

    Science.gov (United States)

    Cao, Bin; Liao, Ningfang; Yang, Wenming; Chen, Haobo

    2016-10-01

    Spectral reflectance provides the most fundamental information about objects and is recognized as their "fingerprint", since reflectance is independent of illumination and viewing conditions. However, reconstructing high-dimensional spectral reflectance from relatively low-dimensional camera outputs is an ill-posed problem, and most methods require the camera's spectral responsivity. We propose a method to reconstruct spectral reflectance from digital camera outputs without prior knowledge of the camera's spectral responsivity. The method averages the reflectances of a subset selected from the training samples by prescribing a limit on the tolerable color difference between the training samples and the camera outputs. Different tolerable color differences of training samples were investigated with Munsell chips under a D65 light source. Experimental results show that the proposed method outperforms the classic PI method in terms of multiple evaluation criteria between the actual and the reconstructed reflectances. Besides, the reconstructed spectral reflectances lie between 0 and 1, which gives them actual physical meaning and an advantage over traditional methods.
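
The selection-and-averaging step can be sketched in a few lines: pick every training sample whose camera output lies within a tolerable color difference of the query, then average their reflectances. The sketch below uses random synthetic data and plain Euclidean RGB distance as a stand-in for a CIE color-difference metric; none of the values or the tolerance are from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical training set: 31-band reflectances (400-700 nm at 10 nm)
# standing in for Munsell chips, paired with 3-channel camera responses.
n_train, n_bands = 500, 31
reflectance = rng.uniform(0.0, 1.0, (n_train, n_bands))
M = rng.uniform(0.0, 1.0, (3, n_bands))       # camera responsivity (unknown to the method)
camera_rgb = reflectance @ M.T                # training camera outputs

def reconstruct(query_rgb, tol=0.5):
    """Average reflectances of training samples whose camera output lies
    within the tolerable color difference of the query (Euclidean RGB
    distance here, as a stand-in for a CIE color-difference formula)."""
    d = np.linalg.norm(camera_rgb - query_rgb, axis=1)
    subset = reflectance[d <= tol]
    if subset.size == 0:                      # fall back to the nearest sample
        subset = reflectance[[np.argmin(d)]]
    return subset.mean(axis=0)

query = camera_rgb[0]
est = reconstruct(query)
print("RMS error vs. true reflectance:", np.sqrt(np.mean((est - reflectance[0])**2)))
```

Because the estimate is an average of physical reflectances, it is automatically bounded to [0, 1], which is the property the abstract highlights.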

  4. Design of high speed camera based on CMOS technology

    Science.gov (United States)

    Park, Sei-Hun; An, Jun-Sick; Oh, Tae-Seok; Kim, Il-Hwan

    2007-12-01

    The capability of a high-speed camera to take high-speed images has been evaluated using CMOS image sensors. There are two types of image sensors, namely CCD and CMOS sensors. A CMOS sensor consumes less power than a CCD sensor and can take images more rapidly. High-speed cameras with built-in CMOS sensors are widely used in vehicle crash tests and airbag controls, golf training aids, and bullet direction measurement in the military. The high-speed camera system made in this study has the following components: a CMOS image sensor that can take about 500 frames per second at a resolution of 1280*1024; an FPGA and DDR2 memory that control the image sensor and save images; a Camera Link module that transmits saved data to a PC; and an RS-422 communication function that enables control of the camera from a PC.
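
The need for on-board DDR2 buffering follows from simple arithmetic on the quoted sensor mode. Assuming 8-bit (1-byte) pixels, an assumption, since the abstract does not state the bit depth, the sustained rate is far beyond what a PC link comfortably absorbs in real time:

```python
# Sustained data rate implied by the quoted sensor mode:
# 1280 x 1024 pixels at 500 frames/s, assuming 1 byte per pixel.
width, height, fps, bytes_per_pixel = 1280, 1024, 500, 1
rate = width * height * fps * bytes_per_pixel
print(rate / 1e6, "MB/s")   # 655.36 MB/s
```

Roughly 655 MB/s must therefore be captured to local memory first and streamed out over Camera Link afterwards.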

  5. HiRISE: The People's Camera

    Science.gov (United States)

    McEwen, A. S.; Eliason, E.; Gulick, V. C.; Spinoza, Y.; Beyer, R. A.; HiRISE Team

    2010-12-01

    The High Resolution Imaging Science Experiment (HiRISE) camera, orbiting Mars since 2006 on the Mars Reconnaissance Orbiter (MRO), has returned more than 17,000 large images with scales as small as 25 cm/pixel. From its beginning, the HiRISE team has followed “The People’s Camera” concept, with rapid release of useful images, explanations, and tools, and by facilitating public image suggestions. The camera includes 14 CCDs, each read out into 2 data channels, so compressed images are returned from MRO as 28 long (up to 120,000 line) images that are 1024 pixels wide (or binned 2x2 to 512 pixels, etc.). These raw data are very difficult to use, especially for the public. At the HiRISE operations center the raw data are calibrated and processed into a series of B&W and color products, including browse images, JPEG2000-compressed images, and tools that make it easy for everyone to explore these enormous images (see http://hirise.lpl.arizona.edu/). Automated pipelines do all of this processing, so we can keep up with the high data rate; images go directly to the format of the Planetary Data System (PDS). After students visually check each image product for errors, the images are fully released just 1 month after receipt; captioned images (written by science team members) may be released sooner. These processed HiRISE images have been incorporated into tools such as Google Mars and World Wide Telescope for even greater accessibility. 51 Digital Terrain Models derived from HiRISE stereo pairs have been released, resulting in some spectacular flyover movies produced by members of the public and viewed up to 50,000 times according to YouTube. Public targeting began in 2007 via NASA Quest (http://marsoweb.nas.nasa.gov/HiRISE/quest/) and more than 200 images have been acquired, mostly by students and educators. At the beginning of 2010 we released HiWish (http://www.uahirise.org/hiwish/), opening HiRISE targeting to anyone in the world with Internet access, and already more

  6. Unmanned ground vehicle perception using thermal infrared cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.

    2011-05-01

    The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5μm) or long-wave infrared (LWIR) radiation (7-14μm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24 hour water and 12 hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.

  7. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.

    2011-01-01

    The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5μm) or long-wave infrared (LWIR) radiation (8-12μm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24 hour water and 12 hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.

  8. The Camera Is Not a Methodology: Towards a Framework for Understanding Young Children's Use of Video Cameras

    Science.gov (United States)

    Bird, Jo; Colliver, Yeshe; Edwards, Susan

    2014-01-01

    Participatory research methods argue that young children should be enabled to contribute their perspectives on research seeking to understand their worldviews. Visual research methods, including the use of still and video cameras with young children, have been viewed as particularly suited to this aim because cameras have been considered easy and…

  9. The ITER Radial Neutron Camera Detection System

    Science.gov (United States)

    Marocco, D.; Belli, F.; Bonheure, G.; Esposito, B.; Kaschuck, Y.; Petrizzi, L.; Riva, M.

    2008-03-01

    A multichannel neutron detection system (Radial Neutron Camera, RNC) will be installed on the ITER equatorial port plug 1 for total neutron source strength, neutron emissivity/ion temperature profiles and nt/nd ratio measurements [1]. The system is composed of two fan-shaped collimating structures: an ex-vessel structure, looking at the plasma core, containing three sets of 12 collimators (each set lying on a different toroidal plane), and an in-vessel structure, containing 9 collimators, for plasma edge coverage. The RNC detecting system will work in a harsh environment (neutron flux up to 10^8-10^9 n/cm^2·s, magnetic field >0.5 T for in-vessel detectors), should provide both counting and spectrometric information, and should be flexible enough to cover the high neutron flux dynamic range expected during the different ITER operation phases. ENEA has been involved in several activities related to RNC design and optimization [2,3]. In the present paper the up-to-date design and the neutron emissivity reconstruction capabilities of the RNC are described. Different options for detectors suitable for spectrometry and counting (e.g. scintillators and diamonds) are discussed, focusing on the implications for overall RNC performance. The increase in RNC capabilities offered by new digital data acquisition systems is also addressed.

  10. Driver head pose tracking with thermal camera

    Science.gov (United States)

    Bole, S.; Fournier, C.; Lavergne, C.; Druart, G.; Lépine, T.

    2016-09-01

    Head pose can be seen as a coarse estimation of gaze direction. In the automotive industry, knowledge about gaze direction could optimize the Human-Machine Interface (HMI) and Advanced Driver Assistance Systems (ADAS). Pose estimation systems are often camera-based when applications have to be contactless. In this paper, we explore uncooled thermal imagery (8-14μm) for its intrinsic night vision capabilities and for its invariance to lighting variations. Two methods are implemented and compared, both aided by a 3D model of the head. The 3D model, mapped with thermal texture, allows us to synthesize a base of 2D projected models, differently oriented and labeled in yaw and pitch. The first method is based on keypoints. Keypoints of the models are matched with those of the query image. These sets of matchings, aided by the 3D shape of the model, allow the 3D pose to be estimated. The second method is a global appearance approach. Among all 2D models of the base, the algorithm searches for the one closest to the query image using a weighted least-squares difference.
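
The global appearance approach amounts to nearest-neighbour search over the pose-labeled model base under a weighted least-squares image difference. A toy numpy sketch with random "rendered views" in place of real thermal renderings; the pose grid, image size, and weights are all invented:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical base: 45 rendered model views (9 yaw x 5 pitch), each a
# flattened 64x64 image, with their (yaw, pitch) labels in degrees.
models = rng.uniform(0.0, 1.0, (45, 64 * 64))
labels = [(yaw, pitch) for yaw in range(-40, 50, 10)
                       for pitch in range(-20, 30, 10)]

# Per-pixel weights (uniform here; in practice one might emphasize
# stable facial regions over the background).
weights = np.ones(64 * 64)

# Query image: one of the views plus a little noise.
query = models[17] + 0.01 * rng.standard_normal(64 * 64)

# Weighted least-squares matching: pick the closest model view.
scores = np.sum(weights * (models - query) ** 2, axis=1)
best = int(np.argmin(scores))
print("estimated pose (yaw, pitch):", labels[best])
```

The estimated pose is simply the label of the winning view; a finer pose grid trades memory and search time for angular resolution.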

  11. Depth perception camera for autonomous vehicle applications

    Science.gov (United States)

    Kornreich, Philipp

    2013-05-01

    An imager that can measure the distance from each pixel to the point on the object that is in focus at that pixel is described. Since it provides numeric information on the distance from the camera to all points in its field of view, it is ideally suited for autonomous vehicle navigation and robotic vision, eliminating the LIDAR conventionally used for range measurements. The light arriving at a pixel through a convex lens adds constructively only if it comes from the object point in focus at this pixel; the light from all other object points cancels. Thus, the lens selects the point on the object whose range is to be determined. The range measurement is accomplished by short light guides at each pixel. The light guides contain a p-n junction and a pair of contacts along their length, and they also contain light-sensing elements along their length. The device uses ambient light that is only coherent in spherical-shell-shaped light packets with a thickness of one coherence length. Each of the frequency components of the broadband light arriving at a pixel has a phase proportional to the distance from an object point to its image pixel.

  12. Subaru Prime Focus Camera -- Suprime-Cam --

    CERN Document Server

    Miyazaki, S; Sekiguchi, M; Okamura, S; Doi, M; Furusawa, H; Hamabe, M; Imi, K; Kimura, M; Nakata, F; Okada, N; Ouchi, M; Shimasaku, K; Yagi, M; Yasuda, N; Miyazaki, Satoshi; Komiyama, Yutaka; Sekiguchi, Maki; Okamura, Sadanori; Doi, Mamoru; Furusawa, Hisanori; Hamabe, Masaru; Imi, Katsumi; Kimura, Masahiki; Nakata, Fumiaki; Okada, Norio; Ouchi, Masami; Shimasaku, Kazuhiro; Yagi, Masafumi; Yasuda, Naoki

    2002-01-01

    We have built an 80-megapixel (10240 X 8192) mosaic CCD camera, called Suprime-Cam, for the wide-field prime focus of the 8.2 m Subaru telescope. Suprime-Cam covers a field of view of 34 arcmin X 27 arcmin, a unique facility among the 8-10 m class telescopes, with a resolution of 0.202 arcsec per pixel. The focal plane consists of ten high-resistivity 2kX4k CCDs developed by MIT Lincoln Laboratory, and these are cooled by a large Stirling-cycle cooler. The CCD readout electronics were developed originally by our group (M-Front & Messia-III) and the system is designed to be scalable, allowing multiple readout of tens of CCDs. It takes 50 seconds to read out the entire array. We have designed a jukebox-type filter exchange mechanism that can hold up to ten large filters (205 X 170 X 15 mm^3). The wide-field corrector is basically a three-lens Wynne type but has a new type of atmospheric dispersion corrector. The corrector provides a flat focal plane and an un-vignetted field of view of 30 arcmin in diameter. Ach...

  13. Event Pileup in AXAF's ACIS CCD Camera

    Science.gov (United States)

    McNamara, Brian R.

    1998-01-01

    AXAF's high-resolution mirrors will focus a point source near the optical axis to a spot that is contained within a radius of about two pixels on the ACIS Charge Coupled Device (CCD) camera. Because of the small spot size, the accuracy to which fluxes and spectral energy distributions of bright point sources can be measured will be degraded by event pileup. Event pileup occurs when two or more X-ray photons arrive simultaneously in a single detection cell on a CCD readout frame. When pileup occurs, ACIS's event detection algorithm registers the photons as a single X-ray event. The pulse height channel of the event will correspond to an energy E ≈ E_1 + E_2 + ... + E_n, where n is the number of photons registered per detection cell per readout frame. As a result, pileup artificially hardens the observed spectral energy distribution. I will discuss the effort at the AXAF Science Center to calibrate pileup in ACIS using a focused, nearly monochromatic X-ray source. I will discuss techniques for modeling and correcting pileup effects in polychromatic spectra.
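
The hardening effect is easy to reproduce with a toy Monte Carlo: photons arrive per detection cell per frame as a Poisson process, and any frame with one or more photons registers a single event whose energy is the sum E ≈ E_1 + ... + E_n. The rate and line energy below are arbitrary illustrative values, not ACIS parameters:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy pileup model: Poisson photon arrivals per cell per readout frame,
# monochromatic source; piled-up photons register as one summed event.
n_frames = 200_000
mean_rate = 0.3                          # mean photons per cell per frame
line_energy = 1.5                        # keV, toy line energy

counts = rng.poisson(mean_rate, n_frames)
events = counts[counts > 0]              # frames that register an event
recorded_energy = events * line_energy   # summed (piled-up) event energies

pileup_frac = np.mean(events > 1)
print(f"fraction of events piled up: {pileup_frac:.3f}")
print(f"mean recorded energy: {recorded_energy.mean():.3f} keV "
      f"(true line at {line_energy} keV)")
```

Even at a modest 0.3 photons per frame, roughly one event in seven is piled up and the mean recorded energy sits well above the true line, i.e. the spectrum is artificially hardened.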

  14. STRAY DOG DETECTION IN WIRED CAMERA NETWORK

    Directory of Open Access Journals (Sweden)

    C. Prashanth

    2013-08-01

    Existing surveillance systems impose a high level of security on humans but lack attention to animals. Stray dogs could be used as an alternative to humans to carry explosive material. It is therefore imperative to ensure the detection of stray dogs for necessary corrective action. In this paper, a novel composite approach to detect the presence of stray dogs is proposed. The captured frame from the surveillance camera is initially pre-processed using a Gaussian filter to remove noise. The foreground object of interest is extracted utilizing the ViBe algorithm. The Histogram of Oriented Gradients (HOG) algorithm is used as the shape descriptor, which derives the shape and size information of the extracted foreground object. Finally, stray dogs are classified from humans using a polynomial Support Vector Machine (SVM) of order 3. The proposed composite approach is simulated in MATLAB and OpenCV and further validated with real-time video feeds taken from an existing surveillance system. From the results obtained, it is found that a classification accuracy of about 96% is achieved. This encourages the utilization of the proposed composite algorithm in real-time surveillance systems.
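
The HOG step in this pipeline builds per-cell histograms of gradient orientation weighted by gradient magnitude. The following is a stripped-down numpy sketch of that idea (no block normalization, unlike the full descriptor the paper uses), demonstrated on a synthetic vertical-edge image:

```python
import numpy as np

def hog_descriptor(img, cell=8, nbins=9):
    """Minimal HOG sketch: per-cell histograms of unsigned gradient
    orientation, weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0      # unsigned orientation
    h, w = img.shape
    cells_y, cells_x = h // cell, w // cell
    bin_idx = np.minimum((ang / (180.0 / nbins)).astype(int), nbins - 1)
    hist = np.zeros((cells_y, cells_x, nbins))
    for cy in range(cells_y):
        for cx in range(cells_x):
            sl = np.s_[cy * cell:(cy + 1) * cell, cx * cell:(cx + 1) * cell]
            hist[cy, cx] = np.bincount(bin_idx[sl].ravel(),
                                       weights=mag[sl].ravel(),
                                       minlength=nbins)
    return hist.ravel()

# Toy image: a vertical step edge, so all gradient energy falls in the
# horizontal-gradient (0 degree) orientation bin.
img = np.zeros((32, 32))
img[:, 16:] = 255.0
desc = hog_descriptor(img)
print("descriptor length:", desc.size)    # (32//8)**2 cells * 9 bins = 144
```

The resulting descriptor vector is what would be fed to the polynomial SVM for the dog-versus-human classification.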

  15. Depth Cameras on UAVs: a First Approach

    Science.gov (United States)

    Deris, A.; Trigonis, I.; Aravanis, A.; Stathopoulou, E. K.

    2017-02-01

    Accurate depth information retrieval of a scene is a field under investigation in the research areas of photogrammetry, computer vision and robotics. Various technologies, active, as well as passive, are used to serve this purpose such as laser scanning, photogrammetry and depth sensors, with the latter being a promising innovative approach for fast and accurate 3D object reconstruction using a broad variety of measuring principles including stereo vision, infrared light or laser beams. In this study we investigate the use of the newly designed Stereolab's ZED depth camera based on passive stereo depth calculation, mounted on an Unmanned Aerial Vehicle with an ad-hoc setup, specially designed for outdoor scene applications. Towards this direction, the results of its depth calculations and scene reconstruction generated by Simultaneous Localization and Mapping (SLAM) algorithms are compared and evaluated based on qualitative and quantitative criteria with respect to the ones derived by a typical Structure from Motion (SfM) and Multiple View Stereo (MVS) pipeline for a challenging cultural heritage application.

  16. An autonomous sensor module based on a legacy CCTV camera

    Science.gov (United States)

    Kent, P. J.; Faulkner, D. A. A.; Marshall, G. F.

    2016-10-01

    A UK MoD funded programme into autonomous sensors arrays (SAPIENT) has been developing new, highly capable sensor modules together with a scalable modular architecture for control and communication. As part of this system there is a desire to also utilise existing legacy sensors. The paper reports upon the development of a SAPIENT-compliant sensor module using a legacy Close-Circuit Television (CCTV) pan-tilt-zoom (PTZ) camera. The PTZ camera sensor provides three modes of operation. In the first mode, the camera is automatically slewed to acquire imagery of a specified scene area, e.g. to provide "eyes-on" confirmation for a human operator or for forensic purposes. In the second mode, the camera is directed to monitor an area of interest, with zoom level automatically optimized for human detection at the appropriate range. Open source algorithms (using OpenCV) are used to automatically detect pedestrians; their real world positions are estimated and communicated back to the SAPIENT central fusion system. In the third mode of operation a "follow" mode is implemented where the camera maintains the detected person within the camera field-of-view without requiring an end-user to directly control the camera with a joystick.

  17. The Alfred Nobel rocket camera. An early aerial photography attempt

    Science.gov (United States)

    Ingemar Skoog, A.

    2010-02-01

    Alfred Nobel (1833-1896), mainly known for his invention of dynamite and the creation of the Nobel Prizes, was an engineer and inventor active in many fields of science and engineering, e.g. chemistry, medicine, mechanics, metallurgy, optics, armoury and rocketry. Amongst his inventions in rocketry was the smokeless solid propellant ballistite (i.e. cordite), patented for the first time in 1887. As a very wealthy person he actively supported many Swedish inventors in their work. One of them was W.T. Unge, who was devoted to the development of rockets and their applications. Nobel and Unge held several rocket patents together and also jointly worked on various rocket applications. In mid-1896 Nobel applied for patents in England and France for "An Improved Mode of Obtaining Photographic Maps and Earth or Ground Measurements" using a photographic camera carried by a "…balloon, rocket or missile…". During the remainder of 1896 the mechanical design of the camera mechanism was pursued and cameras were manufactured. In April 1897 (after the death of Alfred Nobel) the first aerial photos were taken by these cameras. These photos might be the first documented aerial photos taken by a rocket-borne camera. Cameras and photos from 1897 have been preserved. Nobel not only developed the rocket-borne camera but also proposed methods for using the photographs for ground measurements and the preparation of maps.

  18. Determining Vision Graphs for Distributed Camera Networks Using Feature Digests

    Directory of Open Access Journals (Sweden)

    Richard J. Radke

    2007-01-01

    Full Text Available We propose a decentralized method for obtaining the vision graph for a distributed, ad-hoc camera network, in which each edge of the graph represents two cameras that image a sufficiently large part of the same environment. Each camera encodes a spatially well-distributed set of distinctive, approximately viewpoint-invariant feature points into a fixed-length “feature digest” that is broadcast throughout the network. Each receiver camera robustly matches its own features with the decompressed digest and decides whether sufficient evidence exists to form a vision graph edge. We also show how a camera calibration algorithm that passes messages only along vision graph edges can recover accurate 3D structure and camera positions in a distributed manner. We analyze the performance of different message formation schemes, and show that high detection rates (>0.8) can be achieved while maintaining low false alarm rates (<0.05) using a simulated 60-node outdoor camera network.
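
    The edge-decision step can be sketched as follows. The digest here is an uncompressed array of descriptors, and the distance-ratio threshold and minimum match count are hypothetical parameters, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)

def match_count(digest, own_features, ratio=0.8):
    """Count matches between a received feature digest and a camera's own
    descriptors using the nearest-neighbour distance-ratio test."""
    n = 0
    for f in own_features:
        d = np.linalg.norm(digest - f, axis=1)
        i1, i2 = np.argsort(d)[:2]
        if d[i1] < ratio * d[i2]:   # unambiguous nearest neighbour
            n += 1
    return n

def has_edge(digest, own_features, min_matches=20):
    """Declare a vision-graph edge when enough matches survive the test."""
    return match_count(digest, own_features) >= min_matches

# Synthetic check: two cameras viewing the same scene hold noisy copies of
# the same descriptors; an unrelated camera does not.
scene = rng.normal(size=(30, 64))                       # shared scene features
digest = scene + rng.normal(scale=0.05, size=(30, 64))  # broadcast by camera A
overlapping = scene + rng.normal(scale=0.05, size=(30, 64))
unrelated = rng.normal(size=(30, 64))
assert has_edge(digest, overlapping)
assert not has_edge(digest, unrelated)
```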

  19. Analysis of Camera Arrays Applicable to the Internet of Things.

    Science.gov (United States)

    Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing

    2016-03-22

    The Internet of Things is built on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on camera modeling and analysis, which is very important for stereo display and comfortable viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are used in various applications and analyzed in research work, there is little discussion comparing them. Therefore, we make a detailed analysis of their performance over different shooting distances. From our analysis, we find that the threshold of shooting distance for converged cameras is 7 m. In addition, we design a camera array that can be used as a parallel camera array as well as a converged one, and take images and videos with it to verify the threshold.
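
    The qualitative difference between the two array types can be reproduced with a toy pinhole model: parallel axes yield purely horizontal parallax, while toeing the cameras in introduces a vertical component for off-centre points. The geometry below is illustrative only and does not reproduce the paper's 7 m threshold analysis:

```python
import numpy as np

def project(point, cam_pos, yaw, f=1.0):
    """Pinhole projection into a camera at `cam_pos`, toed in by `yaw`
    radians about the vertical axis (yaw > 0 turns the axis toward +x)."""
    c, s = np.cos(yaw), np.sin(yaw)
    world_to_cam = np.array([[c, 0.0, -s],
                             [0.0, 1.0, 0.0],
                             [s, 0.0, c]])
    X = world_to_cam @ (np.asarray(point, float) - cam_pos)
    return f * X[0] / X[2], f * X[1] / X[2]

def parallax(point, baseline, converge_dist=None):
    """Horizontal and vertical parallax of a scene point for a stereo pair:
    parallel axes if `converge_dist` is None, otherwise toed in so the
    optical axes cross at that distance."""
    b = baseline / 2.0
    yaw = 0.0 if converge_dist is None else np.arctan2(b, converge_dist)
    xl, yl = project(point, np.array([-b, 0.0, 0.0]), +yaw)
    xr, yr = project(point, np.array([+b, 0.0, 0.0]), -yaw)
    return xl - xr, yl - yr

p = (0.5, 0.3, 5.0)                                          # off-centre point
dh_par, dv_par = parallax(p, baseline=0.2)                   # parallel rig
dh_con, dv_con = parallax(p, baseline=0.2, converge_dist=5)  # converged rig
# Parallel cameras give purely horizontal parallax; convergence introduces
# a small but nonzero vertical parallax for off-centre points.
assert abs(dv_par) < 1e-12 and abs(dv_con) > 1e-5
```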

  20. A Smart Pixel Camera for future Cherenkov Telescopes

    CERN Document Server

    Hermann, German; Carrigan, Svenja; Glück, Bernhard; Hauser, Dominik

    2005-01-01

    The Smart Pixel Camera is a new camera for imaging atmospheric Cherenkov telescopes, suited for a next generation of large multi-telescope ground-based gamma-ray observatories. The design of the camera places all electronics needed to process the images inside the camera body at the focal plane. The camera has a modular design and is scalable in the number of pixels. The camera electronics provides the performance needed for the next generation of instruments, such as short signal integration time, topological trigger and short trigger gate, and at the same time the design is optimized to minimize the cost per channel. In addition, new features are implemented, such as the measurement of the arrival time of light pulses in the pixels on the few-hundred-psec timescale. The buffered readout system of the camera allows images to be taken at sustained rates of O(10 kHz) with a dead-time of only about 0.8% per kHz.

  1. Next-generation digital camera integration and software development issues

    Science.gov (United States)

    Venkataraman, Shyam; Peters, Ken; Hecht, Richard

    1998-04-01

    This paper investigates the complexities associated with the development of next generation digital cameras due to requirements in connectivity and interoperability. Each successive generation of digital camera improves drastically in cost, performance, resolution, image quality and interoperability features. This is being accomplished by advancements in a number of areas: research, silicon, standards, etc. As the capabilities of these cameras increase, so do the requirements for both hardware and software. Today, there are two single-chip camera solutions in the market, including the Motorola MPC 823 and LSI DCAM-101. Real-time constraints for a digital camera may be defined by the maximum time allowable between capture of images. Constraints in the design of an embedded digital camera include processor architecture, memory, processing speed and the real-time operating systems. This paper will present the LSI DCAM-101, a single-chip digital camera solution. It will present an overview of the architecture and the challenges in hardware and software for supporting streaming video in such a complex device. Issues presented include the development of the data-flow software architecture, testing and integration on this complex silicon device. The strategy for optimizing performance on the architecture will also be presented.

  2. Global Calibration of Multiple Cameras Based on Sphere Targets

    Directory of Open Access Journals (Sweden)

    Junhua Sun

    2016-01-01

    Full Text Available Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method is simple to operate and flexible, especially for onsite multiple cameras without a common field of view.

  3. Robust pedestrian detection by combining visible and thermal infrared cameras.

    Science.gov (United States)

    Lee, Ji Hoon; Choi, Jong-Suk; Jeon, Eun Som; Kim, Yeong Gon; Le, Toan Thanh; Shin, Kwang Yong; Lee, Hyeon Chang; Park, Kang Ryoung

    2015-05-05

    With the development of intelligent surveillance systems, the need for accurate detection of pedestrians by cameras has increased. However, most of the previous studies use a single camera system, either a visible light or a thermal camera, and their performances are affected by various factors such as shadow, illumination change, occlusion, and higher background temperatures. To overcome these problems, we propose a new method of detecting pedestrians using a dual camera system that combines visible light and thermal cameras, which is robust in various outdoor environments such as mornings, afternoons, nights and rainy days. Our research is novel, compared to previous works, in the following four ways. First, we implement the dual camera system where the axes of the visible light and thermal cameras are parallel in the horizontal direction. We obtain a geometric transform matrix that represents the relationship between these two camera axes. Second, two background images for the visible light and thermal cameras are adaptively updated based on the pixel difference between an input thermal image and the pre-stored thermal background image. Third, by background subtraction of the thermal image considering the temperature characteristics of the background, and by size filtering with morphological operations, the candidates from the whole image (CWI) in the thermal image are obtained. The positions of CWI (obtained by background subtraction and the procedures of shadow removal, morphological operation, size filtering, and filtering of the ratio of height to width) in the visible light image are projected onto those in the thermal image by using the geometric transform matrix, and the searching regions for pedestrians are defined in the thermal image. Fourth, within these searching regions, the candidates from the searching image region (CSI) of pedestrians in the thermal image are detected. The final areas of pedestrians are located by combining the detected positions of the CWI and CSI of the thermal image.
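
    The projection step, mapping candidate boxes from the visible-light image into thermal-image search regions through the geometric transform matrix, can be sketched as below. A general 3×3 homography is used as the form of the transform, and the matrix values and margin are hypothetical:

```python
import numpy as np

def apply_transform(H, pts):
    """Project pixel coordinates from the visible-light image into the
    thermal image with a 3x3 geometric transform matrix (homography)."""
    pts = np.asarray(pts, float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coords
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # back to pixels

def search_regions(visible_boxes, H, margin=10):
    """Define thermal-image search regions around projected CWI boxes."""
    regions = []
    for (x0, y0, x1, y1) in visible_boxes:
        (u0, v0), (u1, v1) = apply_transform(H, [(x0, y0), (x1, y1)])
        regions.append((u0 - margin, v0 - margin, u1 + margin, v1 + margin))
    return regions

# Hypothetical calibration: pure shift between the two parallel camera axes.
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
regions = search_regions([(10, 10, 20, 30)], H)
assert np.allclose(regions[0], (5.0, -3.0, 35.0, 37.0))
```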

  4. 77 FR 59013 - State Journal Register, Camera and Plate Department, Springfield, IL; Notice of Affirmative...

    Science.gov (United States)

    2012-09-25

    ... Employment and Training Administration State Journal Register, Camera and Plate Department, Springfield, IL... workers of State Journal Register, Camera and Plate Department, Springfield, Illinois. The determination... or proportion of workers at State Journal Register, Camera and Plate Department,...

  5. Performance of Watec 910 HX camera for meteor observing

    Science.gov (United States)

    Ocaña, Francisco; Zamorano, Jaime; Tapia Ayuga, Carlos E.

    2014-01-01

    The new Watec 910 HX model is a 0.5 MPix multipurpose video camera with up to ×256 frame-integration capability. We present a sensitivity and spectral characterization done at the Universidad Complutense de Madrid Instrument Laboratory (LICA). In addition, we have carried out a field test to show the performance of this camera for meteor observing. With respect to the similar model 902 H2 Ultimate, the new camera has additional set-up controls that are important for the scientific use of the recordings. However, the overall performance does not justify the extra cost for most meteor observers.

  6. Automatic Traffic Monitoring from an Airborne Wide Angle Camera System

    OpenAIRE

    Rosenbaum, Dominik; Charmette, Baptiste; Kurz, Franz; Suri, Sahil; Thomas, Ulrike; Reinartz, Peter

    2008-01-01

    We present an automatic traffic monitoring approach using data of an airborne wide angle camera system. This camera, namely the “3K-Camera”, was recently developed at the German Aerospace Center (DLR). It has a coverage of 8 km perpendicular to the flight direction at a flight height of 3000 m with a resolution of 45 cm and is capable of taking images at a frame rate of up to 3 fps. Based on georeferenced images obtained from this camera system, a near real-time processing chain containing roa...

  7. Integrated radar-camera security system: range test

    Science.gov (United States)

    Zyczkowski, M.; Szustakowski, M.; Ciurapinski, W.; Karol, M.; Markowski, P.

    2012-06-01

    The paper presents the test results of a mobile system for the protection of large-area objects, which consists of a radar and thermal and visual cameras. Radar is used for early detection and localization of an intruder and the cameras with narrow field of view are used for identification and tracking of a moving object. The range evaluation of an integrated system is presented as well as the probability of human detection as a function of the distance from radar-camera unit.

  8. Integrated mobile radar-camera system in airport perimeter security

    Science.gov (United States)

    Zyczkowski, M.; Szustakowski, M.; Ciurapinski, W.; Dulski, R.; Kastek, M.; Trzaskawka, P.

    2011-11-01

    The paper presents the test results of a mobile system for the protection of large-area objects, which consists of a radar and thermal and visual cameras. Radar is used for early detection and localization of an intruder and the cameras with narrow field of view are used for identification and tracking of a moving object. The range evaluation of an integrated system is presented as well as the probability of human detection as a function of the distance from the radar-camera unit.

  9. Camera monologue: Cultural critique beyond collaboration, participation, and dialogue

    DEFF Research Database (Denmark)

    Suhr, Christian

    2018-01-01

    Cameras always seem to capture a little too little and a little too much. In ethnographic films, profound insights are often found in the tension between what we are socially taught to perceive, and the peculiar non-social perception of the camera. Ethnographic filmmakers study the worlds of huma...... experiment, I imagine what different cameras might reply to these questions if they could speak. In doing so, I call attention to ethnographic filmmaking as a more-than-human, more-than-collaborative, and more-than-dialogical mode of cultural critique....

  10. CAMERA-BASED SOFTWARE IN REHABILITATION/THERAPY INTERVENTION

    DEFF Research Database (Denmark)

    Brooks, Anthony Lewis

    2014-01-01

    Use of an affordable, easily adaptable, ‘non-specific camera-based software’ that is rarely used in the field of rehabilitation is reported in a study with 91 participants over the duration of six workshop sessions. ‘Non-specific camera-based software’ refers to software that is not dependent...... on specific hardware. Adaptable means that human tracking and created artefact interaction in the camera field of view is relatively easily changed as one desires via a user-friendly GUI. The significance of having both available for contemporary intervention is argued. Conclusions are that the mature, robust...

  11. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellut, Paolo; Sherwin, Gary

    2011-01-01

    TIR cameras can be used for day/night Unmanned Ground Vehicle (UGV) autonomous navigation when stealth is required. The quality of uncooled TIR cameras has significantly improved over the last decade, making them a viable option at low speed. Limiting factors for stereo ranging with uncooled LWIR cameras are image blur and low-texture scenes. TIR perception capabilities JPL has explored include: (1) single- and dual-band TIR terrain classification; (2) obstacle detection (pedestrians, vehicles, tree trunks, ditches, and water); and (3) perception through obscurants.

  12. BUNDLE ADJUSTMENTS CCD CAMERA CALIBRATION BASED ON COLLINEARITY EQUATION

    Institute of Scientific and Technical Information of China (English)

    Liu Changying; Yu Zhijing; Che Rensheng; Ye Dong; Huang Qingcheng; Yang Dingning

    2004-01-01

    The solid-template CCD camera calibration method of bundle adjustments based on the collinearity equation is presented, considering the characteristics of large-dimension on-line space measurement. In the method, a more comprehensive camera model is adopted, based on the pinhole model extended with distortion corrections. In the process of calibration, calibration precision is improved by imaging at different locations in the whole measurement space, multiple imaging at the same location, and bundle adjustment optimization. The calibration experiment proves that the method fulfills the calibration requirements of CCD cameras applied to vision measurement.

  13. Gamma-ray imaging with compton cameras: recent years development

    CERN Document Server

    Hirasawa, M; Shibata, S; Enomoto, S; Yano, Y

    2002-01-01

    Compton cameras can image the distribution of gamma-ray sources with electronic collimation instead of mechanical collimators. A Compton camera consists of at least two position-sensitive detectors. The first detector measures the position and the recoil-electron energy of the Compton scattering process, and the second detector, working in coincidence with the first, measures the position of the scattered gamma ray. This camera was proposed in the 1970s and since then has been improved at a moderate pace until recently. This paper reviews recent developments in Compton camera technology. (author)

  14. All Sky Camera instrument for night sky monitoring

    CERN Document Server

    Mandat, Dusan; Hrabovsky, Miroslav; Schovanek, Petr; Palatka, Miroslav; Travnicek, Petr; Prouza, Michael; Ebr, Jan

    2014-01-01

    The All Sky Camera (ASC) was developed as a universal device for monitoring night-sky quality and measuring the night-sky background. The ASC system consists of an astronomical CCD camera, a fish-eye lens, a control computer and associated electronics. The measurement is carried out during astronomical twilight. The analysis results are the cloud fraction (the percentage of the sky covered by clouds), the night-sky brightness (in mag/arcsec²) and the light background in the field of view of the camera. The analysis of the cloud fraction is based on the astrometry (comparison to catalogue positions) of the observed stars.
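
    The cloud-fraction estimate can be sketched as a catalogue-matching step: stars predicted by astrometry but absent from the image are counted as cloud-covered. The pixel tolerance and coordinates below are hypothetical:

```python
import numpy as np

def cloud_fraction(catalogue_xy, detected_xy, tol=2.0):
    """Fraction of catalogue stars (all expected to be visible during the
    measurement) with no detected star within `tol` pixels; the missing
    stars are attributed to cloud cover."""
    catalogue = np.asarray(catalogue_xy, float)
    detected = np.asarray(detected_xy, float).reshape(-1, 2)
    missed = 0
    for star in catalogue:
        if detected.size == 0 or \
           np.linalg.norm(detected - star, axis=1).min() > tol:
            missed += 1
    return missed / len(catalogue)

catalogue = [(10, 10), (50, 80), (120, 40), (200, 150)]
detected = [(10.4, 9.7), (49.8, 80.3), (119.9, 40.2)]  # one star obscured
assert cloud_fraction(catalogue, detected) == 0.25
```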

  15. Television camera for fast-scan data acquisition

    Science.gov (United States)

    Noel, B. W.; Yates, G. J.

    1982-11-01

    A fast-scan television camera is described that was designed specifically for closed-circuit data-acquisition applications. The camera is capable of field durations as low as 2.8 ms. The line and field rates are quasicontinuously adjustable. The number of lines, the integration duty cycle, and the scan direction are among the other adjustable parameters. Typical resolution at the fastest scan rate is ≳500 TV lines per picture height with a corresponding dynamic range (to light input) of more than 100. The camera uses the unique properties of FPS vidicons and specially designed electronics to achieve its performance level and versatility.

  16. An evolution of image source camera attribution approaches.

    Science.gov (United States)

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing evidence of and distinguishing characteristics for the origin of a digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large set of multimedia data publicly available through photo-sharing and social-network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics, together with a classification of ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts, and the methods to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics
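
    One widely used hardware artifact for source attribution is the sensor's photo-response non-uniformity (PRNU) noise pattern. A toy sketch of fingerprint estimation and matching is shown below; a 3×3 mean filter stands in for the wavelet denoisers used in practice, the additive noise model is a simplification, and the synthetic data are purely illustrative:

```python
import numpy as np

def denoise(img):
    """Crude 3x3 mean filter standing in for a real denoiser."""
    img = img.astype(float)
    out = img.copy()
    acc = np.zeros_like(img[1:-1, 1:-1])
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += img[1 + dy:img.shape[0] - 1 + dy,
                       1 + dx:img.shape[1] - 1 + dx]
    out[1:-1, 1:-1] = acc / 9.0
    return out

def prnu_fingerprint(images):
    """Average the noise residuals of many images from one camera; the
    sensor pattern survives averaging, scene content does not."""
    return np.mean([img - denoise(img) for img in images], axis=0)

def attribution_score(image, fingerprint):
    """Normalised correlation between a test residual and a fingerprint."""
    r = (image - denoise(image)).ravel()
    f = fingerprint.ravel()
    r = r - r.mean(); f = f - f.mean()
    return float(r @ f / (np.linalg.norm(r) * np.linalg.norm(f) + 1e-12))

rng = np.random.default_rng(2)
prnu_a = rng.normal(0, 2, (32, 32))   # sensor pattern of camera A
prnu_b = rng.normal(0, 2, (32, 32))   # sensor pattern of camera B
fp_a = prnu_fingerprint([rng.normal(0, 5, (32, 32)) + prnu_a for _ in range(50)])
fp_b = prnu_fingerprint([rng.normal(0, 5, (32, 32)) + prnu_b for _ in range(50)])
test_img = rng.normal(0, 5, (32, 32)) + prnu_a   # taken with camera A
assert attribution_score(test_img, fp_a) > attribution_score(test_img, fp_b)
```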

  17. Kinect Fusion improvement using depth camera calibration

    Science.gov (United States)

    Pagliari, D.; Menna, F.; Roncella, R.; Remondino, F.; Pinto, L.

    2014-06-01

    Scene 3D modelling, gesture recognition and motion tracking are fields in rapid and continuous development which have caused a growing demand for interactivity in the video-game and e-entertainment market. Starting from the idea of creating a sensor that allows users to play without having to hold any remote controller, the Microsoft Kinect device was created. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for the Kinect in order to use it not only as a game device but as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow using the device as a 3D scanner and producing meshed polygonal models of a static scene just by moving the Kinect around. A drawback of this sensor is the geometric quality of the delivered data and the low repeatability. For this reason the authors carried out some investigations in order to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented and the 3D data are then corrected accordingly. By integrating the depth-correction algorithm and correcting the IR camera interior and exterior orientation parameters, the Fusion libraries are corrected and new reconstruction software is created to produce more accurate models.
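
    The depth-correction step follows a common calibration pattern: fit a mapping from raw sensor depths to reference depths, then apply it pixel-wise. The quadratic bias model below is an illustrative assumption, not the paper's calibration:

```python
import numpy as np

def fit_depth_correction(raw, reference, degree=2):
    """Fit a polynomial mapping raw sensor depths to reference depths
    (e.g. measured on a target of known geometry)."""
    return np.polyfit(raw, reference, degree)

def correct_depth(depth_map, coeffs):
    """Apply the calibration polynomial pixel-wise to a depth map."""
    return np.polyval(coeffs, depth_map)

reference = np.linspace(0.5, 4.0, 40)      # ground-truth distances (m)
raw = reference + 0.01 * reference**2      # sensor with a systematic bias
coeffs = fit_depth_correction(raw, reference)
corrected = correct_depth(raw, coeffs)
assert np.max(np.abs(raw - reference)) > 0.1        # bias before correction
assert np.max(np.abs(corrected - reference)) < 0.01 # bias after correction
```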

  18. THE FLY’S EYE CAMERA SYSTEM

    Directory of Open Access Journals (Sweden)

    László Mészáros

    2014-01-01

    Full Text Available We introduce the Fly's Eye Camera System, an all-sky monitoring device intended for time-domain astronomy. This camera-system design will provide data sets complementary to other synoptic surveys such as LSST or Pan-STARRS. The effective field of view is obtained with 19 cameras arranged in a spherical mosaic. These cameras are supported by a hexapod mount that is fully capable of sidereal tracking for consecutive exposures. This platform has many advantages. First, it requires only one moving component and includes no unique parts; the design therefore not only eliminates the problems caused by unique elements, but the redundancy of the hexapod also allows trouble-free operation even if one or two of the legs are stuck. Another advantage is that it can calibrate itself using the observed stars, independently of both its geographic location and the polar alignment of the mount. All mechanical and electronic elements were designed at our institute, Konkoly Observatory. Currently, our instrument is in the testing phase with an operational hexapod and a reduced number of cameras.

  19. LSST camera readout chip ASPIC: test tools

    Science.gov (United States)

    Antilogus, P.; Bailly, Ph; Jeglot, J.; Juramy, C.; Lebbolo, H.; Martin, D.; Moniez, M.; Tocut, V.; Wicek, F.

    2012-02-01

    The LSST camera will have more than 3000 video-processing channels. The readout of this large focal plane requires a very compact readout chain. The Correlated Double Sampling technique, which is generally used for the signal readout of CCDs, is also adopted for this application and implemented with the so-called "dual slope integrator" method. We have designed and implemented an ASIC for LSST: the Analog Signal Processing asIC (ASPIC). The goal is to amplify the signal close to the output, in order to maximize the signal-to-noise ratio, and to send differential outputs to the digitization. Other requirements are that each chip should process the output of half a CCD, that is 8 channels, and should operate at 173 K. A specific back-end board has been designed especially for lab test purposes. It manages the clock signals, digitizes the differential analog outputs of the ASPIC and stores the data into a memory. It contains 8 ADCs (18 bits), 512 kwords of memory and a USB interface. An FPGA manages all signals from/to all components on board and generates the timing sequence for the ASPIC. Its firmware is written in the Verilog and VHDL languages. Internal registers define the various test parameters of the ASPIC. A LabVIEW GUI allows loading or updating these registers and checking proper operation. Several series of tests, including linearity, noise and crosstalk, have been performed over the past year to characterize the ASPIC at room and cold temperatures. At present, the ASPIC, back-end board and CCD detectors are being integrated to perform a characterization of the whole readout chain.
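
    The Correlated Double Sampling principle, subtracting the sampled reset (pedestal) level from the sampled signal level so that the reset noise common to both cancels, can be sketched numerically. The noise figures are illustrative, not ASPIC specifications:

```python
import numpy as np

rng = np.random.default_rng(3)

def cds_sample(signal, reset_sigma=5.0, read_sigma=1.0, n=10_000):
    """Correlated Double Sampling: each pixel read subtracts the reset
    level from the signal level; the reset noise is identical in both
    samples and cancels, leaving only the (smaller) read noise."""
    reset_noise = rng.normal(0, reset_sigma, n)            # common to both
    pedestal = reset_noise + rng.normal(0, read_sigma, n)  # first sample
    level = signal + reset_noise + rng.normal(0, read_sigma, n)  # second
    return level - pedestal

out = cds_sample(100.0)
assert abs(out.mean() - 100.0) < 0.5  # unbiased signal estimate
assert out.std() < 5.0                # reset noise (sigma=5) cancelled;
                                      # residual scatter ~ sqrt(2)*read noise
```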

  20. NEOCam: The Near-Earth Object Camera

    Science.gov (United States)

    Mainzer, Amy K.; NEOCam Science Team

    2016-10-01

    The Near-Earth Object Camera (NEOCam) is a Discovery mission in Phase A study designed to carry out a large-scale survey of the inner solar system's minor planets. Its primary science objectives are to understand the origins of the solar system's small bodies and the processes that evolved them into their present state. The mission will also characterize the impact hazard from near-Earth objects as well as rare populations such as Earth Trojans and interior-to-Earth objects. In the process, NEOCam can identify targets for future robotic or human exploration. Using a 50 cm telescope operating in two infrared wavelengths (4-5.2 and 6-10 um), the mission is expected to detect and characterize close to 100,000 NEOs and thousands of comets. By achieving high survey completeness in the main belt down to kilometer-scale objects, NEOCam-derived size and albedo distributions can be directly compared to those of the NEOs. The hypotheses that small, dark NEOs and comets are preferentially disrupted at low perihelia can be tested by searching for correlations between size, orbital elements, and albedos. NEOCam's Sun-Earth L1 Lagrange point halo orbit enables a large instantaneous field of regard with a view of low solar elongations, high data rates, and a cold thermal environment. Like its predecessor, WISE/NEOWISE, candidate minor planet detections will be rapidly disseminated to the community via the Minor Planet Center. NEOCam images, source databases, and tables of derived physical properties will be delivered to the community via NASA's Infrared Science Archive and PDS.