WorldWideScience

Sample records for abrams 360-degree camera

  1. Virtual displays for 360-degree video

    Science.gov (United States)

    Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.

    2012-03-01

    In this paper we describe a novel approach for comparing users' spatial cognition across different depictions of 360-degree video on a traditional 2D display. By placing virtual cameras within a game engine and texture-mapping their feeds onto an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment using these interfaces is described. This technique for creating alternative displays of wide-angle video facilitates exploration of how compressed or fish-eye distortions affect spatial perception of the environment, and can benefit the design of interfaces for surveillance and remote-system teleoperation.
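    The core of such an interface is resampling a 360-degree frame into narrower perspective views. As a rough illustration (not the authors' game-engine implementation; the function name and frame sizes are hypothetical), an equirectangular panorama can be split into four 90-degree pinhole views:

```python
import numpy as np

def perspective_view(equirect, yaw_deg, fov_deg=90.0, out_size=256):
    """Sample a pinhole-camera view from an equirectangular 360-degree frame.

    equirect: H x W (or H x W x C) array covering 360 x 180 degrees.
    yaw_deg:  viewing direction around the vertical axis.
    """
    H, W = equirect.shape[:2]
    # Focal length in pixels for the requested field of view.
    f = (out_size / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
    u, v = np.meshgrid(np.arange(out_size) - out_size / 2.0 + 0.5,
                       np.arange(out_size) - out_size / 2.0 + 0.5)
    # Ray direction for each output pixel (x right, y up, z forward),
    # rotated about the vertical axis by the yaw angle.
    x, y, z = u, -v, np.full_like(u, f)
    yaw = np.radians(yaw_deg)
    xr = x * np.cos(yaw) + z * np.sin(yaw)
    zr = -x * np.sin(yaw) + z * np.cos(yaw)
    lon = np.arctan2(xr, zr)               # longitude in [-pi, pi]
    lat = np.arctan2(y, np.hypot(xr, zr))  # latitude in [-pi/2, pi/2]
    # Map spherical coordinates to equirectangular pixel indices.
    col = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    row = ((0.5 - lat / np.pi) * H).astype(int).clip(0, H - 1)
    return equirect[row, col]

# Four 90-degree views tile the full horizon, as in the paper's interface.
pano = np.random.rand(256, 512)
views = [perspective_view(pano, yaw) for yaw in (0, 90, 180, 270)]
```

    Texture-mapping these four views side by side onto a flat quad would reproduce the "four 90-degree views" condition; the 180-degree and 360-degree conditions only change the field of view and view count.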

  2. The 360 Degree Fulldome Production "Clockwork Ocean"

    Science.gov (United States)

    Baschek, B.; Heinsohn, R.; Opitz, D.; Fischer, T.; Baschek, T.

    2016-02-01

    The investigation of submesoscale eddies and fronts is one of the leading oceanographic topics at the Ocean Sciences Meeting 2016. In order to observe these small and short-lived phenomena, planes equipped with high-resolution cameras and fast vessels were deployed during the Submesoscale Experiments (SubEx), yielding some of the first high-resolution observations of these eddies. In a future experiment, a zeppelin will be used for the first time in marine sciences. The relevance of submesoscale processes for the oceans and the work of the eddy hunters are described in the fascinating 9-minute 360-degree fulldome production Clockwork Ocean. The fully animated movie is introduced in this presentation, taking the observer from bioluminescence in the deep ocean to a view of our blue planet from space. The immersive medium combines fascination for a yet unknown environment with scientific education of a broad audience. Detailed background information is available at the parallax website www.clockwork-ocean.com. The film is also available for virtual-reality glasses and smartphones to reach a broader audience. A unique mobile dome with an area of 70 m² and seats for 40 people is used for science education at events and festivals and for politicians and school classes. Spectators are also invited to participate in the experiments through 360-degree footage of the measurements. Clockwork Ocean premiered in July 2015 in Hamburg, Germany, and has been available worldwide in English and German since fall 2015. Clockwork Ocean is a film of the Helmholtz-Zentrum Geesthacht, produced by Daniel Opitz and Ralph Heinsohn.

  3. 360-degree feedback for medical trainees

    DEFF Research Database (Denmark)

    Holm, Ellen; Holm, Kirsten; Sørensen, Jette Led

    2015-01-01

    In 360-degree feedback, medical colleagues and collaborators give a trainee feedback by answering a questionnaire on the trainee's behaviour. The questionnaire may contain questions answered on a scale and/or open questions. The result of 360-degree feedback is used for formative...

  4. WebVR meets WebRTC: Towards 360-degree social VR experiences

    NARCIS (Netherlands)

    Gunkel, S.; Prins, M.J.; Stokking, H.M.; Niamut, O.A.

    2017-01-01

    Virtual Reality (VR) and 360-degree video are reshaping the media landscape, creating a fertile business environment. During 2016 new 360-degree cameras and VR headsets entered the consumer market, distribution platforms are being established and new production studios are emerging. VR is evermore

  5. Developing 360 degree feedback system for KINS

    International Nuclear Information System (INIS)

    Han, In Soo; Cheon, B. M.; Kim, T. H.; Ryu, J. H.

    2003-12-01

    This project aims to investigate the feasibility of a 360-degree feedback system for KINS and to design guiding rules and structures for implementing that system. A literature survey, environmental analysis and questionnaire survey were carried out to determine whether 360-degree feedback is the right tool to improve performance in KINS. That review leads to the conclusion that more readiness and a careful feasibility review are needed before implementation of 360-degree feedback in KINS. Further, the project suggests some guiding rules that can be helpful for successful implementation of that system in KINS. These include: start with development, experiment with one department, tie it to a clear organizational goal, train everyone involved, and make sure to try the system in an atmosphere of trust.

  6. Developing 360 degree feedback system for KINS

    Energy Technology Data Exchange (ETDEWEB)

    Han, In Soo; Cheon, B. M.; Kim, T. H.; Ryu, J. H. [Chungman National Univ., Daejeon (Korea, Republic of)

    2003-12-15

    This project aims to investigate the feasibility of a 360-degree feedback system for KINS and to design guiding rules and structures for implementing that system. A literature survey, environmental analysis and questionnaire survey were carried out to determine whether 360-degree feedback is the right tool to improve performance in KINS. That review leads to the conclusion that more readiness and a careful feasibility review are needed before implementation of 360-degree feedback in KINS. Further, the project suggests some guiding rules that can be helpful for successful implementation of that system in KINS. These include: start with development, experiment with one department, tie it to a clear organizational goal, train everyone involved, and make sure to try the system in an atmosphere of trust.

  7. 360-degree feedback for medical trainees

    DEFF Research Database (Denmark)

    Holm, Ellen; Holm, Kirsten; Sørensen, Jette Led

    2015-01-01

    In 360-degree feedback, medical colleagues and collaborators give a trainee feedback by answering a questionnaire on the trainee's behaviour. The questionnaire may contain questions answered on a scale and/or open questions. The result of 360-degree feedback is used for formative feedback and assessment. In order to secure reliability, 8-15 respondents are needed. It is a matter of discussion whether the respondents should be chosen by the trainee or by a third party, and whether respondents should be anonymous. The process includes a feedback session with a trained supervisor.

  8. Managing "Academic Value": The 360-Degree Perspective

    Science.gov (United States)

    Wilson, Margaret R.; Corr, Philip J.

    2018-01-01

    The "raison d'etre" of all universities is to create and deliver "academic value", which we define as the sum total of the contributions from the 360-degree "angles" of the academic community, including all categories of staff, as well as external stakeholders (e.g. regulatory, commercial, professional and community…

  9. 360-Degree Iris Burns Following Conductive Keratoplasty.

    Science.gov (United States)

    Çakir, Hanefi; Genç, Selim; Güler, Emre

    2016-11-01

    The authors report a case with multiple iris burns after conductive keratoplasty to correct hyperopia. Case report. A 52-year-old woman with hyperopia had a previous conductive keratoplasty procedure and underwent a conductive keratoplasty re-treatment 6 months later. Postoperatively, she presented with 360-degree iris burns in both eyes that were correlated with the corneal conductive keratoplasty scars. In addition, specular microscopy revealed decreased endothelial cell density for both eyes. This is the first reported case of iris burns associated with conductive keratoplasty. [J Refract Surg. 2016;32(11):776-778.]. Copyright 2016, SLACK Incorporated.

  10. Developing Your 360-Degree Leadership Potential.

    Science.gov (United States)

    Verma, Nupur; Mohammed, Tan-Lucien; Bhargava, Puneet

    2017-09-01

    Radiologists serve in leadership roles throughout their career, making leadership education an integral part of their development. A maxim of leadership style is summarized by 360-Degree Leadership, which highlights the ability of a leader to lead from any position within the organization while relying on core characteristics to build confidence from within their team. The qualities of leadership discussed can be learned and applied by radiologists at any level. These traits can form a foundation for the leader when faced with unfavorable events, which themselves allow the leader an opportunity to build trust. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  11. Twin Peaks in 360-degree panorama

    Science.gov (United States)

    1997-01-01

    The prominent hills dubbed 'Twin Peaks' approximately 1-2 kilometers away were imaged by the Imager for Mars Pathfinder (IMP) as part of a 360-degree color panorama, taken over sols 8, 9 and 10. A lander petal and deflated airbag are at the bottom of the image. Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

  12. Antecedents and Consequences of Reactions to Developmental 360[degrees] Feedback

    Science.gov (United States)

    Atwater, Leanne E.; Brett, Joan F.

    2005-01-01

    This study investigated the factors that influence leaders' reactions to 360[degrees] feedback and the relationship of feedback reactions to subsequent development activities and changes in leader behavior. For leaders with low ratings, those who agreed with others about their ratings were less motivated than those who received low ratings and…

  13. Peripheral 360 degrees retinectomy in complex retinal detachment.

    Science.gov (United States)

    Banaee, Touka; Hosseini, Seyyedeh Maryam; Eslampoor, Alireza; Abrishami, Majid; Moosavi, Mirnaghi

    2009-06-01

    To report the functional and anatomical results and complications of 360 degrees peripheral retinectomy for management of complicated retinal detachment. Patients with complicated retinal detachment underwent pars plana vitrectomy, 360 degrees retinectomy, intraoperative endolaser, and internal tamponade with silicone oil. Postoperative visual acuity, intraocular pressure, retinal status, need for reoperation, and complications are presented. Twenty eyes of 19 patients with a mean age of 32.4 years (8-75 years) underwent pars plana vitrectomy and 360 degrees peripheral retinectomy for complicated retinal detachment due to anterior proliferative vitreoretinopathy, unstable edge of retinal break, anterior hyaloidal fibrovascular proliferation, retinal incarceration in scleral wound, and 300 degrees giant retinal tear. Intraoperative reattachment was achieved in 18 eyes. Mean postoperative follow-up time was 24.2 months (2-70 months). Retina was attached in 14 eyes (70%) in the last visit. Eight eyes (40%) had 5/200 or greater visual acuity. Preoperative and postoperative visual acuities did not have significant correlation (Spearman correlation coefficient = 0.291). There was no relation between diagnosis and anatomical outcome (P > 0.2). Relaxing peripheral 360 degrees retinectomy is an effective procedure for flattening the retina in complicated retinal detachments when no other option is available.

  14. 360-degree videos: a new visualization technique for astrophysical simulations

    Science.gov (United States)

    Russell, Christopher M. P.

    2017-11-01

    360-degree videos are a new type of movie that renders over all 4π steradians. Video sharing sites such as YouTube now allow this unique content to be shared via virtual reality (VR) goggles, hand-held smartphones/tablets, and computers. Creating 360° videos from astrophysical simulations is not only a new way to view these simulations, as you are immersed in them, but also a way to create engaging content for outreach to the public. We present what we believe is the first 360° video of an astrophysical simulation: a hydrodynamics calculation of the central parsec of the Galactic centre. We also describe how to create such movies, and briefly comment on what new science can be extracted from astrophysical simulations using 360° videos.
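    The claim that a 360-degree frame covers the full sphere (4π steradians) can be checked numerically. In the common equirectangular layout, each pixel row must be weighted by the cosine of its latitude; the weights then sum to the sphere's total solid angle. A minimal sketch (the frame size here is arbitrary, not taken from the paper):

```python
import numpy as np

# Each row of an equirectangular frame covers less solid angle near the poles;
# the per-pixel weight is cos(latitude). Summing the weighted pixel areas over
# the whole frame should recover the full sphere's 4*pi steradians.
H, W = 512, 1024
lat = (0.5 - (np.arange(H) + 0.5) / H) * np.pi           # pixel-centre latitudes
pixel_solid_angle = np.cos(lat) * (np.pi / H) * (2 * np.pi / W)
total = pixel_solid_angle.sum() * W                       # all W columns alike
print(total, 4 * np.pi)                                   # both ~12.566
```

    The same cosine weighting matters when computing averages (e.g. mean column density) over a rendered 360° frame, since equirectangular pixels oversample the poles.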

  15. Joffe, Prof. Abram Fiodorovich

    Indian Academy of Sciences (India)

    Fellow Profile. Elected: 1959 Honorary. Joffe, Prof. Abram Fiodorovich. Date of birth: 29 October 1880. Date of death: 14 October 1960.

  16. Evaluation of Curriculum and Student Learning Needs Using 360 Degree Assessment

    Science.gov (United States)

    Ladyshewsky, Richard; Taplin, Ross

    2015-01-01

    This research used a 360-degree assessment tool modelled on the competing values framework to assess the curriculum. A total of 100 Master of Business Administration students and 746 of their work colleagues completed the 360-degree assessment tool. The students were enrolled in a course on leadership and management. The results of the…

  17. The Surface Warfare Community's 360-Degree Feedback Pilot Program: A Preliminary Analysis and Evaluation Plan

    National Research Council Canada - National Science Library

    Williams, James M

    2005-01-01

    The system known as 360-degree feedback, also called multi-source or multi-rater feedback, is a development program that provides a recipient with feedback from supervisors, peers, and subordinates...

  18. 360-degree videos: a new visualization technique for astrophysical simulations, applied to the Galactic Center

    Science.gov (United States)

    Russell, Christopher

    2018-01-01

    360-degree videos are a new type of movie that renders over all 4π steradians. Video sharing sites such as YouTube now allow this unique content to be shared via virtual reality (VR) goggles, hand-held smartphones/tablets, and computers. Creating 360-degree videos from astrophysical simulations not only provides a new way to view these simulations due to their immersive nature, but also yields engaging content for outreach to the public. We present our 360-degree video of an astrophysical simulation of the Galactic center: a hydrodynamics calculation of the colliding and accreting winds of the 30 Wolf-Rayet stars orbiting within the central parsec. Viewing the movie, which renders column density, from the location of the supermassive black hole gives a unique and immersive perspective of the shocked wind material inspiraling and tidally stretching as it plummets toward the black hole. We also describe how to create such movies, discuss what type of content does and does not look appealing in the 360-degree format, and briefly comment on what new science can be extracted from astrophysical simulations using 360-degree videos.

  19. Use of 360-degree assessment of residents in internal medicine in a Danish setting

    DEFF Research Database (Denmark)

    Allerup, Peter

    2007-01-01

    The aim of the study was to explore the feasibility of 360 degree assessment in early specialist training in a Danish setting. Present Danish postgraduate training requires assessment of specific learning objectives. Residency in Internal Medicine was chosen for the study. It has 65 learning...

  20. 360 degree viewable floating autostereoscopic display using integral photography and multiple semitransparent mirrors.

    Science.gov (United States)

    Zhao, Dong; Su, Baiquan; Chen, Guowen; Liao, Hongen

    2015-04-20

    In this paper, we present a polyhedron-shaped floating autostereoscopic display viewable from 360 degrees using integral photography (IP) and multiple semitransparent mirrors. IP combined with polyhedron-shaped multiple semitransparent mirrors is used to achieve a 360-degree-viewable floating three-dimensional (3D) autostereoscopic display, which has the advantage of being viewable by several observers from various viewpoints simultaneously. IP is adopted to generate a 3D autostereoscopic image with full parallax. Multiple semitransparent mirrors reflect the corresponding IP images, and the reflected IP images are situated around the center of the polyhedron-shaped display device to produce the floating display. The spatially reflected IP images reconstruct a floating autostereoscopic image viewable from 360 degrees. We manufactured two prototypes and performed two sets of experiments to evaluate the feasibility of the method described above. The results of our experiments showed that our approach can achieve a floating autostereoscopic display viewable from the surrounding area. Moreover, the proposed method is shown to be feasible for providing a continuous viewpoint around the whole 360-degree display without flipping.

  1. The value of subjectivity: problems and prospects for 360-degree appraisal systems

    NARCIS (Netherlands)

    van der Heijden, Beatrice; Nijhof, A.H.J.

    2004-01-01

    This article focuses on the problems and prospects of 360-degree feedback methods. The rationale behind these appraisal systems is that different evaluation perspectives add objectivity and incremental validity to the assessment of individual performance. This assumption is challenged in this

  2. 360 Degrees Project: Final Report of 1972-73. National Career Education Television Project.

    Science.gov (United States)

    Wisconsin Univ., Madison. Univ. Extension.

    Project 360 Degrees was a mass-media, multi-State, one-year effort in adult career education initiated by WHA-TV, the public television station of the University of Wisconsin-Extension, and funded by the U.S. Office of Education. The overall goal of the project was to provide, through a coordinated media system, information and motivation that…

  3. Doctors' perceptions of why 360-degree feedback does (not) work: a qualitative study

    NARCIS (Netherlands)

    Overeem, Karlijn; Wollersheim, Hub; Driessen, Erik; Lombarts, Kiki; van de Ven, Geertje; Grol, Richard; Arah, Onyebuchi

    2009-01-01

    OBJECTIVES: Delivery of 360-degree feedback is widely used in revalidation programmes. However, little has been done to systematically identify the variables that influence whether or not performance improvement is actually achieved after such assessments. This study aims to explore which factors

  4. Perceptions of Women and Men Leaders Following 360-Degree Feedback Evaluations

    Science.gov (United States)

    Pfaff, Lawrence A.; Boatwright, Karyn J.; Potthoff, Andrea L.; Finan, Caitlin; Ulrey, Leigh Ann; Huber, Daniel M.

    2013-01-01

    In this study, researchers used a customized 360-degree method to examine the frequency with which 1,546 men and 721 women leaders perceived themselves and were perceived by colleagues as using 10 relational and 10 task-oriented leadership behaviors, as addressed in the Management-Leadership Practices Inventory (MLPI). As hypothesized, men and…

  5. A Flexible 360-Degree Thermal Sound Source Based on Laser Induced Graphene

    Directory of Open Access Journals (Sweden)

    Lu-Qi Tao

    2016-06-01

    Full Text Available A flexible sound source is essential in a fully flexible system, yet it is hard to integrate a conventional sound source based on a piezoelectric part into such a system. Moreover, the sound pressure from the back side of a sound source is usually weaker than that from the front side. With the help of direct laser writing (DLW) technology, the fabrication of a flexible 360-degree thermal sound source becomes possible. A 650-nm low-power laser was used to reduce the graphene oxide (GO). The stripped laser-induced graphene thermal sound source was then attached to the surface of a cylindrical bottle so that it could emit sound in all directions (360 degrees). The sound pressure level and directivity of the sound source were tested, and the results were in good agreement with the theoretical results. Because of its 360-degree sound field, high flexibility, high efficiency, low cost, and good reliability, the 360-degree thermal acoustic sound source will find wide application in consumer electronics, multimedia systems, and ultrasonic detection and imaging.

  6. Doctors' perceptions of why 360-degree feedback does (not) work: a qualitative study.

    Science.gov (United States)

    Overeem, Karlijn; Wollersheim, Hub; Driessen, Erik; Lombarts, Kiki; van de Ven, Geertje; Grol, Richard; Arah, Onyebuchi

    2009-09-01

    Delivery of 360-degree feedback is widely used in revalidation programmes. However, little has been done to systematically identify the variables that influence whether or not performance improvement is actually achieved after such assessments. This study aims to explore which factors represent incentives, or disincentives, for consultants to implement suggestions for improvement from 360-degree feedback. In 2007, 109 consultants in the Netherlands were assessed using 360-degree feedback and portfolio learning. We carried out a qualitative study using semi-structured interviews with 23 of these consultants, purposively sampled based on gender, hospital, work experience, specialty and views expressed in a previous questionnaire. A grounded theory approach was used to analyse the transcribed tape-recordings. We identified four groups of factors that can influence consultants' practice improvement after 360-degree feedback: (i) contextual factors related to workload, lack of openness and social support, lack of commitment from hospital management, free-market principles and public distrust; (ii) factors related to feedback; (iii) characteristics of the assessment system, such as facilitators and a portfolio to encourage reflection, concrete improvement goals and annual follow-up interviews, and (iv) individual factors, such as self-efficacy and motivation. It appears that 360-degree feedback can be a positive force for practice improvement provided certain conditions are met, such as that skilled facilitators are available to encourage reflection, concrete goals are set and follow-up interviews are carried out. This study underscores the fact that hospitals and consultant groups should be aware of the existing lack of openness and absence of constructive feedback. Consultants indicated that sharing personal reflections with colleagues could improve the quality of collegial relationships and heighten the chance of real performance improvement.

  7. Nurses leadership evaluation by nursing aides and technicians according to the 360-degree feedback method

    Directory of Open Access Journals (Sweden)

    Eliana Ofélia Llapa-Rodriguez

    Full Text Available Objective: to assess the leadership of nurses of a maternity hospital according to the nursing aides and technicians and the 360-degree feedback method. Method: a descriptive, cross-sectional study. The population was 19 nurses and 96 nursing aides and assistants. Data were collected from May 2010 to July 2011 using a questionnaire based on the 360-degree method. Results: the nurses mentioned having a favourable performance in the four studied categories. The nursing aides and technicians disagreed with the recorded leadership performance of the nurses in the category "Communication" and "Support environment." The responses for the category "Role Model" were favourable in all items, especially PAPEL18. In "Management Style", the highest favourable rating was 79% for GESTAO16. Conclusion: the categories "Communication" and "Support Environment" revealed a greater fragility of the nurses in comparison to the categories "Role Model" and "Management Style".

  8. [Nurses leadership evaluation by nursing aides and technicians according to the 360-degree feedback method].

    Science.gov (United States)

    Llapa-Rodriguez, Eliana Ofélia; de Oliveira, Júlian Katrin Albuquerque; Lopes Neto, David; de Aguiar Campos, Maria Pontes

    2015-12-01

    to assess the leadership of nurses of a maternity hospital according to the nursing aides and technicians and the 360-degree feedback method. a descriptive, cross-sectional study. The population was 19 nurses and 96 nursing aides and assistants. Data were collected from May 2010 to July 2011 using a questionnaire based on the 360-degree method. the nurses mentioned having a favourable performance in the four studied categories. The nursing aides and technicians disagreed with the recorded leadership performance of the nurses in the category "Communication" and "Support environment." The responses for the category "Role Model" were favourable in all items, especially PAPEL18. In "Management Style", the highest favourable rating was 79% for GESTAO16. the categories "Communication" and "Support Environment" revealed a greater fragility of the nurses in comparison to the categories "Role Model" and "Management Style".

  9. Arthroscopic 360-Degree Capsular Release for Idiopathic Adhesive Capsulitis in the Lateral Decubitus Position

    OpenAIRE

    Romeo, Anthony A.; Cvetanovich, Gregory L.; Leroux, Timothy Sean; Bernardoni, Eamon; Saltzman, Bryan M.; Verma, Nikhil N.

    2017-01-01

    Objectives: Idiopathic glenohumeral adhesive capsulitis impairs patient motion and function. If conservative management fails, arthroscopic capsular release is classically performed in the beach-chair position, combined with manipulation under anesthesia. We report outcomes following arthroscopic 360-degree capsular release in the lateral decubitus position followed by limited manipulation to confirm restoration of range of motion. Methods: A retrospective case series of patients unde...

  10. Leadership development in a professional medical society using 360-degree survey feedback to assess emotional intelligence.

    Science.gov (United States)

    Gregory, Paul J; Robbins, Benjamin; Schwaitzberg, Steven D; Harmon, Larry

    2017-09-01

    The current research evaluated the potential utility of a 360-degree survey feedback program for measuring leadership quality in potential committee leaders of a professional medical association (PMA). Emotional intelligence as measured by the extent to which self-other agreement existed in the 360-degree survey ratings was explored as a key predictor of leadership quality in the potential leaders. A non-experimental correlational survey design was implemented to assess the variation in leadership quality scores across the sample of potential leaders. A total of 63 of 86 (76%) of those invited to participate did so. All potential leaders received feedback from PMA Leadership, PMA Colleagues, and PMA Staff and were asked to complete self-ratings regarding their behavior. Analyses of variance revealed a consistent pattern of results as Under-Estimators and Accurate Estimators-Favorable were rated significantly higher than Over-Estimators in several leadership behaviors. Emotional intelligence as conceptualized in this study was positively related to overall performance ratings of potential leaders. The ever-increasing roles and potential responsibilities for PMAs suggest that these organizations should consider multisource performance reviews as these potential future PMA executives rise through their organizations to assume leadership positions with profound potential impact on healthcare. The current findings support the notion that potential leaders who demonstrated a humble pattern or an accurate pattern of self-rating scored significantly higher in their leadership, teamwork, and interpersonal/communication skills than those with an aggrandizing self-rating.

  11. 360-degrees profilometry using strip-light projection coupled to Fourier phase-demodulation.

    Science.gov (United States)

    Servin, Manuel; Padilla, Moises; Garnica, Guillermo

    2016-01-11

    360 degrees (360°) digitalization of three-dimensional (3D) solids using a projected light-strip is a well-established technique in academic and commercial profilometers. These profilometers project a light-strip over the solid being digitized while the solid is rotated a full revolution, or 360 degrees. A computer program then typically extracts the centroid of this light-strip, and by triangulation one obtains the shape of the solid. Here, instead of using intensity-based light-strip centroid estimation, we propose to use Fourier phase-demodulation for 360° solid digitalization. The advantage of Fourier demodulation over strip-centroid estimation is that the accuracy of phase-demodulation increases linearly with the fringe density, while the light-strip centroid-estimation errors are independent of it. We propose first to construct a carrier-frequency fringe-pattern by closely stacking the individual light-strip images recorded while the solid is rotated. Next, this high-density fringe-pattern is phase-demodulated using the standard Fourier technique. To test the feasibility of this Fourier-demodulation approach, we have digitized two solids of increasing topographic complexity: a Rubik's cube and a plastic model of a human skull. According to our results, phase demodulation based on the Fourier technique is less noisy than triangulation based on light-strip centroid estimation. Moreover, Fourier demodulation also provides the amplitude of the analytic signal, which is valuable information for the visualization of surface details.
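    The demodulation step the abstract describes can be sketched in one dimension: band-pass the FFT around the carrier lobe, inverse-transform to obtain the analytic signal, then take its angle and remove the carrier phase. This is a generic illustration of the standard Fourier (Takeda-style) method, not the authors' code; the synthetic profile and all sizes below are stand-ins for the solid's height map:

```python
import numpy as np

# Synthesize a carrier-frequency fringe signal phase-modulated by a "height"
# profile, then recover the phase with the standard Fourier method.
N = 1024
x = np.arange(N)
f0 = 64 / N                                  # carrier frequency, cycles/pixel
height = 2.0 * np.sin(2 * np.pi * x / N)     # phase to recover (radians)
fringes = 1.0 + np.cos(2 * np.pi * f0 * x + height)

F = np.fft.fft(fringes)
mask = np.zeros(N)
mask[64 - 32 : 64 + 32] = 1.0                # keep only the +f0 side lobe
analytic = np.fft.ifft(F * mask)             # complex analytic signal
phase = np.unwrap(np.angle(analytic)) - 2 * np.pi * f0 * x
phase -= phase.mean()                        # drop the constant offset
# 'phase' now approximates 'height'; np.abs(analytic) is the fringe
# amplitude the abstract mentions as useful for visualizing surface detail.
```

    In the profilometer, each column of the stacked strip images plays the role of `fringes`, and the demodulated phase maps to surface height via the triangulation geometry.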

  12. Usefulness of 360 degree evaluation in evaluating nursing students in Iran

    Directory of Open Access Journals (Sweden)

    Tabandeh Sadeghi

    2016-06-01

    Full Text Available Purpose: This study aimed to evaluate clinical nursing students using 360-degree evaluation. Methods: In this descriptive cross-sectional study, conducted between September 2014 and February 2015, 28 students were selected by census from those in the last semester of the Nursing BSc program at Rafsanjan University of Medical Sciences. Data collection tools included a demographic questionnaire and a student evaluation questionnaire assessing "professional behavior" and "clinical skills" in the pediatric ward. Every student was evaluated from the points of view of the clinical instructor, the students themselves, peers, clinical nurses, and children's mothers. Data analysis was done with descriptive and analytic statistical tests, including the Pearson correlation coefficient, using SPSS version 18.0. Results: The mean evaluation scores were as follows: students, 89.74±6.17; peers, 94.12±6.87; children's mothers, 92.87±6.21; clinical instructor, 84.01±8.81; and nurses, 94.87±6.35. The results showed a significant correlation between the evaluation scores of peers, the clinical instructor and self-evaluation (Pearson coefficient, p<0.001), but the correlation between the nurses' evaluation score and that of the clinical instructor was not significant (Pearson coefficient, p=0.052). Conclusion: 360-degree evaluation can provide additional useful information on student performance and evaluation from different perspectives of care. The use of this method is recommended for clinical evaluation of nursing students.

  13. 360-Degree Feedback Implementation Plan: Dean Position, Graduate School of Business and Public Policy, Naval Postgraduate School

    National Research Council Canada - National Science Library

    Morrison, Devin

    2002-01-01

    360-degree feedback is a personal development and appraisal tool designed to quantify the competencies and skills of fellow employees by tapping the collective experience of their superiors, subordinates, and peers...

  14. Evaluating the effectiveness of a 360-degree performance appraisal and feedback in a selected steel organisation / Koetlisi Eugene Lithakong

    OpenAIRE

    Lithakong, Koetlisi Eugene

    2014-01-01

    Most companies are competing in the diverse global markets, and competitive advantage through human capital is becoming very important. Employee development for high productivity and the use of effective tools to measure their performance are therefore paramount. One such tool is the 360-degree performance appraisal system. The study on the effectiveness of the 360-degree performance appraisal was conducted on a selected steel organisation. The primary objective of the research...

  15. The usefulness of 360 degree feedback in developing a personal work style

    Directory of Open Access Journals (Sweden)

    Chicu Nicoleta

    2017-07-01

    Full Text Available The present study focuses on a new approach in the process of developing personal work styles, based on the usefulness of 360 degree feedback, taking into consideration the following dimensions: work-life balance, gender-age, self-development and the behavior a person has, following the process of self-development and defining work style. Using different approaches, the study attempts to identify if there are some differences between the evaluations received from the family and the ones from the work environment. All these factors aim at improving personal, but also organizational performances. Based on the current body of the literature, a discussion is made and conclusions are presented.

  16. Do 360-degree feedback survey results relate to patient satisfaction measures?

    Science.gov (United States)

    Hageman, Michiel G J S; Ring, David C; Gregory, Paul J; Rubash, Harry E; Harmon, Larry

    2015-05-01

    There is evidence that feedback from 360-degree surveys, combined with coaching, can improve physician team performance and quality of patient care. The Physicians Universal Leadership-Teamwork Skills Education (PULSE) 360 is one such survey tool that is used to assess work colleagues' and coworkers' perceptions of a physician's leadership, teamwork, and clinical practice style. The Clinician & Group-Consumer Assessment of Healthcare Providers and Systems (CG-CAHPS), developed by the US Department of Health and Human Services to serve as the benchmark for quality health care, is a survey tool for patients to provide feedback that is based on their recent experiences with staff and clinicians and soon will be tied to Medicare-based compensation of participating physicians. Prior research has indicated that patients and coworkers often agree in their assessment of physicians' behavioral patterns. The goal of the current study was to determine whether 360-degree (also called multisource) feedback provided by coworkers could predict patient satisfaction/experience ratings. A significant relationship between these two forms of feedback could enable physicians to take a more proactive approach to reinforce their strengths and identify any improvement opportunities in their patient interactions by reviewing feedback from team members. An automated 360-degree software process may be a faster, simpler, and less resource-intensive approach than telephoning and interviewing patients for survey responses, and it potentially could facilitate a more rapid credentialing or quality improvement process, leading to greater fiscal and professional development gains for physicians. Our primary research question was to determine whether PULSE 360 coworkers' ratings correlate with CG-CAHPS patients' ratings of overall satisfaction, recommendation of the physician, surgeon respect, and clarity of the surgeon's explanation. Our secondary research questions were to determine whether CG-CAHPS scores

  17. ESO unveils an amazing, interactive, 360-degree panoramic view of the entire night sky

    Science.gov (United States)

    2009-09-01

    The first of three images of ESO's GigaGalaxy Zoom project - a new magnificent 800-million-pixel panorama of the entire sky as seen from ESO's observing sites in Chile - has just been released online. The project allows stargazers to explore and experience the Universe as it is seen with the unaided eye from the darkest and best viewing locations in the world. This 360-degree panoramic image, covering the entire celestial sphere, reveals the cosmic landscape that surrounds our tiny blue planet. This gorgeous starscape serves as the first of three extremely high-resolution images featured in the GigaGalaxy Zoom project, launched by ESO within the framework of the International Year of Astronomy 2009 (IYA2009). GigaGalaxy Zoom features a web tool that allows users to take a breathtaking dive into our Milky Way. With this tool users can learn more about many different and exciting objects in the image, such as multicoloured nebulae and exploding stars, just by clicking on them. In this way, the project seeks to link the sky we can all see with the deep, "hidden" cosmos that astronomers study on a daily basis. The wonderful quality of the images is a testament to the splendour of the night sky at ESO's sites in Chile, which are the most productive astronomical observatories in the world. The plane of our Milky Way Galaxy, which we see edge-on from our perspective on Earth, cuts a luminous swath across the image. The projection used in GigaGalaxy Zoom places the viewer in front of our Galaxy with the Galactic Plane running horizontally through the image - almost as if we were looking at the Milky Way from the outside. From this vantage point, the general components of our spiral galaxy come clearly into view, including its disc, marbled with both dark and glowing nebulae, which harbours bright, young stars, as well as the Galaxy's central bulge and its satellite galaxies. The painstaking production of this image came about as a collaboration between ESO, the renowned

  18. Good to great: using 360-degree feedback to improve physician emotional intelligence.

    Science.gov (United States)

    Hammerly, Milton E; Harmon, Larry; Schwaitzberg, Steven D

    2014-01-01

    The past decade has seen intense interest and dramatic change in how hospitals and physician organizations review physician behaviors. The characteristics of successful physicians extend past their technical and cognitive skills. Two of the six core clinical competencies (professionalism and interpersonal/communication skills) endorsed by the Accreditation Council for Graduate Medical Education, the American Board of Medical Specialties, and The Joint Commission require physicians to succeed in measures associated with emotional intelligence (EI). Using 360-degree anonymous feedback surveys to screen for improvement opportunities in these two core competencies enables organizations to selectively offer education to further develop physician EI. Incorporating routine use of these tools and interventions into ongoing professional practice evaluation and focused professional practice evaluation processes may be a cost-effective strategy for preventing disruptive behaviors and increasing the likelihood of success when transitioning to an employed practice model. On the basis of a literature review, we determined that physician EI plays a key role in leadership; teamwork; and clinical, financial, and organizational outcomes. This finding has significant implications for healthcare executives seeking to enhance physician alignment and transition to a team-based delivery model.

  19. Use of 360-degree assessment of residents in internal medicine in a Danish setting: a feasibility study

    DEFF Research Database (Denmark)

    Allerup, P; Aspegren, K; Ejlersen, E

    2007-01-01

    BACKGROUND: The aim of the study was to explore the feasibility of 360 degree assessment in early specialist training in a Danish setting. Present Danish postgraduate training requires assessment of specific learning objectives. Residency in Internal Medicine was chosen for the study. It has 65...

  20. 360-degree video and X-ray modeling of the Galactic center's inner parsec

    Science.gov (United States)

    Russell, Christopher Michael Post; Wang, Daniel; Cuadra, Jorge

    2017-08-01

    360-degree videos, which render an image over all 4π steradians, provide a unique and immersive way to visualize astrophysical simulations. Video sharing sites such as YouTube allow these videos to be shared with the masses; they can be viewed in their 360° nature on computer screens, with smartphones, or, best of all, in virtual-reality (VR) goggles. We present the first such 360° video of an astrophysical simulation: a hydrodynamics calculation of the Wolf-Rayet stars and their ejected winds in the inner parsec of the Galactic center. Viewed from the perspective of the super-massive black hole (SMBH), the most striking aspect of the video, which renders column density, is the inspiraling and stretching of clumps of WR-wind material as they make their way towards the SMBH. We will briefly describe how to make 360° videos and how to publish them online in their desired 360° format. Additionally, we discuss computing the thermal X-ray emission from a suite of Galactic-center hydrodynamic simulations that have various SMBH feedback mechanisms, which are compared to Chandra X-ray Visionary Program observations of the region. Over a 2-5” ring centered on Sgr A*, the spectral shape is well matched, indicating that the WR winds are the dominant source of the thermal X-ray emission. Furthermore, the X-ray flux depends on the SMBH feedback due to the feedback's ability to clear out material from the central parsec. A moderate outburst is necessary to explain the current thermal X-ray flux, even though the outburst ended ˜100 yr ago.
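Rendering "over all 4π steradians" in practice means writing every view direction into an equirectangular frame, the flat layout that players such as YouTube expect for 360-degree video. A minimal sketch of that direction-to-pixel mapping (a generic recipe assuming a z-up camera frame, not the authors' actual pipeline):

```python
import math

def dir_to_equirect(x, y, z, width, height):
    """Map a view direction (x, y, z) to pixel coordinates (u, v) in an
    equirectangular frame covering 360 x 180 degrees."""
    r = math.sqrt(x * x + y * y + z * z)
    lon = math.atan2(y, x)        # azimuth, -pi..pi
    lat = math.asin(z / r)        # elevation, -pi/2..pi/2
    u = (lon + math.pi) / (2 * math.pi) * width   # left edge = looking back
    v = (math.pi / 2 - lat) / math.pi * height    # top edge = straight up
    return u, v
```

Iterating this over every output pixel (inverted: pixel to direction, then sampling the scene along that ray) produces one equirectangular video frame.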

  1. The 360-degree evaluation model: A method for assessing competency in graduate nursing students. A pilot research study.

    Science.gov (United States)

    Cormack, Carrie L; Jensen, Elizabeth; Durham, Catherine O; Smith, Gigi; Dumas, Bonnie

    2018-05-01

    The 360 Degree Evaluation Model is one means to provide a comprehensive view of clinical competency and readiness for progression in an online nursing program. This pilot project aimed to evaluate the effectiveness of implementing a 360 Degree Evaluation of the clinical competency of graduate advanced practice nursing students. The 360 Degree Evaluation, adapted from corporate industry, encompasses assessment of student knowledge, skills, behaviors, and attitudes, and validates the student's progression from novice to competent. The participants were a cohort of graduate advanced practice nursing students (N = 54) in four progressive clinical semesters. Descriptive statistics and Jonckheere's trend test were used to evaluate OSCE scores from a graded rubric, standardized patient survey scores, student reflections, and preceptor evaluations. All students passed the four OSCEs on a first or second attempt. Scaffolding OSCEs over time allowed faculty to identify cohort weaknesses and create subsequent learning opportunities. Standardized patients' evaluation of the students' performance in the domains of knowledge, skills, and attitudes showed high scores of 96% in all OSCEs. Students' self-reflection comments were a mix of strengths and weaknesses in their self-evaluation, demonstrating themes as students progressed. Preceptor evaluation scores revealed the largest increase in knowledge and learning skills (NONPF domain 1), from an aggregate average of 90% in the first clinical course to an average of 95%. The 360 Degree Evaluation Model provided a comprehensive evaluation of the student and critical information for the faculty, ensuring both individual and cohort data and the ability to analyze cohort themes. Copyright © 2018 Elsevier Ltd. All rights reserved.
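Jonckheere's trend test, used above to check for ordered improvement across the clinical semesters, builds its statistic from counts of concordant pairs between ordered groups. A minimal sketch of the statistic itself (the normal-approximation significance step is omitted, and the scores are illustrative, not the study's):

```python
from itertools import combinations

def jonckheere_terpstra(groups):
    """J-T statistic: over every ordered pair of groups, count the pairs
    (x from the earlier group, y from the later group) with x < y;
    ties contribute 0.5."""
    jt = 0.0
    for a, b in combinations(range(len(groups)), 2):
        for x in groups[a]:
            for y in groups[b]:
                if x < y:
                    jt += 1.0
                elif x == y:
                    jt += 0.5
    return jt

# Hypothetical OSCE scores over three ordered semesters:
semesters = [[78, 81, 85], [84, 88, 90], [91, 93, 95]]
jt = jonckheere_terpstra(semesters)  # values near the maximum suggest an increasing trend
```

The statistic's maximum is the total number of cross-group pairs, reached when every later-semester score exceeds every earlier one.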

  2. SOLUTION OF THE STABILITY PROBLEM FOR 360 DEGREE SELF-ACTING, GAS-LUBRICATED BEARINGS OF INFINITE LENGTH

    Science.gov (United States)

    The stability of self-acting, gas-lubricated bearings was investigated. Two approaches to the solution are presented and their results are compared. Also, the relation between the present work and other, more simplified, methods available in the literature is discussed. The particular case of a 360 degree journal bearing of infinite length is treated, and the changes necessary to use the same theories with other geometries are pointed out. Available experimental results are collected and compared with theory.

  3. Nucleation and pinning at 360-degree domain walls in SmCo5 and related alloys

    Energy Technology Data Exchange (ETDEWEB)

    Gaunt, P.; Mylvaganam, C.K.

    1977-06-01

    It is shown that in a high forward field neighboring 180-degree ferromagnetic domain walls come together and either annihilate one another ("unwinding walls") or combine to form 360-degree walls separating domains magnetized in the same direction ("winding walls"). If the 360-degree wall encounters an inhomogeneity of lower zero-field wall energy, it may be pinned. A finite reverse field is then required to split the 360-degree wall nucleus into two transient 180-degree walls which will reverse the magnetization. The model is developed micromagnetically and applied to the pinning of domain walls at grain-boundary inhomogeneities in SmCo5 alloys. The nucleation-unpinning coercive field is calculated for inhomogeneities which are assumed to have the magnetic properties of pure cobalt. Inhomogeneity widths from 55.6 to 204 A give coercive forces from zero to 3.8 x 10^4 Oe. For the critical constants chosen, the critical thickness is 55.6 A. It is suggested that one function of liquid-phase sintering may be to increase the inhomogeneity thickness beyond this critical value.

  4. A wideband photonic microwave phase shifter with 360-degree phase tunable range based on a DP-QPSK modulator

    Science.gov (United States)

    Chen, Yang

    2018-03-01

    A novel wideband photonic microwave phase shifter with a 360-degree phase tunable range is proposed based on a single dual-polarization quadrature phase-shift keying (DP-QPSK) modulator. The two dual-parallel Mach-Zehnder modulators (DP-MZMs) in the DP-QPSK modulator are properly biased to serve as a carrier-suppressed single-sideband (CS-SSB) modulator and an optical phase shifter (OPS), respectively. The microwave signal is applied to the CS-SSB modulator, while a control direct-current (DC) voltage is applied to the OPS. The first-order optical sideband generated from the CS-SSB modulator and the phase-tunable optical carrier from the OPS are combined and then detected in a photodetector, where a microwave signal is generated with its phase controlled by the DC voltage applied to the OPS. The proposed technique is theoretically analyzed and experimentally demonstrated. Microwave signals with a carrier frequency from 10 to 23 GHz are continuously phase shifted over a 360-degree range. The proposed technique features a very compact configuration, easy phase tuning, and a wide operational bandwidth.
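The key step described above, beating a phase-shifted optical carrier against a fixed first-order sideband on a square-law photodetector, transfers the carrier's optical phase shift directly onto the recovered microwave signal. A numerical sketch of that detection step (idealized unit-amplitude fields, not the authors' experimental parameters):

```python
import cmath
import math

def beat_phase(carrier_phase, samples_per_period=16, periods=64):
    """Recover the phase of the beat note between a phase-shifted carrier
    and a unit first-order sideband offset by the microwave frequency f_m.

    Works in a frame rotating at the optical carrier, so the sideband
    appears as exp(j*theta) with theta = 2*pi*f_m*t, and the carrier as
    exp(j*carrier_phase)."""
    n = samples_per_period * periods
    acc = 0j
    for k in range(n):
        theta = 2 * math.pi * k / samples_per_period
        field = cmath.exp(1j * carrier_phase) + cmath.exp(1j * theta)
        photocurrent = abs(field) ** 2                  # square-law detection
        acc += photocurrent * cmath.exp(-1j * theta)    # correlate at f_m
    return -cmath.phase(acc)  # phase of the recovered microwave signal
```

Shifting the optical carrier by any angle shifts the microwave beat note by the same angle, which is why tuning the OPS bias voltage alone covers the full 360-degree range.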

  5. 360-Degree Visual Detection and Target Tracking on an Autonomous Surface Vehicle

    Science.gov (United States)

    Wolf, Michael T; Assad, Christopher; Kuwata, Yoshiaki; Howard, Andrew; Aghazarian, Hrand; Zhu, David; Lu, Thomas; Trebi-Ollennu, Ashitey; Huntsberger, Terry

    2010-01-01

    This paper describes perception and planning systems of an autonomous sea surface vehicle (ASV) whose goal is to detect and track other vessels at medium to long ranges and execute responses to determine whether the vessel is adversarial. The Jet Propulsion Laboratory (JPL) has developed a tightly integrated system called CARACaS (Control Architecture for Robotic Agent Command and Sensing) that blends the sensing, planning, and behavior autonomy necessary for such missions. Two patrol scenarios are addressed here: one in which the ASV patrols a large harbor region and checks for vessels near a fixed asset on each pass and one in which the ASV circles a fixed asset and intercepts approaching vessels. This paper focuses on the ASV's central perception and situation awareness system, dubbed Surface Autonomous Visual Analysis and Tracking (SAVAnT), which receives images from an omnidirectional camera head, identifies objects of interest in these images, and probabilistically tracks the objects' presence over time, even as they may exist outside of the vehicle's sensor range. The integrated CARACaS/SAVAnT system has been implemented on U.S. Navy experimental ASVs and tested in on-water field demonstrations.
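The abstract does not detail how SAVAnT maintains tracks beyond sensor range, but tracking "the objects' presence over time" is commonly done with a per-track existence probability that is decayed at each step and Bayes-updated on detections. A hypothetical sketch of one such update (the parameter values and function are invented for illustration, not SAVAnT's actual algorithm):

```python
def update_existence(p, detected, p_detect=0.9, p_false=0.05, survival=0.98):
    """One step of a hypothetical track-existence filter: decay the
    existence probability p by a survival factor, then Bayes-update it
    on whether the tracker saw the object this frame."""
    p = survival * p  # unconfirmed tracks fade slowly over time
    if detected:
        num = p_detect * p
        den = num + p_false * (1 - p)
    else:
        num = (1 - p_detect) * p
        den = num + (1 - p_false) * (1 - p)
    return num / den
```

A vessel that leaves the camera's range simply stops generating detections, so its existence probability decays smoothly rather than the track being dropped outright.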

  6. 360 DEGREE PHOTOGRAPHY TO DOCUMENT, TRAIN AND ORIENT PERSONNEL FOR DECONTAMINATION AND DECOMMISSIONING

    International Nuclear Information System (INIS)

    LEBARON, G.J.

    2001-01-01

    360-degree photo technology is being used to document conditions, especially hazardous conditions, at U.S. Department of Energy (DOE) facilities that are being closed. Traditional efforts to document the condition of rooms and cells, especially those difficult to enter due to the hazards present, using engineering drawings, documents, "traditional flat" photographs, or videos don't provide perspective; they miss items or pan quickly across areas of interest with little opportunity to study details. It therefore becomes necessary to make multiple entries into these hazardous areas, so work activities take longer and exposure and the risk of accidents increase. High-resolution digital cameras, in conjunction with software techniques, make possible 360-degree photos that allow a person to look all around, up and down, and zoom in or out. The software provides the opportunity to attach other information to a 360-degree photo, such as sound files providing audio information; flat photos providing additional detail or information about what is behind a panel or around a corner; and text, which can be used to show radiological conditions or identify other hazards present but not readily visible. The software also allows other 360-degree photos to be attached to create a virtual tour where the user can move from area to area or room to room. The user is able to stop, study, and zoom in on areas of interest. A virtual tour of a building or room can be used for facility documentation, work planning and orientation, and training. Documentation is developed during facility closure so people involved in follow-on activities can gain a perspective of the area, focus on points of interest, and discuss what they would do or how they would respond to and manage conditions. Decontamination and Decommissioning (D and D) planners and workers can use the tour to plan work and decide ahead of time, while looking at the areas of interest, what tasks will be performed and how
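The annotation-and-linking workflow described above, with each 360-degree photo carrying audio, flat photos, and text plus hotspots to other panoramas, amounts to a small annotated graph. A hypothetical sketch of such a data model (class and field names are invented for illustration, not the software described):

```python
class Panorama:
    """A single 360-degree photo with attached annotations and links."""
    def __init__(self, name, image):
        self.name = name
        self.image = image
        self.annotations = []   # (kind, payload): "audio", "photo", or "text"
        self.links = {}         # hotspot label -> neighboring Panorama

    def annotate(self, kind, payload):
        self.annotations.append((kind, payload))

    def link(self, label, other):
        self.links[label] = other  # one-way hotspot into another room

def reachable(start):
    """All panoramas visitable from a starting room: the 'virtual tour'."""
    seen, stack = set(), [start]
    while stack:
        p = stack.pop()
        if p.name in seen:
            continue
        seen.add(p.name)
        stack.extend(p.links.values())
    return seen

# A tiny two-room tour with a radiological text annotation:
entry = Panorama("cell-1", "cell1.jpg")
corridor = Panorama("corridor", "corridor.jpg")
entry.annotate("text", "elevated dose rate behind east panel")
entry.link("north door", corridor)
```

Walking the `links` graph is what lets D and D planners move room to room without re-entering the facility.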

  7. Can 360-Degree Reviews Help Surgeons? Evaluation of Multisource Feedback for Surgeons in a Multi-Institutional Quality Improvement Project.

    Science.gov (United States)

    Nurudeen, Suliat M; Kwakye, Gifty; Berry, William R; Chaikof, Elliot L; Lillemoe, Keith D; Millham, Frederick; Rubin, Marc; Schwaitzberg, Steven; Shamberger, Robert C; Zinner, Michael J; Sato, Luke; Lipsitz, Stuart; Gawande, Atul A; Haynes, Alex B

    2015-10-01

    Medical organizations have increased interest in identifying and improving behaviors that threaten team performance and patient safety. Three hundred and sixty degree evaluations of surgeons were performed at 8 academically affiliated hospitals with a common Code of Excellence. We evaluate participant perceptions and make recommendations for future use. Three hundred and eighty-five surgeons in a variety of specialties underwent 360-degree evaluations, with a median of 29 reviewers each (interquartile range 23 to 36). Beginning 6 months after evaluation, surgeons, department heads, and reviewers completed follow-up surveys evaluating accuracy of feedback, willingness to participate in repeat evaluations, and behavior change. Survey response rate was 31% for surgeons (118 of 385), 59% for department heads (10 of 17), and 36% for reviewers (1,042 of 2,928). Eighty-seven percent of surgeons (95% CI, 75%-94%) agreed that reviewers provided accurate feedback. Similarly, 80% of department heads believed the feedback accurately reflected performance of surgeons within their department. Sixty percent of surgeon respondents (95% CI, 49%-75%) reported making changes to their practice based on feedback received. Seventy percent of reviewers (95% CI, 69%-74%) believed the evaluation process was valuable, with 82% (95% CI, 79%-84%) willing to participate in future 360-degree reviews. Thirty-two percent of reviewers (95% CI, 29%-35%) reported perceiving behavior change in surgeons. Three hundred and sixty degree evaluations can provide a practical, systematic, and subjectively accurate assessment of surgeon performance without undue reviewer burden. The process was found to result in beneficial behavior change, according to surgeons and their coworkers. Copyright © 2015 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
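The abstract reports survey proportions with 95% confidence intervals (e.g. 87% agreement with a 75%-94% interval) but does not state the interval method; the Wilson score interval is one standard choice for binomial proportions of this kind. A minimal sketch (illustrative only, not a reconstruction of the paper's exact analysis):

```python
import math

def wilson_interval(k, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# e.g. roughly 87% of the 118 surgeon respondents agreeing:
lo, hi = wilson_interval(103, 118)
```

Unlike the simpler Wald interval, the Wilson interval behaves sensibly for proportions near 0 or 1, which matters for survey items with near-unanimous responses.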

  8. Extreme embrittlement of austenitic stainless steel irradiated to 75-81 dpa at 335-360 degrees C

    International Nuclear Information System (INIS)

    Porollo, S.I.; Vorobjev, A.N.; Konobeev, Yu.V.

    1997-01-01

    It is generally accepted that void swelling of austenitic steels ceases below some temperature in the range 340-360 degrees C, and exhibits relatively low swelling rates up to ∼400 degrees C. This perception may not be correct under all irradiation conditions, however, since it was largely developed from data obtained at relatively high displacement rates in fast reactors whose inlet temperatures were in the range 360-370 degrees C. There is an expectation, however, that the swelling regime can shift to lower temperatures at low displacement rates via the well-known "temperature shift" phenomenon. It is also known that the swelling rates at the lower end of the swelling regime increase continuously at a sluggish rate, never approaching the terminal 1%/dpa level within the duration of previous experiments. This paper presents the results of an experiment conducted in the BN-350 fast reactor in Kazakhstan that involved the irradiation of argon-pressurized thin-walled tubes (0-200 MPa hoop stress), constructed from Fe-16Cr-15Ni-3Mo-Nb stabilized steel, in contact with the sodium coolant, which enters the reactor at ∼270 degrees C. Tubes in the annealed condition reached 75 dpa at 335 degrees C, and another set in the 20% cold-worked condition reached 81 dpa at 360 degrees C. Upon disassembly all tubes, except those in the stress-free condition, were found to have failed in an extremely brittle fashion. The stress-free tubes exhibited diameter changes that imply swelling levels ranging from 9 to 16%. It is expected that stress enhancement of swelling induced even larger swelling levels in the stressed tubes.
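The step from measured diameter changes to the quoted swelling levels can be illustrated under the usual assumption of isotropic swelling, where the volume ratio is the cube of the linear (diameter) ratio; the paper's exact conversion is not stated, so this is a sketch:

```python
def swelling_from_diameter(diameter_strain):
    """Volumetric swelling implied by a relative diameter change dD/D,
    assuming isotropic swelling: V/V0 = (1 + dD/D)**3."""
    return (1.0 + diameter_strain) ** 3 - 1.0

# Roughly: a 3% diameter increase implies ~9% volumetric swelling,
# and a 5% increase implies ~16%, spanning the range quoted above.
```

For small strains this reduces to the familiar linear rule dV/V ≈ 3 dD/D.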

  9. Eesti tankivalikud: Abrams või Leopard 2 / Holger Roonemaa

    Index Scriptorium Estoniae

    Roonemaa, Holger

    2010-01-01

    There is no official decision yet on purchasing tanks for the defence forces, but if the Estonian defence forces begin buying tanks, the choice will likely be between the Leopard 2 and the Abrams. On the costs related to tanks and the situation in Norway.

  10. Qualification Lab Testing on M1 Abrams Engine Oil Filters

    Science.gov (United States)

    2016-11-01

    Qualification Lab Testing on M1 Abrams Engine Oil Filters. Final Report TFLRF No. 483, by Kristi K. Rutta. Contract number W56HZV-15-C-0030.

  11. Hollow modular anchorage (HMA) screws for anterior transvertebral fixation in high-grade spondylolisthesis cases requiring 360 degrees in-situ fusion.

    Science.gov (United States)

    König, Matthias A; Boszczyk, Bronek M

    2018-03-22

    360 degrees in-situ fusion for high-grade spondylolisthesis has shown satisfactory long-term clinical results. Combining anterior with posterior surgery increases fusion rates. Anteriorly inserted transvertebral HMA screws could be an alternative to strut graft constructs or cages, avoiding donor-site complications. In addition, complete posterior muscle detachment is avoided and the risk of injury to neural structures is minimized. This study investigates the use of HMA screws in this context. Five consecutive patients requiring L4-S1 in-situ fusion for isthmic spondylolisthesis (four Grade 3 and one Grade 4) were included. The L5/S1 level was fused with an HMA screw filled with local bone and bone morphogenetic protein (BMP2), inserted via the L4/5 disc space level. An L4/5 stand-alone interbody fusion with additional minimally invasive posterior screw fixation was added. Transvertebral insertion of the HMA device was accomplished via a retroperitoneal approach to L4/L5 in all cases without exposure of L5/S1. Blood loss ranged from 150 to 350 ml. No intraoperative complications occurred. One patient developed a posterior wound infection requiring debridement. Solid fusion was confirmed with a CT scan after 6 months in all patients. All patients improved to unrestricted activities of daily living, with two being limited by occasional back pain. HMA screws allow for effective lumbosacral fusion via a limited anterior exposure. This is technically easier than posterior exposure of the lumbosacral junction in high-grade spondylolisthesis requiring 360 degrees fusion.

  12. On Culture: Know the Enemy and Know Thyself - Giap, Abrams, and Victory

    Science.gov (United States)

    2016-05-26

    leaders to better anticipate future action. The author conducted a case study of General Vo Nguyen Giap and General Creighton Abrams to analyze the ... the rural village of An Xa, North Vietnam. Nghiem diligently homeschooled Giap on the concept of nationalism and the Chinese classics of Taoism and ... The author analyzed General Creighton Abrams as a case study to understand how personal identity can layer over an organization's, and a

  13. DEĞERLENDİRİCİLER ARASI GÜVENİLİRLİK VE TATMİN BAĞLAMINDA 360 DERECE PERFORMANS DEĞERLENDİRME - 360-DEGREE PERFORMANCE APPRAISAL IN THE CONTEXT OF INTERRATER RELIABILITY AND SATISFACTION

    Directory of Open Access Journals (Sweden)

    Adem BALTACI

    2014-03-01

    Full Text Available The 360-degree appraisal system, viewed as today's most popular appraisal system, gets its strength from the view that results from different sources will be more objective and inclusive. Yet the question of exactly which rating source provides the more valid and reliable information remains to be answered. This uncertainty notwithstanding, the 360-degree performance appraisal system leads to higher satisfaction with the system, as it allows employees to assess both themselves and others. Against this background, this study addresses the 360-degree appraisal system with respect to satisfaction with the system and interrater reliability. To this end, the appraisal results of the employees of a company using this system were examined, and a questionnaire measuring employees' satisfaction with the system was administered. The analyses showed that demographic variables, while not affecting performance scores themselves, can influence the ratings coming from different sources. They also showed that superiors' ratings were the closest to employees' actual performance scores, and that there is a strong relationship between satisfaction with the system and employee performance.

  14. Braudel and Abrams open the door to an insoluble debate: The City

    Directory of Open Access Journals (Sweden)

    Tomás Antônio Moreira

    2016-08-01

    Full Text Available This paper looks to understand the definitions of the city to enrich the new reflections, in the current days. The starting point for reflection is the confrontation of positions in the understanding of the urban phenomenon, the city, by Fernand Braudel and Philip Abrams, later to emerge in front of the dilemma posed in the confrontation of these two authors, samplings of the main elaborate theorizing on City. Among the findings, it is emphasized that the names of propositions about the city seeking to account for the dynamic evolution of human settlements: métapole, edge city and tecnocity.

  15. 360 Derece Performans Değerlendirme ve Geri Bildirim: Bir Üniversite Mediko-Sosyal Merkezi Birim Amirlerinin Yönetsel Yetkinliklerinin Değerlendirilmesi Üzerine Pilot Uygulama Örneği(360 Degree Performance Appraisal And Feedback: “A Pilot Study Illustration in Appraising the Managerial Skills of Supervisors Working in Health Care Centre of a University”

    Directory of Open Access Journals (Sweden)

    Selin Metin CAMGÖZ

    2006-01-01

    Full Text Available In this study, "360 Degree Performance Appraisal," one of the most current and controversial issues in human resource practices, is extensively examined and supported with an empirical study. The study contains two parts. After explaining the necessity and the general utilities of the classical performance appraisal system, the first, theoretical part turns to the emergence of 360-degree performance appraisal, discusses its distinctive benefits over the classical appraisal system, and focuses on the raters (superiors, subordinates, peers, self) involved in the 360-degree performance appraisal. The second, empirical part illustrates the development of a 360-degree performance appraisal system, as well as its application and sample feedback reports, used to appraise the managerial skills of supervisors working in the Health Care Centre of a public university.

  16. The Abrams geriatric self-neglect scale: introduction, validation and psychometric properties.

    Science.gov (United States)

    Abrams, Robert C; Reid, M Carrington; Lien, Cynthia; Pavlou, Maria; Rosen, Anthony; Needell, Nancy; Eimicke, Joseph; Teresi, Jeanne

    2018-01-01

    Self-neglect is an imprecisely defined entity with multiple clinical expressions and adverse health consequences, especially in the elderly. However, research has been limited by the absence of a measurement instrument that is both inclusive and specific. Our goal was to establish the psychometric properties of a quantitative instrument, the Abrams Geriatric Self-Neglect Scale (AGSS). We analyzed data from a 2007 case-control study of 71 cognitively intact, community-dwelling older self-neglectors that had used the AGSS. The AGSS was validated against two "gold standards": a categorical definition of self-neglect developed by expert consensus, and the clinical judgment of a geriatric psychiatrist using chart review. Frequencies were examined for the six scale domains by source (Subject, Observer, and Overall Impression). Internal consistency was estimated for each source, and associations among the sources were evaluated. Internal consistency estimates for the AGSS were rated as "good," with the Subject responses having the lowest alpha and omega (0.681 and 0.692) and the Observer responses the highest (0.758 and 0.765). Subject and Observer scores had the lowest association (0.578, p …) … self-neglect, while the Subject subscale was "fair." The AGSS correctly classified and quantified self-neglect against the two "gold standards." Sufficient correlations among multiple sources of information allow investigators and clinicians to choose flexibly from Subject, Observer, or Overall Impression. The lower internal consistency estimates for Subject responses are consistent with self-neglectors' propensity to disavow symptoms. Copyright © 2017 John Wiley & Sons, Ltd.
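The internal-consistency alphas quoted above (0.681-0.765) are Cronbach's alpha, computed from the item variances and the variance of subjects' total scores. A minimal sketch of the standard formula (the item data here are illustrative, not the AGSS items):

```python
def cronbach_alpha(items):
    """Cronbach's alpha; items[i][j] is item i's score for subject j."""
    k = len(items)                       # number of items
    n = len(items[0])                    # number of subjects
    def var(xs):                         # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three perfectly consistent items give alpha = 1.0; noisier items give less.
```

Alpha approaches 1 as items co-vary strongly (subjects' totals vary much more than any single item), which is why disagreement among a self-neglector's own responses pulls the Subject alpha down.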

  17. Performans Değerlendirme Yöntemi Olarak 360 Derece Geribildirim Sürecinin Orta Kademe Yöneticilerin İş Başarısına Olan Etkisi: 5 Yıldızlı Otel İşletmelerinde Bir Uygulama = The Effect of 360 Degree Feedback Performance Evaluation Process on the Achievement of Middle-Level Managers: an Application in 5-Star Accommodation Establishments

    Directory of Open Access Journals (Sweden)

    Derya KARA

    2010-01-01

    Full Text Available This study sets out to reveal the effect of the 360 degree feedback performance evaluation process on the achievement of middle-level managers. To serve this purpose, 5-star establishments with tourism certification operating in 5 major provinces (Antalya, İstanbul, Muğla, Ankara, İzmir), 182 in total, made up the application field of the study. The study aims to determine to what extent the 360 degree feedback performance evaluation process differs from traditional performance evaluation processes within the context of job achievement. The results of the study suggest that seven dimensions (leadership, task performance, adaptation to change, communication, human relations, creating output, and employee training and development) were found to be more effective in the job performance of the middle-level managers.

  18. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, Ul; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column, one above the other.

  19. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position sensitive radiation detector, the novel system can produce images superior to those of conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  20. Implications of Articulating Machinery on Operator Line of Sight and Efficacy of Camera Based Proximity Detection Systems

    OpenAIRE

    Nicholas Schwabe; Alison Godwin

    2017-01-01

    The underground mining industry, and some above ground operations, rely on the use of heavy equipment that articulates to navigate corners in the tight confines of the tunnels. Poor line of sight (LOS) has been identified as a problem for safe operation of this machinery. Proximity detection systems, such as a video system designed to provide a 360 degree view around the machine have been implemented to improve the available LOS for the operator. A four-camera system was modeled in a computer...

  1. 360 degree port MDA - a strategy to improve port security

    OpenAIRE

    Leary, Timothy P.

    2006-01-01

    CHDS State/Local Our national security and prosperity depend in part on secure and competitive ports. Effective public and private sector collaboration is needed in a world with myriad security challenges and fierce global competition. Although steps have been taken in the years since 9/11 to realize these twin goals, much more needs to be done. The current maritime domain awareness (MDA) paradigm needs to be expanded to provide comprehensive awareness of intermodal operations in our ports...

  2. 360 degree feedback assessment tool, ver.1.01

    OpenAIRE

    Petrov, Milen; Petrov, Stanislav; Aleksieva-Petrova, Adelina

    2007-01-01

    The software is available as an open source product through the TENCompetence Sourceforge software project repository, as well as a compiled product. Available under the three clause BSD licence, Copyright TENCompetence Foundation.

  3. Contrast-induced nephropathy: The wheel has turned 360 degrees

    DEFF Research Database (Denmark)

    Thomsen, H.S.; Morcos, S.K.; Barrett, B.J.

    2008-01-01

    Contrast-induced nephropathy (CIN) has been a hot topic during the last 5 years due its association with increased morbidity and mortality. CIN is an important complication, particularly in patients with advanced chronic kidney disease (CKD) associated with diabetes mellitus. Methods to diminish ...

  4. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column one above the other through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head.

  5. Gamma camera

    International Nuclear Information System (INIS)

    Reiss, K.H.; Kotschak, O.; Conrad, B.

    1976-01-01

    A gamma camera with a setup simplified compared with the current state of engineering is described, permitting, apart from good localization, also energy discrimination. Behind the usual vacuum image amplifier, a multiwire proportional chamber filled with bromotrifluoromethane is connected in series. Localization of the signals is achieved by a delay line, and energy determination by means of a pulse-height discriminator. With the aid of drawings and circuit diagrams, the setup and mode of operation are explained. (ORU) [de]

  6. Gamma camera

    International Nuclear Information System (INIS)

    Berninger, W.H.

    1975-01-01

    The light pulse output of a scintillator, on which incident collimated gamma rays impinge, is detected by an array of photoelectric tubes, each having a convexly curved photocathode disposed in close proximity to the scintillator. Electronic circuitry connected to the outputs of the phototubes develops the scintillation-event position-coordinate electrical signals with good linearity and with substantial independence of the spacing between the scintillator and photocathodes, so that the phototubes can be positioned as close to the scintillator as possible, yielding less distortion in the field of view and improved spatial resolution compared to conventional planar-photocathode gamma cameras.

  7. Those Nifty Digital Cameras!

    Science.gov (United States)

    Ekhaml, Leticia

    1996-01-01

    Describes digital photography--an electronic imaging technology that merges computer capabilities with traditional photography--and its uses in education. Discusses how a filmless camera works, types of filmless cameras, advantages and disadvantages, and educational applications of the consumer digital cameras. (AEF)

  8. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gen...

  9. Preliminary Design of a Lightning Optical Camera and ThundEr (LOCATE) Sensor

    Science.gov (United States)

    Phanord, Dieudonne D.; Koshak, William J.; Rybski, Paul M.; Arnold, James E. (Technical Monitor)

    2001-01-01

    The preliminary design of an optical/acoustical instrument is described for making highly accurate real-time determinations of the location of cloud-to-ground (CG) lightning. The instrument, named the Lightning Optical Camera And ThundEr (LOCATE) sensor, will also image the clear and cloud-obscured lightning channel produced from CGs and cloud flashes, and will record the transient optical waveforms produced from these discharges. The LOCATE sensor will consist of a full (360 degrees) field-of-view optical camera for obtaining CG channel image and azimuth, a sensitive thunder microphone for obtaining CG range, and a fast photodiode system for time-resolving the lightning optical waveform. The optical waveform data will be used to discriminate CGs from cloud flashes. Together, the optical azimuth and thunder range are used to locate CGs, and it is anticipated that a network of LOCATE sensors would determine CG source location to well within 100 meters. All of this would be accomplished at relatively low cost compared with present RF lightning location technologies, though the range of detection is limited and will be quantified in the future. The LOCATE sensor technology would have practical applications for electric power utility companies, government (e.g. NASA Kennedy Space Center lightning safety and warning), golf resort lightning safety, telecommunications, and other industries.
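    The CG-location scheme described above (optical azimuth plus acoustically derived range) is simple polar geometry: range is the speed of sound times the flash-to-thunder delay. A sketch under assumed conventions (sensor-centred east/north coordinates and a nominal 343 m/s sound speed, neither taken from the text):

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # assumed nominal value (~20 °C air)

def locate_cg(azimuth_deg: float, thunder_delay_s: float) -> tuple[float, float]:
    """Return the (east, north) offset in metres of a CG strike from the sensor.

    azimuth_deg     -- optical azimuth of the channel, degrees clockwise from north
    thunder_delay_s -- delay between the optical flash and thunder arrival
    """
    r = SPEED_OF_SOUND_M_S * thunder_delay_s  # acoustic range estimate
    az = math.radians(azimuth_deg)
    return r * math.sin(az), r * math.cos(az)

# A flash whose thunder arrives 3 s after the flash, at azimuth 45 degrees:
east, north = locate_cg(45.0, 3.0)
print(f"{east:.0f} m east, {north:.0f} m north")  # 728 m east, 728 m north
```

    The quoted sub-100-m network accuracy would then hinge on azimuth resolution and on how well the sound speed and thunder onset are known.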

  10. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of that quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in radiation density of the exposure, to maintain the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index for maintaining its detectability and causing the proper centering of the radiation camera image
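    The termination rule described (end the exposure once a predetermined quantity of radiation accumulates in a predetermined area) can be sketched as a frame-accumulation loop. The array shapes, ROI mask, and threshold below are illustrative assumptions, not details from the patent:

```python
import numpy as np

def expose_until(frames, roi, density_threshold):
    """Accumulate frames until the mean counts inside the boolean mask
    `roi` reach `density_threshold`; return (image, frames_used)."""
    acc = None
    for n, frame in enumerate(frames, start=1):
        acc = frame.astype(float) if acc is None else acc + frame
        if acc[roi].mean() >= density_threshold:
            return acc, n
    return acc, None  # threshold never reached

# Synthetic Poisson count frames standing in for camera exposures
rng = np.random.default_rng(0)
frames = rng.poisson(2.0, size=(50, 8, 8))
roi = np.zeros((8, 8), dtype=bool)
roi[2:6, 2:6] = True  # the "predetermined area"
image, frames_used = expose_until(iter(frames), roi, density_threshold=40.0)
print(frames_used)
```

    The patent's calibration index would correspond here to recording the threshold value at which the exposure was terminated.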

  11. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  12. LSST Camera Optics Design

    Energy Technology Data Exchange (ETDEWEB)

    Riot, V J; Olivier, S; Bauman, B; Pratuch, S; Seppala, L; Gilmore, D; Ku, J; Nordby, M; Foss, M; Antilogus, P; Morgado, N

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror, telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. Optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  13. Implications of Articulating Machinery on Operator Line of Sight and Efficacy of Camera Based Proximity Detection Systems

    Directory of Open Access Journals (Sweden)

    Nicholas Schwabe

    2017-07-01

    Full Text Available The underground mining industry, and some above ground operations, rely on the use of heavy equipment that articulates to navigate corners in the tight confines of the tunnels. Poor line of sight (LOS) has been identified as a problem for safe operation of this machinery. Proximity detection systems, such as a video system designed to provide a 360 degree view around the machine, have been implemented to improve the available LOS for the operator. A four-camera system was modeled in a computer environment to assess LOS on a 3D CAD model of a typical, articulated machine. When positioned without any articulation, the system is excellent at removing blind spots for a machine driving straight forward or backward in a straight tunnel. Further analysis reveals that when the machine articulates in a simulated corner section, some camera locations are no longer useful for improving LOS into the corner. In some cases, the operator has a superior view into the corner, when compared to the best available view from the camera. The work points to the need to integrate proximity detection systems at the design, build, and manufacture stage, and to consider proper policy and procedures that would address the gains and limits of the systems prior to implementation.

  14. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up...

  15. Educational Applications for Digital Cameras.

    Science.gov (United States)

    Cavanaugh, Terence; Cavanaugh, Catherine

    1997-01-01

    Discusses uses of digital cameras in education. Highlights include advantages and disadvantages, digital photography assignments and activities, camera features and operation, applications for digital images, accessory equipment, and comparisons between digital cameras and other digitizers. (AEF)

  16. The laser scanning camera

    International Nuclear Information System (INIS)

    Jagger, M.

    The prototype development of a novel lenseless camera is reported which utilises a laser beam scanned in a raster by means of orthogonal vibrating mirrors to illuminate the field of view. Laser light reflected from the scene is picked up by a conveniently sited photosensitive device and used to modulate the brightness of a T.V. display scanned in synchronism with the moving laser beam, hence producing a T.V. image of the scene. The camera, which needs no external lighting system, can act in a wide-angle mode or, by varying the size and position of the raster, can be made to zoom in to view in detail any object within a 40° overall viewing angle. The resolution and performance of the camera are described and a comparison of these aspects is made with conventional T.V. cameras. (author)

  17. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  18. M1 Abrams Seal Ring

    National Research Council Canada - National Science Library

    2004-01-01

    .... The 3.650-inch-diameter, 0.500-inch-wide seal, made of wrought A-286 nickel-iron super alloy, is difficult to clamp securely during machining, making it a challenge to maintain required roundness tolerances...

  19. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.; Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce images superior to those of conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given and the problems and limitations introduced by noise are discussed in full. (U.K.)

  20. The Circular Camera Movement

    DEFF Research Database (Denmark)

    Hansen, Lennard Højbjerg

    2014-01-01

    It has been an accepted precept in film theory that specific stylistic features do not express specific content. Nevertheless, it is possible to find many examples in the history of film in which stylistic features do express specific content: for instance, the circular camera movement is used re...... such as the circular camera movement. Keywords: embodied perception, embodied style, explicit narration, interpretation, style pattern, television style...

  1. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from ¹⁶N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with ¹⁶N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins

  2. Deployable Wireless Camera Penetrators

    Science.gov (United States)

    Badescu, Mircea; Jones, Jack; Sherrit, Stewart; Wu, Jiunn Jeng

    2008-01-01

    A lightweight, low-power camera dart has been designed and tested for context imaging of sampling sites and ground surveys from an aerobot or an orbiting spacecraft in a microgravity environment. The camera penetrators also can be used to image any line-of-sight surface, such as cliff walls, that is difficult to access. Tethered cameras to inspect the surfaces of planetary bodies use both power and signal transmission lines to operate. A tether adds the possibility of inadvertently anchoring the aerobot, and requires some form of station-keeping capability of the aerobot if extended examination time is required. The new camera penetrators are deployed without a tether, weigh less than 30 grams, and are disposable. They are designed to drop from any altitude with the boost in transmitting power currently demonstrated at approximately 100-m line-of-sight. The penetrators also can be deployed to monitor lander or rover operations from a distance, and can be used for surface surveys or for context information gathering from a touch-and-go sampling site. Thanks to wireless operation, the complexity of the sampling or survey mechanisms may be reduced. The penetrators may be battery powered for short-duration missions, or have solar panels for longer or intermittent duration missions. The imaging device is embedded in the penetrator, which is dropped or projected at the surface of a study site at 90° to the surface. Mirrors can be used in the design to image the ground or the horizon. Some of the camera features were tested using commercial "nanny" or "spy" camera components with the charge-coupled device (CCD) looking at a direction parallel to the ground. Figure 1 shows components of one camera that weighs less than 8 g and occupies a volume of 11 cm³. This camera could transmit a standard television signal, including sound, up to 100 m. Figure 2 shows the CAD models of a version of the penetrator.
A low-volume array of such penetrator cameras could be deployed from an

  3. The Dark Energy Camera

    Science.gov (United States)

    Flaugher, B.; Diehl, H. T.; Honscheid, K.; Abbott, T. M. C.; Alvarez, O.; Angstadt, R.; Annis, J. T.; Antonik, M.; Ballester, O.; Beaufore, L.; Bernstein, G. M.; Bernstein, R. A.; Bigelow, B.; Bonati, M.; Boprie, D.; Brooks, D.; Buckley-Geer, E. J.; Campa, J.; Cardiel-Sas, L.; Castander, F. J.; Castilla, J.; Cease, H.; Cela-Ruiz, J. M.; Chappa, S.; Chi, E.; Cooper, C.; da Costa, L. N.; Dede, E.; Derylo, G.; DePoy, D. L.; de Vicente, J.; Doel, P.; Drlica-Wagner, A.; Eiting, J.; Elliott, A. E.; Emes, J.; Estrada, J.; Fausti Neto, A.; Finley, D. A.; Flores, R.; Frieman, J.; Gerdes, D.; Gladders, M. D.; Gregory, B.; Gutierrez, G. R.; Hao, J.; Holland, S. E.; Holm, S.; Huffman, D.; Jackson, C.; James, D. J.; Jonas, M.; Karcher, A.; Karliner, I.; Kent, S.; Kessler, R.; Kozlovsky, M.; Kron, R. G.; Kubik, D.; Kuehn, K.; Kuhlmann, S.; Kuk, K.; Lahav, O.; Lathrop, A.; Lee, J.; Levi, M. E.; Lewis, P.; Li, T. S.; Mandrichenko, I.; Marshall, J. L.; Martinez, G.; Merritt, K. W.; Miquel, R.; Muñoz, F.; Neilsen, E. H.; Nichol, R. C.; Nord, B.; Ogando, R.; Olsen, J.; Palaio, N.; Patton, K.; Peoples, J.; Plazas, A. A.; Rauch, J.; Reil, K.; Rheault, J.-P.; Roe, N. A.; Rogers, H.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schindler, R. H.; Schmidt, R.; Schmitt, R.; Schubnell, M.; Schultz, K.; Schurter, P.; Scott, L.; Serrano, S.; Shaw, T. M.; Smith, R. C.; Soares-Santos, M.; Stefanik, A.; Stuermer, W.; Suchyta, E.; Sypniewski, A.; Tarle, G.; Thaler, J.; Tighe, R.; Tran, C.; Tucker, D.; Walker, A. R.; Wang, G.; Watson, M.; Weaverdyck, C.; Wester, W.; Woods, R.; Yanny, B.; DES Collaboration

    2015-11-01

    The Dark Energy Camera is a new imager with a 2.°2 diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.″263 pixel-1. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6-9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.
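    The quoted pixel size and plate scale imply the camera's effective focal length, and the 2.°2 field can be cross-checked against the pixel count; a quick arithmetic sketch (the derived figures below are inferences, not stated in the abstract):

```python
# Quantities quoted in the abstract:
pixel_size_m   = 15e-6   # 15 um pixels
plate_scale_as = 0.263   # arcsec per pixel
fov_deg        = 2.2     # field-of-view diameter, degrees

ARCSEC_PER_RAD = 206_265.0

# Effective focal length implied by pixel size / plate scale:
focal_length_m = pixel_size_m * ARCSEC_PER_RAD / plate_scale_as
print(f"effective focal length ~ {focal_length_m:.2f} m")  # ~11.76 m

# Field diameter expressed in pixels across the focal plane:
fov_pixels = fov_deg * 3600.0 / plate_scale_as
print(f"field diameter ~ {fov_pixels:,.0f} pixels")  # ~30,114 pixels
```

    A roughly 30,000-pixel field diameter is consistent with the mosaic of 62 2k × 4k imaging CCDs described in the record.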

  4. THE DARK ENERGY CAMERA

    Energy Technology Data Exchange (ETDEWEB)

    Flaugher, B.; Diehl, H. T.; Alvarez, O.; Angstadt, R.; Annis, J. T.; Buckley-Geer, E. J. [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Honscheid, K. [Center for Cosmology and Astro-Particle Physics, The Ohio State University, Columbus, OH 43210 (United States); Abbott, T. M. C.; Bonati, M. [Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory, Casilla 603, La Serena (Chile); Antonik, M.; Brooks, D. [Department of Physics and Astronomy, University College London, Gower Street, London, WC1E 6BT (United Kingdom); Ballester, O.; Cardiel-Sas, L. [Institut de Física d’Altes Energies, Universitat Autònoma de Barcelona, E-08193 Bellaterra, Barcelona (Spain); Beaufore, L. [Department of Physics, The Ohio State University, Columbus, OH 43210 (United States); Bernstein, G. M. [Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, PA 19104 (United States); Bernstein, R. A. [Carnegie Observatories, 813 Santa Barbara St., Pasadena, CA 91101 (United States); Bigelow, B.; Boprie, D. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States); Campa, J. [Centro de Investigaciones Energèticas, Medioambientales y Tecnológicas (CIEMAT), Madrid (Spain); Castander, F. J., E-mail: diehl@fnal.gov [Institut de Ciències de l’Espai, IEEC-CSIC, Campus UAB, Facultat de Ciències, Torre C5 par-2, E-08193 Bellaterra, Barcelona (Spain); Collaboration: DES Collaboration; and others

    2015-11-15

    The Dark Energy Camera is a new imager with a 2.°2 diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.″263 pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6–9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  5. Harnessing the Transformative Tsunami: Fleet-wide 360-degree Feedback Revisited

    Science.gov (United States)

    2012-06-01

    "serve to model the development of associates" (Gardner, Cogliser, Davis and Dickens 2011, 1122). Clearly, OSA evades a segment of the current Navy... studies advocating multi-rater feedback as a means of improving self-awareness (Bailey and Fletcher 2002; Atwater, Brett, and Charles 2007)...

  6. Wideband 360 degrees microwave photonic phase shifter based on slow light in semiconductor optical amplifiers

    DEFF Research Database (Denmark)

    Xue, Weiqi; Sales, Salvador; Capmany, Jose

    2010-01-01

    In this work we demonstrate for the first time, to the best of our knowledge, a continuously tunable 360° microwave phase shifter spanning a microwave bandwidth of several tens of GHz (up to 40 GHz) by slow light effects. The proposed device exploits the phenomenon of coherent population oscillat...

  7. Incorporating a 360 Degree Evaluation Model IOT Transform the USMC Performance Evaluation System

    Science.gov (United States)

    2005-02-08

    Incorporating a 360 Evaluation Model IOT Transform the USMC Performance Evaluation System. EWS 2005. Subject Area: Manpower.

  8. 360-Degree Feedback: Key to Translating Air Force Core Values into Behavioral Change

    National Research Council Canada - National Science Library

    Hancock, Thomas

    1999-01-01

    Integrity, service, and excellence. These are only three words, but as core values they serve as ideals that inspire Air Force people to make our institution what it is: the best and most respected Air Force in the world...

  9. 360-Degree Assessments: Are They the Right Tool for the U.S. Military?

    Science.gov (United States)

    2015-01-01

    favorable reactions when they viewed the rater as credible, the feedback was based on job criteria, and the rater refrained from criticism and engaged in... (2) the commander and the rater are provided the report; and (3) the commander and his or her rating superior must engage in a developmental... favoritism, diversity management, organizational processes, intention to stay, and job burnout (Ripple, undated). Focus groups are also conducted as

  10. Continued Offpost Groundwater Monitoring Program (Revision III-360 Degree Monitoring Program) Rocky Mountain Arsenal

    Science.gov (United States)

    1986-02-01

    Rocky Mountain Arsenal cleanup, Contract Number DAAJ-1I-83-0-007, Task Order 0006: Continued Offpost Groundwater Monitoring Program (Revision III-360° Monitoring Program). The remainder of the scanned record is OCR-garbled well-construction schematic text (casing diameters, internal mortar collar, cement pad at ground level, 4-inch-diameter PVC casing, alluvial grout).

  11. Communities, Cameras, and Conservation

    Science.gov (United States)

    Patterson, Barbara

    2012-01-01

    Communities, Cameras, and Conservation (CCC) is the most exciting and valuable program the author has seen in her 30 years of teaching field science courses. In this citizen science project, students and community volunteers collect data on mountain lions ("Puma concolor") at four natural areas and public parks along the Front Range of Colorado.…

  12. Mars Observer camera

    Science.gov (United States)

    Malin, M. C.; Danielson, G. E.; Ingersoll, A. P.; Masursky, H.; Veverka, J.; Ravine, M. A.; Soulanille, T. A.

    1992-01-01

    The Mars Observer camera (MOC) is a three-component system (one narrow-angle and two wide-angle cameras) designed to take high spatial resolution pictures of the surface of Mars and to obtain lower spatial resolution, synoptic coverage of the planet's surface and atmosphere. The cameras are based on the 'push broom' technique; that is, they do not take 'frames' but rather build pictures, one line at a time, as the spacecraft moves around the planet in its orbit. MOC is primarily a telescope for taking extremely high resolution pictures of selected locations on Mars. Using the narrow-angle camera, areas ranging from 2.8 km x 2.8 km to 2.8 km x 25.2 km (depending on available internal digital buffer memory) can be photographed at about 1.4 m/pixel. Additionally, lower-resolution pictures (to a lowest resolution of about 11 m/pixel) can be acquired by pixel averaging; these images can be much longer, ranging up to 2.8 x 500 km at 11 m/pixel. High-resolution data will be used to study sediments and sedimentary processes, polar processes and deposits, volcanism, and other geologic/geomorphic processes.
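    The narrow-angle figures quoted above are mutually consistent, which two lines of arithmetic confirm; the detector-line and pixel-summing interpretations below are inferences, not statements from the record:

```python
# MOC narrow-angle numbers from the abstract; the ~2k line and ~8x
# summing readings are inferences.
SWATH_M    = 2800.0   # cross-track swath width at full resolution
GSD_FULL_M = 1.4      # metres per pixel at full resolution
GSD_LOW_M  = 11.0     # metres per pixel after pixel averaging

pixels_per_line = SWATH_M / GSD_FULL_M    # samples across one image line
summing         = GSD_LOW_M / GSD_FULL_M  # averaging factor for low-res mode
print(round(pixels_per_line))  # 2000 -> consistent with a ~2k detector line
print(round(summing, 1))       # 7.9  -> consistent with ~8x pixel summing
```

    In a push-broom design these per-line numbers are fixed by the optics; image length along track is limited only by buffer memory, as the abstract notes.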

  13. The world's fastest camera

    CERN Multimedia

    Piquepaille, Roland

    2006-01-01

    This image processor is not your typical digital camera. It took 20 people six years and $6 million to build the "Regional Calorimeter Trigger" (RCT), which will be a component of the Compact Muon Solenoid (CMS) experiment, one of the detectors on the Large Hadron Collider (LHC) in Geneva, Switzerland (1 page)

  14. Camera as Cultural Critique

    DEFF Research Database (Denmark)

    Suhr, Christian

    2015-01-01

    researchers, cameras, and filmed subjects already inherently comprise analytical decisions. It is these ethnographic qualities inherent in audiovisual and photographic imagery that make it of particular value to a participatory anthropological enterprise that seeks to resist analytic closure and seeks instead...

  15. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations — i.e. automatically controlling the virtual...

  16. The PAU Camera

    Science.gov (United States)

    Casas, R.; Ballester, O.; Cardiel-Sas, L.; Carretero, J.; Castander, F. J.; Castilla, J.; Crocce, M.; de Vicente, J.; Delfino, M.; Fernández, E.; Fosalba, P.; García-Bellido, J.; Gaztañaga, E.; Grañena, F.; Jiménez, J.; Madrid, F.; Maiorino, M.; Martí, P.; Miquel, R.; Neissner, C.; Ponce, R.; Sánchez, E.; Serrano, S.; Sevilla, I.; Tonello, N.; Troyano, I.

    2011-11-01

    The PAU Camera (PAUCam) is a wide-field camera designed to be mounted at the William Herschel Telescope (WHT) prime focus, located at the Observatorio del Roque de los Muchachos on the island of La Palma (Canary Islands). Its primary function is to carry out a cosmological survey, the PAU Survey, covering an area of several hundred square degrees of sky. Its purpose is to determine positions and distances using photometric redshift techniques. To achieve accurate photo-z's, PAUCam will be equipped with 40 narrow-band filters covering the range from 450 to 850 nm, and six broad-band filters, those of the SDSS system plus the Y band. To fully cover the focal plane delivered by the telescope optics, 18 CCDs of 2k x 4k pixels are needed. The pixels are square, 15 μm on a side. The optical characteristics of the prime focus corrector deliver a field of view in which eight of these CCDs will have an illumination of more than 95%, covering a field of 40 arc minutes. The remaining CCDs will occupy the vignetted region, extending the field diameter to one degree. Two of the CCDs will be devoted to auto-guiding. This camera has some innovative features. Firstly, both the broad-band and the narrow-band filters will be placed in mobile trays, hosting at most 16 such filters. These are located inside the cryostat, a few millimeters in front of the CCDs, when observing. Secondly, a pressurized liquid nitrogen tank outside the camera will feed a boiler inside the cryostat with a controlled mass flow. The read-out electronics will use the Monsoon architecture, originally developed by NOAO, modified and manufactured by our team in the frame of the DECam project (the camera used in the DES Survey). PAUCam will also be available to the astronomical community of the WHT.

  17. MISR radiometric camera-by-camera Cloud Mask V004

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the Radiometric camera-by-camera Cloud Mask dataset. It is used to determine whether a scene is classified as clear or cloudy. A new parameter has...

  18. Body worn camera

    Science.gov (United States)

    Aishwariya, A.; Pallavi Sudhir, Gulavani; Garg, Nemesa; Karthikeyan, B.

    2017-11-01

    A body worn camera is a small video camera worn on the body, typically used by police officers to record arrests and evidence at crime scenes. It helps prevent and resolve complaints brought by members of the public, and strengthens police transparency, performance, and accountability. The main parameters of this type of system are video format, resolution, frame rate, and audio quality. This system records video in .mp4 format at 1080p resolution and 30 frames per second. Another important aspect when designing such a system is the amount of power it requires, as battery management becomes critical. The main design challenges are the size of the video, audio capture, combining audio and video and saving them in .mp4 format, the battery size required for 8 hours of continuous recording, and security. For prototyping, the system was implemented using a Raspberry Pi Model B.
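The storage requirement for 8 hours of continuous recording can be estimated with a back-of-the-envelope calculation; the 8 Mbps bitrate below is an assumed figure typical of 1080p/30 fps compressed video, not a number from the paper.

```python
def storage_gb(hours, bitrate_mbps):
    """Recording size: bitrate (bits/s) times duration (s), converted to gigabytes."""
    bits = bitrate_mbps * 1e6 * hours * 3600
    return bits / 8 / 1e9

# 8 hours of continuous 1080p/30 recording at an assumed 8 Mbps:
print(round(storage_gb(8, 8), 1))  # 28.8 GB
```

Estimates like this drive both the flash storage and, via the encoder's power draw, the battery sizing mentioned above.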

  19. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1986-01-01

    A positron emission tomography camera having a plurality of detector rings positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom. Each ring contains a plurality of scintillation detectors which are positioned around an inner circumference, with a septum ring extending inwardly from the inner circumference along each outer edge of each ring. An additional septum ring is positioned in the middle of each ring of detectors and parallel to the other septa rings, whereby the inward extent of all the septa rings may be reduced by one-half and the number of detectors required in each ring is reduced. The additional septa reduce the cost of the positron camera and improve its performance

  20. The NEAT Camera Project

    Science.gov (United States)

    Newburn, Ray L., Jr.

    1995-01-01

    The NEAT (Near Earth Asteroid Tracking) camera system consists of a camera head with a 6.3 cm square 4096 x 4096 pixel CCD, fast electronics, and a Sun Sparc 20 data and control computer with dual CPUs, 256 Mbytes of memory, and 36 Gbytes of hard disk. The system was designed for optimum use with an Air Force GEODSS (Ground-based Electro-Optical Deep Space Surveillance) telescope. The GEODSS telescopes have 1 m f/2.15 objectives of the Ritchey-Chrétien type, designed originally for satellite tracking. Installation of NEAT began July 25 at the Air Force Facility on Haleakala, a 3000 m peak on Maui in Hawaii.
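From the figures in the record (1 m f/2.15 objective, 6.3 cm square 4096 x 4096 CCD) the plate scale and field of view follow directly from small-angle geometry; the derived values below are a consistency check, not numbers stated in the abstract.

```python
import math

FOCAL_M = 1.0 * 2.15  # 1 m aperture at f/2.15 gives a 2.15 m focal length
CCD_SIDE_M = 0.063    # 6.3 cm square CCD
N_PX = 4096

# Small-angle approximation: angle (rad) = size / focal length.
fov_deg = math.degrees(CCD_SIDE_M / FOCAL_M)
pixel_arcsec = math.degrees(CCD_SIDE_M / N_PX / FOCAL_M) * 3600

print(round(fov_deg, 2))       # ~1.68 degrees on a side
print(round(pixel_arcsec, 2))  # ~1.48 arcsec per pixel
```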

  1. Gamma camera display system

    International Nuclear Information System (INIS)

    Stout, K.J.

    1976-01-01

    A gamma camera having an array of photomultipliers coupled via pulse shaping circuitry and a resistor weighting circuit to a display for forming an image of a radioactive subject is described. A linearizing circuit is coupled to the weighting circuit, the linearizing circuit including a nonlinear feedback circuit with diode coupling to the weighting circuit for linearizing the correspondence between points of the display and points of the subject. 4 Claims, 5 Drawing Figures

  2. Scanning gamma camera

    International Nuclear Information System (INIS)

    Engdahl, L.W.; Batter, J.F. Jr.; Stout, K.J.

    1977-01-01

    A scanning system for a gamma camera providing for the overlapping of adjacent scan paths is described. A collimator mask having tapered edges provides for a graduated reduction in intensity of radiation received by a detector thereof, the reduction in intensity being graduated in a direction normal to the scanning path to provide a blending of images of adjacent scan paths. 31 claims, 15 figures

  3. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Toshiba long ago began manufacturing black-and-white radiation-resistant camera tubes employing nonbrowning face-plate glass for ITV cameras used in nuclear power plants. Now, in response to increasing demand in the nuclear power field, the company is developing radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. Presented here are the results of experiments on the characteristics of materials for single color-camera tubes and the prospects for commercialization of the tubes. (author)

  4. The PLATO camera

    Science.gov (United States)

    Laubier, D.; Bodin, P.; Pasquier, H.; Fredon, S.; Levacher, P.; Vola, P.; Buey, T.; Bernardi, P.

    2017-11-01

    PLATO (PLAnetary Transits and Oscillation of stars) is a candidate for the M3 Medium-size mission of the ESA Cosmic Vision programme (2015-2025 period). It is aimed at the detection of Earth-size and Earth-mass planets in the habitable zone of bright stars and their characterisation using the transit method and the asteroseismology of their host star. That means observing more than 100 000 stars brighter than magnitude 11, and more than 1 000 000 brighter than magnitude 13, with a long continuous observing time for 20 % of them (2 to 3 years). This yields a need for unusually long-term signal stability. For the brighter stars, the noise requirement is less than 34 ppm·hr^-1/2, from a frequency of 40 mHz down to 20 μHz, including all sources of noise such as the motion of the star images on the detectors and frequency beatings. These extremely tight requirements result in a payload consisting of 32 synchronised, high-aperture, wide-field-of-view cameras thermally regulated down to -80°C, whose data are combined to increase the signal-to-noise performance. They are split into 4 different subsets pointing in 4 directions to widen the total field of view; stars in the centre of that field of view are observed by all 32 cameras. Two extra cameras are used with color filters and provide pointing measurements to the spacecraft Attitude and Orbit Control System (AOCS) loop. The satellite orbits the Sun at the L2 Lagrange point. This paper presents the optical, electronic and electrical, thermal and mechanical designs devised to achieve those requirements, and the results from breadboards developed for the optics, the focal plane, the power supply and video electronics.
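The 34 ppm·hr^-1/2 requirement and the strategy of combining cameras both follow the usual white-noise scalings, sketched below under the assumption that the individual camera noises are independent; the 100 ppm per-camera figure is purely illustrative.

```python
import math

REQ_PPM_SQRT_HR = 34.0  # noise requirement quoted in the abstract

def noise_after(t_hours):
    """White noise integrates down as 1/sqrt(t)."""
    return REQ_PPM_SQRT_HR / math.sqrt(t_hours)

def combined_noise(per_camera_ppm, n_cameras):
    """Averaging n independent cameras reduces white noise by sqrt(n)."""
    return per_camera_ppm / math.sqrt(n_cameras)

print(noise_after(4.0))                     # 17.0 ppm after 4 hours of integration
print(round(combined_noise(100.0, 32), 1))  # 17.7 ppm combining 32 cameras at an assumed 100 ppm each
```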

  5. Stereoscopic camera design

    Science.gov (United States)

    Montgomery, David J.; Jones, Christopher K.; Stewart, James N.; Smith, Alan

    2002-05-01

    It is clear from the literature that the majority of work in stereoscopic imaging is directed towards the development of modern stereoscopic displays. As costs come down, wider public interest in this technology is expected to increase. This new technology would require new methods of image formation. Advances in stereo computer graphics will of course lead to the creation of new stereo computer games, graphics in films etc. However, the consumer would also like to see real-world stereoscopic images, pictures of family, holiday snaps etc. Such scenery would have wide ranges of depth to accommodate and would need also to cope with moving objects, such as cars, and in particular other people. Thus, the consumer acceptance of auto/stereoscopic displays and 3D in general would be greatly enhanced by the existence of a quality stereoscopic camera. This paper will cover an analysis of existing stereoscopic camera designs and show that they can be categorized into four different types, with inherent advantages and disadvantages. A recommendation is then made with regard to 3D consumer still and video photography. The paper will go on to discuss this recommendation and describe its advantages and how it can be realized in practice.

  6. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    A positron emission tomography camera having a plurality of detector planes positioned side-by-side around a patient area to detect radiation. Each plane includes a plurality of photomultiplier tubes, and at least two rows of scintillation crystals on each photomultiplier tube extend across to adjacent photomultiplier tubes for detecting radiation from the patient area. Each row of crystals on each photomultiplier tube is offset from the other rows of crystals, and the area of each crystal on each tube in each row is different from the area of the crystals on the tube in other rows, for detecting which crystal is actuated and allowing the detector to detect more inter-plane slices. The crystals are offset by an amount equal to the length of the crystal divided by the number of rows. The rows of crystals on opposite sides of the patient may be rotated 90 degrees relative to each other

  7. Junocam: Juno's Outreach Camera

    Science.gov (United States)

    Hansen, C. J.; Caplinger, M. A.; Ingersoll, A.; Ravine, M. A.; Jensen, E.; Bolton, S.; Orton, G.

    2017-11-01

    Junocam is a wide-angle camera designed to capture the unique polar perspective of Jupiter offered by Juno's polar orbit. Junocam's four-color images include the best spatial resolution ever acquired of Jupiter's cloudtops. Junocam will look for convective clouds and lightning in thunderstorms and derive the heights of the clouds. Junocam will support Juno's radiometer experiment by identifying any unusual atmospheric conditions such as hotspots. Junocam is on the spacecraft explicitly to reach out to the public and share the excitement of space exploration. The public is an essential part of our virtual team: amateur astronomers will supply ground-based images for use in planning, the public will weigh in on which images to acquire, and the amateur image processing community will help process the data.

  8. Automatic locking radioisotope camera lock

    International Nuclear Information System (INIS)

    Rosauer, P.J.

    1978-01-01

    The lock of the present invention secures the isotope source in a stored, shielded condition in the camera until a positive effort has been made to open the lock and take the source outside of the camera, and prevents disconnection of the source pigtail unless the source is locked in a shielded condition in the camera. It also gives a visual indication of the locked or possibly exposed condition of the isotope source and prevents the source pigtail from being completely pushed out of the camera, even when the lock is released. (author)

  9. The Eye of the Camera

    NARCIS (Netherlands)

    van Rompay, Thomas Johannes Lucas; Vonk, Dorette J.; Fransen, M.L.

    2009-01-01

    This study addresses the effects of security cameras on prosocial behavior. Results from previous studies indicate that the presence of others can trigger helping behavior, arising from the need for approval of others. Extending these findings, the authors propose that security cameras can likewise

  10. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1982-01-01

    The invention provides a composite solid state detector for use in deriving a display, by spatial coordinate information, of the distribution of radiation emanating from a source within a region of interest, comprising several solid state detector components, each having a given surface arranged for exposure to impinging radiation and exhibiting discrete interactions therewith at given spatially definable locations. The surface of each component and the surface disposed opposite and substantially parallel thereto are associated with impedance means configured to provide for each opposed surface outputs for signals relating the given location of the interactions with one spatial coordinate parameter of one select directional sense. The detector components are arranged to provide groupings of adjacently disposed surfaces mutually linearly oriented to exhibit a common directional sense of the spatial coordinate parameter. Means interconnect at least two of the outputs associated with each of the surfaces within a given grouping for collecting the signals deriving therefrom. The invention also provides a camera system for imaging the distribution of a source of gamma radiation situated within a region of interest

  11. CCD TV camera, TM1300

    International Nuclear Information System (INIS)

    Takano, Mitsuo; Endou, Yukio; Nakayama, Hideo

    1982-01-01

    Development has been made of a black-and-white TV camera, the TM 1300, using an interline-transfer CCD, which outperforms the frame-transfer CCDs marketed since 1980: it has a greater number of horizontal picture elements and far smaller input power (less than 2 W at 9 V), uses hybrid ICs for the CCD driver unit to reduce the size of the camera, and exhibits no picture distortion and no burn-in. In addition, its peripheral equipment, such as the camera housing and the pan and tilt head, has been miniaturized as well. It is also expected to find wider application in industrial TV. (author)

  12. High Quality Camera Surveillance System

    OpenAIRE

    Helaakoski, Ari

    2015-01-01

    Oulu University of Applied Sciences, Information Technology. Author: Ari Helaakoski. Title of the master's thesis: High Quality Camera Surveillance System. Supervisor: Kari Jyrkkä. Term and year of completion: Spring 2015. Number of pages: 31. This master's thesis was commissioned by iProtoXi Oy and carried out for one iProtoXi customer. The aim of the thesis was to make a camera surveillance system using a high-quality camera with pan and tilt capability. It should b...

  13. Control system for gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.

    1977-01-01

    An improved gamma camera arrangement is described which utilizes a solid-state detector formed of high-purity germanium. The central arrangement of the camera carries out a trapezoidal filtering operation over antisymmetrically summed spatial signals through gated integration procedures utilizing idealized integrating intervals. By simultaneously carrying out peak energy evaluation of the input signals, desirable control over pulse pile-up phenomena is achieved. Additionally, the use of the time derivative of incoming pulse or signal energy information to initially enable the control system provides a low-level information evaluation, serving to enhance the signal-processing efficiency of the camera

  14. Analyzer for gamma cameras diagnostic

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Osorio Deliz, J. F.; Diaz Garcia, A.

    2013-01-01

    This research work was carried out to develop an analyzer for gamma camera diagnostics. It is composed of an electronic system that includes hardware and software capabilities, and operates from the acquisition of the four head position signals of a gamma camera detector. The result is the spectrum of the energy delivered by nuclear radiation coming from the camera detector head. The system includes analog processing of the position signals from the camera, digitization and subsequent processing of the energy signal in a multichannel analyzer, sending of data to a computer via a standard USB port, and processing of the data on a personal computer to obtain the final histogram. The circuits are composed of an analog processing board and a universal kit with a microcontroller and a programmable gate array. (Author)
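The processing chain described (combining the four head position signals into an event energy, then binning it in a multichannel analyzer) can be sketched in software; in an Anger-type camera the energy is proportional to the sum of the position signals. All numerical values below are hypothetical.

```python
# Minimal sketch of a multichannel analyzer fed by four head position
# signals; the signal levels, channel count, and full scale are assumed.
import random

N_CHANNELS = 1024
E_MAX = 200.0  # assumed full-scale energy, arbitrary units

def energy(xp, xm, yp, ym):
    """Anger-camera event energy: the sum of the four position signals."""
    return xp + xm + yp + ym

def channel(e):
    """Map an energy to a histogram channel, clamping at full scale."""
    return min(int(e / E_MAX * N_CHANNELS), N_CHANNELS - 1)

hist = [0] * N_CHANNELS
random.seed(0)
for _ in range(10_000):
    sigs = [random.gauss(35.0, 2.0) for _ in range(4)]  # simulated event
    hist[channel(energy(*sigs))] += 1

print(sum(hist))  # 10000 events binned into the spectrum
```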

  15. New generation of meteorology cameras

    Science.gov (United States)

    Janout, Petr; Blažek, Martin; Páta, Petr

    2017-12-01

    A new generation of the WILLIAM (WIde-field aLL-sky Image Analyzing Monitoring system) camera includes new features such as monitoring of rain and storm clouds during daytime observation. Development of the new generation of weather monitoring cameras responds to the demand for monitoring of sudden weather changes. The new WILLIAM cameras are ready to process acquired image data immediately, issue warnings of sudden torrential rain, and send them to the user's cell phone and email. Actual weather conditions are determined from image data, and the results of image processing are complemented by data from sensors of temperature, humidity, and atmospheric pressure. In this paper, we present the architecture and image data processing algorithms of the monitoring camera, and a spatially variant model of the imaging system's aberrations based on Zernike polynomials.

  16. Astronomy and the camera obscura

    Science.gov (United States)

    Feist, M.

    2000-02-01

    The camera obscura (from Latin meaning darkened chamber) is a simple optical device with a long history. In the form considered here, it can be traced back to 1550. It had its heyday during the Victorian era when it was to be found at the seaside as a tourist attraction or sideshow. It was also used as an artist's drawing aid and, in 1620, the famous astronomer-mathematician, Johannes Kepler used a small tent camera obscura to trace the scenery.

  17. Science, conservation, and camera traps

    Science.gov (United States)

    Nichols, James D.; Karanth, K. Ullas; O'Connell, Allan F.

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  18. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of imaging also facades in built up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi platform photogrammetry the inner orientation of the used IGI Penta DigiCAM has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by program system BLUH. With 4.1 million image points in 314 images respectively 3.9 million image points in 248 images a dense matching was provided by Pix4Dmapper. With up to 19 respectively 29 images per object point the images are well connected, nevertheless the high number of images per object point are concentrated to the block centres while the inclined images outside the block centre are satisfying but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters or in other words, additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras with a size exceeding 5μm even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects of the image corners are limited but still available. Remarkable angular affine systematic image errors can be seen especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but it can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With exception of the angular affinity the systematic image errors for corresponding
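The radial symmetric distortion discussed above is conventionally modeled with even powers of the image radius; the Brown-Conrady form below is a standard sketch of that model, not the BLUH additional-parameter set used in the paper, and the coefficients are assumed values.

```python
def apply_radial(x_mm, y_mm, k1, k2):
    """Brown-Conrady radial distortion: r' = r * (1 + k1*r^2 + k2*r^4)."""
    r2 = x_mm * x_mm + y_mm * y_mm
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x_mm * factor, y_mm * factor

# A point 10 mm off-axis with small assumed coefficients shifts by ~1 micrometre,
# comparable in scale to the residual distortions reported for the inclined cameras.
x, y = apply_radial(10.0, 0.0, 1e-6, -1e-10)
print(round((x - 10.0) * 1000, 2))  # 0.99 (displacement in micrometres)
```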

  19. SUB-CAMERA CALIBRATION OF A PENTA-CAMERA

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-03-01

    Full Text Available Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of imaging also facades in built up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi platform photogrammetry the inner orientation of the used IGI Penta DigiCAM has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by program system BLUH. With 4.1 million image points in 314 images respectively 3.9 million image points in 248 images a dense matching was provided by Pix4Dmapper. With up to 19 respectively 29 images per object point the images are well connected, nevertheless the high number of images per object point are concentrated to the block centres while the inclined images outside the block centre are satisfying but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters or in other words, additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras with a size exceeding 5μm even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects of the image corners are limited but still available. Remarkable angular affine systematic image errors can be seen especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but it can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With exception of the angular affinity the systematic image errors

  20. The fly's eye camera system

    Science.gov (United States)

    Mészáros, L.; Pál, A.; Csépány, G.; Jaskó, A.; Vida, K.; Oláh, K.; Mezö, G.

    2014-12-01

    We introduce the Fly's Eye Camera System, an all-sky monitoring device intended to perform time domain astronomy. This camera system design will provide complementary data sets for other synoptic sky surveys such as LSST or Pan-STARRS. The effective field of view is obtained by 19 cameras arranged in a spherical mosaic form. These individual cameras of the device stand on a hexapod mount that is fully capable of achieving sidereal tracking for the subsequent exposures. This platform has many advantages. First of all, it requires only one type of moving component and does not include unique parts. Hence this design not only eliminates problems implied by unique elements, but the redundancy of the hexapod allows smooth operations even if one or two of the legs are stuck. In addition, it can calibrate itself by observed stars independently of both the geographical location (including the northern and southern hemispheres) and the polar alignment of the full mount. All mechanical elements and electronics are designed within our institute, Konkoly Observatory. Currently, our instrument is in the testing phase with an operating hexapod and a reduced number of cameras.

  1. Dark Energy Camera for Blanco

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  2. EDICAM (Event Detection Intelligent Camera)

    Energy Technology Data Exchange (ETDEWEB)

    Zoletnik, S. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Szabolics, T., E-mail: szabolics.tamas@wigner.mta.hu [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Kocsis, G.; Szepesi, T.; Dunai, D. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary)

    2013-10-15

    Highlights: ► We present EDICAM's hardware modules. ► We present EDICAM's main design concepts. ► This paper will describe EDICAM firmware architecture. ► Operation principles description. ► Further developments. -- Abstract: A new type of fast framing camera has been developed for fusion applications by the Wigner Research Centre for Physics during the last few years. A new concept was designed for intelligent event driven imaging which is capable of focusing image readout to Regions of Interests (ROIs) where and when predefined events occur. At present these events mean intensity changes and external triggers but in the future more sophisticated methods might also be defined. The camera provides 444 Hz frame rate at full resolution of 1280 × 1024 pixels, but monitoring of smaller ROIs can be done in the 1–116 kHz range even during exposure of the full image. Keeping space limitations and the harsh environment in mind the camera is divided into a small Sensor Module and a processing card interconnected by a fast 10 Gbit optical link. This camera hardware has been used for passive monitoring of the plasma in different devices for example at ASDEX Upgrade and COMPASS with the first version of its firmware. The new firmware and software package is now available and ready for testing the new event processing features. This paper will present the operation principle and features of the Event Detection Intelligent Camera (EDICAM). The device is intended to be the central element in the 10-camera monitoring system of the Wendelstein 7-X stellarator.

  3. Streak cameras and their applications

    International Nuclear Information System (INIS)

    Bernet, J.M.; Imhoff, C.

    1987-01-01

    Over the last several years, development of various measurement techniques in the nanosecond and picosecond range has led to increased reliance on streak cameras. This paper will present the main electronic and optoelectronic performances of the Thomson-CSF TSN 506 cameras and their associated devices used to build an automatic image acquisition and processing system (NORMA). A brief survey of the diversity and the spread of the use of high-speed electronic cinematography will be illustrated by a few typical applications [fr]

  4. The Camera Comes to Court.

    Science.gov (United States)

    Floren, Leola

    After the Lindbergh kidnapping trial in 1935, the American Bar Association sought to eliminate electronic equipment from courtroom proceedings. Eventually, all but two states adopted regulations applying that ban to some extent, and a 1965 Supreme Court decision encouraged the banning of television cameras at trials as well. Currently, some states…

  5. High-speed holographic camera

    International Nuclear Information System (INIS)

    Novaro, Marc

    The high-speed holographic camera is a diagnostic instrument using holography as an information-storing medium. It allows us to take 10 holograms of an object, with exposure times of 1.5 ns, separated in time by 1 or 2 ns. To obtain these results simply, no moving parts are used in the set-up [fr]

  6. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    Just like art historians have focused on e.g. composition or lighting, this dissertation takes a single stylistic parameter as its object of study: camera movement. Within film studies this localized avenue of middle-level research has become increasingly viable under the aegis of a perspective k...

  7. The LSST camera system overview

    Science.gov (United States)

    Gilmore, Kirk; Kahn, Steven; Nordby, Martin; Burke, David; O'Connor, Paul; Oliver, John; Radeka, Veljko; Schalk, Terry; Schindler, Rafe

    2006-06-01

    The LSST camera is a wide-field optical (0.35–1 µm) imager designed to provide a 3.5 degree FOV with better than 0.2 arcsecond sampling. The detector format will be a circular mosaic providing approximately 3.2 Gigapixels per image. The camera includes a filter mechanism and shuttering capability. It is positioned in the middle of the telescope, where cross-sectional area is constrained by optical vignetting and heat dissipation must be controlled to limit thermal gradients in the optical beam. The fast, f/1.2 beam will require tight tolerances on the focal plane mechanical assembly. The focal plane array operates at a temperature of approximately -100°C to achieve the desired detector performance. The focal plane array is contained within an evacuated cryostat, which incorporates detector front-end electronics and thermal control. The cryostat lens serves as an entrance window and vacuum seal for the cryostat. Similarly, the camera body lens serves as an entrance window and gas seal for the camera housing, which is filled with a suitable gas to provide the operating environment for the shutter and filter change mechanisms. The filter carousel can accommodate 5 filters, each 75 cm in diameter, for rapid exchange without external intervention.

  8. Toy Cameras and Color Photographs.

    Science.gov (United States)

    Speight, Jerry

    1979-01-01

    The technique of using toy cameras for both black-and-white and color photography in the art class is described. The author suggests that expensive equipment can limit the growth of a beginning photographer by emphasizing technique and equipment instead of in-depth experience with composition fundamentals and ideas. (KC)

  9. Gamma camera with reflectivity mask

    International Nuclear Information System (INIS)

    Stout, K.J.

    1980-01-01

    In accordance with the present invention there is provided a radiographic camera comprising: a scintillator; a plurality of photodetectors positioned to face said scintillator; a plurality of masked regions formed upon a face of said scintillator opposite said photodetectors and positioned coaxially with respective ones of said photodetectors for decreasing the amount of internal reflection of optical photons generated within said scintillator. (auth)

  10. Implementing and measuring safety goals and safety culture. 3. Shifting to a Coaching Culture Through a 360-Degree Assessment Process

    International Nuclear Information System (INIS)

    Snow, Bruce A.; Maciuska, Frank

    2001-01-01

    Error-free operation is the ultimate objective of any safety culture. Ginna Training and Operations has embarked on an approach directed at further developing coaching skills, attitudes, and values. To accomplish this, a 360-deg assessment process designed to enhance coaching skills, attitudes, and values has been implemented. The process includes measuring participants based on a set of values and an individual self-development plan based on the feedback from the 360-deg assessment. The skills and experience of the people who make up that culture are irreplaceable. As nuclear organizations mature and generations retire, knowledge and skills must be transferred to the incoming generations without a loss in performance. The application of a 360-deg assessment process can shift the culture to include coaching in a strong command and control environment. It is a process of change management strengthened by experience while meeting the challenge to improve human performance by changing workplace attitudes. At Ginna, training programs and new processes were initiated to pursue the ultimate objective: error-free operation. The overall objective of the programs is to create a common knowledge base and the skill required to consistently incorporate ownership of 'coach and collaborate' responsibility into a strong existing 'command and control' culture. This involves the role of coach; the role of communications; and concept integration, which includes communications, coaching, and team dimensional training (TDT). The overall objective of the processes, TDT and shifting to a coaching culture through the application of a 360-deg assessment process, is to provide guidance for applying the skills learned in the programs. As depicted in Fig. 1, the TDT (a process that identifies 'strengths and challenges') can be greatly improved by applying good communications and coaching practices.
As the training programs were implemented, the participants were observed and coached in applying the new skills. Working with the shift supervisors, coaching values were developed and defined. Included were coaching and collaboration skills in a strong command and control environment, which became part of an assessment process designed to measure the culture shift through the application of a coaching-specific 360-deg assessment process. The resultant scoring compares the participant's self-assessment with management, peers, crew, and customers. The training program has resulted in a culture shift that was observable on the simulator. This subjective conclusion was based on two separate simulator observations conducted in the fall of 1999 and spring of 2000 prior to the application of the 360-deg assessment. The objective of assessment focused on coaching skills as follows: 1. ownership of coaching responsibilities; 2. coaching confidence; 3. coaching skills; 4. 360-deg assessment process status. The values to be assessed have been identified and defined by the stakeholders. The shift supervisor work group selected the following values: innovation, customer focus, consistency, competence, communication, coaching, teamwork, quality, ownership, leadership, knowledge, and integrity/trust. A measurement instrument has been designed, and the first assessment was completed in December 2000. Feedback reports for the participants were completed in January 2001. Feedback sessions began in February. Individual self-development plans were designed and implemented in late winter 2001, followed by a second measurement in May 2001. Quantitative results, the difference in the first two assessments, will be available in June 2001. The question will be answered: Have we shifted the culture toward error-free operation? (authors)

  11. The creep and intergranular cracking behavior of Ni-Cr-Fe-C alloys in 360 degree C water

    International Nuclear Information System (INIS)

    Angeliu, T.M.; Paraventi, D.J.; Was, G.S.

    1995-01-01

    Mechanical testing of controlled-purity Ni-xCr-9Fe-yC alloys at 360 C revealed an environmental enhancement in IG cracking and time-dependent deformation in high purity and primary water over that exhibited in argon. Dimples on the IG facets indicate a creep void nucleation and growth failure mode. IG cracking was primarily located at the interior of the specimen and not necessarily linked to direct contact with the environment. Controlled potential CERT experiments showed increases in IG cracking as the applied potential decreased, suggesting that hydrogen is detrimental to the mechanical properties. It is proposed that the environment, through the presence of hydrogen, enhances IG cracking by enhancing the matrix dislocation mobility. This is based on observations that dislocation-controlled creep controls the IG cracking of controlled-purity Ni-xCr-9Fe-yC in argon at 360 C and grain boundary cavitation and sliding results that show the environmental enhancement of the creep rate is primarily due to an increase in matrix plastic deformation. However, controlled potential CLT experiments did not exhibit a change in the creep rate as the applied potential decreased. While this does not clearly support hydrogen assisted creep, the material may already be saturated with hydrogen at these applied potentials and thus no effect was realized. Chromium and carbon decrease the IG cracking in high purity and primary water by increasing the creep resistance. The surface film does not play a significant role in the creep or IG cracking behavior under the conditions investigated

  12. Accounting for Human Neurocognitive Function in the Design and Evaluation of 360 Degree Situational Awareness Display Systems

    Science.gov (United States)

    2011-04-28

    situational awareness display systems. Authors: Jason S. Metcalfe, Thomas Mikulski, and Scott Dittman (DCS Corporation, 6909 Metro Park Dr., Suite 500, Alexandria, VA 22310; US Army RDECOM-TARDEC, 6501 E 11 Mile Rd, Warren, MI 48397-5000, USA). Abstract fragment: …gigabit Ethernet architecture that supports high-definition (HD) video transmission. Internally, the platform will have three independently controlled

  13. Limited access surgery for 360 degrees in-situ fusion in a dysraphic patient with high-grade spondylolisthesis.

    Science.gov (United States)

    König, M A; Boszczyk, B M

    2012-03-01

    Progressive high-grade spondylolisthesis can lead to spinal imbalance. High-grade spondylolisthesis is often reduced and fused in unbalanced pelvises, whereas in-situ fusion is used more often in balanced patients. The surgical goal is to recreate or maintain sagittal balance, but if anatomical reduction is necessary, the risk of neural damage, with nerve root disruption in the worst cases, is increased. Spinal dysraphism such as spina bifida or tethered cord syndrome makes it very difficult to achieve reduction and posterior fusion due to altered anatomy, putting the focus on anterior column support. Intensive neural structure manipulation should be avoided to reduce neurological complications and re-tethering in these cases. A 26-year-old patient with a history of diastematomyelia, occult spina bifida and tethered cord syndrome presented with new onset of severe low back pain, and bilateral L5/S1 sciatica after a fall. The X-ray demonstrated a grade III spondylolisthesis with spina bifida and the MRI scan revealed bilateral severely narrowed exit foramina L5 due to the listhesis. Because she was well balanced sagittally, the decision for in-situ fusion was made to minimise the risk of neurological disturbance through reduction. Anterior fusion was favoured to minimise manipulation of the dysraphic neural structures. Fusion was achieved via isolated access to the L4/L5 disc space. A L5 transvertebral hollow modular anchorage (HMA) screw was passed into the sacrum from the L4/L5 disc space and interbody fusion of L4/L5 was performed with a cage. The construct was augmented with pedicle screw fixation L4-S1 via a less invasive bilateral muscle split for better anterior biomechanical support. The postoperative course was uneventful and fusion was CT confirmed at the 6-month follow-up. At the last follow-up, she worked full time, was completely pain free and not limited in her free-time activities.
    The simultaneous presence of high-grade spondylolisthesis and spinal dysraphism makes it very difficult to find a decisive treatment plan because both posterior and anterior treatment strategies have advantages and disadvantages in these challenging cases. The described technique combines several surgical options to achieve 360° fusion with limited access, reducing the risk of neurological sequelae.

  14. Disease management 360 degrees: a scorecard approach to evaluating TRICARE's programs for asthma, congestive heart failure, and diabetes.

    Science.gov (United States)

    Yang, Wenya; Dall, Timothy M; Zhang, Yiduo; Hogan, Paul F; Arday, David R; Gantt, Cynthia J

    2010-08-01

    To assess the effect of TRICARE's asthma, congestive heart failure, and diabetes disease management programs using a scorecard approach. EVALUATION MEASURES: Patient healthcare utilization, financial, clinical, and humanistic outcomes. Absolute measures were translated into effect size and incorporated into a scorecard. Actual outcomes for program participants were compared with outcomes predicted in the absence of disease management. The predictive equations were established from regression models based on historical control groups (n = 39,217). Z scores were calculated for the humanistic measures obtained through a mailed survey. Administrative records containing medical claims, patient demographics and characteristics, and program participation status were linked using an encrypted patient identifier (n = 57,489). The study time frame is 1 year prior to program inception through 2 years afterward (October 2005-September 2008). A historical control group was identified with the baseline year starting October 2003 and a 1-year follow-up period starting October 2004. A survey was administered to a subset of participants 6 months after baseline assessment (39% response rate). Within the observation window--24 months for asthma and congestive heart failure, and 15 months for the diabetes program--we observed modest reductions in hospital days and healthcare cost for all 3 programs and reductions in emergency visits for 2 programs. Most clinical outcomes moved in the direction anticipated. The scorecard provided a useful tool to track performance of 3 regional contractors for each of 3 diseases and over time.

  15. Architectural Design Document for Camera Models

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Architecture of camera simulator models and data interface for the Maneuvering of Inspection/Servicing Vehicle (MIV) study.

  16. Graphic design of pinhole cameras

    Science.gov (United States)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.
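    The graphic method itself is not reproduced here, but the trade-off it optimises can be sketched numerically: the geometric blur spot grows with pinhole diameter while the diffraction spot shrinks with it. The 2.44·λ·f/d diffraction term and the resulting optimum d = sqrt(2.44·λ·f) are a common rule of thumb assumed for this sketch, not taken from the paper.

```python
import math

def blur(d, f, wavelength=550e-9):
    """Total blur-spot estimate: geometric shadow (~d) plus a
    diffraction term (~2.44 * lambda * f / d)."""
    return d + 2.44 * wavelength * f / d

def optimal_pinhole_diameter(f, wavelength=550e-9):
    """Minimising blur(d) over d gives d = sqrt(2.44 * lambda * f)."""
    return math.sqrt(2.44 * wavelength * f)

d_opt = optimal_pinhole_diameter(f=0.1)            # 100 mm focal length, green light
print(f"optimal diameter ~ {d_opt * 1e3:.2f} mm")  # ~0.37 mm
# The optimum beats both a half-size and a double-size pinhole:
assert blur(d_opt, 0.1) <= min(blur(0.5 * d_opt, 0.1), blur(2 * d_opt, 0.1))
```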

  17. The Use of Camera Traps in Wildlife

    OpenAIRE

    Yasin Uçarlı; Bülent Sağlam

    2013-01-01

    Camera traps are increasingly used in the abundance and density estimates of wildlife species. Camera traps are very good alternative for direct observation in case, particularly, steep terrain, dense vegetation covered areas or nocturnal species. The main reason for the use of camera traps is eliminated that the economic, personnel and time loss in a continuous manner at the same time in different points. Camera traps, motion and heat sensitive, can take a photo or video according to the mod...

  18. Stereo Pinhole Camera: Assembly and experimental activities

    OpenAIRE

    Santos, Gilmário Barbosa; Departamento de Ciência da Computação, Universidade do Estado de Santa Catarina, Joinville; Cunha, Sidney Pinto; Centro de Tecnologia da Informação Renato Archer, Campinas

    2015-01-01

    This work describes the assembling of a stereo pinhole camera for capturing stereo-pairs of images and proposes experimental activities with it. A pinhole camera can be as sophisticated as you want, or so simple that it could be handcrafted with practically recyclable materials. This paper describes the practical use of the pinhole camera throughout history and currently. Aspects of optics and geometry involved in the building of the stereo pinhole camera are presented with illustrations. Fur...

  19. The MVACS Robotic Arm Camera

    Science.gov (United States)

    Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.

    2001-08-01

    The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.

  20. An Open Standard for Camera Trap Data

    NARCIS (Netherlands)

    Forrester, Tavis; O'Brien, Tim; Fegraus, Eric; Jansen, P.A.; Palmer, Jonathan; Kays, Roland; Ahumada, Jorge; Stern, Beth; McShea, William

    2016-01-01

    Camera traps that capture photos of animals are a valuable tool for monitoring biodiversity. The use of camera traps is rapidly increasing and there is an urgent need for standardization to facilitate data management, reporting and data sharing. Here we offer the Camera Trap Metadata Standard as an

  1. A camera specification for tendering purposes

    International Nuclear Information System (INIS)

    Lunt, M.J.; Davies, M.D.; Kenyon, N.G.

    1985-01-01

    A standardized document is described which is suitable for sending to companies which are being invited to tender for the supply of a gamma camera. The document refers to various features of the camera, the performance specification of the camera, maintenance details, price quotations for various options and delivery, installation and warranty details. (U.K.)

  2. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    In this article we present an automatic camera calibration algorithm using multiple trajectories in a multiple camera network with non-overlapping field-of-views (FOV). Visible trajectories within a camera FOV are assumed to be measured with respect to the camera local co-ordinate system.

  3. Modelling Virtual Camera Behaviour Through Player Gaze

    DEFF Research Database (Denmark)

    Picardi, Andrea; Burelli, Paolo; Yannakakis, Georgios N.

    2012-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction largely depend on the placement and animation of the virtual camera. Therefore, virtual camera control plays a critical role in player experience and, thereby, in the overall quality of a computer game. Both game...... on the relationship between virtual camera, game-play and player behaviour. We run a game user experiment to shed some light on this relationship and identify relevant differences between camera behaviours through different game sessions, playing behaviours and player gaze patterns. Results show that users can...... be efficiently profiled in dissimilar clusters according to camera control as part of their game-play behaviour....

  4. Stereo Pinhole Camera: Assembly and experimental activities

    Directory of Open Access Journals (Sweden)

    Gilmário Barbosa Santos

    2015-05-01

    This work describes the assembling of a stereo pinhole camera for capturing stereo-pairs of images and proposes experimental activities with it. A pinhole camera can be as sophisticated as you want, or so simple that it could be handcrafted with practically recyclable materials. This paper describes the practical use of the pinhole camera throughout history and currently. Aspects of optics and geometry involved in the building of the stereo pinhole camera are presented with illustrations. Furthermore, experiments are proposed by using the images obtained by the camera for 3D visualization through a pair of anaglyph glasses, and the estimation of relative depth by triangulation is discussed.
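    The depth-by-triangulation estimate mentioned in the abstract reduces, for parallel pinhole cameras, to Z = f·B/d. A minimal sketch, with illustrative numbers that are assumptions rather than values from the paper:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Parallel-axis stereo triangulation: Z = f * B / d, with focal
    length f in pixels, baseline B in metres, disparity d in pixels."""
    return f_px * baseline_m / disparity_px

# A feature shifted 40 px between views from cameras 6 cm apart:
z = depth_from_disparity(f_px=800, baseline_m=0.06, disparity_px=40)
print(f"depth ~ {z:.2f} m")   # ~1.20 m
```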

  5. Adapting virtual camera behaviour through player modelling

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Research in virtual camera control has focused primarily on finding methods to allow designers to place cameras effectively and efficiently in dynamic and unpredictable environments, and to generate complex and dynamic plans for cinematography in virtual environments. In this article, we propose...... a novel approach to virtual camera control, which builds upon camera control and player modelling to provide the user with an adaptive point-of-view. To achieve this goal, we propose a methodology to model the player’s preferences on virtual camera movements and we employ the resulting models to tailor...

  6. Initial laboratory evaluation of color video cameras

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P L

    1991-01-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than identify an intruder. Monochrome cameras are adequate for that application and were selected over color cameras because of their greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Color information is useful for identification purposes, and color camera technology is rapidly changing. Thus, Sandia National Laboratories established an ongoing program to evaluate color solid-state cameras. Phase one resulted in the publication of a report titled "Initial Laboratory Evaluation of Color Video Cameras" (SAND-91-2579). It gave a brief discussion of imager chips and color cameras and monitors, described the camera selection, detailed traditional test parameters and procedures, and gave the results of the evaluation of twelve cameras. In phase two, six additional cameras were tested by the traditional methods and all eighteen cameras were tested by newly developed methods. This report details both the traditional and newly developed test parameters and procedures, and gives the results of both evaluations.

  7. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. The task of tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for human tracking over camera networks are addressed, including human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed based on two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed based on the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on the analyses of the current progress made toward human tracking techniques over camera networks.

  8. Image compensation for camera and lighting variability

    Science.gov (United States)

    Daley, Wayne D.; Britton, Douglas F.

    1996-12-01

    With the current trend of integrating machine vision systems in industrial manufacturing and inspection applications comes the issue of camera and illumination stabilization. Unless each application is built around a particular camera and a highly controlled lighting environment, the interchangeability of cameras or fluctuations in lighting become a problem, as each camera usually has a different response. An empirical approach is proposed where color tile data is acquired using the camera of interest, and a mapping is developed to some predetermined reference image using neural networks. A similar analytical approach based on a rough analysis of the imaging systems is also considered for deriving a mapping between cameras. Once a mapping has been determined, all data from one camera is mapped to correspond to the images of the other prior to performing any processing on the data. Instead of writing separate image processing algorithms for the particular image data being received, the image data is adjusted based on each particular camera and lighting situation. All that is required when swapping cameras is the new mapping for the camera being inserted. The image processing algorithms can remain the same as the input data has been adjusted appropriately. The results of utilizing this technique are presented for an inspection application.
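    As a simplified stand-in for the neural-network mapping described in the abstract, a 3×3 colour-correction matrix fitted by least squares on paired colour-tile readings illustrates the idea. All data below are synthetic assumptions for the sketch.

```python
import numpy as np

# Synthetic paired RGB readings of 24 colour tiles from two cameras.
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 1.0, size=(24, 3))       # reference-camera RGB
M_true = np.array([[0.9, 0.05, 0.0],            # hypothetical response of
                   [0.1, 0.80, 0.1],            # the second camera
                   [0.0, 0.05, 1.1]])
cam = ref @ M_true.T                            # second camera's tile readings

# Fit the correction M by least squares: minimise ||cam @ M - ref||
M, *_ = np.linalg.lstsq(cam, ref, rcond=None)
corrected = cam @ M                             # map camera data to reference
assert np.allclose(corrected, ref, atol=1e-8)   # mapping recovered exactly here
```

A real fit would use noisy tile measurements, where a neural network (as in the paper) can also capture non-linear sensor responses.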

  9. Using DSLR cameras in digital holography

    Science.gov (United States)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the two-dimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique; the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worth exploring. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offer a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information is different from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the problem of object replication reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented, and a theoretical deduction for the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera show the replication problem.
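    The replication problem discussed above can be demonstrated in one dimension with a toy sketch (not the authors' simulation): keeping only every other pixel, as a single Bayer colour channel effectively does, and zero-filling the gaps copies the spectrum to a shifted location.

```python
import numpy as np

N = 256
n = np.arange(N)
x = np.cos(2 * np.pi * 8 * n / N)        # a pure tone at frequency bin 8

mask = np.zeros(N)
mask[::2] = 1.0                          # the colour channel's sampling grid
X = np.abs(np.fft.fft(x * mask))         # spectrum of the subsampled signal

assert np.isclose(X[8], 64.0)            # original tone (amplitude halved)
assert np.isclose(X[120], 64.0)          # replica folded to bin N/2 - 8
assert np.abs(np.fft.fft(x))[120] < 1e-9 # absent without subsampling
```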

  10. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  11. Optimising camera traps for monitoring small mammals.

    Science.gov (United States)

    Glen, Alistair S; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  12. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras, however there are ways to ov...

  13. Approximations to camera sensor noise

    Science.gov (United States)

    Jin, Xiaodan; Hirakawa, Keigo

    2013-02-01

Noise is present in all image sensor data. The Poisson distribution is said to model the stochastic nature of the photon arrival process, while readout/thermal noise is commonly approximated by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise, such as Fano and quantization noise, also contribute to the overall noise profile. The question remains, however, of how best to model the combined sensor noise. Though additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given for these models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to real camera noise than SD-AWGN. We suggest further modifications to the Poisson model that may improve the noise model.
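The distinction the authors draw can be seen in simulation: a Poisson sample and an SD-AWGN sample can share the same mean and variance yet differ in shape (higher moments). A minimal numpy sketch, not the paper's experiment; the signal level and sample size are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sensor: expected photo-electron count per pixel (signal level).
signal = np.full(100_000, 50.0)

# Poisson model: integer counts, variance equal to the mean, skewed.
poisson = rng.poisson(signal).astype(float)

# SD-AWGN model: Gaussian noise whose variance tracks the signal.
sd_awgn = signal + rng.normal(0.0, np.sqrt(signal))

def skewness(x):
    """Third standardised moment: 0 for a Gaussian, 1/sqrt(mean) for Poisson."""
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

skew_poisson = skewness(poisson)
skew_gaussian = skewness(sd_awgn)

# Means and variances agree, but the Poisson samples are visibly skewed:
print(poisson.mean(), poisson.var(), skew_poisson)
print(sd_awgn.mean(), sd_awgn.var(), skew_gaussian)
```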

  14. Streak camera recording of interferometer fringes

    International Nuclear Information System (INIS)

    Parker, N.L.; Chau, H.H.

    1977-01-01

The use of an electronic high-speed camera in the streaking mode to record interference fringe motion from a velocity interferometer is discussed. Advantages of this method over the photomultiplier tube-oscilloscope approach are delineated. Performance testing and data for the electronic streak camera are discussed. In the camera tests, the velocity profile of a Mylar flyer accelerated by an electrically exploded bridge and the jump-off velocity of metal targets struck by these Mylar flyers are measured. Advantages of the streak camera include portability, low cost, ease of operation and maintenance, simplified interferometer optics, and rapid data analysis.

  15. Decision about buying a gamma camera

    International Nuclear Information System (INIS)

    Ganatra, R.D.

    1992-01-01

A large part of the referrals to a nuclear medicine department is usually for imaging studies. Sooner or later, the nuclear medicine specialist will be called upon to decide when and what type of gamma camera to buy. There is no longer an option of choosing between a rectilinear scanner and a gamma camera, as the former is virtually off the market. The decisions one has to make are when to invest in a gamma camera, and then on what basis to select it.

  16. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many

  17. Ultra fast x-ray streak camera

    International Nuclear Information System (INIS)

    Coleman, L.W.; McConaghy, C.F.

    1975-01-01

A unique ultrafast x-ray sensitive streak camera, with a time resolution of 50 ps, has been built and operated. A 100 Å thick gold photocathode on a beryllium vacuum window is used in a modified commercial image converter tube. The x-ray streak camera has been used in experiments to observe time-resolved emission from laser-produced plasmas. (author)

  18. An Open Standard for Camera Trap Data

    Directory of Open Access Journals (Sweden)

    Tavis Forrester

    2016-12-01

Camera traps that capture photos of animals are a valuable tool for monitoring biodiversity. The use of camera traps is rapidly increasing and there is an urgent need for standardization to facilitate data management, reporting and data sharing. Here we offer the Camera Trap Metadata Standard as an open data standard for storing and sharing camera trap data, developed by experts from a variety of organizations. The standard captures information necessary to share data between projects and offers a foundation for collecting the more detailed data needed for advanced analysis. The data standard captures information about study design, the type of camera used, and the location and species names for all detections in a standardized way. This information is critical for accurately assessing results from individual camera trapping projects and for combining data from multiple studies for meta-analysis. This data standard is an important step in aligning camera trapping surveys with best practices in data-intensive science. Ecology is moving rapidly into the realm of big data, and central repositories for camera trap data are emerging as a critical tool. This data standard will help researchers standardize data terms, align past data to new repositories, and provide a framework for utilizing data across repositories and research projects to advance animal ecology and conservation.
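As a rough illustration of the kind of record such a standard organizes, consider the sketch below. The field names are invented for this example and are not the standard's actual schema; they only mirror the categories the abstract lists (study design, camera type, location, detections with species names):

```python
import json

# Illustrative field names only -- not the standard's actual schema.
record = {
    "project": {"name": "Invasive mammal survey", "design": "systematic grid"},
    "deployment": {
        "camera_make": "ExampleCam",                # type of camera used
        "latitude": -41.29, "longitude": 174.78,    # camera location
        "start": "2016-01-05", "end": "2016-03-05",
    },
    "detections": [
        {"timestamp": "2016-01-12T03:41:00", "species": "Mustela erminea"},
        {"timestamp": "2016-02-02T22:10:00", "species": "Felis catus"},
    ],
}

# Serialising to a common format is what makes such records portable
# between projects and repositories.
print(json.dumps(record, indent=2)[:60])
```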

  19. Active spectral imaging nondestructive evaluation (SINDE) camera

    Energy Technology Data Exchange (ETDEWEB)

    Simova, E.; Rochefort, P.A., E-mail: eli.simova@cnl.ca [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada)

    2016-06-15

    A proof-of-concept video camera for active spectral imaging nondestructive evaluation has been demonstrated. An active multispectral imaging technique has been implemented in the visible and near infrared by using light emitting diodes with wavelengths spanning from 400 to 970 nm. This shows how the camera can be used in nondestructive evaluation to inspect surfaces and spectrally identify materials and corrosion. (author)

  20. CCD Color Camera Characterization for Image Measurements

    NARCIS (Netherlands)

    Withagen, P.J.; Groen, F.C.A.; Schutte, K.

    2007-01-01

In this article, we analyze a range of different types of cameras for their use in measurements. We verify a general model of a charge-coupled device camera using experiments. This model includes gain and offset, additive and multiplicative noise, and gamma correction. It is shown that for

  1. Digital airborne camera introduction and technology

    CERN Document Server

    Sandau, Rainer

    2014-01-01

The last decade has seen great innovations in airborne cameras. This book is the first ever written on the topic and describes all components of a digital airborne camera, ranging from the object to be imaged to the mass memory device.

  2. Driving with head-slaved camera system

    NARCIS (Netherlands)

    Oving, A.B.; Erp, J.B.F. van

    2001-01-01

    In a field experiment, we tested the effectiveness of a head-slaved camera system for driving an armoured vehicle under armour. This system consists of a helmet-mounted display (HMD), a headtracker, and a motion platform with two cameras. Subjects performed several driving tasks on paved and in

  3. High resolution RGB color line scan camera

    Science.gov (United States)

    Lynch, Theodore E.; Huettig, Fred

    1998-04-01

This paper describes a color line scan camera family that is available with 6000, 8000 or 10000 pixels per color channel, utilizes off-the-shelf lenses, interfaces with currently available frame grabbers, includes on-board pixel-by-pixel offset correction, and is configurable and controllable via an RS232 serial port for computer-controlled or stand-alone operation. This line scan camera is based on an existing 8000-element monochrome line scan camera designed by AOA for OEM use. The new color version includes improvements such as better packaging and additional user features which make the camera easier to use. The heart of the camera is a tri-linear CCD sensor with on-chip color balancing for maximum accuracy and pinned photodiodes for low-lag response. Each color channel is digitized to 12 bits and all three channels are multiplexed together so that the resulting camera output video is either a 12- or 8-bit data stream at a rate of up to 24 Mpixels/s. Conversion from 12 to 8 bits, or user-defined gamma, is accomplished by on-board user-defined video look-up tables. The camera has two user-selectable operating modes: a low-speed, high-sensitivity mode and a high-speed, reduced-sensitivity mode. The intended uses of the camera include industrial inspection, digital archiving, document scanning, and graphic arts applications.
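The 12-to-8-bit conversion via a look-up table can be sketched as follows. This is a generic gamma LUT, not the camera's actual firmware table; the gamma value is arbitrary:

```python
import numpy as np

# Build a 4096-entry LUT mapping 12-bit input codes to 8-bit output with a
# user-defined gamma (gamma = 1.0 reproduces a plain linear mapping).
def gamma_lut(gamma: float = 2.2) -> np.ndarray:
    codes = np.arange(4096) / 4095.0            # normalise 12-bit codes
    return np.round(255.0 * codes ** (1.0 / gamma)).astype(np.uint8)

lut = gamma_lut(2.2)

# Applying the LUT to raw 12-bit pixels is a single indexed lookup, which
# is why it maps naturally onto on-board table hardware.
raw = np.array([0, 256, 2048, 4095], dtype=np.uint16)
print(lut[raw])   # dark values are lifted relative to a linear mapping
```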

  4. Optimization of Camera Parameters in Volume Intersection

    Science.gov (United States)

    Sakamoto, Sayaka; Shoji, Kenji; Toyama, Fubito; Miyamichi, Juichi

Volume intersection is one of the simplest techniques for reconstructing 3-D shape from 2-D silhouettes. 3-D shapes can be reconstructed from multiple view images by back-projecting them from the corresponding viewpoints and intersecting the resulting solid cones. The camera position and orientation (extrinsic camera parameters) of each viewpoint with respect to the object are needed to accomplish reconstruction. However, even a small variation in the camera parameters makes the reconstructed 3-D shape smaller than that obtained with the exact parameters. The problem of optimizing camera parameters deals with determining exact ones based on multiple silhouette images and approximate ones. This paper examines attempts to optimize camera parameters by reconstructing a 3-D shape via volume intersection and then maximizing the volume of that shape. We have tested the proposed method using a VRML model, applying the downhill simplex method for the optimization. The results of the experiments show that the maximized volume of the reconstructed 3-D shape is a useful criterion for optimizing camera parameters in arrangements like the one in this experiment.
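The volume-maximization idea can be illustrated in a 2-D toy problem, with width-2 strips standing in for silhouette cones: wrong camera offsets shrink the intersection, so maximizing its area recovers them. The paper optimizes with the downhill simplex method; the dependency-free cyclic coordinate search below is a stand-in playing the same role, and all geometry here is invented for the sketch:

```python
import numpy as np

# 2-D stand-in for volume intersection: the "object" is the intersection of
# three width-2 strips (orthographic silhouettes at 0, 60 and 120 degrees).
# Camera 1 is the reference; the offsets of cameras 2 and 3 are the unknown
# extrinsic parameters to recover.
DIRS = [np.deg2rad(a) for a in (0.0, 60.0, 120.0)]

xs = np.linspace(-2, 2, 401)
X, Y = np.meshgrid(xs, xs)

def area(offsets):
    """Area of the back-projected intersection for assumed camera offsets."""
    inside = np.ones_like(X, dtype=bool)
    for theta, t in zip(DIRS, offsets):
        proj = X * np.cos(theta) + Y * np.sin(theta) - t
        inside &= np.abs(proj) <= 1.0
    return inside.mean() * 16.0          # grid covers a 4 x 4 square

exact = area([0.0, 0.0, 0.0])
perturbed_offsets = [0.0, 0.3, -0.2]     # small extrinsic errors
perturbed = area(perturbed_offsets)
# As in the paper: wrong parameters shrink the reconstructed shape.
print(exact, perturbed)

# Recover the offsets by maximising the area (coordinate search in place of
# the paper's downhill simplex).
params = list(perturbed_offsets)
step = 0.2
while step > 1e-3:
    improved = False
    for i in (1, 2):                     # camera 1 stays the reference
        for delta in (+step, -step):
            trial = params.copy()
            trial[i] += delta
            if area(trial) > area(params):
                params = trial
                improved = True
    if not improved:
        step /= 2
print(params, area(params))              # offsets move back towards zero
```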

  5. Laser scanning camera inspects hazardous area

    International Nuclear Information System (INIS)

    Fryatt, A.; Miprode, C.

    1985-01-01

Main operational characteristics of a new laser scanning camera are presented. The camera is intended primarily for low-level, high-resolution viewing inside nuclear reactors. It uses a He-Ne laser beam raster; by detecting the reflected light by means of a photomultiplier, the subject under observation can be reconstructed in an electronic video store and reviewed on a conventional monitor screen.

  6. Rosetta Star Tracker and Navigation Camera

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

Proposal in response to the Invitation to Tender (ITT) issued by Matra Marconi Space (MSS) for the procurement of the ROSETTA Star Tracker and Navigation Camera.

  7. Centering mount for a gamma camera

    International Nuclear Information System (INIS)

    Mirkhodzhaev, A.Kh.; Kuznetsov, N.K.; Ostryj, Yu.E.

    1988-01-01

A device for centering a γ-camera detector for radionuclide diagnosis is described. It permits the use of available medical couches instead of a table with a transparent top. The device can be used for centering the detector (when it is fixed at the lower end of a γ-camera) on a required area of the patient's body.

  8. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.
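One classical building block behind such integrity and timestamping guarantees is a keyed hash chain over frames. The sketch below is a generic illustration, not the paper's TPM-based design: there, keys and platform state are protected by Trusted Computing hardware, which is not modelled here:

```python
import hmac
import hashlib

# Hash-chain sketch of per-frame integrity/authenticity (illustrative only;
# in the paper the key would be protected by the TPM, not a constant).
KEY = b"device-provisioned-secret"

def sign_frame(prev_tag: bytes, frame: bytes, timestamp: float) -> bytes:
    """Chain each frame's MAC to the previous one, so deleting or
    reordering frames breaks verification downstream."""
    msg = prev_tag + str(timestamp).encode() + frame
    return hmac.new(KEY, msg, hashlib.sha256).digest()

frames = [b"frame-0", b"frame-1", b"frame-2"]
tags, prev, ts = [], b"\x00" * 32, 1700000000.0
for f in frames:
    prev = sign_frame(prev, f, ts)
    tags.append(prev)
    ts += 0.04                     # 25 fps timestamps

def verify(frames, tags, t0):
    """Replay the chain; tampering with any frame invalidates its tag
    and every tag after it."""
    prev, ts = b"\x00" * 32, t0
    for f, tag in zip(frames, tags):
        prev = sign_frame(prev, f, ts)
        if not hmac.compare_digest(prev, tag):
            return False
        ts += 0.04
    return True

print(verify(frames, tags, 1700000000.0))                              # True
print(verify([b"frame-0", b"EVIL", b"frame-2"], tags, 1700000000.0))   # False
```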

  9. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

The plenoptic camera is a single-lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more about plenoptic camera imaging, we present a wavefront analysis of plenoptic camera imaging from the perspective of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated by scalar diffraction theory, and the depth estimation is redescribed based on physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations; the difference between imaging analyses based on geometric optics and physical optics is also shown in the simulations. (paper)

  10. Lessons Learned from Crime Caught on Camera

    Science.gov (United States)

    Bernasco, Wim

    2018-01-01

    Objectives: The widespread use of camera surveillance in public places offers criminologists the opportunity to systematically and unobtrusively observe crime, their main subject matter. The purpose of this essay is to inform the reader of current developments in research on crimes caught on camera. Methods: We address the importance of direct observation of behavior and review criminological studies that used observational methods, with and without cameras, including the ones published in this issue. We also discuss the uses of camera recordings in other social sciences and in biology. Results: We formulate six key insights that emerge from the literature and make recommendations for future research. Conclusions: Camera recordings of real-life crime are likely to become part of the criminological tool kit that will help us better understand the situational and interactional elements of crime. Like any source, it has limitations that are best addressed by triangulation with other sources. PMID:29472728

  11. Lessons Learned from Crime Caught on Camera

    DEFF Research Database (Denmark)

    Lindegaard, Marie Rosenkrantz; Bernasco, Wim

    2018-01-01

    Objectives: The widespread use of camera surveillance in public places offers criminologists the opportunity to systematically and unobtrusively observe crime, their main subject matter. The purpose of this essay is to inform the reader of current developments in research on crimes caught on camera....... Methods: We address the importance of direct observation of behavior and review criminological studies that used observational methods, with and without cameras, including the ones published in this issue. We also discuss the uses of camera recordings in other social sciences and in biology. Results: We...... formulate six key insights that emerge from the literature and make recommendations for future research. Conclusions: Camera recordings of real-life crime are likely to become part of the criminological tool kit that will help us better understand the situational and interactional elements of crime. Like...

  12. Architecture of PAU survey camera readout electronics

    Science.gov (United States)

    Castilla, Javier; Cardiel-Sas, Laia; De Vicente, Juan; Illa, Joseph; Jimenez, Jorge; Maiorino, Marino; Martinez, Gustavo

    2012-07-01

PAUCam is a new camera for studying the physics of the accelerating universe. The camera will consist of eighteen 2Kx4K HPK CCDs: sixteen for science and two for guiding. The camera will be installed at the prime focus of the WHT (William Herschel Telescope). In this contribution, the architecture of the readout electronics system is presented. Back-End and Front-End electronics are described. The Back-End consists of clock, bias and video processing boards, mounted on Monsoon crates. The Front-End is based on patch panel boards. These boards are plugged outside the camera feed-through panel for signal distribution. Inside the camera, individual preamplifier boards plus kapton cables complete the path to each CCD. The overall signal distribution and grounding scheme is shown in this paper.

  13. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Romps, David [Univ. of California, Berkeley, CA (United States); Oktem, Rusen [Univ. of California, Berkeley, CA (United States)

    2017-10-31

The three pairs of stereo camera setups aim to provide synchronized and stereo-calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pairs, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory to cover the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined together to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.
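Pairing the two views of one stereo setup reduces to triangulation. A minimal linear (DLT) sketch with synthetic camera matrices follows; the intrinsics and baseline are illustrative numbers, not the handbook's calibration parameters:

```python
import numpy as np

# Two synthetic pinhole cameras roughly 1 m apart (illustrative geometry).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # reference camera
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # shifted in x

def project(P, Xw):
    """Project a 3D world point to pixel coordinates."""
    x = P @ np.append(Xw, 1.0)
    return x[:2] / x[2]

def triangulate(P1, x1, P2, x2):
    """Linear DLT triangulation: stack the reprojection constraints and
    take the smallest right singular vector."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

feature = np.array([2.0, 3.0, 10.0])          # 3D feature point in metres
x1, x2 = project(P1, feature), project(P2, feature)
print(triangulate(P1, x1, P2, x2))            # recovers ~[2, 3, 10]
```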

  14. Superconducting millimetre-wave cameras

    Science.gov (United States)

    Monfardini, Alessandro

    2017-05-01

I present a review of the developments in kinetic inductance detectors (KID) for mm-wave and THz imaging-polarimetry in the framework of the Grenoble collaboration. The main application that we have targeted so far is large field-of-view astronomy. I focus in particular on our own experiment: NIKA2 (Néel IRAM KID Arrays). NIKA2 is today the largest millimetre camera available to the astronomical community for general-purpose observations. It consists of a dual-band, dual-polarisation, multi-thousand-pixel system installed at the IRAM 30-m telescope at Pico Veleta (Spain). I start with a general introduction covering the underlying physics and the KID working principle. Then I describe briefly the instrument and the detectors, concluding with examples of pictures taken on the sky by NIKA2 and its predecessor, NIKA. Thanks to these results, together with the relative simplicity and low cost of KID fabrication, industrial applications requiring passive millimetre-THz imaging have now become possible.

  15. Towards Adaptive Virtual Camera Control In Computer Games

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, we cluster gaze and camera information to identify camera behaviour profiles and we employ...

  16. SLR digital camera for forensic photography

    Science.gov (United States)

    Har, Donghwan; Son, Youngho; Lee, Sungwon

    2004-06-01

Forensic photography, which was systematically established in the late 19th century by Alphonse Bertillon of France, has developed considerably over the past 100 years, and this development will accelerate further with high technologies, in particular digital technology. This paper reviews three studies to answer the question: can the SLR digital camera replace traditional silver halide ultraviolet and infrared photography? 1. Comparison of the relative ultraviolet and infrared sensitivity of the SLR digital camera to silver halide photography. 2. How much is ultraviolet or infrared sensitivity improved when the UV/IR cutoff filter built into the SLR digital camera is removed? 3. Comparison of the relative sensitivity of CCD and CMOS sensors for ultraviolet and infrared. The test results showed that the SLR digital camera has a very low sensitivity for ultraviolet and infrared. The cause was found to be the UV/IR cutoff filter mounted in front of the image sensor. Removing the UV/IR cutoff filter significantly improved the sensitivity for ultraviolet and infrared. Particularly for infrared, the sensitivity of the SLR digital camera was better than that of silver halide film. This shows the possibility of replacing silver halide ultraviolet and infrared photography with the SLR digital camera. Thus, the SLR digital camera seems to be useful for forensic photography, which deals with a lot of ultraviolet and infrared photographs.

  17. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.
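The core of CamShift is a mean-shift iteration: a search window repeatedly moves to the centroid of the histogram back-projection mass it covers. The toy below runs that iteration on a synthetic back-projection map; the cameras in the paper run full CamShift (with adaptive window size) on colour histograms, which is not reproduced here:

```python
import numpy as np

# Synthetic back-projection: a bright blob marks where the tracked object's
# colours are likely (illustrative stand-in for a histogram back-projection).
H, W = 120, 160
yy, xx = np.mgrid[0:H, 0:W]
target = (92.0, 40.0)                      # true object centre (x, y)
backproj = np.exp(-((xx - target[0]) ** 2 + (yy - target[1]) ** 2) / 200.0)

def mean_shift(backproj, window, iters=20):
    """Repeatedly move a fixed-size window to the centroid of the
    back-projection mass it covers."""
    x, y, w, h = window
    for _ in range(iters):
        patch = backproj[y:y + h, x:x + w]
        m = patch.sum()
        if m == 0:
            break
        cy, cx = np.mgrid[0:h, 0:w]
        dx = (patch * cx).sum() / m - (w - 1) / 2
        dy = (patch * cy).sum() / m - (h - 1) / 2
        x = int(round(np.clip(x + dx, 0, W - w)))
        y = int(round(np.clip(y + dy, 0, H - h)))
    return x, y, w, h

# Start the window away from the object; mean shift converges onto it.
x, y, w, h = mean_shift(backproj, (60, 10, 30, 30))
print(x + w / 2, y + h / 2)    # close to the true centre (92, 40)
```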

  18. UAV CAMERAS: OVERVIEW AND GEOMETRIC CALIBRATION BENCHMARK

    Directory of Open Access Journals (Sweden)

    M. Cramer

    2017-08-01

Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes) modified cameras as known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark to check selected UAV-based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus here is on geometry, which is tightly linked to the process of geometric calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep constant geometry and thus cannot be seen as metric cameras. Still, some of the commercial systems are quite stable over time, as proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. Especially in such scenarios, close-to-metric UAV cameras may have advantages. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.

  19. Uav Cameras: Overview and Geometric Calibration Benchmark

    Science.gov (United States)

    Cramer, M.; Przybilla, H.-J.; Zurhorst, A.

    2017-08-01

Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes) modified cameras as known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark to check selected UAV-based camera systems in well-defined, reproducible environments. Such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus here is on geometry, which is tightly linked to the process of geometric calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep constant geometry and thus cannot be seen as metric cameras. Still, some of the commercial systems are quite stable over time, as proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration. Especially in such scenarios, close-to-metric UAV cameras may have advantages. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.

  20. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-04-01

The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors on the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables
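The pointing geometry behind such a scheme can be sketched briefly: once the chained 4 x 4 manipulator transforms yield the target's position in the camera-base frame, pan and tilt follow from arctangents, and a deadband suppresses small corrections. This is a toy illustration with assumed frame conventions, not the ORNL implementation:

```python
import math

# Kinematic pan/tilt pointing with a +/-2 degree deadband (toy sketch; the
# chained 4 x 4 manipulator transforms are collapsed into one target point,
# and the frame conventions are assumptions of this example).
DEADBAND = math.radians(2.0)

def pointing_angles(target_xyz):
    """Pan/tilt that aim the camera's boresight at a point expressed in the
    camera-base frame."""
    x, y, z = target_xyz
    pan = math.atan2(y, x)
    tilt = math.atan2(z, math.hypot(x, y))
    return pan, tilt

def command(current, desired):
    """Move only when the error leaves the deadband, so the camera does not
    chase every small motion (avoiding operator 'seasickness')."""
    error = desired - current
    return current + error if abs(error) > DEADBAND else current

pan, tilt = pointing_angles((2.0, 2.0, 1.0))
print(math.degrees(pan), math.degrees(tilt))   # ~45 deg pan, ~19.5 deg tilt
new_pan = command(math.radians(44.0), pan)     # 1 deg error: inside deadband
print(math.degrees(new_pan))                   # camera stays put
```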

  1. First results from the TOPSAT camera

    Science.gov (United States)

    Greenway, Paul; Tosh, Ian; Morris, Nigel; Burton, Gary; Cawley, Steve

    2017-11-01

    The TopSat camera is a low cost remote sensing imager capable of producing 2.5 metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. The instrument was designed and assembled at the Space Science & Technology Department of the CCLRC's Rutherford Appleton Laboratory (RAL) in the UK, and was launched on the 27th October 2005 from Plesetsk Cosmodrome in Northern Russia on a Kosmos-3M. The camera utilises an off-axis three mirror system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping the costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, which is a collaboration between four UK organisations; QinetiQ, Surrey Satellite Technology Ltd (SSTL), RAL and Infoterra. Its objective is to demonstrate provision of rapid response high resolution imagery to fixed and mobile ground stations using a low cost minisatellite. The paper "Development of the TopSat Camera" presented by RAL at the 5th ICSO in 2004 described the opto-mechanical design, assembly, alignment and environmental test methods implemented. Now that the spacecraft is in orbit and successfully acquiring images, this paper presents the first results from the camera and makes an initial assessment of the camera's in-orbit performance.

  2. Automatic camera tracking for remote manipulators

    Energy Technology Data Exchange (ETDEWEB)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-07-01

The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables.

  3. Automatic camera tracking for remote manipulators

    Energy Technology Data Exchange (ETDEWEB)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-04-01

The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables.

  4. Towards next generation 3D cameras

    Science.gov (United States)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real-world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (<100 microns resolution) scans in extremely demanding scenarios with low-cost components. Several of these cameras are making a practical impact in industrial automation, being adopted in robotic inspection and assembly systems.

  5. New camera systems for fuel services

    International Nuclear Information System (INIS)

    Hummel, W.; Beck, H.J.

    2010-01-01

    AREVA NP Fuel Services have many years of experience in visual examination and measurements on fuel assemblies and associated core components by using state of the art cameras and measuring technologies. The used techniques allow the surface and dimensional characterization of materials and shapes by visual examination. New enhanced and sophisticated technologies for fuel services f. e. are two shielded color camera systems for use under water and close inspection of a fuel assembly. Nowadays the market requirements for detecting and characterization of small defects (lower than the 10th of one mm) or cracks and analyzing surface appearances on an irradiated fuel rod cladding or fuel assembly structure parts have increased. Therefore it is common practice to use movie cameras with higher resolution. The radiation resistance of high resolution CCD cameras is in general very low and it is not possible to use them unshielded close to a fuel assembly. By extending the camera with a mirror system and shielding around the sensitive parts, the movie camera can be utilized for fuel assembly inspection. AREVA NP Fuel Services is now equipped with such kind of movie cameras. (orig.)

  6. Camera-based driver assistance systems

    Science.gov (United States)

    Grimm, Michael

    2013-04-01

    In recent years, camera-based driver assistance systems have taken an important step: from laboratory setup to series production. This tutorial gives a brief overview on the technology behind driver assistance systems, presents the most significant functionalities and focuses on the processes of developing camera-based systems for series production. We highlight the critical points which need to be addressed when camera-based driver assistance systems are sold in their thousands, worldwide - and the benefit in terms of safety which results from it.

  7. Scintillation camera with second order resolution

    International Nuclear Information System (INIS)

    Muehllehner, G.

    1976-01-01

    A scintillation camera for use in radioisotope imaging to determine the concentration of radionuclides in a two-dimensional area is described in which means is provided for second order positional resolution. The phototubes, which normally provide only a single order of resolution, are modified to provide second order positional resolution of radiation within an object positioned for viewing by the scintillation camera. The phototubes are modified in that multiple anodes are provided to receive signals from the photocathode in a manner such that each anode is particularly responsive to photoemissions from a limited portion of the photocathode. Resolution of radioactive events appearing as an output of this scintillation camera is thereby improved

  8. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different....... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios...

  9. Determining camera parameters for round glassware measurements

    International Nuclear Information System (INIS)

    Baldner, F O; Costa, P B; Leta, F R; Gomes, J F S; Filho, D M E S

    2015-01-01

    Nowadays there are many types of accessible cameras, including digital single lens reflex ones. Although these cameras are not usually employed in machine vision applications, they can be an interesting choice. However, these cameras have many parameters available to be chosen by the user, and it may be difficult to select the best of these in order to acquire images with the needed metrological quality. This paper proposes a methodology to select a set of parameters that will supply a machine vision system with images of the needed quality, considering the measurements required of laboratory glassware

  10. Distributed Smart Cameras for Aging in Place

    National Research Council Canada - National Science Library

    Williams, Adam; Xie, Dan; Ou, Shichao; Grupen, Roderic; Hanson, Allen; Riseman, Edward

    2006-01-01

    .... The fall detector relies on features extracted from video by the camera nodes, which are sent to a central processing node where one of several machine learning techniques are applied to detect a fall...

  11. Camera Trajectory from Wide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to the structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self localization, and object recognition. There are essential issues for a reliable camera trajectory estimation, for instance, choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes the image feature matching very difficult (or impossible) and the camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens converter, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on converter with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, the image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the
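
    The abstract is cut off before stating Mičušík's actual two-parameter model, so as a stand-in, here is the simplest radially symmetric fish-eye mapping (the equidistant model r = f·θ), scaled to the 180° view angle and 1600-pixel image circle quoted above. The focal constant is derived from those two figures, not taken from the paper:

```python
import math

# Focal constant chosen so a ray 90 deg off-axis lands on the 800-pixel
# radius of the 1600-pixel circular view (an assumption, not a quoted value).
F_PIX = 800.0 / (math.pi / 2.0)

def equidistant_project(theta_rad, f_pix=F_PIX):
    """Image radius (pixels) for a ray at angle theta from the optical
    axis, under the equidistant model r = f * theta."""
    return f_pix * theta_rad

def equidistant_unproject(r_pix, f_pix=F_PIX):
    """Inverse mapping: ray angle (rad) for a given image radius."""
    return r_pix / f_pix
```

    Real fish-eye calibration models add distortion terms on top of this ideal mapping; the two-parameter model mentioned in the record plays that role.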

  12. Highly Sensitive Flash LADAR Camera, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — A highly sensitive 640 x 480-element flash LADAR camera will be developed that is capable of 100-Hz rates with better than 5-cm range precision. The design is based...

  13. Projector-Camera Systems for Immersive Training

    National Research Council Canada - National Science Library

    Treskunov, Anton; Pair, Jarrell

    2006-01-01

    .... These projector-camera systems effectively paint the real world with digital light. Any surface can become an interactive projection screen allowing unprepared spaces to be transformed into an immersive environment...

  14. Ge Quantum Dot Infrared Imaging Camera Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Luna Innovations Incorporated proposes to develop a high performance Ge quantum dots-based infrared (IR) imaging camera on Si substrate. The high sensitivity, large...

  15. CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    G. Kemper

    2016-06-01

    Full Text Available Besides the creation of virtual animated 3D City models and analysis for homeland security and city planning, the accurate determination of geometric features out of oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires a precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and will present examples of the calibration flight with the final 3D City model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post calibration in order to detect variations in the single camera calibration and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix equipped with a 50 mm lens while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount which creates floating antenna-IMU lever arms. They had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots of all 5 cameras and registered GPS/IMU data. This specific mission was designed with two different altitudes and additional cross lines at each flying height. The five images from each exposure position have no overlaps, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well distributed measured points, which is a satisfying number for the camera calibration. In a first

  16. Portable mini gamma camera for medical applications

    CERN Document Server

    Porras, E; Benlloch, J M; El-Djalil-Kadi-Hanifi, M; López, S; Pavon, N; Ruiz, J A; Sánchez, F; Sebastiá, A

    2002-01-01

    A small, portable and low-cost gamma camera for medical applications has been developed and clinically tested. This camera, based on a scintillator crystal and a Position Sensitive Photo-Multiplier Tube, has a useful field of view of 4.6 cm diameter and provides 2.2 mm of intrinsic spatial resolution. Its mobility and light weight allow it to reach the patient from any desired direction. This camera images small organs with high efficiency and so addresses the demand for devices for specific clinical applications. In this paper, we present the camera and briefly describe the procedures that have led us to choose its configuration and the image reconstruction method. The clinical tests and diagnostic capability are also presented and discussed.

  17. Compact Optical Technique for Streak Camera Calibration

    International Nuclear Information System (INIS)

    Curt Allen; Terence Davies; Frans Janson; Ronald Justin; Bruce Marshall; Oliver Sweningsen; Perry Bell; Roger Griffith; Karla Hagans; Richard Lerche

    2004-01-01

    The National Ignition Facility is under construction at the Lawrence Livermore National Laboratory for the U.S. Department of Energy Stockpile Stewardship Program. Optical streak cameras are an integral part of the experimental diagnostics instrumentation. To accurately reduce data from the streak cameras a temporal calibration is required. This article describes a technique for generating trains of precisely timed short-duration optical pulses that are suitable for temporal calibrations

  18. MCP gated x-ray framing camera

    Science.gov (United States)

    Cai, Houzhi; Liu, Jinyuan; Niu, Lihong; Liao, Hua; Zhou, Junlan

    2009-11-01

    A four-frame gated microchannel plate (MCP) camera is described in this article. Each frame photocathode coated with gold on the MCP is part of a transmission line with open circuit end driven by the gating electrical pulse. The gating pulse is 230 ps in width and 2.5 kV in amplitude. The camera is tested by illuminating its photocathode with ultraviolet laser pulses, 266 nm in wavelength, which shows exposure time as short as 80 ps.

  19. Imaging camera with multiwire proportional chamber

    International Nuclear Information System (INIS)

    Votruba, J.

    1980-01-01

    The camera for imaging radioisotope distributions, for use in nuclear medicine or other applications, claimed in the patent, is provided with two multiwire lattices for the x-coordinate connected to a first coincidence circuit, and two multiwire lattices for the y-coordinate connected to a second coincidence circuit. This arrangement eliminates the need for a collimator and increases camera sensitivity while reducing production cost. (Ha)

  20. An imaging system for a gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce better images than conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given, and the problems and limitations introduced by noise are discussed in full. (U.K.)

  1. Thermal Wave Imaging: Flying SPOT Camera.

    Science.gov (United States)

    Wang, Yiqian

    1993-01-01

    A novel "Flying Spot" infrared camera for nondestructive evaluation (NDE) and nondestructive characterization is presented. The camera scans the focal point of an unmodulated heating laser beam across the sample in a raster. The detector of the camera tracks the heating spot in the same raster, but with a time delay. The detector is thus looking at the "thermal wake" of the heating spot. The time delay between heating and detection is determined by the speed of the laser spot and the distance between it and the detector image. Since this time delay can be made arbitrarily small, the camera is capable of making thermal wave images of phenomena which occur on a very short time scale. In addition, because the heat source is a very small spot, the heat flow is fully three-dimensional. This makes the camera system sensitive to features, like tightly closed vertical cracks, which are invisible to imaging systems which employ full-field heating. A detailed theory which relates the temperature profile around the heating spot to the sample thermal properties is also described. The camera represents a potentially useful tool for measuring thermal diffusivities of materials by means of fitting the recorded temperature profiles to the theoretical curves with the diffusivity as a fitting parameter.

  2. The Use of Camera Traps in Wildlife

    Directory of Open Access Journals (Sweden)

    Yasin Uçarlı

    2013-11-01

    Full Text Available Camera traps are increasingly used in abundance and density estimates of wildlife species. Camera traps are a very good alternative to direct observation, particularly in steep terrain, densely vegetated areas, or for nocturnal species. The main reason for their use is that they eliminate economic, personnel, and time losses while monitoring continuously and simultaneously at different points. Camera traps are motion- and heat-sensitive and, depending on the model, can take photos or video. Crossover points and feeding or mating areas of the focal species are priority locations for setting camera traps. The population size can be found from the images combined with capture-recapture methods. The population density is then the population size divided by the effective sampling area. Mating and breeding season, habitat choice, group structure, and survival rates of the focal species can also be derived from the images. Camera traps are thus a very useful and economical way to obtain the necessary data about particularly elusive species for planning and conservation efforts.
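
    The size-then-density arithmetic described above can be sketched with the Lincoln-Petersen estimator, one standard capture-recapture formula; the abstract does not name a specific estimator, so this choice and the sample numbers are illustrative:

```python
def lincoln_petersen(n1, n2, m2):
    """Estimate population size: n1 animals photographed (marked) in the
    first session, n2 captured in the second, m2 of which are recaptures."""
    if m2 == 0:
        raise ValueError("no recaptures: estimate undefined")
    return n1 * n2 / m2

def density(pop_size, effective_area_km2):
    """Population density = population size / effective sampling area."""
    return pop_size / effective_area_km2
```

    For example, 20 individuals identified in the first survey, 15 in the second with 5 recaptures, over a 30 km2 effective sampling area gives 60 animals and a density of 2 per km2.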

  3. Multi-Angle Snowflake Camera Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Stuefer, Martin [Univ. of Alaska, Fairbanks, AK (United States); Bailey, J. [Univ. of Alaska, Fairbanks, AK (United States)

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36º. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the camera's field of view within a 10-angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.
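
    The fall-speed derivation from successive triggers reduces to distance over time. The 32 mm emitter separation below is a made-up placeholder, not a value from the handbook:

```python
def fall_speed(trigger_gap_s, emitter_separation_m=0.032):
    """Fall speed (m/s) from the elapsed time between the upper and lower
    near-IR trigger events along the fall path.

    The 0.032 m emitter separation is an assumed placeholder value."""
    if trigger_gap_s <= 0:
        raise ValueError("lower trigger must fire after the upper one")
    return emitter_separation_m / trigger_gap_s
```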

  4. On the evolution of wafer level cameras

    Science.gov (United States)

    Welch, H.

    2011-02-01

    The introduction of small cost effective cameras based on CMOS image sensor technology has played an important role in the revolution in mobile devices of the last 10 years. Wafer-based optics manufacturing leverages the same fabrication equipment used to produce CMOS sensors. The natural integration of these two technologies allows the mass production of very low cost surface mount cameras that can fit into ever thinner mobile devices. Nano Imprint Lithography (NIL) equipment has been adapted to make precision aspheres that can be stacked using wafer bonding techniques to produce multi-element lens assemblies. This, coupled with advances in mastering technology, allows arrays of lenses with prescriptions not previously possible. A primary motivation for these methods is that it allows the consolidation of the supply chain. Image sensor manufacturers envision creating optics by simply adding layers to their existing sensor fabrication lines. Results thus far have been promising. The current alternative techniques for creating VGA cameras are discussed as well as the prime cost drivers for lens to sensor integration. Higher resolution cameras face particularly difficult challenges, but can greatly simplify the critical tilt and focus steps needed to assemble cameras that produce quality images. Finally, we discuss the future of wafer-level cameras and explore several of the novel concepts made possible by the manufacturing advantages of photolithography.

  5. Classroom multispectral imaging using inexpensive digital cameras.

    Science.gov (United States)

    Fortes, A. D.

    2007-12-01

    The proliferation of increasingly cheap digital cameras in recent years means that it has become easier to exploit the broad wavelength sensitivity of their CCDs (360 - 1100 nm) for classroom-based teaching. With the right tools, it is possible to open children's eyes to the invisible world of UVA and near-IR radiation either side of our narrow visual band. The camera-filter combinations I describe can be used to explore the world of animal vision, looking for invisible markings on flowers, or in bird plumage, for example. In combination with a basic spectroscope (such as the Project-STAR handheld plastic spectrometer, $25), it is possible to investigate the range of human vision and camera sensitivity, and to explore the atomic and molecular absorption lines from the solar and terrestrial atmospheres. My principal use of the cameras has been to teach multispectral imaging of the kind used to remotely determine the composition of planetary surfaces. A range of camera options, from $50 circuit-board mounted CCDs up to $900 semi-pro infrared camera kits (including mobile phones along the way), and various UV-vis-IR filter options will be presented. Examples of multispectral images taken with these systems are used to illustrate the range of classroom topics that can be covered. Particular attention is given to learning about spectral reflectance curves and comparing images from Earth and Mars taken using the same filter combination that is used on the Mars Rovers.
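
    The classroom exercise of comparing reflectance in different bands can be sketched as a normalized-difference index computed from co-registered near-IR and red images (an NDVI-style index; the function name and sample values are invented for illustration):

```python
def normalized_difference(nir, red):
    """Per-pixel (NIR - red) / (NIR + red) for two co-registered images
    given as flat lists of reflectance values; vegetation-like surfaces
    push the index toward +1, bare soil and rock sit near zero."""
    return [(n - r) / (n + r) if (n + r) else 0.0
            for n, r in zip(nir, red)]
```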

  6. High-speed CCD camera at NAOC

    Science.gov (United States)

    Zhao, Zhaowang; Wang, Wei; Liu, Yangbin

    2006-06-01

    A high speed CCD camera has been completed at the National Astronomical Observatories of China (NAOC). A Kodak CCD was used in the camera. Two output ports are used to read out the CCD data, and a total speed of 60 Mpixels per second is achieved. The Kodak KAI-4021 image sensor is a high-performance 2Kx2K-pixel interline transfer device. The 7.4 μm square pixels with micro lenses provide high sensitivity, and the large full well capacity results in high dynamic range. The interline transfer structure provides high quality images and enables electronic shuttering for precise exposure control. The electronic shutter provides a method of precisely controlling the image exposure time without any mechanical components. The camera is controlled by a NIOS II family embedded processor, which is Altera's second-generation soft-core embedded processor for FPGAs. The powerful embedded processor gives the camera the flexibility to satisfy new observational requirements as they appear. The camera is very flexible, and it is easy to implement new special functions. Since the FPGA and other peripheral logic signals are triggered by a single master clock, the whole system is perfectly synchronized. This technique reduces the camera noise dramatically.
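
    The quoted readout figures imply an upper bound on frame rate: 60 Mpixels/s across a 2K×2K sensor gives roughly 14.3 frames per second, ignoring exposure and inter-frame overheads. A one-line sketch of that arithmetic:

```python
def max_frame_rate(width_px, height_px, readout_px_per_s):
    """Upper bound on frame rate from total readout throughput,
    ignoring exposure and inter-frame overheads."""
    return readout_px_per_s / (width_px * height_px)
```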

  7. CALIBRATION PROCEDURES IN MID FORMAT CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    F. Pivnicka

    2012-07-01

    Full Text Available A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow as well, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform, and the specific values of the mid-format camera make a professional setup with various calibration and misalignment operations necessary. An important part is to have a proper camera calibration. Using aerial images over a well designed test field with 3D structures and/or different flight altitudes enables the determination of calibration values in the Bingo software. It will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Beside the mechanical work, especially in mounting the camera beside the IMU, two lever arms have to be measured to mm accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre, and the lever arm from the IMU centre to the camera projection centre. In fact, the measurement with a total station is not a difficult task, but the definition of the right centres and the need for using rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used. For that, a gyro-based stabilized platform is recommended. This means that the IMU must be mounted beside the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problem is that the IMU-to-GPS-antenna lever arm is floating. In fact we have to deal with an additional data stream, the values of the movement of the stabilizer, to correct the floating lever arm distances. If the post-processing of the GPS-IMU data, taking the floating levers into account, delivers the expected result, the lever arms between IMU and
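
    The floating lever-arm correction described above boils down to rotating a body-frame lever-arm vector by the stabilizer's current attitude. A minimal sketch with a single-axis (yaw) rotation matrix; a real setup would chain roll, pitch, and yaw factors, and the vector values are invented:

```python
import math

def rotate_lever_arm_z(lever_arm_m, yaw_rad):
    """Rotate a body-frame lever arm (x, y, z) about the vertical axis by
    the stabilizer's yaw angle, giving its momentary orientation."""
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    x, y, z = lever_arm_m
    return (c * x - s * y, s * x + c * y, z)
```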

  8. Soft x-ray streak cameras

    International Nuclear Information System (INIS)

    Stradling, G.L.

    1988-01-01

    This paper is a discussion of the development and of the current state of the art in picosecond soft x-ray streak camera technology. Accomplishments from a number of institutions are discussed. X-ray streak cameras vary from standard visible streak camera designs in the use of an x-ray transmitting window and an x-ray sensitive photocathode. The spectral sensitivity range of these instruments includes portions of the near UV and extends from the subkilovolt x- ray region to several tens of kilovolts. Attendant challenges encountered in the design and use of x-ray streak cameras include the accommodation of high-voltage and vacuum requirements, as well as manipulation of a photocathode structure which is often fragile. The x-ray transmitting window is generally too fragile to withstand atmospheric pressure, necessitating active vacuum pumping and a vacuum line of sight to the x-ray signal source. Because of the difficulty of manipulating x-ray beams with conventional optics, as is done with visible light, the size of the photocathode sensing area, access to the front of the tube, the ability to insert the streak tube into a vacuum chamber and the capability to trigger the sweep with very short internal delay times are issues uniquely relevant to x-ray streak camera use. The physics of electron imaging may place more stringent limitations on the temporal and spatial resolution obtainable with x-ray photocathodes than with the visible counterpart. Other issues which are common to the entire streak camera community also concern the x-ray streak camera users and manufacturers

  9. Characterization of SWIR cameras by MRC measurements

    Science.gov (United States)

    Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.

    2014-05-01

    Cameras for the SWIR wavelength range are becoming more and more important because of the better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera, or to qualify a camera for a given application, characterization of the camera by means of the Minimum Resolvable Contrast (MRC) concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera device, the achievable observation range can be calculated for every combination of target size, illumination level, or weather condition. MRC measurements in the SWIR wavelength band can be performed widely along the guidelines of the MRC measurements of a visual camera. Typically measurements are performed with a set of resolution targets (e.g. USAF 1951 target) manufactured with different contrast values from 50% down to less than 1%. For a given illumination level the achievable spatial resolution is then measured for each target. The resulting curve shows the minimum contrast that is necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, first the irradiation parameters have to be given in radiometric instead of photometric units, which are limited in their use to the visible range. In order to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source is necessary with appropriate emission in the SWIR range (e.g. an incandescent lamp), and the irradiance has to be measured in W/m2 instead of Lux = Lumen/m2. Third, the contrast values of the targets have to be newly calibrated for the SWIR range because they typically differ from the values determined for the visual range.
Measured MRC values of three cameras are compared to the specified performance data of the devices and the results of a multi-band in-house designed Vis-SWIR camera
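
    The target contrasts discussed (50% down to below 1%) are conventionally computed as Michelson contrast from the bright and dark radiance levels of the bar pattern; a minimal sketch of that definition (the sample levels are invented):

```python
def michelson_contrast(l_max, l_min):
    """Michelson contrast (L_max - L_min) / (L_max + L_min) of a
    resolution target, from its bright and dark radiance levels."""
    return (l_max - l_min) / (l_max + l_min)
```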

  10. How to Build Your Own Document Camera for around $100

    Science.gov (United States)

    Van Orden, Stephen

    2010-01-01

    Document cameras can have great utility in second language classrooms. However, entry-level consumer document cameras start at around $350. This article describes how the author built three document cameras and offers suggestions for how teachers can successfully build their own quality document camera using a webcam for around $100.

  11. Advanced system for Gamma Cameras modernization

    International Nuclear Information System (INIS)

    Osorio Deliz, J. F.; Diaz Garcia, A.; Arista Romeu, E. J.

    2015-01-01

    Analog and digital gamma cameras are still largely used in developing countries. Many of them rely on old hardware electronics, which in many cases limits their use in actual nuclear medicine diagnostic studies. Consequently, there are different worldwide companies that produce medical equipment for a partial or total modernization of gamma cameras. The present work has demonstrated the possibility of substituting almost the entire signal processing electronics inside a gamma camera detector head with a digitizer PCI card. This card includes four 12-bit analog-to-digital converters with 50 MHz sampling speed. It has been installed in a PC and is controlled through software developed in LabVIEW. Besides, some changes were made to the hardware inside the detector head, including a redesign of the Orientation Display Block (ODA card). A new electronic design was also added to the Microprocessor Control Block (MPA card), comprising a PIC microcontroller acting as a tuning system for the individual photomultiplier tubes. The images, obtained by measurement of a 99m Tc point radioactive source using the modernized camera head, demonstrate its overall performance. The system was developed and tested on an old Gamma Camera ORBITER II SIEMENS GAMMASONIC at the National Institute of Oncology and Radiobiology (INOR) under the CAMELUD project supported by the National Program PNOULU and the IAEA. (Author)

  12. Wired and Wireless Camera Triggering with Arduino

    Science.gov (United States)

    Kauhanen, H.; Rönnholm, P.

    2017-10-01

    Synchronous triggering is an important task that allows simultaneous data capture from multiple cameras. Accurate synchronization enables 3D measurements of moving objects or from a moving platform. In this paper, we describe one wired and four wireless variations of Arduino-based low-cost remote trigger systems designed to provide a synchronous trigger signal for industrial cameras. Our wireless systems utilize 315 MHz or 434 MHz frequencies with noise filtering capacitors. In order to validate the synchronization accuracy, we developed a prototype of a rotating trigger detection system (named RoTriDeS). This system is suitable to detect the triggering accuracy of global shutter cameras. As a result, the wired system indicated an 8.91 μs mean triggering time difference between two cameras. Corresponding mean values for the four wireless triggering systems varied between 7.92 and 9.42 μs. Presented values include both camera-based and trigger-based desynchronization. Arduino-based triggering systems appeared to be feasible, and they have the potential to be extended to more complicated triggering systems.

  13. Design of Endoscopic Capsule With Multiple Cameras.

    Science.gov (United States)

    Gu, Yingke; Xie, Xiang; Li, Guolin; Sun, Tianjia; Wang, Dan; Yin, Zheng; Zhang, Pengfei; Wang, Zhihua

    2015-08-01

In order to reduce the miss rate of wireless capsule endoscopy, in this paper we propose a new endoscopic capsule system with multiple cameras. A master-slave architecture, including an efficient bus architecture and a four-level clock management architecture, is applied for the Multiple Cameras Endoscopic Capsule (MCEC). To cover more area of the gastrointestinal tract wall at low power, multiple cameras with a smart image capture strategy, including movement-sensitive control and camera selection, are used in the MCEC. To reduce the data transfer bandwidth and power consumption and thus prolong the MCEC's working life, a low-complexity image compressor with PSNR 40.7 dB and compression rate 86% is implemented. A chipset is designed and implemented for the MCEC, and a six-camera endoscopic capsule prototype is implemented using the chipset. With the smart image capture strategy, the coverage rate of the MCEC prototype reaches 98% and its power consumption is only about 7.1 mW.
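
The two compressor figures quoted, PSNR and compression rate, have standard definitions that can be sketched directly; the sample values below are illustrative, not MCEC data:

```python
import math

def psnr(orig, recon, peak=255.0):
    """Peak signal-to-noise ratio in dB between two pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    return float('inf') if mse == 0 else 10 * math.log10(peak ** 2 / mse)

def compression_rate(raw_bytes, coded_bytes):
    """Fraction of the raw size removed by the coder (0.86 = 86%)."""
    return 1.0 - coded_bytes / raw_bytes

orig = [100, 120, 130, 140]
recon = [101, 119, 131, 139]
print(round(psnr(orig, recon), 1))             # → 48.1
print(round(compression_rate(1000, 140), 2))   # → 0.86
```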

  14. Occluded object imaging via optimal camera selection

    Science.gov (United States)

    Yang, Tao; Zhang, Yanning; Tong, Xiaomin; Ma, Wenguang; Yu, Rui

    2013-12-01

High-performance occluded object imaging in cluttered scenes is a significantly challenging task for many computer vision applications. Recently, camera array synthetic aperture imaging has proved to be an effective way of seeing objects through occlusion. However, the imaging quality of the occluded object is often significantly degraded by the shadows of the foreground occluder. Although some works have been presented to label the foreground occluder via object segmentation or 3D reconstruction, these methods fail in the case of complicated occluders and severe occlusion. In this paper, we present a novel optimal camera selection algorithm to solve the above problem. The main characteristics of this algorithm include: (1) Instead of synthetic aperture imaging, we formulate the occluded object imaging problem as an optimal camera selection and mosaicking problem. To the best of our knowledge, our proposed method is the first one for occluded object mosaicking. (2) A greedy optimization framework is presented to propagate the visibility information among various depth focus planes. (3) A multiple-label energy minimization formulation is designed in each plane to select the optimal camera. The energy is estimated in the synthetic aperture image volume and integrates multi-view intensity consistency, the previously propagated visibility, and camera view smoothness, and is minimized via graph cuts. We compare our method with state-of-the-art synthetic aperture imaging algorithms, and extensive experimental results with qualitative and quantitative analysis demonstrate the effectiveness and superiority of our approach.
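
A toy version of the per-plane camera selection idea, using only a multi-view intensity consistency term (the paper's full energy also includes visibility propagation and a smoothness term minimized via graph cuts; the pixel values below are invented):

```python
# Minimal sketch of per-pixel camera selection: for each pixel, pick the
# camera whose intensity best agrees with the median across views — a
# stand-in for a multi-view intensity consistency data term.

def select_cameras(views):
    """views: list of equal-length intensity rows, one per camera.
    Returns, per pixel, the index of the chosen camera."""
    n_pix = len(views[0])
    labels = []
    for p in range(n_pix):
        column = sorted(v[p] for v in views)
        median = column[len(column) // 2]
        costs = [abs(v[p] - median) for v in views]
        labels.append(costs.index(min(costs)))
    return labels

# Camera 1 is occluded (dark) on the middle pixels; cameras 0 and 2 agree.
views = [[50, 52, 51, 50],
         [50,  5,  6, 50],
         [51, 52, 50, 50]]
print(select_cameras(views))  # → [0, 0, 2, 0]
```

The occluded camera is avoided exactly where its pixels disagree with the consensus, which is the intuition behind mosaicking from optimally selected cameras.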

  15. Designing Camera Networks by Convex Quadratic Programming

    KAUST Repository

    Ghanem, Bernard

    2015-05-04

In this paper, we study the problem of automatic camera placement for computer graphics and computer vision applications. We extend the problem formulations of previous work by proposing a novel way to incorporate visibility constraints and camera-to-camera relationships. For example, the placement solution can be encouraged to have cameras that image the same important locations from different viewing directions, which can enable reconstruction and surveillance tasks to perform better. We show that the general camera placement problem can be formulated mathematically as a convex binary quadratic program (BQP) under linear constraints. Moreover, we propose an optimization strategy with a favorable trade-off between speed and solution quality. Our solution is almost as fast as a greedy treatment of the problem, but the quality is significantly higher, so much so that it is comparable to exact solutions that take orders of magnitude more computation time. Because it is computationally attractive, our method also allows users to explore the space of solutions for variations in input parameters. To evaluate its effectiveness, we show a range of 3D results on real-world floorplans (garage, hotel, mall, and airport).
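
The shape of the underlying binary program can be illustrated with a brute-force toy: choose k of the candidate cameras to maximize covered locations. This mirrors only the problem structure; the paper's BQP additionally encodes visibility constraints and pairwise camera-to-camera terms. The candidate coverage sets are invented.

```python
# Tiny brute-force illustration of camera placement as a binary program:
# pick exactly k candidate cameras to maximize the number of covered
# locations. Exhaustive search stands in for the convex BQP relaxation.
from itertools import combinations

def best_placement(coverage, k):
    """coverage[i] = set of location ids camera candidate i sees."""
    best = (set(), -1)
    for picks in combinations(range(len(coverage)), k):
        seen = set().union(*(coverage[i] for i in picks))
        if len(seen) > best[1]:
            best = (set(picks), len(seen))
    return best

cams = [{1, 2}, {2, 3}, {3, 4, 5}, {1}]
picks, covered = best_placement(cams, 2)
print(picks, covered)  # cameras 0 and 2 cover all five locations
```

Exhaustive search is exponential in the number of candidates, which is why the paper's convex relaxation matters for real floorplans.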

  16. The eye of the camera: effects of security cameras on pro-social behavior

    NARCIS (Netherlands)

    van Rompay, T.J.L.; Vonk, D.J.; Fransen, M.L.

    2009-01-01

    This study addresses the effects of security cameras on prosocial behavior. Results from previous studies indicate that the presence of others can trigger helping behavior, arising from the need for approval of others. Extending these findings, the authors propose that security cameras can likewise

  17. Acceptance/Operational Test Report for Tank 241-AN-104 camera and camera purge control system

    International Nuclear Information System (INIS)

    Castleberry, J.L.

    1995-11-01

This Acceptance/Operational Test Procedure (ATP/OTP) will document the satisfactory operation of the camera purge panel, purge control panel, color camera system and associated control components destined for installation. The final acceptance of the complete system will be performed in the field. The purge panel and purge control panel will be tested to verify that the safety interlock shuts down the camera and pan-and-tilt unit inside the tank vapor space upon loss of purge pressure, and that the correct purge volume exchanges are performed as required by NFPA 496. This procedure is separated into seven sections. This Acceptance/Operational Test Report documents the successful acceptance and operability testing of the 241-AN-104 camera system and camera purge control system.

  18. Gate Simulation of a Gamma Camera

    International Nuclear Information System (INIS)

    Abidi, Sana; Mlaouhi, Zohra

    2008-01-01

Medical imaging is a very important diagnostic tool because it allows exploration of the internal human body. Nuclear imaging is a technique used in nuclear medicine to determine the distribution of a radiotracer in the body by detecting the radiation it emits with a detection device. Two methods are commonly used: Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET). In this work we are interested in the modelling of a gamma camera. This simulation is based on Monte-Carlo methods, in particular the GATE simulator (Geant4 Application for Tomographic Emission). We have simulated a clinical gamma camera called GAEDE (GKS-1) and validated these simulations against experiments. The purpose of this work is to monitor the performance of this gamma camera, optimize the detector performance, and improve image quality. (Author)
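
A toy Monte-Carlo fragment conveys the flavour of what GATE does at far greater fidelity: sample photon emission directions and count those reaching the detector. The geometry and acceptance below are invented for illustration.

```python
import random

# Toy Monte-Carlo sketch: emit photons isotropically from a point source
# and count those falling within the acceptance cone of an ideal detector
# face. GATE tracks full physics (scatter, attenuation, detector response);
# this only illustrates the sampling idea.
random.seed(1)

def detected_fraction(n, half_angle_cos=0.95):
    hits = 0
    for _ in range(n):
        # isotropic emission: cos(theta) is uniform on [-1, 1]
        if random.uniform(-1.0, 1.0) > half_angle_cos:
            hits += 1
    return hits / n

frac = detected_fraction(100000)
# geometric acceptance of the cone is (1 - 0.95) / 2 = 0.025
print(0.02 < frac < 0.03)  # → True
```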

  19. Mechanical Design of the LSST Camera

    Energy Technology Data Exchange (ETDEWEB)

    Nordby, Martin; Bowden, Gordon; Foss, Mike; Guiffre, Gary; /SLAC; Ku, John; /Unlisted; Schindler, Rafe; /SLAC

    2008-06-13

The LSST camera is a tightly packaged, hermetically-sealed system that is cantilevered into the main beam of the LSST telescope. It comprises three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 giga-pixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce impact on image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.

  20. Collimated trans-axial tomographic scintillation camera

    International Nuclear Information System (INIS)

    1980-01-01

The objects of this invention are: first, to reduce the time required to obtain statistically significant data in trans-axial tomographic radioisotope scanning using a scintillation camera; secondly, to provide a scintillation camera system that increases the rate of acceptance of radioactive events contributing to the positional information obtainable from a known radiation source without sacrificing spatial resolution; and thirdly, to reduce the scanning time without loss of image clarity. The system described comprises a scintillation camera detector, means for moving it in orbit about a cranial-caudal axis relative to a patient, and a collimator having septa defining apertures such that gamma rays perpendicular to the axis are admitted with high spatial resolution, and those parallel to the axis with low resolution. The septa may be made of strips of lead. Detailed descriptions are given. (U.K.)

1. PEOPLE REIDENTIFICATION IN A DISTRIBUTED CAMERA NETWORK

    Directory of Open Access Journals (Sweden)

    Icaro Oliveira de Oliveira

    2010-06-01

This paper presents an approach to the object reidentification problem in a distributed camera network system. The reidentification or reacquisition problem consists essentially of matching images acquired from different cameras. This work applies to an environment monitored by cameras. The application is important to modern security systems, in which identifying targets' presence in the environment expands the capacity for action by security agents in real time and provides important parameters such as the localization of each target. We used targets' interest points and color as features for reidentification. Satisfactory results were obtained from real experiments on public video datasets and on synthetic images with noise.
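
A minimal sketch of the colour-feature half of such reidentification, comparing normalised colour histograms of detections with histogram intersection (interest points are omitted, and the pixel values are invented):

```python
# Colour-histogram matching between detections from different cameras.
# A higher intersection score means a more likely reidentification.

def histogram(pixels, bins=4, top=256):
    """Normalised intensity histogram of a detection's pixels."""
    h = [0] * bins
    for p in pixels:
        h[p * bins // top] += 1
    n = len(pixels)
    return [c / n for c in h]

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return sum(min(a, b) for a, b in zip(h1, h2))

target = histogram([10, 20, 200, 210, 220, 30])
cand_a = histogram([12, 25, 205, 215, 230, 28])    # same person, other camera
cand_b = histogram([120, 130, 125, 140, 135, 128]) # different person
print(intersection(target, cand_a) > intersection(target, cand_b))  # → True
```

Real systems build per-channel colour histograms and combine them with local-feature matches, but the ranking step looks much like this.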

  2. Small Orbital Stereo Tracking Camera Technology Development

    Science.gov (United States)

    Gagliano, L.; Bryan, T.; MacLeod, T.

On-orbit small debris tracking and characterization is a technical gap in current national space situational awareness, necessary to safeguard orbital assets and crew; untracked small debris poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of the Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be known in order to design the appropriate level of MOD impact shielding and suitable mission design restrictions, and the debris flux and size population must be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or restricted operations on board a host vehicle as a secondary payload. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate orbital debris (OD) monitoring in proximity of the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various sized objects versus ground RADAR tracking and small OD models. The cameras are based on flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras. By using twin cameras we can provide stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  3. Phase camera experiment for Advanced Virgo

    International Nuclear Information System (INIS)

    Agatsuma, Kazuhiro; Beuzekom, Martin van; Schaaf, Laura van der; Brand, Jo van den

    2016-01-01

We report on a study of the phase camera, which is a frequency-selective wave-front sensor for a laser beam. This sensor is utilized for monitoring sidebands produced by phase modulations in a gravitational wave (GW) detector. In the operation of GW detectors, the laser modulation/demodulation method is used to measure mirror displacements and for position control. This plays a significant role because the quality of the controls affects the noise level of the GW detector. The phase camera is able to monitor each sideband separately, which is of great benefit for the manipulation of these delicate controls. Also, overcoming mirror aberrations will be an essential part of Advanced Virgo (AdV), which is a GW detector close to Pisa. Low-frequency sidebands in particular can be affected greatly by aberrations in one of the interferometer cavities. The phase cameras allow tracking of such changes because the state of the sidebands gives information on mirror aberrations. A prototype of the phase camera has been developed and is currently being tested. The performance checks are almost completed and the installation of the optics at the AdV site has started. After installation and commissioning, the phase camera will be combined with a thermal compensation system that consists of CO2 lasers and compensation plates. In this paper, we focus on the prototype and show some limitations arising from the scanner performance. - Highlights: • The phase camera is being developed for a gravitational wave detector. • The scanner performance limits the operation speed and layout design of the system. • An operation range was found by measuring the frequency response of the scanner.

  4. Results with the UKIRT infrared camera

    International Nuclear Information System (INIS)

    Mclean, I.S.

    1987-01-01

Recent advances in focal plane array technology have made an immense impact on infrared astronomy. Results from the commissioning of the first infrared camera on UKIRT (the world's largest IR telescope) are presented. The camera, called IRCAM 1, employs the 62 x 58 InSb DRO array from SBRC in an otherwise general-purpose system which is briefly described. Several imaging modes are possible, including staring, chopping and a high-speed snapshot mode. Results presented include the first true high-resolution images at IR wavelengths of the entire Orion nebula.

  5. Camera-enabled techniques for organic synthesis

    Directory of Open Access Journals (Sweden)

    Steven V. Ley

    2013-05-01

A great deal of time is spent within synthetic chemistry laboratories on non-value-adding activities such as sample preparation and work-up operations, and on labour-intensive activities such as extended periods of continued data collection. Using digital cameras connected to computer vision algorithms, camera-enabled apparatus can perform some of these processes in an automated fashion, allowing skilled chemists to spend their time more productively. In this review we describe recent advances in this field of chemical synthesis and discuss how they will lead to the advanced synthesis laboratories of the future.

  6. Nonmedical applications of a positron camera

    International Nuclear Information System (INIS)

    Hawkesworth, M.R.; Parker, D.J.; Fowles, P.; Crilly, J.F.; Jefferies, N.L.; Jonkers, G.

    1991-01-01

The positron camera in the School of Physics and Space Research, University of Birmingham, is based on position-sensitive multiwire γ-ray detectors developed at the Rutherford Appleton Laboratory. The current characteristics of the camera are discussed with particular reference to its suitability for flow mapping in industrial subjects. The techniques developed for studying the dynamics of processes with time scales ranging from milliseconds to days are described, and examples of recent results from a variety of industrial applications are presented. (orig.)

  7. Compact Optical Technique for Streak Camera Calibration

    International Nuclear Information System (INIS)

    Bell, P; Griffith, R; Hagans, K; Lerche, R; Allen, C; Davies, T; Janson, F; Justin, R; Marshall, B; Sweningsen, O

    2004-01-01

The National Ignition Facility (NIF) is under construction at the Lawrence Livermore National Laboratory (LLNL) for the U.S. Department of Energy Stockpile Stewardship Program. Optical streak cameras are an integral part of the experimental diagnostics instrumentation. To accurately reduce data from the streak cameras a temporal calibration is required. This article describes a technique for generating trains of precisely timed, short-duration optical pulses (optical comb generators) that are suitable for temporal calibrations. These optical comb generators (Figure 1) are used with the LLNL optical streak cameras. They are small, portable light sources that produce a series of temporally short, uniformly spaced optical pulses. Comb generators have been produced with 0.1, 0.5, 1, 3, 6, and 10-GHz pulse trains of 780-nm wavelength light with individual pulse durations of ∼25-ps FWHM. Signal output is via a fiber-optic connector, and the signal is transported from comb generator to streak camera through multi-mode, graded-index optical fibers. At the NIF, ultra-fast streak cameras are used by Laser Fusion Program experimentalists to record fast transient optical signals. Their temporal resolution is unmatched by any other transient recorder. Their ability to spatially discriminate an image along the input slit allows them to function as a one-dimensional image recorder, time-resolved spectrometer, or multichannel transient recorder. Depending on the choice of photocathode, they can be made sensitive to photon energies from 1.1 eV to 30 keV and beyond. Comb generators perform two important functions for LLNL streak-camera users. First, comb generators are used as precision time-mark generators for calibrating streak camera sweep rates. Accuracy is achieved by averaging many streak camera images of comb generator signals. Time-base calibrations with portable comb generators are easily done in both the calibration laboratory and in situ. Second, comb signals are applied
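
The time-base calibration that comb generators enable reduces to fitting a sweep rate from the known pulse spacing and the measured peak positions on the streak record; a sketch with invented numbers:

```python
# Sketch of a streak-camera time-base calibration: given pixel positions
# of comb-pulse peaks with known temporal spacing (e.g. a 3 GHz comb has
# ~333.3 ps between pulses), fit the sweep rate in ps per pixel by least
# squares. Peak positions below are invented.

def sweep_rate_ps_per_px(peak_px, comb_period_ps):
    """Least-squares slope of pulse time against peak pixel position."""
    n = len(peak_px)
    times = [i * comb_period_ps for i in range(n)]
    mx = sum(peak_px) / n
    mt = sum(times) / n
    num = sum((x - mx) * (t - mt) for x, t in zip(peak_px, times))
    den = sum((x - mx) ** 2 for x in peak_px)
    return num / den

peaks = [100.0, 150.0, 200.0, 250.0]       # uniformly spaced peaks
print(round(sweep_rate_ps_per_px(peaks, 333.3), 3))  # → 6.666
```

Averaging the fitted slope over many comb images is what drives the calibration accuracy the abstract describes.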

  8. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of the different operating systems. Finally, the results are evaluated and conclusions are made. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
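
A hedged sketch of how normalized quality and speed metrics might be folded into one score; the metric names, normalisation ranges and weights here are invented, not the paper's definition:

```python
# Combining heterogeneous camera metrics into a single benchmark score:
# each raw measurement is mapped onto [0, 1] (1 = best) and the results
# are combined as a weighted sum. All numbers are illustrative.

def normalise(value, worst, best):
    """Linear map of a raw metric onto [0, 1]; works whether lower or
    higher raw values are better (swap worst/best accordingly)."""
    v = (value - worst) / (best - worst)
    return max(0.0, min(1.0, v))

def benchmark_score(metrics, weights):
    return sum(weights[k] * v for k, v in metrics.items())

metrics = {
    "mtf50": normalise(0.30, 0.0, 0.5),          # sharpness, cycles/px
    "visual_noise": normalise(2.0, 10.0, 0.0),   # lower raw value is better
    "shot_to_shot_s": normalise(0.8, 3.0, 0.2),  # lower raw value is better
}
weights = {"mtf50": 0.4, "visual_noise": 0.3, "shot_to_shot_s": 0.3}
print(round(benchmark_score(metrics, weights), 3))  # → 0.716
```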

  9. The LLL compact 10-ps streak camera

    International Nuclear Information System (INIS)

    Thomas, S.W.; Houghton, J.W.; Tripp, G.R.; Coleman, L.W.

    1975-01-01

    The 10-ps streak camera has been redesigned to simplify its operation, reduce manufacturing costs, and improve its appearance. The electronics have been simplified, a film indexer added, and a contacted slit has been evaluated. Data support a 10-ps resolution. (author)

  10. Terrain mapping camera for Chandrayaan-1

    Indian Academy of Sciences (India)

    The Terrain Mapping Camera (TMC) on India's first satellite for lunar exploration, Chandrayaan-1, is for generating high-resolution 3-dimensional maps of the Moon. With this instrument, a complete topographic map of the Moon with 5 m spatial resolution and 10-bit quantization will be available for scientific studies.

  11. Thermoplastic film camera for holographic recording

    International Nuclear Information System (INIS)

    Liegeois, C.; Meyrueis, P.

    1982-01-01

The design of a thermoplastic-film recording camera and its performance for holography of extended objects are reported. Special corona geometry, accurate control of development heat by constant-current heating, and high-resolution measurement of the development temperature make recording of reproducible, large-aperture holograms easy. The experimental results give the transfer characteristics, the diffraction efficiency characteristics and the spatial frequency response. (orig.)

  12. A multidetector scintillation camera with 254 channels

    DEFF Research Database (Denmark)

    Sveinsdottir, E; Larsen, B; Rommer, P

    1977-01-01

A computer-based scintillation camera has been designed for both dynamic and static radionuclide studies. The detecting head has 254 independent sodium iodide crystals, each with a photomultiplier and amplifier. In dynamic measurements simultaneous events can be recorded, and 1 million total counts...

  13. FPS camera sync and reset chassis

    International Nuclear Information System (INIS)

    Yates, G.J.

    1980-06-01

    The sync and reset chassis provides all the circuitry required to synchronize an event to be studied, a remote free-running focus projection and scanning (FPS) data-acquisition TV camera, and a video signal recording system. The functions, design, and operation of this chassis are described in detail

  14. The Legal Implications of Surveillance Cameras

    Science.gov (United States)

    Steketee, Amy M.

    2012-01-01

    The nature of school security has changed dramatically over the last decade. Schools employ various measures, from metal detectors to identification badges to drug testing, to promote the safety and security of staff and students. One of the increasingly prevalent measures is the use of security cameras. In fact, the U.S. Department of Education…

  15. A novel super-resolution camera model

    Science.gov (United States)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

Aiming to realize super-resolution (SR) for single-image and video reconstruction, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this, a driving device such as a piezoelectric ceramic is placed in the camera. By controlling the driving device, a set of continuous low-resolution (LR) images can be obtained and stored instantaneously, reflecting both the randomness of the displacements and the real-time performance of the storage. The low-resolution image sequences carry different redundant information and particular prior information, so it is possible to restore a super-resolution image faithfully and effectively. A sampling method is used to derive the reconstruction principle of super resolution, which bounds the possible improvement of the resolution in theory. A learning-based super-resolution algorithm is used to reconstruct single images, and a variational Bayesian algorithm is simulated to reconstruct the low-resolution images with random displacements, modelling the unknown high-resolution image, motion parameters and unknown model parameters in one hierarchical Bayesian framework. Using sub-pixel registration, a super-resolution image of the scene can be reconstructed. Reconstruction results on 16 images show that this camera model can increase the image resolution by a factor of 2, obtaining higher-resolution images at currently available hardware levels.
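
The core shift-and-add idea behind such micro-displacement super-resolution can be sketched in one dimension: frames captured at known sub-pixel offsets are interleaved onto a finer grid. Real SR pipelines add registration, deblurring and the Bayesian treatment of the unknowns; the signals below are invented.

```python
# Toy 1-D shift-and-add super-resolution: low-resolution frames taken at
# known sub-pixel offsets (as a piezo-driven sensor could provide) are
# interleaved onto a grid `factor` times finer.

def shift_and_add(lr_frames, offsets, factor):
    """lr_frames[i][j] samples the scene at HR position j*factor + offsets[i]."""
    n = len(lr_frames[0]) * factor
    hr = [0.0] * n
    for frame, off in zip(lr_frames, offsets):
        for j, v in enumerate(frame):
            hr[j * factor + off] = v
    return hr

# Scene sampled every 2 HR pixels; the second frame is shifted by 1 HR pixel.
frame0 = [10, 30, 50]   # covers HR positions 0, 2, 4
frame1 = [20, 40, 60]   # covers HR positions 1, 3, 5
print(shift_and_add([frame0, frame1], [0, 1], 2))  # → [10, 20, 30, 40, 50, 60]
```

With random rather than controlled displacements, the offsets themselves become unknowns, which is what the hierarchical Bayesian framework in the abstract estimates.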

  16. Face identification in videos from mobile cameras

    NARCIS (Netherlands)

    Mu, Meiru; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2014-01-01

It is still challenging to recognize faces reliably in videos from a mobile camera, although mature automatic face recognition technology for still images has been available for quite some time. Suppose we want to be alerted when suspects appear in the recording of a police Body-Cam; even a good face

  18. Digital Camera Project Fosters Communication Skills

    Science.gov (United States)

    Fisher, Ashley; Lazaros, Edward J.

    2009-01-01

    This article details the many benefits of educators' use of digital camera technology and provides an activity in which students practice taking portrait shots of classmates, manipulate the resulting images, and add language arts practice by interviewing their subjects to produce a photo-illustrated Word document. This activity gives…

  19. Phase camera experiment for Advanced Virgo

    NARCIS (Netherlands)

    Agatsuma, Kazuhiro; Van Beuzekom, Martin; Van Der Schaaf, Laura; Van Den Brand, Jo

    2016-01-01

    We report on a study of the phase camera, which is a frequency selective wave-front sensor of a laser beam. This sensor is utilized for monitoring sidebands produced by phase modulations in a gravitational wave (GW) detector. Regarding the operation of the GW detectors, the laser

  20. Integrating Gigabit ethernet cameras into EPICS at Diamond light source

    International Nuclear Information System (INIS)

    Cobb, T.

    2012-01-01

At Diamond Light Source a range of cameras are used to provide images for diagnostic purposes in both the accelerator and photon beamlines. The accelerator and existing beamlines use Point Grey Flea and Flea2 Firewire cameras. We have selected Gigabit Ethernet cameras supporting GigE Vision for our new photon beamlines. GigE Vision is an interface standard for high-speed Ethernet cameras which encourages interoperability between manufacturers. This paper describes the challenges encountered while integrating GigE Vision cameras from a range of vendors into EPICS. GigE Vision cameras appear to be more reliable than the Firewire cameras, and the simple cabling makes it much easier to move the cameras to different positions. Upcoming power-over-Ethernet versions of the cameras will reduce the number of cables still further.

  1. New nuclear medicine gamma camera systems

    International Nuclear Information System (INIS)

    Villacorta, Edmundo V.

    1997-01-01

The acquisition of the Open E.CAM and DIACAM gamma cameras by Makati Medical Center is expected to enhance the capabilities of its nuclear medicine facilities. When used as an aid to diagnosis, nuclear medicine entails the introduction of a minute amount of radioactive material into the patient; thus, no reaction or side-effect is expected. When it reaches the particular target organ, depending on the radiopharmaceutical, a lesion will appear as a decreased (cold) area or increased (hot) area in the radioactive distribution as recorded by the gamma cameras. Gamma camera images in slices, or SPECT (Single Photon Emission Computed Tomography), increase the sensitivity and accuracy in detecting smaller and deeply seated lesions, which otherwise may not be detected in regular single planar images. Due to the 'open' design of the equipment, claustrophobic patients will no longer feel enclosed during the procedure. These new gamma cameras yield improved resolution and superb image quality, and the higher photon sensitivity shortens image acquisition time. The E.CAM, the latest-generation gamma camera, features a variable-angle dual-head system, the only one available in the Philippines, and is an excellent choice for Myocardial Perfusion Imaging (MPI). From the usual 45 minutes, the acquisition time for gated SPECT imaging of the heart has now been remarkably reduced to 12 minutes. 'Gated' refers to snapshots of the heart in selected phases of its contraction and relaxation as triggered by ECG. The DIACAM is installed in a room with access outside the main entrance of the department, intended especially for bed-borne patients. Both systems are equipped with a network of high-performance Macintosh ICON acquisition and processing computers. Added to the hardware is the ICON processing software, which allows total simultaneous acquisition and processing capabilities at the same operator's terminal. Video film and color printers are also provided. Together

  2. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual video frames from the particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on Ladybug®3 and a GPS device is discussed. The number of panoramas is much too high for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and the measurements from panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being created to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  3. X-ray imaging using digital cameras

    Science.gov (United States)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.
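
A sketch of the computation behind an MTF curve such as the one described (falling to 0.2 at around 2 lp/mm): Fourier-transform a measured line spread function and normalise to the zero-frequency value. The Gaussian LSF below is synthetic, not data from the paper.

```python
# Modulation transfer function from a sampled line spread function (LSF):
# MTF(f) = |FT(LSF)(f)| / |FT(LSF)(0)|. A plain DFT is used so the sketch
# stays dependency-free.
import cmath, math

def mtf(lsf, pitch_mm):
    """Returns (frequency in cycles/mm, MTF) pairs up to Nyquist."""
    n = len(lsf)
    out = []
    for k in range(n // 2 + 1):
        s = sum(v * cmath.exp(-2j * math.pi * k * i / n)
                for i, v in enumerate(lsf))
        out.append((k / (n * pitch_mm), abs(s)))
    dc = out[0][1]
    return [(f, m / dc) for f, m in out]

# Synthetic Gaussian LSF sampled at a 0.1 mm pitch.
lsf = [math.exp(-((i - 16) / 3.0) ** 2) for i in range(32)]
curve = mtf(lsf, pitch_mm=0.1)
print(all(m <= 1.0 + 1e-9 for _, m in curve))  # → True (MTF peaks at DC)
```

In practice the LSF is obtained from a slanted-edge or slit exposure, and the frequency where the curve crosses 0.2 or 0.5 is quoted as the resolution figure.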

  4. Voice Controlled Stereographic Video Camera System

    Science.gov (United States)

    Goode, Georgianna D.; Philips, Michael L.

    1989-09-01

    For several years various companies have been developing voice recognition software. Yet, there are few applications of voice control in the robotics field and virtually no examples of voice controlled three dimensional (3-D) systems. In late 1987 ARD developed a highly specialized, voice controlled 3-D vision system for use in remotely controlled, non-tethered robotic applications. The system was designed as an operator's aid and incorporates features thought to be necessary or helpful in remotely maneuvering a vehicle. Foremost is the three dimensionality of the operator's console display. An image that provides normal depth perception cues over a range of depths greatly increases the ease with which an operator can drive a vehicle and investigate its environment. The availability of both vocal and manual control of all system functions allows the operator to guide the system according to his personal preferences. The camera platform can be panned +/-178 degrees and tilted +/-30 degrees for a full range of view of the vehicle's environment. The cameras can be zoomed and focused for close inspection of distant objects, while retaining substantial stereo effect by increasing the separation between the cameras. There is a ranging and measurement function, implemented through a graphical cursor, which allows the operator to mark objects in a scene to determine their relative positions. This feature will be helpful in plotting a driving path. The image seen on the screen is overlaid with icons and digital readouts which provide information about the position of the camera platform, the range to the graphical cursor and the measurement results. The cursor's "range" is actually the distance from the cameras to the object on which the cursor is resting. Other such features are included in the system and described in subsequent sections of this paper.
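    The ranging function described above reduces, for a rectified stereo pair, to the classic triangulation relation Z = f·B/d. A minimal sketch (the function name and pixel-unit conventions are illustrative assumptions, not ARD's implementation):

    ```python
    def stereo_range_m(focal_px, baseline_m, disparity_px):
        """Depth of a point from a rectified stereo pair: Z = f * B / d,
        with focal length f in pixels, camera separation B in metres and
        disparity d (horizontal pixel offset between views) in pixels."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive")
        return focal_px * baseline_m / disparity_px
    ```

    Increasing the camera separation B enlarges the disparity of a distant object, which is why widening the stereo base preserves both depth resolution and the stereo effect when zooming on far targets.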

  5. Control of the movement of a ROV camera; Controle de posicionamento da camera de um ROV

    Energy Technology Data Exchange (ETDEWEB)

    Lima, Alexandre S. de; Dutra, Max Suell [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE); Reis, Ney Robinson S. dos [PETROBRAS, Rio de Janeiro, RJ (Brazil). Centro de Pesquisas; Santos, Auderi V. dos [Pontificia Univ. Catolica do Rio de Janeiro, RJ (Brazil)

    2004-07-01

    ROVs (Remotely Operated Vehicles) are used for the installation and maintenance of underwater exploration systems in the oil industry. Because these systems operate in remote areas, a camera is essential for visualizing the work area. Keeping the camera synchronized with the manipulator while both are being operated is a complex task for the operator. To achieve this synchronization, this work analyses the interconnection of the two systems. The systems are coupled by interconnecting the electric signals of the proportional valves driving the manipulator's actuators with the signals of the proportional valves driving the camera's actuators. With this interconnection the camera approximately tracks the movement of the manipulator, keeping the object of interest within the operator's field of vision. (author)

  6. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    Science.gov (United States)

    Champey, Patrick R.; Kobayashi, Ken; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512x512 detector, dual channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20 C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and less than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultra-violet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to perform several tests intended to verify the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.
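    A stable gain such as the roughly 2.0 e-/DN quoted above is commonly verified with the mean-variance (photon transfer) method. The sketch below shows that arithmetic with made-up numbers; it is a generic illustration of the technique, not the CLASP test procedure or data.

    ```python
    def system_gain_e_per_dn(mean_dn, var_dn):
        """Photon-transfer gain estimate. For shot-noise-limited flat-field
        data (bias-subtracted, fixed pattern removed via a frame difference),
        var(e-) = mean(e-), so var_dn = mean_dn / K and K = mean_dn / var_dn."""
        return mean_dn / var_dn

    def read_noise_e(var_bias_dn, gain_e_per_dn):
        """Read noise in electrons from the variance (DN^2) of a bias frame."""
        return gain_e_per_dn * var_bias_dn ** 0.5
    ```

    A flat field with mean 1000 DN and variance 500 DN² implies a gain of 2.0 e-/DN; a bias-frame variance of 144 DN² then corresponds to 24 e- read noise, inside a ≤25 e- requirement.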

  7. World's fastest and most sensitive astronomical camera

    Science.gov (United States)

    2009-06-01

    The next generation of instruments for ground-based telescopes took a leap forward with the development of a new ultra-fast camera that can take 1500 finely exposed images per second even when observing extremely faint objects. The first 240x240 pixel images with the world's fastest high precision faint light camera were obtained through a collaborative effort between ESO and three French laboratories from the French Centre National de la Recherche Scientifique/Institut National des Sciences de l'Univers (CNRS/INSU). Cameras such as this are key components of the next generation of adaptive optics instruments of Europe's ground-based astronomy flagship facility, the ESO Very Large Telescope (VLT). (ESO PR Photo 22a/09: the CCD220 detector. ESO PR Photo 22b/09: the OCam camera. ESO PR Video 22a/09: OCam images.) "The performance of this breakthrough camera is without an equivalent anywhere in the world. The camera will enable great leaps forward in many areas of the study of the Universe," says Norbert Hubin, head of the Adaptive Optics department at ESO. OCam will be part of the second-generation VLT instrument SPHERE. To be installed in 2011, SPHERE will take images of giant exoplanets orbiting nearby stars. A fast camera such as this is needed as an essential component for the modern adaptive optics instruments used on the largest ground-based telescopes. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights poets, but frustrates astronomers, since it blurs the finest details of the images. Adaptive optics techniques overcome this major drawback, so that ground-based telescopes can produce images that are as sharp as if taken from space. Adaptive optics is based on real-time corrections computed from images obtained by a special camera working at very high speeds. Nowadays, this means many hundreds of times each second. The new generation instruments require these

  8. Principle of some gamma cameras (efficiencies, limitations, development)

    International Nuclear Information System (INIS)

    Allemand, R.; Bourdel, J.; Gariod, R.; Laval, M.; Levy, G.; Thomas, G.

    1975-01-01

    The quality of scintigraphic images is shown to depend on the efficiency of both the input collimator and the detector. Methods are described by which the quality of these images may be improved by adaptations to either the collimator (Fresnel zone camera, Compton effect camera) or the detector (Anger camera, image amplification camera). The Anger camera and image amplification camera are at present the two main instruments whereby acceptable space and energy resolutions may be obtained. A theoretical comparative study of their efficiencies is carried out, independently of their technological differences, after which the instruments designed or under study at the LETI are presented: these include the image amplification camera, the electron amplifier tube camera using a semi-conductor target CdTe and HgI2 detector

  9. GPM GROUND VALIDATION DC-8 CAMERA NADIR GCPEX V1

    Data.gov (United States)

    National Aeronautics and Space Administration — The GPM Ground Validation DC-8 Camera Nadir GCPEx dataset contains geo-located visible-wavelength imagery of the ground obtained from the nadir camera aboard the...

  11. Declarative camera control for automatic cinematography

    Energy Technology Data Exchange (ETDEWEB)

    Christianson, D.B.; Anderson, S.E.; Li-wei He [Univ. of Washington, Seattle, WA (United States)] [and others]

    1996-12-31

    Animations generated by interactive 3D computer graphics applications are typically portrayed either from a particular character's point of view or from a small set of strategically-placed viewpoints. By ignoring camera placement, such applications fail to realize important storytelling capabilities that have been explored by cinematographers for many years. In this paper, we describe several of the principles of cinematography and show how they can be formalized into a declarative language, called the Declarative Camera Control Language (DCCL). We describe the application of DCCL within the context of a simple interactive video game and argue that DCCL represents cinematic knowledge at the same level of abstraction as expert directors by encoding 16 idioms from a film textbook. These idioms produce compelling animations, as demonstrated on the accompanying videotape.

  12. Cervical SPECT Camera for Parathyroid Imaging

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2012-08-31

    Primary hyperparathyroidism characterized by one or more enlarged parathyroid glands has become one of the most common endocrine diseases in the world affecting about 1 per 1000 in the United States. Standard treatment is highly invasive exploratory neck surgery called Parathyroidectomy. The surgery has a notable mortality rate because of the close proximity to vital structures. The move to minimally invasive parathyroidectomy is hampered by the lack of high resolution pre-surgical imaging techniques that can accurately localize the parathyroid with respect to surrounding structures. We propose to develop a dedicated ultra-high resolution (~ 1 mm) and high sensitivity (10x conventional camera) cervical scintigraphic imaging device. It will be based on a multiple pinhole-camera SPECT system comprising a novel solid state CZT detector that offers the required performance. The overall system will be configured to fit around the neck and comfortably image a patient.

  13. Advanced EVA Suit Camera System Development Project

    Science.gov (United States)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA Suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera that is currently being tested to be able to make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Design to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the options of the camera placement may be tested along with other future suit testing. There are multiple teams that work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary and these comparisons will be used as further progress is made for the overall suit design. This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was

  14. Dynamic gamma camera scintigraphy in primary hypoovarism

    International Nuclear Information System (INIS)

    Peshev, N.; Mladenov, B.; Topalov, I.; Tsanev, Ts.

    1988-01-01

    Twenty-seven patients with primary hypoovarism and 10 controls were examined. After intravenous injection of 111 to 175 MBq of 99mTc-pertechnetate, dynamic gamma camera scintigraphy was carried out for 15 minutes. In the patients with primary amenorrhea, either no functioning ovarian tissue was visualized or the ovaries were diminished in size, with strongly reduced, non-homogeneous accumulation of the radionuclide and unclear, uneven delineation. In the patients with primary infertility, the gamma camera investigation gave information not only about the presence of ovarian parenchyma but also about the extent of the inflammatory process. In the patients after surgical intervention, the dynamic radioisotope investigation gave information about the volume and site of the surgical intervention, as well as about the condition of the residual parenchyma

  15. Using a portable holographic camera in cosmetology

    Science.gov (United States)

    Bakanas, R.; Gudaitis, G. A.; Zacharovas, S. J.; Ratcliffe, D. B.; Hirsch, S.; Frey, S.; Thelen, A.; Ladrière, N.; Hering, P.

    2006-07-01

    The HSF-MINI portable holographic camera is used to record holograms of the human face. The recorded holograms are analyzed using a unique three-dimensional measurement system that provides topometric data of the face with resolution less than or equal to 0.5 mm. The main advantages of this method over other, more traditional methods (such as laser triangulation and phase-measurement triangulation) are discussed.

  16. Camera Development for the Cherenkov Telescope Array

    Science.gov (United States)

    Moncada, Roberto Jose

    2017-01-01

    With the Cherenkov Telescope Array (CTA), the very-high-energy gamma-ray universe, between 30 GeV and 300 TeV, will be probed at an unprecedented resolution, allowing deeper studies of known gamma-ray emitters and the possible discovery of new ones. This exciting project could also confirm the particle nature of dark matter by looking for the gamma rays produced by self-annihilating weakly interacting massive particles (WIMPs). The telescopes will use the imaging atmospheric Cherenkov technique (IACT) to record Cherenkov photons that are produced by the gamma-ray induced extensive air shower. One telescope design features dual-mirror Schwarzschild-Couder (SC) optics that allows the light to be finely focused on the high-resolution silicon photomultipliers of the camera modules starting from a 9.5-meter primary mirror. Each camera module will consist of a focal plane module and front-end electronics, and will have four TeV Array Readout with GSa/s Sampling and Event Trigger (TARGET) chips, giving them 64 parallel input channels. The TARGET chip has a self-trigger functionality for readout that can be used in higher logic across camera modules as well as across individual telescopes, which will each have 177 camera modules. There will be two sites, one in the northern and the other in the southern hemisphere, for full sky coverage, each spanning at least one square kilometer. A prototype SC telescope is currently under construction at the Fred Lawrence Whipple Observatory in Arizona. This work was supported by the National Science Foundation's REU program through NSF award AST-1560016.

  17. A Study towards Real Time Camera Calibration

    OpenAIRE

    Choudhury, Ragini

    2000-01-01

    Preliminary Report Prepared for the Project VISTEO; This report provides a detailed study of the problem of real time camera calibration. This analysis, based on the study of literature in the area, as well as the experiments carried out on real and synthetic data, is motivated by the requirements of the VISTEO project. VISTEO deals with a fusion of real images and synthetic environments, objects etc in TV video sequences. It thus deals with a challenging and fast growing area in virtual real...

  18. Combining local and global optimisation for virtual camera control

    OpenAIRE

    Burelli, Paolo; Yannakakis, Georgios N.; 2010 IEEE Symposium on Computational Intelligence and Games

    2010-01-01

    Controlling a virtual camera in 3D computer games is a complex task. The camera is required to react to dynamically changing environments and produce high quality visual results and smooth animations. This paper proposes an approach that combines local and global search to solve the virtual camera control problem. The automatic camera control problem is described and it is decomposed into sub-problems; then a hierarchical architecture that solves each sub-problem using the most appropriate op...

  19. The AOTF-Based NO2 Camera

    Science.gov (United States)

    Dekemper, E.; Fussen, D.; Vanhellemont, F.; Vanhamel, J.; Pieroux, D.; Berkenbosch, S.

    2017-12-01

    In an urban environment, nitrogen dioxide is emitted by a multitude of static and moving point sources (cars, industry, power plants, heating systems,…). Air quality models generally rely on a limited number of monitoring stations which neither capture the whole pattern nor allow for full validation. So far, there has been a lack of instruments capable of measuring NO2 fields with the necessary spatio-temporal resolution above major point sources (power plants) or more extended ones (cities). We have developed a new type of passive remote sensing instrument aiming at the measurement of 2-D distributions of NO2 slant column densities (SCDs) with a high spatial (meters) and temporal (minutes) resolution. The measurement principle has some similarities with the popular filter-based SO2 camera (used in monitoring volcanic and industrial sulfur emissions) in that it relies on spectral images taken at wavelengths where the molecule's absorption cross section differs. But contrary to the SO2 camera, the spectral selection is performed by an acousto-optical tunable filter (AOTF) capable of resolving the target molecule's spectral features. A first prototype was successfully tested on the plume of a coal-fired power plant in Romania, revealing the dynamics of the formation of NO2 in the early plume. A lighter version of the NO2 camera is now being tested on other targets, such as oil refineries and urban air masses.
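    The two-wavelength principle above can be reduced to a Beer-Lambert calculation: with an "on-band" image where the molecule absorbs strongly and an "off-band" image where it absorbs weakly, the SCD follows from the log ratio. This is a deliberately simplified sketch that ignores scattering, instrument response and differing source radiance at the two wavelengths; the function name is an assumption.

    ```python
    import math

    def no2_scd(i_on, i_off, sigma_on, sigma_off):
        """Simplified two-wavelength retrieval. Beer-Lambert gives
        I_on  = I0 * exp(-sigma_on  * SCD)
        I_off = I0 * exp(-sigma_off * SCD)
        so, assuming the same I0 at both wavelengths,
        SCD = ln(I_off / I_on) / (sigma_on - sigma_off)  [molec/cm^2]."""
        return math.log(i_off / i_on) / (sigma_on - sigma_off)
    ```

    Applying this per pixel to the pair of AOTF-selected images yields the 2-D SCD map; real retrievals fit many wavelengths (DOAS-style) rather than just two.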

  20. Robust automatic camera pointing for airborne surveillance

    Science.gov (United States)

    Dwyer, David; Wren, Lee; Thornton, John; Bonsor, Nigel

    2002-08-01

    Airborne electro-optic surveillance from a moving platform currently requires regular interaction from a trained operator. Even simple tasks such as fixating on a static point on the ground can demand constant adjustment of the camera orientation to compensate for platform motion. In order to free up operator time for other tasks such as navigation and communication with ground assets, an automatic gaze control system is needed. This paper describes such a system, based purely on tracking points within the video image. A number of scene points are automatically selected and their inter-frame motion tracked. The scene motion is then estimated using a model of a planar projective transform. For reliable and accurate camera pointing, the modeling of the scene motion must be robust to common problems such as scene point obscuration, objects moving independently within the scene and image noise. This paper details a COTS based system for automatic camera fixation and describes ways of preventing objects moving in the scene or poor motion estimates from corrupting the scene motion model.
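    Fitting a planar projective transform (homography) to tracked point pairs can be done with the direct linear transform. The numpy sketch below is a plain least-squares fit and omits the robust outlier rejection the paper emphasises; function names are the author's, not the COTS system's.

    ```python
    import numpy as np

    def fit_homography(src, dst):
        """Direct linear transform: least-squares homography H (3x3, scale
        fixed by H[2,2] = 1) mapping src -> dst, both (N, 2) arrays, N >= 4."""
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        # The homography is the null vector of A, i.e. the last right
        # singular vector of its SVD.
        _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
        H = vt[-1].reshape(3, 3)
        return H / H[2, 2]

    def apply_homography(H, pts):
        """Apply H to (N, 2) points, returning the projected (N, 2) points."""
        p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
        return p[:, :2] / p[:, 2:3]
    ```

    In practice this fit is wrapped in a robust estimator (e.g. RANSAC over point subsets) so that obscured points, independently moving objects and noisy tracks do not corrupt the scene motion model, exactly the failure modes the abstract lists.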

  1. Enhancing image quality produced by IR cameras

    Science.gov (United States)

    Dulski, R.; Powalisz, P.; Kastek, M.; Trzaskawka, P.

    2010-10-01

    Images produced by IR cameras are a specific source of information. The perception and interpretation of such images greatly depend on the thermal properties of the observed object and the surrounding scenery. In practice, optimal camera settings and automatic temperature range control do not guarantee that the displayed image is optimal from the observer's point of view. A solution is to implement digital image processing methods and algorithms in the camera. Such a solution should provide intelligent, dynamic contrast control applied not only across the entire image but also selectively to specific areas, in order to maintain optimal visualization of the observed scenery. The paper discusses problems of improving the visibility of low-contrast objects and presents a method of image enhancement based on adaptive histogram equalization. The algorithm was tested on real IR images and significantly improves image quality and the effectiveness of object detection for the majority of thermal images. Due to its adaptive nature, it should be effective for any given thermal image. The application of such an algorithm is a promising alternative to more expensive opto-electronic components such as improved optics and detectors.
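    Global histogram equalization, which adaptive variants refine by operating on local regions, can be sketched in a few lines of numpy. This is an illustration of the general technique only; the paper's algorithm is an adaptive variant with dynamic contrast control, and the crude tile-based version below (no interpolation between tiles) is an assumption of this sketch.

    ```python
    import numpy as np

    def equalize_hist(img, levels=256):
        """Global histogram equalization of an integer image in [0, levels)."""
        hist = np.bincount(img.ravel(), minlength=levels)
        cdf = hist.cumsum()
        cdf_min = cdf[cdf > 0].min()
        if cdf[-1] == cdf_min:          # constant image: nothing to equalize
            return img.copy()
        # Map each grey level through the normalised cumulative distribution.
        lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
        return lut.astype(img.dtype)[img]

    def equalize_adaptive(img, tiles=4, levels=256):
        """Crude adaptive variant: equalize each tiles x tiles block independently."""
        out = img.copy()
        h, w = img.shape
        for i in range(tiles):
            for j in range(tiles):
                ys = slice(i * h // tiles, (i + 1) * h // tiles)
                xs = slice(j * w // tiles, (j + 1) * w // tiles)
                out[ys, xs] = equalize_hist(img[ys, xs], levels)
        return out
    ```

    Per-tile equalization stretches contrast locally, which is what lifts low-contrast objects out of an otherwise flat thermal scene; production implementations also clip the histogram and interpolate between tiles to avoid block artifacts.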

  2. Theory and applications of smart cameras

    CERN Document Server

    2016-01-01

    This book presents an overview of smart camera systems, considering practical applications but also reviewing fundamental aspects of the underlying technology.  It introduces in a tutorial style the principles of sensing and signal processing, and also describes topics such as wireless connection to the Internet of Things (IoT), which is expected to be the biggest market for smart cameras. It is an excellent guide to the fundamentals of smart camera technology, and the chapters complement each other well, as the authors have worked as a team under the auspices of the GFP (Global Frontier Project), the largest-scale funded research in Korea.  This is the third of three books based on the Integrated Smart Sensors research project, which describe the development of innovative devices, circuits, and system-level enabling technologies.  The aim of the project was to develop common platforms on which various devices and sensors can be loaded, and to create systems offering significant improvements in information processi...

  3. CCD characterization for a range of color cameras

    NARCIS (Netherlands)

    Withagen, P.J.; Groen, F.C.A.; Schutte, K.

    2005-01-01

    CCD cameras are widely used for remote sensing and image processing applications. However, most cameras are produced to create nice images, not to do accurate measurements. Post processing operations such as gamma adjustment and automatic gain control are incorporated in the camera. When a (CCD)

  4. A generic model for camera based intelligent road crowd control ...

    African Journals Online (AJOL)

    This research proposes a model for intelligent traffic flow control implementing camera-based surveillance and a feedback system. A series of cameras is set at a minimum of three signals ahead of the target junction. The complete software system is developed to help integrate the multiple cameras on the road as feedback to ...

  5. MISR L1B3 Radiometric Camera-by-camera Cloud Mask Product subset for the RICO region V004

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the Radiometric camera-by-camera Cloud Mask dataset over the RICO region. It is used to determine whether a scene is classified as clear or...

  6. Modeling and simulation of gamma camera

    International Nuclear Information System (INIS)

    Singh, B.; Kataria, S.K.; Samuel, A.M.

    2002-08-01

    Simulation techniques play a vital role in the design of sophisticated instruments and in the training of operating and maintenance staff. Gamma camera systems have been used for functional imaging in nuclear medicine. Functional images are derived from the external counting of a gamma-emitting radioactive tracer that, after introduction into the body, mimics the behavior of a native biochemical compound. The position-sensitive detector yields the coordinates of the gamma ray interaction with the detector, which are used to estimate the point of gamma ray emission within the tracer distribution space. This advanced imaging device is thus dependent on the performance of algorithms for coordinate computation, estimation of the point of emission, generation of the image and display of the image data. Contemporary systems also have protocols for quality control and clinical evaluation of imaging studies. Simulating this processing leads to an understanding of basic camera design problems. This report describes a PC-based package for the design and simulation of a gamma camera, along with options for simulating data acquisition and quality control of imaging studies. Image display and data processing, the other options implemented in SIMCAM, will be described in separate reports (under preparation). Gamma camera modeling and simulation in SIMCAM has preset configurations of the design parameters for various sizes of crystal detector, with the option to pack the PMTs on a hexagonal or square lattice. Different algorithms for coordinate computation and spatial distortion removal are allowed, in addition to simulation of the energy correction circuit. The user can simulate different static, dynamic, MUGA and SPECT studies. The acquired/simulated data is processed for quality control and clinical evaluation of the imaging studies. Results show that the program can be used to assess these performances. Also the variations in performance parameters can be assessed due to the induced
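    The coordinate computation such a simulator must model is, in the classic Anger camera, a signal-weighted centroid over the photomultiplier array. A minimal sketch of that Anger arithmetic (an illustration of the standard principle, not the SIMCAM code):

    ```python
    def anger_position(pmt_signals, pmt_positions):
        """Anger arithmetic: estimate the scintillation position as the
        signal-weighted centroid of the PMT coordinates,
        x = sum(s_i * x_i) / sum(s_i), and likewise for y."""
        total = sum(pmt_signals)
        x = sum(s * p[0] for s, p in zip(pmt_signals, pmt_positions)) / total
        y = sum(s * p[1] for s, p in zip(pmt_signals, pmt_positions)) / total
        return x, y
    ```

    Because light spread and PMT gain variations bias this centroid, real systems (and the simulator's options) add spatial distortion removal and energy correction on top of it.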

  7. Using a laser scanning camera for reactor inspection

    International Nuclear Information System (INIS)

    Armour, I.A.; Adrain, R.S.; Klewe, R.C.

    1984-01-01

    Inspection of nuclear reactors is normally carried out using TV or film cameras. There are, however, several areas where these cameras show considerable shortcomings. To overcome these difficulties, laser scanning cameras have been developed. This type of camera can be used for general visual inspection as well as for the provision of high-resolution video images with high-ratio on- and off-axis zoom capability. In this paper, we outline the construction and operation of a laser scanning camera, give examples of how it has been used in various power stations, and indicate future potential developments. (author)

  8. Methods for identification of images acquired with digital cameras

    Science.gov (United States)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    From the court we were asked whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine if a specific image has been made with a camera: defects in CCDs, file formats that are used, noise introduced by the pixel arrays and watermarking in images used by the camera manufacturer.
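    Of the traces listed above, the noise introduced by the pixel array is the one most exploited in later forensic work (sensor pattern noise, or PRNU, fingerprinting): images from the same sensor share a correlated noise residual. The sketch below illustrates that general idea only; a crude mean filter stands in for the wavelet denoisers normally used, and all names are the author's assumptions, not the paper's method.

    ```python
    import numpy as np

    def noise_residual(img, k=3):
        """Very rough denoising residual: image minus a k x k mean filter."""
        pad = k // 2
        p = np.pad(img.astype(float), pad, mode="edge")
        smooth = sum(
            p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            for dy in range(k) for dx in range(k)
        ) / (k * k)
        return img - smooth

    def residual_correlation(res_a, res_b):
        """Normalised cross-correlation of two residuals; a high value
        suggests a shared sensor pattern."""
        a = res_a - res_a.mean()
        b = res_b - res_b.mean()
        return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
    ```

    Two images from the same camera correlate strongly through their shared pattern, while images from different sensors correlate near zero; a threshold on this statistic is what turns the trace into evidence.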

  9. Presence capture cameras - a new challenge to the image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to the market, and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, the technical solutions have only just passed the prototyping phase and vary widely. Presence capture cameras face the same quality issues as previous generations of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones and, especially, technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features remain valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range form the basis of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors, and new quality features can be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how should the stitching be validated? The work describes the quality factors which remain valid in presence capture cameras and defines their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view. The work contains considerations of how well current measurement methods can be used with presence capture cameras.

  10. Initial laboratory evaluation of color video cameras: Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  11. Analisis dan Perancangan Aplikasi Basis Data Penilaian Kinerja Karyawan menggunakan Metode 360-Degree Feedback Berbasis Web pada Pt Ifs Solutions Indonesia

    Directory of Open Access Journals (Sweden)

    Choirul Huda

    2013-06-01

    Full Text Available The purpose of this study is to analyze and design a database application for the assessment of employee performance to suit the needs of PT IFS Solutions Indonesia. This system creates and provides information quickly and accurately, with well-integrated data, so it can help the company carry out analysis and weigh considerations for decision-making. The research methodology used includes library research, interviews with the company's HR staff to obtain information about the system to be designed, and database design by creating conceptual, logical and physical models. After that, application design includes designing the DFD, menu structure, STD, and user interface. The result achieved is a web-based database application that helps decision-makers by making employee information more accurate, easier to use, and quickly available when needed. The database application can help the company manage employees, make decisions, and produce better work.

  12. From Seven Years to 360 Degrees: Primitive Accumulation, the Social Common, and the Contractual Lockdown of Recording Artists at the Threshold of Digitalization

    Directory of Open Access Journals (Sweden)

    Matt Stahl

    2011-11-01

    Full Text Available This article examines the apparent paradox of the persistence of long-term employment contracts for cultural industry ‘talent’ in the context of broader trends toward short-term, flexible employment. While aspirants are numberless, bankable talent is in short supply. Long-term talent contracts appear to embody a durable axiom in employment: labor shortage favors employees. The article approaches this axiom through the lens of recent reconsiderations of the concept of primitive accumulation. In the case of employment, this concept highlights employers’ impetus to transcend legal and customary barriers to and limits on their capacity to capture and compel creative labor, and to appropriate the products of contracted creative labor. The article supports this argument through the analysis of contests between Los Angeles-based recording artists and record companies over the California and federal laws that govern their power and property relations. These struggles reveal a pattern of attempts by record companies to overcome or change laws that limit their power to control, compel and dispossess recording artists. The article suggests that as contractual forms change under digitalization, familiar political dynamics continue to characterize the relationships between recording artists and the companies that depend on their labor and output.

  13. Use of cameras for monitoring visibility impairment

    Science.gov (United States)

    Malm, William; Cismoski, Scott; Prenni, Anthony; Peters, Melanie

    2018-02-01

    Webcams and automated, color photography cameras have been routinely operated in many U.S. national parks and other federal lands as far back as 1988, with a general goal of meeting interpretive needs within the public lands system and communicating effects of haze on scenic vistas to the general public, policy makers, and scientists. Additionally, it would be desirable to extract quantifiable information from these images to document how visibility conditions change over time and space and to further reflect the effects of haze on a scene, in the form of atmospheric extinction, independent of changing lighting conditions due to time of day, year, or cloud cover. Many studies have demonstrated a link between image indexes and visual range or extinction in urban settings where visibility is significantly degraded and where scenes tend to be gray and devoid of color. In relatively clean, clear atmospheric conditions, clouds and lighting conditions can sometimes affect the image radiance field as much or more than the effects of haze. In addition, over the course of many years, cameras have been replaced many times as technology improved or older systems wore out, and therefore camera image pixel density has changed dramatically. It is shown that gradient operators are very sensitive to image resolution while contrast indexes are not. Furthermore, temporal averaging and time of day restrictions allow for developing quantitative relationships between atmospheric extinction and contrast-type indexes even when image resolution has varied over time. Temporal averaging effectively removes the variability of visibility indexes associated with changing cloud cover and weather conditions, and changes in lighting conditions resulting from sun angle effects are best compensated for by restricting averaging to only certain times of the day.
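As a rough illustration of the approach summarized above, the fragment below computes a simple Michelson-style contrast index between a dark scene feature and the horizon sky and averages it over midday hours only. The index definition, the hour window, and the sample values are illustrative assumptions, not the authors' exact procedure.

```python
# Hedged sketch of a contrast-type visibility index with temporal
# averaging; the index form and midday restriction are assumptions.

def contrast_index(target_radiance, horizon_radiance):
    """Michelson-style contrast between a dark scene feature and the
    horizon sky; values closer to 0 indicate hazier conditions."""
    return (horizon_radiance - target_radiance) / (horizon_radiance + target_radiance)

def daily_mean_contrast(samples, start_hour=10, end_hour=14):
    """Average the index over midday hours only, to suppress sun-angle
    effects; samples are (hour, target, horizon) tuples for one day."""
    vals = [contrast_index(t, h) for hour, t, h in samples
            if start_hour <= hour < end_hour]
    return sum(vals) / len(vals) if vals else None

samples = [(9, 40.0, 200.0), (11, 50.0, 210.0), (12, 55.0, 205.0), (15, 45.0, 190.0)]
print(daily_mean_contrast(samples))  # ~0.596: only the two midday samples count
```

Averaging many such daily values, as the abstract describes, would further smooth out cloud-cover variability before relating the index to atmospheric extinction.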

  14. OCAMS: The OSIRIS-REx Camera Suite

    Science.gov (United States)

    Rizk, B.; Drouet d'Aubigny, C.; Golish, D.; Fellows, C.; Merrill, C.; Smith, P.; Walker, M. S.; Hendershot, J. E.; Hancock, J.; Bailey, S. H.; DellaGiustina, D. N.; Lauretta, D. S.; Tanner, R.; Williams, M.; Harshman, K.; Fitzgibbon, M.; Verts, W.; Chen, J.; Connors, T.; Hamara, D.; Dowd, A.; Lowman, A.; Dubin, M.; Burt, R.; Whiteley, M.; Watson, M.; McMahon, T.; Ward, M.; Booher, D.; Read, M.; Williams, B.; Hunten, M.; Little, E.; Saltzman, T.; Alfred, D.; O'Dougherty, S.; Walthall, M.; Kenagy, K.; Peterson, S.; Crowther, B.; Perry, M. L.; See, C.; Selznick, S.; Sauve, C.; Beiser, M.; Black, W.; Pfisterer, R. N.; Lancaster, A.; Oliver, S.; Oquest, C.; Crowley, D.; Morgan, C.; Castle, C.; Dominguez, R.; Sullivan, M.

    2018-02-01

    The OSIRIS-REx Camera Suite (OCAMS) will acquire images essential to collecting a sample from the surface of Bennu. During proximity operations, these images will document the presence of satellites and plumes, record spin state, enable an accurate model of the asteroid's shape, and identify any surface hazards. They will confirm the presence of sampleable regolith on the surface, observe the sampling event itself, and image the sample head in order to verify its readiness to be stowed. They will document Bennu's history as an example of early solar system material, as a microgravity body with a planetesimal size-scale, and as a carbonaceous object. OCAMS is fitted with three cameras. The MapCam will record color images of Bennu as a point source on approach to the asteroid in order to connect Bennu's ground-based point-source observational record to later higher-resolution surface spectral imaging. The SamCam will document the sample site before, during, and after it is disturbed by the sample mechanism. The PolyCam, using its focus mechanism, will observe the sample site at sub-centimeter resolutions, revealing surface texture and morphology. While their imaging requirements divide naturally between the three cameras, they preserve a strong degree of functional overlap. OCAMS and the other spacecraft instruments will allow the OSIRIS-REx mission to collect a sample from a microgravity body on the same visit during which it was first optically acquired from long range, a useful capability as humanity reaches out to explore near-Earth, Main-Belt and Jupiter Trojan asteroids.

  15. Testing of a Commercial CCD Camera

    Science.gov (United States)

    Tulsee, Taran

    2000-01-01

    The results of the examination and testing of a commercial CCD camera designed for use by amateur astronomers and university astronomy laboratory courses are presented. The characteristics of the CCD chip are presented in graphical and tabular form. Individual and averaged bias frames are discussed. Dark frames were taken and counts are presented as a function of time. Flat field and other images were used to identify and locate bad pixel columns as well as pixels which vary significantly from the mean pixel sensitivity.

  16. Computational cameras for moving iris recognition

    Science.gov (United States)

    McCloskey, Scott; Venkatesha, Sharath

    2015-05-01

    Iris-based biometric identification is increasingly used for facility access and other security applications. Like all methods that exploit visual information, however, iris systems are limited by the quality of captured images. Optical defocus due to a small depth of field (DOF) is one such challenge, as is the acquisition of sharply-focused iris images from subjects in motion. This manuscript describes the application of computational motion-deblurring cameras to the problem of moving iris capture, from the underlying theory to system considerations and performance data.

  17. Collimator trans-axial tomographic scintillation camera

    International Nuclear Information System (INIS)

    Jaszczak, R.J.

    1977-01-01

    A collimator is provided for a scintillation camera system in which a detector precesses in an orbit about a patient. The collimator is designed to have high resolution and lower sensitivity with respect to radiation traveling in paths lying wholly within planes perpendicular to the cranial-caudal axis of the patient. The collimator has high sensitivity and lower resolution to radiation traveling in other planes. Variances in resolution and sensitivity are achieved by altering the length, spacing or thickness of the septa of the collimator.

  18. Compact optical technique for streak camera calibration

    International Nuclear Information System (INIS)

    Bell, Perry; Griffith, Roger; Hagans, Karla; Lerche, Richard; Allen, Curt; Davies, Terence; Janson, Frans; Justin, Ronald; Marshall, Bruce; Sweningsen, Oliver

    2004-01-01

    To produce accurate data from optical streak cameras requires accurate temporal calibration sources. We have reproduced an older technology for generating optical timing marks that had been lost due to component availability. Many improvements have been made which allow the modern units to service a much larger need. Optical calibrators are now available that produce optical pulse trains of 780 nm wavelength light at frequencies ranging from 0.1 to 10 GHz, with individual pulse widths of approximately 25 ps full width half maximum. Future plans include the development of single units that produce multiple frequencies to cover a wide temporal range, and that are fully controllable via an RS232 interface.
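As a sketch of how a known-frequency pulse train calibrates a time axis (the abstract does not spell out the procedure), the fragment below fits a linear time base to the pixel positions of the timing marks by least squares. The mark positions and the assumption of a perfectly linear sweep are illustrative.

```python
# Hedged sketch: calibrating a streak camera's time axis from an
# optical pulse train of known frequency. Mark positions and the
# linear time-base model are illustrative assumptions.

def fit_time_base(pixel_positions, frequency_hz):
    """Least-squares fit t = a*p + b mapping mark centroids (pixels)
    to their known times (s); returns (a, b), a in s/pixel."""
    n = len(pixel_positions)
    times = [i / frequency_hz for i in range(n)]
    mp = sum(pixel_positions) / n
    mt = sum(times) / n
    a = sum((p - mp) * (t - mt) for p, t in zip(pixel_positions, times)) / \
        sum((p - mp) ** 2 for p in pixel_positions)
    b = mt - a * mp
    return a, b

# 1 GHz pulse train -> marks 1 ns apart; suppose marks land 50 px apart
marks = [100.0, 150.0, 200.0, 250.0]
a, b = fit_time_base(marks, 1e9)
print(a)  # ~2e-11 s/pixel, i.e. 20 ps per pixel
```

A real sweep is usually not perfectly linear, so in practice a higher-order polynomial would be fitted over many marks; the linear case shows the principle.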

  19. Compact optical technique for streak camera calibration

    Science.gov (United States)

    Bell, Perry; Griffith, Roger; Hagans, Karla; Lerche, Richard; Allen, Curt; Davies, Terence; Janson, Frans; Justin, Ronald; Marshall, Bruce; Sweningsen, Oliver

    2004-10-01

    To produce accurate data from optical streak cameras requires accurate temporal calibration sources. We have reproduced an older technology for generating optical timing marks that had been lost due to component availability. Many improvements have been made which allow the modern units to service a much larger need. Optical calibrators are now available that produce optical pulse trains of 780 nm wavelength light at frequencies ranging from 0.1 to 10 GHz, with individual pulse widths of approximately 25 ps full width half maximum. Future plans include the development of single units that produce multiple frequencies to cover a wide temporal range, and that are fully controllable via an RS232 interface.

  20. Digital camera resolution and proximal caries detection.

    Science.gov (United States)

    Prapayasatok, S; Janhom, A; Verochana, K; Pramojanee, S

    2006-07-01

    To evaluate the diagnostic accuracy of proximal caries detection from digitized film images captured by a digital camera at different resolution settings. Twenty-five periapical radiographs of 50 premolar and 25 molar teeth were photographed using a digital camera, Sony Cyber-shot, DSC-S75 at three different resolution settings: 640 x 480, 1280 x 960 and 1600 x 1200. Seventy-five digital images were transferred to a computer, saved and opened using ACDSee software. In addition, a PowerPoint slide was made from each digital image. Five observers scored three groups of images (the films, the displayed 1:1 digital images on the ACDSee software, and the PowerPoint slides) for the existence of proximal caries using a 5-point confidence scale, and the depth of caries on a 4-point scale. Ground sections of the teeth were used as the gold standard. Az values under the receiver operating characteristic (ROC) curve of each group of images and at different resolutions were compared using the Friedman and Wilcoxon signed rank tests. Mean different values between the lesions' depth interpreted by the observers and that of the gold standard were analysed. Films showed the highest Az values. Only the 1280 x 960 images on the ACDSee software showed no significant difference of the Az value from the films (P=0.28). The digital images from three resolution settings on the PowerPoint slides showed no significant differences, either among each other or between them and the films. For caries depth, the 1280 x 960 images showed lower values of mean difference in enamel lesions compared with the other two resolution groups. This study showed that in order to digitize conventional films, it was not necessary to use the highest camera resolution setting to achieve high diagnostic accuracy for proximal caries detection. The 1280 x 960 resolution setting of the digital camera demonstrated comparable diagnostic accuracy with film and was adequate for digitizing radiographs for caries

  1. Clinical applications with the HIDAC positron camera

    Science.gov (United States)

    Frey, P.; Schaller, G.; Christin, A.; Townsend, D.; Tochon-Danguy, H.; Wensveen, M.; Donath, A.

    1988-06-01

    A high density avalanche chamber (HIDAC) positron camera has been used for positron emission tomographic (PET) imaging in three different human studies, including patients presenting with: (I) thyroid diseases (124 cases); (II) clinically suspected malignant tumours of the pharynx or larynx (ENT) region (23 cases); and (III) clinically suspected primary malignant and metastatic tumours of the liver (9 cases, 19 PET scans). The positron emitting radiopharmaceuticals used for the three studies were Na 124I (4.2 d half-life) for the thyroid, 55Co-bleomycin (17.5 h half-life) for the ENT region and 68Ga-colloid (68 min half-life) for the liver. Tomographic imaging was performed: (I) 24 h after oral Na 124I administration to the thyroid patients, (II) 18 h after intravenous administration of 55Co-bleomycin to the ENT patients and (III) 20 min following the intravenous injection of 68Ga-colloid to the liver tumour patients. Three different imaging protocols were used with the HIDAC positron camera to perform appropriate tomographic imaging in each patient study. Promising results were obtained in all three studies, particularly in tomographic thyroid imaging, where the PET technique makes a significant clinical contribution to diagnosis and therapy planning. In the other two PET studies encouraging results were obtained for the detection and precise localisation of malignant tumour disease, including an in vivo estimate of the functional liver volume based on the reticulo-endothelial system (RES) of the liver, and the three-dimensional display of liver PET data using shaded graphics techniques. The clinical significance of the overall results obtained in both the ENT and the liver PET study, however, is still uncertain, and the respective role of PET as a new imaging modality in these applications is not yet clearly established. Appreciating the clinical impact made by PET in liver and ENT malignant tumour staging requires further investigation

  2. Thermal imaging cameras characteristics and performance

    CERN Document Server

    Williams, Thomas

    2009-01-01

    The ability to see through smoke and mist and the ability to use the variances in temperature to differentiate between targets and their backgrounds are invaluable in military applications and have become major motivators for the further development of thermal imagers. As the potential of thermal imaging is more clearly understood and the cost decreases, the number of industrial and civil applications being exploited is growing quickly. In order to evaluate the suitability of particular thermal imaging cameras for particular applications, it is important to have the means to specify and measure

  3. Women's Creation of Camera Phone Culture

    Directory of Open Access Journals (Sweden)

    Dong-Hoo Lee

    2005-01-01

    Full Text Available A major aspect of the relationship between women and the media is the extent to which the new media environment is shaping how women live and perceive the world. It is necessary to understand, in a concrete way, how the new media environment is articulated to our gendered culture, how the symbolic or physical forms of the new media condition women’s experiences, and the degree to which a ‘post-gendered re-codification’ can be realized within a new media environment. This paper intends to provide an ethnographic case study of women’s experiences with camera phones, examining the extent to which these experiences recreate or reconstruct women’s subjectivity or identity. By taking a close look at the ways in which women utilize and appropriate the camera phone in their daily lives, it focuses not only on women’s cultural practices in making meanings but also on their possible effect in the deconstruction of gendered techno-culture.

  4. Collimator trans-axial tomographic scintillation camera

    International Nuclear Information System (INIS)

    Jaszczak, Ronald J.

    1979-01-01

    An improved collimator is provided for a scintillation camera system that employs a detector head for transaxial tomographic scanning. One object of this invention is to significantly reduce the time required to obtain statistically significant data in radioisotope scanning using a scintillation camera. Another is to increase the rate of acceptance of radioactive events that contribute to the positional information obtainable from a radiation source of known strength, without sacrificing spatial resolution. A further object is to reduce the necessary scanning time without degrading the images obtained. The collimator described has apertures defined by septa of different radiation transparency. The septa are aligned to provide greater radiation shielding from gamma radiation travelling within planes perpendicular to the cranial-caudal axis and less radiation shielding from gamma radiation travelling within other planes. Septa may also define apertures such that the collimator provides high spatial resolution of gamma rays travelling within planes perpendicular to the cranial-caudal axis and directed at the detector, and high radiation sensitivity to gamma radiation travelling within other planes and directed at the detector. (LL)

  5. Camera-cinematography of the heart

    International Nuclear Information System (INIS)

    Adam, W.E.; Meyer, G.; Bitter, F.; Kampmann, H.; Bargon, G.; Stauch, M.; Ulm Univ.

    1975-01-01

    By 'camera-cinematography' of the heart, we mean an isotope method which permits detailed observation of cardiac mechanics without the use of a catheter. All that is necessary is an intravenous injection of 10 to 15 mCi of 99mTc human serum albumin, followed after ten minutes by a five to ten minute period of observation with a scintillation camera. At this time the isotope has become distributed in the blood. Variations in the precordial impulses correspond with intra-cardiac changes of blood volume during a cardiac cycle. Analysis of the R-wave provides adequate information on cyclical volume changes in limited portions of the heart. This is achieved by a monitor with a pseudo-3-dimensional display; contraction and relaxation of the myocardium can be shown for any chosen longitudinal or horizontal diameter of the heart. Our programme allows simultaneous presentation of the movement of any point on the myocardium as a time-activity curve. The method is recommended as an addition to chest radiography, heart screening or cardiac kymography before carrying out cardiac catheterisation. (orig.) [de

  6. Gamma camera based FDG PET in oncology

    International Nuclear Information System (INIS)

    Park, C. H.

    2002-01-01

    Positron Emission Tomography (PET) was introduced as a research tool in the 1970s, and it took about 20 years before PET became a useful clinical imaging modality. Insurance coverage for PET procedures in the USA in the 1990s was, I believe, the turning point for this progress. Initially PET was used in neurology, but recently more than 80% of PET procedures have been oncological applications. I firmly believe that in the 21st century one cannot manage cancer patients properly without PET, and that PET is a very important medical imaging modality in basic and clinical sciences. PET is grouped into two categories: conventional (c) and gamma camera based (CB) PET. CB PET, utilizing dual-head gamma cameras and commercially available FDG, is more readily available to many medical centers at low cost to patients. In fact, there are more CB PET systems in operation than cPET systems in the USA. CB PET is inferior to cPET in performance, but clinical studies in oncology are feasible without expensive infrastructure such as staffing, rooms and equipment. At Ajou University Hospital, CB PET was installed in late 1997, for the first time in Korea as well as in Asia, and the system has been used successfully and effectively in oncological applications. Ours was the fourth PET operation in Korea, and I believe this may have been instrumental in getting other institutions interested in clinical PET. The following is a brief description of our clinical experience with FDG CB PET in oncology.

  7. Vertically Integrated Edgeless Photon Imaging Camera

    Energy Technology Data Exchange (ETDEWEB)

    Fahim, Farah [Fermilab; Deptuch, Grzegorz [Fermilab; Shenai, Alpana [Fermilab; Maj, Piotr [AGH-UST, Cracow; Kmon, Piotr [AGH-UST, Cracow; Grybos, Pawel [AGH-UST, Cracow; Szczygiel, Robert [AGH-UST, Cracow; Siddons, D. Peter [Brookhaven; Rumaiz, Abdul [Brookhaven; Kuczewski, Anthony [Brookhaven; Mead, Joseph [Brookhaven; Bradford, Rebecca [Argonne; Weizeorick, John [Argonne

    2017-01-01

    The Vertically Integrated Photon Imaging Chip - Large (VIPIC-L) is a large-area, small-pixel (65 μm), 3D-integrated, photon-counting ASIC with zero-suppressed or full-frame dead-time-less data readout. It features a data throughput of 14.4 Gbps per chip, with a full-frame readout speed of 56 kframes/s in imaging mode. VIPIC-L contains a 192 x 192 pixel array; the total size of the chip is 1.248 cm x 1.248 cm with only a 5 μm periphery, and it contains about 120M transistors. A 1.3M-pixel camera module will be developed by arranging a 6 x 6 array of 3D VIPIC-Ls bonded to a large-area silicon sensor on the analog side and to a readout board on the digital side. The readout board hosts a bank of FPGAs, one per VIPIC-L, to allow processing of up to 0.7 Tbps of raw data produced by the camera.

  8. Performance and quality control of scintillation cameras

    International Nuclear Information System (INIS)

    Moretti, J.L.; Iachetti, D.

    1983-01-01

    Acceptance testing, quality control and quality assurance of gamma cameras are part of diagnostic quality in clinical practice. Several parameters are required to achieve good diagnostic reliability: intrinsic spatial resolution, spatial linearity, uniformity, energy resolution, count-rate characteristics, and multiple-window spatial analysis. Each parameter was measured and also estimated by a test that is easy to implement in routine practice. The equipment required was a 4028 multichannel analyzer linked to a microcomputer, minicomputers, and a set of phantoms (parallel slits, diffusing phantom, orthogonal-hole transmission pattern). The gamma cameras studied were: CGR 3400, CGR 3420, G.E. 4000, Siemens ZLC 75 and large-field Philips. Several tests proposed by N.E.M.A. and W.H.O. need improvement, as the spatial determinations made during multiple-window distortion measurements are too localized. Image contrast needs to be monitored at high counting rates. This study shows the need to avoid single-point determinations, and the value of reporting sets of values of the same parameter over the whole field, together with mean values and their standard deviation [fr

  9. Scalable IC Platform for Smart Cameras

    Directory of Open Access Journals (Sweden)

    Harry Broers

    2005-08-01

    Full Text Available Smart cameras are among the emerging new fields of electronics. The points of interest are in the application areas, software and IC development. In order to reduce cost, it is worthwhile to invest in a single architecture that can be scaled for the various application areas in performance (and resulting power consumption. In this paper, we show that the combination of an SIMD (single-instruction multiple-data processor and a general-purpose DSP is very advantageous for the image processing tasks encountered in smart cameras. While the SIMD processor gives the very high performance necessary by exploiting the inherent data parallelism found in the pixel crunching part of the algorithms, the DSP offers a friendly approach to the more complex tasks. The paper continues to motivate that SIMD processors have very convenient scaling properties in silicon, making the complete, SIMD-DSP architecture suitable for different application areas without changing the software suite. Analysis of the changes in power consumption due to scaling shows that for typical image processing tasks, it is beneficial to scale the SIMD processor to use the maximum level of parallelism available in the algorithm if the IC supply voltage can be lowered. If silicon cost is of importance, the parallelism of the processor should be scaled to just reach the desired performance given the speed of the silicon.
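The voltage-scaling argument in the closing sentences can be illustrated with the textbook dynamic-power model P ≈ C·V²·f. The capacitance, voltage, and frequency numbers below are purely illustrative, not figures from the paper.

```python
# Hedged sketch of the SIMD scaling argument using the standard
# dynamic-power model P ≈ C·V²·f; all numbers are illustrative.

def dynamic_power(c_eff, v, f):
    """Dynamic switching power for effective capacitance c_eff (F),
    supply voltage v (V), and clock frequency f (Hz)."""
    return c_eff * v * v * f

# Baseline: some number of SIMD lanes at 100 MHz and 1.2 V
base = dynamic_power(c_eff=1.0, v=1.2, f=100e6)

# Doubling the SIMD lanes doubles switched capacitance but halves the
# clock needed for the same throughput; the slower clock in turn
# permits a lower supply voltage.
scaled = dynamic_power(c_eff=2.0, v=0.9, f=50e6)

print(scaled / base)  # < 1: same throughput at lower power
```

This is why the paper concludes that maximizing parallelism pays off only when the supply voltage can actually be lowered; at fixed voltage the doubled capacitance and halved frequency cancel exactly.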

  10. CCD Camera Detection of HIV Infection.

    Science.gov (United States)

    Day, John R

    2017-01-01

    Rapid and precise quantification of the infectivity of HIV is important for molecular virologic studies, as well as for measuring the activities of antiviral drugs and neutralizing antibodies. An indicator cell line, a CCD camera, and image-analysis software are used to quantify HIV infectivity. The cells of the P4R5 line, which express the receptors for HIV infection as well as β-galactosidase under the control of the HIV-1 long terminal repeat, are infected with HIV and then incubated 2 days later with X-gal to stain the infected cells blue. Digital images of monolayers of the infected cells are captured using a high resolution CCD video camera and a macro video zoom lens. A software program is developed to process the images and to count the blue-stained foci of infection. The described method allows for the rapid quantification of the infected cells over a wide range of viral inocula, with reproducibility and accuracy, at relatively low cost.
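A minimal stand-in for the focus-counting step (the actual software used in the protocol is not described in detail here) is thresholding followed by connected-component labeling. The toy image and threshold below are illustrative.

```python
# Hedged sketch: counting stained foci by thresholding and 4-connected
# component labeling; a generic stand-in for the image-analysis
# program described above, not its actual implementation.

def count_foci(image, threshold):
    """Count 4-connected components of pixels at or above threshold."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and not seen[y][x]:
                count += 1
                stack = [(y, x)]          # flood-fill one focus
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and \
                       image[cy][cx] >= threshold and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return count

# Toy frame: two separate stained regions
frame = [
    [0, 9, 9, 0, 0],
    [0, 9, 0, 0, 8],
    [0, 0, 0, 8, 8],
]
print(count_foci(frame, threshold=5))  # 2
```

In practice the blue stain would first be isolated in a color channel and small noise specks filtered by a minimum-area criterion, but the counting principle is the same.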

  11. Method for Adjusting the Anger Camera

    International Nuclear Information System (INIS)

    Oberhausen, E.; Neumann, K. J.; Schiffler, W.

    1969-01-01

    The uniformity of the Anger camera is a basic condition for the interpretation of scintiphotos and its importance increases with the accuracy of the evaluation method used for the interpretation of the scintiphotos. This is especially true for the use of three-dimensional multichannel-analysers with quantitative data output. With the standard method, used until now, all photomultipliers are tuned to give the same count-rate when the crystal close to the centre of each photomultiplier is irradiated by a collimated source. If thereafter the crystal is irradiated by a uniform flux of gamma rays, the density of the dots increases from the centre of the scintiphoto towards the outer edge. This means that uniformity has not been achieved. In clinical use hot spots would be simulated for those parts of an organ that are visualized at the outer part of the crystal. By using the magnetic core memory of a multichannel analyser for quantitative evaluation of the scintiphotos, we developed a method by which the photomultipliers are tuned with respect to their geometrical position. Thus, it is possible to achieve a uniformity within a few per cent over the whole crystal. The method and its theory are discussed. Data for the resolution of the Anger camera after this tuning process are given. (author)

  12. Evaluation of Red Light Camera Enforcement at Signalized Intersections

    Directory of Open Access Journals (Sweden)

    Abdulrahman AlJanahi

    2007-12-01

    Full Text Available The study attempts to find the effectiveness of adopting red light cameras in reducing red light violators. An experimental approach was adopted to investigate the use of red light cameras at signalized intersections in the Kingdom of Bahrain. The study locations were divided into three groups. The first group was related to the approaches monitored with red light cameras. The second group was related to approaches without red light cameras, but located within an intersection that had one of its approaches monitored with red light cameras. The third group was related to intersection approaches located at intersection without red light cameras (controlled sites. A methodology was developed for data collection. The data were then tested statistically by Z-test using proportion methods to compare the proportion of red light violations occurring at different sites. The study found that the proportion of red light violators at approaches monitored with red light cameras was significantly less than those at the controlled sites for most of the time. Approaches without red light cameras located within intersections having red light cameras showed, in general, fewer violations than controlled sites, but the results were not significant for all times of the day. The study reveals that red light cameras have a positive effect on reducing red light violations. However, these conclusions need further evaluations to justify their safe and economic use.
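The proportion comparison described above can be sketched as a standard two-proportion Z-test with a pooled standard error. The violation counts below are invented for illustration and are not the study's data.

```python
# Hedged sketch: a two-proportion Z-test of the kind used in the study,
# comparing violation rates at a camera-monitored approach and a
# control site; the counts are illustrative, not the study's data.
import math

def two_proportion_z(x1, n1, x2, n2):
    """Z statistic for H0: p1 == p2, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)          # pooled proportion under H0
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 12 violations in 1000 signal cycles (camera) vs 38 in 1000 (control)
z = two_proportion_z(12, 1000, 38, 1000)
print(round(z, 2))  # strongly negative: fewer violations at the camera site
```

A Z below about -1.96 would be significant at the 5% level (two-sided), matching the study's finding that monitored approaches had significantly fewer violations for most time periods.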

  13. Stitching images of dual-cameras onboard satellite

    Science.gov (United States)

    Jiang, Yonghua; Xu, Kai; Zhao, Ruishan; Zhang, Guo; Cheng, Kan; Zhou, Ping

    2017-06-01

    The way of installing dual-cameras on one satellite is adopted to further enlarge the imaging swath, thereby improving the efficiency of data capturing. In this case, stitching images of dual-cameras with high precision is a key step in the practical application. Due to the inadequate overlapping area of dual-cameras, stitching their images by classic methods may cause internal accuracy loss of the mosaic image. The reason is that classic methods estimate the geometric transformation of dual-cameras merely by a few unevenly distributed precise tie points in overlapping area of dual-cameras, which is similar to the case of using unevenly distributed ground control points (GCPs) in block adjustment. This paper proposed a new method to precisely stitch images of dual cameras without losing internal accuracy. First, a model was built to recover the relative geometric relation of dual-cameras and eliminate Charge-Coupled Device (CCD) distortions of each camera, then a virtual camera model depending on the calibrated geometric relation was adopted to achieve a seamless mosaic image. The panchromatic images of camera A and camera B onboard Yaogan-24 were collected as the experimental data. Experiment results show that the calibration accuracies of dual-cameras are better than 0.3 pixels, and the stitching accuracies can reach the sub-pixel level, ranging from 0.3 to 0.5 pixels. On the other hand, the positioning accuracies with GCPs of the mosaic image and of individual camera are better than 0.6 pixels and 0.5 pixels respectively, so the internal accuracy loss of the mosaic image only reaches 0.1 pixels, which can be neglected. This demonstrates that the proposed method can achieve seamless mosaic images without losing internal accuracy.

  14. Photometric Calibration of Consumer Video Cameras

    Science.gov (United States)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). 
To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to
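The end-to-end idea can be sketched as fitting a saturating response curve to sources of known brightness and inverting it to recover flux near the saturation level; the exponential model and the numbers below are illustrative assumptions, not the method of the NASA report.

```python
# Sketch: model a nonlinear camera response as a saturating exponential
# and invert it to recover incident flux from a measured signal.
# S_MAX and K are assumed calibration constants, not measured values.
import math

S_MAX, K = 255.0, 0.004        # assumed saturation level and gain

def response(flux):
    """Camera signal as a saturating function of incident flux."""
    return S_MAX * (1.0 - math.exp(-K * flux))

def invert(signal):
    """Recover flux from a measured sub-saturation signal."""
    return -math.log(1.0 - signal / S_MAX) / K

for flux in (50.0, 500.0, 1500.0):
    s = response(flux)
    print(f"flux {flux:7.1f} -> signal {s:6.1f} -> recovered {invert(s):7.1f}")
```

In the actual method the response curve is measured end-to-end through the whole imaging chain rather than assumed, which is what folds nonlinearity and distortions into the calibration automatically.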

  15. Motorcycle detection and counting using stereo camera, IR camera, and microphone array

    Science.gov (United States)

    Ling, Bo; Gibson, David R. P.; Middleton, Dan

    2013-03-01

    Detection, classification, and characterization are the key to enhancing motorcycle safety, motorcycle operations and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of auto fatalities. Although it has been an active research area for many years, motorcycle detection remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including a stereo camera, a thermal IR camera and a unidirectional microphone array. The thermal IR camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes, which often show as bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist, who can be easily windowed out in the stereo disparity map. If the motorcyclist is detected through 3D body recognition, the motorcycle is detected. Microphones are used to detect motorcycles, which often produce low frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize interference from background noise sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has excellent performance.

  16. Camera systems in human motion analysis for biomedical applications

    Science.gov (United States)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    Human Motion Analysis (HMA) systems have been one of the major interests among researchers in the fields of computer vision, artificial intelligence, and biomedical engineering and sciences. This is due to their wide and promising biomedical applications, namely, bio-instrumentation for human-computer interfacing and surveillance systems for monitoring human behaviour, as well as biomedical signal and image processing for diagnosis and rehabilitation applications. This paper provides an extensive review of the camera system of HMA and its taxonomy, including camera types, camera calibration and camera configuration. The review focuses on evaluating camera system considerations of the HMA system specifically for biomedical applications. This review is important as it provides guidelines and recommendations for researchers and practitioners in selecting a camera system of the HMA system for biomedical applications.

  17. A SPECT demonstrator-revival of a gamma camera

    International Nuclear Information System (INIS)

    Valastyan, I.; Kerek, A.; Molnar, J.; Novak, D.; Vegh, J.; Emri, M.; Tron, L.

    2006-01-01

    A gamma camera has been updated and converted to serve as a demonstrator for educational purposes. The gantry and the camera head were the only parts of the system that remained untouched. The main reason for this modernization was to increase the transparency of the gamma camera by partitioning the different logical building blocks of the system and thus providing access for inspection and improvements throughout the chain. New data acquisition and reconstruction software has been installed. By taking these measures, the camera is now used in education and also serves as a platform for tests of new hardware and software solutions. The camera is also used to demonstrate 3D (SPECT) imaging by collecting 2D projections from a rotatable cylindrical phantom. Since the camera head is not attached mechanically to the phantom, the effect of misalignment between the head and the rotation axis of the phantom can be studied.

  18. The Policy of Enforcement: Red Light Cameras and Racial Profiling

    OpenAIRE

    Eger, Robert J.; Fortner, C. Kevin; Slade, Catherine P.

    2015-01-01

    The article of record as published may be located at http://dx.doi.org/10.1177/1098611115586174 We explore the question of whether some of the often conflicting evidence of racial profiling can be cleared up using red light camera observations to measure racial disparities in traffic violations. Using data from cameras at intersections matched to census data, we find that although citations from the red light cameras are issued to a disproportionate number of minorities based o...

  19. Calibration of Multiple Fish-Eye Cameras Using a Wand

    OpenAIRE

    Fu, Qiang; Quan, Quan; Cai, Kai-Yuan

    2014-01-01

    Fish-eye cameras are becoming increasingly popular in computer vision, but their use for 3D measurement is limited partly due to the lack of an accurate, efficient and user-friendly calibration procedure. For such a purpose, we propose a method to calibrate the intrinsic and extrinsic parameters (including radial distortion parameters) of two/multiple fish-eye cameras simultaneously by using a wand under general motions. Thanks to the generic camera model used, the proposed calibration method...

  20. IR Camera Report for the 7 Day Production Test

    Energy Technology Data Exchange (ETDEWEB)

    Holloway, Michael Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-22

    The following report gives a summary of the IR camera performance results and data for the 7 day production run that occurred from 10 Sep 2015 through 16 Sep 2015. During this production run our goal was to see how well the camera performed its task of monitoring the target window temperature with our improved alignment procedure and emissivity measurements. We also wanted to see if the increased shielding would be effective in protecting the camera from damage and failure.

  1. Interview: Mr. Tevia Abrams, UNFPA Country Director for India.

    Science.gov (United States)

    1991-12-01

    The government of India set up a population program 25 years ago, yet the population is expected to surpass that of China in the near future. The current UN Population Fund (UNFPA) program for India covers the period 1991-95 with coordination, implementation, and evaluation. Improved services focus on states with high fertility and mortality, high infant mortality, self-reliance in contraceptive production, models for maternal health care and traditional health care, a national communication strategy, public awareness enhancement, and raising women's status through female literacy expansion and employment generation. UNFPA provides training, equipment, and contraceptives, and supports nongovernmental organization participation. The bulk of the $90 million cost of the program will come from UNFPA; maternal-child health, family planning (FP), and information, education, and communication (IEC) will receive the most funding. Ethnic and tribal areas will get attention under a decentralized scheme in accordance with the concept of a multicultural society where early age at marriage and high economic value of children are realities. The Ministry is responsible for IEC and FP targets and allocation of funds. Government institutes and universities carry out population research. The creation of an India POPIN, patterned after the Asia-Pacific Population Information Network, is under development under IEC activities. The status of women varies throughout India: in the state of Kerala literacy reaches 100%, and the birth rate of 19.8 per 1000 is below the national average of 30.5. In contrast, the states of Bihar and Rajasthan, with female literacy of 23% and 21%, respectively, have birth rates of 34.4 and 33.9 per 1000.

  2. Detector construction for a scintillation camera

    International Nuclear Information System (INIS)

    Ashe, J.B.

    1977-01-01

    An improved transducer construction for a scintillation camera in which a light conducting element is equipped with a layer of moisture impervious material is described. A scintillation crystal is thereafter positioned in optical communication with the moisture impervious layer and the remaining surfaces of the scintillation crystal are encompassed by a moisture shield. Affixing the moisture impervious layer to the light conducting element prior to attachment of the scintillation crystal reduces the requirement for mechanical strength in the moisture impervious layer and thereby allows a layer of reduced thickness to be utilized. Preferably, photodetectors are also positioned in optical communication with the light conducting element prior to positioning the scintillation crystal in contact with the impervious layer. 13 claims, 4 figures

  3. Operational experience with a CID camera system

    CERN Document Server

    Welsch, Carsten P; Burel, Bruno; Lefèvre, Thibaut

    2006-01-01

    In future high intensity, high energy accelerators particle losses must be minimized as activation of the vacuum chambers or other components makes maintenance and upgrade work time consuming and costly. It is imperative to have a clear understanding of the mechanisms that can lead to halo formation, and to have the possibility to test available theoretical models with an adequate experimental setup. Measurements based on optical transition radiation (OTR) provide an interesting opportunity for analyzing the transverse beam profile due to the fast time response and very good linearity of the signal with respect to the beam intensity. On the other hand, the dynamic range of typical acquisition systems as they are used in the CLIC test facility (CTF3) is typically limited and must be improved before these systems can be applied to halo measurements. One possibility for high dynamic range measurements is an innovative camera system based on charge injection device (CID) technology. With possible future measureme...

  4. Stop outbreak of SARS with infrared cameras

    Science.gov (United States)

    Wu, Yigang M.

    2004-04-01

    SARS (Severe Acute Respiratory Syndrome, commonly known as Atypical Pneumonia in mainland China) affected 8422 people and caused 918 deaths worldwide within half a year. The disease can be transmitted by respiratory droplets or by contact with a patient's respiratory secretions, which means it can spread very rapidly through public transportation via travelers carrying the syndrome. The challenge was to stop SARS carriers from traveling by trains, airplanes, coaches, etc. It is impractical with traditional oral thermometers or spot infrared thermometers to screen, within hours, the tens of travelers with elevated body temperature from thousands of normal travelers. A thermal imager with a temperature measurement function is a logical choice for this special application, although there are some limitations and drawbacks. This paper discusses the real SARS applications of industrial infrared cameras in China from April to July 2003.

  5. Smart Cameras for Remote Science Survey

    Science.gov (United States)

    Thompson, David R.; Abbey, William; Allwood, Abigail; Bekker, Dmitriy; Bornstein, Benjamin; Cabrol, Nathalie A.; Castano, Rebecca; Estlin, Tara; Fuchs, Thomas; Wagstaff, Kiri L.

    2012-01-01

    Communication with remote exploration spacecraft is often intermittent and bandwidth is highly constrained. Future missions could use onboard science data understanding to prioritize downlink of critical features [1], draft summary maps of visited terrain [2], or identify targets of opportunity for follow-up measurements [3]. We describe a generic approach to classify geologic surfaces for autonomous science operations, suitable for parallelized implementations in FPGA hardware. We map these surfaces with texture channels - distinctive numerical signatures that differentiate properties such as roughness, pavement coatings, regolith characteristics, sedimentary fabrics and differential outcrop weathering. This work describes our basic image analysis approach and reports an initial performance evaluation using surface images from the Mars Exploration Rovers. Future work will incorporate these methods into camera hardware for real-time processing.
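A minimal example of one such "texture channel" is a per-pixel local standard deviation map, which responds to surface roughness; this is a generic sketch on a synthetic image, not the authors' FPGA classifier.

```python
# Local standard-deviation texture map: rough regions of an image give
# a strong response, smooth regions give near zero (synthetic data).
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_std(img, w=5):
    """Standard deviation over each w x w window (valid region only)."""
    win = sliding_window_view(img, (w, w))
    return win.std(axis=(-1, -2))

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[:, 32:] = rng.normal(0.0, 1.0, size=(64, 32))   # "rough" right half
tex = local_std(img)
# The rough half yields a much stronger texture response
print(tex[:, :20].mean(), tex[:, -20:].mean())
```

A classifier would stack several such channels (roughness, orientation, scale) per pixel and label surface types from them.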

  6. CHAMP (Camera, Handlens, and Microscope Probe)

    Science.gov (United States)

    Mungas, Greg S.; Boynton, John E.; Balzer, Mark A.; Beegle, Luther; Sobel, Harold R.; Fisher, Ted; Klein, Dan; Deans, Matthew; Lee, Pascal; Sepulveda, Cesar A.

    2005-01-01

    CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As a robotic arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision rangefinding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. CHAMP was originally developed through the Mars Instrument Development Program (MIDP) in support of robotic field investigations, but may also find application in new areas such as robotic in-orbit servicing and maintenance operations associated with spacecraft and human operations. We overview CHAMP's instrument performance and basic design considerations below.

  7. Gain attenuation of gated framing camera

    International Nuclear Information System (INIS)

    Xiao Shali; Liu Shenye; Cao Zhurong; Li Hang; Zhang Haiying; Yuan Zheng; Wang Liwei

    2009-01-01

    The theoretical model of the framing camera's gain attenuation is analyzed. The exponential attenuation curve of the gain along the pulse propagation time is simulated. An experiment to measure the coefficient of gain attenuation based on the gain attenuation theory is designed. Experimental results show that the gain follows an exponential attenuation rule with a quotient of 0.0249 nm⁻¹; the attenuation coefficient of the pulse is 0.00356 mm⁻¹. The loss of the pulse propagating along the MCP stripline is the leading cause of gain attenuation. However, in the figure for a single stripline, the gain does not follow the exponential attenuation rule completely; instead, there is a gain increase at the stripline bottom, which is caused by reflection of the pulse. The reflectance is about 24.2%. Combining experiment and theory, the design of the stripline MCP can be improved to reduce gain attenuation. (authors)

  8. Using television cameras to measure emittance

    International Nuclear Information System (INIS)

    Ross, M.

    1984-01-01

    Since the luminosity in a linear collider depends on the horizontal and vertical emittance (ε_x, ε_y) as 1/√(ε_x ε_y), a possible method for improving the performance would be to decrease one or both of these numbers. Once this has been done, in a damping ring for example, great care must be taken to avoid effective emittance growth in the remainder of the collider. Therefore an effort should be made to measure ε (x and y) as accurately as possible, both during machine development and operationally. One technique used for measuring ε is to insert a luminescent screen in the path of the beam and measure the size of the spot of light made as the beam passes, with a television camera and some associated electronics. This has advantages over sampling-type techniques (such as wire scanners) because it provides full pulse-to-pulse two-dimensional information.
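The spot-size measurement reduces to taking second moments of the camera image; a minimal sketch on a synthetic Gaussian spot (the image and widths are made up for illustration):

```python
# RMS beam spot size from a 2-D intensity image: the weighted second
# moments of the light distribution give sigma_x and sigma_y in pixels.
import numpy as np

def rms_spot_size(img):
    """Return (sigma_x, sigma_y) of an intensity image, in pixels."""
    yy, xx = np.indices(img.shape)
    w = img.sum()
    cx, cy = (img * xx).sum() / w, (img * yy).sum() / w   # centroid
    sx = np.sqrt((img * (xx - cx) ** 2).sum() / w)
    sy = np.sqrt((img * (yy - cy) ** 2).sum() / w)
    return sx, sy

# Synthetic Gaussian spot with sigma_x = 8 px, sigma_y = 4 px
y, x = np.indices((200, 200))
img = np.exp(-((x - 100) ** 2 / (2 * 8**2) + (y - 100) ** 2 / (2 * 4**2)))
sx, sy = rms_spot_size(img)
print(f"sigma_x = {sx:.2f} px, sigma_y = {sy:.2f} px")
```

With a calibrated pixel scale and known beam optics at the screen, these RMS widths feed directly into the emittance estimate.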

  9. Collimated trans-axial tomographic scintillation camera

    International Nuclear Information System (INIS)

    1980-01-01

    The principal problem in trans-axial tomographic radioisotope scanning is the length of time required to obtain meaningful data. Patient movement and radioisotope migration during the scanning period can cause distortion of the image. The object of this invention is to reduce the scanning time without degrading the images obtained. A system is described in which a scintillation camera detector is moved to an orbit about the cranial-caudal axis relative to the patient. A collimator is used in which lead septa are arranged so as to admit gamma rays travelling perpendicular to this axis with high spatial resolution and those travelling in the direction of the axis with low spatial resolution, thus increasing the rate of acceptance of radioactive events to contribute to the positional information obtainable without sacrificing spatial resolution. (author)

  10. CHAMP - Camera, Handlens, and Microscope Probe

    Science.gov (United States)

    Mungas, G. S.; Beegle, L. W.; Boynton, J.; Sepulveda, C. A.; Balzer, M. A.; Sobel, H. R.; Fisher, T. A.; Deans, M.; Lee, P.

    2005-01-01

    CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As an arm-mounted imager, CHAMP supports stereo-imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision range-finding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. CHAMP is currently designed with a four-position filter wheel, so that color and black-and-white images can be obtained over the entire field of view; future designs will increase the number of filter positions to eight. Finally, CHAMP incorporates controlled white and UV illumination so that images can be obtained regardless of sun position, and any potentially fluorescent species can be identified so that the most astrobiologically interesting samples can be selected.

  11. Simplification of camera models without loss of precision

    Science.gov (United States)

    Shang, Yang; Li, You; He, Yan; Wang, Weihua; Yu, Qifeng

    2007-12-01

    Camera parameter redundancy and effects on the imaging process are analyzed based on a central perspective projection model with nonlinear lens distortion. By assigning some parameters' values or their relations in advance, seven kinds of simplified camera models are presented. The simplified models' validity is confirmed by simulated data and engineering applications. By using the simplified camera models, the methods and algorithms of videogrammetry can be simplified without loss of precision. The calculation becomes faster and more stable, and the solving condition requirements are reduced. These characteristics make the precision-preserving simplified camera models available for engineering applications.
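The full model that such simplifications start from, central perspective projection with radial lens distortion, can be sketched as follows; the parameter values are illustrative, not taken from the paper.

```python
# Central perspective (pinhole) projection with two-term radial lens
# distortion. Simplified models fix or couple some of these parameters.
import numpy as np

def project(X, f, cx, cy, k1, k2, R, t):
    """Project 3-D points X (N x 3) to pixel coordinates (N x 2)."""
    Xc = X @ R.T + t                       # world -> camera frame
    x = Xc[:, 0] / Xc[:, 2]                # normalized image coordinates
    y = Xc[:, 1] / Xc[:, 2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2       # radial distortion factor
    u = f * d * x + cx
    v = f * d * y + cy
    return np.stack([u, v], axis=1)

X = np.array([[0.1, -0.05, 2.0], [0.0, 0.0, 3.0]])
uv = project(X, f=1200.0, cx=640.0, cy=480.0, k1=-0.15, k2=0.02,
             R=np.eye(3), t=np.zeros(3))
```

Setting, e.g., k2 = 0, fixing the principal point (cx, cy), or assuming square pixels are typical simplifications of the kind the paper enumerates.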

  12. Vibration factors impact analysis on aerial film camera imaging quality

    Science.gov (United States)

    Xie, Jun; Han, Wei; Xu, Zhonglin; Tan, Haifeng; Yang, Mingquan

    2017-08-01

    Aerial film cameras can advantageously acquire ground-target image information, but changes in aircraft attitude, the characteristics of the film, and the operation of the camera's internal systems can produce vibration that greatly degrades image quality. This paper presents a design basis for a vibration-mitigating stabilized platform based on the vibration characteristics of the aerial film camera, and shows through application analysis that a stabilized platform can support the aerial camera in meeting multi-angle and large-scale shooting demands. Given the technical characteristics of stabilized platforms, the development directions are higher precision, greater agility, miniaturization, and low power.

  13. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    Science.gov (United States)

    Shortis, Mark

    2015-12-07

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.
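The refractive effect that underwater calibration must model can be sketched with Snell's law at a flat port; the refractive indices below are nominal assumed values.

```python
# Snell's-law refraction at a flat air/water interface, the physical
# effect an underwater camera calibration absorbs implicitly or models
# explicitly. Indices of refraction are nominal, for illustration.
import math

def refract_angle(theta_i, n1=1.0, n2=1.33):
    """Refracted ray angle (radians) for incidence theta_i, n1 -> n2."""
    return math.asin(n1 * math.sin(theta_i) / n2)

theta_air = math.radians(45.0)
theta_water = refract_angle(theta_air)
print(f"45.0 deg in air -> {math.degrees(theta_water):.1f} deg in water")
```

Because rays bend toward the normal in water, the effective field of view narrows, which implicit calibration approaches fold into an apparently longer focal length and altered distortion terms.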

  14. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    Directory of Open Access Journals (Sweden)

    Mark Shortis

    2015-12-01

    Full Text Available Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  15. Camera traps can be heard and seen by animals.

    Directory of Open Access Journals (Sweden)

    Paul D Meek

    Full Text Available Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used are considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable so it is important to understand why the animals are disturbed. We conducted laboratory based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing range (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals' hearing and produce illumination that can be seen by many species.

  16. Neutron emissivity profile camera diagnostics considering present and future tokamaks

    Energy Technology Data Exchange (ETDEWEB)

    Forsberg, S. [EURATOM-VR Association, Uppsala (Sweden)

    2001-12-01

    This thesis describes the neutron profile camera situated at JET. The profile camera is one of the most important neutron emission diagnostic devices operating at JET. It gives useful information not only about the total neutron yield rate but also about the neutron emissivity distribution. Data analysis was performed in order to compare three different calibration methods. The data were collected from the deuterium campaign, C4, in the beginning of 2001. The thesis also includes a section about the implications of a neutron profile camera for ITER, where the issue of interface difficulties is in focus. The ITER JCT (Joint Central Team) proposal of a neutron camera for ITER is studied in some detail.

  17. Mid-IR image acquisition using a standard CCD camera

    DEFF Research Database (Denmark)

    Dam, Jeppe Seidelin; Sørensen, Knud Palmelund; Pedersen, Christian

    2010-01-01

    Direct image acquisition in the 3-5 µm range is realized using a standard CCD camera and a wavelength up-converter unit. The converter unit transfers the image information to the NIR range, where state-of-the-art cameras exist.

  18. Camera Traps Can Be Heard and Seen by Animals

    Science.gov (United States)

    Meek, Paul D.; Ballard, Guy-Anthony; Fleming, Peter J. S.; Schaefer, Michael; Williams, Warwick; Falzon, Greg

    2014-01-01

    Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used are considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable so it is important to understand why the animals are disturbed. We conducted laboratory based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing range (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals’ hearing and produce illumination that can be seen by many species. PMID:25354356

  19. A SPATIO-SPECTRAL CAMERA FOR HIGH RESOLUTION HYPERSPECTRAL IMAGING

    Directory of Open Access Journals (Sweden)

    S. Livens

    2017-08-01

    Full Text Available Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600–900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475–925 nm), and we discuss future work.

  20. Camera traps can be heard and seen by animals.

    Science.gov (United States)

    Meek, Paul D; Ballard, Guy-Anthony; Fleming, Peter J S; Schaefer, Michael; Williams, Warwick; Falzon, Greg

    2014-01-01

    Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used are considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable so it is important to understand why the animals are disturbed. We conducted laboratory based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing range (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals' hearing and produce illumination that can be seen by many species.

  1. a Spatio-Spectral Camera for High Resolution Hyperspectral Imaging

    Science.gov (United States)

    Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.

    2017-08-01

    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.

  2. Calibration of a Stereo Radiation Detection Camera Using Planar Homography

    Directory of Open Access Journals (Sweden)

    Seung-Hae Baek

    2016-01-01

    Full Text Available This paper proposes a calibration technique for a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision community. However, little stereo calibration work has been reported in radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, it is impossible to use a general stereo calibration algorithm directly. In this paper, we develop a hybrid-type stereo system which is equipped with both radiation and vision cameras. To calibrate the stereo radiation cameras, stereo images of a calibration pattern captured by the vision cameras are transformed into the view of the radiation cameras. The homography transformation is calibrated based on the geometric relationship between the visual and radiation camera coordinates. The accuracy of the stereo parameters of the radiation camera is analyzed by distance measurements to both visual light and gamma sources. The experimental results show that the measurement error is about 3%.
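A planar homography of the kind used to transfer calibration-pattern points between camera views can be estimated from point correspondences with the standard Direct Linear Transform. This is a generic sketch, not the authors' exact pipeline:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from >= 4 planar
    point correspondences (Direct Linear Transform, unnormalized for brevity)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography vector is the null space of A: last right-singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Map a 2D point through H (homogeneous multiply, then dehomogenize)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

In the paper's setting, `src` would be pattern points seen by the vision camera and `dst` the corresponding coordinates in the radiation camera's view.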

  3. COMPARISON OF METHODS FOR GEOMETRIC CAMERA CALIBRATION

    Directory of Open Access Journals (Sweden)

    J. Hieronymus

    2012-09-01

    Full Text Available Methods for geometric calibration of cameras in close-range photogrammetry are established and well investigated. The most common one is based on test-fields with a well-known pattern, which are observed from different directions. The parameters of a distortion model are calculated using bundle-block-adjustment algorithms. This method works well for short focal lengths, but is considerably more problematic with large focal lengths, which would require very large test-fields and surrounding space. To overcome this problem, there is another common method for calibration used in remote sensing. It employs measurements using a collimator and a goniometer. A third calibration method uses diffractive optical elements (DOE) to project holograms of a well-known pattern. In this paper these three calibration methods are compared empirically, especially in terms of accuracy. A camera has been calibrated with the methods mentioned above. All methods provide a set of distortion correction parameters as used by the photogrammetric software Australis. The resulting parameter values are very similar for all investigated methods. The three sets of distortion parameters are cross-compared against all three calibration methods. This is achieved by inserting the estimated distortion parameters as fixed input into the calibration algorithms and adjusting only the exterior orientation. The RMS (root mean square) of the remaining image coordinate residuals is taken as a measure of distortion correction quality. There are differences resulting from the different calibration methods. Nevertheless the measure is small for every comparison, which means that all three calibration methods can be used for accurate geometric calibration.
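A distortion-parameter set of this kind and the RMS comparison measure can be sketched as follows. This uses a generic Brown-Conrady correction; Australis' exact sign and scaling conventions are not assumed here:

```python
import numpy as np

def correct_point(x, y, K1=0.0, K2=0.0, K3=0.0, P1=0.0, P2=0.0):
    """Brown-Conrady style correction of an image point (x, y) expressed
    relative to the principal point: radial terms K1..K3, decentering P1, P2.
    Generic convention for illustration only."""
    r2 = x * x + y * y
    radial = K1 * r2 + K2 * r2 ** 2 + K3 * r2 ** 3
    dx = x * radial + P1 * (r2 + 2 * x * x) + 2 * P2 * x * y
    dy = y * radial + P2 * (r2 + 2 * y * y) + 2 * P1 * x * y
    return x + dx, y + dy

def rms_residuals(pts_a, pts_b):
    """RMS of image coordinate residuals, the comparison measure of the paper."""
    d = np.asarray(pts_a, float) - np.asarray(pts_b, float)
    return float(np.sqrt(np.mean(np.sum(d * d, axis=1))))
```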

  4. Hyper Suprime-Cam: Camera dewar design

    Science.gov (United States)

    Komiyama, Yutaka; Obuchi, Yoshiyuki; Nakaya, Hidehiko; Kamata, Yukiko; Kawanomoto, Satoshi; Utsumi, Yousuke; Miyazaki, Satoshi; Uraguchi, Fumihiro; Furusawa, Hisanori; Morokuma, Tomoki; Uchida, Tomohisa; Miyatake, Hironao; Mineo, Sogo; Fujimori, Hiroki; Aihara, Hiroaki; Karoji, Hiroshi; Gunn, James E.; Wang, Shiang-Yu

    2018-01-01

    This paper describes the detailed design of the CCD dewar and the camera system which is a part of the wide-field imager Hyper Suprime-Cam (HSC) on the 8.2 m Subaru Telescope. On the 1.°5 diameter focal plane (497 mm in physical size), 116 four-side buttable 2 k × 4 k fully depleted CCDs are tiled with 0.3 mm gaps between adjacent chips; they are cooled down to -100°C by two pulse tube coolers with a capability to exhaust 100 W of heat at -100°C. The design of the dewar is basically a natural extension of Suprime-Cam, incorporating some improvements such as (1) a detailed CCD positioning strategy to avoid any collision between CCDs while maximizing the filling factor of the focal plane, (2) a spherical-washer mechanism adopted at the interface points to prevent deformation caused by the tilt of the interface surface from being transferred to the focal plane, (3) the employment of a truncated-cone-shaped window, made of synthetic silica, to save back focal space, and (4) a passive heat transfer mechanism to efficiently exhaust the heat generated by the CCD readout electronics accommodated inside the dewar. Extensive simulations using the finite-element analysis (FEA) method were carried out to verify that the design of the dewar is sufficient to satisfy the assigned errors. We also performed verification tests using the actually assembled CCD dewar to supplement the FEA and demonstrate that the design is adequate to ensure the excellent image quality which is key to the HSC. The details of the camera system, including the control computer system, are described, as well as the assembly process of the dewar and the process of installation on the telescope.

  5. Common aperture multispectral spotter camera: Spectro XR

    Science.gov (United States)

    Petrushevsky, Vladimir; Freiman, Dov; Diamant, Idan; Giladi, Shira; Leibovich, Maor

    2017-10-01

    The Spectro XR™ is an advanced color/NIR/SWIR/MWIR 16'' payload recently developed by Elbit Systems / ELOP. The payload's primary sensor is a spotter camera with a common 7'' aperture. The sensor suite also includes an MWIR zoom, EO zoom, laser designator or rangefinder, laser pointer / illuminator and laser spot tracker. A rigid structure, vibration damping and 4-axis gimbals enable a high level of line-of-sight stabilization. The payload's feature list includes a multi-target video tracker, precise boresight, strap-on IMU, embedded moving map, geodetic calculations suite, and image fusion. The paper describes the main technical characteristics of the spotter camera. The visible-quality, all-metal front catadioptric telescope maintains optical performance over a wide range of environmental conditions. High-efficiency coatings separate the incoming light into EO, SWIR and MWIR band channels. Both EO and SWIR bands have dual FOV and 3 spectral filters each. Several variants of focal plane array formats are supported. The common aperture design facilitates superior DRI performance in EO and SWIR, in comparison to conventionally configured payloads. Special spectral calibration and color correction extend the effective range of color imaging. An advanced CMOS FPA and the low F-number of the optics facilitate low light performance. The SWIR band provides further atmospheric penetration, as well as see-spot capability at especially long ranges, due to asynchronous pulse detection. The MWIR band has good sharpness in the entire field-of-view and (with a full-HD FPA) delivers an amount of detail far exceeding that of VGA-equipped FLIRs. The Spectro XR offers a level of performance typically associated with larger and heavier payloads.

  6. Automatic Thermal Infrared Panoramic Imaging Sensor

    National Research Council Canada - National Science Library

    Gutin, Mikhail; Tsui, Eddy K; Gutin, Olga; Wang, Xu-Ming; Gutin, Alexey

    2006-01-01

    Panoramic cameras offer true real-time, 360-degree coverage of the surrounding area, valuable for a variety of defense and security applications, including force protection, asset protection, asset...

  7. PERFORMANCE EVALUATION OF THERMOGRAPHIC CAMERAS FOR PHOTOGRAMMETRIC MEASUREMENTS

    Directory of Open Access Journals (Sweden)

    N. Yastikli

    2013-05-01

    Full Text Available The aim of this research is the performance evaluation of thermographic cameras for possible use in photogrammetric documentation and in deformation analyses caused by moisture and insulation problems of historical and cultural heritage. To perform geometric calibration of the thermographic camera, a 3D test object was designed with 77 control points distributed at different depths. For the performance evaluation, a Flir A320 thermographic camera with 320 × 240 pixels and a lens with 18 mm focal length was used. A Nikon D3X SLR digital camera with 6048 × 4032 pixels and a lens with 20 mm focal length was used as reference for comparison. The pixel size was 25 μm for the Flir A320 thermographic camera and 6 μm for the Nikon D3X SLR digital camera. The digital images of the 3D test object were recorded with the Flir A320 thermographic camera and the Nikon D3X SLR digital camera, and the image coordinates of the control points were measured. The geometric calibration parameters, including the focal length, position of the principal point, and radial and tangential distortions, were determined with additional parameters introduced in bundle block adjustment. The measurement of image coordinates and the bundle block adjustment with additional parameters were performed using the PHIDIAS digital photogrammetric system. The bundle block adjustment was repeated with the determined calibration parameters for both the Flir A320 thermographic camera and the Nikon D3X SLR digital camera. The obtained standard deviations of the measured image coordinates were 9.6 μm and 10.5 μm for the Flir A320 thermographic camera and 8.3 μm and 7.7 μm for the Nikon D3X SLR digital camera. The standard deviation obtained for image points in the Flir A320 thermographic camera images is thus at almost the same accuracy level as the digital camera, despite a pixel size four times larger. From the results of this research, the interior geometry of the thermographic cameras and lens distortion was

  8. Performance Evaluation of Thermographic Cameras for Photogrammetric Measurements

    Science.gov (United States)

    Yastikli, N.; Guler, E.

    2013-05-01

    The aim of this research is the performance evaluation of thermographic cameras for possible use in photogrammetric documentation and in deformation analyses caused by moisture and insulation problems of historical and cultural heritage. To perform geometric calibration of the thermographic camera, a 3D test object was designed with 77 control points distributed at different depths. For the performance evaluation, a Flir A320 thermographic camera with 320 × 240 pixels and a lens with 18 mm focal length was used. A Nikon D3X SLR digital camera with 6048 × 4032 pixels and a lens with 20 mm focal length was used as reference for comparison. The pixel size was 25 μm for the Flir A320 thermographic camera and 6 μm for the Nikon D3X SLR digital camera. The digital images of the 3D test object were recorded with the Flir A320 thermographic camera and the Nikon D3X SLR digital camera, and the image coordinates of the control points were measured. The geometric calibration parameters, including the focal length, position of the principal point, and radial and tangential distortions, were determined with additional parameters introduced in bundle block adjustment. The measurement of image coordinates and the bundle block adjustment with additional parameters were performed using the PHIDIAS digital photogrammetric system. The bundle block adjustment was repeated with the determined calibration parameters for both the Flir A320 thermographic camera and the Nikon D3X SLR digital camera. The obtained standard deviations of the measured image coordinates were 9.6 μm and 10.5 μm for the Flir A320 thermographic camera and 8.3 μm and 7.7 μm for the Nikon D3X SLR digital camera. The standard deviation obtained for image points in the Flir A320 thermographic camera images is thus at almost the same accuracy level as the digital camera, despite a pixel size four times larger. From the results of this research, the interior geometry of the thermographic cameras and the lens distortion were modelled efficiently.

  9. A comparison of camera trap and permanent recording video camera efficiency in wildlife underpasses.

    Science.gov (United States)

    Jumeau, Jonathan; Petrod, Lana; Handrich, Yves

    2017-09-01

    In the current context of biodiversity loss through habitat fragmentation, the effectiveness of wildlife crossings, installed at great expense as compensatory measures, is of vital importance for ecological and socio-economic actors. The evaluation of these structures is directly impacted by the efficiency of monitoring tools (camera traps…), which are used to assess the effectiveness of these crossings by observing the animals that use them. The aim of this study was to quantify the efficiency of camera traps in a wildlife crossing evaluation. Six permanent recording video systems sharing the same field of view as six Reconyx HC600 camera traps installed in three wildlife underpasses were used to assess the exact proportion of missed events (an event being the presence of an animal within the field of view), and the error rate concerning underpass crossing behavior (defined as either Entry or Refusal). A sequence of photographs was triggered by either animals (true trigger) or artefacts (false trigger). We quantified the number of false triggers that had actually been caused by animals that were not visible on the images ("false" false triggers). Camera traps failed to record 43.6% of small mammal events (voles, mice, shrews, etc.) and 17% of medium-sized mammal events. The type of crossing behavior (Entry or Refusal) was incorrectly assessed in 40.1% of events, with a higher error rate for entries than for refusals. Among the 3.8% of false triggers, 85% of them were "false" false triggers. This study indicates a global underestimation of the effectiveness of wildlife crossings for small mammals. Means to improve the efficiency are discussed.

  10. AIP GHz modulation detection using a streak camera: Suitability of streak cameras in the AWAKE experiment

    CERN Document Server

    Rieger, K; Reimann, O; Muggli, P

    2017-01-01

    Using frequency mixing, a modulated light pulse of ns duration is created. We show that, with a ps-resolution streak camera that is usually used for single short pulse measurements, we can detect via an FFT detection approach up to 450 GHz modulation in a pulse in a single measurement. This work is performed in the context of the AWAKE plasma wakefield experiment where modulation frequencies in the range of 80–280 GHz are expected.
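The FFT detection approach can be sketched numerically: a ns-scale pulse whose intensity is modulated at f_mod, sampled with ps resolution as on a streak camera, shows the modulation as a spectral peak at f_mod. The numbers below are illustrative, not AWAKE parameters:

```python
import numpy as np

# Toy modulated pulse sampled at streak-camera-like ps resolution.
dt = 1e-12                                   # 1 ps sampling
t = np.arange(0, 4e-9, dt)                   # 4 ns record
f_mod = 200e9                                # 200 GHz intensity modulation
envelope = np.exp(-((t - 2e-9) / 1e-9) ** 2)
signal = envelope * (1 + 0.5 * np.cos(2 * np.pi * f_mod * t))

# FFT detection: the modulation appears as a peak at f_mod in the spectrum.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(len(signal), dt)
mask = freqs > 50e9                          # skip the envelope's low-frequency content
detected = freqs[mask][np.argmax(spectrum[mask])]
```

With this record length the frequency resolution is 1/(4 ns) = 250 MHz and the Nyquist limit is 500 GHz, which is why ps-resolution sampling can resolve hundreds of GHz of modulation in a single shot.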

  11. CameraHRV: robust measurement of heart rate variability using a camera

    Science.gov (United States)

    Pai, Amruta; Veeraraghavan, Ashok; Sabharwal, Ashutosh

    2018-02-01

    The inter-beat-interval (time period of the cardiac cycle) changes slightly for every heartbeat; this variation is measured as Heart Rate Variability (HRV). HRV is presumed to occur due to interactions between the parasympathetic and sympathetic nervous system. Therefore, it is sometimes used as an indicator of the stress level of an individual. HRV also reveals some clinical information about cardiac health. Currently, HRV is accurately measured using contact devices such as a pulse oximeter. However, recent research in the field of non-contact imaging Photoplethysmography (iPPG) has made vital sign measurement possible using just a video recording of any exposed skin (such as a person's face). The current signal processing methods for extracting HRV using peak detection perform well for contact-based systems but have poor performance for iPPG signals. The main reason for this poor performance is the fact that current methods are sensitive to the large noise sources which are often present in iPPG data. Further, current methods are not robust to motion artifacts that are common in iPPG systems. We developed a new algorithm, CameraHRV, for robustly extracting HRV even at the low SNR common in iPPG recordings. CameraHRV combined spatial combination and frequency demodulation to obtain HRV from the instantaneous frequency of the iPPG signal. CameraHRV outperforms other current methods of HRV estimation. Ground truth data was obtained from an FDA-approved pulse oximeter for validation purposes. CameraHRV on iPPG data showed an error of 6 milliseconds for low-motion and varying skin tone scenarios. The improvement in error was 14%. In high-motion scenarios like reading, watching and talking, the error was 10 milliseconds.
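The core idea of frequency demodulation for HRV can be sketched as follows: the instantaneous frequency of a pulsatile signal tracks the time-varying heart rate, so its reciprocal gives a continuous inter-beat-interval estimate. This is the general technique only, with toy numbers, not the authors' full spatial-combination pipeline:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (same construction as scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

# Toy pulsatile signal: heart rate drifting around 60 bpm (1 Hz), sampled
# at a typical camera frame rate. All numbers are illustrative assumptions.
fs = 30.0
t = np.arange(0, 60, 1 / fs)
hr = 1.0 + 0.05 * np.sin(2 * np.pi * 0.1 * t)   # instantaneous rate (Hz)
ppg = np.cos(2 * np.pi * np.cumsum(hr) / fs)    # phase = integral of rate

# Demodulate: instantaneous frequency from the analytic signal's phase,
# then invert to a continuous inter-beat-interval estimate in ms.
phase = np.unwrap(np.angle(analytic_signal(ppg)))
inst_freq = np.diff(phase) * fs / (2 * np.pi)
ibi_ms = 1000.0 / inst_freq
```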

  12. First Light for World's Largest 'Thermometer Camera'

    Science.gov (United States)

    2007-08-01

    LABOCA in Service at APEX The world's largest bolometer camera for submillimetre astronomy is now in service at the 12-m APEX telescope, located on the 5100 m high Chajnantor plateau in the Chilean Andes. LABOCA was specifically designed for the study of extremely cold astronomical objects and, with its large field of view and very high sensitivity, will open new vistas in our knowledge of how stars form and how the first galaxies emerged from the Big Bang. ESO PR Photo 35a/07 LABOCA on APEX "A large fraction of all the gas in the Universe has extremely cold temperatures of around minus 250 degrees Celsius, a mere 20 degrees above absolute zero," says Karl Menten, director at the Max Planck Institute for Radioastronomy (MPIfR) in Bonn, Germany, that built LABOCA. "Studying these cold clouds requires looking at the light they radiate in the submillimetre range, with very sophisticated detectors." Astronomers use bolometers for this task, which are, in essence, thermometers. They detect incoming radiation by registering the resulting rise in temperature. More specifically, a bolometer detector consists of an extremely thin foil that absorbs the incoming light. Any change of the radiation's intensity results in a slight change in temperature of the foil, which can then be registered by sensitive electronic thermometers. To be able to measure such minute temperature fluctuations requires the bolometers to be cooled down to less than 0.3 degrees above absolute zero, that is below minus 272.85 degrees Celsius. "Cooling to such low temperatures requires using liquid helium, which is no simple feat for an observatory located at 5100 m altitude," says Carlos De Breuck, the APEX instrument scientist at ESO. Nor is it simple to measure the weak temperature radiation of astronomical objects. Millimetre and submillimetre radiation opens a window into the enigmatic cold Universe, but the signals from space are heavily absorbed by water vapour in the Earth's atmosphere.

  13. An equalised global graphical model-based approach for multi-camera object tracking

    OpenAIRE

    Chen, Weihua; Cao, Lijun; Chen, Xiaotang; Huang, Kaiqi

    2015-01-01

    Non-overlapping multi-camera visual object tracking typically consists of two steps: single camera object tracking and inter-camera object tracking. Most tracking methods focus on single camera object tracking, which happens in the same scene, while for real surveillance scenes, inter-camera object tracking is needed and single camera tracking methods cannot work effectively. In this paper, we try to improve the overall multi-camera object tracking performance by a global graph model with...

  14. CALIBRATION OF LOW COST RGB AND NIR UAV CAMERAS

    Directory of Open Access Journals (Sweden)

    A. Fryskowska

    2016-06-01

    Full Text Available Non-metric digital cameras are being widely used for photogrammetric studies. The increase in resolution and quality of images obtained by non-metric cameras allows them to be used in low-cost UAV and terrestrial photogrammetry. Imagery acquired with non-metric cameras can be used in 3D modeling of objects or landscapes, reconstruction of historical sites, generation of digital elevation models (DTM) and orthophotos, or in the assessment of accidents. Non-metric digital cameras are characterized by instability and unknown interior orientation parameters. Therefore, the use of these devices requires prior calibration. The calibration research was conducted using a non-metric camera, different calibration tests and various software packages. The first part of the paper contains a brief theoretical introduction including basic definitions, such as the construction of non-metric cameras and a description of different optical distortions. The second part of the paper covers the camera calibration process and details of the calibration methods and models that have been used. The Sony NEX 5 camera calibration has been done using the following software: Image Master Calib, the Matlab Camera Calibrator application and Agisoft Lens. For the study, 2D test fields have been used. As a part of the research, a comparative analysis of the results has been done.

  15. Three-Dimensional Particle Image Velocimetry Using a Plenoptic Camera

    NARCIS (Netherlands)

    Lynch, K.P.; Fahringer, T.; Thurow, B.

    2012-01-01

    A novel 3-D, 3-C PIV technique is described, based on volume illumination and a plenoptic camera to measure a velocity field. The technique is based on plenoptic photography, which uses a dense microlens array mounted near a camera sensor to sample the spatial and angular distribution of light.

  16. The Technique of the Motion Picture Camera. Revised Edition.

    Science.gov (United States)

    Souto, H. Mario Raimondo

    Aimed at the professional but useful to others, this book provides comparative material on virtually all the motion picture cameras available from manufacturers in the United States, Britain, France, Russia, Japan, and other countries. Information is provided on camera design and on the operation and maintenance of individual models. An analysis…

  17. Endoscopic Camera Control by Head Movements for Thoracic Surgery

    NARCIS (Netherlands)

    Reilink, Rob; de Bruin, Gart; Franken, M.C.J.; Mariani, Massimo A.; Misra, Sarthak; Stramigioli, Stefano

    2010-01-01

    In current video-assisted thoracic surgery, the endoscopic camera is operated by an assistant of the surgeon, which has several disadvantages. This paper describes a system which enables the surgeon to control the endoscopic camera without the help of an assistant. The system is controlled using

  18. A focal plane camera for celestial XUV sources

    International Nuclear Information System (INIS)

    Huizenga, H.

    1980-01-01

    This thesis describes the development and performance of a new type of X-ray camera for the 2-250 Å wavelength range (XUV). The camera features high position resolution (FWHM approximately 0.2 mm at 2 Å) and a sensitivity of the order of 10⁻¹³ erg/cm² s in a one-year mission. (Auth.)

  19. Camera Layout Design for the Upper Stage Thrust Cone

    Science.gov (United States)

    Wooten, Tevin; Fowler, Bart

    2010-01-01

    Engineers in the Integrated Design and Analysis Division (EV30) use a variety of different tools to aid in the design and analysis of the Ares I vehicle. One primary tool in use is Pro-Engineer. Pro-Engineer is computer-aided design (CAD) software that allows designers to create computer generated structural models of vehicle structures. For the Upper Stage thrust cone, Pro-Engineer was used to assist in the design of a layout for two camera housings. These cameras observe the separation between the first and second stage of the Ares I vehicle. For the Ares I-X, one standard speed camera was used. The Ares I design calls for two separate housings, three cameras, and a lighting system. With previous design concepts and verification strategies in mind, a new layout for the two-camera design concept was developed with members of the EV32 team. With the new design, Pro-Engineer was used to draw the layout to observe how the two camera housings fit with the thrust cone assembly. Future analysis of the camera housing design will verify the stability and clearance of the cameras with other hardware present on the thrust cone.

  20. Development of camera technology for monitoring nests. Chapter 15

    Science.gov (United States)

    W. Andrew Cox; M. Shane Pruett; Thomas J. Benson; Scott J. Chiavacci; Frank R., III Thompson

    2012-01-01

    Photo and video technology has become increasingly useful in the study of avian nesting ecology. However, researchers interested in using camera systems are often faced with insufficient information on the types and relative advantages of available technologies. We reviewed the literature for studies of nests that used cameras and summarized them based on study...

  1. Camera monologue: Cultural critique beyond collaboration, participation, and dialogue

    DEFF Research Database (Denmark)

    Suhr, Christian

    2018-01-01

    Cameras always seem to capture a little too little and a little too much. In ethnographic films, profound insights are often found in the tension between what we are socially taught to perceive and the peculiar non-social perception of the camera. Ethnographic filmmakers study the worlds of humans…

  2. Imaging Emission Spectra with Handheld and Cellphone Cameras

    Science.gov (United States)

    Sitar, David

    2012-01-01

    As point-and-shoot digital camera technology advances, it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1 megapixel (MP) digital Canon…

  3. The Impact of Courtroom Cameras on Media Coverage of Trials.

    Science.gov (United States)

    Lancaster, Dalton

    A study examined the effect the presence of television cameras had on media coverage of trials. In the separate trials of two men indicted for murder in Indianapolis, much of the same evidence and many of the same witnesses were used. However, television cameras had access to one trial but not the other. Data for the study were collected by…

  4. Microprocessor-controlled, wide-range streak camera

    International Nuclear Information System (INIS)

    Amy E. Lewis; Craig Hollabaugh

    2006-01-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized
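XML-based configuration of this kind is straightforward for automation software to consume. The element and attribute names below are invented for illustration and are not the instrument's actual schema; in practice such a document would be fetched over HTTP from the camera's embedded server rather than embedded as a string:

```python
import xml.etree.ElementTree as ET

# Hypothetical configuration document (invented schema, for illustration only).
config_xml = """
<streakCamera>
  <sweep full="true" duration="15e-9" units="s"/>
  <trigger level="2.5" lockout="enabled"/>
  <tube vendor="example" format="large"/>
</streakCamera>
"""

root = ET.fromstring(config_xml)
sweep_s = float(root.find("sweep").get("duration"))      # 1.5e-08 s
trigger_level = float(root.find("trigger").get("level"))
```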

  5. Calibration of Low Cost RGB and NIR Uav Cameras

    Science.gov (United States)

    Fryskowska, A.; Kedzierski, M.; Grochala, A.; Braula, A.

    2016-06-01

    Non-metric digital cameras are being widely used for photogrammetric studies. The increase in resolution and quality of images obtained by non-metric cameras allows them to be used in low-cost UAV and terrestrial photogrammetry. Imagery acquired with non-metric cameras can be used in 3D modeling of objects or landscapes, reconstruction of historical sites, generation of digital elevation models (DTM) and orthophotos, or in the assessment of accidents. Non-metric digital cameras are characterized by instability and unknown interior orientation parameters. Therefore, the use of these devices requires prior calibration. The calibration research was conducted using a non-metric camera, different calibration tests and various software packages. The first part of the paper contains a brief theoretical introduction including basic definitions, such as the construction of non-metric cameras and a description of different optical distortions. The second part of the paper covers the camera calibration process and details of the calibration methods and models that have been used. The Sony NEX 5 camera calibration has been done using the following software: Image Master Calib, the Matlab Camera Calibrator application and Agisoft Lens. For the study, 2D test fields have been used. As a part of the research, a comparative analysis of the results has been done.

  6. Cinematic camera emulation using two-dimensional color transforms

    Science.gov (United States)

    McElvain, Jon S.; Gish, Walter

    2015-02-01

    For cinematic and episodic productions, on-set look management is an important component of the creative process, and involves iterative adjustments of the set, actors, lighting and camera configuration. Instead of using the professional motion picture camera to establish a particular look, the use of a smaller form factor DSLR is considered for this purpose due to its increased agility. Because the spectral response characteristics differ between the two camera systems, a camera emulation transform is needed to approximate the behavior of the destination camera. Recently, two-dimensional transforms have been shown to provide high-accuracy conversion of raw camera signals to a defined colorimetric state. In this study, the same formalism is used for camera emulation, whereby a Canon 5D Mark III DSLR is used to approximate the behavior of a Red Epic cinematic camera. The spectral response characteristics of both cameras were measured and used to build 2D as well as 3x3 matrix emulation transforms. When tested on multispectral image databases, the 2D emulation transforms outperform their matrix counterparts, particularly for images containing highly chromatic content.
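The 3x3 matrix baseline that the 2D transforms are compared against can be sketched as a least-squares fit: given raw RGB responses of a source and a target camera to the same training spectra, fit the matrix M minimizing the Frobenius norm of src·M − dst. The spectral data below are random stand-ins, not measured Canon 5D Mark III or Red Epic responses:

```python
import numpy as np

rng = np.random.default_rng(0)
n_wl = 101                                   # wavelength samples, e.g. 400-700 nm
spectra = rng.random((500, n_wl))            # training spectra (random stand-ins)
resp_src = rng.random((n_wl, 3))             # source camera sensitivities (stand-in)
resp_dst = rng.random((n_wl, 3))             # target camera sensitivities (stand-in)

src = spectra @ resp_src                     # raw RGB triplets, source camera
dst = spectra @ resp_dst                     # raw RGB triplets, target camera

# Least-squares fit of the 3x3 emulation matrix: src @ M approximates dst.
M, *_ = np.linalg.lstsq(src, dst, rcond=None)
emulated = src @ M
```

A 2D transform generalizes this by making the mapping depend nonlinearly on a pair of chromaticity-like coordinates rather than being a single global matrix.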

  7. Students' Framing of Laboratory Exercises Using Infrared Cameras

    Science.gov (United States)

    Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.

    2015-01-01

    Thermal science is challenging for students due to its largely imperceptible nature. Handheld infrared cameras offer a pedagogical opportunity for students to see otherwise invisible thermal phenomena. In the present study, a class of upper secondary technology students (N = 30) partook in four IR-camera laboratory activities, designed around the…

  8. Do it yourself smartphone fundus camera – DIYretCAM

    Directory of Open Access Journals (Sweden)

    Biju Raju

    2016-01-01

    Full Text Available This article describes how to make a do-it-yourself smartphone-based fundus camera which can image the central retina as well as the peripheral retina up to the pars plana. It is a cost-effective alternative to the fundus camera.

  9. A Simple Spectrophotometer Using Common Materials and a Digital Camera

    Science.gov (United States)

    Widiatmoko, Eko; Widayani; Budiman, Maman; Abdullah, Mikrajuddin; Khairurrijal

    2011-01-01

    A simple spectrophotometer was designed using cardboard, a DVD, a pocket digital camera, a tripod and a computer. The DVD was used as a diffraction grating and the camera as a light sensor. The spectrophotometer was calibrated using a reference light prior to use. The spectrophotometer was capable of measuring optical wavelengths with a…
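The physics behind a DVD spectrophotometer is the grating equation: a DVD's track pitch (about 740 nm for a standard DVD) acts as the grating period d, so the observed first-order diffraction angle gives the wavelength via d·sin(θ) = m·λ. A minimal sketch, assuming normal incidence and first order:

```python
import math

d_nm = 740.0   # standard DVD track pitch (~0.74 µm), the effective grating period

def wavelength_nm(theta_deg, order=1):
    """Wavelength from the observed diffraction angle: d * sin(theta) = m * lambda."""
    return d_nm * math.sin(math.radians(theta_deg)) / order

def angle_deg(lam_nm, order=1):
    """Diffraction angle for a given wavelength at normal incidence."""
    return math.degrees(math.asin(order * lam_nm / d_nm))
```

In practice the camera records pixel positions rather than angles, which is why the instrument must be calibrated against a reference light of known wavelengths before use.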

  10. Augmenting camera images for operators of Unmanned Aerial Vehicles

    NARCIS (Netherlands)

    Veltman, J.A.; Oving, A.B.

    2003-01-01

    The manual control of the camera of an unmanned aerial vehicle (UAV) can be difficult due to several factors such as 1) time delays between steering input and changes of the monitor content, 2) low update rates of the camera images and 3) lack of situation awareness due to the remote position of the

  11. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
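
    The synthetic-magnitude idea can be illustrated by integrating a spectrum against a bandpass and comparing it to a zero-magnitude reference through the same bandpass. The spectra and Gaussian bandpass below are toy stand-ins, not the catalog spectra or the measured Sony EX-View HAD response:

```python
import numpy as np

def band_integral(flux, response, wl):
    # Trapezoidal integration of the flux weighted by the bandpass response.
    y = flux * response
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wl)))

def synthetic_mag(flux, ref_flux, response, wl):
    """Magnitude of `flux` relative to a zero-magnitude reference spectrum,
    both integrated through the same camera bandpass."""
    return -2.5 * np.log10(band_integral(flux, response, wl) /
                           band_integral(ref_flux, response, wl))

# Toy wavelength grid (nm), spectra, and bandpass for illustration only.
wl = np.linspace(400.0, 900.0, 501)
ref = np.ones_like(wl)                                  # zero-mag reference
star = 0.2 * (wl / 550.0) ** -2                         # fainter, redder star
band = np.exp(-0.5 * ((wl - 650.0) / 120.0) ** 2)

print(round(synthetic_mag(star, ref, band, wl), 3))
```

    Because the magnitude is computed inside the camera bandpass itself, no assumption about the meteor's spectral energy distribution is needed at measurement time.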

  12. ToF camera ego-motion estimation

    CSIR Research Space (South Africa)

    Ratshidaho, T

    2012-10-01

    Ego-motion estimation facilitates the localisation of the robot. The ToF camera is characterised by a number of error models. Iterative Closest Point (ICP) is applied to consecutive range images from the ToF camera to estimate the relative pose transform...
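
    Each ICP iteration reduces to a closed-form rigid alignment of the currently matched points. A self-contained sketch of that inner step, using the SVD-based Kabsch solution on a synthetic point cloud and pose (not the paper's data):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ src @ R.T + t:
    the alignment step performed inside each ICP iteration."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t

# Synthetic check: rotate a small cloud 30 degrees about z and shift it.
rng = np.random.default_rng(1)
cloud = rng.uniform(-1.0, 1.0, size=(50, 3))
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
moved = cloud @ R_true.T + np.array([0.2, -0.1, 0.4])

R, t = best_rigid_transform(cloud, moved)
print(np.allclose(R, R_true), np.allclose(cloud @ R.T + t, moved))
```

    Full ICP wraps this in a loop that re-establishes nearest-neighbour correspondences between consecutive range images until the pose converges.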

  13. Raspberry Pi camera with intervalometer used as crescograph

    Science.gov (United States)

    Albert, Stefan; Surducan, Vasile

    2017-12-01

    An intervalometer is an attachment or facility on a camera that operates the shutter regularly at set intervals over a period. Professional cameras with built-in intervalometers are expensive and quite difficult to find. The open-source Canon CHDK firmware allows intervalometer implementation, but on Canon cameras only, and finding a Canon camera with a near-infrared (NIR) photographic lens at an affordable price is impossible. In experiments requiring several cameras (used to measure growth in plants - the crescographs - but also for coarse evaluation of the water content of leaves), the cost of the equipment is often over budget. Using two Raspberry Pi modules, each equipped with a low-cost NIR camera and a Wi-Fi adapter (for downloading pictures stored on the SD card), and some freely available software, we have implemented two low-budget intervalometer cameras. The shooting interval, the number of pictures to be taken, the image resolution and some other parameters can be fully programmed. The cameras were in continuous use for three months (July-October 2017) in a relevant environment (outdoors), proving the functionality of the concept.
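
    The programmable intervalometer logic amounts to generating a capture schedule that the Pi then executes. A minimal sketch of that scheduling step (filenames and timings are hypothetical; the actual capture call, e.g. via raspistill or the picamera library, is hardware-dependent and omitted):

```python
from datetime import datetime, timedelta

def capture_schedule(start, interval_s, n_pictures):
    """Timestamps and filenames for an intervalometer run; on the Raspberry
    Pi the camera capture command would be issued at each timestamp."""
    return [(start + timedelta(seconds=i * interval_s),
             f"plant_{i:04d}.jpg") for i in range(n_pictures)]

# One NIR frame every 10 minutes over an hour:
plan = capture_schedule(datetime(2017, 7, 1, 6, 0, 0), 600, 7)
for ts, name in plan[:2]:
    print(ts.isoformat(), name)
```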

  14. The WEBERSAT camera - An inexpensive earth imaging system

    Science.gov (United States)

    Jackson, Stephen; Raetzke, Jeffrey

    WEBERSAT is a 27 pound LEO satellite launched in 1990 into a 500 mile polar orbit. One of its payloads is a low cost CCD color camera system developed by engineering students at Weber State University. The camera is a modified Canon CI-10 with a 25 mm lens, automatic iris, and 780 x 490 pixel resolution. The iris range control potentiometer was made programmable; a 10.7 MHz digitization clock, fixed focus support, and solid tantalum capacitors were added. Camera output signals, composite video, red, green, blue, and the digitization clock are fed to a flash digitizer, where they are processed for storage in RAM. Camera control commands are stored and executed via the onboard computer. The CCD camera has successfully imaged meteorological features of the earth, land masses, and a number of astronomical objects.

  15. Positron emission tomography with gamma camera in coincidence mode

    International Nuclear Information System (INIS)

    Hertel, A.; Hoer, G.

    1999-01-01

    Positron emission tomography using F-18 FDG has been established in clinical diagnostics, with its first indications especially in oncology. Installing a conventional PET tomograph (dedicated PET) is financially costly and restricted to PET examinations only. Increasing demand for PET diagnostics on the one hand and restricted financial resources in the health system on the other led industry to develop SPECT cameras that can be operated in coincidence mode (camera PET), in order to offer nuclear medicine physicians cost-effective devices for PET diagnostics. At the same time, camera PET is inferior to conventional PET regarding sensitivity and detection efficiency for 511 keV photons. Does camera PET offer a reliable alternative to conventional PET? The first larger comparative studies are now available, so a first appraisal of the technical and clinical performance of camera PET can be made. (orig.) [de

  16. Preliminary field evaluation of solid state cameras for security applications

    International Nuclear Information System (INIS)

    Murray, D.W.

    1987-01-01

    Recent developments in solid state imager technology have resulted in a series of compact, lightweight, all-solid-state closed circuit television (CCTV) cameras. Although it is widely known that the various solid state cameras have less light sensitivity and lower resolution than their vacuum tube counterparts, the potential for having a much longer Mean Time Between Failure (MTBF) for the all-solid-state cameras is generating considerable interest within the security community. Questions have been raised as to whether the newest and best of the solid state cameras are a viable alternative to the high maintenance vacuum tube cameras in exterior security applications. To help answer these questions, a series of tests were performed by Sandia National Laboratories at various test sites and under several lighting conditions. The results of these tests as well as a description of the test equipment, test sites, and procedures are presented in this report

  17. True RGB line scan camera for color machine vision applications

    Science.gov (United States)

    Lemstrom, Guy F.

    1994-11-01

    In this paper a true RGB 3-chip color line scan camera is described. The camera was mainly developed for accurate color measurement in industrial applications. Due to the camera's modularity it is also possible to use it as a B/W camera. The color separation is made with an RGB beam splitter. The CCD linear arrays are fixed with high accuracy to the beam splitter's outputs in order to register the pixels of the three CCDs with each other. This makes the color analysis simple compared to color line arrays, where line or pixel matching has to be done. The beam splitter can be custom made to separate spectral components other than standard RGB. The spectral range is from 200 to 1000 nm for most CCDs, and two or three spectral areas can be measured separately with the beam splitter. The camera is totally digital and has a 16-bit parallel computer interface to communicate with a signal processing board. Because of the camera's open architecture it is possible for the customer to design a board with special functions handling the preprocessing of the data (for example RGB to HSI conversion). The camera can also be equipped with a high-speed CPU board with enough local memory to do some image processing inside the camera before sending the data forward. The camera has been used in real industrial applications and has proven that its high resolution and high dynamic range can be used to measure small color differences in order to separate or grade objects such as minerals, food or other materials that cannot be measured with a black-and-white camera.
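
    The RGB-to-HSI preprocessing the abstract suggests offloading to a user-designed board follows the classic geometric conversion; a minimal sketch for normalised values in [0, 1]:

```python
import math

def rgb_to_hsi(r, g, b):
    """Classic RGB-to-HSI conversion: intensity is the channel mean,
    saturation the normalised distance from gray, hue the angle (degrees)
    around the color circle."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(math.acos(num / den))
    if b > g:
        h = 360.0 - h
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))  # pure red: hue 0, full saturation
```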

  18. Comparison of parameters of modern cooled and uncooled thermal cameras

    Science.gov (United States)

    Bareła, Jarosław; Kastek, Mariusz; Firmanty, Krzysztof; Krupiński, Michał

    2017-10-01

    During the design of a system employing thermal cameras, one always faces the problem of choosing the camera type best suited for the task. In many cases the choice is far from optimal, and there are several reasons for that. System designers often favor tried-and-tested solutions they are used to. They do not follow the latest developments in the field of infrared technology, and sometimes their choices are based on prejudice rather than facts. The paper presents the results of measurements of basic parameters of MWIR and LWIR thermal cameras, carried out in a specialized testing laboratory. The measured parameters are decisive in terms of the image quality generated by thermal cameras. All measurements were conducted according to current procedures and standards. However, the camera settings were not optimized for specific test conditions or parameter measurements. Instead, the real settings used in normal camera operation were applied, to obtain realistic camera performance figures. For example, there were significant differences between measured values of noise parameters and the catalogue data provided by manufacturers, due to the application of edge detection filters to increase detection and recognition ranges. The purpose of this paper is to help in choosing the optimal thermal camera for a particular application, answering the question of whether to opt for a cheaper microbolometer device or for a slightly better (in terms of specifications) yet more expensive cooled unit. Measurements and analysis were performed by qualified personnel with several dozen years of experience in both designing and testing thermal camera systems with both cooled and uncooled focal plane arrays. Cameras of similar array sizes and optics were compared, and for each tested group the best performing devices were selected.

  19. Comparison of overlay metrology with analogue and digital cameras

    Science.gov (United States)

    Rigden, Timothy C.; Soroka, Andrew J.; Binns, Lewis A.

    2005-05-01

    Overlay metrology is a very demanding image processing application; current applications achieve dynamic precision of one hundredth of a pixel or better. As such it requires an accurate image acquisition system with minimal distortions. Distortions can be physical (e.g. pixel size/shape) or electronic (e.g. clock skew) in nature. They can also affect the image shape or the gray-level intensity of individual pixels, the former causing severe problems for pattern recognition and measurement algorithms, the latter having an adverse effect primarily on the measurement itself. This paper considers the artifacts present in a particular analogue camera, with a discussion of how these artifacts translate into a reduction of overlay metrology performance, in particular their effect on precision and tool induced shift (TIS). The observed effects include, but are not limited to, banding and interlacing. This camera is then compared to two digital cameras. The first of these operates at the same frame rate as the analogue camera and is found to have fewer distortions. The second operates at twice the frame rate of the other two. It is observed that this camera does not exhibit the distortions of the analogue camera, but instead has some very specific problems, particularly with regard to noise. Quantitative data on the effects on precision and TIS under a wide variety of conditions are presented. These show that while it is possible to achieve metrology-capable images using an analogue camera, it is preferable to use a digital camera, both from the perspective of overall system performance and of overall system complexity.

  20. An ISPA-camera for gamma rays

    CERN Document Server

    Puertolas, D; Pani, R; Leutz, H; Gys, Thierry; De Notaristefani, F; D'Ambrosio, C

    1995-01-01

    With the recently developed ISPA (Imaging Silicon Pixel Array)-tube attached either to a planar YAlO3(Ce) (YAP) disc (1mm thick) or to a matrix of optically-separated YAP-crystals (5mm high, 0.6 x 0.6 mm2 cross-section) we achieved high spatial resolution of 57Co-122 keV photons. The vacuum-sealed ISPA-tube is only 4 cm long with 3.5 cm diameter and consists of a photocathode viewed at 3 cm distance by a silicon pixel chip, directly detecting the photoelectrons. The chip-anode consists of 1024 rectangular pixels with 75 µm x 500 µm edges, each bump-bonded to their individual front-end electronics. The total pixel array read-out time is 10 µs. The measured intrinsic spatial resolutions (FWHM) of this ISPA-camera are 700 µm (planar YAP) and 310 µm (YAP-matrix). Apart from its already demonstrated application for particle tracking with scintillating fibres, the ISPA-tube provides also an excellent tool in medicine, biology and chemistry.

  1. RAW camera DPCM compression performance analysis

    Science.gov (United States)

    Bouman, Katherine; Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Mickey; Goma, Sergio R.

    2011-01-01

    The MIPI standard has adopted DPCM compression for RAW data images streamed from mobile cameras. This DPCM is line based and uses either a simple 1- or 2-pixel predictor. In this paper, we analyze the DPCM compression performance as MTF degradation. To test this scheme's performance, we generated Siemens star images and binarized them to 2-level images. These two intensity values were chosen such that their intensity difference corresponds to those pixel differences which result in the largest relative errors in the DPCM compressor (e.g. a pixel transition from 0 to 4095 corresponds to an error of 6 between the DPCM-compressed value and the original pixel value). The DPCM scheme introduces different amounts of error based on the pixel difference. We passed these modified Siemens star chart images to the compressor and compared the compressed images with the original images using IT3 MTF response plots for slanted edges. Further, we discuss the PSF influence on DPCM error and its propagation through the image processing pipe.
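
    The dependence of error on pixel difference can be illustrated with a toy DPCM round trip. The uniform residual quantiser below is a stand-in, not the MIPI code tables, so the error magnitudes are illustrative only:

```python
def dpcm_roundtrip(line, step=8):
    """Toy 1-pixel-predictor DPCM: each pixel is predicted by the previous
    reconstructed pixel and only the quantised residual is transmitted.
    Large pixel transitions leave a residual quantisation error."""
    recon, prev = [], 0
    for px in line:
        q = round((px - prev) / step)   # quantised residual (what is sent)
        prev = prev + q * step          # decoder reconstruction
        recon.append(prev)
    return recon

line = [0, 4095, 4095, 2048, 2050]
recon = dpcm_roundtrip(line)
print([abs(a - b) for a, b in zip(line, recon)])  # -> [0, 1, 1, 0, 2]
```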

  2. DEPTH CAMERAS ON UAVs: A FIRST APPROACH

    Directory of Open Access Journals (Sweden)

    A. Deris

    2017-02-01

    Accurate depth information retrieval of a scene is a field under investigation in the research areas of photogrammetry, computer vision and robotics. Various technologies, active as well as passive, are used to serve this purpose, such as laser scanning, photogrammetry and depth sensors, with the latter being a promising innovative approach for fast and accurate 3D object reconstruction using a broad variety of measuring principles including stereo vision, infrared light and laser beams. In this study we investigate the use of the newly designed Stereolabs ZED depth camera, based on passive stereo depth calculation, mounted on an Unmanned Aerial Vehicle with an ad-hoc setup specially designed for outdoor scene applications. Towards this direction, the results of its depth calculations and the scene reconstruction generated by Simultaneous Localization and Mapping (SLAM) algorithms are compared and evaluated, based on qualitative and quantitative criteria, with respect to those derived by a typical Structure from Motion (SfM) and Multiple View Stereo (MVS) pipeline for a challenging cultural heritage application.

  3. Driver head pose tracking with thermal camera

    Science.gov (United States)

    Bole, S.; Fournier, C.; Lavergne, C.; Druart, G.; Lépine, T.

    2016-09-01

    Head pose can be seen as a coarse estimation of gaze direction. In the automotive industry, knowledge about gaze direction could optimize Human-Machine Interfaces (HMI) and Advanced Driver Assistance Systems (ADAS). Pose estimation systems are often camera-based when applications have to be contactless. In this paper, we explore uncooled thermal imagery (8-14 μm) for its intrinsic night vision capabilities and for its invariance to lighting variations. Two methods are implemented and compared, both aided by a 3D model of the head. The 3D model, mapped with thermal texture, is used to synthesize a base of 2D projected models, differently oriented and labeled in yaw and pitch. The first method is based on keypoints: keypoints of the models are matched with those of the query image, and these sets of matchings, aided by the 3D shape of the model, allow the 3D pose to be estimated. The second method is a global appearance approach: among all 2D models of the base, the algorithm searches for the one closest to the query image using a weighted least-squares difference.
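
    The global appearance search amounts to a weighted nearest-template lookup over the base of projected models. A minimal sketch with tiny synthetic "thermal" templates (sizes, weights, and noise level are illustrative assumptions):

```python
import numpy as np

def closest_model(query, models, weights):
    """Global-appearance search: index of the projected 2D model whose
    weighted sum-of-squares difference to the query image is smallest."""
    costs = [float(np.sum(weights * (query - m) ** 2)) for m in models]
    return int(np.argmin(costs))

# Toy 4x4 templates standing in for pose-labelled projections; the weight
# map emphasises the centre of the face region.
rng = np.random.default_rng(2)
models = [rng.uniform(0.0, 1.0, size=(4, 4)) for _ in range(5)]
weights = np.ones((4, 4))
weights[1:3, 1:3] = 4.0
query = models[3] + rng.normal(0.0, 0.01, size=(4, 4))  # noisy view of model 3

print(closest_model(query, models, weights))  # -> 3
```

    The index returned maps back to the yaw/pitch label of the matching projection, which is how the pose estimate is read off.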

  4. STRAY DOG DETECTION IN WIRED CAMERA NETWORK

    Directory of Open Access Journals (Sweden)

    C. Prashanth

    2013-08-01

    Existing surveillance systems impose a high level of security on humans but pay little attention to animals. Stray dogs could be used as an alternative to humans to carry explosive material. It is therefore imperative to ensure the detection of stray dogs for necessary corrective action. In this paper, a novel composite approach to detect the presence of stray dogs is proposed. The captured frame from the surveillance camera is initially pre-processed using a Gaussian filter to remove noise. The foreground object of interest is extracted using the ViBe algorithm. The Histogram of Oriented Gradients (HOG) algorithm is used as the shape descriptor, which derives the shape and size information of the extracted foreground object. Finally, stray dogs are classified from humans using a polynomial Support Vector Machine (SVM) of order 3. The proposed composite approach is simulated in MATLAB and OpenCV and further validated with real-time video feeds taken from an existing surveillance system. From the results obtained, it is found that a classification accuracy of about 96% is achieved. This encourages the utilization of the proposed composite algorithm in real-time surveillance systems.
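
    The core of the HOG shape descriptor is a magnitude-weighted histogram of gradient orientations. A stripped-down single-cell sketch (real HOG adds cells, block normalisation, and overlapping windows; the patch below is synthetic):

```python
import numpy as np

def orientation_histogram(patch, n_bins=9):
    """HOG-style descriptor core: histogram of gradient orientations
    (0-180 degrees) weighted by gradient magnitude, then normalised."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, 180.0), weights=mag)
    return hist / (hist.sum() + 1e-9)

# A patch whose intensity rises uniformly down the rows: all gradient
# energy lands in the 90-degree bin (bin 4 of 9).
patch = np.tile(np.arange(8.0)[:, None], (1, 8))
h = orientation_histogram(patch)
print(int(np.argmax(h)))  # -> 4
```

    Feature vectors like `h`, computed over the ViBe foreground mask, are what the polynomial SVM would separate into dog vs. human classes.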

  5. A miniature VGA SWIR camera using MT6415CA ROIC

    Science.gov (United States)

    Eminoglu, Selim; Yilmaz, S. Gokhan; Kocak, Serhat

    2014-06-01

    This paper reports the development of a new miniature VGA SWIR camera called NanoCAM-6415, developed to demonstrate the key features of the MT6415CA ROIC, such as high integration level, low noise, and low power in a small volume. The NanoCAM-6415 uses an InGaAs Focal Plane Array (FPA) with a format of 640 × 512 and a pixel pitch of 15 μm built using the MT6415CA ROIC. MT6415CA is a low-noise CTIA ROIC with a system-on-chip architecture that generates all the required timing and biases on-chip, without requiring any external components or inputs, thus enabling the development of compact and low-noise SWIR cameras with reduced size, weight, and power (SWaP). The NanoCAM-6415 supports snapshot operation using Integrate-Then-Read (ITR) and Integrate-While-Read (IWR) modes. The camera has three gain settings enabled by the ROIC through programmable Full-Well-Capacity (FWC) values of 10,000 e-, 20,000 e-, and 350,000 e- in the very-high-gain (VHG), high-gain (HG), and low-gain (LG) modes, respectively. The camera has an input-referred noise level of 10 e- rms in the VHG mode at 1 ms integration time, suitable for low-noise SWIR imaging applications. In order to reduce the size and power of the camera, only 2 of the ROIC's 8 outputs are connected to the external Analog-to-Digital Converters (ADCs) in the camera electronics, providing a maximum frame rate of 50 fps through a 26-pin SDR-type Camera Link connector. The NanoCAM-6415 SWIR camera without the optics measures 32 mm × 32 mm × 35 mm, weighs 45 g, and dissipates less than 1.8 W from a 5 V supply. These results show that the MT6415CA ROIC can successfully be used to develop cameras for SWIR imaging applications where SWaP is a concern. Mikro-Tasarim has also developed new imaging software to demonstrate the functionality of this miniature VGA camera. Mikro-Tasarim provides tested ROIC wafers and also offers compact and easy-to-use test electronics, demo cameras, and hardware

  6. Benchmarking the Optical Resolving Power of Uav Based Camera Systems

    Science.gov (United States)

    Meißner, H.; Cramer, M.; Piltz, B.

    2017-08-01

    UAV based imaging and 3D object point generation is an established technology. Some UAV users try to address (very) high-accuracy applications, i.e. inspection or monitoring scenarios. In order to guarantee such a level of detail and accuracy, high resolving imaging systems are mandatory. Furthermore, image quality considerably impacts photogrammetric processing, as the tie point transfer, mandatory for forming the block geometry, fully relies on the radiometric quality of images. Thus, empirical testing of radiometric camera performance is an important issue, in addition to standard (geometric) calibration, which normally is covered primarily. Within this paper the resolving power of ten different camera/lens installations has been investigated. The selected systems represent different camera classes, like DSLRs, system cameras, larger format cameras and proprietary systems. As the systems have been tested in well-controlled laboratory conditions and objective quality measures have been derived, individual performance can be compared directly, thus representing a first benchmark on radiometric performance of UAV cameras. The results have shown that not only the selection of an appropriate lens and camera body has an impact; in addition, the image pre-processing, i.e. the use of a specific debayering method, significantly influences the final resolving power.

  7. BENCHMARKING THE OPTICAL RESOLVING POWER OF UAV BASED CAMERA SYSTEMS

    Directory of Open Access Journals (Sweden)

    H. Meißner

    2017-08-01

    UAV based imaging and 3D object point generation is an established technology. Some UAV users try to address (very) high-accuracy applications, i.e. inspection or monitoring scenarios. In order to guarantee such a level of detail and accuracy, high resolving imaging systems are mandatory. Furthermore, image quality considerably impacts photogrammetric processing, as the tie point transfer, mandatory for forming the block geometry, fully relies on the radiometric quality of images. Thus, empirical testing of radiometric camera performance is an important issue, in addition to standard (geometric) calibration, which normally is covered primarily. Within this paper the resolving power of ten different camera/lens installations has been investigated. The selected systems represent different camera classes, like DSLRs, system cameras, larger format cameras and proprietary systems. As the systems have been tested in well-controlled laboratory conditions and objective quality measures have been derived, individual performance can be compared directly, thus representing a first benchmark on radiometric performance of UAV cameras. The results have shown that not only the selection of an appropriate lens and camera body has an impact; in addition, the image pre-processing, i.e. the use of a specific debayering method, significantly influences the final resolving power.

  8. Analysis of Camera Arrays Applicable to the Internet of Things.

    Science.gov (United States)

    Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing

    2016-03-22

    The Internet of Things is built on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on camera modeling and analysis, which is very important for stereo display and helps with viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are used in various applications and analyzed in research work, there are few discussions comparing them. Therefore, we make a detailed analysis of their performance over different shooting distances. From our analysis, we find that the threshold of shooting distance for converged cameras is 7 m. In addition, we design a camera array that can be used both as a parallel camera array and as a converged camera array, and take images and videos with it to verify the threshold.
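
    For a parallel stereo pair, the horizontal parallax of a point falls off inversely with its depth; this is the quantity whose behaviour over shooting distance is being compared. A minimal sketch with an assumed (hypothetical) focal length and baseline:

```python
def disparity_px(focal_px, baseline_m, depth_m):
    """Horizontal parallax (pixels) between a parallel stereo pair for a
    point at the given depth: d = f * b / Z. Converged rigs additionally
    introduce vertical parallax, which grows as the shooting distance
    shrinks."""
    return focal_px * baseline_m / depth_m

# Hypothetical rig: 1500 px focal length, 6.5 cm baseline.
for z in (1.0, 7.0, 20.0):
    print(z, round(disparity_px(1500.0, 0.065, z), 2))
```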

  9. Determining Vision Graphs for Distributed Camera Networks Using Feature Digests

    Directory of Open Access Journals (Sweden)

    Richard J. Radke

    2007-01-01

    We propose a decentralized method for obtaining the vision graph for a distributed, ad-hoc camera network, in which each edge of the graph represents two cameras that image a sufficiently large part of the same environment. Each camera encodes a spatially well-distributed set of distinctive, approximately viewpoint-invariant feature points into a fixed-length “feature digest” that is broadcast throughout the network. Each receiver camera robustly matches its own features with the decompressed digest and decides whether sufficient evidence exists to form a vision graph edge. We also show how a camera calibration algorithm that passes messages only along vision graph edges can recover accurate 3D structure and camera positions in a distributed manner. We analyze the performance of different message formation schemes, and show that high detection rates (>0.8) can be achieved while maintaining low false alarm rates (<0.05) using a simulated 60-node outdoor camera network.

  10. The Alfred Nobel rocket camera. An early aerial photography attempt

    Science.gov (United States)

    Ingemar Skoog, A.

    2010-02-01

    Alfred Nobel (1833-1896), mainly known for his invention of dynamite and the creation of the Nobel Prizes, was an engineer and inventor active in many fields of science and engineering, e.g. chemistry, medicine, mechanics, metallurgy, optics, armoury and rocketry. Amongst his inventions in rocketry was the smokeless solid propellant ballistite (i.e. cordite), patented for the first time in 1887. As a very wealthy person he actively supported many Swedish inventors in their work. One of them was W.T. Unge, who was devoted to the development of rockets and their applications. Nobel and Unge held several rocket patents together and also jointly worked on various rocket applications. In mid-1896 Nobel applied for patents in England and France for "An Improved Mode of Obtaining Photographic Maps and Earth or Ground Measurements" using a photographic camera carried by a "…balloon, rocket or missile…". During the remainder of 1896 the mechanical design of the camera mechanism was pursued and cameras were manufactured. In April 1897 (after the death of Alfred Nobel) the first aerial photos were taken by these cameras. These photos might be the first documented aerial photos taken by a rocket-borne camera. Cameras and photos from 1897 have been preserved. Nobel not only developed the rocket-borne camera but also proposed methods for using the photographs taken for ground measurements and the preparation of maps.

  11. Microprocessor-controlled, wide-range streak camera

    Energy Technology Data Exchange (ETDEWEB)

    Amy E. Lewis, Craig Hollabaugh

    2006-09-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OS X) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  12. Active learning in camera calibration through vision measurement application

    Science.gov (United States)

    Li, Xiaoqin; Guo, Jierong; Wang, Xianchun; Liu, Changqing; Cao, Binfang

    2017-08-01

    Since cameras are increasingly used in scientific applications as well as in applications requiring precise visual information, effective calibration of such cameras is becoming more important. There are many reasons why measurements of objects are not accurate. The largest is lens distortion. Another detrimental influence on evaluation accuracy is caused by perspective distortions in the image, which occur whenever the camera cannot be mounted perpendicular to the objects to be measured. Overall, it is very important for students to understand how to correct lens distortions, that is, camera calibration. If the camera is calibrated, the images are rectified, and it is then possible to obtain undistorted measurements in world coordinates. This paper presents how students can develop a sense of active learning for the mathematical camera model in addition to the theoretical scientific basics. The authors present theoretical and practical lectures with the goal of deepening the students' understanding of the mathematical models of area scan cameras and having them build a practical vision measurement process by themselves.
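
    The lens distortion students learn to correct is usually described by the standard polynomial radial model from camera calibration. A minimal forward-model sketch in normalised image coordinates (the coefficients below are illustrative):

```python
def radial_distort(x, y, k1, k2):
    """Polynomial radial distortion model used in standard camera
    calibration: a point at radius r is scaled by 1 + k1*r^2 + k2*r^4.
    Negative k1 gives barrel distortion, positive gives pincushion."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# Barrel distortion pulls an off-axis point toward the image centre:
xd, yd = radial_distort(0.5, 0.0, -0.2, 0.0)
print(xd)  # -> 0.475
```

    Calibration estimates k1 and k2 from views of a known target; rectification then inverts this model (typically iteratively, since the polynomial has no closed-form inverse) to produce undistorted images.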

  13. Image responses to x-ray radiation in ICCD camera

    Science.gov (United States)

    Ma, Jiming; Duan, Baojun; Song, Yan; Song, Guzhou; Han, Changcai; Zhou, Ming; Du, Jiye; Wang, Qunshu; Zhang, Jianqi

    2013-08-01

    When used in digital radiography, an ICCD camera will inevitably be irradiated by x-rays and the output image will degrade. In this research, we separated the ICCD camera into two opto-electronic parts, the CCD camera and the MCP image intensifier, and irradiated them respectively with a Co-60 gamma-ray source and a pulsed x-ray source. By changing the time association between the radiation and the shutter of the CCD camera, and the state of the power supply of the MCP image intensifier, significant differences were observed in the output images. A further analysis revealed the influence of the CCD chip and readout circuit in the CCD camera, and of the photocathode, microchannel plate and fluorescent screen in the MCP image intensifier, on the image quality of an irradiated ICCD camera. The study demonstrated that, compared with the other parts, the irradiation response of the readout circuit is very slight and in most cases negligible. The interaction of x-rays with the CCD chip usually appears as bright spots or a rough background in the output images, depending on the x-ray dose. As for the MCP image intensifier, the photocathode and the microchannel plate are the two main stages that degrade the output images. When irradiated by x-rays, the microchannel plate tends to contribute a bright background to the output images, while the background caused by the photocathode looks brighter and more fluctuant. Image responses of the fluorescent screen in the MCP image intensifier and of a coupling fiber bundle are also evaluated in this presentation.

  14. Global Calibration of Multiple Cameras Based on Sphere Targets

    Directory of Open Access Journals (Sweden)

    Junhua Sun

    2016-01-01

    Full Text Available Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method is simple to operate and flexible, especially for on-site multiple cameras without a common field of view.
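
    The root mean squared error quoted for the 578 mm distance experiment is the standard RMS of repeated measurements against the reference length. A minimal sketch (the function name and sample values are hypothetical, not the paper's data):

```python
from math import sqrt

def rms_error(measured_mm, reference_mm):
    """Root mean squared error of repeated distance measurements
    against a known reference length."""
    n = len(measured_mm)
    return sqrt(sum((m - reference_mm) ** 2 for m in measured_mm) / n)
```

    A calibration whose repeated measurements of the 578 mm gauge all fall within about 0.14 mm of it would yield an RMS error of that order.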

  15. LSST camera readout chip ASPIC: test tools

    Science.gov (United States)

    Antilogus, P.; Bailly, Ph; Jeglot, J.; Juramy, C.; Lebbolo, H.; Martin, D.; Moniez, M.; Tocut, V.; Wicek, F.

    2012-02-01

    The LSST camera will have more than 3000 video-processing channels. The readout of this large focal plane requires a very compact readout chain. The correlated double sampling technique, which is generally used for the signal readout of CCDs, is also adopted for this application and implemented with the so-called "dual slope integrator" method. We have designed and implemented an ASIC for LSST: the Analog Signal Processing asIC (ASPIC). The goal is to amplify the signal close to the output, in order to maximize the signal-to-noise ratio, and to send differential outputs to the digitization stage. Other requirements are that each chip should process the output of half a CCD, that is, 8 channels, and should operate at 173 K. A specific back-end board has been designed especially for lab test purposes. It manages the clock signals, digitizes the analog differential outputs of the ASPIC and stores the data in memory. It contains 8 ADCs (18 bits), 512 kwords of memory and a USB interface. An FPGA manages all signals from/to all components on board and generates the timing sequence for the ASPIC. Its firmware is written in the Verilog and VHDL languages. Internal registers permit the definition of various test parameters of the ASPIC. A LabVIEW GUI allows these registers to be loaded or updated and proper operation to be checked. Several series of tests, including linearity, noise and crosstalk, have been performed over the past year to characterize the ASPIC at room and cold temperatures. At present, the ASPIC, back-end board and CCD detectors are being integrated to perform a characterization of the whole readout chain.
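
    The correlated double sampling idea mentioned above amounts to differencing the reset and signal levels of each pixel so that offsets common to both windows cancel; the dual slope integrator realises the same difference by integrating the two windows with opposite slopes. A minimal numeric sketch (the window layout is an illustrative assumption, not the ASPIC's actual timing):

```python
def cds_sample(samples, reset_window, signal_window):
    """Correlated double sampling on a digitized pixel waveform:
    mean of the signal window minus mean of the reset window.
    Any offset present in both windows cancels out."""
    r0, r1 = reset_window
    s0, s1 = signal_window
    reset = samples[r0:r1]
    signal = samples[s0:s1]
    return sum(signal) / len(signal) - sum(reset) / len(reset)
```

    With a synthetic waveform holding a 5 V baseline and a 2 V photo-signal step, the recovered value is the step alone, regardless of the baseline.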

  16. Kinect Fusion improvement using depth camera calibration

    Directory of Open Access Journals (Sweden)

    D. Pagliari

    2014-06-01

    Full Text Available Scene 3D modelling, gesture recognition and motion tracking are fields in rapid and continuous development which have caused a growing demand for interactivity in the video-game and e-entertainment market. Starting from the idea of creating a sensor that allows users to play without having to hold any remote controller, the Microsoft Kinect device was created. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for the Kinect in order to use it not only as a game device but as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow using the device as a 3D scanner, producing polygonal meshes of a static scene simply by moving the Kinect around. A drawback of this sensor is the geometric quality of the delivered data and its low repeatability. For this reason the authors carried out an investigation to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented, and the 3D data are then corrected accordingly. By integrating the depth correction algorithm and correcting the IR camera's interior and exterior orientation parameters, the Fusion libraries are corrected and a new reconstruction software is created to produce more accurate models.

  17. Kinect Fusion improvement using depth camera calibration

    Science.gov (United States)

    Pagliari, D.; Menna, F.; Roncella, R.; Remondino, F.; Pinto, L.

    2014-06-01

    Scene 3D modelling, gesture recognition and motion tracking are fields in rapid and continuous development which have caused a growing demand for interactivity in the video-game and e-entertainment market. Starting from the idea of creating a sensor that allows users to play without having to hold any remote controller, the Microsoft Kinect device was created. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for the Kinect in order to use it not only as a game device but as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow using the device as a 3D scanner, producing polygonal meshes of a static scene simply by moving the Kinect around. A drawback of this sensor is the geometric quality of the delivered data and its low repeatability. For this reason the authors carried out an investigation to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented, and the 3D data are then corrected accordingly. By integrating the depth correction algorithm and correcting the IR camera's interior and exterior orientation parameters, the Fusion libraries are corrected and a new reconstruction software is created to produce more accurate models.
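
    A depth-correction step of the kind described above typically subtracts a fitted error model from the raw sensor depth. The polynomial form and coefficients below are hypothetical placeholders; the paper fits its own model from calibration data.

```python
def correct_depth(z_raw_mm, c0, c1, c2):
    """Subtract a polynomial depth-error model
    e(z) = c0 + c1*z + c2*z^2 from a raw depth reading (mm).
    The coefficients would come from a calibration fit."""
    error = c0 + c1 * z_raw_mm + c2 * z_raw_mm ** 2
    return z_raw_mm - error
```

    Applied per pixel before meshing, such a correction removes the systematic distance-dependent bias of the depth sensor.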

  18. Robust Pedestrian Detection by Combining Visible and Thermal Infrared Cameras

    Directory of Open Access Journals (Sweden)

    Ji Hoon Lee

    2015-05-01

    Full Text Available With the development of intelligent surveillance systems, the need for accurate detection of pedestrians by cameras has increased. However, most previous studies use a single camera system, either a visible light or a thermal camera, and their performance is affected by various factors such as shadow, illumination change, occlusion, and higher background temperatures. To overcome these problems, we propose a new method of detecting pedestrians using a dual camera system that combines visible light and thermal cameras, which is robust in various outdoor environments such as mornings, afternoons, nights and rainy days. Our research is novel, compared to previous works, in the following four ways. First, we implement the dual camera system where the axes of the visible light and thermal cameras are parallel in the horizontal direction. We obtain a geometric transform matrix that represents the relationship between these two camera axes. Second, two background images for the visible light and thermal cameras are adaptively updated based on the pixel difference between an input thermal image and a pre-stored thermal background image. Third, by background subtraction of the thermal image considering the temperature characteristics of the background, and size filtering with morphological operations, the candidates from the whole image (CWI) in the thermal image are obtained. The positions of the CWI (obtained by background subtraction) and the results of shadow removal, morphological operation, size filtering, and filtering by the ratio of height to width in the visible light image are projected onto those in the thermal image by using the geometric transform matrix, and the searching regions for pedestrians are defined in the thermal image. Fourth, within these searching regions, the candidates from the searching image region (CSI) of pedestrians in the thermal image are detected.
    The final areas of pedestrians are located by combining the detected positions of the CWI and CSI of
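
    The projection of candidate positions from the visible-light image into the thermal image via a geometric transform matrix can be sketched as a planar homography mapping, as below. This is a generic illustration; the paper's actual matrix is estimated from the parallel-axis camera rig.

```python
def project_point(H, x, y):
    """Map a pixel (x, y) from one camera's image into the other's
    using a 3x3 homography H (row-major nested lists), applying the
    usual homogeneous-coordinate normalization."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v
```

    For the near-parallel axes described in the abstract, H reduces approximately to a translation plus scale between the two image planes.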

  19. Design and Construction of an X-ray Lightning Camera

    Science.gov (United States)

    Schaal, M.; Dwyer, J. R.; Rassoul, H. K.; Uman, M. A.; Jordan, D. M.; Hill, J. D.

    2010-12-01

    A pinhole-type camera was designed and built for the purpose of producing high-speed images of the x-ray emissions from rocket-and-wire-triggered lightning. The camera consists of 30 NaI(Tl) scintillation detectors of 7.62-cm diameter, each sampling at 10 million frames per second. The steel structure of the camera is encased in 1.27-cm thick lead, which blocks x-rays below 400 keV, except through a 7.62-cm diameter “pinhole” aperture located at the front of the camera. The lead and steel structure is covered in 0.16-cm thick aluminum to block RF noise, water and light. Altogether, the camera weighs about 550 kg and is approximately 1.2 m x 0.6 m x 0.6 m. The image plane, which is adjustable, was placed 32 cm behind the pinhole aperture, giving a field of view of about ±38° in both the vertical and horizontal directions. The elevation of the camera is adjustable between 0 and 50° from horizontal, and the camera may be pointed in any azimuthal direction. In its current configuration, the camera’s angular resolution is about 14°. During the summer of 2010, the x-ray camera was located 44 m from the rocket-launch tower at the UF/Florida Tech International Center for Lightning Research and Testing (ICLRT) at Camp Blanding, FL, and several rocket-triggered lightning flashes were observed. In this presentation, I will discuss the design, construction and operation of this x-ray camera.
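
    The angular figures above follow from simple pinhole geometry: an extent placed behind the aperture subtends an angle set by its size and the pinhole distance. The sketch below checks this with the quoted numbers; the 25 cm image-plane half-width is an assumed value chosen to reproduce the stated field of view, not a figure from the abstract.

```python
import math

def subtended_angle_deg(extent_cm, distance_cm):
    """Angle subtended at the pinhole by a transverse extent
    at the given distance along the optical axis."""
    return math.degrees(math.atan2(extent_cm, distance_cm))

# One 7.62 cm detector at the 32 cm image-plane distance subtends
# roughly 13-14 degrees, consistent with the quoted angular resolution.
angular_resolution = subtended_angle_deg(7.62, 32.0)

# An assumed ~25 cm image-plane half-width reproduces the quoted
# +/- 38 degree field of view.
half_fov = subtended_angle_deg(25.0, 32.0)
```

    Both derived angles agree with the values stated in the abstract to within rounding.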

  20. Object recognition through turbulence with a modified plenoptic camera

    Science.gov (United States)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher

    2015-03-01

    Atmospheric turbulence adds accumulated distortion to images obtained by cameras and surveillance systems. When the turbulence grows stronger or when the object is further away from the observer, increasing the recording device resolution helps little to improve the quality of the image. Many sophisticated methods to correct the distorted images have been invented, such as using a known feature on or near the target object to perform a deconvolution process, or the use of adaptive optics. However, most of these methods depend heavily on the object's location, and optical ray propagation through the turbulence is not directly considered. Alternatively, selecting a lucky image over many frames provides a feasible solution, but at the cost of time. In our work, we propose an innovative approach to improving image quality through turbulence by making use of a modified plenoptic camera. This type of camera adds a micro-lens array to a traditional high-resolution camera to form a semi-camera array that records duplicate copies of the object as well as "superimposed" turbulence at slightly different angles. By performing several steps of image reconstruction, turbulence effects are suppressed to reveal more details of the object independently (without finding references near the object). Meanwhile, the redundant information obtained by the plenoptic camera raises the possibility of performing lucky-image algorithmic analysis with fewer frames, which is more efficient. The details of our modified plenoptic cameras and image processing algorithms will be introduced. The proposed method can be applied to coherently as well as incoherently illuminated objects. Our results show that the turbulence effect can be effectively suppressed by the plenoptic camera in the hardware layer, and a reconstructed "lucky image" can help the viewer identify the object even when a "lucky image" from ordinary cameras is not achievable.

  1. Camera Coverage Estimation Based on Multistage Grid Subdivision

    Directory of Open Access Journals (Sweden)

    Meizhen Wang

    2017-04-01

    Full Text Available Visual coverage is one of the most important quality indexes for depicting the usability of an individual camera or camera network. It is the basis for camera network deployment, placement, coverage enhancement, planning, etc. Precision and efficiency are critical influences on applications, especially those involving several cameras. This paper proposes a new method to efficiently estimate camera coverage. First, the geographic area that is covered by the camera and its minimum bounding rectangle (MBR), without considering obstacles, is computed using the camera parameters. Second, the MBR is divided into grids using the initial grid size. The status of the four corners of each grid is estimated by a line of sight (LOS) algorithm. If the camera, considering obstacles, covers a corner, the status is represented by 1, otherwise by 0. Consequently, the status of a grid can be represented by a code that is a combination of 0s and 1s. If the code is not homogeneous (not four 0s or four 1s), the grid is divided into four sub-grids, until the sub-grids reach a specified maximum level or their codes are homogeneous. Finally, after performing the process above, total camera coverage is estimated according to the size and status of all grids. Experimental results illustrate that the proposed method's accuracy matches that of dividing the coverage area into the smallest grids at the maximum level, while its efficiency is close to that of dividing the coverage area into the initial grids only; it thus balances efficiency and accuracy. The initial grid size and the maximum level are the two critical influences on the proposed method, and they can be determined by weighing efficiency against accuracy.
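
    The corner-coding and subdivision scheme described above is essentially a quadtree refinement. The sketch below implements that idea with a plug-in visibility predicate standing in for the LOS test; cell handling at the maximum level (averaging the four corner codes) is an illustrative assumption rather than the paper's exact rule.

```python
def covered_fraction(is_visible, x0, y0, size, level, max_level):
    """Estimate the covered fraction of a square cell: test its four
    corners with the visibility predicate; homogeneous cells are fully
    covered (1.0) or uncovered (0.0), mixed cells are split into four
    sub-cells and recursed until max_level, where the corner average
    is used as the cell's estimate."""
    corners = [(x0, y0), (x0 + size, y0),
               (x0, y0 + size), (x0 + size, y0 + size)]
    code = sum(1 for cx, cy in corners if is_visible(cx, cy))
    if code == 4:
        return 1.0
    if code == 0:
        return 0.0
    if level == max_level:
        return code / 4.0
    half = size / 2.0
    return sum(
        covered_fraction(is_visible, x0 + dx, y0 + dy, half,
                         level + 1, max_level)
        for dx in (0.0, half) for dy in (0.0, half)
    ) / 4.0
```

    For a coverage boundary splitting the unit square in half, the estimate converges toward 0.5 as the maximum level grows, with only the boundary strip contributing error.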

  2. The GCT camera for the Cherenkov Telescope Array

    Science.gov (United States)

    Lapington, J. S.; Abchiche, A.; Allan, D.; Amans, J.-P.; Armstrong, T. P.; Balzer, A.; Berge, D.; Boisson, C.; Bousquet, J.-J.; Bose, R.; Brown, A. M.; Bryan, M.; Buchholtz, G.; Buckley, J.; Chadwick, P. M.; Costantini, H.; Cotter, G.; Daniel, M. K.; De Franco, A.; De Frondat, F.; Dournaux, J.-L.; Dumas, D.; Ernenwein, J.-P.; Fasola, G.; Funk, S.; Gironnet, J.; Graham, J. A.; Greenshaw, T.; Hervet, O.; Hidaka, N.; Hinton, J. A.; Huet, J.-M.; Jankowsky, D.; Jegouzo, I.; Jogler, T.; Kawashima, T.; Kraus, M.; Laporte, P.; Leach, S.; Lefaucheur, J.; Markoff, S.; Melse, T.; Minaya, I. A.; Mohrmann, L.; Molyneux, P.; Moore, P.; Nolan, S. J.; Okumura, A.; Osborne, J. P.; Parsons, R. D.; Rosen, S.; Ross, D.; Rowell, G.; Rulten, C. B.; Sato, Y.; Sayede, F.; Schmoll, J.; Schoorlemmer, H.; Servillat, M.; Sol, H.; Stamatescu, V.; Stephan, M.; Stuik, R.; Sykes, J.; Tajima, H.; Thornhill, J.; Tibaldo, L.; Trichard, C.; Varner, G.; Vink, J.; Watson, J. J.; White, R.; Yamane, N.; Zech, A.; Zink, A.; Zorn, J.; CTA Consortium

    2017-12-01

    The Gamma Cherenkov Telescope (GCT) is one of the designs proposed for the Small Sized Telescope (SST) section of the Cherenkov Telescope Array (CTA). The GCT uses dual-mirror optics, resulting in a compact telescope with good image quality and a large field of view with a smaller, more economical, camera than is achievable with conventional single mirror solutions. The photon counting GCT camera is designed to record the flashes of atmospheric Cherenkov light from gamma and cosmic ray initiated cascades, which last only a few tens of nanoseconds. The GCT optics require that the camera detectors follow a convex surface with a radius of curvature of 1 m and a diameter of 35 cm, which is approximated by tiling the focal plane with 32 modules. The first camera prototype is equipped with multi-anode photomultipliers, each comprising an 8×8 array of 6×6 mm2 pixels to provide the required angular scale, adding up to 2048 pixels in total. Detector signals are shaped, amplified and digitised by electronics based on custom ASICs that provide digitisation at 1 GSample/s. The camera is self-triggering, retaining images where the focal plane light distribution matches predefined spatial and temporal criteria. The electronics are housed in the liquid-cooled, sealed camera enclosure. LED flashers at the corners of the focal plane provide a calibration source via reflection from the secondary mirror. The first GCT camera prototype underwent preliminary laboratory tests last year. In November 2015, the camera was installed on a prototype GCT telescope (SST-GATE) in Paris and was used to successfully record the first Cherenkov light of any CTA prototype, and the first Cherenkov light seen with such a dual-mirror optical system. A second full-camera prototype based on Silicon Photomultipliers is under construction. Up to 35 GCTs are envisaged for CTA.

  3. Review of up-to-date digital camera interfaces

    Science.gov (United States)

    Linkemann, Joachim

    2013-04-01

    Over the past 15 years, various interfaces for digital industrial cameras have been available on the market. This tutorial gives an overview of interfaces such as LVDS (RS-644), Channel Link and Camera Link. In addition, other interfaces such as FireWire, Gigabit Ethernet, and now USB 3.0 have become more popular. Owing to their ease of use, these interfaces cover most of the market. Nevertheless, for certain applications, and especially for higher bandwidths, Camera Link and CoaXPress are very useful. The tutorial describes the advantages and disadvantages of each interface, comments on bandwidths, and provides recommendations on when to use which interface.

  4. A direct-view customer-oriented digital holographic camera

    Science.gov (United States)

    Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.

    2018-01-01

    In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as camera sensor and objective and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.

  5. Single-camera, three-dimensional particle tracking velocimetry

    OpenAIRE

    Peterson, K.; Regaard, B.; Heinemann, S.; Sick, V.

    2012-01-01

    This paper introduces single-camera, three-dimensional particle tracking velocimetry (SC3D-PTV), an image-based, single-camera technique for measuring 3-component, volumetric velocity fields in environments with limited optical access, in particular, optically accessible internal combustion engines. The optical components used for SC3D-PTV are similar to those used for two-camera stereoscopic-PIV, but are adapted to project two simultaneous images onto a single image sensor. A novel PTV algor...

  6. Whole body scan system based on γ camera

    International Nuclear Information System (INIS)

    Ma Tianyu; Jin Yongjie

    2001-01-01

    Most existing domestic γ cameras cannot perform a whole body scan protocol, which is of important use in the clinic. The authors designed a whole body scan system, made up of a scan bed, an ISA interface card controlling the scan bed, and data acquisition software based on a data acquisition and image processing system for γ cameras. Images were obtained in clinical experiments, and the authors consider that they meet the needs of clinical diagnosis. Application of this system to γ cameras can provide whole body scan functionality at low cost

  7. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellut, Paolo; Sherwin, Gary

    2011-01-01

    TIR cameras can be used for day/night Unmanned Ground Vehicle (UGV) autonomous navigation when stealth is required. The quality of uncooled TIR cameras has significantly improved over the last decade, making them a viable option at low speed. Limiting factors for stereo ranging with uncooled LWIR cameras are image blur and low-texture scenes. TIR perception capabilities JPL has explored include: (1) single and dual band TIR terrain classification; (2) obstacle detection (pedestrians, vehicles, tree trunks, ditches, and water); (3) perception through obscurants

  8. What about getting physiological information into dynamic gamma camera studies

    International Nuclear Information System (INIS)

    Kiuru, A.; Nickles, R. J.; Holden, J. E.; Polcyn, R. E.

    1976-01-01

    A general technique has been developed for the multiplexing of time-dependent analog signals into the individual frames of a gamma camera dynamic function study. A pulse train, frequency-modulated by the physiological signal, is capacitively coupled to the preamplifier servicing any one of the outer phototubes of the camera head. These negative tail pulses imitate photoevents occurring at a point outside the camera field of view, chosen to occupy a data cell in an unused corner of the computer-stored square image. By defining a region of interest around this cell, the resulting time-activity curve displays the physiological variable in temporal synchrony with the radiotracer distribution. (author)
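
    In the frequency-modulation scheme above, the count rate in the reserved data cell tracks the physiological signal, so the number of injected pulses per frame encodes the signal value. A minimal sketch of that relationship (the base rate and modulation gain are hypothetical parameters, not values from the paper):

```python
def pulses_in_frame(signal_value, frame_s, f0_hz, gain_hz_per_unit):
    """Number of injected tail pulses accumulated in one frame when
    the pulse-train frequency is f0 + gain * signal_value."""
    rate_hz = f0_hz + gain_hz_per_unit * signal_value
    return int(rate_hz * frame_s)
```

    Reading the region of interest around the reserved cell frame by frame then recovers the signal as a time-activity curve.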

  9. A pinhole camera for ultrahigh-intensity laser plasma experiments

    Science.gov (United States)

    Wang, C.; An, H. H.; Xiong, J.; Fang, Z. H.; Wang, Y. W.; Zhang, Z.; Hua, N.; Sun, J. R.; Wang, W.

    2017-11-01

    A pinhole camera is an important instrument for the detection of radiation in laser plasmas. It can monitor the laser focus directly and assist in the analysis of experimental data. However, conventional pinhole cameras are difficult to use when the target is irradiated by an ultrahigh-power laser, because of the high background of hard X-ray emission generated in the laser-target region. Therefore, an improved pinhole camera has been developed that uses a grazing-incidence mirror, enabling soft X-ray imaging while avoiding the effect of hard X-rays from hot dense plasmas.

  10. Applications of a shadow camera system for energy meteorology

    Science.gov (United States)

    Kuhn, Pascal; Wilbert, Stefan; Prahl, Christoph; Garsche, Dominik; Schüler, David; Haase, Thomas; Ramirez, Lourdes; Zarzalejo, Luis; Meyer, Angela; Blanc, Philippe; Pitz-Paal, Robert

    2018-02-01

    Downward-facing shadow cameras might play a major role in future energy meteorology. Shadow cameras directly image shadows on the ground from an elevated position. They are used to validate other systems (e.g. all-sky imager based nowcasting systems, cloud speed sensors or satellite forecasts) and can potentially provide short term forecasts for solar power plants. Such forecasts are needed for electricity grids with high penetrations of renewable energy and can help to optimize plant operations. In this publication, two key applications of shadow cameras are briefly presented.

  11. A versatile photogrammetric camera automatic calibration suite for multi-spectral fusion and optical helmet tracking

    CSIR Research Space (South Africa)

    De Villiers, J

    2014-05-01

    Full Text Available This paper presents a system to determine the photogrammetric parameters of a camera. The lens distortion, focal length and camera six degree of freedom (DOF) position are calculated. The system caters for cameras of different sensitivity spectra...

  12. Comparison of myocardial perfusion imaging between the new high-speed gamma camera and the standard anger camera

    International Nuclear Information System (INIS)

    Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi

    2013-01-01

    Cadmium-zinc-telluride (CZT) solid-state detectors have recently been introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of the CZT high-speed gamma camera (Discovery NM 530c) with that of the standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest 99mTc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera with a 15-min scan time each for stress and rest. All scans were immediately repeated on a CZT camera with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong, within narrow Bland-Altman limits of agreement. Using list mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans, and in 100% of the ≥4-min scans. For CZT scans at rest, similarly, image quality was rated as good or excellent in 94% of the 1-min scans, and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, equivalent to standard myocardial single-photon emission computed tomography, despite a scan time of less than half the standard. (author)
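
    The Bland-Altman limits of agreement used above are the bias (mean paired difference) plus or minus 1.96 standard deviations of the differences. A minimal sketch of that computation (the sample data in the test are hypothetical, not the study's measurements):

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired
    measurement series (e.g. CZT vs. standard camera values)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

    Narrow limits, as reported for the two cameras, mean the paired differences cluster tightly around the bias.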

  13. RELATIVE AND ABSOLUTE CALIBRATION OF A MULTIHEAD CAMERA SYSTEM WITH OBLIQUE AND NADIR LOOKING CAMERAS FOR A UAS

    Directory of Open Access Journals (Sweden)

    F. Niemeyer

    2013-08-01

    Full Text Available Numerous unmanned aerial systems (UAS) are currently flooding the market. UAVs are specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because legal restrictions are still manageable while the payload capacities are sufficient for many imaging sensors. A camera system with four oblique and one nadir looking cameras is currently under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. An MD4-1000 UAS from microdrones is used as the carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences from the test flights.

  14. The Light Field Attachment: Turning a DSLR into a Light Field Camera Using a Low Budget Camera Ring.

    Science.gov (United States)

    Wang, Yuwang; Liu, Yebin; Heidrich, Wolfgang; Dai, Qionghai

    2017-10-01

    We propose a concept for a lens attachment that turns a standard DSLR camera and lens into a light field camera. The attachment consists of eight low-resolution, low-quality side cameras arranged around the central high-quality SLR lens. Unlike most existing light field camera architectures, this design provides a high-quality 2D image mode, while simultaneously enabling a new high-quality light field mode with a large camera baseline but little added weight, cost, or bulk compared with the base DSLR camera. From an algorithmic point of view, the high-quality light field mode is made possible by a new light field super-resolution method that first improves the spatial resolution and image quality of the side cameras and then interpolates additional views as needed. At the heart of this process is a super-resolution method that we call iterative Patch- And Depth-based Synthesis (iPADS), which combines patch-based and depth-based synthesis in a novel fashion. Experimental results obtained for both real captured data and synthetic data confirm that our method achieves substantial improvements in super-resolution for side-view images as well as the high-quality and view-coherent rendering of dense and high-resolution light fields.

  15. The Light Field Attachment: Turning a DSLR into a Light Field Camera Using a Low Budget Camera Ring

    KAUST Repository

    Wang, Yuwang

    2016-11-16

    We propose a concept for a lens attachment that turns a standard DSLR camera and lens into a light field camera. The attachment consists of 8 low-resolution, low-quality side cameras arranged around the central high-quality SLR lens. Unlike most existing light field camera architectures, this design provides a high-quality 2D image mode, while simultaneously enabling a new high-quality light field mode with a large camera baseline but little added weight, cost, or bulk compared with the base DSLR camera. From an algorithmic point of view, the high-quality light field mode is made possible by a new light field super-resolution method that first improves the spatial resolution and image quality of the side cameras and then interpolates additional views as needed. At the heart of this process is a super-resolution method that we call iterative Patch- And Depth-based Synthesis (iPADS), which combines patch-based and depth-based synthesis in a novel fashion. Experimental results obtained for both real captured data and synthetic data confirm that our method achieves substantial improvements in super-resolution for side-view images as well as the high-quality and view-coherent rendering of dense and high-resolution light fields.

  16. Glaucoma Screening in Nepal: Cup-to-Disc Estimate With Standard Mydriatic Fundus Camera Compared to Portable Nonmydriatic Camera.

    Science.gov (United States)

    Miller, Sarah E; Thapa, Suman; Robin, Alan L; Niziol, Leslie M; Ramulu, Pradeep Y; Woodward, Maria A; Paudyal, Indira; Pitha, Ian; Kim, Tyson N; Newman-Casey, Paula Anne

    2017-10-01

    To compare cup-to-disc ratio (CDR) measurements from images taken with a portable, 45-degree nonmydriatic fundus camera to images from a traditional tabletop mydriatic fundus camera. Prospective, cross-sectional, comparative instrument validation study. Setting: Clinic-based. A total of 422 eyes of 211 subjects were recruited from the Tilganga Institute of Ophthalmology (Kathmandu, Nepal). Two masked readers measured CDR and noted possible evidence of glaucoma (CDR ≥ 0.7 or the presence of a notch or disc hemorrhage) from fundus photographs taken with a nonmydriatic portable camera and a mydriatic standard camera. Each image was graded twice. Outcomes were the effect of camera modality on CDR measurement and inter- and intraobserver agreement for each camera for the diagnosis of glaucoma. A total of 196 eyes (46.5%) were diagnosed with glaucoma by chart review; 41.2%-59.0% of eyes were remotely diagnosed with glaucoma across graders, repeat measurements, and camera modalities. There was no significant difference in CDR measurement between cameras after adjusting for grader and measurement order (estimate = 0.004, 95% confidence interval [CI], 0.003-0.011, P = .24). There was moderate interobserver reliability for the diagnosis of glaucoma (Pictor: κ = 0.54, CI, 0.46-0.61; Topcon: κ = 0.63, CI, 0.55-0.70) and moderate intraobserver agreement upon repeat grading (Pictor: κ = 0.63 and 0.64 for graders 1 and 2, respectively; Topcon: κ = 0.72 and 0.80 for graders 1 and 2, respectively). A portable, nonmydriatic fundus camera can facilitate remote evaluation of disc images on par with standard mydriatic fundus photography. Copyright © 2017 Elsevier Inc. All rights reserved.
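
    The κ values reported above are Cohen's kappa, which discounts the agreement expected by chance from the observed agreement between two sets of gradings. A minimal sketch (the label lists in the test are hypothetical, not the study data):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater1)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    p_chance = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (p_observed - p_chance) / (1.0 - p_chance)
```

    Kappa is 1 for perfect agreement, 0 for chance-level agreement, and negative when raters agree less often than chance, which is why values in the 0.5-0.8 range are read as moderate to substantial agreement.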

  17. Single-Camera Trap Survey Designs Miss Detections: Impacts on Estimates of Occupancy and Community Metrics

    OpenAIRE

    Pease, Brent S.; Nielsen, Clayton K.; Holzmueller, Eric J.

    2016-01-01

    The use of camera traps as a tool for studying wildlife populations is commonplace. However, few have considered how the number of detections of wildlife differs depending upon the number of camera traps placed at camera-sites, and how this impacts estimates of occupancy and community composition. During December 2015-February 2016, we deployed four camera traps per camera-site, separated into treatment groups of one, two, and four camera traps, in southern Illinois to compare whether estimat...

  18. Silicon Photomultipliers for Compact Neutron Scatter Cameras

    Science.gov (United States)

    Ruch, Marc L.

    The ability to locate and identify special nuclear material (SNM) is critical for treaty verification and emergency response applications. SNM is used as the nuclear explosive in a nuclear weapon. This material emits neutrons, either spontaneously or when interrogated. The ability to form an image of the neutron source can be used for characterization and/or to confirm that the item is a weapon by determining whether its shape is consistent with that of a weapon. Additionally, treaty verification and emergency response applications might not be conducive to non-portable instruments. In future weapons treaties, for example, it is unlikely that host countries will make great efforts to facilitate large, bulky, and/or fragile inspection equipment. Furthermore, inspectors and especially emergency responders may need to access locations not easily approachable by vehicles. Therefore, there is a considerable need for a compact, human-portable neutron imaging system. Of the currently available neutron imaging technologies, only neutron scatter cameras (NSCs) can be made truly compact, because aperture-based imagers and time-encoded imagers rely on large amounts of material to modulate the neutron signal. NSCs, in contrast, can be made very small because most of the volume of the imager can be filled with active detector material. Also, unlike other neutron imaging technologies, NSCs have the inherent ability to act as neutron spectrometers, which gives them an additional means of identifying a neutron source. Until recently, NSCs have relied on photomultiplier tube (PMT) readouts, which are bulky and fragile, require high voltage, and are very sensitive to magnetic fields. Silicon photomultipliers (SiPMs) do not suffer from these drawbacks and are comparable to PMTs in many respects, such as gain and cost, with better time resolution. Historically, SiPMs have been too noisy for these applications; however, recent advancements have greatly reduced this issue and they have

  19. A Novel Hemispherical and Dynamic Camera for EVAs Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This SBIR project is to develop a novel Hemispherical and Dynamic Camera(HDC) with ultra-wide field of view and low geometric distortion. The novel technology we...

  20. Face Liveness Detection Using a Light Field Camera

    Directory of Open Access Journals (Sweden)

    Sooyeon Kim

    2014-11-01

    Full Text Available A light field camera is a sensor that can record the directions as well as the colors of incident rays. This camera is widely utilized, from 3D reconstruction to face and iris recognition. In this paper, we suggest a novel approach for defending against face spoofing attacks, such as printed 2D facial photos (hereinafter 2D photos) and HD tablet images, using the light field camera. By viewing the raw light field photograph from a different standpoint, we extract two special features which cannot be obtained from a conventional camera. To verify the performance, we compose light field photograph databases and conduct experiments. Our proposed method achieves at least 94.78% and up to 99.36% accuracy under different types of spoofing attacks.

  1. Cameras instead of sieves for aggregate characterization : research spotlight

    Science.gov (United States)

    2012-01-01

    Michigan researchers explored the use of cameras and software that may eventually replace the use of screen sieves in sizing and assessing crushed aggregate for pavement construction. This research explored approaches to imaging aggregate as a way to...

  2. High quality neutron radiography imaging using cooled CCD camera

    International Nuclear Information System (INIS)

    Kobayashi, Hisao

    1993-01-01

    An electronic imaging technique using a cooled charge-coupled-device camera (C-CCD) was applied to neutron radiography. The camera was examined for the linearity of its signal outputs and for its dynamic range with respect to the number of photons generated in a converter by an incident neutron beam. It is expected that the camera can be applied to high quality NR imaging, especially to tomographic imaging of static objects. When the C-CCD camera is applied to tomography on the basis of its excellent characteristics, the results are discussed in terms of image quality through the dynamic range of the CT value, which is defined in this paper, together with a guide to the dimensional limits within which tomograms can reasonably be reconstructed. (author)

  3. Photogrammetric Processing of Apollo 15 Metric Camera Oblique Images

    Science.gov (United States)

    Edmundson, K. L.; Alexandrov, O.; Archinal, B. A.; Becker, K. J.; Becker, T. L.; Kirk, R. L.; Moratto, Z. M.; Nefian, A. V.; Richie, J. O.; Robinson, M. S.

    2016-06-01

    The integrated photogrammetric mapping system flown on the last three Apollo lunar missions (15, 16, and 17) in the early 1970s incorporated a Metric (mapping) Camera, a high-resolution Panoramic Camera, and a star camera and laser altimeter to provide support data. In an ongoing collaboration, the U.S. Geological Survey's Astrogeology Science Center, the Intelligent Robotics Group of the NASA Ames Research Center, and Arizona State University are working to achieve the most complete cartographic development of Apollo mapping system data into versatile digital map products. These will enable a variety of scientific/engineering uses of the data including mission planning, geologic mapping, geophysical process modelling, slope dependent correction of spectral data, and change detection. Here we describe efforts to control the oblique images acquired from the Apollo 15 Metric Camera.

  4. Holographic stereogram using camera array in dense arrangement

    Science.gov (United States)

    Yamamoto, Kenji; Oi, Ryutaro; Senoh, Takanori; Ichihashi, Yasuyuki; Kurita, Taiichiro

    2011-02-01

    Holographic stereograms can display 3D objects by using ray information. To display high-quality representations of real 3D objects with holographic stereograms, relatively dense ray information must be prepared as the 3D object information. One promising method of obtaining this information uses a combination of a camera array and view interpolation, which is a signal-processing technique. However, it is still technically difficult to synthesize ray information without visible error by using view interpolation. Our approach uses a densely arranged camera array to reduce this difficulty. Even though view interpolation is a simple signal-processing technique, the ray information synthesized from this camera array should be adequate. We designed and manufactured a densely arranged camera array and used it to generate holographic stereograms.
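As an illustration of why dense camera spacing helps, the simplest possible view interpolation is a plain linear blend of two neighbouring views: with small inter-camera baselines the disparities are small, so even this crude blend approximates an intermediate viewpoint. This toy sketch is a stand-in for the (unspecified) interpolation used by the authors:

```python
import numpy as np

def interpolate_view(left, right, alpha):
    """Naive linear blend between two neighbouring camera images.
    alpha = 0 returns the left view, alpha = 1 the right view.
    Works only when disparities are small (dense camera array);
    sparser arrays need disparity-compensated warping instead."""
    assert left.shape == right.shape and 0.0 <= alpha <= 1.0
    return (1.0 - alpha) * left.astype(float) + alpha * right.astype(float)

# Two hypothetical 2x2 grayscale views from adjacent cameras:
left = np.array([[10, 20], [30, 40]], dtype=np.uint8)
right = np.array([[20, 40], [60, 80]], dtype=np.uint8)
mid = interpolate_view(left, right, 0.5)  # halfway viewpoint
```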

  5. HST WIDE FIELD PLANETARY CAMERA 2 OBSERVATIONS OF MARS

    Data.gov (United States)

    National Aeronautics and Space Administration — The Hubble Space Telescope Wide Field Planetary Camera 2 data archive contains calibrated data of Mars observed between April 27, 1999 and September 4, 2001. These...

  6. The first GCT camera for the Cherenkov Telescope Array

    CERN Document Server

    De Franco, A.; Allan, D.; Armstrong, T.; Ashton, T.; Balzer, A.; Berge, D.; Bose, R.; Brown, A.M.; Buckley, J.; Chadwick, P.M.; Cooke, P.; Cotter, G.; Daniel, M.K.; Funk, S.; Greenshaw, T.; Hinton, J.; Kraus, M.; Lapington, J.; Molyneux, P.; Moore, P.; Nolan, S.; Okumura, A.; Ross, D.; Rulten, C.; Schmoll, J.; Schoorlemmer, H.; Stephan, M.; Sutcliffe, P.; Tajima, H.; Thornhill, J.; Tibaldo, L.; Varner, G.; Watson, J.; Zink, A.

    2015-01-01

    The Gamma Cherenkov Telescope (GCT) is proposed to be part of the Small Size Telescope (SST) array of the Cherenkov Telescope Array (CTA). The GCT dual-mirror optical design allows the use of a compact camera of diameter roughly 0.4 m. The curved focal plane is equipped with 2048 pixels of ~0.2° angular size, resulting in a field of view of ~9°. The GCT camera is designed to record the flashes of Cherenkov light from electromagnetic cascades, which last only a few tens of nanoseconds. Modules based on custom ASICs provide the required fast electronics, facilitating sampling and digitisation as well as the first level of triggering. The first GCT camera prototype is currently being commissioned in the UK. On-telescope tests are planned later this year. Here we give a detailed description of the camera prototype and present recent progress with testing and commissioning.

  7. Optical Design of the Submillimeter High Angular Resolution Camera (SHARC)

    Science.gov (United States)

    Hunter, T. R.; Benford, D. J.; Serabyn, E.

    1996-11-01

    The optical and mechanical design and performance of the Submillimeter High Angular Resolution Camera (SHARC) is described. The camera currently operates with a monolithic 24-pixel linear bolometer array in the 350 and 450 micron atmospheric windows at the Caltech Submillimeter Observatory (CSO). The design extends the techniques of geometric optics employed in optical and near-infrared cameras to submillimeter wavelengths. Using an off-axis ellipsoidal mirror and cold stops, excellent imaging (Strehl ratio > 0.95) is achieved across a 2' by 2' focal plane field even with secondary throws of up to 4'. The camera's symmetric mechanical assembly provides fixed, machined alignment of the optical elements. We demonstrate the imaging capabilities of the system with 350 micron observations of a point source at the telescope. The optical design can easily accommodate future planned upgrades to two-dimensional bolometer arrays. (SECTION: Astronomical Instrumentation)

  8. Ge Quantum Dot Infrared Imaging Camera, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Luna Innovations Incorporated proposes to develop a high performance Ge quantum dots-based infrared (IR) imaging camera on Si substrate. The high sensitivity, large...

  9. The CORONA Camera System - Itek's Contribution to World Security

    Science.gov (United States)

    Madden, F.

    This paper describes the camera system that made the Iron Curtain transparent, that dispelled the “missile gap” myth, possibly averted a nuclear war and most certainly kept an awful lot of people up on many, many long nights.

  10. Optimization of gamma-ray cameras of Anger type

    International Nuclear Information System (INIS)

    Jatteau, Michel; Lelong, Pierre; Normand, Gerard; Ott, Jean; Pauvert, Joseph; Pergrale, Jean

    1979-01-01

    Most of the radionuclide imaging equipment used for diagnosis in nuclear medicine includes a scintillation camera of the Anger type. After a period in which camera improvements came from purely technological advances, further progress can nowadays only result from more thorough studies based on numerical approaches and computer simulation. Two important contributions to an optimization study of Anger gamma-ray cameras are presented: the first is related to the collection of light by the photomultiplier tubes, one of the processes that largely determines the performance parameters; the second concerns the computation of the intrinsic geometrical and spectral resolutions, two of the main characteristics governing image quality. The validity of the computer simulation is demonstrated by comparison between theoretical and experimental results before the simulation programmes are used to study the influence of various parameters [fr

  11. Software for minimalistic data management in large camera trap studies.

    Science.gov (United States)

    Krishnappa, Yathin S; Turner, Wendy C

    2014-11-01

    The use of camera traps is now widespread and their importance in wildlife studies well understood. Camera trap studies can produce millions of photographs and there is a need for software to help manage photographs efficiently. In this paper, we describe a software system that was built to successfully manage a large behavioral camera trap study that produced more than a million photographs. We describe the software architecture and the design decisions that shaped the evolution of the program over the study's three-year period. The software system has the ability to automatically extract metadata from images, and add customized metadata to the images in a standardized format. The software system can be installed as a standalone application on popular operating systems. It is minimalistic, scalable and extendable so that it can be used by small teams or individual researchers for a broad variety of camera trap studies.

  12. Non-contact measurement of rotation angle with solo camera

    Science.gov (United States)

    Gan, Xiaochuan; Sun, Anbin; Ye, Xin; Ma, Liqun

    2015-02-01

    For the purpose of measuring the rotation angle of an object around its axis, a non-contact rotation angle measurement method based on a single camera is proposed. The intrinsic parameters of the camera were calibrated using a chessboard according to planar calibration theory. The translation matrix and rotation matrix between the object coordinate system and the camera coordinate system were calculated from the relationship between the corners' positions on the object and their coordinates in the image. The rotation angle between the measured object and the camera could then be resolved from the rotation matrix. A precise angle dividing table (PADT) was chosen as the reference to verify the angle measurement error of this method. Test results indicated that the rotation angle measurement error of this method did not exceed +/- 0.01 degree.
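The last step, recovering a rotation angle from an estimated rotation matrix, can be sketched with the standard axis-angle relation trace(R) = 1 + 2·cos(θ). This is an illustrative reconstruction, not the authors' code, and the two poses below are hypothetical:

```python
import numpy as np

def rotation_angle_deg(R):
    """Rotation angle (degrees) encoded by a 3x3 rotation matrix,
    via the axis-angle relation trace(R) = 1 + 2*cos(theta)."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

def rot_z(deg):
    """Rotation about the z-axis, for generating test poses."""
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])

# Hypothetical object poses (relative to the camera) before and after turning:
R_before, R_after = rot_z(10.0), rot_z(33.5)
R_rel = R_after @ R_before.T      # relative rotation between the two poses
angle = rotation_angle_deg(R_rel)  # 23.5 degrees
```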

  13. Sensor modelling and camera calibration for close-range photogrammetry

    Science.gov (United States)

    Luhmann, Thomas; Fraser, Clive; Maas, Hans-Gerd

    2016-05-01

    Metric calibration is a critical prerequisite to the application of modern, mostly consumer-grade digital cameras for close-range photogrammetric measurement. This paper reviews aspects of sensor modelling and photogrammetric calibration, with attention being focussed on techniques of automated self-calibration. Following an initial overview of the history and the state of the art, selected topics of current interest within calibration for close-range photogrammetry are addressed. These include sensor modelling, with standard, extended and generic calibration models being summarised, along with non-traditional camera systems. Self-calibration via both targeted planar arrays and targetless scenes amenable to SfM-based exterior orientation are then discussed, after which aspects of calibration and measurement accuracy are covered. Whereas camera self-calibration is largely a mature technology, there is always scope for additional research to enhance the models and processes employed with the many camera systems nowadays utilised in close-range photogrammetry.

  14. Posture metrology for aerospace camera in the assembly of spacecraft

    Science.gov (United States)

    Yang, ZaiHua; Yang, Song; Wan, Bile; Pan, Tingyao; Long, Changyu

    2016-01-01

    During the spacecraft assembly process, the posture of the aerospace camera to the spacecraft coordinate system needs to be measured precisely, because the posture data are very important for the earth observing. In order to measure the angles between the camera optical axis and the spacecraft coordinate system's three axes x, y, z, a measurement scheme was designed. The scheme was based on the principle of space intersection measurement with theodolites. Three thodolites were used to respectively collimate the camera axis and two faces of a base cube. Then, through aiming at each other, a measurement network was built. Finally, the posture of the camera was measured. The error analysis and measurement experiments showed that the precision can reach 6″. This method has been used in the assembly of satellite GF-2 with satisfactory results.
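Once the camera's optical axis is expressed as a direction vector in the spacecraft frame, the three angles to the x, y, z axes follow from simple dot products (the dot product with a basis vector is just the corresponding component). A hedged sketch; the axis vector below is invented for illustration:

```python
import numpy as np

def axis_angles_deg(v):
    """Angles (degrees) between direction v and the frame axes x, y, z.
    v . e_i is simply v[i], so arccos of each normalized component suffices."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(v, -1.0, 1.0)))

# Hypothetical camera optical axis, nearly aligned with spacecraft z:
optical_axis = [0.01, -0.02, 0.99975]
ax, ay, az = axis_angles_deg(optical_axis)  # ~89.4, ~91.1, ~1.3 degrees
```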

  15. Application of infrared camera to bituminous concrete pavements: measuring vehicle

    Science.gov (United States)

    Janků, Michal; Stryk, Josef

    2017-09-01

    Infrared thermography (IR) has been used for decades in certain fields. However, the technological level of measuring devices has not been sufficient for some applications. Over recent years, good-quality thermal cameras with high resolution and very high thermal sensitivity have appeared on the market. This development in measuring technology has allowed infrared thermography to be used in new fields and by a larger number of users. This article describes research in progress at the Transport Research Centre focused on the use of infrared thermography for the diagnostics of bituminous road pavements. A measuring vehicle, equipped with a thermal camera, a digital camera, and a GPS sensor, was designed for pavement diagnostics. New, highly sensitive thermal cameras make it possible to measure very small temperature differences from a moving vehicle. This study shows the potential of high-speed inspection without lane closures using IR thermography.

  16. A practical block detector for a depth encoding PET camera

    International Nuclear Information System (INIS)

    Rogers, J.G.; Moisan, C.; Hoskinson, E.M.

    1995-10-01

    The depth-of-interaction effect in block detectors degrades the image resolution in commercial PET cameras and impedes the natural evolution of smaller, less expensive cameras. A method for correcting the measured position of each detected gamma ray by measuring its depth-of-interaction was tested and found to recover 38% of the lost resolution in a table-top 50 cm diameter camera. To obtain the desired depth sensitivity, standard commercial detectors were modified by a simple and practical process, which is suitable for mass production of the detectors. The impact of the detector modifications on central image resolution and on the ability of the camera to correct for object scatter was also measured. (authors)

  17. Adaptive control of camera position for stereo vision

    Science.gov (United States)

    Crisman, Jill D.; Cleary, Michael E.

    1994-03-01

    A major problem in using two-camera stereo machine vision to perform real-world tasks, such as visual object tracking, is deciding where to position the cameras. Humans accomplish the analogous task by positioning their heads and eyes for optimal stereo effects. This paper describes recent work toward developing automated control strategies for camera motion in stereo machine vision systems for mobile robot navigation. Our goal is to achieve fast, reliable pursuit of a target while avoiding obstacles. Our strategy results in smooth, stable camera motion despite robot and target motion. Our algorithm has been shown to be successful at navigating a mobile robot, mediating visual target tracking and ultrasonic obstacle detection. The architecture, hardware, and simulation results are discussed.

  18. Quality control of plane and tomographic gamma cameras

    International Nuclear Information System (INIS)

    Moretti, J.L.; Roussi, A.

    1993-01-01

    In this article, the authors present different methods of gamma camera quality control covering uniformity, spatial resolution, spatial linearity, sensitivity, energy resolution, counting-rate performance, and SPECT parameters. The authors refer mainly to NEMA standards. 14 figs., 8 tabs

  19. Real Time Indoor Robot Localization Using a Stationary Fisheye Camera

    OpenAIRE

    Delibasis, Konstantinos; Plagianakos, Vasilios; Maglogiannis, Ilias

    2013-01-01

    Part 7: Intelligent Signal and Image Processing; International audience; A core problem in robotics is the localization of a mobile robot (determination of the location or pose) in its environment, since the robot’s behavior depends on its position. In this work, we propose the use of a stationary fisheye camera for real time robot localization in indoor environments. We employ an image formation model for the fisheye camera, which is used for accelerating the segmentation of the robot’s top ...

  20. CAMERA-BASED SOFTWARE IN REHABILITATION/THERAPY INTERVENTION

    DEFF Research Database (Denmark)

    Brooks, Anthony Lewis

    2014-01-01

    Use of an affordable, easily adaptable, ‘non-specific camera-based software’ that is rarely used in the field of rehabilitation is reported in a study with 91 participants over the duration of six workshop sessions. ‘Non-specific camera-based software’ refers to software that is not dependent......, and accessible software EyeCon is a potent and significant tool in the field of rehabilitation/therapy and warrants wider exploration....

  1. Triggered streak and framing rotating-mirror cameras

    International Nuclear Information System (INIS)

    Huston, A.E.; Tabrar, A.

    1975-01-01

    A pulse motor has been developed which enables a mirror to be rotated to speeds in excess of 20,000 rpm within 10⁻⁴ s. High-speed cameras of both streak and framing type have been assembled which incorporate this mirror drive, giving streak writing speeds up to 2,000 m s⁻¹ and framing speeds up to 500,000 frames s⁻¹, in each case with the capability of triggering the camera from the event under investigation. (author)

  2. Fusion neutron damage to a charge coupled device camera

    OpenAIRE

    Amaden, Christopher Dean

    1997-01-01

    Approved for public release; distribution is unlimited. A charge coupled device (CCD) camera's performance has been degraded by damage produced by 14 MeV neutrons (n) from the Rotating Target Neutron Source. High energy neutrons produce atomic dislocations in doped silicon electronics. This thesis explores changes in Dark Current (J), Charge Transfer Inefficiency (CTI), and Contrast Transfer Function (CTF) as measures of neutron damage. The camera was irradiated to a fluence, Phi, of 6.60 x ...

  3. Openmv: A Python powered, extensible machine vision camera

    OpenAIRE

    Abdelkader, Ibrahim; El-Sonbaty, Yasser; El-Habrouk, Mohamed

    2017-01-01

    Advances in semiconductor manufacturing processes and large scale integration keep pushing demanding applications further away from centralized processing, and closer to the edges of the network (i.e. Edge Computing). It has become possible to perform complex in-network image processing using low-power embedded smart cameras, enabling a multitude of new collaborative image processing applications. This paper introduces OpenMV, a new low-power smart camera that lends itself naturally to wirele...

  4. Towards Interaction Around Unmodified Camera-equipped Mobile Devices

    OpenAIRE

    Grubert, Jens; Ofek, Eyal; Pahud, Michel; Kranz, Matthias; Schmalstieg, Dieter

    2017-01-01

    Around-device interaction promises to extend the input space of mobile and wearable devices beyond the common but restricted touchscreen. So far, most around-device interaction approaches rely on instrumenting the device or the environment with additional sensors. We believe that ordinary cameras, specifically the user-facing cameras integrated in most mobile devices today, are not yet used to their full potential. To this end, we present a novel approach for...

  5. Dichromatic Gray Pixel for Camera-agnostic Color Constancy

    OpenAIRE

    Qian, Yanlin; Chen, Ke; Nikkanen, Jarno; Kämäräinen, Joni-Kristian; Matas, Jiri

    2018-01-01

    We propose a novel statistical color constancy method, especially suitable for the Camera-agnostic Color Constancy, i.e. the scenario where nothing is known a priori about the capturing devices. The method, called Dichromatic Gray Pixel, or DGP, relies on a novel gray pixel detection algorithm derived using the Dichromatic Reflection Model. DGP is suitable for camera-agnostic color constancy since varying devices are set to make achromatic pixels look gray under standard neutral illumination....

  6. A trajectory observer for camera-based underwater motion measurements

    DEFF Research Database (Denmark)

    Berg, Tor; Jouffroy, Jerome; Johansen, Vegar

    This work deals with the issue of estimating the trajectory of a vehicle or object moving underwater based on camera measurements. The proposed approach consists of a diffusion-based trajectory observer (Jouffroy and Opderbecke, 2004) processing whole segments of a trajectory at a time. Additionally, the observer contains a Tikhonov regularizer for smoothing the estimates. Then, a method for including the camera measurements in an appropriate manner is proposed....

  7. Multi-Angle Snowflake Camera Value-Added Product

    Energy Technology Data Exchange (ETDEWEB)

    Shkurko, Konstantin [Univ. of Utah, Salt Lake City, UT (United States); Garrett, T. [Univ. of Utah, Salt Lake City, UT (United States); Gaustad, K [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-12-01

    The Multi-Angle Snowflake Camera (MASC) addresses a need for high-resolution multi-angle imaging of hydrometeors in freefall with simultaneous measurement of fallspeed. As illustrated in Figure 1, the MASC consists of three cameras, separated by 36°, each pointing at an identical focal point approximately 10 cm away. Located immediately above each camera, a light aims directly at the center of depth of field for its corresponding camera. The focal point at which the cameras are aimed lies within a ring through which hydrometeors fall. The ring houses a system of near-infrared emitter-detector pairs, arranged in two arrays separated vertically by 32 mm. When hydrometeors pass through the lower array, they simultaneously trigger all cameras and lights. Fallspeed is calculated from the time it takes to traverse the distance between the upper and lower triggering arrays. The trigger electronics filter out ambient light fluctuations associated with varying sunlight and shadows. The microprocessor onboard the MASC controls the camera system and communicates with the personal computer (PC). The image data is sent via FireWire 800 line, and fallspeed (and camera control) is sent via a Universal Serial Bus (USB) line that relies on RS232-over-USB serial conversion. See Table 1 for specific details on the MASC located at the Oliktok Point Mobile Facility on the North Slope of Alaska. The value-added product (VAP) detailed in this documentation analyzes the raw data (Section 2.0) using Python: images rely on OpenCV image processing library and derived aggregated statistics rely on some clever averaging. See Sections 4.1 and 4.2 for more details on what variables are computed.
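The fallspeed computation described above, the fixed separation between the two trigger arrays divided by the traversal time, can be sketched as follows. Only the 32 mm array separation comes from the text; the timestamps and function name are hypothetical:

```python
# Vertical separation between the MASC's two near-IR trigger arrays (from the text):
ARRAY_SEPARATION_M = 0.032  # 32 mm

def fallspeed_m_per_s(t_upper_s, t_lower_s):
    """Fallspeed (m/s) from the times at which a hydrometeor crosses
    the upper and then the lower trigger array."""
    dt = t_lower_s - t_upper_s
    if dt <= 0:
        raise ValueError("lower-array crossing must follow the upper one")
    return ARRAY_SEPARATION_M / dt

# A hydrometeor taking 32 ms to traverse the gap falls at 1 m/s:
v = fallspeed_m_per_s(0.000, 0.032)
```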

  8. Wide-angle infrared camera for industry and medicine.

    Science.gov (United States)

    Sundstrom, E

    1968-09-01

    With the introduction of IR detectors that exhibit short time constants, a fast-scan IR camera became feasible. This paper describes such a camera, outlines some of its possible uses, and discusses the results obtained in practical applications. Included in the review of nondestructive testing applications are a casting ladle, heating pipe, oil burning furnace, and building insulation. An example of breast cancer detection is included.

  9. Electronically shuttered camera system for the acquisition of precise images

    Science.gov (United States)

    Struck, Jacob K.

    1992-08-01

    An accuracy requirement of +/-0.011 degrees in the declination measurement of a remotely imaged munition cannot be satisfied using a conventional camera. A camera, error characterization, and error correction techniques are designed and developed that satisfy the accuracy requirement. The images are acquired and processed using an ARDEC developed data acquisition and image processing system. Based on internal testing, the developed system is expected to meet design goals during a formal certification process.

  10. Optical design of camera optics for mobile phones

    Science.gov (United States)

    Steinich, Thomas; Blahnik, Vladan

    2012-03-01

    At present, compact camera modules are included in many mobile electronic devices such as mobile phones, personal digital assistants or tablet computers. They have various uses, from snapshots of everyday situations to capturing barcodes for product information. This paper presents an overview of the key design challenges and some typical solutions. A lens design for a mobile phone camera is compared to a downscaled 35 mm format lens to demonstrate the main differences in optical design. Particular attention is given to scaling effects.
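One scaling effect can be made concrete: geometrically shrinking a 35 mm format lens preserves its f-number and field of view, but the diffraction-limited (Airy) spot diameter, d ≈ 2.44·λ·N, depends only on wavelength and f-number, so it stays fixed in absolute size while the pixels shrink. A back-of-the-envelope sketch with assumed, illustrative values (f/2 lens, 6 µm vs 1.4 µm pixels):

```python
WAVELENGTH_M = 550e-9  # green light, mid-visible

def airy_diameter_um(f_number):
    """Diffraction-limited spot diameter in micrometres: 2.44 * lambda * N."""
    return 2.44 * WAVELENGTH_M * f_number * 1e6

spot = airy_diameter_um(2.0)   # ~2.68 um, identical for both formats
frac_35mm = spot / 6.0         # fraction of a 6 um (35 mm format) pixel
frac_phone = spot / 1.4        # fraction of a 1.4 um (phone sensor) pixel
```

For these assumed numbers the spot is well under one pixel on the 35 mm sensor but larger than a pixel on the phone sensor, which is why diffraction is a first-order constraint in mobile lens design.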

  11. Camera traps as sensor networks for monitoring animal communities

    OpenAIRE

    Kays, R.W.; Kranstauber, B.; Jansen, P.A.; Carbone, C.; Rowcliffe, M.; Fountain, T.; Tilak, S.

    2009-01-01

    Studying animal movement and distribution is of critical importance to addressing environmental challenges including invasive species, infectious diseases, climate and land-use change. Motion sensitive camera traps offer a visual sensor to record the presence of a species at a location, recording their movement in the Eulerian sense. Modern digital camera traps that record video present new analytical opportunities, but also new data management challenges. This paper describes our experience ...

  12. Underwater television camera for monitoring inner side of pressure vessel

    International Nuclear Information System (INIS)

    Takayama, Kazuhiko.

    1997-01-01

    An underwater television support device, equipped with a rotatable and vertically movable underwater television camera, and a controlling device, which monitors the images of the inside of the reactor core photographed by the camera and controls the positions of the camera and the underwater light, are disposed on the upper lattice plate of a reactor pressure vessel. The two are electrically connected by a cable, allowing rapid observation of the inside of the reactor core by the underwater television camera. Reproducibility is extremely good because the camera position and the image information are efficiently managed during inspection and observation. As a result, the number of steps in a periodical inspection can be reduced, shortening its duration. Since fuel assemblies need not be withdrawn over a wide region of the core and the device can be used with the fuel assemblies left in place in the reactor, it is suitable for the inspection of nuclear instrumentation detectors. (N.H.)

  13. New reconstruction method for the advanced compton camera

    International Nuclear Information System (INIS)

    Kurihara, Takashi; Ogawa, Koichi

    2007-01-01

    Conventional gamma cameras employ a mechanical collimator, which reduces the number of photons detected by such cameras. To address this issue, the Compton camera has been proposed, which improves the efficiency of data acquisition by employing electronic collimation. Among Compton cameras, the advanced Compton camera (ACC) proposed by Tanimori et al. can restrict the source locations with the help of the recoil electrons emitted in the process of Compton scattering. However, the reconstruction methods employed in conventional Compton cameras are inefficient at reconstructing images from data acquired with the ACC. In this paper, we propose a new reconstruction method designed specifically for the ACC. This method, an improved version of the source space tree algorithm (SSTA), permits the source distribution to be reconstructed accurately and efficiently. The SSTA is one of the reconstruction methods for conventional Compton cameras proposed by Rohe et al. Our proposed algorithm employs a set of lines defined at equiangular intervals in the reconstruction region, together with specified voxels of interest that include search points located on these predefined lines at equally spaced intervals. The validity of our method is demonstrated by simulations involving the reconstruction of a point source and a disk source. (author)

  14. Stereo Calibration and Rectification for Omnidirectional Multi-Camera Systems

    Directory of Open Access Journals (Sweden)

    Yanchang Wang

    2012-10-01

    Full Text Available Stereo vision has been studied for decades as a fundamental problem in the field of computer vision. In recent years, computer vision and image processing with a large field of view, especially using omnidirectional vision and panoramic images, have been receiving increasing attention. An important problem for stereo vision is calibration. Although various kinds of calibration methods for omnidirectional cameras have been proposed, most of them are limited to calibrating catadioptric cameras or fish-eye cameras and cannot be applied directly to multi-camera systems. In this work, we propose an easy calibration method with closed-form initialization and iterative optimization for omnidirectional multi-camera systems. The method only requires image pairs of a 2D target plane in a few different views. A method based on the spherical camera model is also proposed for rectifying omnidirectional stereo pairs. Using real data captured by a Ladybug3, we carry out experiments including stereo calibration, rectification and 3D reconstruction. Statistical analyses and comparisons of the experimental results are also presented. As the experimental results show, the calibration results are precise and the effect of rectification is promising.
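
    A spherical camera model of the kind used for such rectification can be sketched assuming equirectangular image coordinates; the authors' actual parameterization may differ, and the helper name is ours.

```python
import numpy as np

def equirect_to_ray(u, v, width, height):
    # Map an equirectangular pixel (u, v) to a unit viewing ray:
    # u spans longitude [-pi, pi), v spans latitude [pi/2, -pi/2].
    lon = (u / width - 0.5) * 2.0 * np.pi
    lat = (0.5 - v / height) * np.pi
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])
```

Once every pixel maps to a ray on the unit sphere, stereo rectification reduces to rotating the two spheres so their epipoles align.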

  15. High-Resolution Mars Camera Test Image of Moon (Infrared)

    Science.gov (United States)

    2005-01-01

    This crescent view of Earth's Moon in infrared wavelengths comes from a camera test by NASA's Mars Reconnaissance Orbiter spacecraft on its way to Mars. The mission's High Resolution Imaging Science Experiment camera took the image on Sept. 8, 2005, while at a distance of about 10 million kilometers (6 million miles) from the Moon. The dark feature on the right is Mare Crisium. From that distance, the Moon would appear as a star-like point of light to the unaided eye. The test verified the camera's focusing capability and provided an opportunity for calibration. The spacecraft's Context Camera and Optical Navigation Camera also performed as expected during the test. The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across.

  16. Distributed embedded smart cameras architectures, design and applications

    CERN Document Server

    Velipasalar, Senem

    2014-01-01

    This publication addresses distributed embedded smart cameras: cameras that perform onboard analysis and collaborate with other cameras. The book provides the material required to better understand the architectural design challenges of embedded smart camera systems, the hardware/software ecosystem, and the design approach for, and applications of, distributed smart cameras, together with state-of-the-art algorithms. The authors concentrate on the architecture, hardware/software design, and realization of smart camera networks from applications to architectures, in particular in the embedded and mobile domains.
    • Examines energy issues related to wireless communication, such as decreasing energy consumption to increase battery life
    • Discusses processing large volumes of video data in an embedded environment in real time
    • Covers design of realistic applications of distributed and embedded smart...

  17. A novel simultaneous streak and framing camera without principle errors

    Science.gov (United States)

    Jingzhen, L.; Fengshan, S.; Ningwen, L.; Xiangdong, G.; Bin, H.; Qingyang, W.; Hongyi, C.; Yi, C.; Xiaowei, L.

    2018-02-01

    A novel simultaneous streak and framing camera with continuous access has been developed; complete records of this kind are important for the exact interpretation and precise evaluation of many detonation events and shockwave phenomena. The camera, with a maximum imaging frequency of 2 × 10^6 fps and a maximum scanning velocity of 16.3 mm/μs, has fine imaging properties: an eigen resolution of over 40 lp/mm in the temporal direction and over 60 lp/mm in the spatial direction, a framing-frequency principle error of zero for framing records, and a maximum time resolving power of 8 ns with a scanning velocity nonuniformity of 0.136%-0.277% for streak records. Test data have verified the performance of the camera quantitatively. This camera, which simultaneously acquires frames and streaks that are parallax-free and share an identical time base, is characterized by a plane optical system at oblique incidence (as opposed to a spatial system), an innovative camera obscura without principle errors, and a high-velocity motor-driven beryllium-like rotating mirror made of high-strength aluminum alloy with a cellular lateral structure. Experiments demonstrate that the camera is very useful and reliable for taking high-quality pictures of detonation events.

  18. A cooperative control algorithm for camera based observational systems.

    Energy Technology Data Exchange (ETDEWEB)

    Young, Joseph G.

    2012-01-01

    Over the last several years, there has been considerable growth in camera-based observation systems for a variety of safety, scientific, and recreational applications. In order to improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras. Quite often, there are economic or physical restrictions that prevent us from adding cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. To accomplish this goal, we present a new cooperative control algorithm for a camera-based observational system. Specifically, we present a receding horizon controller in which we model the underlying optimal control problem as a mixed integer linear program. The benefit of this design is that we can coordinate the actions of the cameras while simultaneously respecting each camera's kinematics. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but we also use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to intelligently observe the entire region of interest in an effective and thorough manner.
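
    The interplay between prediction and camera tasking can be sketched with a constant-velocity Kalman predict step and a greedy, uncertainty-driven target choice. The greedy rule is a stand-in for the paper's mixed integer linear program, and the model and noise values are assumptions.

```python
import numpy as np

def kf_predict(x, P, dt=1.0, q=0.1):
    # Kalman predict step for a constant-velocity model,
    # state = [px, py, vx, vy].
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    Q = q * np.eye(4)
    return F @ x, F @ P @ F.T + Q

def pick_target(tracks):
    # Greedy stand-in for the MILP: task the camera with the track whose
    # predicted position uncertainty (trace of the position block of P)
    # is largest, so neglected outliers are revisited periodically.
    preds = [kf_predict(x, P) for x, P in tracks]
    return max(range(len(preds)),
               key=lambda i: np.trace(preds[i][1][:2, :2]))
```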

  19. Inspecting rapidly moving surfaces for small defects using CNN cameras

    Science.gov (United States)

    Blug, Andreas; Carl, Daniel; Höfler, Heinrich

    2013-04-01

    A continuous increase in production speed and manufacturing precision raises a demand for the automated detection of small image features on rapidly moving surfaces. An example is wire drawing, where kilometers of cylindrical metal surfaces moving at 10 m/s have to be inspected in real time for defects such as scratches, dents, grooves, or chatter marks with a lateral size of 100 μm. Up to now, complex eddy current systems have been used for quality control instead of line cameras, because the ratio between lateral feature size and surface speed is limited by the data transport between camera and computer. This bottleneck is avoided by "cellular neural network" (CNN) cameras, which enable image processing directly on the camera chip. This article reports results achieved with a demonstrator based on this novel analogue camera-computer system. The results show that the computational speed and accuracy of the analogue computer system are sufficient to detect and discriminate the different types of defects. Area images with 176 x 144 pixels are acquired and evaluated in real time at frame rates of 4 to 10 kHz, depending on the number of defects to be detected. These frame rates correspond to equivalent line rates of 360 to 880 kHz, far beyond what available line cameras offer. Using the relation between lateral feature size and surface speed as a figure of merit, the CNN-based system outperforms conventional image processing systems by an order of magnitude.

  20. Poor Man's Virtual Camera: Real-Time Simultaneous Matting and Camera Pose Estimation.

    Science.gov (United States)

    Szentandrasi, Istvan; Dubska, Marketa; Zacharias, Michal; Herout, Adam

    2016-03-18

    Today's film and advertisement production heavily uses computer graphics combined with live actors via chroma keying. The matchmoving process typically requires considerable manual effort. Semi-automatic matchmoving tools exist as well, but they still work offline and require manual check-up and correction. In this article, we propose an instant matchmoving solution for green screen. It uses a recent technique of planar uniform marker fields. Our technique can be used in indie and professional filmmaking as a cheap and ultramobile virtual camera, and for shot prototyping and storyboard creation. The matchmoving technique, based on marker fields in shades of green, is very computationally efficient: we developed, and present in the article, a mobile application running at 33 FPS. Our technique is thus available to anyone with a smartphone at low cost and with easy setup, opening space for new levels of filmmakers' creative expression.

  1. Volumetric particle image velocimetry with a single plenoptic camera

    Science.gov (United States)

    Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.

    2015-11-01

    A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots, and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single-camera plenoptic PIV to produce a 3D/3C vector field, where it was found that displacements could be measured to approximately 0.2 voxel accuracy in the lateral direction and 1 voxel in the depth direction over a 300 × 200 × 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera based on a 16-megapixel interline CCD camera, a 289 × 193 array of microlenses, and a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low Reynolds number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system. Overall, single camera
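
    The multiplicative update at the core of MART can be sketched for a generic ray-weight matrix. This is a toy version under assumed shapes; the paper's implementation is specialized to plenoptic ray geometry.

```python
import numpy as np

def mart(A, y, n_iter=50, relax=1.0, eps=1e-12):
    # Multiplicative ART: for each ray i with measurement y[i],
    #   x_j <- x_j * (y[i] / (A[i] . x)) ** (relax * A[i, j])
    # A is the (rays x voxels) weight matrix; x stays non-negative
    # because every update is multiplicative.
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            proj = A[i] @ x
            if proj > eps and y[i] > eps:
                x *= (y[i] / proj) ** (relax * A[i])
    return x
```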

  2. Accuracy Potential and Applications of MIDAS Aerial Oblique Camera System

    Science.gov (United States)

    Madani, M.

    2012-07-01

    Airborne oblique cameras such as the Fairchild T-3A were initially used for military reconnaissance in the 1930s. A modern professional digital oblique camera such as MIDAS (Multi-camera Integrated Digital Acquisition System) is used to generate lifelike three-dimensional views for users for visualization, GIS applications, architectural modeling, city modeling, games, simulators, etc. Oblique imagery provides the best vantage for assessing and reviewing changes to the local government tax base and property valuation, and supports better, more timely decisions in buying and selling residential and commercial property. Oblique imagery is also used for infrastructure monitoring, ensuring safe operation of transportation, utilities, and facilities. Sanborn Mapping Company acquired one MIDAS from TrackAir in 2011. This system consists of four tilted (45 degrees) cameras and one vertical camera connected to a dedicated data acquisition computer system. The five digital cameras are based on the Canon EOS 1Ds Mark III with Zeiss lenses. The CCD size is 5,616 by 3,744 (21 Mpixels) with a pixel size of 6.4 microns. Multiple flights using different camera configurations (nadir/oblique (28 mm/50 mm) and (50 mm/50 mm)) were flown over downtown Colorado Springs, Colorado. Boresight flights for the 28 mm nadir camera were flown at 600 m and 1,200 m, and for the 50 mm nadir camera at 750 m and 1,500 m. Cameras were calibrated by using a 3D cage and multiple convergent images utilizing the Australis model. In this paper, the MIDAS system is described, a number of real data sets collected during the aforementioned flights are presented together with their associated flight configurations, the data processing workflow, system calibration and quality control workflows are highlighted, and the achievable accuracy is presented in some detail. This study revealed that an accuracy of about 1 to 1.5 GSD (Ground Sample Distance) for planimetry and about 2 to 2.5 GSD for vertical can be achieved.
Remaining systematic
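
    The quoted accuracies are multiples of the ground sample distance, which for a nadir frame camera follows from flying height, pixel pitch, and focal length. The formula is standard; the helper name and defaults (taken from the abstract's 6.4 μm pixels and 28 mm lens) are ours.

```python
def gsd(height_m, pixel_pitch_m=6.4e-6, focal_length_m=0.028):
    # Ground sample distance for a nadir frame camera:
    # GSD = flying height * pixel pitch / focal length
    return height_m * pixel_pitch_m / focal_length_m
```

At the 600 m boresight altitude with the 28 mm camera this gives a GSD of roughly 0.14 m, so the stated 1 to 1.5 GSD planimetric accuracy corresponds to about 14-21 cm.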

  3. ACCURACY POTENTIAL AND APPLICATIONS OF MIDAS AERIAL OBLIQUE CAMERA SYSTEM

    Directory of Open Access Journals (Sweden)

    M. Madani

    2012-07-01

    Full Text Available Airborne oblique cameras such as the Fairchild T-3A were initially used for military reconnaissance in the 1930s. A modern professional digital oblique camera such as MIDAS (Multi-camera Integrated Digital Acquisition System) is used to generate lifelike three-dimensional views for users for visualization, GIS applications, architectural modeling, city modeling, games, simulators, etc. Oblique imagery provides the best vantage for assessing and reviewing changes to the local government tax base and property valuation, and supports better, more timely decisions in buying and selling residential and commercial property. Oblique imagery is also used for infrastructure monitoring, ensuring safe operation of transportation, utilities, and facilities. Sanborn Mapping Company acquired one MIDAS from TrackAir in 2011. This system consists of four tilted (45 degrees) cameras and one vertical camera connected to a dedicated data acquisition computer system. The five digital cameras are based on the Canon EOS 1Ds Mark III with Zeiss lenses. The CCD size is 5,616 by 3,744 (21 Mpixels) with a pixel size of 6.4 microns. Multiple flights using different camera configurations (nadir/oblique (28 mm/50 mm) and (50 mm/50 mm)) were flown over downtown Colorado Springs, Colorado. Boresight flights for the 28 mm nadir camera were flown at 600 m and 1,200 m, and for the 50 mm nadir camera at 750 m and 1,500 m. Cameras were calibrated by using a 3D cage and multiple convergent images utilizing the Australis model. In this paper, the MIDAS system is described, a number of real data sets collected during the aforementioned flights are presented together with their associated flight configurations, the data processing workflow, system calibration and quality control workflows are highlighted, and the achievable accuracy is presented in some detail. This study revealed that an accuracy of about 1 to 1.5 GSD (Ground Sample Distance) for planimetry and about 2 to 2.5 GSD for vertical can be achieved.
Remaining

  4. Can camera traps monitor Komodo dragons, a large ectothermic predator?

    Science.gov (United States)

    Ariefiandy, Achmad; Purwandana, Deni; Seno, Aganto; Ciofi, Claudio; Jessop, Tim S

    2013-01-01

    Camera trapping has greatly enhanced population monitoring of often cryptic and low-abundance apex carnivores. The effectiveness of passive infrared camera trapping, and ultimately population monitoring, relies on temperature-mediated differences between the animal and its ambient environment to ensure good camera detection. In ectothermic predators such as large varanid lizards, this criterion is presumed less certain. Here we evaluated the effectiveness of camera trapping to potentially monitor the population status of the Komodo dragon (Varanus komodoensis), an apex predator, using site occupancy approaches. We compared site-specific estimates of site occupancy and detection derived using camera traps and cage traps at 181 trapping locations established across six sites on four islands within Komodo National Park, Eastern Indonesia. Detection and site occupancy at each site were estimated using eight competing models that considered site-specific variation in occupancy (ψ) and varied detection probabilities (p) according to detection method, site and survey number, using a single-season site occupancy modelling approach. The most parsimonious model [ψ (site), p (site survey); ω = 0.74] suggested that site occupancy estimates differed among sites. Detection probability varied as an interaction between site and survey number. Our results indicate that overall camera traps produced estimates of detection and site occupancy similar to those from cage traps, irrespective of being paired, or unpaired, with cage traps. Whilst one site showed some evidence that detection was affected by trapping method, detection there was too low to produce an accurate occupancy estimate. Overall, as camera trapping is logistically more feasible, it may provide, with further validation, an alternative method for evaluating long-term site occupancy patterns in Komodo dragons, and potentially other large reptiles, aiding conservation of this species.
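
    The single-season occupancy likelihood underlying these models can be sketched for one site's detection history. This is the textbook form with constant ψ and p, not the authors' eight-model fitting code.

```python
import numpy as np

def site_likelihood(history, psi, p):
    # Single-season occupancy model: a detection history such as [1, 0, 1]
    # arises either from an occupied site (probability psi) with per-survey
    # detection probability p, or, if the species was never detected,
    # possibly from an unoccupied site (probability 1 - psi).
    h = np.asarray(history)
    occupied = psi * np.prod(p ** h * (1 - p) ** (1 - h))
    unoccupied = (1 - psi) if not h.any() else 0.0
    return occupied + unoccupied
```

Maximizing the product of such terms over all sites yields the ψ and p estimates that the competing models compare.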

  5. The upgrade of the H.E.S.S. cameras

    Science.gov (United States)

    Giavitto, Gianluca; Ashton, Terry; Balzer, Arnim; Berge, David; Brun, Francois; Chaminade, Thomas; Delagnes, Eric; Fontaine, Gerard; Füßling, Matthias; Giebels, Berrie; Glicenstein, Jean-Francois; Gräber, Tobias; Hinton, Jim; Jahnke, Albert; Klepser, Stefan; Kossatz, Marko; Kretzschmann, Axel; Lefranc, Valentin; Leich, Holger; Lüdecke, Hartmut; Lypova, Iryna; Manigot, Pascal; Marandon, Vincent; Moulin, Emmanuel; Naurois, Mathieu de; Nayman, Patrick; Ohm, Stefan; Penno, Marek; Ross, Duncan; Salek, David; Schade, Markus; Schwab, Thomas; Simoni, Rachel; Stegmann, Christian; Steppa, Constantin; Thornhill, Julian; Toussnel, Francois

    2017-12-01

    The High Energy Stereoscopic System (HESS) is an array of imaging atmospheric Cherenkov telescopes (IACTs) located in the Khomas highland in Namibia. It was built to detect very high energy (VHE, > 100 GeV) cosmic gamma rays. Since 2003, HESS has discovered the majority of the known astrophysical VHE gamma-ray sources, opening a new observational window on the extreme non-thermal processes at work in our universe. HESS consists of four 12-m diameter Cherenkov telescopes (CT1-4), which started data taking in 2002, and a larger 28-m telescope (CT5), built in 2012, which lowers the energy threshold of the array to 30 GeV. The cameras of CT1-4 are currently undergoing an extensive upgrade, with the goals of reducing their failure rate, reducing their readout dead time and improving the overall performance of the array. The entire camera electronics has been renewed from the ground up, as have the power, ventilation and pneumatics systems, and the control and data acquisition software. Only the PMTs and their HV supplies have been kept from the original cameras. Novel technical solutions have been introduced, which will find their way into some of the Cherenkov cameras foreseen for the next-generation Cherenkov Telescope Array (CTA) observatory. In particular, the camera readout system is the first large-scale system based on the analog memory chip NECTAr, which was designed for CTA cameras. The camera control subsystems and the control software framework also pursue an innovative design, exploiting cutting-edge hardware and software solutions which excel in performance, robustness and flexibility. The CT1 camera was upgraded in July 2015 and is currently taking data; CT2-4 were upgraded in fall 2016. Together they will assure continuous operation of HESS at its full sensitivity until, and possibly beyond, the advent of CTA. This contribution describes the design, the testing and the in-lab and on-site performance of all components of the newly upgraded HESS

  6. Radiometric calibration of digital cameras using neural networks

    Science.gov (United States)

    Grunwald, Michael; Laube, Pascal; Schall, Martin; Umlauf, Georg; Franz, Matthias O.

    2017-08-01

    Digital cameras are used in a large variety of scientific and industrial applications. For most applications, the acquired data should represent the real light intensity per pixel as accurately as possible. However, digital cameras are subject to physical, electronic and optical effects that lead to errors and noise in the raw image. Temperature-dependent dark current, read noise, optical vignetting and differing sensitivities of individual pixels are examples of such effects. The purpose of radiometric calibration is to improve the quality of the resulting images by reducing the influence of the various types of errors on the measured data, and thus to improve the quality of the overall application. In this context, we present a specialized neural network architecture for radiometric calibration of digital cameras. Neural networks are used to learn a temperature- and exposure-dependent mapping from observed gray-scale values to true light intensities for each pixel. In contrast to classical flat-fielding, neural networks have the potential to model nonlinear mappings, which allows for accurately capturing the temperature dependence of the dark current and for modeling cameras with nonlinear sensitivities. Both scenarios are highly relevant in industrial applications. The experimental comparison of our network approach to classical flat-fielding shows a consistently higher reconstruction quality, also for linear cameras. In addition, the calibration is faster than previous machine learning approaches based on Gaussian processes.
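
    The classical flat-fielding baseline and the shape of the learned replacement can be sketched side by side. The network shown is an untrained forward pass with assumed layer sizes, not the paper's architecture.

```python
import numpy as np

def flat_field(raw, dark, flat):
    # Classical flat-fielding: subtract the dark frame, then divide by the
    # normalized per-pixel gain estimated from a uniformly lit flat frame.
    gain = (flat - dark) / np.mean(flat - dark)
    return (raw - dark) / gain

def mlp_forward(features, W1, b1, W2, b2):
    # One-hidden-layer network mapping per-pixel features
    # (gray value, sensor temperature, exposure time) to an
    # estimated true light intensity; weights would be learned.
    h = np.tanh(features @ W1 + b1)
    return h @ W2 + b2
```

Flat-fielding is a fixed affine correction per pixel; the network can additionally absorb temperature- and exposure-dependent nonlinearities.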

  7. Multispectral imaging using a stereo camera: concept, design and assessment

    Directory of Open Access Journals (Sweden)

    Mansouri Alamin

    2011-01-01

    Full Text Available This paper proposes a one-shot six-channel multispectral color image acquisition system using a stereo camera and a pair of optical filters. The best pair of filters is selected from among readily available filters such that they modify the sensitivities of the two cameras to produce optimal estimation of spectral reflectance and/or color; the two filters are then placed in front of the two lenses of the stereo camera. The two images acquired from the stereo camera are registered for pixel-to-pixel correspondence. The spectral reflectance and/or color at each pixel of the scene is estimated from the corresponding camera outputs in the two images. Both simulations and experiments have shown that the proposed system performs well both spectrally and colorimetrically. Since it acquires the multispectral images in one shot, the proposed system avoids the slow and complex acquisition process and the high cost of state-of-the-art multispectral imaging systems, opening the way to widespread applications.
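
    The reflectance estimation step can be sketched as a linear least-squares estimator from six-channel responses to spectra. The matrix shapes and training scheme are assumptions for illustration, not the authors' exact method.

```python
import numpy as np

def train_estimator(responses, reflectances):
    # Least-squares mapping W from 6-channel responses C to spectra R,
    # learned from training pairs:  W = R C^T (C C^T)^{-1}.
    C, R = responses, reflectances          # (6, n) and (n_bands, n)
    return R @ C.T @ np.linalg.inv(C @ C.T)

def estimate(W, response):
    # Recover a spectrum from one 6-channel camera response.
    return W @ response
```

When the scene reflectances lie in a six-dimensional linear subspace, this estimator recovers them exactly; in practice it is a least-squares approximation.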

  8. Determination of Glomerular Filtration Rate by Using Gamma camera

    International Nuclear Information System (INIS)

    Amer, M.; Salim, D.

    2007-01-01

    Glomerular filtration rate (GFR) is a commonly accepted standard measure of renal function. It is routinely measured using tracers that are cleared exclusively by glomerular filtration. The aim of this study was to apply a new nuclear medicine technique based on direct determination of the clearance of a radioactive tracer, provided that all the uptake compartments of the tracer are included in the field of view of the gamma camera. A total of 10 men and 7 women, ranging in age from 27 to 64 years, were studied using a dual-head gamma camera. The data for clearance calculation comprise: (1) a transmission scan of part of the body using a water phantom with a uniform distribution of the radioisotope, (2) the background-corrected activity curves in the anterior and posterior views over all uptake compartments following the injection of the radioactive tracer, and (3) the activity of the radioactive tracer in two blood samples drawn during the examination. For GFR above 30 ml/min, the regression line of GFR by the simplified multiple-sample method versus GFR by the gamma camera method was not significantly different from the line of identity. The reliability of the gamma camera method was about 16%, 12% and 8% for GFR values of 30, 60 and 100 ml/min, respectively. Therefore, the reliability of the gamma camera method and of the simplified multiple-sample method for prediction of GFR were almost the same.

  9. High-Speed Smart Camera with High Resolution

    Directory of Open Access Journals (Sweden)

    J. Dubois

    2007-02-01

    Full Text Available High-speed video cameras are powerful tools for investigating, for instance, biomechanics or the movements of mechanical parts in manufacturing processes. In the past years, the use of CMOS sensors instead of CCDs has enabled the development of high-speed video cameras offering digital outputs, readout flexibility, and lower manufacturing costs. In this paper, we propose a high-speed smart camera based on a CMOS sensor with embedded processing. Two types of algorithms have been implemented. The first is a compression algorithm specific to high-speed imaging constraints; it reduces the large data flow (6.55 Gbps) and allows transfer over a serial output link (USB 2.0). The second type of algorithm is dedicated to feature extraction such as edge detection, marker extraction, image analysis, wavelet analysis, and object tracking. These image processing algorithms have been implemented in an FPGA embedded inside the camera. The implementations are low-cost in terms of hardware resources. This FPGA technology allows us to process 500 images per second at a 1280×1024 resolution in real time. The camera system is a reconfigurable platform; other image processing algorithms can be implemented.
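
    The quoted 6.55 Gbps raw data flow is reproduced by the stated resolution and frame rate if one assumes 10 bits per pixel; the bit depth is our assumption, not stated in the abstract.

```python
def raw_data_rate_gbps(width=1280, height=1024, fps=500, bits_per_pixel=10):
    # Raw sensor data flow that the on-camera compression must absorb,
    # in gigabits per second.
    return width * height * fps * bits_per_pixel / 1e9
```

At 1280 × 1024 pixels, 500 fps, and 10 bits per pixel this yields 6.5536 Gbps, matching the 6.55 Gbps figure and far exceeding USB 2.0's 480 Mbps link, hence the need for on-chip compression.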

  10. High-Speed Smart Camera with High Resolution

    Directory of Open Access Journals (Sweden)

    Mosqueron R

    2007-01-01

    Full Text Available High-speed video cameras are powerful tools for investigating, for instance, biomechanics or the movements of mechanical parts in manufacturing processes. In the past years, the use of CMOS sensors instead of CCDs has enabled the development of high-speed video cameras offering digital outputs, readout flexibility, and lower manufacturing costs. In this paper, we propose a high-speed smart camera based on a CMOS sensor with embedded processing. Two types of algorithms have been implemented. The first is a compression algorithm specific to high-speed imaging constraints; it reduces the large data flow (6.55 Gbps) and allows transfer over a serial output link (USB 2.0). The second type of algorithm is dedicated to feature extraction such as edge detection, marker extraction, image analysis, wavelet analysis, and object tracking. These image processing algorithms have been implemented in an FPGA embedded inside the camera. The implementations are low-cost in terms of hardware resources. This FPGA technology allows us to process 500 images per second at a 1280×1024 resolution in real time. The camera system is a reconfigurable platform; other image processing algorithms can be implemented.

  11. View from Above of Phoenix's Stowed Robotic Arm Camera

    Science.gov (United States)

    2008-01-01

    This artist's animation of an imaginary camera zooming in from above shows the location of the Robotic Arm Camera on NASA's Phoenix Mars Lander as it acquires an image of the scoop at the end of the arm. Located just beneath the Robotic Arm Camera lens, the scoop is folded in the stowed position, with its open end facing the Robotic Arm Camera. The last frame in the animation shows the first image taken by the Robotic Arm Camera, one day after Phoenix landed on Mars. In the center of the image is the robotic scoop the lander will use to dig into the surface, collect samples and touch water ice on Mars for the first time. The scoop is in the stowed position, awaiting deployment of the robotic arm. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  12. Spectroscopic gamma camera for use in high dose environments

    Science.gov (United States)

    Ueno, Yuichiro; Takahashi, Isao; Ishitsu, Takafumi; Tadokoro, Takahiro; Okada, Koichi; Nagumo, Yasushi; Fujishima, Yasutake; Kometani, Yutaka; Suzuki, Yasuhiko; Umegaki, Kikuo

    2016-06-01

    We developed a pinhole gamma camera to measure distributions of radioactive material contaminants and to identify radionuclides in extraordinarily high dose rate regions (1000 mSv/h). The developed gamma camera is characterized by: (1) tolerance of high dose rate environments; (2) high spatial and spectral resolution for identifying unknown contaminating sources; and (3) good usability, so that it can be carried on a robot and remotely controlled. These are achieved by using a compact pixelated detector module with CdTe semiconductors, efficient shielding, and a fine-resolution pinhole collimator. The gamma camera weighs less than 100 kg; its field of view is an 8 m square at a distance of 10 m, and its image is divided into 256 (16×16) pixels. From the laboratory test, we found the energy resolution at the 662 keV photopeak was 2.3% FWHM, which is sufficient to identify radionuclides. We found that the count rate per unit background dose rate was 220 cps·h/mSv and the maximum count rate was 300 kcps, so the maximum dose rate of an environment in which the gamma camera can be operated was calculated as 1400 mSv/h. We investigated the reactor building of Unit 1 at the Fukushima Dai-ichi Nuclear Power Plant using the gamma camera and were able to identify an unknown contaminating source in a dose rate environment as high as 659 mSv/h.
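
    The quoted maximum operating dose rate follows from dividing the saturating count rate by the camera's sensitivity per unit dose rate; the helper below simply restates that arithmetic with the abstract's figures.

```python
def max_dose_rate_msv_per_h(max_count_rate_cps=300e3,
                            sensitivity_cps_h_per_msv=220.0):
    # Highest ambient dose rate before the detector saturates:
    # dose rate = max count rate / (counts per unit dose rate).
    return max_count_rate_cps / sensitivity_cps_h_per_msv
```

With 300 kcps and 220 cps·h/mSv this gives about 1364 mSv/h, which the abstract rounds to 1400 mSv/h.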

  13. CMOS IMAGING SENSOR TECHNOLOGY FOR AERIAL MAPPING CAMERAS

    Directory of Open Access Journals (Sweden)

    K. Neumann

    2016-06-01

    Full Text Available In June 2015 Leica Geosystems launched the first large-format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the first-generation DMC was developed by Z/I Imaging. It was the first large-format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera using a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large-format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II, using the same system design with one large monolithic PAN sensor and four multispectral camera heads for R, G, B and NIR. For the first time a 391-megapixel CMOS sensor has been used as the panchromatic sensor, which is an industry record. Along with CMOS technology comes a range of technical benefits: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD-based aerial sensors.

  14. Super-Resolution in Plenoptic Cameras Using FPGAs

    Directory of Open Access Journals (Sweden)

    Joel Pérez

    2014-05-01

    Full Text Available Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras, allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes.
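
    The super-resolution principle behind the FPGA design can be illustrated in software. The sketch below is a minimal shift-and-add reconstruction in Python/NumPy, not the paper's VHDL implementation; the function name and the assumption of known integer subpixel shifts are ours.

```python
import numpy as np

def shift_and_add(low_res_images, shifts, factor):
    """Interleave low-resolution frames onto a finer grid.

    low_res_images : list of 2D arrays, all the same shape
    shifts         : per-frame integer offsets (dy, dx), expressed in
                     high-resolution pixels (0 <= offset < factor)
    factor         : super-resolution factor
    """
    h, w = low_res_images[0].shape
    high = np.zeros((h * factor, w * factor))
    counts = np.zeros_like(high)
    for img, (dy, dx) in zip(low_res_images, shifts):
        high[dy::factor, dx::factor] += img
        counts[dy::factor, dx::factor] += 1
    counts[counts == 0] = 1  # leave unobserved grid cells at zero
    return high / counts
```

When the shifts cover every subpixel phase, the high-resolution image is recovered exactly; in practice a deblurring step would follow.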

  15. Decentralized tracking of humans using a camera network

    Science.gov (United States)

    Gruenwedel, Sebastian; Jelaca, Vedran; Niño-Castañeda, Jorge Oswaldo; Van Hese, Peter; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2012-01-01

    Real-time tracking of people has many applications in computer vision, for instance in surveillance, domotics, elderly care and video conferencing, and typically requires multiple cameras. However, this problem is very challenging because of the need to deal with frequent occlusions and environmental changes. Another challenge is to develop solutions which scale well with the size of the camera network. Such solutions need to carefully restrict overall communication in the network and often involve distributed processing. In this paper we present a distributed person tracker addressing the aforementioned issues. Real-time processing is achieved by distributing tasks between the cameras and a fusion node. The latter fuses only high level data based on low-bandwidth input streams from the cameras. This is achieved by performing tracking first on the image plane of each camera and then sending only metadata to a local fusion node. We designed the proposed system for a low communication load and for robustness. We evaluate the performance of the tracker in meeting scenarios where persons are often occluded by other persons and/or furniture. We present experimental results which show that our tracking approach is accurate even in cases of severe occlusions in some of the views.
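
    As an illustration of the kind of high-level fusion such an architecture performs, the sketch below combines per-camera ground-plane position estimates at a fusion node by confidence weighting. This is a hypothetical minimal example, not the authors' tracker; the weighting scheme is our assumption.

```python
import numpy as np

def fuse_estimates(positions, weights):
    """Weighted fusion of per-camera ground-plane position estimates.

    positions : (n_cameras, 2) array of (x, y) estimates
    weights   : (n_cameras,) non-negative confidences reported by the cameras
    """
    p = np.asarray(positions, float)
    w = np.asarray(weights, float)
    return (w[:, None] * p).sum(axis=0) / w.sum()
```

Only these few numbers per tracked person cross the network, which is what keeps the communication load low as cameras are added.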

  16. Cheetah: A high frame rate, high resolution SWIR image camera

    Science.gov (United States)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high resolution, high frame rate InGaAs based image sensor and associated camera has been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640x512 pixel frames per second. The FPA utilizes a low lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfer the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full Camera Link™ interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
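
    The quoted figures allow a quick estimate of how long the on-board buffer lasts at full rate. The snippet below assumes 2 bytes per pixel, since the sensor bit depth is not stated in the abstract.

```python
def buffer_seconds(fps, width, height, bytes_per_pixel, buffer_gib):
    """Seconds of full-rate recording a fixed on-board memory can hold."""
    bytes_per_second = fps * width * height * bytes_per_pixel
    return buffer_gib * 2**30 / bytes_per_second

# Full-frame figures from the abstract; 2 bytes/pixel is our assumption.
seconds = buffer_seconds(1700, 640, 512, 2, 16)  # roughly 15 s of recording
```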

  17. TWO METHODS FOR SELF CALIBRATION OF DIGITAL CAMERA

    Directory of Open Access Journals (Sweden)

    A. Sampath

    2012-07-01

    Full Text Available Photogrammetric mapping using Commercial Off-The-Shelf (COTS) cameras is becoming more popular. Their popularity is augmented by the increasing use of Unmanned Aerial Vehicles (UAVs) as a platform for mapping. The mapping precision of these methods can be increased by using a calibrated camera. The USGS/EROS has developed an inexpensive, easy to use method, particularly for calibrating short focal length cameras. The method builds on a self-calibration procedure developed for the USGS EROS Data Center by Pictometry (and augmented by Dr. C.S. Fraser) that uses a series of coded targets. These coded targets form different patterns that are imaged from nine different locations with differing camera orientations. A free network solution using collinearity equations is used to determine the calibration parameters. For the smaller focal length COTS cameras, the USGS has developed a procedure that uses a small prototype box that contains these coded targets. The design of the box is discussed, along with best practices for the calibration procedure. Results of calibration parameters obtained using the box are compared with the parameters obtained using more established standard procedures.
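
    The collinearity equations at the heart of such a free-network solution relate an object-space point to its image coordinates through the camera pose and interior orientation. The sketch below evaluates them for a single point under one common sign convention; a real calibration would add lens distortion parameters and wrap this in a least-squares adjustment, and all names here are illustrative.

```python
import numpy as np

def collinearity_project(X, X0, R, f, pp):
    """Image coordinates of object point X via the collinearity equations.

    X  : (3,) object-space point
    X0 : (3,) perspective centre
    R  : (3, 3) rotation from object space to camera space
    f  : principal distance; pp : (x0, y0) principal point
    """
    u, v, w = R @ (np.asarray(X, float) - np.asarray(X0, float))
    return np.array([pp[0] - f * u / w, pp[1] - f * v / w])
```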

  18. Spectroscopic gamma camera for use in high dose environments

    Energy Technology Data Exchange (ETDEWEB)

    Ueno, Yuichiro, E-mail: yuichiro.ueno.bv@hitachi.com [Research and Development Group, Hitachi, Ltd., Hitachi-shi, Ibaraki-ken 319-1221 (Japan); Takahashi, Isao; Ishitsu, Takafumi; Tadokoro, Takahiro; Okada, Koichi; Nagumo, Yasushi [Research and Development Group, Hitachi, Ltd., Hitachi-shi, Ibaraki-ken 319-1221 (Japan); Fujishima, Yasutake; Kometani, Yutaka [Hitachi Works, Hitachi-GE Nuclear Energy, Ltd., Hitachi-shi, Ibaraki-ken (Japan); Suzuki, Yasuhiko [Measuring Systems Engineering Dept., Hitachi Aloka Medical, Ltd., Ome-shi, Tokyo (Japan); Umegaki, Kikuo [Faculty of Engineering, Hokkaido University, Sapporo-shi, Hokkaido (Japan)

    2016-06-21

    We developed a pinhole gamma camera to measure distributions of radioactive material contaminants and to identify radionuclides in extraordinarily high dose regions (1000 mSv/h). The developed gamma camera is characterized by: (1) tolerance for high dose rate environments; (2) high spatial and spectral resolution for identifying unknown contaminating sources; and (3) good usability for being carried on a robot and remotely controlled. These are achieved by using a compact pixelated detector module with CdTe semiconductors, efficient shielding, and a fine resolution pinhole collimator. The gamma camera weighs less than 100 kg; its field of view is an 8 m square at a distance of 10 m, and its image is divided into 256 (16×16) pixels. From the laboratory test, we found the energy resolution at the 662 keV photopeak was 2.3% FWHM, which is sufficient to identify radionuclides. We found that the count rate per background dose rate was 220 cps h/mSv and the maximum count rate was 300 kcps, so the maximum dose rate of the environment where the gamma camera can be operated was calculated as 1400 mSv/h. We investigated the reactor building of Unit 1 at the Fukushima Dai-ichi Nuclear Power Plant using the gamma camera and could identify the unknown contaminating source in a dose rate environment as high as 659 mSv/h.
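
    The quoted 1400 mSv/h operating limit follows directly from the two measured figures in the abstract: the saturation count rate divided by the sensitivity gives about 1364 mSv/h, which rounds to 1400 at two significant figures.

```python
# Back-of-the-envelope check of the quoted operating limit, using only the
# two figures given in the abstract.
sensitivity = 220.0     # cps per (mSv/h) of background dose rate
max_count_rate = 300e3  # cps, saturation of the readout

max_dose_rate = max_count_rate / sensitivity  # mSv/h, ~1364, quoted as 1400
```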

  19. Super-resolution in plenoptic cameras using FPGAs.

    Science.gov (United States)

    Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime

    2014-05-16

    Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes.

  20. X-Ray Powder Diffraction with Guinier - Haegg Focusing Cameras

    International Nuclear Information System (INIS)

    Brown, Allan

    1970-12-01

    The Guinier - Haegg focusing camera is discussed with reference to its use as an instrument for rapid phase analysis. An actual camera and the alignment procedure employed in its setting up are described. The results obtained with the instrument are compared with those obtained with Debye - Scherrer cameras and powder diffractometers. Exposure times of 15 - 30 minutes with compounds of simple structure are roughly one-sixth of those required for Debye - Scherrer patterns. Coupled with the lower background resulting from the use of a monochromatic X-ray beam, the shorter exposure time gives a ten-fold increase in sensitivity for the detection of minor phases as compared with the Debye - Scherrer camera. Attention is paid to the precautions taken to obtain reliable Bragg angles from Guinier - Haegg film measurements, with particular reference to calibration procedures. The evaluation of unit cell parameters from Guinier - Haegg data is discussed together with the application of tests for the presence of angle-dependent systematic errors. It is concluded that with proper calibration procedures and least squares treatment of the data, accuracies of the order of 0.005% are attainable. A compilation of diffraction data for a number of compounds examined in the Active Central Laboratory at Studsvik is presented to exemplify the scope of this type of powder camera
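
    Bragg angles read from a calibrated Guinier - Haegg film convert to lattice spacings through Bragg's law, λ = 2d sin θ, which is the starting point for the unit cell refinement described above. A minimal helper (the Si(111)/Cu Kα1 figures in the test are our own illustrative values, not from the text):

```python
import math

def d_spacing(two_theta_deg, wavelength):
    """Lattice spacing from a measured diffraction angle (first order),
    via Bragg's law: lambda = 2 * d * sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))
```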

  1. Soft x-ray streak camera for laser fusion applications

    International Nuclear Information System (INIS)

    Stradling, G.L.

    1981-04-01

    This thesis reviews the development and significance of the soft x-ray streak camera (SXRSC) in the context of inertial confinement fusion energy development. A brief introduction of laser fusion and laser fusion diagnostics is presented. The need for a soft x-ray streak camera as a laser fusion diagnostic is shown. Basic x-ray streak camera characteristics, design, and operation are reviewed. The SXRSC design criteria, the requirement for a subkilovolt x-ray transmitting window, and the resulting camera design are explained. Theory and design of reflector-filter pair combinations for three subkilovolt channels centered at 220 eV, 460 eV, and 620 eV are also presented. Calibration experiments are explained and data showing a dynamic range of 1000 and a sweep speed of 134 psec/mm are presented. Sensitivity modifications to the soft x-ray streak camera for a high-power target shot are described. A preliminary investigation, using a stepped cathode, of the thickness dependence of the gold photocathode response is discussed. Data from a typical Argus laser gold-disk target experiment are shown

  2. Reliable and Repeatable Characterization of Optical Streak Cameras

    International Nuclear Information System (INIS)

    Charest Jr., Michael; Torres, Peter III; Silbernagel, Christopher; Kalantar, Daniel

    2008-01-01

    Optical streak cameras are used as primary diagnostics for a wide range of physics and laser experiments at facilities such as the National Ignition Facility (NIF). To meet the strict accuracy requirements needed for these experiments, the systematic nonlinearities of the streak cameras (attributed to nonlinearities in the optical and electrical components that make up the streak camera system) must be characterized. In some cases the characterization information is used as a guide to help determine how experiment data should be taken. In other cases, the characterization data are applied to the raw data images to correct for the nonlinearities. In order to characterize an optical streak camera, a specific set of data is collected, where the responses to defined inputs are recorded. A set of analysis software routines has been developed to extract information such as spatial resolution, dynamic range, and temporal resolution from this data set. The routines are highly automated, requiring very little user input and thus provide very reliable and repeatable results that are not subject to interpretation. An emphasis on quality control has been placed on these routines due to the high importance of the camera characterization information

  3. Visible camera imaging of plasmas in Proto-MPEX

    Science.gov (United States)

    Mosby, R.; Skeen, C.; Biewer, T. M.; Renfro, R.; Ray, H.; Shaw, G. C.

    2015-11-01

    The prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device being developed at Oak Ridge National Laboratory (ORNL). This machine plans to study plasma-material interaction (PMI) physics relevant to future fusion reactors. Measurements of plasma light emission will be made on Proto-MPEX using fast, visible framing cameras. The cameras utilize a global shutter, which allows a full frame image of the plasma to be captured and compared at multiple times during the plasma discharge. Typical exposure times are ~10-100 microseconds. The cameras are capable of capturing images at up to 18,000 frames per second (fps). However, the frame rate is strongly dependent on the size of the ``region of interest'' that is sampled. The maximum ROI corresponds to the full detector area of ~1000x1000 pixels. The cameras have an internal gain, which controls the sensitivity of the 10-bit detector. The detector includes a Bayer filter for ``true-color'' imaging of the plasma emission. This presentation will examine the optimized camera settings for use on Proto-MPEX. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.

  4. Template matching based people tracking using a smart camera network

    Science.gov (United States)

    Guan, Junzhi; Van Hese, Peter; Niño-Castañeda, Jorge Oswaldo; Bo Bo, Nyan; Gruenwedel, Sebastian; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2014-03-01

    In this paper, we propose a people tracking system composed of multiple calibrated smart cameras and one fusion server which fuses the information from all cameras. Each smart camera estimates the ground plane positions of people based on the current frame and feedback from the server from the previous time step. Correlation coefficient based template matching, which is invariant to illumination changes, is proposed to estimate the position of people in each smart camera. Only the estimated position and the corresponding correlation coefficient are sent to the server. This minimal amount of information exchange makes the system highly scalable with the number of cameras. The paper focuses on creating and updating a good template for the tracked person using feedback from the server. Additionally, a static background image of the empty room is used to improve the results of template matching. We evaluated the performance of the tracker in scenarios where persons are often occluded by other persons or furniture, and illumination changes occur frequently, e.g., due to switching the light on or off. For two sequences (one minute each, one with a table in the room, one without) with frequent illumination changes, the proposed tracker never loses track of the persons. We compare the performance of our tracking system to a state-of-the-art tracking system. Our approach outperforms it in terms of tracking accuracy and the number of lost tracks.
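
    The illumination-invariant score used in correlation coefficient based template matching is the zero-mean normalized cross-correlation. A minimal sketch of the per-patch score (our own, not the authors' code): because both patches are mean-subtracted and scale-normalized, a uniform gain or offset change in the lighting leaves the score unchanged.

```python
import numpy as np

def ncc(template, patch):
    """Zero-mean normalized correlation coefficient of two equal-size
    patches; +1 for a perfect match, invariant to gain and offset."""
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.sqrt((t * t).sum() * (p * p).sum())
    return float((t * p).sum() / denom)
```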

  5. Monocular camera and IMU integration for indoor position estimation.

    Science.gov (United States)

    Zhang, Yinlong; Tan, Jindong; Zeng, Ziming; Liang, Wei; Xia, Ye

    2014-01-01

    This paper presents a monocular camera (MC) and inertial measurement unit (IMU) integrated approach for indoor position estimation. Unlike the traditional estimation methods, we fix the monocular camera downward to the floor and collect successive frames where textures are orderly distributed and feature points robustly detected, rather than using a forward oriented camera sampling unknown and disordered scenes with a pre-determined frame rate and auto-focus metric scale. Meanwhile, the camera adopts a constant metric scale and an adaptive frame rate determined by IMU data. Furthermore, the corresponding distinctive image feature point matching approaches are employed for visual localization, i.e., optical flow for the fast motion mode; Canny edge detector, Harris feature point detector and SIFT descriptor for the slow motion mode. For superfast motion and abrupt rotation, where images from the camera are blurred and unusable, the extended Kalman filter is exploited to estimate IMU outputs and to derive the corresponding trajectory. Experimental results validate that our proposed method is effective and accurate in indoor positioning. Since our system is computationally efficient and compact in size, it is well suited for indoor navigation by visually impaired people and indoor localization of wheelchair users.

  6. RELATIVE CAMERA POSE ESTIMATION METHOD USING OPTIMIZATION ON THE MANIFOLD

    Directory of Open Access Journals (Sweden)

    C. Cheng

    2017-05-01

    Full Text Available To solve the problem of relative camera pose estimation, a method using optimization on the manifold is proposed. First, the general state estimation model using optimization is derived, going from the maximum-a-posteriori (MAP) model to the nonlinear least squares (NLS) model. Then the camera pose estimation model is applied to the general state estimation model, with the parameterization of the rigid body transformation represented by a Lie group/algebra. The Jacobian of the point-pose model with respect to the Lie group/algebra is derived in detail and thus the optimization model of the rigid body transformation is established. Experimental results show that compared with the original algorithms, the approaches with optimization obtain higher accuracy in both rotation and translation, while avoiding the singularity of the Euler angle parameterization of rotation. Thus the proposed method can estimate relative camera pose with high accuracy and robustness.
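
    The Lie group/algebra parameterization that avoids the Euler angle singularity maps a 3-vector in so(3) to a rotation matrix via the exponential map (Rodrigues' formula). A sketch under the standard axis-angle convention; an on-manifold optimizer applies its Gauss-Newton update through this map rather than to Euler angles.

```python
import numpy as np

def so3_exp(phi):
    """Exponential map from so(3) (axis-angle vector) to SO(3), via
    Rodrigues' formula: R = I + sin(t)/t K + (1 - cos(t))/t^2 K^2."""
    phi = np.asarray(phi, float)
    t = np.linalg.norm(phi)
    K = np.array([[0.0, -phi[2], phi[1]],
                  [phi[2], 0.0, -phi[0]],
                  [-phi[1], phi[0], 0.0]])
    if t < 1e-12:
        return np.eye(3) + K  # first-order approximation near identity
    return np.eye(3) + (np.sin(t) / t) * K + ((1 - np.cos(t)) / t**2) * (K @ K)
```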

  7. Detectors for timing measurements 2. Streak cameras and fast photodiodes

    International Nuclear Information System (INIS)

    Tanaka, Yoshihito; Adachi, Shin-ichi

    2009-01-01

    Methods for measuring the pulse structure of synchrotron radiation and its stability, and methods for time-resolved measurement that synchronize an external signal with the synchrotron radiation pulse, are explained. The synchrotron radiation pulse width ranges from some tens to some hundreds of picoseconds, so the time resolution of the detectors should be better than a few picoseconds. The streak camera is such a fast detector. Instead of the streak camera, which is large-scale and expensive, fast photodetectors can be employed to determine the overall time structure. Some examples of time-synchronized measurements using the streak camera and fast photodetectors are presented. (K.Y.)

  8. Development and evaluation of a Gamma Camera tuning system

    International Nuclear Information System (INIS)

    Arista Romeu, E. J.; Diaz Garcia, A.; Osorio Deliz, J. F.

    2015-01-01

    Correct operation of conventional analogue Gamma Cameras implies a good conformation of the position signals that correspond to a specific photopeak of the radionuclide of interest. In order to achieve this goal, the energy spectrum from each photomultiplier tube (PMT) has to be set within the same energy window. For this reason a reliable tuning system is an important part of all gamma camera processing systems. In this work a new prototype tuning card, developed and set up for this purpose, is tested and evaluated. The hardware and software of the circuit allow the regulation of each PMT's high voltage. By this means a proper gain control for each of them is accomplished. The tuning card prototype was simulated in a virtual model and its satisfactory operation was proven in a Siemens Orbiter Gamma Camera. (Author)

  9. Geometric Calibration and Radiometric Correction of the Maia Multispectral Camera

    Science.gov (United States)

    Nocerino, E.; Dubbini, M.; Menna, F.; Remondino, F.; Gattelli, M.; Covi, D.

    2017-10-01

    Multispectral imaging is a widely used remote sensing technique, whose applications range from agriculture to environmental monitoring, from food quality checks to cultural heritage diagnostics. A variety of multispectral imaging sensors are available on the market, many of them designed to be mounted on different platforms, especially small drones. This work focuses on the geometric and radiometric characterization of a brand-new, lightweight, low-cost multispectral camera, called MAIA. The MAIA camera is equipped with nine sensors, allowing for the acquisition of images in the visible and near infrared parts of the electromagnetic spectrum. Two versions are available, characterised by different sets of band-pass filters, inspired by the sensors mounted on the WorldView-2 and Sentinel-2 satellites, respectively. The camera details and the developed procedures for the geometric calibration and radiometric correction are presented in the paper.

  10. Development of high resolution camera for observations of superconducting cavities

    Directory of Open Access Journals (Sweden)

    Yoshihisa Iwashita

    2008-09-01

    Full Text Available A system for inspecting the inner surface of superconducting rf cavities is developed in order to study the relation between the achievable field gradient and the defects in the inner surface. The inspection system consists of a high resolution complementary metal-oxide-semiconductor camera and a special illumination system built in a cylinder that has a diameter of 50 mm. The camera cylinder can be inserted into the L-band 9 cell superconducting cavity. The system provides a resolution of about 7.5  μm/pixel. Thus far, there have been good correlations between locations identified by thermometry measurements and positions of defects found by this system. The heights or depths of the defects can also be estimated by measuring wall gradients using the reflection angle relation between the camera position and the strip illumination position. This paper presents a detailed description of the system and the data obtained from it.

  11. Streak cameras for soft x-ray and optical radiation

    International Nuclear Information System (INIS)

    Medecki, H.

    1983-01-01

    The principal component of a streak camera is the image converter tube. A slit-shaped photocathode transforms the radiation into a proportional emission of electrons. An electron-optics arrangement accelerates the electrons and projects them onto a phosphor screen, creating the image of the slit. A pair of deflection plates deflects the electron beam along a direction perpendicular to the main dimension of the slit. Different portions of the phosphor screen show the instantaneous image of the slit with brightness proportional to the number of emitted electrons and, consequently, to the intensity of the radiation. For our x-ray streak cameras, we use the RCA C73435A image converter tube, intended for the measurement of light and modified to have an x-ray sensitive photocathode. Practical considerations lead to the use of transparent rather than reflecting photocathodes. Several of these camera tubes are briefly described

  12. Users' guide to the positron camera DDP516 computer system

    International Nuclear Information System (INIS)

    Bracher, B.H.

    1979-08-01

    This publication is a guide to the operation, use and software for a DDP516 computer system provided by the Data Handling Group primarily for the development of a Positron Camera. The various sections of the publication fall roughly into three parts. (1) Sections forming the Operators Guide cover the basic operation of the machine, system utilities and back-up procedures. Copies of these sections are kept in a 'Nyrex' folder with the computer. (2) Sections referring to the software written particularly for Positron Camera Data Collection describe the system in outline and lead to details of file formats and program source files. (3) The remainder of the guide, describes General-Purpose Software. Much of this has been written over some years by various members of the Data Handling Group, and is available for use in other applications besides the positron camera. (UK)

  13. The TolTEC Camera for the LMT Telescope

    Science.gov (United States)

    Bryan, Sean

    2018-01-01

    TolTEC is a new camera being built for the 50-meter Large Millimeter-wave Telescope (LMT) on Sierra Negra in Puebla, Mexico. The instrument will discover and characterize distant galaxies by detecting the thermal emission of dust heated by starlight. The polarimetric capabilities of the camera will measure magnetic fields in star-forming regions in the Milky Way. The optical design of the camera uses mirrors, lenses, and dichroics to simultaneously couple a 4 arcminute diameter field of view onto three single-band focal planes at 150, 220, and 280 GHz. The 7000 polarization-selective detectors are single-band horn-coupled LEKID detectors fabricated at NIST. A rotating half wave plate operates at ambient temperature to modulate the polarized signal. In addition to the galactic and extragalactic surveys already planned, TolTEC installed at the LMT will provide open observing time to the community.

  14. Single-camera, three-dimensional particle tracking velocimetry.

    Science.gov (United States)

    Peterson, Kevin; Regaard, Boris; Heinemann, Stefan; Sick, Volker

    2012-04-09

    This paper introduces single-camera, three-dimensional particle tracking velocimetry (SC3D-PTV), an image-based, single-camera technique for measuring 3-component, volumetric velocity fields in environments with limited optical access, in particular, optically accessible internal combustion engines. The optical components used for SC3D-PTV are similar to those used for two-camera stereoscopic-µPIV, but are adapted to project two simultaneous images onto a single image sensor. A novel PTV algorithm relying on the similarity of the particle images corresponding to a single, physical particle produces 3-component, volumetric velocity fields, rather than the 3-component, planar results obtained with stereoscopic PIV, and without the reconstruction of an instantaneous 3D particle field. The hardware and software used for SC3D-PTV are described, and experimental results are presented.

  15. A mathematical model for camera calibration based on straight lines

    Directory of Open Access Journals (Sweden)

    Antonio M. G. Tommaselli

    2005-12-01

    Full Text Available In order to facilitate the automation of the camera calibration process, a mathematical model using straight lines was developed, which is based on the equivalent planes mathematical model. Parameter estimation of the developed model is achieved by the Least Squares Method with Conditions and Observations. The same method of adjustment was used to implement camera calibration with bundles, which is based on points. Experiments using simulated and real data have shown that the developed model based on straight lines gives results comparable to the conventional method with points. Details concerning the mathematical development of the model and experiments with simulated and real data are presented, and the results with both methods of camera calibration, with straight lines and with points, are compared.

  16. Impact of New Camera Technologies on Discoveries in Cell Biology.

    Science.gov (United States)

    Stuurman, Nico; Vale, Ronald D

    2016-08-01

    New technologies can make previously invisible phenomena visible. Nowhere is this more obvious than in the field of light microscopy. Beginning with the observation of "animalcules" by Antonie van Leeuwenhoek, when he figured out how to achieve high magnification by shaping lenses, microscopy has advanced to this day by a continued march of discoveries driven by technical innovations. Recent advances in single-molecule-based technologies have achieved unprecedented resolution, and were the basis of the Nobel prize in Chemistry in 2014. In this article, we focus on developments in camera technologies and associated image processing that have been a major driver of technical innovations in light microscopy. We describe five types of developments in camera technology: video-based analog contrast enhancement, charge-coupled devices (CCDs), intensified sensors, electron multiplying gain, and scientific complementary metal-oxide-semiconductor cameras, which, together, have had major impacts in light microscopy. © 2016 Marine Biological Laboratory.

  17. Automated Meteor Detection by All-Sky Digital Camera Systems

    Science.gov (United States)

    Suk, Tomáš; Šimberová, Stanislava

    2017-12-01

    We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.

  18. CAMERA-BASED SOFTWARE IN REHABILITATION/THERAPY INTERVENTION

    Directory of Open Access Journals (Sweden)

    Anthony Lewis Brooks

    2014-06-01

    Full Text Available Use of an affordable, easily adaptable, ‘non-specific camera-based software’ that is rarely used in the field of rehabilitation is reported in a study with 91 participants over the duration of six workshop sessions. ‘Non-specific camera-based software’ refers to software that is not dependent on specific hardware. Adaptable means that human tracking and created artefact interaction in the camera field of view is relatively easily changed as one desires via a user-friendly GUI. The significance of having both available for contemporary intervention is argued. Conclusions are that the mature, robust, and accessible software EyeCon is a potent and significant user-friendly tool in the field of rehabilitation/therapy and warrants wider exploration.

  19. Capturing migration phenology of terrestrial wildlife using camera traps

    Science.gov (United States)

    Tape, Ken D.; Gustine, David D.

    2014-01-01

    Remote photography, using camera traps, can be an effective and noninvasive tool for capturing the migration phenology of terrestrial wildlife. We deployed 14 digital cameras along a 104-kilometer longitudinal transect to record the spring migrations of caribou (Rangifer tarandus) and ptarmigan (Lagopus spp.) in the Alaskan Arctic. The cameras recorded images at 15-minute intervals, producing approximately 40,000 images, including 6685 caribou observations and 5329 ptarmigan observations. The northward caribou migration was evident because the median caribou observation (i.e., herd median) occurred later with increasing latitude; average caribou migration speed also increased with latitude (r2 = .91). Except at the northernmost latitude, a northward ptarmigan migration was similarly evident (r2 = .93). Future applications of this method could be used to examine the conditions proximate to animal movement, such as habitat or snow cover, that may influence migration phenology.
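
    The reported r² values come from regressing migration timing and speed against camera latitude. A minimal least-squares sketch of that analysis; the data in the test are hypothetical, and the study's own numbers are not reproduced here.

```python
import numpy as np

def fit_phenology(latitudes, median_day):
    """Least-squares line through (latitude, median observation day).

    Returns the slope (days per degree of latitude) and r^2; a positive
    slope indicates later median observations farther north, i.e. a
    northward migration.
    """
    lat = np.asarray(latitudes, float)
    day = np.asarray(median_day, float)
    slope, intercept = np.polyfit(lat, day, 1)
    pred = slope * lat + intercept
    ss_res = ((day - pred) ** 2).sum()
    ss_tot = ((day - day.mean()) ** 2).sum()
    return slope, 1.0 - ss_res / ss_tot
```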

  20. A Neutron Streak Camera Designed for ICF Fuel Ion Temperature

    Science.gov (United States)

    Chen, Jiabin; Liao, Hua; Chen, Ming

    2007-11-01

    A neutron streak camera was designed for inertial confinement fusion (ICF) fuel ion temperature diagnostics. It is made of a 1 cm thick x 8 cm diam piece of 3% benzophenone-quenched plastic scintillator with about a 190 ps FWHM and a streak tube (55 ps time resolution) with a large-area photocathode (φ30 mm) and no slit. The electron beam from the photocathode is focused into a small spot (φ1 mm). The spot is then scanned directly and multiplied by an internal microchannel plate, which greatly improves the sensitivity of the tube. The neutron streak camera combines the advantages of a scintillation detector (high neutron detection efficiency) and of a streak camera (fast time response). The whole detection system has a time resolution of 300 ps and can record neutron time-of-flight signals from an ICF implosion target with yields of 10^7 DT neutrons per shot.

  1. Spectral colors capture and reproduction based on digital camera

    Science.gov (United States)

    Chen, Defen; Huang, Qingmei; Li, Wei; Lu, Yang

    2018-01-01

    The purpose of this work is to develop a method for the accurate reproduction of spectral colors captured by a digital camera. The spectral colors, being the purest colors of any hue, are difficult to reproduce without distortion on digital devices. In this paper, we attempt to achieve accurate hue reproduction of the spectral colors by focusing on two steps of color correction: the capture of the spectral colors and the color characterization of the digital camera. This determines the relationship among the spectral color wavelength, the RGB color space of the digital camera device and the CIEXYZ color space. The study also provides a basis for further research on spectral color reproduction on digital devices. Methods such as wavelength calibration of the spectral colors and digital camera characterization were utilized. The spectrum was obtained through a grating spectroscopy system. A photo of a clear and reliable primary spectrum was taken by adjusting the relevant parameters of the digital camera, from which the RGB values of the color spectrum were extracted at 1040 equally divided locations. Two wavelength values were obtained for each location, one calculated using the grating equation and one measured with a spectrophotometer. The polynomial fitting method was used for camera characterization to achieve color correction. After wavelength calibration, the maximum error between the two sets of wavelengths is 4.38 nm. With the polynomial fitting method, the average color difference of the test samples is 3.76. This satisfies the application needs of spectral colors in digital devices such as displays and transmission.
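    The polynomial fitting step described above can be sketched in a few lines: camera RGB values are expanded into second-order polynomial terms and mapped to CIEXYZ by least squares. This is a minimal illustration with hypothetical data, not the authors' actual fitting code; the choice of a second-order expansion is an assumption.

```python
import numpy as np

def _poly_terms(rgb):
    """Second-order expansion: 1, R, G, B, R^2, G^2, B^2, RG, RB, GB."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.column_stack([np.ones_like(r), r, g, b,
                            r * r, g * g, b * b, r * g, r * b, g * b])

def fit_polynomial_characterization(rgb, xyz):
    """Fit a polynomial mapping from camera RGB to CIEXYZ (hypothetical data).

    rgb, xyz: (N, 3) arrays of training samples.
    Returns a (10, 3) coefficient matrix for the expanded terms.
    """
    coeffs, *_ = np.linalg.lstsq(_poly_terms(rgb), xyz, rcond=None)
    return coeffs

def apply_characterization(rgb, coeffs):
    """Predict CIEXYZ for new RGB samples using the fitted coefficients."""
    return _poly_terms(rgb) @ coeffs
```

A purely linear sensor-to-XYZ relationship is a special case of this expansion, so the fit recovers it exactly; higher-order terms absorb mild nonlinearity in real camera responses.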

  2. High-Speed Edge-Detecting Line Scan Smart Camera

    Science.gov (United States)

    Prokop, Norman F.

    2012-01-01

    A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in an inlet shock detection system developed at NASA Glenn Research Center. The inlet shock is detected by projecting a laser sheet through the airflow. The shock is the densest part of the airflow and refracts the laser sheet the most in its vicinity, leaving a dark spot or shadowgraph. These spots show up as a dip, or negative peak, within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes the linear image containing the shock shadowgraph in real time and outputs the shock location. Previously, a high-speed camera and a personal computer performed the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock, or negative edge, location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and an onboard or external digital interface, including serial data such as RS-232/485, USB, Ethernet, or CAN bus; parallel digital data; or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
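    The dip-detection idea behind the smart camera can be illustrated in software. The sketch below locates the deepest negative peak in a 1-D intensity profile and refines it with a centroid over a small window; it is a hypothetical illustration of the principle, not the camera's analog/digital circuit.

```python
import numpy as np

def find_shock_location(profile, window=5):
    """Locate the shock shadowgraph as the deepest dip in a linear
    pixel-intensity profile.

    profile: 1-D array of pixel intensities from the line scan sensor.
    Returns a sub-pixel index: the darkest pixel, refined by a centroid
    of inverted intensity over a small surrounding window.
    """
    profile = np.asarray(profile, dtype=float)
    i = int(np.argmin(profile))                      # darkest pixel
    lo, hi = max(0, i - window), min(len(profile), i + window + 1)
    seg = profile[lo:hi]
    weights = seg.max() - seg                        # "darkness" weights
    if weights.sum() == 0:                           # flat segment: no dip
        return float(i)
    return lo + float(np.average(np.arange(len(seg)), weights=weights))
```

On a symmetric dip the centroid refinement returns the true center even between pixels, which is why sub-pixel edge outputs are feasible at kilohertz line rates.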

  3. Camera-based measurement of respiratory rates is reliable.

    Science.gov (United States)

    Becker, Christoph; Achermann, Stefan; Rocque, Mukul; Kirenko, Ihor; Schlack, Andreas; Dreher-Hummel, Thomas; Zumbrunn, Thomas; Bingisser, Roland; Nickel, Christian H

    2017-06-01

    Respiratory rate (RR) is one of the most important vital signs used to detect whether a patient is in critical condition. It is part of many risk scores and its measurement is essential for triage of patients in emergency departments. It is often not recorded as measurement is cumbersome and time-consuming. We intended to evaluate the accuracy of camera-based measurements as an alternative measurement to the current practice of manual counting. We monitored the RR of healthy male volunteers with a camera-based prototype application and simultaneously by manual counting and by capnography, which was considered the gold standard. The four assessors were mutually blinded. We simulated normoventilation, hypoventilation and hyperventilation as well as deep, normal and superficial breathing depths to assess potential clinical settings. The volunteers were assessed while being undressed, wearing a T-shirt or a winter coat. In total, 20 volunteers were included. The results of camera-based measurements of RRs and capnography were in close agreement throughout all clothing styles and respiratory patterns (Pearson's correlation coefficient, r=0.90-1.00, except for one scenario, in which the volunteer breathed slowly dressed in a winter coat r=0.84). In the winter-coat scenarios, the camera-based prototype application was superior to human counters. In our pilot study, we found that camera-based measurements delivered accurate and reliable results. Future studies need to show that camera-based measurements are a secure alternative for measuring RRs in clinical settings as well.

  4. The development of high-speed 100 fps CCD camera

    International Nuclear Information System (INIS)

    Hoffberg, M.; Laird, R.; Lenkzsus, F.; Liu, C.; Rodricks, B.

    1997-01-01

    This paper describes the development of a high-speed CCD digital camera system. The system has been designed to use CCDs from various manufacturers with minimal modifications. The first camera built on this design utilizes a Thomson 512 x 512 pixel CCD as its sensor, which is read out from two parallel outputs at a speed of 15 MHz/pixel/output. The data undergo correlated double sampling, after which they are digitized into 12 bits. The throughput of the system translates into 60 MB/second, which is either stored directly in a PC or transferred to a custom-designed VXI module. The PC data acquisition version of the camera can collect sustained data in real time, limited only by the memory installed in the PC. The VXI version of the camera, also controlled by a PC, stores 512 MB of real-time data before it must be read out to PC disk storage. The uncooled CCD can be used either with lenses for visible light imaging or with a phosphor screen for X-ray imaging. This camera has been tested with a phosphor screen coupled to a fiber-optic face plate for high-resolution, high-speed X-ray imaging. The camera is controlled through a custom event-driven, user-friendly Windows package. The pixel clock speed can be changed from 1 to 15 MHz. The noise was measured to be 1.05 bits at a 13.3 MHz pixel clock. This paper describes the electronics, software, and characterizations that have been performed using both visible and X-ray photons. (orig.)
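    The quoted 60 MB/second throughput follows directly from the readout geometry; a quick back-of-the-envelope check, assuming each 12-bit sample is stored in a 16-bit word (an assumption consistent with the stated numbers):

```python
# Back-of-the-envelope check of the quoted 60 MB/s throughput.
outputs = 2                 # parallel CCD readout ports
pixel_rate_hz = 15e6        # pixels per second per output
bytes_per_pixel = 2         # 12-bit sample padded to a 16-bit word (assumed)

throughput_mb_s = outputs * pixel_rate_hz * bytes_per_pixel / 1e6
print(throughput_mb_s)      # 60.0

# Raw frame rate for the 512 x 512 sensor at full pixel clock:
frame_pixels = 512 * 512
fps = outputs * pixel_rate_hz / frame_pixels
print(round(fps, 1))        # 114.4 (ignoring readout overhead)
```

The raw figure of about 114 fps is consistent with the sustained 100 fps in the title once frame-transfer and line overheads are accounted for.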

  5. Target-Tracking Camera for a Metrology System

    Science.gov (United States)

    Liebe, Carl; Bartman, Randall; Chapsky, Jacob; Abramovici, Alexander; Brown, David

    2009-01-01

    An analog electronic camera that is part of a metrology system measures the varying direction to a light-emitting diode that serves as a bright point target. In the original application for which the camera was developed, the metrology system is used to determine the varying relative positions of radiating elements of an airborne synthetic-aperture radar (SAR) antenna as the airplane flexes during flight; precise knowledge of the relative positions as a function of time is needed for processing SAR readings. It has been common metrology system practice to measure the varying direction to a bright target by use of an electronic camera of the charge-coupled-device or active-pixel-sensor type. A major disadvantage of this practice arises from the necessity of reading out and digitizing the outputs from a large number of pixels and processing the resulting digital values in a computer to determine the centroid of a target: because of the time taken by the readout, digitization, and computation, the update rate is limited to tens of hertz. In contrast, the analog nature of the present camera makes it possible to achieve an update rate of hundreds of hertz, and no computer is needed to determine the centroid. The camera is based on a position-sensitive detector (PSD), which is a rectangular photodiode with output contacts at opposite ends. PSDs are usually used in triangulation for measuring small distances. PSDs are manufactured in both one- and two-dimensional versions. Because it is very difficult to calibrate two-dimensional PSDs accurately, the focal-plane sensors used in this camera are two orthogonally mounted one-dimensional PSDs.
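    The position readout of a one-dimensional PSD can be sketched with the standard normalized-difference formula; the function below illustrates the principle, not the camera's analog electronics, and its symbols are generic.

```python
def psd_position(i1, i2, length_mm):
    """Spot position on a 1-D position-sensitive detector.

    i1, i2: photocurrents at the two end contacts.
    length_mm: active length of the detector.
    Returns the position in mm relative to the detector centre, using
    the standard formula x = (L/2) * (I2 - I1) / (I1 + I2).
    """
    return 0.5 * length_mm * (i2 - i1) / (i1 + i2)
```

Because the ratio of currents, not their absolute magnitude, carries the position, the readout is insensitive to slow variations in target brightness; this is what lets an analog divider circuit replace pixel readout and centroiding.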

  6. Image-converter streak cameras with very high gain

    International Nuclear Information System (INIS)

    1975-01-01

    A new camera is described with slit scanning and very high photonic gain (G=5000). Development of the technology of tubes and microchannel plates has enabled integration of such an amplifying element in an image converter tube which does away with the couplings and the intermediary electron-photon-electron conversions of the classical converter systems having external amplification. It is thus possible to obtain equal or superior performance while retaining considerable gain for the camera, great compactness, great flexibility in use, and easy handling. (author)

  7. Robust Visual Control of Parallel Robots under Uncertain Camera Orientation

    Directory of Open Access Journals (Sweden)

    Miguel A. Trujano

    2012-10-01

    Full Text Available This work presents a stability analysis and experimental assessment of a visual control algorithm applied to a redundant planar parallel robot under uncertainty in relation to camera orientation. The key feature of the analysis is a strict Lyapunov function that allows the conclusion of asymptotic stability without invoking the Barbashin-Krassovsky-LaSalle invariance theorem. The controller does not rely on velocity measurements and has a structure similar to a classic Proportional Derivative control algorithm. Experiments in a laboratory prototype show that uncertainty in camera orientation does not significantly degrade closed-loop performance.

  8. A smart camera for High Dynamic Range imaging

    OpenAIRE

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2013-01-01

    A camera or a video camera is able to capture only part of the information in a high dynamic range scene. The same scene can be almost totally perceived by the human visual system. This is especially true for real scenes, where the difference in light intensity between the dark areas and bright areas is high. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. It produces images from a set of multiple LDR (Low Dynamic Range) images, capt...
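    The multi-exposure merging the abstract describes can be sketched as a weighted average of LDR frames scaled by their exposure times. This is a minimal illustration assuming a linear sensor response and a hat-shaped weighting; real HDR pipelines also recover the camera response curve first.

```python
import numpy as np

def fuse_ldr_exposures(images, exposure_times):
    """Merge LDR exposures into one radiance map (minimal sketch).

    images: list of float arrays with values in [0, 1].
    exposure_times: matching exposure durations in seconds.
    Assumes a linear sensor; each frame is divided by its exposure time
    and weighted so that mid-tones count most.
    """
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        # Hat weight: trust mid-tones, distrust under/over-exposed pixels
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-9)
```

Saturated pixels get near-zero weight, so a short exposure supplies the highlights and a long exposure the shadows, which is exactly the gap a single LDR frame cannot cover.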

  9. Time-of-flight cameras principles, methods and applications

    CERN Document Server

    Hansard, Miles; Choi, Ouk; Horaud, Radu

    2012-01-01

    Time-of-flight (TOF) cameras provide a depth value at each pixel, from which the 3D structure of the scene can be estimated. This new type of active sensor makes it possible to go beyond traditional 2D image processing, directly to depth-based and 3D scene processing. Many computer vision and graphics applications can benefit from TOF data, including 3D reconstruction, activity and gesture recognition, motion capture and face detection. It is already possible to use multiple TOF cameras, in order to increase the scene coverage, and to combine the depth data with images from several colour came
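    For a continuous-wave TOF camera, the depth at each pixel follows from the measured modulation phase; a minimal sketch of the standard relation (the modulation frequency in the example is chosen arbitrarily):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_rad, f_mod_hz):
    """Depth from the phase shift of the returned modulated signal.

    d = c * phi / (4 * pi * f_mod); the extra factor of two accounts
    for the round trip to the scene and back.
    """
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

def ambiguity_range(f_mod_hz):
    """Phase wraps at 2*pi, so measured depths repeat beyond c / (2 f)."""
    return C / (2.0 * f_mod_hz)
```

At a 30 MHz modulation frequency the unambiguous range is about 5 m, which is why multi-frequency modulation is often used to extend the scene coverage mentioned above.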

  10. Holographic interferometry using a digital photo-camera

    International Nuclear Information System (INIS)

    Sekanina, H.; Hledik, S.

    2001-01-01

    The possibilities of performing digital holographic interferometry using commonly available compact digital zoom photo-cameras are studied. The recently developed holographic setup, suitable especially for digital photo-cameras equipped with a non-detachable objective lens, is used. The method described enables a simple and straightforward way of both recording and reconstructing digital holographic interferograms. The feasibility of the new method is verified by digital reconstruction of the acquired interferograms using a numerical code based on the fast Fourier transform. The experimental results obtained are presented and discussed. (authors)

  11. Advances in pediatric gastroenterology: introducing video camera capsule endoscopy.

    Science.gov (United States)

    Siaw, Emmanuel O

    2006-04-01

    The video camera capsule endoscope is a gastrointestinal endoscope approved by the U.S. Food and Drug Administration in 2001 for use in diagnosing gastrointestinal disorders in adults. In 2003, the agency approved the device for use in children ages 10 and older, and the endoscope is currently in use at Arkansas Children's Hospital. A capsule camera, lens, battery, transmitter and antenna together record images of the small intestine as the endoscope makes its way through the bowel. The instrument is used with minimal risk to the patient while offering a high degree of accuracy in diagnosing small intestine disorders.

  12. SCC500: next-generation infrared imaging camera core products with highly flexible architecture for unique camera designs

    Science.gov (United States)

    Rumbaugh, Roy N.; Grealish, Kevin; Kacir, Tom; Arsenault, Barry; Murphy, Robert H.; Miller, Scott

    2003-09-01

    A new 4th generation MicroIR architecture is introduced as the latest in the highly successful Standard Camera Core (SCC) series by BAE SYSTEMS to offer an infrared imaging engine with greatly reduced size, weight, power, and cost. The advanced SCC500 architecture provides great flexibility in configuration to include multiple resolutions, an industry standard Real Time Operating System (RTOS) for customer specific software application plug-ins, and a highly modular construction for unique physical and interface options. These microbolometer based camera cores offer outstanding and reliable performance over an extended operating temperature range to meet the demanding requirements of real-world environments. A highly integrated lens and shutter is included in the new SCC500 product enabling easy, drop-in camera designs for quick time-to-market product introductions.

  13. A Quality Evaluation of Single and Multiple Camera Calibration Approaches for an Indoor Multi Camera Tracking System

    Directory of Open Access Journals (Sweden)

    M. Adduci

    2014-06-01

    Full Text Available Human detection and tracking has been a prominent research area for scientists around the globe. State-of-the-art algorithms have been implemented, refined and accelerated to significantly improve the detection rate and eliminate false positives. While 2D approaches are well investigated, 3D human detection and tracking is still a largely unexplored research field. In both the 2D and 3D cases, introducing a multi camera system can vastly improve the accuracy and confidence of the tracking process. Within this work, a quality evaluation is performed on a multi RGB-D camera indoor tracking system to examine how camera calibration and pose can affect the quality of human tracks in the scene, independently of the detection and tracking approach used. After performing a calibration step on every Kinect sensor, state-of-the-art single camera pose estimators were evaluated to check how well the poses are estimated using planar objects such as an ordinary chessboard. With this information, a bundle block adjustment and ICP were performed to verify the accuracy of the single pose estimators in a multi camera configuration system. Results have shown that single camera estimators provide high-accuracy results of less than half a pixel, forcing the bundle to converge after very few iterations. In relation to ICP, relative information between cloud pairs is more or less preserved, giving a low fitting score between concatenated pairs. Finally, sensor calibration proved to be an essential step for achieving maximum accuracy in the generated point clouds, and therefore in the accuracy of the 3D trajectories produced from each sensor.
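    The sub-half-pixel accuracy reported above is usually quantified as mean reprojection error. The sketch below computes it for a simple pinhole model with hypothetical intrinsics; it is a generic numpy illustration, not the evaluation code used in the study.

```python
import numpy as np

def reproject(points_3d, K, R, t):
    """Project 3-D world points through a pinhole camera (no distortion).

    K: 3x3 intrinsic matrix; R, t: world-to-camera rotation/translation.
    """
    cam = (R @ points_3d.T).T + t            # world -> camera frame
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]            # perspective divide

def mean_reprojection_error(observed_px, points_3d, K, R, t):
    """Mean Euclidean distance (pixels) between observed and projected points."""
    proj = reproject(points_3d, K, R, t)
    return float(np.mean(np.linalg.norm(proj - observed_px, axis=1)))
```

A pose estimator that puts this number below 0.5 px on the chessboard corners gives the bundle adjustment a near-perfect starting point, which is why very few iterations are needed for convergence.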

  14. A Robust Camera-Based Interface for Mobile Entertainment

    Directory of Open Access Journals (Sweden)

    Maria Francesca Roig-Maimó

    2016-02-01

    Full Text Available Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies to consider the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions; therefore, the findings cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user’s head by processing the frames provided by the mobile device’s front camera, and its position is then used to interact with the mobile apps. First, we evaluate the interface as a pointing device to study its accuracy and different configurable factors such as the gain or the device’s orientation, as well as the optimal target size for the interface. Second, we present an in-the-wild study to evaluate the usage and the users’ perception when playing a game controlled by head motion. Finally, the game was published in an application store to make it available to a large number of potential users and contexts, and usage data were recorded. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people.

  15. Movement-based interaction in camera spaces: a conceptual framework

    DEFF Research Database (Denmark)

    Eriksson, Eva; Hansen, Thomas Riisgaard; Lykke-Olesen, Andreas

    2007-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movementbased projects we present four selected applications, and use these applications to leverage our discussion, and to describe our three main concepts space,...

  16. Programmable electronic system for analog and digital gamma cameras modernization

    International Nuclear Information System (INIS)

    Osorio Deliz, J. F.; Diaz Garcia, A.; Arista Omeu, E. J.

    2013-01-01

    At present the use of analog and digital gamma cameras is continuously increasing in developing countries. Many of them still rely largely on old hardware electronics, which in many cases limits their use in current nuclear medicine diagnostic studies. For this reason, different medical equipment manufacturers worldwide are engaged in partial or total gamma camera modernization. Nevertheless, on several occasions acquisition prices are not affordable for developing countries. This work describes the basic features of a programmable electronic system that improves the acquisition functions and processing of analog and digital gamma cameras. The system is based on an electronic board for the acquisition and digitization of the nuclear pulses generated by the gamma camera detector. It comprises a hardware interface with a PC and the associated software for full signal processing, including signal shaping and image processing. The extensive use of reference tables in the processing and signal imaging software allowed the processing speed to be optimized. Design time and system cost were also decreased. (Author)

  17. CCD digital camera maps the East Pacific Rise

    Science.gov (United States)

    Edwards, Margo H.; Smith, Milton O.; Fornari, Daniel J.

    Since the pioneering work of Ewing et al. [1946] and Edgerton [1963] on the development of modern deep-sea camera systems, photographs of the deep seabed have been fundamental to marine geological investigations, portraying deep-sea fauna and permitting study of seafloor morphology at scales ranging from centimeters to meters [e.g., Heezen and Hollister, 1971; Spiess and Tyce, 1973; Grassle et al., 1979; Ballard and Moore, 1977; Lonsdale and Spiess, 1980; Fox et al., 1988]. Deep-sea photography has advanced from single-frame bounce cameras to sophisticated remotely operated vehicles (ROV) containing a complement of optical and acoustical data sensors and altitude-recording devices. Recent advances in camera technology, notably the development of digital camera systems [e.g., Harris et al., 1987], are rapidly increasing the information content of deep-sea photographs. Digital photographs are superior to their analog counterparts because they can be computer enhanced to extract features that are difficult to resolve due to poor lighting, for example. They also lend themselves to quantitative analysis, facilitating numerical comparisons between acoustic backscatter data and optical imagery of various seafloor terrains.

  18. Visual Positioning Indoors: Human Eyes vs. Smartphone Cameras.

    Science.gov (United States)

    Wu, Dewen; Chen, Ruizhi; Chen, Liang

    2017-11-16

    Artificial Intelligence (AI) technologies and their related applications are now developing at a rapid pace. Indoor positioning will be one of the core technologies that enable AI applications because people spend 80% of their time indoors. Humans can locate themselves relative to a visually well-defined object, e.g., a door, based on their visual observations. Can a smartphone camera do a similar job when it points at an object? In this paper, a visual positioning solution was developed based on a single image captured by a smartphone camera pointing at a well-defined object. The smartphone camera simulates the process of human eyes for the purpose of locating itself relative to a well-defined object. Extensive experiments were conducted with five types of smartphones in three different indoor settings, including a meeting room, a library, and a reading room. Experimental results showed that the average positioning accuracy of the solution based on five smartphone cameras is 30.6 cm, while that of the human-observed solution, with 300 samples from 10 different people, is 73.1 cm.
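    A single-image range estimate of the kind described can be sketched with the pinhole relation between an object's known physical size and its size in pixels; all numbers in the example below are hypothetical, and real solutions also account for orientation and lens distortion.

```python
def distance_from_known_height(real_height_m, pixel_height, focal_px):
    """Distance to an object of known physical size (pinhole model).

    real_height_m: true object height, e.g. a standard door (~2.0 m).
    pixel_height: the object's height in the image, in pixels.
    focal_px: focal length expressed in pixels (from the camera intrinsics).
    Returns the camera-to-object distance in metres: d = f * H / h.
    """
    return focal_px * real_height_m / pixel_height
```

For instance, a 2.0 m door imaged at 400 px by a camera with a 3000 px focal length is 15 m away; centimetre-level accuracy then depends on how precisely the object's image extent can be measured.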

  19. Astronauts Cooper and Conrad prepare cameras during visual acuity tests

    Science.gov (United States)

    1965-01-01

    Astronauts L. Gordon Cooper Jr. (left), command pilot, and Charles Conrad Jr., pilot, the prime crew of the Gemini 5 space flight, prepare their cameras while aboard a C-130 aircraft flying near Laredo. The two astronauts are taking part in a series of visual acuity experiments to aid them in learning to identify known terrestrial features under controlled conditions.

  20. The moving camera in Flimmer

    DEFF Research Database (Denmark)

    Juel, Henrik

    2018-01-01

    No human actors are seen, but Flimmer still seethes with motion, both motion within the frame and motion of the frame. The subtle camera movements, perhaps at first unnoticed, play an important role in creating the poetic mood of the film, curious, playful and reflexive....