WorldWideScience

Sample records for abrams 360-degree camera

  1. Can We Use Low-Cost 360 Degree Cameras to Create Accurate 3D Models?

    Science.gov (United States)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2018-05-01

    360-degree cameras capture the whole scene around the photographer in a single shot, and cheap 360-degree cameras constitute a new paradigm in photogrammetry. The camera can be pointed in any direction, and the large field of view reduces the number of photographs required. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which costs about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and to laser scanning point clouds. The paper summarizes some practical rules for image acquisition, as well as the importance of ground control points in removing possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (capturing the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where a 360° camera can be a better choice than a project based on central perspective cameras. Essentially, 360° cameras are very useful in the survey of long and narrow spaces, as well as interior areas such as small rooms.
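
The equirectangular images produced by such cameras relate pixels to viewing directions through a simple longitude/latitude mapping, which is the geometric basis of orienting 360° images photogrammetrically. A minimal sketch follows; the mapping convention, axis layout, image size, and half-pixel offset are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def equirect_pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit viewing direction.

    Assumes a common convention: u spans longitude [-pi, pi) left to
    right, v spans latitude [pi/2, -pi/2] top to bottom, with a
    half-pixel offset so that pixel centres are sampled.
    """
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi
    return np.array([np.cos(lat) * np.sin(lon),   # x: right
                     np.sin(lat),                 # y: up
                     np.cos(lat) * np.cos(lon)])  # z: forward

# The centre of a 2048x1024 panorama looks straight along +z.
ray = equirect_pixel_to_ray(1023.5, 511.5, 2048, 1024)
```

Bundle adjustment over 360° images then treats each pixel observation as such a ray from the camera centre, rather than as a central-perspective image point.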

  2. Virtual displays for 360-degree video

    Science.gov (United States)

    Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.

    2012-03-01

    In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360-degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.

  3. WebVR meets WebRTC: Towards 360-degree social VR experiences

    NARCIS (Netherlands)

    Gunkel, S.; Prins, M.J.; Stokking, H.M.; Niamut, O.A.

    2017-01-01

    Virtual Reality (VR) and 360-degree video are reshaping the media landscape, creating a fertile business environment. During 2016 new 360-degree cameras and VR headsets entered the consumer market, distribution platforms are being established and new production studios are emerging. VR is evermore

  4. Research on auto-calibration technology of the image plane's center of 360-degree and all round looking camera

    Science.gov (United States)

    Zhang, Shaojun; Xu, Xiping

    2015-10-01

    The 360-degree and all-round looking camera, because its output is suitable for automatic analysis and judgment of the carrier's ambient environment by image recognition algorithms, is usually applied in the opto-electronic radar of robots and smart cars. To ensure the stability and consistency of image processing results in mass production, the centers of the image planes of different cameras must coincide, which requires calibrating the position of the image plane's center. The traditional mechanical calibration method and the electronic adjustment mode of entering offsets manually both suffer from reliance on human eyes, inefficiency, and a large error distribution. In this paper, an approach for auto-calibration of the image plane of this camera is presented. The image produced by the 360-degree and all-round looking camera is ring-shaped, bounded by two concentric circles: the inner boundary is a smaller circle and the outer boundary is a bigger circle. The technique exploits exactly this characteristic. By recognizing the two circles with the Hough transform algorithm and calculating the center position, we obtain the accurate center of the image, i.e., the deviation between the optical axis and the center of the image sensor. The program then configures the image sensor chip over the I2C bus automatically, so the center of the image plane can be adjusted automatically and accurately. The technique has been applied in practice; it improves productivity and guarantees consistent product quality.
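
The circle-recognition step can be illustrated with a minimal Hough transform: each edge pixel votes for candidate centres lying at a known radius around it, and the accumulator maximum gives the centre. This is a simplified sketch (fixed known radius, synthetic edge points), not the authors' implementation:

```python
import numpy as np

def hough_circle_center(edge_points, radius, shape):
    """Minimal circle-centre Hough transform: each edge point votes
    for candidate centres at `radius` around it; the accumulator
    maximum is the estimated centre."""
    acc = np.zeros(shape, dtype=np.int32)
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    for y, x in edge_points:
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)   # unbuffered accumulation
    cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
    return int(cy), int(cx)

# Synthetic ring edge: 200 points on a circle of radius 40 centred at (60, 70).
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
pts = np.stack([60 + 40 * np.sin(t), 70 + 40 * np.cos(t)], axis=1)
center = hough_circle_center(pts, radius=40, shape=(128, 128))
print(center)  # → (60, 70)
```

In practice the radius is also unknown, so the accumulator gains a third dimension; the recovered centre offset is what would then be written to the sensor over I2C.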

  5. Reflux and Belching After 270 Degree Versus 360 Degree Laparoscopic Posterior Fundoplication

    NARCIS (Netherlands)

    Broeders, Joris A.; Bredenoord, Albert J.; Hazebroek, Eric J.; Broeders, Ivo A.; Gooszen, Hein G.; Smout, André J.

    2012-01-01

    Objective: To investigate differences in effects of 270 degrees (270 degrees LPF) and 360 degrees laparoscopic posterior fundoplication (360 degrees LPF) on reflux characteristics and belching. Background: Three hundred sixty degrees LPF greatly reduces the ability of the stomach to vent ingested

  6. A novel 360-degree shape measurement using a simple setup with two mirrors and a laser MEMS scanner

    Science.gov (United States)

    Jin, Rui; Zhou, Xiang; Yang, Tao; Li, Dong; Wang, Chao

    2017-09-01

    There is no denying that 360-degree shape measurement technology plays an important role in the field of three-dimensional optical metrology. Traditional optical 360-degree shape measurement methods are mainly of two kinds: the first places multiple scanners around the object to achieve 360-degree measurements; the second uses a high-precision rotating device to obtain a 360-degree shape model. The former increases the number of scanners and is costly, while the latter's rotating devices make it time-consuming. This paper presents a low-cost and fast optical 360-degree shape measurement method that is fully static, fast, and inexpensive. The measuring system consists of two mirrors set at a certain angle, a laser projection system, a stereoscopic calibration block, and two cameras. Above all, the laser MEMS scanner achieves precise movement of the laser stripes without any motion mechanism, improving measurement accuracy and efficiency. Moreover, a novel stereo calibration technique presented in this paper achieves point-cloud registration and thereby yields the 360-degree model of an object. A stereoscopic calibration block with special coded patterns on its six sides is used in this method. Through this novel stereo calibration technology we can quickly obtain 360-degree models of objects.
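
Registering the point clouds seen via the two mirror views comes down to estimating a rigid transform from corresponding 3D points on the coded calibration block. A sketch using the standard Kabsch/Procrustes solution; the correspondences below are synthetic, not data from the paper's calibration block:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with Q ≈ P @ R.T + t, via
    the Kabsch/Procrustes algorithm on corresponding 3D points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Hypothetical corresponding target points seen from the two mirror views.
rng = np.random.default_rng(0)
P = rng.normal(size=(8, 3))
yaw = np.deg2rad(30.0)
R_true = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.5])
Q = P @ R_true.T + t_true
R, t = rigid_transform(P, Q)
```

Once (R, t) is known from the coded targets, the partial point clouds can be placed in a common frame to form the full 360-degree model.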

  7. Developing 360 degree feedback system for KINS

    Energy Technology Data Exchange (ETDEWEB)

    Han, In Soo; Cheon, B. M.; Kim, T. H.; Ryu, J. H. [Chungman National Univ., Daejeon (Korea, Republic of)

    2003-12-15

    This project aims to investigate the feasibility of a 360 degree feedback system for KINS and to design guiding rules and structures for implementing that system. A literature survey, environmental analysis, and questionnaire survey were carried out to determine whether 360 degree feedback is the right tool to improve performance in KINS. That review leads to the conclusion that more readiness and a careful feasibility review are needed before 360 degree feedback is implemented in KINS. Further, the project suggests some guiding rules that can be helpful for successful implementation of such a system in KINS: start with development, experiment with one department, tie it to a clear organizational goal, train everyone involved, and make sure to try the system in an atmosphere of trust.

  8. Developing 360 degree feedback system for KINS

    International Nuclear Information System (INIS)

    Han, In Soo; Cheon, B. M.; Kim, T. H.; Ryu, J. H.

    2003-12-01

    This project aims to investigate the feasibility of a 360 degree feedback system for KINS and to design guiding rules and structures for implementing that system. A literature survey, environmental analysis, and questionnaire survey were carried out to determine whether 360 degree feedback is the right tool to improve performance in KINS. That review leads to the conclusion that more readiness and a careful feasibility review are needed before 360 degree feedback is implemented in KINS. Further, the project suggests some guiding rules that can be helpful for successful implementation of such a system in KINS: start with development, experiment with one department, tie it to a clear organizational goal, train everyone involved, and make sure to try the system in an atmosphere of trust.

  9. The 360 Degree Fulldome Production "Clockwork Ocean"

    Science.gov (United States)

    Baschek, B.; Heinsohn, R.; Opitz, D.; Fischer, T.; Baschek, T.

    2016-02-01

    The investigation of submesoscale eddies and fronts is one of the leading oceanographic topics at the Ocean Sciences Meeting 2016. In order to observe these small and short-lived phenomena, planes equipped with high-resolution cameras and fast vessels were deployed during the Submesoscale Experiments (SubEx), leading to some of the first high-resolution observations of these eddies. In a future experiment, a zeppelin will be used for the first time in marine science. The relevance of submesoscale processes for the oceans and the work of the eddy hunters are described in the fascinating 9-minute 360 degree fulldome production Clockwork Ocean. The fully animated movie is introduced in this presentation, taking the observer from the bioluminescence in the deep ocean to a view of our blue planet from space. The immersive medium is used to combine fascination for a yet unknown environment with scientific education of a broad audience. Detailed background information is available at the parallax website www.clockwork-ocean.com. The film is also available for virtual reality glasses and smartphones to reach a broader audience. A unique mobile dome with an area of 70 m² and seats for 40 people is used for science education at events and festivals, and for politicians and school classes. The spectators are also invited to participate in the experiments through 360 degree footage of the measurements. Clockwork Ocean premiered in July 2015 in Hamburg, Germany, and has been available worldwide in English and German since fall 2015. Clockwork Ocean is a film of the Helmholtz-Zentrum Geesthacht produced by Daniel Opitz and Ralph Heinsohn.

  10. Magnetic field control of 90°, 180°, and 360° domain wall resistance

    Energy Technology Data Exchange (ETDEWEB)

    Majidi, Roya, E-mail: royamajidi@gmail.com [Department of Physics, Shahid Rajaee Teacher Training University, Lavizan, 16788-15811 Tehran (Iran, Islamic Republic of)

    2012-10-01

    In the present work, we have compared the resistance of 90°, 180°, and 360° domain walls in the presence of an external magnetic field. The calculations are based on the Boltzmann transport equation within the relaxation time approximation. One-dimensional Neel-type domain walls between two domains whose magnetization differs by an angle of 90°, 180°, or 360° are considered. The results indicate that the resistance of the 360° DW is more considerable than that of the 90° and 180° DWs. It is also found that the domain wall resistance can be controlled by applying a transverse magnetic field: increasing the strength of the external magnetic field enhances the domain wall resistance. In designing spintronic devices based on magnetic nanomaterials, considering and controlling the effect of domain walls on resistivity is essential.

  11. Design and Implementation of a Novel Portable 360° Stereo Camera System with Low-Cost Action Cameras

    Science.gov (United States)

    Holdener, D.; Nebiker, S.; Blaser, S.

    2017-11-01

    The demand for capturing indoor spaces is rising with the digitalization trend in the construction industry. An efficient solution for measuring challenging indoor environments is mobile mapping. Image-based systems with 360° panoramic coverage allow a rapid data acquisition and can be processed to georeferenced 3D images hosted in cloud-based 3D geoinformation services. For the multiview stereo camera system presented in this paper, a 360° coverage is achieved with a layout consisting of five horizontal stereo image pairs in a circular arrangement. The design is implemented as a low-cost solution based on a 3D printed camera rig and action cameras with fisheye lenses. The fisheye stereo system is successfully calibrated with accuracies sufficient for the applied measurement task. A comparison of 3D distances with reference data delivers maximal deviations of 3 cm on typical distances in indoor space of 2-8 m. Also the automatic computation of coloured point clouds from the stereo pairs is demonstrated.
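
The reported accuracy can be related to the stereo geometry through the rectified depth-from-disparity relation Z = f·B/d. A small sketch with illustrative numbers; the focal length and baseline are hypothetical, not the calibrated values of the presented rig:

```python
# Depth from a rectified stereo pair: Z = f * B / d.
# The focal length and baseline below are hypothetical, not the
# calibrated values of the rig described in the paper.
focal_px = 800.0     # focal length in pixels
baseline_m = 0.20    # stereo base between the two cameras, in metres

def depth_from_disparity(disparity_px):
    return focal_px * baseline_m / disparity_px

z = depth_from_disparity(40.0)           # 4.0 m
z_err = depth_from_disparity(39.0) - z   # effect of a 1-px disparity error
print(round(z, 2), round(z_err, 3))      # → 4.0 0.103
```

Since the depth error grows roughly as Z²·Δd/(f·B), centimetre-level deviations over indoor distances of 2-8 m are consistent with pixel-level matching accuracy at a modest baseline.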

  12. Should 360-Degree Feedback Be Used Only for Developmental Purposes?

    Science.gov (United States)

    Bracken, David W.; Dalton, Maxine A.; Jako, Robert A.; McCauley, Cynthia D.; Pollman, Victoria A.

    This booklet presents five papers that address the issue of whether 360-degree feedback (in which a manager or executive receives feedback on how bosses, peers, and direct reports see him or her) should be used only for development, or whether 360-degree feedback (also known as multi-rater feedback) should be used for administrative purposes such…

  13. Managing "Academic Value": The 360-Degree Perspective

    Science.gov (United States)

    Wilson, Margaret R.; Corr, Philip J.

    2018-01-01

    The "raison d'etre" of all universities is to create and deliver "academic value", which we define as the sum total of the contributions from the 360-degree "angles" of the academic community, including all categories of staff, as well as external stakeholders (e.g. regulatory, commercial, professional and community…

  14. Improving School Improvement: Development and Validation of the CSIS-360, a 360-Degree Feedback Assessment for School Improvement Specialists

    Science.gov (United States)

    McDougall, Christie M.

    2013-01-01

    The purpose of the mixed methods study was to develop and validate the CSIS-360, a 360-degree feedback assessment to measure competencies of school improvement specialists from multiple perspectives. The study consisted of eight practicing school improvement specialists from a variety of settings. The specialists nominated 23 constituents to…

  15. 360-degree videos: a new visualization technique for astrophysical simulations

    Science.gov (United States)

    Russell, Christopher M. P.

    2017-11-01

    360-degree videos are a new type of movie that renders over all 4π steradian. Video sharing sites such as YouTube now allow this unique content to be shared via virtual reality (VR) goggles, hand-held smartphones/tablets, and computers. Creating 360° videos from astrophysical simulations is not only a new way to view these simulations as you are immersed in them, but is also a way to create engaging content for outreach to the public. We present what we believe is the first 360° video of an astrophysical simulation: a hydrodynamics calculation of the central parsec of the Galactic centre. We also describe how to create such movies, and briefly comment on what new science can be extracted from astrophysical simulations using 360° videos.

  16. Do 360-degree feedback survey results relate to patient satisfaction measures?

    Science.gov (United States)

    Hageman, Michiel G J S; Ring, David C; Gregory, Paul J; Rubash, Harry E; Harmon, Larry

    2015-05-01

    There is evidence that feedback from 360-degree surveys, combined with coaching, can improve physician team performance and quality of patient care. The Physicians Universal Leadership-Teamwork Skills Education (PULSE) 360 is one such survey tool that is used to assess work colleagues' and coworkers' perceptions of a physician's leadership, teamwork, and clinical practice style. The Clinician & Group Consumer Assessment of Healthcare Providers and Systems (CG-CAHPS), developed by the US Department of Health and Human Services to serve as the benchmark for quality health care, is a survey tool for patients to provide feedback based on their recent experiences with staff and clinicians, and soon will be tied to Medicare-based compensation of participating physicians. Prior research has indicated that patients and coworkers often agree in their assessment of physicians' behavioral patterns. The goal of the current study was to determine whether 360-degree (also called multisource) feedback provided by coworkers could predict patient satisfaction/experience ratings. A significant relationship between these two forms of feedback could enable physicians to take a more proactive approach to reinforce their strengths and identify any improvement opportunities in their patient interactions by reviewing feedback from team members. An automated 360-degree software process may be a faster, simpler, and less resource-intensive approach than telephoning and interviewing patients for survey responses, and it could potentially facilitate a more rapid credentialing or quality improvement process, leading to greater fiscal and professional development gains for physicians. Our primary research question was to determine whether PULSE 360 coworkers' ratings correlate with CG-CAHPS patients' ratings of overall satisfaction, recommendation of the physician, surgeon respect, and clarity of the surgeon's explanation. Our secondary research questions were to determine whether CG-CAHPS scores

  17. 360-degree videos: a new visualization technique for astrophysical simulations, applied to the Galactic Center

    Science.gov (United States)

    Russell, Christopher

    2018-01-01

    360-degree videos are a new type of movie that renders over all 4π steradian. Video sharing sites such as YouTube now allow this unique content to be shared via virtual reality (VR) goggles, hand-held smartphones/tablets, and computers. Creating 360-degree videos from astrophysical simulations not only provides a new way to view these simulations thanks to their immersive nature, but also yields engaging content for outreach to the public. We present our 360-degree video of an astrophysical simulation of the Galactic center: a hydrodynamics calculation of the colliding and accreting winds of the 30 Wolf-Rayet stars orbiting within the central parsec. Viewing the movie, which renders column density, from the location of the supermassive black hole gives a unique and immersive perspective of the shocked wind material inspiraling and tidally stretching as it plummets toward the black hole. We also describe how to create such movies, discuss what type of content does and does not look appealing in the 360-degree format, and briefly comment on what new science can be extracted from astrophysical simulations using 360-degree videos.

  18. IMPROVING SPHERICAL PHOTOGRAMMETRY USING 360° OMNI-CAMERAS: USE CASES AND NEW APPLICATIONS

    Directory of Open Access Journals (Sweden)

    G. Fangi

    2018-05-01

    During the last few years, there has been growing exploitation of consumer-grade cameras that capture 360° images. Each device has different features, and the choice should be based on the intended use and the expected final output. The interest in such technology within the research community is related to its versatility, enabling the user to capture the world with an omnidirectional view in just one shot. The potential is huge, and the literature presents many use cases in several research domains, spanning from retail to construction, from tourism to immersive virtual reality solutions. However, the domain that could benefit the most is Cultural Heritage (CH), since these sensors are particularly suitable for documenting a real scene with architectural detail. Following previous research conducted by Fangi, who introduced his own methodology called Spherical Photogrammetry (SP), the aim of this paper is to present some tests conducted with the Panono 360° omni-camera, which reaches a final resolution comparable with a traditional camera, and to validate, almost ten years after the first experiment, its reliability for architectural surveying purposes. Tests have been conducted choosing as study cases the Santa Maria della Piazza and San Francesco alle Scale churches in Ancona, Italy, since they were previously surveyed and documented with the SP methodology. In this way, it has been possible to validate the accuracy of the new survey, performed by means of an omni-camera, against the previous one for both outdoor and indoor scenarios. The core idea behind this work is to verify whether this new sensor can replace the standard image collection phase, speeding up the process while assuring the final accuracy of the survey. The experiments conducted demonstrate that, w.r.t. the SP methodology developed so far, the main advantage of using 360° omni-directional cameras lies in increasing the rapidity of acquisition and

  19. Doctors' perceptions of why 360-degree feedback does (not) work: a qualitative study.

    Science.gov (United States)

    Overeem, Karlijn; Wollersheim, Hub; Driessen, Erik; Lombarts, Kiki; van de Ven, Geertje; Grol, Richard; Arah, Onyebuchi

    2009-09-01

    Delivery of 360-degree feedback is widely used in revalidation programmes. However, little has been done to systematically identify the variables that influence whether or not performance improvement is actually achieved after such assessments. This study aims to explore which factors represent incentives, or disincentives, for consultants to implement suggestions for improvement from 360-degree feedback. In 2007, 109 consultants in the Netherlands were assessed using 360-degree feedback and portfolio learning. We carried out a qualitative study using semi-structured interviews with 23 of these consultants, purposively sampled based on gender, hospital, work experience, specialty and views expressed in a previous questionnaire. A grounded theory approach was used to analyse the transcribed tape-recordings. We identified four groups of factors that can influence consultants' practice improvement after 360-degree feedback: (i) contextual factors related to workload, lack of openness and social support, lack of commitment from hospital management, free-market principles and public distrust; (ii) factors related to feedback; (iii) characteristics of the assessment system, such as facilitators and a portfolio to encourage reflection, concrete improvement goals and annual follow-up interviews, and (iv) individual factors, such as self-efficacy and motivation. It appears that 360-degree feedback can be a positive force for practice improvement provided certain conditions are met, such as that skilled facilitators are available to encourage reflection, concrete goals are set and follow-up interviews are carried out. This study underscores the fact that hospitals and consultant groups should be aware of the existing lack of openness and absence of constructive feedback. Consultants indicated that sharing personal reflections with colleagues could improve the quality of collegial relationships and heighten the chance of real performance improvement.

  20. Developing Your 360-Degree Leadership Potential.

    Science.gov (United States)

    Verma, Nupur; Mohammed, Tan-Lucien; Bhargava, Puneet

    2017-09-01

    Radiologists serve in leadership roles throughout their career, making leadership education an integral part of their development. A maxim of leadership style is summarized by 360-Degree Leadership, which highlights the ability of a leader to lead from any position within the organization while relying on core characteristics to build confidence from within their team. The qualities of leadership discussed can be learned and applied by radiologists at any level. These traits can form a foundation for the leader when faced with unfavorable events, which themselves allow the leader an opportunity to build trust. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  1. Intra prediction using face continuity in 360-degree video coding

    Science.gov (United States)

    Hanhart, Philippe; He, Yuwen; Ye, Yan

    2017-09-01

    This paper presents a new reference sample derivation method for intra prediction in 360-degree video coding. Unlike the conventional reference sample derivation method for 2D video coding, which uses the samples located directly above and on the left of the current block, the proposed method considers the spherical nature of 360-degree video when deriving reference samples located outside the current face to which the block belongs, and derives reference samples that are geometric neighbors on the sphere. The proposed reference sample derivation method was implemented in the Joint Exploration Model 3.0 (JEM-3.0) for the cubemap projection format. Simulation results for the all intra configuration show that, when compared with the conventional reference sample derivation method, the proposed method gives, on average, luma BD-rate reduction of 0.3% in terms of the weighted spherical PSNR (WS-PSNR) and spherical PSNR (SPSNR) metrics.
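
The face-continuity idea can be sketched as mapping a sample position that falls off one cube face through the sphere onto the geometrically adjacent face. The face layout and sign conventions below are an illustrative assumption, not the exact cubemap convention used in JEM:

```python
import numpy as np

def cube_uv_to_dir(face, u, v):
    """Face-local coordinates u, v in [-1, 1] to a 3D direction.
    The face layout and sign conventions are an illustrative choice."""
    if face == '+x': return np.array([1.0, v, -u])
    if face == '-x': return np.array([-1.0, v, u])
    if face == '+y': return np.array([u, 1.0, -v])
    if face == '-y': return np.array([u, -1.0, v])
    if face == '+z': return np.array([u, v, 1.0])
    return np.array([-u, v, -1.0])               # '-z'

def dir_to_cube_uv(d):
    """Project a direction back onto the cube face it actually hits."""
    ax = int(np.argmax(np.abs(d)))
    face = ('+' if d[ax] > 0 else '-') + 'xyz'[ax]
    x, y, z = d / abs(d[ax])
    if face == '+x': return face, -z, y
    if face == '-x': return face, z, y
    if face == '+y': return face, x, -z
    if face == '-y': return face, x, z
    if face == '+z': return face, x, y
    return face, -x, y                           # '-z'

# A reference sample just past the right edge of the front ('+z') face
# (u = 1.1) is, on the sphere, a sample on the '+x' face.
face, u, v = dir_to_cube_uv(cube_uv_to_dir('+z', 1.1, 0.2))
print(face)  # → +x
```

Deriving intra reference samples this way replaces the meaningless across-the-face-boundary samples of the 2D layout with their true spherical neighbours.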

  2. Maximizing the Value of 360-Degree Feedback: A Process for Successful Individual and Organizational Development.

    Science.gov (United States)

    Tornow, Walter W.; London, Manuel

    Ways in which organizations can enhance their use of "360-degree feedback" are presented. The book begins with a review of the process itself, emphasizing that 360-degree feedback should be a core element of self-development. The book is divided into three parts. Part 1 describes how to maximize the value of the process for individual…

  3. The Applicability of 360 Degree Feedback Performance Appraisal System: A Brigade Sample

    Directory of Open Access Journals (Sweden)

    Hakan TURGUT

    2012-03-01

    On the subject of measuring individual performance, considered one of the fundamental functions of the human resources management process, the 360 Degree Feedback Performance Appraisal System (360 DFPAS) is preferred because it takes more than one perspective into account, providing opportunities to achieve more objective results. It is thought that the applicability of the above-mentioned method has not been investigated enough in the public sector, where most employment in Turkey takes place. The purpose of this study is to probe the applicability of the 360 DFPAS to low- and mid-level managers in a brigade. Within this framework, differences between the raters (manager, subordinate, peer, client, self-evaluation) have been examined and comparisons made between traditional performance appraisal and the 360 DFPAS. Meaningful differences between the two evaluation methods (traditional manager evaluation versus 360 DFPAS) appear only for the inner-client raters; taken as a whole, there are no meaningful differences between traditional performance appraisal and the 360 DFPAS.

  4. Antecedents and Consequences of Reactions to Developmental 360[degrees] Feedback

    Science.gov (United States)

    Atwater, Leanne E.; Brett, Joan F.

    2005-01-01

    This study investigated the factors that influence leaders' reactions to 360[degrees] feedback and the relationship of feedback reactions to subsequent development activities and changes in leader behavior. For leaders with low ratings, those who agreed with others about their ratings were less motivated than those who received low ratings and…

  5. 360-degrees profilometry using strip-light projection coupled to Fourier phase-demodulation.

    Science.gov (United States)

    Servin, Manuel; Padilla, Moises; Garnica, Guillermo

    2016-01-11

    360-degree (360°) digitalization of three-dimensional (3D) solids using a projected light-strip is a well-established technique in academic and commercial profilometers. These profilometers project a light-strip over the solid being digitized while the solid is rotated a full revolution, or 360 degrees. A computer program then typically extracts the centroid of this light-strip, and the shape of the solid is obtained by triangulation. Here, instead of using intensity-based light-strip centroid estimation, we propose Fourier phase-demodulation for 360° solid digitalization. The advantage of Fourier demodulation over strip-centroid estimation is that the accuracy of phase demodulation increases linearly with the fringe density, while the centroid-estimation errors of the light-strip approach are independent of it. We propose first to construct a carrier-frequency fringe-pattern by closely stacking the individual light-strip images recorded while the solid is rotated. Next, this high-density fringe-pattern is phase-demodulated using the standard Fourier technique. To test the feasibility of this Fourier demodulation approach, we have digitized two solids with increasing topographic complexity: a Rubik's cube and a plastic model of a human skull. According to our results, phase demodulation based on the Fourier technique is less noisy than triangulation based on centroid light-strip estimation. Moreover, Fourier demodulation also provides the amplitude of the analytic signal, which is valuable information for the visualization of surface details.
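
The core of the standard Fourier technique can be sketched in one dimension: multiply the carrier-frequency fringe pattern by the complex carrier, low-pass filter around DC in the Fourier domain, and read the phase off the analytic signal. A minimal sketch on a synthetic fringe pattern; the carrier frequency, amplitudes, and filter width are illustrative choices, not the paper's parameters:

```python
import numpy as np

# Synthetic carrier-frequency fringe pattern with a known phase term.
N = 512
x = np.arange(N)
f0 = 32.0 / N                                  # carrier, cycles/sample
phase = 1.5 * np.sin(2.0 * np.pi * x / N)      # known test phase
fringes = 128.0 + 100.0 * np.cos(2.0 * np.pi * f0 * x + phase)

# Shift the carrier to DC, keep a narrow band, invert, take the angle.
spec = np.fft.fft(fringes * np.exp(-2j * np.pi * f0 * x))
lowpass = np.zeros(N, dtype=complex)
keep = 16                                      # band half-width in bins
lowpass[:keep], lowpass[-keep:] = spec[:keep], spec[-keep:]
recovered = np.angle(np.fft.ifft(lowpass))     # demodulated phase

err = np.max(np.abs(recovered - phase))        # small: phase recovered
```

The band-pass rejects both the background term and the conjugate sideband, leaving the analytic signal whose angle is the sought phase (and whose magnitude is the amplitude mentioned in the abstract).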

  6. 360-degree interactive video application for Cultural Heritage Education

    OpenAIRE

    Argyriou, L.; Economou, D.; Bouki, V.

    2017-01-01

    There is growing interest nowadays in using immersive technologies to promote Cultural Heritage (CH) and to engage and educate visitors, tourists and citizens. Such examples refer mainly to the use of Virtual Reality (VR) technology, or focus on the enhancement of the real world by superimposing digital artefacts, in so-called Augmented Reality (AR) applications. A new medium that has been introduced lately as an innovative form of experiencing immersion is the 360-degree video, imposing further rese...

  7. Perceptions of Women and Men Leaders Following 360-Degree Feedback Evaluations

    Science.gov (United States)

    Pfaff, Lawrence A.; Boatwright, Karyn J.; Potthoff, Andrea L.; Finan, Caitlin; Ulrey, Leigh Ann; Huber, Daniel M.

    2013-01-01

    In this study, researchers used a customized 360-degree method to examine the frequency with which 1,546 men and 721 women leaders perceived themselves and were perceived by colleagues as using 10 relational and 10 task-oriented leadership behaviors, as addressed in the Management-Leadership Practices Inventory (MLPI). As hypothesized, men and…

  8. Spherical rotation orientation indication for HEVC and JEM coding of 360 degree video

    Science.gov (United States)

    Boyce, Jill; Xu, Qian

    2017-09-01

    Omnidirectional (or "360 degree") video, representing a panoramic view of a spherical 360°×180° scene, can be encoded using conventional video compression standards once it has been projection-mapped to a 2D rectangular format. The equirectangular projection format is currently used for mapping 360 degree video to a rectangular representation for coding with HEVC/JEM. However, video in the top and bottom regions of the image, corresponding to the "north pole" and "south pole" of the spherical representation, is significantly warped. We propose to perform spherical rotation of the input video prior to HEVC/JEM encoding in order to improve the coding efficiency, and to signal parameters in a supplemental enhancement information (SEI) message that describe the inverse rotation process recommended to be applied following HEVC/JEM decoding, prior to display. Experimental results show that up to 17.8% bitrate gain (using the WS-PSNR end-to-end metric) can be achieved for the Chairlift sequence using HM16.15, and 11.9% gain using JEM6.0, with an average gain of 2.9% for HM16.15 and 2.2% for JEM6.0.
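
The spherical rotation applied before encoding amounts to remapping each equirectangular pixel through a 3D rotation. A minimal nearest-neighbour sketch; the longitude/latitude convention and axis layout are illustrative assumptions, not the SEI message's parameterization:

```python
import numpy as np

def rotate_equirect(img, R):
    """Rotate an equirectangular image by remapping each output pixel
    through the inverse rotation (nearest-neighbour resampling)."""
    H, W = img.shape
    v, u = np.mgrid[0:H, 0:W]
    lon = (u + 0.5) / W * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v + 0.5) / H * np.pi
    d = np.stack([np.cos(lat) * np.sin(lon),
                  np.sin(lat),
                  np.cos(lat) * np.cos(lon)], axis=-1)
    s = d @ R                                  # d @ R applies R.T to d
    lon_s = np.arctan2(s[..., 0], s[..., 2])
    lat_s = np.arcsin(np.clip(s[..., 1], -1.0, 1.0))
    u_s = ((lon_s + np.pi) / (2.0 * np.pi) * W - 0.5).round().astype(int) % W
    v_s = ((np.pi / 2.0 - lat_s) / np.pi * H - 0.5).round().astype(int) % H
    return img[v_s, u_s]

# A 90° yaw about the vertical axis shifts an equirectangular image by
# a quarter of its width.
yaw = np.pi / 2.0
R = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
              [0.0, 1.0, 0.0],
              [-np.sin(yaw), 0.0, np.cos(yaw)]])
img = np.arange(8 * 16, dtype=float).reshape(8, 16)
out = rotate_equirect(img, R)                  # == np.roll(img, 4, axis=1)
```

Encoding a rotated sphere and signalling the inverse rotation for display is lossless in intent: the decoder simply applies the transpose rotation in the same remapping.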

  9. 360-Degree Rhetorical Analysis of Job Hunting: A Four-Part, Multimodal Project

    Science.gov (United States)

    Ding, Huiling; Ding, Xin

    2013-01-01

    This article proposes the use of a four-component multimodal employment project that offers students a 360-degree understanding of the rhetorical situations surrounding job searches. More specifically, we argue for the use of the four deliverables of written resumes and cover letters, mock oral onsite interview, video resume analysis, and peer…

  10. Evaluating the effectiveness of a 360-degree performance appraisal and feedback in a selected steel organisation / Koetlisi Eugene Lithakong

    OpenAIRE

    Lithakong, Koetlisi Eugene

    2014-01-01

    Most companies are competing in the diverse global markets, and competitive advantage through human capital is becoming very important. Employee development for high productivity and the use of effective tools to measure their performance are therefore paramount. One such tool is the 360-degree performance appraisal system. The study on the effectiveness of the 360-degree performance appraisal was conducted on a selected steel organisation. The primary objective of the research...

  11. Extreme embrittlement of austenitic stainless steel irradiated to 75-81 dpa at 335-360°C

    Energy Technology Data Exchange (ETDEWEB)

    Porollo, S.I.; Vorobjev, A.N.; Konobeev, Yu.V. [Institute of Physics and Power Engineering, Obninsk (Russian Federation)] [and others]

    1997-04-01

    It is generally accepted that void swelling of austenitic steels ceases below some temperature in the range 340-360°C, and exhibits relatively low swelling rates up to ~400°C. This perception may not be correct at all irradiation conditions, however, since it was largely developed from data obtained at relatively high displacement rates in fast reactors whose inlet temperatures were in the range 360-370°C. There is an expectation, however, that the swelling regime can shift to lower temperatures at low displacement rates via the well-known "temperature shift" phenomenon. It is also known that the swelling rates at the lower end of the swelling regime increase continuously at a sluggish rate, never approaching the terminal 1%/dpa level within the duration of previous experiments. This paper presents the results of an experiment conducted in the BN-350 fast reactor in Kazakhstan that involved the irradiation of argon-pressurized thin-walled tubes (0-200 MPa hoop stress) constructed from Fe-16Cr-15Ni-3Mo-Nb stabilized steel in contact with the sodium coolant, which enters the reactor at ~270°C. Tubes in the annealed condition reached 75 dpa at 335°C, and another set in the 20% cold-worked condition reached 81 dpa at 360°C. Upon disassembly all tubes, except those in the stress-free condition, were found to have failed in an extremely brittle fashion. The stress-free tubes exhibited diameter changes that imply swelling levels ranging from 9 to 16%. It is expected that stress-enhancement of swelling induced even larger swelling levels in the stressed tubes.
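For readers converting the reported diameter changes into swelling values: assuming isotropic void swelling of a thin-walled tube, volumetric swelling follows from the diametral strain as S = (1 + ΔD/D)³ − 1 ≈ 3·ΔD/D. A minimal sketch (the 3% strain figure in the comment is illustrative, not a number from the paper):

```python
def swelling_from_diametral_strain(dD_over_D):
    """Volumetric swelling from measured diametral strain, assuming
    isotropic swelling: S = (1 + dD/D)**3 - 1."""
    return (1.0 + dD_over_D) ** 3 - 1.0

# For small strains, S is roughly three times the diametral strain:
# e.g. a 3% diameter increase corresponds to ~9.3% swelling, at the
# lower end of the 9-16% range reported for the stress-free tubes.
```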

  12. Extreme embrittlement of austenitic stainless steel irradiated to 75-81 dpa at 335-360 degrees C

    International Nuclear Information System (INIS)

    Porollo, S.I.; Vorobjev, A.N.; Konobeev, Yu.V.

    1997-01-01

    It is generally accepted that void swelling of austenitic steels ceases below some temperature in the range 340-360 degrees C, and exhibits relatively low swelling rates up to ∼400 degrees C. This perception may not be correct at all irradiation conditions, however, since it was largely developed from data obtained at relatively high displacement rates in fast reactors whose inlet temperatures were in the range 360-370 degrees C. There is an expectation, however, that the swelling regime can shift to lower temperatures at low displacement rates via the well-known "temperature shift" phenomenon. It is also known that the swelling rates at the lower end of the swelling regime increase continuously at a sluggish rate, never approaching the terminal 1%/dpa level within the duration of previous experiments. This paper presents the results of an experiment conducted in the BN-350 fast reactor in Kazakhstan that involved the irradiation of argon-pressurized thin-walled tubes (0-200 MPa hoop stress) constructed from Fe-16Cr-15Ni-3Mo-Nb stabilized steel in contact with the sodium coolant, which enters the reactor at ∼270 degrees C. Tubes in the annealed condition reached 75 dpa at 335 degrees C, and another set in the 20% cold-worked condition reached 81 dpa at 360 degrees C. Upon disassembly all tubes, except those in the stress-free condition, were found to have failed in an extremely brittle fashion. The stress-free tubes exhibited diameter changes that imply swelling levels ranging from 9 to 16%. It is expected that stress-enhancement of swelling induced even larger swelling levels in the stressed tubes.

  13. 360 Degrees Project: Final Report of 1972-73. National Career Education Television Project.

    Science.gov (United States)

    Wisconsin Univ., Madison. Univ. Extension.

    Project 360 Degrees was a mass-media, multi-State, one-year effort in adult career education initiated by WHA-TV, the public television station of the University of Wisconsin-Extension, and funded by the U.S. Office of Education. The overall goal of the project was to provide, through a coordinated media system, information and motivation that…

  14. Use of 360-degree assessment of residents in internal medicine in a Danish setting

    DEFF Research Database (Denmark)

    Allerup, Peter

    2007-01-01

    objectives to be assessed. We considered 22 of these suitable for assessment by 360-degree assessment. METHODS: Medical departments of six hospitals contributed 42 interns to the study. Each resident was assessed by ten persons, of whom one was a secretary, four were nurses and five senior doctors...

  15. The 360-degree evaluation model: A method for assessing competency in graduate nursing students. A pilot research study.

    Science.gov (United States)

    Cormack, Carrie L; Jensen, Elizabeth; Durham, Catherine O; Smith, Gigi; Dumas, Bonnie

    2018-05-01

    The 360 Degree Evaluation Model is one means to provide a comprehensive view of clinical competency and readiness for progression in an online nursing program. This pilot project aimed to evaluate the effectiveness of implementing a 360 Degree Evaluation of the clinical competency of graduate advanced practice nursing students. The 360 Degree Evaluation, adapted from corporate industry, encompasses assessment of student knowledge, skills, behaviors and attitudes and validates students' progression from novice to competent. The setting was a cohort of advanced practice nursing students across four progressive clinical semesters; participants were graduate advanced practice nursing students (N = 54). Descriptive statistics and Jonckheere's Trend Test were used to evaluate OSCE scores from a graded rubric, standardized patient survey scores, student reflections and preceptor evaluations. All students passed the four OSCEs on a first or second attempt. Scaffolding OSCEs over time allowed faculty to identify cohort weaknesses and create subsequent learning opportunities. Standardized patients' evaluation of the students' performance in the domains of knowledge, skills and attitudes showed high scores of 96% in all OSCEs. Students' self-reflection comments were a mix of strengths and weaknesses in their self-evaluation, demonstrating themes as students progressed. Preceptor evaluation scores revealed the largest increase in knowledge and learning skills (NONPF domain 1), from an aggregate average of 90% in the first clinical course to an average of 95%. The 360 Degree Evaluation Model provided a comprehensive evaluation of the student and critical information for the faculty, yielding individual student and cohort data and the ability to analyze cohort themes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. 360-degree video and X-ray modeling of the Galactic center's inner parsec

    Science.gov (United States)

    Russell, Christopher Michael Post; Wang, Daniel; Cuadra, Jorge

    2017-08-01

    360-degree videos, which render an image over all 4π steradians, provide a unique and immersive way to visualize astrophysical simulations. Video sharing sites such as YouTube allow these videos to be shared with the masses; they can be viewed in their 360° nature on computer screens, with smartphones, or, best of all, in virtual-reality (VR) goggles. We present the first such 360° video of an astrophysical simulation: a hydrodynamics calculation of the Wolf-Rayet stars and their ejected winds in the inner parsec of the Galactic center. Viewed from the perspective of the super-massive black hole (SMBH), the most striking aspect of the video, which renders column density, is the inspiraling and stretching of clumps of WR-wind material as they make their way towards the SMBH. We briefly describe how to make 360° videos and how to publish them online in their desired 360° format. Additionally, we discuss computing the thermal X-ray emission from a suite of Galactic-center hydrodynamic simulations that have various SMBH feedback mechanisms, which are compared to Chandra X-ray Visionary Program observations of the region. Over a 2-5” ring centered on Sgr A*, the spectral shape is well matched, indicating that the WR winds are the dominant source of the thermal X-ray emission. Furthermore, the X-ray flux depends on the SMBH feedback due to the feedback's ability to clear out material from the central parsec. A moderate outburst is necessary to explain the current thermal X-ray flux, even though the outburst ended ˜100 yr ago.

  17. The usefulness of 360 degree feedback in developing a personal work style

    OpenAIRE

    Chicu Nicoleta; Nedelcu Alexandra Catalina

    2017-01-01

    The present study focuses on a new approach in the process of developing personal work styles, based on the usefulness of 360 degree feedback, taking into consideration the following dimensions: work-life balance, gender-age, self-development and the behavior a person has, following the process of self-development and defining work style. Using different approaches, the study attempts to identify if there are some differences between the evaluations received from the family and the ones from ...

  18. General Creighton Abrams: Ethical Leadership at the Strategic Level

    National Research Council Canada - National Science Library

    Leatherman, John

    1998-01-01

    .... This study describes General Abrams' ethical strategic leadership style during his Army career and examines the extent that his ethical principles and examples affected his soldiers and the Army...

  19. 360-Degree Feedback Implementation Plan: Dean Position, Graduate School of Business and Public Policy, Naval Postgraduate School

    National Research Council Canada - National Science Library

    Morrison, Devin

    2002-01-01

    360-degree feedback is a personal development and appraisal tool designed to quantify the competencies and skills of fellow employees by tapping the collective experience of their superiors, subordinates, and peers...

  20. 360-degree suture trabeculotomy ab interno to treat open-angle glaucoma: 2-year outcomes

    Science.gov (United States)

    Sato, Tomoki; Kawaji, Takahiro; Hirata, Akira; Mizoguchi, Takanori

    2018-01-01

    Purpose: The purpose of this study was to evaluate the efficacy of 360-degree suture trabeculotomy (360S-LOT) ab interno for treating open-angle glaucoma (OAG). Risk factors of surgical failure were examined. Patients and methods: 360S-LOT ab interno alone was performed for patients with uncontrolled OAG, and combined 360S-LOT ab interno/phacoemulsification was performed for patients with controlled OAG with a visually significant cataract, between March 2014 and September 2015 at a single center. The patients were prospectively followed for 2 years. The main outcome measures included 2-year intraocular pressure (IOP), number of anti-glaucoma medications used, postoperative complications, and predictive factors of surgical failure. Kaplan–Meier analysis was performed, with surgical success (with or without medication use) defined as postoperative IOP ≤15 mmHg and IOP reduction ≥20% (criterion A) or IOP ≤12 mmHg and IOP reduction ≥30% (criterion B). Predictive factors were evaluated using Cox proportional hazard ratios. Results: A total of 64 eyes of 64 patients were included, and 50 (78%) of the 64 eyes underwent the combined phacoemulsification procedure. Surgery significantly reduced IOP from 18.4 ± 2.9 mmHg before surgery to 13.4 ± 3.0 mmHg after surgery (P interno procedure is a favorable option for treating eyes with mild or moderate OAG. PMID:29844656
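The two success criteria in the abstract can be expressed as a simple predicate. This is a hypothetical helper for illustration, not code from the study:

```python
def surgical_success(iop_pre, iop_post, criterion="A"):
    """Success per the study's definitions: criterion A requires
    postoperative IOP <= 15 mmHg and >= 20% IOP reduction; criterion B
    requires IOP <= 12 mmHg and >= 30% reduction."""
    reduction = (iop_pre - iop_post) / iop_pre
    if criterion == "A":
        return iop_post <= 15 and reduction >= 0.20
    if criterion == "B":
        return iop_post <= 12 and reduction >= 0.30
    raise ValueError("criterion must be 'A' or 'B'")

# The mean change reported (18.4 -> 13.4 mmHg, a ~27% reduction)
# meets criterion A but not criterion B (13.4 mmHg > 12 mmHg).
```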

  1. 3D Modelling with the Samsung Gear 360

    Science.gov (United States)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2017-02-01

    The Samsung Gear 360 is a consumer grade spherical camera able to capture photos and videos. The aim of this work is to test the metric accuracy and the level of detail achievable with the Samsung Gear 360 coupled with digital modelling techniques based on photogrammetry/computer vision algorithms. Results demonstrate that the direct use of the projection generated inside the mobile phone or with Gear 360 Action Director (the desktop software for post-processing) has a relatively low metric accuracy. As these results were in contrast with the accuracy achieved by using the original fisheye images (front and rear facing images) in photogrammetric reconstructions, an alternative solution to generate the equirectangular projections was developed. A calibration aimed at understanding the intrinsic parameters of the two-lens camera, as well as their relative orientation, allowed one to generate new equirectangular projections from which a significant improvement of geometric accuracy has been achieved.
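The equirectangular-generation step the authors describe amounts to remapping the two fisheye images onto one panorama. The sketch below assumes an ideal equidistant fisheye model with a 190° field of view and perfectly aligned lenses; the calibration in the paper instead estimates the real intrinsic parameters and relative orientation, which is exactly what this toy version omits.

```python
import numpy as np

def dual_fisheye_to_equirect(front, back, out_w=64, fov_deg=190.0):
    """Map two square equidistant fisheye images (front/back lens) to one
    equirectangular panorama.  Hypothetical ideal lenses, no calibration."""
    out_h = out_w // 2
    side = front.shape[0]                       # fisheye images assumed square
    lon = (np.arange(out_w) + 0.5) / out_w * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(out_h) + 0.5) / out_h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    x = np.cos(lat) * np.sin(lon)               # unit viewing direction
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    use_front = z >= 0                          # front lens covers z >= 0
    zs = np.where(use_front, z, -z)             # mirror into the back lens
    xs = np.where(use_front, x, -x)
    theta = np.arccos(np.clip(zs, -1.0, 1.0))   # angle from optical axis
    r = theta / np.radians(fov_deg / 2) * (side / 2)  # equidistant model
    phi = np.arctan2(y, xs)
    u = np.clip(side / 2 + r * np.cos(phi), 0, side - 1).astype(int)
    v = np.clip(side / 2 + r * np.sin(phi), 0, side - 1).astype(int)
    return np.where(use_front[..., None], front[v, u], back[v, u])
```

Real lenses deviate from this ideal model, which is why stitching from uncalibrated projections degrades metric accuracy in the way the abstract reports.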

  2. Eesti tankivalikud: Abrams või Leopard 2 (Estonia's tank choices: Abrams or Leopard 2) / Holger Roonemaa

    Index Scriptorium Estoniae

    Roonemaa, Holger

    2010-01-01

    There is as yet no official decision to buy tanks for the defence forces, but if the Estonian defence forces do start purchasing tanks, the choice will likely be between the Leopard 2 and the Abrams. Also covers tank-related costs and the situation in Norway.

  3. 360 DEGREE PHOTOGRAPHY TO DOCUMENT AND TRAIN AND ORIENT PERSONNEL FOR DECONTAMINATION AND DECOMMISSIONING

    International Nuclear Information System (INIS)

    LEBARON, G.J.

    2001-01-01

    360° photo technology is being used to document conditions, especially hazardous conditions, at U.S. Department of Energy (DOE) facilities that are being closed. Traditional efforts to document the condition of rooms and cells, especially those difficult to enter due to the hazards present, using engineering drawings, documents, "traditional flat" photographs or videos don't provide perspective: they miss items or pan quickly across areas of interest, with little opportunity to study details. It therefore becomes necessary to make multiple entries into these hazardous areas, so work activities take longer and exposure and the risk of accidents increase. High-resolution digital cameras, in conjunction with software techniques, make possible 360° photos that allow a person to look all around, up and down, and zoom in or out. The software provides the opportunity to attach other information to a 360° photo, such as sound files providing audio information; flat photos providing additional detail or information about what is behind a panel or around a corner; and text information which can be used to show radiological conditions or identify other hazards present but not readily visible. The software also allows other 360° photos to be attached to create a virtual tour where the user can move from area to area or room to room. The user is able to stop, study and zoom in on areas of interest. A virtual tour of a building or room can be used for facility documentation, work planning and orientation, and training. Documentation is developed during facility closure so people involved in follow-on activities can gain a perspective of the area, focus on points of interest and discuss what they would do or how they would respond to and manage conditions. Decontamination and Decommissioning (D and D) planners and workers can make use of the tour to plan work and decide ahead of time, while looking at the areas of interest, what tasks will be performed and how.

  4. Microscopic Abrams-Strogatz model of language competition

    OpenAIRE

    Stauffer, Dietrich; Castello, Xavier; Eguiluz, Victor M.; Miguel, Maxi San

    2006-01-01

    The differential equation of Abrams and Strogatz for the competition between two languages is compared with agent based Monte Carlo simulations for fully connected networks as well as for lattices in one, two and three dimensions, with up to 10^9 agents. In the case of socially equivalent languages, agent-based models and a mean field approximation give grossly different results.
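The agent-based version of the model can be sketched in a few lines. The per-step switching probabilities s·xᵃ (to language A) and (1−s)·(1−x)ᵃ (to language B) on a fully connected network, with volatility a ≈ 1.31, follow the standard microscopic Abrams-Strogatz formulation; the parameter names and defaults below are ours.

```python
import numpy as np

def abrams_strogatz_mc(n=2000, steps=200, x0=0.6, s=0.5, a=1.31, seed=1):
    """Monte Carlo simulation of the Abrams-Strogatz language-competition
    model on a fully connected network of n agents.

    x  = fraction currently speaking language A
    s  = prestige of language A (s = 0.5 means socially equivalent languages)
    Returns the trajectory of x, one value per step."""
    rng = np.random.default_rng(seed)
    speaks_a = rng.random(n) < x0
    history = []
    for _ in range(steps):
        x = speaks_a.mean()
        p_to_a = s * x ** a                # B-speaker switches to A
        p_to_b = (1 - s) * (1 - x) ** a    # A-speaker switches to B
        r = rng.random(n)
        switch_to_b = speaks_a & (r < p_to_b)
        switch_to_a = ~speaks_a & (r < p_to_a)
        speaks_a = (speaks_a & ~switch_to_b) | switch_to_a
        history.append(x)
    return history
```

For s = 0.5 the deterministic mean-field equation started exactly at x = 0.5 sits at its unstable fixed point, while finite agent populations fluctuate away and drift to consensus, which is consistent with the "grossly different results" the abstract reports for socially equivalent languages.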

  5. The usefulness of 360 degree feedback in developing a personal work style

    Directory of Open Access Journals (Sweden)

    Chicu Nicoleta

    2017-07-01

    The present study focuses on a new approach in the process of developing personal work styles, based on the usefulness of 360 degree feedback, taking into consideration the following dimensions: work-life balance, gender and age, self-development, and the behavior a person has following the process of self-development and defining a work style. Using different approaches, the study attempts to identify whether there are differences between the evaluations received from the family and the ones from the work environment. All these factors aim at improving personal, but also organizational, performance. Based on the current body of the literature, a discussion is made and conclusions are presented.

  6. 3D MODELLING WITH THE SAMSUNG GEAR 360

    Directory of Open Access Journals (Sweden)

    L. Barazzetti

    2017-02-01

    The Samsung Gear 360 is a consumer grade spherical camera able to capture photos and videos. The aim of this work is to test the metric accuracy and the level of detail achievable with the Samsung Gear 360 coupled with digital modelling techniques based on photogrammetry/computer vision algorithms. Results demonstrate that the direct use of the projection generated inside the mobile phone or with Gear 360 Action Director (the desktop software for post-processing) has a relatively low metric accuracy. As these results were in contrast with the accuracy achieved by using the original fisheye images (front and rear facing images) in photogrammetric reconstructions, an alternative solution to generate the equirectangular projections was developed. A calibration aimed at understanding the intrinsic parameters of the two-lens camera, as well as their relative orientation, allowed one to generate new equirectangular projections from which a significant improvement of geometric accuracy has been achieved.

  7. A 360-degree floating 3D display based on light field regeneration.

    Science.gov (United States)

    Xia, Xinxing; Liu, Xu; Li, Haifeng; Zheng, Zhenrong; Wang, Han; Peng, Yifan; Shen, Weidong

    2013-05-06

    Using a light field reconstruction technique, we can display a floating 3D scene in the air that is viewable from all 360 degrees around with correct occlusion effects. A high-frame-rate color projector and a flat light field scanning screen are used in the system to create the light field of a real 3D scene in the air above the spinning screen. The principle and display performance of this approach are investigated in this paper. The image synthesis method for all the surrounding viewpoints is analyzed, and the 3D spatial resolution and angular resolution of the common display zone are employed to evaluate display performance. A prototype was built, and real 3D color animated images have been presented vividly. The experimental results verified the representability of this method.

  8. Wideband 360 degrees microwave photonic phase shifter based on slow light in semiconductor optical amplifiers.

    Science.gov (United States)

    Xue, Weiqi; Sales, Salvador; Capmany, José; Mørk, Jesper

    2010-03-15

    In this work we demonstrate for the first time, to the best of our knowledge, a continuously tunable 360 degrees microwave phase shifter spanning a microwave bandwidth of several tens of GHz (up to 40 GHz). The proposed device exploits the phenomenon of coherent population oscillations, enhanced by optical filtering, in combination with a regeneration stage realized by four-wave mixing effects. This combination provides scalability: three hybrid stages are demonstrated but the technology allows an all-integrated device. The microwave operation frequency limitations of the suggested technique, dictated by the underlying physics, are also analyzed.

  9. Yield of Abrams needle pleural biopsy in exudative pleural effusion

    International Nuclear Information System (INIS)

    Khan, I.N.; Zaman, M.; Khan, N.; Jadoon, H.; Ahmed, A.

    2009-01-01

    Pleural effusion is the abnormal collection of fluid in the pleural space resulting from excessive fluid production or decreased absorption, and it is one of the most common clinical conditions encountered in pulmonology clinics and in hospitals. The objective of this prospective study was to evaluate the diagnostic role of Abrams needle biopsy in exudative pleural effusion. The study was performed at the Department of Pulmonology, Ayub Teaching Hospital, Abbottabad over a period of 1 year, i.e., January 2008 to December 2008. Sixty-three patients of either sex and all ages with exudative pleural effusion, on whom Abrams needle biopsy was performed, were included in the study. A minimum of four specimens was taken from each patient and histopathology done. Out of 63 patients, histopathology revealed the cause in 60 (95%) cases. Tuberculosis, malignancy and rheumatoid pleurisy were confirmed in 34, 24, and 2 cases respectively. Specimens of 3 patients did not reveal any result, showed non-specific inflammation, and were further investigated accordingly. The diagnostic yield of the biopsy was 95%. Pleural biopsy is still a reliable and valuable investigation in diagnosing pleural effusion, provided that an adequate pleural specimen is taken. (author)

  10. Leadership development in a professional medical society using 360-degree survey feedback to assess emotional intelligence.

    Science.gov (United States)

    Gregory, Paul J; Robbins, Benjamin; Schwaitzberg, Steven D; Harmon, Larry

    2017-09-01

    The current research evaluated the potential utility of a 360-degree survey feedback program for measuring leadership quality in potential committee leaders of a professional medical association (PMA). Emotional intelligence as measured by the extent to which self-other agreement existed in the 360-degree survey ratings was explored as a key predictor of leadership quality in the potential leaders. A non-experimental correlational survey design was implemented to assess the variation in leadership quality scores across the sample of potential leaders. A total of 63 of 86 (76%) of those invited to participate did so. All potential leaders received feedback from PMA Leadership, PMA Colleagues, and PMA Staff and were asked to complete self-ratings regarding their behavior. Analyses of variance revealed a consistent pattern of results as Under-Estimators and Accurate Estimators-Favorable were rated significantly higher than Over-Estimators in several leadership behaviors. Emotional intelligence as conceptualized in this study was positively related to overall performance ratings of potential leaders. The ever-increasing roles and potential responsibilities for PMAs suggest that these organizations should consider multisource performance reviews as these potential future PMA executives rise through their organizations to assume leadership positions with profound potential impact on healthcare. The current findings support the notion that potential leaders who demonstrated a humble pattern or an accurate pattern of self-rating scored significantly higher in their leadership, teamwork, and interpersonal/communication skills than those with an aggrandizing self-rating.
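The Over-/Under-/Accurate-Estimator grouping above is a classification by self-other rating gap. A hypothetical sketch follows; the 0.5-point margin and rating scale are illustrative rather than the study's, and the study's further split of Accurate Estimators into Favorable and Unfavorable by rating level is omitted here:

```python
def classify_estimator(self_rating, other_ratings, margin=0.5):
    """Label a leader by self-other agreement, as in 360-degree feedback
    research: compare a self-rating with the mean of others' ratings.
    The 0.5-point margin is an illustrative threshold, not the paper's."""
    others_mean = sum(other_ratings) / len(other_ratings)
    gap = self_rating - others_mean
    if gap > margin:
        return "Over-Estimator"
    if gap < -margin:
        return "Under-Estimator"
    return "Accurate Estimator"
```

Under this scheme, emotional intelligence is operationalized as the absence of a large positive gap: leaders who rate themselves at or below how others rate them fall into the groups that scored higher on leadership behaviors.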

  11. Hospital 360°.

    Science.gov (United States)

    Giraldo Valencia, Juan Carlos; Delgado, Liliana Claudia

    2015-01-01

    There are forces that are greater than the individual performance of each hospital institution and of the structure of each country's health system. The world is changing, and to face up to the future in the best possible way we need to understand how contexts and emerging trends link up and how they affect the hospital sector. The Colombian Association of Hospitals and Clinics, ACHC, has thus come up with the Hospital 360° concept, which takes as its model hospitals capable of anticipating changing contexts by means of the transition between present and future, and which takes on board experience from the global socio-economic, demographic, political, environmental and technological fields. Hospital 360° is an invitation to reinvent processes and institutions themselves, allowing them to adapt and incorporate a high degree of functional flexibility. Hospital 360° pursues goals of efficiency, effectiveness and relevance, but also of impact and sustainability, and is coherent with the internal needs of hospital institutions and society for long-term benefits.

  12. Penilaian Kinerja dengan Menggunakan Konsep 360 Derajat Feedback (Performance Appraisal Using the 360-Degree Feedback Concept)

    OpenAIRE

    Widya, Rita

    2004-01-01

    The concept of 360 degree appraisal is straightforward enough. In these systems, individuals evaluate themselves and receive feedback from other employees and organizational members. The feedback comes from an individual's immediate supervisor and peers and, if the individual is a manager, from his or her direct subordinates. Employee performance can be improved through feedback, with employees evaluating themselves and receiving feedback from other employees. By applying the concept of 360 degree feedback for gett...

  13. 360 Derece Performans Değerlendirme ve Geri Bildirim: Bir Üniversite Mediko-Sosyal Merkezi Birim Amirlerinin Yönetsel Yetkinliklerinin Değerlendirilmesi Üzerine Pilot Uygulama Örneği (360 Degree Performance Appraisal and Feedback: “A Pilot Study Illustration in Appraising the Managerial Skills of Supervisors Working in Health Care Centre of a University”)

    Directory of Open Access Journals (Sweden)

    Selin Metin CAMGÖZ

    2006-01-01

    In this study, “360 Degree Performance Appraisal”, one of the most current and controversial issues in human resource practices, is extensively examined and supported with empirical research. The study contains two parts. After explaining the necessity and the general utilities of the classical performance appraisal system, the first, theoretical part shifts to the emergence of 360 degree performance appraisal, discusses its distinctive benefits over the classical appraisal system, and focuses its attention on the raters (superiors, subordinates, peers, self) involved in the 360 degree performance appraisal. The second, empirical part illustrates the development of a 360 degree performance appraisal system as well as its application and sample reports for feedback purposes, in order to appraise the managerial skills of supervisors working in the Health Care Centre of a public university.

  14. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together the individual video frames from the particular cameras; however, there are ways to ov...

  15. Good to great: using 360-degree feedback to improve physician emotional intelligence.

    Science.gov (United States)

    Hammerly, Milton E; Harmon, Larry; Schwaitzberg, Steven D

    2014-01-01

    The past decade has seen intense interest and dramatic change in how hospitals and physician organizations review physician behaviors. The characteristics of successful physicians extend past their technical and cognitive skills. Two of the six core clinical competencies (professionalism and interpersonal/communication skills) endorsed by the Accreditation Council for Graduate Medical Education, the American Board of Medical Specialties, and The Joint Commission require physicians to succeed in measures associated with emotional intelligence (EI). Using 360-degree anonymous feedback surveys to screen for improvement opportunities in these two core competencies enables organizations to selectively offer education to further develop physician EI. Incorporating routine use of these tools and interventions into ongoing professional practice evaluation and focused professional practice evaluation processes may be a cost-effective strategy for preventing disruptive behaviors and increasing the likelihood of success when transitioning to an employed practice model. On the basis of a literature review, we determined that physician EI plays a key role in leadership; teamwork; and clinical, financial, and organizational outcomes. This finding has significant implications for healthcare executives seeking to enhance physician alignment and transition to a team-based delivery model.

  16. DEĞERLENDİRİCİLER ARASI GÜVENİLİRLİK VE TATMİN BAĞLAMINDA 360 DERECE PERFORMANS DEĞERLENDİRME - 360-DEGREE PERFORMANCE APPRAISAL IN THE CONTEXT OF INTERRATER RELIABILITY AND SATISFACTION

    Directory of Open Access Journals (Sweden)

    Adem BALTACI

    2014-03-01

    The 360-degree appraisal system, regarded as today's most popular appraisal system, draws its strength from the view that results obtained from different sources will be more objective and inclusive. Yet exactly which rater provides the more valid and reliable information remains an open question. Despite this uncertainty, the 360-degree appraisal system increases satisfaction with the system because it gives employees the chance to evaluate themselves and others. Against this background, this study examines the 360-degree appraisal system with particular reference to satisfaction with the appraisal system and inter-rater reliability. To this end, the appraisal results of the employees of a company applying this system were analysed, and a questionnaire measuring employees' satisfaction with the system was administered. The analyses showed that demographic variables, while not affecting performance scores themselves, can influence the ratings coming from different sources. They also revealed that superiors produced the ratings closest to employees' actual performance scores. In addition, a strong relationship was found between satisfaction with the system and employee performance.

  17. 360° FILM BRINGS BOMBED CHURCH TO LIFE

    Directory of Open Access Journals (Sweden)

    K. Kwiatek

    2012-09-01

This paper explores how a computer-generated reconstruction of a church can be adapted to create a panoramic film that is presented in a panoramic viewer and also on a wrap-around projection system. It focuses on the fundamental principles of creating 360º films, not only in 3D modelling software, but also presents how to record 360º video using panoramic cameras inside the heritage site. These issues are explored in a case study of Charles Church in Plymouth, UK, which was bombed in 1941 and has never been rebuilt. The generation of a 3D model of the bombed church started from the creation of five spherical panoramas and the use of Autodesk ImageModeler software. The processed files were imported and merged together in Autodesk 3ds Max, where a visualisation of the ruin was produced. A number of historical images were found, and this collection enabled the process of a virtual reconstruction of the site. The aspect of merging two still or two video panoramas (one from 3D modelling software, the other one recorded on the site) from the same locations or with the same trajectories is also discussed. The prototype 360º non-linear film tells a narrative of a wartime wedding that occurred in this church. The film was presented on two 360º screens where members of the audience could make decisions on whether to continue the ceremony or whether to run away when the bombing of the church starts. 3D modelling software made it possible to render a number of different alternatives (360º images and 360º video). Immersive environments empower the visitor to imagine the building before it was destroyed.

  18. 360° Film Brings Bombed Church to Life

    Science.gov (United States)

    Kwiatek, K.

    2011-09-01

This paper explores how a computer-generated reconstruction of a church can be adapted to create a panoramic film that is presented in a panoramic viewer and also on a wrap-around projection system. It focuses on the fundamental principles of creating 360º films, not only in 3D modelling software, but also presents how to record 360º video using panoramic cameras inside the heritage site. These issues are explored in a case study of Charles Church in Plymouth, UK, which was bombed in 1941 and has never been rebuilt. The generation of a 3D model of the bombed church started from the creation of five spherical panoramas and the use of Autodesk ImageModeler software. The processed files were imported and merged together in Autodesk 3ds Max, where a visualisation of the ruin was produced. A number of historical images were found, and this collection enabled the process of a virtual reconstruction of the site. The aspect of merging two still or two video panoramas (one from 3D modelling software, the other one recorded on the site) from the same locations or with the same trajectories is also discussed. The prototype 360º non-linear film tells a narrative of a wartime wedding that occurred in this church. The film was presented on two 360º screens where members of the audience could make decisions on whether to continue the ceremony or whether to run away when the bombing of the church starts. 3D modelling software made it possible to render a number of different alternatives (360º images and 360º video). Immersive environments empower the visitor to imagine the building before it was destroyed.

  19. A panoramic coded aperture gamma camera for radioactive hotspots localization

    Science.gov (United States)

    Paradiso, V.; Amgarou, K.; Blanc De Lanaute, N.; Schoepff, V.; Amoyal, G.; Mahe, C.; Beltramello, O.; Liénard, E.

    2017-11-01

A known disadvantage of the coded aperture imaging approach is its limited field-of-view (FOV), which often proves insufficient when analysing complex dismantling scenes such as post-accidental scenarios, where multiple measurements are needed to fully characterize the scene. In order to overcome this limitation, a panoramic coded aperture γ-camera prototype has been developed. The system is based on a 1 mm thick CdTe detector directly bump-bonded to a Timepix readout chip, developed by the Medipix2 collaboration (256 × 256 pixels, 55 μm pitch, 14.08 × 14.08 mm2 sensitive area). A MURA pattern coded aperture is used, allowing for background subtraction without the use of heavy shielding. This system is then combined with a USB color camera. The output of each measurement is a semi-spherical image covering a FOV of 360 degrees horizontally and 80 degrees vertically, rendered in spherical coordinates (θ, φ). The geometrical shapes of the radiation-emitting objects are preserved by first registering and stitching the optical images captured by the prototype, and then applying the same transformations to their corresponding radiation images. Panoramic gamma images generated using the technique proposed in this paper are described and discussed, along with the main experimental results obtained in laboratory campaigns.
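The (θ, φ) rendering step described above can be sketched as a simple pixel-to-angle mapping over the stitched panorama. The function below is a generic illustration only; the image size, the 360° × 80° FOV split, and the orientation conventions are assumptions, not the prototype's actual calibration:

```python
# Map a pixel (u, v) in a stitched panorama to spherical coordinates
# (theta, phi) in degrees, assuming the panorama spans 360 deg horizontally
# and 80 deg vertically, as in the record. Conventions are illustrative.

def pixel_to_spherical(u, v, width, height,
                       h_fov_deg=360.0, v_fov_deg=80.0):
    """Return (theta, phi) in degrees for pixel (u, v).

    theta in [0, 360) runs horizontally across the panorama;
    phi is centred on 0 and spans the vertical FOV (positive = up).
    """
    theta = (u / width) * h_fov_deg          # azimuth
    phi = (0.5 - v / height) * v_fov_deg     # elevation
    return theta % 360.0, phi

# The centre pixel of a 3600 x 800 panorama maps to (180, 0).
centre = pixel_to_spherical(1800, 400, 3600, 800)
```

The same mapping, applied in reverse, places each registered radiation pixel at the matching optical direction.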

  20. Taking it all in: special camera films in 3-D

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, L.

    2006-07-15

    Details of a 360-degree digital camera designed by Immersive Media Telemmersion were presented. The camera has been employed extensively in the United States for homeland security and intelligence-gathering purposes. In Canada, the cameras are now being used by the oil and gas industry. The camera has 11 lenses pointing in all directions and generates high resolution movies that can be analyzed frame-by-frame from every angle. Global positioning satellite data can be gathered during filming so that operators can pinpoint any location. The 11 video streams use more than 100 million pixels per second. After filming, the system displays synchronized, high-resolution video streams, capturing a full motion spherical world complete with directional sound. It can be viewed on a computer monitor, video screen, or head-mounted display. Pembina Pipeline Corporation recently used the Telemmersion system to plot a proposed pipeline route between Alberta's Athabasca region and Edmonton. It was estimated that more than $50,000 was saved by using the camera. The resulting video has been viewed by Pembina's engineering, environmental and geotechnical groups who were able to accurately note the route's river crossings. The cameras were also used to estimate timber salvage. Footage was then given to the operations group, to help staff familiarize themselves with the terrain, the proposed route's right-of-way, and the number of water crossings and access points. Oil and gas operators have also used the equipment on a recently acquired block of land to select well sites. 4 figs.

  1. The Application of 360-degree Feedback in Government Finance Performance Management

    Institute of Scientific and Technical Information of China (English)

    邓洁

    2012-01-01

There are some problems in government finance performance management, such as inefficient fund management, a unitary evaluation subject, and a lack of supervision mechanisms. In view of these problems, the paper analyzes the advantages of introducing 360-degree feedback into government finance performance management. Its advantages are: adequate information, high credibility, and benefits for improving government executive ability and enhancing public supervision. The paper discusses a basic performance appraisal framework from the aspects of subject selection, weight design, and implementation steps, and then offers some proposals for its application, such as creating a harmonious atmosphere, strengthening subject training, and introducing media supervision by making full use of the Internet.
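The "weight design" step mentioned above can be illustrated with a minimal weighted-average sketch: each rater source receives a weight, and the overall score is the weighted mean of the per-source averages. The source names, weights, and ratings below are invented for illustration; they are not from the paper:

```python
# Minimal sketch of weighted 360-degree scoring: average each source's
# ratings, then combine the source averages with designed weights.

def appraisal_score(ratings_by_source, weights):
    """ratings_by_source: {source: [scores]}; weights: {source: weight}."""
    total_w = sum(weights[s] for s in ratings_by_source)
    return sum(
        weights[s] * (sum(r) / len(r))
        for s, r in ratings_by_source.items()
    ) / total_w

ratings = {"superior": [4, 5], "peer": [3, 4], "self": [5]}
weights = {"superior": 0.5, "peer": 0.3, "self": 0.2}
# Source averages: 4.5, 3.5, 5.0 -> weighted mean is approximately 4.3.
score = appraisal_score(ratings, weights)
```

In a real deployment the weights themselves would come from the framework's subject-selection and weight-design stages.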

  2. Braudel and Abrams open the door to an insoluble debate: The City

    Directory of Open Access Journals (Sweden)

    Tomás Antônio Moreira

    2016-08-01

This paper seeks to understand definitions of the city in order to enrich current reflections. The starting point for reflection is the confrontation of the positions of Fernand Braudel and Philip Abrams on the understanding of the urban phenomenon, the city; from the dilemma posed by the confrontation of these two authors emerges a sampling of the main theorizations about the city. Among the findings, it is emphasized that propositions about the city seek to account for the dynamic evolution of human settlements: métapole, edge city and tecnocity.

  3. Navigated Pattern Laser System versus Single-Spot Laser System for Postoperative 360-Degree Laser Retinopexy.

    Science.gov (United States)

    Kulikov, Alexei N; Maltsev, Dmitrii S; Boiko, Ernest V

    2016-01-01

Purpose. To compare three 360° laser retinopexy (LRP) approaches (using a navigated pattern laser system, single-spot slit-lamp (SL) laser delivery, and single-spot indirect ophthalmoscope (IO) laser delivery) with regard to procedure duration, procedural pain score, technical difficulties, and the ability to achieve surgical goals. Material and Methods. Eighty-six rhegmatogenous retinal detachment patients (86 eyes) were included in this prospective randomized study. The mean procedural time, procedural pain score (using a 4-point Verbal Rating Scale), number of laser burns, and achievement of the surgical goals were compared between three groups (pattern LRP (Navilas® laser system), 36 patients; SL-LRP, 28 patients; and IO-LRP, 22 patients). Results. In the pattern LRP group, the amount of time needed for LRP and the pain level were statistically significantly lower, whereas the number of applied laser burns was higher, compared to those in the SL-LRP and IO-LRP groups. In the pattern LRP, SL-LRP, and IO-LRP groups, surgical goals were fully achieved in 28 (77.8%), 17 (60.7%), and 13 patients (59.1%), respectively (p > 0.05). Conclusion. The navigated pattern approach reduces treatment time and pain in postoperative 360° LRP. Moreover, 360° pattern LRP is at least as effective in achieving the surgical goal as the conventional (slit-lamp or indirect ophthalmoscope) approaches with a single-spot laser.

  4. 360° digital phase detector with 100-kHz bandwidth

    International Nuclear Information System (INIS)

    Reid, D.W.; Riggin, D.; Fazio, M.V.; Biddle, R.S.; Patton, R.D.; Jackson, H.A.

    1981-01-01

The general availability of digital circuit components with propagation delay times of a few nanoseconds makes a digital phase detector with good bandwidth feasible. Such a circuit has a distinct advantage over its analog counterpart because of its linearity over a wide range of phase shift. A description is given of a phase detector that is being built at Los Alamos National Laboratory for the Fusion Materials Irradiation Test (FMIT) project. The specifications are 100-kHz bandwidth, linearity of ±1° over ±180° of phase shift, and 0.66° resolution. To date, the circuit has achieved the bandwidth and resolution. The linearity is approximately ±3° over ±180° phase shift. 3 refs
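The wrap-and-quantize behaviour implied by these specifications (a ±180° measurement range with 0.66° resolution) can be sketched as follows. This is a generic reconstruction for illustration, not the FMIT circuit's actual logic; only the 0.66° step is taken from the record:

```python
# Sketch of a digital phase measurement: wrap a raw phase difference into
# the detector's +/-180 deg range, then quantize to the resolution step.

def wrap_phase(deg):
    """Wrap an angle in degrees into the interval [-180, 180)."""
    return (deg + 180.0) % 360.0 - 180.0

def quantize(deg, step=0.66):
    """Round a wrapped phase to the nearest multiple of the step."""
    return round(deg / step) * step

# A raw difference of 190 deg reads as -170 deg on a +/-180 deg detector.
wrapped = wrap_phase(190.0)
```

A hardware implementation would derive the raw difference from counters or latches clocked by the two input signals rather than from a floating-point value.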

  5. Cheap streak camera based on the LD-S-10 intensifier tube

    Science.gov (United States)

    Dashevsky, Boris E.; Krutik, Mikhail I.; Surovegin, Alexander L.

    1992-01-01

Basic properties of a new streak camera and its test results are reported. To intensify images on its screen, we employed modular G1 tubes, the LD-A-1.0 and LD-A-0.33, enabling magnifications of 1.0 and 0.33, respectively. If necessary, the LD-A-0.33 tube may be substituted by any other image intensifier of the LDA series, the choice to be determined by the size of the CCD matrix with fiber-optical windows. The reported camera employs a 12.5-mm-long CCD strip consisting of 1024 pixels, each 12 × 500 μm in size. Registered radiation was imaged on a 5 × 0.04 mm slit diaphragm tightly connected with the LD-S-10 fiber-optical input window. Electrons escaping the cathode are accelerated in a 5 kV electric field and focused onto a phosphor screen covering a fiber-optical plate as they travel between deflection plates. Sensitivity of the latter was 18 V/mm, which implies that the total deflecting voltage was 720 V per 40 mm of the screen surface, since reversed-polarity scan pulses of +360 V and -360 V were applied across the deflection plates. The streak camera provides full scan times over the screen of 15, 30, 50, 100, 250, and 500 ns. Timing of the electrically or optically driven camera was done using a 10 ns step-controlled-delay (0–500 ns) circuit.
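From the figures quoted in the record (a 40 mm sweep across the screen and full scan times of 15-500 ns), the sweep speed follows by simple arithmetic. The sketch below only restates those numbers; the per-pixel figure assumes, for illustration, that the 12 μm CCD pitch applies directly to the swept image, ignoring the intensifier magnification:

```python
# Back-of-the-envelope timing for the streak camera described above:
# sweep speed = full scan time / screen length, and an (assumed) per-pixel
# time resolution using the 12 um pixel pitch from the record.

SCREEN_MM = 40.0        # sweep length across the screen, from the record
PIXEL_PITCH_MM = 0.012  # 12 um pixels, from the record

def sweep_speed_ns_per_mm(scan_time_ns):
    return scan_time_ns / SCREEN_MM

def time_per_pixel_ns(scan_time_ns):
    return sweep_speed_ns_per_mm(scan_time_ns) * PIXEL_PITCH_MM

# Fastest sweep: 15 ns over 40 mm is 0.375 ns/mm.
fastest = sweep_speed_ns_per_mm(15)
```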

  6. Performans Değerlendirme Yöntemi Olarak 360 Derece Geribildirim Sürecinin Orta Kademe Yöneticilerin İş Başarısına Olan Etkisi: 5 Yıldızlı Otel İşletmelerinde Bir Uygulama = The Effect of 360 Degree Feedback Performance Evaluation Process on the Achievement of Middle-Level Managers: an Application in 5-Star Accommodation Establishments

    Directory of Open Access Journals (Sweden)

    Derya KARA

    2010-01-01

This study sets out to reveal the effect of the 360 degree feedback performance evaluation process on the achievement of middle-level managers. To serve this purpose, 5-star establishments, 182 in total, with tourism certification and operating in 5 major provinces (Antalya, İstanbul, Muğla, Ankara, İzmir) made up the application field of the study. The study aims to determine to what extent the 360 degree feedback performance evaluation process differentiates from traditional performance evaluation processes within the context of job achievement. The results of the study suggest that 7 dimensions (leadership, task performance, adaptation to change, communication, human relations, creating output, employee training and development) were found to be more effective in the job performances of the middle-level managers.

  7. Implications of Articulating Machinery on Operator Line of Sight and Efficacy of Camera Based Proximity Detection Systems

    Directory of Open Access Journals (Sweden)

    Nicholas Schwabe

    2017-07-01

The underground mining industry, and some above-ground operations, rely on the use of heavy equipment that articulates to navigate corners in the tight confines of the tunnels. Poor line of sight (LOS) has been identified as a problem for safe operation of this machinery. Proximity detection systems, such as a video system designed to provide a 360 degree view around the machine, have been implemented to improve the available LOS for the operator. A four-camera system was modeled in a computer environment to assess LOS on a 3D CAD model of a typical articulated machine. When positioned without any articulation, the system is excellent at removing blind spots for a machine driving straight forward or backward in a straight tunnel. Further analysis reveals that when the machine articulates in a simulated corner section, some camera locations are no longer useful for improving LOS into the corner. In some cases, the operator has a superior view into the corner when compared to the best available view from the camera. The work points to the need to integrate proximity detection systems at the design, build, and manufacture stage, and to consider proper policy and procedures that would address the gains and limits of the systems prior to implementation.

  8. Wideband 360 degrees microwave photonic phase shifter based on slow light in semiconductor optical amplifiers

    DEFF Research Database (Denmark)

    Xue, Weiqi; Sales, Salvador; Capmany, Jose

    2010-01-01

In this work we demonstrate for the first time, to the best of our knowledge, a continuously tunable 360° microwave phase shifter spanning a microwave bandwidth of several tens of GHz (up to 40 GHz) by slow light effects. The proposed device exploits the phenomenon of coherent population oscillations … The limitations of the suggested technique, dictated by the underlying physics, are also analyzed.

  9. 47 CFR 3.60 - Reports.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Reports. 3.60 Section 3.60 Telecommunication... MARITIME AND MARITIME MOBILE-SATELLITE RADIO SERVICES Reporting Requirements § 3.60 Reports. (a) Initial... authority is required to submit to the FCC a report on additions, modifications or deletions to its list of...

  10. 360° Operative Videos: A Randomised Cross-Over Study Evaluating Attentiveness and Information Retention.

    Science.gov (United States)

    Harrington, Cuan M; Kavanagh, Dara O; Wright Ballester, Gemma; Wright Ballester, Athena; Dicker, Patrick; Traynor, Oscar; Hill, Arnold; Tierney, Sean

    2017-11-06

Although two-dimensional (2D) and three-dimensional videos have traditionally provided foundations for reviewing operative procedures, the recent 360º format may provide new dimensions to surgical education. This study sought to describe the production of a high-quality 360º video for an index operation (augmented with educational material), while evaluating for variances in attentiveness, information retention, and appraisal compared to 2D. A 6-camera synchronised array (GoPro Omni, [California, United States]) was suspended inverted and recorded an elective laparoscopic cholecystectomy in 2016. A single-blinded randomised cross-over study was performed to evaluate this video in 360º vs 2D formats. Group A experienced the 360º video using Samsung (Suwon, South Korea) GearVR virtual-reality headsets, followed by the 2D experience on a 75-inch television. Group B were reversed. Each video was probed at designated time points for engagement levels and task-unrelated images or thoughts. Alternating question banks were administered following each video experience. Feedback was obtained via a short survey at study completion. The setting was the New Academic and Education Building (NAEB) in Dublin, Royal College of Surgeons in Ireland, July 2017. Participants were preclinical undergraduate students from a medical university in Ireland. Forty students participated, with a mean age of 23.2 ± 4.5 years and equal sex involvement. The 360º video demonstrated significantly higher engagement (p …), and participants chose the 360º video as their learning platform of choice. Mean appraisal levels for the 360º platform were positive, with mean responses of >8/10 for the platform for learning, immersion, and entertainment. This study describes the successful development and evaluation of a 360º operative video. This new video format demonstrated significant engagement and attentiveness benefits compared to traditional 2D formats. This requires further evaluation in the field of technology enhanced learning.

  11. Scleral depressed vitreous shaving, 360 laser, and perfluoropropane (C3F8) for retinal detachment

    Directory of Open Access Journals (Sweden)

    Vivek Chaturvedi

    2014-01-01

Purpose: To review the characteristics and outcomes of patients who underwent pars plana vitrectomy (PPV) with scleral depressed vitreous shaving, 360 degree peripheral endolaser, and 14% C3F8 gas for rhegmatogenous retinal detachment (RRD). Materials and Methods: A retrospective review of a consecutive series of patients who underwent primary repair of RRD by PPV with scleral depressed vitreous shaving, 360 degree peripheral endolaser, and 14% perfluoropropane (C3F8) was conducted. Patients with less than 3 months of follow-up, previous retinal surgery, or higher than grade B proliferative vitreoretinopathy were excluded. Results: Ninety-one eyes were included in the study. The mean age was 60.1 years. The mean follow-up was 13.7 months. The macula was detached in 63% (58/91) of the eyes. The reattachment rate after one surgical procedure was 95% (86/91), while the overall reattachment rate was 100%. There was no statistically significant difference between reattachment rates of superior, nasal/temporal, or inferior RRDs. The mean final best corrected visual acuity (BCVA) was 20/40. Of all the patients, 66% of those with macula-off RRDs had a final BCVA of 20/40 or better. Conclusions: PPV with scleral depressed vitreous shaving, 360 degree peripheral endolaser, and 14% C3F8 leads to successful anatomical reattachment with visual improvement in patients with primary RRD.

  12. 49 CFR 178.360-2 - Manufacture.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Manufacture. 178.360-2 Section 178.360-2 Transportation Other Regulations Relating to Transportation PIPELINE AND HAZARDOUS MATERIALS SAFETY... Specifications for Packagings for Class 7 (Radioactive) Materials § 178.360-2 Manufacture. The ends of the vessel...

  13. [Method for evaluating the positional accuracy of a six-degrees-of-freedom radiotherapy couch using high definition digital cameras].

    Science.gov (United States)

    Takemura, Akihiro; Ueda, Shinichi; Noto, Kimiya; Kurata, Yuichi; Shoji, Saori

    2011-01-01

In this study, we proposed and evaluated a positional accuracy assessment method using two high-resolution digital cameras for add-on six-degrees-of-freedom (6D) radiotherapy couches. Two high-resolution digital cameras (D5000, Nikon Co.) were used in this accuracy assessment method. These cameras were placed on two orthogonal axes of a linear accelerator (LINAC) coordinate system and focused on the isocenter of the LINAC. Pictures of a needle fixed on the 6D couch were taken by the cameras during couch motions of translation and rotation about each axis. The coordinates of the needle in the pictures were obtained by manual measurement, and the coordinate error of the needle was calculated. The accuracy of a HexaPOD evo (Elekta AB, Sweden) was evaluated using this method. All of the mean values of the X, Y, and Z coordinate errors in the translation tests were within ±0.1 mm. However, the standard deviation of the Z coordinate errors in the Z translation test was 0.24 mm, which is higher than the others. In the X rotation test, we found that the X coordinate of the rotational origin of the 6D couch was shifted. We proposed an accuracy assessment method for a 6D couch. The method was able to evaluate the accuracy of the motion of the 6D couch alone and revealed a deviation of the origin of the couch rotation. This accuracy assessment method is effective for evaluating add-on 6D couch positioning.
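The error analysis described above (per-axis mean and standard deviation of the difference between measured and commanded needle coordinates) can be sketched as follows. The sample coordinates are invented for illustration; they are not the study's data:

```python
# Sketch of per-axis coordinate-error statistics for a couch accuracy test:
# errors are measured-minus-commanded positions along one axis, in mm.

from statistics import mean, stdev

def axis_errors(measured, commanded):
    """Per-sample coordinate errors along one axis (mm)."""
    return [m - c for m, c in zip(measured, commanded)]

commanded = [0.0, 1.0, 2.0, 3.0]    # commanded couch positions (mm)
measured = [0.05, 1.00, 1.95, 3.10]  # needle positions read from images (mm)

errors = axis_errors(measured, commanded)
mean_err = mean(errors)   # systematic offset along this axis
sd_err = stdev(errors)    # repeatability along this axis
```

Repeating this per axis, for translations and rotations, reproduces the kind of summary the record reports (mean within ±0.1 mm, one axis with a larger spread).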

  14. Universal crystal cooling device for precession cameras, rotation cameras and diffractometers

    International Nuclear Information System (INIS)

    Hajdu, J.; McLaughlin, P.J.; Helliwell, J.R.; Sheldon, J.; Thompson, A.W.

    1985-01-01

A versatile crystal cooling device is described for macromolecular crystallographic applications in the 290 to 80 K temperature range. It utilizes a fluctuation-free cold-nitrogen-gas supply, an insulated Mylar crystal cooling chamber and a universal ball joint, which connects the cooling chamber to the goniometer head and the crystal. The ball joint is a novel feature over all previous designs. As a result, the device can be used on various rotation cameras, precession cameras and diffractometers. The lubrication of the interconnecting parts with graphite allows the cooling chamber to remain stationary while the crystal and goniometer rotate. The construction allows for 360° rotation of the crystal around the goniometer axis and permits any settings on the arcs and slides of the goniometer head (even if working at 80 K). There are no blind regions associated with the frame holding the chamber. Alternatively, the interconnecting ball joint can be tightened and fixed. This results in a set-up similar to the construction described by Bartunik and Schubert, where the cooling chamber rotates with the crystal. The flexibility of the system allows for the use of the device on most cameras or diffractometers. This device has been installed at the protein crystallographic stations of the Synchrotron Radiation Source at Daresbury Laboratory and in the Laboratory of Molecular Biophysics, Oxford. Several data sets have been collected with processing statistics typical of data collected without a cooling chamber. Tests using the full white beam of the synchrotron also look promising. (orig./BHO)

  15. Localization and Mapping Using a Non-Central Catadioptric Camera System

    Science.gov (United States)

    Khurana, M.; Armenakis, C.

    2018-05-01

This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find their use in navigation and mapping of robotic platforms, owing to their wide field of view. Having a wider field of view, or rather a potential 360° field of view, allows the system to "see and move" more freely in the navigation space. A catadioptric camera system is a low-cost system which consists of a mirror and a camera. Any perspective camera can be used. A platform was constructed in order to combine the mirror and a camera into a catadioptric system. A calibration method was developed in order to obtain the relative position and orientation between the two components so that they can be considered as one monolithic system. The mathematical model for localizing the system was determined using conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the achieved location and mapping accuracies of the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points achieved decimetre-level accuracies.
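The mapping step above intersects viewing rays cast from two platform positions to recover object coordinates. A standard midpoint triangulation of two (possibly skew) rays is sketched below; this is a textbook construction, not the authors' exact algorithm, and the sample geometry is invented:

```python
# Midpoint triangulation: given two rays p + t*d (p = origin, d = direction),
# find the midpoint of the shortest segment between them. For rays that
# actually intersect, this midpoint is the intersection point.

def triangulate_midpoint(p1, d1, p2, d2):
    """Return the midpoint of the closest approach of two 3D rays (tuples)."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def scale(a, s): return tuple(x * s for x in a)

    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b          # zero when the rays are parallel
    t1 = (b * e - c * d) / denom   # parameter of closest point on ray 1
    t2 = (a * e - b * d) / denom   # parameter of closest point on ray 2
    q1 = add(p1, scale(d1, t1))
    q2 = add(p2, scale(d2, t2))
    return scale(add(q1, q2), 0.5)

# Two rays from platform positions (0,0,0) and (2,0,0) meeting at (1,1,0).
point = triangulate_midpoint((0, 0, 0), (1, 1, 0), (2, 0, 0), (-1, 1, 0))
```

For a non-central catadioptric camera the ray origins and directions would come from the mirror's reflection model rather than a single projection centre, which is exactly what the calibration step provides.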

  16. 360° virtual reality video for the acquisition of knot tying skills: A randomised controlled trial.

    Science.gov (United States)

    Yoganathan, S; Finch, D A; Parkin, E; Pollard, J

    2018-04-10

360° virtual reality (VR) video is an exciting and evolving field. Current technology promotes a totally immersive, 3-dimensional (3D), 360° experience anywhere in the world using simply a smart phone and a virtual reality headset. The potential for its application in the field of surgical education is enormous. The aim of this study was to compare knot tying skills taught with a 360-degree VR video against conventional 2D video teaching. This trial was a prospective, randomised controlled study. Forty foundation year doctors (first year postgraduate) were randomised to either the 360-degree VR video (n = 20) or 2D video teaching (n = 20). Participants were given 15 min to watch their allocated video. The ability to tie a single-handed reef knot was then assessed against marking criteria developed for the Royal College of Surgeons of England (RCSEng) Basic Surgical Skills (BSS) course, by a blinded assessor competent in knot tying. Each candidate then underwent further teaching using Peyton's four-step model. Knot tying technique was then re-assessed. Knot tying scores were significantly better in the VR video teaching arm than in the conventional arm (median knot score 5.0 vs 4.0, p = 0.04). When used in combination with face-to-face skills teaching, this difference persisted (median knot score 9.5 vs 9.0, p = 0.01). More people in the VR arm constructed a complete reef knot than in the 2D arm following face-to-face teaching (17/20 vs 12/20). No difference existed between the groups in the time taken to construct a reef knot following video and teaching (median time 31.0 s vs 30.5 s, p = 0.89). This study shows there is significant merit in the application of 360-degree VR video technology in surgical training, both as an independent teaching aid and when used as an adjunct to traditional face-to-face teaching. Copyright © 2018 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  17. 19 CFR 360.102 - Online registration.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Online registration. 360.102 Section 360.102... ANALYSIS SYSTEM § 360.102 Online registration. (a) In general. (1) Any importer, importing company, customs.... boxes will not be accepted. A user identification number will be issued within two business days...

  18. 46 CFR 132.360 - Fire axes.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Fire axes. 132.360 Section 132.360 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) OFFSHORE SUPPLY VESSELS FIRE-PROTECTION EQUIPMENT Miscellaneous § 132.360 Fire axes. (a) Each vessel of less than 100 gross tons must carry one fire axe. (b) Each...

  19. Mosaic of Apollo 16 Descartes landing site taken from TV transmission

    Science.gov (United States)

    1972-01-01

    A 360 degree field of view of the Apollo 16 Descartes landing site area composed of individual scenes taken from a color transmission made by the color RCA TV camera mounted on the Lunar Roving Vehicle. This panorama was made while the LRV was parked at the rim of North Ray crater (Stations 11 and 12) during the third Apollo 16 lunar surface extravehicular activity (EVA-3) by Astronauts John W. Young and Charles M. Duke Jr. The overlay identifies the directions and the key lunar terrain features. The camera panned across the rear portion of the LRV in its 360 degree sweep. Note Young and Duke walking along the edge of the crater in one of the scenes. The TV camera was remotely controlled from a console in the Mission Control Center.

  20. Payload topography camera of Chang'e-3

    International Nuclear Information System (INIS)

    Yu, Guo-Bin; Liu, En-Hai; Zhao, Ru-Jin; Zhong, Jie; Zhou, Xiang-Dong; Zhou, Wu-Lin; Wang, Jin; Chen, Yuan-Pei; Hao, Yong-Jie

    2015-01-01

    Chang'e-3 was China's first soft-landing lunar probe that achieved a successful roving exploration on the Moon. A topography camera functioning as the lander's “eye” was one of the main scientific payloads installed on the lander. It was composed of a camera probe, an electronic component that performed image compression, and a cable assembly. Its exploration mission was to obtain optical images of the lunar topography in the landing zone for investigation and research. It also observed rover movement on the lunar surface and finished taking pictures of the lander and rover. After starting up successfully, the topography camera obtained static images and video of rover movement from different directions, 360° panoramic pictures of the lunar surface around the lander from multiple angles, and numerous pictures of the Earth. All images of the rover, lunar surface, and the Earth were clear, and those of the Chinese national flag were recorded in true color. This paper describes the exploration mission, system design, working principle, quality assessment of image compression, and color correction of the topography camera. Finally, test results from the lunar surface are provided to serve as a reference for scientific data processing and application. (paper)

  1. Letter of the Synod of Paris (360/1) to eastern bishops

    Directory of Open Access Journals (Sweden)

    Zakharov Georgii

    2013-08-01

The Russian translation from Latin of the letter of the Synod of Paris (360/1) to Eastern bishops, which is included in the Fragmenta historica of Hilarius of Poitiers, is published for the first time. The text contains some important evidence on the participation of Gallic bishops in the Arian controversy and also attests to the high degree of consolidation of the local Churches in Gaul.

  2. Implementing and measuring safety goals and safety culture. 3. Shifting to a Coaching Culture Through a 360-Degree Assessment Process

    International Nuclear Information System (INIS)

    Snow, Bruce A.; Maciuska, Frank

    2001-01-01

    Error-free operation is the ultimate objective of any safety culture. Ginna Training and Operations has embarked on an approach directed at further developing coaching skills, attitudes, and values. To accomplish this, a 360-deg assessment process designed to enhance coaching skills, attitudes, and values has been implemented. The process includes measuring participants against a set of values and an individual self-development plan based on the feedback from the 360-deg assessment. The skills and experience of the people who make up that culture are irreplaceable. As nuclear organizations mature and generations retire, knowledge and skills must be transferred to the incoming generations without a loss in performance. The application of a 360-deg assessment process can shift the culture to include coaching in a strong command and control environment. It is a process of change management strengthened by experience while meeting the challenge to improve human performance by changing workplace attitudes. At Ginna, training programs and new processes were initiated to pursue the ultimate objective: error-free operation. The overall objective of the programs is to create a common knowledge base and the skills required to consistently incorporate ownership of 'coach and collaborate' responsibility into a strong existing 'command and control' culture. This involves the role of coach; the role of communications; and concept integration, which includes communications, coaching, and team dimensional training (TDT). The overall objective of the processes, TDT and shifting to a coaching culture through the application of a 360-deg assessment process, is to provide guidance for applying the skills learned in the programs. As depicted in Fig. 1, TDT (a process that identifies 'strengths and challenges') can be greatly improved by applying good communications and coaching practices. As the training programs were implemented, the participants were observed and coached in

  3. 46 CFR 183.360 - Semiconductor rectifier systems.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Semiconductor rectifier systems. 183.360 Section 183.360... TONS) ELECTRICAL INSTALLATION Power Sources and Distribution Systems § 183.360 Semiconductor rectifier systems. (a) Each semiconductor rectifier system must have an adequate heat removal system that prevents...

  4. Physics and instrumentation of emission computed tomography

    International Nuclear Information System (INIS)

    Links, J.M.

    1986-01-01

    Transverse emission computed tomography can be divided into two distinct classes: single photon emission computed tomography (SPECT) and positron emission tomography (PET). SPECT is usually accomplished with specially-adapted scintillation cameras, although dedicated SPECT scanners are available. The special SPECT cameras are standard cameras mounted on gantries that allow 360 degree rotation around the long axis of the head or body. The camera stops at a number of angles around the body (usually 64-128), acquiring a "projection" image at each stop. The data from these projections are used to reconstruct transverse images with a standard "filtered back-projection" algorithm, identical to that used in transmission CT. Because the scintillation camera acquires two-dimensional images, a simple 360 degree rotation around the patient results in the acquisition of data for a number of contiguous transverse slices. These slices, once reconstructed, can be "stacked" in computer memory, and orthogonal coronal and sagittal slices produced. Additionally, reorienting algorithms allow the generation of slices that are oblique to the long axis of the body
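
The acquisition and reconstruction sequence described here (projections at many angles, then filtered back-projection) can be sketched compactly. This is a minimal parallel-beam illustration, not clinical reconstruction code: the ramp filter and nearest-neighbour back-projection are the simplest possible choices, and a centred disk phantom (whose projections are analytic and angle-independent) stands in for camera data.

```python
import numpy as np

def ramp_filter(projections):
    # Filter each projection (one row per view) with the ramp |f| along the detector axis.
    filt = np.abs(np.fft.fftfreq(projections.shape[1]))
    return np.real(np.fft.ifft(np.fft.fft(projections, axis=1) * filt, axis=1))

def back_project(filtered, angles, size):
    # Smear each filtered projection back across the image along its view angle.
    recon = np.zeros((size, size))
    coords = np.arange(size) - size / 2
    x, y = np.meshgrid(coords, coords)
    n_det = filtered.shape[1]
    for proj, theta in zip(filtered, angles):
        s = x * np.cos(theta) + y * np.sin(theta) + n_det / 2  # detector coordinate
        recon += proj[np.clip(np.round(s).astype(int), 0, n_det - 1)]
    return recon * np.pi / len(angles)

# Phantom: a centred disk of radius r has the analytic projection
# p(s) = 2*sqrt(r^2 - s^2) at every angle (it is rotationally symmetric).
size, r = 64, 10.0
angles = np.linspace(0, np.pi, 90, endpoint=False)  # 180 deg suffices for parallel beams
s = np.arange(size) - size / 2
projections = np.tile(2.0 * np.sqrt(np.clip(r**2 - s**2, 0.0, None)), (len(angles), 1))

recon = back_project(ramp_filter(projections), angles, size)
# recon is ~1 inside the disk and ~0 outside it.
```

Real SPECT reconstruction additionally corrects for attenuation and scatter, which this sketch ignores.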

  5. HAL/S-360-user's manual

    Science.gov (United States)

    Kole, R. E.; Helmers, P. H.; Hotz, R. L.

    1974-01-01

    This is a reference document to be used in the process of getting HAL/S programs compiled and debugged on the IBM 360 computer. Topics ranging from operating system communication to the interpretation of debugging aids are discussed. Features of the HAL programming system that have specific System/360 dependencies are presented.

  6. 46 CFR 129.360 - Semiconductor-rectifier systems.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Semiconductor-rectifier systems. 129.360 Section 129.360... INSTALLATIONS Power Sources and Distribution Systems § 129.360 Semiconductor-rectifier systems. (a) Each semiconductor-rectifier system must have an adequate heat-removal system to prevent overheating. (b) If a...

  7. 46 CFR 120.360 - Semiconductor rectifier systems.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Semiconductor rectifier systems. 120.360 Section 120.360... INSTALLATION Power Sources and Distribution Systems § 120.360 Semiconductor rectifier systems. (a) Each semiconductor rectifier system must have an adequate heat removal system that prevents overheating. (b) Where a...

  8. 12 CFR 360.7 - Post-insolvency interest.

    Science.gov (United States)

    2010-01-01

    ... becomes proven. (4) Post-insolvency interest shall be determined using a simple interest method of... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Post-insolvency interest. 360.7 Section 360.7... RESOLUTION AND RECEIVERSHIP RULES § 360.7 Post-insolvency interest. (a) Purpose and scope. This section...

  9. 12 CFR 360.1 - Least-cost resolution.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Least-cost resolution. 360.1 Section 360.1 Banks and Banking FEDERAL DEPOSIT INSURANCE CORPORATION REGULATIONS AND STATEMENTS OF GENERAL POLICY RESOLUTION AND RECEIVERSHIP RULES § 360.1 Least-cost resolution. (a) General rule. Except as provided in...

  10. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This offset causes problems when stitching together frames from the individual cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. The first is the creation of a low-cost mobile mapping system based on a Ladybug®3 camera and a GPS device. The number of panoramas is far higher than photogrammetry requires, as the baseline between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and measurements from the panoramas enable the user to survey outdoor advertising structures and billboards. A new law is being created to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurement. The second application is the generation of 3D video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft PhotoScan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.
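
Photogrammetric use of stitched panoramas rests on mapping each panorama pixel to a viewing direction on the sphere. A minimal sketch under an assumed equirectangular convention (longitude across the width, latitude down the height; real packages differ in axis conventions):

```python
import numpy as np

def equirect_to_ray(u, v, width, height):
    """Map pixel (u, v) of an equirectangular panorama to a unit ray.

    Assumed convention: longitude spans [-pi, pi) across the width and
    latitude spans [+pi/2, -pi/2] down the height.
    """
    lon = (u + 0.5) / width * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v + 0.5) / height * np.pi
    return np.array([np.cos(lat) * np.sin(lon),   # x: right
                     np.sin(lat),                 # y: up
                     np.cos(lat) * np.cos(lon)])  # z: forward

# The image centre looks straight ahead (+z):
ray = equirect_to_ray(2047.5, 1023.5, 4096, 2048)
```

Two such rays from panoramas separated by a known baseline can then be intersected, exactly as with central-perspective cameras.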

  11. Automatic Thermal Infrared Panoramic Imaging Sensor

    National Research Council Canada - National Science Library

    Gutin, Mikhail; Tsui, Eddy K; Gutin, Olga; Wang, Xu-Ming; Gutin, Alexey

    2006-01-01

    Panoramic cameras offer true real-time, 360-degree coverage of the surrounding area, valuable for a variety of defense and security applications, including force protection, asset protection, asset...

  12. Inspecting rapidly moving surfaces for small defects using CNN cameras

    Science.gov (United States)

    Blug, Andreas; Carl, Daniel; Höfler, Heinrich

    2013-04-01

    A continuous increase in production speed and manufacturing precision raises a demand for the automated detection of small image features on rapidly moving surfaces. An example is wire drawing, where kilometers of cylindrical metal surfaces moving at 10 m/s have to be inspected in real time for defects such as scratches, dents, grooves, or chatter marks with a lateral size of 100 μm. Up to now, complex eddy current systems have been used for quality control instead of line cameras, because the ratio between lateral feature size and surface speed is limited by the data transport between camera and computer. This bottleneck is avoided by "cellular neural network" (CNN) cameras, which enable image processing directly on the camera chip. This article reports results achieved with a demonstrator based on this novel analogue camera-computer system. The results show that the computational speed and accuracy of the analogue computer system are sufficient to detect and discriminate the different types of defects. Area images with 176 x 144 pixels are acquired and evaluated in real time at frame rates of 4 to 10 kHz, depending on the number of defects to be detected. These frame rates correspond to equivalent line-camera rates between 360 and 880 kHz, far beyond what available line cameras offer. Using the relation between lateral feature size and surface speed as a figure of merit, the CNN-based system outperforms conventional image processing systems by an order of magnitude.
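
The ratio between lateral feature size and surface speed quoted as a figure of merit translates directly into a required line rate. A back-of-the-envelope helper (the samples-per-feature factor is an illustrative assumption, not a number from the article):

```python
def required_line_rate(surface_speed_m_s, feature_size_m, samples_per_feature=2):
    # Scan lines per second needed so that a defect of the given lateral size
    # is crossed by at least `samples_per_feature` lines as the surface moves.
    return samples_per_feature * surface_speed_m_s / feature_size_m

# 100 um defects at 10 m/s need on the order of 10^5 lines per second:
rate = required_line_rate(10.0, 100e-6)
```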

  13. 20 CFR 405.360 - Official record.

    Science.gov (United States)

    2010-04-01

    ... in making the decision under review and any additional evidence or written statements that the... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Official record. 405.360 Section 405.360 Employees' Benefits SOCIAL SECURITY ADMINISTRATION ADMINISTRATIVE REVIEW PROCESS FOR ADJUDICATING INITIAL...

  14. A 360 degrees evaluation of a night-float system for general surgery: a response to mandated work-hours reduction.

    Science.gov (United States)

    Goldstein, Michael J; Kim, Eugene; Widmann, Warren D; Hardy, Mark A

    2004-01-01

    New York State Code 405 and societal/political pressure have led the RRC and ACGME to mandate strict limitations on resident work hours. In an attempt to meet these limitations, we have switched from the previous Q3 call schedule to a specialized night float (NF) system, the continuity-care system (CCS). The purpose of this CCS is to maximize resident duty time spent on direct patient care, operative experience, and outpatient clinics, while reducing duty hours spent on performing routine tasks and call coverage. The implementation of the CCS is the fundamental step in the restructuring of our residency program. In addition to a change in the call system, we added physician assistants to aid in performing some service tasks. We performed a 360 degrees evaluation of this work in progress. In May 2002, the standard Q3 call system was abolished on the general surgery services at the New York Presbyterian Hospital, Columbia campus. Two dedicated teams were created to provide day and night coverage, a day continuity-care team (DCT) and a night continuity-care team (NCT). The DCTs, consisting of PGY1-5 residents, provide daily in-house coverage from 6 AM to 5 PM with no regular weekday night-call responsibilities. The DCT residents provide Friday night, Saturday, and daytime Sunday call coverage 3 to 4 days per month. The NCT, consisting of 5 PGY1-5 residents, provides nightly continuous care, 5 PM to 6 AM, Sunday through Thursday, with no other weekend call responsibilities. This system creates a schedule with less than 80 duty hours per week, on average, with one 24-hour period off a week, one complete weekend off per month, and no more than 24 hours of consecutive duty time. After 1 year of use, the system was evaluated by a 360 degrees method in which residents, residents' spouses, nurses, and faculty were surveyed using a Likert-type scale. Statistical significance was calculated using the Student t-test. 
Patient satisfaction was measured both by internal review of
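
The survey comparison described above (Likert-type ratings compared with a Student t-test) can be illustrated with a small helper. The data below are invented placeholder ratings, and a Welch statistic is shown rather than whichever t-test variant the authors applied:

```python
import numpy as np

def welch_t(a, b):
    # Welch's two-sample t statistic (does not assume equal group variances).
    a, b = np.asarray(a, float), np.asarray(b, float)
    se2 = a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(se2)

# Hypothetical 1-5 Likert ratings from two respondent groups:
t = welch_t([5, 5, 4, 5, 4], [2, 3, 2, 3, 2])
```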

  15. 37 CFR 360.4 - Compliance with statutory dates.

    Science.gov (United States)

    2010-07-01

    ... dates. 360.4 Section 360.4 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS SUBMISSION OF ROYALTY CLAIMS FILING OF CLAIMS TO ROYALTY FEES COLLECTED UNDER COMPULSORY LICENSE Cable Claims § 360.4 Compliance with statutory dates. (a) Claims filed with the Copyright Royalty Board...

  16. 37 CFR 360.24 - Compliance with statutory dates.

    Science.gov (United States)

    2010-07-01

    ... dates. 360.24 Section 360.24 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS SUBMISSION OF ROYALTY CLAIMS FILING OF CLAIMS TO ROYALTY FEES COLLECTED UNDER COMPULSORY LICENSE Digital Audio Recording Devices and Media Royalty Claims § 360.24 Compliance with statutory dates. (a...

  17. 37 CFR 360.13 - Compliance with statutory dates.

    Science.gov (United States)

    2010-07-01

    ... dates. 360.13 Section 360.13 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS SUBMISSION OF ROYALTY CLAIMS FILING OF CLAIMS TO ROYALTY FEES COLLECTED UNDER COMPULSORY LICENSE Satellite Claims § 360.13 Compliance with statutory dates. (a) Claims filed with the Copyright Royalty Board...

  18. 44 CFR 360.4 - Administrative procedures.

    Science.gov (United States)

    2010-10-01

    ... of training as well as costs of delivery and student travel and per diem are to be estimated. Special.... 360.4 Section 360.4 Emergency Management and Assistance FEDERAL EMERGENCY MANAGEMENT AGENCY...) Issuance of a request for application: Each State emergency management agency will receive a Request for...

  19. 46 CFR 185.360 - Use of auto pilot.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Use of auto pilot. 185.360 Section 185.360 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SMALL PASSENGER VESSELS (UNDER 100 GROSS TONS) OPERATIONS Miscellaneous Operating Requirements § 185.360 Use of auto pilot. Whenever an automatic pilot is...

  20. 31 CFR 360.1 - Official agencies.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false Official agencies. 360.1 Section 360.1 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL SERVICE... Bank of Kansas City, 925 Grand Boulevard, Kansas City, MO 64106 Dallas, San Francisco, Kansas City, St...

  1. IR-360 nuclear power plant safety functions and component classification

    International Nuclear Information System (INIS)

    Yousefpour, F.; Shokri, F.; Soltani, H.

    2010-01-01

    The IR-360 nuclear power plant, a 2-loop PWR with 360 MWe of power generation capacity, is under design at the MASNA Company. To design the IR-360 structures, systems and components (SSCs), the applicable codes and standards and their design requirements must be determined. Correct classification of the IR-360 safety functions and of the safety grades of its structures, systems and components is a prerequisite for selecting and adopting suitable design codes and standards. This paper refers to the IAEA nuclear safety codes and standards as well as the USNRC standards system to determine the IR-360 safety functions and to formulate the principles of the IR-360 component classification in accordance with the safety philosophy and features of the IR-360. By implementing the defined classification procedures for the IR-360 SSCs, the appropriate design codes and standards are specified. The requirements of the specific codes and standards are used in the design process of IR-360 SSCs by design engineers of the MASNA Company. In this paper, the determination of the IR-360 safety functions and the definition of the classification procedures and rules are presented. Implementation of this work, which is described with an example, ensures the safety and reliability of the IR-360 nuclear power plant.

  2. IR-360 nuclear power plant safety functions and component classification

    Energy Technology Data Exchange (ETDEWEB)

    Yousefpour, F., E-mail: fyousefpour@snira.co [Management of Nuclear Power Plant Construction Company (MASNA) (Iran, Islamic Republic of); Shokri, F.; Soltani, H. [Management of Nuclear Power Plant Construction Company (MASNA) (Iran, Islamic Republic of)

    2010-10-15

    The IR-360 nuclear power plant, a 2-loop PWR with 360 MWe of power generation capacity, is under design at the MASNA Company. To design the IR-360 structures, systems and components (SSCs), the applicable codes and standards and their design requirements must be determined. Correct classification of the IR-360 safety functions and of the safety grades of its structures, systems and components is a prerequisite for selecting and adopting suitable design codes and standards. This paper refers to the IAEA nuclear safety codes and standards as well as the USNRC standards system to determine the IR-360 safety functions and to formulate the principles of the IR-360 component classification in accordance with the safety philosophy and features of the IR-360. By implementing the defined classification procedures for the IR-360 SSCs, the appropriate design codes and standards are specified. The requirements of the specific codes and standards are used in the design process of IR-360 SSCs by design engineers of the MASNA Company. In this paper, the determination of the IR-360 safety functions and the definition of the classification procedures and rules are presented. Implementation of this work, which is described with an example, ensures the safety and reliability of the IR-360 nuclear power plant.

  3. Void -spelprototyp för Xbox360 / Void -game prototype for Xbox360

    OpenAIRE

    Lindberg, Joakim; Gullbrandson, Fredrik; Nilsson, Bo Martin

    2008-01-01

    This degree project consists of two parts: a production part and this final reflection. The production part lasted 15 weeks and consisted of developing a game prototype. The prototype is intended for the Xbox360 game console, the new generation of video games. To develop for this console, one must use XNA, a development tool intended specifically for the Xbox360 and a must if one wants to develop for this console as a non-licensed game developer. This report...

  4. 46 CFR 122.360 - Use of auto pilot.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Use of auto pilot. 122.360 Section 122.360 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SMALL PASSENGER VESSELS CARRYING MORE THAN 150... Requirements § 122.360 Use of auto pilot. Whenever an automatic pilot is used the master shall ensure that: (a...

  5. Exploring the World using Street View 360 Images

    Science.gov (United States)

    Bailey, J.

    2016-12-01

    The phrase "A Picture is Worth a Thousand Words" is an idiom of unknown 20th century origin. There is some belief that the modern use of the phrase stems from an article in a 1921 issue of a popular trade journal, which used "One Look is Worth A Thousand Words" to promote the use of images in advertisements on the sides of streetcars. There is a certain irony to this, as nearly a century later the camera technologies on "Street View cars" are collecting images that look everywhere at once. However, while it can be fun to drive along the world's streets, it was the development of Street View imaging systems that could be mounted on other modes of transport or capture platforms (Street View Special Collects) that opened the door for these 360 images to become a tool for exploration and storytelling. Using Special Collect imagery captured in "off-road" and extreme locations, scientists are now using Street View images to assess changes to species habitats, show the impact of natural disasters, and even perform "armchair" geology. A powerful example is the imagery captured before and after the 2011 earthquake and tsunami that devastated Japan. However, it is the immersive nature of 360 images that truly allows them to create wonder and awe, especially when combined with Virtual Reality (VR) viewers. Combined with the Street View App or Google Expeditions, VR provides insight into what it is like to swim with sea lions in the Galapagos or climb El Capitan in Yosemite National Park. While these images could never replace experiencing these locations in real life, they can inspire the viewer to explore and learn more about the many wonders of our planet. https://www.google.com/streetview/ https://www.google.com/expeditions/

  6. 31 CFR 360.48 - Restrictions on reissue; denominational exchange.

    Science.gov (United States)

    2010-07-01

    ...; denominational exchange. 360.48 Section 360.48 Money and Finance: Treasury Regulations Relating to Money and... GOVERNING DEFINITIVE UNITED STATES SAVINGS BONDS, SERIES I Reissue and Denominational Exchange § 360.48 Restrictions on reissue; denominational exchange. Reissue is not permitted solely to change denominations. ...

  7. Design of a Day/Night Star Camera System

    Science.gov (United States)

    Alexander, Cheryl; Swift, Wesley; Ghosh, Kajal; Ramsey, Brian

    1999-01-01

    This paper describes the design of a camera system capable of acquiring stars during both the day and night cycles of a high altitude balloon flight (35-42 km). The camera system will be filtered to operate in the R band (590-810 nm). Simulations have been run using the MODTRAN atmospheric code to determine the worst-case sky brightness at 35 km. With a daytime sky brightness of 2x10^-5 W/sq cm/sr/um in the R band, the sensitivity of the camera system will allow acquisition of at least 1-2 stars/sq degree at star magnitude limits of 8.25-9.00. The system will have an F2.8, 64.3 mm diameter lens and a 1340x1037 CCD array digitized to 12 bits. The CCD array is comprised of 6.8 x 6.8 micron pixels with a well depth of 45,000 electrons and a quantum efficiency of 0.525 at 700 nm. The camera's field of view will be 6.33 sq degree and provide attitude knowledge to 8 arcsec or better. A test flight of the system is scheduled for fall 1999.
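
The quoted numbers are mutually consistent if one reads "64.3 mm diameter" as the aperture of the F2.8 lens, giving a focal length of f-number x diameter, about 180 mm (an assumption; the abstract does not state the focal length). Under that reading:

```python
import math

f_number, aperture_mm = 2.8, 64.3
focal_mm = f_number * aperture_mm              # ~180 mm (assumed reading)

pixel_um = 6.8
plate_scale_arcsec = pixel_um * 1e-3 / focal_mm * 206265.0   # ~7.8 arcsec/pixel

sensor_w_mm = 1340 * pixel_um * 1e-3
sensor_h_mm = 1037 * pixel_um * 1e-3
fov_sq_deg = (math.degrees(sensor_w_mm / focal_mm) *
              math.degrees(sensor_h_mm / focal_mm))          # ~6.5 sq degree
```

Both values are close to the stated 6.33 sq degree field and consistent with 8 arcsec attitude knowledge (sub-pixel star centroiding improves on the per-pixel scale).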

  8. 19 CFR 360.108 - Loss of electronic licensing privileges.

    Science.gov (United States)

    2010-04-01

    ... 19 Customs Duties 3 2010-04-01 2010-04-01 false Loss of electronic licensing privileges. 360.108 Section 360.108 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE STEEL IMPORT MONITORING AND ANALYSIS SYSTEM § 360.108 Loss of electronic licensing privileges. Should Commerce determine...

  9. The creep and intergranular cracking behavior of Ni-Cr-Fe-C alloys in 360 degree C water

    International Nuclear Information System (INIS)

    Angeliu, T.M.; Paraventi, D.J.; Was, G.S.

    1995-01-01

    Mechanical testing of controlled-purity Ni-xCr-9Fe-yC alloys at 360 C revealed an environmental enhancement in IG cracking and time-dependent deformation in high purity and primary water over that exhibited in argon. Dimples on the IG facets indicate a creep void nucleation and growth failure mode. IG cracking was primarily located at the interior of the specimen and not necessarily linked to direct contact with the environment. Controlled potential CERT experiments showed increases in IG cracking as the applied potential decreased, suggesting that hydrogen is detrimental to the mechanical properties. It is proposed that the environment, through the presence of hydrogen, enhances IG cracking by enhancing the matrix dislocation mobility. This is based on observations that dislocation-controlled creep controls the IG cracking of controlled-purity Ni-xCr-9Fe-yC in argon at 360 C and grain boundary cavitation and sliding results that show the environmental enhancement of the creep rate is primarily due to an increase in matrix plastic deformation. However, controlled potential CLT experiments did not exhibit a change in the creep rate as the applied potential decreased. While this does not clearly support hydrogen assisted creep, the material may already be saturated with hydrogen at these applied potentials and thus no effect was realized. Chromium and carbon decrease the IG cracking in high purity and primary water by increasing the creep resistance. The surface film does not play a significant role in the creep or IG cracking behavior under the conditions investigated

  10. 37 CFR 360.22 - Form and content of claims.

    Science.gov (United States)

    2010-07-01

    .... 360.22 Section 360.22 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS SUBMISSION OF ROYALTY CLAIMS FILING OF CLAIMS TO ROYALTY FEES COLLECTED UNDER COMPULSORY LICENSE Digital Audio Recording Devices and Media Royalty Claims § 360.22 Form and content of claims. (a) Forms. (1...

  11. Our Journey to Summon and 360: The KAUST experience

    KAUST Repository

    Ramli, Rindra M.

    2017-09-12

    This presentation depicts the journey undertaken by KAUST (King Abdullah University of Science and Technology), an international graduate research university located on the shores of the Red Sea, in implementing Summon as its new web-scale discovery layer. We also describe the implementation of the 360 suite of products, namely 360 Core, 360 Link, and 360 MARC. The presentation covers the early days of the library's foray into discovery layers and the difficulties faced by the library that gave the impetus to embark on a project to evaluate, assess, and recommend a new and robust discovery layer. In addition, the presenters elaborate on the project timeline (which also includes the implementation phase for Summon and 360 Core), the challenges faced by the project team, and lessons learnt.

  12. 37 CFR 360.25 - Copies of claims.

    Science.gov (United States)

    2010-07-01

    ... Section 360.25 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS SUBMISSION OF ROYALTY CLAIMS FILING OF CLAIMS TO ROYALTY FEES COLLECTED UNDER COMPULSORY LICENSE Digital Audio Recording Devices and Media Royalty Claims § 360.25 Copies of claims. A claimant shall, for each claim...

  13. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Romps, David [Univ. of California, Berkeley, CA (United States); Oktem, Rusen [Univ. of California, Berkeley, CA (United States)

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized and stereo calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pairs, with a 17-19 degree pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory to cover the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined together to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.
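
Pairing the two cameras of a stereo setup and triangulating, as described, reduces to finding the point nearest both back-projected rays once calibration has turned pixels into rays. A minimal midpoint-triangulation sketch (the calibration step itself is assumed already done):

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the shortest segment between rays c1 + t1*d1 and c2 + t2*d2."""
    # Normal equations for min |(c1 + t1*d1) - (c2 + t2*d2)|^2 over (t1, t2).
    a = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(a, b)
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

# Two cameras observing the same cloud feature (illustrative units):
point = np.array([1.0, 2.0, 10.0])
c1, c2 = np.zeros(3), np.array([5.0, 0.0, 0.0])
d1 = (point - c1) / np.linalg.norm(point - c1)
d2 = (point - c2) / np.linalg.norm(point - c2)
estimate = triangulate_midpoint(c1, d1, c2, d2)
```

With noisy real rays the two lines do not intersect exactly, and the midpoint of the shortest connecting segment is a common compromise.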

  14. 24 CFR 1006.360 - Conflict of interest.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 4 2010-04-01 2010-04-01 false Conflict of interest. 1006.360... DEVELOPMENT NATIVE HAWAIIAN HOUSING BLOCK GRANT PROGRAM Program Requirements § 1006.360 Conflict of interest. In the procurement of property and services by the DHHL and contractors, the conflict of interest...

  15. 31 CFR 360.29 - Adjudication of claims.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false Adjudication of claims. 360.29 Section 360.29 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL... the ordinary course of business. (b) Claims filed 10 years after payment. Any claim filed 10 years or...

  16. 31 CFR 360.61 - Payment after death.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false Payment after death. 360.61 Section 360.61 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FISCAL... after death. After the death of the ward, and at any time prior to the representative's discharge, the...

  17. Comprehensive Forced Response Analysis of J2X Turbine Bladed-Discs with 360 Degree Variation in CFD Loading

    Science.gov (United States)

    Elrod, David; Christensen, Eric; Brown, Andrew

    2011-01-01

    The temporal frequency content of the dynamic pressure predicted by a 360 degree computational fluid dynamics (CFD) analysis of a turbine flow field provides indicators of forcing function excitation frequencies (e.g., multiples of blade pass frequency) for turbine components. For the Pratt and Whitney Rocketdyne J-2X engine turbopumps, Campbell diagrams generated using these forcing function frequencies and the results of NASTRAN modal analyses show a number of components with modes in the engine operating range. As a consequence, forced response and static analyses are required for the prediction of combined stress, high cycle fatigue safety factors (HCFSF). Cyclically symmetric structural models have been used to analyze turbine vane and blade rows, not only in modal analyses, but also in forced response and static analyses. Due to the tortuous flow pattern in the turbine, dynamic pressure loading is not cyclically symmetric. Furthermore, CFD analyses predict dynamic pressure waves caused by adjacent and non-adjacent blade/vane rows upstream and downstream of the row analyzed. A MATLAB script has been written to calculate displacements due to the complex cyclically asymmetric dynamic pressure components predicted by CFD analysis, for all grids in a blade/vane row, at a chosen turbopump running speed. The MATLAB displacements are then read into NASTRAN, and dynamic stresses are calculated, including an adjustment for possible mistuning. In a cyclically symmetric NASTRAN static analysis, static stresses due to centrifugal, thermal, and pressure loading at the mode running speed are calculated. MATLAB is used to generate the HCFSF at each grid in the blade/vane row. When compared to an approach assuming cyclic symmetry in the dynamic flow field, the current approach provides better assurance that the worst case safety factor has been identified. An extended example for a J-2X turbopump component is provided.
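
Combining the static stresses (centrifugal, thermal, pressure) with the alternating forced-response stresses into an HCF safety factor is commonly done with a Goodman-type relation. The form and the material limits below are a generic textbook illustration, not the specific criterion used in the J-2X analyses:

```python
def goodman_hcfsf(sigma_alt, sigma_mean, endurance_limit, ultimate_strength):
    # Modified-Goodman high cycle fatigue safety factor:
    #   1/n = sigma_alt/Se + sigma_mean/Su
    return 1.0 / (sigma_alt / endurance_limit + sigma_mean / ultimate_strength)

# Illustrative stresses in MPa (placeholder values, not J-2X data):
n = goodman_hcfsf(sigma_alt=50.0, sigma_mean=100.0,
                  endurance_limit=200.0, ultimate_strength=600.0)  # n ≈ 2.4
```

In the workflow described, this factor would be evaluated at every grid point of the blade/vane row, using the NASTRAN static solution for the mean stress and the forced-response solution (adjusted for mistuning) for the alternating stress.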

  18. FieldSAFE

    DEFF Research Database (Denmark)

    Kragh, Mikkel Fly; Christiansen, Peter; Laursen, Morten Stigaard

    2017-01-01

    In this paper, we present a novel multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 hours of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360-degree camera, lidar, and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles, and vegetation. All obstacles have ground truth object labels...

  19. Estimation of signal intensity for online measurement X-ray pinhole camera

    International Nuclear Information System (INIS)

    Dong Jianjun; Liu Shenye; Yang Guohong; Yu Yanning

    2009-01-01

    The signal intensity was estimated for an on-line measurement X-ray pinhole camera with a CCD as the measurement equipment. The X-ray signal intensity counts after attenuation by Be filters of varied thickness and by flat mirrors of different materials were estimated using the energy spectrum of a certain laser prototype and the quantum efficiency curve of the PI-SX1300 CCD camera. The calculated results indicate that Be filters no thicker than 200 μm can only reduce the signal intensity by one order of magnitude, and so can an Au flat mirror with a 3 degree incident angle or Ni, C and Si flat mirrors with a 5 degree incident angle, but the signal intensity counts for both attenuation methods are beyond the saturation counts of the CCD camera. We also calculated the attenuation of the signal intensity for Be filters of different thicknesses combined with flat mirrors; the results indicate that the combination of a Be filter with a thickness between 20 and 40 μm and an Au flat mirror with a 3 degree incident angle, or a Ni flat mirror with a 5 degree incident angle, is a good choice for the attenuation of the signal intensity. (authors)
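The bookkeeping behind such an estimate multiplies per-element transmission factors and checks the result against the detector's saturation count. The sketch below assumes Beer-Lambert attenuation in the Be filter with an invented attenuation coefficient and an invented mirror reflectivity; none of the numbers come from the paper.

```python
import math

def be_transmission(thickness_um, mu_per_um=0.0115):
    # Beer-Lambert law with an assumed (not measured) attenuation coefficient.
    return math.exp(-mu_per_um * thickness_um)

def detected_counts(incident_counts, thickness_um, mirror_reflectivity):
    # Total attenuation is the product of the individual transmission factors.
    return incident_counts * be_transmission(thickness_um) * mirror_reflectivity

counts = detected_counts(1e8, 30.0, 0.1)  # 30 um Be + one mirror, hypothetical
saturated = counts > 65535                # compare against a 16-bit CCD count
```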

  20. 37 CFR 360.23 - Content of notices regarding independent administrators.

    Science.gov (United States)

    2010-07-01

    ... independent administrators. 360.23 Section 360.23 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS SUBMISSION OF ROYALTY CLAIMS FILING OF CLAIMS TO ROYALTY FEES COLLECTED UNDER COMPULSORY LICENSE Digital Audio Recording Devices and Media Royalty Claims § 360.23 Content of notices...

  1. 20 CFR 416.360 - Cancellation of a request to withdraw.

    Science.gov (United States)

    2010-04-01

    ....360 Section 416.360 Employees' Benefits SOCIAL SECURITY ADMINISTRATION SUPPLEMENTAL SECURITY INCOME FOR THE AGED, BLIND, AND DISABLED Filing of Applications Withdrawal of Application § 416.360... received after we have approved the withdrawal, the cancellation request is filed no later than 60 days...

  2. 31 CFR 360.22 - Payment or reissue pursuant to divorce.

    Science.gov (United States)

    2010-07-01

    ... divorce. 360.22 Section 360.22 Money and Finance: Treasury Regulations Relating to Money and Finance... divorce. (a) Divorce. (1) The Department of the Treasury will recognize a divorce decree that ratifies or.... (2) The evidence required under § 360.23 must be submitted in every case. When the divorce decree...

  3. 42 CFR 495.360 - Software and ownership rights.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Software and ownership rights. 495.360 Section 495.360 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) STANDARDS AND CERTIFICATION STANDARDS FOR THE ELECTRONIC HEALTH RECORD TECHNOLOGY INCENTIVE...

  4. 360-Degree Visual Detection and Target Tracking on an Autonomous Surface Vehicle

    Science.gov (United States)

    Wolf, Michael T; Assad, Christopher; Kuwata, Yoshiaki; Howard, Andrew; Aghazarian, Hrand; Zhu, David; Lu, Thomas; Trebi-Ollennu, Ashitey; Huntsberger, Terry

    2010-01-01

    This paper describes perception and planning systems of an autonomous sea surface vehicle (ASV) whose goal is to detect and track other vessels at medium to long ranges and execute responses to determine whether the vessel is adversarial. The Jet Propulsion Laboratory (JPL) has developed a tightly integrated system called CARACaS (Control Architecture for Robotic Agent Command and Sensing) that blends the sensing, planning, and behavior autonomy necessary for such missions. Two patrol scenarios are addressed here: one in which the ASV patrols a large harbor region and checks for vessels near a fixed asset on each pass and one in which the ASV circles a fixed asset and intercepts approaching vessels. This paper focuses on the ASV's central perception and situation awareness system, dubbed Surface Autonomous Visual Analysis and Tracking (SAVAnT), which receives images from an omnidirectional camera head, identifies objects of interest in these images, and probabilistically tracks the objects' presence over time, even as they may exist outside of the vehicle's sensor range. The integrated CARACaS/SAVAnT system has been implemented on U.S. Navy experimental ASVs and tested in on-water field demonstrations.

  5. 12 CFR Appendix C to Part 360 - Deposit File Structure

    Science.gov (United States)

    2010-01-01

    .... • STATE = State government. • COMM = Commercial. • CORP = Corporate. • BANK = Bank Owned. • DUE TO = Other... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Deposit File Structure C Appendix C to Part 360... RESOLUTION AND RECEIVERSHIP RULES Pt. 360, App. C Appendix C to Part 360—Deposit File Structure This is the...

  6. 31 CFR 360.55 - Individuals authorized to certify.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false Individuals authorized to certify. 360.55 Section 360.55 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued... imprint of either the corporate seal of the institution or of the issuing or paying agent's stamp. The...

  7. TIC360: Concept, Conclusions and Open Lines of Work.

    OpenAIRE

    Prado, Andrés

    2015-01-01

    TIC360: concept, conclusions and open lines of work arising from the CRUE ICT sectorial conference held at the Toledo campus of the UCLM. The conference laid out a 360º vision of ICT in the university, involving all the stakeholders.

  8. 20 CFR 410.360 - Determination of dependency; widow.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Determination of dependency; widow. 410.360... OF 1969, TITLE IV-BLACK LUNG BENEFITS (1969- ) Relationship and Dependency § 410.360 Determination of dependency; widow. (a) General. An individual who is the miner's widow (see § 410.320) will be determined to...

  9. Non-contact measurement of rotation angle with solo camera

    Science.gov (United States)

    Gan, Xiaochuan; Sun, Anbin; Ye, Xin; Ma, Liqun

    2015-02-01

    To measure the rotation angle of an object about its axis, a non-contact rotation angle measurement method based on a single camera is proposed. The intrinsic parameters of the camera were calibrated with a chessboard according to plane calibration theory. The translation matrix and rotation matrix between the object coordinate frame and the camera coordinate frame were calculated from the relationship between the corners' positions on the object and their coordinates in the image. The rotation angle between the measured object and the camera could then be resolved from the rotation matrix. A precise angle dividing table (PADT) was chosen as the reference to verify the angle measurement error of this method. Test results indicated that the rotation angle measurement error of this method did not exceed +/- 0.01 degree.
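The last step described above, recovering the angle from a rotation matrix, follows from the matrix trace: theta = acos((tr(R) - 1)/2). In the sketch below the rotation matrix is synthetic rather than estimated from chessboard corners.

```python
import math

def rotation_angle_deg(R):
    """Rotation angle of a 3x3 rotation matrix, via theta = acos((tr(R)-1)/2)."""
    tr = R[0][0] + R[1][1] + R[2][2]
    return math.degrees(math.acos(max(-1.0, min(1.0, (tr - 1.0) / 2.0))))

def rot_z(deg):
    # Synthetic rotation about the z axis, standing in for the matrix that
    # would come from calibrating against the chessboard corners.
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

angle = rotation_angle_deg(rot_z(23.5))  # recovers the applied 23.5 degrees
```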

  10. Hacking 360 Link: A hybrid approach

    Directory of Open Access Journals (Sweden)

    John Durno

    2012-10-01

    Full Text Available When the University of Victoria Libraries switched from a locally-hosted link resolver (SFX) to a vendor-hosted link resolver (360Link), new strategies were required to effectively integrate the vendor-hosted link resolver with the Libraries' other systems and services. Custom JavaScript is used to add links to the 360Link page; these links then point at local PHP code running on UVic servers, which can redirect to the appropriate local service or display a form directly. An open source PHP OpenURL parser class is announced. Consideration is given to the importance of maintaining open protocols and standards in the transition to vendor-hosted services.
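The article's parser is a PHP class; as a language-neutral illustration of the same idea, the sketch below pulls a few common OpenURL key/value pairs out of a 360Link-style URL in Python. The URL and the key subset are invented.

```python
from urllib.parse import parse_qs, urlparse

def parse_openurl(url, keep=("rft.atitle", "rft.jtitle", "rft.issn", "rft.date")):
    """Extract a whitelist of OpenURL KEV keys from a resolver URL."""
    qs = parse_qs(urlparse(url).query)
    return {k: v[0] for k, v in qs.items() if k in keep}

ctx = parse_openurl(
    "https://resolver.example.edu/360link?rft.atitle=Hacking+360+Link"
    "&rft.jtitle=Code4Lib+Journal&rft.date=2012&sid=test")
```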

  11. COPD360social Online Community: A Social Media Review.

    Science.gov (United States)

    Stellefson, Michael; Paige, Samantha R; Alber, Julia M; Stewart, Margaret

    2018-06-01

    People living with chronic obstructive pulmonary disease (COPD) commonly report feelings of loneliness and social isolation due to lack of support from family, friends, and health care providers. COPD360social is an interactive and disease-specific online community and social network dedicated to connecting people living with COPD to evidence-based resources. Through free access to collaborative forums, members can explore, engage, and discuss an array of disease-related topics, such as symptom management. This social media review provides an overview of COPD360social, specifically its features that practitioners can leverage to facilitate patient-provider communication, knowledge translation, and community building. The potential of COPD360social for chronic disease self-management is maximized through community recognition programming and interactive friend-finding tools that encourage members to share their own stories through blogs and multimedia (e.g., images, videos). The platform also fosters collaborative knowledge dissemination and helping relationships among patients, family members, friends, and health care providers. Successful implementation of COPD360social has dramatically expanded patient education and self-management support resources for people affected by COPD. Practitioners should refer patients and their families to online social networks such as COPD360social to increase knowledge and awareness of evidence-based chronic disease management practices.

  12. ESO unveils an amazing, interactive, 360-degree panoramic view of the entire night sky

    Science.gov (United States)

    2009-09-01

    The first of three images of ESO's GigaGalaxy Zoom project - a new magnificent 800-million-pixel panorama of the entire sky as seen from ESO's observing sites in Chile - has just been released online. The project allows stargazers to explore and experience the Universe as it is seen with the unaided eye from the darkest and best viewing locations in the world. This 360-degree panoramic image, covering the entire celestial sphere, reveals the cosmic landscape that surrounds our tiny blue planet. This gorgeous starscape serves as the first of three extremely high-resolution images featured in the GigaGalaxy Zoom project, launched by ESO within the framework of the International Year of Astronomy 2009 (IYA2009). GigaGalaxy Zoom features a web tool that allows users to take a breathtaking dive into our Milky Way. With this tool users can learn more about many different and exciting objects in the image, such as multicoloured nebulae and exploding stars, just by clicking on them. In this way, the project seeks to link the sky we can all see with the deep, "hidden" cosmos that astronomers study on a daily basis. The wonderful quality of the images is a testament to the splendour of the night sky at ESO's sites in Chile, which are the most productive astronomical observatories in the world. The plane of our Milky Way Galaxy, which we see edge-on from our perspective on Earth, cuts a luminous swath across the image. The projection used in GigaGalaxy Zoom places the viewer in front of our Galaxy with the Galactic Plane running horizontally through the image - almost as if we were looking at the Milky Way from the outside. From this vantage point, the general components of our spiral galaxy come clearly into view, including its disc, marbled with both dark and glowing nebulae, which harbours bright, young stars, as well as the Galaxy's central bulge and its satellite galaxies. The painstaking production of this image came about as a collaboration between ESO, the renowned

  13. Autodesk Infraworks 360 and Autodesk Infraworks 360 LT essentials

    CERN Document Server

    Chappell, Eric

    2015-01-01

    Get up to speed and get to work quickly with the official InfraWorks handbook Autodesk InfraWorks and InfraWorks 360 Essentials, 2nd Edition is your comprehensive, hands-on guide to this popular civil engineering software. This unique guide features concise, straightforward explanations and real world exercises to bring you up to speed on InfraWorks' core features and functions, giving you the skills you need to quickly become productive. Following a workflow-based approach that mirrors how projects progress in the real world, this book walks you through the process of designing a residential

  14. Perancangan dan Pembuatan Alat Scanner 3D Menggunakan Sensor Kinect Xbox 360

    Directory of Open Access Journals (Sweden)

    Arif Armansyah

    2018-04-01

    … to make a 3D scanner with high-accuracy results. The 3D scanner is built using the Xbox 360 Kinect sensor. The Kinect works by combining several cameras: a color CMOS camera (VNA38209015), which assists in object recognition and other detection features, an IR CMOS camera (VCA379C7130), and an IR projector (OG12). The depth sensor - the infrared projector working together with the monochrome CMOS sensor - perceives the room or area in 3D regardless of the lighting conditions. The KScan3D application is used to process and display the results from the scanned object, and the connection between the PC and the drive unit uses a Bluetooth HC-06 module. Testing yielded 3D models with reasonably high accuracy. Keywords: Bluetooth HC-06, infrared, line laser, Kinect, KScan3D, 3D scanner, ultrasonic

  15. 24 CFR 3282.360 - PIA acceptance of product certification programs or listings.

    Science.gov (United States)

    2010-04-01

    ... ENFORCEMENT REGULATIONS Primary Inspection Agencies § 3282.360 PIA acceptance of product certification... 24 Housing and Urban Development 5 2010-04-01 2010-04-01 false PIA acceptance of product certification programs or listings. 3282.360 Section 3282.360 Housing and Urban Development Regulations Relating...

  16. 37 CFR 360.10 - General.

    Science.gov (United States)

    2010-07-01

    ....10 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS SUBMISSION OF ROYALTY CLAIMS FILING OF CLAIMS TO ROYALTY FEES COLLECTED UNDER COMPULSORY LICENSE Satellite Claims § 360... to be entitled to compulsory license royalty fees for secondary transmissions by satellite carriers...

  17. 14 CFR 121.360 - Ground proximity warning-glide slope deviation alerting system.

    Science.gov (United States)

    2010-01-01

    ... deviation alerting system. 121.360 Section 121.360 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION... Equipment Requirements § 121.360 Ground proximity warning-glide slope deviation alerting system. (a) No... system that meets the performance and environmental standards of TSO-C92 (available from the FAA, 800...

  18. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
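The value of the intra-camera estimate (tilt, focal length, camera height) is that it turns pixel coordinates into ground distances. Assuming a flat ground plane and a pinhole model, a row offset below the principal point back-projects as in this sketch; all numbers are illustrative, not from the paper.

```python
import math

def ground_distance(height_m, tilt_deg, focal_px, row_offset_px):
    """Distance to the ground point imaged row_offset_px below the principal
    point, for a camera at height_m with the given downward tilt."""
    ray_angle = math.radians(tilt_deg) + math.atan2(row_offset_px, focal_px)
    return height_m / math.tan(ray_angle)

d_center = ground_distance(4.0, 20.0, 1000.0, 0.0)   # ray dips at the tilt angle
d_below = ground_distance(4.0, 20.0, 1000.0, 200.0)  # lower rows image closer ground
```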

  19. High sensitivity broadband 360GHz passive receiver for TeraSCREEN

    Science.gov (United States)

    Wang, Hui; Oldfield, Matthew; Maestrojuán, Itziar; Platt, Duncan; Brewster, Nick; Viegas, Colin; Alderman, Byron; Ellison, Brian N.

    2016-05-01

    TeraSCREEN is an EU FP7 Security project aimed at developing a combined active (with a frequency channel centered at 360 GHz) and passive (with frequency channels centered at 94, 220 and 360 GHz) imaging system for border controls in airports and commercial ferry ports. The system will include automatic threat detection and classification and has been designed with a strong focus on the ethical, legal and practical aspects of operating in these environments and with the potential threats in mind. Furthermore, both the passive and active systems are based on array receivers, with the active system consisting of a 16-element MIMO FMCW radar centered at 360 GHz with a bandwidth of 30 GHz, utilizing a custom-made direct digital synthesizer. The 16-element passive receiver system at 360 GHz uses commercial Gunn diode oscillators at 90 GHz followed by custom-made 90 to 180 GHz frequency doublers supplying the local oscillator for 360 GHz sub-harmonic mixers. This paper describes the development of the passive antenna module, local oscillator chain, frequency mixers and detectors used in the passive receiver array of this system, and characterizes the complete passive receiver chain.
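The frequency plan reduces to simple arithmetic: the 90 GHz Gunn output is doubled to 180 GHz, and a sub-harmonic mixer effectively mixes the RF against twice that LO, so a signal near 360 GHz lands at a low intermediate frequency. A sketch (the RF offset is illustrative):

```python
def subharmonic_if(rf_ghz, lo_ghz, harmonic=2):
    # A sub-harmonic mixer downconverts against an LO harmonic: IF = |RF - n*LO|.
    return abs(rf_ghz - harmonic * lo_ghz)

lo_ghz = 90.0 * 2                       # Gunn oscillator output doubled to 180 GHz
if_ghz = subharmonic_if(362.0, lo_ghz)  # hypothetical signal 2 GHz above band centre
```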

  20. 37 CFR 360.1 - General.

    Science.gov (United States)

    2010-07-01

    ... Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS SUBMISSION OF ROYALTY CLAIMS FILING OF CLAIMS TO ROYALTY FEES COLLECTED UNDER COMPULSORY LICENSE Cable Claims § 360.1 General... to cable compulsory license royalty fees shall file claims with the Copyright Royalty Board. ...

  1. 44 CFR 360.1 - Purpose.

    Science.gov (United States)

    2010-10-01

    ... intergovernmental endeavor which combines financial and human resources to fill the unique training needs of local... PREPAREDNESS STATE ASSISTANCE PROGRAMS FOR TRAINING AND EDUCATION IN COMPREHENSIVE EMERGENCY MANAGEMENT § 360.1 Purpose. The Emergency Management Training Program is designed to enhance the States' emergency management...

  2. 37 CFR 360.20 - General.

    Science.gov (United States)

    2010-07-01

    ....20 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS SUBMISSION OF ROYALTY CLAIMS FILING OF CLAIMS TO ROYALTY FEES COLLECTED UNDER COMPULSORY LICENSE Digital Audio Recording Devices and Media Royalty Claims § 360.20 General. This subpart prescribes procedures pursuant to 17 U.S.C...

  3. Calibration and verification of thermographic cameras for geometric measurements

    Science.gov (United States)

    Lagüela, S.; González-Jorge, H.; Armesto, J.; Arias, P.

    2011-03-01

    Infrared thermography is a technique with an increasing degree of development and applications. Quality assessment of the measurements performed with thermal cameras should be achieved through metrological calibration and verification. Infrared cameras acquire temperature and geometric information, although calibration and verification procedures are usual only for thermal data; black bodies are used for these purposes. Moreover, the geometric information is important for many fields such as architecture, civil engineering and industry. This work presents a calibration procedure that allows photogrammetric restitution, and a portable artefact to verify the geometric accuracy, repeatability and drift of thermographic cameras. These results allow the incorporation of this information into the quality control processes of the companies. A grid based on burning lamps is used for the geometric calibration of thermographic cameras. The artefact designed for the geometric verification consists of five delrin spheres and seven cubes of different sizes. Metrological traceability for the artefact is obtained from a coordinate measuring machine. Two sets of targets with different reflectivity are fixed to the spheres and cubes to make data processing and photogrammetric restitution possible. Reflectivity was chosen as the distinguishing material property because both the thermographic and visible cameras are able to detect it. Two thermographic cameras, from the manufacturers Flir and Nec, and one visible camera from Jai are calibrated, verified and compared using the calibration grids and the standard artefact. The calibration system based on burning lamps shows its capability to perform the internal orientation of the thermal cameras. Verification results show repeatability better than 1 mm in all cases, and better than 0.5 mm for the visible camera. As expected, accuracy is also higher in the visible camera, and the geometric comparison between thermographic cameras shows slightly better
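Given repeated measurements of an artefact distance against its CMM reference, the verification figures reduce to a mean error (accuracy) and a spread (repeatability). A minimal sketch with invented sample values:

```python
import statistics

ref_mm = 250.000                               # hypothetical CMM-traceable spacing
meas_mm = [250.4, 249.8, 250.6, 250.1, 249.9]  # invented repeated measurements

accuracy_mm = statistics.mean(meas_mm) - ref_mm  # systematic error vs reference
repeatability_mm = statistics.stdev(meas_mm)     # spread of the repeats
```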

  4. 21 CFR 111.360 - What are the requirements for sanitation?

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 2 2010-04-01 2010-04-01 false What are the requirements for sanitation? 111.360... for Manufacturing Operations § 111.360 What are the requirements for sanitation? You must conduct all manufacturing operations in accordance with adequate sanitation principles. ...

  5. Segment scheduling method for reducing 360° video streaming latency

    Science.gov (United States)

    Gudumasu, Srinivas; Asbun, Eduardo; He, Yong; Ye, Yan

    2017-09-01

    360° video is an emerging new format in the media industry enabled by the growing availability of virtual reality devices. It provides the viewer a new sense of presence and immersion. Compared to conventional rectilinear video (2D or 3D), 360° video poses a new and difficult set of engineering challenges on video processing and delivery. Enabling a comfortable and immersive user experience requires very high video quality and very low latency, while the large video file size poses a challenge to delivering 360° video in a quality manner at scale. Conventionally, 360° video represented in equirectangular or other projection formats can be encoded as a single standards-compliant bitstream using existing video codecs such as H.264/AVC or H.265/HEVC. Such a method usually needs very high bandwidth to provide an immersive user experience, and at the client side much of that bandwidth and the computational power used to decode the video are wasted, because the user only watches a small portion (i.e., the viewport) of the entire picture. Viewport-dependent 360° video processing and delivery approaches spend more bandwidth on the viewport than on non-viewports and are therefore able to reduce the overall transmission bandwidth. This paper proposes a dual buffer segment scheduling algorithm for viewport adaptive streaming methods to reduce latency when switching between high quality viewports in 360° video streaming. The approach decouples the scheduling of viewport segments and non-viewport segments to ensure that the requested viewport segment matches the latest user head orientation. A base layer buffer stores all lower quality segments, and a viewport buffer stores high quality viewport segments corresponding to the most recent viewer's head orientation. The scheduling scheme determines the viewport requesting time based on the buffer status and the head orientation. This paper also discusses how to deploy the proposed scheduling design for various viewport adaptive video
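The decoupled scheduling logic can be sketched in a few lines. The tile layout, buffer targets, and orientation-to-tile mapping below are invented for illustration; they are not the algorithm's actual parameters.

```python
def viewport_tile(yaw_deg, num_tiles=8):
    """Map head yaw to one of num_tiles equal horizontal tiles (assumed layout)."""
    return int((yaw_deg % 360.0) / (360.0 / num_tiles))

def next_request(base_buffer_s, viewport_buffer_s, yaw_deg,
                 base_target_s=10.0, viewport_target_s=2.0):
    # The base layer buffers deep (robust to stalls); the viewport buffer is
    # kept short so the requested tile tracks the latest head orientation.
    if base_buffer_s < base_target_s:
        return ("base", None)
    if viewport_buffer_s < viewport_target_s:
        return ("viewport", viewport_tile(yaw_deg))
    return ("idle", None)

req = next_request(base_buffer_s=12.0, viewport_buffer_s=0.5, yaw_deg=100.0)
```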

  6. 46 CFR 174.360 - Calculations.

    Science.gov (United States)

    2010-10-01

    ... GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO SPECIFIC VESSEL TYPES Special Rules Pertaining to Dry Cargo Ships § 174.360 Calculations. Each ship to... for that ship by the International Convention for the Safety of Life at Sea, 1974, as amended, chapter...

  7. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
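A retina-like (log-polar style) layout needs exactly the two ingredients named above: a coordinate transform from (ring, sector) to Cartesian positions, and sub-pixel (here bilinear) interpolation when resampling. Both are sketched below with invented ring/sector counts rather than the actual sensor layout.

```python
import math

def retina_to_cartesian(ring, sector, rings=64, sectors=128, r_max=100.0):
    # Map a (ring, sector) pixel of an assumed concentric layout to (x, y).
    r = r_max * (ring + 0.5) / rings
    theta = 2.0 * math.pi * sector / sectors
    return r * math.cos(theta), r * math.sin(theta)

def bilinear(img, x, y):
    """Sample a row-major 2D list at a fractional (x, y) position."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * img[y0][x0] + fx * (1 - fy) * img[y0][x0 + 1]
            + (1 - fx) * fy * img[y0 + 1][x0] + fx * fy * img[y0 + 1][x0 + 1])

img = [[0.0, 10.0], [20.0, 30.0]]
v = bilinear(img, 0.5, 0.5)  # midpoint of the four samples
```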

  8. A novel super-resolution camera model

    Science.gov (United States)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

    Aiming to realize super resolution (SR) reconstruction for single images and video, a super resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this, a driving device such as piezoelectric ceramics is placed in the camera. By controlling the driving device, a set of continuous low resolution (LR) images can be obtained and stored instantaneously, reflecting both the randomness of the displacements and the real-time performance of the storage. The low resolution image sequences contain different redundant information and some particular prior information, so it is possible to restore a super resolution image accurately and effectively. The sampling method is used to derive the reconstruction principle of super resolution, which analyzes in theory the possible degree of resolution improvement. A learning-based super resolution algorithm is used to reconstruct single images, and the variational Bayesian algorithm is simulated to reconstruct the low resolution images with random displacements, modeling the unknown high resolution image, motion parameters and unknown model parameters in one hierarchical Bayesian framework. Utilizing a sub-pixel registration method, a super resolution image of the scene can be reconstructed. The results of reconstructing from 16 images show that this camera model can increase the image resolution by a factor of 2, obtaining images with higher resolution at currently available hardware levels.
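The core idea, that sub-pixel displaced LR captures jointly carry HR information, can be shown in one dimension: two samplings offset by exactly half a pixel interleave into a signal of twice the resolution. This toy sketch ignores noise, blur, and the Bayesian estimation used in the paper.

```python
def shift_and_add(lr_a, lr_b):
    """Interleave two half-pixel-shifted LR samplings into one 2x HR signal."""
    hr = []
    for a, b in zip(lr_a, lr_b):
        hr.extend([a, b])
    return hr

hr = shift_and_add([0, 2, 4], [1, 3, 5])  # -> [0, 1, 2, 3, 4, 5]
```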

  9. 360-MAM-Affect: Sentiment Analysis with the Google Prediction API and EmoSenticNet

    Directory of Open Access Journals (Sweden)

    Eleanor Mulholland

    2015-08-01

    Full Text Available Online recommender systems are useful for media asset management, where they select the best content from a set of media assets. We have developed an architecture for 360-MAM-Select, a recommender system for educational video content. 360-MAM-Select will utilise sentiment analysis and gamification techniques for the recommendation of media assets, increasing user participation with digital content through improved video recommendations. Here, we discuss the architecture of 360-MAM-Select and the use of the Google Prediction API and EmoSenticNet for 360-MAM-Affect, 360-MAM-Select's sentiment analysis module. Results from testing two models for sentiment analysis, Sentiment Classifier (Google Prediction API) and EmoSenticNetClassifier (Google Prediction API + EmoSenticNet), are promising. Future work includes the implementation and testing of 360-MAM-Select on video data from YouTube EDU and Head Squeeze.
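As a stand-in for the Prediction API classifiers, a lexicon-driven scorer shows the shape of the EmoSenticNet idea: words carry polarity values and a comment's label follows from their sum. The lexicon entries here are invented; the real module calls the Google Prediction API.

```python
# Tiny invented polarity lexicon standing in for EmoSenticNet entries.
LEXICON = {"great": 1, "love": 1, "helpful": 1, "boring": -1, "bad": -1}

def sentiment(text):
    score = sum(LEXICON.get(w.strip(".,!?").lower(), 0) for w in text.split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

label = sentiment("Great video, I love it!")  # -> "positive"
```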

  10. 20 CFR 408.360 - Can you cancel your request to withdraw your application?

    Science.gov (United States)

    2010-04-01

    ... application? 408.360 Section 408.360 Employees' Benefits SOCIAL SECURITY ADMINISTRATION SPECIAL BENEFITS FOR CERTAIN WORLD WAR II VETERANS Filing Applications Withdrawal of Application § 408.360 Can you cancel your... (c) A cancellation request received after we have approved your withdrawal must be filed no later...

  11. 31 CFR 360.60 - Payment to representative of an estate.

    Science.gov (United States)

    2010-07-01

    ... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false Payment to representative of an estate. 360.60 Section 360.60 Money and Finance: Treasury Regulations Relating to Money and Finance... court, other proof of qualification. (2) Except in the case of corporate fiduciaries, the evidence must...

  12. 21 CFR 1.360 - What are the record retention requirements?

    Science.gov (United States)

    2010-04-01

    ... food, including foods preserved by freezing, dehydrating, or being placed in a hermetically sealed... 21 Food and Drugs 1 2010-04-01 2010-04-01 false What are the record retention requirements? 1.360 Section 1.360 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL...

  13. 29 CFR 779.360 - Classification of liquefied-petroleum-gas sales.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Classification of liquefied-petroleum-gas sales. 779.360... Establishments Liquefied-Petroleum-Gas and Fuel Oil Dealers § 779.360 Classification of liquefied-petroleum-gas... ultimate consumer of liquefied-petroleum-gas, whether delivered in portable cylinders or in bulk to the...

  14. 31 CFR 360.23 - Evidence.

    Science.gov (United States)

    2010-07-01

    ... than six months prior to the presentation of the bond. (c) Receiver in equity or similar court officer... BONDS, SERIES I Judicial Proceedings § 360.23 Evidence. (a) General. To establish the validity of... months prior to the presentation of the bond, there must also be submitted a certification from the clerk...

  15. Advances in pediatric gastroenterology: introducing video camera capsule endoscopy.

    Science.gov (United States)

    Siaw, Emmanuel O

    2006-04-01

    The video camera capsule endoscope is a gastrointestinal endoscope approved by the U.S. Food and Drug Administration in 2001 for use in diagnosing gastrointestinal disorders in adults. In 2003, the agency approved the device for use in children ages 10 and older, and the endoscope is currently in use at Arkansas Children's Hospital. A capsule camera, lens, battery, transmitter and antenna together record images of the small intestine as the endoscope makes its way through the bowel. The instrument is used with minimal risk to the patient while offering a high degree of accuracy in diagnosing small intestine disorders.

  16. 31 CFR 360.64 - Payment or reinvestment-voluntary guardian of an incapacitated person.

    Science.gov (United States)

    2010-07-01

    ... guardian of an incapacitated person. 360.64 Section 360.64 Money and Finance: Treasury Regulations Relating..., Absentees, et al. § 360.64 Payment or reinvestment—voluntary guardian of an incapacitated person. (a..., responsible for the owner's care and support may submit an application for recognition as voluntary guardian...

  17. 12 CFR Appendix F to Part 360 - Customer File Structure

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Customer File Structure F Appendix F to Part... POLICY RESOLUTION AND RECEIVERSHIP RULES Pt. 360, App. F Appendix F to Part 360—Customer File Structure This is the structure of the data file to provide to the FDIC information related to each customer who...

  18. Rapid objective measurement of gamma camera resolution using statistical moments.

    Science.gov (United States)

    Hander, T A; Lancaster, J L; Kopp, D T; Lasher, J C; Blumhardt, R; Fox, P T

    1997-02-01

An easy and rapid method for the measurement of the intrinsic spatial resolution of a gamma camera was developed. The measurement is based on the first and second statistical moments of regions of interest (ROIs) applied to bar phantom images. This leads to an estimate of the modulation transfer function (MTF) and the full-width-at-half-maximum (FWHM) of a line spread function (LSF). Bar phantom images were acquired using four large field-of-view (LFOV) gamma cameras (Scintronix, Picker, Searle, Siemens). The following factors important for routine measurements of gamma camera resolution with this method were tested: ROI placement and shape, phantom orientation, spatial sampling, and procedural consistency. A 0.2% coefficient of variation (CV) between repeat measurements of MTF was observed for a circular ROI. CVs of less than 2% were observed for measured MTF values for bar orientations ranging from -10 degrees to +10 degrees with respect to the x and y axes of the camera acquisition matrix. A 256 x 256 matrix (1.6 mm pixel spacing) was judged sufficient for routine measurements, giving an estimate of the FWHM to within 0.1 mm of manufacturer-specified values (3% difference). Under simulated clinical conditions, the variation in measurements attributable to procedural effects yielded a CV of less than 2% in newer generation cameras. The moments method for determining MTF correlated well with a peak-valley method, with an average difference of 0.03 across the range of spatial frequencies tested (0.11-0.17 line pairs/mm, corresponding to 4.5-3.0 mm bars). When compared with the NEMA method for measuring intrinsic spatial resolution, the moments method was found to be within 4% of the expected FWHM.
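The moments idea above can be sketched numerically: the second central moment of a line spread function gives its standard deviation, and under a Gaussian assumption FWHM = 2*sqrt(2 ln 2)*sigma. A minimal sketch with synthetic data, not the paper's phantom images:

```python
import math

def lsf_fwhm_from_moments(profile, pixel_mm):
    """Estimate the FWHM of a line spread function from its first and
    second statistical moments, assuming a roughly Gaussian shape."""
    total = sum(profile)
    xs = [i * pixel_mm for i in range(len(profile))]
    mean = sum(x * p for x, p in zip(xs, profile)) / total        # first moment (centroid)
    var = sum((x - mean) ** 2 * p for x, p in zip(xs, profile)) / total  # second central moment
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * math.sqrt(var)  # FWHM = 2.355 * sigma

# Synthetic LSF: Gaussian with sigma = 1.5 mm sampled at a 1.6 mm pixel pitch
lsf = [math.exp(-0.5 * ((i * 1.6 - 50.0) / 1.5) ** 2) for i in range(64)]
print(round(lsf_fwhm_from_moments(lsf, 1.6), 2))  # ~3.53 mm (2.355 * 1.5)
```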

  19. 37 CFR 360.14 - Copies of claims.

    Science.gov (United States)

    2010-07-01

    ... Section 360.14 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS SUBMISSION OF ROYALTY CLAIMS FILING OF CLAIMS TO ROYALTY FEES COLLECTED UNDER COMPULSORY LICENSE Satellite... royalty fees. ...

  20. Three Hundred Sixty Degree Feedback: program implementation in a local health department.

    Science.gov (United States)

    Swain, Geoffrey R; Schubot, David B; Thomas, Virginia; Baker, Bevan K; Foldy, Seth L; Greaves, William W; Monteagudo, Maria

    2004-01-01

    Three Hundred Sixty Degree Feedback systems, while popular in business, have been less commonly implemented in local public health agencies. At the same time, they are effective methods of improving employee morale, work performance, organizational culture, and attainment of desired organizational outcomes. These systems can be purchased "off-the-shelf," or custom applications can be developed for a better fit with unique organizational needs. We describe the City of Milwaukee Health Department's successful experience customizing and implementing a 360-degree feedback system in the context of its ongoing total quality improvement efforts.

  1. Control Design and Digital Implementation of a Fast 2-Degree-of-Freedom Translational Optical Image Stabilizer for Image Sensors in Mobile Camera Phones.

    Science.gov (United States)

    Wang, Jeremy H-S; Qiu, Kang-Fu; Chao, Paul C-P

    2017-10-13

This study presents the design, digital implementation and performance validation of a lead-lag controller for a 2-degree-of-freedom (DOF) translational optical image stabilizer (OIS) installed with a digital image sensor in mobile camera phones. Nowadays, OIS is an important feature of modern commercial mobile camera phones, which aims to mechanically reduce the image blur caused by hand shaking while shooting photos. The OIS developed in this study is able to move the imaging lens by actuating its voice coil motors (VCMs) at the required speed to the position that significantly compensates for imaging blurs by hand shaking. The compensation proposed is made possible by first establishing the exact, nonlinear equations of motion (EOMs) for the OIS, which is followed by designing a simple lead-lag controller based on the established nonlinear EOMs for simple digital computation via a field-programmable gate array (FPGA) board in order to achieve fast response. Finally, experimental validation is conducted to show the favorable performance of the designed OIS; i.e., it is able to stabilize the lens holder to the desired position within 0.02 s, which is much less than previously reported times of around 0.1 s. Also, the resulting residual vibration is less than 2.2-2.5 μm, which is commensurate with the very small pixel size found in most commercial image sensors, thus significantly minimizing image blur caused by hand shaking.
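A lead-lag compensator of the kind described can be discretized for digital implementation; the sketch below applies the bilinear (Tustin) transform to C(s) = K(s + z)/(s + p). The gains, pole/zero locations and sample time are illustrative only, not the values used in the paper:

```python
class LeadLag:
    """Discrete lead-lag compensator, C(s) = K*(s + z)/(s + p),
    discretized with the bilinear (Tustin) transform s = (2/T)*(1 - q)/(1 + q),
    where q denotes the unit delay. Coefficients are illustrative."""

    def __init__(self, K, z, p, T):
        a = 2.0 / T
        self.b0 = K * (a + z) / (a + p)   # current-error coefficient
        self.b1 = K * (z - a) / (a + p)   # previous-error coefficient
        self.a1 = (p - a) / (a + p)       # recursive (previous-output) coefficient
        self.e_prev = 0.0
        self.u_prev = 0.0

    def step(self, e):
        # Difference equation: u[n] = b0*e[n] + b1*e[n-1] - a1*u[n-1]
        u = self.b0 * e + self.b1 * self.e_prev - self.a1 * self.u_prev
        self.e_prev, self.u_prev = e, u
        return u

# DC gain of C(s) is K*z/p; with K=1, z=10, p=100 a constant input settles at 0.1
ctrl = LeadLag(K=1.0, z=10.0, p=100.0, T=0.001)
u = 0.0
for _ in range(5000):
    u = ctrl.step(1.0)
print(round(u, 4))  # ~0.1
```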

  2. 21 CFR 175.360 - Vinylidene chloride copolymer coatings for nylon film.

    Science.gov (United States)

    2010-04-01

    ... film. 175.360 Section 175.360 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND... coatings for nylon film. Vinylidene chloride copolymer coatings identified in this section and applied on nylon film may be safely used as food-contact surfaces, in accordance with the following prescribed...

  3. 31 CFR 360.90 - Waiver of regulations.

    Science.gov (United States)

    2010-07-01

    ... STATES SAVINGS BONDS, SERIES I Miscellaneous Provisions § 360.90 Waiver of regulations. The Commissioner... unnecessary hardship: (a) If such action would not be inconsistent with law or equity; (b) If it does not...

  4. Development of underwater camera using high-definition camera

    International Nuclear Information System (INIS)

    Tsuji, Kenji; Watanabe, Masato; Takashima, Masanobu; Kawamura, Shingo; Tanaka, Hiroyuki

    2012-01-01

In order to reduce the time for core verification or visual inspection of BWR fuels, an underwater camera using a high-definition camera has been developed. As a result of this development, the underwater camera has two lights, dimensions of 370 x 400 x 328 mm, and a weight of 20.5 kg. Using the camera, six or so spent-fuel IDs can be identified at a distance of 1 to 1.5 m at a time, and a 0.3 mmφ pin-hole can be recognized at a 1.5 m distance with 20x zoom. Noise caused by radiation of less than 15 Gy/h did not affect the images. (author)

  5. Candid camera : video surveillance system can help protect assets

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, L.

    2009-11-15

By combining closed-circuit cameras with sophisticated video analytics to create video sensors for use in remote areas, Calgary-based IntelliView Technologies Inc.'s explosion-proof video surveillance system can help the oil and gas sector monitor its assets. This article discussed the benefits, features, and applications of IntelliView's technology. Benefits include a reduced need for on-site security and operating personnel; the system's patented analytics are embedded in a product known as the SmrtDVR, where the camera's images are stored. The technology can be used in temperatures as cold as minus 50 degrees Celsius and as high as 50 degrees Celsius. The product was commercialized in 2006 when it was used by Nexen Inc. It was concluded that false alarms set off by natural occurrences such as rain, snow, glare and shadows were a huge problem with analytics in the past, but that problem has been solved by IntelliView with its own source code and re-programmed code. 1 fig.

  6. Status of the NectarCAM camera project

    International Nuclear Information System (INIS)

    Glicenstein, J.F.; Delagnes, E.; Fesquet, M.; Louis, F.; Moudden, Y.; Moulin, E.; Nunio, F.; Sizun, P.

    2014-01-01

NectarCAM is a camera designed for the medium-sized telescopes of the Cherenkov Telescope Array (CTA) covering the central energy range 100 GeV to 30 TeV. It has a modular design based on the NECTAr chip, at the heart of which is a GHz sampling Switched Capacitor Array and 12-bit Analog to Digital converter. The camera will be equipped with 265 seven-photomultiplier modules, covering a field of view of 7 to 8 degrees. Each module includes the photomultiplier bases, High Voltage supply, pre-amplifier, trigger, readout and Ethernet transceiver. Recorded events last between a few nanoseconds and tens of nanoseconds. A flexible trigger scheme allows very long events to be read out. NectarCAM can sustain a data rate of 10 kHz. The camera concept, the design and tests of the various sub-components and results of thermal and electrical prototypes are presented. The design includes the mechanical structure, the cooling of electronics, read-out, clock distribution, slow control, data-acquisition, trigger, monitoring and services. A 133-pixel prototype with full scale mechanics, cooling, data acquisition and slow control will be built at the end of 2014. (authors)

  7. [Method for evaluating the competence of specialists--the validation of 360-degree-questionnaire].

    Science.gov (United States)

    Nørgaard, Kirsten; Pedersen, Juri; Ravn, Lisbeth; Albrecht-Beste, Elisabeth; Holck, Kim; Fredløv, Maj; Møller, Lars Krag

    2010-04-19

Assessment of physicians' performance focuses on the quality of their work. The aim of this study was to develop a valid, usable and acceptable multisource feedback assessment tool (MFAT) for hospital consultants. Statements were produced on consultant competencies within non-medical areas like collaboration, professionalism, communication, health promotion, academics and administration. The statements were validated by physicians and later by non-physician professionals after adjustments had been made. In a pilot test, a group of consultants was assessed using the final collection of statements of the MFAT. They received a report with their personal results and subsequently evaluated the assessment method. In total, 66 statements were developed, and after validation they were reduced and reformulated to 35. Mean scores for relevance and "easy to understand" of the statements were in the range between "very high degree" and "high degree". In the pilot test, 18 consultants were assessed by themselves, by 141 other physicians and by 125 other professionals in the hospital. About two thirds benefited greatly from the assessment report and half identified areas for personal development. About a third did not want the head of their department to know the assessment results directly; however, two thirds found a potential value in discussing the results with the head. We developed an MFAT for consultants with relevant and understandable statements. A pilot test confirmed that most of the consultants gained from the assessment, but some did not like to share their results with their heads. For these specialists, other methods should be used.

  8. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the used IGI Penta DigiCAM has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by the program system BLUH. Dense matching by Pix4Dmapper provided 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centres are satisfactorily but not very strongly connected. This leads to very high values of the Student test (T-test) for the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration by IGI, but there are still radial symmetric distortions for the inclined cameras as well, with a size exceeding 5 μm, even though they were described as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity the systematic image errors for corresponding
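The radial symmetric distortion analyzed in such calibrations is commonly modeled with even-order polynomial terms (the Brown-Conrady model); a minimal sketch, with coefficients that are illustrative rather than calibration values:

```python
def radial_distortion(x, y, k1, k2):
    """Brown-Conrady radial-symmetric distortion term: returns the shift
    (dx, dy) of an image point (x, y), in the same units as x and y
    (e.g. mm in the image plane). k1, k2 are illustrative coefficients."""
    r2 = x * x + y * y
    factor = k1 * r2 + k2 * r2 * r2   # k1*r^2 + k2*r^4
    return x * factor, y * factor

# A point 1 mm off-axis with k1 = 0.01, k2 = 0 shifts by 0.01 mm radially
dx, dy = radial_distortion(1.0, 0.0, 0.01, 0.0)
print(dx, dy)  # 0.01 0.0
```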

  9. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    In this article we present an automatic camera calibration algorithm using multiple trajectories in a multiple camera network with non-overlapping field-of-views (FOV). Visible trajectories within a camera FOV are assumed to be measured with respect to the camera local co-ordinate system.

  10. HIMSS Venture+ Forum and HX360 Provide Industry View of Health Technology Innovation, Startup and Investment Activity; Advancing the New Model of Care.

    Science.gov (United States)

    Burde, Howard A; Scarfo, Richard

    2015-01-01

Presented by HIMSS, the Venture+ Forum program and pitch competition provides a 360-degree view of health technology investing and today's top innovative companies. It features exciting 3-minute pitch presentations from emerging and growth-stage companies, investor panels and a networking reception. Recent Venture+ Forum winners include TowerView Health, Prima-Temp, ActualMeds and M3 Clinician. As an industry catalyst for health IT innovation and a business-building resource for growing companies and emerging technology solutions, HIMSS has co-developed with AVIA a new initiative that addresses how emerging technologies, health system business model changes and investment will transform the delivery of care. HX360 engages senior healthcare leaders, innovation teams, investors and entrepreneurs around the vision of transforming healthcare delivery by leveraging technology, process and structure.

  11. A new omni-directional multi-camera system for high resolution surveillance

    Science.gov (United States)

    Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2014-05-01

Omni-directional high resolution surveillance has a wide application range in defense and security fields. Early systems used for this purpose are based on parabolic mirrors or fisheye lenses, where distortion due to the nature of the optical elements cannot be avoided. Moreover, in such systems the image resolution is limited to that of a single image sensor. Recently, the Panoptic camera approach, which mimics the eyes of flying insects using multiple imagers, has been presented. This approach features a novel solution for constructing a spherically arranged wide-FOV plenoptic imaging system in which the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very-high resolution visible spectrum imaging and recording system inspired by the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over (17,700×4,650) pixels (82.3 MP). Real-time video capturing capability is also verified at 30 fps for a resolution over (9,000×2,400) pixels (21.6 MP). The next generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The important capacity of GigaEye-1 opens the door to various post-processing techniques in the surveillance domain, such as large perimeter object tracking, very-high resolution depth map estimation and high dynamic range imaging, which are beyond standard stitching and panorama generation methods.
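The quoted pixel counts can be checked with simple arithmetic; the raw data rate below additionally assumes 8-bit monochrome pixels, which is an assumption for illustration, not a figure from the paper:

```python
def megapixels(w, h):
    """Pixel count in megapixels for a w x h frame."""
    return w * h / 1e6

def raw_rate_gbps(w, h, fps, bits_per_px=8):
    """Uncompressed data rate in Gbit/s (8-bit mono assumed by default)."""
    return w * h * fps * bits_per_px / 1e9

print(round(megapixels(17700, 4650), 1))          # 82.3 MP, as quoted
print(round(megapixels(9000, 2400), 1))           # 21.6 MP, as quoted
print(round(raw_rate_gbps(17700, 4650, 9.5), 2))  # ~6.26 Gbit/s uncompressed
```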

  12. A General Procedure to Assess the Internal Structure of a Noncognitive Measure--The Student360 Insight Program (S360) Time Management Scale. Research Report. ETS RR-11-42

    Science.gov (United States)

    Ling, Guangming; Rijmen, Frank

    2011-01-01

    The factorial structure of the Time Management (TM) scale of the Student 360: Insight Program (S360) was evaluated based on a national sample. A general procedure with a variety of methods was introduced and implemented, including the computation of descriptive statistics, exploratory factor analysis (EFA), and confirmatory factor analysis (CFA).…

  13. 360-degree feedback for medical trainees

    DEFF Research Database (Denmark)

    Holm, Ellen; Holm, Kirsten; Sørensen, Jette Led

    2015-01-01

feedback and assessment. In order to secure reliability, 8-15 respondents are needed. It is a matter of discussion whether the respondents should be chosen by the trainee or by a third party, and whether respondents should be anonymous. The process includes a feedback session with a trained supervisor....

  14. Comparison of auroral ovals from all-sky camera studies and from satellite photographs

    International Nuclear Information System (INIS)

    Bond, F.R.; Akasofu, S.I.

    1979-01-01

    A comparison is made of the statistical auroral ovals determined by all-sky camera photographs with DMSP photographs for different degrees of geomagnetic activity. It is shown that the agreement between them is excellent. (author)

  15. Integrating IPix immersive video surveillance with unattended and remote monitoring (UNARM) systems

    International Nuclear Information System (INIS)

    Michel, K.D.; Klosterbuer, S.F.; Langner, D.C.

    2004-01-01

    Commercially available IPix cameras and software are being researched as a means by which an inspector can be virtually immersed into a nuclear facility. A single IPix camera can provide 360 by 180 degree views with full pan-tilt-zoom capability, and with no moving parts on the camera mount. Immersive video technology can be merged into the current Unattended and Remote Monitoring (UNARM) system, thereby providing an integrated system of monitoring capabilities that tie together radiation, video, isotopic analysis, Global Positioning System (GPS), etc. The integration of the immersive video capability with other monitoring methods already in place provides a significantly enhanced situational awareness to the International Atomic Energy Agency (IAEA) inspectors.

  16. Multi-spectral CCD camera system for ocean water color and seacoast observation

    Science.gov (United States)

    Zhu, Min; Chen, Shiping; Wu, Yanlin; Huang, Qiaolin; Jin, Weiqi

    2001-10-01

One of the earth observing instruments on the HY-1 satellite, which will be launched in 2001, the multi-spectral CCD camera system, was developed by the Beijing Institute of Space Mechanics & Electricity (BISME), Chinese Academy of Space Technology (CAST). In a 798 km orbit, the system can provide images with 250 m ground resolution and a swath of 500 km. It is mainly used for coast zone dynamic mapping and ocean water color monitoring, which includes the pollution of offshore and coast zones, plant cover, water color, ice, terrain underwater, suspended sediment, mudflat, soil and vapor gross. The multi-spectral camera system is composed of four monocolor CCD cameras, which are line-array-based, 'push-broom' scanning cameras, each responding to one of four spectral bands. The camera system adopts view field registration; that is, each camera scans the same region at the same moment. Each of them contains optics, a focal plane assembly, electrical circuits, an installation structure, a calibration system, thermal control and so on. The primary features of the camera system are: (1) offset of the central wavelength is better than 5 nm; (2) degree of polarization is less than 0.5%; (3) signal-to-noise ratio is about 1000; (4) dynamic range is better than 2000:1; (5) registration precision is better than 0.3 pixel; (6) quantization value is 12 bit.
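A quick consistency check of two of the listed figures: representing a 2000:1 dynamic range without clipping needs ceil(log2(2000)) = 11 bits, so the quoted 12-bit quantization leaves headroom:

```python
import math

# Bits needed to cover a 2000:1 dynamic range with integer quantization
bits_needed = math.ceil(math.log2(2000))
print(bits_needed)  # → 11, one bit less than the 12-bit ADC quoted
```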

  17. Gigavision - A weatherproof, multibillion pixel resolution time-lapse camera system for recording and tracking phenology in every plant in a landscape

    Science.gov (United States)

    Brown, T.; Borevitz, J. O.; Zimmermann, C.

    2010-12-01

We have developed a camera system that can record hourly, gigapixel (multi-billion pixel) scale images of an ecosystem in a 360x90 degree panorama. The "Gigavision" camera system is solar-powered and can wirelessly stream data to a server. Quantitative data collection from multiyear timelapse gigapixel images is facilitated through an innovative web-based toolkit for recording time-series data on developmental stages (phenology) from any plant in the camera's field of view. Gigapixel images enable time-series recording of entire landscapes with a resolution sufficient to record phenology from a majority of individuals in entire populations of plants. When coupled with next generation sequencing, quantitative population genomics can be performed in a landscape context, linking ecology and evolution in situ and in real time. The Gigavision camera system achieves gigapixel image resolution by recording rows and columns of overlapping megapixel images. These images are stitched together into a single gigapixel resolution image using commercially available panorama software. Hardware consists of a 5-18 megapixel resolution DSLR or network IP camera mounted on a pair of heavy-duty servo motors that provide pan-tilt capabilities. The servos and camera are controlled with a low-power Windows PC. Servo movement, power switching, and system status monitoring are enabled with Phidgets-brand sensor boards. System temperature, humidity, power usage, and battery voltage are all monitored at 5 minute intervals. All sensor data is uploaded via cellular or 802.11 wireless to an interactive online interface for easy remote monitoring of system status. Systems with direct internet connections upload the full sized images directly to our automated stitching server, where they are stitched and available online for viewing within an hour of capture. Systems with cellular wireless upload an 80 megapixel "thumbnail" of each larger panorama and full-sized images are manually
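The row-and-column tiling described above implies a frame count that can be estimated from the per-frame field of view and the overlap fraction; the per-frame FOV and overlap used below are illustrative values, not the system's actual parameters:

```python
import math

def tile_count(pano_h_deg, pano_v_deg, fov_h_deg, fov_v_deg, overlap=0.3):
    """Rough number of overlapping frames needed to tile a panorama.
    overlap is the fraction of each frame shared with its neighbor."""
    step_h = fov_h_deg * (1.0 - overlap)   # effective horizontal step per frame
    step_v = fov_v_deg * (1.0 - overlap)   # effective vertical step per frame
    cols = math.ceil(pano_h_deg / step_h)
    rows = math.ceil(pano_v_deg / step_v)
    return cols * rows

# 360 x 90 degree panorama, hypothetical 20 x 15 degree frames, 30% overlap
print(tile_count(360, 90, 20, 15))  # → 234 frames (26 columns x 9 rows)
```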

  18. Mobile phone camera benchmarking: combination of camera speed and image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-01-01

When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. For example, ISO 15781 defines several measurements to evaluate various camera system delays. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. Firstly, the most important image quality metrics are collected from standards and papers. Secondly, the speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are also identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are performed via the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are made. This work gives detailed benchmarking results for mobile phone camera systems on the market. The paper also proposes a set of combined benchmarking metrics that includes both quality and speed parameters.
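A combined quality-and-speed benchmark in the spirit of what the paper proposes can be as simple as a weighted sum of normalized sub-scores; the metric names and weights below are illustrative only, not the paper's actual metrics:

```python
def combined_score(metrics, weights):
    """Weighted sum of benchmark sub-scores. Both arguments are dicts
    keyed by metric name; each metric is assumed pre-normalized to
    [0, 1] with higher = better."""
    total_w = sum(weights.values())
    return sum(metrics[k] * weights[k] for k in weights) / total_w

# Hypothetical normalized sub-scores mixing quality and speed aspects
m = {"sharpness": 0.8, "noise": 0.7, "shot_to_shot_delay": 0.6}
w = {"sharpness": 2.0, "noise": 1.0, "shot_to_shot_delay": 1.0}
print(round(combined_score(m, w), 3))  # → 0.725
```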

  19. Vision-based control of robotic arm with 6 degrees of freedom

    OpenAIRE

    Versleegers, Wim

    2014-01-01

This paper studies the procedure to program a vertically articulated robot with six degrees of freedom, the Mitsubishi Melfa RV-2SD, with Matlab. A major drawback of the programming software provided by Mitsubishi is that it barely allows the use of vision-based programming. The number of usable cameras is limited and, moreover, the cameras are very expensive. Using Matlab, these limitations could be overcome. However, there is no direct way to control the robot with Matlab. The goal of this p...

  20. Status of the Dark Energy Survey Camera (DECam) Project

    Energy Technology Data Exchange (ETDEWEB)

    Flaugher, Brenna L.; Abbott, Timothy M.C.; Angstadt, Robert; Annis, Jim; Antonik, Michelle, L.; Bailey, Jim; Ballester, Otger.; Bernstein, Joseph P.; Bernstein, Rebbeca; Bonati, Marco; Bremer, Gale; /Fermilab /Cerro-Tololo InterAmerican Obs. /ANL /Texas A-M /Michigan U. /Illinois U., Urbana /Ohio State U. /University Coll. London /LBNL /SLAC /IFAE

    2012-06-29

    The Dark Energy Survey Collaboration has completed construction of the Dark Energy Camera (DECam), a 3 square degree, 570 Megapixel CCD camera which will be mounted on the Blanco 4-meter telescope at CTIO. DECam will be used to perform the 5000 sq. deg. Dark Energy Survey with 30% of the telescope time over a 5 year period. During the remainder of the time, and after the survey, DECam will be available as a community instrument. All components of DECam have been shipped to Chile and post-shipping checkout finished in Jan. 2012. Installation is in progress. A summary of lessons learned and an update of the performance of DECam and the status of the DECam installation and commissioning will be presented.

  1. Status of the Dark Energy Survey Camera (DECam) project

    Energy Technology Data Exchange (ETDEWEB)

    Flaugher, Brenna L.; McLean, Ian S.; Ramsay, Suzanne K.; Abbott, Timothy M. C.; Angstadt, Robert; Takami, Hideki; Annis, Jim; Antonik, Michelle L.; Bailey, Jim; Ballester, Otger; Bernstein, Joseph P.; Bernstein, Rebecca A.; Bonati, Marco; Bremer, Gale; Briones, Jorge; Brooks, David; Buckley-Geer, Elizabeth J.; Campa, Juila; Cardiel-Sas, Laia; Castander, Francisco; Castilla, Javier; Cease, Herman; Chappa, Steve; Chi, Edward C.; da Costa, Luis; DePoy, Darren L.; Derylo, Gregory; de Vincente, Juan; Diehl, H. Thomas; Doel, Peter; Estrada, Juan; Eiting, Jacob; Elliott, Anne E.; Finley, David A.; Flores, Rolando; Frieman, Josh; Gaztanaga, Enrique; Gerdes, David; Gladders, Mike; Guarino, V.; Gutierrez, G.; Grudzinski, Jim; Hanlon, Bill; Hao, Jiangang; Holland, Steve; Honscheid, Klaus; Huffman, Dave; Jackson, Cheryl; Jonas, Michelle; Karliner, Inga; Kau, Daekwang; Kent, Steve; Kozlovsky, Mark; Krempetz, Kurt; Krider, John; Kubik, Donna; Kuehn, Kyler; Kuhlmann, Steve E.; Kuk, Kevin; Lahav, Ofer; Langellier, Nick; Lathrop, Andrew; Lewis, Peter M.; Lin, Huan; Lorenzon, Wolfgang; Martinez, Gustavo; McKay, Timothy; Merritt, Wyatt; Meyer, Mark; Miquel, Ramon; Morgan, Jim; Moore, Peter; Moore, Todd; Neilsen, Eric; Nord, Brian; Ogando, Ricardo; Olson, Jamieson; Patton, Kenneth; Peoples, John; Plazas, Andres; Qian, Tao; Roe, Natalie; Roodman, Aaron; Rossetto, B.; Sanchez, E.; Soares-Santos, Marcelle; Scarpine, Vic; Schalk, Terry; Schindler, Rafe; Schmidt, Ricardo; Schmitt, Richard; Schubnell, Mike; Schultz, Kenneth; Selen, M.; Serrano, Santiago; Shaw, Terri; Simaitis, Vaidas; Slaughter, Jean; Smith, R. Christopher; Spinka, Hal; Stefanik, Andy; Stuermer, Walter; Sypniewski, Adam; Talaga, R.; Tarle, Greg; Thaler, Jon; Tucker, Doug; Walker, Alistair R.; Weaverdyck, Curtis; Wester, William; Woods, Robert J.; Worswick, Sue; Zhao, Allen

    2012-09-24

    The Dark Energy Survey Collaboration has completed construction of the Dark Energy Camera (DECam), a 3 square degree, 570 Megapixel CCD camera which will be mounted on the Blanco 4-meter telescope at CTIO. DECam will be used to perform the 5000 sq. deg. Dark Energy Survey with 30% of the telescope time over a 5 year period. During the remainder of the time, and after the survey, DECam will be available as a community instrument. All components of DECam have been shipped to Chile and post-shipping checkout finished in Jan. 2012. Installation is in progress. A summary of lessons learned and an update of the performance of DECam and the status of the DECam installation and commissioning will be presented.

  2. 44 CFR 360.2 - Description of program.

    Science.gov (United States)

    2010-10-01

    ... for travel and per diem expenses of students selected by the States for courses reflecting... Section 360.2 Emergency Management and Assistance FEDERAL EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF... costs and participant's travel and per diem. These specifics of date, place, and costs will be required...

  3. 12 CFR Appendix G to Part 360 - Deposit-Customer Join File Structure

    Science.gov (United States)

    2010-01-01

    ..._Code Relationship CodeThe code indicating how the customer is related to the account. Possible values... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Deposit-Customer Join File Structure G Appendix... GENERAL POLICY RESOLUTION AND RECEIVERSHIP RULES Pt. 360, App. G Appendix G to Part 360—Deposit-Customer...

  4. 37 CFR 360.5 - Copies of claims.

    Science.gov (United States)

    2010-07-01

    ... Section 360.5 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS SUBMISSION OF ROYALTY CLAIMS FILING OF CLAIMS TO ROYALTY FEES COLLECTED UNDER COMPULSORY LICENSE Cable Claims... hand delivery or by mail, file an original and one copy of the claim to cable royalty fees. ...

  5. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    Science.gov (United States)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

The authors have been concentrating on developing convenient 3D measurement methods using consumer grade digital cameras, and have concluded that consumer grade digital cameras are expected to become useful photogrammetric devices for various close range application fields. On the other hand, mobile phone cameras with 10 megapixels have appeared on the market in Japan. In these circumstances, we face the epoch-making question of whether mobile phone cameras are able to take the place of consumer grade digital cameras in close range photogrammetric applications. In order to evaluate the potential of mobile phone cameras in close range photogrammetry, comparative evaluations between mobile phone cameras and consumer grade digital cameras are investigated in this paper with respect to lens distortion, reliability, stability and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer grade digital cameras were conducted indoors using a test target. Furthermore, the practicability of mobile phone cameras for close range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras are able to take the place of consumer grade digital cameras and to develop the market in digital photogrammetric fields.

  6. 19 CFR 360.104 - Steel import monitoring.

    Science.gov (United States)

    2010-04-01

    ... ANALYSIS SYSTEM § 360.104 Steel import monitoring. (a) Throughout the duration of the licensing requirement... include import quantity (metric tons), import Customs value (U.S. $), and average unit value ($/metric ton... and will also present a range of historical data for comparison purposes. Provision of this aggregate...

  7. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column, one above the other, through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head.

  8. Optical Design of the Camera for Transiting Exoplanet Survey Satellite (TESS)

    Science.gov (United States)

    Chrisp, Michael; Clark, Kristin; Primeau, Brian; Dalpiaz, Michael; Lennon, Joseph

    2015-01-01

    The optical design of the wide field of view refractive camera, 34 degrees diagonal field, for the TESS payload is described. This fast f/1.4 cryogenic camera, operating at -75 °C, has no vignetting for maximum light gathering within the size and weight constraints. Four of these cameras capture full frames of star images for photometric searches of planet crossings. The optical design evolution, from the initial Petzval design, took advantage of Forbes aspheres to develop a hybrid design form. This maximized the correction from the two aspherics, resulting in a reduction of average spot size by sixty percent in the final design. An external long wavelength pass filter was replaced by an internal filter coating on a lens to save weight, and has been fabricated to meet the specifications. The stray light requirements were met by an extended lens hood baffle design, giving the necessary off-axis attenuation.

  9. Fixed-focus camera objective for small remote sensing satellites

    Science.gov (United States)

    Topaz, Jeremy M.; Braun, Ofer; Freiman, Dov

    1993-09-01

    An athermalized objective has been designed for a compact, lightweight push-broom camera which is under development at El-Op Ltd. for use in small remote-sensing satellites. The high-performance objective has a fixed focus setting, but maintains focus passively over the full range of temperatures encountered in small satellites. The lens is an F/5.0, 320 mm focal length Tessar type, operating over the range 0.5-0.9 µm. It has a 16° field of view and accommodates various state-of-the-art silicon detector arrays. The design and performance of the objective are described in this paper.

  10. Extending Driving Vision Based on Image Mosaic Technique

    Directory of Open Access Journals (Sweden)

    Chen Deng

    2017-01-01

    Full Text Available Car cameras have been used extensively to assist driving by making the surroundings of the car visible. However, due to the limitation of the Angle of View (AoV), dead zones still exist, and they are a primary cause of car accidents. In this paper, we introduce a system that extends the driver's vision to 360 degrees. Our system consists of four wide-angle cameras mounted on different sides of a car. Although the AoV of each camera is within 180 degrees, our system relies on the image mosaic technique to seamlessly integrate the 4-channel videos into a panoramic video. The panoramic video enables drivers to observe everything around the car as far as three meters away from a top view. We performed experiments in a laboratory environment. Preliminary results show that our system can eliminate the vision dead zone completely. Additionally, the real-time performance of our system satisfies the requirements for practical use.
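
The mosaic step described above typically maps each camera's pixels into a common panorama frame through a 3×3 homography. A minimal sketch with an invented matrix (real systems estimate it from matched features or from the known mounting geometry of each camera):

```python
import numpy as np

# Hypothetical homography mapping front-camera pixels into the top-view
# panorama frame; the numbers are illustrative only.
H = np.array([[1.0, 0.2, 50.0],
              [0.0, 1.1, 10.0],
              [0.0, 0.001, 1.0]])

def warp_point(H, x, y):
    """Project pixel (x, y) through homography H using homogeneous
    coordinates, as done when warping each camera into the panorama."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

u, v = warp_point(H, 100.0, 200.0)
```

Warping every pixel of all four cameras this way, then blending in the overlap regions, yields the seamless surround view.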

  11. Accuracy Potential and Applications of MIDAS Aerial Oblique Camera System

    Science.gov (United States)

    Madani, M.

    2012-07-01

    Airborne oblique cameras such as the Fairchild T-3A were initially used for military reconnaissance in the 1930s. A modern professional digital oblique camera such as MIDAS (Multi-camera Integrated Digital Acquisition System) is used to generate lifelike three-dimensional models for visualization, GIS applications, architectural modeling, city modeling, games, simulators, etc. Oblique imagery provides the best vantage for assessing and reviewing changes to the local government tax base and property valuations, and for making better and more timely decisions when buying and selling residential and commercial property. Oblique imagery is also used for infrastructure monitoring, ensuring the safe operation of transportation, utilities, and facilities. Sanborn Mapping Company acquired one MIDAS from TrackAir in 2011. This system consists of four tilted (45 degrees) cameras and one vertical camera connected to a dedicated data acquisition computer system. The 5 digital cameras are based on the Canon EOS 1DS Mark III with Zeiss lenses. The CCD size is 5,616 by 3,744 (21 Mpixels) with a pixel size of 6.4 microns. Multiple flights using different camera configurations (nadir/oblique (28 mm/50 mm) and (50 mm/50 mm)) were flown over downtown Colorado Springs, Colorado. Boresight flights for the 28 mm nadir camera were flown at 600 m and 1,200 m, and for the 50 mm nadir camera at 750 m and 1,500 m. Cameras were calibrated by using a 3D cage and multiple convergent images utilizing the Australis model. In this paper, the MIDAS system is described; a number of real data sets collected during the aforementioned flights are presented together with their associated flight configurations; the data processing, system calibration, and quality control workflows are highlighted; and the achievable accuracy is presented in some detail. This study revealed that an accuracy of about 1 to 1.5 GSD (Ground Sample Distance) in planimetry and about 2 to 2.5 GSD in the vertical can be achieved.
Remaining systematic
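
The accuracy figures are quoted in multiples of the Ground Sample Distance; for a nadir-looking pinhole camera, GSD is pixel pitch times flying height divided by focal length. Using the pixel size and flight parameters quoted above:

```python
def ground_sample_distance(pixel_pitch_m, altitude_m, focal_length_m):
    """Ground footprint of one pixel for a nadir-looking pinhole camera."""
    return pixel_pitch_m * altitude_m / focal_length_m

# 6.4 micron pixels, 28 mm nadir lens, 600 m flying height (from the text)
gsd = ground_sample_distance(6.4e-6, 600.0, 28e-3)
print(round(gsd, 3))  # ~0.137 m per pixel
```

So at 600 m the quoted planimetric accuracy of 1 to 1.5 GSD corresponds to roughly 14-21 cm on the ground.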

  12. ACCURACY POTENTIAL AND APPLICATIONS OF MIDAS AERIAL OBLIQUE CAMERA SYSTEM

    Directory of Open Access Journals (Sweden)

    M. Madani

    2012-07-01

    Full Text Available Airborne oblique cameras such as the Fairchild T-3A were initially used for military reconnaissance in the 1930s. A modern professional digital oblique camera such as MIDAS (Multi-camera Integrated Digital Acquisition System) is used to generate lifelike three-dimensional models for visualization, GIS applications, architectural modeling, city modeling, games, simulators, etc. Oblique imagery provides the best vantage for assessing and reviewing changes to the local government tax base and property valuations, and for making better and more timely decisions when buying and selling residential and commercial property. Oblique imagery is also used for infrastructure monitoring, ensuring the safe operation of transportation, utilities, and facilities. Sanborn Mapping Company acquired one MIDAS from TrackAir in 2011. This system consists of four tilted (45 degrees) cameras and one vertical camera connected to a dedicated data acquisition computer system. The 5 digital cameras are based on the Canon EOS 1DS Mark III with Zeiss lenses. The CCD size is 5,616 by 3,744 (21 Mpixels) with a pixel size of 6.4 microns. Multiple flights using different camera configurations (nadir/oblique (28 mm/50 mm) and (50 mm/50 mm)) were flown over downtown Colorado Springs, Colorado. Boresight flights for the 28 mm nadir camera were flown at 600 m and 1,200 m, and for the 50 mm nadir camera at 750 m and 1,500 m. Cameras were calibrated by using a 3D cage and multiple convergent images utilizing the Australis model. In this paper, the MIDAS system is described; a number of real data sets collected during the aforementioned flights are presented together with their associated flight configurations; the data processing, system calibration, and quality control workflows are highlighted; and the achievable accuracy is presented in some detail. This study revealed that an accuracy of about 1 to 1.5 GSD (Ground Sample Distance) in planimetry and about 2 to 2.5 GSD in the vertical can be achieved.
Remaining

  13. Measurement of defects on the wall by use of the inclination angle of laser slit beam and position tracking algorithm of camera

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Hwan; Yoon, Ji Sup; Jung, Jae Hoo; Hong, Dong Hee; Park, Gee Yong

    2001-01-01

    In this paper, a method of measuring the size of defects on a wall and reconstructing the defect image is proposed, based on an algorithm that estimates the camera orientation from the inclination angle of a line slit beam. To reconstruct the image, an algorithm for estimating the horizontal inclination angle of the CCD camera is presented. This algorithm adopts a 3-dimensional coordinate transformation of the image plane in which both the laser beam and the original image of the defects exist. The estimation equation is obtained by using the information of the beam projected on the wall, and the parameters of this equation are obtained experimentally. With this algorithm, the original image of the defect can be reconstructed into the image that would be obtained by a camera normal to the wall. A series of experiments shows that the measurement accuracy of the defect is within a 0.5% error bound of the real defect size for horizontal inclination angles of up to 30 degrees. Beyond that, the accuracy deteriorates at an error rate of 1% for every 10-degree increase of the horizontal inclination angle. The estimation error increases in the range of 30-50 degrees due to the existence of a dead zone of defect depth, and the defect length cannot be measured above 70 degrees due to the disappearance of image data. Under water, the measurement accuracy is also affected by the changed fields of view of both the camera and the laser slit beam caused by refraction in the water. The proposed algorithm provides a method of reconstructing an image taken at an arbitrary camera orientation into the image that would be obtained by a camera normal to the wall, and thus it enables the accurate measurement of defect lengths using only a single camera and a laser slit beam.
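
Re-rendering an inclined view as if the camera were normal to the wall can be sketched as a rotation homography H = K·R·K⁻¹, since images taken from the same center of projection are related by a pure rotation. This is an illustrative sketch, not the authors' estimation algorithm, and the intrinsic matrix K below is hypothetical:

```python
import numpy as np

def rectifying_homography(K, pan_deg):
    """Homography that re-renders an image as if the camera had been
    rotated by pan_deg about its vertical axis, e.g. turned from an
    inclined view back to one normal to the wall."""
    a = np.radians(pan_deg)
    # Rotation about the camera's vertical (y) axis
    R = np.array([[np.cos(a), 0.0, np.sin(a)],
                  [0.0,       1.0, 0.0      ],
                  [-np.sin(a), 0.0, np.cos(a)]])
    return K @ R @ np.linalg.inv(K)

# Hypothetical intrinsics: 800 px focal length, principal point (320, 240)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
H = rectifying_homography(K, 30.0)  # 30 degrees: within the accurate range
```

Warping the image with `H` and then measuring the defect in the rectified frame is the general idea behind "reconstructing the image taken at an arbitrary camera orientation".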

  14. 37 CFR 360.2 - Time of filing.

    Science.gov (United States)

    2010-07-01

    ... ROYALTY CLAIMS FILING OF CLAIMS TO ROYALTY FEES COLLECTED UNDER COMPULSORY LICENSE Cable Claims § 360.2... compulsory license royalty fees for secondary transmissions of one or more of its works during the preceding calendar year shall file a claim to such fees with the Copyright Royalty Board. No royalty fees shall be...

  15. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. Supports are swingably mounted upon a column one above the other

  16. Neutron imaging system based on a video camera

    International Nuclear Information System (INIS)

    Dinca, M.

    2004-01-01

    possess versatile and unique readout capabilities that have established their utility in scientific and especially for radiation field applications. A detector for neutron radiography based on a cooled CID camera offers some capabilities, as follows: - Extended linear dynamic range up to 10⁹ without blooming or streaking; - Arbitrary pixel selection and nondestructive readout makes it possible to introduce a high degree of exposure control to low-light viewing of static scenes; - Read multiple areas of interest of an image within a given frame at higher rates; - Wide spectral response (185 nm - 1100 nm); - CIDs tolerate high radiation environments up to 3 Mrad integrated dose; - The contiguous pixel structure of CID arrays contributes to accurate imaging because there are virtually no opaque areas between pixels. (author)

  17. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Toshiba began manufacturing black-and-white radiation-resistant camera tubes employing non-browning faceplate glass for ITV cameras used in nuclear power plants a long time ago. Now, in response to increasing demand in the nuclear power field, the company is developing radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. Presented here are the results of experiments on the characteristics of materials for single color-camera tubes and the prospects for commercializing the tubes. (author)

  18. Portable retinal imaging for eye disease screening using a consumer-grade digital camera

    Science.gov (United States)

    Barriga, Simon; Larichev, Andrey; Zamora, Gilberto; Soliz, Peter

    2012-03-01

    The development of affordable means to image the retina is an important step toward the implementation of eye disease screening programs. In this paper we present the i-RxCam, a low-cost, hand-held retinal camera for widespread applications such as tele-retinal screening for eye diseases like diabetic retinopathy (DR), glaucoma, and age-related ocular diseases. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease of use) to be distributed widely to low-volume clinics, such as the offices of single primary care physicians serving rural communities. The i-RxCam uses a Nikon D3100 digital camera body. The camera has a CMOS sensor with 14.8 million pixels. We use a 50 mm focal length lens that gives a retinal field of view of 45 degrees. The internal autofocus can compensate for about 2 D (diopters) of focusing error. The light source is an LED produced by Philips with a linear emitting area that is transformed using a light pipe to the optimal shape at the eye pupil, an annulus. To eliminate the corneal reflex we use a polarization technique in which the light passes through a nano-wire polarizer plate. This is a novel type of polarizer featuring high polarization separation (contrast ratio of more than 1000) and a very large acceptance angle (>45 degrees). The i-RxCam approach will yield a significantly more economical retinal imaging device that would allow mass screening of the at-risk population.

  19. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
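
The twice-per-rev error is diagnosed through the auto-covariance function of the inter-camera quaternion; the idea can be sketched on a synthetic signal (not GRACE data), where a periodic error reappears as peaks at multiples of its period:

```python
import numpy as np

def autocovariance(x, max_lag):
    """Biased sample autocovariance of a 1-D signal for lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

# Synthetic "twice-per-rev" error: 2 cycles per 100-sample revolution
t = np.arange(1000)
signal = np.sin(2 * np.pi * 2 * t / 100.0)
acov = autocovariance(signal, 200)
# Peaks recur every 50 samples, exposing the periodic error
```

For the real data, the same statistic is computed on the components of the inter-camera quaternion series rather than on a scalar sine.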

  20. A fuzzy automated object classification by infrared laser camera

    Science.gov (United States)

    Kanazawa, Seigo; Taniguchi, Kazuhiko; Asari, Kazunari; Kuramoto, Kei; Kobashi, Syoji; Hata, Yutaka

    2011-06-01

    Home security at night is very important, and a system that watches a person's movements is useful for security. This paper describes a system for classifying adults, children, and other objects from the distance distribution measured by an infrared laser camera. This camera radiates near-infrared waves and receives the reflected ones, converting the time of flight into a distance distribution. Our method consists of 4 steps. First, we perform background subtraction and noise rejection on the distance distribution. Second, we apply fuzzy clustering to the distance distribution, forming several clusters. Third, we extract features such as the height, thickness, aspect ratio, and area ratio of each cluster. Then, we construct fuzzy if-then rules from knowledge of adults, children, and other objects so as to classify each cluster as adult, child, or other object; here, we define a fuzzy membership function for each feature. Finally, we assign each cluster to the class with the highest fuzzy degree among adult, child, and other object. In our experiment, we set up the camera in a room and tested three cases. The method successfully classified them with real-time processing.
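
The rule-based classification in the last two steps can be sketched with triangular membership functions over a single feature; the height ranges below are invented for illustration, and the paper combines several features (thickness, aspect ratio, area ratio) rather than height alone:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function rising from a, peaking at b,
    and falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical membership functions over cluster height (metres)
def classify(height_m):
    degrees = {
        "adult": tri(height_m, 1.4, 1.7, 2.1),
        "child": tri(height_m, 0.7, 1.1, 1.5),
        "other": tri(height_m, 0.0, 0.4, 0.9),
    }
    # Assign the class with the highest fuzzy degree
    return max(degrees, key=degrees.get), degrees

label, _ = classify(1.65)
print(label)  # adult
```

With several features, each rule takes the minimum membership across its antecedents before the final argmax, which is the usual Mamdani-style evaluation.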

  1. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position-sensitive radiation detector, the novel system can produce images superior to those of conventional cameras. The optimal thicknesses and positions of the collimators are derived mathematically. (U.K.)

  2. Picosecond camera

    International Nuclear Information System (INIS)

    Decroisette, Michel

    A Kerr cell activated by infrared pulses of a mode-locked Nd glass laser acts as an ultra-fast periodic shutter with an opening time of a few ps. Associated with an S.T.L. camera, it constitutes a picosecond camera allowing us to study very fast effects [fr

  3. OCAMS: The OSIRIS-REx Camera Suite

    Science.gov (United States)

    Rizk, B.; Drouet d'Aubigny, C.; Golish, D.; Fellows, C.; Merrill, C.; Smith, P.; Walker, M. S.; Hendershot, J. E.; Hancock, J.; Bailey, S. H.; DellaGiustina, D. N.; Lauretta, D. S.; Tanner, R.; Williams, M.; Harshman, K.; Fitzgibbon, M.; Verts, W.; Chen, J.; Connors, T.; Hamara, D.; Dowd, A.; Lowman, A.; Dubin, M.; Burt, R.; Whiteley, M.; Watson, M.; McMahon, T.; Ward, M.; Booher, D.; Read, M.; Williams, B.; Hunten, M.; Little, E.; Saltzman, T.; Alfred, D.; O'Dougherty, S.; Walthall, M.; Kenagy, K.; Peterson, S.; Crowther, B.; Perry, M. L.; See, C.; Selznick, S.; Sauve, C.; Beiser, M.; Black, W.; Pfisterer, R. N.; Lancaster, A.; Oliver, S.; Oquest, C.; Crowley, D.; Morgan, C.; Castle, C.; Dominguez, R.; Sullivan, M.

    2018-02-01

    The OSIRIS-REx Camera Suite (OCAMS) will acquire images essential to collecting a sample from the surface of Bennu. During proximity operations, these images will document the presence of satellites and plumes, record spin state, enable an accurate model of the asteroid's shape, and identify any surface hazards. They will confirm the presence of sampleable regolith on the surface, observe the sampling event itself, and image the sample head in order to verify its readiness to be stowed. They will document Bennu's history as an example of early solar system material, as a microgravity body with a planetesimal size-scale, and as a carbonaceous object. OCAMS is fitted with three cameras. The MapCam will record color images of Bennu as a point source on approach to the asteroid in order to connect Bennu's ground-based point-source observational record to later higher-resolution surface spectral imaging. The SamCam will document the sample site before, during, and after it is disturbed by the sample mechanism. The PolyCam, using its focus mechanism, will observe the sample site at sub-centimeter resolutions, revealing surface texture and morphology. While their imaging requirements divide naturally between the three cameras, they preserve a strong degree of functional overlap. OCAMS and the other spacecraft instruments will allow the OSIRIS-REx mission to collect a sample from a microgravity body on the same visit during which it was first optically acquired from long range, a useful capability as humanity reaches out to explore near-Earth, Main-Belt and Jupiter Trojan asteroids.

  4. 2 CFR 180.360 - What happens if I fail to disclose information required under § 180.355?

    Science.gov (United States)

    2010-01-01

    ... Doing Business With Other Persons Disclosing Information-Lower Tier Participants § 180.360 What happens... 2 Grants and Agreements 1 2010-01-01 2010-01-01 false What happens if I fail to disclose information required under § 180.355? 180.360 Section 180.360 Grants and Agreements OFFICE OF MANAGEMENT AND...

  5. Streets? Where We're Going, We Don't Need Streets

    Science.gov (United States)

    Bailey, J.

    2017-12-01

    In 2007 Google Street View started as a project to provide 360-degree imagery along streets, but in the decade since it has evolved into a platform through which to explore everywhere from the slopes of Everest, to the middle of the Amazon rainforest, to under the ocean. As camera technology has evolved it has also become a tool for ground-truthing maps, and has provided scientific observations, storytelling, and education. The Google Street View "special collects" team has undertaken increasingly challenging projects across 80+ countries and every continent, all of which culminated in possibly the most ambitious collection yet: the capture of Street View on board the International Space Station. Learn about the preparation and obstacles behind this and other special collects. Explore these datasets through both Google Earth and Google Expeditions VR, an educational tool that takes students on virtual field trips using 360-degree imagery.

  6. Reducing the Variance of Intrinsic Camera Calibration Results in the ROS Camera_Calibration Package

    Science.gov (United States)

    Chiou, Geoffrey Nelson

    The intrinsic calibration of a camera is the process in which the internal optical and geometric characteristics of the camera are determined. If accurate intrinsic parameters of a camera are known, the ray in 3D space that every point in the image lies on can be determined. Pairing with another camera allows the positions of the points in the image to be calculated by intersection of the rays. Accurate intrinsics also allow the position and orientation of a camera relative to some world coordinate system to be calculated. These two reasons for having an accurate intrinsic calibration are especially important in the field of industrial robotics, where 3D cameras are frequently mounted on the ends of manipulators. In the ROS (Robot Operating System) ecosystem, the camera_calibration package is the default standard for intrinsic camera calibration. Several researchers from the Industrial Robotics & Automation division at Southwest Research Institute have noted that this package results in large variances in the intrinsic parameters of the camera when calibrating across multiple attempts. There are also open issues on this matter in their public repository that have not been addressed by the developers. In this thesis, we confirm that the camera_calibration package does indeed return different results across multiple attempts, test several possible hypotheses as to why, identify the reason, and provide a simple solution that fixes the cause of the issue.
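
The variance the thesis investigates can be quantified by simply aggregating the intrinsic parameters returned by repeated calibration attempts; the focal-length values below are illustrative, not measurements from the thesis:

```python
import numpy as np

# Hypothetical focal-length estimates (pixels) from repeated runs of an
# intrinsic calibration on the same camera.
fx_runs = np.array([612.3, 604.1, 618.9, 601.7, 615.2, 607.8])

mean = fx_runs.mean()
spread = fx_runs.std(ddof=1)  # sample standard deviation across attempts
print(f"fx = {mean:.1f} +/- {spread:.1f} px")
```

A spread of several pixels in `fx`, as in this made-up sample, is the kind of run-to-run variance the thesis sets out to explain and reduce.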

  7. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera which tolerates a total dose of 10⁶-10⁸ rad was developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was carried out. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, and pan/tilt controllers) were designed on the concept of remote control. Two types of radiation-tolerant cameras were fabricated, for use in underwater and in normal environments. (author)

  8. Commercialization of radiation tolerant camera

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera which tolerates a total dose of 10⁶-10⁸ rad was developed. In order to develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was carried out. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, and pan/tilt controllers) were designed on the concept of remote control. Two types of radiation-tolerant cameras were fabricated, for use in underwater and in normal environments. (author)

  9. Cameras in mobile phones

    Science.gov (United States)

    Nummela, Ville; Viinikanoja, Jarkko; Alakarhu, Juha

    2006-04-01

    Camera phones are one of the fastest growing consumer markets today. During the past few years total volume has grown fast, and today millions of mobile phones with cameras are sold. At the same time, the resolution and functionality of the cameras have been growing from CIF toward DSC level. From a camera point of view, the mobile world is an extremely challenging field. Cameras should deliver good image quality in a small size. They also need to be reliable, and their construction should be suitable for mass manufacturing. All components of the imaging chain should be well optimized in this environment. Image quality and usability are the most important parameters to the user. The current trend of adding more megapixels to cameras while at the same time using smaller pixels affects both. On the other hand, reliability and miniaturization are key drivers for product development, as is cost. In an optimized solution all parameters are in balance, but the process of finding the right trade-offs is not an easy task. In this paper, trade-offs related to optics and their effects on the image quality and usability of cameras are discussed. Key development areas from the mobile phone camera point of view are also listed.

  10. 33 CFR 96.360 - Interim Safety Management Certificate: what is it and when can it be used?

    Science.gov (United States)

    2010-07-01

    ...? § 96.360 Interim Safety Management Certificate: what is it and when can it be used? (a) A responsible... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Interim Safety Management Certificate: what is it and when can it be used? 96.360 Section 96.360 Navigation and Navigable Waters COAST...

  11. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to the use of cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a great presence of 'digital cameras' aimed more at the home markets. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs which offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application would be such effects as 'pixel jitter,' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog to digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  12. Divergence-ratio axi-vision camera (Divcam): A distance mapping camera

    International Nuclear Information System (INIS)

    Iizuka, Keigo

    2006-01-01

    A novel distance mapping camera, the divergence-ratio axi-vision camera (Divcam), is proposed. The decay rate of the illuminating light with distance, due to the divergence of the light, is used as the means of mapping distance. Resolutions of 10 mm over a range of meters and 0.5 mm over a range of decimeters were achieved. The special features of this camera are its high-resolution real-time operation, simplicity, compactness, light weight, portability, and low fabrication cost. The feasibility of various potential applications is also discussed.
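
The principle of mapping distance from a decay ratio can be illustrated with a pure inverse-square model of two sources offset along the viewing axis; this is an idealized sketch of the idea, not the exact Divcam optics:

```python
def distance_from_ratio(intensity_near, intensity_far, offset_m):
    """Distance implied by the intensity ratio of two point sources offset
    by offset_m along the viewing axis, assuming pure inverse-square decay.
    Illustrative model of 'distance from a divergence ratio' only."""
    ratio = (intensity_near / intensity_far) ** 0.5
    return offset_m / (ratio - 1.0)

# Object at 2 m, sources 0.1 m apart: I_near/I_far = (2.1/2.0)**2
d = distance_from_ratio((2.1 / 2.0) ** 2, 1.0, 0.1)
print(round(d, 6))  # recovers 2.0 m
```

Because the ratio cancels the unknown surface reflectance, each pixel's two readings suffice to map distance without any scanning.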

  13. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped significantly, opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.
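
The nature of thermal radiation mentioned at the end explains the cameras' passband: by Wien's displacement law, bodies near room temperature emit most strongly in the long-wave infrared. A quick check:

```python
# Wien's displacement law: peak emission wavelength of a blackbody.
WIEN_B = 2.897771955e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(temp_k):
    """Wavelength (micrometres) at which blackbody emission peaks."""
    return WIEN_B / temp_k * 1e6

# Human skin (~305 K) peaks deep in the long-wave infrared band,
# which is why thermal cameras see people without any illumination.
print(round(peak_wavelength_um(305.0), 1))  # ~9.5 um
```

This is why typical thermal cameras are built around the 8-14 µm atmospheric window rather than visible or near-infrared wavelengths.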

  14. Camera processing with chromatic aberration.

    Science.gov (United States)

    Korneliussen, Jan Tore; Hirakawa, Keigo

    2014-10-01

    Since the refractive index of the materials commonly used for lenses depends on the wavelength of light, practical camera optics fail to converge light to a single point on the image plane. Known as chromatic aberration, this phenomenon distorts image details by introducing magnification error, defocus blur, and color fringes. Though achromatic and apochromatic lens designs reduce chromatic aberration to a degree, they are complex and expensive, and they do not offer a perfect correction. In this paper, we propose a new postcapture processing scheme designed to overcome these problems computationally. Specifically, the proposed solution comprises a chromatic aberration-tolerant demosaicking algorithm and post-demosaicking chromatic aberration correction. Experiments with simulated and real sensor data verify that the chromatic aberration is effectively corrected.
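
The magnification-error component of lateral chromatic aberration can be sketched as a per-channel rescaling about the image centre; the scale factors below are hypothetical, and this is not the paper's demosaicking-based method:

```python
import numpy as np

def rescale_channel(channel, scale):
    """Nearest-neighbour resample of one colour channel by `scale` about
    the image centre. Scaling a channel up or down compensates a
    per-channel magnification error (lateral chromatic aberration)."""
    h, w = channel.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Sample the source at coordinates shrunk/stretched about the centre
    src_y = np.clip(np.rint(cy + (yy - cy) / scale), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(cx + (xx - cx) / scale), 0, w - 1).astype(int)
    return channel[src_y, src_x]

# Hypothetical per-channel magnifications: green is the reference,
# red is rendered slightly too large and blue slightly too small.
rgb = np.random.rand(64, 64, 3)
corrected = np.dstack([
    rescale_channel(rgb[..., 0], 1.0 / 1.002),  # shrink red a little
    rgb[..., 1],
    rescale_channel(rgb[..., 2], 1.0 / 0.998),  # stretch blue a little
])
```

Production pipelines use subpixel interpolation and calibrated, radially varying scale factors; nearest-neighbour sampling keeps the sketch dependency-free.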

  15. ALGORITHM OF PLACEMENT OF VIDEO SURVEILLANCE CAMERAS AND ITS SOFTWARE IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

    Full Text Available Comprehensive distributed safety, control, and monitoring systems applied by companies and organizations of different ownership structures play a substantial role in present-day society. Video surveillance elements that ensure image processing and decision making in automated or automatic modes are the essential components of new systems. This paper covers the modeling of video surveillance systems installed in buildings, and the algorithm, or pattern, of video camera placement with due account for nearly all characteristics of buildings, detection and recognition facilities, and the cameras themselves. This algorithm will subsequently be implemented as a user application. The project contemplates a comprehensive approach to the automatic placement of cameras that takes account of their mutual positioning and the compatibility of tasks. The project objective is to develop the principal elements of an algorithm for the recognition of a moving object to be detected by several cameras. The images obtained by different cameras will be processed, and parameters of motion are to be identified to develop a table of possible route options. The implementation of the recognition algorithm represents an independent research project to be covered by a different article. This project consists of assessing the degree of complexity of a camera placement algorithm designed to identify cases of inaccurate algorithm implementation, as well as of formulating supplementary requirements and input data by means of intersecting the sectors covered by neighbouring cameras. The project also contemplates the identification of potential problems in the course of development of a physical security and monitoring system at the project design and testing stage. The camera placement algorithm has been implemented as a software application that has already been pilot tested on buildings and inside premises that have irregular dimensions. The

  16. HAL/S-360 compiler system specification

    Science.gov (United States)

    Johnson, A. E.; Newbold, P. N.; Schulenberg, C. W.; Avakian, A. E.; Varga, S.; Helmers, P. H.; Helmers, C. T., Jr.; Hotz, R. L.

    1974-01-01

    A three phase language compiler is described which produces IBM 360/370 compatible object modules and a set of simulation tables to aid in run time verification. A link edit step augments the standard OS linkage editor. A comprehensive run time system and library provide the HAL/S operating environment, error handling, a pseudo real time executive, and an extensive set of mathematical, conversion, I/O, and diagnostic routines. The specifications of the information flow and content for this system are also considered.

  17. Research and Application of Autodesk Fusion360 in Industrial Design

    Science.gov (United States)

    Song, P. P.; Qi, Y. M.; Cai, D. C.

    2018-05-01

    In 2016, Fusion 360, a product introduced by Autodesk that integrates industrial design, structural design, mechanical simulation, and CAM, emerged as a design platform supporting collaboration and sharing both cross-platform and via the cloud. In previous products, design and manufacturing used to be isolated. In the course of design, research, and development, communication between designers and engineers used to pass through different software products, tool commands, and even industry terms. Moreover, difficulty also lies in the communication between design thoughts and machining strategies. Naturally, a difficult product design and R&D process would trigger a noticeable gap between the design model and the actual product. A complete product development process tends to cover several major areas, such as industrial design, mechanical design, rendering and animation, computer aided engineering (CAE), and computer aided manufacturing (CAM). Fusion 360 solves the technical problems of cross-platform data exchange, realizes effective control of cross-regional collaboration, and breaks the barriers between art and manufacturing and the blocks between design and processing. The “Eco-development of the Fusion 360 Industrial Chain” is both a significant means to and an inevitable trend for manufacturers and industrial designers carrying out innovation in China.

  18. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of that quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in radiation density of the exposure, to maintain the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index for maintaining its detectability and causing the proper centering of the radiation camera image.
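The exposure-termination principle described above can be sketched in a few lines (the frame-based counting model and all names are illustrative assumptions, not taken from the patent): accumulate the radiation measured in the predetermined control area and stop as soon as the preset quantity is reached, returning the accumulated value as the calibration index.

```python
def expose(counts_per_frame, area_threshold):
    """Accumulate per-frame radiation counts from the predetermined control
    area and terminate the exposure once `area_threshold` is reached.

    Returns (frames_used, accumulated); the accumulated value plays the
    role of the calibration index described in the abstract.
    """
    accumulated = 0
    frames = 0
    for counts in counts_per_frame:
        frames += 1
        accumulated += counts
        if accumulated >= area_threshold:
            break  # terminate exposure: predetermined density reached
    return frames, accumulated

frames, index = expose([120, 150, 140, 160, 155], area_threshold=400)
print(frames, index)  # → 3 410: exposure ends on the third frame
```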

  19. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gener...

  20. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  1. X-ray adapter with four freedom degrees for the curvilinear surface study

    International Nuclear Information System (INIS)

    Barakhtin, B.K.; Petrov, P.P.; Moskvin, A.I.

    1989-01-01

    A four-degree-of-freedom adapter mounted on the goniometer of a DRON X-ray diffractometer is described. The adapter consists of a spring specimen holder, a worm pair that provides for specimen rotation through 360 deg, and three coordinate tables. Investigations of the prestressed-deformed state of the near-surface layer of units and structures with curvilinear surfaces are carried out using an X-ray diffractometer equipped with the adapter described. The accuracy of determination of residual stresses at the surface of a turbine vane is no worse than ±50 MPa.

  2. Camera trap arrays improve detection probability of wildlife: Investigating study design considerations using an empirical dataset.

    Science.gov (United States)

    O'Connor, Kelly M; Nathan, Lucas R; Liberati, Marjorie R; Tingley, Morgan W; Vokoun, Jason C; Rittenhouse, Tracy A G

    2017-01-01

    Camera trapping is a standard tool in ecological research and wildlife conservation. Study designs, particularly for small-bodied or cryptic wildlife species, often attempt to boost low detection probabilities by using non-random camera placement or baited cameras, which may bias data or incorrectly estimate detection and occupancy. We investigated the ability of non-baited, multi-camera arrays to increase detection probabilities of wildlife. Study design components were evaluated for their influence on wildlife detectability by iteratively parsing an empirical dataset (1) by different sizes of camera arrays deployed (1-10 cameras), and (2) by total season length (1-365 days). Four species from our dataset that represented a range of body sizes and differing degrees of presumed detectability based on life history traits were investigated: white-tailed deer (Odocoileus virginianus), bobcat (Lynx rufus), raccoon (Procyon lotor), and Virginia opossum (Didelphis virginiana). For all species, increasing from a single camera to a multi-camera array significantly improved detection probability across the range of season lengths and number of study sites evaluated. The use of a two camera array increased survey detection an average of 80% (range 40-128%) from the detection probability of a single camera across the four species. Species that were detected infrequently benefited most from a multiple-camera array, where the addition of up to eight cameras produced significant increases in detectability. However, for species detected at high frequencies, single cameras produced a season-long (i.e., the length of time over which cameras are deployed and actively monitored) detectability greater than 0.75. These results highlight the need for researchers to be critical about camera trap study designs based on their intended target species, as detectability for each focal species responded differently to array size and season length. We suggest that researchers a priori identify
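Assuming each camera in an array detects independently with the same per-camera probability p (a simplifying assumption of this sketch, not a claim of the study), the array-level detection probability follows 1 - (1 - p)^k:

```python
def array_detection_prob(p_single, n_cameras):
    """Detection probability of an n-camera array, assuming each camera
    detects independently with probability p_single per survey period."""
    return 1 - (1 - p_single) ** n_cameras

p1 = 0.30                                # illustrative single-camera probability
p2 = array_detection_prob(p1, 2)
print(round(p2, 2), round((p2 - p1) / p1 * 100))  # → 0.51 70
```

Under this independence baseline a two-camera array with p = 0.3 gives a 70% relative gain, the same order as the 40-128% range reported above; correlated detections between co-located cameras would lower the real-world gain.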

  3. Camera trap arrays improve detection probability of wildlife: Investigating study design considerations using an empirical dataset.

    Directory of Open Access Journals (Sweden)

    Kelly M O'Connor

    Full Text Available Camera trapping is a standard tool in ecological research and wildlife conservation. Study designs, particularly for small-bodied or cryptic wildlife species often attempt to boost low detection probabilities by using non-random camera placement or baited cameras, which may bias data, or incorrectly estimate detection and occupancy. We investigated the ability of non-baited, multi-camera arrays to increase detection probabilities of wildlife. Study design components were evaluated for their influence on wildlife detectability by iteratively parsing an empirical dataset (1) by different sizes of camera arrays deployed (1-10 cameras), and (2) by total season length (1-365 days). Four species from our dataset that represented a range of body sizes and differing degrees of presumed detectability based on life history traits were investigated: white-tailed deer (Odocoileus virginianus), bobcat (Lynx rufus), raccoon (Procyon lotor), and Virginia opossum (Didelphis virginiana). For all species, increasing from a single camera to a multi-camera array significantly improved detection probability across the range of season lengths and number of study sites evaluated. The use of a two camera array increased survey detection an average of 80% (range 40-128%) from the detection probability of a single camera across the four species. Species that were detected infrequently benefited most from a multiple-camera array, where the addition of up to eight cameras produced significant increases in detectability. However, for species detected at high frequencies, single cameras produced a season-long (i.e., the length of time over which cameras are deployed and actively monitored) detectability greater than 0.75. These results highlight the need for researchers to be critical about camera trap study designs based on their intended target species, as detectability for each focal species responded differently to array size and season length. We suggest that researchers a priori

  4. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    section unearths what characterizes the literature on camera movement. The second section of the dissertation delineates the history of camera movement itself within narrative cinema. Several organizational principles subtending the on-screen effect of camera movement are revealed in section two...... but they are not organized into a coherent framework. This is the task that section three meets in proposing a functional taxonomy for camera movement in narrative cinema. Two presumptions subtend the taxonomy: That camera movement actively contributes to the way in which we understand the sound and images on the screen......, commentative or valuative manner. 4) Focalization: associating the movement of the camera with the viewpoints of characters or entities in the story world. 5) Reflexive: inviting spectators to engage with the artifice of camera movement. 6) Abstract: visualizing abstract ideas and concepts. In order...

  5. The prototype cameras for trans-Neptunian automatic occultation survey

    Science.gov (United States)

    Wang, Shiang-Yu; Ling, Hung-Hsu; Hu, Yen-Sang; Geary, John C.; Chang, Yin-Chang; Chen, Hsin-Yo; Amato, Stephen M.; Huang, Pin-Jie; Pratlong, Jerome; Szentgyorgyi, Andrew; Lehner, Matthew; Norton, Timothy; Jorden, Paul

    2016-08-01

    The Transneptunian Automated Occultation Survey (TAOS II) is a three-robotic-telescope project to detect stellar occultation events generated by Trans-Neptunian Objects (TNOs). The TAOS II project aims to monitor about 10000 stars simultaneously at 20 Hz to enable a statistically significant event rate. The TAOS II camera is designed to cover the 1.7-degree-diameter field of view of the 1.3 m telescope with 10 mosaic 4.5k×2k CMOS sensors. The new CMOS sensor (CIS 113) has a back-illuminated thinned structure and high sensitivity, providing performance similar to that of back-illuminated thinned CCDs. Due to the requirements of high performance and high speed, development of the new CMOS sensor is still in progress. Before the science arrays are delivered, a prototype camera has been developed to help with commissioning of the robotic telescope system. The prototype camera uses the small-format e2v CIS 107 device but with the same dewar and similar control electronics as the TAOS II science camera. The sensors, mounted on a single Invar plate, are cooled by a cryogenic cooler to about 200 K, the same operating temperature as the science array. The Invar plate is connected to the dewar body through a supporting ring with three G10 bipods. The control electronics consists of an analog part and a Xilinx FPGA based digital circuit. One FPGA is needed to control and process the signal from each CMOS sensor for 20 Hz region-of-interest (ROI) readout.

  6. FitKids360: Design, Conduct, and Outcomes of a Stage 2 Pediatric Obesity Program

    Directory of Open Access Journals (Sweden)

    Jared M. Tucker

    2014-01-01

    Full Text Available This paper describes FitKids360, a stage 2 pediatric weight management program. FitKids360 is a physician-referred, multicomponent, low-cost healthy lifestyle program for overweight and obese youth 5–16 years of age and their families. FitKids360 provides an evidence-based approach to the treatment of pediatric overweight by targeting patients’ physical activity, screen time, and dietary behaviors using a family-centered approach. The intervention begins with a two-hour orientation and assessment period followed by six weekly sessions. Assessments include lifestyle behaviors, anthropometry, and the Family Nutrition and Physical Activity (FNPA) survey, which screens for obesogenic risk factors in the home environment. Outcomes are presented from 258 patients who completed one of 33 FitKids360 classes. After completing FitKids360, patients increased moderate to vigorous physical activity by 14 minutes (P=0.019), reduced screen time by 44 minutes (P<0.001), and improved key dietary behaviors. Overall, FNPA scores increased by 9% (P<0.001) and 69% of patients with “high risk” FNPA scores at baseline dropped below the “high risk” range by follow-up. Patients also lowered BMIs (P=0.011) and age- and sex-adjusted BMI z-scores (P<0.001) after completing the 7-week program. We hope this report will be useful to medical and public health professionals seeking to develop stage 2 pediatric obesity programs.

  7. Our Journey to Summon and 360: The KAUST experience

    KAUST Repository

    Ramli, Rindra M.

    2017-01-01

    to embark on the project to evaluate, assess and recommend for a new and robust discovery layer. On top of that, the presenters would elaborate the project timeline (which also include the implementation phase for Summon and 360 Core), the challenges faced

  8. Non-invasive diagnostics of ion beams in strong toroidal magnetic fields with standard CMOS cameras

    Science.gov (United States)

    Ates, Adem; Ates, Yakup; Niebuhr, Heiko; Ratzinger, Ulrich

    2018-01-01

    A superconducting Figure-8 stellarator-type magnetostatic Storage Ring (F8SR) is under investigation at the Institute for Applied Physics (IAP) at Goethe University Frankfurt. Besides numerical simulations on an optimized design for beam transport and injection, a scaled-down (0.6 T) experiment with two 30° toroidal magnets has been set up for further investigations. A great challenge is the development of a non-destructive, magnetically insensitive, and flexible detector for local investigations of an ion beam propagating through the toroidal magnetostatic field. This paper introduces a new way of beam path measurement by residual gas monitoring. It uses a single-board camera connected to a standard single-board computer by a camera serial interface, all placed inside the vacuum chamber. First experiments were done with one camera, and in a next step two cameras arranged at 90 degrees to each other were installed. With the help of the two cameras, which are movable along the beam pipe, the theoretical predictions were successfully verified experimentally, confirming previous experimental results. The transport of H+ and H2+ ion beams with energies of 7 keV and beam currents of about 1 mA was successfully investigated.

  9. Examination of the ''Ultra-wide-angle Compton camera'' in Fukushima

    International Nuclear Information System (INIS)

    Takeda, Shin'ichiro; Watanabe, Shin; Takahashi, Tadayuki

    2012-01-01

    The Japan Aerospace Exploration Agency (JAXA) has built the camera in the title, which can visualize gamma-ray-emitting radioactive substances over a wide-angle view of almost 180 degrees (a hemisphere); this paper explains its technological details and an actual examination in Iitatemura Village, Fukushima Prefecture. The camera has a detector module consisting of a 5-layer stack of 2 layers of Si double-sided strip detectors (Si-DSD) and 3 layers of CdTe-DSD at 4 mm pitch; their device size and electrode pitch are made the same, which allows the detector trays and the analog application specific integrated circuit (ASIC) to share common read-out circuits, reducing cost. Two modules are placed side by side to increase sensitivity and were car-mounted, operating at -5 degrees, for the examination. The CdTe-DSD has a Pt cathode and an Al anode (Pt/CdTe/Al) to reduce leakage current and improve energy resolution for the 137Cs gamma ray (662 keV). Data from the detector are digital pulse-height values, which are converted to hit information of detected position and energy. Hit events showing the photoelectric absorption peak in CdTe, originating from Compton scattering in Si, are selected and back-projected onto the celestial hemisphere; each event yields a torus depending on the direction of the gamma ray, and the accumulation of tori specifies the position of the source. In an area of the village with an ambient dose environment of 2-3 μSv/h, locally accumulated radioactive substances (30 μSv/h) were successfully visualized. With the soft gamma-ray detector of the ASTRO-H satellite under development at JAXA, the improved camera can be made more sensitive and may be useful, for example, in decontamination, to monitor results in real time. (T.T.)
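The back-projected torus described above is the Compton cone whose opening angle follows from the energies deposited in the Si scatterer and the CdTe absorber; the sketch below applies the standard Compton kinematics under the assumption that the scattered photon is fully absorbed in CdTe (the event energies are illustrative, not from the paper):

```python
import math

ELECTRON_MASS_KEV = 511.0  # electron rest energy, m_e c^2

def compton_cone_angle(e_scatter_kev, e_absorb_kev):
    """Opening angle (radians) of the Compton cone for an event depositing
    e_scatter in the Si layer and e_absorb (the scattered photon, fully
    absorbed) in the CdTe layer: cos(theta) = 1 - m_e c^2 (1/E' - 1/E)."""
    e_total = e_scatter_kev + e_absorb_kev
    cos_theta = 1.0 - ELECTRON_MASS_KEV * (1.0 / e_absorb_kev - 1.0 / e_total)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden event")
    return math.acos(cos_theta)

# illustrative 137Cs event: 662 keV split between the two layers
theta = compton_cone_angle(200.0, 462.0)
print(round(math.degrees(theta), 1))
```

Accumulating the circles where many such cones intersect the celestial hemisphere is what localizes the source in the imaging step described above.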

  10. Caliste 64, an innovative CdTe hard X-ray micro-camera

    International Nuclear Information System (INIS)

    Meuris, A.; Limousin, O.; Pinsard, F.; Le Mer, I.; Lugiez, F.; Gevin, O.; Delagnes, E.; Vassal, M.C.; Soufflet, F.; Bocage, R.

    2008-01-01

    A prototype 64-pixel miniature camera has been designed and tested for the Simbol-X hard X-ray observatory to be flown on the joint CNES-ASI space mission in 2014. This device is called Caliste 64. It is a high performance spectro-imager with event time-tagging capability, able to detect photons between 2 keV and 250 keV. Caliste 64 is the assembly of a 1 or 2 mm thick CdTe detector mounted on top of a readout module. CdTe detectors equipped with aluminum Schottky barrier contacts are used because of their very low dark current and excellent spectroscopic performance. The front-end electronics is a stack of four IDeF-X V1.1 ASICs, arranged perpendicular to the detection plane, to read out each pixel independently. The whole camera fits in a 10 × 10 × 20 mm³ volume and is juxtaposable on its four sides. This allows the device to be used as an elementary unit in a larger array of Caliste 64 cameras. Noise performance resulted in an ENC better than 60 electrons rms on average. The first prototype camera was tested at -10 degrees C with a bias of -400 V. The spectrum summed across the 64 pixels results in a resolution of 697 eV FWHM at 13.9 keV and 808 eV FWHM at 59.54 keV. (authors)

  11. Researches regarding glyphosate effectiveness on the degree of weed control in grape plantation

    Directory of Open Access Journals (Sweden)

    Monica NEGREA

    2010-11-01

    Full Text Available In this paper, the degree of weed control in a grape plantation (Burgund variety) was determined when using chemical treatments with herbicides and agro-technical measures. The herbicide used was Roundup at 3 l/ha and 4 l/ha (glyphosate isopropylamine salt, 360 g/l) applied in 4 experimental variants. The degree of weed presence, the types of weeds destroyed, and their degree of participation were determined. The predominant weed species in the studied grape plantation were Agropyron repens (20.15%), Geranium dissectum (17.91%), Capsella bursa-pastoris (15.67%), and Avena fatua (13.43%). The ephemeral weeds Veronica hederifolia and Stellaria media had a participation rate of 8.96%. Perennial weeds represented 40.30%, while annual weeds were 59.70%. The herbicide Roundup provides the most effective control at a dose of 3 or 4 l/ha combined with mechanical weeding + 1 manual weeding, control rates being over 90%.

  12. INFN Camera demonstrator for the Cherenkov Telescope Array

    CERN Document Server

    Ambrosi, G; Aramo, C.; Bertucci, B.; Bissaldi, E.; Bitossi, M.; Brasolin, S.; Busetto, G.; Carosi, R.; Catalanotti, S.; Ciocci, M.A.; Consoletti, R.; Da Vela, P.; Dazzi, F.; De Angelis, A.; De Lotto, B.; de Palma, F.; Desiante, R.; Di Girolamo, T.; Di Giulio, C.; Doro, M.; D'Urso, D.; Ferraro, G.; Ferrarotto, F.; Gargano, F.; Giglietto, N.; Giordano, F.; Giraudo, G.; Iacovacci, M.; Ionica, M.; Iori, M.; Longo, F.; Mariotti, M.; Mastroianni, S.; Minuti, M.; Morselli, A.; Paoletti, R.; Pauletta, G.; Rando, R.; Fernandez, G. Rodriguez; Rugliancich, A.; Simone, D.; Stella, C.; Tonachini, A.; Vallania, P.; Valore, L.; Vagelli, V.; Verzi, V.; Vigorito, C.

    2015-01-01

    The Cherenkov Telescope Array is a world-wide project for a new generation of ground-based Cherenkov telescopes of the Imaging class with the aim of exploring the highest energy region of the electromagnetic spectrum. With two planned arrays, one for each hemisphere, it will guarantee a good sky coverage in the energy range from a few tens of GeV to hundreds of TeV, with improved angular resolution and a sensitivity in the TeV energy region better by one order of magnitude than the currently operating arrays. In order to cover this wide energy range, three different telescope types are envisaged, with different mirror sizes and focal plane features. In particular, for the highest energies a possible design is a dual-mirror Schwarzschild-Couder optical scheme, with a compact focal plane. A silicon photomultiplier (SiPM) based camera is being proposed as a solution to match the dimensions of the pixel (angular size of ~ 0.17 degrees). INFN is developing a camera demonstrator made by 9 Photo Sensor Modules (PSMs...

  13. Mixel camera--a new push-broom camera concept for high spatial resolution keystone-free hyperspectral imaging.

    Science.gov (United States)

    Høye, Gudrun; Fridman, Andrei

    2013-05-06

    Current high-resolution push-broom hyperspectral cameras introduce keystone errors to the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while at the same time the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component--an array of light mixing chambers--with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data that was recorded onto the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera to traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that the mixel camera will be photon-noise limited--even in bright light--with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.

  14. HAL/S-360 compiler test activity report

    Science.gov (United States)

    Helmers, C. T.

    1974-01-01

    The levels of testing employed in verifying the HAL/S-360 compiler were as follows: (1) typical applications program case testing; (2) functional testing of the compiler system and its generated code; and (3) machine oriented testing of compiler implementation on operational computers. Details of the initial test plan and subsequent adaptation are reported, along with complete test results for each phase which examined the production of object codes for every possible source statement.

  15. VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras

    Science.gov (United States)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-08-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC) and Institut D'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual channel analog readout and an internally mounted cold block. At the flight CCD temperature of -20 °C, the CLASP cameras exceeded the low-noise performance requirements (UV, EUV and X-ray science cameras at MSFC.

  16. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from 16N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with 16N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  17. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many

  18. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged. There are also existing methods to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become a more and more important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are done through the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are made. The paper defines a solution to combine different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
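One simple way to fold quality and speed metrics into a single benchmarking score, sketched here as a generic normalize-and-weight scheme (the metric names, reference bounds, and weights are illustrative assumptions, not the paper's actual method):

```python
def normalize(value, worst, best):
    """Map a raw metric onto [0, 1], where `best` maps to 1.
    Works whether higher or lower raw values are better."""
    lo, hi = min(worst, best), max(worst, best)
    clipped = max(lo, min(hi, value))
    return (clipped - worst) / (best - worst)

def combined_score(metrics, weights):
    """Weighted mean of pre-normalized metric scores."""
    total = sum(weights[name] for name in metrics)
    return sum(weights[name] * score for name, score in metrics.items()) / total

metrics = {
    "sharpness": normalize(0.72, worst=0.0, best=1.0),      # quality, higher better
    "visual_noise": normalize(2.1, worst=5.0, best=0.0),    # quality, lower better
    "shot_to_shot_s": normalize(0.8, worst=3.0, best=0.2),  # speed, lower better
}
weights = {"sharpness": 0.4, "visual_noise": 0.3, "shot_to_shot_s": 0.3}
print(round(combined_score(metrics, weights), 3))  # → 0.698
```

The choice of reference bounds and weights is exactly where such a benchmark embeds its priorities; publishing them alongside the score keeps the single number interpretable.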

  19. 40 CFR 98.360 - Definition of the source category.

    Science.gov (United States)

    2010-07-01

    ... this rule. (b) A manure management system (MMS) is a system that stabilizes and/or stores livestock... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Manure Management § 98.360 Definition of the source category. (a) This source category consists of livestock facilities with manure management systems that emit 25...

  20. The Mars Science Laboratory (MSL) Mast cameras and Descent imager: Investigation and instrument descriptions

    Science.gov (United States)

    Malin, Michal C.; Ravine, Michael A.; Caplinger, Michael A.; Tony Ghaemi, F.; Schaffner, Jacob A.; Maki, Justin N.; Bell, James F.; Cameron, James F.; Dietrich, William E.; Edgett, Kenneth S.; Edwards, Laurence J.; Garvin, James B.; Hallet, Bernard; Herkenhoff, Kenneth E.; Heydari, Ezat; Kah, Linda C.; Lemmon, Mark T.; Minitti, Michelle E.; Olson, Timothy S.; Parker, Timothy J.; Rowland, Scott K.; Schieber, Juergen; Sletten, Ron; Sullivan, Robert J.; Sumner, Dawn Y.; Aileen Yingst, R.; Duston, Brian M.; McNair, Sean; Jensen, Elsa H.

    2017-08-01

    The Mars Science Laboratory Mast camera and Descent Imager investigations were designed, built, and operated by Malin Space Science Systems of San Diego, CA. They share common electronics and focal plane designs but have different optics. There are two Mastcams of dissimilar focal length. The Mastcam-34 has an f/8, 34 mm focal length lens, and the M-100 an f/10, 100 mm focal length lens. The M-34 field of view is about 20° × 15° with an instantaneous field of view (IFOV) of 218 μrad; the M-100 field of view (FOV) is 6.8° × 5.1° with an IFOV of 74 μrad. The M-34 can focus from 0.5 m to infinity, and the M-100 from 1.6 m to infinity. All three cameras can acquire color images through a Bayer color filter array, and the Mastcams can also acquire images through seven science filters. Images are ≤1600 pixels wide by 1200 pixels tall. The Mastcams, mounted on the 2 m tall Remote Sensing Mast, have a 360° azimuth and 180° elevation field of regard. Mars Descent Imager is fixed-mounted to the bottom left front side of the rover at 66 cm above the surface. Its fixed focus lens is in focus from 2 m to infinity, but out of focus at 66 cm. The f/3 lens has a FOV of 70° by 52° across and along the direction of motion, with an IFOV of 0.76 mrad. All cameras can acquire video at 4 frames/second for full frames or 720p HD at 6 fps. Images can be processed using lossy Joint Photographic Experts Group and predictive lossless compression.
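The quoted per-pixel IFOVs are consistent with the small-angle relation IFOV ≈ pixel pitch / focal length; the check below assumes a 7.4 μm pixel pitch (an assumption of this sketch, not stated in the abstract):

```python
import math

PIXEL_PITCH_M = 7.4e-6  # assumed detector pixel pitch

def ifov_urad(focal_length_m):
    """Instantaneous field of view of one pixel, in microradians."""
    return PIXEL_PITCH_M / focal_length_m * 1e6

def fov_deg(n_pixels, focal_length_m):
    """Small-angle field of view across n_pixels, in degrees."""
    return math.degrees(n_pixels * PIXEL_PITCH_M / focal_length_m)

print(round(ifov_urad(0.034)))          # → 218 (M-34, matches the quoted 218 urad)
print(round(ifov_urad(0.100)))          # → 74 (M-100, matches the quoted 74 urad)
print(round(fov_deg(1600, 0.034), 1))   # → 20.0 (M-34 width, matches ~20 deg)
```

Under the same assumption, 1200 pixels at 34 mm give about 15°, reproducing the quoted 20° × 15° M-34 field of view.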

  1. Improving Situational Awareness in camera surveillance by combining top-view maps with camera images

    NARCIS (Netherlands)

    Kooi, F.L.; Zeeders, R.

    2009-01-01

    The goal of the experiment described is to improve today's camera surveillance in public spaces. Three designs with the camera images combined on a top-view map were compared to each other and to the current situation in camera surveillance. The goal was to test which design makes spatial

  2. SU-F-J-206: Systematic Evaluation of the Minimum Detectable Shift Using a Range- Finding Camera

    Energy Technology Data Exchange (ETDEWEB)

    Platt, M; Platt, M [College of Medicine University of Cincinnati, Cincinnati, OH (United States); Lamba, M [University of Cincinnati, Cincinnati, OH (United States); Mascia, A [University of Cincinnati Medical Center, Cincinnati, OH (United States); Huang, K [UC Health Barret Cancer Center, Cincinnati, OH (United States)

    2016-06-15

    Purpose: The robotic table used for patient alignment in proton therapy is calibrated only at commissioning under well-defined conditions and table shifts may vary over time and with differing conditions. The purpose of this study is to systematically investigate minimum detectable shifts using a time-of-flight (TOF) range-finding camera for table position feedback. Methods: A TOF camera was used to acquire one hundred 424 × 512 range images from a flat surface before and after known shifts. Range was assigned by averaging central regions of the image across multiple images. Depth resolution was determined by evaluating the difference between the actual shift of the surface and the measured shift. Depth resolution was evaluated for number of images averaged, area of sensor over which depth was averaged, distance from camera to surface, central versus peripheral image regions, and angle of surface relative to camera. Results: For one to one thousand images with a shift of one millimeter the range in error was 0.852 ± 0.27 mm to 0.004 ± 0.01 mm (95% C.I.). For varying regions of the camera sensor the range in error was 0.02 ± 0.05 mm to 0.47 ± 0.04 mm. The following results are for 10 image averages. For areas ranging from one pixel to 9 × 9 pixels the range in error was 0.15 ± 0.09 to 0.29 ± 0.15 mm (1σ). For distances ranging from two to four meters the range in error was 0.15 ± 0.09 to 0.28 ± 0.15 mm. For an angle of incidence between thirty degrees and ninety degrees the average range in error was 0.11 ± 0.08 to 0.17 ± 0.09 mm. Conclusion: It is feasible to use a TOF camera for measuring shifts in flat surfaces under clinically relevant conditions with submillimeter precision.
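The improvement from image averaging reported above follows the usual 1/√N noise scaling. The following is an illustrative Monte Carlo sketch of that scaling, not the authors' setup; the per-pixel noise level is an assumed placeholder:

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_SHIFT = 1.0   # mm, simulated table shift
SIGMA = 5.0        # assumed per-pixel range noise, mm (placeholder)

def shift_error_std(n_images, trials=200, pixels=32 * 32):
    """Std. dev. of (measured - true) shift when averaging a pixel region
    across n_images range images before and after the shift."""
    errors = []
    for _ in range(trials):
        before = rng.normal(0.0, SIGMA, (n_images, pixels)).mean()
        after = TRUE_SHIFT + rng.normal(0.0, SIGMA, (n_images, pixels)).mean()
        errors.append((after - before) - TRUE_SHIFT)
    return float(np.std(errors))

# Averaging 100 images should shrink the error by roughly sqrt(100) = 10x.
ratio = shift_error_std(1) / shift_error_std(100)
print(ratio)
```

This mirrors the trend in the abstract, where the shift error falls from ~0.85 mm with a single image toward ~0.004 mm with one thousand images.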

  3. A Logic Architecture for 360 ADAS-Alerts for Hazards Detection Based in Driver Actions

    Directory of Open Access Journals (Sweden)

    Izquierdo-Reyes Javier

    2017-01-01

    Full Text Available In this work, a novel approach to passive safety in vehicles is presented: Advanced Driver Assistance System (ADAS) alerts are emitted in 360° around the driver to warn of hazards near the vehicle, depending on the actions the driver takes in context. This proposal would create a more robust system than current passive ADAS because the feedback reaches the driver from the same direction in which the hazard is detected (Punctual Sound Source Alert), whereas most assistance systems emit sounds from the monitor or the dashboard, provoking distractions when alerts are emitted unnecessarily. This increase in security allows the driver to remain aware of their surroundings even in a very quiet cabin or in a noisy environment. The system would also read the steering wheel angle, the speed of movement, and the activation of the turn signals, among other inputs, which allows a critical action during driving to be defined; in addition, sensors and cameras aimed at the driver detect patterns of movement during these critical actions and predict a possible turn or manoeuvre while driving (refer to Figure 1). A reconfiguration of the alert in frequency and time of action will be necessary, depending on the level of risk, to prevent an accident or to reduce the consequences of an imminent one.

  4. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    Science.gov (United States)

    Champey, Patrick R.; Kobayashi, Ken; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV, and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512×512 detector, dual-channel analog readout electronics, and an internally mounted cold block. At the flight operating temperature of -20 C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and greater than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultraviolet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to run several tests intended to verify the QE, gain, read noise, dark current, and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.

  5. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge-coupled device (CCD) camera systems are introduced and discussed, briefly describing the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development continues in the area of advanced CCD camera systems, including a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  6. Evaluation of the effectiveness of the 360-credit National ...

    African Journals Online (AJOL)

    We investigated the effectiveness of the 360-credit National Professional Diploma (NPDE) as a programme that is aimed at the upgrading of currently serving unqualified and under-qualified educators, with a view to improving the quality of teaching and learning in schools and Further Education and Training colleges.

  7. Visual fatigue modeling for stereoscopic video shot based on camera motion

    Science.gov (United States)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3-D display technology. The causes of visual discomfort in stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale, and the comfortable zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes to specific objects when the camera and background are static; relative motion must be considered for different camera conditions, which determine different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue prediction model is presented. The visual fatigue degree is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be computed according to the proposed algorithm. Compared with conventional algorithms, which ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
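The regression step described above can be sketched in a few lines. This is a minimal illustration with made-up factor names and synthetic ratings, not the authors' data or exact factor definitions: per-shot factors are fit against subjective scores by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
n_shots = 50
# Hypothetical per-shot factors: [parallax, motion scale, comfort-zone violation]
X = rng.uniform(0, 1, (n_shots, 3))
true_w = np.array([2.0, 1.5, 0.5])
# Synthetic "subjective" fatigue ratings with a little noise
scores = X @ true_w + 1.0 + rng.normal(0, 0.05, n_shots)

A = np.hstack([X, np.ones((n_shots, 1))])      # add intercept column
w, *_ = np.linalg.lstsq(A, scores, rcond=None)
print(w)  # recovers roughly [2.0, 1.5, 0.5, 1.0]
```

The fitted weights play the role of the "factor coefficients and weights" the abstract mentions; in the paper they would come from subjective test data rather than a synthetic model.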

  8. Narrow-band 1, 2, 3, 4, 8, 16 and 24 cycles/360° angular frequency filters

    Directory of Open Access Journals (Sweden)

    Simas M.L.B.

    2002-01-01

    Full Text Available We measured human frequency response functions for seven angular frequency filters whose test frequencies were centered at 1, 2, 3, 4, 8, 16 or 24 cycles/360° using a supra-threshold summation method. The seven functions of 17 experimental conditions each were measured nine times for five observers. For the arbitrarily selected filter phases, the maximum summation effect occurred at the test frequency for filters at 1, 2, 3, 4 and 8 cycles/360°. For both the 16 and 24 cycles/360° test frequencies, maximum summation occurred at the lower harmonics. These results allow us to conclude that there are narrow-band angular frequency filters operating somehow in the human visual system, either through summation or inhibition of specific frequency ranges. Furthermore, as a general result, it appears that addition of higher angular frequencies to lower ones disturbs low angular frequency perception (i.e., 1, 2, 3 and 4 cycles/360°), whereas addition of lower harmonics to higher ones seems to improve detection of high angular frequency harmonics (i.e., 8, 16 and 24 cycles/360°). Finally, we discuss the possible involvement of coupled radial and angular frequency filters in face perception, using an example where narrow-band low angular frequency filters could have a major role.
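An angular frequency stimulus of the kind studied here varies sinusoidally with polar angle, so n is the number of cycles per 360°. The following is my own construction of such a pattern, not the authors' exact stimuli:

```python
import numpy as np

def angular_grating(size=128, n_cycles=4, phase=0.0):
    """Luminance pattern 0.5 + 0.5*cos(n*theta + phase) around the image center,
    i.e. n_cycles cycles per 360 degrees."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    theta = np.arctan2(y, x)
    return 0.5 + 0.5 * np.cos(n_cycles * theta + phase)

g = angular_grating(n_cycles=4)
# A 4 cycles/360-deg pattern has one period every 90 degrees, so rotating the
# image a quarter turn maps it onto itself.
print(np.allclose(g, np.rot90(g)))
```

Varying `n_cycles` from 1 to 24 reproduces the range of test frequencies used in the experiments.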

  9. SHOK—The First Russian Wide-Field Optical Camera in Space

    Science.gov (United States)

    Lipunov, V. M.; Gorbovskoy, E. S.; Kornilov, V. G.; Panasyuk, M. I.; Amelushkin, A. M.; Petrov, V. L.; Yashin, I. V.; Svertilov, S. I.; Vedenkin, N. N.

    2018-02-01

    Two fast, fixed, very wide-field SHOK cameras are installed onboard the Lomonosov spacecraft. The main goal of this experiment is the observation of GRB optical emission before, synchronously with, and after the gamma-ray emission. The field of view of each camera lies within the gamma-ray burst detection area of the other devices onboard the Lomonosov spacecraft. SHOK provides measurements of optical emission with a magnitude limit of ˜9-10m on a single frame with an exposure of 0.2 seconds. The device is designed for continuous sky monitoring at optical wavelengths over a very wide field of view (1000 square degrees per camera), and for detection and localization of fast time-varying (transient) optical sources on the celestial sphere, including provisional and synchronous time recording of optical emission from the gamma-ray burst error boxes detected by the BDRG device and implemented by a control signal (alert trigger) from the BDRG. The Lomonosov spacecraft carries two identical devices, SHOK1 and SHOK2. The core of each SHOK device is a fast 11-megapixel CCD. Each SHOK device is a monoblock consisting of an optical-emission observation node, an electronics node, elements of the mechanical construction, and the body.

  10. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  11. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    A positron emission tomography camera having a plurality of detector planes positioned side-by-side around a patient area to detect radiation. Each plane includes a plurality of photomultiplier tubes, and at least two rows of scintillation crystals on each photomultiplier tube extend across to adjacent photomultiplier tubes for detecting radiation from the patient area. Each row of crystals on each photomultiplier tube is offset from the other rows, and the area of each crystal on each tube in each row differs from the area of the crystals on the tube in other rows, for detecting which crystal is actuated and allowing the detector to resolve more inter-plane slices. The crystals are offset by an amount equal to the length of the crystal divided by the number of rows. The rows of crystals on opposite sides of the patient may be rotated 90 degrees relative to each other.

  12. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    The resolution of cameras has been drastically improved by the current demand for high-quality digital images. For example, digital still cameras have several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it comes from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  13. Using DSLR cameras in digital holography

    Science.gov (United States)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the two-dimensional image sensor that records the digital hologram plays a key role in the performance of this imaging technique; the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worth exploring. DSLR cameras are a widespread, commercial, and available option that, in comparison with traditional scientific cameras, offer a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the problem of object replication reported by different authors. Simulations of DH using monochrome and DSLR cameras are presented, and a theoretical deduction of the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera show the replication problem.
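The replication effect can be illustrated in one dimension (my sketch, not the paper's full 2-D Bayer analysis): keeping only every other pixel, as a single color channel of a mosaic sensor effectively does, multiplies the hologram by a comb function, which in the Fourier domain convolves the spectrum with impulses at DC and the Nyquist frequency, so each fringe peak acquires a replica.

```python
import numpy as np

N = 256
n = np.arange(N)
f = 20 / N                          # fringe (carrier) frequency, cycles/pixel
holo = 1.0 + np.cos(2 * np.pi * f * n)   # idealized 1-D hologram

mask = (n % 2 == 0).astype(float)   # one color channel of a Bayer-like mosaic
spec = np.abs(np.fft.fft(holo * mask))

# Strongest positive-frequency bins: DC (0), the carrier (20), and its
# replica folded about half the Nyquist comb spacing (128 - 20 = 108).
peaks = sorted(np.argsort(spec[: N // 2])[-3:])
print(peaks)
```

In a monochrome sensor (`mask` all ones) the replica peak disappears, which is the core of the argument for why DSLR sampling degrades DH reconstructions.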

  14. Selecting a digital camera for telemedicine.

    Science.gov (United States)

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  15. Magnetic field control of 90°, 180°, and 360° domain wall resistance

    Science.gov (United States)

    Majidi, Roya

    2012-10-01

    In the present work, we have compared the resistance of 90°, 180°, and 360° domain walls in the presence of an external magnetic field. The calculations are based on the Boltzmann transport equation within the relaxation time approximation. One-dimensional Néel-type domain walls between two domains whose magnetizations differ by angles of 90°, 180°, and 360° are considered. The results indicate that the resistance of the 360° DW is considerably larger than that of the 90° and 180° DWs. It is also found that the domain wall resistance can be controlled by applying a transverse magnetic field: increasing the strength of the external magnetic field enhances the domain wall resistance. In designing spintronic devices based on magnetic nanomaterials, considering and controlling the effect of domain walls on resistivity is essential.

  16. Caliste 64, an innovative CdTe hard X-ray micro-camera

    Energy Technology Data Exchange (ETDEWEB)

    Meuris, A.; Limousin, O.; Pinsard, F.; Le Mer, I. [CEA Saclay, DSM, DAPNIA, Serv. Astrophys., F-91191 Gif sur Yvette (France); Lugiez, F.; Gevin, O.; Delagnes, E. [CEA Saclay, DSM, DAPNIA, Serv. Electron., F-91191 Gif sur Yvette (France); Vassal, M.C.; Soufflet, F.; Bocage, R. [3D-plus Company, F-78532 Buc (France)

    2008-07-01

    A prototype 64-pixel miniature camera has been designed and tested for the Simbol-X hard X-ray observatory to be flown on the joint CNES-ASI space mission in 2014. This device is called Caliste 64. It is a high performance spectro-imager with event time-tagging capability, able to detect photons between 2 keV and 250 keV. Caliste 64 is the assembly of a 1 or 2 mm thick CdTe detector mounted on top of a readout module. CdTe detectors equipped with aluminum Schottky barrier contacts are used because of their very low dark current and excellent spectroscopic performance. The front-end electronics is a stack of four IDeF-X V1.1 ASICs, arranged perpendicular to the detection plane, to read out each pixel independently. The whole camera fits in a 10 × 10 × 20 mm³ volume and is juxtaposable on its four sides. This allows the device to be used as an elementary unit in a larger array of Caliste 64 cameras. Noise performance resulted in an ENC better than 60 electrons rms on average. The first prototype camera was tested at -10 degrees C with a bias of -400 V. The spectrum summed across the 64 pixels results in a resolution of 697 eV FWHM at 13.9 keV and 808 eV FWHM at 59.54 keV. (authors)

  17. Mechanical design for the Evryscope: a minute cadence, 10,000-square-degree FoV, gigapixel-scale telescope

    Science.gov (United States)

    Ratzloff, Jeff; Law, Nicholas M.; Fors, Octavi; Wulfken, Philip J.

    2015-01-01

    We designed, tested, prototyped, and built a compact 27-camera robotic telescope that images 10,000 square degrees in 2-minute exposures. We exploit mass-produced interline CCD cameras with Rokinon consumer lenses to economically build a telescope that covers this large part of the sky simultaneously, with good enough pixel sampling to avoid the confusion limit over most of the sky. We developed the initial concept into a 3-D mechanical design with the aid of computer modeling programs. Significant design components include the camera assembly-mounting modules, the hemispherical support structure, and the instrument base structure. We simulated flexure and material stress in each of the three main components, which helped us optimize the rigidity and materials selection while reducing weight. The camera mounts are CNC aluminum and the support shell is reinforced fiberglass. Other significant project components include optimizing camera locations, camera alignment, thermal analysis, environmental sealing, wind protection, and ease of access to internal components. The Evryscope will be assembled at UNC Chapel Hill and deployed to CTIO in 2015.
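Back-of-envelope arithmetic (mine, not from the paper) shows what the 27-camera count implies for per-camera sky coverage before any overlap between adjacent fields:

```python
# Each camera must cover at least total/n_cameras square degrees; any overlap
# between adjacent camera fields pushes the per-camera requirement higher.
total_sq_deg = 10_000
n_cameras = 27
per_camera = total_sq_deg / n_cameras
print(round(per_camera, 1))   # ~370.4 sq. deg., e.g. roughly a 24 x 15 deg field
```

This is consistent with using wide consumer lenses rather than conventional telescope optics.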

  18. 37 CFR 360.12 - Form and content of claims.

    Science.gov (United States)

    2010-07-01

    ... SUBMISSION OF ROYALTY CLAIMS FILING OF CLAIMS TO ROYALTY FEES COLLECTED UNDER COMPULSORY LICENSE Satellite Claims § 360.12 Form and content of claims. (a) Forms. (1) Each claim to compulsory license royalty fees... owner entitled to claim the royalty fees. (ii) A general statement of the nature of the copyright owner...

  19. 37 CFR 360.3 - Form and content of claims.

    Science.gov (United States)

    2010-07-01

    ... SUBMISSION OF ROYALTY CLAIMS FILING OF CLAIMS TO ROYALTY FEES COLLECTED UNDER COMPULSORY LICENSE Cable Claims § 360.3 Form and content of claims. (a) Forms. (1) Each claim to cable compulsory license royalty fees... copyright owner entitled to claim the royalty fees. (ii) A general statement of the nature of the copyright...

  20. 30 CFR 206.360 - What records must I keep to support my calculations of royalty or fees under this subpart?

    Science.gov (United States)

    2010-07-01

    ... calculations of royalty or fees under this subpart? 206.360 Section 206.360 Mineral Resources MINERALS... Resources § 206.360 What records must I keep to support my calculations of royalty or fees under this subpart? If you determine royalties or direct use fees for your geothermal resource under this subpart...

  1. Automatic locking radioisotope camera lock

    International Nuclear Information System (INIS)

    Rosauer, P.J.

    1978-01-01

    The lock of the present invention secures the isotope source in a stored, shielded condition in the camera until a positive effort has been made to open the lock and take the source outside of the camera, and it prevents disconnection of the source pigtail unless the source is locked in a shielded condition in the camera. It also gives a visual indication of the locked or possibly exposed condition of the isotope source, and it prevents the source pigtail from being completely pushed out of the camera, even when the lock is released. (author)

  2. Acceptance/Operational Test Report for Tank 241-AN-104 camera and camera purge control system

    International Nuclear Information System (INIS)

    Castleberry, J.L.

    1995-11-01

    This Acceptance/Operational Test Procedure (ATP/OTP) will document the satisfactory operation of the camera purge panel, purge control panel, color camera system, and associated control components destined for installation. The final acceptance of the complete system will be performed in the field. The purge panel and purge control panel will be tested to verify the safety interlock, which shuts down the camera and pan-and-tilt unit inside the tank vapor space upon loss of purge pressure, and to verify that the correct purge volume exchanges are performed as required by NFPA 496. This procedure is separated into seven sections. This Acceptance/Operational Test Report documents the successful acceptance and operability testing of the 241-AN-104 camera system and camera purge control system.

  3. An Approach to Evaluate Stability for Cable-Based Parallel Camera Robots with Hybrid Tension-Stiffness Properties

    Directory of Open Access Journals (Sweden)

    Huiling Wei

    2015-12-01

    Full Text Available This paper focuses on studying the effect of cable tensions and stiffness on the stability of cable-based parallel camera robots. For this purpose, the tension factor and the stiffness factor are defined, and the expression of stability is deduced. A new approach is proposed to calculate the hybrid-stability index with the minimum cable tension and the minimum singular value. Firstly, the kinematic model of a cable-based parallel camera robot is established. Based on the model, the tensions are solved and a tension factor is defined. In order to obtain the tension factor, an optimization of the cable tensions is carried out. Then, an expression of the system's stiffness is deduced and a stiffness factor is defined. Furthermore, an approach to evaluate the stability of the cable-based camera robots with hybrid tension-stiffness properties is presented. Finally, a typical three-degree-of-freedom cable-based parallel camera robot with four cables is studied as a numerical example. The simulation results show that the approach is both reasonable and effective.
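The combination pattern described in the abstract, a tension factor from the minimum cable tension and a stiffness factor from a minimum singular value, can be sketched as follows. This is heavily hedged: the matrices, scaling, and weights below are invented placeholders; only the overall structure follows the abstract.

```python
import numpy as np

def hybrid_stability_index(tensions, stiffness, w_t=0.5, w_k=0.5):
    """Toy hybrid index: weighted sum of a tension factor (scaled minimum
    cable tension) and a stiffness factor (minimum singular value of an
    assumed stiffness matrix). Definitions here are illustrative only."""
    tension_factor = tensions.min() / tensions.max()        # in (0, 1]
    stiffness_factor = np.linalg.svd(stiffness, compute_uv=False).min()
    return w_t * tension_factor + w_k * stiffness_factor

tensions = np.array([120.0, 95.0, 110.0, 130.0])  # N, four cables (example)
K = np.diag([0.9, 0.8, 0.7])                      # placeholder stiffness matrix
index = hybrid_stability_index(tensions, K)
print(index)
```

A larger index would indicate a pose where no cable is close to slack and the platform resists disturbance in its weakest direction, which is the qualitative idea behind the paper's hybrid tension-stiffness evaluation.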

  4. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R&D activities on high-speed video cameras carried out at Kinki University over more than ten years, currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive marketing research has been done (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searches of journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same one developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, an In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is ongoing, and, hopefully, it will be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by others are also briefly reviewed.

  5. Video camera use at nuclear power plants

    International Nuclear Information System (INIS)

    Estabrook, M.L.; Langan, M.O.; Owen, D.E.

    1990-08-01

    A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations and determine the equipment used and the benefits realized. Basic closed-circuit television (CCTV) camera systems are described and video camera operating principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposable cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use, mainly reduced radiation exposure and increased productivity, are discussed and quantified. 15 refs., 6 figs

  6. The development of large-aperture test system of infrared camera and visible CCD camera

    Science.gov (United States)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Infrared camera and CCD camera dual-band imaging systems are widely used in much equipment and many applications. If such a system is tested using a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared cameras and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame-grabber, and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position as the environmental temperature changes, improving the image quality of the large-field-of-view collimator and the test accuracy. Its performance matches that of foreign counterparts at a much lower cost. It will have a good market.

  7. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. Tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for human tracking over camera networks are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed from two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed from the aspects of human re-identification, camera-link model-based tracking, and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on analyses of the current progress made toward human tracking techniques over camera networks.

  8. Proceedings of the IDA Workshop on Formal Specification and Verification of Ada (Trade Name) (1st) Held in Alexandria, Virginia on 18-20 March 1985.

    Science.gov (United States)

    1985-12-01

    on the third day. ADA VERIFICATION WORKSHOP, MARCH 18-20, 1985, LIST OF PARTICIPANTS: Bernard Abrams, ABRAMS@ADA20, Grumman Aerospace Corporation, Mail...; 20301-3081, (202) 694-0211; Mark R. Cornwell, CORNWELL@NRL-CSS, Code 7590, Naval Research Lab, Washington, D.C. 20375, (202) 767-3365; Jeff Facemire, FACEMIRE...; accompanied by descriptions of their purpose in English, to LUCKHAM@SAIL for annotation. DISTRIBUTION LIST FOR M-146: Bernard Abrams, ABRAMS@USC-ECLB

  9. Microprocessor-controlled wide-range streak camera

    Science.gov (United States)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple-simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  10. Microprocessor-controlled, wide-range streak camera

    International Nuclear Information System (INIS)

    Amy E. Lewis; Craig Hollabaugh

    2006-01-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OS X) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple-module access with a standard browser. The entire user interface can be customized.

  11. Camera pose estimation for augmented reality in a small indoor dynamic scene

    Science.gov (United States)

    Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad

    2017-09-01

    Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degrees-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about scene structure, and they may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise-planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows rendering virtual objects in a meaningful way on the one hand, and improving the precision of the camera pose and the quality of the 3-D reconstruction of the environment on the other hand, by adding constraints on 3-D points and poses in the optimization process. We propose to exploit the rigidity of 3-D planar motion in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.
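
    The planar constraint at the heart of this abstract rests on a standard building block: fitting a plane to a set of 3-D map points and measuring point-to-plane residuals. The sketch below shows that operation with a least-squares SVD fit; it is a generic illustration, not the authors' implementation.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (unit normal n, centroid c) with n.(x-c)=0."""
    c = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # direction of least variance of the centered points, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - c)
    return vt[-1], c

def point_to_plane_error(points, n, c):
    """Absolute distance of each point from the plane (n, c)."""
    return np.abs((points - c) @ n)

# Noiseless points on the plane z = 0: the fit recovers the normal exactly.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
n, c = fit_plane(pts)
print(point_to_plane_error(pts, n, c).max())
```

    In a piecewise-planar SLAM pipeline, residuals of this kind would enter the bundle-adjustment cost as additional constraints on the 3-D points.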

  12. 360° tunable microwave phase shifter based on silicon-on-insulator dual-microring resonator

    DEFF Research Database (Denmark)

    Pu, Minhao; Xue, Weiqi; Liu, Liu

    2010-01-01

    We demonstrate tunable microwave phase shifters based on electrically tunable silicon-on-insulator dual-microring resonators. A quasi-linear phase shift of 360° with ~2 dB radio-frequency power variation at a microwave frequency of 40 GHz is obtained.

  13. ALGORITHMS FOR THE CONSTRUCTION AND VISUALIZATION OF 360 PANORAMA IMAGES

    Directory of Open Access Journals (Sweden)

    Alan Kazuo Hiraga

    2013-07-01

    Full Text Available This paper presents a methodology for constructing image mosaics that form a 360 panorama, together with an application for viewing them. We used the SIFT and RANSAC algorithms, commonly found in the literature, for matching images. The algorithms for projection and adjustment of images, and blending for smoothing the joints, were implemented using the OpenCV computer vision library. Techniques were applied to reduce the distortions caused at the joints of successive images, to increase the quality of the final panorama, and to obtain better performance. The viewer application, developed in C#, displays the resulting 360 image by stitching the images onto a cylinder with the point of view inside. The results of the experiments performed using the proposed technique proved satisfactory: the mosaic formed by joining multiple pictures suffers little distortion, and these distortions do not interfere with the final formation of the 360 panorama, which provides good visual quality.
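
    The cylindrical viewing geometry used by such panorama pipelines can be sketched with the standard cylindrical warp, which maps image pixels onto a cylinder of radius equal to the focal length. This is the textbook formula, not the paper's code; the focal length and principal point values are illustrative.

```python
import math

def cylindrical_coords(x, y, f, cx, cy):
    """Map an image pixel (x, y) to cylindrical warp coordinates for a camera
    with focal length f (pixels) and principal point (cx, cy). This is the
    standard pre-warp applied before stitching a 360-degree panorama."""
    theta = math.atan((x - cx) / f)        # angle around the cylinder axis
    h = (y - cy) / math.hypot(x - cx, f)   # normalized height on the cylinder
    return f * theta + cx, f * h + cy

# The principal point is a fixed point of the warp; pixels far from the
# center are compressed toward it, which is what joins the image edges.
print(cylindrical_coords(320, 240, 500, 320, 240))  # (320.0, 240.0)
```

    Applying this warp to every source image before matching reduces the joint distortions the abstract describes.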

  14. Infrared Camera Diagnostic for Heat Flux Measurements on NSTX

    International Nuclear Information System (INIS)

    D. Mastrovito; R. Maingi; H.W. Kugel; A.L. Roquemore

    2003-01-01

    An infrared imaging system has been installed on NSTX (National Spherical Torus Experiment) at the Princeton Plasma Physics Laboratory to measure the surface temperatures on the lower divertor and center stack. The imaging system is based on an Indigo Alpha 160 x 128 microbolometer camera with 12 bits/pixel operating in the 7-13 µm range with a 30 Hz frame rate and a dynamic temperature range of 0-700 degrees C. From these data and knowledge of graphite thermal properties, the heat flux is derived with a classic one-dimensional conduction model. Preliminary results of heat flux scaling are reported.
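
    A common way to derive surface heat flux from a measured surface temperature history with a one-dimensional conduction model is the Cook-Felderman discretization for a semi-infinite solid. The sketch below shows that generic formula; it is not the NSTX analysis code, and the graphite property values are rough assumptions for illustration.

```python
import math

def surface_heat_flux(times, temps, k, rho, cp):
    """Cook-Felderman discretization of the 1-D semi-infinite inverse
    conduction problem: surface heat flux (W/m^2) at the last sample time,
    from a surface temperature history. Constant properties assumed."""
    coef = 2.0 * math.sqrt(k * rho * cp / math.pi)
    n = len(times) - 1
    q = 0.0
    for i in range(1, n + 1):
        q += (temps[i] - temps[i - 1]) / (
            math.sqrt(times[n] - times[i]) + math.sqrt(times[n] - times[i - 1]))
    return coef * q

# Rough graphite properties (assumed): k ~ 100 W/m/K, rho ~ 1750 kg/m^3,
# cp ~ 710 J/kg/K. A constant surface temperature implies zero heat flux.
t = [0.0, 0.01, 0.02, 0.03]
print(surface_heat_flux(t, [400.0] * 4, k=100.0, rho=1750.0, cp=710.0))  # 0.0
```

    In practice each camera pixel's temperature trace would be run through such a formula to build a heat-flux map of the divertor.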

  15. Soft x-ray camera for internal shape and current density measurements on a noncircular tokamak

    International Nuclear Information System (INIS)

    Fonck, R.J.; Jaehnig, K.P.; Powell, E.T.; Reusch, M.; Roney, P.; Simon, M.P.

    1988-05-01

    Soft x-ray measurements of the internal plasma flux surface shapes in principle allow a determination of the plasma current density distribution, and provide a necessary monitor of the degree of internal elongation of tokamak plasmas with a noncircular cross section. A two-dimensional, tangentially viewing, soft x-ray pinhole camera has been fabricated to provide internal shape measurements on the PBX-M tokamak. It consists of a scintillator at the focal plane of a foil-filtered pinhole camera, which is, in turn, fiber-optically coupled to an intensified framing video camera (Δt ≥ 3 msec). Automated data acquisition is performed on a stand-alone image-processing system, and data archiving and retrieval take place on an optical disk video recorder. The entire diagnostic is controlled via a PDP-11/73 microcomputer. The derivation of the poloidal emission distribution from the measured image is done by fitting to model profiles. 10 refs., 4 figs

  16. Towards Adaptive Virtual Camera Control In Computer Games

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, we cluster gaze and camera information to identify camera behaviour profiles, and we employ ... camera control in games is discussed.

  17. 41 CFR 102-74.360 - What are the specific accident and fire prevention responsibilities of occupant agencies?

    Science.gov (United States)

    2010-07-01

    ... other hanging materials that are made of non-combustible or flame-resistant fabric; (f) Use only... resistant; (g) Cooperate with GSA to develop and maintain fire prevention programs that provide the maximum... accident and fire prevention responsibilities of occupant agencies? 102-74.360 Section 102-74.360 Public...

  18. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-07-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2-deg deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables
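
    The pan/tilt solution and deadband logic described above can be sketched in a few lines. This is an illustrative reconstruction, not the ORNL code: the coordinate convention (x forward, y left, z up) and the deadband handling are assumptions.

```python
import math

DEADBAND_DEG = 2.0  # matches the ±2-deg deadband described in the abstract

def pan_tilt_to(target_xyz):
    """Pan/tilt angles (degrees) that aim the camera at a point given in the
    camera-positioner frame (assumed: x forward, y left, z up)."""
    x, y, z = target_xyz
    pan = math.degrees(math.atan2(y, x))
    tilt = math.degrees(math.atan2(z, math.hypot(x, y)))
    return pan, tilt

def command(current_deg, desired_deg):
    """Bang-bang update: hold position while the error is inside the
    deadband, to avoid the continuous small motions that cause operator
    seasickness; otherwise slew to the desired angle."""
    if abs(desired_deg - current_deg) <= DEADBAND_DEG:
        return current_deg
    return desired_deg

pan, tilt = pan_tilt_to((10.0, 0.0, 0.0))
print(pan, tilt)          # 0.0 0.0 (target straight ahead)
print(command(0.0, 1.5))  # 0.0 (inside deadband, no motion)
```

    In the real system the target position would come from the manipulator's joint sensors via the 4 x 4 transformation chain rather than being given directly.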

  19. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-04-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables

  20. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.; Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce superior images to those of conventional cameras used in nuclear medicine. The detector consists of a solid-state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable two-dimensional position identification. Details of the electronic processing circuits are given, and the problems and limitations introduced by noise are discussed in full. (U.K.)

  1. The eye of the camera: effects of security cameras on pro-social behavior

    NARCIS (Netherlands)

    van Rompay, T.J.L.; Vonk, D.J.; Fransen, M.L.

    2009-01-01

    This study addresses the effects of security cameras on prosocial behavior. Results from previous studies indicate that the presence of others can trigger helping behavior, arising from the need for approval of others. Extending these findings, the authors propose that security cameras can likewise

  2. Passive auto-focus for digital still cameras and camera phones: Filter-switching and low-light techniques

    Science.gov (United States)

    Gamadia, Mark Noel

    In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add new features and improve existing ones in their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which may lead to missed photographic opportunities, rendering valuable moments and events with undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state of the art in the digital band-pass filter, passive AF method. This method is widely used to realize AF in the camera industry, where a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time-consuming experimentation and tuning in order to arrive at a set of parameters which balance AF performance in terms of speed and accuracy, ultimately causing a delay in product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter-Switching AF, providing an automatic approach to achieve superior AF performance, both in good and low lighting conditions, based on the following performance measures (metrics): speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun). Performance results using three different prototype cameras

  3. Image compensation for camera and lighting variability

    Science.gov (United States)

    Daley, Wayne D.; Britton, Douglas F.

    1996-12-01

    With the current trend of integrating machine vision systems in industrial manufacturing and inspection applications comes the issue of camera and illumination stabilization. Unless each application is built around a particular camera and a highly controlled lighting environment, the interchangeability of cameras or fluctuations in lighting become a problem, as each camera usually has a different response. An empirical approach is proposed where color tile data is acquired using the camera of interest, and a mapping is developed to some predetermined reference image using neural networks. A similar analytical approach based on a rough analysis of the imaging systems is also considered for deriving a mapping between cameras. Once a mapping has been determined, all data from one camera is mapped to correspond to the images of the other prior to performing any processing on the data. Instead of writing separate image processing algorithms for the particular image data being received, the image data is adjusted based on each particular camera and lighting situation. All that is required when swapping cameras is the new mapping for the camera being inserted. The image processing algorithms can remain the same, as the input data has been adjusted appropriately. The results of utilizing this technique are presented for an inspection application.
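
    The paper fits its camera-to-reference mapping with neural networks; in the simplest linear special case the same idea reduces to fitting a 3 x 3 color matrix over the paired color-tile measurements by least squares. The sketch below shows that simplified linear stand-in, not the paper's method.

```python
import numpy as np

def fit_color_map(src_rgb, ref_rgb):
    """Least-squares 3x3 matrix M with ref ~= src @ M, fitted from paired
    color-tile measurements (one RGB row per tile). A linear stand-in for
    the neural mapping described in the abstract."""
    M, *_ = np.linalg.lstsq(src_rgb, ref_rgb, rcond=None)
    return M

rng = np.random.default_rng(0)
src = rng.uniform(0.0, 1.0, size=(24, 3))      # 24 color tiles, one camera
true_M = np.array([[0.9, 0.05, 0.0],
                   [0.1, 1.0,  0.0],
                   [0.0, 0.0,  1.1]])
ref = src @ true_M                             # same tiles, reference camera
M = fit_color_map(src, ref)
print(np.allclose(M, true_M))  # True
```

    Swapping cameras then only requires refitting M from a new set of tile measurements, leaving the downstream image-processing algorithms untouched.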

  4. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were (1) trigger speed, (2) passive infrared vs. microwave sensor, (3) white vs. infrared flash, and (4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  5. 7 CFR 360.400 - Preemption of State and local laws.

    Science.gov (United States)

    2010-01-01

    ... local laws. (a) Under section 436 of the Plant Protection Act (7 U.S.C. 7756), a State or political... of the Plant Protection Act, the regulations in this part preempt all State and local laws and... 360.400 Agriculture Regulations of the Department of Agriculture (Continued) ANIMAL AND PLANT HEALTH...

  6. Science, conservation, and camera traps

    Science.gov (United States)

    Nichols, James D.; Karanth, K. Ullas; O'Connell, Allan F.

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  7. Soft x-ray streak cameras

    International Nuclear Information System (INIS)

    Stradling, G.L.

    1988-01-01

    This paper is a discussion of the development and of the current state of the art in picosecond soft x-ray streak camera technology. Accomplishments from a number of institutions are discussed. X-ray streak cameras vary from standard visible streak camera designs in the use of an x-ray transmitting window and an x-ray sensitive photocathode. The spectral sensitivity range of these instruments includes portions of the near UV and extends from the subkilovolt x-ray region to several tens of kilovolts. Attendant challenges encountered in the design and use of x-ray streak cameras include the accommodation of high-voltage and vacuum requirements, as well as manipulation of a photocathode structure which is often fragile. The x-ray transmitting window is generally too fragile to withstand atmospheric pressure, necessitating active vacuum pumping and a vacuum line of sight to the x-ray signal source. Because of the difficulty of manipulating x-ray beams with conventional optics, as is done with visible light, the size of the photocathode sensing area, access to the front of the tube, the ability to insert the streak tube into a vacuum chamber, and the capability to trigger the sweep with very short internal delay times are issues uniquely relevant to x-ray streak camera use. The physics of electron imaging may place more stringent limitations on the temporal and spatial resolution obtainable with x-ray photocathodes than with the visible counterpart. Other issues which are common to the entire streak camera community also concern the x-ray streak camera users and manufacturers.

  8. New camera systems for fuel services

    International Nuclear Information System (INIS)

    Hummel, W.; Beck, H.J.

    2010-01-01

    AREVA NP Fuel Services have many years of experience in visual examination and measurements on fuel assemblies and associated core components, using state-of-the-art cameras and measuring technologies. The techniques used allow the surface and dimensional characterization of materials and shapes by visual examination. New enhanced and sophisticated fuel service technologies include, for example, two shielded color camera systems for use under water and close inspection of a fuel assembly. Nowadays the market requirements for detecting and characterizing small defects (smaller than a tenth of a millimeter) or cracks and for analyzing surface appearances on an irradiated fuel rod cladding or fuel assembly structural parts have increased. Therefore it is common practice to use movie cameras with higher resolution. The radiation resistance of high-resolution CCD cameras is in general very low, and it is not possible to use them unshielded close to a fuel assembly. By extending the camera with a mirror system and shielding around the sensitive parts, the movie camera can be utilized for fuel assembly inspection. AREVA NP Fuel Services is now equipped with such movie cameras. (orig.)

  9. 42 CFR 137.360 - Does the Secretary approve project planning and design documents prepared by the Self-Governance...

    Science.gov (United States)

    2010-10-01

    ... design documents prepared by the Self-Governance Tribe? 137.360 Section 137.360 Public Health PUBLIC... HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Roles of the Secretary in Establishing and... documents prepared by the Self-Governance Tribe? The Secretary shall have at least one opportunity to...

  10. Automatic multi-camera calibration for deployable positioning systems

    Science.gov (United States)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method, and that the automated calibration method can replace the manual calibration.
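
    The reprojection error used here as the quality metric is straightforward to compute once a pose estimate is available: project the 3-D points through the estimated pose and intrinsics, and compare against the observed pixels. The sketch below shows that computation for an intrinsically calibrated pinhole camera (generic, not the authors' pipeline; the intrinsic values are illustrative).

```python
import numpy as np

def reprojection_rms(K, R, t, points_3d, observed_px):
    """RMS reprojection error: project 3-D world points through pose (R, t)
    and intrinsics K, then compare against observed pixel coordinates."""
    cam = points_3d @ R.T + t           # world frame -> camera frame
    proj = cam @ K.T                    # apply pinhole intrinsics
    px = proj[:, :2] / proj[:, 2:3]     # perspective divide
    return float(np.sqrt(np.mean(np.sum((px - observed_px) ** 2, axis=1))))

# Illustrative intrinsics and an identity pose.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 5.0], [0.5, -0.2, 4.0]])
obs = (pts @ K.T)[:, :2] / pts[:, 2:3]  # exact, noise-free observations
print(reprojection_rms(K, R, t, pts, obs))  # 0.0
```

    Comparing this error for automatically and manually calibrated extrinsics is exactly the evaluation the abstract describes.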

  11. Effects of Jäsen 360° data on the work development process of the Ylöjärvi parish

    OpenAIRE

    Korppoo-Seppänen, Riikka

    2017-01-01

    The purpose of this thesis was to determine whether Jäsen 360° data has been utilized in the work development process of the Ylöjärvi parish, and how it has affected the way work is done as well as the planning and steering of the whole parish's activities. A further aim was to find out in which situations and how the development process has changed the staff's working practices, and whether the direction of activities has been changed with the help of Jäsen 360° data. The study examined the limitations of using Jäsen 360° data...

  12. Towards next generation 3D cameras

    Science.gov (United States)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover the shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed 3D shape for applications such as robotic inspection and assembly systems.

  13. Multi-Angle Snowflake Camera Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Stuefer, Martin [Univ. of Alaska, Fairbanks, AK (United States); Bailey, J. [Univ. of Alaska, Fairbanks, AK (United States)

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' field of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.

  14. CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    G. Kemper

    2016-06-01

    Full Text Available Beside the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires a precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and is equipped with a 50 mm lens, while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms; these had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots from all 5 cameras and the registered GPS/IMU data. This specific mission was designed at two different altitudes with additional cross lines at each flying height. The five images from each exposure position have no overlaps, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration. In a first

  15. Relative and Absolute Calibration of a Multihead Camera System with Oblique and Nadir Looking Cameras for a Uas

    Science.gov (United States)

    Niemeyer, F.; Schima, R.; Grenzdörffer, G.

    2013-08-01

    Numerous unmanned aerial systems (UAS) are currently flooding the market. UAVs are specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because legal restrictions are still manageable while the payload capacities are sufficient for many imaging sensors. Currently a camera system with four oblique and one nadir-looking cameras is under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. An MD4-1000 UAS from microdrones is used as a carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences of test flights.

  16. Autodesk Roadway Design for Infraworks 360 essentials

    CERN Document Server

    Chappell, Eric

    2015-01-01

    Quickly master InfraWorks Roadway Design with hands-on tutorials. Autodesk Roadway Design for InfraWorks 360 Essentials, 2nd Edition allows you to begin designing immediately as you learn the ins and outs of the roadway-specific InfraWorks module. Detailed explanations coupled with hands-on exercises help you get up to speed quickly and become productive with the module's core features and functions. Compelling screenshots illustrate step-by-step tutorials, and the companion website provides downloadable starting and ending files so you can jump in at any point and compare your work to the

  17. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  18. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single-lens stereo camera that can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more about plenoptic camera imaging, we present a wavefront analysis of plenoptic camera imaging from the perspective of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated by scalar diffraction theory, and the depth estimation is redescribed based on physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations; the difference between imaging analysis methods based on geometric optics and physical optics is also shown in the simulations. (paper)

  19. Homography-based multiple-camera person-tracking

    Science.gov (United States)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of
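The plane-induced homography at the heart of such a tracker is a 3x3 matrix that maps ground-plane pixel coordinates from one camera's image into another's. A minimal illustrative sketch (not from the paper; the matrix values are made up):

```python
# Hypothetical sketch: applying a plane-induced homography H (3x3) to map a
# ground-plane point (e.g., a tracked feet location) from camera A's image
# into camera B's image. H is treated as already estimated from point pairs.

def apply_homography(H, x, y):
    """Map pixel (x, y) through homography H using homogeneous coordinates."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w  # de-homogenize

# The identity homography maps a point to itself (sanity check).
H_id = [[1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 1.0]]
print(apply_homography(H_id, 120.0, 240.0))  # -> (120.0, 240.0)
```

In practice the point pairs "dropped" by tracked pedestrians would be fed to a robust estimator to recover a non-degenerate H before this mapping is applied.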

  20. Computing camera heading: A study

    Science.gov (United States)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even in internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard. This is because rotations and translations can have similar effects on the images, and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computation trouble spots beforehand, and to design reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
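The invariant the visual-angle-disparity approach exploits can be checked numerically: the angle between two projection rays is unchanged by a pure camera rotation. A small illustrative sketch (the points and rotation are chosen arbitrarily, not taken from the study):

```python
import math

# Hypothetical sketch: the visual angle between the projection rays of two
# scene points is invariant under camera rotation, so its change over time
# isolates the translational component of camera motion.

def visual_angle(p, q):
    """Angle between rays from the camera center (origin) to points p and q."""
    dot = sum(a * b for a, b in zip(p, q))
    norm_p = math.sqrt(sum(a * a for a in p))
    norm_q = math.sqrt(sum(b * b for b in q))
    return math.acos(dot / (norm_p * norm_q))

def rotate_z(p, theta):
    """Rotate point p about the camera's z-axis by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    x, y, z = p
    return (c * x - s * y, s * x + c * y, z)

p, q = (1.0, 0.5, 4.0), (-0.5, 1.0, 3.0)
a0 = visual_angle(p, q)
a1 = visual_angle(rotate_z(p, 0.7), rotate_z(q, 0.7))
print(abs(a0 - a1) < 1e-12)  # rotation leaves the visual angle unchanged
```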

  1. Plaque removal efficacy of Colgate 360 toothbrush: A clinical study

    Directory of Open Access Journals (Sweden)

    Nageshwar Iyer

    2016-01-01

    Full Text Available Aim: The aim of this clinical study was to confirm the plaque removal efficacy of the Colgate 360 Whole Mouth Clean Toothbrush. Study Design: This was a single-center, monadic, case-controlled study of 7 days' duration. Materials and Methods: A total of eighty participants (56 male and 24 female) aged between 18 and 45 years, with a minimum of 20 permanent teeth (excluding the third molars), without any prosthetic crowns, and with an initial plaque score of at least 1.5 as determined by the Modified Quigley-Hein Plaque Index (1970), participated in the study. There were two dropouts during the study duration, one male and one female. The participants were instructed to brush for 1 min, after which the plaque index was recorded again. They were then instructed to brush their teeth twice a day for 1 min with the assigned toothbrush (Colgate 360 Whole Mouth Clean Toothbrush) and a commercially available fluoride toothpaste for the next 7 days. On the 7th day, all the participants were recalled for follow-up and plaque examination. The plaque index scores (pre- and post-brushing) were recorded, tabulated, and analyzed statistically. Results: The mean plaque indices reduced after brushing both on day 1 and on day 7. There was also a reduction in mean plaque indices from day 1 to day 7. All these reductions were statistically significant (P < 0.001). The reduction in plaque scores was independent of the gender of the participants; however, female participants showed lower scores as compared to male participants (P < 0.001). Conclusion: The present study demonstrated a significant reduction in plaque scores with the use of the Colgate 360 Whole Mouth Clean Soft Toothbrush throughout the study period. Continued use resulted in a further significant reduction in plaque scores irrespective of the gender of participants.

  2. Scintillation camera for high activity sources

    International Nuclear Information System (INIS)

    Arseneau, R.E.

    1978-01-01

    The invention described relates to a scintillation camera used for clinical medical diagnosis. Advanced recognition of many unacceptable pulses allows the scintillation camera to discard such pulses at an early stage in processing. This frees the camera to process a greater number of pulses of interest within a given period of time. Temporary buffer storage allows the camera to accommodate pulses received at a rate in excess of its maximum rated capability due to statistical fluctuations in the level of radioactivity of the radiation source measured. (U.K.)

  3. Decision about buying a gamma camera

    International Nuclear Information System (INIS)

    Ganatra, R.D.

    1992-01-01

    A large part of the referral to a nuclear medicine department is usually for imaging studies. Sooner or later, the nuclear medicine specialist will be called upon to make a decision about when and what type of gamma camera to buy. There is no longer an option of choosing between a rectilinear scanner and a gamma camera, as the former is virtually out of the market. The decision that one has to make is when to invest in a gamma camera, and then on what basis to select the gamma camera.

  4. Decision about buying a gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Ganatra, R D

    1993-12-31

    A large part of the referral to a nuclear medicine department is usually for imaging studies. Sooner or later, the nuclear medicine specialist will be called upon to make a decision about when and what type of gamma camera to buy. There is no longer an option of choosing between a rectilinear scanner and a gamma camera, as the former is virtually out of the market. The decision that one has to make is when to invest in a gamma camera, and then on what basis to select the gamma camera. 1 tab., 1 fig.

  5. Selective-imaging camera

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at firmware level. The design is consistent with physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.
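One physical relation behind temperature-selective sensing is Wien's displacement law, which gives the wavelength of peak Planck emission for a source at absolute temperature T. A brief illustrative sketch (not from the paper; the flame temperature is a made-up example):

```python
# Hypothetical sketch: Wien's displacement law, lambda_peak = b / T, locates
# the wavelength of maximum blackbody emission for a Kelvin temperature.
# Selecting spectra near a source's peak is one way to sense it through a VDE.

WIEN_B = 2.897771955e-3  # Wien's displacement constant, m*K

def peak_wavelength_nm(kelvin):
    """Wavelength (nm) of maximum blackbody emission at the given temperature."""
    return WIEN_B / kelvin * 1e9

# A ~1200 K flame peaks in the short-wave infrared, well outside the visible:
print(round(peak_wavelength_nm(1200.0)))  # -> 2415 (nm)
```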

  6. Tightly-Coupled GNSS/Vision Using a Sky-Pointing Camera for Vehicle Navigation in Urban Areas.

    Science.gov (United States)

    Gakne, Paul Verlaine; O'Keefe, Kyle

    2018-04-17

    This paper presents a method of fusing the ego-motion of a robot or a land vehicle, estimated from an upward-facing camera, with Global Navigation Satellite System (GNSS) signals for navigation purposes in urban environments. A sky-pointing camera is mounted on the top of a car and synchronized with a GNSS receiver. The advantages of this configuration are two-fold: firstly, for the GNSS signals, the upward-facing camera is used to classify the acquired images into sky and non-sky regions (also known as segmentation). A satellite falling into the non-sky areas (e.g., buildings, trees) is rejected and not considered for the final position solution. Secondly, the sky-pointing camera (with a field of view of about 90 degrees) is helpful for ego-motion estimation in urban areas in the sense that it does not see most of the moving objects (e.g., pedestrians, cars) and is thus able to estimate the ego-motion with fewer outliers than is typical with a forward-facing camera. The GNSS and visual information are tightly coupled in a Kalman filter for the final position solution. Experimental results demonstrate the ability of the system to provide satisfactory navigation solutions in a deep urban canyon, even with fewer than four GNSS satellites, with accuracy better than GNSS-only and loosely-coupled GNSS/vision solutions by 20 percent and 82 percent (in the worst case), respectively.
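The fusion described above rests on the standard Kalman measurement update, which weights a prediction against a measurement by their variances. A minimal scalar sketch (the positions and variances are illustrative, not values from the paper):

```python
# Hypothetical 1D sketch of the Kalman measurement-update step used when
# fusing a vision-based prediction with a GNSS measurement. A real
# tightly-coupled filter works on a multi-dimensional state with pseudoranges.

def kalman_update(x_pred, p_pred, z, r):
    """Fuse prediction (x_pred, variance p_pred) with measurement z (variance r)."""
    k = p_pred / (p_pred + r)        # Kalman gain
    x = x_pred + k * (z - x_pred)    # corrected state
    p = (1.0 - k) * p_pred           # reduced uncertainty
    return x, p

# Vision-predicted position 10.0 m (var 4.0) fused with a GNSS fix at 12.0 m (var 4.0):
x, p = kalman_update(10.0, 4.0, 12.0, 4.0)
print(x, p)  # -> 11.0 2.0
```

Equal variances split the difference; an unreliable measurement (large r) leaves the prediction nearly untouched, which is exactly why rejecting non-sky satellites before the update helps.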

  7. Person detection and tracking with a 360° lidar system

    Science.gov (United States)

    Hammer, Marcus; Hebel, Marcus; Arens, Michael

    2017-10-01

    Today it is easy to generate dense point clouds of the sensor environment using 360° LiDAR (Light Detection and Ranging) sensors, which have been available for a number of years. The interpretation of these data is much more challenging. For automated data evaluation, the detection and classification of objects is a fundamental task. Especially in urban scenarios, moving objects like persons or vehicles are of particular interest, for instance in automatic collision avoidance, for mobile sensor platforms, or in surveillance tasks. In the literature there are several approaches for automated person detection in point clouds. While most techniques show acceptable object detection results, the computation time is often critical. The runtime can be problematic, especially due to the amount of data in panoramic 360° point clouds. On the other hand, most applications need object detection and classification in real time. The paper presents a proposal for a fast, real-time capable algorithm for person detection, classification, and tracking in panoramic point clouds.

  8. Video Chat with Multiple Cameras

    OpenAIRE

    MacCormick, John

    2012-01-01

    The dominant paradigm for video chat employs a single camera at each end of the conversation, but some conversations can be greatly enhanced by using multiple cameras at one or both ends. This paper provides the first rigorous investigation of multi-camera video chat, concentrating especially on the ability of users to switch between views at either end of the conversation. A user study of 23 individuals analyzes the advantages and disadvantages of permitting a user to switch between views at...

  9. Microprocessor-controlled, wide-range streak camera

    Energy Technology Data Exchange (ETDEWEB)

    Amy E. Lewis, Craig Hollabaugh

    2006-09-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  10. Fabrication of multi-focal microlens array on curved surface for wide-angle camera module

    Science.gov (United States)

    Pan, Jun-Gu; Su, Guo-Dung J.

    2017-08-01

    In this paper, we present a wide-angle and compact camera module that consists of a microlens array with different focal lengths on a curved surface. The design integrates the principles of an insect's compound eye and the human eye. It contains a curved hexagonal microlens array and a spherical lens. Normal mobile phone cameras usually need no fewer than four lenses, but our proposed system uses only one. Furthermore, the thickness of our proposed system is only 2.08 mm and the diagonal full field of view is about 100 degrees. In order to make the critical microlens array, we used inkjet printing to control the surface shape of each microlens to achieve different focal lengths, and used a replication method to form the curved hexagonal microlens array.

  11. Analyzer for gamma cameras diagnostic

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Osorio Deliz, J. F.; Diaz Garcia, A.

    2013-01-01

    This research work was carried out to develop an analyzer for gamma camera diagnostics. It is composed of an electronic system with hardware and software capabilities and operates on the four head-position signals acquired from a gamma camera detector. The result is the spectrum of the energy delivered by nuclear radiation coming from the camera detector head. This system includes analog processing of the position signals from the camera, digitization and subsequent processing of the energy signal in a multichannel analyzer, sending data to a computer via a standard USB port, and processing of the data on a personal computer to obtain the final histogram. The circuits are composed of an analog processing board and a universal kit with a microcontroller and programmable gate array. (Author)

  12. Single Camera Calibration in 3D Vision

    Directory of Open Access Journals (Sweden)

    Caius SULIMAN

    2009-12-01

    Full Text Available Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when the parameters of the camera are known (i.e., principal distance, lens distortion, focal length, etc.). In this paper we deal with a single camera calibration method, and with the help of this method we try to find the intrinsic and extrinsic camera parameters. The method was implemented with success in the programming and simulation environment Matlab.
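The intrinsic parameters recovered by such a calibration define the pinhole projection from camera coordinates to pixels. A minimal sketch (the focal length and principal point are made-up values, not from the paper):

```python
# Hypothetical sketch of the pinhole model that calibration recovers:
# intrinsics (focal length f in pixels, principal point cx, cy) map a 3D
# point in camera coordinates to pixel coordinates. Lens distortion omitted.

def project(point, f, cx, cy):
    """Project a 3D point (X, Y, Z) in camera coordinates to pixels."""
    X, Y, Z = point
    u = f * X / Z + cx
    v = f * Y / Z + cy
    return u, v

# A point 2 m in front of the camera, offset 0.5 m right and 0.25 m up:
print(project((0.5, -0.25, 2.0), f=800.0, cx=320.0, cy=240.0))  # -> (520.0, 140.0)
```

Calibration inverts this relationship: given many known 3D-to-pixel correspondences, it solves for f, cx, cy (and distortion terms) that best explain the observations.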

  13. RELATIVE AND ABSOLUTE CALIBRATION OF A MULTIHEAD CAMERA SYSTEM WITH OBLIQUE AND NADIR LOOKING CAMERAS FOR A UAS

    Directory of Open Access Journals (Sweden)

    F. Niemeyer

    2013-08-01

    Full Text Available Numerous unmanned aerial systems (UAS) are currently flooding the market. UAVs are specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because legal restrictions are still manageable and the payload capacities are sufficient for many imaging sensors. A camera system with four oblique and one nadir-looking camera is currently under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. An MD4-1000 UAS from microdrones is used as a carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences from test flights.

  14. A camera specification for tendering purposes

    International Nuclear Information System (INIS)

    Lunt, M.J.; Davies, M.D.; Kenyon, N.G.

    1985-01-01

    A standardized document is described which is suitable for sending to companies which are being invited to tender for the supply of a gamma camera. The document refers to various features of the camera, the performance specification of the camera, maintenance details, price quotations for various options and delivery, installation and warranty details. (U.K.)

  15. 360°-View of Quantum Theory and Ab Initio Simulation at Extreme Conditions: 2014 Sanibel Symposium

    International Nuclear Information System (INIS)

    Cheng, Hai-Ping

    2016-01-01

    The Sanibel Symposium 2014 was held February 16-21, 2014, at the King and Prince, St. Simons Island, GA. It was successful in bringing condensed-matter physicists and quantum chemists together productively to drive the emergence of those specialties. The Symposium has had a significant role in preparing a whole generation of quantum theorists. The 54th Sanibel meeting looked to the future in two ways. We had 360°-View sessions to honor the exceptional contributions of Rodney Bartlett (70), Bill Butler (70), Yngve Öhrn (80), Fritz Schaefer (70), and Malcolm Stocks (70). The work of these five has greatly impacted several generations of quantum chemists and condensed matter physicists. The "360°" is the sum of their ages. More significantly, it symbolizes a panoramic view of critical developments and accomplishments in theoretical and computational chemistry and physics oriented toward the future. Thus, two of the eight 360°-View sessions focused specifically on younger scientists. The 360°-View program was the major component of the 2014 Sanibel meeting. Another four sessions included a sub-symposium on ab initio Simulations at Extreme Conditions, with a focus on getting past the barriers of present-day Born-Oppenheimer molecular dynamics through advances in finite-temperature density functional theory, orbital-free DFT, and new all-numerical approaches.

  16. Rotation and direction judgment from visual images head-slaved in two and three degrees-of-freedom.

    Science.gov (United States)

    Adelstein, B D; Ellis, S R

    2000-03-01

    The contribution to spatial awareness of adding a roll degree-of-freedom (DOF) to telepresence camera platform yaw and pitch was examined in an experiment in which subjects judged the direction and rotation of stationary target markers in a remote scene. Subjects viewed the scene via head-slaved camera images in a head-mounted display. Elimination of the roll DOF affected rotation judgment, but only at extreme yaw and pitch combinations, and did not affect azimuth and elevation judgment. Systematic azimuth overshoot occurred regardless of roll condition. Observed rotation misjudgments are explained by kinematic models for eye-head direction of gaze.

  17. State of art in radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Young Soo; Kim, Seong Ho; Cho, Jae Wan; Kim, Chang Hoi; Seo, Young Chil

    2002-02-01

    Working in radiation environments such as nuclear power plants, RI facilities, nuclear fuel fabrication facilities, and medical centers requires consideration of radiation exposure, and such jobs can be implemented by remote observation and operation. However, cameras used in general industry are degraded by radiation, so radiation-tolerant cameras are needed for radiation environments. Radiation-tolerant camera systems are applied in the nuclear industry, radioactive medicine, aerospace, and so on. In the nuclear industry especially, there is continuous demand for the inspection of nuclear boilers, the exchange of pellets, and the inspection of nuclear waste. The nuclear developed countries have made efforts to develop radiation-tolerant cameras, and they now have many kinds of radiation-tolerant cameras that can tolerate a total dose of 10{sup 6}-10{sup 8} rad. In this report, we examine the state of the art in radiation-tolerant cameras and analyze these technologies. With this paper we want to raise interest in developing radiation-tolerant cameras and upgrade the level of domestic technology.

  18. 16 CFR 501.1 - Camera film.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk still...

  19. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.

  20. Principle of some gamma cameras (efficiencies, limitations, development)

    International Nuclear Information System (INIS)

    Allemand, R.; Bourdel, J.; Gariod, R.; Laval, M.; Levy, G.; Thomas, G.

    1975-01-01

    The quality of scintigraphic images is shown to depend on the efficiency of both the input collimator and the detector. Methods are described by which the quality of these images may be improved by adaptations to either the collimator (Fresnel zone camera, Compton effect camera) or the detector (Anger camera, image amplification camera). The Anger camera and the image amplification camera are at present the two main instruments with which acceptable spatial and energy resolutions may be obtained. A theoretical comparative study of their efficiencies is carried out, independently of their technological differences, after which the instruments designed or under study at the LETI are presented: these include the image amplification camera and the electron amplifier tube camera using a semiconductor target (CdTe and HgI2 detector). [fr]

  1. Streak camera recording of interferometer fringes

    International Nuclear Information System (INIS)

    Parker, N.L.; Chau, H.H.

    1977-01-01

    The use of an electronic high-speed camera in the streaking mode to record interference fringe motion from a velocity interferometer is discussed. Advantages of this method over the photomultiplier tube-oscilloscope approach are delineated. Performance testing and data for the electronic streak camera are discussed. The velocity profile of a mylar flyer accelerated by an electrically exploded bridge, and the jump-off velocity of metal targets struck by these mylar flyers, are measured in the camera tests. Advantages of the streak camera include portability, low cost, ease of operation and maintenance, simplified interferometer optics, and rapid data analysis.

  2. 29 CFR 1952.360 - Description of the plan as initially approved.

    Science.gov (United States)

    2010-07-01

    .... It adopts the definition of occupational safety and health issues expressed in § 1909.2(c)(1) of this... issues raised during the review process, including proposals to be submitted to the New Mexico... Section 1952.360 Labor Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH...

  3. The fly's eye camera system

    Science.gov (United States)

    Mészáros, L.; Pál, A.; Csépány, G.; Jaskó, A.; Vida, K.; Oláh, K.; Mezö, G.

    2014-12-01

    We introduce the Fly's Eye Camera System, an all-sky monitoring device intended to perform time-domain astronomy. This camera system design will provide complementary data sets for other synoptic sky surveys such as LSST or Pan-STARRS. The effective field of view is obtained by 19 cameras arranged in a spherical mosaic form. These individual cameras of the device stand on a hexapod mount that is fully capable of achieving sidereal tracking for the subsequent exposures. This platform has many advantages. First of all, it requires only one type of moving component and does not include unique parts. Hence this design not only eliminates problems implied by unique elements, but the redundancy of the hexapod also allows smooth operation even if one or two of the legs are stuck. In addition, it can calibrate itself by observed stars independently of both the geographical location (including the northern and southern hemispheres) and the polar alignment of the full mount. All mechanical elements and electronics are designed within the confines of our institute, Konkoly Observatory. Currently, our instrument is in the testing phase with an operating hexapod and a reduced number of cameras.

  4. Expert System for Competences Evaluation 360° Feedback Using Fuzzy Logic

    Directory of Open Access Journals (Sweden)

    Alberto Alfonso Aguilar Lasserre

    2014-01-01

    Full Text Available Performance evaluation (PE) is a process that estimates the employee's overall performance during a given period, and it is a common function carried out inside modern companies. PE is important because it is an instrument that encourages employees, organizational areas, and the whole company to behave appropriately and improve continuously. In addition, PE is useful in decision making about personnel allocation, productivity bonuses, incentives, promotions, disciplinary measures, and dismissals. There are many performance evaluation methods; however, none is universal and common to all companies. This paper proposes an expert performance evaluation system based on a fuzzy logic model, with competences 360° feedback oriented to human behavior. This model uses linguistic labels and adjustable numerical values to represent ambiguous concepts, such as imprecision and subjectivity. The model was validated in the administrative department of a real Mexican manufacturing company, where the final results and conclusions show the advantages of the fuzzy logic method in comparison with traditional 360° performance evaluation methodologies.
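The fuzzification step such a model relies on maps a crisp competence score to degrees of membership in linguistic labels. A minimal sketch with triangular membership functions (the label names and breakpoints are invented for illustration, not taken from the paper):

```python
# Hypothetical sketch of fuzzification for a 360-degree competence score:
# a triangular membership function converts a crisp 0-10 score into degrees
# of membership in linguistic labels such as "good" or "excellent".

def triangular(x, a, b, c):
    """Membership in a triangular fuzzy set with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# A crisp score of 7.0 belongs partly to two overlapping labels:
score = 7.0
print(triangular(score, 4.0, 6.0, 8.0))   # membership in "good"
print(triangular(score, 6.0, 8.0, 10.0))  # membership in "excellent"
```

A full expert system would combine such memberships through fuzzy rules and defuzzify the result into a final evaluation.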

  5. Motorcycle detection and counting using stereo camera, IR camera, and microphone array

    Science.gov (United States)

    Ling, Bo; Gibson, David R. P.; Middleton, Dan

    2013-03-01

    Detection, classification, and characterization are the key to enhancing motorcycle safety, motorcycle operations and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of auto fatalities. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including stereo camera, thermal IR camera and unidirectional microphone array. The IR thermal camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes that often show bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist who can be easily windowed out in the stereo disparity map. If the motorcyclist is detected through his or her 3D body recognition, motorcycle is detected. Microphones are used to detect motorcycles that often produce low frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize the interferences of background noises from sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has an excellent performance.

  6. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    International Nuclear Information System (INIS)

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-01-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
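    The triangulation step mentioned above can be illustrated with standard linear (DLT) two-view triangulation. This is a generic sketch of the textbook method, not the authors' implementation; the matrices and points below are made up.

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of a 3D point from two views.

        P1, P2 : 3x4 camera projection matrices.
        x1, x2 : corresponding 2D image points.
        Stacks the cross-product constraints into A X = 0 and solves for the
        homogeneous point X via SVD (right singular vector of smallest value).
        """
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]
    ```

    Given the frame-to-frame rotations and translations recovered from the epipolar geometry, each tracked surface point can be reconstructed this way and the resulting point set compared against the known phantom geometry.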

  7. Performance Evaluation of Thermographic Cameras for Photogrammetric Measurements

    Science.gov (United States)

    Yastikli, N.; Guler, E.

    2013-05-01

    The aim of this research is the performance evaluation of thermographic cameras for possible use in photogrammetric documentation and in analyses of deformation caused by moisture and insulation problems in historical and cultural heritage. To perform geometric calibration of the thermographic camera, a 3D test object was designed with 77 control points distributed at different depths. For the performance evaluation, a Flir A320 thermographic camera with 320 × 240 pixels and an 18 mm lens was used. A Nikon D3X SLR digital camera with 6048 × 4032 pixels and a 20 mm lens was used as the reference for comparison. The pixel size was 25 μm for the Flir A320 thermographic camera and 6 μm for the Nikon D3X SLR digital camera. Digital images of the 3D test object were recorded with both the Flir A320 thermographic camera and the Nikon D3X SLR digital camera, and the image coordinates of the control points were measured. The geometric calibration parameters, including the focal length, position of the principal point, and radial and tangential distortions, were determined with additional parameters introduced in bundle block adjustments. The measurement of image coordinates and the bundle block adjustments with additional parameters were performed using the PHIDIAS digital photogrammetric system. The bundle block adjustment was repeated with the determined calibration parameters for both cameras. The obtained standard deviations of the measured image coordinates were 9.6 μm and 10.5 μm for the Flir A320 thermographic camera and 8.3 μm and 7.7 μm for the Nikon D3X SLR digital camera; the thermographic camera thus reaches almost the same accuracy level as the digital camera despite a pixel size four times larger. According to the obtained results, the interior geometry of the thermographic camera and the lens distortion were modelled efficiently.
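    The radial and tangential terms estimated as additional parameters are commonly expressed with the Brown distortion equations; a minimal sketch follows (the coefficient values in the test are placeholders, and the paper's exact parameterization may differ):

    ```python
    def distort(x, y, k1, k2, p1, p2):
        """Apply Brown radial (k1, k2) and tangential (p1, p2) distortion
        to normalized image coordinates (x, y); returns distorted (xd, yd)."""
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 * r2
        xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        return xd, yd
    ```

    In a bundle block adjustment these coefficients are estimated jointly with the focal length and principal point by minimizing the reprojection error over all control points.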

  8. THE EFFECT OF SALINITY-SODICITY AND GLYPHOSATE FORMULATIONS – AVANS PREMIUM 360 SL ON PHOSPHOMONOESTERASE ACTIVITIES IN SANDY LOAM

    Directory of Open Access Journals (Sweden)

    Maciej Płatkowski

    2016-01-01

    Full Text Available The aim of this study was to determine the influence of NaCl and the glyphosate-based herbicide Avans Premium 360 SL on acid and alkaline phosphomonoesterase activities in sandy loam. The experiment was carried out under laboratory conditions on sandy loam with a Corg content of 10.90 g/kg. Soil was divided into half-kilogram samples and adjusted to 60% of maximum water holding capacity. The experimental factors were: I – dosage of Avans Premium 360 SL (0, the recommended field dosage – FD, a tenfold higher dosage – 10 FD, and a hundredfold higher dosage – 100 FD), II – amount of NaCl (0, 3% and 6%), III – day of experiment (1, 7, 14, 28 and 56). On the sampling days, acid and alkaline phosphomonoesterase activities were assayed spectrophotometrically. The results showed that the application of Avans Premium 360 SL decreased acid and alkaline phosphomonoesterase activity in the sandy loam. A significant interaction effect between the dosage of Avans Premium 360 SL, the NaCl amount, and the day of the experiment was observed. The inhibitory effect of Avans Premium 360 SL was highest in soil with NaCl at an amount of 6%.

  9. Imaging capabilities of germanium gamma cameras

    International Nuclear Information System (INIS)

    Steidley, J.W.

    1977-01-01

    Quantitative methods of analysis based on the use of a computer simulation were developed and used to investigate the imaging capabilities of germanium gamma cameras. The main advantage of the computer simulation is that the inherent unknowns of clinical imaging procedures are removed from the investigation. The effects of patient-scattered radiation were incorporated using a mathematical LSF model which was empirically developed and experimentally verified. Image-modifying effects of patient motion, spatial distortions, and count rate capabilities were also included in the model. Spatial-domain and frequency-domain modeling techniques were developed and used in the simulation as required. The imaging capabilities of gamma cameras were assessed using low-contrast lesion source distributions. The results showed that an improvement in energy resolution from 10% to 2% offers significant clinical advantages in terms of improved contrast, increased detectability, and reduced patient dose. The improvements are of greatest significance for small lesions at low contrast. The results of the computer simulation were also used to compare a design of a hypothetical germanium gamma camera with a state-of-the-art scintillation camera. The computer model performed a parametric analysis of the interrelated effects of inherent and technological limitations of gamma camera imaging. In particular, the trade-off between collimator resolution and collimator efficiency for detection of a given low-contrast lesion was directly addressed. This trade-off is an inherent limitation of both gamma cameras. The image-degrading effects of patient motion, camera spatial distortions, and low count rate were shown to modify the improvements due to better energy resolution. Thus, based on this research, the continued development of germanium cameras to the point of clinical demonstration is recommended.
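    The frequency-domain modelling mentioned above typically derives the modulation transfer function (MTF) from the line spread function (LSF). A generic sketch of that relationship, not the author's actual simulation code:

    ```python
    import numpy as np

    def mtf_from_lsf(lsf, dx):
        """Modulation transfer function as the normalized magnitude of the
        Fourier transform of a sampled line spread function.

        lsf : 1D array of LSF samples; dx : sample spacing (e.g. in mm).
        Returns (spatial frequencies in cycles per unit of dx, MTF values).
        """
        spectrum = np.abs(np.fft.rfft(lsf))
        mtf = spectrum / spectrum[0]          # unity at zero frequency
        freqs = np.fft.rfftfreq(len(lsf), d=dx)
        return freqs, mtf
    ```

    In such a simulation, better energy resolution narrows the scatter component of the LSF, which raises the MTF at the spatial frequencies relevant to small, low-contrast lesions.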

  10. Stereo Pinhole Camera: Assembly and experimental activities

    Directory of Open Access Journals (Sweden)

    Gilmário Barbosa Santos

    2015-05-01

    Full Text Available This work describes the assembly of a stereo pinhole camera for capturing stereo pairs of images and proposes experimental activities with it. A pinhole camera can be as sophisticated as you want, or so simple that it can be handcrafted from practically recyclable materials. This paper describes the practical use of the pinhole camera throughout history and up to the present day. Aspects of optics and geometry involved in building the stereo pinhole camera are presented with illustrations. Furthermore, experiments are proposed using the images obtained by the camera for 3D visualization through a pair of anaglyph glasses, and the estimation of relative depth by triangulation is discussed.
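    For a rectified stereo pair, the depth-by-triangulation experiment reduces to the classic relation Z = f·B/d. A small illustration (the numbers in the test are invented):

    ```python
    def depth_from_disparity(f_px, baseline, disparity_px):
        """Classic rectified-stereo relation Z = f * B / d.

        f_px         : focal length expressed in pixels,
        baseline     : separation between the two pinholes (metres),
        disparity_px : horizontal shift of a feature between the two images.
        Returns the depth Z in the same unit as the baseline.
        """
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return f_px * baseline / disparity_px
    ```

    For the handcrafted pinhole pair, f is the pinhole-to-film distance, B is the distance between the two pinholes, and d can be measured by hand between corresponding points on the two captured images.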

  11. First results from the TOPSAT camera

    Science.gov (United States)

    Greenway, Paul; Tosh, Ian; Morris, Nigel; Burton, Gary; Cawley, Steve

    2017-11-01

    The TopSat camera is a low cost remote sensing imager capable of producing 2.5 metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. The instrument was designed and assembled at the Space Science & Technology Department of the CCLRC's Rutherford Appleton Laboratory (RAL) in the UK, and was launched on the 27th October 2005 from Plesetsk Cosmodrome in Northern Russia on a Kosmos-3M. The camera utilises an off-axis three mirror system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping the costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, which is a collaboration between four UK organisations; QinetiQ, Surrey Satellite Technology Ltd (SSTL), RAL and Infoterra. Its objective is to demonstrate provision of rapid response high resolution imagery to fixed and mobile ground stations using a low cost minisatellite. The paper "Development of the TopSat Camera" presented by RAL at the 5th ICSO in 2004 described the opto-mechanical design, assembly, alignment and environmental test methods implemented. Now that the spacecraft is in orbit and successfully acquiring images, this paper presents the first results from the camera and makes an initial assessment of the camera's in-orbit performance.

  12. 360°-View of Quantum Theory and Ab Initio Simulation at Extreme Conditions: 2014 Sanibel Symposium

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Hai-Ping [Univ. of Florida, Gainesville, FL (United States)

    2016-09-02

    The Sanibel Symposium 2014 was held February 16-21, 2014, at the King and Prince, St. Simons Island, GA. It was successful in bringing condensed-matter physicists and quantum chemists together productively to drive the emergence of those specialties. The Symposium has had a significant role in preparing a whole generation of quantum theorists. The 54th Sanibel meeting looked to the future in two ways. We had 360°-View sessions to honor the exceptional contributions of Rodney Bartlett (70), Bill Butler (70), Yngve Öhrn (80), Fritz Schaefer (70), and Malcolm Stocks (70). The work of these five has greatly impacted several generations of quantum chemists and condensed-matter physicists. The “360°” is the sum of their ages. More significantly, it symbolizes a panoramic view of critical developments and accomplishments in theoretical and computational chemistry and physics oriented toward the future. Thus, two of the eight 360°-View sessions focused specifically on younger scientists. The 360°-View program was the major component of the 2014 Sanibel meeting. Another four sessions included a sub-symposium on ab initio Simulations at Extreme Conditions, with a focus on getting past the barriers of present-day Born-Oppenheimer molecular dynamics through advances in finite-temperature density functional theory, orbital-free DFT, and new all-numerical approaches.

  13. Camera Trajectory from Wide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for reliable camera trajectory estimation, for instance, the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes image feature matching very difficult (or impossible), and camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the

  14. The Eye of the Camera

    NARCIS (Netherlands)

    van Rompay, Thomas Johannes Lucas; Vonk, Dorette J.; Fransen, M.L.

    2009-01-01

    This study addresses the effects of security cameras on prosocial behavior. Results from previous studies indicate that the presence of others can trigger helping behavior, arising from the need for approval of others. Extending these findings, the authors propose that security cameras can likewise

  15. Poster: A Software-Defined Multi-Camera Network

    OpenAIRE

    Chen, Po-Yen; Chen, Chien; Selvaraj, Parthiban; Claesen, Luc

    2016-01-01

    The widespread popularity of OpenFlow has led to a significant increase in the number of applications developed for Software-Defined Networking (SDN). In this work, we propose the architecture of a Software-Defined Multi-Camera Network consisting of small, flexible, economic, and programmable cameras which combine the functions of the processor, switch, and camera. A Software-Defined Multi-Camera Network can effectively reduce the overall network bandwidth and reduce a large amount of the Capex a...

  16. Gamma camera performance: technical assessment protocol

    Energy Technology Data Exchange (ETDEWEB)

    Bolster, A.A. [West Glasgow Hospitals NHS Trust, London (United Kingdom). Dept. of Clinical Physics; Waddington, W.A. [University College London Hospitals NHS Trust, London (United Kingdom). Inst. of Nuclear Medicine

    1996-12-31

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author).

  17. The Light Field Attachment: Turning a DSLR into a Light Field Camera Using a Low Budget Camera Ring

    KAUST Repository

    Wang, Yuwang

    2016-11-16

    We propose a concept for a lens attachment that turns a standard DSLR camera and lens into a light field camera. The attachment consists of 8 low-resolution, low-quality side cameras arranged around the central high-quality SLR lens. Unlike most existing light field camera architectures, this design provides a high-quality 2D image mode, while simultaneously enabling a new high-quality light field mode with a large camera baseline but little added weight, cost, or bulk compared with the base DSLR camera. From an algorithmic point of view, the high-quality light field mode is made possible by a new light field super-resolution method that first improves the spatial resolution and image quality of the side cameras and then interpolates additional views as needed. At the heart of this process is a super-resolution method that we call iterative Patch- And Depth-based Synthesis (iPADS), which combines patch-based and depth-based synthesis in a novel fashion. Experimental results obtained for both real captured data and synthetic data confirm that our method achieves substantial improvements in super-resolution for side-view images as well as the high-quality and view-coherent rendering of dense and high-resolution light fields.

  18. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions that could leave an object in partial or full view in one camera when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, both in each stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as on variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  20. PROFIL-360 high resolution steam generator tube profilometry system

    International Nuclear Information System (INIS)

    Glass, S.W.

    1985-01-01

    A high-resolution profilometry system, PROFIL 360, has been developed to assess the condition of steam generator tubes and rapidly produce the data needed to evaluate the potential for developing in-service leaks. The probe carries an electromechanical sensor in a rotating head. The technique has been demonstrated in the field, saving tubes that would have been plugged under the go-gauge criterion and flagging for plugging other high-risk candidates that might otherwise not have been removed from service.

  2. Comparison of myocardial perfusion imaging between the new high-speed gamma camera and the standard anger camera

    International Nuclear Information System (INIS)

    Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi

    2013-01-01

    Cadmium-zinc-telluride (CZT) solid-state detectors have been recently introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of the CZT high-speed gamma camera (Discovery NM 530c) with that of the standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest 99mTc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera with a 15-min scan time each for stress and for rest. All scans were immediately repeated on a CZT camera with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong within narrow Bland-Altman limits of agreement. Using list mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans, and in 100% of the ≥4-min scans. For CZT scans at rest, similarly, image quality was rated as good or excellent in 94% of the 1-min scans, and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, which is equivalent to standard myocardial single-photon emission computed tomography, despite a short scan time of less than half of the standard time. (author)

  3. High resolution RGB color line scan camera

    Science.gov (United States)

    Lynch, Theodore E.; Huettig, Fred

    1998-04-01

    This paper describes a color line scan camera family that is available with 6000, 8000 or 10000 pixels per color channel, utilizes off-the-shelf lenses, interfaces with currently available frame grabbers, includes on-board pixel-by-pixel offset correction, and is configurable and controllable via an RS232 serial port for computer-controlled or stand-alone operation. This line scan camera is based on an existing 8000-element monochrome line scan camera designed by AOA for OEM use. The new color version includes improvements such as better packaging and additional user features which make the camera easier to use. The heart of the camera is a tri-linear CCD sensor with on-chip color balancing for maximum accuracy and pinned photodiodes for low-lag response. Each color channel is digitized to 12 bits, and all three channels are multiplexed together so that the resulting camera output is either a 12- or 8-bit data stream at a rate of up to 24 Megapixels/sec. Conversion from 12 to 8 bits, or user-defined gamma, is accomplished by on-board user-defined video look-up tables. The camera has two user-selectable operating modes: a low-speed, high-sensitivity mode and a high-speed, reduced-sensitivity mode. The intended uses of the camera include industrial inspection, digital archiving, document scanning, and graphic arts applications.
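    The look-up-table conversion from 12 to 8 bits works as sketched below; the gamma value and table layout are illustrative, not the camera's actual firmware:

    ```python
    def build_lut(gamma=2.2, in_bits=12, out_bits=8):
        """Build a gamma-correction look-up table with one entry per possible
        input code, mapping 12-bit sensor values to 8-bit output values."""
        in_max = (1 << in_bits) - 1
        out_max = (1 << out_bits) - 1
        return [round(((v / in_max) ** (1.0 / gamma)) * out_max)
                for v in range(in_max + 1)]
    ```

    At video rates, each 12-bit sample is then converted by a single table index rather than a per-pixel power computation, which is why such LUTs are practical in camera hardware.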

  4. An Open Standard for Camera Trap Data

    Directory of Open Access Journals (Sweden)

    Tavis Forrester

    2016-12-01

    Full Text Available Camera traps that capture photos of animals are a valuable tool for monitoring biodiversity. The use of camera traps is rapidly increasing, and there is an urgent need for standardization to facilitate data management, reporting and data sharing. Here we offer the Camera Trap Metadata Standard as an open data standard for storing and sharing camera trap data, developed by experts from a variety of organizations. The standard captures the information necessary to share data between projects and offers a foundation for collecting the more detailed data needed for advanced analysis. It records information about study design, the type of camera used, and the location and species names for all detections in a standardized way. This information is critical for accurately assessing results from individual camera trapping projects and for combining data from multiple studies for meta-analysis. The standard is an important step in aligning camera trapping surveys with best practices in data-intensive science. Ecology is moving rapidly into the realm of big data, and central data repositories for camera trap data are emerging as a critical tool. This data standard will help researchers standardize data terms, align past data to new repositories, and provide a framework for utilizing data across repositories and research projects to advance animal ecology and conservation.
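    To make the idea concrete, a record under such a standard might bundle study, camera, location, and detection information as below. The field names are illustrative only, not the Camera Trap Metadata Standard's actual terms.

    ```python
    # Hypothetical camera-trap deployment record; all field names are invented
    # for illustration and do NOT reflect the standard's real vocabulary.
    deployment = {
        "project": {"name": "Ridge Survey", "design": "systematic grid"},
        "camera": {"make": "ExampleCam", "model": "X-100", "trigger": "motion"},
        "location": {"latitude": 41.902, "longitude": -87.634, "datum": "WGS84"},
        "detections": [
            {"timestamp": "2016-06-01T04:12:00Z",
             "species": "Odocoileus virginianus"},
        ],
    }

    def validate(record):
        """Check that the minimal sections needed for cross-project sharing
        are all present in a record."""
        required = {"project", "camera", "location", "detections"}
        return required <= record.keys()
    ```

    A shared repository would apply a richer schema check than this, but the principle is the same: records that name their study design, hardware, place, and species the same way can be combined for meta-analysis.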

  5. Camera network video summarization

    Science.gov (United States)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever an incident requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l21-norm minimization. The objective function is two-fold. The first part captures the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and in extracting a diverse set of representatives. The second uses a capped l21-norm to model sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
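    The capped l2,1-norm used in the second objective can be written as the sum over rows of min(‖z_i‖₂, θ). A small numeric sketch (θ and the matrix in the test are made up):

    ```python
    import numpy as np

    def capped_l21(Z, theta):
        """Capped l2,1-norm of a matrix Z: sum over rows of min(||z_i||_2, theta).

        Rows whose l2 norm exceeds the cap theta contribute only theta, so
        gross outlier rows cannot dominate the objective.
        """
        row_norms = np.linalg.norm(Z, axis=1)
        return float(np.minimum(row_norms, theta).sum())
    ```

    In the representative-selection objective this term replaces the plain l2,1-norm, which would otherwise let a single outlier row's large norm swamp the sparsity penalty.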

  6. The use of a portable gamma camera for preoperative lymphatic mapping: a comparison with a conventional gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Vidal-Sicart, Sergi; Paredes, Pilar [Hospital Clinic Barcelona, Nuclear Medicine Department (CDIC), Barcelona (Spain); Institut d' Investigacio Biomedica Agusti Pi Sunyer (IDIBAPS), Barcelona (Spain); Vermeeren, Lenka; Valdes-Olmos, Renato A. [Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital (NKI-AVL), Nuclear Medicine Department, Amsterdam (Netherlands); Sola, Oriol [Hospital Clinic Barcelona, Nuclear Medicine Department (CDIC), Barcelona (Spain)

    2011-04-15

    Planar lymphoscintigraphy is routinely used for preoperative sentinel node visualization, but large gamma cameras are not always available. We evaluated the reproducibility of lymphatic mapping with a smaller and portable gamma camera. In two centres, 52 patients with breast cancer received preoperative lymphoscintigraphy with a conventional gamma camera with a field of view of 40 x 40 cm. Static anterior and lateral images were performed at 15 min, 2 h and 4 h after injection of the radiotracer (99mTc-nanocolloid). At 2 h after injection, anterior and oblique images were also performed with a portable gamma camera (Sentinella, Oncovision) positioned to obtain a field of view of 20 x 20 cm. Visualization of lymphatic drainage on conventional images and images with the portable device were compared for number of nodes depicted, their intensity and localization of sentinel nodes. The images performed with the conventional gamma camera depicted sentinel nodes in 94%, while the portable gamma camera showed drainage in 73%. There was, however, no significant difference in visualization between the two devices when a lead shield was used to mask the injection area in 43 patients (95 vs 88%, p = 0.25). Second-echelon nodes were visualized in 62% of the patients with the conventional gamma camera and in 29% of the cases with the portable gamma camera. Preoperative imaging with a portable gamma camera fitted with a pinhole collimator to obtain a field of view of 20 x 20 cm is able to depict sentinel nodes in 88% of the cases, if a lead shield is used to mask the injection site. This device may be useful in centres without the possibility to perform a preoperative image. (orig.)

  7. The use of a portable gamma camera for preoperative lymphatic mapping: a comparison with a conventional gamma camera

    International Nuclear Information System (INIS)

    Vidal-Sicart, Sergi; Paredes, Pilar; Vermeeren, Lenka; Valdes-Olmos, Renato A.; Sola, Oriol

    2011-01-01

    Planar lymphoscintigraphy is routinely used for preoperative sentinel node visualization, but large gamma cameras are not always available. We evaluated the reproducibility of lymphatic mapping with a smaller, portable gamma camera. In two centres, 52 patients with breast cancer received preoperative lymphoscintigraphy with a conventional gamma camera with a field of view of 40 x 40 cm. Static anterior and lateral images were acquired at 15 min, 2 h and 4 h after injection of the radiotracer (99mTc-nanocolloid). At 2 h after injection, anterior and oblique images were also acquired with a portable gamma camera (Sentinella, Oncovision) positioned to obtain a field of view of 20 x 20 cm. Visualization of lymphatic drainage on conventional images and images from the portable device was compared for the number of nodes depicted, their intensity and the localization of sentinel nodes. The images from the conventional gamma camera depicted sentinel nodes in 94% of cases, while the portable gamma camera showed drainage in 73%. There was, however, no significant difference in visualization between the two devices when a lead shield was used to mask the injection area in 43 patients (95 vs 88%, p = 0.25). Second-echelon nodes were visualized in 62% of the patients with the conventional gamma camera and in 29% of the cases with the portable gamma camera. Preoperative imaging with a portable gamma camera fitted with a pinhole collimator to obtain a field of view of 20 x 20 cm can depict sentinel nodes in 88% of cases if a lead shield is used to mask the injection site. This device may be useful in centres that lack the facilities for conventional preoperative imaging. (orig.)

  8. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    Full Text Available There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.
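
The decentralized handover the abstract describes can be sketched in a few lines. The sketch below is an illustration of the idea, not the authors' implementation: the camera names, scalar confidence scores, and `handover_step` helper are all hypothetical, and the real system uses CamShift measurements and a mobile agent framework rather than a score table.

```python
# Minimal sketch of decentralized handover: one tracker instance per
# object migrates to whichever adjacent camera sees the object best.
# Camera names, scores and this helper are hypothetical illustrations.

class Camera:
    def __init__(self, name):
        self.name = name
        self.neighbours = []   # adjacent cameras
        self.tracker = None    # at most one tracking agent resident here

    def confidence(self, scores):
        # How well this camera currently sees the object (0 = not at all).
        return scores.get(self.name, 0.0)

def handover_step(cameras, scores):
    # Purely local decision: a camera holding a tracker hands it to a
    # neighbour that sees the object with strictly higher confidence.
    for cam in cameras:
        if cam.tracker is None:
            continue
        best = max(cam.neighbours, key=lambda n: n.confidence(scores),
                   default=None)
        if best is not None and best.confidence(scores) > cam.confidence(scores):
            best.tracker, cam.tracker = cam.tracker, None

a, b, c = Camera("A"), Camera("B"), Camera("C")
a.neighbours, b.neighbours, c.neighbours = [b], [a, c], [b]
a.tracker = "object-1"

# The object walks out of A's view into B's view.
handover_step([a, b, c], {"A": 0.2, "B": 0.9, "C": 0.0})
print(b.tracker)
```

Because every decision uses only a camera's own measurement and its neighbours' measurements, no central coordinator is needed, which is what makes the approach autonomous and scalable.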

  9. 12 CFR 360.2 - Federal Home Loan banks as secured creditors.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Federal Home Loan banks as secured creditors... OF GENERAL POLICY RESOLUTION AND RECEIVERSHIP RULES § 360.2 Federal Home Loan banks as secured... regulations, the receiver of a borrower from a Federal Home Loan Bank shall recognize the priority of any...

  10. Creep of L360NB steel at normal temperatures

    Czech Academy of Sciences Publication Activity Database

    Gajdoš, Lubomír; Šperl, Martin; Náprstek, Jiří; Pavelková, M.

    2016-01-01

    Vol. 96, No. 9/10 (2016), pp. 202-211. ISSN 0032-1761. R&D Projects: GA TA ČR(CZ) TE02000162 Institutional support: RVO:68378297 Keywords: L360NB steel * creep deformation * normal temperatures Subject RIV: JL - Materials Fatigue, Friction Mechanics

  11. Image reconstruction methods for the PBX-M pinhole camera

    International Nuclear Information System (INIS)

    Holland, A.; Powell, E.T.; Fonck, R.J.

    1990-03-01

    This paper describes two methods which have been used to reconstruct the soft x-ray emission profile of the PBX-M tokamak from the projected images recorded by the PBX-M pinhole camera. Both methods must accurately represent the shape of the reconstructed profile while also providing a degree of immunity to noise in the data. The first method is a simple least squares fit to the data. This has the advantage of being fast and small, and thus easily implemented on the PDP-11 computer used to control the video digitizer for the pinhole camera. The second method involves the application of a maximum entropy algorithm to an overdetermined system. This has the advantage of allowing the use of a default profile. This profile contains additional knowledge about the plasma shape which can be obtained from equilibrium fits to the external magnetic measurements. Additionally the reconstruction is guaranteed positive, and the fit to the data can be relaxed by specifying both the amount and distribution of noise in the image. The algorithm described has the advantage of being considerably faster, for an overdetermined system, than the usual Lagrange multiplier approach to finding the maximum entropy solution. 13 refs., 24 figs
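
The first reconstruction method, a plain least-squares fit to the projection data, can be illustrated with a toy example. Everything below is an assumption for illustration only: the 5-zone profile, the random chord-geometry matrix `A`, and the noise level are invented, not taken from the PBX-M camera.

```python
import numpy as np

# Toy illustration of method 1 (plain least-squares); all numbers invented.
rng = np.random.default_rng(0)

# Assumed emission profile over 5 concentric zones (arbitrary units).
profile = np.array([1.0, 0.8, 0.5, 0.2, 0.1])

# Hypothetical chord geometry: A[i, j] = path length of line of sight i
# through zone j (8 chords, 5 zones, so the system is overdetermined).
A = rng.uniform(0.0, 1.0, size=(8, 5)) + np.eye(8, 5)

# Simulated pinhole-camera measurements with a little noise.
b = A @ profile + rng.normal(0.0, 1e-3, size=8)

# The least-squares fit recovers the profile from the projections.
recovered, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(recovered, 2))
```

The maximum entropy method of the abstract differs in that it adds a default profile and a positivity guarantee, at the cost of an iterative solve.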

  12. Video Sharing System Based on Wi-Fi Camera

    OpenAIRE

    Qidi Lin; Hewei Yu; Jinbin Huang; Weile Liang

    2015-01-01

    This paper introduces a video sharing platform based on Wi-Fi, which consists of a camera, a mobile phone and a PC server. The platform receives the wireless signal from the camera and shows the live video captured by the camera on the mobile phone. In addition, it can send commands to the camera and rotate the camera's holder. The platform can be applied to interactive teaching, monitoring of dangerous areas and so on. Testing results show that the platform can share ...

  13. Polarization sensitive camera for the in vitro diagnostic and monitoring of dental erosion

    Science.gov (United States)

    Bossen, Anke; Rakhmatullina, Ekaterina; Lussi, Adrian; Meier, Christoph

    Due to frequent consumption of acidic food and beverages, the prevalence of dental erosion is increasing worldwide. In the initial erosion stage, the hard dental tissue is softened by acidic demineralization. As erosion progresses, gradual tissue wear occurs, resulting in thinning of the enamel; complete loss of the enamel tissue can be observed in severe clinical cases. It is therefore essential to provide a diagnostic tool for accurate detection and monitoring of dental erosion already at early stages. In this manuscript, we present the development of a polarization sensitive imaging camera for the visualization and quantification of dental erosion. The system consists of two CMOS cameras mounted on two sides of a polarizing beamsplitter. A horizontal linearly polarized light source is positioned orthogonal to the camera to ensure incidence and detection angles of 45°. The specular reflected light from the enamel surface is collected with an objective lens mounted on the beamsplitter and divided into horizontal (H) and vertical (V) components on the associated cameras. Images of non-eroded and eroded enamel surfaces at different erosion degrees were recorded and assessed with diagnostic software. The software was designed to generate and display two types of images: the distribution of the reflection intensity (V) and the polarization ratio (H-V)/(H+V) throughout the analyzed tissue area. The measurement and visualization of these two optical parameters, i.e. the specular reflection intensity and the polarization ratio, allowed detection and quantification of enamel erosion at early stages in vitro.
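
The second image type the software generates follows directly from the H and V frames. A minimal numpy sketch of the polarization-ratio image (H-V)/(H+V), with invented intensity values:

```python
import numpy as np

# Invented 2x2 intensity frames from the two cameras behind the
# polarizing beamsplitter: H = horizontal, V = vertical component.
H = np.array([[8.0, 7.5],
              [6.0, 5.0]])
V = np.array([[2.0, 2.5],
              [3.0, 5.0]])

# Image type 1 is the reflection-intensity distribution (V itself);
# image type 2 is the polarization ratio defined in the abstract.
ratio = (H - V) / (H + V)
print(ratio)
```

A lower ratio at a pixel means the specular reflection there is less strongly polarized; how the ratio maps onto erosion stage is the calibration question the study addresses.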

  14. EDICAM (Event Detection Intelligent Camera)

    Energy Technology Data Exchange (ETDEWEB)

    Zoletnik, S. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Szabolics, T., E-mail: szabolics.tamas@wigner.mta.hu [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Kocsis, G.; Szepesi, T.; Dunai, D. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary)

    2013-10-15

    Highlights: ► We present EDICAM's hardware modules. ► We present EDICAM's main design concepts. ► This paper will describe EDICAM firmware architecture. ► Operation principles description. ► Further developments. -- Abstract: A new type of fast framing camera has been developed for fusion applications by the Wigner Research Centre for Physics during the last few years. A new concept was designed for intelligent event driven imaging which is capable of focusing image readout to Regions of Interests (ROIs) where and when predefined events occur. At present these events mean intensity changes and external triggers but in the future more sophisticated methods might also be defined. The camera provides 444 Hz frame rate at full resolution of 1280 × 1024 pixels, but monitoring of smaller ROIs can be done in the 1–116 kHz range even during exposure of the full image. Keeping space limitations and the harsh environment in mind the camera is divided into a small Sensor Module and a processing card interconnected by a fast 10 Gbit optical link. This camera hardware has been used for passive monitoring of the plasma in different devices for example at ASDEX Upgrade and COMPASS with the first version of its firmware. The new firmware and software package is now available and ready for testing the new event processing features. This paper will present the operation principle and features of the Event Detection Intelligent Camera (EDICAM). The device is intended to be the central element in the 10-camera monitoring system of the Wendelstein 7-X stellarator.
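
The quoted data volume can be checked with simple arithmetic; the figure of 1.43 terabytes corresponds to packed 12-bit samples and binary (1024^4-byte) terabytes:

```python
# Back-of-envelope check of the stated data volume: 12-bit samples,
# 1280 x 1024 pixels, 444 frames per second, over a 30-minute window.
bits_per_frame = 1280 * 1024 * 12
bytes_per_second = bits_per_frame / 8 * 444
total_bytes = bytes_per_second * 30 * 60

# The abstract's 1.43 terabytes matches binary terabytes (TiB).
tib = total_bytes / 1024**4
print(round(tib, 2))  # prints 1.43
```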

  15. An Open Standard for Camera Trap Data

    NARCIS (Netherlands)

    Forrester, Tavis; O'Brien, Tim; Fegraus, Eric; Jansen, P.A.; Palmer, Jonathan; Kays, Roland; Ahumada, Jorge; Stern, Beth; McShea, William

    2016-01-01

    Camera traps that capture photos of animals are a valuable tool for monitoring biodiversity. The use of camera traps is rapidly increasing and there is an urgent need for standardization to facilitate data management, reporting and data sharing. Here we offer the Camera Trap Metadata Standard as an

  16. Polarizing aperture stereoscopic cinema camera

    Science.gov (United States)

    Lipton, Lenny

    2012-07-01

    The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor, the size of the standard 35 mm frame, with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies will be unaltered.

  17. Control system for gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.

    1977-01-01

    An improved gamma camera arrangement is described which utilizes a solid-state detector formed of high-purity germanium. The central arrangement of the camera carries out a trapezoidal filtering operation over antisymmetrically summed spatial signals through gated integration procedures utilizing idealized integrating intervals. By simultaneously carrying out peak energy evaluation of the input signals, desirable control over pulse pile-up phenomena is achieved. Additionally, the use of the time derivative of incoming pulse or signal energy information to initially enable the control system provides a low-level information evaluation, serving to enhance the signal processing efficiency of the camera

  18. Scintillating camera

    International Nuclear Information System (INIS)

    Vlasbloem, H.

    1976-01-01

    The invention relates to a scintillating camera and in particular to an apparatus for determining the position coordinates of a light pulse emitting point on the anode of an image intensifier tube which forms part of a scintillating camera, comprising at least three photomultipliers which are positioned to receive light emitted by the anode screen on their photocathodes, circuit means for processing the output voltages of the photomultipliers to derive voltages that are representative of the position coordinates; a pulse-height discriminator circuit adapted to be fed with the sum voltage of the output voltages of the photomultipliers for gating the output of the processing circuit when the amplitude of the sum voltage of the output voltages of the photomultipliers lies in a predetermined amplitude range, and means for compensating the distortion introduced in the image on the anode screen
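
The step from photomultiplier output voltages to position coordinates can be illustrated with a classic Anger-style centroid, one common way such processing circuits work. The PMT positions, voltages, and discriminator window below are invented for illustration and are not the patent's exact circuit:

```python
# Classic Anger-style centroid: one common way to derive a position
# coordinate from photomultiplier output voltages. The PMT positions,
# voltages and acceptance window are invented, not the patent's circuit.
pmt_x = [-1.0, 0.0, 1.0]   # photocathode centre positions (arbitrary units)
volts = [0.2, 0.5, 0.3]    # output voltages for one light pulse

total = sum(volts)                                          # sum voltage
x_coord = sum(x * v for x, v in zip(pmt_x, volts)) / total  # centroid

# The sum voltage also feeds the pulse-height discriminator, which gates
# the output when the amplitude lies in a predetermined range.
accept = 0.8 <= total <= 1.2  # assumed acceptance window
print(round(x_coord, 3), accept)
```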

  19. The "All Sky Camera Network"

    Science.gov (United States)

    Caldwell, Andy

    2005-01-01

    In 2001, the "All Sky Camera Network" came to life as an outreach program to connect the Denver Museum of Nature and Science (DMNS) exhibit "Space Odyssey" with Colorado schools. The network is comprised of cameras placed strategically at schools throughout Colorado to capture fireballs--rare events that produce meteorites.

  20. Europe's space camera unmasks a cosmic gamma-ray machine

    Science.gov (United States)

    1996-11-01

    The new-found neutron star is the visible counterpart of a pulsating radio source, Pulsar 1055-52. It is a mere 20 kilometres wide. Although the neutron star is very hot, at about a million degrees C, very little of its radiant energy takes the form of visible light. It emits mainly gamma-rays, an extremely energetic form of radiation. By examining it at visible wavelengths, astronomers hope to figure out why Pulsar 1055-52 is the most efficient generator of gamma-rays known so far, anywhere in the Universe. The Faint Object Camera found Pulsar 1055-52 in near ultraviolet light at 3400 angstroms, a little shorter in wavelength than the violet light at the extremity of the human visual range. Roberto Mignani, Patrizia Caraveo and Giovanni Bignami of the Istituto di Fisica Cosmica in Milan, Italy, report its optical identification in a forthcoming issue of Astrophysical Journal Letters (1 January 1997). The formal name of the object is PSR 1055-52. Evading the glare of an adjacent star: the Italian team had tried since 1988 to spot Pulsar 1055-52 with two of the most powerful ground-based optical telescopes in the Southern Hemisphere. These were the 3.6-metre Telescope and the 3.5-metre New Technology Telescope of the European Southern Observatory at La Silla, Chile. Unfortunately an ordinary star 100,000 times brighter lay in almost the same direction in the sky, separated from the neutron star by only a thousandth of a degree. The Earth's atmosphere defocused the star's light sufficiently to mask the glimmer from Pulsar 1055-52. The astronomers therefore needed an instrument in space. The Faint Object Camera offered the best precision and sensitivity to continue the hunt. Devised by European astronomers to complement the American wide field camera in the Hubble Space Telescope, the Faint Object Camera has a relatively narrow field of view. It intensifies the image of a faint object by repeatedly accelerating electrons from photo-electric films, so as to produce ...

  1. Initial laboratory evaluation of color video cameras: Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  2. Movement-based Interaction in Camera Spaces

    DEFF Research Database (Denmark)

    Eriksson, Eva; Riisgaard Hansen, Thomas; Lykke-Olesen, Andreas

    2006-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects, we present four selected applications and use these applications to leverage our discussion and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking, and we show how these three concepts can be used to analyse other camera tracking applications.

  3. 21 CFR 886.1120 - Opthalmic camera.

    Science.gov (United States)

    2010-04-01

    ... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Opthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding area...

  4. Cryogenic solid Schmidt camera as a base for future wide-field IR systems

    Science.gov (United States)

    Yudin, Alexey N.

    2011-11-01

    Work is focused on studying the capability of a solid Schmidt camera to serve as a wide-field infrared lens for an aircraft system with whole-sphere coverage, working in the 8-14 um spectral range and coupled with a spherical focal array of megapixel class. Designs of a 16 mm f/0.2 lens with 60 and 90 degree sensor diagonals are presented, and their image quality is compared with a conventional solid design. An achromatic design with significantly improved performance, containing an enclosed soft correcting lens behind the protective front lens, is proposed. One of the main goals of the work is to estimate the benefits of curved detector arrays in wide-field systems for the 8-14 um spectral range. Coupling of the photodetector with the solid Schmidt camera by means of frustrated total internal reflection is considered, with corresponding tolerance analysis. The whole lens, except the front element, is considered to be cryogenic, with the solid Schmidt unit cooled by flowing hydrogen for improvement of bulk transmission.

  5. Comparison of retreatment ability of full-sequence reciprocating instrumentation and 360° rotary instrumentation.

    Science.gov (United States)

    Capar, Ismail Davut; Gok, Tuba; Orhan, Ezgi

    2015-12-01

    The purpose of the present study was to investigate the amount of root canal filling material remaining after removal with 360° rotary instrumentation or reciprocating motion with the same file sequence. Root canals of 36 mandibular premolars were shaped with ProTaper Universal instruments up to size F2 and filled with the corresponding single gutta-percha cone and sealer. The teeth were assigned to two retreatment groups (n = 18): group 1, 360° rotational motion, and group 2, reciprocating motion of ATR Tecnika motors (1310° clockwise and 578° counterclockwise). The retreatment procedure was performed with ProTaper Universal retreatment files in the sequence D1-D3, followed by ProTaper Universal F3 instruments. The total time required to remove the filling material was recorded, and remaining filling material was examined under a stereomicroscope at ×8 magnification. The data were analysed statistically using the Mann-Whitney U test at the 95 % confidence level. The groups did not differ significantly (p > 0.05) in terms of remaining filling material, but the total time required for retreatment was shorter in the 360° rotational motion group than in the reciprocating motion group (p < 0.05). ProTaper Universal retreatment instruments used with the reciprocating motion of the ATR motor and with conventional rotary motion thus have similar efficacy in root canal filling removal.
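
The kinematics of the reciprocating group reduce to simple arithmetic: each cycle turns the file 1310° clockwise and 578° counterclockwise, for a net clockwise advance of 732° per cycle:

```python
# Net file rotation per cycle of the reciprocating motion used in group 2
# (angles taken from the abstract).
clockwise, counterclockwise = 1310, 578
net_per_cycle = clockwise - counterclockwise  # net clockwise advance, degrees
print(net_per_cycle)  # prints 732

# Expressed as full revolutions of net advance per reciprocation cycle:
net_revolutions = net_per_cycle / 360
```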

  6. Prism-based single-camera system for stereo display

    Science.gov (United States)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principles of binocular vision we deduce the relationship between binocular viewing and the dual-camera system. We can thus establish the relationship between the prism single-camera system and binocular vision, and obtain the positional relation of prism, camera, and object that gives the best stereo display. Finally, using the active shutter stereo glasses of NVIDIA Company, we realize a three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various observation manners of the eyes. A stereo imaging system designed by the proposed method can faithfully restore the 3-D shape of the photographed object.
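
The binocular-vision relationship such systems build on includes the standard depth-from-disparity formula Z = f·B/d. The numbers below are invented for illustration and are not from the article; in particular, the effective baseline B created by the prism halves depends on the actual prism geometry:

```python
# Standard binocular-vision depth relation: Z = f * B / d.
# All values below are assumed, purely for illustration.
f_pixels = 800.0     # focal length expressed in pixels (assumed)
baseline_m = 0.06    # effective virtual-camera baseline in metres (assumed)
disparity_px = 24.0  # disparity between the two half-images in pixels (assumed)

depth_m = f_pixels * baseline_m / disparity_px
print(depth_m)  # depth of the point in metres
```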

  7. The Use of Camera Traps in Wildlife

    Directory of Open Access Journals (Sweden)

    Yasin Uçarlı

    2013-11-01

    Full Text Available Camera traps are increasingly used in abundance and density estimates of wildlife species. They are a very good alternative to direct observation, particularly in steep terrain, in areas with dense vegetation cover, or for nocturnal species. The main advantage of camera traps is that they eliminate economic, personnel and time losses by operating continuously at different points at the same time. Camera traps are motion- and heat-sensitive and can take photos or video depending on the model. Crossover points and feeding or mating areas of the focal species are prioritized as camera trap locations. Population size can be estimated from the images combined with capture-recapture methods, and population density is then the population size divided by the effective sampling area. Mating and breeding seasons, habitat choice, group structures and survival rates of the focal species can also be derived from the images. Camera traps are thus a very useful and economical means of obtaining the data needed on particularly elusive species for planning and conservation efforts.

  8. Architectural Design Document for Camera Models

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Architecture of camera simulator models and data interface for the Maneuvering of Inspection/Servicing Vehicle (MIV) study.

  9. Hidden cameras everything you need to know about covert recording, undercover cameras and secret filming

    CERN Document Server

    Plomin, Joe

    2016-01-01

    Providing authoritative information on the practicalities of using hidden cameras to expose abuse or wrongdoing, this book is vital reading for anyone who may use or encounter secret filming. It gives specific advice on using phones or covert cameras and unravels the complex legal and ethical issues that need to be considered.

  10. Long wavelength infrared camera (LWIRC): a 10 micron camera for the Keck Telescope

    Energy Technology Data Exchange (ETDEWEB)

    Wishnow, E.H.; Danchi, W.C.; Tuthill, P.; Wurtz, R.; Jernigan, J.G.; Arens, J.F.

    1998-05-01

    The Long Wavelength Infrared Camera (LWIRC) is a facility instrument for the Keck Observatory designed to operate at the f/25 forward Cassegrain focus of the Keck I telescope. The camera operates over the wavelength band 7-13 µm using ZnSe transmissive optics. A set of filters, a circular variable filter (CVF), and a mid-infrared polarizer are available, as are three plate scales: 0.05″, 0.10″, and 0.21″ per pixel. The camera focal plane array and optics are cooled using liquid helium. The system has been refurbished with a 128 x 128 pixel Si:As detector array. The electronics readout system used to clock the array is compatible with both the hardware and software of the other Keck infrared instruments NIRC and LWS. A new pre-amplifier/A-D converter has been designed and constructed which greatly decreases the system's susceptibility to noise.
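
The field of view implied by each plate scale follows from multiplying by the 128-pixel array width:

```python
# Field of view implied by the 128 x 128 pixel array at each of the
# three plate scales quoted in the abstract (arcseconds per pixel).
pixels = 128
scales = (0.05, 0.10, 0.21)
fovs = [pixels * s for s in scales]  # field width in arcseconds
for s, f in zip(scales, fovs):
    print(f'{s:.2f}"/pixel -> {f:.1f}" across the array')
```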

  11. A Simple Setup to Perform 3D Locomotion Tracking in Zebrafish by Using a Single Camera

    Directory of Open Access Journals (Sweden)

    Gilbert Audira

    2018-02-01

    Full Text Available Generally, the measurement of three-dimensional (3D) swimming behavior in zebrafish relies on commercial software or sophisticated scripts and depends on more than two cameras to capture the video. Here, we establish a simple and economic apparatus to detect 3D locomotion in zebrafish: a single-camera capture system that records zebrafish movement in a specially designed water tank with a mirror tilted at 45 degrees. The recorded videos are analyzed using idTracker, while spatial positions are calibrated by ImageJ software and 3D trajectories are plotted by Origin 9.1 software. This simple setup allows scientists to track the 3D swimming behavior of multiple zebrafish at low cost and with precise spatial positioning, showing great potential for fish behavioral research in the future.
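
The fusion of the direct view and the 45° mirror view into one 3D trajectory can be sketched as follows; the pixel coordinates and the calibration factor are invented for illustration (in the paper the calibration is done in ImageJ):

```python
import numpy as np

# Sketch of fusing the direct view, giving (x, y), with the 45-degree
# mirror view, giving (x, z), into one 3D trajectory. Pixel coordinates
# and the calibration factor are invented for illustration.
px_per_cm = 20.0  # assumed pixel-to-cm calibration

# Tracked pixel coordinates of one fish in both sub-views over 3 frames.
front = np.array([[100, 40], [110, 42], [120, 45]])    # direct view: (x, y)
mirror = np.array([[100, 60], [110, 66], [120, 70]])   # mirror view: (x, z)

# x comes from either view; y from the front view; z from the mirror view.
traj_cm = np.column_stack([front[:, 0], front[:, 1], mirror[:, 1]]) / px_per_cm
print(traj_cm)
```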

  12. Using a laser scanning camera for reactor inspection

    International Nuclear Information System (INIS)

    Armour, I.A.; Adrain, R.S.; Klewe, R.C.

    1984-01-01

    Inspection of nuclear reactors is normally carried out using TV or film cameras. There are, however, several areas where these cameras show considerable shortcomings. To overcome these difficulties, laser scanning cameras have been developed. This type of camera can be used for general visual inspection as well as the provision of high resolution video images with high ratio on and off-axis zoom capability. In this paper, we outline the construction and operation of a laser scanning camera and give examples of how it has been used in various power stations, and indicate future potential developments. (author)

  13. Qualification Tests of Micro-camera Modules for Space Applications

    Science.gov (United States)

    Kimura, Shinichi; Miyasaka, Akira

    Visual capability is very important for space-based activities, for which small, low-cost space cameras are desired. Although cameras for terrestrial applications are continually being improved, little progress has been made on cameras used in space, which must be extremely robust to withstand harsh environments. This study focuses on commercial off-the-shelf (COTS) CMOS digital cameras because they are very small and are based on an established mass-market technology. Radiation and ultrahigh-vacuum tests were conducted on a small COTS camera that weighs less than 100 mg (including optics). This paper presents the results of the qualification tests for COTS cameras and for a small, low-cost COTS-based space camera.

  14. Intralaryngeal thyroarytaenoid lateralisation using the Fast-Fix 360 system: a canine cadaveric study.

    Science.gov (United States)

    Stegen, Ludo; Kitshoff, Adriaan M; Van Goethem, Bart; Vandekerckhove, Peter; de Rooster, Hilde

    2015-01-01

    Laryngeal paralysis is a condition in which failure of arytaenoid abduction results in a reduced rima glottidis cross-sectional area. The most commonly performed surgical techniques rely on unilateral abduction of the arytaenoid, requiring a lateral or ventral surgical approach to the larynx. The aim of the study was to investigate a novel minimally invasive intralaryngeal thyroarytaenoid lateralisation technique using the Fast-Fix 360 meniscal repair system. Larynges were harvested from large-breed canine cadavers. With the aid of Kirschner wires placed between the centre of the vocal process and the centre of an imaginary line between the cranial thyroid fissure and the cricothyroid articulation, the mean insertion angle was calculated. The Fast-Fix 360 delivery needle inserted intralaryngeally (n=10), according to a simplified insertion angle (70°), resulted in thyroid penetration (>2.5 mm from the margin) in all specimens. The Fast-Fix was applied unilaterally at 70°, with the first toggle fired on the lateral aspect of the thyroid cartilage and the second inside the laryngeal cavity on retraction, and the suture was tightened. Preprocedural (61.06±9.21 mm2) and postprocedural (138.37±26.12 mm2) rima glottidis cross-sectional areas were significantly different (P<0.0001), with a mean percentage increase of 125.96 per cent (±16.54 per cent). Intralaryngeal thyroarytaenoid lateralisation using the Fast-Fix 360 meniscal repair system ex vivo thus increased the rima glottidis cross-sectional area significantly.

  15. Modelling Virtual Camera Behaviour Through Player Gaze

    DEFF Research Database (Denmark)

    Picardi, Andrea; Burelli, Paolo; Yannakakis, Georgios N.

    2012-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction largely depend on the placement and animation of the virtual camera. Therefore, virtual camera control plays a critical role in player experience and, thereby, in the overall quality of a computer game. Both the game industry and game AI research focus on the development of increasingly sophisticated systems to automate the control of the virtual camera, integrating artificial intelligence algorithms within physical simulations. However, in both industry and academia little research has been carried out on the relationship between virtual camera, game-play and player behaviour. We run a game user experiment to shed some light on this relationship and to identify relevant differences between camera behaviours through different game sessions, playing behaviours and player gaze patterns. Results show that users can ...

  16. Event detection intelligent camera development

    International Nuclear Information System (INIS)

    Szappanos, A.; Kocsis, G.; Molnar, A.; Sarkozi, J.; Zoletnik, S.

    2008-01-01

    A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed for the video diagnostics of the W-7X stellarator; it consists of 10 distinct and standalone measurement channels, each holding a camera. Different operation modes will be implemented for continuous and for triggered readout as well. Hardware-level trigger signals will be generated from real-time image processing algorithms optimized for digital signal processor (DSP) and field programmable gate array (FPGA) architectures. At full resolution a camera sends 12-bit sampled 1280 x 1024 pixels at 444 fps, which means 1.43 terabytes over half an hour. Analysing such a huge amount of data is time consuming and has a high computational complexity; we plan to overcome this problem with EDICAM's preprocessing concepts. The EDICAM camera system integrates all the advantages of CMOS sensor chip technology and fast network connections. EDICAM is built up from three different modules with two interfaces: a sensor module (SM) with reduced hardware and functional elements to achieve a small and compact size and robust operation in a harsh environment; an image processing and control unit (IPCU) module that handles all user-predefined events and runs image processing algorithms to generate trigger signals; and a 10 Gigabit Ethernet compatible image readout card that functions as the network interface for the PC. In this contribution all the concepts of EDICAM and the functions of the distinct modules are described

  17. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    A positron emission tomography camera having a plurality of detector rings positioned side-by-side, or offset by one-half of the detector cross section, around a patient area to detect radiation therefrom. Each detector ring or offset ring includes a plurality of photomultiplier tubes, and a plurality of scintillation crystals are positioned relative to the photomultiplier tubes such that each tube is responsive to more than one crystal. Each alternate crystal in the ring is offset by one-half or less of the thickness of the crystal, so that the staggered crystals are seen by more than one photomultiplier tube. This sharing of crystals and photomultiplier tubes allows identification of the staggered crystals and the use of smaller detectors shared by larger photomultiplier tubes, thereby requiring fewer photomultiplier tubes, creating more scanning slices, providing better data sampling, and reducing the cost of the camera. The offset detector ring geometry reduces the cost of the positron camera and improves its performance.

  18. Gain uniformity, linearity, saturation and depletion in gated microchannel-plate x-ray framing cameras

    International Nuclear Information System (INIS)

    Landen, O.L.; Bell, P.M.; Satariano, J.J.; Oertel, J.A.; Bradley, D.K.

    1994-01-01

    The pulsed characteristics of gated, stripline-configuration microchannel-plate (MCP) detectors used in X-ray framing cameras deployed on laser plasma experiments worldwide are examined in greater detail. The detectors are calibrated using short (20 ps) and long (500 ps) pulse X-ray irradiation and 3-60 ps, deep UV (202 and 213 nm), spatially-smoothed laser irradiation. Two-dimensional unsaturated gain profiles were measured under irradiation and fitted using a discrete dynode model. Finally, a pump-probe experiment quantifying for the first time long-suspected gain depletion by strong localized irradiation was performed. The mechanism for the extra voltage and hence gain degradation is shown to be associated with intense MCP irradiation in the presence of the voltage pulse, at a fluence at least an order of magnitude above that necessary for saturation. Results obtained for both constant pump area and constant pump fluence are presented. The data are well modeled by calculating the instantaneous electrical energy loss due to the intense charge extraction at the pump site and then recalculating the gain downstream at the probe site, given the pump-dependent degradation in voltage amplitude.

  19. a Low-Cost Panoramic Camera for the 3d Documentation of Contaminated Crime Scenes

    Science.gov (United States)

    Abate, D.; Toschi, I.; Sturdy-Colls, C.; Remondino, F.

    2017-11-01

    Crime scene documentation is a fundamental task which has to be undertaken in a fast, accurate and reliable way, highlighting evidence which can be further used for ensuring justice for victims and for guaranteeing the successful prosecution of perpetrators. The main focus of this paper is on the documentation of a typical crime scene and on the rapid recording of any possible contamination that could have influenced its original appearance. A 3D reconstruction of the environment is first generated by processing panoramas acquired with the low-cost Ricoh Theta 360 camera, and further analysed to highlight the potential and limits of this emerging, consumer-grade technology. Then, a methodology is proposed for the rapid recording of changes occurring between the original and the contaminated crime scene. The approach is based on an automatic 3D feature-based data registration, followed by a cloud-to-cloud distance computation, taking as input the 3D point clouds generated before and after, e.g., the misplacement of evidence. All the algorithms adopted for panorama pre-processing, photogrammetric 3D reconstruction, and 3D geometry registration and analysis are presented, and all are available in open-source or low-cost software solutions.
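
The cloud-to-cloud distance step can be illustrated with a minimal numpy-only sketch: for each point of the "after" cloud, find the distance to its nearest neighbour in the "before" cloud, so displaced evidence stands out. This is an assumption-laden toy (brute-force search, synthetic clouds); a real pipeline would use a KD-tree and a prior registration step.

```python
import numpy as np

def cloud_to_cloud(after: np.ndarray, before: np.ndarray) -> np.ndarray:
    """For each point in `after` (N x 3), return the distance to its
    nearest neighbour in `before` (M x 3). Brute force, O(N*M)."""
    # pairwise squared distances via broadcasting
    d2 = ((after[:, None, :] - before[None, :, :]) ** 2).sum(axis=2)
    return np.sqrt(d2.min(axis=1))

# toy scene: a flat "floor" of points plus one misplaced piece of evidence
rng = np.random.default_rng(0)
before = rng.uniform(0, 1, (500, 3)) * [2.0, 2.0, 0.01]
after = before.copy()
after[0] += [0.0, 0.0, 0.30]          # evidence moved ~30 cm upwards

d = cloud_to_cloud(after, before)
print(d.max())                         # ~0.30: flags the moved point
```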

  20. Development of a visible framing camera diagnostic for the study of current initiation in z-pinch plasmas

    International Nuclear Information System (INIS)

    Muron, D.J.; Hurst, M.J.; Derzon, M.S.

    1996-01-01

    The authors assembled and tested a visible framing camera system to take 5 ns FWHM images of the early time emission from a z-pinch plasma. This diagnostic was used in conjunction with a visible streak camera, allowing early time emission measurements to diagnose current initiation. Individual frames from gated image intensifiers were proximity coupled to charge injection device (CID) cameras and read out at video rate and 8-bit resolution. A mirror was used to view the pinch from a 90-degree angle. The authors observed the destruction of the mirror surface, due to the high surface heating, and the subsequent reduction in signal reflected from the mirror. Images were obtained that showed early time ejecta and a nonuniform emission from the target; this initial test of the equipment highlighted problems with the measurement. The non-uniformities in early time emission are believed to be due either to spatially varying current density or to heating of the foam. The results and suggestions for improvement are discussed in the text.

  1. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    Science.gov (United States)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.

    2011-01-01

    The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive-only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5 μm) or long-wave infrared (LWIR) radiation (8-12 μm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24-hour water and 12-hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.
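
For a rectified stereo pair, the stereo ranging mentioned above reduces to the standard triangulation relation Z = f·B/d. A minimal sketch; the focal length, baseline and disparity values below are made up for illustration, not JPL's rig parameters:

```python
def stereo_range(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Range to a point from a rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# hypothetical TIR stereo rig: 640-px focal length, 30 cm baseline,
# 8 px of disparity on a distant obstacle
print(stereo_range(640.0, 0.30, 8.0))   # 24.0 m
```

Note the inverse relation: halving the disparity doubles the estimated range, which is why low-resolution TIR sensors limit ranging accuracy at distance.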

  2. THERMAL EFFECTS ON CAMERA FOCAL LENGTH IN MESSENGER STAR CALIBRATION AND ORBITAL IMAGING

    Directory of Open Access Journals (Sweden)

    S. Burmeister

    2018-04-01

    Full Text Available We analyse images taken by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft for the camera's thermal response in the harsh thermal environment near Mercury. Specifically, we study thermally induced variations in focal length of the Mercury Dual Imaging System (MDIS). Within the several hundreds of images of star fields, the Wide Angle Camera (WAC) typically captures up to 250 stars in one frame of the panchromatic channel. We measure star positions and relate these to the known star coordinates taken from the Tycho-2 catalogue. We solve for camera pointing, the focal length parameter and two non-symmetrical distortion parameters for each image. Using data from the temperature sensors on the camera focal plane, we model a linear focal length function of the form f(T) = A0 + A1 T. Next, we use images from MESSENGER's orbital mapping mission. We deal with large image blocks, typically used for the production of high-resolution digital terrain models (DTMs). We analysed images from the combined quadrangles H03 and H07, a selected region covered by approx. 10,600 images, in which we identified about 83,900 tiepoints. Using bundle block adjustments, we solved for the unknown coordinates of the control points, the pointing of the camera, as well as the camera's focal length. We then fit the above linear function with respect to the focal plane temperature. As a result, we find a complex response of the camera to the thermal conditions of the spacecraft. To first order, we see a linear increase of approx. 0.0107 mm per degree temperature for the Narrow-Angle Camera (NAC). This is in agreement with the observed thermal response seen in images of the panchromatic channel of the WAC. Unfortunately, further comparisons of results from the two methods, both of which use different portions of the available image data, are limited. If left uncorrected, these effects may pose significant difficulties in
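
The linear thermal model f(T) = A0 + A1·T can be estimated by ordinary least squares once (temperature, focal length) pairs are available. A sketch with synthetic data; the base focal length and noise level are illustrative assumptions, with only the ~0.0107 mm/deg slope taken from the abstract:

```python
import numpy as np

# synthetic focal-plane temperatures (deg C) and focal lengths (mm)
rng = np.random.default_rng(1)
T = rng.uniform(-20, 40, 200)
A0_true, A1_true = 78.0, 0.0107          # A1 mimics the ~0.0107 mm/deg slope
f = A0_true + A1_true * T + rng.normal(0, 0.002, T.size)

# least-squares fit of f(T) = A0 + A1*T
A1, A0 = np.polyfit(T, f, 1)             # polyfit returns highest degree first
print(A0, A1)                            # close to the true coefficients
```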

  3. Traveling wave deflector design for femtosecond streak camera

    International Nuclear Information System (INIS)

    Pei, Chengquan; Wu, Shengli; Luo, Duan; Wen, Wenlong; Xu, Junkai; Tian, Jinshou; Zhang, Minrui; Chen, Pin; Chen, Jianzhong; Liu, Rong

    2017-01-01

    In this paper, a traveling-wave deflector (TWD) with a slow-wave property induced by a microstrip transmission line is proposed for femtosecond streak cameras. The passband and dispersion properties were simulated. In addition, the dynamic temporal resolution of the femtosecond camera was simulated with CST software. The results showed that with the proposed TWD a femtosecond streak camera can achieve a dynamic temporal resolution of less than 600 fs. Experiments were done to test the femtosecond streak camera, and an 800 fs dynamic temporal resolution was obtained. Guidance is provided for optimizing a femtosecond streak camera to obtain higher temporal resolution.

  4. Traveling wave deflector design for femtosecond streak camera

    Energy Technology Data Exchange (ETDEWEB)

    Pei, Chengquan; Wu, Shengli [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China)]; Luo, Duan [Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); University of Chinese Academy of Sciences, Beijing 100049 (China)]; Wen, Wenlong [Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China)]; Xu, Junkai [Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); University of Chinese Academy of Sciences, Beijing 100049 (China)]; Tian, Jinshou, E-mail: tianjs@opt.ac.cn [Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, Shanxi 030006 (China)]; Zhang, Minrui; Chen, Pin [Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); University of Chinese Academy of Sciences, Beijing 100049 (China)]; Chen, Jianzhong [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China)]; Liu, Rong [Xi'an Technological University, Xi'an 710021 (China)]

    2017-05-21

    In this paper, a traveling-wave deflector (TWD) with a slow-wave property induced by a microstrip transmission line is proposed for femtosecond streak cameras. The passband and dispersion properties were simulated. In addition, the dynamic temporal resolution of the femtosecond camera was simulated with CST software. The results showed that with the proposed TWD a femtosecond streak camera can achieve a dynamic temporal resolution of less than 600 fs. Experiments were done to test the femtosecond streak camera, and an 800 fs dynamic temporal resolution was obtained. Guidance is provided for optimizing a femtosecond streak camera to obtain higher temporal resolution.

  5. Multi-Angle Snowflake Camera Value-Added Product

    Energy Technology Data Exchange (ETDEWEB)

    Shkurko, Konstantin [Univ. of Utah, Salt Lake City, UT (United States); Garrett, T. [Univ. of Utah, Salt Lake City, UT (United States); Gaustad, K [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-12-01

    The Multi-Angle Snowflake Camera (MASC) addresses a need for high-resolution multi-angle imaging of hydrometeors in freefall with simultaneous measurement of fallspeed. As illustrated in Figure 1, the MASC consists of three cameras, separated by 36°, each pointing at an identical focal point approximately 10 cm away. Located immediately above each camera, a light aims directly at the center of the depth of field of its corresponding camera. The focal point at which the cameras are aimed lies within a ring through which hydrometeors fall. The ring houses a system of near-infrared emitter-detector pairs, arranged in two arrays separated vertically by 32 mm. When hydrometeors pass through the lower array, they simultaneously trigger all cameras and lights. Fallspeed is calculated from the time it takes to traverse the distance between the upper and lower triggering arrays. The trigger electronics filter out ambient light fluctuations associated with varying sunlight and shadows. The microprocessor onboard the MASC controls the camera system and communicates with the personal computer (PC). The image data is sent via a FireWire 800 line, and fallspeed (and camera control) is sent via a Universal Serial Bus (USB) line that relies on RS232-over-USB serial conversion. See Table 1 for specific details on the MASC located at the Oliktok Point Mobile Facility on the North Slope of Alaska. The value-added product (VAP) detailed in this documentation analyzes the raw data (Section 2.0) using Python: image analysis relies on the OpenCV image processing library, and the derived aggregated statistics rely on averaging. See Sections 4.1 and 4.2 for more details on which variables are computed.
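
The fallspeed follows directly from the 32 mm separation of the trigger arrays and the time between the two trigger events. A minimal sketch (the timestamps below are hypothetical):

```python
ARRAY_SEPARATION_M = 0.032   # vertical spacing of the IR trigger arrays (32 mm)

def fallspeed(t_upper_s: float, t_lower_s: float) -> float:
    """Fallspeed (m/s) from upper- and lower-array trigger timestamps."""
    dt = t_lower_s - t_upper_s
    if dt <= 0:
        raise ValueError("lower array must trigger after the upper array")
    return ARRAY_SEPARATION_M / dt

# a hydrometeor crossing the 32 mm gap in 16 ms
print(fallspeed(0.000, 0.016))   # 2.0 m/s
```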

  6. CameraHRV: robust measurement of heart rate variability using a camera

    Science.gov (United States)

    Pai, Amruta; Veeraraghavan, Ashok; Sabharwal, Ashutosh

    2018-02-01

    The inter-beat interval (the time period of the cardiac cycle) changes slightly with every heartbeat; this variation is measured as Heart Rate Variability (HRV). HRV is presumed to occur due to interactions between the parasympathetic and sympathetic nervous systems. Therefore, it is sometimes used as an indicator of the stress level of an individual. HRV also reveals some clinical information about cardiac health. Currently, HRV is accurately measured using contact devices such as a pulse oximeter. However, recent research in the field of non-contact imaging photoplethysmography (iPPG) has made vital sign measurements possible using just a video recording of any exposed skin (such as a person's face). The current signal processing methods for extracting HRV using peak detection perform well for contact-based systems but perform poorly on iPPG signals. The main reason for this poor performance is that current methods are sensitive to the large noise sources often present in iPPG data. Further, current methods are not robust to the motion artifacts that are common in iPPG systems. We developed a new algorithm, CameraHRV, for robustly extracting HRV even at the low SNR common in iPPG recordings. CameraHRV combines spatial combination and frequency demodulation to obtain HRV from the instantaneous frequency of the iPPG signal. CameraHRV outperforms other current methods of HRV estimation. Ground truth data were obtained from an FDA-approved pulse oximeter for validation purposes. On iPPG data, CameraHRV showed an error of 6 milliseconds for low-motion and varying skin tone scenarios, a 14% improvement in error. In the case of high-motion scenarios like reading, watching and talking, the error was 10 milliseconds.
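
The frequency-demodulation idea can be sketched with a numpy-only analytic signal: synthesize an FM "pulse" signal whose instantaneous frequency varies around 1 Hz (60 bpm), then recover that frequency from the phase of the analytic signal. This illustrates the principle only; it is not the authors' CameraHRV code, and the sampling rate and modulation depth are made up:

```python
import numpy as np

def analytic_signal(x: np.ndarray) -> np.ndarray:
    """Hilbert analytic signal via FFT (numpy-only)."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0        # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0            # keep the Nyquist bin
    return np.fft.ifft(X * h)

fs = 100.0                                   # frames per second
t = np.arange(0, 60, 1 / fs)
# heart rate ~1 Hz, modulated by +/- 0.05 Hz to mimic HRV
f_inst_true = 1.0 + 0.05 * np.sin(2 * np.pi * 0.1 * t)
phase = 2 * np.pi * np.cumsum(f_inst_true) / fs
x = np.cos(phase)

z = analytic_signal(x)
f_inst = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
core = f_inst[200:-200]                      # trim FFT edge effects
print(core.mean())                           # ~1.0 Hz
```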

  7. Determination of the optimal speed of rotational display through an 180 degree arc in rotatostereoradiography and MR angiography

    International Nuclear Information System (INIS)

    Ottomo, M.; Takekawa, S.D.; Sugawara, K.; Nakamura, T.; Fujimoto, M.; Nakanishi, T.

    1990-01-01

    Rotatostereoradiographic (RSRG) images are displayed in an oscillating, rotational manner. While reviewing these rotating images, the radiologist may become psychologically irritated by the rotation. A rapidly rotating display of linear subjects conveys three-dimensional depth information; this three-dimensional sense is lost if the rotation speed is too slow. The authors of this paper determined the slowest possible rotating display speed that still allows perception of three-dimensional depth while minimizing psychological irritation. In the RSRG device (Shimadzu ROTATO-360), an x-ray tube coupled with an image intensifier rotates through a 180-degree arc in 1.8 or 2.25 seconds; both rotation times could be doubled. The images were displayed at four different speeds, covering the 180-degree arc in 1.8, 2.25, 3.6, and 4.5 seconds.
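
The four display speeds correspond to simple angular rates; a quick check of the arithmetic:

```python
# angular rate for each display time over the 180-degree arc
ARC_DEG = 180
for seconds in (1.8, 2.25, 3.6, 4.5):
    print(f"{ARC_DEG} deg in {seconds} s -> {ARC_DEG / seconds:.0f} deg/s")
# 100, 80, 50 and 40 deg/s respectively
```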

  8. An image-tube camera for cometary spectrography

    Science.gov (United States)

    Mamadov, O.

    The paper discusses the mounting of an image tube camera. The cathode is of antimony, sodium, potassium, and cesium. The parts used for mounting are of acrylic plastic and a fabric-based laminate. A mounting design that does not include cooling is presented. The aperture ratio of the camera is 1:27. Also discussed is the way that the camera is joined to the spectrograph.

  9. Diagnosis of cardiovascular diseases by means of a scintillation camera with radioisotope

    Energy Technology Data Exchange (ETDEWEB)

    Domeki, Kin-ichi [Tokyo Medical Coll. (Japan)

    1975-01-01

    Hemodynamic studies of cardiovascular disease were made on 318 patients ranging in age from 2 months to 87 years, and the diagnostic usefulness of the scintillation camera with radioisotope was evaluated. Forty-five of the patients (aged 5 to 24), considered to be normal, were used for computing normal values. Equipment consisted of a Pho/Gamma III type scintillation camera, a 35 mm time-lapse camera and a 1600-word memory system. The radiopharmaceuticals used for the study were 113mIn and 99mTc. The isotope was administered by arm injection, allowing it to pass through the heart chambers and major blood vessels as a large bolus of radioactivity. For children 3 years and under, a pinhole collimator was employed to magnify cardiac images and enhance resolution. In septal defects, right-to-left shunting was easier to demonstrate than left-to-right. Atrial and/or ventricular changes in valvular disease were readily observed on the radio-angiogram and corresponded to the degree of damage. Increased blood pooling was also demonstrable. Hemodynamic changes in each chamber were established by drafting an isotope dilution curve, and the C2/C1 ratio, expressed as a percentage, was obtained from the curve. This safe, low-risk radionuclide evaluation of the heart and major blood vessels was found to be an excellent method for diagnosing congenital heart disease, aneurysms and obstructive arterial disease, as well as for evaluating their severity.

  10. Characterization of SWIR cameras by MRC measurements

    Science.gov (United States)

    Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.

    2014-05-01

    Cameras for the SWIR wavelength range are becoming more and more important because of the better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera, or to qualify a camera for a given application, characterization of the camera by means of the Minimum Resolvable Contrast (MRC) concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera device, the achievable observation range can be calculated for every combination of target size, illumination level and weather conditions. MRC measurements in the SWIR wavelength band can be performed largely along the guidelines of MRC measurements for a visual camera. Typically, measurements are performed with a set of resolution targets (e.g. the USAF 1951 target) manufactured with different contrast values, from 50% down to less than 1%. For a given illumination level, the achievable spatial resolution is then measured for each target. The resulting curve shows the minimum contrast necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, first, the irradiation parameters have to be given in radiometric instead of photometric units, which are limited in their use to the visible range; to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source with appropriate emission in the SWIR range (e.g. an incandescent lamp) is necessary, and the irradiance has to be measured in W/m2 instead of Lux = Lumen/m2. Third, the contrast values of the targets have to be recalibrated for the SWIR range because they typically differ from the values determined for the visual range. Measured MRC values of three cameras are compared to the specified performance data of the devices and to the results of a multi-band, in-house designed Vis-SWIR camera.

  11. Integrating Gigabit ethernet cameras into EPICS at Diamond light source

    International Nuclear Information System (INIS)

    Cobb, T.

    2012-01-01

    At Diamond Light Source a range of cameras is used to provide images for diagnostic purposes in both the accelerator and photon beamlines. The accelerator and existing beamlines use Point Grey Flea and Flea2 FireWire cameras. We have selected Gigabit Ethernet cameras supporting GigE Vision for our new photon beamlines. GigE Vision is an interface standard for high-speed Ethernet cameras which encourages interoperability between manufacturers. This paper describes the challenges encountered while integrating GigE Vision cameras from a range of vendors into EPICS. GigE Vision cameras appear to be more reliable than the FireWire cameras, and the simple cabling makes it much easier to move the cameras to different positions. Upcoming power-over-Ethernet versions of the cameras will reduce the number of cables still further.

  12. Digital airborne camera introduction and technology

    CERN Document Server

    Sandau, Rainer

    2014-01-01

    The last decade has seen great innovations in the airborne camera. This book is the first ever written on the topic and describes all components of a digital airborne camera, ranging from the object to be imaged to the mass memory device.

  13. Camera Traps Can Be Heard and Seen by Animals

    Science.gov (United States)

    Meek, Paul D.; Ballard, Guy-Anthony; Fleming, Peter J. S.; Schaefer, Michael; Williams, Warwick; Falzon, Greg

    2014-01-01

    Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing ranges (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals’ hearing and produce illumination that can be seen by many species. PMID:25354356

  14. Camera traps can be heard and seen by animals.

    Directory of Open Access Journals (Sweden)

    Paul D Meek

    Full Text Available Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used is considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable, so it is important to understand why the animals are disturbed. We conducted laboratory-based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing ranges (n = 21) and assessed the vision ranges (n = 3) of mammal species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals' hearing and produce illumination that can be seen by many species.

  15. A SPECT demonstrator—revival of a gamma camera

    Science.gov (United States)

    Valastyán, I.; Kerek, A.; Molnár, J.; Novák, D.; Végh, J.; Emri, M.; Trón, L.

    2006-07-01

    A gamma camera has been updated and converted to serve as a demonstrator for educational purposes. The gantry and the camera head were the only parts of the system that remained untouched. The main reason for this modernization was to increase the transparency of the gamma camera by partitioning the different logical building blocks of the system, thus providing access for inspection and improvements throughout the chain. New data acquisition and reconstruction software has been installed. By taking these measures, the camera is now used in education and also serves as a platform for tests of new hardware and software solutions. The camera is also used to demonstrate 3D (SPECT) imaging by collecting 2D projections from a rotatable cylindrical phantom. Since the camera head is not attached mechanically to the phantom, the effect of misalignment between the head and the rotation axis of the phantom can be studied.

  16. A SPECT demonstrator-revival of a gamma camera

    International Nuclear Information System (INIS)

    Valastyan, I.; Kerek, A.; Molnar, J.; Novak, D.; Vegh, J.; Emri, M.; Tron, L.

    2006-01-01

    A gamma camera has been updated and converted to serve as a demonstrator for educational purposes. The gantry and the camera head were the only parts of the system that remained untouched. The main reason for this modernization was to increase the transparency of the gamma camera by partitioning the different logical building blocks of the system, thus providing access for inspection and improvements throughout the chain. New data acquisition and reconstruction software has been installed. By taking these measures, the camera is now used in education and also serves as a platform for tests of new hardware and software solutions. The camera is also used to demonstrate 3D (SPECT) imaging by collecting 2D projections from a rotatable cylindrical phantom. Since the camera head is not attached mechanically to the phantom, the effect of misalignment between the head and the rotation axis of the phantom can be studied.

  17. Analysis of Camera Parameters Value in Various Object Distances Calibration

    International Nuclear Information System (INIS)

    Yusoff, Ahmad Razali; Ariff, Mohd Farid Mohd; Idris, Khairulnizam M; Majid, Zulkepli; Setan, Halim; Chong, Albert K

    2014-01-01

    In photogrammetric applications, good camera parameters are needed for mapping purposes, for example with an Unmanned Aerial Vehicle (UAV) equipped with a non-metric camera. Simple camera calibration is a common laboratory procedure for obtaining the camera parameter values. In aerial mapping, the interior camera parameters obtained from close-range camera calibration are used to correct the image error. However, the causes and effects of the calibration steps used to obtain accurate mapping need to be analysed. Therefore, this research contributes an analysis of camera parameters derived from a portable calibration frame of 1.5 × 1 meter dimension. Object distances of two, three, four, five, and six meters are the research focus. Results are analysed to find the changes in the image and camera parameter values. Hence, the calibration parameters of a camera are found to differ depending on the type of calibration parameters and the object distance.

  18. NV-CMOS HD camera for day/night imaging

    Science.gov (United States)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through the use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE …) for a camera which operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  19. Modelos matemáticos e substitución lingüística

    Directory of Open Access Journals (Sweden)

    Johannes Kabatek

    2012-01-01

    Full Text Available This paper presents a critical review of several recent studies that aim to model language contact, language shift and language death mathematically. We present the models proposed by Abrams & Strogatz (2003), Mira & Paredes (2005), Mira, Seoane & Nieto (2011), Minett & Wang (2007), Castelló (2010), and Castelló et al. (2012). These papers refer to different contact situations, including Galician-Spanish language contact. The different models and corresponding algorithms are described and critiqued, distinguishing several degrees of sophistication and adequacy. The study identifies a series of shortcomings, particularly in Abrams & Strogatz's and Mira & Paredes' papers: data problems, argumentative and logical errors, and also inadequate simplifications. Nevertheless, it is argued that the more sophisticated models in particular should continue to be developed, and a “heuristic spiral” between model developers and sociolinguists ought to be pursued, in order to achieve better models and greater insight into the mechanisms involved in language change and language shift.
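
The Abrams & Strogatz (2003) model reviewed here is a single ordinary differential equation for the fraction x of speakers of language X, dx/dt = (1-x)·c·s·x^a − x·c·(1-s)·(1-x)^a, where s is the perceived status of X and a an empirical exponent. A minimal Euler-integration sketch; the status value and step sizes below are illustrative choices, not fitted parameters from any of the reviewed papers:

```python
def abrams_strogatz(x0: float, s: float, a: float = 1.31, c: float = 1.0,
                    dt: float = 0.01, steps: int = 50_000) -> float:
    """Euler-integrate dx/dt = (1-x)*c*s*x**a - x*c*(1-s)*(1-x)**a."""
    x = x0
    for _ in range(steps):
        dx = (1 - x) * c * s * x**a - x * c * (1 - s) * (1 - x)**a
        x += dt * dx
    return x

# with higher status (s > 0.5), language X absorbs the whole population,
# the model's much-criticized prediction of inevitable language death
print(abrams_strogatz(x0=0.5, s=0.6))   # -> close to 1.0
```

The interior fixed point is unstable, so any asymmetry in status drives one language to extinction; the more sophisticated models cited above (e.g. with bilingualism) are attempts to soften exactly this behaviour.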

  20. Design of Endoscopic Capsule With Multiple Cameras.

    Science.gov (United States)

    Gu, Yingke; Xie, Xiang; Li, Guolin; Sun, Tianjia; Wang, Dan; Yin, Zheng; Zhang, Pengfei; Wang, Zhihua

    2015-08-01

    In order to reduce the miss rate of the wireless capsule endoscopy, in this paper, we propose a new system of the endoscopic capsule with multiple cameras. A master-slave architecture, including an efficient bus architecture and a four level clock management architecture, is applied for the Multiple Cameras Endoscopic Capsule (MCEC). For covering more area of the gastrointestinal tract wall with low power, multiple cameras with a smart image capture strategy, including movement sensitive control and camera selection, are used in the MCEC. To reduce the data transfer bandwidth and power consumption to prolong the MCEC's working life, a low complexity image compressor with PSNR 40.7 dB and compression rate 86% is implemented. A chipset is designed and implemented for the MCEC and a six cameras endoscopic capsule prototype is implemented by using the chipset. With the smart image capture strategy, the coverage rate of the MCEC prototype can achieve 98% and its power consumption is only about 7.1 mW.
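
    The 40.7 dB figure quoted above is a peak signal-to-noise ratio. As a reminder of what that number measures, here is the standard PSNR computation for 8-bit images (a generic sketch, not the MCEC compressor itself):

```python
import math

def psnr(orig, recon, max_val=255.0):
    """PSNR in dB: 10*log10(MAX^2 / MSE) over flattened pixel lists."""
    mse = sum((o - r) ** 2 for o, r in zip(orig, recon)) / len(orig)
    return float('inf') if mse == 0 else 10 * math.log10(max_val ** 2 / mse)
```

    A uniform error of 5 grey levels gives an MSE of 25 and hence about 34.2 dB; the reported 40.7 dB corresponds to an MSE of roughly 5.5.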

  1. Evaluation of Red Light Camera Enforcement at Signalized Intersections

    Directory of Open Access Journals (Sweden)

    Abdulrahman AlJanahi

    2007-12-01

    Full Text Available The study attempts to find the effectiveness of adopting red light cameras in reducing red light violators. An experimental approach was adopted to investigate the use of red light cameras at signalized intersections in the Kingdom of Bahrain. The study locations were divided into three groups. The first group was related to the approaches monitored with red light cameras. The second group was related to approaches without red light cameras, but located within an intersection that had one of its approaches monitored with red light cameras. The third group was related to intersection approaches located at intersections without red light cameras (controlled sites). A methodology was developed for data collection. The data were then tested statistically with a Z-test for proportions, comparing the proportion of red light violations occurring at different sites. The study found that the proportion of red light violators at approaches monitored with red light cameras was significantly less than that at the controlled sites for most of the time. Approaches without red light cameras located within intersections having red light cameras showed, in general, fewer violations than controlled sites, but the results were not significant for all times of the day. The study reveals that red light cameras have a positive effect on reducing red light violations. However, these conclusions need further evaluation to justify their safe and economic use.
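
    The proportion comparison described above is the classic two-proportion z-test. A minimal sketch with invented violation counts (the study's actual data are not reproduced here):

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled proportion.
    |z| > 1.96 rejects H0 at the 5% significance level (two-sided)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion under H0
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se
```

    For example, 30 violations in 1000 signal cycles at a camera-monitored approach against 60 in 1000 at a control site gives z ≈ -3.24, a statistically significant reduction.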

  2. Gamma ray camera

    International Nuclear Information System (INIS)

    Wang, S.-H.; Robbins, C.D.

    1979-01-01

    An Anger gamma ray camera is improved by the substitution of a gamma ray sensitive, proximity type image intensifier tube for the scintillator screen in the Anger camera. The image intensifier tube has a negatively charged flat scintillator screen, a flat photocathode layer, and a grounded, flat output phosphor display screen, all of which have the same dimension to maintain unit image magnification; all components are contained within a grounded metallic tube, with a metallic, inwardly curved input window between the scintillator screen and a collimator. The display screen can be viewed by an array of photomultipliers or solid state detectors. There are two photocathodes and two phosphor screens to give a two stage intensification, the two stages being optically coupled by a light guide. (author)

  3. Creep crack growth in a reactor pressure vessel steel at 360 deg C

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Rui; Seitisleam, F; Sandstroem, R [Swedish Institute for Metals Research, Stockholm (Sweden)

    1999-12-31

    Plain creep (PC) and creep crack growth (CCG) tests at 360 deg C and post metallography were carried out on a low alloy reactor pressure vessel steel (ASTM A508 class 2) with different microstructures. Lives for the CCG tests were shorter than those for the PC tests and this is more pronounced for simulated heat affected zone microstructure than for the parent metal at longer lives. For the CCG tests, after initiation, the cracks grew constantly and intergranularly before they accelerated to approach rupture. The creep crack growth rate is well described by C*. The relations between reference stress, failure time and steady crack growth rate are presented for the CCG tests. It is demonstrated that the failure stress due to CCG is considerably lower than the yield stress at 360 deg C. Consequently, the CCG will control the static strength of a reactor vessel. (orig.) 17 refs.
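
    The statement that the creep crack growth rate "is well described by C*" refers to the usual power-law correlation da/dt = D·(C*)^φ. A sketch with placeholder constants (D and φ below are illustrative, not the fitted values for A508 class 2 at 360 deg C):

```python
def ccg_rate(c_star, D=0.006, phi=0.85):
    """Creep crack growth rate da/dt = D * (C*)**phi.
    Units of D depend on the units chosen for C* and da/dt."""
    return D * c_star ** phi
```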

  4. Creep crack growth in a reactor pressure vessel steel at 360 deg C

    Energy Technology Data Exchange (ETDEWEB)

    Rui Wu; Seitisleam, F.; Sandstroem, R. [Swedish Institute for Metals Research, Stockholm (Sweden)

    1998-12-31

    Plain creep (PC) and creep crack growth (CCG) tests at 360 deg C and post metallography were carried out on a low alloy reactor pressure vessel steel (ASTM A508 class 2) with different microstructures. Lives for the CCG tests were shorter than those for the PC tests and this is more pronounced for simulated heat affected zone microstructure than for the parent metal at longer lives. For the CCG tests, after initiation, the cracks grew constantly and intergranularly before they accelerated to approach rupture. The creep crack growth rate is well described by C*. The relations between reference stress, failure time and steady crack growth rate are presented for the CCG tests. It is demonstrated that the failure stress due to CCG is considerably lower than the yield stress at 360 deg C. Consequently, the CCG will control the static strength of a reactor vessel. (orig.) 17 refs.

  5. 3D for the people: multi-camera motion capture in the field with consumer-grade cameras and open source software

    Directory of Open Access Journals (Sweden)

    Brandon E. Jackson

    2016-09-01

    Full Text Available Ecological, behavioral and biomechanical studies often need to quantify animal movement and behavior in three dimensions. In laboratory studies, a common tool to accomplish these measurements is the use of multiple, calibrated high-speed cameras. Until very recently, the complexity, weight and cost of such cameras have made their deployment in field situations risky; furthermore, such cameras are not affordable to many researchers. Here, we show how inexpensive, consumer-grade cameras can adequately accomplish these measurements both within the laboratory and in the field. Combined with our methods and open source software, the availability of inexpensive, portable and rugged cameras will open up new areas of biological study by providing precise 3D tracking and quantification of animal and human movement to researchers in a wide variety of field and laboratory contexts.

  6. Extended spectrum SWIR camera with user-accessible Dewar

    Science.gov (United States)

    Benapfl, Brendan; Miller, John Lester; Vemuri, Hari; Grein, Christoph; Sivananthan, Siva

    2017-02-01

    Episensors has developed a series of extended short wavelength infrared (eSWIR) cameras based on high-Cd concentration Hg1-xCdxTe absorbers. The cameras have a bandpass extending to 3 microns cutoff wavelength, opening new applications relative to traditional InGaAs-based cameras. Applications and uses are discussed and examples given. A liquid nitrogen pour-filled version was initially developed. This was followed by a compact Stirling-cooled version with detectors operating at 200 K. Each camera has unique sensitivity and performance characteristics. The cameras' size, weight and power specifications are presented along with images captured with band pass filters and eSWIR sources to demonstrate spectral response beyond 1.7 microns. The soft-seal Dewars of the cameras are designed for accessibility, and can be opened and modified in a standard laboratory environment. This modular approach allows user flexibility for swapping internal components such as cold filters and cold stops. The core electronics of the Stirling-cooled camera are based on a single commercial field programmable gate array (FPGA) that also performs on-board non-uniformity corrections, bad pixel replacement, and directly drives any standard HDMI display.

  7. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different....... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios...

  8. The GISMO-2 Bolometer Camera

    Science.gov (United States)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.

    2012-01-01

    We present the concept for the GISMO-2 bolometer camera, which we are building for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 x 40 TES-based Backshort Under Grid (BUG) bolometer array, the 2 mm channel operates with a 16 x 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  9. [Analog gamma camera digitalization computer system].

    Science.gov (United States)

    Rojas, G M; Quintana, J C; Jer, J; Astudillo, S; Arenas, L; Araya, H

    2004-01-01

    Digitalization of analogue gamma camera systems, using special acquisition boards in microcomputers and appropriate software for acquisition and processing of nuclear medicine images, is described in detail. Integrated microcomputer systems interconnected by means of a Local Area Network (LAN) and connected to several gamma cameras have been implemented using specialized acquisition boards. The PIP software (Portable Image Processing) was installed on each microcomputer to acquire and preprocess the nuclear medicine images. Specialized image processing software has been designed and developed for these purposes. This software allows processing of each nuclear medicine exam in a semiautomatic procedure, and recording of the results on radiological films. A stable, flexible and inexpensive system which makes it possible to digitize, visualize, process, and print nuclear medicine images obtained from analogue gamma cameras was implemented in the Nuclear Medicine Division. Such a system yields higher quality images than those obtained with analogue cameras while keeping operating costs considerably lower (filming: 24.6%, fixing: 48.2%, and developing: 26%). Analogue gamma camera systems can be digitalized economically. This system makes it possible to obtain optimal clinical quality nuclear medicine images, to increase the acquisition and processing efficiency, and to reduce the steps involved in each exam.

  10. Advanced system for Gamma Cameras modernization

    International Nuclear Information System (INIS)

    Osorio Deliz, J. F.; Diaz Garcia, A.; Arista Romeu, E. J.

    2015-01-01

    Analog and digital gamma cameras are still largely used in developing countries. Many of them rely on old hardware electronics, which in many cases limits their use in actual nuclear medicine diagnostic studies. Consequently, there are different worldwide companies that produce medical equipment engaged in partial or total Gamma Camera modernization. The present work has demonstrated the possibility of substituting almost the entire signal processing electronics inside a Gamma Camera detector head with a digitizer PCI card. This card includes four 12-bit Analog-to-Digital Converters with 50 MHz sampling speed. It has been installed in a PC and controlled through software developed in LabVIEW. Besides, some changes were made to the hardware inside the detector head, including a redesign of the Orientation Display Block (ODA card). Also, a new electronic design was added to the Microprocessor Control Block (MPA card), comprising a PIC microcontroller acting as a tuning system for the individual Photomultiplier Tubes. The images obtained by measurement of a 99m Tc point radioactive source using the modernized camera head demonstrate its overall performance. The system was developed and tested on an old Gamma Camera ORBITER II SIEMENS GAMMASONIC at the National Institute of Oncology and Radiobiology (INOR) under the CAMELUD project supported by the National Program PNOULU and the IAEA. (Author)

  11. Gamma cameras - a method of evaluation

    International Nuclear Information System (INIS)

    Oates, L.; Bibbo, G.

    2000-01-01

    Full text: With the sophistication and longevity of the modern gamma camera it is not often that the need arises to evaluate a gamma camera for purchase. We have recently been placed in the position of retiring our two single headed cameras of some vintage and replacing them with a state of the art dual head variable angle gamma camera. The process used for the evaluation consisted of five parts: (1) Evaluation of the technical specification as expressed in the tender document; (2) A questionnaire adapted from the British Society of Nuclear Medicine; (3) Site visits to assess gantry configuration, movement, patient access and occupational health, welfare and safety considerations; (4) Evaluation of the processing systems offered; (5) Whole of life costing based on equally configured systems. The results of each part of the evaluation were expressed using a weighted matrix analysis with each of the criteria assessed being weighted in accordance with their importance to the provision of an effective nuclear medicine service for our centre and the particular importance to paediatric nuclear medicine. This analysis provided an objective assessment of each gamma camera system from which a purchase recommendation was made. Copyright (2000) The Australian and New Zealand Society of Nuclear Medicine Inc
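
    The weighted matrix analysis described above combines per-criterion scores with importance weights into a single figure of merit. A minimal sketch (the criteria, scores and weights are invented for illustration; the paper's actual weighting scheme is not reproduced):

```python
def weighted_score(scores, weights):
    """Weighted-matrix evaluation: weighted mean of per-criterion scores."""
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)
```

    For example, a camera scoring 8/10 on image quality (weight 3) and 6/10 on whole-of-life cost (weight 1) receives an overall score of 7.5.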

  12. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    Science.gov (United States)

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently polar shadowed and sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.

  13. Acoustic results of the Boeing model 360 whirl tower test

    Science.gov (United States)

    Watts, Michael E.; Jordan, David

    1990-09-01

    An evaluation is presented for whirl tower test results of the Model 360 helicopter's advanced, high-performance four-bladed composite rotor system intended to facilitate over-200-knot flight. During these performance measurements, acoustic data were acquired by seven microphones. A comparison of whirl-tower tests with theory indicates that theoretical prediction accuracies vary with both microphone position and the inclusion of ground reflection. Prediction errors varied from 0 to 40 percent of the measured signal-to-peak amplitude.

  14. Cloud Computing with Context Cameras

    Science.gov (United States)

    Pickles, A. J.; Rosing, W. E.

    2016-05-01

    We summarize methods and plans to monitor and calibrate photometric observations with our autonomous, robotic network of 2m, 1m and 40cm telescopes. These are sited globally to optimize our ability to observe time-variable sources. Wide field "context" cameras are aligned with our network telescopes and cycle every ~2 minutes through BVr'i'z' filters, spanning our optical range. We measure instantaneous zero-point offsets and transparency (throughput) against calibrators in the 5-12m range from the all-sky Tycho2 catalog, and periodically against primary standards. Similar measurements are made for all our science images, with typical fields of view of ~0.5 degrees. These are matched against Landolt, Stetson and Sloan standards, and against calibrators in the 10-17m range from the all-sky APASS catalog. Such measurements provide pretty good instantaneous flux calibration, often to better than 5%, even in cloudy conditions. Zero-point and transparency measurements can be used to characterize, monitor and inter-compare sites and equipment. When accurate calibrations of Target against Standard fields are required, monitoring measurements can be used to select truly photometric periods when accurate calibrations can be automatically scheduled and performed.
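
    The instantaneous zero-point and transparency measurements described above follow standard photometric bookkeeping. A sketch under the usual definitions (the function names are illustrative, not the network's pipeline API):

```python
import math

def zero_point(counts_per_s, catalog_mag):
    """Photometric zero point ZP, defined so that m_cat = ZP - 2.5*log10(counts/s)."""
    return catalog_mag + 2.5 * math.log10(counts_per_s)

def transparency(zp_now, zp_photometric):
    """Throughput relative to a photometric night, from the zero-point drop."""
    return 10 ** (-0.4 * (zp_photometric - zp_now))
```

    A calibrator of magnitude 12 yielding 10,000 counts/s implies ZP = 22.0; if clouds later depress the zero point to 21.0, the transparency is about 40% of the photometric value.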

  15. Politseis ja piirivalves võib kaduda kuni 360 töökohta / Urmas Seaver

    Index Scriptorium Estoniae

    Seaver, Urmas, 1973-

    2009-01-01

    Due to budget cuts and the creation of a new joint agency, starting work on 1 January, that merges the police, the border guard, and the citizenship and migration board, 350-360 jobs may be lost in the police and border guard

  16. Gamma camera performance: technical assessment protocol

    International Nuclear Information System (INIS)

    Bolster, A.A.; Waddington, W.A.

    1996-01-01

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author)

  17. a Spatio-Spectral Camera for High Resolution Hyperspectral Imaging

    Science.gov (United States)

    Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.

    2017-08-01

    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.
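
    The acquisition principle described above, stepwise filter lines combined with forward motion, means that band b of frame t views the ground strip first seen by band 0 at frame t - b. A toy reconstruction under that idealized geometry (real processing must also co-register the frames; this sketch assumes a perfect one-band-per-frame advance):

```python
def assemble_cube(frames, band_height):
    """Regroup line-filtered frames into cube[band][ground_strip].
    frames[t] is a list of sensor rows; rows b*band_height:(b+1)*band_height
    carry filter band b, and ground strip g appears in band b at frame g + b."""
    n_bands = len(frames[0]) // band_height
    n_strips = len(frames) - n_bands + 1
    return [[frames[g + b][b * band_height:(b + 1) * band_height]
             for g in range(n_strips)]
            for b in range(n_bands)]
```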

  18. A SPATIO-SPECTRAL CAMERA FOR HIGH RESOLUTION HYPERSPECTRAL IMAGING

    Directory of Open Access Journals (Sweden)

    S. Livens

    2017-08-01

    Full Text Available Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600–900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475–925 nm), and we discuss future work.

  19. Performance analysis for gait in camera networks

    OpenAIRE

    Michela Goffredo; Imed Bouchrika; John Carter; Mark Nixon

    2008-01-01

    This paper deploys gait analysis for subject identification in multi-camera surveillance scenarios. We present a new method for viewpoint independent markerless gait analysis that does not require camera calibration and works with a wide range of directions of walking. These properties make the proposed method particularly suitable for gait identification in real surveillance scenarios where people and their behaviour need to be tracked across a set of cameras. Tests on 300 synthetic and real...

  20. Development of gamma camera and application to decontamination

    International Nuclear Information System (INIS)

    Yoshida, Akira; Moro, Eiji; Takahashi, Isao

    2013-01-01

    A gamma camera has been developed to support recovery from the contamination caused by the accident at the Fukushima Dai-ichi Nuclear Power Plant of Tokyo Electric Power Company. The gamma camera enables recognition of the contamination by visualizing radioactivity. It has been utilized for risk communication (explanation to community residents) at local governments in Fukushima. From now on, the gamma camera will be applied to decontamination issues: improving the efficiency of decontamination, visualizing the effect of decontamination work, and reducing radioactive waste. (author)

  1. A method of camera calibration in the measurement process with reference mark for approaching observation space target

    Science.gov (United States)

    Zhang, Hua; Zeng, Luan

    2017-11-01

    Binocular stereoscopic vision can be used for close-range observation of space targets from space-based platforms. In order to solve the problem that a traditional binocular vision system cannot work normally after being disturbed, an online self-referencing calibration method for a binocular stereo measuring camera is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object at the edge of the main optical path, imaged on the same focal plane as the target; this is equivalent to placing a standard reference inside the binocular imaging optical system. When the position of the system or the imaging device parameters are disturbed, the image of the standard reference changes accordingly in the imaging plane, while the physical position of the standard reference object does not change. The camera's external parameters can then be re-calibrated from the visual relationship to the standard reference object. The experimental results show that the maximum mean square error for the same object can be reduced from the original 72.88 mm to 1.65 mm when the right camera is deflected by 0.4 degrees and the left camera is rotated by 0.2° in elevation. This method realizes online calibration of a binocular stereoscopic vision measurement system and can effectively improve the anti-jamming ability of the system.
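
    The principle of the method, namely that a disturbance of the camera shows up as an image-plane shift of the fixed reference mark, can be illustrated with a first-order pinhole relation (the function and numbers below are illustrative, not the paper's algorithm):

```python
import math

def apparent_rotation_deg(pixel_shift, pixel_pitch_mm, focal_mm):
    """Camera rotation implied by an image shift of a fixed reference mark:
    theta = atan(shift_px * pitch / f) for a pinhole model."""
    return math.degrees(math.atan(pixel_shift * pixel_pitch_mm / focal_mm))
```

    For instance, a 70-pixel shift on a 5 µm-pitch sensor behind a 50 mm lens implies a rotation of about 0.4°, the magnitude of the disturbance tested in the experiment.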

  2. The making of analog module for gamma camera interface

    International Nuclear Information System (INIS)

    Yulinarsari, Leli; Rl, Tjutju; Susila, Atang; Sukandar

    2003-01-01

    An analog module for a gamma camera interface has been developed. To computerize a 37-PMT planar gamma camera, interface hardware and software between the planar gamma camera and a PC have been developed. With this interface, the gamma camera image information (originally an analog signal) is converted to a digital signal, so that data acquisition, image quality enhancement, data analysis and database processing can be carried out with the help of computers. There are three main gamma camera signals, i.e. X, Y and Z. This analog module digitizes the analog X and Y signals from the gamma camera, which carry position information originating in the gamma camera crystal. Analog-to-digital conversion is performed by two 12-bit ADCs with a conversion time of 800 ns each; the conversion of the X and Y coordinates is synchronized using a suitable strobe derived from the Z signal for event acceptance
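
    The X and Y position signals digitized by the module are produced by Anger logic: each event's position is the PMT-signal-weighted centroid of the tube coordinates. A schematic version (the actual 37-PMT geometry is omitted; the signals and coordinates below are illustrative):

```python
def anger_position(signals, coords):
    """Anger-logic centroid over the PMT array: returns the (X, Y) of one event.
    The summed signal is proportional to the energy (Z) signal."""
    total = sum(signals)
    x = sum(s * c[0] for s, c in zip(signals, coords)) / total
    y = sum(s * c[1] for s, c in zip(signals, coords)) / total
    return x, y
```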

  3. An evolution of image source camera attribution approaches.

    Science.gov (United States)

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of the digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of the digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by the experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular, with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics, in conjunction with the presentation of classifying ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts and the methods to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics
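
    A central example of the hardware artifacts such surveys classify is the PRNU (photo-response non-uniformity) sensor fingerprint, which is matched against a test image's noise residual by normalized correlation. A bare-bones sketch of that matching step (residual extraction by denoising is omitted, and the abstract does not specify PRNU; it is named here as the classic instance):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between a camera fingerprint and a test
    image's noise residual; values near 1 suggest the same physical sensor."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (math.sqrt(sum((x - ma) ** 2 for x in a))
           * math.sqrt(sum((y - mb) ** 2 for y in b)))
    return num / den
```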

  4. Driving with head-slaved camera system

    NARCIS (Netherlands)

    Oving, A.B.; Erp, J.B.F. van

    2001-01-01

    In a field experiment, we tested the effectiveness of a head-slaved camera system for driving an armoured vehicle under armour. This system consists of a helmet-mounted display (HMD), a headtracker, and a motion platform with two cameras. Subjects performed several driving tasks on paved and in

  5. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
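
    The synthetic magnitudes described above come from integrating a reference star's spectrum through the camera bandpass. A simplified version using the trapezoidal rule (the normalization to a flat reference spectrum is schematic; the MEO's actual zero-point convention may differ):

```python
import math

def synthetic_mag(wl, flux, response, ref_flux):
    """-2.5*log10 of the bandpass-weighted source flux relative to a flat
    reference spectrum ref_flux, both integrated by the trapezoidal rule."""
    def trapz(y):
        return sum(0.5 * (y[i] + y[i + 1]) * (wl[i + 1] - wl[i])
                   for i in range(len(wl) - 1))
    f = trapz([f_ * r for f_, r in zip(flux, response)])
    f0 = trapz([ref_flux * r for r in response])
    return -2.5 * math.log10(f / f0)
```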

  6. Phase camera experiment for Advanced Virgo

    International Nuclear Information System (INIS)

    Agatsuma, Kazuhiro; Beuzekom, Martin van; Schaaf, Laura van der; Brand, Jo van den

    2016-01-01

    We report on a study of the phase camera, which is a frequency selective wave-front sensor of a laser beam. This sensor is utilized for monitoring sidebands produced by phase modulations in a gravitational wave (GW) detector. Regarding the operation of GW detectors, the laser modulation/demodulation method is used to measure mirror displacements and for the position controls. This plays a significant role because the quality of the controls affects the noise level of the GW detector. The phase camera is able to monitor each sideband separately, which is a great benefit for the manipulation of the delicate controls. Also, overcoming mirror aberrations will be an essential part of Advanced Virgo (AdV), which is a GW detector close to Pisa. Especially low-frequency sidebands can be affected greatly by aberrations in one of the interferometer cavities. The phase cameras allow tracking of such changes because the state of the sidebands gives information on mirror aberrations. A prototype of the phase camera has been developed and is currently being tested. The performance checks are almost completed and the installation of the optics at the AdV site has started. After installation and commissioning, the phase camera will be combined with a thermal compensation system that consists of CO2 lasers and compensation plates. In this paper, we focus on the prototype and show some limitations arising from the scanner performance. - Highlights: • The phase camera is being developed for a gravitational wave detector. • The scanner performance limits the operation speed and layout design of the system. • An operation range was found by measuring the frequency response of the scanner.
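
    Per pixel, the phase camera performs the demodulation step described above: mixing the photodiode signal with cosine and sine references at the modulation frequency to recover the amplitude and phase of each sideband. A single-pixel sketch (the scanning optics are outside its scope, and the sampling parameters are illustrative):

```python
import math

def demodulate(samples, fs, f_mod):
    """I/Q demodulation at f_mod over an integer number of cycles:
    returns (amplitude, phase) of the signal component at f_mod."""
    n = len(samples)
    i = sum(s * math.cos(2 * math.pi * f_mod * k / fs)
            for k, s in enumerate(samples)) * 2 / n
    q = sum(s * math.sin(2 * math.pi * f_mod * k / fs)
            for k, s in enumerate(samples)) * 2 / n
    return math.hypot(i, q), math.atan2(q, i)
```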

  7. Phase camera experiment for Advanced Virgo

    Energy Technology Data Exchange (ETDEWEB)

    Agatsuma, Kazuhiro, E-mail: agatsuma@nikhef.nl [National Institute for Subatomic Physics, Amsterdam (Netherlands); Beuzekom, Martin van; Schaaf, Laura van der [National Institute for Subatomic Physics, Amsterdam (Netherlands); Brand, Jo van den [National Institute for Subatomic Physics, Amsterdam (Netherlands); VU University, Amsterdam (Netherlands)

    2016-07-11

    We report on a study of the phase camera, a frequency-selective wavefront sensor for a laser beam. This sensor is used to monitor the sidebands produced by phase modulation in a gravitational wave (GW) detector. In the operation of GW detectors, the laser modulation/demodulation method is used to measure mirror displacements and to control mirror positions. This plays a significant role because the quality of these controls affects the noise level of the GW detector. The phase camera can monitor each sideband separately, which is of great benefit for managing these delicate controls. Overcoming mirror aberrations will also be an essential part of Advanced Virgo (AdV), a GW detector close to Pisa. Low-frequency sidebands in particular can be strongly affected by aberrations in one of the interferometer cavities. The phase cameras allow such changes to be tracked, because the state of the sidebands carries information on mirror aberrations. A prototype of the phase camera has been developed and is currently being tested. The performance checks are almost complete, and installation of the optics at the AdV site has started. After installation and commissioning, the phase camera will be combined with a thermal compensation system consisting of CO2 lasers and compensation plates. In this paper, we focus on the prototype and show some limitations arising from the scanner performance. - Highlights: • The phase camera is being developed for a gravitational wave detector. • The scanner performance limits the operation speed and layout design of the system. • An operation range was found by measuring the frequency response of the scanner.

  8. Two-Phase Algorithm for Optimal Camera Placement

    Directory of Open Access Journals (Sweden)

    Jun-Woo Ahn

    2016-01-01

    As markets for visual sensor networks have grown, interest in the optimal camera placement problem has continued to increase. The best-known approach to the optimal camera placement problem is based on binary integer programming (BIP). Because the optimal camera placement problem is NP-hard, however, it is difficult to solve complex, real-world instances with BIP alone, and many approximation algorithms have been developed for this problem. In this paper, a two-phase algorithm is proposed as a BIP-based approximation algorithm that can solve the optimal camera placement problem for a placement space larger than in previous studies. This study solves the problem in three-dimensional space for a real-world structure.
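
The abstract does not give the algorithm itself; as a hedged illustration of the set-cover structure underlying camera placement, here is a greedy approximation sketch (the candidate names and coverage sets are invented, and greedy set cover is a standard approximation to the BIP optimum, not the authors' two-phase method):

```python
def greedy_camera_placement(targets, coverage):
    """Pick candidate camera positions until every target is covered.

    `coverage` maps a candidate camera id -> set of targets it sees.
    Greedy set cover: repeatedly take the camera that covers the most
    still-uncovered targets."""
    uncovered = set(targets)
    chosen = []
    while uncovered:
        best = max(coverage, key=lambda c: len(coverage[c] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:              # some target cannot be covered at all
            break
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered

# Hypothetical 3D cells seen by four candidate placements
cov = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {4, 5, 6},
    "D": {6, 7},
}
cams, missed = greedy_camera_placement(range(1, 8), cov)
print(cams, missed)  # → ['A', 'C', 'D'] set()
```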

  9. Portable mini gamma camera for medical applications

    CERN Document Server

    Porras, E; Benlloch, J M; El-Djalil-Kadi-Hanifi, M; López, S; Pavon, N; Ruiz, J A; Sánchez, F; Sebastiá, A

    2002-01-01

    A small, portable and low-cost gamma camera for medical applications has been developed and clinically tested. This camera, based on a scintillator crystal and a position-sensitive photomultiplier tube, has a useful field of view of 4.6 cm diameter and provides 2.2 mm of intrinsic spatial resolution. Its mobility and light weight allow it to reach the patient from any desired direction. This camera images small organs with high efficiency and so addresses the demand for devices for specific clinical applications. In this paper, we present the camera and briefly describe the procedures that led us to choose its configuration and the image reconstruction method. The clinical tests and diagnostic capability are also presented and discussed.

  10. GDC 360° for the endovascular treatment of intracranial aneurysms: a matched-pair study analysing angiographic outcomes with GDC 3D coils in 38 patients

    International Nuclear Information System (INIS)

    Taschner, Christian A.; Thines, Laurent; Lejeune, Jean-Paul; El-Mahdy, Mohamed; Rachdi, Henda; Gauvrit, Jean-Yves; Pruvo, Jean-Pierre; Leclerc, Xavier

    2009-01-01

    The purpose of this study was to determine whether coil embolisation with a new complex-shaped Guglielmi Detachable Coil (GDC 360°; Boston Scientific Neurovascular, Fremont, CA, USA) has any effect on the stability of aneurysm occlusion. Fifty-one consecutive patients with intracranial aneurysms treated with GDC 360° were included. Angiographic results and adverse neurological events during the follow-up period were recorded. For 38 patients treated with GDC 360° with available follow-up data, a corresponding patient treated with GDC 3D was identified from our database. Matches were sought for rupture status, location, aneurysmal size, and neck size. The angiographic outcome of these matched controls at 6 months was compared to that of aneurysms treated with GDC 360°. Initial angiographic controls for the 38 patients treated with GDC 360° showed complete occlusion in 32 aneurysms and a neck remnant in six. At 6-month follow-up, complete occlusion was found in 29, a neck remnant in eight, and a residual aneurysm in one. One patient treated with GDC 360° needed retreatment for a major recanalisation. In the 38 matched patients treated with GDC 3D, initial angiographic controls found complete aneurysmal occlusion in 30 aneurysms and a residual neck in 8. At 6-month follow-up, 24 aneurysms were completely occluded, ten showed a neck remnant, and residual aneurysms were seen in four. Four patients treated with GDC 3D were retreated for major aneurysm recanalisations. Our data suggest that endovascular coil embolisation with GDC 360° might improve the long-term stability of coiled aneurysms compared to GDC 3D. (orig.)

  11. Color reproduction software for a digital still camera

    Science.gov (United States)

    Lee, Bong S.; Park, Du-Sik; Nam, Byung D.

    1998-04-01

    We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and performing color matching between the two devices. The reproduction was performed at three levels: level processing, gamma correction, and color transformation. The image contrast was increased by the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for obtaining the unknown monitor gamma was also proposed. The level-processed values were then adjusted by a look-up table created from the camera and monitor gamma corrections. For the camera's color transformation, a 3 by 3 or 3 by 4 matrix was used, calculated by regression between the gamma-corrected values and the measured tristimulus values of each test color sample. The various reproduced images were displayed in a dialogue box implemented in our software, generated according to four illuminations for the camera and three color temperatures for the monitor. A user can easily choose the best reproduced image by comparing them.
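
The pipeline described above (level processing, gamma correction, matrix transformation, monitor encoding) can be sketched as follows; all constants are illustrative, and an identity matrix stands in for the regression-fitted 3x3 transform:

```python
import numpy as np

def reproduce(rgb, black, white, cam_gamma, matrix, mon_gamma):
    """Camera RGB -> display RGB along the pipeline in the abstract:
    level processing, camera gamma linearization, 3x3 colour matrix,
    then encoding with the monitor gamma. Constants are illustrative."""
    x = np.clip((rgb - black) / (white - black), 0.0, 1.0)  # level processing
    lin = x ** cam_gamma                                    # linearize camera response
    xyz = matrix @ lin                                      # regression-fit 3x3 transform
    return np.clip(xyz, 0.0, 1.0) ** (1.0 / mon_gamma)      # monitor gamma encoding

M = np.eye(3)          # identity stands in for the fitted colour matrix
out = reproduce(np.array([0.5, 0.5, 0.5]), 0.05, 0.95, 2.2, M, 2.2)
print(np.round(out, 3))  # → [0.5 0.5 0.5]
```

With matched camera and monitor gammas and an identity matrix, mid-grey maps back to mid-grey, which is a handy sanity check on the chain.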

  12. Camera Network Coverage Improving by Particle Swarm Optimization

    NARCIS (Netherlands)

    Xu, Y.C.; Lei, B.; Hendriks, E.A.

    2011-01-01

    This paper studies how to improve the field of view (FOV) coverage of a camera network. We focus on a special but practical scenario in which the cameras are randomly scattered in a wide area and each camera may adjust its orientation but cannot move in any direction. We propose a particle swarm optimization approach that adjusts the camera orientations to improve the overall coverage.
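
As a sketch of the kind of search involved (not the authors' specific formulation), here is a minimal particle swarm optimizer over camera pan angles, with a toy one-camera objective whose coverage peaks at an assumed 90-degree orientation:

```python
import random

def pso(objective, dim, iters=200, n=20, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer (maximization) over angles in
    [0, 360). Standard inertia/cognitive/social update rule."""
    rng = random.Random(seed)
    pos = [[rng.uniform(0, 360) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pscore = [objective(p) for p in pos]
    g = max(range(n), key=lambda i: pscore[i])
    gpos, gscore = pbest[g][:], pscore[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gpos[d] - pos[i][d]))
                pos[i][d] = (pos[i][d] + vel[i][d]) % 360.0
            s = objective(pos[i])
            if s > pscore[i]:
                pbest[i], pscore[i] = pos[i][:], s
                if s > gscore:
                    gpos, gscore = pos[i][:], s
    return gpos, gscore

# Toy objective: coverage is best when the single camera points at 90 degrees.
best, score = pso(lambda a: -abs(a[0] - 90.0), dim=1)
print(round(best[0]))  # converges close to 90
```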

  13. Real-time vehicle matching for multi-camera tunnel surveillance

    Science.gov (United States)

    Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried

    2011-03-01

    Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras, each observing dozens of vehicles, computational efficiency is essential for real-time performance. In this paper, we propose a low-complexity yet highly accurate method for vehicle matching using vehicle signatures composed of Radon-transform-like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm in the camera software itself and transmitted to the central server or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
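
A minimal sketch of such projection-profile signatures and their comparison, assuming a simple normalised-correlation score (the paper's exact signature and distance measure may differ):

```python
import numpy as np

def signature(img):
    """Horizontal and vertical projection profiles of an image patch,
    concatenated and normalised to unit energy (Radon-transform-like)."""
    h = img.sum(axis=1).astype(float)   # one scan-line pass per row
    v = img.sum(axis=0).astype(float)   # and per column
    sig = np.concatenate([h, v])
    return sig / (np.linalg.norm(sig) or 1.0)

def match_score(a, b):
    """Cosine similarity between two signatures (1.0 = identical)."""
    return float(np.dot(signature(a), signature(b)))

car = np.zeros((8, 16)); car[2:6, 3:13] = 1.0       # toy vehicle blob
same = car.copy()
other = np.zeros((8, 16)); other[1:7, 1:15] = 1.0   # a differently shaped blob
print(round(match_score(car, same), 3), match_score(car, other) < 1.0)  # → 1.0 True
```

The signature of an 8x16 patch is only 24 numbers, illustrating the drastic data reduction the abstract mentions.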

  14. Wired and Wireless Camera Triggering with Arduino

    Science.gov (United States)

    Kauhanen, H.; Rönnholm, P.

    2017-10-01

    Synchronous triggering is an important task that allows simultaneous data capture from multiple cameras. Accurate synchronization enables 3D measurements of moving objects or from a moving platform. In this paper, we describe one wired and four wireless variations of Arduino-based low-cost remote trigger systems designed to provide a synchronous trigger signal for industrial cameras. Our wireless systems utilize 315 MHz or 434 MHz frequencies with noise filtering capacitors. In order to validate the synchronization accuracy, we developed a prototype of a rotating trigger detection system (named RoTriDeS). This system is suitable to detect the triggering accuracy of global shutter cameras. As a result, the wired system indicated an 8.91 μs mean triggering time difference between two cameras. Corresponding mean values for the four wireless triggering systems varied between 7.92 and 9.42 μs. Presented values include both camera-based and trigger-based desynchronization. Arduino-based triggering systems appeared to be feasible, and they have the potential to be extended to more complicated triggering systems.

  15. Super-resolution in plenoptic cameras using FPGAs.

    Science.gov (United States)

    Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime

    2014-05-16

    Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field-programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes.

  16. Super-Resolution in Plenoptic Cameras Using FPGAs

    Directory of Open Access Journals (Sweden)

    Joel Pérez

    2014-05-01

    Plenoptic cameras are a new type of sensor that extend the possibilities of current commercial cameras allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field-programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude using its extremely high-performance signal processing capability through parallelism and pipeline architecture. The system has been developed using generics of the VHDL language. This allows a very versatile and parameterizable system. The system user can easily modify parameters such as data width, number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in FPGA has been successfully compared with the execution using a conventional computer for several image sizes and different 3D refocusing planes.
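
The implementation above is in VHDL; as a language-neutral sketch of the underlying idea, here is classic shift-and-add super-resolution, fusing sub-pixel-shifted low-resolution views such as those a plenoptic camera's microlenses provide (an illustration of the technique, not a reimplementation of the paper's FPGA pipeline):

```python
import numpy as np

def shift_and_add(views, shifts, factor):
    """Fuse low-resolution views with known sub-pixel shifts onto a grid
    `factor` times finer (classic shift-and-add super-resolution)."""
    h, w = views[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(views, shifts):
        ys = (np.arange(h) * factor + dy) % (h * factor)
        xs = (np.arange(w) * factor + dx) % (w * factor)
        acc[np.ix_(ys, xs)] += img       # scatter each view onto the fine grid
        cnt[np.ix_(ys, xs)] += 1
    return np.divide(acc, cnt, out=acc, where=cnt > 0)

# Four 2x2 views sampled at the four sub-pixel offsets of a 4x4 scene:
hi = np.arange(16, dtype=float).reshape(4, 4)
views = [hi[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]
shifts = [(dy, dx) for dy in (0, 1) for dx in (0, 1)]
print(np.array_equal(shift_and_add(views, shifts, 2), hi))  # → True
```

With exactly complementary offsets, the fine grid is reconstructed perfectly; with real data the shifts are fractional and an interpolation or deconvolution step follows.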

  17. Learning by Viewing - Nobel Labs 360

    Science.gov (United States)

    Mather, John C.

    2013-01-01

    First of all, my thanks to the Nobel Lindau Foundation for their inspiration and leadership in sharing the excitement of scientific discovery with the public and with future scientists! I have had the pleasure of participating twice in the Lindau meetings, and recently worked with the Nobel Labs 360 project to show how we are building the world's greatest telescope yet, the James Webb Space Telescope (JWST). For the future, I see the greatest challenges for all the sciences in continued public outreach and inspiration. Outreach, so the public knows why we are doing what we are doing, and what difference it makes for them today and in the long-term future. Who knows what our destiny may be? It could be glorious, or not, depending on how we all behave. Inspiration, so that the most creative and inquisitive minds can pursue the scientific and engineering discoveries that are at the heart of so much of human prosperity, health, and progress. And, of course, national and local security depend on those discoveries too; scientists have been working with "the government" throughout recorded history. For the Lindau Nobel experiment, we have a truly abundant supply of knowledge and excitement, through the interactions of young scientists with the Nobelists, and through the lectures and the video recordings we can now share with the whole world across the Internet. But the challenge is always to draw attention! With 7 billion inhabitants on Earth, trying to earn a living and have some fun, there are plenty of competing opportunities and demands on us all. So what will draw attention to our efforts at Lindau? These days, word of mouth has become word of (computer) mouse, and ideas propagate as viruses (or memes) across the Internet according to the interests of the participants. So our challenge is to find and match those interests, so that the efforts of our scientists, photographers, moviemakers, and writers are rewarded by our public.
The world changes every day, so there

  18. Camera Control and Geo-Registration for Video Sensor Networks

    Science.gov (United States)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.
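
One building block of such a framework is mapping a PTZ pointing direction into a spherical panoramic viewspace and back. A minimal sketch under an assumed equirectangular convention (the paper's actual control model is more elaborate):

```python
def ptz_to_panorama(pan_deg, tilt_deg, width, height):
    """Map a PTZ pointing direction to a pixel in an equirectangular
    spherical panorama: pan spans 360 deg of width, tilt +/-90 deg of height.
    Conventions here are assumptions, not the paper's calibration."""
    x = (pan_deg % 360.0) / 360.0 * width
    y = (90.0 - tilt_deg) / 180.0 * height
    return x, y

def panorama_to_ptz(x, y, width, height):
    """Inverse mapping: panorama pixel back to pan/tilt angles."""
    pan = x / width * 360.0
    tilt = 90.0 - y / height * 180.0
    return pan, tilt

x, y = ptz_to_panorama(180.0, 0.0, 3600, 1800)
print(x, y)                               # → 1800.0 900.0
print(panorama_to_ptz(x, y, 3600, 1800))  # → (180.0, 0.0)
```

Composing this with a panorama-to-orthophoto registration gives the geo-referenced camera control the abstract describes.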

  19. Camera Layout Design for the Upper Stage Thrust Cone

    Science.gov (United States)

    Wooten, Tevin; Fowler, Bart

    2010-01-01

    Engineers in the Integrated Design and Analysis Division (EV30) use a variety of different tools to aid in the design and analysis of the Ares I vehicle. One primary tool in use is Pro-Engineer. Pro-Engineer is computer-aided design (CAD) software that allows designers to create computer-generated structural models of vehicle structures. For the Upper Stage thrust cone, Pro-Engineer was used to assist in the design of a layout for two camera housings. These cameras observe the separation between the first and second stage of the Ares I vehicle. For the Ares I-X, one standard-speed camera was used. The Ares I design calls for two separate housings, three cameras, and a lighting system. With previous design concepts and verification strategies in mind, a new layout for the two-camera design concept was developed with members of the EV32 team. With the new design, Pro-Engineer was used to draw the layout and observe how the two camera housings fit with the thrust cone assembly. Future analysis of the camera housing design will verify the stability and clearance of the cameras with the other hardware present on the thrust cone.

  20. Acceptance/operational test procedure 241-AN-107 Video Camera System

    International Nuclear Information System (INIS)

    Pedersen, L.T.

    1994-01-01

    This procedure will document the satisfactory operation of the 241-AN-107 Video Camera System. The camera assembly, including camera mast, pan-and-tilt unit, camera, and lights, will be installed in Tank 241-AN-107 to monitor activities during the Caustic Addition Project. The camera focus, zoom, and iris remote controls will be functionally tested. The resolution and color rendition of the camera will be verified using standard reference charts. The pan-and-tilt unit will be tested for required ranges of motion, and the camera lights will be functionally tested. The master control station equipment, including the monitor, VCRs, printer, character generator, and video micrometer will be set up and performance tested in accordance with the original equipment manufacturer's specifications. The accuracy of the video micrometer to measure objects in the range of 0.25 inches to 67 inches will be verified. The gas drying distribution system will be tested to ensure that a drying gas can be flowed over the camera and lens in the event that condensation forms on these components. This test will be performed by attaching the gas input connector, located in the upper junction box, to a pressurized gas supply and verifying that the check valve, located in the camera housing, opens to exhaust the compressed gas. The 241-AN-107 camera system will also be tested to ensure acceptable resolution of the camera imaging components utilizing the camera system lights.

  1. Selecting the right digital camera for telemedicine-choice for 2009.

    Science.gov (United States)

    Patricoski, Chris; Ferguson, A Stewart; Brudzinski, Jay; Spargo, Garret

    2010-03-01

    Digital cameras are fundamental tools for store-and-forward telemedicine (electronic consultation). The choice of a camera may significantly impact this consultative process based on the quality of the images, the ability of users to leverage the cameras' features, and other facets of the camera design. The goal of this research was to provide a substantive framework and clearly defined process for reviewing digital cameras and to demonstrate the results obtained when employing this process to review point-and-shoot digital cameras introduced in 2009. The process included a market review, in-house evaluation of features, image reviews, functional testing, and feature prioritization. Seventy-two cameras were identified new on the market in 2009, and 10 were chosen for in-house evaluation. Four cameras scored very high for mechanical functionality and ease-of-use. The final analysis revealed three cameras that had excellent scores for both color accuracy and photographic detail and these represent excellent options for telemedicine: Canon Powershot SD970 IS, Fujifilm FinePix F200EXR, and Panasonic Lumix DMC-ZS3. Additional features of the Canon Powershot SD970 IS make it the camera of choice for our Alaska program.

  2. Measurement of the timing behaviour of off-the-shelf cameras

    Science.gov (United States)

    Schatz, Volker

    2017-04-01

    This paper presents a measurement method suitable for investigating the timing properties of cameras. A single light source illuminates the camera detector starting with a varying defined delay after the camera trigger. Pixels from the recorded camera frames are summed up and normalised, and the resulting function is indicative of the overlap between illumination and exposure. This allows one to infer the trigger delay and the exposure time with sub-microsecond accuracy. The method is therefore of interest when off-the-shelf cameras are used in reactive systems or synchronised with other cameras. It can supplement radiometric and geometric calibration methods for cameras in scientific use. A closer look at the measurement results reveals deviations from the ideal camera behaviour of constant sensitivity limited to the exposure interval. One of the industrial cameras investigated retains a small sensitivity long after the end of the nominal exposure interval. All three investigated cameras show non-linear variations of sensitivity at O(10^-3) to O(10^-2) during exposure. Due to its sign, the latter effect cannot be described by a sensitivity function depending on the time after triggering, but represents non-linear pixel characteristics.
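
The measurement principle can be simulated: the normalised pixel sum as a function of light-pulse delay traces the overlap of illumination and exposure, from which trigger delay and exposure time are read off. All timing values below are invented for illustration:

```python
def overlap(a0, a1, b0, b1):
    """Length of the intersection of intervals [a0, a1] and [b0, b1]."""
    return max(0.0, min(a1, b1) - max(a0, b0))

def sweep(trigger_delay, exposure, pulse_len, delays):
    """Simulated normalised pixel sum vs. light-pulse delay:
    signal ~ overlap of the pulse with the exposure window."""
    return [overlap(d, d + pulse_len, trigger_delay, trigger_delay + exposure)
            / pulse_len for d in delays]

# Camera with 120 us trigger delay and 500 us exposure, probed by a 10 us pulse
delays = [i * 10.0 for i in range(100)]           # 0..990 us in 10 us steps
s = sweep(120.0, 500.0, 10.0, delays)
on = [d for d, v in zip(delays, s) if v > 0.5]    # delays with mostly-open shutter
print(on[0], on[-1] - on[0] + 10.0)  # → 120.0 500.0 (trigger delay, exposure)
```

A shorter probe pulse and finer delay steps sharpen the edges of the overlap function, which is how sub-microsecond accuracy is reached in practice.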

  3. A wide field X-ray camera

    International Nuclear Information System (INIS)

    Sims, M.; Turner, M.J.L.; Willingale, R.

    1980-01-01

    A wide field of view X-ray camera based on the Dicke or Coded Mask principle is described. It is shown that this type of instrument is more sensitive than a pin-hole camera, or than a scanning survey of a given region of sky for all wide field conditions. The design of a practical camera is discussed and the sensitivity and performance of the chosen design are evaluated by means of computer simulations. The Wiener Filter and Maximum Entropy methods of deconvolution are described and these methods are compared with each other and cross-correlation using data from the computer simulations. It is shown that the analytic expressions for sensitivity used by other workers are confirmed by the simulations, and that ghost images caused by incomplete coding can be substantially eliminated by the use of the Wiener Filter and the Maximum Entropy Method, with some penalty in computer time for the latter. The cyclic mask configuration is compared with the simple mask camera. It is shown that when the diffuse X-ray background dominates, the simple system is more sensitive and has the better angular resolution. When sources dominate the simple system is less sensitive. It is concluded that the simple coded mask camera is the best instrument for wide field imaging of the X-ray sky. (orig.)
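
The cross-correlation decoding compared above can be sketched in one dimension with a small URA-style mask built from quadratic residues (a toy illustration with invented numbers, not the instrument's configuration):

```python
import numpy as np

def record(sky, mask):
    """Detector counts of a cyclic coded-mask system: element i sees the
    sky through a cyclically shifted copy of the mask."""
    n = len(mask)
    return np.array([sum(sky[j] * mask[(i + j) % n] for j in range(n))
                     for i in range(n)], dtype=float)

def decode(det, mask):
    """Balanced cross-correlation decoding with +1/-1 mask weights."""
    n = len(mask)
    g = 2.0 * np.asarray(mask, dtype=float) - 1.0
    return np.array([sum(det[i] * g[(i + k) % n] for i in range(n))
                     for k in range(n)])

# 1-D URA-style mask from the quadratic residues mod 11 (open where i is a QR)
p = 11
mask = [1 if i in {(k * k) % p for k in range(1, p)} else 0 for i in range(p)]
sky = np.zeros(p); sky[4] = 100.0            # a single point source
img = decode(record(sky, mask), mask)
print(int(np.argmax(img)))  # → 4
```

With this mask family the off-peak correlation is flat, so a point source decodes to a clean peak at its true position; the Wiener filter and Maximum Entropy methods discussed above replace this correlation step to suppress ghosts from incomplete coding.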

  4. LAMOST CCD camera-control system based on RTS2

    Science.gov (United States)

    Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng

    2018-05-01

    The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and provides a referential solution for full RTS2 introduction to the LAMOST observatory control system.
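
RTS2 and the LAMOST module are C++ code; as a purely illustrative sketch of the master-slave idea, here is a Python stand-in in which one virtual camera device fans a single command out to many slave cameras (class and method names are invented, not the RTS2 API):

```python
from concurrent.futures import ThreadPoolExecutor

class SlaveCamera:
    """Stand-in for one real CCD controller (hypothetical interface)."""
    def __init__(self, cam_id):
        self.cam_id = cam_id
    def expose(self, seconds):
        return f"cam{self.cam_id:02d}: {seconds}s frame"

class VirtualCamera:
    """Master device in the master-slave scheme: one framework-facing
    component that maps a single command onto all 32 real cameras."""
    def __init__(self, n=32):
        self.slaves = [SlaveCamera(i) for i in range(n)]
    def expose(self, seconds):
        with ThreadPoolExecutor(max_workers=8) as pool:
            return list(pool.map(lambda c: c.expose(seconds), self.slaves))

frames = VirtualCamera().expose(2.0)
print(len(frames), frames[0])  # → 32 cam00: 2.0s frame
```

The virtualized layer keeps the observatory framework talking to one device while failures or delays of individual controllers are handled behind it.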

  5. The GCT camera for the Cherenkov Telescope Array

    Science.gov (United States)

    Lapington, J. S.; Abchiche, A.; Allan, D.; Amans, J.-P.; Armstrong, T. P.; Balzer, A.; Berge, D.; Boisson, C.; Bousquet, J.-J.; Bose, R.; Brown, A. M.; Bryan, M.; Buchholtz, G.; Buckley, J.; Chadwick, P. M.; Costantini, H.; Cotter, G.; Daniel, M. K.; De Franco, A.; De Frondat, F.; Dournaux, J.-L.; Dumas, D.; Ernenwein, J.-P.; Fasola, G.; Funk, S.; Gironnet, J.; Graham, J. A.; Greenshaw, T.; Hervet, O.; Hidaka, N.; Hinton, J. A.; Huet, J.-M.; Jankowsky, D.; Jegouzo, I.; Jogler, T.; Kawashima, T.; Kraus, M.; Laporte, P.; Leach, S.; Lefaucheur, J.; Markoff, S.; Melse, T.; Minaya, I. A.; Mohrmann, L.; Molyneux, P.; Moore, P.; Nolan, S. J.; Okumura, A.; Osborne, J. P.; Parsons, R. D.; Rosen, S.; Ross, D.; Rowell, G.; Rulten, C. B.; Sato, Y.; Sayede, F.; Schmoll, J.; Schoorlemmer, H.; Servillat, M.; Sol, H.; Stamatescu, V.; Stephan, M.; Stuik, R.; Sykes, J.; Tajima, H.; Thornhill, J.; Tibaldo, L.; Trichard, C.; Varner, G.; Vink, J.; Watson, J. J.; White, R.; Yamane, N.; Zech, A.; Zink, A.; Zorn, J.; CTA Consortium

    2017-12-01

    The Gamma Cherenkov Telescope (GCT) is one of the designs proposed for the Small Sized Telescope (SST) section of the Cherenkov Telescope Array (CTA). The GCT uses dual-mirror optics, resulting in a compact telescope with good image quality and a large field of view with a smaller, more economical, camera than is achievable with conventional single mirror solutions. The photon counting GCT camera is designed to record the flashes of atmospheric Cherenkov light from gamma and cosmic ray initiated cascades, which last only a few tens of nanoseconds. The GCT optics require that the camera detectors follow a convex surface with a radius of curvature of 1 m and a diameter of 35 cm, which is approximated by tiling the focal plane with 32 modules. The first camera prototype is equipped with multi-anode photomultipliers, each comprising an 8×8 array of 6×6 mm² pixels to provide the required angular scale, adding up to 2048 pixels in total. Detector signals are shaped, amplified and digitised by electronics based on custom ASICs that provide digitisation at 1 GSample/s. The camera is self-triggering, retaining images where the focal plane light distribution matches predefined spatial and temporal criteria. The electronics are housed in the liquid-cooled, sealed camera enclosure. LED flashers at the corners of the focal plane provide a calibration source via reflection from the secondary mirror. The first GCT camera prototype underwent preliminary laboratory tests last year. In November 2015, the camera was installed on a prototype GCT telescope (SST-GATE) in Paris and was used to successfully record the first Cherenkov light of any CTA prototype, and the first Cherenkov light seen with such a dual-mirror optical system. A second full-camera prototype based on Silicon Photomultipliers is under construction. Up to 35 GCTs are envisaged for CTA.

  6. Camera Coverage Estimation Based on Multistage Grid Subdivision

    Directory of Open Access Journals (Sweden)

    Meizhen Wang

    2017-04-01

    Visual coverage is one of the most important quality indexes for depicting the usability of an individual camera or camera network. It is the basis for camera network deployment, placement, coverage-enhancement, planning, etc. Precision and efficiency critically influence applications, especially those involving several cameras. This paper proposes a new method to estimate camera coverage efficiently. First, the geographic area covered by the camera and its minimum bounding rectangle (MBR), ignoring obstacles, is computed from the camera parameters. Second, the MBR is divided into grids using the initial grid size. The status of the four corners of each grid is estimated by a line-of-sight (LOS) algorithm: if the camera, considering obstacles, covers a corner, the status is represented by 1, otherwise by 0. Consequently, the status of a grid can be represented by a code that is a combination of 0s and 1s. If the code is not homogeneous (not four 0s or four 1s), the grid is divided into four sub-grids, and this continues until the sub-grids reach a specified maximum level or their codes are homogeneous. Finally, total camera coverage is estimated according to the size and status of all grids. Experimental results illustrate that the proposed method's accuracy approaches that of dividing the coverage area into the smallest grids at the maximum level, while its efficiency is close to that of dividing the coverage area only into the initial grids; it thus balances efficiency and accuracy. The initial grid size and maximum level are the two critical influences on the proposed method, and can be determined by weighing efficiency against accuracy.
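
The subdivision scheme described above can be sketched recursively: classify the four corners of a cell, count homogeneous cells whole, and split mixed cells until a maximum level. The camera model below is a hypothetical half-plane visibility function, not a real LOS computation:

```python
def coverage_area(x0, y0, x1, y1, visible, max_level, level=0):
    """Estimate the covered area inside rectangle (x0,y0)-(x1,y1):
    classify the four corners with `visible`; homogeneous cells count
    whole, mixed cells split into four sub-cells until `max_level`
    (leaves are then counted by their covered-corner fraction)."""
    corners = [visible(x0, y0), visible(x1, y0), visible(x0, y1), visible(x1, y1)]
    area = (x1 - x0) * (y1 - y0)
    if all(corners):
        return area
    if not any(corners):
        return 0.0
    if level == max_level:
        return area * sum(corners) / 4.0      # leaf: fractional estimate
    xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    return sum(coverage_area(a, b, c, d, visible, max_level, level + 1)
               for a, b, c, d in [(x0, y0, xm, ym), (xm, y0, x1, ym),
                                  (x0, ym, xm, y1), (xm, ym, x1, y1)])

# Hypothetical camera covering the half-plane x <= 0.5 of the unit square
est = coverage_area(0.0, 0.0, 1.0, 1.0, lambda x, y: x <= 0.5, max_level=6)
print(round(est, 3))  # → 0.508 (true area 0.5; boundary cells add a small overestimate)
```

Raising `max_level` shrinks the boundary cells and drives the estimate toward the true area, which is exactly the accuracy/efficiency trade-off the abstract discusses.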

  7. A Quality Evaluation of Single and Multiple Camera Calibration Approaches for an Indoor Multi Camera Tracking System

    Directory of Open Access Journals (Sweden)

    M. Adduci

    2014-06-01

    Human detection and tracking has been a prominent research area for several scientists around the globe. State-of-the-art algorithms have been implemented, refined and accelerated to significantly improve the detection rate and eliminate false positives. While 2D approaches are well investigated, 3D human detection and tracking is still a largely unexplored research field. In both the 2D and 3D cases, introducing a multi-camera system can vastly improve the accuracy and confidence of the tracking process. Within this work, a quality evaluation is performed on a multi RGB-D camera indoor tracking system, examining how camera calibration and pose affect the quality of human tracks in the scene, independently of the detection and tracking approach used. After performing a calibration step on every Kinect sensor, state-of-the-art single-camera pose estimators were evaluated to check how well poses are estimated using planar objects such as an ordinary chessboard. With this information, a bundle block adjustment and ICP were performed to verify the accuracy of the single pose estimators in a multi-camera configuration. Results have shown that single-camera estimators provide high-accuracy results of less than half a pixel, forcing the bundle to converge after very few iterations. For ICP, relative information between cloud pairs is more or less preserved, giving a low fitting score between concatenated pairs. Finally, sensor calibration proved to be an essential step for achieving maximum accuracy in the generated point clouds, and therefore in the accuracy of the produced 3D trajectories from each sensor.

  8. Experimental demonstration of 360 tunable RF phase shift using slow and fast light effects

    DEFF Research Database (Denmark)

    Xue, Weiqi; Sales, Salvador; Capmany, Jose

    2009-01-01

    A microwave photonic phase shifter realizing a 360° phase shift over an RF bandwidth of more than 10 GHz is demonstrated using optical-filtering-assisted slow and fast light effects in a cascaded structure of semiconductor optical amplifiers.

  9. PC-AT to gamma camera interface ANUGAMI-S

    International Nuclear Information System (INIS)

    Bhattacharya, Sadhana; Gopalakrishnan, K.R.

    1997-01-01

    PC-AT to gamma camera interface is an image acquisition system used in nuclear medicine centres and hospitals. The interface hardware and acquisition software have been designed and developed to meet most routine clinical applications of a gamma camera. The state-of-the-art design of the interface improves image quality in addition to performing acquisition, by applying on-line uniformity correction, which is essential for gamma camera applications in nuclear medicine. The improvement in image quality has been achieved by image acquisition in a positionally varying, sliding energy window. It supports all acquisition modes, viz. static, dynamic and gated, with and without uniformity correction. The user interface provides acquisition in various user-selectable frame sizes, orientations and colour palettes. A complete emulation of the camera console has been provided, along with a persistence scope and acquisition parameter display. It is a universal system which provides a modern, cost-effective and easily maintainable solution for interfacing any gamma camera to a PC or for upgrading an analog gamma camera. (author). 4 refs., 3 figs

  10. Remote control video cameras on a suborbital rocket

    International Nuclear Information System (INIS)

    Wessling, Francis C.

    1997-01-01

    Three video cameras were controlled in real time from the ground during a fifteen-minute suborbital rocket flight from White Sands Missile Range in New Mexico. Telemetry communications with the rocket allowed control of the cameras. The pan, tilt, zoom, focus, and iris of two of the camera lenses, the power and record functions of the three cameras, and the selection of the analog video signal sent to the ground were controlled by separate microprocessors. A microprocessor was also used to record data from three miniature accelerometers, temperature sensors and a differential pressure sensor. In addition to the selected video signal sent to the ground and recorded there, the video signals from the three cameras were also recorded on board the rocket. These recorders were mounted inside the pressurized segment of the rocket payload. The lenses, lens control mechanisms, and the three small television cameras were located in a portion of the rocket payload that was exposed to the vacuum of space. The accelerometers were also exposed to the vacuum of space.

  11. The MARS Photon Processing Cameras for Spectral CT

    CERN Document Server

    Doesburg, Robert Michael Nicholas; Butler, APH; Renaud, PF

    This thesis is about the development of the MARS camera: a standalone portable digital x-ray camera with spectral sensitivity. It is built for use in the MARS Spectral system from the Medipix2 and Medipix3 imaging chips. Photon counting detectors and Spectral CT are introduced, and Medipix is identified as a powerful new imaging device. The goals and strategy for the MARS camera are discussed. The Medipix chip physical, electronic and functional aspects, and experience gained, are described. The camera hardware, firmware and supporting PC software are presented. Reports of experimental work on the process of equalisation from noise, and of tests of charge summing mode, conclude the main body of the thesis. The camera has been actively used since late 2009 in pre-clinical research. A list of publications that derive from the use of the camera and the MARS Spectral scanner demonstrates the practical benefits already obtained from this work. Two of the publications are first-author, eight are co-authored...

  12. Rosetta Star Tracker and Navigation Camera

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Proposal in response to the Invitation to Tender (ITT) issued by Matra Marconi Space (MSS) for the procurement of the ROSETTA Star Tracker and Navigation Camera.

  13. Retinal axial focusing and multi-layer imaging with a liquid crystal adaptive optics camera

    International Nuclear Information System (INIS)

    Liu Rui-Xue; Zheng Xian-Liang; Li Da-Yu; Hu Li-Fa; Cao Zhao-Liang; Mu Quan-Quan; Xuan Li; Xia Ming-Liang

    2014-01-01

    With the help of adaptive optics (AO) technology, cellular-level imaging of the living human retina can be achieved. Aiming to reduce discomfort and to avoid potential drug-induced complications, we attempted to image the retina with a dilated pupil and frozen accommodation without drugs. An optimized liquid crystal adaptive optics camera was adopted for retinal imaging. A novel eye-staring system was used for stimulating accommodation and fixating the imaging area. The illumination sources and the imaging camera moved in linkage for focusing and imaging different layers. Four subjects with differing degrees of myopia were imaged. Based on the optical properties of the human eye, the eye-staring system reduced the defocus to less than the typical ocular depth of focus. In this way, the illumination light can be projected onto a given retinal layer precisely. Since the defocus had been compensated by the eye-staring system, the adopted 512 × 512 liquid crystal spatial light modulator (LC-SLM) corrector provided the spatial fidelity needed to fully compensate high-order aberrations. The Strehl ratio of a subject with −8 diopter myopia was improved to 0.78, close to diffraction-limited imaging. By finely adjusting the axial displacement of the illumination sources and imaging camera, cone photoreceptors, blood vessels and the nerve fiber layer were clearly imaged.

  14. CamOn: A Real-Time Autonomous Camera Control System

    DEFF Research Database (Denmark)

    Burelli, Paolo; Jhala, Arnav Harish

    2009-01-01

    This demonstration presents CamOn, an autonomous camera control system for real-time 3D games. CamOn employs multiple Artificial Potential Fields (APFs), a robot motion planning technique, to control both the location and orientation of the camera. Scene geometry from the 3D environment contributes to the potential field that is used to determine position and movement of the camera. Composition constraints for the camera are modelled as potential fields for controlling the view target of the camera. CamOn combines the compositional benefits of constraint-based camera systems, and improves...

  15. The Light Field Attachment: Turning a DSLR into a Light Field Camera Using a Low Budget Camera Ring

    KAUST Repository

    Wang, Yuwang; Liu, Yebin; Heidrich, Wolfgang; Dai, Qionghai

    2016-01-01

    camera. From an algorithmic point of view, the high-quality light field mode is made possible by a new light field super-resolution method that first improves the spatial resolution and image quality of the side cameras and then interpolates additional

  16. Camera systems in human motion analysis for biomedical applications

    Science.gov (United States)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    Human Motion Analysis (HMA) has been one of the major interests among researchers in the fields of computer vision, artificial intelligence and biomedical engineering and sciences. This is due to its wide and promising biomedical applications, namely bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and analysis of biomedical signals and images for diagnosis and rehabilitation. This paper provides an extensive review of the camera systems used in HMA and their taxonomy, including camera types, camera calibration and camera configuration. The review focuses on camera system considerations for HMA specifically in biomedical applications. This review is important as it provides guidelines and recommendations for researchers and practitioners selecting a camera system for an HMA system in biomedical applications.

  17. Preliminary field evaluation of solid state cameras for security applications

    International Nuclear Information System (INIS)

    1987-01-01

    Recent developments in solid state imager technology have resulted in a series of compact, lightweight, all-solid-state closed circuit television (CCTV) cameras. Although it is widely known that the various solid state cameras have less light sensitivity and lower resolution than their vacuum tube counterparts, the potential for having a much longer Mean Time Between Failure (MTBF) for the all-solid-state cameras is generating considerable interest within the security community. Questions have been raised as to whether the newest and best of the solid state cameras are a viable alternative to the high maintenance vacuum tube cameras in exterior security applications. To help answer these questions, a series of tests were performed by Sandia National Laboratories at various test sites and under several lighting conditions. In general, all-solid-state cameras need to be improved in four areas before they can be used as wholesale replacements for tube cameras in exterior security applications: resolution, sensitivity, contrast, and smear. However, with careful design some of the higher performance cameras can be used for perimeter security systems, and all of the cameras have applications where they are uniquely qualified. Many of the cameras are well suited for interior assessment and surveillance uses, and several of the cameras are well designed as robotics and machine vision devices

  18. Calibration Procedures in Mid Format Camera Setups

    Science.gov (United States)

    Pivnicka, F.; Kemper, G.; Geissler, S.

    2012-07-01

    A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform and the specific characteristics of the mid-format camera make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Using aerial images over a well-designed test field with 3D structures and/or different flight altitudes enables the determination of calibration values in the Bingo software. It will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Besides the mechanical work, especially in mounting the camera beside the IMU, two lever arms have to be measured with mm accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre, and the lever arm from the IMU centre to the camera projection centre. In fact, the measurement with a total station is not a difficult task, but the definition of the right centres and the need to use rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used; for these, a gyro-based stabilized platform is recommended. This means that the IMU must be mounted beside the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problem is that the IMU-to-GPS-antenna lever arm is floating. We therefore have to deal with an additional data stream, the movements of the stabilizer, to correct the floating lever arm distances. If the post-processing of the GPS/IMU data, taking the floating lever arms into account, delivers the expected result, the lever arm between IMU and camera can be applied.

  19. Calibration of a Stereo Radiation Detection Camera Using Planar Homography

    Directory of Open Access Journals (Sweden)

    Seung-Hae Baek

    2016-01-01

    Full Text Available This paper proposes a calibration technique for a stereo gamma detection camera. Calibration of the internal and external parameters of a stereo vision camera is a well-known research problem in the computer vision community. However, little or no stereo calibration has been investigated in radiation measurement research. Since no visual information can be obtained from a stereo radiation camera, a general stereo calibration algorithm cannot be used directly. In this paper, we develop a hybrid-type stereo system which is equipped with both radiation and vision cameras. To calibrate the stereo radiation cameras, stereo images of a calibration pattern captured by the vision cameras are transformed into the view of the radiation cameras. The homography transformation is calibrated based on the geometric relationship between the visual and radiation camera coordinates. The accuracy of the stereo parameters of the radiation camera is analyzed by distance measurements to both visible light and gamma sources. The experimental results show that the measurement error is about 3%.
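The planar homography transfer used above can be illustrated with the standard direct linear transform (DLT). This sketch, with synthetic point data, shows only the geometry of estimating and applying the 3×3 matrix; it is not the authors' calibration pipeline.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate H from 4+ (x, y) points and their images via DLT (SVD nullspace)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)          # h = right singular vector of the
    H = Vt[-1].reshape(3, 3)             # smallest singular value
    return H / H[2, 2]                   # normalize the projective scale

def apply_homography(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Synthetic check: a known projective map is recovered from 4 correspondences.
H_true = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, -3.0], [1e-3, 2e-3, 1.0]])
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [tuple(apply_homography(H_true, p)) for p in src]
H_est = estimate_homography(src, dst)
```

Four non-degenerate correspondences determine the eight degrees of freedom exactly; with noisy real data, more points are used and the same SVD gives the least-squares solution.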

  20. The future of consumer cameras

    Science.gov (United States)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

    In the last two decades multimedia, and in particular imaging devices (camcorders, tablets, mobile phones, etc.), have spread dramatically. Moreover, their increasing computational performance, combined with higher storage capacity, allows them to process large amounts of data. In this paper an overview of the current trends of the consumer camera market and technology will be given, providing also some details about the recent past (from the digital still camera up to today) and forthcoming key issues.

  1. Single chip camera active pixel sensor

    Science.gov (United States)

    Shaw, Timothy (Inventor); Pain, Bedabrata (Inventor); Olson, Brita (Inventor); Nixon, Robert H. (Inventor); Fossum, Eric R. (Inventor); Panicacci, Roger A. (Inventor); Mansoorian, Barmak (Inventor)

    2003-01-01

    A totally digital single-chip camera includes communications circuitry to operate most of its structure in serial communication mode. The digital single-chip camera includes a D/A converter for converting an input digital word into an analog reference signal. The chip includes all of the necessary circuitry for operating the chip using a single pin.

  2. Perceptual Color Characterization of Cameras

    Directory of Open Access Journals (Sweden)

    Javier Vazquez-Corral

    2014-12-01

    Full Text Available Color camera characterization, mapping outputs from the camera sensors to an independent color space such as \(XYZ\), is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a \(3 \times 3\) matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the \(3 \times 3\) matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case, we minimize the CIE \(\Delta E\) error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the \(\Delta E\) error, 7% for the S-CIELAB error and 13% for the CID error measure.
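As context for the baseline the paper starts from, the classical least-squares \(3 \times 3\) characterization fit can be written in a few lines. The data here are synthetic (a made-up ground-truth matrix and random patches); the perceptual variants in the paper replace only the error being minimized, not this linear model.

```python
import numpy as np

rng = np.random.default_rng(0)
M_true = np.array([[0.41, 0.36, 0.18],      # assumed "true" RGB -> XYZ matrix,
                   [0.21, 0.72, 0.07],      # invented for this sketch
                   [0.02, 0.12, 0.95]])
rgb = rng.random((50, 3))                   # 50 synthetic training patches
xyz = rgb @ M_true.T                        # noiseless targets

# Ordinary least squares: minimize ||rgb @ M.T - xyz||_F over M.
M_est, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M_est = M_est.T
```

With noiseless data the fit recovers the matrix exactly; with real measurements the residual is the (non-perceptual) error that the \(\Delta E\), S-CIELAB, and CID objectives are designed to improve upon.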

  3. Feasibility of LED-Assisted CMOS Camera: Contrast Estimation for Laser Tattoo Treatment

    Directory of Open Access Journals (Sweden)

    Ngot Thi Pham

    2018-04-01

    Full Text Available Understanding the residual tattoo ink in skin after laser treatment is often critical for achieving good clinical outcomes. The current study aims to investigate the feasibility of a light-emitting diode (LED)-assisted CMOS camera to estimate the relative variations in tattoo contrast after laser treatment. Asian mice were tattooed using two ink colors (black and red). The LED illumination was a separate process from the laser tattoo treatment. Images of the ink tattoos in skin were acquired under illumination by three different LED colors (red, green, and blue) before and after treatment. The degree of contrast variation due to the treatment was calculated and compared with the residual tattoo distribution in the skin. The black tattoo demonstrated that the contrast consistently decreased after the laser treatment for all LED colors. However, for the red tattoo, the red LED yielded an insignificant contrast change, whereas the green and blue LEDs showed 30% (p < 0.001) and 26% (p < 0.01) contrast reductions between the treatment conditions, respectively. The proposed LED-assisted CMOS camera can estimate the relative variations in image contrast before and after laser tattoo treatment.

  4. Scintillation camera with second order resolution

    International Nuclear Information System (INIS)

    Muehllehner, G.

    1976-01-01

    A scintillation camera for use in radioisotope imaging to determine the concentration of radionuclides in a two-dimensional area is described in which means is provided for second order positional resolution. The phototubes, which normally provide only a single order of resolution, are modified to provide second order positional resolution of radiation within an object positioned for viewing by the scintillation camera. The phototubes are modified in that multiple anodes are provided to receive signals from the photocathode in a manner such that each anode is particularly responsive to photoemissions from a limited portion of the photocathode. Resolution of radioactive events appearing as an output of this scintillation camera is thereby improved

  5. Mechanical Design of the LSST Camera

    Energy Technology Data Exchange (ETDEWEB)

    Nordby, Martin; Bowden, Gordon; Foss, Mike; Guiffre, Gary; /SLAC; Ku, John; /Unlisted; Schindler, Rafe; /SLAC

    2008-06-13

    The LSST camera is a tightly packaged, hermetically-sealed system that is cantilevered into the main beam of the LSST telescope. It comprises three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 giga-pixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce the impact on image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.

  6. Scintillation camera with second order resolution

    International Nuclear Information System (INIS)

    1975-01-01

    A scintillation camera is described for use in radioisotope imaging to determine the concentration of radionuclides in a two-dimensional area in which means is provided for second-order positional resolution. The phototubes which normally provide only a single order of resolution, are modified to provide second-order positional resolution of radiation within an object positioned for viewing by the scintillation camera. The phototubes are modified in that multiple anodes are provided to receive signals from the photocathode in a manner such that each anode is particularly responsive to photoemissions from a limited portion of the photocathode. Resolution of radioactive events appearing as an output of this scintillation camera is thereby improved

  7. Degree-degree correlations in random graphs with heavy-tailed degrees

    NARCIS (Netherlands)

    Hofstad, van der R.W.; Litvak, N.

    2014-01-01

    Mixing patterns in large self-organizing networks, such as the Internet, the World Wide Web, social, and biological networks are often characterized by degree-degree dependencies between neighboring nodes. In assortative networks, the degree-degree dependencies are positive (nodes with similar
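A common scalar summary of such degree-degree dependencies is the degree assortativity coefficient: the Pearson correlation of endpoint degrees taken over all edges. A dependency-free sketch, using a star graph as a maximally disassortative example (high-degree hub connected only to degree-one leaves):

```python
def degree_assortativity(edges):
    """Pearson correlation of endpoint degrees over edges (both orientations)."""
    deg = {}
    for u, v in edges:                   # tally the degree of every node
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:                   # count each edge in both directions
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

star = [(0, i) for i in range(1, 6)]     # hub of degree 5, leaves of degree 1
r = degree_assortativity(star)           # -1: perfectly disassortative
```

Positive values of this coefficient correspond to the assortative networks mentioned above; the cited work shows that for heavy-tailed degrees this Pearson-based measure behaves pathologically as networks grow, motivating rank-based alternatives.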

  8. Degree-Degree Dependencies in Random Graphs with Heavy-Tailed Degrees

    NARCIS (Netherlands)

    van der Hofstad, Remco; Litvak, Nelly

    2014-01-01

    Mixing patterns in large self-organizing networks, such as the Internet, the World Wide Web, social, and biological networks are often characterized by degree-degree dependencies between neighboring nodes. In assortative networks, the degree-degree dependencies are positive (nodes with similar

  9. 25 CFR 1000.360 - Is the trust evaluation standard or process different when the trust asset is held in trust for...

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false Is the trust evaluation standard or process different when the trust asset is held in trust for an individual Indian or Indian allottee? 1000.360 Section 1000.360 Indians OFFICE OF THE ASSISTANT SECRETARY, INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ANNUAL FUNDING AGREEMENTS UNDER THE TRIBAL SELF-GOVERNMEN...

  10. MISR L1B3 Radiometric Camera-by-camera Cloud Mask Product subset for the RICO region V004

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the Radiometric camera-by-camera Cloud Mask dataset over the RICO region. It is used to determine whether a scene is classified as clear or...

  11. Design and tests of a portable mini gamma camera

    International Nuclear Information System (INIS)

    Sanchez, F.; Benlloch, J.M.; Escat, B.; Pavon, N.; Porras, E.; Kadi-Hanifi, D.; Ruiz, J.A.; Mora, F.J.; Sebastia, A.

    2004-01-01

    Design optimization, manufacturing, and tests, both laboratory and clinical, of a portable gamma camera for medical applications are presented. This camera, based on a continuous scintillation crystal and a position-sensitive photomultiplier tube, has an intrinsic spatial resolution of ≅2 mm, an energy resolution of 13% at 140 keV, and linearities of 0.28 mm (absolute) and 0.15 mm (differential), with a useful field of view of 4.6 cm diameter. Our camera can image small organs with high efficiency and so can address the demand for devices for specific clinical applications like thyroid and sentinel node scintigraphy as well as scintimammography and radio-guided surgery. The main advantages of the gamma camera with respect to those previously reported in the literature are high portability, low cost, and low weight (2 kg), with no significant loss of sensitivity or spatial resolution. All the electronic components are packed inside the mini gamma camera, and no external electronic devices are required. The camera is connected only through the universal serial bus port to a portable personal computer (PC), where dedicated software controls both the camera parameters and the measuring process, displaying the acquired image on the PC in real time. In this article, we present the camera and describe the procedures that led us to choose its configuration. Laboratory and clinical tests are presented together with the diagnostic capabilities of the gamma camera.

  12. Calibration of Low Cost RGB and NIR Uav Cameras

    Science.gov (United States)

    Fryskowska, A.; Kedzierski, M.; Grochala, A.; Braula, A.

    2016-06-01

    Non-metric digital cameras are being widely used for photogrammetric studies. The increase in resolution and quality of images obtained by non-metric cameras allows their use in low-cost UAV and terrestrial photogrammetry. Imagery acquired with non-metric cameras can be used in 3D modeling of objects or landscapes, reconstruction of historical sites, generation of digital terrain models (DTM) and orthophotos, or in the assessment of accidents. Non-metric digital cameras are characterized by unstable and unknown interior orientation parameters. Therefore, the use of these devices requires prior calibration. The calibration research was conducted using a non-metric camera, different calibration test fields and various software. The first part of the paper contains a brief theoretical introduction including basic definitions, such as the construction of non-metric cameras and a description of different optical distortions. The second part of the paper describes the camera calibration process, with details of the calibration methods and models that have been used. The Sony NEX-5 camera was calibrated using three software packages: Image Master Calib, the Matlab Camera Calibrator application and Agisoft Lens. For the study, 2D test fields have been used. As part of the research, a comparative analysis of the results has been carried out.
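The radial component of the optical distortions mentioned above is commonly modelled with a low-order polynomial in the squared radius (the Brown model used by the cited calibration tools); a minimal sketch, where the coefficient values are made up for illustration:

```python
def distort(x, y, k1, k2):
    """Apply two-term radial distortion to a normalized image point (x, y)."""
    r2 = x * x + y * y                        # squared distance from the axis
    scale = 1.0 + k1 * r2 + k2 * r2 * r2      # radial scaling factor
    return x * scale, y * scale

# Negative k1 gives barrel distortion: points are pulled toward the center.
xd, yd = distort(0.5, 0.0, k1=-0.2, k2=0.05)
```

Calibration runs this model in reverse: given many observations of known test-field points, the software solves for k1, k2 (plus focal length and principal point) that best explain the measured image coordinates.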

  13. An integrated port camera and display system for laparoscopy.

    Science.gov (United States)

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.

  14. Global Calibration of Multiple Cameras Based on Sphere Targets

    Directory of Open Access Journals (Sweden)

    Junhua Sun

    2016-01-01

    Full Text Available Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parametric equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parametric equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method is simple to operate and flexible, especially for on-site calibration of multiple cameras without a common field of view.

  15. Presence capture cameras - a new challenge to the image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to market, and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, the technical solutions have only just passed the prototyping phase and vary widely. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is must have several camera modules, several microphones and, especially, technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range form the basis of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors, and new quality features must be validated: for example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how can the stitching be validated? This work describes the quality factors which remain valid in presence capture cameras and defines their importance. Moreover, the new challenges of presence capture cameras are investigated from an image and video quality point of view, with consideration of how well current measurement methods can be applied to presence capture cameras.

  16. Compact Optical Technique for Streak Camera Calibration

    International Nuclear Information System (INIS)

    Bell, P; Griffith, R; Hagans, K; Lerche, R; Allen, C; Davies, T; Janson, F; Justin, R; Marshall, B; Sweningsen, O

    2004-01-01

    The National Ignition Facility (NIF) is under construction at the Lawrence Livermore National Laboratory (LLNL) for the U.S. Department of Energy Stockpile Stewardship Program. Optical streak cameras are an integral part of the experimental diagnostics instrumentation. To accurately reduce data from the streak cameras a temporal calibration is required. This article describes a technique for generating trains of precisely timed short-duration optical pulses (optical comb generators) that are suitable for temporal calibrations. These optical comb generators (Figure 1) are used with the LLNL optical streak cameras. They are small, portable light sources that produce a series of temporally short, uniformly spaced, optical pulses. Comb generators have been produced with 0.1, 0.5, 1, 3, 6, and 10-GHz pulse trains of 780-nm wavelength light with individual pulse durations of ∼25-ps FWHM. Signal output is via a fiber-optic connector. Signal is transported from comb generator to streak camera through multi-mode, graded-index optical fibers. At the NIF, ultra-fast streak-cameras are used by the Laser Fusion Program experimentalists to record fast transient optical signals. Their temporal resolution is unmatched by any other transient recorder. Their ability to spatially discriminate an image along the input slit allows them to function as a one-dimensional image recorder, time-resolved spectrometer, or multichannel transient recorder. Depending on the choice of photocathode, they can be made sensitive to photon energies from 1.1 eV to 30 keV and beyond. Comb generators perform two important functions for LLNL streak-camera users. First, comb generators are used as a precision time-mark generator for calibrating streak camera sweep rates. Accuracy is achieved by averaging many streak camera images of comb generator signals. Time-base calibrations with portable comb generators are easily done in both the calibration laboratory and in situ. Second, comb signals are applied

  17. Review of Calibration Methods for Scheimpflug Camera

    Directory of Open Access Journals (Sweden)

    Cong Sun

    2018-01-01

    The Scheimpflug camera offers a wide range of applications in typical close-range photogrammetry, particle image velocimetry, and digital image correlation, because its depth of field can be greatly extended according to the Scheimpflug condition. However, conventional calibration methods are not applicable in this case, because the assumptions they rely on no longer hold for cameras satisfying the Scheimpflug condition. Various methods have therefore been investigated to solve the problem over the last few years, yet no comprehensive review exists that provides insight into recent calibration methods for Scheimpflug cameras. This paper presents a survey of recent calibration methods for Scheimpflug cameras with perspective lenses, including the general nonparametric imaging model, and analyzes in detail the advantages and drawbacks of the mainstream calibration models with respect to each other. Real-data experiments, including calibrations, reconstructions, and measurements, are performed to assess the performance of the models. The results reveal that the accuracies of the RMM, PLVM, and PCIM are basically equal, while the accuracy of GNIM is slightly lower than that of the other three parametric models. Moreover, the experimental results reveal that the tangential distortion parameters are likely coupled with the tilt angle of the sensor in Scheimpflug calibration models. This work lays the foundation for further research on Scheimpflug cameras.
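
    The Scheimpflug condition the review builds on states that the sensor plane, the lens plane, and the plane of sharp focus intersect in a common line. A minimal 2-D thin-lens sketch can verify this numerically; the focal length, sensor distance, and tilt angle below are invented for illustration:

```python
import numpy as np

# 2-D thin-lens check of the Scheimpflug condition: with a tilted sensor,
# the sensor plane, the lens plane, and the plane of sharp focus meet in
# a single line. All numbers are illustrative assumptions.
f = 50.0                   # focal length (mm)
v0 = 60.0                  # sensor distance on the optical axis (mm)
alpha = np.deg2rad(8.0)    # sensor tilt relative to the lens plane

# Sensor sample points x (mm across the sensor); each sees a different
# image distance v because of the tilt.
x = np.linspace(-10, 10, 9)
v = v0 + x * np.tan(alpha)

# Thin lens: 1/u + 1/v = 1/f  =>  conjugate object distance u.
u = f * v / (v - f)

# The sharply imaged object point lies on the ray through the lens
# centre; lateral position scales by the magnification -u/v.
X = -x * u / v             # lateral coordinate of in-focus object point
Z = u                      # axial coordinate (lens plane at Z = 0)

# In-focus object points should be collinear (the plane of sharp focus).
fit = np.polyfit(Z, X, 1)
residual = np.max(np.abs(np.polyval(fit, Z) - X))

# The focus line and the extended sensor line should cross the lens
# plane (Z = 0) at the same lateral position: the Scheimpflug line.
x_focus_at_lens = np.polyval(fit, 0.0)
x_sensor_at_lens = -v0 / np.tan(alpha)
print(residual, x_focus_at_lens, x_sensor_at_lens)
```

The collinearity residual is at numerical noise level, and the two lens-plane intercepts agree, which is exactly the three-planes-in-a-line statement of the condition.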

  18. Low-cost uncooled VOx infrared camera development

    Science.gov (United States)

    Li, Chuan; Han, C. J.; Skidmore, George D.; Cook, Grady; Kubala, Kenny; Bates, Robert; Temple, Dorota; Lannon, John; Hilton, Allan; Glukh, Konstantin; Hardy, Busbee

    2013-06-01

    The DRS Tamarisk® 320 camera, introduced in 2011, is a low-cost commercial camera based on the 17 µm pixel pitch 320×240 VOx microbolometer technology. A higher-resolution 17 µm pixel pitch 640×480 Tamarisk® 640 has also been developed and is now in production serving the commercial markets. Recently, under the DARPA-sponsored Low Cost Thermal Imager-Manufacturing (LCTI-M) program and an internal project, DRS is leading a team of industrial experts from FiveFocal, RTI International and MEMSCAP to develop a small form factor uncooled infrared camera for the military and commercial markets. The objective of the DARPA LCTI-M program is to develop a low SWaP (size, weight and power) camera that costs less than US $500 based on a 10,000 units per month production rate. To meet this challenge, DRS is developing several innovative technologies, including a small pixel pitch 640×512 VOx uncooled detector, an advanced digital ROIC, and low-power miniature camera electronics. In addition, DRS and its partners are developing innovative manufacturing processes to reduce production cycle time and costs, including wafer-scale optics and vacuum packaging manufacturing and a 3-dimensional integrated camera assembly. This paper provides an overview of the DRS Tamarisk® project and LCTI-M related uncooled technology development activities. Highlights of recent progress and challenges will also be discussed. It should be noted that BAE Systems and Raytheon Vision Systems are also participants in the DARPA LCTI-M program.

  19. A method and results of color calibration for the Chang'e-3 terrain camera and panoramic camera

    Science.gov (United States)

    Ren, Xin; Li, Chun-Lai; Liu, Jian-Jun; Wang, Fen-Fei; Yang, Jian-Feng; Liu, En-Hai; Xue, Bin; Zhao, Ru-Jin

    2014-12-01

    The terrain camera (TCAM) and panoramic camera (PCAM) are two of the major scientific payloads installed on the lander and rover of the Chang'e 3 mission, respectively. Both use a CMOS sensor covered with a Bayer color filter array to capture color images of the Moon's surface. The RGB values of the original images are device-dependent, so there is an obvious color difference compared with human visual perception. This paper follows standards published by the International Commission on Illumination to establish a color correction model, designs the ground calibration experiment, and obtains the color correction coefficients. The image quality has been significantly improved, and there is no obvious color difference in the corrected images. Ground experimental results show that: (1) Compared with uncorrected images, the average color difference of TCAM is 4.30, which has been reduced by 62.1%. (2) The average color differences of the left and right cameras in PCAM are 4.14 and 4.16, which have been reduced by 68.3% and 67.6%, respectively.
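
    The "average color difference" figures quoted above suggest a CIELAB-style metric. As a hedged sketch (the abstract does not state which ΔE formula was used; CIE76 is the simplest, and the patch values below are invented), the computation is just a Euclidean distance in L*a*b* space:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space."""
    return math.dist(lab1, lab2)

# Hypothetical before/after-correction values for one color patch.
reference = (52.0, 22.0, -14.0)    # target L*a*b* of the patch
uncorrected = (49.0, 28.0, -7.0)
corrected = (51.5, 23.5, -13.0)

print(delta_e_cie76(reference, uncorrected))  # larger error before correction
print(delta_e_cie76(reference, corrected))    # smaller error after correction
```

Averaging such per-patch differences over a calibration chart gives figures comparable to the 4.30 / 4.14 / 4.16 values reported.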

  20. MonoSLAM: real-time single camera SLAM.

    Science.gov (United States)

    Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier

    2007-06-01

    We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.
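
    MonoSLAM's "general motion model for smooth camera movement" is a constant-velocity model inside an extended Kalman filter. A drastically simplified 1-D analogue can illustrate the predict/update mechanics (the real system carries a full 3-D camera pose, velocities, and map features; all noise values here are invented):

```python
import numpy as np

# 1-D analogue of an EKF predict/update cycle with a constant-velocity
# motion model. State is [position, velocity]; only position is measured.
dt = 1.0 / 30.0                     # 30 Hz frame rate, as in MonoSLAM
F = np.array([[1.0, dt],            # transition: position += velocity * dt
              [0.0, 1.0]])
Q = np.diag([1e-6, 1e-3])           # process noise (assumed values)
H = np.array([[1.0, 0.0]])          # measurement picks out position
R = np.array([[1e-2]])              # measurement noise (assumed)

x = np.array([0.0, 1.0])            # initial state estimate
P = np.eye(2) * 0.1                 # initial covariance

def ekf_step(x, P, z):
    # Predict with the constant-velocity model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the position measurement z.
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Feed measurements from a target moving at 2 units/s; the velocity
# estimate converges toward 2 even though only position is observed.
for k in range(1, 200):
    x, P = ekf_step(x, P, np.array([2.0 * k * dt]))
print(x)
```

The smoothness assumption enters through Q: small process noise penalizes abrupt velocity changes, which is what lets MonoSLAM track rapid but smooth hand-held motion without drift.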

  1. Designing Camera Networks by Convex Quadratic Programming

    KAUST Repository

    Ghanem, Bernard

    2015-05-04

    In this paper, we study the problem of automatic camera placement for computer graphics and computer vision applications. We extend the problem formulations of previous work by proposing a novel way to incorporate visibility constraints and camera-to-camera relationships. For example, the placement solution can be encouraged to have cameras that image the same important locations from different viewing directions, which can enable reconstruction and surveillance tasks to perform better. We show that the general camera placement problem can be formulated mathematically as a convex binary quadratic program (BQP) under linear constraints. Moreover, we propose an optimization strategy with a favorable trade-off between speed and solution quality. Our solution is almost as fast as a greedy treatment of the problem, but the quality is significantly higher, so much so that it is comparable to exact solutions that take orders of magnitude more computation time. Because it is computationally attractive, our method also allows users to explore the space of solutions for variations in input parameters. To evaluate its effectiveness, we show a range of 3D results on real-world floorplans (garage, hotel, mall, and airport).
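
    The BQP structure described above can be illustrated with a toy instance: binary variables select cameras, a linear term scores per-camera coverage, and a quadratic term rewards pairs that view the same location from different directions, under a budget constraint. This sketch brute-forces a tiny instance (the paper instead relaxes and solves the BQP efficiently; all scores here are random placeholders):

```python
import itertools
import numpy as np

# Toy camera-placement BQP: maximize  c@s + 0.5*s@Q@s  over binary s
# with a budget on the number of selected cameras.
n = 6                                  # candidate camera positions
budget = 3
rng = np.random.default_rng(1)
c = rng.uniform(0.5, 1.0, size=n)      # per-camera coverage score
Qm = rng.uniform(0.0, 0.3, size=(n, n))
Qm = (Qm + Qm.T) / 2                   # symmetric pairwise diversity bonus
np.fill_diagonal(Qm, 0.0)

best_val, best_set = -np.inf, None
for subset in itertools.combinations(range(n), budget):
    s = np.zeros(n)
    s[list(subset)] = 1.0
    val = c @ s + 0.5 * s @ Qm @ s     # linear + quadratic objective
    if val > best_val:
        best_val, best_set = val, subset
print(best_set, round(best_val, 3))
```

Brute force is exponential in the number of candidates, which is why a convex relaxation with a good speed/quality trade-off matters for real floorplans.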

  2. Addressing challenges of modulation transfer function measurement with fisheye lens cameras

    Science.gov (United States)

    Deegan, Brian M.; Denny, Patrick E.; Zlokolica, Vladimir; Dever, Barry; Russell, Laura

    2015-03-01

    Modulation transfer function (MTF) is a well-defined and accepted method of measuring image sharpness. The slanted-edge test, as defined in ISO 12233, is a standard method of calculating MTF, and is widely used for lens alignment and auto-focus algorithm verification. However, there are a number of challenges which should be considered when measuring MTF in cameras with fisheye lenses. Due to trade-offs related to Petzval curvature, planarity of the image plane is difficult to achieve in fisheye lenses. It is therefore critical to have the ability to accurately measure sharpness throughout the entire image, particularly for lens alignment. One challenge for fisheye lenses is that, because of the radial distortion, the slanted edges will have different angles depending on the location within the image and on the distortion profile of the lens. Previous work in the literature indicates that MTF measurements are robust for angles between 2 and 10 degrees; outside of this range, MTF measurements become unreliable. Also, the slanted edge itself will be curved by the lens distortion, causing further measurement problems. This study summarises the difficulties in the use of MTF for sharpness measurement in fisheye lens cameras, and proposes mitigations and alternative methods.
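
    The core of the slanted-edge computation is: edge-spread function (ESF) → line-spread function (LSF, its derivative) → MTF (normalized magnitude of the LSF's Fourier transform). A heavily simplified sketch — a real ISO 12233 implementation first projects the 2-D slanted edge into an oversampled 1-D ESF, which is exactly the step that the edge angle and curvature problems above disturb — starting directly from a synthetic Gaussian-blurred edge:

```python
import math
import numpy as np

# Synthetic 1-D edge blurred by a Gaussian of known sigma (pixels).
x = np.arange(-64, 64)
sigma = 2.0
esf = np.array([0.5 * (1 + math.erf(xi / (sigma * math.sqrt(2)))) for xi in x])

# Line-spread function: derivative of the edge-spread function.
lsf = np.gradient(esf)
lsf /= lsf.sum()

# MTF: magnitude of the Fourier transform of the LSF, normalized at DC.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(lsf.size)      # cycles/pixel

# For a Gaussian blur the MTF is itself Gaussian: exp(-2*pi^2*sigma^2*f^2),
# so the numerical curve can be checked against the analytic one.
print(mtf[:5])
```

For a well-sampled Gaussian edge the numerical MTF tracks the analytic curve closely; the small residual comes from the finite-difference derivative.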

  3. Astronomy and the camera obscura

    Science.gov (United States)

    Feist, M.

    2000-02-01

    The camera obscura (from Latin meaning darkened chamber) is a simple optical device with a long history. In the form considered here, it can be traced back to 1550. It had its heyday during the Victorian era, when it was to be found at the seaside as a tourist attraction or sideshow. It was also used as an artist's drawing aid and, in 1620, the famous astronomer-mathematician Johannes Kepler used a small tent camera obscura to trace the scenery.

  4. Streak cameras and their applications

    International Nuclear Information System (INIS)

    Bernet, J.M.; Imhoff, C.

    1987-01-01

    Over the last several years, development of various measurement techniques in the nanosecond and picosecond range has led to increased reliance on streak cameras. This paper will present the main electronic and optoelectronic performances of the Thomson-CSF TSN 506 cameras and their associated devices used to build an automatic image acquisition and processing system (NORMA). A brief survey of the diversity and the spread of the use of high-speed electronic cinematography will be illustrated by a few typical applications [fr]

  5. Distributed embedded smart cameras architectures, design and applications

    CERN Document Server

    Velipasalar, Senem

    2014-01-01

    This publication addresses distributed embedded smart cameras: cameras that perform onboard analysis and collaborate with other cameras. This book provides the material required to better understand the architectural design challenges of embedded smart camera systems, the hardware/software ecosystem, the design approach for, and applications of distributed smart cameras together with the state-of-the-art algorithms. The authors concentrate on the architecture, hardware/software design, and realization of smart camera networks from applications to architectures, in particular in the embedded and mobile domains.
    • Examines energy issues related to wireless communication, such as decreasing energy consumption to increase battery life
    • Discusses processing large volumes of video data in an embedded environment in real time
    • Covers design of realistic applications of distributed and embedded smart...

  6. Isochronous 180 degree turns for the SLC positron system

    International Nuclear Information System (INIS)

    Helm, R.H.; Clendenin, J.E.; Ecklund, S.D.; Kulikov, A.V.; Pitthan, R.

    1991-05-01

    The design of the compact, achromatic, second-order isochronous 180-degree turn for the SLC positron transport system will be described. Design criteria require an energy range of 200±20 MeV, energy acceptance of ±5%, transverse admittance of 25π mm-mr, and minimal lengthening of the 3 to 4 mm (rms) positron bunch. The devices had to fit within a maximum height or width of about 10 ft. Optics specifications and theoretical performance are presented and compared to experimental results based on streak camera measurements of bunch length immediately after the first isochronous turn (200 MeV) and positron beam energy spread after S-band acceleration to 1.15 GeV. 5 refs., 7 figs

  7. Solid-state framing camera with multiple time frames

    Energy Technology Data Exchange (ETDEWEB)

    Baker, K. L.; Stewart, R. E.; Steele, P. T.; Vernon, S. P.; Hsing, W. W.; Remington, B. A. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)

    2013-10-07

    A high speed solid-state framing camera has been developed which can operate over a wide range of photon energies. This camera measures the two-dimensional spatial profile of the flux incident on a cadmium selenide semiconductor at multiple times. This multi-frame camera has been tested at 3.1 eV and 4.5 keV. The framing camera currently records two frames with a temporal separation between the frames of 5 ps but this separation can be varied between hundreds of femtoseconds up to nanoseconds and the number of frames can be increased by angularly multiplexing the probe beam onto the cadmium selenide semiconductor.

  8. Tomographic Small-Animal Imaging Using a High-Resolution Semiconductor Camera

    Science.gov (United States)

    Kastis, GA; Wu, MC; Balzer, SJ; Wilson, DW; Furenlid, LR; Stevenson, G; Barber, HB; Barrett, HH; Woolfenden, JM; Kelly, P; Appleby, M

    2015-01-01

    We have developed a high-resolution, compact semiconductor camera for nuclear medicine applications. The modular unit has been used to obtain tomographic images of phantoms and mice. The system consists of a 64 x 64 CdZnTe detector array and a parallel-hole tungsten collimator mounted inside a 17 cm x 5.3 cm x 3.7 cm tungsten-aluminum housing. The detector is a 2.5 cm x 2.5 cm x 0.15 cm slab of CdZnTe connected to a 64 x 64 multiplexer readout via indium-bump bonding. The collimator is 7 mm thick, with a 0.38 mm pitch that matches the detector pixel pitch. We obtained a series of projections by rotating the object in front of the camera. The axis of rotation was vertical and about 1.5 cm away from the collimator face. Mouse holders were made out of acrylic plastic tubing to facilitate rotation and the administration of gas anesthetic. Acquisition times were varied from 60 sec to 90 sec per image for a total of 60 projections at an equal spacing of 6 degrees between projections. We present tomographic images of a line phantom and mouse bone scan and assess the properties of the system. The reconstructed images demonstrate spatial resolution on the order of 1–2 mm. PMID:26568676

  9. X-ray powder diffraction camera for high-field experiments

    International Nuclear Information System (INIS)

    Koyama, K; Mitsui, Y; Takahashi, K; Watanabe, K

    2009-01-01

    We have designed a high-field X-ray diffraction (HF-XRD) camera which will be inserted into the experimental room-temperature bore (100 mm) of a conventional solenoid-type cryocooled superconducting magnet (10T-CSM). Using a prototype camera of the same size as the HF-XRD camera, an XRD pattern of Si was taken at room temperature in zero magnetic field. From the obtained results, the expected capability of the designed HF-XRD camera is presented.

  10. Combining local and global optimisation for virtual camera control

    OpenAIRE

    Burelli, Paolo; Yannakakis, Georgios N.; 2010 IEEE Symposium on Computational Intelligence and Games

    2010-01-01

    Controlling a virtual camera in 3D computer games is a complex task. The camera is required to react to dynamically changing environments and produce high quality visual results and smooth animations. This paper proposes an approach that combines local and global search to solve the virtual camera control problem. The automatic camera control problem is described and it is decomposed into sub-problems; then a hierarchical architecture that solves each sub-problem using the most appropriate op...

  11. Ultra-fast framing camera tube

    Science.gov (United States)

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  12. Dark Energy Camera for Blanco

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  13. Design and Construction of an X-ray Lightning Camera

    Science.gov (United States)

    Schaal, M.; Dwyer, J. R.; Rassoul, H. K.; Uman, M. A.; Jordan, D. M.; Hill, J. D.

    2010-12-01

    A pinhole-type camera was designed and built for the purpose of producing high-speed images of the x-ray emissions from rocket-and-wire-triggered lightning. The camera consists of 30 7.62-cm diameter NaI(Tl) scintillation detectors, each sampling at 10 million frames per second. The steel structure of the camera is encased in 1.27-cm thick lead, which blocks x-rays that are less than 400 keV, except through a 7.62-cm diameter “pinhole” aperture located at the front of the camera. The lead and steel structure is covered in 0.16-cm thick aluminum to block RF noise, water and light. All together, the camera weighs about 550-kg and is approximately 1.2-m x 0.6-m x 0.6-m. The image plane, which is adjustable, was placed 32-cm behind the pinhole aperture, giving a field of view of about ±38° in both the vertical and horizontal directions. The elevation of the camera is adjustable between 0 and 50° from horizontal and the camera may be pointed in any azimuthal direction. In its current configuration, the camera’s angular resolution is about 14°. During the summer of 2010, the x-ray camera was located 44-m from the rocket-launch tower at the UF/Florida Tech International Center for Lightning Research and Testing (ICLRT) at Camp Blanding, FL and several rocket-triggered lightning flashes were observed. In this presentation, I will discuss the design, construction and operation of this x-ray camera.

  14. Conceptual design of a neutron camera for MAST Upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Weiszflog, M., E-mail: matthias.weiszflog@physics.uu.se; Sangaroon, S.; Cecconello, M.; Conroy, S.; Ericsson, G.; Klimek, I. [Department of Physics and Astronomy, Uppsala University, EURATOM-VR Association, Uppsala (Sweden); Keeling, D.; Martin, R. [CCFE, Culham Science Centre, Abingdon (United Kingdom); Turnyanskiy, M. [ITER Physics Department, EFDA CSU Garching, Boltzmannstraße 2, D-85748 Garching (Germany)]

    2014-11-15

    This paper presents two different conceptual designs of neutron cameras for Mega Ampere Spherical Tokamak (MAST) Upgrade. The first one consists of two horizontal cameras, one equatorial and one vertically down-shifted by 65 cm. The second design, viewing the plasma in a poloidal section, also consists of two cameras, one radial and the other one with a diagonal view. Design parameters for the different cameras were selected on the basis of neutron transport calculations and on a set of target measurement requirements taking into account the predicted neutron emissivities in the different MAST Upgrade operating scenarios. Based on a comparison of the cameras’ profile resolving power, the horizontal cameras are suggested as the best option.

  15. CALIBRATION PROCEDURES IN MID FORMAT CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    F. Pivnicka

    2012-07-01

    A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow, awareness of the sensitive parts of the setup is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform, and the specific characteristics of the mid-format camera make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Using aerial images over a well-designed test field with 3D structures and/or different flight altitudes enables the determination of calibration values in the Bingo software; it will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Besides the mechanical work, especially in mounting the camera beside the IMU, two lever arms have to be measured with millimeter accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre, and the lever arm from the IMU centre to the camera projection centre. In fact, the measurement with a total station is not a difficult task, but the definition of the right centres and the need for rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used; in that case, a gyro-based stabilized platform is recommended. This means that the IMU must be mounted beside the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the drawback is that the IMU-to-GPS-antenna lever arm is floating, so an additional data stream, the movement of the stabilizer, must be processed to correct the floating lever-arm distances. If the post-processing of the GPS/IMU data, taking the floating lever arms into account, delivers the expected result, the lever arms between IMU and
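
    The lever-arm reduction described above amounts to rotating the body-frame offsets into the mapping frame with the IMU attitude and chaining them. A minimal sketch, in which the rotation convention (a simple yaw-only attitude) and all offsets and coordinates are invented for illustration:

```python
import numpy as np

# Reduce a GPS antenna position to the camera projection centre using the
# two measured lever arms and the IMU attitude.
def rot_z(yaw_deg):
    """Body-to-mapping rotation for a yaw-only attitude (illustrative)."""
    a = np.deg2rad(yaw_deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

# Lever arms measured in the IMU body frame (metres), e.g. with a total station.
arm_imu_to_gps = np.array([0.00, -0.15, 1.20])   # IMU centre -> GPS antenna
arm_imu_to_cam = np.array([0.30,  0.00, -0.25])  # IMU centre -> projection centre

gps_pos = np.array([1000.0, 2000.0, 500.0])      # antenna position (mapping frame)
R = rot_z(30.0)                                  # attitude from the IMU

# Chain the offsets through the IMU centre in the mapping frame.
imu_pos = gps_pos - R @ arm_imu_to_gps
cam_pos = imu_pos + R @ arm_imu_to_cam
print(cam_pos)
```

With a stabilized platform, R (and hence the rotated GPS lever arm) changes every epoch, which is exactly the "floating lever arm" problem the abstract describes.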

  16. Laser scanning camera inspects hazardous area

    International Nuclear Information System (INIS)

    Fryatt, A.; Miprode, C.

    1985-01-01

    Main operational characteristics of a new laser scanning camera are presented. The camera is intended primarily for low-level, high-resolution viewing inside nuclear reactors. It uses a He-Ne laser beam raster; by detecting the reflected light by means of a photomultiplier, the subject under observation can be reconstructed in an electronic video store and reviewed on a conventional monitor screen.

  17. Ultra fast x-ray streak camera

    International Nuclear Information System (INIS)

    Coleman, L.W.; McConaghy, C.F.

    1975-01-01

    A unique ultrafast X-ray-sensitive streak camera, with a time resolution of 50 ps, has been built and operated. A 100-Å-thick gold photocathode on a beryllium vacuum window is used in a modified commercial image converter tube. The X-ray streak camera has been used in experiments to observe time-resolved emission from laser-produced plasmas. (author)

  18. Centering mount for a gamma camera

    International Nuclear Information System (INIS)

    Mirkhodzhaev, A.Kh.; Kuznetsov, N.K.; Ostryj, Yu.E.

    1988-01-01

    A device for centering a γ-camera detector in radionuclide diagnosis is described. It permits the use of available medical couches instead of a table with a transparent top. The device can be used for centering the detector (when it is fixed at the lower end of a γ-camera) on the required area of the patient's body.

  19. Application of colon capsule endoscopy (CCE) to evaluate the whole gastrointestinal tract: a comparative study of single-camera and dual-camera analysis

    Directory of Open Access Journals (Sweden)

    Remes-Troche JM

    2013-09-01

    José María Remes-Troche,1 Victoria Alejandra Jiménez-García,2 Josefa María García-Montes,2 Pedro Hergueta-Delgado,2 Federico Roesch-Dietlen,1 Juan Manuel Herrerías-Gutiérrez2 1Digestive Physiology and Motility Lab, Medical Biological Research Institute, Universidad Veracruzana, Veracruz, México; 2Gastroenterology Service, Virgen Macarena University Hospital, Seville, Spain. Background and study aims: Colon capsule endoscopy (CCE) was developed for the evaluation of colorectal pathology. In this study, our aim was to assess whether a dual-camera analysis using CCE allows better evaluation of the whole gastrointestinal (GI) tract than a single-camera analysis. Patients and methods: We included 21 patients (12 males, mean age 56.20 years) submitted for a CCE examination. After standard colon preparation, the colon capsule endoscope (PillCam Colon™) was swallowed after reinitiation from its “sleep” mode. Four physicians performed the analysis: two reviewed both video streams at the same time (dual-camera analysis); one analyzed images from one side of the device (“camera 1”); and the other reviewed the opposite side (“camera 2”). We compared the numbers of findings from different parts of the entire GI tract and the level of agreement among reviewers. Results: A complete evaluation of the GI tract was possible in all patients. Dual-camera analysis provided 16% and 5% more findings compared to camera 1 and camera 2 analysis, respectively. Overall agreement was 62.7% (kappa = 0.44, 95% CI: 0.373–0.510). Esophageal (kappa = 0.611) and colorectal (kappa = 0.595) findings had a good level of agreement, while the small bowel (kappa = 0.405) showed moderate agreement. Conclusion: The use of dual-camera analysis with CCE for the evaluation of the GI tract is feasible and detects more abnormalities than single-camera analysis. Keywords: capsule endoscopy, colon, gastrointestinal tract, small bowel
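
    The kappa statistics quoted above are Cohen's kappa, i.e. observed agreement corrected for the agreement expected by chance from the raters' marginal frequencies. A minimal sketch with an invented 2×2 confusion matrix of two reviewers' calls (finding present / absent):

```python
import numpy as np

# Cohen's kappa from a 2x2 confusion matrix of two reviewers.
# Rows: reviewer A's calls; columns: reviewer B's calls. Counts invented.
confusion = np.array([[20, 5],
                      [4, 31]])
total = confusion.sum()

p_observed = np.trace(confusion) / total
# Chance agreement: per category, product of the two marginal frequencies.
p_expected = (confusion.sum(axis=0) @ confusion.sum(axis=1)) / total**2

kappa = (p_observed - p_expected) / (1 - p_expected)
print(round(kappa, 3))
```

On the usual rough scale, values around 0.4 (the small-bowel result) read as moderate agreement and values around 0.6 (esophageal, colorectal) as good, matching the abstract's interpretation.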

  20. Static Histomorphometry of the iliac crest after 360 days of antiorthostatic bed rest with and without countermeasures

    Science.gov (United States)

    Thomsen, J. S.; Morukov, B. V.; Vico, L.; Saparin, P. I.; Gowin, W.

    The loss of bone during immobilization is well known and investigated, whereas the structural changes human cancellous bone undergoes during disuse are less well examined. The aim of the study was to examine the influence of hypokinesia on static histomorphometric measures of the iliac crest using a 360-day-long bed rest experiment simulating exposure to microgravity. Eight healthy males underwent 360 days of 5° head-down tilt bed rest. Three subjects were treated with the bisphosphonate Xidifon (900 mg/day) combined with a treadmill and ergometer exercise regimen (1–2 hours/day) for the entire study period. Five subjects underwent 120 days of bed rest without countermeasures followed by 240 days of bed rest with the treadmill and ergometer exercise regimen. Transiliac bone biopsies were obtained either at day 0 and 360 or at day 0, 120, and 360, at alternating sides of the ilium. The biopsies were embedded in methylmethacrylate, cut in 7-μm-thick sections, and stained with Goldner trichrome, and static histomorphometry was performed. 120 days of bed rest without countermeasures resulted in decreased trabecular bone volume (-6.3%, p = 0.046) and trabecular number (-10.2%, p = 0.080) and increased trabecular separation (14.7%, p = 0.020), whereas 240 days of subsequent bed rest with exercise treatment prevented further significant deterioration of the histomorphometric measures. 360 days of bed rest with bisphosphonate and exercise treatment did not induce any significant changes in any of the histomorphometric measures. The study showed that 120 days of antiorthostatic bed rest without countermeasures induced significant deterioration of iliac crest trabecular bone histomorphometric properties. There are indications that the immobilization-induced changes involve a loss of trabeculae rather than a general thinning of the trabeculae. On average, the countermeasures consisting of either bisphosphonate and exercise or exercise alone were able to either prevent