WorldWideScience

Sample records for stereoscopic cameras

  1. Polarizing aperture stereoscopic cinema camera

    Science.gov (United States)

    Lipton, Lenny

    2012-07-01

    The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor, the size of the standard 35 mm frame, with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies will be unaltered.
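
    The interaxial separation discussed above maps to on-screen parallax through simple similar-triangle geometry. As an illustration (not part of Lipton's design), a minimal sketch of the parallel-axis disparity relation, assuming all quantities in consistent units:

```python
def sensor_disparity_mm(interaxial_mm, focal_mm, distance_mm):
    """On-sensor disparity of a point at `distance_mm` for two parallel
    cameras separated by `interaxial_mm` (similar triangles: d = t*f/Z)."""
    return interaxial_mm * focal_mm / distance_mm

# Halving the interaxial halves the disparity, which is why small
# interaxials matter for close-up stereoscopic cinematography.
eye_distance_rig = sensor_disparity_mm(65.0, 35.0, 1000.0)
reduced_rig      = sensor_disparity_mm(20.0, 35.0, 1000.0)
```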

  2. Head-coupled remote stereoscopic camera system for telepresence applications

    Science.gov (United States)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.
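
    The head-coupled control loop described here is essentially a mapping from tracked head orientation to platform pan/tilt/roll commands, clamped to the platform's mechanical limits. A hypothetical sketch (the angle names and limit values are assumptions, not VIEW project parameters):

```python
def head_to_platform(yaw_deg, pitch_deg, roll_deg, limit_deg=90.0):
    """Map tracked head angles to pan/tilt/roll servo targets,
    clamping each axis to the platform's mechanical range."""
    clamp = lambda a: max(-limit_deg, min(limit_deg, a))
    return clamp(yaw_deg), clamp(pitch_deg), clamp(roll_deg)

# A head turn past the platform's range saturates rather than wrapping.
pan, tilt, roll = head_to_platform(120.0, -30.0, 10.0)
```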

  3. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    Science.gov (United States)

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro® 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  4. Use of camera drive in stereoscopic display of learning contents of introductory physics

    Science.gov (United States)

    Matsuura, Shu

    2011-03-01

    Simple 3D physics simulations with stereoscopic display were created as part of introductory physics e-Learning. First, the cameras viewing the 3D world were made controllable by the user, enabling observation of the system and the motions of objects from any position in the 3D world. Second, cameras could be attached to one of the moving objects in the simulation so as to observe the relative motion of the other objects. With this option, it was found that users perceive velocity and acceleration more sensitively on a stereoscopic display than on a non-stereoscopic 3D display. Simulations were made using Adobe Flash ActionScript, and the Papervision3D library was used to render the 3D models in the Flash web pages. To display the stereogram, two viewports from virtual cameras were displayed side by side in the same web page. For observation of the stereogram, the images of the two viewports were superimposed using a 3D stereogram projection box (T&TS CO., LTD.) and projected on an 80-inch screen. The virtual cameras were controlled by keyboard and also by Nintendo Wii remote controller buttons. In conclusion, stereoscopic display offers learners more opportunities to play with the simulated models and to better perceive the characteristics of motion.
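
    The two side-by-side viewports amount to rendering the scene twice from camera positions offset along the camera's right vector. A language-neutral sketch (the original used ActionScript with Papervision3D; the names here are illustrative):

```python
def stereo_eye_positions(cam_pos, right_vec, separation):
    """Offset a single virtual camera into left/right eye positions
    along its right vector; each position renders one viewport."""
    half = separation / 2.0
    left  = tuple(p - half * r for p, r in zip(cam_pos, right_vec))
    right = tuple(p + half * r for p, r in zip(cam_pos, right_vec))
    return left, right

# A camera attached to a moving object simply recomputes cam_pos
# (and right_vec) from that object's transform every frame.
L, R = stereo_eye_positions((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.06)
```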

  5. Indoor calibration for stereoscopic camera STC: a new method

    Science.gov (United States)

    Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.

    2017-11-01

    In the framework of the ESA-JAXA BepiColombo mission to Mercury, the global mapping of the planet will be performed by the on-board Stereo Camera (STC), part of the SIMBIO-SYS suite [1]. In this paper we propose a new technique for the validation of 3D reconstruction of the planetary surface from images acquired with a stereo camera. STC will provide a three-dimensional reconstruction of Mercury's surface. The generation of a DTM of the observed features is based on the processing of the acquired images and on the knowledge of the intrinsic and extrinsic parameters of the optical system. The new stereo concept developed for STC requires pre-flight verification of its actual capability to obtain elevation information from stereo pairs: a stereo validation setup providing an indoor reproduction of the in-flight observing conditions would lend much greater confidence to the instrument design. STC is the first stereo satellite camera with two optical channels converging on a single sensor. Its optical model is based on a brand-new concept that minimizes mass and volume and allows push-frame imaging. This model required defining a new calibration pipeline to test the reconstruction method in a controlled environment. An ad hoc indoor setup has been realized for validating an instrument designed to operate in deep space, i.e. in flight STC will have to deal with sources/targets placed essentially at infinity. This auxiliary indoor setup permits, on one side, rescaling the stereo reconstruction problem from the in-flight operating distance of 400 km to about 1 m in the lab; on the other side, it allows replicating the different viewing angles for the considered targets. Neglecting, for the sake of simplicity, the curvature of Mercury, the STC observing geometry for the same portion of the planet's surface at periherm corresponds to a rotation of the spacecraft (SC) around the observed target by twice the 20° separation of each channel with respect to nadir
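
    The rescaling from flight to lab geometry is a pure similarity transform: angles (and therefore the ±20° stereo convergence) are preserved while every distance shrinks by the same factor. A back-of-the-envelope sketch of that scaling (the 400 km and 1 m figures are from the abstract; the 58 m ground feature is a hypothetical example value):

```python
def lab_length(flight_length_m, flight_distance_m=400e3, lab_distance_m=1.0):
    """Scale an in-flight length to the indoor setup, preserving angles."""
    return flight_length_m * lab_distance_m / flight_distance_m

scale = lab_length(1.0)              # 2.5e-6: 400 km collapses to 1 m
target_mm = lab_length(58.0) * 1e3   # a hypothetical 58 m ground feature
```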

  6. System design description for the LDUA high resolution stereoscopic video camera system (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, was designed to be used as an end effector on the LDUA to perform surveillance and inspection activities within a waste tank. It is attached to the LDUA by means of a Tool Interface Plate (TIP), which provides a feed-through for all electrical and pneumatic utilities needed by the end effector to operate. Designed to perform close-up weld and corrosion inspection roles in UST operations, the HRSVS will support and supplement the Light Duty Utility Arm (LDUA) and provide the crucial inspection tasks needed to ascertain waste tank condition

  7. Evaluation of stereoscopic video cameras synchronized with the movement of an operator's head on the teleoperation of the actual backhoe shovel

    Science.gov (United States)

    Minamoto, Masahiko; Matsunaga, Katsuya

    1999-05-01

    Operator performance while using a remote controlled backhoe shovel is described for three different stereoscopic viewing conditions: direct view, fixed stereoscopic cameras connected to a helmet mounted display (HMD), and rotating stereo cameras connected and slaved to the head orientation of a free moving stereo HMD. Results showed that the head-slaved system provided the best performance.

  8. Visual fatigue modeling for stereoscopic video shot based on camera motion

    Science.gov (United States)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3D movies become popular, visual discomfort limits further applications of 3D display technology. Causes of visual discomfort in stereoscopic video include the conflict between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale, and comfort zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on specific objects when the cameras and background are static; relative motion should be considered for different camera conditions, which determine different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue prediction model is presented. The visual fatigue degree is predicted using the multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be obtained with the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
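
    The multiple linear regression step can be sketched in plain Python: fit per-factor weights against subjective fatigue scores via the normal equations. The factor names below (motion scale, mean parallax) are illustrative stand-ins for the paper's features, not its actual coefficients:

```python
def fit_linear(X, y):
    """Least-squares fit y ~ w0 + w1*x1 + ... via the normal equations."""
    A = [[1.0] + list(row) for row in X]          # design matrix with intercept
    n, m = len(A), len(A[0])
    M = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(m)]
         for i in range(m)]                       # A^T A
    b = [sum(A[k][i] * y[k] for k in range(n)) for i in range(m)]  # A^T y
    for col in range(m):                          # Gaussian elimination
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = M[r][col] / M[col][col]
            for c in range(col, m):
                M[r][c] -= f * M[col][c]
            b[r] -= f * b[col]
    w = [0.0] * m
    for i in reversed(range(m)):                  # back-substitution
        w[i] = (b[i] - sum(M[i][j] * w[j] for j in range(i + 1, m))) / M[i][i]
    return w

def predict(w, factors):
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], factors))

# Hypothetical shots: (motion scale, mean parallax) -> subjective fatigue score
X = [(0.2, 0.5), (0.8, 0.3), (0.5, 0.9), (0.1, 0.1)]
y = [1.4, 2.6, 2.8, 0.7]
weights = fit_linear(X, y)
```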

  9. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks, including the tank wall, in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities. The instruments may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This sensor will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing

  10. Digital stereoscopic imaging

    Science.gov (United States)

    Rao, A. Ravishankar; Jaimes, Alejandro

    1999-05-01

    The convergence of inexpensive digital cameras and cheap hardware for displaying stereoscopic images has created the right conditions for the proliferation of stereoscopic imaging applications. One application, which is of growing importance to museums and cultural institutions, consists of capturing and displaying 3D images of objects at multiple orientations. In this paper, we present our stereoscopic imaging system and methodology for semi-automatically capturing multiple orientation stereo views of objects in a studio setting, and demonstrate the superiority of using a high resolution, high fidelity digital color camera for stereoscopic object photography. We show the superior performance achieved with the IBM TDI-Pro 3000 digital camera developed at IBM Research. We examine various choices related to the camera parameters and image capture geometry, and suggest a range of optimum values that work well in practice. We also examine the effect of scene composition and background selection on the quality of the stereoscopic image display. We demonstrate our technique with turntable views of objects from the IBM Corporate Archive.

  11. Depth Perception In Remote Stereoscopic Viewing Systems

    Science.gov (United States)

    Diner, Daniel B.; Von Sydow, Marika

    1989-01-01

    Report describes theoretical and experimental studies of perception of depth by human operators through stereoscopic video systems. Purpose of such studies is to optimize dual-camera configurations used to view workspaces of remote manipulators at distances of 1 to 3 m from cameras. According to analysis, static stereoscopic depth distortion can be decreased, without decreasing stereoscopic depth resolution, by increasing camera-to-object and intercamera distances and camera focal length. Analysis further predicts that dynamic stereoscopic depth distortion can be reduced by rotating cameras around center of circle passing through point of convergence of viewing axes and first nodal points of two camera lenses.
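
    The prescription above (increase camera-to-object distance, intercamera distance, and focal length) follows from the standard depth-from-disparity relation Z = f·b/d, whose sensitivity to a one-step disparity error grows as Z². A minimal sketch, assuming all quantities in consistent units:

```python
def depth_resolution(Z, focal, baseline, disparity_step):
    """Depth uncertainty dZ ~ Z^2 * dd / (f * b) for a disparity
    quantization step dd: larger f and b give finer depth resolution."""
    return Z * Z * disparity_step / (focal * baseline)

# Doubling the baseline halves the depth uncertainty at a given range.
coarse = depth_resolution(2.0, 0.035, 0.10, 1e-5)
fine   = depth_resolution(2.0, 0.035, 0.20, 1e-5)
```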

  12. Analysis of the three-dimensional trajectories of dusts observed with a stereoscopic fast framing camera in the Large Helical Device

    Energy Technology Data Exchange (ETDEWEB)

    Shoji, M., E-mail: shoji@LHD.nifs.ac.jp [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292, Gifu (Japan); Masuzaki, S. [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292, Gifu (Japan); Tanaka, Y. [Kanazawa University, Kakuma, Kanazawa 920-1192 (Japan); Pigarov, A.Yu.; Smirnov, R.D. [University of California at San Diego, La Jolla, CA 92093 (United States); Kawamura, G.; Uesugi, Y.; Yamada, H. [National Institute for Fusion Science, 322-6 Oroshi-cho, Toki 509-5292, Gifu (Japan)

    2015-08-15

    The three-dimensional trajectories of dusts have been observed with two stereoscopic fast framing cameras installed in upper and outer viewports in the Large Helical Device (LHD). The observations show that the dust trajectories are located in the divertor legs and in the ergodic layer around the main plasma confinement region. While it is found that most of the dusts approximately move along the magnetic field lines with acceleration, some dusts have sharply curved trajectories crossing over the magnetic field lines. A dust transport simulation code was modified to investigate the dust trajectories in fully three-dimensional geometries such as LHD plasmas. It can explain the general trend of most of the observed dust trajectories by the effect of the plasma flow in the peripheral plasma. However, the behavior of the dusts with sharply curved trajectories is not consistent with the simulations.

  13. Stereoscopic augmented reality for laparoscopic surgery.

    Science.gov (United States)

    Kang, Xin; Azizian, Mahdi; Wilson, Emmanuel; Wu, Kyle; Martin, Aaron D; Kane, Timothy D; Peters, Craig A; Cleary, Kevin; Shekhar, Raj

    2014-07-01

    Conventional laparoscopes provide a flat representation of the three-dimensional (3D) operating field and are incapable of visualizing internal structures located beneath visible organ surfaces. Computed tomography (CT) and magnetic resonance (MR) images are difficult to fuse in real time with laparoscopic views due to the deformable nature of soft-tissue organs. Utilizing emerging camera technology, we have developed a real-time stereoscopic augmented-reality (AR) system for laparoscopic surgery by merging live laparoscopic ultrasound (LUS) with stereoscopic video. The system creates two new visual cues: (1) perception of true depth with improved understanding of 3D spatial relationships among anatomical structures, and (2) visualization of critical internal structures along with a more comprehensive visualization of the operating field. The stereoscopic AR system has been designed for near-term clinical translation with seamless integration into the existing surgical workflow. It is composed of a stereoscopic vision system, a LUS system, and an optical tracker. Specialized software processes streams of imaging data from the tracked devices and registers those in real time. The resulting two ultrasound-augmented video streams (one for the left and one for the right eye) give a live stereoscopic AR view of the operating field. The team conducted a series of stereoscopic AR interrogations of the liver, gallbladder, biliary tree, and kidneys in two swine. The preclinical studies demonstrated the feasibility of the stereoscopic AR system during in vivo procedures. Major internal structures could be easily identified. The system exhibited unobservable latency with acceptable image-to-video registration accuracy. We presented the first in vivo use of a complete system with stereoscopic AR visualization capability. This new capability introduces new visual cues and enhances visualization of the surgical anatomy. The system shows promise to improve the precision and

  14. Morphometric Optic Nerve Head Analysis in Glaucoma Patients: A Comparison between the Simultaneous Nonmydriatic Stereoscopic Fundus Camera (Kowa Nonmyd WX3D) and the Heidelberg Scanning Laser Ophthalmoscope (HRT III)

    Directory of Open Access Journals (Sweden)

    Siegfried Mariacher

    2016-01-01

    Purpose. To retrospectively investigate the agreement between morphometric optic nerve head parameters assessed with the confocal laser ophthalmoscope HRT III and the stereoscopic fundus camera Kowa nonmyd WX3D. Methods. Morphometric optic nerve head parameters of 40 eyes of 40 patients with primary open angle glaucoma were analyzed with regard to their vertical cup-to-disc ratio (CDR). Vertical CDR, disc area, cup volume, rim volume, and maximum cup depth were assessed with both devices by one examiner. Mean bias and limits of agreement (95% CI) were obtained using scatter plots and Bland-Altman analysis. Results. Overall vertical CDR comparison between HRT III and Kowa nonmyd WX3D measurements showed a mean difference (limits of agreement) of −0.06 (−0.36 to 0.24). For the CDR < 0.5 group (n=24), the mean difference in vertical CDR was −0.14 (−0.34 to 0.06), and for the CDR ≥ 0.5 group (n=16) it was 0.06 (−0.21 to 0.34). Conclusion. This study showed good agreement between Kowa nonmyd WX3D and HRT III with regard to widely used optic nerve head parameters in patients with glaucomatous optic neuropathy. However, data from Kowa nonmyd WX3D exhibited a tendency toward larger CDR values than HRT III in the CDR < 0.5 group and lower CDR values in the CDR ≥ 0.5 group.
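
    The Bland-Altman analysis used here reduces to a mean of paired differences (the bias) plus 1.96 standard deviations on either side (the limits of agreement). A minimal sketch with Python's statistics module (the readings below are made-up values, not study data):

```python
from statistics import mean, stdev

def bland_altman(device_a, device_b):
    """Mean bias and 95% limits of agreement between paired readings."""
    diffs = [a - b for a, b in zip(device_a, device_b)]
    bias = mean(diffs)
    spread = 1.96 * stdev(diffs)
    return bias, (bias - spread, bias + spread)

# e.g. paired vertical CDR readings from two instruments
bias, (lo, hi) = bland_altman([0.4, 0.6, 0.5, 0.7], [0.5, 0.55, 0.6, 0.65])
```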

  15. An HTML Tool for Production of Interactive Stereoscopic Compositions.

    Science.gov (United States)

    Chistyakov, Alexey; Soto, Maria Teresa; Martí, Enric; Carrabina, Jordi

    2016-12-01

    The benefits of stereoscopic vision in medical applications were appreciated and have been thoroughly studied for more than a century. The use of stereoscopic displays has a proven positive impact on performance in various medical tasks. At the same time, the market for 3D-enabled technologies is blooming. New high-resolution stereo cameras, TVs, projectors, monitors, and head-mounted displays are becoming available. This equipment, complete with a corresponding application program interface (API), can be implemented in a system relatively easily. Such complexes could open new possibilities for medical applications exploiting stereoscopic depth. This work proposes a tool for the production of interactive stereoscopic graphical user interfaces, which could serve as a software layer for web-based medical systems exploiting the stereoscopic effect. The tool's mode of operation and the results of the subjective and objective performance tests conducted are then presented.

  16. A stereoscopic television system for reactor inspection

    International Nuclear Information System (INIS)

    Friend, D.B.; Jones, A.

    1980-03-01

    A stereoscopic television system suitable for reactor inspection has been developed. Right and left eye views, obtained from two conventional black and white cameras, are displayed by the anaglyph technique and observers wear appropriately coloured viewing spectacles. All camera functions, such as zoom, focus and toe-in are remotely controlled. A laboratory experiment is described which demonstrates the increase in spatial awareness afforded by the use of stereo television and illustrates its potential in the supervision of remote handling tasks. Typical depth resolutions of 3mm at 1m and 10mm at 2m have been achieved with the reactor instrument. Trials undertaken during routine inspection at Oldbury Power Station in June 1978 are described. They demonstrate that stereoscopic television can indeed improve the convenience of remote handling and that the added display realism is beneficial in visual inspection. (author)
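
    The anaglyph technique mentioned above is simple to state: the red channel of the composite comes from one camera and the green/blue channels from the other, so the coloured spectacles route each view to the correct eye. A minimal sketch over grayscale images stored as nested lists:

```python
def make_anaglyph(left, right):
    """Combine two grayscale views into red/cyan RGB pixels:
    red from the left camera, green and blue from the right."""
    return [[(l, r, r) for l, r in zip(lrow, rrow)]
            for lrow, rrow in zip(left, right)]

frame = make_anaglyph([[10, 20]], [[30, 40]])
```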

  17. Evaluating methods for controlling depth perception in stereoscopic cinematography

    Science.gov (United States)

    Sun, Geng; Holliman, Nick

    2009-02-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with a perceived depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply a Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness; objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes are different for individual stereoscopic displays and that viewers can cope with a much larger perceived depth range when viewing stereoscopic cinematography than with static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography.
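
    The fixed and dynamic mapping approaches differ only in where the scene depth bounds come from: fixed uses one range for the whole film, dynamic re-measures them per shot. A minimal sketch of the linear remapping itself (the parameter names and values are illustrative):

```python
def map_depth(z, scene_near, scene_far, target_near, target_far):
    """Linearly remap a scene depth onto the display's comfortable
    perceived-depth budget [target_near, target_far]."""
    t = (z - scene_near) / (scene_far - scene_near)
    return target_near + t * (target_far - target_near)

# Dynamic variant: recompute scene_near/scene_far from each shot's
# actual depth extent before mapping, keeping the display range fixed.
mid = map_depth(5.0, 0.0, 10.0, -0.01, 0.02)
```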

  18. Matte painting in stereoscopic synthetic imagery

    Science.gov (United States)

    Eisenmann, Jonathan; Parent, Rick

    2010-02-01

    While there have been numerous studies concerning human perception in stereoscopic environments, rules of thumb for cinematography in stereoscopy have not yet been well established. To that end, we present experiments and results of subject testing in a stereoscopic environment, similar to that of a theater (i.e. large flat screen without head-tracking). In particular we wish to empirically identify thresholds at which different types of backgrounds, referred to in the computer animation industry as matte paintings, can be used while still maintaining the illusion of seamless perspective and depth for a particular scene and camera shot. In monoscopic synthetic imagery, any type of matte painting that maintains proper perspective lines, depth cues, and coherent lighting and textures saves in production costs while still maintaining the illusion of an alternate cinematic reality. However, in stereoscopic synthetic imagery, a 2D matte painting that worked in monoscopy may fail to provide the intended illusion of depth because the viewer has added depth information provided by stereopsis. We intend to observe two stereoscopic perceptual thresholds in this study which will provide practical guidelines indicating when to use each of three types of matte paintings. We ran subject tests in two virtual testing environments, each with varying conditions. Data were collected showing how the choices of the users matched the correct response, and the resulting perceptual threshold patterns are discussed below.

  19. Stereoscopic image production: live, CGI, and integration

    Science.gov (United States)

    Criado, Enrique

    2006-02-01

    This paper briefly describes part of the experience gathered in more than 10 years of stereoscopic movie production, some of the most common problems found, and the solutions, with more or less success, that we applied to solve those problems. Our work is mainly focused on the entertainment market, theme parks, museums, and other culture-related locations and events. For our movies, we have been forced to develop our own devices to permit correct stereo shooting (stereoscopic rigs) and stereo monitoring (real-time), and to solve problems found with conventional film editing, compositing, and postproduction software. Here, we discuss stereo lighting, monitoring, special effects, image integration (using dummies and more), stereo-camera parameters, and other general 3D movie production aspects.

  20. Clinical usefulness of stereoscopic DSA

    International Nuclear Information System (INIS)

    Bussaka, Hiromasa; Takahashi, Mutsumasa; Miyawaki, Masayuki; Korogi, Yukinori; Yamashita, Yasuyuki; Izunaga, Hiroshi; Nakashima, Koki; Yoshizumi, Kazuhiro

    1988-01-01

    Digital subtraction angiography (DSA) is widely used as a screening examination for vascular diseases, but it has several disadvantages, one of which is overlapping of the vessels. To overcome this disadvantage, a stereoscopic technique was applied to our DSA equipment. Stereoscopic DSA is obtained by alternate exposures from the twin focal spots of an x-ray tube, without additional contrast medium or radiation exposure. Stereoscopic intravenous DSA was performed 223 times and was useful in 157 cases (70.4%) for the identification and stereoscopic observation of the abdominal and pelvic vessels. Thirty-seven intra-arterial DSAs were performed stereoscopically for cranial, abdominal, and pelvic angiograms, and effective studies were obtained in 30 DSAs (81.1%), with demonstration of tumor stains and displacement of the vessels. It is necessary to use adequate compensation filters to obtain good stereoscopic DSAs, especially for cervical and thoracic DSAs. (author)

  1. Efficient stereoscopic contents file format on the basis of ISO base media file format

    Science.gov (United States)

    Kim, Kyuheon; Lee, Jangwon; Suh, Doug Young; Park, Gwang Hoon

    2009-02-01

    Many 3D contents have been widely used in multimedia services; however, real 3D video contents have been adopted only for limited applications such as specially designed 3D cinemas. This is because of the difficulty of capturing real 3D video contents and the limitations of the display devices available on the market. Recently, however, diverse types of display devices for stereoscopic video contents have been released. In particular, a mobile phone with a stereoscopic camera has been released, which allows a user, as a consumer, to have more realistic experiences without glasses and, as a content creator, to take stereoscopic images or record stereoscopic video contents. However, a user can only store and display these acquired stereoscopic contents on his/her own devices, owing to the absence of a common file format for such contents. This limitation prevents users from sharing their contents with other users, which makes it difficult for the market for stereoscopic contents to expand. Therefore, this paper proposes a common file format, based on the ISO base media file format, for stereoscopic contents, which enables users to store and exchange pure stereoscopic contents. This technology is also currently under development as an international standard of MPEG, called the stereoscopic video application format.
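
    The ISO base media file format the proposal builds on is a tree of length-prefixed boxes: a 32-bit big-endian size followed by a four-character type. A minimal sketch of walking the top-level boxes (real stereoscopic application-format files add specific boxes on top of this; 64-bit sizes are not handled here):

```python
import struct

def iter_boxes(buf):
    """Yield (type, size) for top-level ISO BMFF boxes in a byte buffer."""
    offset = 0
    while offset + 8 <= len(buf):
        size, box_type = struct.unpack_from(">I4s", buf, offset)
        yield box_type.decode("ascii"), size
        if size < 8:          # size 0 (to EOF) or 1 (64-bit) not handled
            break
        offset += size

# A tiny two-box demo buffer: a 16-byte 'ftyp' then an empty 'moov'.
demo = (struct.pack(">I4s", 16, b"ftyp") + b"\x00" * 8
        + struct.pack(">I4s", 8, b"moov"))
boxes = list(iter_boxes(demo))
```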

  2. Stereoscopic 3D graphics generation

    Science.gov (United States)

    Li, Zhi; Liu, Jianping; Zan, Y.

    1997-05-01

    Stereoscopic display technology is one of the key techniques of areas such as simulation, multimedia, entertainment, virtual reality, and so on. Moreover, stereoscopic 3D graphics generation is an important part of a stereoscopic 3D display system. In this paper, we first describe the principle of stereoscopic display and summarize some methods of generating stereoscopic 3D graphics. Secondly, to overcome the problems arising from user-defined model methods (such as inconvenience and long modification cycles), we put forward a method based on vector graphics file definitions. Thus we can design more directly, modify the model simply and easily, generate graphics more conveniently, and, furthermore, make full use of the graphics accelerator card. Finally, we discuss how to speed up the generation.
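
    One standard way to generate the left/right image pair is the parallel-axis asymmetric-frustum method: both cameras look straight ahead, and each frustum is shifted horizontally so the two images fuse at a chosen convergence plane. A minimal sketch of the frustum bounds (the variable names and values are illustrative, not from this paper):

```python
def offaxis_bounds(near, convergence, half_width, eye_sep):
    """Left/right frustum x-bounds at the near plane for the
    parallel-axis asymmetric-frustum stereo method."""
    shift = (eye_sep / 2.0) * near / convergence
    left_eye  = (-half_width + shift,  half_width + shift)
    right_eye = (-half_width - shift,  half_width - shift)
    return left_eye, right_eye

# Objects at the convergence distance land at the same screen position
# in both views; nearer objects get crossed (negative) parallax.
le, re = offaxis_bounds(near=1.0, convergence=10.0, half_width=0.5, eye_sep=0.06)
```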

  3. Stereoscopic optical viewing system

    Science.gov (United States)

    Tallman, C.S.

    1986-05-02

    An improved optical system is described which provides the operator with a stereoscopic viewing field and depth of vision, particularly suitable for use in machines such as electron or laser beam welding and drilling machines. The system features two separate but independently controlled optical viewing assemblies from the eyepiece to a spot directly above the working surface. Each optical assembly comprises a combination of eyepieces, turning prisms, telephoto lenses for providing magnification, achromatic imaging relay lenses, and final-stage pentagonal turning prisms. Adjustment for variations in distance from the turning prisms to the workpiece, necessitated by varying part sizes and configurations and by the operator's visual acuity, is provided separately for each optical assembly by means of separate manual controls at the operator console or within easy reach of the operator.

  4. Stereoscopic methods in TEM

    International Nuclear Information System (INIS)

    Thomas, L.E.

    1975-07-01

    Stereoscopic methods used in TEM are reviewed. The use of stereoscopy to characterize three-dimensional structures observed by TEM has become widespread since the introduction of instruments operating at 1 MV. In its emphasis on whole structures and thick specimens, this approach differs significantly from conventional methods of microstructural analysis based on three-dimensional image reconstruction from a number of thin-section views. The great advantage of stereo derives from the ability to directly perceive and measure structures in three dimensions by capitalizing on the unsurpassed human ability for stereoscopic matching of corresponding details on picture pairs showing the same features from different viewpoints. At this time, stereo methods are aimed mainly at structural understanding at the level of dislocations, precipitates, and irradiation-induced point-defect clusters in crystals, and at the cellular level of biological specimens. 3-D reconstruction methods have concentrated on the molecular level, where image resolution requirements dictate the use of very thin specimens. One recent application of three-dimensional coordinate measurements is a system developed for analyzing depth variations in the numbers, sizes, and total volumes of voids produced near the surfaces of metal specimens during energetic ion bombardment. This system was used to correlate the void volume at each depth along the ion range with the number of atomic displacements produced at that depth, thereby unfolding the entire swelling versus dose relationship from a single stereo view. A later version of this system incorporating computer-controlled stereo display capabilities is now being built
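
    Quantitative stereo measurements of this kind rest on a simple parallax relation: for a pair tilted by ±θ about an axis perpendicular to the beam, a height separation h between two features produces an image parallax P ≈ 2·M·h·sin θ at magnification M. A minimal sketch of inverting that commonly quoted relation (the numeric values are illustrative, not from this report):

```python
import math

def stereo_height(parallax, magnification, half_tilt_deg):
    """Height separation of two features from their measured parallax in a
    stereo pair tilted by +/- half_tilt_deg (from P = 2*M*h*sin(theta))."""
    return parallax / (2.0 * magnification * math.sin(math.radians(half_tilt_deg)))

# e.g. 1 mm of measured parallax at 20,000x with a +/-5 degree tilt
h = stereo_height(1e-3, 20000, 5.0)
```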

  5. Efficient Stereoscopic Video Matching and Map Reconstruction for a Wheeled Mobile Robot

    Directory of Open Access Journals (Sweden)

    Oscar Montiel-Ross

    2012-10-01

    Full Text Available This paper presents a novel method to achieve stereoscopic vision for mobile robot (MR) navigation, with the advantage of not needing camera calibration for depth (distance) estimation. It uses the concept of the adaptive candidate matching window for stereoscopic correspondence in block matching, resulting in improvements in efficiency and accuracy. An average time reduction of 40% in the calculation process is obtained. All the algorithms for navigation, including the stereoscopic vision module, were implemented using an original computer architecture on a Virtex 5 FPGA, where a distributed multicore processor system was embedded and coordinated using the Message Passing Interface.
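
    The block-matching step at the heart of such stereoscopic correspondence can be sketched as a plain sum-of-absolute-differences search along the image row (fixed square windows here; the paper's adaptive candidate matching window and FPGA mapping are not reproduced):

```python
def sad(left, right, y, xl, xr, half):
    """Sum of absolute differences between the block centred at (y, xl)
    in the left image and the block centred at (y, xr) in the right."""
    return sum(abs(left[y + dy][xl + dx] - right[y + dy][xr + dx])
               for dy in range(-half, half + 1)
               for dx in range(-half, half + 1))

def disparity(left, right, y, x, max_disp=8, half=1):
    """Best disparity for pixel (y, x): horizontal shift of the matching
    block in the right image (rectified images assumed)."""
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        if x - d - half < 0:        # block would leave the right image
            break
        cost = sad(left, right, y, x, x - d, half)
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Synthetic rectified pair: a bright patch shifted 2 pixels between views
W, H = 12, 5
left  = [[9 if 6 <= c <= 7 and 1 <= r <= 3 else 0 for c in range(W)] for r in range(H)]
right = [[9 if 4 <= c <= 5 and 1 <= r <= 3 else 0 for c in range(W)] for r in range(H)]
d = disparity(left, right, 2, 6)    # -> 2
```

    Larger disparity means a nearer object, which is how such a system estimates distance without full camera calibration.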

  6. [Dendrobium officinale stereoscopic cultivation method].

    Science.gov (United States)

    Si, Jin-Ping; Dong, Hong-Xiu; Liao, Xin-Yan; Zhu, Yu-Qiu; Li, Hui

    2014-12-01

    The study aimed to make the most of the available space in Dendrobium officinale cultivation facilities, reveal the variation in yield and functional components of stereoscopically cultivated D. officinale, and improve quality, yield and efficiency. The agronomic traits and yield variation of stereoscopically cultivated D. officinale were studied in a field experiment. The contents of polysaccharide and extractum were determined using the phenol-sulfuric acid method and the 2010 edition of the "Chinese Pharmacopoeia", Appendix X A. The results showed that land utilization under stereoscopic cultivation increased 2.74 times, and that the stems, leaves and their total fresh and dry weights per unit area were all greater than those of ground-cultivated plants. There was no significant difference in polysaccharide content between stereoscopic and ground cultivation, but the extractum content and the total content of polysaccharide and extractum were significantly higher. In addition, the polysaccharide content and the total content of polysaccharide and extractum from the top two levels of the stereoscopic culture matrix were significantly higher than those from the other levels and from ground cultivation. Stereoscopic cultivation can effectively improve the utilization of space and the yield, while the total content of polysaccharide and extractum was significantly higher than that of ground-cultivated plants. The significant differences in Dendrobium polysaccharides among plants from different heights of the stereoscopic culture matrix may be associated with light.

  7. Stereoscopically Observing Manipulative Actions.

    Science.gov (United States)

    Ferri, S; Pauwels, K; Rizzolatti, G; Orban, G A

    2016-08-01

    The purpose of this study was to investigate the contribution of stereopsis to the processing of observed manipulative actions. To this end, we first combined the factors "stimulus type" (action, static control, and dynamic control), "stereopsis" (present, absent) and "viewpoint" (frontal, lateral) into a single design. Four sites in premotor, retro-insular (2) and parietal cortex operated specifically when actions were viewed stereoscopically and frontally. A second experiment clarified that the stereo-action-specific regions were driven by actions moving out of the frontoparallel plane, an effect amplified by frontal viewing in premotor cortex. Analysis of single voxels and their discriminatory power showed that the representation of action in the stereo-action-specific areas was more accurate when stereopsis was active. Further analyses showed that the 4 stereo-action-specific sites form a closed network converging onto the premotor node, which connects to parietal and occipitotemporal regions outside the network. Several of the specific sites are known to process vestibular signals, suggesting that the network combines observed actions in peripersonal space with gravitational signals. These findings have wider implications for the function of premotor cortex and the role of stereopsis in human behavior. © The Author 2016. Published by Oxford University Press.

  8. Crosstalk evaluation in stereoscopic displays

    NARCIS (Netherlands)

    Wang, L.; Teunissen, C.; Tu, Yan; Chen, Li; Zhang, P.; Zhang, T.; Heynderickx, I.E.J.

    2011-01-01

    Substantial progress in liquid-crystal display and polarization film technology has enabled several types of stereoscopic displays. Despite all progress, some image distortions still exist in these 3-D displays, of which interocular crosstalk - light leakage of the image for one eye to the other eye

  9. Stereoscopic medical imaging collaboration system

    Science.gov (United States)

    Okuyama, Fumio; Hirano, Takenori; Nakabayasi, Yuusuke; Minoura, Hirohito; Tsuruoka, Shinji

    2007-02-01

    The computerization of clinical records and the arrival of multimedia have improved medical services in medical facilities. It is very important for patients to receive comprehensible informed consent, so the doctor should plainly explain the purpose and content of diagnoses and treatments to the patient. We propose and design a Telemedicine Imaging Collaboration System which presents three-dimensional medical images, such as X-ray CT and MRI, as stereoscopic images using a virtual common information space, and allows the images to be operated from a remote location. The system is composed of two personal computers, two 15-inch stereoscopic parallax-barrier LCD displays (LL-151D, Sharp), one 1 Gbps router and 1000base LAN cables. The software comprises a DICOM-format data-transfer program, an image-operation program, a communication program between the two personal computers and a real-time rendering program. Two identical images of 512×768 pixels are displayed on the two stereoscopic LCD displays, and both images can be expanded or reduced by mouse operation. This system can offer a comprehensible three-dimensional image of the diseased part, so the doctor and the patient can easily understand it, depending on their needs.

  10. METHOD FOR DETERMINING THE SPATIAL COORDINATES IN THE ACTIVE STEREOSCOPIC SYSTEM

    Directory of Open Access Journals (Sweden)

    Valery V. Korotaev

    2014-11-01

    Full Text Available The paper deals with the structural scheme of an active stereoscopic system and the algorithm of its operation, providing fast calculation of spatial coordinates. The system includes two identical cameras, forming a stereo pair, and a laser scanner, which provides vertical scanning of the space in front of the system by the laser beam. A separate synchronizer provides synchronous operation of the two cameras. The developed algorithm of the system operation is implemented in MATLAB. In the proposed algorithm, the influence of background light is eliminated by interframe processing. The algorithm is based on precomputation of coordinates for epipolar lines and corresponding points in the stereoscopic image. These data are used for quick calculation of the three-dimensional coordinates of points that form the three-dimensional images of objects. An experiment on a physical model is described. Experimental results confirm the efficiency of the proposed active stereoscopic system and its operation algorithm. The proposed scheme of the active stereoscopic system and the method for calculating spatial coordinates can be recommended for the creation of stereoscopic systems operating in real time and at high processing speed: devices for face recognition, systems for position control of railway track, and automobile active-safety systems.
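
    Once a correspondence on an epipolar line is found, the spatial coordinates follow from closed-form triangulation. A minimal sketch for an idealized rectified stereo pair (parallel identical cameras; the focal length, baseline and pixel coordinates below are illustrative, not the system's actual calibration):

```python
def triangulate(xl, yl, xr, f_px, baseline_m, cx, cy):
    """3-D point (metres, camera frame) from one correspondence in a
    rectified stereo pair: (xl, yl) in the left image matches (xr, yl)
    in the right. f_px is the focal length in pixels, baseline_m the
    camera separation, and (cx, cy) the principal point."""
    d = xl - xr                      # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity")
    Z = f_px * baseline_m / d        # depth
    X = (xl - cx) * Z / f_px
    Y = (yl - cy) * Z / f_px
    return X, Y, Z

# Illustrative: f = 800 px, 10 cm baseline, principal point (320, 320)
X, Y, Z = triangulate(420, 320, 380, 800, 0.1, 320, 320)
# -> X = 0.25 m, Y = 0.0 m, Z = 2.0 m
```

    Precomputing the epipolar geometry, as the paper does, reduces the per-point work at run time to exactly this kind of closed-form evaluation.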

  11. Stereoscopic game design and evaluation

    Science.gov (United States)

    Rivett, Joe; Holliman, Nicolas

    2013-03-01

    We report on a new game design where the goal is to make the stereoscopic depth cue sufficiently critical to success that game play should become impossible without a stereoscopic 3D (S3D) display, and, at the same time, we investigate whether S3D game play is affected by screen size. Before detailing the new game design we review previously unreported results from our stereoscopic game research over the last ten years at the Durham Visualisation Laboratory. This demonstrates that game players can achieve significantly higher scores using S3D displays when depth judgements are an integral part of the game. Method: We design a game where almost all depth cues apart from the binocular cue are removed. The aim of the game is to steer a spaceship through a series of oncoming hoops. The viewpoint of the game player is from above, with the hoops moving right to left across the screen towards the spaceship; to play the game it is essential to make decisive depth judgements to steer the spaceship through each oncoming hoop. To confound these judgements we alter other depth cues; for example, perspective is reduced as a cue by varying each hoop's depth, radius and cross-sectional size. Results: Players were screened for stereoscopic vision, given a short practice session, and then played the game in both 2D and S3D modes on a seventeen-inch desktop display; on average, participants achieved a more than three times higher score in S3D than in 2D. The same experiment was repeated using a four-metre S3D projection screen, and similar results were found. Conclusions: Games that use the binocular depth cue in decisive game judgements can benefit significantly from an S3D display. Based on both our current and previous results we additionally conclude that display size, from cell-phone to desktop to projection display, does not adversely affect player performance.

  12. 21 CFR 886.1870 - Stereoscope.

    Science.gov (United States)

    2010-04-01

    ... exercises of eye muscles. (b) Classification. Class I (general controls). The AC-powered device and the... Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1870 Stereoscope. (a) Identification. A stereoscope is an AC...

  13. High-Definition 3D Stereoscopic Microscope Display System for Biomedical Applications

    Directory of Open Access Journals (Sweden)

    Yoo Kwan-Hee

    2010-01-01

    Full Text Available Biomedical research has been performed using advanced information techniques, and high-quality micro stereo images have been used by researchers and doctors for various aims in biomedical research and surgery. Many devices have been developed to visualize such stereo images; however, these devices are difficult for junior doctors to learn and demanding for experienced surgeons to supervise. In this paper, we describe the development of a high-definition (HD) three-dimensional (3D) stereoscopic imaging display system for operating a microscope or experimenting on animals. The system consists of a stereoscopic camera, an image-processing device for stereoscopic video recording, and a stereoscopic display. In order to reduce eyestrain and viewer fatigue, we use a preexisting stereo-microscope structure and a polarized-light stereoscopic display method that does not reduce the quality of the stereo images. The developed system can overcome the discomfort of the eyepiece and the eyestrain caused by use over a long period of time.

  14. Brief history of electronic stereoscopic displays

    Science.gov (United States)

    Lipton, Lenny

    2012-02-01

    A brief history of recent developments in electronic stereoscopic displays is given, concentrating on products that have succeeded in the marketplace and hence have had a significant influence on subsequent implementations. The focus is on plano-stereoscopic (two-view) technology because it is now the dominant display modality in the marketplace. Stereoscopic displays were created for the motion picture industry a century ago, and that technology influenced the development of products for science and industry, which in turn influenced product development for entertainment.

  15. Stereoscopic augmented reality with pseudo-realistic global illumination effects

    Science.gov (United States)

    de Sorbier, Francois; Saito, Hideo

    2014-03-01

    Recently, augmented reality has become very popular and has appeared in our daily life in gaming, guidance systems and mobile-phone applications. However, inserting objects in such a way that their appearance seems natural is still an issue, especially in an unknown environment. This paper presents a framework that demonstrates the capabilities of Kinect for convincing augmented reality in an unknown environment. Rather than pre-computing a reconstruction of the scene, as proposed by most previous methods, we propose a dynamic capture of the scene that adapts to live changes of the environment. Our approach, based on the update of an environment map, can also detect the position of the light sources. Combining information from the environment map, the light sources and the camera tracking, we can display virtual objects on stereoscopic devices with global illumination effects such as diffuse and mirror reflections, refractions and shadows in real time.

  16. Two Eyes, 3D: Stereoscopic Design Principles

    Science.gov (United States)

    Price, Aaron; Subbarao, M.; Wyatt, R.

    2013-01-01

    Two Eyes, 3D is an NSF-funded research project about how people perceive highly spatial objects shown in 2D or stereoscopic ("3D") representations. As part of the project, we produced a short film about SN 2011fe. The high-definition film has been rendered in both 2D and stereoscopic formats. It was developed according to a set of stereoscopic design principles we derived from the literature and from past experience producing and studying stereoscopic films. Study participants take a pre- and post-test that involves a spatial cognition assessment and scientific knowledge questions about Type Ia supernovae. For the evaluation, participants use iPads so that spatial manipulation of the device can be recorded and elements of embodied cognition observed. We will present early results and also describe the stereoscopic design principles and the rationale behind them. All of our content and software is available under open-source licenses. More information is at www.twoeyes3d.org.

  17. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    Science.gov (United States)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  18. 21 CFR 886.1880 - Fusion and stereoscopic target.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Fusion and stereoscopic target. 886.1880 Section... (CONTINUED) MEDICAL DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1880 Fusion and stereoscopic target. (a) Identification. A fusion and stereoscopic target is a device intended for use as a viewing object...

  19. Stereoscopic display in a slot machine

    Science.gov (United States)

    Laakso, M.

    2012-03-01

    This paper reports the results of a user trial with a slot machine equipped with a stereoscopic display. The main research question was: what kind of added value does stereoscopic 3D (S-3D) bring to slot games? After a thorough literature survey, a novel gaming platform was designed and implemented. The existing multi-game slot machine "Nova" was converted to "3DNova" by replacing the monitor with an S-3D display and converting six original games to S-3D format. To evaluate the system, several 3DNova machines were made available to players for four months. Both qualitative and quantitative analyses were carried out on statistical values, questionnaires and observations. According to the results, people find the S-3D concept interesting, but the technology is not optimal yet. Young adults and adults were fascinated by the system; older people were more cautious. The need to wear stereoscopic glasses is a particular challenge; the ultimate system would probably use autostereoscopic technology. The games should also be designed to utilize the display's full power. The main contributions of this paper are lessons learned from creating an S-3D slot machine platform and novel information about human factors related to stereoscopic slot-machine gaming.

  20. Visual discomfort in stereoscopic displays : a review

    NARCIS (Netherlands)

    Lambooij, M.T.M.; IJsselsteijn, W.A.; Heynderickx, I.E.J.; Woods, A.J.; Merritt, J.O.; Bolas, M.T.; McDowall, I.E.

    2007-01-01

    Visual discomfort has been the subject of considerable research in relation to stereoscopic and autostereoscopic displays, but remains an ambiguous concept used to denote a variety of subjective symptoms potentially related to different underlying processes. In this paper we clarify the importance

  2. Teaching with Stereoscopic Video: Opportunities and Challenges

    Science.gov (United States)

    Variano, Evan

    2017-11-01

    I will present my work on creating stereoscopic videos for fluid pedagogy. I discuss a variety of workflows for content creation and a variety of platforms for content delivery. I review the qualitative lessons learned when teaching with this material, and discuss the outlook for the future. This work was partially supported by NSF award ENG-1604026 and the UC Berkeley Student Technology Fund.

  3. Digital stereoscopic photography using StereoData Maker

    Science.gov (United States)

    Toeppen, John; Sykes, David

    2009-02-01

    Stereoscopic digital photography has become much more practical with the use of USB wired connections between a pair of Canon cameras using StereoData Maker software for precise synchronization. StereoPhoto Maker software is now used to automatically combine and align right and left image files to produce a stereo pair. Side by side images are saved as pairs and may be viewed using software that converts the images into the preferred viewing format at the time of display. Stereo images may be shared on the internet, displayed on computer monitors, autostereo displays, viewed on high definition 3D TVs, or projected for a group. Stereo photographers are now free to control composition using point and shoot settings, or are able to control shutter speed, aperture, focus, ISO, and zoom. The quality of the output depends on the developed skills of the photographer as well as their understanding of the software, human vision and the geometry they choose for their cameras and subjects. Observers of digital stereo images can zoom in for greater detail and scroll across large panoramic fields with a few keystrokes. The art, science, and methods of taking, creating and viewing digital stereo photos are presented in a historic and developmental context in this paper.
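
    The side-by-side pairing that StereoPhoto Maker automates is, at its core, a concatenation of left and right image rows. A toy sketch (rows of pixel values stand in for real image data; this is not StereoPhoto Maker's code):

```python
def side_by_side(left, right):
    """Join equal-sized left/right images (lists of pixel rows) into one
    side-by-side frame: left half first, for parallel viewing."""
    if len(left) != len(right):
        raise ValueError("images must have the same height")
    return [l_row + r_row for l_row, r_row in zip(left, right)]

def crossed(left, right):
    """Cross-eyed viewing variant: the right image goes on the left."""
    return side_by_side(right, left)

left  = [[1, 2], [3, 4]]
right = [[5, 6], [7, 8]]
sbs = side_by_side(left, right)    # -> [[1, 2, 5, 6], [3, 4, 7, 8]]
```

    Viewing software then converts such a stored pair into whatever display format (anaglyph, interlaced, frame-sequential) the output device needs.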

  4. Is eye damage caused by stereoscopic displays?

    Science.gov (United States)

    Mayer, Udo; Neumann, Markus D.; Kubbat, Wolfgang; Landau, Kurt

    2000-05-01

    A normally developing child will achieve emmetropia in youth and maintain it; the cornea, lens and axial length of the eye grow in astonishing coordination. In recent years, research has shown that this coordinated growth process is a visually controlled closed loop. The mechanism has been studied particularly in animals. It was found that the growth of the axial length of the eyeball is controlled by image-focus information from the retina, and that maladjustments of this visually guided growth-control mechanism can occur and result in ametropia. It has thereby been proven that short-sightedness, for example, is not only caused by heredity but can be acquired under certain visual conditions. These conditions are shown to be similar to the conditions of viewing stereoscopic displays, where the normal accommodation-convergence coupling is broken. An evaluation is given of the potential for eye damage from viewing stereoscopic displays, and different viewing methods for stereoscopic displays are compared. Moreover, guidance is given on how the environment and display conditions should be set, and which users should be chosen, to minimize the risk of eye damage.

  5. Usability of stereoscopic view in teleoperation

    Science.gov (United States)

    Boonsuk, Wutthigrai

    2015-03-01

    Recently, there has been tremendous growth in the area of 3D stereoscopic visualization. 3D stereoscopic visualization technology has been used in a growing number of consumer products such as 3D televisions and 3D glasses for gaming systems. This technology builds on the fact that the human brain develops depth perception by combining information from the two eyes: the left and right images on the retinas are fused and depth information is extracted. Therefore, viewing two video images taken a slight distance apart, as shown in Figure 1, can create an illusion of depth [8]. Proponents of this technology argue that the stereo view of 3D visualization increases user immersion and performance, as more information is gained through 3D vision compared to a 2D view. However, it is still uncertain whether the additional information gained from 3D stereoscopic visualization can actually improve user performance in real-world situations such as teleoperation.
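
    The depth illusion described above can be quantified with textbook viewing geometry: for eye separation e and viewing distance D, a fused point appears at depth Z when the on-screen parallax is P = e(Z − D)/Z. A sketch with illustrative parameters (the 65 mm eye separation and 0.6 m viewing distance are assumptions, not values from the paper):

```python
def screen_parallax_m(z_m, viewing_distance_m=0.6, eye_sep_m=0.065):
    """On-screen parallax (metres) that makes a fused point appear at
    depth z_m from the viewer. Positive = uncrossed (behind the screen),
    negative = crossed (in front of it)."""
    if z_m <= 0:
        raise ValueError("depth must be positive")
    return eye_sep_m * (z_m - viewing_distance_m) / z_m

p_behind = screen_parallax_m(1.2)   # point behind the screen, P = 32.5 mm
p_front  = screen_parallax_m(0.3)   # point in front, P = -65 mm (crossed)
```

    Note that parallax can never exceed the eye separation for uncrossed fusion, which is one geometric reason comfortable depth budgets on stereoscopic displays are limited.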

  6. A 3-D mixed-reality system for stereoscopic visualization of medical dataset.

    Science.gov (United States)

    Ferrari, Vincenzo; Megali, Giuseppe; Troia, Elena; Pietrabissa, Andrea; Mosca, Franco

    2009-11-01

    We developed a simple, light, and cheap 3-D visualization device based on mixed reality that can be used by physicians to see preoperative radiological exams in a natural way. The system allows the user to see stereoscopic "augmented images," created by mixing 3-D virtual models of anatomy, obtained by processing preoperative volumetric radiological images (computed tomography or MRI), with live images of the patient grabbed by cameras. The interface of the system consists of a head-mounted display equipped with two high-definition cameras. The cameras are mounted in correspondence with the user's eyes and grab live images of the patient from the same point of view as the user. The system does not use any external tracker to detect movements of the user or the patient; the movements of the user's head and the alignment of the virtual patient with the real one are handled using machine-vision methods applied to pairs of live images. Experimental results concerning frame rate and alignment precision between the virtual and real patient demonstrate that the machine-vision methods used for localization are appropriate for this application, and that systems based on stereoscopic mixed reality are feasible and can be proficiently adopted in clinical practice.

  7. Stereoscopic Machine-Vision System Using Projected Circles

    Science.gov (United States)

    Mackey, Jeffrey R.

    2010-01-01

    A machine-vision system capable of detecting obstacles large enough to damage or trap a robotic vehicle is undergoing development. The system includes (1) a pattern generator that projects concentric circles of laser light forward onto the terrain, (2) a stereoscopic pair of cameras that are aimed forward to acquire images of the circles, (3) a frame grabber and digitizer for acquiring image data from the cameras, and (4) a single-board computer that processes the data. The system is being developed as a prototype of machine-vision systems to enable robotic vehicles ("rovers") on remote planets to avoid craters, large rocks, and other terrain features that could capture or damage the vehicles. Potential terrestrial applications of systems like this one could include terrain mapping, collision avoidance, navigation of robotic vehicles, mining, and robotic rescue. This system is based partly on the same principles as those of a prior stereoscopic machine-vision system in which the cameras acquire images of a single stripe of laser light that is swept forward across the terrain. However, this system is designed to afford improvements over some of the undesirable features of the prior system, including the need for a pan-and-tilt mechanism to aim the laser to generate the swept stripe, ambiguities in interpretation of the single-stripe image, the time needed to sweep the stripe across the terrain and process the data from many images acquired during that time, and difficulty of calibration because of the narrowness of the stripe. In this system, the pattern generator does not contain any moving parts and need not be mounted on a pan-and-tilt mechanism: the pattern of concentric circles is projected steadily in the forward direction. The system calibrates itself by use of data acquired during projection of the concentric-circle pattern onto a known target representing flat ground. The calibration-target image data are stored in the computer memory for use as a

  8. Measurement of rotation and strain-rate tensors by using stereoscopic PIV

    DEFF Research Database (Denmark)

    Özcan, O.; Meyer, Knud Erik; Larsen, Poul Scheel

    2004-01-01

    A simple technique is described for measuring the mean rate-of-displacement (velocity gradient) tensor in a plane by using a conventional stereoscopic PIV system. The technique involves taking PIV data in two or three closely-spaced parallel planes at different times. All components of the mean...... are presented to show the applicability of the proposed technique. The PIV cameras and light sheet optics shown in Fig. 1a are mounted on the same traverse mechanism in order to displace the measurement plane accurately. Data obtained in constant-y and -z planes are presented. Fig. 1b shows a contour plot...

  9. Human factors involved in perception and action in a natural stereoscopic world: an up-to-date review with guidelines for stereoscopic displays and stereoscopic virtual reality (VR)

    Science.gov (United States)

    Perez-Bayas, Luis

    2001-06-01

    In stereoscopic perception of a three-dimensional world, binocular disparity might be thought of as the most important cue to 3D depth perception. Nevertheless, in reality many other factors are involved before the 'final' conscious and subconscious stereoscopic percept, such as luminance, contrast, orientation, color, motion, and figure-ground extraction (the pop-out phenomenon). In addition, more complex perceptual factors exist, such as attention and its duration (an equivalent of 'brain zooming') in relation to physiological central vision, in opposition to attention to peripheral vision, and the brain's 'top-down' information in relation to psychological factors like memory of previous experiences and present emotions. The brain's internal mapping of a purely perceptual world might be different from the internal mapping of a visual-motor space, which represents an 'action-directed perceptual world.' In addition, psychological factors (emotions and fine adjustments) are much more involved in a stereoscopic world than in a flat 2D world, as well as in a world using peripheral vision (like VR, which uses a curved perspective representation and displays, as natural vision does) as opposed to one presenting only central vision (bi-macular stereoscopic vision), as in the majority of typical stereoscopic displays. Presented here is the most recent and precise information available about the psycho-neuro-physiological factors involved in the perception of a stereoscopic three-dimensional world, with an attempt to give practical, functional, and pertinent guidelines for building more 'natural' stereoscopic displays.

  10. Digital stereoscopic cinema: the 21st century

    Science.gov (United States)

    Lipton, Lenny

    2008-02-01

    Over 1000 theaters in more than a dozen countries have been outfitted with digital projectors using the Texas Instruments DLP engine equipped to show field-sequential 3-D movies using the polarized method of image selection. Shuttering eyewear and advanced anaglyph products are also being deployed for image selection. Many studios are in production with stereoscopic films, and some have committed to producing their entire output of animated features in 3-D. This is a time of technology change for the motion picture industry.

  11. Peculiarities of perception of stereoscopic radiation images in full colour

    International Nuclear Information System (INIS)

    Mamchev, G.V.

    1994-01-01

    The principles of coloring stereoscopic radiation images to enhance the discrimination of their three-dimensional structure are discussed. The results of analytical and experimental studies estimating the effect of stereoscopic image chromaticity on the accuracy of metric operations in three-dimensional space are given. 5 refs., 1 fig., 1 tab

  12. Stereoscopic 3D video games and their effects on engagement

    Science.gov (United States)

    Hogue, Andrew; Kapralos, Bill; Zerebecki, Chris; Tawadrous, Mina; Stanfield, Brodie; Hogue, Urszula

    2012-03-01

    With television manufacturers developing low-cost stereoscopic 3D displays, a large number of consumers will undoubtedly have access to 3D-capable televisions at home. The availability of 3D technology places the onus on content creators to develop interesting and engaging content. While the technology of stereoscopic displays and content generation is well understood, there are many questions yet to be answered surrounding its effects on the viewer. The effects of stereoscopic display on passive viewers of film are known; however, video games are fundamentally different, since the viewer/player is actively (rather than passively) engaged in the content. Questions of how stereoscopic viewing affects interaction mechanics have previously been studied in the context of player performance, but very few attempts have been made to quantify the player experience and determine whether stereoscopic 3D has a positive or negative influence on overall engagement. In this paper we present a preliminary study of the effects stereoscopic 3D has on player engagement in video games. Participants played a video game in two conditions, traditional 2D and stereoscopic 3D, and their engagement was quantified using a previously validated self-reporting tool. The results suggest that S3D has a positive effect on immersion, presence, flow, and absorption.

  13. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, Ul; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. Supports are swingably mounted upon a column one above the other

  14. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position-sensitive radiation detector, the novel system can produce superior images to those of conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  15. Picosecond camera

    International Nuclear Information System (INIS)

    Decroisette, Michel

    A Kerr cell, activated by the infrared pulses of a mode-locked Nd:glass laser, acts as an ultra-fast periodic shutter with an opening time of a few picoseconds. Associated with an S.T.L. camera, it gives rise to a picosecond camera allowing very fast effects to be studied

  16. Psychometric Assessment of Stereoscopic Head-Mounted Displays

    Science.gov (United States)

    2016-06-29

    Journal article; dates covered: Jan 2015 - Dec 2015. Stereoscopic head-mounted displays were used to render an immersive three-dimensional constructive environment. The purpose of this effort was to quantify the impact of aircrew vision on simulated tasks requiring precise depth discrimination. This work will provide an example validation method for future stereoscopic virtual immersive environments.

  17. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column, one above the other, through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head

  18. GEOMETRIC AND REFLECTANCE SIGNATURE CHARACTERIZATION OF COMPLEX CANOPIES USING HYPERSPECTRAL STEREOSCOPIC IMAGES FROM UAV AND TERRESTRIAL PLATFORMS

    Directory of Open Access Journals (Sweden)

    E. Honkavaara

    2016-06-01

    Light-weight hyperspectral frame cameras represent novel developments in remote sensing technology. With frame camera technology, when capturing images with stereoscopic overlaps, it is possible to derive 3D hyperspectral reflectance information and 3D geometric data of targets of interest, which enables detailed geometric and radiometric characterization of the object. These technologies are expected to provide efficient tools in various environmental remote sensing applications, such as canopy classification, canopy stress analysis, precision agriculture, and urban material classification. Furthermore, these data sets enable advanced quantitative, physically based retrieval of biophysical and biochemical parameters by model inversion technologies. The objective of this investigation was to study the aspects of capturing hyperspectral reflectance data from unmanned airborne vehicle (UAV) and terrestrial platforms with novel hyperspectral frame cameras in a complex, forested environment.

  19. Interactive floating windows: a new technique for stereoscopic video games

    Science.gov (United States)

    Zerebecki, Chris; Stanfield, Brodie; Tawadrous, Mina; Buckstein, Daniel; Hogue, Andrew; Kapralos, Bill

    2012-03-01

    The film industry has a long history of creating compelling experiences in stereoscopic 3D. Recently, the video game as an artistic medium has matured into an effective way to tell engaging and immersive stories. Given the current push to bring stereoscopic 3D technology into the consumer market there is considerable interest to develop stereoscopic 3D video games. Game developers have largely ignored the need to design their games specifically for stereoscopic 3D and have thus relied on automatic conversion and driver technology. Game developers need to evaluate solutions used in other media, such as film, to correct perceptual problems such as window violations, and modify or create new solutions to work within an interactive framework. In this paper we extend the dynamic floating window technique into the interactive domain enabling the player to position a virtual window in space. Interactively changing the position, size, and the 3D rotation of the virtual window, objects can be made to 'break the mask' dramatically enhancing the stereoscopic effect. By demonstrating that solutions from the film industry can be extended into the interactive space, it is our hope that this initiates further discussion in the game development community to strengthen their story-telling mechanisms in stereoscopic 3D games.

  20. Broadcast-quality-stereoscopic video in a time-critical entertainment and corporate environment

    Science.gov (United States)

    Gay, Jean-Philippe

    1995-03-01

    "reality present: Peter Gabriel and Cirque du Soleil" is a 12 minute original work directed and produced by Doug Brown, Jean-Philippe Gay & A. Coogan, which showcases creative content applications of commercial stereoscopic video equipment. For production, a complete equipment package including a Steadicam mount was used in support of the Ikegami LK-33 camera. Remote production units were fielded in the time-critical, on-stage and off-stage environments of 2 major live concerts: Peter Gabriel's Secret World performance at the San Diego Sports Arena, and Cirque du Soleil's Saltimbanco performance in Chicago. Twin 60 Hz video channels were captured on Beta SP for maximum post-production flexibility. Digital post-production and field-sequential mastering were effected in D-2 format at studio facilities in Los Angeles. The program had its world premiere before large public audiences at the World of Music, Arts and Dance festivals in Los Angeles and San Francisco in late 1993. It was presented to the artists in Los Angeles, Montreal and Washington D.C. Additional presentations have been made using a broad range of commercial and experimental stereoscopic video equipment, including projection systems, LCD and passive eyewear, and digital signal processors. Technical packages for live presentation have been fielded on site and off, through to the present.

  1. Calculation of 3D Coordinates of a Point on the Basis of a Stereoscopic System

    Science.gov (United States)

    Mussabayev, R. R.; Kalimoldayev, M. N.; Amirgaliyev, Ye. N.; Tairova, A. T.; Mussabayev, T. R.

    2018-05-01

    The solution of the three-dimensional (3D) coordinate calculation task for a material point is considered. Two flat images (a stereopair) which correspond to the left and right viewpoints of a 3D scene are used for this purpose. The stereopair is obtained using two cameras with parallel optical axes. Analytical formulas for calculating the 3D coordinates of a material point in the scene were obtained from an analysis of the optical and geometrical schemes of the stereoscopic system. The algorithmic and hardware realization of the method is presented in detail, and a practical module is recommended for determining the unknown parameters of the optical system. A series of experimental investigations was conducted to verify the theoretical results. In these experiments, minor inaccuracies arose from spatial distortions in the optical system and from its discreteness. With a high-quality stereoscopic system, the remaining calculation inaccuracy is small enough to allow the method to be applied to a wide range of practical tasks.
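For the parallel-axis geometry described in this record, the depth of a point follows directly from the disparity between its left and right image coordinates. A minimal sketch of that triangulation under the standard pinhole model (the function name and units are illustrative, not the authors' exact formulas):

```python
def triangulate(xl, yl, xr, f, b):
    """Recover the 3D coordinates of a point from a stereopair taken by two
    cameras with parallel optical axes, separated by baseline b; f is the
    focal length, and image coordinates are measured from each camera's
    principal point (left camera: xl, yl; right camera: xr)."""
    d = xl - xr                 # disparity, in the same units as f
    if d == 0:
        raise ValueError("zero disparity: point at infinity")
    z = f * b / d               # depth along the optical axis
    x = xl * z / f              # lateral offset in the left-camera frame
    y = yl * z / f
    return x, y, z
```

With f = 1000 (pixels), b = 0.1 m and a 10-pixel disparity, the point lies 10 m away; since dz/dd = -fb/d², a one-pixel disparity error produces a depth error that grows quadratically with distance, consistent with the inaccuracies the authors attribute to the discreteness of the optical system.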

  2. Study of high-definition and stereoscopic head-aimed vision for improved teleoperation of an unmanned ground vehicle

    Science.gov (United States)

    Tyczka, Dale R.; Wright, Robert; Janiszewski, Brian; Chatten, Martha Jane; Bowen, Thomas A.; Skibba, Brian

    2012-06-01

    Nearly all explosive ordnance disposal robots in use today employ monoscopic standard-definition video cameras to relay live imagery from the robot to the operator. With this approach, operators must rely on shadows and other monoscopic depth cues in order to judge distances and object depths. Alternatively, they can contact an object with the robot's manipulator to determine its position, but that approach carries with it the risk of detonation from unintentionally disturbing the target or nearby objects. We recently completed a study in which high-definition (HD) and stereoscopic video cameras were used in addition to conventional standard-definition (SD) cameras in order to determine if higher resolutions and/or stereoscopic depth cues improve operators' overall performance of various unmanned ground vehicle (UGV) tasks. We also studied the effect that the different vision modes had on operator comfort. A total of six different head-aimed vision modes were used including normal-separation HD stereo, SD stereo, "micro" (reduced separation) SD stereo, HD mono, and SD mono (two types). In general, the study results support the expectation that higher resolution and stereoscopic vision aid UGV teleoperation, but the degree of improvement was found to depend on the specific task being performed; certain tasks derived notably more benefit from improved depth perception than others. This effort was sponsored by the Joint Ground Robotics Enterprise under Robotics Technology Consortium Agreement #69-200902 T01. Technical management was provided by the U.S. Air Force Research Laboratory's Robotics Research and Development Group at Tyndall AFB, Florida.

  3. What is stereoscopic vision good for?

    Science.gov (United States)

    Read, Jenny C. A.

    2015-03-01

    Stereo vision is a resource-intensive process. Nevertheless, it has evolved in many animals including mammals, birds, amphibians and insects. It must therefore convey significant fitness benefits. It is often assumed that the main benefit is improved accuracy of depth judgments, but camouflage breaking may be as important, particularly in predatory animals. In humans, for the last 150 years, stereo vision has been turned to a new use: helping us reproduce visual reality for artistic purposes. By recreating the different views of a scene seen by the two eyes, stereo achieves unprecedented levels of realism. However, it also has some unexpected effects on viewer experience. The disruption of established mechanisms for interpreting pictures may be one reason why some viewers find stereoscopic content disturbing. Stereo vision also has uses in ophthalmology. Clinical stereoacuity tests are used in the management of conditions such as strabismus and amblyopia as well as vision screening. Stereoacuity can reveal the effectiveness of therapy and even predict long-term outcomes post surgery. Yet current clinical stereo tests fall far short of the accuracy and precision achievable in the lab. At Newcastle University, we are exploiting the recent availability of autostereo 3D tablet computers to design a clinical stereotest app in the form of a game suitable for young children. Our goal is to enable quick, accurate and precise stereoacuity measures which will enable clinicians to obtain better outcomes for children with visual disorders.

  4. Using mental rotation to evaluate the benefits of stereoscopic displays

    Science.gov (United States)

    Aitsiselmi, Y.; Holliman, N. S.

    2009-02-01

    Context: The idea behind stereoscopic displays is to create the illusion of depth, a concept with many practical applications. A common spatial ability test involves mental rotation, so a mental rotation task should be easier if undertaken on a stereoscopic screen. Aim: The aim of this project was to evaluate stereoscopic displays (3D screens) and to assess whether they are better for performing a certain task than a 2D display. A secondary aim was to perform a similar study replicating the conditions of using a stereoscopic mobile phone screen. Method: We devised a spatial ability test involving a mental rotation task that participants were asked to complete on either a 3D or a 2D screen. We also designed a similar task to simulate the experience on a stereoscopic cell phone. The participants' error rates and response times were recorded. Using statistical analysis, we then compared the error rates and response times of the groups to see if there were any significant differences. Results: We found that participants achieved better scores if they were doing the task on a stereoscopic screen as opposed to a 2D screen. However, there was no statistically significant difference in the time it took them to complete the task. We found similar results for the 3D cell phone display condition. Conclusions: The results show that the extra depth information given by a stereoscopic display makes it easier to mentally rotate a shape, as depth cues are readily available. These results could have useful implications for certain industries.
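The group comparison reported here (error rates on a 3D versus a 2D screen) is the kind of analysis that can be done with a two-sample t statistic. A minimal sketch using Welch's form, which does not assume equal variances; the sample data are invented for illustration and are not the study's measurements:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / (variance(a) / na + variance(b) / nb) ** 0.5

# Hypothetical error counts per participant (out of 20 trials).
errors_2d = [6, 5, 7, 4, 6, 5]    # flat-screen condition
errors_3d = [3, 2, 4, 3, 2, 3]    # stereoscopic condition
t = welch_t(errors_2d, errors_3d)  # large |t| suggests a real difference
```

The statistic would then be compared against a t distribution with Welch-Satterthwaite degrees of freedom to obtain a p-value.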

  5. Interlopers 3D: experiences designing a stereoscopic game

    Science.gov (United States)

    Weaver, James; Holliman, Nicolas S.

    2014-03-01

    Background In recent years 3D-enabled televisions, VR headsets and computer displays have become more readily available in the home. This presents an opportunity for game designers to explore new stereoscopic game mechanics and techniques that have previously been unavailable in monocular gaming. Aims To investigate the visual cues that are present in binocular and monocular vision, identifying which are relevant when gaming using a stereoscopic display. To implement a game whose mechanics are so reliant on binocular cues that the game becomes impossible, or at least very difficult, to play in non-stereoscopic mode. Method A stereoscopic 3D game was developed whose objective was to shoot down advancing enemies (the Interlopers) before they reached their destination. Scoring highly required players to make accurate depth judgments and target the closest enemies first. A group of twenty participants played both a basic and an advanced version of the game in both monoscopic 2D and stereoscopic 3D. Results The results show that in both the basic and the advanced game participants achieved higher scores when playing in stereoscopic 3D. The advanced game showed that disrupting the depth-from-motion cue made the game more difficult in monoscopic 2D. The results also show a certain amount of learning taking place: players were able to score higher and finish the game faster as the experiment progressed. Conclusions Although the game was not impossible to play in monoscopic 2D, participants' results show that it put them at a significant disadvantage compared to playing in stereoscopic 3D.

  6. Integrating multi-view transmission system into MPEG-21 stereoscopic and multi-view DIA (digital item adaptation)

    Science.gov (United States)

    Lee, Seungwon; Park, Ilkwon; Kim, Manbae; Byun, Hyeran

    2006-10-01

    As digital broadcasting technologies have been rapidly progressed, users' expectations for realistic and interactive broadcasting services also have been increased. As one of such services, 3D multi-view broadcasting has received much attention recently. In general, all the view sequences acquired at the server are transmitted to the client. Then, the user can select a part of views or all the views according to display capabilities. However, this kind of system requires high processing power of the server as well as the client, thus posing a difficulty in practical applications. To overcome this problem, a relatively simple method is to transmit only two view-sequences requested by the client in order to deliver a stereoscopic video. In this system, effective communication between the server and the client is one of important aspects. In this paper, we propose an efficient multi-view system that transmits two view-sequences and their depth maps according to user's request. The view selection process is integrated into MPEG-21 DIA (Digital Item Adaptation) so that our system is compatible to MPEG-21 multimedia framework. DIA is generally composed of resource adaptation and descriptor adaptation. It is one of merits that SVA (stereoscopic video adaptation) descriptors defined in DIA standard are used to deliver users' preferences and device capabilities. Furthermore, multi-view descriptions related to multi-view camera and system are newly introduced. The syntax of the descriptions and their elements is represented in XML (eXtensible Markup Language) schema. If the client requests an adapted descriptor (e.g., view numbers) to the server, then the server sends its associated view sequences. Finally, we present a method which can reduce user's visual discomfort that might occur while viewing stereoscopic video. This phenomenon happens when view changes as well as when a stereoscopic image produces excessive disparity caused by a large baseline between two cameras. 
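A request/response exchange of the kind described, where the client asks the server for two specific view numbers, can be sketched as follows. The element and attribute names here are purely illustrative; the normative descriptors are defined in XML Schema by the MPEG-21 DIA standard and are not reproduced in this record:

```python
import xml.etree.ElementTree as ET

def view_request(left, right, with_depth=True):
    """Build a hypothetical client-side descriptor asking the server to
    transmit two view sequences (and their depth maps) for stereo display."""
    req = ET.Element("MultiViewRequest")
    views = ET.SubElement(req, "RequestedViews")
    for number in (left, right):
        ET.SubElement(views, "View", number=str(number))
    ET.SubElement(req, "DepthMaps", enabled=str(with_depth).lower())
    return ET.tostring(req, encoding="unicode")

descriptor = view_request(2, 3)   # ask for views 2 and 3 plus depth maps
```

The server would parse such a descriptor and respond with only the two requested view sequences, avoiding transmission of every captured view.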

  7. Stereoscopic Planar Laser-Induced Fluorescence Imaging at 500 kHz

    Science.gov (United States)

    Medford, Taylor L.; Danehy, Paul M.; Jones, Stephen B.; Jiang, N.; Webster, M.; Lempert, Walter; Miller, J.; Meyer, T.

    2011-01-01

    A new measurement technique for obtaining time- and spatially-resolved image sequences in hypersonic flows is developed. Nitric-oxide planar laser-induced fluorescence (NO PLIF) has previously been used to investigate transition from laminar to turbulent flow in hypersonic boundary layers using both planar and volumetric imaging capabilities. Low flow rates of NO were typically seeded into the flow, minimally perturbing it. The volumetric imaging was performed at a measurement rate of 10 Hz using a thick planar laser sheet that excited NO fluorescence. The fluorescence was captured by a pair of cameras having slightly different views of the flow. Subsequent stereoscopic reconstruction of these images allowed the three-dimensional flow structures to be viewed. In the current paper, this approach has been extended to 50,000 times higher repetition rates. A laser operating at 500 kHz excites the seeded NO molecules, and a camera, synchronized with the laser and fitted with a beam-splitting assembly, acquires two separate images of the flow. The resulting stereoscopic images provide three-dimensional flow visualizations at 500 kHz for the first time. The 200 ns exposure time in each frame is fast enough to freeze the flow, while the 500 kHz repetition rate is fast enough to time-resolve changes in the flow being studied. This method is applied to visualize the evolving hypersonic flow structures that propagate downstream of a discrete protuberance attached to a flat plate. The technique was demonstrated in the NASA Langley Research Center's 31-Inch Mach 10 Air Tunnel facility. Different tunnel Reynolds number conditions, NO flow rates and two different cylindrical protuberance heights were investigated. The location of the onset of flow unsteadiness, an indicator of transition, was observed to move downstream during the tunnel runs, coinciding with an increase in the model temperature.

  8. Stereoscopic Integrated Imaging Goggles for Multimodal Intraoperative Image Guidance.

    Directory of Open Access Journals (Sweden)

    Christopher A Mela

    We have developed novel stereoscopic wearable multimodal intraoperative imaging and display systems, entitled Integrated Imaging Goggles, for guiding surgeries. The prototype systems offer real-time stereoscopic fluorescence imaging and color reflectance imaging capacity, along with in vivo handheld microscopy and ultrasound imaging. With the Integrated Imaging Goggle, both wide-field fluorescence imaging and in vivo microscopy are provided. The real-time ultrasound images can also be presented in the goggle display. Furthermore, real-time goggle-to-goggle stereoscopic video sharing is demonstrated, which can greatly facilitate telemedicine. In this paper, the prototype systems are described, characterized and tested in surgeries in biological tissues ex vivo. We have found that the system can detect fluorescent targets with as low as 60 nM indocyanine green and can resolve structures down to 0.25 mm with large-FOV stereoscopic imaging. The system has successfully guided simulated cancer surgeries in chicken. The Integrated Imaging Goggle is novel in 4 aspects: it is (a) the first wearable stereoscopic wide-field intraoperative fluorescence imaging and display system, (b) the first wearable system offering both large-FOV and microscopic imaging simultaneously,

  9. Stereoscopic radiographic images with gamma source encoding

    International Nuclear Information System (INIS)

    Strocovsky, S.G.; Otero, D

    2012-01-01

    Conventional radiography with an X-ray tube has several drawbacks, such as the compromise between the size of the focal spot and the fluence. The finite dimensions of the focal spot impose a limit on the spatial resolution. Gamma radiography uses gamma-ray sources which surpass X-ray tubes in size, portability and simplicity. However, their low intrinsic fluence forces the use of extended sources, which also degrade the spatial resolution. In this work, we show the principles of a new radiographic technique that overcomes the limitations associated with the finite dimensions of X-ray sources, and that offers additional benefits over conventional techniques. The new technique, called coding source imaging (CSI), is based on the use of extended sources, edge-encoding of radiation and differential detection. The mathematical principles and the method of image reconstruction with the newly proposed technique are explained in the present work. Analytical calculations were made to determine the maximum spatial resolution and the variables on which it depends. The CSI technique was tested by means of Monte Carlo simulations with sets of spherical objects. We show that CSI has stereoscopic capabilities and that it can resolve objects smaller than the source size. The CSI decoding algorithm reconstructs four different projections of the same object simultaneously, while conventional radiography produces only one projection per acquisition. The projections are located in separate image fields on the detector plane. Our results show it is possible to apply an extremely simple radiographic technique with extended sources and obtain 3D information on the attenuation coefficient distribution for simple-geometry objects in a single acquisition. The results are promising enough to warrant future research with more complex objects typical of medical diagnostic radiography and industrial gamma radiography (author)

  10. Scintillating camera

    International Nuclear Information System (INIS)

    Vlasbloem, H.

    1976-01-01

    The invention relates to a scintillating camera and in particular to an apparatus for determining the position coordinates of a light pulse emitting point on the anode of an image intensifier tube which forms part of a scintillating camera, comprising at least three photomultipliers which are positioned to receive light emitted by the anode screen on their photocathodes, circuit means for processing the output voltages of the photomultipliers to derive voltages that are representative of the position coordinates; a pulse-height discriminator circuit adapted to be fed with the sum voltage of the output voltages of the photomultipliers for gating the output of the processing circuit when the amplitude of the sum voltage of the output voltages of the photomultipliers lies in a predetermined amplitude range, and means for compensating the distortion introduced in the image on the anode screen
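The position computation described in this abstract, a weighted combination of photomultiplier outputs gated by a pulse-height window on their summed signal, is essentially Anger logic. A minimal sketch with hypothetical tube positions and an arbitrary energy window:

```python
def anger_position(signals, positions, window=(0.8, 1.2)):
    """Estimate the (x, y) coordinates of a scintillation event as the
    signal-weighted centroid of the photomultiplier outputs; events whose
    summed signal falls outside the pulse-height window are rejected."""
    total = sum(signals)
    if not window[0] <= total <= window[1]:
        return None          # gated out by the pulse-height discriminator
    x = sum(s * px for s, (px, py) in zip(signals, positions)) / total
    y = sum(s * py for s, (px, py) in zip(signals, positions)) / total
    return x, y

# Three tubes viewing the anode screen, at illustrative coordinates.
tubes = [(-1.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
event = anger_position([0.3, 0.3, 0.4], tubes)
```

Dividing by the summed signal makes the estimated position depend on the relative (not absolute) light distribution, which is why the pulse-height gate on the sum can be applied independently of localization.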

  11. The Role of Amodal Surface Completion in Stereoscopic Transparency

    Science.gov (United States)

    Anderson, Barton L.; Schmid, Alexandra C.

    2012-01-01

    Previous work has shown that the visual system can decompose stereoscopic textures into percepts of inhomogeneous transparency. We investigate whether this form of layered image decomposition is shaped by constraints on amodal surface completion. We report a series of experiments that demonstrate that stereoscopic depth differences are easier to discriminate when the stereo images generate a coherent percept of surface color, than when images require amodally integrating a series of color changes into a coherent surface. Our results provide further evidence for the intimate link between the segmentation processes that occur in conditions of transparency and occlusion, and the interpolation processes involved in the formation of amodally completed surfaces. PMID:23060829

  12. Gamma camera

    International Nuclear Information System (INIS)

    Reiss, K.H.; Kotschak, O.; Conrad, B.

    1976-01-01

    A gamma camera with a setup simplified compared with the state of the art is described, permitting not only good localization but also energy discrimination. Behind the usual vacuum image amplifier, a multiwire proportional chamber filled with bromotrifluoromethane is connected in series. Localization of the signals is achieved by a delay line, and energy determination by means of a pulse-height discriminator. With the aid of drawings and circuit diagrams, the setup and mode of operation are explained. (ORU)

  13. Stereoscopic radiographic images with thermal neutrons

    International Nuclear Information System (INIS)

    Silvani, M.I.; Almeida, G.L.; Rogers, J.D.; Lopes, R.T.

    2011-01-01

    The spatial structure of an object can be perceived through the stereoscopic vision provided by the eyes, or through the parallax produced by movement of the object with respect to the observer. For an opaque object, a technique to render it transparent must be used in order to make the spatial distribution of its inner structure visible with either approach. In this work, a beam of thermal neutrons at the main port of the Argonauta research reactor of the Instituto de Engenharia Nuclear in Rio de Janeiro/Brazil has been used as radiation to render the inspected objects partially transparent. A neutron-sensitive Imaging Plate has been employed as a detector; after exposure it is developed by a reader using a 0.5 μm laser beam, which defines the finest achievable spatial resolution of the acquired digital image. This image, a radiographic attenuation map of the object, does not represent any specific cross-section but a convoluted projection for each specific attitude of the object with regard to the detector. After two of these projections are taken at different object attitudes, they are properly processed and the final image is viewed through red and green eyeglasses. For monochromatic images this processing involves transforming the black-and-white radiographs into red-and-white and green-and-white ones, which are afterwards merged to yield a single image. All of the processing is carried out with the software ImageJ. Divergence of the neutron beam unfortunately spoils both spatial and contrast resolution, which become poorer as the object-detector distance increases. Therefore, in order to evaluate the range of spatial resolution corresponding to the 3D image being observed, a curve expressing spatial resolution against object-detector gap has been deduced experimentally from the Modulation Transfer Functions. Typical exposure times, under a reactor power of 170 W, were 6 min for both quantitative and qualitative measurements.
In spite of its intrinsic constraints

  14. Stereoscopic radiographic images with thermal neutrons

    Science.gov (United States)

    Silvani, M. I.; Almeida, G. L.; Rogers, J. D.; Lopes, R. T.

    2011-10-01

    The spatial structure of an object can be perceived through the stereoscopic vision provided by the eyes, or through the parallax produced by movement of the object with respect to the observer. For an opaque object, a technique to render it transparent must be used in order to make the spatial distribution of its inner structure visible with either approach. In this work, a beam of thermal neutrons at the main port of the Argonauta research reactor of the Instituto de Engenharia Nuclear in Rio de Janeiro/Brazil has been used as radiation to render the inspected objects partially transparent. A neutron-sensitive Imaging Plate has been employed as a detector; after exposure it is developed by a reader using a 0.5 μm laser beam, which defines the finest achievable spatial resolution of the acquired digital image. This image, a radiographic attenuation map of the object, does not represent any specific cross-section but a convoluted projection for each specific attitude of the object with regard to the detector. After two of these projections are taken at different object attitudes, they are properly processed and the final image is viewed through red and green eyeglasses. For monochromatic images this processing involves transforming the black-and-white radiographs into red-and-white and green-and-white ones, which are afterwards merged to yield a single image. All of the processing is carried out with the software ImageJ. Divergence of the neutron beam unfortunately spoils both spatial and contrast resolution, which become poorer as the object-detector distance increases. Therefore, in order to evaluate the range of spatial resolution corresponding to the 3D image being observed, a curve expressing spatial resolution against object-detector gap has been deduced experimentally from the Modulation Transfer Functions. Typical exposure times, under a reactor power of 170 W, were 6 min for both quantitative and qualitative measurements.
In spite of its intrinsic constraints
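The red/green merging step, done in ImageJ by the authors, reduces to assigning one projection to the red channel and the other to the green channel. A toy pure-Python sketch of just that channel assignment (grayscale images represented as nested lists of 0-255 values):

```python
def make_anaglyph(left, right):
    """Merge two equally sized grayscale projections into an RGB anaglyph:
    the left-attitude projection drives the red channel, the right-attitude
    projection the green channel; blue stays empty."""
    return [[(lv, rv, 0) for lv, rv in zip(lrow, rrow)]
            for lrow, rrow in zip(left, right)]

# Viewed through red/green eyeglasses, each eye sees only its projection.
anaglyph = make_anaglyph([[255, 0]], [[0, 128]])
```

Each eye's filter passes only its own projection, so the brain fuses the two attitudes into a single stereoscopic percept.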

  15. Gamma camera

    International Nuclear Information System (INIS)

    Berninger, W.H.

    1975-01-01

    The light pulse output of a scintillator, on which incident collimated gamma rays impinge, is detected by an array of photoelectric tubes each having a convexly curved photocathode disposed in close proximity to the scintillator. Electronic circuitry connected to outputs of the phototubes develops the scintillation event position coordinate electrical signals with good linearity and with substantial independence of the spacing between the scintillator and photocathodes so that the phototubes can be positioned as close to the scintillator as is possible to obtain less distortion in the field of view and improved spatial resolution as compared to conventional planar photocathode gamma cameras

  16. Radioisotope camera

    International Nuclear Information System (INIS)

    Tausch, L.M.; Kump, R.J.

    1978-01-01

    The electronic circuit corrects distortions caused by the distances between the individual photomultiplier tubes of the multiple-radioisotope camera on the one hand, and between the tube configuration and the scintillator plate on the other. For this purpose, the transmission characteristics of the nonlinear circuits are altered as a function of the energy of the incident radiation. By this means the threshold values between lower and higher amplification are adjusted to the energy level of each scintillation. The correcting circuit may be used for any number of isotopes to be measured. (DG)

  17. A system and method for adjusting and presenting stereoscopic content

    DEFF Research Database (Denmark)

    2013-01-01

    on the basis of one or more vision-specific parameters (ΘM, Θmax, Θmin, ΔΘ) indicating abnormal vision for the user. In this way, presenting stereoscopic content is enabled that is adjusted specifically to the given person. This may e.g. be used for training purposes or for improved...

  18. 3D Stereoscopic Visualization of Fenestrated Stent Grafts

    International Nuclear Information System (INIS)

    Sun Zhonghua; Squelch, Andrew; Bartlett, Andrew; Cunningham, Kylie; Lawrence-Brown, Michael

    2009-01-01

    The purpose of this study was to present a technique of stereoscopic visualization in the evaluation of patients with abdominal aortic aneurysm treated with fenestrated stent grafts compared with conventional 2D visualizations. Two patients with abdominal aortic aneurysm undergoing fenestrated stent grafting were selected for inclusion in the study. Conventional 2D views including axial, multiplanar reformation, maximum-intensity projection, and volume rendering and 3D stereoscopic visualizations were assessed by two experienced reviewers independently with regard to the treatment outcomes of fenestrated repair. Interobserver agreement was assessed with Kendall's W statistic. Multiplanar reformation and maximum-intensity projection visualizations were scored the highest in the evaluation of parameters related to the fenestrated stent grafting, while 3D stereoscopic visualization was scored as valuable in the evaluation of appearance (any distortions) of the fenestrated stent. Volume rendering was found to play a limited role in the follow-up of fenestrated stent grafting. 3D stereoscopic visualization adds additional information that assists endovascular specialists to identify any distortions of the fenestrated stents when compared with 2D visualizations.

  19. Size Optimization of 3D Stereoscopic Film Frames

    African Journals Online (AJOL)

    pc

    2018-03-22

    Mar 22, 2018 ... perception. Keywords: Optimization; Stereoscopic Film; 3D Frames; Aspect Ratio ... television will mature to enable the viewing of 3D films prevalent [3]. On the .... Industry Standard VFX Practices and Proced. 2014. [10] N. A. ...

  20. The rendering context for stereoscopic 3D web

    Science.gov (United States)

    Chen, Qinshui; Wang, Wenmin; Wang, Ronggang

    2014-03-01

    3D technologies on the Web have been studied for many years, but they are basically monoscopic 3D. With stereoscopic technology gradually maturing, we are researching how to integrate binocular 3D technology into the Web, creating a stereoscopic 3D browser that will provide users with a brand-new experience of human-computer interaction. In this paper, we propose a novel approach to applying stereoscopy technologies to the CSS3 3D Transforms. Under our model, each element can create or participate in a stereoscopic 3D rendering context, in which 3D Transforms such as scaling, translation and rotation can be applied and perceived in a truly 3D space. We first discuss the underlying principles of stereoscopy. After that we discuss how these principles can be applied to the Web. A stereoscopic 3D browser with backward compatibility is also created for demonstration purposes. We take advantage of the open-source WebKit project, integrating 3D display ability into the rendering engine of the web browser. For each 3D web page, our 3D browser creates two slightly different images, representing the left-eye and right-eye views, which are combined on the 3D display to generate the illusion of depth. As the results show, elements can be manipulated in a truly 3D space.
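The "two slightly different images" in a stereoscopic renderer are typically produced by shifting the camera horizontally for each eye and skewing the viewing frustum so both eyes converge on the same screen plane (off-axis projection). The record does not give the browser's actual math; the sketch below is a generic illustration, with the eye separation, screen distance and screen half-size all assumed values:

```python
def eye_frustum(eye_offset, screen_dist, near, half_w, half_h):
    """Asymmetric (off-axis) frustum bounds for one eye.

    eye_offset: signed horizontal shift of the eye from the screen
    centre (negative for the left eye, positive for the right).
    The frustum is skewed so both eyes share the same screen plane,
    which is what makes the two rendered images differ only by a
    horizontal parallax.
    Returns (left, right, bottom, top) at the near plane.
    """
    scale = near / screen_dist
    left = (-half_w - eye_offset) * scale
    right = (half_w - eye_offset) * scale
    return left, right, -half_h * scale, half_h * scale

sep = 0.065  # assumed interocular separation in metres
lL, lR, _, _ = eye_frustum(-sep / 2, screen_dist=0.6, near=0.1,
                           half_w=0.3, half_h=0.2)
rL, rR, _, _ = eye_frustum(+sep / 2, screen_dist=0.6, near=0.1,
                           half_w=0.3, half_h=0.2)
```

The two bound tuples would then feed a standard frustum projection (e.g. `glFrustum`-style) for each eye's render pass.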

  1. Flow Mapping of a Jet in Crossflow with Stereoscopic PIV

    DEFF Research Database (Denmark)

    Meyer, Knud Erik; Özcan, Oktay; Westergaard, C. H.

    2002-01-01

    Stereoscopic Particle Image Velocimetry (PIV) has been used to make a three-dimensional flow mapping of a jet in crossflow. The Reynolds number based on the free stream velocity and the jet diameter was nominally 2400. A jet-to-crossflow velocity ratio of 3.3 was used. Details of the formation...

  2. The Advanced Gamma-ray Imaging System (AGIS): Real Time Stereoscopic Array Trigger

    Science.gov (United States)

    Byrum, K.; Anderson, J.; Buckley, J.; Cundiff, T.; Dawson, J.; Drake, G.; Duke, C.; Haberichter, B.; Krawzcynski, H.; Krennrich, F.; Madhavan, A.; Schroedter, M.; Smith, A.

    2009-05-01

    Future large arrays of Imaging Atmospheric Cherenkov telescopes (IACTs) such as AGIS and CTA are conceived to comprise 50-100 individual telescopes, each having a camera with 10^3 to 10^4 pixels. To maximize the capabilities of such IACT arrays with a low energy threshold, a wide field of view and a low background rate, a sophisticated array trigger is required. We describe the design of a stereoscopic array trigger that calculates image parameters and then correlates them across a subset of telescopes. Fast Field Programmable Gate Array technology allows the use of lookup tables at the array-trigger level to form a real-time pattern-recognition trigger that capitalizes on the multiple view points of the shower at different shower core distances. A proof-of-principle system is currently under construction. It is based on 400 MHz FPGAs, and the goal is camera trigger rates of up to 10 MHz and a tunable cosmic-ray background suppression at the array level.

  3. Robust and Accurate Algorithm for Wearable Stereoscopic Augmented Reality with Three Indistinguishable Markers

    Directory of Open Access Journals (Sweden)

    Fabrizio Cutolo

    2016-09-01

    Full Text Available In the context of surgical navigation systems based on augmented reality (AR, the key challenge is to ensure the highest degree of realism in merging computer-generated elements with live views of the surgical scene. This paper presents an algorithm suited for wearable stereoscopic augmented reality video see-through systems for use in a clinical scenario. A video-based tracking solution is proposed that relies on stereo localization of three monochromatic markers rigidly constrained to the scene. A PnP-based optimization step is introduced to refine separately the pose of the two cameras. Video-based tracking methods using monochromatic markers are robust to non-controllable and/or inconsistent lighting conditions. The two-stage camera pose estimation algorithm provides sub-pixel registration accuracy. From a technological and an ergonomic standpoint, the proposed approach represents an effective solution to the implementation of wearable AR-based surgical navigation systems wherever rigid anatomies are involved.

  4. 3-D Digitization of Stereoscopic Jet-in-Crossflow Vortex Structure Images via Augmented Reality

    Science.gov (United States)

    Sigurdson, Lorenz; Strand, Christopher; Watson, Graeme; Nault, Joshua; Tucker, Ryan

    2006-11-01

    Stereoscopic images of smoke-laden vortex flows have proven useful for understanding the topology of the embedded 3-D vortex structures. Images from two cameras allow a perception of the 3-D structure via the use of red/blue eyeglasses. The human brain has an astonishing capacity to calculate and present to the observer the complex turbulent smoke volume. We have developed a technique whereby a virtual cursor is introduced to the perception, which creates an ``augmented reality.'' The perceived position of this cursor in the 3-D field can be precisely controlled by the observer. It can be brought near a characteristic vortex structure in order to digitally estimate the spatial coordinates of that feature. A calibration procedure accounts for camera positioning. Vortex tubes can be traced and recorded for later or real-time superposition of tube skeleton models. These models can be readily digitally obtained for display in graphics systems to allow complete exploration from any location or perspective. A unique feature of this technology is the use of the human brain to naturally perform the difficult computation of the shape of the translucent smoke volume. Examples are given of application to low velocity ratio and Reynolds number elevated jets-in-crossflow.

  5. Gamma camera

    International Nuclear Information System (INIS)

    Conrad, B.; Heinzelmann, K.G.

    1975-01-01

    A gamma camera is described which obviates the distortion of locating signals generally caused by the varied light-conductive capacities of the light conductors: the flow of light through each light conductor may be varied by means of a shutter. The flow of light through the individual light conductors may thus be balanced on the basis of their light-conductive properties, so as to preclude a distortion of the locating signals caused by those varied properties. Each light conductor has associated with it two shutters, independently adjustable relative to each other, of which one forms a closure member and the other an adjusting shutter. In this embodiment of the invention it is thus possible to block all of the light conductors leading to a photoelectric transducer, with the exception of those light conductors which are to be balanced. The balancing of the individual light conductors may then be obtained on the basis of the output signals of the photoelectric transducer. (auth)

  6. Scintillation camera

    International Nuclear Information System (INIS)

    Zioni, J.; Klein, Y.; Inbar, D.

    1975-01-01

    The scintillation camera produces pictures of the density distribution of radiation fields created by the injection or administration of radioactive medicaments into the body of the patient. It contains a scintillation crystal, several photomultipliers and computer circuits which derive, from the photomultiplier outputs, an analytical function dependent on the position of each scintillation in the crystal. The scintillation crystal is flat and spatially corresponds to the production site of the radiation. The photomultipliers form a pattern whose basic unit consists of at least three photomultipliers. They are assigned to at least two crossing groups of parallel series, with a reference axis in the crystal plane running perpendicular to each series group. The computer circuits are each assigned to a reference axis. For each series of a series group assigned to one of the reference axes, the computer circuit has an adder to produce a scintillation-dependent series signal. From these, the projection of the scintillation on the reference axis is calculated, using the signals originating from the two neighbouring photomultiplier series between which the scintillation appeared; these are termed the basic series. The photomultipliers can be arranged hexagonally or rectangularly. (GG/LH)

  7. Dream Home: a multiview stereoscopic interior design system

    Science.gov (United States)

    Hsiao, Fu-Jen; Teng, Chih-Jen; Lin, Chung-Wei; Luo, An-Chun; Yang, Jinn-Cherng

    2010-01-01

    In this paper, a novel multi-view stereoscopic interior design system, "Dream Home", has been developed to bring users a new interior design experience. Unlike previous interior design systems, it emphasizes intuitive manipulation and real-time multi-view stereoscopic visualization. Users can do their own interior design using just their hands and eyes, without any difficulty. They manipulate furniture cards directly to set up their living room in the model-house task space, get multi-view 3D visual feedback instantly, and re-adjust the cards until they are satisfied. No special skills are required, and users can explore their design talent freely. We hope that "Dream Home" will make interior design more user-friendly, more intuitive, and more vivid.

  8. Methodology for stereoscopic motion-picture quality assessment

    Science.gov (United States)

    Voronov, Alexander; Vatolin, Dmitriy; Sumin, Denis; Napadovsky, Vyacheslav; Borisov, Alexey

    2013-03-01

    Creating and processing stereoscopic video imposes additional quality requirements related to view synchronization. In this work we propose a set of algorithms for detecting typical stereoscopic-video problems, which appear owing to imprecise setup of capture equipment or incorrect postprocessing. We developed a methodology for analyzing the quality of S3D motion pictures and for revealing their most problematic scenes. We then processed 10 modern stereo films, including Avatar, Resident Evil: Afterlife and Hugo, and analyzed changes in S3D-film quality over the years. This work presents real examples of common artifacts (color and sharpness mismatch, vertical disparity and excessive horizontal disparity) in the motion pictures we processed, as well as possible solutions for each problem. Our results enable improved quality assessment during the filming and postproduction stages.
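One of the artifact classes named above, color mismatch between the two views, can be flagged with even a very simple statistic. The record does not describe the authors' detector, so the following is only a crude illustrative stand-in: the mean absolute difference of per-channel means, which catches a global color shift between the left-eye and right-eye images:

```python
import numpy as np

def color_mismatch(left, right):
    """Crude colour-mismatch score between two views.

    Not the authors' metric -- just the mean absolute difference of
    per-channel means, which flags a global colour shift between the
    left-eye and right-eye images of a stereo pair.
    """
    lm = np.asarray(left, float).reshape(-1, 3).mean(axis=0)
    rm = np.asarray(right, float).reshape(-1, 3).mean(axis=0)
    return float(np.abs(lm - rm).mean())

left = np.ones((8, 8, 3)) * 0.5
tinted = left.copy()
tinted[..., 0] += 0.1          # simulate a red tint in one view
score_ok = color_mismatch(left, left)
score_bad = color_mismatch(left, tinted)
```

A production detector would work on locally matched regions rather than whole-frame means, so that disparity does not contaminate the score.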

  9. Current status of stereoscopic 3D LCD TV technologies

    Science.gov (United States)

    Choi, Hee-Jin

    2011-06-01

    The year 2010 may be recorded as the first year of successful commercial 3D products. Among them, 3D LCD TVs are expected to be the major one in terms of sales volume. In this paper, the principles of current stereoscopic 3D LCD TV techniques and the flat panel display (FPD) technologies required for their realization are reviewed.

  10. Clinical Assessment of a New Stereoscopic Digital Angiography System

    International Nuclear Information System (INIS)

    Moll, Thierry; Douek, Philippe; Finet, Gerard; Turjman, Francis; Picard, Catherine; Revel, Didier; Amiel, Michel

    1998-01-01

    Purpose: To assess the clinical feasibility of an experimental modified angiographic system capable of real-time digital stereofluoroscopy and stereography in X-ray angiography, using a twin-focus tube and a stereoscopic monitor. Methods: We report the experience obtained in 37 patients with a well-documented examination. The patients were examined for coronary angiography (11 cases), aortography (7 cases), pulmonary angiography (6 cases), inferior vena cava filter placement (2 cases), and cerebral angiography (11 cases). Six radiologists were asked to use stereoscopic features for fluoroscopy and angiography. A questionnaire was designed to record their subjective evaluation of stereoscopic image quality, ergonomics of the system, and its medical interest. Results: Stereofluoroscopy was successfully used in 25 of 37 cases; diplopia and/or ghost images were reported in 6 cases. It was helpful for aortic catheterization in 10 cases and for selective catheterization in 5 cases. In stereoangiography, depth was easily and accurately perceived in 27 of 37 cases; diplopia and/or ghost images were reported in 4 cases. A certain gain in the three-dimensional evaluation of the anatomy and relation between vessels and lesions was noted. As regards ergonomic considerations, polarized spectacles were not considered cumbersome. Visual fatigue and additional work were variously reported. Stereoshift tuning before X-ray acquisition was not judged to be a limiting factor. Conclusion: A twin-focus X-ray tube and a polarized shutter for stereoscopic display allowed effective real-time three-dimensional perception of angiographic images. Our clinical study suggests no clear medical interest for diagnostic examinations, but the field of interventional radiology needs to be investigated

  11. Optimal display conditions for quantitative analysis of stereoscopic cerebral angiograms

    International Nuclear Information System (INIS)

    Charland, P.; Peters, T.; McGill Univ., Montreal, Quebec

    1996-01-01

    For several years the authors have been using a stereoscopic display as a tool in the planning of stereotactic neurosurgical techniques. This PC-based workstation allows the surgeon to interact with and view vascular images in three dimensions, as well as to perform quantitative analysis of the three-dimensional (3-D) space. Some of the perceptual issues relevant to the presentation of medical images on this stereoscopic display were addressed in five experiments. The authors show that a number of parameters--namely the shape, color, and depth cue, associated with a cursor--as well as the image filtering and observer position, have a role in improving the observer's perception of a 3-D image and his ability to localize points within the stereoscopically presented 3-D image. However, an analysis of the results indicates that while varying these parameters can lead to an effect on the performance of individual observers, the effects are not consistent across observers, and the mean accuracy remains relatively constant under the different experimental conditions

  12. Quantitative evaluation of papilledema from stereoscopic color fundus photographs.

    Science.gov (United States)

    Tang, Li; Kardon, Randy H; Wang, Jui-Kai; Garvin, Mona K; Lee, Kyungmoo; Abràmoff, Michael D

    2012-07-03

    To derive a computerized measurement of optic disc volume from digital stereoscopic fundus photographs for the purpose of diagnosing and managing papilledema. Twenty-nine pairs of stereoscopic fundus photographs and optic nerve head (ONH)-centered spectral-domain optical coherence tomography (SD-OCT) scans were obtained at the same visit in 15 patients with papilledema. Some patients were imaged at multiple visits in order to assess their changes. The three-dimensional shape of the ONH was estimated from stereo fundus photographs using an automated multi-scale stereo correspondence algorithm. We assessed the correlation of the stereo volume measurements with the SD-OCT volume measurements quantitatively, in terms of volume of retinal surface elevation above a reference plane, and also against expert grading of papilledema from digital fundus photographs using the Frisén grading scale. The volumetric measurements of retinal surface elevation estimated from stereo fundus photographs and OCT scans were positively correlated (correlation coefficient r² = 0.60). Surface elevation estimated from stereo photographs compares favorably with that from OCT scans and with expert grading of papilledema severity. Stereoscopic color imaging of the ONH combined with a method of automated shape reconstruction is a low-cost alternative to SD-OCT scans that has potential for more cost-effective diagnosis and management of papilledema in a telemedical setting. An automated three-dimensional image analysis method was validated that quantifies the retinal surface topography with an imaging modality that has lacked prior objective assessment.
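The comparison metric used above, volume of retinal surface elevation above a reference plane, reduces to a simple integral once a height map has been reconstructed by the stereo correspondence step. A minimal sketch (names, units and the toy surface are illustrative, not taken from the study):

```python
import numpy as np

def elevation_volume(height_map, reference, pixel_area):
    """Volume of surface elevation above a reference plane.

    height_map: 2-D array of reconstructed surface heights;
    only the portion above `reference` contributes. Multiplying the
    summed excess height by the per-pixel area converts the discrete
    sum into a volume.
    """
    h = np.asarray(height_map, dtype=np.float64)
    excess = np.clip(h - reference, 0.0, None)
    return float(excess.sum() * pixel_area)

# Toy 3x3 height map, reference plane at 1.0, pixels of 0.01 mm^2
surface = np.array([[1.0, 1.2, 1.0],
                    [1.2, 1.5, 1.2],
                    [1.0, 1.2, 1.0]])
vol = elevation_volume(surface, reference=1.0, pixel_area=0.01)
```

Applying the same formula to the stereo-derived and OCT-derived height maps is what makes the two volume measurements directly comparable.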

  13. Architecture for high performance stereoscopic game rendering on Android

    Science.gov (United States)

    Flack, Julien; Sanderson, Hugh; Shetty, Sampath

    2014-03-01

    Stereoscopic gaming is a popular source of content for consumer 3D display systems. There has been a significant shift in the gaming industry towards casual games for mobile devices running on the Android™ Operating System and driven by ARM™ and other low power processors. Such systems are now being integrated directly into the next generation of 3D TVs potentially removing the requirement for an external games console. Although native stereo support has been integrated into some high profile titles on established platforms like Windows PC and PS3 there is a lack of GPU independent 3D support for the emerging Android platform. We describe a framework for enabling stereoscopic 3D gaming on Android for applications on mobile devices, set top boxes and TVs. A core component of the architecture is a 3D game driver, which is integrated into the Android OpenGL™ ES graphics stack to convert existing 2D graphics applications into stereoscopic 3D in real-time. The architecture includes a method of analyzing 2D games and using rule based Artificial Intelligence (AI) to position separate objects in 3D space. We describe an innovative stereo 3D rendering technique to separate the views in the depth domain and render directly into the display buffer. The advantages of the stereo renderer are demonstrated by characterizing the performance in comparison to more traditional render techniques, including depth based image rendering, both in terms of frame rates and impact on battery consumption.

  14. A systematized WYSIWYG pipeline for digital stereoscopic 3D filmmaking

    Science.gov (United States)

    Mueller, Robert; Ward, Chris; Hušák, Michal

    2008-02-01

    Digital tools are transforming stereoscopic 3D content creation and delivery, creating an opportunity for the broad acceptance and success of stereoscopic 3D films. Beginning in late 2005, a series of mostly CGI features has successfully introduced the public to this new generation of highly comfortable, artifact-free digital 3D. While the response has been decidedly favorable, a lack of high-quality live-action films could hinder long-term success. Live-action stereoscopic films have historically been more time-consuming, costly, and creatively limiting than 2D films - thus a need arises for a live-action 3D filmmaking process which minimizes such limitations. A unique 'systematized' what-you-see-is-what-you-get (WYSIWYG) pipeline is described which allows the efficient, intuitive and accurate capture and integration of 3D and 2D elements from multiple shoots and sources - both live-action and CGI. Throughout this pipeline, digital tools utilize a consistent algorithm to provide meaningful and accurate visual depth references with respect to the viewing audience in the target theater environment. This intuitive, visual approach introduces efficiency and creativity to the 3D filmmaking process by eliminating both the need for a 'mathematician mentality' of spreadsheets and calculators, as well as any trial-and-error guesswork, while enabling the most comfortable, 'pixel-perfect', artifact-free 3D product possible.

  15. The development and evaluation of a stereoscopic television system for use in nuclear environments

    International Nuclear Information System (INIS)

    Dumbreck, A.A.; Murphy, S.P.

    1987-01-01

    This paper describes the development and evaluation of a stereoscopic TV system at Harwell Laboratory. The theory of stereo image geometry is outlined, and criteria for the matching of stereoscopic pictures are given. A stereoscopic TV system designed for remote handling tasks has been produced; it provides two selectable angles of view and variable convergence, and the display is viewed via polarizing spectacles. Preliminary evaluations have indicated improved performance with no problems of operator fatigue.

  16. The development and evaluation of a stereoscopic television system for remote handling

    International Nuclear Information System (INIS)

    Dumbreck, A.A.; Murphy, S.P.; Smith, C.W.

    1990-01-01

    This paper describes the development and evaluation of a stereoscopic television system at Harwell Laboratory. The theory of stereo image geometry is outlined, and criteria for the matching of stereoscopic pictures are given. A stereoscopic television system designed for remote handling tasks has been produced; it provides two selectable angles of view and variable convergence, and the display is viewed via polarizing spectacles. Evaluations have indicated improved performance, with no problems of operator fatigue, over a wide range of applications. (author)

  17. Measuring system with stereoscopic x-ray television for accurate diagnosis

    International Nuclear Information System (INIS)

    Iwasaki, K.; Shimizu, S.

    1987-01-01

    X-ray stereoscopic television is diagnostically effective. The authors invented a measuring system using stereoscopic television whereby the coordinates of any two points, and their separation, can be measured in real time without physical contact. For this purpose, the distances between the two foci of the tube and between the tube and the image intensifier were entered into a microcomputer beforehand, and any two points on the CRT stereoscopic image can be designated through the stereoscopic spectacles. The coordinates and distance are then displayed on the CRT monitor. By this means, measurements such as the distance between vessels and the size of organs are easily made.
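The non-contact measurement rests on triangulation: a point above the detector shifts between the two exposures by an amount set by the focus separation and the source-detector distance, and inverting that relation recovers its depth. The record gives no formulas, so the similar-triangles geometry below is an illustrative sketch, not the authors' calibration:

```python
def depth_from_shift(baseline, source_detector_dist, image_shift):
    """Depth of a point from its shift between two stereo exposures.

    Similar triangles for a twin-focus geometry: a point at height z
    above the detector, imaged from two foci separated by `baseline`
    at distance `source_detector_dist` from the detector, shifts on
    the image by
        image_shift = baseline * z / (source_detector_dist - z).
    Solving for z gives the expression below (all lengths in the
    same unit, e.g. metres).
    """
    s = image_shift
    return s * source_detector_dist / (baseline + s)

# A point 0.2 m above the detector, foci 0.1 m apart, 1 m away,
# shifts by 0.1 * 0.2 / 0.8 = 0.025 m; the inverse recovers 0.2 m.
z = depth_from_shift(baseline=0.1, source_detector_dist=1.0,
                     image_shift=0.025)
```

With depth known for each designated point, its full 3-D coordinates, and hence the distance between two points, follow from the image positions.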

  18. An analysis of differences between common types of 3D stereoscopic movie & TV technology

    Directory of Open Access Journals (Sweden)

    CHEN Shuangyin

    2013-06-01

    Full Text Available 3D stereoscopic movie & TV technology is developing rapidly and is spreading into everyday life. In this thesis, the author analyzes 3D stereoscopic movie & TV technology thoroughly. By comparing and studying the different technical solutions for stereoscopic photography and video recording, production and playback, the author summarizes the characteristics of the various solutions and analyzes their strengths and weaknesses. Finally, the thesis gives the specific applications of the existing technical solutions, sets out improvement goals for 3D stereoscopic movie & TV technology, and discusses its future development.

  19. Stereoscopic filming for investigating evasive side-stepping and anterior cruciate ligament injury risk

    Science.gov (United States)

    Lee, Marcus J. C.; Bourke, Paul; Alderson, Jacqueline A.; Lloyd, David G.; Lay, Brendan

    2010-02-01

    Non-contact anterior cruciate ligament (ACL) injuries are serious and debilitating, often resulting from the performance of evasive side-stepping (Ssg) by team sport athletes. Previous laboratory-based investigations of evasive Ssg have used generic visual stimuli to simulate the realistic time and space constraints that athletes experience in the preparation and execution of the manoeuvre. However, the use of unrealistic visual stimuli to impose these constraints may not accurately identify the relationship between the perceptual demands and ACL loading during Ssg in actual game environments. We propose that stereoscopically filmed footage of sport-specific opposing defender/s simulating a tackle on the viewer, when used as visual stimuli, could improve the ecological validity of laboratory-based investigations of evasive Ssg. Because precision, and not just the experience of viewing depth, is needed in these scenarios, a rigorous filming process built on key geometric considerations and equipment development to enable a separation of 6.5 cm between two commodity cameras had to be undertaken. Within safety limits, this could be an invaluable tool in enabling more accurate investigations of the associations between evasive Ssg and ACL injury risk.

  20. Stereoscopic Feature Tracking System for Retrieving Velocity of Surface Waters

    Science.gov (United States)

    Zuniga Zamalloa, C. C.; Landry, B. J.

    2017-12-01

    The present work is concerned with the surface velocity retrieval of flows using a stereoscopic setup and finding the correspondence in the images via feature tracking (FT). The feature tracking provides a key benefit of substantially reducing the level of user input. In contrast to other commonly used methods (e.g., normalized cross-correlation), FT does not require the user to prescribe interrogation window sizes and removes the need for masking when specularities are present. The results of the current FT methodology are comparable to those obtained via Large Scale Particle Image Velocimetry while requiring little to no user input which allowed for rapid, automated processing of imagery.
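The interrogation-window approach that the record contrasts with feature tracking can be made concrete: the user picks a window size and search radius, and the displacement is the offset maximizing the normalized cross-correlation between a window in frame A and candidate windows in frame B. The sketch below is a generic exhaustive-search illustration (not the authors' LSPIV implementation), with a synthetic pattern shifted by a known amount:

```python
import numpy as np

def ncc_displacement(win_a, frame_b, search):
    """Displacement of window `win_a` inside `frame_b`, found by
    exhaustive normalized cross-correlation over +/- `search` px.

    Both the window size and `search` must be chosen by the user --
    exactly the tuning burden that feature tracking avoids.
    Returns the best (dx, dy) offset.
    """
    h, w = win_a.shape
    a = win_a - win_a.mean()
    best, best_dxy = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            b = frame_b[search + dy:search + dy + h,
                        search + dx:search + dx + w]
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            score = (a * b).sum() / denom if denom else 0.0
            if score > best:
                best, best_dxy = score, (dx, dy)
    return best_dxy

rng = np.random.default_rng(0)
frame_a = rng.random((16, 16))
# Frame B: the same pattern shifted by (dx=2, dy=1), zero-padded so
# every candidate window stays inside the array.
pad = 3
frame_b = np.zeros((16 + 2 * pad, 16 + 2 * pad))
frame_b[pad + 1:pad + 17, pad + 2:pad + 18] = frame_a
shift = ncc_displacement(frame_a, frame_b, pad)
```

Dividing the pixel displacement by the frame interval, and scaling by the stereo-calibrated ground resolution, yields the surface velocity.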

  1. Evaluating stereoscopic displays : both efficiency measures and perceived workload sensitive to manipulations in binocular disparity

    NARCIS (Netherlands)

    Beurden, van M.H.P.H.; IJsselsteijn, W.A.; Kort, de Y.A.W.; Woods, A.J.; Holliman, N.S.; Dodgson, N.A.

    2011-01-01

    Stereoscopic displays are known to offer a number of key advantages in visualizing complex 3D structures or datasets. The large majority of studies that focus on evaluating stereoscopic displays for professional applications use completion time and/or the percentage of correct answers to measure

  2. Application of stereoscopic particle image velocimetry to studies of transport in a dusty (complex) plasma

    International Nuclear Information System (INIS)

    Thomas, Edward Jr.; Williams, Jeremiah D.; Silver, Jennifer

    2004-01-01

    Over the past 5 years, two-dimensional particle image velocimetry (PIV) techniques [E. Thomas, Jr., Phys. Plasmas 6, 2672 (1999)] have been used to obtain detailed measurements of microparticle transport in dusty plasmas. This Letter reports on an extension of these techniques to a three-dimensional velocity vector measurement approach using stereoscopic PIV. Initial measurements using the stereoscopic PIV diagnostic are presented

  3. Low-cost universal stereoscopic virtual reality interfaces

    Science.gov (United States)

    Starks, Michael R.

    1993-09-01

    Low cost stereoscopic virtual reality hardware interfacing with nearly any computer and stereoscopic software running on any PC is described. Both are user-configurable for serial or parallel ports. Stereo modeling, rendering, and interaction via gloves or 6D mice are provided. Low cost LCD Visors and external interfaces represent a breakthrough in convenience and price/performance. A complete system with software, Visor, interface and Power Glove is under $500. StereoDrivers will interface with any system giving video sync (e.g., G of RGB). PC3D will access any standard serial port, while PCVR works with serial or parallel ports and glove devices. Model RF Visors detect magnetic fields and require no connection to the system. PGSI is a microprocessor control for the Power Glove and Visors. All interfaces will operate to 120 Hz with Model G Visors. The SpaceStations are demultiplexing, field-doubling devices which convert field-sequential video or graphics for stereo display with dual video projection or dual LCD SpaceHelmets.

  4. Perceptual asymmetry reveals neural substrates underlying stereoscopic transparency.

    Science.gov (United States)

    Tsirlin, Inna; Allison, Robert S; Wilcox, Laurie M

    2012-02-01

    We describe a perceptual asymmetry found in stereoscopic perception of overlaid random-dot surfaces. Specifically, the minimum separation in depth needed to perceptually segregate two overlaid surfaces depended on the distribution of dots across the surfaces. With the total dot density fixed, significantly larger inter-plane disparities were required for perceptual segregation of the surfaces when the front surface had fewer dots than the back surface compared to when the back surface was the one with fewer dots. We propose that our results reflect an asymmetry in the signal strength of the front and back surfaces due to the assignment of the spaces between the dots to the back surface by disparity interpolation. This hypothesis was supported by the results of two experiments designed to reduce the imbalance in the neuronal response to the two surfaces. We modeled the psychophysical data with a network of inter-neural connections: excitatory within-disparity and inhibitory across disparity, where the spread of disparity was modulated according to figure-ground assignment. These psychophysical and computational findings suggest that stereoscopic transparency depends on both inter-neural interactions of disparity-tuned cells and higher-level processes governing figure ground segregation.

  5. Some theoretical aspects of the design of stereoscopic television systems

    International Nuclear Information System (INIS)

    Jones, A.

    1980-03-01

    Several parameters which together specify the performance of a stereoscopic television system which has been demonstrated in reactors are investigated theoretically. These are: (1) the minimum resolvable depth interval in object space, (2) the region of space which can be displayed in three dimensions without causing undue eyestrain to the observer, (3) distortions which may arise in the display. The resulting equations form a basis from which operational stereocameras can be designed and a particular example is given, which also illustrates the relationships between the parameters. It is argued that the extent of the stereo region (parameter (2) above) predicted by previously published work is probably too large for closed circuit television inspection. This arises because the criterion used to determine the maximum tolerable screen parallax is too generous. An alternative, based upon the size of Panum's fusional area (a property of the observer's eye) is proposed. Preliminary experimental support for the proposal is given by measurements of the extent of the stereoscopic region using a number of observers. (author)
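Parameter (1), the minimum resolvable depth interval, follows in standard stereo-camera geometry from the disparity-depth relation. The sketch below is the textbook relation, not necessarily the exact formulation derived in the paper:

```latex
d = \frac{b f}{z}
\qquad\Longrightarrow\qquad
\Delta z \;\approx\; \frac{z^{2}}{b f}\,\Delta d
```

where \(b\) is the interaxial separation of the stereocamera, \(f\) the focal length, \(z\) the object distance, and \(\Delta d\) the smallest resolvable change in screen disparity. Depth resolution thus degrades quadratically with distance and improves with a wider interaxial separation, which is why the parameters in the abstract must be designed together.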

  6. Matching and correlation computations in stereoscopic depth perception.

    Science.gov (United States)

    Doi, Takahiro; Tanabe, Seiji; Fujita, Ichiro

    2011-03-02

    A fundamental task of the visual system is to infer depth by using binocular disparity. To encode binocular disparity, the visual cortex performs two distinct computations: one detects matched patterns in paired images (matching computation); the other constructs the cross-correlation between the images (correlation computation). How the two computations are used in stereoscopic perception is unclear. We dissociated their contributions in near/far discrimination by varying the magnitude of the disparity across separate sessions. For small disparity (0.03°), subjects performed at chance level to a binocularly opposite-contrast (anti-correlated) random-dot stereogram (RDS) but improved their performance with the proportion of contrast-matched (correlated) dots. For large disparity (0.48°), the direction of perceived depth reversed with an anti-correlated RDS relative to that for a correlated one. Neither reversed nor normal depth was perceived when anti-correlation was applied to half of the dots. We explain the decision process as a weighted average of the two computations, with the relative weight of the correlation computation increasing with the disparity magnitude. We conclude that matching computation dominates fine depth perception, while both computations contribute to coarser depth perception. Thus, stereoscopic depth perception recruits different computations depending on the disparity magnitude.
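The decision rule proposed above, a disparity-dependent weighted average of a matching computation and a correlation computation, can be sketched numerically. This is an illustrative stand-in, not the authors' model: the half-wave-rectified correlation used as the "matching" signal and the weighting constant `d0` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation_signal(left, right):
    # Signed cross-correlation: inverts its sign for anti-correlated RDSs.
    l = left - left.mean()
    r = right - right.mean()
    return float((l * r).sum() / (np.linalg.norm(l) * np.linalg.norm(r) + 1e-12))

def matching_signal(left, right):
    # Crude stand-in for match detection: half-wave rectified correlation,
    # which gives no signal (chance performance) for anti-correlated patterns.
    return max(0.0, correlation_signal(left, right))

def depth_decision(left, right, disparity_mag, d0=0.2):
    # Hypothetical weighting: the correlation computation's weight grows
    # with disparity magnitude, as the abstract proposes.
    w_corr = disparity_mag / (disparity_mag + d0)
    return (w_corr * correlation_signal(left, right)
            + (1 - w_corr) * matching_signal(left, right))

left = rng.standard_normal(1000)
anti = -left  # anti-correlated RDS: contrast-reversed right image

score_corr = depth_decision(left, left, disparity_mag=0.03)  # normal depth
score_anti = depth_decision(left, anti, disparity_mag=0.48)  # reversed depth
```

For small disparities the matching term dominates and the anti-correlated stimulus yields a near-zero (chance-level) signal; for large disparities the correlation term dominates and the signal reverses sign, reproducing the qualitative pattern reported in the abstract.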

  7. Virtual and stereoscopic anatomy: when virtual reality meets medical education.

    Science.gov (United States)

    de Faria, Jose Weber Vieira; Teixeira, Manoel Jacobsen; de Moura Sousa Júnior, Leonardo; Otoch, Jose Pinhata; Figueiredo, Eberval Gadelha

    2016-11-01

    OBJECTIVE The authors sought to construct, implement, and evaluate an interactive and stereoscopic resource for teaching neuroanatomy, accessible from personal computers. METHODS Forty fresh brains (80 hemispheres) were dissected. Images of areas of interest were captured using a manual turntable and processed and stored in a 5337-image database. Pedagogic evaluation was performed in 84 graduate medical students, divided into 3 groups: 1 (conventional method), 2 (interactive nonstereoscopic), and 3 (interactive and stereoscopic). The method was evaluated through a written theory test and a lab practicum. RESULTS Groups 2 and 3 showed the highest mean scores in pedagogic evaluations and differed significantly from Group 1 (p < 0.05). Effect sizes, measured as differences in scores before and after lectures, indicate the effectiveness of the method. ANOVA results showed significant difference (p < 0.05) between groups, and the Tukey test showed statistical differences between Group 1 and the other 2 groups (p < 0.05). No statistical differences between Groups 2 and 3 were found in the practicum. However, there were significant differences when Groups 2 and 3 were compared with Group 1 (p < 0.05). CONCLUSIONS The authors conclude that this method promoted further improvement in knowledge for students and fostered significantly higher learning when compared with traditional teaching resources.
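The between-groups comparison reported above can be illustrated with a hand-rolled one-way ANOVA F statistic. The scores below are invented for illustration only; they are not the study's data.

```python
# Minimal one-way ANOVA F statistic (no external libraries): the ratio of
# between-group to within-group mean squares, as used in the evaluation.
def anova_f(*groups):
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

g1 = [62, 65, 60, 63]  # conventional method (hypothetical scores)
g2 = [74, 78, 76, 75]  # interactive non-stereoscopic (hypothetical)
g3 = [77, 75, 79, 78]  # interactive and stereoscopic (hypothetical)
f_stat = anova_f(g1, g2, g3)
```

A large F here would be followed, as in the study, by a post-hoc Tukey test to locate which group pairs differ.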

  8. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  9. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction depend entirely on the camera, since the camera defines the player’s point of view. Most research in automatic camera control aims to take control of this aspect from the player to automatically gener…

  10. No-Reference Stereoscopic IQA Approach: From Nonlinear Effect to Parallax Compensation

    Directory of Open Access Journals (Sweden)

    Ke Gu

    2012-01-01

    Full Text Available The last decade has seen a booming of the applications of stereoscopic images/videos and the corresponding technologies, such as 3D modeling, reconstruction, and disparity estimation. However, only a very limited number of stereoscopic image quality assessment metrics have been proposed over the years. In this paper, we propose a new no-reference stereoscopic image quality assessment algorithm based on the nonlinear additive model, ocular dominance model, and saliency-based parallax compensation. Our studies using the Toyama database result in three valuable findings. First, the quality of a stereoscopic image has a nonlinear relationship with a direct summation of the two monoscopic image qualities. Second, it is a rational assumption that the right-eye response has a higher impact on the stereoscopic image quality, based on a sampling survey in ocular dominance research. Third, saliency-based parallax compensation, derived from differing stereoscopic image content, is effective in improving the prediction performance of image quality metrics. Experimental results confirm that our proposed stereoscopic image quality assessment paradigm has superior prediction accuracy as compared to state-of-the-art competitors.
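The three ingredients named in the abstract can be combined in a toy scoring function. This is a hypothetical sketch: the power-law nonlinearity, the right-eye weight, and the parallax penalty form are all assumptions for illustration, not the paper's fitted model.

```python
# Hypothetical no-reference S3D quality sketch: a nonlinear (power-law)
# additive pooling of the two monoscopic qualities, an ocular-dominance
# weight favouring the right eye, and a saliency/parallax penalty term.
def stereo_quality(q_left, q_right, parallax_saliency,
                   w_right=0.6, p=0.7, k=0.1):
    mono = (w_right * q_right ** p + (1 - w_right) * q_left ** p) ** (1 / p)
    return mono / (1 + k * parallax_saliency)

q = stereo_quality(q_left=0.8, q_right=0.9, parallax_saliency=0.5)
```

Note the asymmetry: with `w_right > 0.5`, degrading the right view hurts the predicted quality more than degrading the left view by the same amount, mirroring the ocular-dominance assumption.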

  11. Stereoscopic HDTV Research at NHK Science and Technology Research Laboratories

    CERN Document Server

    Yamanoue, Hirokazu; Nojiri, Yuji

    2012-01-01

    This book focuses on the two psychological factors of naturalness and ease of viewing of three-dimensional high-definition television (3D HDTV) images. It has been said that distortions peculiar to stereoscopic images, such as the “puppet theater” effect or the “cardboard” effect, spoil the sense of presence. Whereas many earlier studies have focused on geometrical calculations about these distortions, this book instead describes the relationship between the naturalness of reproduced 3D HDTV images and the nonlinearity of depthwise reproduction. The ease of viewing of each scene is regarded as one of the causal factors of visual fatigue. Many of the earlier studies have been concerned with the accurate extraction of local parallax; however, this book describes the typical spatiotemporal distribution of parallax in 3D images. The purpose of the book is to examine the correlations between the psychological factors and amount of characteristics of parallax distribution in order to understand the characte...

  12. Visual perception and stereoscopic imaging: an artist's perspective

    Science.gov (United States)

    Mason, Steve

    2015-03-01

    This paper continues my 2014 February IS and T/SPIE Convention exploration into the relationship of stereoscopic vision and consciousness (90141F-1). It was proposed then that by using stereoscopic imaging people may consciously experience, or see, what they are viewing and thereby help make them more aware of the way their brains manage and interpret visual information. Environmental imaging was suggested as a way to accomplish this. This paper is the result of further investigation, research, and follow-up imaging. A show of images, that is a result of this research, allows viewers to experience for themselves the effects of stereoscopy on consciousness. Creating dye-infused aluminum prints while employing ChromaDepth® 3D glasses, I hope to not only raise awareness of visual processing but also explore the differences and similarities between the artist and scientist―art increases right brain spatial consciousness, not only empirical thinking, while furthering the viewer's cognizance of the process of seeing. The artist must abandon preconceptions and expectations, despite what the evidence and experience may indicate in order to see what is happening in his work and to allow it to develop in ways he/she could never anticipate. This process is then revealed to the viewer in a show of work. It is in the experiencing, not just from the thinking, where insight is achieved. Directing the viewer's awareness during the experience using stereoscopic imaging allows for further understanding of the brain's function in the visual process. A cognitive transformation occurs, the preverbal "left/right brain shift," in order for viewers to "see" the space. Using what we know from recent brain research, these images will draw from certain parts of the brain when viewed in two dimensions and different ones when viewed stereoscopically, a shift, if one is looking for it, which is quite noticeable. People who have experienced these images in the context of examining their own

  13. Disparity modifications and the emotional effects of stereoscopic images

    Science.gov (United States)

    Kawai, Takashi; Atsuta, Daiki; Tomiyama, Yuya; Kim, Sanghyun; Morikawa, Hiroyuki; Mitsuya, Reiko; Häkkinen, Jukka

    2014-03-01

    This paper describes a study that focuses on disparity changes in emotional scenes of stereoscopic (3D) images, in which the effects on pleasantness and arousal were examined by adding binocular disparity to 2D images that evoke specific emotions and applying disparity modification based on the disparity analysis of famous 3D movies. From the results of the experiment, for pleasantness, a significant difference was found only for the main effect of the emotions. On the other hand, for arousal, evaluation values tended to increase in the order of the 2D condition, the 3D condition, and the 3D condition with disparity modification applied, for happiness, surprise, and fear. This suggests the possibility that binocular disparity and its modification affect arousal.

  14. Stereoscopic three-dimensional images of an anatomical dissection of the eyeball and orbit for educational purposes.

    Science.gov (United States)

    Matsuo, Toshihiko; Takeda, Yoshimasa; Ohtsuka, Aiji

    2013-01-01

    The purpose of this study was to develop a series of stereoscopic anatomical images of the eye and orbit for use in the curricula of medical schools and residency programs in ophthalmology and other specialties. Layer-by-layer dissection of the eyelid, eyeball, and orbit of a cadaver was performed by an ophthalmologist. A stereoscopic camera system was used to capture a series of anatomical views that were scanned in a panoramic three-dimensional manner around the center of the lid fissure. The images could be rotated 360 degrees in the frontal plane and the angle of views could be tilted up to 90 degrees along the anteroposterior axis perpendicular to the frontal plane around the 360 degrees. The skin, orbicularis oculi muscle, and upper and lower tarsus were sequentially observed. The upper and lower eyelids were removed to expose the bulbar conjunctiva and to insert three 25-gauge trocars for vitrectomy at the location of the pars plana. The cornea was cut at the limbus, and the lens with mature cataract was dislocated. The sclera was cut to observe the trocars from inside the eyeball. The sclera was further cut to visualize the superior oblique muscle with the trochlea and the inferior oblique muscle. The eyeball was dissected completely to observe the optic nerve and the ophthalmic artery. The thin bones of the medial and inferior orbital wall were cracked with a forceps to expose the ethmoid and maxillary sinus, respectively. In conclusion, the serial dissection images visualized aspects of the local anatomy specific to various procedures, including the levator muscle and tarsus for blepharoptosis surgery, 25-gauge trocars as viewed from inside the eye globe for vitrectomy, the oblique muscles for strabismus surgery, and the thin medial and inferior orbital bony walls for orbital bone fractures.

  15. Stereoscopic motion analysis in densely packed clusters: 3D analysis of the shimmering behaviour in Giant honey bees.

    Science.gov (United States)

    Kastberger, Gerald; Maurer, Michael; Weihmann, Frank; Ruether, Matthias; Hoetzl, Thomas; Kranner, Ilse; Bischof, Horst

    2011-02-08

    The detailed interpretation of mass phenomena such as human escape panic or swarm behaviour in birds, fish and insects requires detailed analysis of the 3D movements of individual participants. Here, we describe the adaptation of a 3D stereoscopic imaging method to measure the positional coordinates of individual agents in densely packed clusters. The method was applied to study behavioural aspects of shimmering in Giant honeybees, a collective defence behaviour that deters predatory wasps by visual cues, whereby individual bees flip their abdomen upwards in a split second, producing Mexican wave-like patterns. Stereoscopic imaging provided non-invasive, automated, simultaneous, in-situ 3D measurements of hundreds of bees on the nest surface regarding their thoracic position and orientation of the body length axis. Segmentation was the basis for the stereo matching, which defined correspondences of individual bees in pairs of stereo images. Stereo-matched "agent bees" were re-identified in subsequent frames by the tracking procedure and triangulated into real-world coordinates. These algorithms were required to calculate the three spatial motion components (dx: horizontal, dy: vertical and dz: towards and from the comb) of individual bees over time. The method enables the assessment of the 3D positions of individual Giant honeybees, which is not possible with single-view cameras. The method can be applied to distinguish at the individual bee level active movements of the thoraces produced by abdominal flipping from passive motions generated by the moving bee curtain. The data provide evidence that the z-deflections of thoraces are potential cues for colony-intrinsic communication. The method helps to understand the phenomenon of collective decision-making through mechanoceptive synchronization and to associate shimmering with the principles of wave propagation. 
With further, minor modifications, the method could be used to study aspects of other mass phenomena that
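The triangulation step of the pipeline above, lifting stereo-matched image points into real-world coordinates, can be sketched with the standard linear (DLT) method. The camera matrices and point below are toy values, not the study's calibration.

```python
import numpy as np

# Linear (DLT) triangulation from two camera projection matrices: each
# observed pixel contributes two rows to a homogeneous system A X = 0,
# solved via SVD; the result is dehomogenized to 3D coordinates.
def triangulate(P1, P2, x1, x2):
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy rig: identical intrinsics, second camera translated 0.1 m along x.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

X_true = np.array([0.05, 0.02, 1.0])          # a "bee thorax" 1 m away
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
X = triangulate(P1, P2, x1, x2)
```

Applied per tracked agent and per frame, such triangulated positions yield the dx/dy/dz motion components discussed in the abstract.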

  16. Stereoscopic motion analysis in densely packed clusters: 3D analysis of the shimmering behaviour in Giant honey bees

    Directory of Open Access Journals (Sweden)

    Hoetzl Thomas

    2011-02-01

    Full Text Available Abstract Background The detailed interpretation of mass phenomena such as human escape panic or swarm behaviour in birds, fish and insects requires detailed analysis of the 3D movements of individual participants. Here, we describe the adaptation of a 3D stereoscopic imaging method to measure the positional coordinates of individual agents in densely packed clusters. The method was applied to study behavioural aspects of shimmering in Giant honeybees, a collective defence behaviour that deters predatory wasps by visual cues, whereby individual bees flip their abdomen upwards in a split second, producing Mexican wave-like patterns. Results Stereoscopic imaging provided non-invasive, automated, simultaneous, in-situ 3D measurements of hundreds of bees on the nest surface regarding their thoracic position and orientation of the body length axis. Segmentation was the basis for the stereo matching, which defined correspondences of individual bees in pairs of stereo images. Stereo-matched "agent bees" were re-identified in subsequent frames by the tracking procedure and triangulated into real-world coordinates. These algorithms were required to calculate the three spatial motion components (dx: horizontal, dy: vertical and dz: towards and from the comb) of individual bees over time. Conclusions The method enables the assessment of the 3D positions of individual Giant honeybees, which is not possible with single-view cameras. The method can be applied to distinguish at the individual bee level active movements of the thoraces produced by abdominal flipping from passive motions generated by the moving bee curtain. The data provide evidence that the z-deflections of thoraces are potential cues for colony-intrinsic communication. The method helps to understand the phenomenon of collective decision-making through mechanoceptive synchronization and to associate shimmering with the principles of wave propagation. 
With further, minor modifications, the method

  17. Many-core computing for space-based stereoscopic imaging

    Science.gov (United States)

    McCall, Paul; Torres, Gildo; LeGrand, Keith; Adjouadi, Malek; Liu, Chen; Darling, Jacob; Pernicka, Henry

    The potential benefits of using parallel computing in real-time visual-based satellite proximity operations missions are investigated. Improvements in performance and relative navigation solutions over single thread systems can be achieved through multi- and many-core computing. Stochastic relative orbit determination methods benefit from the higher measurement frequencies, allowing them to more accurately determine the associated statistical properties of the relative orbital elements. More accurate orbit determination can lead to reduced fuel consumption and extended mission capabilities and duration. Inherent to the process of stereoscopic image processing is the difficulty of loading, managing, parsing, and evaluating large amounts of data efficiently, which may result in delays or highly time consuming processes for single (or few) processor systems or platforms. In this research we utilize the Single-Chip Cloud Computer (SCC), a fully programmable 48-core experimental processor, created by Intel Labs as a platform for many-core software research, provided with a high-speed on-chip network for sharing information along with advanced power management technologies and support for message-passing. The results from utilizing the SCC platform for the stereoscopic image processing application are presented in the form of Performance, Power, Energy, and Energy-Delay-Product (EDP) metrics. Also, a comparison between the SCC results and those obtained from executing the same application on a commercial PC are presented, showing the potential benefits of utilizing the SCC in particular, and any many-core platforms in general for real-time processing of visual-based satellite proximity operations missions.
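The Energy-Delay-Product metric used for the platform comparison above is simple to compute; the numbers below are illustrative, not the paper's measurements.

```python
# Energy-Delay-Product (EDP): energy (J) times delay (s); lower is better.
# It penalizes platforms that save energy only by running much slower.
def edp(power_watts, delay_s):
    energy_j = power_watts * delay_s
    return energy_j * delay_s

scc = edp(power_watts=25.0, delay_s=4.0)   # hypothetical many-core run
pc = edp(power_watts=95.0, delay_s=2.5)    # hypothetical single-thread PC run
```

Even when a low-power many-core part is slower per frame, its lower power draw can give it the better (smaller) EDP, which is the kind of trade-off the comparison in the abstract quantifies.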

  18. Analysis of brain activity and response during monoscopic and stereoscopic visualization

    Science.gov (United States)

    Calore, Enrico; Folgieri, Raffaella; Gadia, Davide; Marini, Daniele

    2012-03-01

    Stereoscopic visualization in cinematography and Virtual Reality (VR) creates an illusion of depth by means of two bidimensional images corresponding to different views of a scene. This perceptual trick is used to enhance the emotional response and the sense of presence and immersivity of the observers. An interesting question is if and how it is possible to measure and analyze the level of emotional involvement and attention of the observers during a stereoscopic visualization of a movie or of a virtual environment. The research aims represent a challenge, due to the large number of sensorial, physiological and cognitive stimuli involved. In this paper we begin this research by analyzing possible differences in the brain activity of subjects during the viewing of monoscopic or stereoscopic contents. To this aim, we have performed some preliminary experiments collecting electroencephalographic (EEG) data of a group of users using a Brain-Computer Interface (BCI) during the viewing of stereoscopic and monoscopic short movies in a VR immersive installation.

  19. Influence of stereoscopic vision on task performance with an operating microscope

    NARCIS (Netherlands)

    Nibourg, Lisanne M.; Wanders, Wouter; Cornelissen, Frans W.; Koopmans, Steven A.

    PURPOSE: To determine the extent to which stereoscopic depth perception influences the performance of tasks executed under an operating microscope. SETTING: Laboratory of Experimental Ophthalmology, University Medical Center Groningen, the Netherlands. DESIGN: Experimental study. METHODS: Medical

  20. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of that quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in radiation density of the exposure, to maintain the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index for maintaining its detectability and causing the proper centering of the radiation camera image.
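The claimed control loop, integrating radiation in a monitored area and terminating the exposure at a preset density, can be sketched as follows. The rates and limits are invented for illustration; the patent describes the behavior, not this code.

```python
# Sketch of the exposure-control idea: accumulate radiation counts over the
# monitored area, stop when the density limit is reached, and record the
# terminating quantity as the calibration index described in the abstract.
def run_exposure(count_rate_per_frame, area_cm2, density_limit):
    accumulated = 0.0
    frames = 0
    while accumulated / area_cm2 < density_limit:
        accumulated += count_rate_per_frame
        frames += 1
    index = accumulated  # quantity whose accumulation ended the exposure
    return frames, index

frames, index = run_exposure(count_rate_per_frame=500.0,
                             area_cm2=10.0,
                             density_limit=1000.0)
```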

  1. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
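Because the two star camera heads are rigidly mounted, the inter-camera quaternion analyzed above should be constant; measurement noise shows up as fluctuations around that constant rotation. A minimal computation of the inter-camera quaternion (scalar-first convention assumed; this is the standard relation, not GRACE-specific code):

```python
# Inter-camera quaternion: the rotation from star camera 1's frame to star
# camera 2's frame, q_12 = q_1^-1 * q_2 (unit quaternions, scalar first).
def q_mul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def q_conj(q):
    # For unit quaternions the conjugate is the inverse.
    w, x, y, z = q
    return (w, -x, -y, -z)

def inter_camera(q1, q2):
    return q_mul(q_conj(q1), q2)
```

Time series of `inter_camera` over a data span is exactly the quantity whose auto-covariance reveals the twice-per-rev error discussed in the abstract.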

  2. Enhancement of stereoscopic comfort by fast control of frequency content with wavelet transform

    Science.gov (United States)

    Lemmer, Nicolas; Moreau, Guillaume; Fuchs, Philippe

    2003-05-01

    As the scope of virtual reality applications including stereoscopic imaging becomes wider, it is quite clear that not every designer of a VR application thinks of its constraints in order to make a correct use of stereo. Stereoscopic imagery, though not always required, can be a useful tool for depth perception. It is possible to limit the depth of field as shown by Perrin, who has also undertaken research on the link between the ability to fuse stereoscopic images (stereopsis) and local disparity and spatial frequency content. We show how we can extend and enhance this work, especially from the computational complexity point of view. The wavelet theory allows us to define a local spatial frequency and hence a local measure of stereoscopic comfort. This measure is based on local spatial frequency and disparity, as well as on the observations made by Woepking. Local comfort estimation allows us to propose several filtering methods to enhance this comfort. The idea is to modify the images so that they satisfy a "stereoscopic comfort condition", defined as a threshold on the comfort measure. More technically, we seek to limit high spatial frequency content where disparity is high, thanks to the use of fast algorithms.
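The ingredients of such a comfort measure can be sketched with a one-level Haar transform as the local spatial-frequency estimator. The thresholds and the AND-combination are illustrative assumptions, not the paper's calibrated comfort condition.

```python
import numpy as np

# One-level Haar wavelet detail energy as a local high-frequency map;
# a pixel region is flagged "uncomfortable" where BOTH that energy and
# the disparity magnitude exceed (illustrative) thresholds, mirroring
# the idea of limiting high frequencies where disparity is high.
def haar_detail_energy(row):
    pairs = row[: len(row) // 2 * 2].reshape(-1, 2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return detail ** 2

def uncomfortable(row, disparity, e_thresh=1.0, d_thresh=0.5):
    energy = haar_detail_energy(row)
    d = np.abs(disparity[: len(energy)])
    return (energy > e_thresh) & (d > d_thresh)

row = np.array([0.0, 2.0, 1.0, 1.0])   # sharp edge, then flat region
disp = np.array([1.0, 1.0])            # high disparity everywhere
mask = uncomfortable(row, disp)
```

A filtering step would then attenuate the wavelet detail coefficients only where the mask is true, which is what makes the wavelet formulation fast: comfort estimation and correction operate in the same transform domain.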

  3. Evaluating stereoscopic displays: both efficiency measures and perceived workload sensitive to manipulations in binocular disparity

    Science.gov (United States)

    van Beurden, Maurice H. P. H.; Ijsselsteijn, Wijnand A.; de Kort, Yvonne A. W.

    2011-03-01

    Stereoscopic displays are known to offer a number of key advantages in visualizing complex 3D structures or datasets. The large majority of studies that focus on evaluating stereoscopic displays for professional applications use completion time and/or the percentage of correct answers to measure potential performance advantages. However, completion time and accuracy may not fully reflect all the benefits of stereoscopic displays. In this paper, we argue that perceived workload is an additional valuable indicator reflecting the extent to which users can benefit from using stereoscopic displays. We performed an experiment in which participants were asked to perform a visual path-tracing task within a convoluted 3D wireframe structure, varying in level of complexity of the visualised structure and level of disparity of the visualisation. The results showed that optimal performance (completion time, accuracy, and workload) depends on both task difficulty and disparity level. Stereoscopic disparity yielded faster and more accurate task performance, and we observed a trend that performance on difficult tasks stands to benefit more from higher levels of disparity than performance on easy tasks. Perceived workload (as measured using the NASA-TLX) showed a similar response pattern, providing evidence that perceived workload is sensitive to variations in disparity as well as task difficulty. This suggests that perceived workload could be a useful concept, in addition to standard performance indicators, in characterising and measuring human performance advantages when using stereoscopic displays.
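The NASA-TLX score referenced above is, in its raw (unweighted) form, simply the mean of six subscale ratings. The ratings below are hypothetical, not the study's data.

```python
# Raw ("RTLX") NASA-TLX workload score: the mean of the six subscale
# ratings, each on a 0-100 scale. The full TLX additionally weights the
# subscales by pairwise-comparison importance; the raw form is common.
SCALES = ("mental", "physical", "temporal", "performance",
          "effort", "frustration")

def tlx_raw(ratings):
    return sum(ratings[s] for s in SCALES) / len(SCALES)

ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 30}
score = tlx_raw(ratings)
```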

  4. Stereoscopic Visualization of Diffusion Tensor Imaging Data: A Comparative Survey of Visualization Techniques

    International Nuclear Information System (INIS)

    Raslan, O.; Debnam, J.M.; Ketonen, L.; Kumar, A.J.; Schellingerhout, D.; Wang, J.

    2013-01-01

    Diffusion tensor imaging (DTI) data has traditionally been displayed as a gray scale fractional anisotropy map (GSFM) or color coded orientation map (CCOM). These methods use black and white or color with intensity values to map the complex multidimensional DTI data to a two-dimensional image. Alternative visualization techniques, such as Vmax maps, utilize enhanced graphical representation of the principal eigenvector by means of a headless arrow on regular non-stereoscopic (VM) or stereoscopic display (VMS). A survey of clinical utility in patients with intracranial neoplasms was carried out by 8 neuroradiologists using traditional and nontraditional methods of DTI display. Pairwise comparison studies of 5 intracranial neoplasms were performed with a structured questionnaire comparing GSFM, CCOM, VM, and VMS. Six of 8 neuroradiologists favored Vmax maps over traditional methods of display (GSFM and CCOM). When comparing the stereoscopic (VMS) and the non-stereoscopic (VM) modes, 4 favored VMS, 2 favored VM, and 2 had no preference. In conclusion, processing and visualizing DTI data stereoscopically is technically feasible. An initial survey of users indicated that Vmax-based display methodology, with or without stereoscopic visualization, seems to be preferred over traditional methods to display DTI data.
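The headless-arrow (Vmax) display draws, at each voxel, the principal eigenvector of the 3x3 diffusion tensor. Extracting it is a one-line eigendecomposition; the tensor values below are toy numbers.

```python
import numpy as np

# Principal diffusion direction per voxel: eigh returns eigenvalues in
# ascending order for a symmetric matrix, so the last column of the
# eigenvector matrix is the direction of maximal diffusion (Vmax).
def principal_eigenvector(D):
    vals, vecs = np.linalg.eigh(D)
    return vecs[:, -1], vals[-1]

D = np.diag([1.7e-3, 0.3e-3, 0.2e-3])  # strongly anisotropic toy tensor
v, lam = principal_eigenvector(D)
```

Note that the eigenvector sign is arbitrary, which is precisely why the Vmax glyph is a *headless* arrow: fiber orientation has no preferred polarity.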

  5. A Review on Stereoscopic 3D: Home Entertainment for the Twenty First Century

    Science.gov (United States)

    Karajeh, Huda; Maqableh, Mahmoud; Masa'deh, Ra'ed

    2014-12-01

    In the last few years, stereoscopic technology has developed very rapidly and has been employed in many different fields, such as entertainment. Due to the importance of the entertainment aspect of stereoscopic 3D (S3D) applications, a review of the current state of S3D development in entertainment technology is conducted. In this paper, a novel survey of the stereoscopic entertainment aspects is presented by discussing the significant development of 3D cinema, the major development of 3DTV, and the issues related to 3D video content and 3D video games. Moreover, we review some problems that watching stereoscopic content can cause in the viewer's visual system. Some stereoscopic viewers are not satisfied: they are frustrated by wearing glasses, suffer visual fatigue, complain about the unavailability of 3D content, and/or experience some sickness. Therefore, we discuss stereoscopic visual discomfort and to what extent viewers experience eye fatigue while watching 3D content or playing 3D games. The solutions suggested in the literature for this problem are discussed.

  6. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  7. Design of an experimental four-camera setup for enhanced 3D surface reconstruction in microsurgery

    Directory of Open Access Journals (Sweden)

    Marzi Christian

    2017-09-01

    Full Text Available Future fully digital surgical visualization systems enable a wide range of new options. Owing to optomechanical limitations, a main disadvantage of today’s surgical microscopes is their inability to provide arbitrary perspectives to more than two observers. In a fully digital microscopic system, multiple arbitrary views can be generated from a 3D reconstruction. Modern surgical microscopes allow replacing the eyepieces by cameras in order to record stereoscopic videos. A reconstruction from these videos can only contain the amount of detail the recording camera system gathers from the scene. Therefore, covered surfaces can result in a faulty reconstruction for deviating stereoscopic perspectives. By adding cameras recording the object from different angles, additional information about the scene is acquired, allowing the reconstruction to be improved. Our approach is to use a fixed four-camera setup as a front-end system to capture enhanced 3D topography of a pseudo-surgical scene. This experimental setup provides images for the reconstruction algorithms and for the generation of multiple observing stereo perspectives. The concept of the designed setup is based on the common main objective (CMO) principle of current surgical microscopes. These systems are well established and optically mature. Furthermore, the CMO principle allows a more compact design and a lower calibration effort than cameras with separate optics. Behind the CMO, four pupils separate the four channels, which are recorded by one camera each. The designed system captures an area of approximately 28 mm × 28 mm with four cameras, allowing images of 6 different stereo perspectives to be processed. In order to verify the setup, it is modelled in silico. It can be used in further studies to test algorithms for 3D reconstruction from up to four perspectives and to provide information about the impact of additionally recorded perspectives on the enhancement of a reconstruction.
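The "6 different stereo perspectives" follow directly from combinatorics: four cameras behind the common main objective yield C(4,2) = 6 distinct camera pairs, each usable as a stereo baseline.

```python
from itertools import combinations

# Four channels behind the CMO -> every unordered pair of cameras is a
# candidate stereo rig; C(4, 2) = 6 pairs, matching the abstract.
pairs = list(combinations(range(4), 2))
```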

  8. Real-time photorealistic stereoscopic rendering of fire

    Science.gov (United States)

    Rose, Benjamin M.; McAllister, David F.

    2007-02-01

    We propose a method for real-time photorealistic stereo rendering of the natural phenomenon of fire. Applications include the use of virtual reality in fire fighting, military training, and entertainment. Rendering fire in real time presents a challenge because of the transparency and non-static fluid-like behavior of fire. It is well known that, in general, methods that are effective for monoscopic rendering are not necessarily easily extended to stereo rendering, because monoscopic methods often do not provide the depth information necessary to produce the parallax required for binocular disparity in stereoscopic rendering. We investigate the existing techniques used for monoscopic rendering of fire and discuss their suitability for extension to real-time stereo rendering. Methods include the use of precomputed textures, dynamic generation of textures, and rendering models resulting from the approximation of solutions of fluid dynamics equations through the use of ray-tracing algorithms. We have found that a method based on billboarding is effective in attaining real-time frame rates. Slicing is used to simulate depth. Texture maps or 2D images are mapped onto polygons, and alpha blending is used to treat transparency. We can use video recordings or prerendered high-quality images of fire as textures to attain photorealistic stereo.
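The slicing idea above relies on each billboard slice receiving a different horizontal parallax in the left- and right-eye views. A minimal sketch of that geometry, assuming a standard off-axis stereo setup (all names, units, and values here are illustrative, not from the paper):

```python
# Hypothetical sketch: horizontal screen parallax of billboard slices
# in an off-axis stereo projection. Units are arbitrary (metres here).

def slice_parallax(slice_depth, eye_sep, screen_dist):
    """Screen-space parallax of a billboard slice at slice_depth.

    Positive parallax -> the slice appears behind the screen plane,
    negative -> in front of it. Follows from similar triangles in the
    usual off-axis stereo geometry.
    """
    return eye_sep * (slice_depth - screen_dist) / slice_depth

# Three fire slices at increasing depth give the volume its thickness.
depths = [0.9, 1.0, 1.1]  # distances from the viewer
offsets = [slice_parallax(d, eye_sep=0.065, screen_dist=1.0) for d in depths]
```

A slice exactly at the screen distance gets zero parallax, while nearer and farther slices are offset in opposite directions, which is what creates the binocular depth cue between slices.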

  9. Measurement of compressed breast thickness by optical stereoscopic photogrammetry.

    Science.gov (United States)

    Tyson, Albert H; Mawdsley, Gordon E; Yaffe, Martin J

    2009-02-01

    The determination of volumetric breast density (VBD) from mammograms requires accurate knowledge of the thickness of the compressed breast. In attempting to accurately determine VBD from images obtained on conventional mammography systems, the authors found that the thickness reported by a number of mammography systems in the field varied by as much as 15 mm when compressing the same breast or phantom. In order to evaluate the behavior of mammographic compression systems and to be able to predict the thickness at different locations in the breast on patients, they have developed a method for measuring the local thickness of the breast at all points of contact with the compression paddle using optical stereoscopic photogrammetry. On both flat (solid) and compressible phantoms, the measurements were accurate to better than 1 mm with a precision of 0.2 mm. In a pilot study, this method was used to measure thickness on 108 volunteers who were undergoing mammography examination. This measurement tool will allow us to characterize paddle surface deformations, deflections and calibration offsets for mammographic units.
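The underlying principle of stereo photogrammetry used here is depth from disparity in a rectified camera pair; local thickness then falls out as the difference between the depth of the support plane and the depth of the paddle surface. A minimal sketch under invented camera parameters (not those of the cited system):

```python
# Minimal sketch of depth from stereo disparity for a rectified pinhole
# pair. All numbers are illustrative assumptions.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Distance of a surface point from the stereo rig, in metres."""
    return focal_px * baseline_m / disparity_px

def local_thickness(z_support_m, z_paddle_m):
    """Compressed thickness = support plane depth minus paddle depth."""
    return z_support_m - z_paddle_m

# Support plane and paddle surface seen at slightly different disparities.
z_support = depth_from_disparity(disparity_px=400.0, focal_px=2000.0, baseline_m=0.12)
z_paddle = depth_from_disparity(disparity_px=436.4, focal_px=2000.0, baseline_m=0.12)
thickness = local_thickness(z_support, z_paddle)  # roughly 50 mm
```

Because depth varies as 1/disparity, sub-pixel disparity estimation is what makes sub-millimetre thickness accuracy plausible at these working distances.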

  10. Stereoscopic vision in the absence of the lateral occipital cortex.

    Directory of Open Access Journals (Sweden)

    Jenny C A Read

    2010-09-01

    Full Text Available Both dorsal and ventral cortical visual streams contain neurons sensitive to binocular disparities, but the two streams may underlie different aspects of stereoscopic vision. Here we investigate stereopsis in the neurological patient D.F., whose ventral stream, specifically lateral occipital cortex, has been damaged bilaterally, causing profound visual form agnosia. Despite her severe damage to cortical visual areas, we report that DF's stereo vision is strikingly unimpaired. She is better than many control observers at using binocular disparity to judge whether an isolated object appears near or far, and to resolve ambiguous structure-from-motion. DF is, however, poor at using relative disparity between features at different locations across the visual field. This may stem from a difficulty in identifying the surface boundaries where relative disparity is available. We suggest that the ventral processing stream may play a critical role in enabling healthy observers to extract fine depth information from relative disparities within one surface or between surfaces located in different parts of the visual field.

  11. Measurements of turbulent premixed flame dynamics using cinema stereoscopic PIV

    Energy Technology Data Exchange (ETDEWEB)

    Steinberg, Adam M.; Driscoll, James F. [University of Michigan, Department of Aerospace Engineering, Ann Arbor, MI (United States); Ceccio, Steven L. [University of Michigan, Department of Mechanical Engineering, Ann Arbor, MI (United States)

    2008-06-15

    A new experimental method is described that provides high-speed movies of turbulent premixed flame wrinkling dynamics and the associated vorticity fields. This method employs cinema stereoscopic particle image velocimetry and has been applied to a turbulent slot Bunsen flame. Three-component velocity fields were measured with high temporal and spatial resolutions of 0.9 ms and 140 µm, respectively. The flame-front location was determined using a new multi-step method based on particle image gradients, which is described. Comparisons are made between flame fronts found with this method and simultaneous CH-PLIF images. These show that the flame contour determined corresponds well to the true location of the maximum gas density gradient. Time histories of typical eddy-flame interactions are reported and several important phenomena identified. Outwardly rotating eddy pairs wrinkle the flame and are attenuated as they pass through the flamelet. Significant flame-generated vorticity is produced downstream of the wrinkled tip. Similar wrinkles are caused by larger groups of outwardly rotating eddies. Inwardly rotating pairs cause significant convex wrinkles that grow as the flame propagates. These wrinkles encounter other eddies that alter their behavior. The effects of the hydrodynamic and diffusive instabilities are observed and found to be significant contributors to the formation and propagation of wrinkles. (orig.)

  12. Development of a stereoscopic three-dimensional drawing application

    Science.gov (United States)

    Carver, Donald E.; McAllister, David F.

    1991-08-01

    With recent advances in 3-D technology, computer users have the opportunity to work within a natural 3-D environment; a flat panel LCD computer display of this type, the DTI-100M made by Dimension Technologies, Inc., recently went on the market. In a joint venture between DTI and NCSU, an object-oriented 3-D drawing application, 3-D Draw, was developed to address some issues of human interface design for interactive stereo drawing applications. The focus of this paper is to determine some of the procedures a user would naturally expect to follow while working within a true 3-D environment. The paper discusses (1) the interface between the Macintosh II and DTI-100M during implementation of 3-D Draw, including stereo cursor development and presentation of current 2-D systems, with an additional 'depth' parameter, in the 3-D world, (2) problems in general for human interfaces into the 3-D environment, and (3) necessary functions and/or problems in developing future stereoscopic 3-D operating systems/tools.

  13. Stereoscopic, thermal, and true deep cumulus cloud top heights

    Science.gov (United States)

    Llewellyn-Jones, D. T.; Corlett, G. K.; Lawrence, S. P.; Remedios, J. J.; Sherwood, S. C.; Chae, J.; Minnis, P.; McGill, M.

    2004-05-01

    We compare cloud-top height estimates from several sensors: thermal tops from GOES-8 and MODIS, stereoscopic tops from MISR, and directly measured heights from the Goddard Cloud Physics Lidar on board the ER-2, all collected during the CRYSTAL-FACE field campaign. Comparisons reveal a persistent 1-2 km underestimation of cloud-top heights by thermal imagery, even when the finite optical extinctions near cloud top and in thin overlying cirrus are taken into account. The most severe underestimates occur for the tallest clouds. The MISR "best-winds" and lidar estimates disagree in very similar ways with thermally estimated tops, which we take as evidence of excellent performance by MISR. Encouraged by this, we use MISR to examine variations in cloud penetration and thermal top height errors in several locations of tropical deep convection over multiple seasons. The goals of this are, first, to learn how cloud penetration depends on the near-tropopause environment; and second, to gain further insight into the mysterious underestimation of tops by thermal imagery.

  14. Cameras in mobile phones

    Science.gov (United States)

    Nummela, Ville; Viinikanoja, Jarkko; Alakarhu, Juha

    2006-04-01

    Camera phones are among the fastest growing consumer markets today. During the past few years total volume has grown quickly, and today millions of mobile phones with cameras are sold. At the same time the resolution and functionality of the cameras have grown from CIF towards DSC level. From the camera point of view, the mobile world is an extremely challenging field. Cameras should offer good image quality in a small size. They also need to be reliable, and their construction should be suitable for mass manufacturing. All components of the imaging chain should be well optimized in this environment. Image quality and usability are the most important parameters to the user. The current trend of adding more megapixels to cameras while using smaller pixels affects both. On the other hand, reliability and miniaturization are key drivers for product development, as is cost. In an optimized solution all parameters are in balance, but finding the right trade-offs is not an easy task. In this paper, trade-offs related to optics and their effects on the image quality and usability of cameras are discussed. Key development areas from the mobile phone camera point of view are also listed.

  15. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped significantly, opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.

  16. The camera of the fifth H.E.S.S. telescope. Part I: System description

    Energy Technology Data Exchange (ETDEWEB)

    Bolmont, J., E-mail: bolmont@in2p3.fr [LPNHE, Université Pierre et Marie Curie Paris 6, Université Denis Diderot Paris 7, CNRS/IN2P3, 4 Place Jussieu, F-75252 Paris Cedex 5 (France); Corona, P.; Gauron, P.; Ghislain, P.; Goffin, C.; Guevara Riveros, L.; Huppert, J.-F.; Martineau-Huynh, O.; Nayman, P.; Parraud, J.-M.; Tavernet, J.-P.; Toussenel, F.; Vincent, D.; Vincent, P. [LPNHE, Université Pierre et Marie Curie Paris 6, Université Denis Diderot Paris 7, CNRS/IN2P3, 4 Place Jussieu, F-75252 Paris Cedex 5 (France); Bertoli, W.; Espigat, P.; Punch, M. [APC, AstroParticule et Cosmologie, Université Paris Diderot, CNRS/IN2P3, CEA/Irfu, Observatoire de Paris, Sorbonne Paris Cité, 10, rue Alice Domon et Léonie Duquet, F-75205 Paris Cedex 13 (France); Besin, D.; Delagnes, E.; Glicenstein, J.-F. [CEA Saclay, DSM/IRFU, F-91191 Gif-Sur-Yvette Cedex (France); and others

    2014-10-11

    In July 2012, as the four ground-based gamma-ray telescopes of the H.E.S.S. (High Energy Stereoscopic System) array reached their tenth year of operation in the Khomas Highlands, Namibia, a fifth telescope took its first data as part of the system. This new Cherenkov detector, comprising a 614.5 m² reflector with a highly pixelized camera in its focal plane, improves the sensitivity of the current array by a factor of two and extends its energy domain down to a few tens of GeV. The present part I of the paper gives a detailed description of the fifth H.E.S.S. telescope's camera, presenting the details of both the hardware and the software, and emphasizing the main improvements as compared to previous H.E.S.S. camera technology.

  17. Traveling via Rome through the Stereoscope: Reality, Memory, and Virtual Travel

    Directory of Open Access Journals (Sweden)

    Douglas M. Klahr

    2016-06-01

    Full Text Available Underwood and Underwood’s 'Rome through the Stereoscope' of 1902 was a landmark in stereoscopic photography publishing, both as an intense, visually immersive experience and as a cognitively demanding exercise. The set consisted of a guidebook, forty-six stereographs, and five maps whose notations enabled the reader/viewer to precisely replicate the location and orientation of the photographer at each site. Combined with the extensive narrative within the guidebook, the maps and images guided its users through the city via forty-six sites, whether as an example of armchair travel or an actual travel companion. The user’s experience is examined and analyzed within the following parameters: the medium of stereoscopic photography, narrative, geographical imagination, and memory, bringing forth issues of movement, survey and route frames of reference, orientation, visualization, immersion, and primary versus secondary memories. 'Rome through the Stereoscope' was an example of virtual travel, and the process of fusing dual images into one — stereoscopic synthesis — further demarcated the experience as a virtual environment.

  18. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.

    Science.gov (United States)

    Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick

    2017-10-01

    In this paper, we investigate the visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on our built large-scale eye tracking database and one other database (DML-ITRACK-3D).
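The final fusion step described above can be sketched generically: weight each saliency map by the inverse of its uncertainty and normalise. This is a plain inverse-uncertainty weighting, not the authors' exact Gestalt-derived formulation, and all maps and uncertainty values below are invented:

```python
import numpy as np

# Hedged sketch of uncertainty-weighted saliency fusion. The real model
# derives the uncertainties from Gestalt laws (proximity, continuity,
# common fate); here they are simply given constants.

def fuse_saliency(s_spatial, s_temporal, u_spatial, u_temporal):
    """Weight each map by the inverse of its uncertainty, then normalise."""
    w_s, w_t = 1.0 / u_spatial, 1.0 / u_temporal
    return (w_s * s_spatial + w_t * s_temporal) / (w_s + w_t)

# Tiny 2x2 example maps: the more certain (spatial) map dominates.
s_spatial = np.array([[0.2, 0.8], [0.4, 0.6]])
s_temporal = np.array([[0.6, 0.4], [0.2, 0.8]])
fused = fuse_saliency(s_spatial, s_temporal, u_spatial=0.5, u_temporal=1.0)
```

With half the uncertainty, the spatial map receives twice the weight of the temporal map, which is the behaviour an uncertainty-weighted fusion scheme is meant to produce.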

  19. Subjective and objective measurements of visual fatigue induced by excessive disparities in stereoscopic images

    Science.gov (United States)

    Jung, Yong Ju; Kim, Dongchan; Sohn, Hosik; Lee, Seong-il; Park, Hyun Wook; Ro, Yong Man

    2013-03-01

    As stereoscopic displays have spread, it is important to know what really causes the visual fatigue and discomfort and what happens in the visual system in the brain behind the retina while viewing stereoscopic 3D images on the displays. In this study, functional magnetic resonance imaging (fMRI) was used for the objective measurement to assess the human brain regions involved in the processing of the stereoscopic stimuli with excessive disparities. Based on the subjective measurement results, we selected two subsets of comfort videos and discomfort videos in our dataset. Then, a fMRI experiment was conducted with the subsets of comfort and discomfort videos in order to identify which brain regions activated while viewing the discomfort videos in a stereoscopic display. We found that, when viewing a stereoscopic display, the right middle frontal gyrus, the right inferior frontal gyrus, the right intraparietal lobule, the right middle temporal gyrus, and the bilateral cuneus were significantly activated during the processing of excessive disparities, compared to those of small disparities (< 1 degree).

  20. Digital stereoscopic convergence where video games and movies for the home user meet

    Science.gov (United States)

    Schur, Ethan

    2009-02-01

    Today there is a proliferation of stereoscopic 3D display devices, 3D content, and 3D enabled video games. As we in the S-3D community bring stereoscopic 3D to the home user we have a real opportunity of using stereoscopic 3D to bridge the gap between exciting immersive games and home movies. But to do this, we cannot limit ourselves to current conceptions of gaming and movies. We need, for example, to imagine a movie that is fully rendered using avatars in a stereoscopic game environment. Or perhaps to imagine a pervasive drama where viewers can play too and become an essential part of the drama - whether at home or on the go on a mobile platform. Stereoscopic 3D is the "glue" that will bind these video and movie concepts together. As users feel more immersed, the lines between current media will blur. This means that we have the opportunity to shape the way that we, as humans, view and interact with each other, our surroundings and our most fundamental art forms. The goal of this paper is to stimulate conversation and further development on expanding the current gaming and home theatre infrastructures to support greatly-enhanced experiential entertainment.

  1. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, with a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  2. The upgrade of the H.E.S.S. cameras

    Science.gov (United States)

    Giavitto, Gianluca; Ashton, Terry; Balzer, Arnim; Berge, David; Brun, Francois; Chaminade, Thomas; Delagnes, Eric; Fontaine, Gerard; Füßling, Matthias; Giebels, Berrie; Glicenstein, Jean-Francois; Gräber, Tobias; Hinton, Jim; Jahnke, Albert; Klepser, Stefan; Kossatz, Marko; Kretzschmann, Axel; Lefranc, Valentin; Leich, Holger; Lüdecke, Hartmut; Lypova, Iryna; Manigot, Pascal; Marandon, Vincent; Moulin, Emmanuel; Naurois, Mathieu de; Nayman, Patrick; Ohm, Stefan; Penno, Marek; Ross, Duncan; Salek, David; Schade, Markus; Schwab, Thomas; Simoni, Rachel; Stegmann, Christian; Steppa, Constantin; Thornhill, Julian; Toussnel, Francois

    2017-12-01

    The High Energy Stereoscopic System (HESS) is an array of imaging atmospheric Cherenkov telescopes (IACTs) located in the Khomas highland in Namibia. It was built to detect Very High Energy (VHE > 100 GeV) cosmic gamma rays. Since 2003, HESS has discovered the majority of the known astrophysical VHE gamma-ray sources, opening a new observational window on the extreme non-thermal processes at work in our universe. HESS consists of four 12-m diameter Cherenkov telescopes (CT1-4), which started data taking in 2002, and a larger 28-m telescope (CT5), built in 2012, which lowers the energy threshold of the array to 30 GeV. The cameras of CT1-4 are currently undergoing an extensive upgrade, with the goals of reducing their failure rate, reducing their readout dead time and improving the overall performance of the array. The entire camera electronics has been renewed from the ground up, as well as the power, ventilation and pneumatics systems, and the control and data acquisition software. Only the PMTs and their HV supplies have been kept from the original cameras. Novel technical solutions have been introduced, which will find their way into some of the Cherenkov cameras foreseen for the next-generation Cherenkov Telescope Array (CTA) observatory. In particular, the camera readout system is the first large-scale system based on the analog memory chip NECTAr, which was designed for CTA cameras. The camera control subsystems and the control software framework also pursue an innovative design, exploiting cutting-edge hardware and software solutions which excel in performance, robustness and flexibility. The CT1 camera was upgraded in July 2015 and is currently taking data; CT2-4 were upgraded in fall 2016. Together they will assure continuous operation of HESS at its full sensitivity until, and possibly beyond, the advent of CTA. This contribution describes the design, the testing and the in-lab and on-site performance of all components of the newly upgraded HESS cameras.

  3. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.; Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce images superior to those of conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given, and the problems and limitations introduced by noise are discussed in full. (U.K.)

  4. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from ¹⁶N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with ¹⁶N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  5. Scintigraphic and echographic thyroid image matching by a stereoscopic method

    International Nuclear Information System (INIS)

    Ballet, E.; Rousseau, J.; Marchandise, X.; Cussac, J.F.; Ballet, E.; Vasseur, C.; Gibon, D.

    1997-01-01

    We developed a device which allows us to match echographic data and scintiscanning data in a common 3D reference system. In thyroid exploration, this device complements the nuclear medicine examination by specifying simultaneously the volume and echo-structure of the gland. The positions of the γ-camera and the echograph are determined in a 3D reference system using the stereo-vision principle: two CCD cameras allow both sensors to be located within a 1.6 m range, and the sensors may be moved in a 0.4 m x 0.4 m FOV. Real-time computation is reduced by limiting the data to be processed to light-emitting landmarks mounted on the sensor, which are used to calculate its position and orientation. Matching accuracy is better than 0.5 mm for position and better than 0.35 deg for orientation. The average sensor-marking time is less than 0.5 s. (authors)
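The stereo-vision principle invoked above amounts to triangulating each landmark from two calibrated camera rays. A minimal sketch using the midpoint method, with an invented geometry (two cameras 1.6 m apart) that is only loosely inspired by the abstract:

```python
import numpy as np

# Illustrative sketch: locate a landmark by triangulating rays from two
# calibrated cameras (midpoint of the shortest segment between the rays).

def triangulate_midpoint(c1, d1, c2, d2):
    """Closest point between rays c1 + t*d1 and c2 + s*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Minimise |(c1 + t*d1) - (c2 + s*d2)|^2 over t and s.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = c1 - c2
    t = (b * (d2 @ w) - c * (d1 @ w)) / (a * c - b * b)
    s = (a * (d2 @ w) - b * (d1 @ w)) / (a * c - b * b)
    return 0.5 * ((c1 + t * d1) + (c2 + s * d2))

# Two cameras 1.6 m apart, both seeing a landmark at (0.2, 0.1, 1.0).
target = np.array([0.2, 0.1, 1.0])
c1, c2 = np.array([-0.8, 0.0, 0.0]), np.array([0.8, 0.0, 0.0])
p = triangulate_midpoint(c1, target - c1, c2, target - c2)
```

For noise-free rays the midpoint coincides with the true landmark position; with real measurement noise the residual segment length between the rays gives a direct quality check on the localization.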

  6. Application of longitudinal magnification effect to magnification stereoscopic angiography. A new method of cerebral angiography

    International Nuclear Information System (INIS)

    Doi, K.; Rossmann, K.; Duda, E.E.

    1976-01-01

    A new method of stereoscopic cerebral angiography was developed which employs 2X radiographic magnification. In order to obtain the same depth perception in the object as with conventional contact stereoscopic angiography, one can make the x-ray exposures at two focal spot positions which are separated by only 1 inch, whereas the contact technique requires a separation of 4 inches. The smaller distance is possible because, with 2X magnification, the transverse detail in the object is magnified by a factor of two, but the longitudinal detail, which is related to the stereo effect, is magnified by a factor of four, due to the longitudinal magnification effect. The small focal spot separation results in advantages such as improved stereoscopic image detail, better image quality, and low radiation exposure to the patient.
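The scaling argument in the abstract rests on longitudinal magnification growing as the square of transverse magnification, so the required focal-spot separation shrinks by 1/M². A minimal arithmetic sketch of that relation (function name and framing are my own, not the paper's):

```python
# Sketch of the longitudinal magnification argument: for small depths,
# M_long = M_trans**2, so the depth effect of a given focal-spot
# separation scales with separation * M_trans**2.

def focal_spot_separation(contact_separation, m_trans):
    """Separation giving the same depth effect as the contact technique.

    The depth effect scales with separation * m_trans**2, so the
    separation can be reduced by a factor of 1 / m_trans**2.
    """
    return contact_separation / m_trans**2

# 2X magnification: the 4-inch contact separation shrinks to 1 inch.
sep = focal_spot_separation(contact_separation=4.0, m_trans=2.0)
```

This reproduces the 4-inch-to-1-inch reduction quoted in the abstract for 2X magnification.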

  7. Evaluating visual discomfort in stereoscopic projection-based CAVE system with a close viewing distance

    Science.gov (United States)

    Song, Weitao; Weng, Dongdong; Feng, Dan; Li, Yuqian; Liu, Yue; Wang, Yongtian

    2015-05-01

    As one of the popular immersive Virtual Reality (VR) systems, a stereoscopic cave automatic virtual environment (CAVE) system typically consists of four to six 3 m × 3 m rear-projected screens forming the sides of a room. While many endeavors have been made to reduce the size of the projection-based CAVE system, the issue of asthenopia caused by lengthy exposure to stereoscopic images in such a CAVE with a close viewing distance has seldom been tackled. In this paper, we propose a lightweight approach which utilizes a convex eyepiece to reduce the visual discomfort induced by stereoscopic vision. An empirical experiment was conducted to examine the feasibility of the convex eyepiece over a large depth of field (DOF) at a close viewing distance, both objectively and subjectively. The results show the positive effects of the convex eyepiece on the relief of eyestrain.


  9. Usage of stereoscopic visualization in the learning contents of rotational motion.

    Science.gov (United States)

    Matsuura, Shu

    2013-01-01

    Rotational motion plays an essential role in physics even at an introductory level. In addition, the stereoscopic display of three-dimensional graphics is advantageous for the presentation of rotational motions, particularly for depth recognition. However, the immersive visualization of rotational motion has been known to lead to dizziness and even nausea for some viewers. Therefore, the purpose of this study is to examine the onset of nausea and visual fatigue when learning rotational motion through the use of a stereoscopic display. The findings show that an instruction method with intermittent exposure to the stereoscopic display and a simplification of its visual components reduced the onset of nausea and visual fatigue for the viewers, while maintaining the overall effect of instantaneous spatial recognition.

  10. Doing Textiles Experiments in Game-Based Virtual Reality: A Design of the Stereoscopic Chemical Laboratory (SCL) for Textiles Education

    Science.gov (United States)

    Lau, Kung Wong; Kan, Chi Wai; Lee, Pui Yuen

    2017-01-01

    Purpose: The purpose of this paper is to discuss the use of stereoscopic virtual technology in textile and fashion studies in particular to the area of chemical experiment. The development of a designed virtual platform, called Stereoscopic Chemical Laboratory (SCL), is introduced. Design/methodology/approach: To implement the suggested…

  11. Taking space literally: reconceptualizing the effects of stereoscopic representation on user experience

    Directory of Open Access Journals (Sweden)

    Benny Liebold

    2013-03-01

    Full Text Available Recently, cinemas, home theater systems and game consoles have undergone a rapid evolution towards stereoscopic representation, with recipients gradually becoming accustomed to these changes. Stereoscopy techniques in most media present two offset images separately to the left and right eye of the viewer (usually with the help of glasses separating both images), resulting in the perception of three-dimensional depth. In contrast to these mass-market techniques, true 3D volumetric displays or holograms that display an image in three full dimensions are relatively uncommon. The visual quality and visual comfort of stereoscopic representation is constantly being improved by the industry.

  12. Stereoscopic Three-Dimensional Visualization Applied to Multimodal Brain Images: Clinical Applications and a Functional Connectivity Atlas.

    Directory of Open Access Journals (Sweden)

    Gonzalo M Rojas

    2014-11-01

    Full Text Available Effective visualization is central to the exploration and comprehension of brain imaging data. While MRI data are acquired in three-dimensional space, the methods for visualizing such data have rarely taken advantage of three-dimensional stereoscopic technologies. We present here results of stereoscopic visualization of clinical data, as well as an atlas of whole-brain functional connectivity. In comparison with traditional 3D rendering techniques, we demonstrate the utility of stereoscopic visualizations to provide an intuitive description of the exact location and the relative sizes of various brain landmarks, structures and lesions. In the case of resting state fMRI, stereoscopic 3D visualization facilitated comprehension of the anatomical position of complex large-scale functional connectivity patterns. Overall, stereoscopic visualization improves the intuitive visual comprehension of image contents, and brings increased dimensionality to visualization of traditional MRI data, as well as patterns of functional connectivity.

  13. A method of camera calibration in the measurement process with reference mark for approaching observation space target

    Science.gov (United States)

    Zhang, Hua; Zeng, Luan

    2017-11-01

    Binocular stereoscopic vision can be used for space-based close-range observation of space targets. To solve the problem that a traditional binocular vision system cannot work normally after being disturbed, an online self-referencing calibration method for a binocular stereo measurement camera is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object at the edge of the main optical path, imaged with the target on the same focal plane, which is equivalent to placing a standard reference inside the binocular imaging optical system. When the position of the system or the imaging device parameters are disturbed, the image of the standard reference changes accordingly in the imaging plane while the position of the standard reference object itself does not change. The camera's external parameters can then be re-calibrated from the visual relationship to the standard reference object. The experimental results show that the maximum mean square error for the same object can be reduced from 72.88 mm to 1.65 mm when the right camera is deflected by 0.4° and the left camera is rotated by 0.2° in elevation. This method realizes online calibration of a binocular stereoscopic vision measurement system and can effectively improve the anti-jamming ability of the system.
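
The orientation re-calibration described in this record can be illustrated, under assumptions, with an orthogonal Procrustes (Kabsch) fit: given bearing vectors to the fixed standard reference marks before and after a disturbance, the camera's rotation is recovered in closed form via SVD. This is a generic sketch, not the authors' algorithm; the reference directions and the pure-rotation disturbance are assumed.

```python
import numpy as np

def recover_rotation(dirs_before, dirs_after):
    """Estimate the rotation R mapping reference-mark bearing vectors
    seen before a disturbance onto those seen after it, via the
    SVD-based Kabsch / orthogonal-Procrustes solution."""
    H = dirs_before.T @ dirs_after                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Force a proper rotation (det = +1), guarding against reflections.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T

def rot_y(deg):
    """Rotation about the y axis by `deg` degrees."""
    a = np.radians(deg)
    return np.array([[np.cos(a), 0.0, np.sin(a)],
                     [0.0, 1.0, 0.0],
                     [-np.sin(a), 0.0, np.cos(a)]])

# Synthetic check: unit bearings to four reference marks, camera deflected 0.4 deg.
before = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0],
                   [0.0, 0.1, 1.0], [0.1, 0.1, 1.0]])
before /= np.linalg.norm(before, axis=1, keepdims=True)
R_true = rot_y(0.4)                                # the unknown disturbance
after = before @ R_true.T                          # bearings after deflection

R_est = recover_rotation(before, after)            # recovered extrinsic rotation
```

With noise-free bearings the recovered rotation matches the applied 0.4° deflection exactly, which is the sense in which the fixed reference re-anchors the external parameters.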

  14. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera that withstands a total dose of 10⁶ - 10⁸ rad was developed. To develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was carried out. A vidicon tube was chosen as the image sensor, and non-browning optics and a camera driving circuit were applied. The controller needed for the CCTV camera system (lens, light, pan/tilt controller) was designed for remote control. Two types of radiation-tolerant camera were fabricated, for use in underwater or normal environments. (author)

  15. Commercialization of radiation tolerant camera

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation-tolerant camera that withstands a total dose of 10⁶ - 10⁸ rad was developed. To develop the radiation-tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was carried out. A vidicon tube was chosen as the image sensor, and non-browning optics and a camera driving circuit were applied. The controller needed for the CCTV camera system (lens, light, pan/tilt controller) was designed for remote control. Two types of radiation-tolerant camera were fabricated, for use in underwater or normal environments. (author)

  16. Selective-imaging camera

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at firmware level. The design is consistent with physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for the Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Source Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.
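
The recomposition step above shifts Planck blackbody spectra according to per-pixel Kelvin temperatures. As a hedged illustration of the underlying physics (not the firmware described in the paper), a blackbody's spectral radiance and its Wien peak can be computed directly:

```python
import numpy as np

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m / s
K = 1.380649e-23     # Boltzmann constant, J / K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T) in W / (m^2 sr m).
    expm1 keeps the denominator accurate for small exponents."""
    x = H * C / (wavelength_m * K * temp_k)
    return (2.0 * H * C**2 / wavelength_m**5) / np.expm1(x)

# Peak emission wavelength for a sun-like (~5800 K) source vs. a ~1200 K flame:
wl = np.linspace(100e-9, 20e-6, 200000)            # 100 nm .. 20 um grid
peak_sun = wl[np.argmax(planck_radiance(wl, 5800.0))]
peak_flame = wl[np.argmax(planck_radiance(wl, 1200.0))]
```

The peaks follow Wien's displacement law (lambda_max ≈ 2.898e-3 / T): roughly 500 nm for the sun-like source and 2.4 µm for the flame, which is why hot obscurants dominate different FEM bands than the visible scene.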

  17. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    A positron emission tomography camera having a plurality of detector rings positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom. Each detector ring or offset ring includes a plurality of photomultiplier tubes, and a plurality of scintillation crystals are positioned relative to the photomultiplier tubes whereby each tube is responsive to more than one crystal. Each alternate crystal in the ring is offset by one-half or less of the thickness of the crystal such that the staggered crystals are seen by more than one photomultiplier tube. This sharing of crystals and photomultiplier tubes allows identification of the staggered crystal and the use of smaller detectors shared by larger photomultiplier tubes, thereby requiring fewer photomultiplier tubes, creating more scanning slices, providing better data sampling, and reducing the cost of the camera. The offset detector ring geometry reduces the costs of the positron camera and improves its performance

  18. The Impact of Stereoscopic Imagery and Motion on Anatomical Structure Recognition and Visual Attention Performance

    Science.gov (United States)

    Remmele, Martin; Schmidt, Elena; Lingenfelder, Melissa; Martens, Andreas

    2018-01-01

    Gross anatomy is located in a three-dimensional space. Visualizing aspects of structures in gross anatomy education should aim to provide information that best resembles their original spatial proportions. Stereoscopic three-dimensional imagery might offer possibilities to implement this aim, though some research has revealed potential impairments…

  19. What is 3D good for? A review of human performance on stereoscopic 3D displays

    Science.gov (United States)

    McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.

    2012-06-01

    This work reviews the human factors-related literature on the task performance implications of stereoscopic 3D displays, in order to point out the specific performance benefits (or lack thereof) one might reasonably expect to observe when utilizing these displays. What exactly is 3D good for? Relative to traditional 2D displays, stereoscopic displays have been shown to enhance performance on a variety of depth-related tasks. These tasks include judging absolute and relative distances, finding and identifying objects (by breaking camouflage and eliciting perceptual "pop-out"), performing spatial manipulations of objects (object positioning, orienting, and tracking), and navigating. More cognitively, stereoscopic displays can improve the spatial understanding of 3D scenes or objects, improve memory/recall of scenes or objects, and improve learning of spatial relationships and environments. However, for tasks that are relatively simple, that do not strictly require depth information for good performance, where other strong cues to depth can be utilized, or for depth tasks that lie outside the effective viewing volume of the display, the purported performance benefits of 3D may be small or altogether absent. Stereoscopic 3D displays come with a host of unique human factors problems, including the simulator-sickness-type symptoms of eyestrain, headache, fatigue, disorientation, nausea, and malaise, which appear to affect large numbers of viewers (perhaps as many as 25% to 50% of the general population). Thus, 3D technology should be wielded delicately and applied carefully, and perhaps used only as necessary to ensure good performance.

  20. Organizational Learning Goes Virtual?: A Study of Employees' Learning Achievement in Stereoscopic 3D Virtual Reality

    Science.gov (United States)

    Lau, Kung Wong

    2015-01-01

    Purpose: This study aims to deepen understanding of the use of stereoscopic 3D technology (stereo3D) in facilitating organizational learning. The emergence of advanced virtual technologies, in particular to the stereo3D virtual reality, has fundamentally changed the ways in which organizations train their employees. However, in academic or…

  1. Novel microscope-integrated stereoscopic heads-up display for intrasurgical optical coherence tomography

    Science.gov (United States)

    Shen, Liangbo; Carrasco-Zevallos, Oscar; Keller, Brenton; Viehland, Christian; Waterman, Gar; Hahn, Paul S.; Kuo, Anthony N.; Toth, Cynthia A.; Izatt, Joseph A.

    2016-01-01

    Intra-operative optical coherence tomography (OCT) requires a display technology which allows surgeons to visualize OCT data without disrupting surgery. Previous research and commercial intrasurgical OCT systems have integrated heads-up display (HUD) systems into surgical microscopes to provide monoscopic viewing of OCT data through one microscope ocular. To take full advantage of our previously reported real-time volumetric microscope-integrated OCT (4D MIOCT) system, we describe a stereoscopic HUD which projects a stereo pair of OCT volume renderings into both oculars simultaneously. The stereoscopic HUD uses a novel optical design employing spatial multiplexing to project dual OCT volume renderings utilizing a single micro-display. The optical performance of the surgical microscope with the HUD was quantitatively characterized and the addition of the HUD was found not to substantially affect the resolution, field of view, or pincushion distortion of the operating microscope. In a pilot depth perception subject study, five ophthalmic surgeons completed a pre-set dexterity task with 50.0% (SD = 37.3%) higher success rate and in 35.0% (SD = 24.8%) less time on average with stereoscopic OCT vision compared to monoscopic OCT vision. Preliminary experience using the HUD in 40 vitreo-retinal human surgeries by five ophthalmic surgeons is reported, in which all surgeons reported that the HUD did not alter their normal view of surgery and that live surgical maneuvers were readily visible in displayed stereoscopic OCT volumes. PMID:27231616

  2. Subjective experiences of watching stereoscopic Avatar and U2 3D in a cinema

    Science.gov (United States)

    Pölönen, Monika; Salmimaa, Marja; Takatalo, Jari; Häkkinen, Jukka

    2012-01-01

    A stereoscopic 3-D version of the film Avatar was shown to 85 people who subsequently answered questions related to sickness, visual strain, stereoscopic image quality, and sense of presence. Viewing Avatar for 165 min induced some symptoms of visual strain and sickness, but the symptom levels remained low. A comparison between Avatar and previously published results for the film U2 3D showed that sickness and visual strain levels were similar despite the films' different runtimes. The genre of the film had a significant effect on the viewers' opinions and sense of presence. Avatar, which has been described as a combination of action, adventure, and sci-fi genres, was experienced as more immersive and engaging than the music documentary U2 3D. However, participants in both studies were immersed, focused, and absorbed in watching the stereoscopic 3-D (S3-D) film and were pleased with the film environments. The results also showed that previous stereoscopic 3-D experience significantly reduced the amount of reported eye strain and complaints about the weight of the viewing glasses.

  3. Stereoscopic Vascular Models of the Head and Neck: A Computed Tomography Angiography Visualization

    Science.gov (United States)

    Cui, Dongmei; Lynch, James C.; Smith, Andrew D.; Wilson, Timothy D.; Lehman, Michael N.

    2016-01-01

    Computer-assisted 3D models are used in some medical and allied health science schools; however, they are often limited to online use and 2D flat screen-based imaging. Few schools take advantage of 3D stereoscopic learning tools in anatomy education and clinically relevant anatomical variations when teaching anatomy. A new approach to teaching…

  4. Interaksi pada Museum Virtual Menggunakan Pengindera Tangan dengan Penyajian Stereoscopic 3D

    Directory of Open Access Journals (Sweden)

    Gary Almas Samaita

    2017-01-01

    Full Text Available Technological advances have led museums to develop new ways of presenting their collections. One technology adopted for virtual museum presentation is Virtual Reality (VR) with stereoscopic 3D. Unfortunately, virtual museums with stereoscopic presentation still use keyboard and mouse as interaction devices. This research aims to design and implement hand-tracking interaction in a virtual museum presented in stereoscopic 3D. The virtual museum is visualized with the side-by-side stereoscopic technique through an Android-based Head Mounted Display (HMD). The HMD also provides head tracking by reading head orientation. Hand interaction is implemented using a hand-tracking sensor mounted on the HMD. Because the hand-tracking sensor is not supported by the Android-based HMD, a server is used as an intermediary between the HMD and the sensor. Testing showed that the average confidence rate of the hand-tracking readings for the hand gestures that trigger interactions was 99.92%, with an average effectiveness of 92.61%. A usability test based on ISO/IEC 9126-4 was also conducted to measure the effectiveness, efficiency, and user satisfaction of the designed system, by asking participants to perform 9 tasks representing hand interactions in the virtual museum. The results showed that all designed hand gestures could be performed by the participants, although the gestures were judged fairly difficult to perform. A questionnaire showed that in total 86.67% of participants agreed that hand interaction provides a new experience in enjoying a virtual museum.
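
The side-by-side stereoscopic presentation mentioned in this record packs the left- and right-eye views into one frame, each squeezed to half width; the HMD then expands one half per eye. A minimal sketch (the frame size and the naive column-dropping squeeze are assumptions, not details from the paper):

```python
import numpy as np

def to_side_by_side(left, right):
    """Pack two full-resolution eye views (H x W x 3) into one
    side-by-side frame of the same total size: each view is squeezed
    to half width by dropping every other column, then concatenated."""
    assert left.shape == right.shape
    half_l = left[:, ::2, :]     # naive horizontal downsample of left eye
    half_r = right[:, ::2, :]    # naive horizontal downsample of right eye
    return np.concatenate([half_l, half_r], axis=1)

# Example: two 1920x1080 eye renders -> one 1920x1080 SBS frame for the HMD.
left = np.zeros((1080, 1920, 3), dtype=np.uint8)         # dark left view
right = np.full((1080, 1920, 3), 255, dtype=np.uint8)    # bright right view
sbs = to_side_by_side(left, right)
```

A production renderer would filter before decimating to avoid aliasing; the column drop here just keeps the geometry of the format visible.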

  5. An exploration of the initial effects of stereoscopic displays on optometric parameters

    NARCIS (Netherlands)

    Fortuin, M.F.; Lambooij, M.T.M.; IJsselsteijn, W.A.; Heynderickx, I.E.J.; Edgar, D.F.; Evans, B.J.W.

    2011-01-01

    PURPOSE: To compare the effect on optometric variables of reading text presented in 2-D and 3-D on two types of stereoscopic display. METHODS: This study measured changes in binocular visual acuity, fixation disparity, aligning prism, heterophoria, horizontal fusional reserves, prism facility and

  6. Novel microscope-integrated stereoscopic heads-up display for intrasurgical optical coherence tomography.

    Science.gov (United States)

    Shen, Liangbo; Carrasco-Zevallos, Oscar; Keller, Brenton; Viehland, Christian; Waterman, Gar; Hahn, Paul S; Kuo, Anthony N; Toth, Cynthia A; Izatt, Joseph A

    2016-05-01

    Intra-operative optical coherence tomography (OCT) requires a display technology which allows surgeons to visualize OCT data without disrupting surgery. Previous research and commercial intrasurgical OCT systems have integrated heads-up display (HUD) systems into surgical microscopes to provide monoscopic viewing of OCT data through one microscope ocular. To take full advantage of our previously reported real-time volumetric microscope-integrated OCT (4D MIOCT) system, we describe a stereoscopic HUD which projects a stereo pair of OCT volume renderings into both oculars simultaneously. The stereoscopic HUD uses a novel optical design employing spatial multiplexing to project dual OCT volume renderings utilizing a single micro-display. The optical performance of the surgical microscope with the HUD was quantitatively characterized and the addition of the HUD was found not to substantially affect the resolution, field of view, or pincushion distortion of the operating microscope. In a pilot depth perception subject study, five ophthalmic surgeons completed a pre-set dexterity task with 50.0% (SD = 37.3%) higher success rate and in 35.0% (SD = 24.8%) less time on average with stereoscopic OCT vision compared to monoscopic OCT vision. Preliminary experience using the HUD in 40 vitreo-retinal human surgeries by five ophthalmic surgeons is reported, in which all surgeons reported that the HUD did not alter their normal view of surgery and that live surgical maneuvers were readily visible in displayed stereoscopic OCT volumes.

  7. Stereoscopic 3D display with dynamic optical correction for recovering from asthenopia

    Science.gov (United States)

    Shibata, Takashi; Kawai, Takashi; Otsuki, Masaki; Miyake, Nobuyuki; Yoshihara, Yoshihiro; Iwasaki, Tsuneto

    2005-03-01

    The purpose of this study was to consider a practical application of a newly developed stereoscopic 3-D display that solves the problem of discrepancy between accommodation and convergence. The display uses dynamic optical correction to reduce the discrepancy, and can present images as if they are actually remote objects. The authors thought the display may assist in recovery from asthenopia, which is often caused when the eyes focus on a nearby object for a long time, such as in VDT (Visual Display Terminal) work. In general, recovery from asthenopia, and especially accommodative asthenopia, is achieved by focusing on distant objects. In order to verify this hypothesis, the authors performed visual acuity tests using Landolt rings before and after presenting stereoscopic 3-D images, and evaluated the degree of recovery from asthenopia. The experiment led to three main conclusions: (1) Visual acuity rose after viewing stereoscopic 3-D images on the developed display. (2) Recovery from asthenopia was particularly effective for the dominant eye in comparison with the other eye. (3) Interviews with the subjects indicated that the Landolt rings were particularly clear after viewing the stereoscopic 3-D images.

  8. Measurement of mean rotation and strain-rate tensors by using stereoscopic PIV

    DEFF Research Database (Denmark)

    Özcan, Oktay; Meyer, Knud Erik; Larsen, Poul Scheel

    2005-01-01

    A technique is described for measuring the mean velocity gradient (rate-of-displacement) tensor by using a conventional stereoscopic particle image velocimetry (SPIV) system. Planar measurement of the mean vorticity vector, rate-of-rotation and rate-of-strain tensors and the production of turbule...
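
The rate-of-strain and rate-of-rotation tensors measured above are the symmetric and antisymmetric parts of the velocity-gradient tensor. A generic decomposition sketch (a synthetic solid-body-rotation gradient, not the SPIV data processing itself):

```python
import numpy as np

def strain_and_rotation(grad_u):
    """Split a velocity-gradient tensor G_ij = du_i/dx_j into the
    rate-of-strain tensor S = (G + G^T)/2 and the rate-of-rotation
    tensor W = (G - G^T)/2."""
    S = 0.5 * (grad_u + grad_u.T)
    W = 0.5 * (grad_u - grad_u.T)
    return S, W

# Example: solid-body rotation about z with angular velocity omega,
# u = (-omega*y, omega*x, 0): pure rotation, zero strain.
omega = 2.0
G = np.array([[0.0, -omega, 0.0],
              [omega,  0.0, 0.0],
              [0.0,    0.0, 0.0]])
S, W = strain_and_rotation(G)

# Vorticity vector from the antisymmetric part (equals curl u = (0, 0, 2*omega)).
vorticity = np.array([W[2, 1] - W[1, 2],
                      W[0, 2] - W[2, 0],
                      W[1, 0] - W[0, 1]])
```

In practice the gradient entries come from finite differences of the planar SPIV velocity fields; the tensor algebra afterwards is exactly this decomposition.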

  9. Stereoscopic PIV and POD applied to the far turbulent axisymmetric jet

    DEFF Research Database (Denmark)

    Wähnström, Maja; George, William K.; Meyer, Knud Erik

    2006-01-01

    here applies stereoscopic PIV to the far field of the same jet in which the mode-2 phenomenon was first noticed. Indeed azimuthal mode-1 is maximal if all three velocity components are considered, so the new findings are confirmed. This work also addresses a number of outstanding issues from all...

  10. Evaluation of stereoscopic medical video content on an autostereoscopic display for undergraduate medical education

    Science.gov (United States)

    Ilgner, Justus F. R.; Kawai, Takashi; Shibata, Takashi; Yamazoe, Takashi; Westhofen, Martin

    2006-02-01

    Introduction: An increasing number of surgical procedures are performed in a microsurgical and minimally-invasive fashion. However, the performance of surgery, its possibilities and limitations, has become difficult to teach. Stereoscopic video has evolved from a complex production process and expensive hardware towards rapid editing of video streams with standard and HDTV resolution which can be displayed on portable equipment. This study evaluates the usefulness of stereoscopic video in teaching undergraduate medical students. Material and methods: From an earlier study we chose two clips each of three different microsurgical operations (tympanoplasty type III of the ear, endonasal operation of the paranasal sinuses and laser chordectomy for carcinoma of the larynx). This material was supplemented with 23 clips of a cochlear implantation, specifically edited for a portable computer with an autostereoscopic display (PC-RD1-3D, SHARP Corp., Japan). The recording and synchronization of left and right images was performed at the University Hospital Aachen. The footage was edited stereoscopically at Waseda University by means of our original software for non-linear editing of stereoscopic 3-D movies. The material was then converted into the streaming 3-D video format. The purpose of the conversion was to present the video clips in a file type that does not depend on a television signal such as PAL or NTSC. 25 4th year medical students who participated in the general ENT course at Aachen University Hospital were asked to assess depth cues within the six video clips plus the cochlear implantation clips. Another 25 4th year students who were shown the material monoscopically on a conventional laptop served as a control group. Results: All participants noted that the additional depth information helped with understanding the relation of anatomical structures, even though none had hands-on experience with Ear, Nose and Throat operations before or during the course. The monoscopic

  11. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations — i.e. automatically controlling the virtual

  12. The world's fastest camera

    CERN Multimedia

    Piquepaille, Roland

    2006-01-01

    This image processor is not your typical digital camera. It took 20 people 6 years and $6 million to build the "Regional Calorimeter Trigger" (RCT), which will be a component of the Compact Muon Solenoid (CMS) experiment, one of the detectors on the Large Hadron Collider (LHC) in Geneva, Switzerland (1 page)

  13. Camera network video summarization

    Science.gov (United States)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network still remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l21-norm minimization. The objective function is two-fold. The first is to capture the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second is to use a capped l21-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both of the objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
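
A capped l21-norm representative selector of the general kind described above can be sketched with an iteratively reweighted ridge scheme. This is a simplified self-expressive variant under assumed parameters (`lam`, `theta`), omitting the paper's joint embedding:

```python
import numpy as np

def select_representatives(X, lam=0.1, theta=1.0, n_iter=50, k=3):
    """Pick k representative columns of X (features x samples) by
    approximately minimizing ||X - X Z||_F^2 + lam * sum_i min(||Z_i||_2, theta)
    with iteratively reweighted ridge updates (capped l21-norm surrogate)."""
    n = X.shape[1]
    G = X.T @ X
    Z = np.linalg.solve(G + lam * np.eye(n), G)      # plain ridge warm start
    for _ in range(n_iter):
        row_norms = np.linalg.norm(Z, axis=1)
        # Capped-l21 weights: rows whose norm exceeds the cap theta stop
        # being penalized; a tiny floor keeps the system well-conditioned.
        w = np.where(row_norms < theta,
                     1.0 / (2.0 * np.maximum(row_norms, 1e-8)), 1e-6)
        Z = np.linalg.solve(G + lam * np.diag(w), G)
    scores = np.linalg.norm(Z, axis=1)               # row energy = importance
    return np.argsort(scores)[::-1][:k]

# Toy camera-network "features": 12 samples drawn near 3 prototypes.
rng = np.random.default_rng(0)
protos = np.eye(3)
X = np.hstack([protos[:, [j]] + 0.01 * rng.standard_normal((3, 4))
               for j in range(3)])                   # shape (3, 12)
reps = select_representatives(X, k=3)
```

Samples whose rows of Z carry the most reconstruction weight are kept as the summary; the real method additionally optimizes the embedding jointly with Z.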

  14. SEISVIZ3D: Stereoscopic system for the representation of seismic data - Interpretation and Immersion

    Science.gov (United States)

    von Hartmann, Hartwig; Rilling, Stefan; Bogen, Manfred; Thomas, Rüdiger

    2015-04-01

    The seismic method is a valuable tool for getting 3D images of the subsurface. Seismic data acquisition today is not only a topic for oil and gas exploration but is also used for geothermal exploration, inspections of nuclear waste sites and for scientific investigations. The system presented in this contribution may also have an impact on the visualization of 3D data from other geophysical methods. 3D seismic data can be displayed in different ways to give a spatial impression of the subsurface. They are a combination of individual vertical cuts, possibly linked to a cubical portion of the data volume, and the stereoscopic view of the seismic data. By these methods, the spatial perception of the structures, and thus of the processes in the subsurface, should be increased. Stereoscopic techniques are implemented e.g. in the CAVE and the WALL, both of which require a lot of space and high technical effort. The aim of the interpretation system shown here is stereoscopic visualization of seismic data at the workplace, i.e. at the personal workstation and monitor. The system was developed with the following criteria in mind: • Fast rendering of large amounts of data so that a continuous view of the data when changing the viewing angle and the data section is possible, • defining areas in stereoscopic view to translate the spatial impression directly into an interpretation, • the development of an appropriate user interface, including head-tracking, for handling the increased degrees of freedom, • the possibility of collaboration, i.e. teamwork and idea exchange with the simultaneous viewing of a scene at remote locations. The possibilities offered by the use of a stereoscopic system do not replace a conventional interpretation workflow. Rather they have to be implemented into it as an additional step. The amplitude distribution of the seismic data is a challenge for the stereoscopic display because the opacity level and the scaling and selection of the data have to

  15. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
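
The coordinate transformation for the retina-like sensor's special pixel distribution can be illustrated, assuming a log-polar layout (ring radii spaced exponentially, sectors uniform in angle), with bilinear sub-pixel interpolation. The ring/sector counts below are hypothetical, not the actual sensor's geometry:

```python
import numpy as np

def bilinear(img, y, x):
    """Sample img at fractional (y, x) with bilinear (sub-pixel) interpolation."""
    h, w = img.shape
    y0 = int(np.clip(np.floor(y), 0, h - 1))
    x0 = int(np.clip(np.floor(x), 0, w - 1))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    fy, fx = y - y0, x - x0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot

def to_retina(img, n_rings=32, n_sectors=64, r_min=2.0):
    """Resample a square image onto a retina-like grid: ring radii grow
    exponentially from the center, sectors are spaced uniformly in angle."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    out = np.zeros((n_rings, n_sectors))
    for i, r in enumerate(radii):
        for j in range(n_sectors):
            a = 2.0 * np.pi * j / n_sectors
            out[i, j] = bilinear(img, cy + r * np.sin(a), cx + r * np.cos(a))
    return out

ret = to_retina(np.full((65, 65), 7.0))   # uniform image -> uniform retina map
```

The inverse (retina-to-Cartesian) mapping for display uses the same interpolation in the opposite direction.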

  16. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1986-01-01

    A positron emission tomography camera having a plurality of detector rings positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom. Each ring contains a plurality of scintillation detectors which are positioned around an inner circumference with a septum ring extending inwardly from the inner circumference along each outer edge of each ring. An additional septum ring is positioned in the middle of each ring of detectors and parallel to the other septa rings, whereby the inward extent of all the septa rings may be reduced by one-half and the number of detectors required in each ring is reduced. The additional septa reduces the costs of the positron camera and improves its performance

  17. Gamma ray camera

    International Nuclear Information System (INIS)

    Wang, S.-H.; Robbins, C.D.

    1979-01-01

    An Anger gamma ray camera is improved by the substitution of a gamma ray sensitive, proximity type image intensifier tube for the scintillator screen in the Anger camera. The image intensifier tube has a negatively charged flat scintillator screen, a flat photocathode layer, and a grounded, flat output phosphor display screen, all of which have the same dimension to maintain unit image magnification; all components are contained within a grounded metallic tube, with a metallic, inwardly curved input window between the scintillator screen and a collimator. The display screen can be viewed by an array of photomultipliers or solid state detectors. There are two photocathodes and two phosphor screens to give a two-stage intensification, the two stages being optically coupled by a light guide. (author)

  18. NSTX Tangential Divertor Camera

    International Nuclear Information System (INIS)

    Roquemore, A.L.; Ted Biewer; Johnson, D.; Zweben, S.J.; Nobuhiro Nishino; Soukhanovskii, V.A.

    2004-01-01

    Strong magnetic field shear around the divertor x-point is numerically predicted to lead to strong spatial asymmetries in turbulence driven particle fluxes. To visualize the turbulence and associated impurity line emission near the lower x-point region, a new tangential observation port has been recently installed on NSTX. A reentrant sapphire window with a moveable in-vessel mirror images the divertor region from the center stack out to R 80 cm and views the x-point for most plasma configurations. A coherent fiber optic bundle transmits the image through a remotely selected filter to a fast camera, for example a 40500 frames/sec Photron CCD camera. A gas puffer located in the lower inboard divertor will localize the turbulence in the region near the x-point. Edge fluid and turbulent codes UEDGE and BOUT will be used to interpret impurity and deuterium emission fluctuation measurements in the divertor

  19. Scanning gamma camera

    International Nuclear Information System (INIS)

    Engdahl, L.W.; Batter, J.F. Jr.; Stout, K.J.

    1977-01-01

    A scanning system for a gamma camera providing for the overlapping of adjacent scan paths is described. A collimator mask having tapered edges provides for a graduated reduction in intensity of radiation received by a detector thereof, the reduction in intensity being graduated in a direction normal to the scanning path to provide a blending of images of adjacent scan paths. 31 claims, 15 figures
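
The graduated blending of adjacent scan paths can be sketched numerically: each strip's sensitivity is flat in the middle and linearly tapered at the edges, so that in the overlap the falling ramp of one strip and the rising ramp of the next sum to unity. The strip width and taper length below are assumptions, not values from the patent:

```python
import numpy as np

def taper_weight(width, taper):
    """Per-position sensitivity of one scan strip: flat in the middle,
    linearly ramped over `taper` samples at each edge (the effect of a
    collimator mask with tapered edges)."""
    w = np.ones(width)
    ramp = (np.arange(taper) + 0.5) / taper
    w[:taper] = ramp          # rising edge
    w[-taper:] = ramp[::-1]   # falling edge
    return w

width, taper = 16, 4
stride = width - taper        # adjacent strips overlap by `taper` samples
w = taper_weight(width, taper)

# Two adjacent overlapping strips: in the overlap region the falling ramp
# of strip 1 and the rising ramp of strip 2 sum to full intensity.
total = np.zeros(stride + width)
total[:width] += w
total[stride:] += w
```

Everywhere inside the scanned area the summed sensitivity is exactly 1, which is the image-blending property the tapered mask provides.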

  20. Gamma camera display system

    International Nuclear Information System (INIS)

    Stout, K.J.

    1976-01-01

    A gamma camera having an array of photomultipliers coupled via pulse shaping circuitry and a resistor weighting circuit to a display for forming an image of a radioactive subject is described. A linearizing circuit is coupled to the weighting circuit, the linearizing circuit including a nonlinear feedback circuit with diode coupling to the weighting circuit for linearizing the correspondence between points of the display and points of the subject. 4 Claims, 5 Drawing Figures

  1. Comparison of polarimetric cameras

    Science.gov (United States)

    2017-03-01

    [Report documentation page; full abstract not recoverable.] Keywords: polarimetric camera, remote sensing, space systems. Surviving fragments note data collections in 2016 at Hermann Hall, Monterey, CA, and on 01 December 2016 at 1226 PST on the rooftop of the Marriot Hotel in Monterey.

  2. Stereoscopic measurements of particle dispersion in microgravity turbulent flow

    Science.gov (United States)

    Groszmann, Daniel Eduardo

    2001-08-01

    The presence of particles in turbulent flows adds complexity to an already difficult subject. The work described in this research dissertation was intended to characterize the effects of inertia, isolated from gravity, on the dispersion of solid particles in a turbulent air flow. The experiment consisted of releasing particles of various sizes in an enclosed box of fan-generated, homogeneous, isotropic, and stationary turbulent airflow and examining the particle behavior in a microgravity environment. The turbulence box was characterized in ground-based experiments using laser Doppler velocimetry techniques. Microgravity was established by free-floating the experiment apparatus during the parabolic trajectory of NASA's KC-135 reduced gravity aircraft. The microgravity generally lasted about 20 seconds, with about fifty parabolas per flight and one flight per day over a testing period of four days. To cover a broad range of flow regimes of interest, particles with Stokes numbers (St) of 1 to 300 were released in the turbulence box. The three-dimensional measurements of particle motion were made using a three-camera stereo imaging system with a particle-tracking algorithm. Digital photogrammetric techniques were used to determine the particle locations in three-dimensional space from the calibrated camera images. The epipolar geometry constraint was used to identify matching particles from the three different views and a direct spatial intersection scheme determined the coordinates of particles in three-dimensional space. Using velocity and acceleration constraints, particles in a sequence of frames were matched resulting in particle tracks and dispersion measurements. The goal was to compare the dispersion of different Stokes number particles in zero gravity and decouple the effects of inertia and gravity on the dispersion. Results show that higher inertia particles disperse less in zero gravity, in agreement with current models. Particles with St ~ 200
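
The epipolar matching step mentioned above can be sketched as follows: a detection in one view is a candidate match for a detection in another view only if the second detection lies close to the epipolar line l' = F x. The fundamental matrix below is that of an idealized rectified (side-by-side) camera pair, not the flight rig's actual geometry:

```python
import math

# Hypothetical sketch of epipolar-constraint matching: a point in one view may
# correspond to a candidate in a second view only if the candidate lies close
# to the epipolar line l' = F x. F here is the fundamental matrix of an
# idealized rectified camera pair (matches share a scanline), for illustration.

F = [[0.0, 0.0, 0.0],
     [0.0, 0.0, -1.0],
     [0.0, 1.0, 0.0]]

def epipolar_distance(F, x1, x2):
    """Distance (pixels) from x2=(u,v) to the epipolar line of x1=(u,v)."""
    u, v = x1
    a = F[0][0] * u + F[0][1] * v + F[0][2]
    b = F[1][0] * u + F[1][1] * v + F[1][2]
    c = F[2][0] * u + F[2][1] * v + F[2][2]
    return abs(a * x2[0] + b * x2[1] + c) / math.hypot(a, b)

def match_candidates(F, x1, candidates, tol=1.5):
    return [x2 for x2 in candidates if epipolar_distance(F, x1, x2) <= tol]

print(match_candidates(F, (120.0, 40.0), [(90.0, 40.4), (90.0, 75.0)]))
# keeps (90.0, 40.4): same scanline within tolerance
```

With three views, candidates surviving the pairwise constraint are then triangulated by the spatial intersection scheme the abstract describes.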

  3. No-reference stereoscopic image quality measurement based on generalized local ternary patterns of binocular energy response

    International Nuclear Information System (INIS)

    Zhou, Wujie; Yu, Lu

    2015-01-01

    Perceptual no-reference (NR) quality measurement of stereoscopic images has become a challenging issue in three-dimensional (3D) imaging fields. In this article, we propose an efficient binocular quality-aware features extraction scheme, namely generalized local ternary patterns (GLTP) of binocular energy response, for general-purpose NR stereoscopic image quality measurement (SIQM). More specifically, we first construct the binocular energy response of a distorted stereoscopic image with different stimuli of amplitude and phase shifts. Then, the binocular quality-aware features are generated from the GLTP of the binocular energy response. Finally, these features are mapped to the subjective quality score of the distorted stereoscopic image by using support vector regression. Experiments on two publicly available 3D databases confirm the effectiveness of the proposed metric compared with the state-of-the-art full-reference and NR metrics. (paper)
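
For orientation, here is the basic local ternary pattern (LTP) coding that GLTP generalizes: each neighbor of a pixel is coded +1/0/-1 against the center value with a tolerance t. The patch values and tolerance are made up; the paper applies its generalized variant to the binocular energy response rather than raw intensities:

```python
# Minimal sketch of basic local ternary pattern (LTP) coding, the scheme that
# GLTP generalizes. Patch values and threshold t are illustrative only.

def ltp_codes(patch, t=5):
    """Ternary codes of the 8 neighbors of the 3x3 patch center (clockwise)."""
    c = patch[1][1]
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return [1 if p >= c + t else (-1 if p <= c - t else 0) for p in neighbors]

patch = [[60, 52, 48],
         [55, 50, 40],
         [50, 57, 46]]
print(ltp_codes(patch))  # -> [1, 0, 0, -1, 0, 1, 0, 1]
```

Histograms of such codes over an image form the quality-aware feature vector that is then fed to the support vector regressor.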

  4. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Toshiba has long manufactured black-and-white radiation-resistant camera tubes employing nonbrowning face-plate glass for ITV cameras used in nuclear power plants. Now, to meet the increasing demand in the nuclear power field, the company is developing radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. Presented here are the results of experiments on the characteristics of materials for single color-camera tubes and the prospects for commercialization of the tubes. (author)

  5. In-line phase-contrast stereoscopic X-ray imaging for radiological purposes: An initial experimental study

    International Nuclear Information System (INIS)

    Siegbahn, E.A.; Coan, P.; Zhou, S.-A.; Bravin, A.; Brahme, A.

    2011-01-01

    We report results from a pilot study in which the in-line propagation-based phase-contrast imaging technique is combined with the stereoscopic method. Two phantoms were imaged at several sample-detector distances using monochromatic, 30 keV, X-rays. High contrast- and spatial-resolution phase-contrast stereoscopic pairs of X-ray images were constructed using the anaglyph approach and a vivid stereoscopic effect was demonstrated. On the other hand, images of the same phantoms obtained with a shorter sample-to-detector distance, but otherwise the same experimental conditions (i.e. the same X-ray energy and absorbed radiation dose), corresponding to the conventional attenuation-based imaging mode, hardly revealed stereoscopic effects because of the lower image contrast produced. These results have confirmed our hypothesis that stereoscopic X-ray images of samples with objects composed of low-atomic-number elements are considerably improved if phase-contrast imaging is used. It is our belief that the high-resolution phase-contrast stereoscopic method will be a valuable new medical imaging tool for radiologists and that it will be of help to enhance the diagnostic capability in the examination of patients in future clinical practice, even though further efforts will be needed to optimize the system performance.

  6. Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?

    Science.gov (United States)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores graphical design and spatial alignment of visual information and graphical elements into stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer- and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work shall further help to improve current editing systems and identifies a need for future editing systems for 3DTV, e.g., live editing and real-time alignment of visual information into 3D footage.

  7. Interactive and Stereoscopic Hybrid 3D Viewer of Radar Data with Gesture Recognition

    Science.gov (United States)

    Goenetxea, Jon; Moreno, Aitor; Unzueta, Luis; Galdós, Andoni; Segura, Álvaro

    This work presents an interactive and stereoscopic 3D viewer of weather information coming from a Doppler radar. The hybrid system shows a GIS model of the regional zone where the radar is located and the corresponding reconstructed 3D volume weather data. To enhance the immersiveness of the navigation, stereoscopic visualization has been added to the viewer, using a polarized glasses based system. The user can interact with the 3D virtual world using a Nintendo Wiimote for navigating through it and a Nintendo Wii Nunchuk for giving commands by means of hand gestures. We also present a dynamic gesture recognition procedure that measures the temporal advance of the performed gesture postures. Experimental results show how dynamic gestures are effectively recognized so that a more natural interaction and immersive navigation in the virtual world is achieved.

  8. Application of a stereoscopic digital subtraction angiography approach to blood flow analysis

    International Nuclear Information System (INIS)

    Fencil, L.E.; Doi, K.; Hoffmann, K.R.

    1986-01-01

    The authors are developing a stereoscopic digital subtraction angiographic (DSA) approach for accurate measurement of the size, magnification factor, orientation, and blood flow of a selected vessel segment. We employ a Siemens Digitron 2 and a Stereolix x-ray tube with a 25-mm tube shift. Absolute vessel sizes in each stereoscopic image are determined using the magnification factor and an iterative deconvolution technique employing the LSF of the DSA system. From data on vessel diameter and three-dimensional orientation, the effective attenuation coefficient of the diluted contrast medium can be determined, thus allowing accurate blood flow analysis in high-frame-rate DSA images. The accuracy and precision of the approach will be studied using both static and dynamic phantoms

  9. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    Science.gov (United States)

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

    Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content from color-plus-depth signals, a general framework for depth mapping to optimize visual comfort on S3D displays is proposed. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range that is suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on the adjusted zero-disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images.
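
The global stage of this idea can be sketched as a linear remap of the source depth range of a color-plus-depth frame into a target comfort range for the display. The comfort limits below are placeholders, not values from the paper, and the paper additionally optimizes the map locally:

```python
# Illustrative sketch of global depth remapping: linearly map the depth range
# of a color-plus-depth frame into a target comfort range for the display.
# Comfort limits are placeholder values, not taken from the paper.

def remap_depth(depth_map, src_range, comfort_range):
    d0, d1 = src_range
    c0, c1 = comfort_range
    scale = (c1 - c0) / (d1 - d0)
    return [[c0 + (d - d0) * scale for d in row] for row in depth_map]

depth = [[0.0, 128.0, 255.0]]           # one row of an 8-bit depth map
print(remap_depth(depth, (0.0, 255.0), (-10.0, 20.0)))
```

The endpoints of the source range land on the comfort limits, so the nearest and farthest scene planes sit at the edges of the display's comfort zone.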

  10. A novel no-reference objective stereoscopic video quality assessment method based on visual saliency analysis

    Science.gov (United States)

    Yang, Xinyan; Zhao, Wei; Ye, Long; Zhang, Qin

    2017-07-01

    This paper proposes a no-reference objective stereoscopic video quality assessment method, with the motivation of making the results of objective experiments close to those of subjective assessment. We believe that image regions with different degrees of visual salience should not have the same weights when designing an assessment metric. Therefore, we first apply the GBVS algorithm to each frame pair and separate both the left and right viewing images into regions with strong, general and weak saliency. Besides, local feature information such as blockiness, zero-crossing and depth is extracted and combined with a mathematical model to calculate a quality assessment score. Regions with different degrees of salience are assigned different weights in the mathematical model. Experiment results demonstrate the superiority of our method compared with existing state-of-the-art no-reference objective stereoscopic video quality assessment methods.
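
The saliency-weighted pooling idea can be sketched as follows: per-region quality scores are combined with larger weights for strongly salient regions. The weights and per-region scores below are illustrative; the paper derives the regions from GBVS maps and the scores from blockiness, zero-crossing and depth features:

```python
# Sketch of saliency-weighted quality pooling: strongly salient regions get
# larger weights. Weights and per-region scores below are illustrative only.

def weighted_quality(region_scores, weights):
    """Pool per-region scores; regions keyed 'strong'/'general'/'weak'."""
    total_w = sum(weights[r] for r in region_scores)
    return sum(weights[r] * s for r, s in region_scores.items()) / total_w

weights = {"strong": 0.5, "general": 0.25, "weak": 0.25}
scores = {"strong": 40.0, "general": 60.0, "weak": 80.0}
print(weighted_quality(scores, weights))  # -> 55.0
```

Lowering the quality of the strongly salient region drags the pooled score down more than the same degradation in a weakly salient region, which is the intended perceptual behavior.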

  11. Distortion of depth perception in virtual environments using stereoscopic displays: quantitative assessment and corrective measures

    Science.gov (United States)

    Kleiber, Michael; Winkelholz, Carsten

    2008-02-01

    The aim of the presented research was to quantify the distortion of depth perception when using stereoscopic displays. The visualization parameters of the virtual reality system used, such as perspective, haploscopic separation and width of stereoscopic separation, were varied. The experiment was designed to measure distortion in depth perception according to allocentric frames of reference. The results of the experiments indicate that some of the parameters have an antithetic effect, which makes it possible to compensate for the distortion of depth perception over a range of depths. In contrast to earlier research, which reported underestimation of perceived depth, we found that depth was overestimated when using true projection parameters according to the position of the user's eyes and the display geometry.

  12. The Effect of Stereoscopic ("3D") vs. 2D Presentation on Learning through Video and Film

    Science.gov (United States)

    Price, Aaron; Kasal, E.

    2014-01-01

    Two Eyes, 3D is an NSF-funded research project into the effects of stereoscopy on the learning of highly spatial concepts. We report final results from one study of the project, which tested the effect of stereoscopic presentation on learning outcomes of two short films about Type 1a supernovae and the morphology of the Milky Way. 986 adults watched either film, randomly distributed between stereoscopic and 2D presentation. They took a pre-test and post-test that included multiple-choice and drawing tasks related to the spatial nature of the topics in the film. Orientation of the answering device was also tracked, and a spatial cognition pre-test was given to control for prior spatial ability. Data collection took place at the Adler Planetarium's Space Visualization Lab, and the project is run through the AAVSO.

  13. An MR-compatible stereoscopic in-room 3D display for MR-guided interventions.

    Science.gov (United States)

    Brunner, Alexander; Groebner, Jens; Umathum, Reiner; Maier, Florian; Semmler, Wolfhard; Bock, Michael

    2014-08-01

    A commercial three-dimensional (3D) monitor was modified for use inside the scanner room to provide stereoscopic real-time visualization during magnetic resonance (MR)-guided interventions, and tested in a catheter-tracking phantom experiment at 1.5 T. Brightness, uniformity, radio frequency (RF) emissions and MR image interferences were measured. Due to modifications, the center luminance of the 3D monitor was reduced by 14%, and the addition of a Faraday shield further reduced the remaining luminance by 31%. RF emissions could be effectively shielded; only a minor signal-to-noise ratio (SNR) decrease of 4.6% was observed during imaging. During the tracking experiment, the 3D orientation of the catheter and vessel structures in the phantom could be visualized stereoscopically.

  14. Atomic structure of Fe thin-films on Cu(0 0 1) studied with stereoscopic photography

    International Nuclear Information System (INIS)

    Hattori, Azusa N.; Fujikado, M.; Uchida, T.; Okamoto, S.; Fukumoto, K.; Guo, F.Z.; Matsui, F.; Nakatani, K.; Matsushita, T.; Hattori, K.; Daimon, H.

    2004-01-01

    The complex magnetic properties of Fe films epitaxially grown on Cu(0 0 1) have been discussed in relation to their atomic structure. We have studied the Fe films on Cu(0 0 1) by a new direct method for three-dimensional (3D) atomic structure analysis, so-called 'stereoscopic photography'. The forward-focusing peaks in the photoelectron angular distribution pattern excited by the circularly polarized light rotate around the light axis in either clockwise or counterclockwise direction depending on the light helicity. By using a display-type spherical mirror analyzer for this phenomenon, we can obtain stereoscopic photographs of atomic structure. The photographs revealed that the iron structure changes from bcc to fcc and almost bcc structure with increasing iron film thickness

  15. Phase-only stereoscopic hologram calculation based on Gerchberg–Saxton iterative algorithm

    International Nuclear Information System (INIS)

    Xia Xinyi; Xia Jun

    2016-01-01

    A phase-only computer-generated holography (CGH) calculation method for stereoscopic holography is proposed in this paper. The two-dimensional (2D) perspective projection views of the three-dimensional (3D) object are generated by computer graphics rendering techniques. Based on these views, a phase-only hologram is calculated using the Gerchberg–Saxton (GS) iterative algorithm. Compared with the non-iterative algorithm in conventional stereoscopic holography, the proposed method improves the holographic image quality, especially for the phase-only hologram encoded from the complex distribution. Both simulation and optical experiment results demonstrate that our proposed method gives higher quality reconstruction than the traditional method. (special topic)
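
The core GS iteration alternates between the hologram plane (unit amplitude, free phase) and the image plane (target amplitude, free phase). A toy 1-D sketch with a naive unitary DFT; the size, target pattern and iteration count are illustrative, whereas the paper works on 2-D rendered views:

```python
import cmath
import math
import random

# Toy 1-D Gerchberg-Saxton sketch: iterate between the hologram plane (unit
# amplitude, free phase) and the image plane (target amplitude, free phase).
# Sizes, target and iteration count are illustrative only.

def dft(x, inverse=False):
    """Naive unitary discrete Fourier transform."""
    n = len(x)
    sgn = 1 if inverse else -1
    return [sum(x[m] * cmath.exp(sgn * 2j * math.pi * k * m / n)
                for m in range(n)) / math.sqrt(n) for k in range(n)]

def image_error(phases, target):
    """Squared error between the phase-only field's spectrum and the target."""
    field = [cmath.exp(1j * p) for p in phases]
    return sum((abs(f) - t) ** 2 for f, t in zip(dft(field), target))

def gerchberg_saxton(target, iters=100, seed=0):
    rng = random.Random(seed)
    field = [cmath.exp(2j * math.pi * rng.random()) for _ in target]
    for _ in range(iters):
        img = dft(field)                                   # to image plane
        img = [t * cmath.exp(1j * cmath.phase(v))          # impose amplitude
               for t, v in zip(target, img)]
        field = dft(img, inverse=True)                     # back to hologram
        field = [cmath.exp(1j * cmath.phase(v)) for v in field]  # phase-only
    return [cmath.phase(v) for v in field]

target = [2.0, 0.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0]  # energy matches N = 8
phases = gerchberg_saxton(target)
print(round(image_error(phases, target), 6))
```

Each iteration's amplitude replacement is a projection, so the image-plane error is non-increasing over iterations, which is the property the paper exploits to beat the non-iterative encoding.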

  16. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    section unearths what characterizes the literature on camera movement. The second section of the dissertation delineates the history of camera movement itself within narrative cinema. Several organizational principles subtending the on-screen effect of camera movement are revealed in section two...... but they are not organized into a coherent framework. This is the task that section three meets in proposing a functional taxonomy for camera movement in narrative cinema. Two presumptions subtend the taxonomy: That camera movement actively contributes to the way in which we understand the sound and images on the screen......, commentative or valuative manner. 4) Focalization: associating the movement of the camera with the viewpoints of characters or entities in the story world. 5) Reflexive: inviting spectators to engage with the artifice of camera movement. 6) Abstract: visualizing abstract ideas and concepts. In order...

  17. Full aperture imaging with stereoscopic properties in nuclear medicine

    International Nuclear Information System (INIS)

    Strocovsky, Sergio G.; Otero, D.

    2011-01-01

    The imaging techniques based on the gamma camera (GC) and used in nuclear medicine have low spatial resolution and low sensitivity due to the use of the collimator. However, this element is essential for the formation of images in the GC. The aim of this work is to show the principles of a new technique to overcome the limitations of existing GC-based techniques. Here, we present a Full Aperture Imaging (FAI) technique which is based on the edge-encoding of gamma radiation and differential detection. It takes advantage of the fact that gamma radiation is spatially incoherent. The mathematical principles and the method of image reconstruction with the newly proposed technique are explained in detail. The FAI technique is tested by means of Monte Carlo simulations with filiform and spherical sources. The results show that the FAI technique has greater sensitivity (>100 times) and greater spatial resolution (>2.6 times) than that of a GC with a LEHR collimator, in both cases with and without attenuating material, and in long- and short-distance configurations. The FAI decoding algorithm simultaneously reconstructs four different projections located in separate image fields on the detector plane, while the GC produces only one projection per acquisition. Simulations have allowed comparison of both techniques under identical ideal conditions. Our results show it is possible to apply an extremely simple encoded imaging technique and obtain three-dimensional radioactivity information for sources with simple geometry. The results are promising enough to evaluate the possibility of future research with more complex sources typical of nuclear medicine imaging. (author)

  18. Dynamic stereoscopic selective visual attention (dssva): integrating motion and shape with depth in video segmentation

    OpenAIRE

    López Bonal, María Teresa; Fernández Caballero, Antonio; Saiz Valverde, Sergio

    2008-01-01

    Depth inclusion as an important parameter for dynamic selective visual attention is presented in this article. The model introduced in this paper is based on two previously developed models, dynamic selective visual attention and visual stereoscopy, giving rise to the so-called dynamic stereoscopic selective visual attention method. The three models are based on the accumulative computation problem-solving method. This paper shows how software reusability enables enhancing results in vision r...

  19. The right view from the wrong location: depth perception in stereoscopic multi-user virtual environments.

    Science.gov (United States)

    Pollock, Brice; Burton, Melissa; Kelly, Jonathan W; Gilbert, Stephen; Winer, Eliot

    2012-04-01

    Stereoscopic depth cues improve depth perception and increase immersion within virtual environments (VEs). However, improper display of these cues can distort perceived distances and directions. Consider a multi-user VE, where all users view identical stereoscopic images regardless of physical location. In this scenario, cues are typically customized for one "leader" equipped with a head-tracking device. This user stands at the center of projection (CoP) and all other users ("followers") view the scene from other locations and receive improper depth cues. This paper examines perceived depth distortion when viewing stereoscopic VEs from follower perspectives and the impact of these distortions on collaborative spatial judgments. Pairs of participants made collaborative depth judgments of virtual shapes viewed from the CoP or after displacement forward or backward. Forward and backward displacement caused perceived depth compression and expansion, respectively, with greater compression than expansion. Furthermore, distortion was less than predicted by a ray-intersection model of stereo geometry. Collaboration times were significantly longer when participants stood at different locations compared to the same location, and increased with greater perceived depth discrepancy between the two viewing locations. These findings advance our understanding of spatial distortions in multi-user VEs, and suggest a strategy for reducing distortion.
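
The ray-intersection model discussed in this abstract can be sketched geometrically: the follower's eyes cast rays through the on-screen points of a stereo pair rendered for the CoP, and the rays' intersection gives the perceived point. All distances below are illustrative (meters); the screen is the plane z = 0 with depth positive behind it:

```python
# Geometric sketch of a ray-intersection model of stereo depth: a viewer's
# eyes cast rays through the on-screen points of a stereo pair rendered for
# the center of projection (CoP). All distances are illustrative (meters).

def screen_points(point_depth, viewer_dist, eye_sep):
    """Left/right screen x-coordinates of a midline point at given depth."""
    s = point_depth / (point_depth + viewer_dist)
    return -0.5 * eye_sep * s, 0.5 * eye_sep * s

def perceived_depth(x_left, x_right, viewer_dist, eye_sep):
    """Depth of the intersection of the two viewing rays (midline viewer)."""
    d = x_right - x_left                      # screen disparity
    return d * viewer_dist / (eye_sep - d)

EYE_SEP, COP_DIST, TRUE_DEPTH = 0.065, 2.0, 1.0
xl, xr = screen_points(TRUE_DEPTH, COP_DIST, EYE_SEP)

print(perceived_depth(xl, xr, COP_DIST, EYE_SEP))        # at the CoP: ~1.0
print(perceived_depth(xl, xr, COP_DIST - 0.5, EYE_SEP))  # forward: compressed
print(perceived_depth(xl, xr, COP_DIST + 0.5, EYE_SEP))  # backward: expanded
```

In this pure geometry, perceived depth scales with the viewer's distance from the screen, so forward displacement compresses and backward displacement expands, in qualitative agreement with the study; the study's finding that measured distortion is smaller than this model predicts is the interesting deviation.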

  20. Web GIS in practice VII: stereoscopic 3-D solutions for online maps and virtual globes

    Science.gov (United States)

    Boulos, Maged N.K.; Robinson, Larry R.

    2009-01-01

    Because our pupils are about 6.5 cm apart, each eye views a scene from a different angle and sends a unique image to the visual cortex, which then merges the images from both eyes into a single picture. The slight difference between the right and left images allows the brain to properly perceive the 'third dimension' or depth in a scene (stereopsis). However, when a person views a conventional 2-D (two-dimensional) image representation of a 3-D (three-dimensional) scene on a conventional computer screen, each eye receives essentially the same information. Depth in such cases can only be approximately inferred from visual clues in the image, such as perspective, as only one image is offered to both eyes. The goal of stereoscopic 3-D displays is to project a slightly different image into each eye to achieve a much truer and realistic perception of depth, of different scene planes, and of object relief. This paper presents a brief review of a number of stereoscopic 3-D hardware and software solutions for creating and displaying online maps and virtual globes (such as Google Earth) in "true 3D", with costs ranging from almost free to multi-thousand pounds sterling. A practical account is also given of the experience of the USGS BRD UMESC (United States Geological Survey's Biological Resources Division, Upper Midwest Environmental Sciences Center) in setting up a low-cost, full-colour stereoscopic 3-D system.
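
One of the cheapest solutions the review covers, red-cyan anaglyph, can be sketched in a few lines: the left view supplies the red channel and the right view supplies green and blue, so each eye, behind its matching filter, receives its own image. The pixel values below are illustrative 8-bit RGB triples:

```python
# Minimal sketch of red-cyan anaglyph composition: left view -> red channel,
# right view -> green and blue channels. Pixel values are illustrative.

def anaglyph(left_rgb, right_rgb):
    return [[(l[0], r[1], r[2]) for l, r in zip(lrow, rrow)]
            for lrow, rrow in zip(left_rgb, right_rgb)]

left  = [[(200, 10, 10), (90, 90, 90)]]
right = [[(10, 180, 220), (80, 85, 95)]]
print(anaglyph(left, right))  # -> [[(200, 180, 220), (90, 85, 95)]]
```

Color fidelity suffers, which is why the higher-cost solutions in the review (polarized or shuttered displays) keep full color in both channels.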

  1. Monoscopic versus stereoscopic photography in screening for clinically significant macular edema.

    Science.gov (United States)

    Welty, Christopher J; Agarwal, Anita; Merin, Lawrence M; Chomsky, Amy

    2006-01-01

    The purpose of the study was to determine whether monoscopic photography could serve as an accurate tool when used to screen for clinically significant macular edema. In a masked randomized fashion, two readers evaluated monoscopic and stereoscopic retinal photographs of 100 eyes. The photographs were evaluated first individually for probable clinically significant macular edema based on the Early Treatment Diabetic Retinopathy Study criteria and then as stereoscopic pairs. Graders were evaluated for sensitivity and specificity individually and in combination. Individually, reader one had a sensitivity of 0.93 and a specificity of 0.77, and reader two had a sensitivity of 0.88 and a specificity of 0.94. In combination, the readers had a sensitivity of 0.91 and a specificity of 0.86. They correlated on 0.76 of the stereoscopic readings and 0.92 of the monoscopic readings. These results indicate that the use of monoscopic retinal photography may be an accurate screening tool for clinically significant macular edema.
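
The screening statistics above follow directly from a 2x2 comparison of each grader against the reference standard (here, the stereoscopic reading). A small sketch with made-up gradings, not the study's data:

```python
# Sketch of screening statistics: sensitivity and specificity of a grader
# against a reference standard. The gradings below are made-up labels.

def sensitivity_specificity(predicted, reference):
    tp = sum(p and r for p, r in zip(predicted, reference))
    tn = sum((not p) and (not r) for p, r in zip(predicted, reference))
    fn = sum((not p) and r for p, r in zip(predicted, reference))
    fp = sum(p and (not r) for p, r in zip(predicted, reference))
    return tp / (tp + fn), tn / (tn + fp)

reference  = [True, True, True, True, False, False, False, False]
monoscopic = [True, True, True, False, False, False, False, True]
print(sensitivity_specificity(monoscopic, reference))  # -> (0.75, 0.75)
```

Sensitivity is the fraction of true edema cases the grader flags; specificity is the fraction of healthy eyes the grader correctly passes.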

  2. Case study: the introduction of stereoscopic games on the Sony PlayStation 3

    Science.gov (United States)

    Bickerstaff, Ian

    2012-03-01

    A free stereoscopic firmware update on Sony Computer Entertainment's PlayStation® 3 console provides the potential to increase enormously the popularity of stereoscopic 3D in the home. For this to succeed though, a large selection of content has to become available that exploits 3D in the best way possible. In addition to the existing challenges found in creating 3D movies and television programmes, the stereography must compensate for the dynamic and unpredictable environments found in games. Automatically, the software must map the depth range of the scene into the display's comfort zone, while minimising depth compression. This paper presents a range of techniques developed to solve this problem and the challenge of creating twice as many images as the 2D version without excessively compromising the frame rate or image quality. At the time of writing, over 80 stereoscopic PlayStation 3 games have been released and notable titles are used as examples to illustrate how the techniques have been adapted for different game genres. Since the firmware's introduction in 2010, the industry has matured with a large number of developers now producing increasingly sophisticated 3D content. New technologies such as viewer head tracking and head-mounted displays should increase the appeal of 3D in the home still further.

  3. Learning Receptive Fields and Quality Lookups for Blind Quality Assessment of Stereoscopic Images.

    Science.gov (United States)

    Shao, Feng; Lin, Weisi; Wang, Shanshan; Jiang, Gangyi; Yu, Mei; Dai, Qionghai

    2016-03-01

    Blind quality assessment of 3D images encounters more new challenges than its 2D counterparts. In this paper, we propose a blind quality assessment for stereoscopic images by learning the characteristics of receptive fields (RFs) from the perspective of dictionary learning, and constructing quality lookups to replace human opinion scores without performance loss. The important feature of the proposed method is that we do not need a large set of samples of distorted stereoscopic images and the corresponding human opinion scores to learn a regression model. To be more specific, in the training phase, we learn local RFs (LRFs) and global RFs (GRFs) from the reference and distorted stereoscopic images, respectively, and construct their corresponding local quality lookups (LQLs) and global quality lookups (GQLs). In the testing phase, blind quality pooling can be easily achieved by searching optimal GRF and LRF indexes from the learnt LQLs and GQLs, and the quality score is obtained by combining the LRF and GRF indexes together. Experimental results on three publicly available 3D image quality assessment databases demonstrate that, in comparison with the existing methods, the devised algorithm achieves highly consistent alignment with subjective assessment.

  4. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large-scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first is intra-camera geometry estimation, which yields an estimate of the tilt angle, focal length and camera height and is important for the conversion from pixels to meters and vice versa. The second is inter-camera topology inference, which yields an estimate of the distance between cameras and is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
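
To see why the intra-camera parameters (tilt, focal length, camera height) enable pixel-to-meter conversion, consider back-projecting an image row onto the ground plane with a simple pinhole model. The parameter values below are made up; v is measured downward from the principal point:

```python
import math

# Illustrative pinhole-model use of tilt, focal length and camera height:
# back-project an image row onto the ground plane to convert pixels to meters.
# Parameter values are made up; v is measured downward from the principal point.

def ground_distance(v_pixels, focal_px, tilt_rad, height_m):
    """Distance along the ground to the point imaged at row offset v."""
    angle_below_horizon = tilt_rad + math.atan(v_pixels / focal_px)
    return height_m / math.tan(angle_below_horizon)

f, tilt, h = 1000.0, math.radians(20.0), 4.0
for v in (0.0, 100.0, 300.0):
    print(v, round(ground_distance(v, f, tilt, h), 2))
```

Rows lower in the image (larger v) map to ground points closer to the camera, which is how pedestrian foot positions in pixels become distances in meters.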

  5. Continuous monitoring of prostate position using stereoscopic and monoscopic kV image guidance

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, M. Tynan R.; Parsons, Dave D.; Robar, James L. [Department of Medical Physics, Dalhousie University, Halifax, Nova Scotia B3H 4R2, Canada and Nova Scotia Cancer Centre, QEII Health Science Centre, Halifax, Nova Scotia B3H 2Y9 (Canada)

    2016-05-15

    Purpose: To demonstrate continuous kV x-ray monitoring of prostate motion using both stereoscopic and monoscopic localizations, assess the spatial accuracy of these techniques, and evaluate the dose delivered from the added image guidance. Methods: The authors implemented both stereoscopic and monoscopic fiducial localizations using a room-mounted dual oblique x-ray system. Recently developed monoscopic 3D position estimation techniques potentially overcome the issue of treatment head interference with stereoscopic imaging at certain gantry angles. To demonstrate continuous position monitoring, a gold fiducial marker was placed in an anthropomorphic phantom and placed on the Linac couch. The couch was used as a programmable translation stage. The couch was programmed with a series of patient prostate motion trajectories exemplifying five distinct categories: stable prostate, slow drift, persistent excursion, transient excursion, and high frequency excursions. The phantom and fiducial were imaged using 140 kVp, 0.63 mAs per image at 1 Hz for a 60 s monitoring period. Both stereoscopic and monoscopic 3D localization accuracies were assessed by comparison to the ground-truth obtained from the Linac log file. Imaging dose was also assessed, using optically stimulated luminescence dosimeter inserts in the phantom. Results: Stereoscopic localization accuracy varied between 0.13 ± 0.05 and 0.33 ± 0.30 mm, depending on the motion trajectory. Monoscopic localization accuracy varied from 0.2 ± 0.1 to 1.1 ± 0.7 mm. The largest localization errors were typically observed in the left–right direction. There were significant differences in accuracy between the two monoscopic views, but which view was better varied from trajectory to trajectory. The imaging dose was measured to be between 2 and 15 μGy/mAs, depending on location in the phantom. Conclusions: The authors have demonstrated the first use of monoscopic localization for a room-mounted dual x-ray system. Three
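
The stereoscopic localization step can be sketched as triangulation: each kV view back-projects the detected fiducial to a ray from its x-ray source, and the 3D position is taken as the midpoint of the rays' closest approach. The source positions and ray directions below are invented, not the room geometry of the paper:

```python
# Sketch of stereoscopic fiducial triangulation: each view back-projects the
# detection to a ray from its source; the 3-D position is the midpoint of the
# rays' closest approach. Geometry below is invented for illustration.

def closest_point(p1, d1, p2, d2):
    """Midpoint of the closest approach of rays p1 + t*d1 and p2 + s*d2."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b                 # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * v for p, v in zip(p1, d1)]
    q2 = [p + s * v for p, v in zip(p2, d2)]
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# Two oblique sources aimed so that both rays pass through (0, 0, 0):
print(closest_point([-100.0, -100.0, 50.0], [2.0, 2.0, -1.0],
                    [100.0, -100.0, 50.0], [-2.0, 2.0, -1.0]))
# -> [0.0, 0.0, 0.0]
```

Monoscopic localization, as the abstract notes, must estimate the position from a single ray plus a motion model, which is why its accuracy is lower and view-dependent.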

  6. Video Chat with Multiple Cameras

    OpenAIRE

    MacCormick, John

    2012-01-01

    The dominant paradigm for video chat employs a single camera at each end of the conversation, but some conversations can be greatly enhanced by using multiple cameras at one or both ends. This paper provides the first rigorous investigation of multi-camera video chat, concentrating especially on the ability of users to switch between views at either end of the conversation. A user study of 23 individuals analyzes the advantages and disadvantages of permitting a user to switch between views at...

  7. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  8. A Motionless Camera

    Science.gov (United States)

    1994-01-01

    Omniview, a motionless, noiseless, exceptionally versatile camera was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.

  9. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1977-01-01

    A gamma camera system having control components operating in conjunction with a solid state detector is described. The detector is formed of a plurality of discrete components which are associated in geometrical or coordinate arrangement defining a detector matrix to derive coordinate signal outputs. These outputs are selectively filtered and summed to form coordinate channel signals and corresponding energy channel signals. A control feature of the invention regulates the noted summing and filtering performance to derive data acceptance signals which are addressed to further treating components. The latter components include coordinate and energy channel multiplexers as well as energy-responsive selective networks. A sequential control is provided for regulating the signal processing functions of the system to derive an overall imaging cycle.

  10. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    A positron emission tomography camera having a plurality of detector planes positioned side-by-side around a patient area to detect radiation. Each plane includes a plurality of photomultiplier tubes, and at least two rows of scintillation crystals on each photomultiplier tube extend across to adjacent photomultiplier tubes for detecting radiation from the patient area. Each row of crystals on each photomultiplier tube is offset from the other rows of crystals, and the area of each crystal on each tube in each row is different from the area of the crystals on the tube in other rows for detecting which crystal is actuated and allowing the detector to detect more inter-plane slices. The crystals are offset by an amount equal to the length of the crystal divided by the number of rows. The rows of crystals on opposite sides of the patient may be rotated 90 degrees relative to each other.

  11. The Circular Camera Movement

    DEFF Research Database (Denmark)

    Hansen, Lennard Højbjerg

    2014-01-01

    It has been an accepted precept in film theory that specific stylistic features do not express specific content. Nevertheless, it is possible to find many examples in the history of film in which stylistic features do express specific content: for instance, the circular camera movement is used repeatedly to convey the feeling of a man and a woman falling in love. This raises the question of why producers and directors choose certain stylistic features to narrate certain categories of content. Through the analysis of several short film and TV clips, this article explores whether or not there are perceptual aspects related to specific stylistic features that enable them to be used for delimited narrational purposes. The article further attempts to reopen this particular stylistic debate by exploring the embodied aspects of visual perception in relation to specific stylistic features…

  12. Automatic locking radioisotope camera lock

    International Nuclear Information System (INIS)

    Rosauer, P.J.

    1978-01-01

    The lock of the present invention secures the isotope source in a stored shielded condition in the camera until a positive effort has been made to open the lock and take the source outside of the camera and prevents disconnection of the source pigtail unless the source is locked in a shielded condition in the camera. It also gives a visual indication of the locked or possible exposed condition of the isotope source and prevents the source pigtail from being completely pushed out of the camera, even when the lock is released. (author)

  13. The Effect of Two-dimensional and Stereoscopic Presentation on Middle School Students' Performance of Spatial Cognition Tasks

    Science.gov (United States)

    Price, Aaron; Lee, Hee-Sun

    2010-02-01

    We investigated whether and how student performance on three types of spatial cognition tasks differs when students worked with two-dimensional or stereoscopic representations. We recruited nineteen middle school students visiting a planetarium in a large Midwestern American city and analyzed their performance on a series of spatial cognition tasks in terms of response accuracy and task completion time. Results show that response accuracy did not differ between the two types of representations while task completion time was significantly greater with the stereoscopic representations. The completion time increased as the number of mental manipulations of 3D objects increased in the tasks. Post-interviews provide evidence that some students continued to think of stereoscopic representations as two-dimensional. Based on cognitive load and cue theories, we interpret that, in the absence of pictorial depth cues, students may need more time to become familiar with stereoscopic representations for optimal performance. In light of these results, we discuss potential uses of stereoscopic representations for science learning.

  14. The "All Sky Camera Network"

    Science.gov (United States)

    Caldwell, Andy

    2005-01-01

    In 2001, the "All Sky Camera Network" came to life as an outreach program to connect the Denver Museum of Nature and Science (DMNS) exhibit "Space Odyssey" with Colorado schools. The network is comprised of cameras placed strategically at schools throughout Colorado to capture fireballs--rare events that produce meteorites.…

  15. The Eye of the Camera

    NARCIS (Netherlands)

    van Rompay, Thomas Johannes Lucas; Vonk, Dorette J.; Fransen, M.L.

    2009-01-01

    This study addresses the effects of security cameras on prosocial behavior. Results from previous studies indicate that the presence of others can trigger helping behavior, arising from the need for approval of others. Extending these findings, the authors propose that security cameras can likewise

  16. Computer-enhanced stereoscopic vision in a head-mounted operating binocular

    International Nuclear Information System (INIS)

    Birkfellner, Wolfgang; Figl, Michael; Matula, Christian; Hummel, Johann; Hanel, Rudolf; Imhof, Herwig; Wanschitz, Felix; Wagner, Arne; Watzinger, Franz; Bergmann, Helmar

    2003-01-01

    Based on the Varioscope, a commercially available head-mounted operating binocular, we have developed the Varioscope AR, a see-through head-mounted display (HMD) for augmented reality visualization that seamlessly fits into the infrastructure of a surgical navigation system. We have assessed the extent to which stereoscopic visualization improves target localization in computer-aided surgery in a phantom study. In order to quantify the depth perception of a user aiming at a given target, we have designed a phantom simulating typical clinical situations in skull base surgery. Sixteen steel spheres were fixed at the base of a bony skull, and several typical craniotomies were applied. After having taken CT scans, the skull was filled with opaque jelly in order to simulate brain tissue. The positions of the spheres were registered using VISIT, a system for computer-aided surgical navigation. Then attempts were made to locate the steel spheres with a bayonet probe through the craniotomies using VISIT and the Varioscope AR as a stereoscopic display device. Localization of targets 4 mm in diameter using stereoscopic vision and additional visual cues indicating target proximity had a success rate (defined as a first-trial hit rate) of 87.5%. Using monoscopic vision and target proximity indication, the success rate was found to be 66.6%. Omission of visual hints on reaching a target yielded a success rate of 79.2% in the stereo case and 56.25% with monoscopic vision. Time requirements for localizing all 16 targets ranged from 7.5 min (stereo, with proximity cues) to 10 min (mono, without proximity cues). Navigation error is primarily governed by the accuracy of registration in the navigation system, whereas the HMD does not appear to influence localization significantly. We conclude that stereo vision is a valuable tool in augmented reality guided interventions. (note)

  17. Poster - 48: Clinical assessment of ExacTrac stereoscopic imaging of spine alignment for lung SBRT

    International Nuclear Information System (INIS)

    Sattarivand, Mike; Summers, Clare; Robar, James

    2016-01-01

    Purpose: To evaluate the validity of using spine as a surrogate for tumor positioning with ExacTrac stereoscopic imaging in lung stereotactic body radiation therapy (SBRT). Methods: Using the Novalis ExacTrac x-ray system, 39 lung SBRT patients (182 treatments) were aligned before treatment with 6 degrees (6D) of freedom couch (3 translations, 3 rotations) based on spine matching on stereoscopic images. The couch was shifted to treatment isocenter and pre-treatment CBCT was performed based on a soft tissue match around tumor volume. The CBCT data were used to measure residual errors following ExacTrac alignment. The thresholds for re-aligning the patients based on CBCT were 3mm shift or 3° rotation (in any 6D). In order to evaluate the effect of tumor location on residual errors, correlations between tumor distance from spine and individual residual errors were calculated. Results: Residual errors were up to 0.5±2.4mm. Using 3mm/3° thresholds, 80/182 (44%) of the treatments required re-alignment based on CBCT soft tissue matching following ExacTrac spine alignment. Most mismatches were in sup-inf, ant-post, and roll directions which had larger standard deviations. No correlation was found between tumor distance from spine and individual residual errors. Conclusion: ExacTrac stereoscopic imaging offers a quick pre-treatment patient alignment. However, bone matching based on spine is not reliable for aligning lung SBRT patients who require soft tissue image registration from CBCT. Spine can be a poor surrogate for lung SBRT patient alignment even for proximal tumor volumes.

  19. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1982-01-01

    The invention provides a composite solid state detector for use in deriving a display, by spatial coordinate information, of the distribution of radiation emanating from a source within a region of interest, comprising several solid state detector components, each having a given surface arranged for exposure to impinging radiation and exhibiting discrete interactions therewith at given spatially definable locations. The surface of each component and the surface disposed opposite and substantially parallel thereto are associated with impedance means configured to provide for each opposed surface outputs for signals relating the given location of the interactions with one spatial coordinate parameter of one select directional sense. The detector components are arranged to provide groupings of adjacently disposed surfaces mutually linearly oriented to exhibit a common directional sense of the spatial coordinate parameter. Means interconnect at least two of the outputs associated with each of the surfaces within a given grouping for collecting the signals deriving therefrom. The invention also provides a camera system for imaging the distribution of a source of gamma radiation situated within a region of interest.

  20. Immersive Televisual Environments: Spectatorship, Stereoscopic Vision and the Failure of 3DTV

    Directory of Open Access Journals (Sweden)

    Ilkin Mehrabov

    2015-09-01

    This article focuses on one of the most ground-breaking technological attempts at creating novel immersive media environments for heightened televisual user experiences: 3DTV, a Network of Excellence funded by the European Commission 6th Framework Information Society Technologies Programme. Based on the theoretical framework outlined by the works of Jonathan Crary and Brian Winston, and on empirical data obtained from the author's fieldwork and laboratory visit notes, as well as discussions with practitioners, the article explores the history of stereoscopic vision and the technological progress related to it, and looks for possible reasons for 3DTV's dramatic commercial failure.

  1. Teaching-learning: stereoscopic 3D versus Traditional methods in Mexico City.

    Science.gov (United States)

    Mendoza Oropeza, Laura; Ortiz Sánchez, Ricardo; Ojeda Villagómez, Raúl

    2015-01-01

    In the UNAM Faculty of Odontology, the use of a stereoscopic 3D teaching method has grown more common in the last year, which makes it important to know whether students learn better with this strategy. The objective of the study is to determine whether 4th-year students of the bachelor's degree in dentistry learn Orthodontics more effectively with stereoscopic 3D than with the traditional method. First, we selected the course topics to be used for both methods: the traditional method used slide projection, while the stereoscopic 3D method used videos in digital stereo projection (seen through "passive" polarized 3D glasses). The main topic was supernumerary teeth, including teeth deviated from their eruption guide. Afterwards we gave the students an exam containing 24 items, validated by expert judgment in Orthodontics teaching. The data were compared between the two educational methods to determine effectiveness, using a before-and-after measurement model with the statistical package SPSS version 20. Results were collected for the 9 groups of undergraduates in dentistry, with a total of 218 students across the 3D and traditional methods. The traditional method yielded a mean of 4.91 (SD 1.4752) in the pretest and a mean of 6.96 (SD 1.26622, St Error 0.12318) in the posttest. The 3D method yielded a mean of 5.21 (SD 1.996779, St Error 0.193036) in the pretest and a mean of 7.82 (SD 0.963963, St Error 0.09319) in the posttest; the analysis of variance between groups gave F = 5.60, Prob > 0.0000, and Bartlett's test for equal variances gave 21.0640, Prob > chi2 = 0.007. These results show that student learning with 3D improved significantly compared to the traditional teaching method, with a strong association between the two methods. The findings suggest that the stereoscopic 3D method leads to improved student learning compared to traditional teaching.

  2. Development of underwater camera using high-definition camera

    International Nuclear Information System (INIS)

    Tsuji, Kenji; Watanabe, Masato; Takashima, Masanobu; Kawamura, Shingo; Tanaka, Hiroyuki

    2012-01-01

    In order to reduce the time for core verification or visual inspection of BWR fuels, an underwater camera using a high-definition camera has been developed. As a result of this development, the underwater camera has 2 lights, dimensions of 370 x 400 x 328 mm, and a weight of 20.5 kg. Using the camera, about 6 spent-fuel IDs can be identified at a time at a distance of 1 to 1.5 m, and a 0.3 mmφ pin-hole can be recognized at a distance of 1.5 m with 20x zoom. Noise caused by radiation of less than 15 Gy/h did not affect the images. (author)

  3. Control system for gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.

    1977-01-01

    An improved gamma camera arrangement is described which utilizes a solid state detector formed of high purity germanium. The central arrangement of the camera operates to effect the carrying out of a trapezoidal filtering operation over antisymmetrically summed spatial signals through gated integration procedures utilizing idealized integrating intervals. By simultaneously carrying out peak energy evaluation of the input signals, a desirable control over pulse pile-up phenomena is achieved. Additionally, through the use of the time derivative of incoming pulse or signal energy information to initially enable the control system, a low-level information evaluation is provided, serving to enhance the signal processing efficiency of the camera.

  4. Surface topography characterization using 3D stereoscopic reconstruction of SEM images

    Science.gov (United States)

    Vedantha Krishna, Amogh; Flys, Olena; Reddy, Vijeth V.; Rosén, B. G.

    2018-06-01

    A major drawback of the optical microscope is its limited ability to resolve finer details. Many microscopes have been developed to overcome the limitations set by the diffraction of visible light. The scanning electron microscope (SEM) is one such alternative: it uses electrons for imaging, which have a much smaller wavelength than photons. As a result, high magnification with superior image resolution can be achieved. However, SEM generates 2D images, which provide limited data for surface measurements and analysis. Many research areas require knowledge of 3D structures, as they contribute to a comprehensive understanding of microstructure by allowing effective measurements and qualitative visualization of the samples under study. For this reason, a stereo photogrammetry technique is employed to convert SEM images into 3D measurable data. This paper aims to utilize a stereoscopic reconstruction technique as a reliable method for characterization of surface topography. Reconstructed results from SEM images are compared with coherence scanning interferometer (CSI) results obtained by measuring a roughness reference standard sample. This paper presents a method to select the most robust/consistent surface texture parameters that are insensitive to the uncertainties involved in the reconstruction technique itself. Results from the two stereoscopic reconstruction algorithms are also documented in this paper.

  5. Analysis of scene distortions in stereoscopic images due to the variation of the ideal viewing conditions

    Science.gov (United States)

    Viale, Alberto; Villa, Dario

    2011-03-01

    Recently, stereoscopy has greatly increased in popularity, and various technologies allowing the observation of stereoscopic images and movies are spreading in theaters and homes, becoming affordable even for home users. However, there are some golden rules that users should follow to ensure better enjoyment of stereoscopic images; above all, the viewing conditions should not be too different from the ideal ones, which were assumed during the production process. To allow the user to perceive stereo depth instead of a flat image, two different views of the same scene are shown to the subject, one seen just through the left eye and the other just through the right one; the visual system does the work of merging the two images into a virtual three-dimensional scene, giving the user the perception of depth. The two images presented to the user were created, either by image synthesis or by more traditional techniques, following the rules of perspective. These rules need some boundary conditions to be explicit, such as eye separation, field of view, parallax distance, and viewer position and orientation. In this paper we are interested in studying how deviation of the viewer position and orientation from the ideal ones, expressed as parameters in the image creation process, affects the correctness of the reconstruction of the three-dimensional virtual scene.
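
    The boundary conditions mentioned above (eye separation, parallax, viewer position) are tied together by the standard similar-triangles geometry of stereoscopic viewing. The sketch below is a minimal illustration of that geometry, not code from the paper; the function name and default parameter values are assumptions chosen for the example:

```python
def perceived_depth(parallax_m, eye_sep_m=0.065, view_dist_m=2.0):
    """Depth at which a fused point is perceived, relative to the screen.

    Positive (uncrossed) screen parallax p places the point behind the
    screen, negative (crossed) parallax in front, following the
    similar-triangles relation z = d * p / (e - p), where d is the
    viewing distance and e the eye separation (all in meters).
    """
    return view_dist_m * parallax_m / (eye_sep_m - parallax_m)

# Parallax of half the eye separation puts the point one full viewing
# distance behind the screen (here, about 2 m):
z = perceived_depth(0.0325)
```

When the actual viewing distance or eye separation differs from the values assumed at production time, this relation no longer holds, which is exactly the kind of scene distortion the paper analyzes.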

  6. Stereoscopic Visual Attention-Based Regional Bit Allocation Optimization for Multiview Video Coding

    Directory of Open Access Journals (Sweden)

    Dai Qionghai

    2010-01-01

    We propose a Stereoscopic Visual Attention- (SVA-) based regional bit allocation optimization for Multiview Video Coding (MVC) that exploits visual redundancies in human perception. We propose a novel SVA model, in which multiple perceptual stimuli including depth, motion, intensity, color, and orientation contrast are utilized to simulate the visual attention mechanisms of the human visual system with stereoscopic perception. Then, a semantic region-of-interest (ROI) is extracted based on the saliency maps of SVA. Both objective and subjective evaluations of extracted ROIs indicated that the proposed SVA-model-based ROI extraction scheme outperforms schemes using only spatial or/and temporal visual attention cues. Finally, by using the extracted SVA-based ROIs, a regional bit allocation optimization scheme is presented that allocates more bits to SVA-based ROIs for high image quality and fewer bits to background regions for efficient compression. Experimental results on MVC show that the proposed regional bit allocation algorithm can achieve over % bit-rate saving while maintaining the subjective image quality. Meanwhile, the image quality of ROIs is improved by dB at the cost of insensitive image quality degradation of the background image.

  7. A foreground object features-based stereoscopic image visual comfort assessment model

    Science.gov (United States)

    Jin, Xin; Jiang, G.; Ying, H.; Yu, M.; Ding, S.; Peng, Z.; Shao, F.

    2014-11-01

    Since stereoscopic images provide observers with both a realistic and a potentially uncomfortable viewing experience, it is necessary to investigate the determinants of visual discomfort. Considering that the foreground object draws the most attention when humans observe stereoscopic images, this paper proposes a new foreground-object-based visual comfort assessment (VCA) metric. In the first place, a suitable segmentation method is applied to the disparity map, and the foreground object is identified as the one having the largest average disparity. In the second place, three visual features, namely the average disparity, average width, and spatial complexity of the foreground object, are computed from the perspective of visual attention. Nevertheless, an object's width and complexity do not influence the perception of visual comfort as consistently as disparity does. In accordance with this psychological phenomenon, we divide the images into four categories on the basis of disparity and width, and in the third place apply four different models to more precisely predict visual comfort. Experimental results show that the proposed VCA metric outperforms other existing metrics and can achieve a high consistency between objective and subjective visual comfort scores. The Pearson Linear Correlation Coefficient (PLCC) and Spearman Rank Order Correlation Coefficient (SROCC) are over 0.84 and 0.82, respectively.
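
    The two agreement measures quoted here, PLCC and SROCC, can be computed directly from paired objective and subjective scores. A minimal pure-Python sketch (the score lists are hypothetical examples, and rank ties are ignored for brevity):

```python
def pearson(x, y):
    """Pearson Linear Correlation Coefficient (PLCC)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman Rank Order Correlation Coefficient (SROCC):
    the Pearson correlation of the ranks (ties not handled here)."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0.0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

# Hypothetical metric outputs vs. subjective comfort scores:
objective = [1.2, 2.8, 3.1, 4.0, 4.9]
subjective = [1.0, 2.5, 3.5, 3.9, 5.0]
plcc, srocc = pearson(objective, subjective), spearman(objective, subjective)
```

SROCC rewards any monotonic relationship between metric and subjective scores, while PLCC rewards a linear one, which is why the two are usually reported together.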

  8. Matching methods evaluation framework for stereoscopic breast x-ray images.

    Science.gov (United States)

    Rousson, Johanna; Naudin, Mathieu; Marchessoux, Cédric

    2016-01-01

    Three-dimensional (3-D) imaging has been intensively studied in the past few decades. Depth information is an important added value of 3-D systems over two-dimensional systems. Special focus was devoted to the development of stereo matching methods for the generation of disparity maps (i.e., depth information within a 3-D scene). Dedicated frameworks were designed to evaluate and rank the performance of different stereo matching methods, but never considered x-ray medical images. Yet, 3-D x-ray acquisition systems and 3-D medical displays have already been introduced into the diagnostic market. To access the depth information within x-ray stereoscopic images, computing accurate disparity maps is essential. We aimed at developing a framework dedicated to x-ray stereoscopic breast images used to evaluate and rank several stereo matching methods. A multiresolution pyramid optimization approach was integrated into the framework to increase the accuracy and the efficiency of the stereo matching techniques. Finally, a metric was designed to score the results of the stereo matching compared with the ground truth. Eight methods were evaluated and four of them [locally scaled sum of absolute differences (LSAD), zero mean sum of absolute differences, zero mean sum of squared differences, and locally scaled mean sum of squared differences] appeared to perform equally well, with an average error score of 0.04 (0 being a perfect match). LSAD was selected for generating the disparity maps.
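
    As an illustration of how a locally scaled SAD cost drives block matching, the sketch below estimates integer disparities along a single scanline pair. This is a toy example under assumed window and search-range parameters, not the framework's implementation:

```python
def lsad_disparity(left, right, window=3, max_disp=4):
    """Integer disparity per pixel along one scanline pair, using a
    locally scaled sum of absolute differences (LSAD) matching cost:
    the right patch is scaled by the ratio of local means before the
    absolute differences are summed."""
    half = window // 2
    disparities = []
    for x in range(half, len(left) - half):
        patch_l = left[x - half:x + half + 1]
        mean_l = sum(patch_l) / window
        best_d, best_cost = 0, float("inf")
        for d in range(max_disp + 1):
            if x - d - half < 0:
                break  # shifted window would fall off the image
            patch_r = right[x - d - half:x - d + half + 1]
            mean_r = sum(patch_r) / window or 1e-9  # guard divide-by-zero
            scale = mean_l / mean_r
            cost = sum(abs(l - scale * r) for l, r in zip(patch_l, patch_r))
            if cost < best_cost:
                best_cost, best_d = cost, d
        disparities.append(best_d)
    return disparities

# A distinctive feature shifted by 2 pixels is recovered at disparity 2:
left = [5, 5, 5, 9, 1, 5, 5, 5, 5]
right = [5, 9, 1, 5, 5, 5, 5, 5, 5]
```

The multiresolution pyramid mentioned in the abstract would wrap a search like this, using disparities estimated on a downsampled pair to narrow `max_disp` at full resolution.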

  9. Optoelectronic stereoscopic device for diagnostics, treatment, and developing of binocular vision

    Science.gov (United States)

    Pautova, Larisa; Elkhov, Victor A.; Ovechkis, Yuri N.

    2003-08-01

    Operation of the device is based on the alternating generation of pictures for the left and right eyes on the monitor screen. A controller sends pulses to the LC glasses so that the shutter for the left or right eye opens synchronously with the pictures. The device provides a switching frequency of more than 100 Hz, so flickering is absent. Thus, a separate demonstration of images to the left eye and to the right one in turn is obtained without the patient being aware of it, creating conditions of binocular perception close to natural ones without any additional separation of the vision fields. Coordination of the LC-cell transfer characteristic with the time parameters of the monitor screen has enabled improved stereo image quality. A complicated problem of computer stereo images with LC glasses is the so-called 'ghosts': noise images that reach the blocked eye. We reduced their influence by adapting the stereo images to the phosphor and LC-cell characteristics. The device is intended for the diagnostics and treatment of strabismus, amblyopia, and other binocular and stereoscopic vision impairments; for cultivating, training, and developing stereoscopic vision; for measurements of horizontal and vertical phoria, fusion reserves, stereovision acuity, and more; and for fixing the borders of a central scotoma, as well as a suppression scotoma in strabismus.

  10. Employing WebGL to develop interactive stereoscopic 3D content for use in biomedical visualization

    Science.gov (United States)

    Johnston, Semay; Renambot, Luc; Sauter, Daniel

    2013-03-01

    Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser, represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it provides a virtual reality (VR) experience to web users accommodated by the growing availability of stereoscopic displays (3D TV, desktop, and mobile). WebGL's value in biomedical visualization has been demonstrated by applications for interactive anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D graphics. The methodology was informed by the development of an interactive web application depicting the anatomy and various pathologies of the human eye. The application supports several modes of stereoscopic displays for a better understanding of 3D anatomical structures.

  11. Passive method of eliminating accommodation/convergence disparity in stereoscopic head-mounted displays

    Science.gov (United States)

    Eichenlaub, Jesse B.

    2005-03-01

    The difference in accommodation and convergence distance experienced when viewing stereoscopic displays has long been recognized as a source of visual discomfort. It is especially problematic in head-mounted virtual reality and enhanced reality displays, where images must often be displayed across a large depth range or superimposed on real objects. DTI has demonstrated a novel method of creating stereoscopic images in which the focus and fixation distances are closely matched for all parts of the scene from close distances to infinity. The method is passive in the sense that it does not rely on eye tracking, moving parts, variable focus optics, vibrating optics, or feedback loops. The method uses a rapidly changing illumination pattern in combination with a high speed microdisplay to create cones of light that converge at different distances to form the voxels of a high resolution space filling image. A bench model display was built and a series of visual tests were performed in order to demonstrate the concept and investigate both its capabilities and limitations. Results proved conclusively that real optical images were being formed and that observers had to change their focus to read text or see objects at different distances.

  12. Assessing the precision of gaze following using a stereoscopic 3D virtual reality setting.

    Science.gov (United States)

    Atabaki, Artin; Marciniak, Karolina; Dicke, Peter W; Thier, Peter

    2015-07-01

    Despite the ecological importance of gaze following, little is known about the underlying neuronal processes that allow us to extract gaze direction from the geometric features of the eye and head of a conspecific. In order to understand the neuronal mechanisms underlying this ability, a careful description of the capacity and the limitations of gaze following at the behavioral level is needed. Previous studies of gaze following that relied on naturalistic settings have the disadvantage of allowing only very limited control of potentially relevant visual features guiding gaze following, such as the contrast of iris and sclera or the shape of the eyelids, and, in the case of photographs, they lack depth. Hence, in order to gain full control of potentially relevant features, we decided to study gaze following of human observers guided by the gaze of a human avatar seen stereoscopically. To this end we established a stereoscopic 3D virtual reality setup in which we tested human subjects' ability to detect which target a human avatar was looking at. Following the gaze of the avatar showed all the features of the gaze following of a natural person, namely a substantial degree of precision associated with a consistent pattern of systematic deviations from the target. Poor stereo vision affected performance surprisingly little (only in certain experimental conditions). Only gaze following guided by targets at larger downward eccentricities exhibited a differential effect of the presence or absence of accompanying movements of the avatar's eyelids and eyebrows. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Assessment of stereoscopic optic disc images using an autostereoscopic screen – experimental study

    Directory of Open Access Journals (Sweden)

    Vaideanu Daniella

    2008-07-01

    Abstract Background Stereoscopic assessment of the optic disc morphology is an important part of the care of patients with glaucoma. The aim of this study was to assess stereoviewing of stereoscopic optic disc images using an example of the new technology of autostereoscopic screens compared to liquid-crystal shutter goggles. Methods Independent assessment of glaucomatous disc characteristics and measurement of optic disc and cup parameters while using either an autostereoscopic screen or liquid-crystal shutter goggles synchronized with a view-switching display. The main outcome measures were inter-modality agreements between the two modalities as evaluated by the weighted kappa test and Bland-Altman plots. Results Inter-modality agreement for measuring optic disc parameters was good [average kappa coefficient for vertical cup/disc ratio was 0.78 (95% CI 0.62–0.91) and 0.81 (95% CI 0.6–0.92) for observers 1 and 2, respectively]. Agreement between modalities for assessing optic disc characteristics for glaucoma on a five-point scale was very good, with a kappa value of 0.97. Conclusion This study compared two different methods of stereo viewing. The results of assessment of the different optic disc and cup parameters were comparable using an example of the newly developing autostereoscopic display technologies as compared to the shutter-goggles system used. The inter-modality agreement was high. This new technology carries potential clinical usability benefits in different areas of ophthalmic practice.
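The weighted kappa statistic used in this study to quantify inter-modality agreement can be computed directly from paired ordinal gradings. A minimal sketch (not the authors' code; the five-category scale, linear weights, and toy gradings below are assumptions for illustration):

```python
import numpy as np

def weighted_kappa(a, b, n_cat):
    """Linearly weighted kappa for two raters/modalities grading on an
    ordinal scale with categories 0 .. n_cat-1."""
    conf = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):
        conf[i, j] += 1.0
    conf /= conf.sum()                          # joint probability table
    cats = np.arange(n_cat)
    w = 1.0 - np.abs(np.subtract.outer(cats, cats)) / (n_cat - 1)
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0))
    po = np.sum(w * conf)                       # weighted observed agreement
    pe = np.sum(w * expected)                   # weighted chance agreement
    return (po - pe) / (1.0 - pe)

grades_screen  = [0, 1, 1, 2, 3, 4, 2, 0, 3, 4]   # hypothetical gradings
grades_goggles = [0, 1, 2, 2, 3, 4, 2, 0, 4, 4]
print(round(weighted_kappa(grades_screen, grades_goggles, 5), 3))  # ≈ 0.878
```

Linear weights penalize disagreements in proportion to how many grade steps apart the two modalities are, which is why near-misses on an ordinal scale still yield high kappa.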

  14. Stereoscopic Three-Dimensional Neuroanatomy Lectures Enhance Neurosurgical Training: Prospective Comparison with Traditional Teaching.

    Science.gov (United States)

    Clark, Anna D; Guilfoyle, Mathew R; Candy, Nicholas G; Budohoski, Karol P; Hofmann, Riikka; Barone, Damiano G; Santarius, Thomas; Kirollos, Ramez W; Trivedi, Rikin A

    2017-12-01

    Stereoscopic three-dimensional (3D) imaging is increasingly used in the teaching of neuroanatomy, and although this is mainly aimed at undergraduate medical students, it has enormous potential for enhancing the training of neurosurgeons. This study aims to assess whether 3D lecturing is an effective method of enhancing the knowledge and confidence of neurosurgeons and how it compares with traditional two-dimensional (2D) lecturing and cadaveric training. Three separate teaching sessions for neurosurgical trainees were organized: 1) 2D course (2D lecture + cadaveric session), 2) 3D lecture alone, and 3) 3D course (3D lecture + cadaveric session). Before and after each session, delegates were asked to complete questionnaires containing questions relating to surgical experience, anatomic knowledge, confidence in performing procedures, and perceived value of 3D, 2D, and cadaveric teaching. Although both 2D and 3D lectures and courses were similarly effective at improving self-rated knowledge and understanding, the 3D lecture and course were associated with significantly greater gains in confidence reported by the delegates for performing a subfrontal approach and sylvian fissure dissection. Stereoscopic 3D lectures provide neurosurgical trainees with greater confidence for performing standard operative approaches and enhance the benefit of subsequent practical experience in developing technical skills in cadaveric dissection. Copyright © 2017. Published by Elsevier Inc.

  15. Figure and ground in the visual cortex: v2 combines stereoscopic cues with gestalt rules.

    Science.gov (United States)

    Qiu, Fangtu T; von der Heydt, Rüdiger

    2005-07-07

    Figure-ground organization is a process by which the visual system identifies some image regions as foreground and others as background, inferring 3D layout from 2D displays. A recent study reported that edge responses of neurons in area V2 are selective for side-of-figure, suggesting that figure-ground organization is encoded in the contour signals (border ownership coding). Here, we show that area V2 combines two strategies of computation, one that exploits binocular stereoscopic information for the definition of local depth order, and another that exploits the global configuration of contours (Gestalt factors). These are combined in single neurons so that the "near" side of the preferred 3D edge generally coincides with the preferred side-of-figure in 2D displays. Thus, area V2 represents the borders of 2D figures as edges of surfaces, as if the figures were objects in 3D space. Even in 3D displays, Gestalt factors influence the responses and can enhance or null the stereoscopic depth information.

  16. Analyzer for gamma cameras diagnostic

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Osorio Deliz, J. F.; Diaz Garcia, A.

    2013-01-01

    This research work was carried out to develop an analyzer for gamma camera diagnostics. It consists of an electronic system, comprising hardware and software, that operates on the four head-position signals of a gamma camera detector. The result is the spectrum of the energy delivered by nuclear radiation coming from the camera detector head. The system includes analog processing of the position signals from the camera, digitization, subsequent processing of the energy signal in a multichannel analyzer, transmission of the data to a computer via a standard USB port, and processing of the data on a personal computer to obtain the final histogram. The circuits comprise an analog processing board and a universal kit with a microcontroller and a programmable gate array. (Author)

  17. Astronomy and the camera obscura

    Science.gov (United States)

    Feist, M.

    2000-02-01

    The camera obscura (from the Latin meaning darkened chamber) is a simple optical device with a long history. In the form considered here, it can be traced back to 1550. It had its heyday during the Victorian era, when it was to be found at the seaside as a tourist attraction or sideshow. It was also used as an artist's drawing aid, and in 1620 the famous astronomer-mathematician Johannes Kepler used a small tent camera obscura to trace the scenery.

  18. The future of consumer cameras

    Science.gov (United States)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

    In the last two decades multimedia devices, and in particular imaging devices (camcorders, tablets, mobile phones, etc.), have spread dramatically. Moreover, the increase in their computational performance, combined with higher storage capacity, allows them to process large amounts of data. In this paper an overview of the current trends in the consumer camera market and technology is given, providing also some details about the recent past (from the Digital Still Camera up to today) and forthcoming key issues.

  19. Science, conservation, and camera traps

    Science.gov (United States)

    Nichols, James D.; Karanth, K. Ullas; O'Connel, Allan F.; O'Connell, Allan F.; Nichols, James D.; Karanth, K. Ullas

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  20. Computing camera heading: A study

    Science.gov (United States)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even internet commerce. From image sequences of a real-world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard, because rotations and translations can have similar effects on the images and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computation trouble spots beforehand and designing reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
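The rotation invariance that the visual-angle approach relies on is easy to verify numerically. The sketch below is an illustration of that property only, not the paper's algorithm; the pinhole model, focal length, and synthetic scene are assumptions:

```python
import numpy as np

def rays(points_px, f):
    """Back-project pixel coordinates to unit rays (pinhole camera, focal length f)."""
    p = np.column_stack([points_px, np.full(len(points_px), f)])
    return p / np.linalg.norm(p, axis=1, keepdims=True)

def pairwise_angles(r):
    """Visual angles between all pairs of projection rays."""
    cos = np.clip(r @ r.T, -1.0, 1.0)
    i, j = np.triu_indices(len(r), k=1)
    return np.arccos(cos[i, j])

def project(X, R, t, f):
    """Project world points X through a camera at position t with rotation R."""
    Xc = (R @ (X - t).T).T
    return f * Xc[:, :2] / Xc[:, 2:3]

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (8, 3)) + np.array([0, 0, 5])  # points in front of the camera
f = 800.0
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])

a0 = pairwise_angles(rays(project(X, np.eye(3), np.zeros(3), f), f))
a_rot = pairwise_angles(rays(project(X, Rz, np.zeros(3), f), f))
a_trans = pairwise_angles(rays(project(X, np.eye(3), np.array([0.5, 0, 0]), f), f))

print(np.max(np.abs(a0 - a_rot)))    # ~0: a pure rotation leaves visual angles intact
print(np.max(np.abs(a0 - a_trans)))  # > 0: a translation changes them
```

Because pairwise angles filter out rotation entirely, their change over time isolates the translational component, which is the quantity the heading computation needs.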

  1. The Influence of Manifest Strabismus and Stereoscopic Vision on Non-Verbal Abilities of Visually Impaired Children

    Science.gov (United States)

    Gligorovic, Milica; Vucinic, Vesna; Eskirovic, Branka; Jablan, Branka

    2011-01-01

    This research was conducted in order to examine the influence of manifest strabismus and stereoscopic vision on non-verbal abilities of visually impaired children aged between 7 and 15. The sample included 55 visually impaired children from the 1st to the 6th grade of elementary schools for visually impaired children in Belgrade. RANDOT stereotest…

  2. The effects of 5.1 sound presentations on the perception of stereoscopic imagery in video games

    Science.gov (United States)

    Cullen, Brian; Galperin, Daniel; Collins, Karen; Hogue, Andrew; Kapralos, Bill

    2013-03-01

    Stereoscopic 3D (S3D) content in games, film and other audio-visual media has been steadily increasing over the past number of years. However, there are still open, fundamental questions regarding its implementation, particularly as it relates to a multi-modal experience that involves sound and haptics. Research has shown that sound has considerable impact on our perception of 2D phenomena, but very little research has considered how sound may influence stereoscopic 3D. Here we present the results of an experiment that examined the effects of 5.1 surround sound (5.1) and stereo loudspeaker setups on depth perception in relation to S3D imagery within a video game environment. Our aim was to answer the question: "can 5.1 surround sound enhance the participant's perception of depth in the stereoscopic field when compared to traditional stereo sound presentations?" In addition, our study examined how the presence or absence of Doppler frequency shift and frequency fall-off audio effects can also influence depth judgment under these conditions. Results suggest that 5.1 surround sound presentations enhance the apparent depth of stereoscopic imagery when compared to stereo presentations. Results also suggest that the addition of audio effects such as Doppler shift and frequency fall-off filters can influence the apparent depth of S3D objects.

  3. Quantitative Measurement of Eyestrain on 3D Stereoscopic Display Considering the Eye Foveation Model and Edge Information

    Directory of Open Access Journals (Sweden)

    Hwan Heo

    2014-05-01

    We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye-tracking device. Our study is novel in the following four ways: first, the circular area in which a user's gaze position lies is defined based on the calculated gaze position and the gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we take this position as the gaze position with the higher probability of being correct. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display, using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and an experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with and without compensation for saccadic eye movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than the other factors.
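The first step described above, snapping an estimated gaze position to the strongest edge inside the gaze-error circle, can be sketched as follows. This is a toy illustration under assumed names and a simple gradient-magnitude edge measure, not the authors' implementation:

```python
import numpy as np

def refine_gaze(image, gaze_xy, radius):
    """Snap an estimated gaze point to the strongest edge inside the
    uncertainty circle (radius in pixels) given by the gaze-estimation error."""
    gy, gx = np.gradient(image.astype(float))     # gradients along rows, columns
    edge = np.hypot(gx, gy)                       # edge-strength map
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    inside = (xs - gaze_xy[0])**2 + (ys - gaze_xy[1])**2 <= radius**2
    edge = np.where(inside, edge, -np.inf)        # restrict the search to the circle
    iy, ix = np.unravel_index(np.argmax(edge), edge.shape)
    return ix, iy

# Toy image with a vertical step edge at column 30
img = np.zeros((64, 64))
img[:, 30:] = 1.0
print(refine_gaze(img, gaze_xy=(27, 32), radius=6))  # snaps toward the edge near column 30
```

Limiting the search to the error circle keeps the refinement from jumping to a strong but irrelevant edge elsewhere in the frame.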

  4. Comparing Short- and Long-Term Learning Effects between Stereoscopic and Two-Dimensional Film at a Planetarium

    Science.gov (United States)

    Price, C. Aaron; Lee, Hee-Sun; Subbarao, Mark; Kasal, Evan; Aguileara, Julieta

    2015-01-01

    Science centers such as museums and planetariums have used stereoscopic ("three-dimensional") films to draw interest from and educate their visitors for decades. Despite the fact that most adults who are finished with their formal education get their science knowledge from such free-choice learning settings very little is known about the…

  5. Evaluation of the Performance of Vortex Generators on the DU 91-W2-250 Profile using Stereoscopic PIV

    DEFF Research Database (Denmark)

    Velte, Clara Marika; Hansen, Martin Otto Laver; Meyer, Knud Erik

    2009-01-01

    Stereoscopic PIV measurements investigating the effect of Vortex Generators on the lift force near stall and on glide ratio at best aerodynamic performance have been carried out in the LM Glasfiber wind tunnel on a DU 91-W2-250 profile. Measurements at two Reynolds numbers were analyzed; Re=0...

  6. Evaluation of the Performance of Vortex Generators on the DU 91-W2-250 Profile using Stereoscopic PIV

    DEFF Research Database (Denmark)

    Velte, Clara Marika; Hansen, Martin Otto Laver; Meyer, Knud Erik

    2008-01-01

    Stereoscopic PIV measurements investigating the effect of Vortex Generators on the lift force near stall and on glide ratio at best aerodynamic performance have been carried out in the LM Glasfiber wind tunnel on a DU 91-W2-250 profile. Measurements at two Reynolds numbers were analyzed; Re=0...

  7. 3D pressure imaging of an aircraft propeller blade-tip flow by phase-locked stereoscopic PIV

    NARCIS (Netherlands)

    Ragni, D.; Van Oudheusden, B.W.; Scarano, F.

    2011-01-01

    The flow field at the tip region of a scaled DHC Beaver aircraft propeller, running at transonic speed, has been investigated by means of a multi-plane stereoscopic particle image velocimetry setup. Velocity fields, phase-locked with the blade rotational motion, are acquired across several planes

  8. An interactive, stereoscopic virtual environment for medical imaging visualization, simulation and training

    Science.gov (United States)

    Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel

    2017-03-01

    Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on the use of traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have their merits beyond being cost effective and easy to disseminate, anatomy is inherently three-dimensional. By using 2D visualizations to illustrate more complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open source, high quality, 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization paradigm for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential frame stereo projection and viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is sufficiently versatile to enable a wide variety of applications in need of 3D visualization, we designed our study to work as an interactive game, which allows users to explore the anatomy of various organs and systems. In this study we describe the design, implementation, and evaluation of an interactive and stereoscopic

  9. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern were determined by Pix4Dmapper and independently adjusted and analyzed with the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively, a dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centres are adequately but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters, or in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration by IGI, but there are still radial symmetric distortions for the inclined cameras as well, with a size exceeding 5 μm, even though they are mentioned as negligible in the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding

  10. The fly's eye camera system

    Science.gov (United States)

    Mészáros, L.; Pál, A.; Csépány, G.; Jaskó, A.; Vida, K.; Oláh, K.; Mezö, G.

    2014-12-01

    We introduce the Fly's Eye Camera System, an all-sky monitoring device intended to perform time-domain astronomy. This camera system design will provide complementary data sets for other synoptic sky surveys such as LSST or Pan-STARRS. The effective field of view is obtained by 19 cameras arranged in a spherical mosaic form. These individual cameras of the device stand on a hexapod mount that is fully capable of sidereal tracking for the subsequent exposures. This platform has many advantages. First of all, it requires only one type of moving component and does not include unique parts. Hence this design not only eliminates the problems implied by unique elements, but the redundancy of the hexapod also allows smooth operation even if one or two of the legs are stuck. In addition, it can calibrate itself from observed stars independently of both the geographical location (including the northern and southern hemispheres) and the polar alignment of the full mount. All mechanical elements and electronics are designed at our institute, Konkoly Observatory. Currently, our instrument is in its testing phase with an operating hexapod and a reduced number of cameras.

  11. Event detection intelligent camera development

    International Nuclear Information System (INIS)

    Szappanos, A.; Kocsis, G.; Molnar, A.; Sarkozi, J.; Zoletnik, S.

    2008-01-01

    A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed for the video diagnostics of the W-7X stellarator; it consists of 10 distinct and standalone measurement channels, each holding a camera. Different operation modes will be implemented, for continuous as well as triggered readout. Hardware-level trigger signals will be generated from real-time image processing algorithms optimized for digital signal processor (DSP) and field-programmable gate array (FPGA) architectures. At full resolution a camera sends 12-bit sampled 1280 x 1024 pixels at 444 fps, which means 1.43 terabytes over half an hour. Analysing such a huge amount of data is time consuming and has a high computational complexity. We plan to overcome this problem with EDICAM's preprocessing concepts. The EDICAM camera system integrates all the advantages of CMOS sensor chip technology and fast network connections. EDICAM is built up from three different modules with two interfaces: a sensor module (SM) with reduced hardware and functional elements, to achieve a small and compact size and robust operation in a harmful environment; an image processing and control unit (IPCU) module, which handles all user-predefined events and runs the image processing algorithms to generate trigger signals; and finally a 10 Gigabit Ethernet compatible image readout card that functions as the network interface for the PC. In this contribution all the concepts of EDICAM and the functions of the distinct modules are described
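The quoted 1.43 terabyte half-hour data volume follows directly from the stated frame geometry and rate, if the figure is read in binary units (tebibytes, 1024^4 bytes). A quick check:

```python
# Sanity check of EDICAM's quoted data volume: 12-bit 1280 x 1024 frames
# at 444 fps over half an hour (1800 s).
bits_per_frame = 1280 * 1024 * 12
bytes_total = bits_per_frame / 8 * 444 * 1800
print(round(bytes_total / 1024**4, 2))   # → 1.43 (tebibytes)
```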

  12. Video camera use at nuclear power plants

    International Nuclear Information System (INIS)

    Estabrook, M.L.; Langan, M.O.; Owen, D.E.

    1990-08-01

    A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations and to determine the equipment used and the benefits realized. Basic closed-circuit television (CCTV) camera systems are described and video camera operation principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposable cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use, mainly reduced radiation exposure and increased productivity, are discussed and quantified. 15 refs., 6 figs

  13. Digital fundus image grading with the non-mydriatic Visucam(PRO NM) versus the FF450(plus) camera in diabetic retinopathy.

    Science.gov (United States)

    Neubauer, Aljoscha S; Rothschuh, Antje; Ulbig, Michael W; Blum, Marcus

    2008-03-01

    Grading diabetic retinopathy in clinical trials is frequently based on 7-field stereo photography of the fundus in diagnostic mydriasis. In terms of image quality, the FF450(plus) camera (Carl Zeiss Meditec AG, Jena, Germany) defines a high-quality reference. The aim of the study was to investigate whether the fully digital fundus camera Visucam(PRO NM) could serve as an alternative in clinical trials requiring 7-field stereo photography. A total of 128 eyes of diabetes patients were enrolled in the randomized, controlled, prospective trial. Seven-field stereo photography was performed with the Visucam(PRO NM) and the FF450(plus) camera, in random order, both in diagnostic mydriasis. The resulting 256 image sets from the two camera systems were graded for retinopathy levels and image quality (on a scale of 1-5); both were anonymized and blinded to the image source. On FF450(plus) stereoscopic imaging, 20% of the patients had no or mild diabetic retinopathy. Agreement was high between the two cameras regarding retinopathy levels (kappa 0.87) and macular oedema (kappa 0.80). In diagnostic mydriasis the image quality of the Visucam was graded slightly better than that of the FF450(plus) (2.20 versus 2.41), so the Visucam(PRO NM) can serve as an alternative camera for applications and clinical trials requiring 7-field stereo photography.

  14. Streaming video-based 3D reconstruction method compatible with existing monoscopic and stereoscopic endoscopy systems

    Science.gov (United States)

    Bouma, Henri; van der Mark, Wannes; Eendebak, Pieter T.; Landsmeer, Sander H.; van Eekeren, Adam W. M.; ter Haar, Frank B.; Wieringa, F. Pieter; van Basten, Jean-Paul

    2012-06-01

    Compared to open surgery, minimal invasive surgery offers reduced trauma and faster recovery. However, lack of direct view limits space perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic field-of-view. We describe a novel technology that reconstructs 3D-panoramas from endoscopic video streams providing a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to generate photorealistic 3D-environments from mono- and stereoscopic endoscopy. The resulting 3D-reconstructions can be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for telesurgery or other remote vision-guided tasks.

  15. Stereoscopic particle image velocimetry investigations of the mixed convection exchange flow through a horizontal vent

    Science.gov (United States)

    Varrall, Kevin; Pretrel, Hugues; Vaux, Samuel; Vauquelin, Olivier

    2017-10-01

    The exchange flow through a horizontal vent linking two compartments (one above the other) is studied experimentally. The exchange is governed both by the natural buoyant effect due to the temperature difference of the fluids in the two compartments and by the effect of a forced mechanical ventilation applied in the lower compartment. Such a configuration leads to uni- or bi-directional flows through the vent. In the experiments, buoyancy is induced in the lower compartment by an electrical resistor. The forced ventilation is applied in exhaust or supply mode, and three different values of the vent area are considered. To estimate both the velocity fields and the flow rates at the vent, measurements are taken at thermal steady state, flush with the vent in the upper compartment, using stereoscopic particle image velocimetry (SPIV), which is original for this kind of flow. The SPIV measurements allow the areas occupied by the upward and downward flows to be determined.
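Once the out-of-plane velocity component is measured on the vent plane, splitting the exchange into upward and downward volume flow rates amounts to integrating that component separately over the regions where it is positive and negative. A minimal sketch (illustrative only; the grid spacing and synthetic field are made up, not the experimental data):

```python
import numpy as np

def vent_flow_rates(w, dx, dy):
    """Split the out-of-plane velocity field w (m/s), measured on the vent
    plane, into upward and downward volume flow rates (m^3/s)."""
    cell = dx * dy                     # area of one PIV grid cell
    q_up = w[w > 0].sum() * cell
    q_down = -w[w < 0].sum() * cell
    return q_up, q_down

# Synthetic bi-directional exchange: upward flow on one half of the vent,
# downward flow on the other
w = np.zeros((10, 10))
w[:, :5] = 0.2                         # upward at 0.2 m/s
w[:, 5:] = -0.1                        # downward at 0.1 m/s
print(vent_flow_rates(w, dx=0.01, dy=0.01))   # ≈ (0.001, 0.0005) m^3/s
```

The same per-cell integration also yields the areas occupied by each flow direction, simply by counting the cells in each sign region.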

  16. Stereoscopic visualization in curved spacetime: seeing deep inside a black hole

    International Nuclear Information System (INIS)

    Hamilton, Andrew J S; Polhemus, Gavin

    2010-01-01

    Stereoscopic visualization adds a further dimension to the viewer's experience, giving them a sense of distance. In a general relativistic visualization, distance can be measured in a variety of ways. We argue that the affine distance, which matches the usual notion of distance in flat spacetime, is a natural distance to use in curved spacetime. As an example, we apply affine distance to the visualization of the interior of a black hole. Affine distance is not the distance perceived with normal binocular vision in curved spacetime. However, the failure of binocular vision is simply a limitation of animals that have evolved in flat spacetime, not a fundamental obstacle to depth perception in curved spacetime. Trinocular vision would provide superior depth perception.
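For reference, the affine distance mentioned above can be stated compactly. Along the past light ray from the observer, the affine parameter is fixed by normalizing the photon energy measured at the reception event; this is a standard definition paraphrased here for orientation, not quoted from the paper:

```latex
p^\mu = \frac{dx^\mu}{d\lambda},
\qquad
\left. p_\mu u^\mu \right|_{\mathrm{obs}} = -1
\quad \text{(signature } {-},{+},{+},{+}\text{)},
\qquad
d_{\mathrm{aff}} = \lambda_{\mathrm{obs}} - \lambda_{\mathrm{e}} ,
```

where $u^\mu$ is the observer's 4-velocity and $\lambda_{\mathrm{e}}$ is the affine parameter at the emission event. With this normalization, a static observer in flat spacetime finds $d_{\mathrm{aff}}$ equal to the ordinary Euclidean distance to the source, which is the property that motivates its use in curved spacetime.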

  17. Stereoscopic Augmented Reality System for Supervised Training on Minimal Invasive Surgery Robots

    DEFF Research Database (Denmark)

    Matu, Florin-Octavian; Thøgersen, Mikkel; Galsgaard, Bo

    2014-01-01

    the need for efficient training. When training with the robot, the communication between the trainer and the trainee is limited, since the trainee often cannot see the trainer. To overcome this issue, this paper proposes an Augmented Reality (AR) system where the trainer is controlling two virtual robotic...... arms. These arms are virtually superimposed on the video feed to the trainee, and can therefore be used to demonstrate and perform various tasks for the trainee. Furthermore, the trainer is presented with a 3D image through a stereoscopic display. Because of the added depth perception, this enables...... the procedure, and thereby enhances the training experience. The virtual overlay was also found to work as a good and illustrative approach for enhanced communication. However, the delay of the prototype made it difficult to use for actual training....

  18. The GISMO-2 Bolometer Camera

    Science.gov (United States)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.; hide

    2012-01-01

We present the concept for the GISMO-2 bolometer camera, which we build for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 × 40 TES-based Backshort Under Grid (BUG) bolometer array; the 2 mm channel operates with a 16 × 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  19. Dark Energy Camera for Blanco

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  20. Perceptual Color Characterization of Cameras

    Directory of Open Access Journals (Sweden)

    Javier Vazquez-Corral

    2014-12-01

Full Text Available Color camera characterization, mapping outputs from the camera sensors to an independent color space such as \(XYZ\), is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a \(3 \times 3\) matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the \(3 \times 3\) matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case, we minimize the CIE \(\Delta E\) error, while for the spatial-based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the \(\Delta E\) error, 7% for the S-CIELAB error and 13% for the CID error measures.
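The baseline this record improves on, a least-squares fit of a 3 × 3 characterization matrix from chart measurements, can be sketched as follows. This is only an illustration with synthetic data (the `true_M` matrix and the 24-patch chart are made up for the demo, not taken from the paper):

```python
import numpy as np

# Hypothetical training data: N camera RGB responses and the
# corresponding measured XYZ tristimulus values for a color chart.
rng = np.random.default_rng(0)
true_M = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.7, 0.1],
                   [0.0, 0.1, 0.9]])
rgb = rng.uniform(size=(24, 3))   # e.g. a 24-patch chart
xyz = rgb @ true_M.T              # noiseless responses for illustration

# Least-squares fit of the 3x3 matrix minimizing ||RGB @ M.T - XYZ||_F,
# so that xyz_pixel ~ M @ rgb_pixel.
X, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M = X.T
```

The perceptual approach in the paper replaces this single closed-form fit with a spherical-sampling search minimizing ΔE, S-CIELAB or CID instead of the raw least-squares residual.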

  1. Stereoscopic neuroanatomy lectures using a three-dimensional virtual reality environment.

    Science.gov (United States)

    Kockro, Ralf A; Amaxopoulou, Christina; Killeen, Tim; Wagner, Wolfgang; Reisch, Robert; Schwandt, Eike; Gutenberg, Angelika; Giese, Alf; Stofft, Eckart; Stadie, Axel T

    2015-09-01

Three-dimensional (3D) computer graphics are increasingly used to supplement the teaching of anatomy. While most systems consist of a program which produces 3D renderings on a workstation with a standard screen, the DextroBeam virtual reality (VR) environment allows the presentation of spatial neuroanatomical models to larger groups of students through a stereoscopic projection system. Second-year medical students (n=169) were randomly allocated to receive a standardised pre-recorded audio lecture detailing the anatomy of the third ventricle accompanied by either a two-dimensional (2D) PowerPoint presentation (n=80) or a 3D animated tour of the third ventricle with the DextroBeam (n=89). Students completed a 10-question multiple-choice exam based on the content learned and a subjective evaluation of the teaching method immediately after the lecture. Students in the 2D group achieved a mean score of 5.19 (±2.12) compared to 5.45 (±2.16) in the 3D group, with the results in the 3D group statistically non-inferior to those of the 2D group (p<0.0001). The students rated the 3D method superior to 2D teaching in four domains (spatial understanding, application in future anatomy classes, effectiveness, enjoyableness) (p<0.01). Stereoscopically enhanced 3D lectures are valid methods of imparting neuroanatomical knowledge and are well received by students. More research is required to define and develop the role of large-group VR systems in modern neuroanatomy curricula. Copyright © 2015 Elsevier GmbH. All rights reserved.

  2. Remote stereoscopic video play platform for naked eyes based on the Android system

    Science.gov (United States)

    Jia, Changxin; Sang, Xinzhu; Liu, Jing; Cheng, Mingsheng

    2014-11-01

As people's quality of life has improved significantly, traditional 2D video technology can no longer meet the growing desire for better video quality, which has led to the rapid development of 3D video technology. At the same time, people want to watch 3D video on portable devices. To achieve this, we set up a remote stereoscopic video play platform. The platform consists of a server and clients: the server handles transmission of different video formats, and the client receives the remote video for subsequent decoding and pixel restructuring. We utilize and improve Live555 as the video transmission server. Live555 is a cross-platform open-source project which provides solutions for streaming media, such as the RTSP protocol, and supports transmission of multiple video formats. At the receiving end, we use our laboratory's own Android player, which has all the basic functions of ordinary players and can play normal 2D video; it serves as the basic structure for redevelopment, with RTSP implemented into this structure for communication. To achieve stereoscopic display, we perform pixel rearrangement in the player's decoding part. The decoding part is native code called through the JNI interface, so video frames can be extracted more efficiently. The video formats we process are left-right, top-bottom and nine-grid. In the design and development, a number of key technologies from Android application development have been employed, including wireless transmission, pixel restructuring and JNI calls. After some updates and optimizations, the video player can play remote 3D video well anytime and anywhere and meet users' requirements.
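The pixel-rearrangement step mentioned above, turning a side-by-side (left-right) stereo frame into a column-interleaved frame for a naked-eye (e.g. lenticular) display, can be sketched as follows. This is a minimal numpy illustration of the idea, not the player's actual JNI decoder code; the even/odd column assignment is an assumption that depends on the particular display:

```python
import numpy as np

def interleave_lr(frame):
    """Rearrange a left-right stereo frame into a column-interleaved
    frame: even output columns come from the left view, odd columns
    from the right view. Input (H, 2*W, C) -> output (H, W, C)."""
    h, w2, c = frame.shape
    w = w2 // 2
    left, right = frame[:, :w], frame[:, w:]
    out = np.empty((h, w, c), dtype=frame.dtype)
    out[:, 0::2] = left[:, 0::2]    # left view feeds even columns
    out[:, 1::2] = right[:, 1::2]   # right view feeds odd columns
    return out

# Toy frame: left half all 1s, right half all 2s.
frame = np.concatenate([np.ones((4, 8, 3), np.uint8),
                        np.full((4, 8, 3), 2, np.uint8)], axis=1)
mixed = interleave_lr(frame)
```

Top-bottom and nine-grid inputs would differ only in how the left/right (or multi-view) sub-images are sliced out of the decoded frame.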

  3. Alternation Frequency Thresholds for Stereopsis as a Technique for Exploring Stereoscopic Difficulties

    Directory of Open Access Journals (Sweden)

    Svetlana Rychkova

    2011-01-01

Full Text Available When stereoscopic images are presented alternately to the two eyes, stereopsis occurs at full-cycle frequencies F ⩾ 1 Hz for very simple stimuli, and F ⩾ 3 Hz for random-dot stereograms (e.g., Ludwig I, Pieper W, Lachnit H, 2007, "Temporal integration of monocular images separated in time: stereopsis, stereoacuity, and binocular luster", Perception & Psychophysics 69, 92–102). Using twenty different stereograms presented through liquid crystal shutters, we studied the transition to stereopsis with fifteen subjects. The onset of stereopsis was observed during a stepwise increase of the alternation frequency, and its disappearance was observed during a stepwise decrease in frequency. The lowest F values (around 2.5 Hz) were observed with stimuli involving two to four simple disjoint elements (circles, arcs, rectangles). Higher F values were needed for stimuli containing slanted elements or curved surfaces (about 1 Hz increment), overlapping elements at two different depths (about 2.5 Hz increment), or camouflaged overlapping surfaces (> 7 Hz increment). A textured cylindrical surface with a horizontal axis appeared easier to interpret (5.7 Hz) than a pair of slanted segments separated in depth but forming a cross in projection (8 Hz). Training effects were minimal, and F usually increased as disparities were reduced. The hierarchy of difficulties revealed in the study may shed light on various problems that the brain needs to solve during stereoscopic interpretation. During the construction of the three-dimensional percept, the loss of information due to natural decay of the stimuli traces must be compensated by refreshes of visual input. In the discussion an attempt is made to link our results with recent advances in the comprehension of visual scene memory.

  4. Objective quality assessment of stereoscopic images with vertical disparity using EEG

    Science.gov (United States)

    Shahbazi Avarvand, Forooz; Bosse, Sebastian; Müller, Klaus-Robert; Schäfer, Ralf; Nolte, Guido; Wiegand, Thomas; Curio, Gabriel; Samek, Wojciech

    2017-08-01

    Objective. Neurophysiological correlates of vertical disparity in 3D images are studied in an objective approach using EEG technique. These disparities are known to negatively affect the quality of experience and to cause visual discomfort in stereoscopic visualizations. Approach. We have presented four conditions to subjects: one in 2D and three conditions in 3D, one without vertical disparity and two with different vertical disparity levels. Event related potentials (ERPs) are measured for each condition and the differences between ERP components are studied. Analysis is also performed on the induced potentials in the time frequency domain. Main results. Results show that there is a significant increase in the amplitude of P1 components in 3D conditions in comparison to 2D. These results are consistent with previous studies which have shown that P1 amplitude increases due to the depth perception in 3D compared to 2D. However the amplitude is significantly smaller for maximum vertical disparity (3D-3) in comparison to 3D with no vertical disparity. Our results therefore suggest that the vertical disparity in 3D-3 condition decreases the perception of depth compared to other 3D conditions and the amplitude of P1 component can be used as a discriminative feature. Significance. The results show that the P1 component increases in amplitude due to the depth perception in the 3D stimuli compared to the 2D stimulus. On the other hand the vertical disparity in the stereoscopic images is studied here. We suggest that the amplitude of P1 component is modulated with this parameter and decreases due to the decrease in the perception of depth.

  5. EDICAM (Event Detection Intelligent Camera)

    Energy Technology Data Exchange (ETDEWEB)

    Zoletnik, S. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Szabolics, T., E-mail: szabolics.tamas@wigner.mta.hu [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Kocsis, G.; Szepesi, T.; Dunai, D. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary)

    2013-10-15

    Highlights: ► We present EDICAM's hardware modules. ► We present EDICAM's main design concepts. ► This paper will describe EDICAM firmware architecture. ► Operation principles description. ► Further developments. -- Abstract: A new type of fast framing camera has been developed for fusion applications by the Wigner Research Centre for Physics during the last few years. A new concept was designed for intelligent event driven imaging which is capable of focusing image readout to Regions of Interests (ROIs) where and when predefined events occur. At present these events mean intensity changes and external triggers but in the future more sophisticated methods might also be defined. The camera provides 444 Hz frame rate at full resolution of 1280 × 1024 pixels, but monitoring of smaller ROIs can be done in the 1–116 kHz range even during exposure of the full image. Keeping space limitations and the harsh environment in mind the camera is divided into a small Sensor Module and a processing card interconnected by a fast 10 Gbit optical link. This camera hardware has been used for passive monitoring of the plasma in different devices for example at ASDEX Upgrade and COMPASS with the first version of its firmware. The new firmware and software package is now available and ready for testing the new event processing features. This paper will present the operation principle and features of the Event Detection Intelligent Camera (EDICAM). The device is intended to be the central element in the 10-camera monitoring system of the Wendelstein 7-X stellarator.
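The event-driven ROI idea described above, restricting readout to regions where predefined events such as intensity changes occur, can be illustrated in a few lines. EDICAM does this in firmware on the sensor module; the following is only a toy software sketch of the concept, with a made-up threshold:

```python
import numpy as np

def change_roi(prev, curr, threshold=0.2):
    """Flag pixels whose intensity changed by more than `threshold`
    between consecutive frames and return the bounding box
    (row_min, row_max, col_min, col_max) enclosing them, or None
    if nothing changed."""
    changed = np.abs(curr.astype(float) - prev.astype(float)) > threshold
    if not changed.any():
        return None
    rows, cols = np.nonzero(changed)
    return rows.min(), rows.max(), cols.min(), cols.max()

prev = np.zeros((32, 32))
curr = prev.copy()
curr[10:14, 5:9] = 1.0        # a localized brightening "event"
roi = change_roi(prev, curr)  # subsequent readout would target this box
```

In the real camera this selection is what allows 1–116 kHz monitoring of small ROIs while the full 1280 × 1024 frame is read at 444 Hz.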

  6. The Sydney University PAPA camera

    Science.gov (United States)

    Lawson, Peter R.

    1994-04-01

    The Precision Analog Photon Address (PAPA) camera is a photon-counting array detector that uses optical encoding to locate photon events on the output of a microchannel plate image intensifier. The Sydney University camera is a 256x256 pixel detector which can operate at speeds greater than 1 million photons per second and produce individual photon coordinates with a deadtime of only 300 ns. It uses a new Gray coded mask-plate which permits a simplified optical alignment and successfully guards against vignetting artifacts.
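The Gray-coded mask mentioned above exploits the defining property of reflected binary code: adjacent positions differ in exactly one bit, so a photon event straddling a mask boundary produces at most a one-position address error. A minimal sketch of the encoding and its decode (generic Gray-code arithmetic, not the PAPA camera's actual optics or electronics):

```python
def binary_to_gray(b):
    """Reflected binary (Gray) encoding: adjacent values differ in
    exactly one bit, which is what makes a Gray-coded optical mask
    robust to events that straddle a mask-stripe boundary."""
    return b ^ (b >> 1)

def gray_to_binary(g):
    """Invert the encoding by cascading XORs of successive shifts."""
    b = g
    mask = g >> 1
    while mask:
        b ^= mask
        mask >>= 1
    return b
```

For a 256 × 256 detector, each axis of the photon address would need 8 such mask bits, one per stripe-pattern plate.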

  7. Streak cameras and their applications

    International Nuclear Information System (INIS)

    Bernet, J.M.; Imhoff, C.

    1987-01-01

Over the last several years, the development of various measurement techniques in the nanosecond and picosecond range has led to increased reliance on streak cameras. This paper presents the main electronic and optoelectronic performance characteristics of the Thomson-CSF TSN 506 cameras and their associated devices used to build an automatic image acquisition and processing system (NORMA). A brief survey of the diversity and spread of high-speed electronic cinematography is illustrated by a few typical applications [fr]

  8. Augmented reality during robot-assisted laparoscopic partial nephrectomy: toward real-time 3D-CT to stereoscopic video registration.

    Science.gov (United States)

    Su, Li-Ming; Vagvolgyi, Balazs P; Agarwal, Rahul; Reiley, Carol E; Taylor, Russell H; Hager, Gregory D

    2009-04-01

To investigate a markerless tracking system for real-time stereo-endoscopic visualization of preoperative computed tomographic imaging as an augmented display during robot-assisted laparoscopic partial nephrectomy. Stereoscopic video segments from one patient undergoing robot-assisted laparoscopic partial nephrectomy for a tumor and from another for a partial staghorn renal calculus were processed to evaluate the performance of a three-dimensional (3D)-to-3D registration algorithm. After both cases, we registered a segment of the video recording to the corresponding preoperative 3D computed tomography image. After calibrating the camera and overlay, 3D-to-3D registration was created between the model and the surgical recording using a modified iterative closest point technique. Image-based tracking technology tracked selected fixed points on the kidney surface to augment the image-to-model registration. Our investigation has demonstrated that we can identify and track the kidney surface in real time when applied to intraoperative video recordings and overlay the 3D models of the kidney, tumor (or stone), and collecting system semitransparently. Using a basic computer research platform, we achieved an update rate of 10 Hz and an overlay latency of 4 frames. The accuracy of the 3D registration was 1 mm. Augmented reality overlay of reconstructed 3D computed tomography images onto real-time stereo video footage is possible using iterative closest point and image-based surface tracking technology that does not use external navigation tracking systems or preplaced surface markers. Additional studies are needed to assess the precision and to achieve fully automated registration and display for intraoperative use.
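The core computation inside an iterative-closest-point registration such as the one above is the least-squares rigid transform between two already-corresponded point sets, commonly solved by the Kabsch/SVD method. A self-contained sketch of that single step (a generic illustration, not the authors' modified ICP):

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst for
    corresponded 3D points; this is the step an ICP loop re-solves
    after each nearest-neighbor correspondence update."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                 # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic check: rotate and translate a small cloud, recover the motion.
rng = np.random.default_rng(1)
pts = rng.normal(size=(30, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
moved = pts @ R_true.T + np.array([0.5, -1.0, 2.0])
R, t = rigid_align(pts, moved)
```

In a full ICP, `rigid_align` alternates with a nearest-point correspondence search until the residual stops shrinking.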

  9. High-speed holographic camera

    International Nuclear Information System (INIS)

    Novaro, Marc

The high-speed holographic camera is a diagnostic instrument using holography as an information-storage medium. It allows us to take 10 holograms of an object, with exposure times of 1.5 ns, separated in time by 1 or 2 ns. In order to get these results easily, no mobile part is used in the set-up [fr]

  10. The Camera Comes to Court.

    Science.gov (United States)

    Floren, Leola

    After the Lindbergh kidnapping trial in 1935, the American Bar Association sought to eliminate electronic equipment from courtroom proceedings. Eventually, all but two states adopted regulations applying that ban to some extent, and a 1965 Supreme Court decision encouraged the banning of television cameras at trials as well. Currently, some states…

  11. Gamma camera with reflectivity mask

    International Nuclear Information System (INIS)

    Stout, K.J.

    1980-01-01

In accordance with the present invention there is provided a radiographic camera comprising: a scintillator; a plurality of photodetectors positioned to face said scintillator; a plurality of masked regions formed upon a face of said scintillator opposite said photodetectors and positioned coaxially with respective ones of said photodetectors for decreasing the amount of internal reflection of optical photons generated within said scintillator. (auth)

  12. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

Camera resolution has improved drastically under the current demand for high-quality digital images. For example, a digital still camera has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it comes from a physical limitation on the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors with different spatio-temporal resolutions in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  13. Use of the stereoscopic virtual reality display system for the detection and characterization of intracranial aneurysms: A comparison with conventional computed tomography workstation and 3D rotational angiography.

    Science.gov (United States)

    Liu, Xiujuan; Tao, Haiquan; Xiao, Xigang; Guo, Binbin; Xu, Shangcai; Sun, Na; Li, Maotong; Xie, Li; Wu, Changjun

    2018-07-01

This study aimed to compare the diagnostic performance of the stereoscopic virtual reality display system with the conventional computed tomography (CT) workstation and three-dimensional rotational angiography (3DRA) for intracranial aneurysm detection and characterization, with a focus on small aneurysms and those near the bone. First, 42 patients with suspected intracranial aneurysms underwent both 256-row CT angiography (CTA) and 3DRA. Volume rendering (VR) images were captured using the conventional CT workstation. Next, VR images were transferred to the stereoscopic virtual reality display system. Two radiologists independently assessed the results that were obtained using the conventional CT workstation and stereoscopic virtual reality display system. The 3DRA results were considered as the ultimate reference standard. Based on 3DRA images, 38 aneurysms were confirmed in 42 patients. Two cases were misdiagnosed and 1 was missed when the traditional CT workstation was used. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of the conventional CT workstation were 94.7%, 85.7%, 97.3%, 75%, and 99.3%, respectively, on a per-aneurysm basis. The stereoscopic virtual reality display system missed a case. The sensitivity, specificity, PPV, NPV, and accuracy of the stereoscopic virtual reality display system were 100%, 85.7%, 97.4%, 100%, and 97.8%, respectively. No difference was observed in the accuracy of the traditional CT workstation, stereoscopic virtual reality display system, and 3DRA in detecting aneurysms. The stereoscopic virtual reality display system has some advantages in detecting small aneurysms and those near the bone. The virtual reality stereoscopic vision obtained through the system was found to be a useful tool in intracranial aneurysm diagnosis and pre-operative 3D imaging. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Toward 3D-IPTV: design and implementation of a stereoscopic and multiple-perspective video streaming system

    Science.gov (United States)

    Petrovic, Goran; Farin, Dirk; de With, Peter H. N.

    2008-02-01

    3D-Video systems allow a user to perceive depth in the viewed scene and to display the scene from arbitrary viewpoints interactively and on-demand. This paper presents a prototype implementation of a 3D-video streaming system using an IP network. The architecture of our streaming system is layered, where each information layer conveys a single coded video signal or coded scene-description data. We demonstrate the benefits of a layered architecture with two examples: (a) stereoscopic video streaming, (b) monoscopic video streaming with remote multiple-perspective rendering. Our implementation experiments confirm that prototyping 3D-video streaming systems is possible with today's software and hardware. Furthermore, our current operational prototype demonstrates that highly heterogeneous clients can coexist in the system, ranging from auto-stereoscopic 3D displays to resource-constrained mobile devices.

  15. Tissue feature-based intra-fractional motion tracking for stereoscopic x-ray image guided radiotherapy

    Science.gov (United States)

    Xie, Yaoqin; Xing, Lei; Gu, Jia; Liu, Wu

    2013-06-01

Real-time knowledge of tumor position during radiation therapy is essential to overcome the adverse effect of intra-fractional organ motion. The goal of this work is to develop a tumor tracking strategy by effectively utilizing the inherent image features of stereoscopic x-ray images acquired during dose delivery. In stereoscopic x-ray image guided radiation delivery, two orthogonal x-ray images are acquired either simultaneously or sequentially. The essence of markerless tumor tracking is the reliable identification of inherent points with distinct tissue features on each projection image and their association between two images. The identification of the feature points on a planar x-ray image is realized by searching for points with high intensity gradient. The feature points are associated by using the scale-invariant feature transform (SIFT) descriptor. The performance of the proposed technique is evaluated by using images of a motion phantom and four archived clinical cases acquired using either a CyberKnife equipped with a stereoscopic x-ray imaging system, or a LINAC equipped with an onboard kV imager and an electronic portal imaging device. In the phantom study, the results obtained using the proposed method agree with the measurements to within 2 mm in all three directions. In the clinical study, the mean error is 0.48 ± 0.46 mm for four patient data with 144 sequential images. In this work, a tissue feature-based tracking method for stereoscopic x-ray image guided radiation therapy is developed. The technique avoids the invasive procedure of fiducial implantation and may greatly facilitate the clinical workflow.
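The detection step described above, picking candidate feature points where the intensity gradient is largest, can be sketched in a few lines of numpy. This simplified stand-in only illustrates the idea; the paper's association step (SIFT-style descriptor matching between the two orthogonal projections) is omitted:

```python
import numpy as np

def gradient_feature_points(img, n=10):
    """Return the n pixel coordinates with the largest
    intensity-gradient magnitude as candidate feature points."""
    gy, gx = np.gradient(img.astype(float))   # d/drow, d/dcol
    mag = np.hypot(gx, gy)
    mag[0, :] = mag[-1, :] = mag[:, 0] = mag[:, -1] = 0  # skip borders
    idx = np.argsort(mag, axis=None)[-n:]     # flat indices, ascending
    rows, cols = np.unravel_index(idx, mag.shape)
    return list(zip(rows.tolist(), cols.tolist()))

# A synthetic "projection" with one bright square: the strongest
# gradients cluster on the square's edges.
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
pts = gradient_feature_points(img, n=8)
```

A real implementation would add non-maximum suppression so the selected points spread across distinct anatomical features rather than clustering on one edge.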

  16. Tissue feature-based intra-fractional motion tracking for stereoscopic x-ray image guided radiotherapy

    International Nuclear Information System (INIS)

    Xie Yaoqin; Gu Jia; Xing Lei; Liu Wu

    2013-01-01

Real-time knowledge of tumor position during radiation therapy is essential to overcome the adverse effect of intra-fractional organ motion. The goal of this work is to develop a tumor tracking strategy by effectively utilizing the inherent image features of stereoscopic x-ray images acquired during dose delivery. In stereoscopic x-ray image guided radiation delivery, two orthogonal x-ray images are acquired either simultaneously or sequentially. The essence of markerless tumor tracking is the reliable identification of inherent points with distinct tissue features on each projection image and their association between two images. The identification of the feature points on a planar x-ray image is realized by searching for points with high intensity gradient. The feature points are associated by using the scale-invariant feature transform (SIFT) descriptor. The performance of the proposed technique is evaluated by using images of a motion phantom and four archived clinical cases acquired using either a CyberKnife equipped with a stereoscopic x-ray imaging system, or a LINAC equipped with an onboard kV imager and an electronic portal imaging device. In the phantom study, the results obtained using the proposed method agree with the measurements to within 2 mm in all three directions. In the clinical study, the mean error is 0.48 ± 0.46 mm for four patient data with 144 sequential images. In this work, a tissue feature-based tracking method for stereoscopic x-ray image guided radiation therapy is developed. The technique avoids the invasive procedure of fiducial implantation and may greatly facilitate the clinical workflow. (paper)

  17. Effect of Stereoscopic Anaglyphic 3-Dimensional Video Didactics on Learning Neuroanatomy.

    Science.gov (United States)

    Goodarzi, Amir; Monti, Sara; Lee, Darrin; Girgis, Fady

    2017-11-01

    The teaching of neuroanatomy in medical education has historically been based on didactic instruction, cadaveric dissections, and intraoperative experience for students. Multiple novel 3-dimensional (3D) modalities have recently emerged. Among these, stereoscopic anaglyphic video is easily accessible and affordable, however, its effects have not yet formally been investigated. This study aimed to investigate if 3D stereoscopic anaglyphic video instruction in neuroanatomy could improve learning for content-naive students, as compared with 2-dimensional (2D) video instruction. A single-site controlled prospective case control study was conducted at the School of Education. Content knowledge was assessed at baseline, followed by the presentation of an instructional neuroanatomy video. Participants viewed the video in either 2D or 3D format and then completed a written test of skull base neuroanatomy. Pretest and post-test performances were analyzed with independent Student's t-tests and analysis of covariance. Our study was completed by 249 subjects. At baseline, the 2D (n = 124, F = 97) and 3D groups (n = 125, F = 96) were similar, although the 3D group was older by 1.7 years (P = 0.0355) and the curricula of participating classes differed (P < 0.0001). Average scores for the 3D group were higher for both pretest (2D, M = 19.9%, standard deviation [SD] = 12.5% vs. 3D, M = 23.9%, SD = 14.9%, P = 0.0234) and post-test performances (2D, M = 68.5%, SD = 18.6% vs. 3D, M = 77.3%, SD = 18.8%, P = 0.003), but the magnitude of improvement across groups did not reach statistical significance (2D, M = 48.7%, SD = 21.3%, vs. 3D, M = 53.5%, SD = 22.7%, P = 0.0855). Incorporation of 3D video instruction into curricula without careful integration is insufficient to promote learning over 2D video. Published by Elsevier Inc.

  18. Architectural Design Document for Camera Models

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

Architecture of camera simulator models and data interface for the Maneuvering of Inspection/Servicing Vehicle (MIV) study.

  19. Selecting a digital camera for telemedicine.

    Science.gov (United States)

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  20. 21 CFR 886.1120 - Ophthalmic camera.

    Science.gov (United States)

    2010-04-01

    ... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Ophthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding area...

  1. Improved positron emission tomography camera

    International Nuclear Information System (INIS)

    Mullani, N.A.

    1986-01-01

A positron emission tomography camera having a plurality of rings of detectors positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom, and a plurality of scintillation crystals positioned relative to the photomultiplier tubes whereby each tube is responsive to more than one crystal. Each alternate crystal in the ring may be offset by one-half or less of the thickness of the crystal such that the staggered crystals are seen by more than one photomultiplier tube. This sharing of crystals and photomultiplier tubes allows identification of the staggered crystal and the use of smaller detectors shared by larger photomultiplier tubes, thereby requiring fewer photomultiplier tubes, creating more scanning slices, providing better data sampling, and reducing the cost of the camera. (author)

  2. Vehicular camera pedestrian detection research

    Science.gov (United States)

    Liu, Jiahui

    2018-03-01

With the rapid development of science and technology, highway traffic and transportation have become much more convenient. At the same time, however, traffic safety accidents occur more and more frequently in China, so protecting the safety of people's lives and property while facilitating travel has become a top priority. Real-time, accurate information about pedestrians and the driving environment is obtained through a vehicular camera, which is used to detect and track moving targets ahead of the vehicle. This is a popular topic in the domains of intelligent-vehicle safe driving, autonomous navigation and traffic system research. Based on pedestrian video obtained by a vehicular camera, this paper studies pedestrian detection and tracking and the associated algorithms.

  3. Graphic design of pinhole cameras

    Science.gov (United States)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.
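The graphic optimization described above has a well-known closed-form counterpart: the pinhole diameter that balances geometric blur (which grows with diameter) against diffraction blur (which shrinks with it), the classic Rayleigh-style result d = sqrt(k·λ·f). The sketch below is illustrative, not taken from the paper; the constant k ≈ 2.44 varies with the sharpness criterion chosen.

```python
import math

def optimal_pinhole_diameter(focal_length_m, wavelength_m=550e-9, k=2.44):
    """Pinhole diameter (m) balancing geometric blur (~d) against
    diffraction blur (~k * wavelength * f / d): d = sqrt(k * lambda * f).
    The constant k depends on the chosen optimality criterion."""
    return math.sqrt(k * wavelength_m * focal_length_m)

# e.g. a 100 mm focal length at 550 nm gives a pinhole of roughly 0.37 mm
d = optimal_pinhole_diameter(0.1)
```

A transfer-function analysis like the paper's refines this by showing *how* contrast falls off at each spatial frequency, rather than just locating the blur minimum.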

  4. The MVACS Robotic Arm Camera

    Science.gov (United States)

    Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.

    2001-08-01

    The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.

  5. Coaxial fundus camera for ophthalmology

    Science.gov (United States)

    de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.

    2015-09-01

A fundus camera for ophthalmology is a high-definition device that must meet three constraints: low-light illumination of the human retina, high resolution at the retina, and reflection-free imaging [1]. These constraints make its optical design very sophisticated, but the most difficult requirements to satisfy are reflection-free illumination and the final alignment, owing to the large number of non-coaxial optical components in typical systems. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and poor alignment renders a sophisticated optical design useless. In this work we developed a fully coaxial optical system for a non-mydriatic fundus camera. Illumination is provided by an LED ring, coaxial with the optical system and composed of IR or visible LEDs; the illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD-plus-capture-lens module is a CCTV camera with built-in autofocus and zoom, combined with a 175 mm focal length doublet corrected for infinity, making the system very compact and easy to operate.

  6. 16 CFR 501.1 - Camera film.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk still...

  7. An Open Standard for Camera Trap Data

    NARCIS (Netherlands)

    Forrester, Tavis; O'Brien, Tim; Fegraus, Eric; Jansen, P.A.; Palmer, Jonathan; Kays, Roland; Ahumada, Jorge; Stern, Beth; McShea, William

    2016-01-01

    Camera traps that capture photos of animals are a valuable tool for monitoring biodiversity. The use of camera traps is rapidly increasing and there is an urgent need for standardization to facilitate data management, reporting and data sharing. Here we offer the Camera Trap Metadata Standard as an

  8. A camera specification for tendering purposes

    International Nuclear Information System (INIS)

    Lunt, M.J.; Davies, M.D.; Kenyon, N.G.

    1985-01-01

    A standardized document is described which is suitable for sending to companies which are being invited to tender for the supply of a gamma camera. The document refers to various features of the camera, the performance specification of the camera, maintenance details, price quotations for various options and delivery, installation and warranty details. (U.K.)

  9. Time Dependence of Intrafraction Patient Motion Assessed by Repeat Stereoscopic Imaging

    International Nuclear Information System (INIS)

    Hoogeman, Mischa S.; Nuyttens, Joost J.; Levendag, Peter C.; Heijmen, Ben J.M.

    2008-01-01

    Purpose: To quantify intrafraction patient motion and its time dependence in immobilized intracranial and extracranial patients. The data can be used to optimize the intrafraction imaging frequency and consequent patient setup correction with an image guidance and tracking system, and to establish the required safety margins in the absence of such a system. Method and Materials: The intrafraction motion of 32 intracranial patients, immobilized with a thermoplastic mask, and 11 supine- and 14 prone-treated extracranial spine patients, immobilized with a vacuum bag, were analyzed. The motion was recorded by an X-ray, stereoscopic, image-guidance system. For each group, we calculated separately the systematic (overall mean and SD) and the random displacement as a function of elapsed intrafraction time. Results: The SD of the systematic intrafraction displacements increased linearly over time for all three patient groups. For intracranial-, supine-, and prone-treated patients, the SD increased to 0.8, 1.2, and 2.2 mm, respectively, in a period of 15 min. The random displacements for the prone-treated patients were significantly higher than for the other groups, namely 1.6 mm (1 SD), probably caused by respiratory motion. Conclusions: Despite the applied immobilization devices, patients drift away from their initial position during a treatment fraction. These drifts are in general small if compared with conventional treatment margins, but will significantly contribute to the margin for high-precision radiation treatments with treatment times of 15 min or longer
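For context on how such drift statistics translate into margins: a widely used population margin recipe (van Herk's 2.5Σ + 0.7σ, which is standard in radiotherapy planning but not part of this study) can be applied to the abstract's 15-minute prone-patient numbers as a rough illustration. The calculation below is my own, using only the reported SDs.

```python
def ctv_to_ptv_margin_mm(Sigma_mm, sigma_mm):
    """van Herk population margin recipe: M = 2.5*Sigma + 0.7*sigma,
    where Sigma is the SD of systematic errors and sigma the SD of
    random errors. Illustrative use only, not the authors' analysis."""
    return 2.5 * Sigma_mm + 0.7 * sigma_mm

# Prone extracranial drift after 15 min, from the abstract:
# Sigma ~ 2.2 mm (systematic), sigma ~ 1.6 mm (random)
margin = ctv_to_ptv_margin_mm(2.2, 1.6)   # ~6.6 mm from drift alone
```

This drift-only contribution would add in quadrature to setup and delineation uncertainties in a full margin calculation.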

  10. Real-time Stereoscopic 3D for E-Robotics Learning

    Directory of Open Access Journals (Sweden)

    Richard Y. Chiou

    2011-02-01

Full Text Available Following the design and testing of a successful 3-dimensional surveillance system, this 3D scheme has been implemented in online robotics learning at Drexel University. A real-time application utilizing robot controllers, programmable logic controllers and sensors has been developed in the “MET 205 Robotics and Mechatronics” class to provide students with a better robotics education. The integration of the 3D system allows the students to program the robot precisely and execute functions remotely. Upon the students’ recommendation, polarization was chosen as the main platform behind the 3D robotic system. Stereoscopic calculations are carried out for calibration purposes to display the images with the highest possible comfort level and 3D effect, and the calculations are validated by comparing the results with students’ evaluations. Because the system is Internet-based, multiple clients have the opportunity to perform online automation development. In the future, students in different universities will be able to cross-control robotic components of different types around the world. With the development of this 3D e-robotics interface, automation resources and robotics learning can be shared and enriched regardless of location.

  11. Lie group model neuromorphic geometric engine for real-time terrain reconstruction from stereoscopic aerial photos

    Science.gov (United States)

    Tsao, Thomas R.; Tsao, Doris

    1997-04-01

    In the 1980's, neurobiologist suggested a simple mechanism in primate visual cortex for maintaining a stable and invariant representation of a moving object. The receptive field of visual neurons has real-time transforms in response to motion, to maintain a stable representation. When the visual stimulus is changed due to motion, the geometric transform of the stimulus triggers a dual transform of the receptive field. This dual transform in the receptive fields compensates geometric variation in the stimulus. This process can be modelled using a Lie group method. The massive array of affine parameter sensing circuits will function as a smart sensor tightly coupled to the passive imaging sensor (retina). Neural geometric engine is a neuromorphic computing device simulating our Lie group model of spatial perception of primate's primal visual cortex. We have developed the computer simulation and experimented on realistic and synthetic image data, and performed a preliminary research of using analog VLSI technology for implementation of the neural geometric engine. We have benchmark tested on DMA's terrain data with their result and have built an analog integrated circuit to verify the computational structure of the engine. When fully implemented on ANALOG VLSI chip, we will be able to accurately reconstruct a 3D terrain surface in real-time from stereoscopic imagery.

  12. Stereoscopic virtual reality models for planning tumor resection in the sellar region

    Directory of Open Access Journals (Sweden)

    Wang Shou-sen

    2012-11-01

    Full Text Available Abstract Background It is difficult for neurosurgeons to perceive the complex three-dimensional anatomical relationships in the sellar region. Methods To investigate the value of using a virtual reality system for planning resection of sellar region tumors. The study included 60 patients with sellar tumors. All patients underwent computed tomography angiography, MRI-T1W1, and contrast enhanced MRI-T1W1 image sequence scanning. The CT and MRI scanning data were collected and then imported into a Dextroscope imaging workstation, a virtual reality system that allows structures to be viewed stereoscopically. During preoperative assessment, typical images for each patient were chosen and printed out for use by the surgeons as references during surgery. Results All sellar tumor models clearly displayed bone, the internal carotid artery, circle of Willis and its branches, the optic nerve and chiasm, ventricular system, tumor, brain, soft tissue and adjacent structures. Depending on the location of the tumors, we simulated the transmononasal sphenoid sinus approach, transpterional approach, and other approaches. Eleven surgeons who used virtual reality models completed a survey questionnaire. Nine of the participants said that the virtual reality images were superior to other images but that other images needed to be used in combination with the virtual reality images. Conclusions The three-dimensional virtual reality models were helpful for individualized planning of surgery in the sellar region. Virtual reality appears to be promising as a valuable tool for sellar region surgery in the future.

  13. Formalizing the potential of stereoscopic 3D user experience in interactive entertainment

    Science.gov (United States)

    Schild, Jonas; Masuch, Maic

    2015-03-01

The use of stereoscopic 3D (S3D) vision affects how interactive entertainment has to be developed as well as how it is experienced by the audience. The large number of potentially relevant factors, together with the variety and subtlety of measured effects on user experience, makes it difficult to grasp the overall potential of using S3D vision. In a comprehensive approach, we (a) present a development framework which summarizes possible variables in display technology, content creation and human factors, and (b) list a scheme of S3D user-experience effects concerning initial fascination, emotions, performance, and behavior as well as negative feelings of discomfort and complexity. As a major contribution we propose a qualitative formalization which derives dependencies between development factors and user effects. The argumentation is based on several previously published user studies. We further show how to apply this formalization to identify possible opportunities and threats in content creation, and how to pursue future steps toward a possible quantification.

  14. Full-reference quality assessment of stereoscopic images by learning binocular receptive field properties.

    Science.gov (United States)

    Shao, Feng; Li, Kemeng; Lin, Weisi; Jiang, Gangyi; Yu, Mei; Dai, Qionghai

    2015-10-01

    Quality assessment of 3D images encounters more challenges than its 2D counterparts. Directly applying 2D image quality metrics is not the solution. In this paper, we propose a new full-reference quality assessment for stereoscopic images by learning binocular receptive field properties to be more in line with human visual perception. To be more specific, in the training phase, we learn a multiscale dictionary from the training database, so that the latent structure of images can be represented as a set of basis vectors. In the quality estimation phase, we compute sparse feature similarity index based on the estimated sparse coefficient vectors by considering their phase difference and amplitude difference, and compute global luminance similarity index by considering luminance changes. The final quality score is obtained by incorporating binocular combination based on sparse energy and sparse complexity. Experimental results on five public 3D image quality assessment databases demonstrate that in comparison with the most related existing methods, the devised algorithm achieves high consistency with subjective assessment.
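The paper defines its own phase- and amplitude-based similarity on sparse coefficient vectors; as a hedged sketch of the general family such indices belong to, an SSIM-style ratio on two coefficient vectors yields 1 for identical inputs and less for divergent ones. The function name and stabilizing constant below are illustrative, not the authors' definitions.

```python
import numpy as np

def feature_similarity(x, y, c=1e-4):
    """SSIM-style similarity between two coefficient vectors:
    elementwise (2xy + c) / (x^2 + y^2 + c), averaged.
    Equals 1 when x == y, and decreases as they diverge."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.mean((2 * x * y + c) / (x**2 + y**2 + c)))
```

A full-reference metric of this kind would pool such per-patch similarities over the image, then combine the two views' scores with a binocular weighting, as the abstract describes.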

  15. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    Science.gov (United States)

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. The potential of sound both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals is large. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. Results, although not conclusive, indicate that the studied auditory cues can influence the audience judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.
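The two cues studied above can be sketched as distance-dependent processing of a source signal: overall gain falling with distance, plus a low-pass filter whose cutoff also falls with distance. The gain law and cutoff mapping below are illustrative assumptions, not the article's experimental parameters.

```python
import numpy as np

def apply_distance_cues(signal, distance_m, sr=44100, ref_m=1.0):
    """Sketch of the two auditory depth cues: amplitude attenuated
    as ~1/distance, and a one-pole low-pass whose cutoff drops with
    distance (both mappings are assumed, for illustration only)."""
    gain = ref_m / max(distance_m, ref_m)
    cutoff_hz = 8000.0 / max(distance_m, ref_m)     # assumed mapping
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sr)
    out = np.empty_like(np.asarray(signal, dtype=float))
    acc = 0.0
    for i, s in enumerate(signal):
        acc += alpha * (gain * s - acc)   # one-pole low-pass smoothing
        out[i] = acc
    return out
```

A sound mixed for a "far" object is thus both quieter and duller than the same source mixed "near", which is the pairing the listening experiments manipulate.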

  16. Parts-based stereoscopic image assessment by learning binocular manifold color visual properties

    Science.gov (United States)

    Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi

    2016-11-01

Existing stereoscopic image quality assessment (SIQA) methods are mostly based on luminance information, and color information is not sufficiently considered. Color is, in fact, one of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. To be more specific, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows a parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different human visual attention, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. The experimental results on the LIVE I and LIVE II 3D IQA databases demonstrate that the proposed method achieves much higher consistency with subjective evaluations than state-of-the-art SIQA methods.
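The paper's factorization adds a manifold regularizer to NMF; the unregularized core, plain NMF with Lee-Seung multiplicative updates, is sketched below. This is the generic algorithm, not the authors' regularized variant.

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9, seed=0):
    """Factor nonnegative V (m x n) as W (m x r) @ H (r x n) using
    Lee-Seung multiplicative updates for the Frobenius objective.
    Updates keep W, H nonnegative and monotonically reduce the error."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The columns of W act as the "parts" dictionary; in the paper's setting a graph-Laplacian penalty on H would be added so that nearby patches get nearby codes.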

  17. Measurements of steady flow through a bileaflet mechanical heart valve using stereoscopic PIV.

    Science.gov (United States)

    Hutchison, Chris; Sullivan, Pierre; Ethier, C Ross

    2011-03-01

    Computational modeling of bileaflet mechanical heart valve (BiMHV) flow requires experimentally validated datasets and improved knowledge of BiMHV fluid mechanics. In this study, flow was studied downstream of a model BiMHV in an axisymmetric aortic sinus using stereoscopic particle image velocimetry. The inlet flow was steady and the Reynolds number based on the aortic diameter was 7600. Results showed the out-of-plane velocity was of similar magnitude as the transverse velocity. Although additional studies are needed for confirmation, analysis of the out-of-plane velocity showed the possible presence of a four-cell streamwise vortex structure in the mean velocity field. Spatial data for all six Reynolds stress components were obtained. Reynolds normal stress profiles revealed similarities between the central jet and free jets. These findings are important to BiMHV flow modeling, though clinical relevance is limited due to the idealized conditions chosen. To this end, the dataset is publicly available for CFD validation purposes.
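The six Reynolds stress components reported above are time averages of products of velocity fluctuations; given the three velocity components that stereoscopic PIV provides at a point, they can be computed as below. This is the generic definition, not the authors' specific processing chain.

```python
import numpy as np

def reynolds_stresses(u, v, w):
    """u, v, w: 1-D arrays of velocity samples at one measurement point.
    Subtracts the mean flow and returns the six independent components
    of the Reynolds stress tensor <u_i' u_j'>."""
    U = np.stack([u - np.mean(u), v - np.mean(v), w - np.mean(w)])
    R = U @ U.T / U.shape[1]          # 3x3 symmetric tensor
    return {"uu": R[0, 0], "vv": R[1, 1], "ww": R[2, 2],
            "uv": R[0, 1], "uw": R[0, 2], "vw": R[1, 2]}
```

The normal stresses (uu, vv, ww) are variances of the fluctuations, while the shear stresses (uv, uw, vw) vanish for uncorrelated components, which is why their spatial profiles are diagnostic of jet and vortex structure.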

  18. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    In this article we present an automatic camera calibration algorithm using multiple trajectories in a multiple camera network with non-overlapping field-of-views (FOV). Visible trajectories within a camera FOV are assumed to be measured with respect to the camera local co-ordinate system.

  19. Stereo Pinhole Camera: Assembly and experimental activities

    Directory of Open Access Journals (Sweden)

    Gilmário Barbosa Santos

    2015-05-01

Full Text Available This work describes the assembly of a stereo pinhole camera for capturing stereo pairs of images and proposes experimental activities with it. A pinhole camera can be as sophisticated as you want, or so simple that it can be handcrafted from practically recyclable materials. The paper surveys practical uses of the pinhole camera throughout history and today. Aspects of the optics and geometry involved in building the stereo pinhole camera are presented with illustrations. Furthermore, experiments are proposed using the images obtained by the camera: 3D visualization through a pair of anaglyph glasses, and estimation of relative depth by triangulation.
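For a parallel-axis stereo pair like this camera produces, the triangulation mentioned above reduces to Z = f·B/d, with focal length f expressed in pixels, baseline B between the two pinholes, and disparity d in pixels between the matched image points. A minimal sketch (the example numbers are illustrative):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Parallel-axis stereo triangulation: Z = f * B / d.
    Larger disparity means a nearer point; zero disparity is at infinity."""
    if disparity_px <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return focal_px * baseline_m / disparity_px

# e.g. f = 800 px, B = 6 cm, d = 16 px  ->  Z = 3.0 m
z = depth_from_disparity(800.0, 0.06, 16.0)
```

This inverse relationship between disparity and depth is exactly what the anaglyph experiment lets students observe qualitatively before quantifying it.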

  20. Single Camera Calibration in 3D Vision

    Directory of Open Access Journals (Sweden)

    Caius SULIMAN

    2009-12-01

Full Text Available Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered calibrated when its parameters are known (i.e., principal distance, lens distortion, focal length, etc.). In this paper we deal with a single-camera calibration method and use it to find the intrinsic and extrinsic camera parameters. The method was implemented successfully in the Matlab programming and simulation environment.
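The abstract does not spell out its algorithm; a standard building block for this kind of calibration is the Direct Linear Transform (DLT), which estimates the 3×4 projection matrix from at least six 3D-2D point correspondences. The sketch below is that generic method, not the authors' Matlab implementation; intrinsics and extrinsics can then be factored out of the recovered matrix.

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """X: (n, 3) world points, x: (n, 2) image points, n >= 6.
    Solves x ~ P X (homogeneous) for P (3x4) via SVD: each
    correspondence contributes two linear equations in P's entries."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        p = [Xw, Yw, Zw, 1.0]
        A.append(p + [0.0] * 4 + [-u * c for c in p])
        A.append([0.0] * 4 + p + [-v * c for c in p])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)   # right singular vector of least sigma
```

With noise-free correspondences the matrix is recovered exactly up to scale; with noisy image points the same SVD gives the least-squares solution.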

  1. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    Science.gov (United States)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

The authors have developed a 3D workspace system using collaborative imaging devices, in which a stereoscopic display projects 3D information. In this paper, we describe the position-detecting system for a see-through 3D viewer. 3D display systems are a useful technology for virtual reality, mixed reality, and augmented reality. We have been researching spatial imaging and interaction systems, and have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen, and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world while watching the screen of a see-through 3D viewer. The goal of our research is a display system in which, as users see the real world through the mobile viewer, virtual 3D images appear to float in the air, and observers can touch and interact with these floating images, for instance shaping them as children shape modeling clay. The key technologies of this system are the position-recognition system and the spatial-imaging display. The 3D images are presented by an improved parallax-barrier 3D display. Here the authors discuss the measuring method for the mobile viewer, which uses infrared LED point markers and a single camera (rather than a stereo camera) in the 3D workspace (augmented-reality world), present a geometric analysis of the proposed measuring method, and report results from our viewer system.

  2. Collimator changer for scintillation camera

    International Nuclear Information System (INIS)

    Jupa, E.C.; Meeder, R.L.; Richter, E.K.

    1976-01-01

A collimator-changing assembly mounted on the support structure of a scintillation camera is described. A vertical support column is positioned proximate the detector support column, with a plurality of support arms mounted on it in a rotatable, cantilevered manner at separate vertical positions. Each support arm is adapted to carry one of a plurality of collimators, which are interchangeably mountable on the underside of the detector, and to transport the collimator between a storage position remote from the detector and a change position underneath the detector

  3. Robot Tracer with Visual Camera

    Science.gov (United States)

    Jabbar Lubis, Abdul; Dwi Lestari, Yuyun; Dafitri, Haida; Azanuddin

    2017-12-01

A robot is a versatile tool that can take over human work functions, and it can be reprogrammed according to user needs. Wireless networks for remote monitoring can be used to build a robot whose movement is monitored against a blueprint, so that the path the robot chooses can be tracked; this data is sent over the wireless network. For vision, the robot uses a high-resolution camera to make it easier for the operator to control the robot and observe the surroundings.

  4. Using DSLR cameras in digital holography

    Science.gov (United States)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

In digital holography (DH), the size of the two-dimensional image sensor that records the digital hologram plays a key role in the performance of this imaging technique: the larger the camera sensor, the better the quality of the final reconstructed image. Large-format scientific cameras are available on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easily accessible alternative that is worth exploring. DSLRs are a widely available commercial option that, in comparison with traditional scientific cameras, offers a much lower cost per effective pixel over a large sensing area. However, with their RGB pixel distribution, DSLRs sample information differently from the monochrome cameras usually employed in DH, and this has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, focusing on the object-replication problem reported by several authors. Simulations of DH using monochrome and DSLR cameras are presented, along with a theoretical derivation of the replication problem based on Fourier theory. Experimental results of a DH implementation using a DSLR camera exhibit the replication problem.
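The replication effect can be illustrated in one dimension: keeping only every other pixel (as a single Bayer colour plane does on the full sensor grid) multiplies the hologram by a two-pixel comb, which in Fourier terms convolves its spectrum with impulses at DC and at Nyquist, so every fringe frequency k acquires a replica at N/2 − k. This toy demo is illustrative, not the paper's simulation.

```python
import numpy as np

N, k = 256, 10
n = np.arange(N)
s = np.cos(2 * np.pi * k * n / N)           # hologram fringe at frequency bin k
mask = (n % 2 == 0).astype(float)           # keep even pixels only: one "colour plane"
S = np.abs(np.fft.rfft(s * mask))
# Spectrum now shows the original component at bin k
# plus a comb-induced replica at bin N//2 - k.
```

In a reconstructed hologram these spectral replicas show up as displaced copies of the object, which is the replication problem the authors analyze.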

  5. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks is getting essential for video surveillance. The tasks of tracking human over camera networks are not only inherently challenging due to changing human appearance, but also have enormous potentials for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for the human tracking over camera networks are addressed, including human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed based on two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed based on the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on the analyses of the current progress made toward human tracking techniques over camera networks.

  6. Image compensation for camera and lighting variability

    Science.gov (United States)

    Daley, Wayne D.; Britton, Douglas F.

    1996-12-01

With the current trend of integrating machine vision systems into industrial manufacturing and inspection applications comes the issue of camera and illumination stabilization. Unless each application is built around a particular camera and a highly controlled lighting environment, interchanging cameras or fluctuations in lighting become a problem, as each camera usually has a different response. An empirical approach is proposed in which color-tile data are acquired using the camera of interest and a mapping to a predetermined reference image is learned using neural networks. A similar analytical approach, based on a rough analysis of the imaging systems, is also considered for deriving a mapping between cameras. Once a mapping has been determined, all data from one camera are mapped to correspond to the images of the other before any processing is performed. Instead of writing separate image-processing algorithms for each particular source of image data, the image data are adjusted for each particular camera and lighting situation. All that is required when swapping cameras is the new mapping for the camera being inserted; the image-processing algorithms can remain the same, because the input data have been adjusted appropriately. Results of applying this technique to an inspection application are presented.
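A linear baseline for the empirical mapping described above (which the paper learns with neural networks) is an affine fit on the colour-tile pairs: a 3×3 matrix plus offset estimated by least squares. The function names are illustrative; a learned nonlinear map would replace the fit while keeping the same acquire-then-remap workflow.

```python
import numpy as np

def fit_color_map(src_rgb, ref_rgb):
    """Least-squares affine map ref ≈ src @ M + b from matched
    colour-tile measurements (both arrays of shape (n, 3))."""
    A = np.hstack([src_rgb, np.ones((src_rgb.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(A, ref_rgb, rcond=None)
    return coef[:3], coef[3]          # M is 3x3, b is length-3 offset

def apply_color_map(rgb, M, b):
    """Remap pixels from the source camera into the reference space."""
    return rgb @ M + b
```

Swapping cameras then only requires re-fitting (M, b) against the same colour-tile target, leaving the downstream inspection algorithms untouched.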

  7. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were (1) trigger speed, (2) passive infrared vs. microwave sensor, (3) white vs. infrared flash, and (4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  8. A pilot study on pupillary and cardiovascular changes induced by stereoscopic video movies

    Directory of Open Access Journals (Sweden)

    Sugita Norihiro

    2007-10-01

    Full Text Available Abstract Background Taking advantage of developed image technology, it is expected that image presentation would be utilized to promote health in the field of medical care and public health. To accumulate knowledge on biomedical effects induced by image presentation, an essential prerequisite for these purposes, studies on autonomic responses in more than one physiological system would be necessary. In this study, changes in parameters of the pupillary light reflex and cardiovascular reflex evoked by motion pictures were examined, which would be utilized to evaluate the effects of images, and to avoid side effects. Methods Three stereoscopic video movies with different properties were field-sequentially rear-projected through two LCD projectors on an 80-inch screen. Seven healthy young subjects watched movies in a dark room. Pupillary parameters were measured before and after presentation of movies by an infrared pupillometer. ECG and radial blood pressure were continuously monitored. The maximum cross-correlation coefficient between heart rate and blood pressure, ρmax, was used as an index to evaluate changes in the cardiovascular reflex. Results Parameters of pupillary and cardiovascular reflexes changed differently after subjects watched three different video movies. Amplitudes of the pupillary light reflex, CR, increased when subjects watched two CG movies (movies A and D, while they did not change after watching a movie with the real scenery (movie R. The ρmax was significantly larger after presentation of the movie D. Scores of the questionnaire for subjective evaluation of physical condition increased after presentation of all movies, but their relationship with changes in CR and ρmax was different in three movies. Possible causes of these biomedical differences are discussed. Conclusion The autonomic responses were effective to monitor biomedical effects induced by image presentation. Further accumulation of data on multiple autonomic
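The index ρmax used above is the maximum cross-correlation coefficient between the heart-rate and blood-pressure series over a range of lags. A minimal sketch follows; the lag convention and window handling are assumptions, since the abstract does not specify them.

```python
import numpy as np

def rho_max(x, y, max_lag):
    """Maximum Pearson correlation between series x and y over
    integer lags in [-max_lag, max_lag] (lag convention assumed)."""
    n = len(x)
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:n - lag]
        else:
            a, b = x[:n + lag], y[-lag:]
        r = np.corrcoef(a, b)[0, 1]
        best = max(best, r)
    return best
```

A higher ρmax indicates tighter baroreflex-style coupling between the two signals, which is how the study compares responses to the three movies.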

  9. Potential hazards of viewing 3-D stereoscopic television, cinema and computer games: a review.

    Science.gov (United States)

    Howarth, Peter A

    2011-03-01

    The visual stimulus provided by a 3-D stereoscopic display differs from that of the real world because the image provided to each eye is produced on a flat surface. The distance from the screen to the eye remains fixed, providing a single focal distance, but the introduction of disparity between the images allows objects to be located geometrically in front of, or behind, the screen. Unlike in the real world, the stimulus to accommodation and the stimulus to convergence do not match. Although this mismatch is used positively in some forms of Orthoptic treatment, a number of authors have suggested that it could negatively lead to the development of asthenopic symptoms. From knowledge of the zone of clear, comfortable, single binocular vision one can predict that, for people with normal binocular vision, adverse symptoms will not be present if the discrepancy is small, but are likely if it is large, and that what constitutes 'large' and 'small' are idiosyncratic to the individual. The accommodation-convergence mismatch is not, however, the only difference between the natural and the artificial stimuli. In the former case, an object located in front of, or behind, a fixated object will not only be perceived as double if the images fall outside Panum's fusional areas, but it will also be defocused and blurred. In the latter case, however, it is usual for the producers of cinema, TV or computer game content to provide an image that is in focus over the whole of the display, and as a consequence diplopic images will be sharply in focus. The size of Panum's fusional area is spatial frequency-dependent, and because of this the high spatial frequencies present in the diplopic 3-D image will provide a different stimulus to the fusion system from that found naturally. © 2011 The College of Optometrists.

  10. Holistic processing for bodies and body parts: New evidence from stereoscopic depth manipulations.

    Science.gov (United States)

    Harris, Alison; Vyas, Daivik B; Reed, Catherine L

    2016-10-01

    Although holistic processing has been documented extensively for upright faces, it is unclear whether it occurs for other visual categories with more extensive substructure, such as body postures. Like faces, body postures have high social relevance, but they differ in having fine-grain organization not only of basic parts (e.g., arm) but also subparts (e.g., elbow, wrist, hand). To compare holistic processing for whole bodies and body parts, we employed a novel stereoscopic depth manipulation that creates either the percept of a whole body occluded by a set of bars, or of segments of a body floating in front of a background. Despite sharing low-level visual properties, only the stimulus perceived as being behind bars should be holistically "filled in" via amodal completion. In two experiments, we tested for better identification of individual body parts within the context of a body versus in isolation. Consistent with previous findings, recognition of body parts was better in the context of a whole body when the body was amodally completed behind occluders. However, when the same bodies were perceived as floating in strips, performance was significantly worse, and not significantly different, from that for amodally completed parts, supporting holistic processing of body postures. Intriguingly, performance was worst for parts in the frontal depth condition, suggesting that these effects may extend from gross body organization to a more local level. These results provide suggestive evidence that holistic representations may not be "all-or-none," but rather also operate on body regions of more limited spatial extent.

  11. Turbulent Structure of a Simplified Urban Fluid Flow Studied Through Stereoscopic Particle Image Velocimetry

    Science.gov (United States)

    Monnier, Bruno; Goudarzi, Sepehr A.; Vinuesa, Ricardo; Wark, Candace

    2018-02-01

    Stereoscopic particle image velocimetry was used to provide a three-dimensional characterization of the flow around a simplified urban model defined by a 5 by 7 array of blocks, forming four parallel streets, perpendicular to the incoming wind direction corresponding to a zero angle of incidence. Channeling of the flow through the array under consideration was observed, and its effect increased as the incoming wind direction, or angle of incidence (AOI), was changed from 0° to 15°, 30°, and 45°. The flow between blocks can be divided into two regions: a region of low turbulence kinetic energy (TKE) levels close to the leeward side of the upstream block, and a high TKE area close to the downstream block. The centre of the arch vortex is located in the low TKE area, and two regions of large streamwise velocity fluctuation bound the vortex in the spanwise direction. Moreover, a region of large spanwise velocity fluctuation on the downstream block is found between the vortex legs. Our results indicate that the reorientation of the arch vortex at increasing AOI is produced by the displacement of the different TKE regions and their interaction with the shear layers on the sides and top of the upstream and downstream blocks, respectively. There is also a close connection between the turbulent structure between the blocks and the wind gusts. The correlations among gust components were also studied, and it was found that in the near-wall region of the street the correlations between the streamwise and spanwise gusts R_{uv} were dominant for all four AOI cases. At higher wall-normal positions in the array, the R_{uw} correlation decreased with increasing AOI, whereas the R_{uv} coefficient increased as AOI increased, and at AOI = 45° all three correlations exhibited relatively high values of around 0.4.
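The gust correlation coefficients R_{uv} and R_{uw} reported above are, in essence, normalized covariances of fluctuating velocity components. A minimal sketch of that computation (not the authors' code) is:

```python
import numpy as np

def gust_correlation(u, v):
    """Correlation coefficient R_uv between two velocity components:
    the covariance of the fluctuations (velocity minus its mean)
    normalized by the product of their standard deviations."""
    up = u - u.mean()  # streamwise fluctuation u'
    vp = v - v.mean()  # spanwise (or wall-normal) fluctuation v'
    return np.mean(up * vp) / (up.std() * vp.std())
```

Applied to the u and v (or u and w) series at each measurement point, this yields the R_{uv} and R_{uw} maps whose dependence on AOI is discussed in the abstract.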

  12. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This offset causes problems when stitching together individual frames of video from the particular cameras; however, there are ways to ov...

  13. Movement-based Interaction in Camera Spaces

    DEFF Research Database (Denmark)

    Eriksson, Eva; Riisgaard Hansen, Thomas; Lykke-Olesen, Andreas

    2006-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking, and we show how these three concepts can be used to analyse other camera tracking applications.

  14. Performance analysis for gait in camera networks

    OpenAIRE

    Michela Goffredo; Imed Bouchrika; John Carter; Mark Nixon

    2008-01-01

    This paper deploys gait analysis for subject identification in multi-camera surveillance scenarios. We present a new method for viewpoint independent markerless gait analysis that does not require camera calibration and works with a wide range of directions of walking. These properties make the proposed method particularly suitable for gait identification in real surveillance scenarios where people and their behaviour need to be tracked across a set of cameras. Tests on 300 synthetic and real...

  15. Explosive Transient Camera (ETC) Program

    Science.gov (United States)

    Ricker, George

    1991-01-01

    Since the inception of the ETC program, a wide range of new technologies was developed to support this astronomical instrument. The prototype unit was installed at ETC Site 1. The first partially automated observations were made and some major renovations were later added to the ETC hardware. The ETC was outfitted with new thermoelectrically-cooled CCD cameras and a sophisticated vacuum manifold, which, together, made the ETC a much more reliable unit than the prototype. The ETC instrumentation and building were placed under full computer control, allowing the ETC to operate as an automated, autonomous instrument with virtually no human intervention necessary. The first fully-automated operation of the ETC was performed, during which the ETC monitored the error region of the repeating soft gamma-ray burster SGR 1806-21.

  16. Camera processing with chromatic aberration.

    Science.gov (United States)

    Korneliussen, Jan Tore; Hirakawa, Keigo

    2014-10-01

    Since the refractive index of materials commonly used for lenses depends on the wavelength of light, practical camera optics fail to converge light to a single point on the image plane. Known as chromatic aberration, this phenomenon distorts image details by introducing magnification error, defocus blur, and color fringes. Though achromatic and apochromatic lens designs reduce chromatic aberration to a degree, they are complex and expensive, and they do not offer a perfect correction. In this paper, we propose a new postcapture processing scheme designed to overcome these problems computationally. Specifically, the proposed solution comprises a chromatic aberration-tolerant demosaicking algorithm and a post-demosaicking chromatic aberration correction step. Experiments with simulated and real sensor data verify that the chromatic aberration is effectively corrected.

  17. Approximations to camera sensor noise

    Science.gov (United States)

    Jin, Xiaodan; Hirakawa, Keigo

    2013-02-01

    Noise is present in all image sensor data. The Poisson distribution is said to model the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise, such as Fano and quantization noise, also contribute to the overall noise profile. The question remains, however, of how best to model the combined sensor noise. Though additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given for these models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to the real camera noise than SD-AWGN. We suggest further modifications to the Poisson model that may improve the noise model.
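The two competing sensor-noise models named in this abstract can be illustrated with a short simulation. This is a sketch under assumed gain and variance parameters, not the paper's code; the key point is that SD-AWGN matches the Poisson mean-variance relation to first order while differing in distribution shape.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_noise(signal, gain=1.0):
    """Photon shot noise: each pixel is a Poisson draw whose mean is
    the noise-free signal (in electrons), scaled by the sensor gain."""
    return rng.poisson(signal / gain) * gain

def sd_awgn(signal, a=1.0, b=0.0):
    """Signal-dependent AWGN: Gaussian noise whose variance a*signal + b
    mimics the Poisson mean-variance relation (variance = mean)."""
    sigma = np.sqrt(np.maximum(a * signal + b, 0.0))
    return signal + rng.normal(0.0, 1.0, signal.shape) * sigma
```

For a flat patch of intensity 50, both models yield a sample mean near 50 and a sample variance near 50; they diverge in skewness and at low photon counts, which is where the distinction matters.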

  18. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    Science.gov (United States)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

    The authors have been concentrating on developing convenient 3D measurement methods using consumer grade digital cameras, and have concluded that consumer grade digital cameras can be expected to become useful photogrammetric devices in various close range application fields. Meanwhile, mobile phone cameras with 10-megapixel sensors have appeared on the market in Japan. In these circumstances, the epoch-making question arises of whether mobile phone cameras are able to take the place of consumer grade digital cameras in close range photogrammetric applications. In order to evaluate the potential of mobile phone cameras in close range photogrammetry, a comparative evaluation between mobile phone cameras and consumer grade digital cameras is presented in this paper with respect to lens distortion, reliability, stability and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer grade digital cameras were conducted indoors using a test target. Furthermore, the practicability of mobile phone cameras for close range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras are able to take the place of consumer grade digital cameras and to develop the market in digital photogrammetric fields.

  19. Decision about buying a gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Ganatra, R D

    1993-12-31

    A large part of the referral to a nuclear medicine department is usually for imaging studies. Sooner or later, the nuclear medicine specialist will be called upon to make a decision about when and what type of gamma camera to buy. There is no longer an option of choosing between a rectilinear scanner and a gamma camera as the former is virtually out of the market. The decision that one has to make is when to invest in a gamma camera, and then on what basis to select the gamma camera. 1 tab., 1 fig.

  20. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming effects of occlusions that could result in an object in partial or full view in one camera, when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time in each and the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
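The motion detection step described above — finding anomalies caused by object movement across frames in time — can be sketched in its simplest form as frame differencing. This is an illustrative reduction of the approach, not the authors' algorithm; the threshold value is hypothetical.

```python
import numpy as np

def detect_motion(prev_frame, frame, thresh=25):
    """Flag moving pixels by absolute frame differencing against the
    previous frame, then return the (x, y) centroid of the changed
    region, or None if nothing moved."""
    diff = np.abs(frame.astype(np.int32) - prev_frame.astype(np.int32))
    mask = diff > thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return (xs.mean(), ys.mean())
```

Running such a detector independently on each camera's stream, then registering the resulting centroids across views, is one simple way to realize the cross-camera correspondence the abstract describes.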

  1. Scintillation camera for high activity sources

    International Nuclear Information System (INIS)

    Arseneau, R.E.

    1978-01-01

    The invention described relates to a scintillation camera used for clinical medical diagnosis. Advanced recognition of many unacceptable pulses allows the scintillation camera to discard such pulses at an early stage in processing. This frees the camera to process a greater number of pulses of interest within a given period of time. Temporary buffer storage allows the camera to accommodate pulses received at a rate in excess of its maximum rated capability due to statistical fluctuations in the level of radioactivity of the radiation source measured. (U.K.)

  2. Decision about buying a gamma camera

    International Nuclear Information System (INIS)

    Ganatra, R.D.

    1992-01-01

    A large part of the referral to a nuclear medicine department is usually for imaging studies. Sooner or later, the nuclear medicine specialist will be called upon to make a decision about when and what type of gamma camera to buy. There is no longer an option of choosing between a rectilinear scanner and a gamma camera as the former is virtually out of the market. The decision that one has to make is when to invest in a gamma camera, and then on what basis to select the gamma camera

  3. Streak camera recording of interferometer fringes

    International Nuclear Information System (INIS)

    Parker, N.L.; Chau, H.H.

    1977-01-01

    The use of an electronic high-speed camera in the streaking mode to record interference fringe motion from a velocity interferometer is discussed. Advantages of this method over the photomultiplier tube-oscilloscope approach are delineated. Performance testing and data for the electronic streak camera are discussed. The velocity profile of a mylar flyer accelerated by an electrically exploded bridge, and the jump-off velocity of metal targets struck by these mylar flyers are measured in the camera tests. Advantages of the streak camera include portability, low cost, ease of operation and maintenance, simplified interferometer optics, and rapid data analysis

  4. Partially converted stereoscopic images and the effects on visual attention and memory

    Science.gov (United States)

    Kim, Sanghyun; Morikawa, Hiroyuki; Mitsuya, Reiko; Kawai, Takashi; Watanabe, Katsumi

    2015-03-01

    This study comprised two experimental examinations of cognitive activities, such as visual attention and memory, in viewing stereoscopic (3D) images. Partially converted 3D images were used, with binocular parallax added to a specific region of the image. In Experiment 1, change blindness was used as the presented stimulus. Visual attention and the impact on memory were investigated by measuring the response time to accomplish the given task. In the change blindness task, an 80 ms blank was inserted between the original and altered images, and the two images were presented alternatingly for 240 ms each. Subjects were asked to temporarily memorize the two switching images and to compare them, visually recognizing the difference between the two. The stimuli for four conditions (2D, 3D, partially converted 3D, distracted partially converted 3D) were randomly displayed to 20 subjects. The results of Experiment 1 showed that partially converted 3D images tend to attract visual attention and are prone to remain in the viewer's memory in the area where moderate negative parallax has been added. In order to examine the impact of a dynamic binocular disparity on partially converted 3D images, an evaluation experiment was conducted that applied learning, distraction, and recognition tasks for 33 subjects. The learning task involved memorizing the location of cells in a 5 × 5 matrix pattern using two different colors. Two cells were positioned with alternating colors, and one of the gray cells was moved up, down, left, or right by one cell width. The experimental conditions were a partially converted 3D condition in which a gray cell moved diagonally for a certain period of time with a dynamic binocular disparity added, a 3D condition in which binocular disparity was added to all gray cells, and a 2D condition. The correct response rates for recognition of each task after the distraction task were compared. The results of Experiment 2 showed that the correct

  5. Visual discomfort while watching stereoscopic three-dimensional movies at the cinema.

    Science.gov (United States)

    Zeri, Fabrizio; Livi, Stefano

    2015-05-01

    This study investigates discomfort symptoms while watching Stereoscopic three-dimensional (S3D) movies in the 'real' condition of a cinema. In particular, it had two main objectives: to evaluate the presence and nature of visual discomfort while watching S3D movies, and to compare visual symptoms during S3D and 2D viewing. Cinema spectators of S3D or 2D films were interviewed by questionnaire at the theatre exit of different multiplex cinemas immediately after viewing a movie. A total of 854 subjects were interviewed (mean age 23.7 ± 10.9 years; range 8-81 years; 392 females and 462 males). Five hundred and ninety-nine of them viewed different S3D movies, and 255 subjects viewed a 2D version of a film seen in S3D by 251 subjects from the S3D group for a between-subjects design for that comparison. Exploratory factor analysis revealed two factors underlying symptoms: External Symptoms Factors (ESF) with a mean ± S.D. symptom score of 1.51 ± 0.58 comprised of eye burning, eye ache, eye strain, eye irritation and tearing; and Internal Symptoms Factors (ISF) with a mean ± S.D. symptom score of 1.38 ± 0.51 comprised of blur, double vision, headache, dizziness and nausea. ISF and ESF were significantly correlated (Spearman r = 0.55; p = 0.001) but with external symptoms significantly higher than internal ones (Wilcoxon Signed-ranks test; p = 0.001). The age of participants did not significantly affect symptoms. However, females had higher scores than males for both ESF and ISF, and myopes had higher ISF scores than hyperopes. Newly released movies provided lower ESF scores than older movies, while the seat position of spectators had minimal effect. Symptoms while viewing S3D movies were significantly and negatively correlated to the duration of wearing S3D glasses. Kruskal-Wallis results showed that symptoms were significantly greater for S3D compared to those of 2D movies, both for ISF (p = 0.001) and for ESF (p = 0.001). In short, the analysis of the symptoms

  6. Improving Situational Awareness in camera surveillance by combining top-view maps with camera images

    NARCIS (Netherlands)

    Kooi, F.L.; Zeeders, R.

    2009-01-01

    The goal of the experiment described is to improve today's camera surveillance in public spaces. Three designs with the camera images combined on a top-view map were compared to each other and to the current situation in camera surveillance. The goal was to test which design makes spatial

  7. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many

  8. Subjective evaluation of two stereoscopic imaging systems exploiting visual attention to improve 3D quality of experience

    Science.gov (United States)

    Hanhart, Philippe; Ebrahimi, Touradj

    2014-03-01

    Crosstalk and vergence-accommodation rivalry negatively impact the quality of experience (QoE) provided by stereoscopic displays. However, exploiting visual attention and adapting the 3D rendering process on the fly can reduce these drawbacks. In this paper, we propose and evaluate two different approaches that exploit visual attention to improve 3D QoE on stereoscopic displays: an offline system, which uses a saliency map to predict gaze position, and an online system, which uses a remote eye tracking system to measure real time gaze positions. The gaze points were used in conjunction with the disparity map to extract the disparity of the object-of-interest. Horizontal image translation was performed to bring the fixated object on the screen plane. The user preference between standard 3D mode and the two proposed systems was evaluated through a subjective evaluation. Results show that exploiting visual attention significantly improves image quality and visual comfort, with a slight advantage for real time gaze determination. Depth quality is also improved, but the difference is not significant.

  9. Active spectral imaging nondestructive evaluation (SINDE) camera

    Energy Technology Data Exchange (ETDEWEB)

    Simova, E.; Rochefort, P.A., E-mail: eli.simova@cnl.ca [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada)

    2016-06-15

    A proof-of-concept video camera for active spectral imaging nondestructive evaluation has been demonstrated. An active multispectral imaging technique has been implemented in the visible and near infrared by using light emitting diodes with wavelengths spanning from 400 to 970 nm. This shows how the camera can be used in nondestructive evaluation to inspect surfaces and spectrally identify materials and corrosion. (author)

  10. High resolution RGB color line scan camera

    Science.gov (United States)

    Lynch, Theodore E.; Huettig, Fred

    1998-04-01

    This paper describes a color line scan camera family which is available with 6000, 8000 or 10000 pixels per color channel, utilizes off-the-shelf lenses, interfaces with currently available frame grabbers, includes on-board pixel-by-pixel offset correction, and is configurable and controllable via an RS232 serial port for computer-controlled or stand-alone operation. This line scan camera is based on an existing 8000-element monochrome line scan camera designed by AOA for OEM use. The new color version includes improvements such as better packaging and additional user features which make the camera easier to use. The heart of the camera is a tri-linear CCD sensor with on-chip color balancing for maximum accuracy and pinned photodiodes for low-lag response. Each color channel is digitized to 12 bits, and all three channels are multiplexed together so that the resulting camera output video is either a 12- or 8-bit data stream at a rate of up to 24 megapixels/s. Conversion from 12 to 8 bits, with a user-defined gamma, is accomplished by on-board user-defined video look-up tables. The camera has two user-selectable operating modes: a low-speed, high-sensitivity mode and a high-speed, reduced-sensitivity mode. The intended uses of the camera include industrial inspection, digital archiving, document scanning, and graphic arts applications.
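The 12-to-8-bit conversion through a user-defined gamma look-up table described above can be sketched as follows; the gamma value is an assumption for illustration, since the camera lets the user define the curve.

```python
import numpy as np

def build_gamma_lut(gamma=2.2, in_bits=12, out_bits=8):
    """Precompute a look-up table mapping every 12-bit input code to
    an 8-bit output through a power-law (gamma) curve, so that the
    per-pixel conversion reduces to a single table index."""
    in_max = (1 << in_bits) - 1
    out_max = (1 << out_bits) - 1
    codes = np.arange(in_max + 1) / in_max       # normalized input codes
    return np.round(out_max * codes ** (1.0 / gamma)).astype(np.uint8)

# Applying the LUT to a raw 12-bit pixel stream is then one indexing op:
# pixels8 = lut[pixels12]
```

Precomputing the table is what makes the on-board conversion cheap enough to run at full line rate: the gamma curve is evaluated once for 4096 codes rather than once per pixel.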

  11. Ultra fast x-ray streak camera

    International Nuclear Information System (INIS)

    Coleman, L.W.; McConaghy, C.F.

    1975-01-01

    A unique ultrafast x-ray sensitive streak camera, with a time resolution of 50 ps, has been built and operated. A 100 Å thick gold photocathode on a beryllium vacuum window is used in a modified commercial image converter tube. The x-ray streak camera has been used in experiments to observe time-resolved emission from laser-produced plasmas. (author)

  12. An Open Standard for Camera Trap Data

    Directory of Open Access Journals (Sweden)

    Tavis Forrester

    2016-12-01

    Full Text Available Camera traps that capture photos of animals are a valuable tool for monitoring biodiversity. The use of camera traps is rapidly increasing and there is an urgent need for standardization to facilitate data management, reporting and data sharing. Here we offer the Camera Trap Metadata Standard as an open data standard for storing and sharing camera trap data, developed by experts from a variety of organizations. The standard captures information necessary to share data between projects and offers a foundation for collecting the more detailed data needed for advanced analysis. The data standard captures information about study design, the type of camera used, and the location and species names for all detections in a standardized way. This information is critical for accurately assessing results from individual camera trapping projects and for combining data from multiple studies for meta-analysis. This data standard is an important step in aligning camera trapping surveys with best practices in data-intensive science. Ecology is moving rapidly into the realm of big data, and central data repositories are becoming a critical tool and are emerging for camera trap data. This data standard will help researchers standardize data terms, align past data to new repositories, and provide a framework for utilizing data across repositories and research projects to advance animal ecology and conservation.

  13. Laser scanning camera inspects hazardous area

    International Nuclear Information System (INIS)

    Fryatt, A.; Miprode, C.

    1985-01-01

    The main operational characteristics of a new laser scanning camera are presented. The camera is intended primarily for low-level, high-resolution viewing inside nuclear reactors. It uses a He-Ne laser beam raster; by detecting the reflected light by means of a photomultiplier, the subject under observation can be reconstructed in an electronic video store and reviewed on a conventional monitor screen.

  14. Single chip camera active pixel sensor

    Science.gov (United States)

    Shaw, Timothy (Inventor); Pain, Bedabrata (Inventor); Olson, Brita (Inventor); Nixon, Robert H. (Inventor); Fossum, Eric R. (Inventor); Panicacci, Roger A. (Inventor); Mansoorian, Barmak (Inventor)

    2003-01-01

    A totally digital single chip camera includes communications to operate most of its structure in serial communication mode. The digital single chip camera includes a D/A converter for converting an input digital word into an analog reference signal. The chip includes all of the necessary circuitry for operating the chip using a single pin.

  15. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.

  16. Centering mount for a gamma camera

    International Nuclear Information System (INIS)

    Mirkhodzhaev, A.Kh.; Kuznetsov, N.K.; Ostryj, Yu.E.

    1988-01-01

    A device for centering a γ-camera detector in case of radionuclide diagnosis is described. It permits the use of available medical coaches instead of a table with a transparent top. The device can be used for centering a detector (when it is fixed at the low end of a γ-camera) on a required area of the patient's body

  17. Digital airborne camera introduction and technology

    CERN Document Server

    Sandau, Rainer

    2014-01-01

    The last decade has seen great innovations on the airborne camera. This book is the first ever written on the topic and describes all components of a digital airborne camera ranging from the object to be imaged to the mass memory device.

  18. Adapting virtual camera behaviour through player modelling

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Research in virtual camera control has focused primarily on finding methods to allow designers to place cameras effectively and efficiently in dynamic and unpredictable environments, and to generate complex and dynamic plans for cinematography in virtual environments. In this article, we propose...

  19. Driving with head-slaved camera system

    NARCIS (Netherlands)

    Oving, A.B.; Erp, J.B.F. van

    2001-01-01

    In a field experiment, we tested the effectiveness of a head-slaved camera system for driving an armoured vehicle under armour. This system consists of a helmet-mounted display (HMD), a headtracker, and a motion platform with two cameras. Subjects performed several driving tasks on paved and in

  20. Rosetta Star Tracker and Navigation Camera

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

Proposal in response to the Invitation to Tender (ITT) issued by Matra Marconi Space (MSS) for the procurement of the ROSETTA Star Tracker and Navigation Camera.

  1. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

The plenoptic camera is a single-lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more about plenoptic camera imaging, we present a wavefront analysis of plenoptic camera imaging from the standpoint of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated by scalar diffraction theory, and the depth estimation is redescribed based on physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations; the difference between the imaging analysis methods based on geometric optics and physical optics is also shown in the simulations. (paper)

  2. Modelling Virtual Camera Behaviour Through Player Gaze

    DEFF Research Database (Denmark)

    Picardi, Andrea; Burelli, Paolo; Yannakakis, Georgios N.

    2012-01-01

industry and game AI research focus on the development of increasingly sophisticated systems to automate the control of the virtual camera integrating artificial intelligence algorithms within physical simulations. However, in both industry and academia little research has been carried out......In a three-dimensional virtual environment, aspects such as narrative and interaction largely depend on the placement and animation of the virtual camera. Therefore, virtual camera control plays a critical role in player experience and, thereby, in the overall quality of a computer game. Both game...... on the relationship between virtual camera, game-play and player behaviour. We run a game user experiment to shed some light on this relationship and identify relevant differences between camera behaviours through different game sessions, playing behaviours and player gaze patterns. Results show that users can...

  3. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Romps, David [Univ. of California, Berkeley, CA (United States); Oktem, Rusen [Univ. of California, Berkeley, CA (United States)

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized and stereo calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pairs, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory to cover the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined together to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.
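
The triangulation step mentioned in the handbook abstract can be sketched as follows. This is an illustrative midpoint method, not the handbook's actual calibration pipeline: given each camera's centre and the unit viewing ray for a matched pixel pair (both expressed in a common world frame), the 3D point is taken as the midpoint of the shortest segment between the two rays.

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def triangulate(c1, d1, c2, d2):
    """Midpoint triangulation: camera centres c1, c2 and unit ray directions d1, d2."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    r = [x - y for x, y in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b              # zero when the rays are parallel
    s = (b * e - c * d) / denom        # parameter along ray 1
    t = (a * e - b * d) / denom        # parameter along ray 2
    p1 = [ci + s * di for ci, di in zip(c1, d1)]
    p2 = [ci + t * di for ci, di in zip(c2, d2)]
    return [(x + y) / 2.0 for x, y in zip(p1, p2)]
```

With noisy real images the two rays rarely intersect exactly, which is why the midpoint (or a reprojection-error minimizer) is used.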

  4. Gamma camera performance: technical assessment protocol

    Energy Technology Data Exchange (ETDEWEB)

    Bolster, A.A. [West Glasgow Hospitals NHS Trust, London (United Kingdom). Dept. of Clinical Physics; Waddington, W.A. [University College London Hospitals NHS Trust, London (United Kingdom). Inst. of Nuclear Medicine

    1996-12-31

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author).

  5. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R and D activities on high-speed video cameras, which have been under way at Kinki University for more than ten years and are currently pursued as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras by questionnaires and hearings, and (2) on the current availability of cameras of this sort by searches of journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS (In-situ Storage Image Sensor) was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is under way and, hopefully, it will be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by other groups are also briefly reviewed.

  6. Cloud Computing with Context Cameras

    Science.gov (United States)

    Pickles, A. J.; Rosing, W. E.

    2016-05-01

    We summarize methods and plans to monitor and calibrate photometric observations with our autonomous, robotic network of 2m, 1m and 40cm telescopes. These are sited globally to optimize our ability to observe time-variable sources. Wide field "context" cameras are aligned with our network telescopes and cycle every ˜2 minutes through BVr'i'z' filters, spanning our optical range. We measure instantaneous zero-point offsets and transparency (throughput) against calibrators in the 5-12m range from the all-sky Tycho2 catalog, and periodically against primary standards. Similar measurements are made for all our science images, with typical fields of view of ˜0.5 degrees. These are matched against Landolt, Stetson and Sloan standards, and against calibrators in the 10-17m range from the all-sky APASS catalog. Such measurements provide pretty good instantaneous flux calibration, often to better than 5%, even in cloudy conditions. Zero-point and transparency measurements can be used to characterize, monitor and inter-compare sites and equipment. When accurate calibrations of Target against Standard fields are required, monitoring measurements can be used to select truly photometric periods when accurate calibrations can be automatically scheduled and performed.
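
The zero-point and transparency monitoring described above can be sketched numerically. This is an illustrative reduction (function names and numbers are ours, not the network's pipeline): the frame zero-point is the median offset between catalog magnitudes and instrumental magnitudes of matched calibrators, and the scatter of those offsets tracks transparency loss, e.g. in cloudy conditions.

```python
import statistics

def zero_point(matches):
    """matches: list of (catalog_mag, instrumental_mag) pairs for one frame.

    Returns (zp, scatter): median offset and its population std deviation.
    """
    offsets = [cat - inst for cat, inst in matches]
    return statistics.median(offsets), statistics.pstdev(offsets)

def calibrate(instrumental_mag, zp):
    """Convert an instrumental magnitude to the standard system."""
    return instrumental_mag + zp
```

Tracking `zp` and `scatter` over time per telescope is one simple way to inter-compare sites and select truly photometric periods, as the abstract suggests.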

  7. Towards Adaptive Virtual Camera Control In Computer Games

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user...... model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, we cluster gaze and camera information to identify camera behaviour profiles and we employ...... camera control in games is discussed....

  8. Reducing the Variance of Intrinsic Camera Calibration Results in the ROS Camera_Calibration Package

    Science.gov (United States)

    Chiou, Geoffrey Nelson

    The intrinsic calibration of a camera is the process in which the internal optical and geometric characteristics of the camera are determined. If accurate intrinsic parameters of a camera are known, the ray in 3D space that every point in the image lies on can be determined. Pairing with another camera allows for the position of the points in the image to be calculated by intersection of the rays. Accurate intrinsics also allow for the position and orientation of a camera relative to some world coordinate system to be calculated. These two reasons for having accurate intrinsic calibration for a camera are especially important in the field of industrial robotics where 3D cameras are frequently mounted on the ends of manipulators. In the ROS (Robot Operating System) ecosystem, the camera_calibration package is the default standard for intrinsic camera calibration. Several researchers from the Industrial Robotics & Automation division at Southwest Research Institute have noted that this package results in large variances in the intrinsic parameters of the camera when calibrating across multiple attempts. There are also open issues on this matter in their public repository that have not been addressed by the developers. In this thesis, we confirm that the camera_calibration package does indeed return different results across multiple attempts, test several possible hypotheses as to why, identify the reason, and provide a simple solution to fix the cause of the issue.
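
The claim that accurate intrinsics fix the 3D ray through every pixel can be made concrete with a minimal pinhole-model sketch (distortion terms and the thesis's actual calibration code are omitted): with focal lengths (fx, fy) and principal point (cx, cy), each pixel maps to a unique unit ray in the camera frame.

```python
import math

def pixel_to_ray(u, v, fx, fy, cx, cy):
    """Back-project pixel (u, v) to a unit ray in the camera frame (pinhole, no distortion)."""
    x = (u - cx) / fx          # normalized image coordinates
    y = (v - cy) / fy
    n = math.sqrt(x * x + y * y + 1.0)
    return (x / n, y / n, 1.0 / n)
```

Errors in (fx, fy, cx, cy) bend every such ray, which is why the variance in calibrated intrinsics that the thesis investigates propagates directly into triangulated 3D positions.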

  9. Costless Platform for High Resolution Stereoscopic Images of a High Gothic Facade

    Science.gov (United States)

    Héno, R.; Chandelier, L.; Schelstraete, D.

    2012-07-01

    In October 2011, the PPMD specialized master's degree students (Photogrammetry, Positioning and Deformation Measurement) of the French ENSG (IGN's School of Geomatics, the Ecole Nationale des Sciences Géographiques) were asked to come and survey the main facade of the cathedral of Amiens, which is very complex as far as size and decoration are concerned. Although it was first planned to use a lift truck for the image survey, budget considerations and a taste for experimentation led the project to other perspectives: images shot from ground level with a long focal camera will be combined with complementary images shot from the higher galleries available on the main facade with a wide angle camera fixed on a horizontal 2.5 meter long pole. This heteroclite image survey is being processed by the PPMD master's degree students during this academic year. Among other types of products, 3D point clouds will be calculated on specific parts of the facade with both sources of images. If the proposed device and methodology to get full image coverage of the main facade happen to be fruitful, the image acquisition phase will be completed later by another team. This article focuses on the production of 3D point clouds with wide angle images on the rose of the main facade.

  10. COSTLESS PLATFORM FOR HIGH RESOLUTION STEREOSCOPIC IMAGES OF A HIGH GOTHIC FACADE

    Directory of Open Access Journals (Sweden)

    R. Héno

    2012-07-01

    Full Text Available In October 2011, the PPMD specialized master's degree students (Photogrammetry, Positioning and Deformation Measurement) of the French ENSG (IGN’s School of Geomatics, the Ecole Nationale des Sciences Géographiques) were asked to come and survey the main facade of the cathedral of Amiens, which is very complex as far as size and decoration are concerned. Although it was first planned to use a lift truck for the image survey, budget considerations and a taste for experimentation led the project to other perspectives: images shot from ground level with a long focal camera will be combined with complementary images shot from the higher galleries available on the main facade with a wide angle camera fixed on a horizontal 2.5 meter long pole. This heteroclite image survey is being processed by the PPMD master's degree students during this academic year. Among other types of products, 3D point clouds will be calculated on specific parts of the facade with both sources of images. If the proposed device and methodology to get full image coverage of the main facade happen to be fruitful, the image acquisition phase will be completed later by another team. This article focuses on the production of 3D point clouds with wide angle images on the rose of the main facade.

  11. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    Full Text Available There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.
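
The decentralized handover idea can be caricatured in a few lines. This is a toy sketch, not the paper's implementation: the camera geometry, class names, and 1D "field of view" are invented, and the real system uses CamShift tracking plus a mobile agent framework. The point it illustrates is that a single tracker instance migrates between adjacent cameras with no central coordinator.

```python
class Camera:
    """A camera observing a 1D interval [x_min, x_max) with known neighbours."""
    def __init__(self, name, x_min, x_max, neighbours=()):
        self.name = name
        self.x_min, self.x_max = x_min, x_max
        self.neighbours = list(neighbours)

    def sees(self, x):
        return self.x_min <= x < self.x_max

def track(cameras, start, positions):
    """Return the name of the camera hosting the tracker at each object position."""
    host = cameras[start]
    hosts = []
    for x in positions:
        if not host.sees(x):
            # Handover: query only adjacent cameras, never a central server.
            for name in host.neighbours:
                if cameras[name].sees(x):
                    host = cameras[name]
                    break
            # If no neighbour sees the object, the tracker simply stays put.
        hosts.append(host.name)
    return hosts
```

Because each handover decision is local, adding cameras scales the system without any central coordination, which is the scalability argument the abstract makes.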

  12. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-04-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables
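
The kinematic idea can be sketched in simplified form (this is not the report's PDP-11 implementation: the camera is placed at the origin with no 4 x 4 frame chain, and the step size is invented). Given a target position from the manipulator's joint sensors, the aiming PAN/TILT angles follow from two arctangents, and a deadband around the commanded angle suppresses the continuous small motions that cause operator seasickness.

```python
import math

DEADBAND = math.radians(2.0)  # the report's +/-2 degree deadband

def aim_angles(target):
    """PAN/TILT angles (radians) that point a camera at the origin toward target (x, y, z)."""
    x, y, z = target
    pan = math.atan2(y, x)
    tilt = math.atan2(z, math.hypot(x, y))
    return pan, tilt

def command(current, desired):
    """Bang-bang control: return a fixed angular step only outside the deadband."""
    error = desired - current
    if abs(error) <= DEADBAND:
        return 0.0
    return math.copysign(math.radians(1.0), error)  # hypothetical 1 degree step
```

In the real system the target position is first transformed through the manipulator's and camera platform's 4 x 4 matrices into the camera's base frame before these angles are computed.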

  13. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-07-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2-deg deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables

  14. New camera systems for fuel services

    International Nuclear Information System (INIS)

    Hummel, W.; Beck, H.J.

    2010-01-01

    AREVA NP Fuel Services has many years of experience in visual examination and measurements on fuel assemblies and associated core components using state-of-the-art cameras and measuring technologies. The techniques used allow the surface and dimensional characterization of materials and shapes by visual examination. New, more sophisticated technologies for fuel services include, for example, two shielded color camera systems for use under water and for close inspection of a fuel assembly. Market requirements for detecting and characterizing small defects (less than a tenth of a millimetre) or cracks, and for analyzing surface appearance on irradiated fuel rod cladding or fuel assembly structural parts, have increased. It is therefore common practice to use movie cameras with higher resolution. The radiation resistance of high-resolution CCD cameras is in general very low, and it is not possible to use them unshielded close to a fuel assembly. By extending the camera with a mirror system and shielding around the sensitive parts, the movie camera can be utilized for fuel assembly inspection. AREVA NP Fuel Services is now equipped with such movie cameras. (orig.)

  15. First results from the TOPSAT camera

    Science.gov (United States)

    Greenway, Paul; Tosh, Ian; Morris, Nigel; Burton, Gary; Cawley, Steve

    2017-11-01

    The TopSat camera is a low cost remote sensing imager capable of producing 2.5 metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. The instrument was designed and assembled at the Space Science & Technology Department of the CCLRC's Rutherford Appleton Laboratory (RAL) in the UK, and was launched on the 27th October 2005 from Plesetsk Cosmodrome in Northern Russia on a Kosmos-3M. The camera utilises an off-axis three mirror system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping the costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, which is a collaboration between four UK organisations; QinetiQ, Surrey Satellite Technology Ltd (SSTL), RAL and Infoterra. Its objective is to demonstrate provision of rapid response high resolution imagery to fixed and mobile ground stations using a low cost minisatellite. The paper "Development of the TopSat Camera" presented by RAL at the 5th ICSO in 2004 described the opto-mechanical design, assembly, alignment and environmental test methods implemented. Now that the spacecraft is in orbit and successfully acquiring images, this paper presents the first results from the camera and makes an initial assessment of the camera's in-orbit performance.

  16. State of art in radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Young Soo; Kim, Seong Ho; Cho, Jae Wan; Kim, Chang Hoi; Seo, Young Chil

    2002-02-01

    Work in radiation environments such as nuclear power plants, RI facilities, nuclear fuel fabrication facilities, and medical centers must take radiation exposure into account, and such jobs can be carried out by remote observation and operation. However, cameras used in general industry are vulnerable to radiation, so radiation-tolerant cameras are needed for radiation environments. Radiation-tolerant camera systems are applied in the nuclear industry, radioactive medicine, aerospace, and so on. In the nuclear industry especially, there is continuous demand for the inspection of nuclear boilers, the exchange of pellets, and the inspection of nuclear waste. Countries with developed nuclear industries have made efforts to develop radiation-tolerant cameras, and now have many kinds of radiation-tolerant cameras which can tolerate a total dose of 10{sup 6}-10{sup 8} rad. In this report, we examine the state of the art in radiation-tolerant cameras and analyze the technology. We hope this report raises interest in developing radiation-tolerant cameras and helps upgrade the level of domestic technology.

  17. Fuzzy logic control for camera tracking system

    Science.gov (United States)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
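
A minimal fuzzy pan controller can illustrate the approach. This sketch is ours, not the NASA system's rule base: three invented triangular membership functions fuzzify the target's horizontal offset in the image, three rules fire, and a weighted average defuzzifies them into a pan rate.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 at a and c, peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def pan_rate(offset):
    """offset in [-1, 1]: target position relative to image centre.

    Returns a normalized pan rate in [-1, 1] via weighted-average defuzzification.
    """
    left   = tri(offset, -1.5, -1.0, 0.0)   # rule: target left   -> pan left
    centre = tri(offset, -1.0,  0.0, 1.0)   # rule: target centred -> hold
    right  = tri(offset,  0.0,  1.0, 1.5)   # rule: target right  -> pan right
    num = left * (-1.0) + centre * 0.0 + right * 1.0
    den = left + centre + right
    return num / den if den else 0.0
```

The smooth blend between rules is what distinguishes this from the bang-bang deadband controllers described elsewhere in these records: the pan rate tapers continuously to zero as the target approaches the image centre.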

  18. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different....... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios...

  19. Scintillation camera with second order resolution

    International Nuclear Information System (INIS)

    Muehllehner, G.

    1976-01-01

    A scintillation camera for use in radioisotope imaging to determine the concentration of radionuclides in a two-dimensional area is described in which means is provided for second order positional resolution. The phototubes, which normally provide only a single order of resolution, are modified to provide second order positional resolution of radiation within an object positioned for viewing by the scintillation camera. The phototubes are modified in that multiple anodes are provided to receive signals from the photocathode in a manner such that each anode is particularly responsive to photoemissions from a limited portion of the photocathode. Resolution of radioactive events appearing as an output of this scintillation camera is thereby improved
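
The first-order positioning that the patent refines is classic Anger logic, which can be sketched as a signal-weighted centroid of the phototube outputs (the tube coordinates and signals below are made up for illustration; the patent's multi-anode tubes add a second order of resolution within each tube, which this sketch does not model).

```python
def anger_position(tube_xy, signals):
    """First-order Anger logic: event position as the signal-weighted
    centroid of phototube outputs.

    tube_xy: list of (x, y) phototube positions on the crystal face.
    signals: corresponding phototube output amplitudes.
    """
    total = sum(signals)
    x = sum(s * xy[0] for s, xy in zip(signals, tube_xy)) / total
    y = sum(s * xy[1] for s, xy in zip(signals, tube_xy)) / total
    return x, y
```

Because each conventional tube contributes only one amplitude, resolution is limited by tube size; resolving where on the photocathode the light arrived, as the multi-anode design does, sharpens the centroid.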

  20. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to the use of cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a great presence of 'digital cameras' aimed more at the home market. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low-level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs which offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application would be such effects as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog to digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  1. Camera Trajectory from Wide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to the structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for reliable camera trajectory estimation, for instance, the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes the image feature matching very difficult (or impossible) and the camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, the image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, that links the radius of the image point r to the

  2. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    Science.gov (United States)

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently polar shadowed and sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.

  3. Gamma camera performance: technical assessment protocol

    International Nuclear Information System (INIS)

    Bolster, A.A.; Waddington, W.A.

    1996-01-01

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author)

  4. Camera Based Navigation System with Augmented Reality

    Directory of Open Access Journals (Sweden)

    M. Marcu

    2012-06-01

    Full Text Available Nowadays smart mobile devices have enough processing power, memory, storage and always-connected wireless communication bandwidth to make them suitable for any type of application. Augmented reality (AR) proposes a new type of application that tries to enhance the real world by superimposing or combining virtual objects or computer-generated information with it. In this paper we present a camera-based navigation system with augmented reality integration. The proposed system aims at the following: the user points the camera of the smartphone towards a point of interest, like a building or any other place, and the application searches for relevant information about that specific place and superimposes the data over the video feed on the display. When the user moves the camera away, changing its orientation, the data changes as well, in real time, with the proper information about the place that is now in the camera view.
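
One core step in such an overlay, deciding whether a point of interest lies in the camera's current view, can be sketched from the device's GPS position and compass heading. This is a hypothetical helper, not the paper's code; the field-of-view value is invented.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the observer to the POI, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def in_view(heading, bearing, fov=60.0):
    """True if the POI bearing falls inside the camera's horizontal field of view."""
    diff = (bearing - heading + 180.0) % 360.0 - 180.0  # wrap to (-180, 180]
    return abs(diff) <= fov / 2.0
```

A full AR pipeline would also use the POI's distance and the device's pitch to place the label vertically on screen, but the horizontal gating above is the essential filter.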

  5. CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    G. Kemper

    2016-06-01

    Full Text Available Beside the creation of virtual animated 3D city models and analyses for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires a precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and will present examples of the calibration flight with the final 3D city model. In difference to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The shown sensor (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and is equipped with a 50 mm lens, while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna–IMU lever arms. These had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots of all 5 cameras and the registered GPS/IMU data. This specific mission was designed with two different altitudes and additional cross lines at each flying height. The five images from each exposure position have no overlaps, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration. In a first

  6. Portable mini gamma camera for medical applications

    CERN Document Server

    Porras, E; Benlloch, J M; El-Djalil-Kadi-Hanifi, M; López, S; Pavon, N; Ruiz, J A; Sánchez, F; Sebastiá, A

    2002-01-01

    A small, portable and low-cost gamma camera for medical applications has been developed and clinically tested. This camera, based on a scintillator crystal and a Position Sensitive Photo-Multiplier Tube, has a useful field of view of 4.6 cm diameter and provides 2.2 mm of intrinsic spatial resolution. Its mobility and light weight allow it to reach the patient from any desired direction. This camera images small organs with high efficiency and so addresses the demand for devices for specific clinical applications. In this paper, we present the camera and briefly describe the procedures that have led us to choose its configuration and the image reconstruction method. The clinical tests and diagnostic capability are also presented and discussed.

  7. Improving maps of ice-sheet surface elevation change using combined laser altimeter and stereoscopic elevation model data

    DEFF Research Database (Denmark)

    Fredenslund Levinsen, Joanna; Howat, I. M.; Tscherning, C. C.

    2013-01-01

    We combine the complementary characteristics of laser altimeter data and stereoscopic digital elevation models (DEMs) to construct high-resolution (~100 m) maps of surface elevations and elevation changes over rapidly changing outlet glaciers in Greenland. Measurements from spaceborne and airborne...... laser altimeters have relatively low errors but are spatially limited to the ground tracks, while DEMs have larger errors but provide spatially continuous surfaces. The principle of our method is to fit the DEM surface to the altimeter point clouds in time and space to minimize the DEM errors and use...... that surface to extrapolate elevations away from altimeter flight lines. This reduces the DEM registration errors and fills the gap between the altimeter paths. We use data from ICESat and ATM as well as SPOT 5 DEMs from 2007 and 2008 and apply them to the outlet glaciers Jakobshavn Isbræ (JI
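As a toy illustration of the registration idea (the authors fit the DEM to the altimetry in both time and space; here only a constant vertical bias is estimated, and all names are hypothetical):

```python
def dem_bias(altimeter_points, dem_lookup):
    """Mean vertical offset (altimeter minus DEM) at the altimeter footprints.

    altimeter_points: iterable of (x, y, elevation) tuples along the ground track
    dem_lookup: function (x, y) -> DEM elevation at that location
    """
    residuals = [z - dem_lookup(x, y) for x, y, z in altimeter_points]
    return sum(residuals) / len(residuals)

def correct_dem(dem_value, bias):
    """Shift a DEM elevation by the estimated bias so it matches the altimetry."""
    return dem_value + bias
```

The corrected surface can then be sampled away from the flight lines, which is the gap-filling role the DEM plays in the combined product.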

  8. 3-D flow characterization and shear stress in a stenosed carotid artery bifurcation model using stereoscopic PIV technique.

    Science.gov (United States)

    Kefayati, Sarah; Poepping, Tamie L

    2010-01-01

    The carotid artery bifurcation is a common site of atherosclerosis which is a major leading cause of ischemic stroke. The impact of stenosis in the atherosclerotic carotid artery is to disturb the flow pattern and produce regions with high shear rate, turbulence, and recirculation, which are key hemodynamic factors associated with plaque rupture, clot formation, and embolism. In order to characterize the disturbed flow in the stenosed carotid artery, stereoscopic PIV measurements were performed in a transparent model with 50% stenosis under pulsatile flow conditions. Simulated ECG gating of the flowrate waveform provides external triggering required for volumetric reconstruction of the complex flow patterns. Based on the three-component velocity data in the lumen region, volumetric shear-stress patterns were derived.
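Shear stress is derived from velocity gradients of the PIV data; a one-dimensional finite-difference sketch under the assumptions of a uniform grid and a Newtonian fluid (a simplification of the volumetric three-component computation described):

```python
def shear_rate(u, dy):
    """Central-difference du/dy for a 1-D velocity profile u sampled every dy."""
    n = len(u)
    g = [0.0] * n
    for i in range(1, n - 1):
        g[i] = (u[i + 1] - u[i - 1]) / (2.0 * dy)
    g[0] = (u[1] - u[0]) / dy            # one-sided differences at the boundaries
    g[-1] = (u[-1] - u[-2]) / dy
    return g

def wall_shear_stress(shear_rate_at_wall, mu):
    """Newtonian wall shear stress tau = mu * du/dy, with dynamic viscosity mu."""
    return mu * shear_rate_at_wall
```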

  9. An imaging system for a gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce images superior to those of conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given and the problems and limitations introduced by noise are discussed in full. (U.K.)

  10. Semiotic Analysis of Canon Camera Advertisements

    OpenAIRE

    INDRAWATI, SUSAN

    2015-01-01

    Keywords: Semiotic Analysis, Canon Camera, Advertisement. Advertisement is a medium for delivering a message to people with the goal of influencing them to use certain products. Semiotics is applied to develop a correlation between the elements used in an advertisement. In this study, the writer chose the semiotic analysis of Canon camera advertisements as the subject, analyzed using a semiotic study based on Peirce's theory. The semiotic approach is employed in interpreting the sign, symbol, icon, and index ...

  11. Imaging camera with multiwire proportional chamber

    International Nuclear Information System (INIS)

    Votruba, J.

    1980-01-01

    The camera for imaging radioisotope dislocations for use in nuclear medicine or for other applications, claimed in the patent, is provided with two multiwire lattices for the x-coordinate connected to a first coincidence circuit, and two multiwire lattices for the y-coordinate connected to a second coincidence circuit. This arrangement eliminates the need for a collimator and increases camera sensitivity while reducing production cost. (Ha)

  12. Imaging capabilities of germanium gamma cameras

    International Nuclear Information System (INIS)

    Steidley, J.W.

    1977-01-01

    Quantitative methods of analysis based on the use of a computer simulation were developed and used to investigate the imaging capabilities of germanium gamma cameras. The main advantage of the computer simulation is that the inherent unknowns of clinical imaging procedures are removed from the investigation. The effects of patient scattered radiation were incorporated using a mathematical LSF model which was empirically developed and experimentally verified. Image modifying effects of patient motion, spatial distortions, and count rate capabilities were also included in the model. Spatial domain and frequency domain modeling techniques were developed and used in the simulation as required. The imaging capabilities of gamma cameras were assessed using low contrast lesion source distributions. The results showed that an improvement in energy resolution from 10% to 2% offers significant clinical advantages in terms of improved contrast, increased detectability, and reduced patient dose. The improvements are of greatest significance for small lesions at low contrast. The results of the computer simulation were also used to compare a design of a hypothetical germanium gamma camera with a state-of-the-art scintillation camera. The computer model performed a parametric analysis of the interrelated effects of inherent and technological limitations of gamma camera imaging. In particular, the trade-off between collimator resolution and collimator efficiency for detection of a given low contrast lesion was directly addressed. This trade-off is an inherent limitation of both gamma cameras. The image degrading effects of patient motion, camera spatial distortions, and low count rate were shown to modify the improvements due to better energy resolution. Thus, based on this research, the continued development of germanium cameras to the point of clinical demonstration is recommended
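A toy 1-D version of the spatial-domain modeling described, blurring an ideal low-contrast lesion profile with a Gaussian LSF to show the contrast loss (the parameters and helper names are hypothetical, not from the simulation):

```python
import math

def gaussian_lsf(fwhm, dx, half_width=3.0):
    """Discrete, normalized Gaussian line-spread function sampled every dx."""
    sigma = fwhm / 2.3548                      # FWHM = 2*sqrt(2*ln 2) * sigma
    n = int(half_width * sigma / dx)
    kernel = [math.exp(-0.5 * ((i * dx) / sigma) ** 2) for i in range(-n, n + 1)]
    total = sum(kernel)
    return [k / total for k in kernel]

def blur(profile, kernel):
    """1-D convolution with edge clamping; models LSF degradation of a lesion profile."""
    n, m = len(profile), len(kernel)
    half = m // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(m):
            k = min(max(i + j - half, 0), n - 1)
            acc += profile[k] * kernel[j]
        out.append(acc)
    return out
```

Running a narrow lesion through `blur` shows the peak amplitude, and hence the apparent contrast, dropping as the LSF widens, which is the detectability trade-off the simulation quantifies.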

  13. Compact Optical Technique for Streak Camera Calibration

    International Nuclear Information System (INIS)

    Curt Allen; Terence Davies; Frans Janson; Ronald Justin; Bruce Marshall; Oliver Sweningsen; Perry Bell; Roger Griffith; Karla Hagans; Richard Lerche

    2004-01-01

    The National Ignition Facility is under construction at the Lawrence Livermore National Laboratory for the U.S. Department of Energy Stockpile Stewardship Program. Optical streak cameras are an integral part of the experimental diagnostics instrumentation. To accurately reduce data from the streak cameras a temporal calibration is required. This article describes a technique for generating trains of precisely timed short-duration optical pulses that are suitable for temporal calibrations
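Reducing such a calibration pulse train to a sweep rate amounts to dividing the known pulse period by the mean pixel spacing between recorded pulses; a minimal sketch (the numbers are hypothetical, not NIF data):

```python
def sweep_rate(pulse_pixels, pulse_period_ps):
    """Time per pixel (ps) from the streak-record positions of equally spaced pulses.

    pulse_pixels: pixel coordinates of successive calibration pulses on the streak record
    pulse_period_ps: known optical pulse spacing in picoseconds
    """
    gaps = [b - a for a, b in zip(pulse_pixels, pulse_pixels[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return pulse_period_ps / mean_gap
```

In practice the sweep is not perfectly linear, so per-gap rates would be fitted rather than averaged; the mean is the simplest estimate.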

  14. From stereoscopic recording to virtual reality headsets: Designing a new way to learn surgery.

    Science.gov (United States)

    Ros, M; Trives, J-V; Lonjon, N

    2017-03-01

    To improve surgical practice, there are several different approaches to simulation. Due to wearable technologies, recording 3D movies is now easy. The development of a virtual reality headset allows imagining a different way of watching these videos: using dedicated software to increase interactivity in a 3D immersive experience. The objective was to record 3D movies via a main surgeon's perspective, to watch files using virtual reality headsets and to validate pedagogic interest. Surgical procedures were recorded using a system combining two side-by-side cameras placed on a helmet. We added two LEDs just below the cameras to enhance luminosity. Two files were obtained in mp4 format and edited using dedicated software to create 3D movies. Files obtained were then played using a virtual reality headset. Surgeons who tried the immersive experience completed a questionnaire to evaluate the interest of this procedure for surgical learning. Twenty surgical procedures were recorded. The movies capture a scene which is extended 180° horizontally and 90° vertically. The immersive experience created by the device conveys a genuine feeling of being in the operating room and seeing the procedure first-hand through the eyes of the main surgeon. All surgeons indicated that they believe in pedagogical interest of this method. We succeeded in recording the main surgeon's point of view in 3D and watch it on a virtual reality headset. This new approach enhances the understanding of surgery; most of the surgeons appreciated its pedagogic value. This method could be an effective learning tool in the future. Copyright © 2016. Published by Elsevier Masson SAS.

  15. Review of Calibration Methods for Scheimpflug Camera

    Directory of Open Access Journals (Sweden)

    Cong Sun

    2018-01-01

    Full Text Available The Scheimpflug camera offers a wide range of applications in the field of typical close-range photogrammetry, particle image velocimetry, and digital image correlation due to the fact that the depth of field of a Scheimpflug camera can be greatly extended according to the Scheimpflug condition. Yet, conventional calibration methods are not applicable in this case because the assumptions used by classical calibration methodologies are no longer valid for cameras satisfying the Scheimpflug condition. Therefore, various methods have been investigated to solve the problem over the last few years. However, no comprehensive review exists that provides an insight into recent calibration methods for Scheimpflug cameras. This paper presents a survey of recent calibration methods for Scheimpflug cameras with perspective lenses, including the general nonparametric imaging model, and analyzes in detail the advantages and drawbacks of the mainstream calibration models with respect to each other. Real-data experiments including calibrations, reconstructions, and measurements are performed to assess the performance of the models. The results reveal that the accuracies of the RMM, PLVM, and PCIM are basically equal, while the accuracy of GNIM is slightly lower compared with the three parametric models. Moreover, the experimental results reveal that the parameters of the tangential distortion are likely coupled with the tilt angle of the sensor in Scheimpflug calibration models. The work of this paper lays the foundation for further research on Scheimpflug cameras.

  16. The Use of Camera Traps in Wildlife

    Directory of Open Access Journals (Sweden)

    Yasin Uçarlı

    2013-11-01

    Full Text Available Camera traps are increasingly used in abundance and density estimates of wildlife species. Camera traps are a very good alternative to direct observation, particularly in steep terrain, in densely vegetated areas, or for nocturnal species. The main reason for using camera traps is that they eliminate economic, personnel, and time losses while operating continuously at several points at the same time. Camera traps are motion- and heat-sensitive and can take a photo or video depending on the model. Crossing points and feeding or mating areas of the focal species are priority locations for setting camera traps. The population size can be estimated from the images combined with capture-recapture methods. The population density is then the population size divided by the effective sampling area. Mating and breeding season, habitat choice, group structures and survival rates of the focal species can also be derived from the images. Camera traps are very useful for economically obtaining the necessary data about particularly elusive species in planning and conservation efforts.
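The abundance and density steps described above correspond to the classic Lincoln-Petersen capture-recapture estimate; a minimal sketch (the session counts in the usage test are hypothetical):

```python
def lincoln_petersen(n1, n2, m2):
    """Population size estimate: n1 animals identified in session one,
    n2 in session two, m2 of which were already seen in session one."""
    if m2 == 0:
        raise ValueError("no recaptures: estimate undefined")
    return n1 * n2 / m2

def density(population, effective_area):
    """Individuals per unit area: population size over effective sampling area."""
    return population / effective_area
```

Variants such as the Chapman bias-corrected estimator are preferred for small samples; the plain ratio above is the textbook form.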

  17. Towards next generation 3D cameras

    Science.gov (United States)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real-world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (robotic inspection and assembly systems.

  18. Multi-Angle Snowflake Camera Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Stuefer, Martin [Univ. of Alaska, Fairbanks, AK (United States); Bailey, J. [Univ. of Alaska, Fairbanks, AK (United States)

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36º. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' field of view within a 10°-angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.
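Deriving fall speed from successive triggers reduces to the vertical sensor separation divided by the elapsed time; a sketch with a hypothetical emitter spacing (the handbook does not give the exact geometry used here):

```python
def fall_speed(sensor_separation_m, t_upper_s, t_lower_s):
    """Hydrometeor fall speed (m/s) from the trigger times of the upper and
    lower emitter pairs, separated vertically by sensor_separation_m."""
    dt = t_lower_s - t_upper_s
    if dt <= 0:
        raise ValueError("lower sensor must trigger after the upper one")
    return sensor_separation_m / dt
```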

  19. Calibration Procedures in Mid Format Camera Setups

    Science.gov (United States)

    Pivnicka, F.; Kemper, G.; Geissler, S.

    2012-07-01

    A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform and the specific values of the mid-format camera make a professional setup with various calibration and misalignment operations necessary. An important part is to have a proper camera calibration. Using aerial images over a well-designed test field with 3D structures and/or different flight altitudes enables the determination of calibration values in Bingo software. It will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted in a solid and reliable way to the camera. Beside the mechanical work, especially in mounting the camera beside the IMU, two lever arms have to be measured with mm accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre, and the lever arm from the IMU centre to the camera projection centre. In fact, the measurement with a total station is not a difficult task, but the definition of the right centres and the need for using rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used; for these, a gyro-based stabilized platform is recommended. As a consequence, the IMU must be mounted beside the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the drawback is that the IMU-to-GPS-antenna lever arm is floating. We therefore have to deal with an additional data stream, the values of the movement of the stabilizer, to correct the floating lever arm distances. If the post-processing of the GPS-IMU data, taking the floating lever arms into account, delivers the expected result, the lever arms between IMU and camera can be applied
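The rotation-matrix step the text warns about, rotating a body-frame lever arm into the mapping frame before adding it to the GNSS position, can be sketched as follows (heading-only rotation for brevity; a full implementation would use the complete roll-pitch-yaw matrix):

```python
import math

def rot_z(heading_rad):
    """Rotation matrix (row-major 3x3) for a heading (yaw) angle about the z axis."""
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply_lever_arm(position, R, lever_arm):
    """Camera projection centre = GNSS position + R * body-frame lever arm."""
    rotated = [sum(R[i][j] * lever_arm[j] for j in range(3)) for i in range(3)]
    return [p + r for p, r in zip(position, rotated)]
```

Getting the frame convention and centre definitions wrong here produces exactly the systematic offsets the abstract describes, since the error rotates with the aircraft heading.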

  20. CALIBRATION PROCEDURES IN MID FORMAT CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    F. Pivnicka

    2012-07-01

    Full Text Available A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform and the specific values of the mid-format camera make a professional setup with various calibration and misalignment operations necessary. An important part is to have a proper camera calibration. Using aerial images over a well-designed test field with 3D structures and/or different flight altitudes enables the determination of calibration values in Bingo software. It will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted in a solid and reliable way to the camera. Beside the mechanical work, especially in mounting the camera beside the IMU, two lever arms have to be measured with mm accuracy: the lever arm from the GPS antenna to the IMU's calibrated centre, and the lever arm from the IMU centre to the camera projection centre. In fact, the measurement with a total station is not a difficult task, but the definition of the right centres and the need for using rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used; for these, a gyro-based stabilized platform is recommended. As a consequence, the IMU must be mounted beside the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the drawback is that the IMU-to-GPS-antenna lever arm is floating. We therefore have to deal with an additional data stream, the values of the movement of the stabilizer, to correct the floating lever arm distances. If the post-processing of the GPS-IMU data, taking the floating lever arms into account, delivers the expected result, the lever arms between IMU and

  1. Soft x-ray streak cameras

    International Nuclear Information System (INIS)

    Stradling, G.L.

    1988-01-01

    This paper is a discussion of the development and of the current state of the art in picosecond soft x-ray streak camera technology. Accomplishments from a number of institutions are discussed. X-ray streak cameras vary from standard visible streak camera designs in the use of an x-ray transmitting window and an x-ray sensitive photocathode. The spectral sensitivity range of these instruments includes portions of the near UV and extends from the subkilovolt x- ray region to several tens of kilovolts. Attendant challenges encountered in the design and use of x-ray streak cameras include the accommodation of high-voltage and vacuum requirements, as well as manipulation of a photocathode structure which is often fragile. The x-ray transmitting window is generally too fragile to withstand atmospheric pressure, necessitating active vacuum pumping and a vacuum line of sight to the x-ray signal source. Because of the difficulty of manipulating x-ray beams with conventional optics, as is done with visible light, the size of the photocathode sensing area, access to the front of the tube, the ability to insert the streak tube into a vacuum chamber and the capability to trigger the sweep with very short internal delay times are issues uniquely relevant to x-ray streak camera use. The physics of electron imaging may place more stringent limitations on the temporal and spatial resolution obtainable with x-ray photocathodes than with the visible counterpart. Other issues which are common to the entire streak camera community also concern the x-ray streak camera users and manufacturers

  2. Characterization of SWIR cameras by MRC measurements

    Science.gov (United States)

    Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.

    2014-05-01

    Cameras for the SWIR wavelength range are becoming more and more important because of the better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera, or to qualify a camera for a given application, characterization of the camera by means of the Minimum Resolvable Contrast (MRC) concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera device, the achievable observation range can be calculated for every combination of target size, illumination level and weather conditions. MRC measurements in the SWIR wavelength band can be performed largely along the guidelines of MRC measurements for a visual camera. Typically, measurements are performed with a set of resolution targets (e.g. the USAF 1951 target) manufactured with different contrast values from 50% down to less than 1%. For a given illumination level, the achievable spatial resolution is then measured for each target. The resulting curve shows the minimum contrast that is necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, first, the irradiation parameters have to be given in radiometric instead of photometric units, which are limited in their use to the visible range. In order to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source is necessary with appropriate emission in the SWIR range (e.g. an incandescent lamp) and the irradiance has to be measured in W/m2 instead of lux = lumen/m2. Third, the contrast values of the targets have to be calibrated anew for the SWIR range because they typically differ from the values determined for the visual range. Measured MRC values of three cameras are compared to the specified performance data of the devices and to the results of a multi-band in-house designed Vis-SWIR camera
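The target contrasts and the resulting MRC curve can be handled with the usual Michelson definition; a minimal sketch (the interpolation-free lookup and all numbers are illustrative, not measured data):

```python
def michelson_contrast(l_max, l_min):
    """Target contrast from maximum and minimum radiance (or irradiance) values."""
    return (l_max - l_min) / (l_max + l_min)

def max_resolvable_frequency(mrc_curve, target_contrast):
    """Highest spatial frequency at which the target contrast still exceeds the
    minimum resolvable contrast.

    mrc_curve: list of (spatial_frequency, minimum_contrast) pairs, ascending in
    frequency; MRC rises with frequency, so the last passing entry is the limit.
    """
    best = None
    for f, c_min in mrc_curve:
        if target_contrast >= c_min:
            best = f
    return best
```

Combined with an atmospheric transmission model, this limiting frequency translates directly into the observation range the abstract mentions.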

  3. Advanced system for Gamma Cameras modernization

    International Nuclear Information System (INIS)

    Osorio Deliz, J. F.; Diaz Garcia, A.; Arista Romeu, E. J.

    2015-01-01

    Analog and digital gamma cameras are still largely used in developing countries. Many of them rely on old hardware electronics, which in many cases limits their use in actual nuclear medicine diagnostic studies. Consequently, different worldwide companies produce medical equipment engaged in partial or total gamma camera modernization. The present work has demonstrated the possibility of substituting almost the entire signal-processing electronics placed inside a gamma camera detector head by a digitizer PCI card. This card includes four 12-bit analog-to-digital converters of 50 MHz speed. It has been installed in a PC and controlled through software developed in LabVIEW. Besides, some changes were made to the hardware inside the detector head, including a redesign of the Orientation Display Block (ODA card). Also, a new electronic design was added to the Microprocessor Control Block (MPA card), comprising a PIC microcontroller acting as a tuning system for individual photomultiplier tubes. The images, obtained by measurement of a 99mTc point radioactive source using the modernized camera head, demonstrate its overall performance. The system was developed and tested in an old gamma camera ORBITER II SIEMENS GAMMASONIC at the National Institute of Oncology and Radiobiology (INOR) under the CAMELUD project supported by the National Program PNOULU and IAEA. (Author)

  4. Design of Endoscopic Capsule With Multiple Cameras.

    Science.gov (United States)

    Gu, Yingke; Xie, Xiang; Li, Guolin; Sun, Tianjia; Wang, Dan; Yin, Zheng; Zhang, Pengfei; Wang, Zhihua

    2015-08-01

    In order to reduce the miss rate of wireless capsule endoscopy, in this paper, we propose a new endoscopic capsule system with multiple cameras. A master-slave architecture, including an efficient bus architecture and a four-level clock management architecture, is applied for the Multiple Cameras Endoscopic Capsule (MCEC). For covering more area of the gastrointestinal tract wall with low power, multiple cameras with a smart image capture strategy, including movement-sensitive control and camera selection, are used in the MCEC. To reduce the data transfer bandwidth and power consumption and to prolong the MCEC's working life, a low-complexity image compressor with PSNR 40.7 dB and compression rate 86% is implemented. A chipset is designed and implemented for the MCEC and a six-camera endoscopic capsule prototype is implemented using the chipset. With the smart image capture strategy, the coverage rate of the MCEC prototype can achieve 98% and its power consumption is only about 7.1 mW.
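The quoted PSNR figure follows the standard definition for 8-bit pixel data; a minimal sketch (the compression-rate helper mirrors how the 86% figure appears to be defined, which is an assumption):

```python
import math

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

def compression_rate(raw_bytes, compressed_bytes):
    """Size reduction as a fraction of the raw size."""
    return 1.0 - compressed_bytes / raw_bytes
```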

  5. [Analog gamma camera digitalization computer system].

    Science.gov (United States)

    Rojas, G M; Quintana, J C; Jer, J; Astudillo, S; Arenas, L; Araya, H

    2004-01-01

    Digitalization of analogue gamma camera systems, using special acquisition boards in microcomputers and appropriate software for acquisition and processing of nuclear medicine images, is described in detail. Microcomputer-integrated systems interconnected by means of a Local Area Network (LAN) and connected to several gamma cameras have been implemented using specialized acquisition boards. The PIP software (Portable Image Processing) was installed on each microcomputer to acquire and preprocess the nuclear medicine images. A specialized image processing software has been designed and developed for these purposes. This software allows processing of each nuclear medicine exam in a semiautomatic procedure, and recording of the results on radiological films. A stable, flexible and inexpensive system which makes it possible to digitize, visualize, process, and print nuclear medicine images obtained from analogue gamma cameras was implemented in the Nuclear Medicine Division. Such a system yields higher quality images than those obtained with analogue cameras while keeping operating costs considerably lower (filming: 24.6%, fixing: 48.2%, developing: 26%). Analogue gamma camera systems can be digitalized economically. This system makes it possible to obtain optimal clinical quality nuclear medicine images, to increase the acquisition and processing efficiency, and to reduce the steps involved in each exam.

  6. A wide field X-ray camera

    International Nuclear Information System (INIS)

    Sims, M.; Turner, M.J.L.; Willingale, R.

    1980-01-01

    A wide field of view X-ray camera based on the Dicke or Coded Mask principle is described. It is shown that this type of instrument is more sensitive than a pin-hole camera, or than a scanning survey of a given region of sky for all wide field conditions. The design of a practical camera is discussed and the sensitivity and performance of the chosen design are evaluated by means of computer simulations. The Wiener Filter and Maximum Entropy methods of deconvolution are described and these methods are compared with each other and cross-correlation using data from the computer simulations. It is shown that the analytic expressions for sensitivity used by other workers are confirmed by the simulations, and that ghost images caused by incomplete coding can be substantially eliminated by the use of the Wiener Filter and the Maximum Entropy Method, with some penalty in computer time for the latter. The cyclic mask configuration is compared with the simple mask camera. It is shown that when the diffuse X-ray background dominates, the simple system is more sensitive and has the better angular resolution. When sources dominate the simple system is less sensitive. It is concluded that the simple coded mask camera is the best instrument for wide field imaging of the X-ray sky. (orig.)
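Coded-mask imaging recovers the sky by correlating the detector pattern with the mask, which is the cross-correlation baseline against which the Wiener filter and Maximum Entropy Method are compared above; a 1-D cyclic sketch (the mask pattern is an illustrative cyclic-difference-set example, not the instrument's):

```python
def encode(sky, mask):
    """Forward model: each sky element casts the (0/1) mask shadow onto the detector."""
    n = len(mask)
    det = [0.0] * n
    for s in range(n):
        for j in range(n):
            det[(s + j) % n] += sky[s] * mask[j]
    return det

def circular_correlate(detector, mask):
    """Decode by cyclic cross-correlation: peaks mark source positions."""
    n = len(mask)
    return [sum(detector[(i + j) % n] * mask[j] for j in range(n)) for i in range(n)]
```

With a well-chosen mask the sidelobes are flat; incomplete coding in a real camera produces the ghost images that the Wiener and Maximum Entropy deconvolutions suppress.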

  7. Calibration of action cameras for photogrammetric purposes.

    Science.gov (United States)

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each one of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images up to 12 Mp and video up to 8 Mp resolution.
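Undistorting wide-angle action-camera images rests on inverting a radial distortion model; a minimal two-coefficient sketch of the forward model and its fixed-point inverse (a simplification of the full distortion model that OpenCV calibrates; the coefficients here are hypothetical):

```python
def distort(x, y, k1, k2):
    """Apply two-coefficient radial distortion to normalized, centred image coordinates."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

def undistort(xd, yd, k1, k2, iterations=10):
    """Invert the radial model by fixed-point iteration (no closed form exists)."""
    x, y = xd, yd
    for _ in range(iterations):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / factor, yd / factor
    return x, y
```

For strong fisheye-like lenses such as the GoPro's, the full model adds higher-order radial and tangential terms, but the inversion strategy is the same.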

  8. Calibration of Action Cameras for Photogrammetric Purposes

    Directory of Open Access Journals (Sweden)

    Caterina Balletti

    2014-09-01

    Full Text Available The use of action cameras for photogrammetric purposes is not widespread because, until recently, the images provided by their sensors, in either still or video capture mode, were not large enough to support analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. Before the sensor of an action camera can be used in any photogrammetric procedure, a careful and reliable self-calibration must be applied, a relatively difficult task because of the camera's short focal length and the wide-angle lens used to obtain the maximum possible image resolution. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video capture modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution.

  9. Wired and Wireless Camera Triggering with Arduino

    Science.gov (United States)

    Kauhanen, H.; Rönnholm, P.

    2017-10-01

    Synchronous triggering is an important task that allows simultaneous data capture from multiple cameras. Accurate synchronization enables 3D measurements of moving objects or from a moving platform. In this paper, we describe one wired and four wireless variations of Arduino-based low-cost remote trigger systems designed to provide a synchronous trigger signal for industrial cameras. Our wireless systems utilize 315 MHz or 434 MHz frequencies with noise filtering capacitors. In order to validate the synchronization accuracy, we developed a prototype of a rotating trigger detection system (named RoTriDeS). This system is suitable to detect the triggering accuracy of global shutter cameras. As a result, the wired system indicated an 8.91 μs mean triggering time difference between two cameras. Corresponding mean values for the four wireless triggering systems varied between 7.92 and 9.42 μs. Presented values include both camera-based and trigger-based desynchronization. Arduino-based triggering systems appeared to be feasible, and they have the potential to be extended to more complicated triggering systems.
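
    A minimal sketch of how desynchronization statistics of this kind could be computed from per-frame trigger timestamps recorded for two cameras; the timestamp values below are hypothetical, not the paper's measurements.

```python
# Hypothetical per-frame trigger timestamps (in microseconds) for two cameras;
# a sketch of computing the mean and maximum triggering time difference.
cam_a = [0.0, 100000.2, 200000.1, 300000.4]
cam_b = [8.9, 100009.0, 200009.3, 300009.1]

diffs = [abs(b - a) for a, b in zip(cam_a, cam_b)]
mean_skew = sum(diffs) / len(diffs)
max_skew = max(diffs)
print(round(mean_skew, 2), round(max_skew, 2))   # → 8.9 9.2
```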

  10. Gamma cameras - a method of evaluation

    International Nuclear Information System (INIS)

    Oates, L.; Bibbo, G.

    2000-01-01

    Full text: With the sophistication and longevity of the modern gamma camera it is not often that the need arises to evaluate a gamma camera for purchase. We have recently been placed in the position of retiring our two single headed cameras of some vintage and replacing them with a state of the art dual head variable angle gamma camera. The process used for the evaluation consisted of five parts: (1) Evaluation of the technical specification as expressed in the tender document; (2) A questionnaire adapted from the British Society of Nuclear Medicine; (3) Site visits to assess gantry configuration, movement, patient access and occupational health, welfare and safety considerations; (4) Evaluation of the processing systems offered; (5) Whole of life costing based on equally configured systems. The results of each part of the evaluation were expressed using a weighted matrix analysis with each of the criteria assessed being weighted in accordance with their importance to the provision of an effective nuclear medicine service for our centre and the particular importance to paediatric nuclear medicine. This analysis provided an objective assessment of each gamma camera system from which a purchase recommendation was made. Copyright (2000) The Australian and New Zealand Society of Nuclear Medicine Inc
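
    The weighted matrix analysis described above can be sketched as follows; the criteria weights and scores are invented for illustration and do not reproduce the authors' actual evaluation.

```python
# Sketch of a weighted-matrix evaluation of competing gamma camera systems.
# Criteria, weights and scores below are hypothetical illustration only.
weights = {"specification": 0.30, "questionnaire": 0.20,
           "site_visit": 0.15, "processing": 0.15, "life_cost": 0.20}

scores = {  # each criterion scored 0-10 per tendered system
    "System A": {"specification": 8, "questionnaire": 7, "site_visit": 9,
                 "processing": 6, "life_cost": 7},
    "System B": {"specification": 7, "questionnaire": 8, "site_visit": 7,
                 "processing": 8, "life_cost": 9},
}

def weighted_total(s):
    """Weighted sum of criterion scores for one system."""
    return sum(weights[c] * s[c] for c in weights)

ranked = sorted(scores, key=lambda k: weighted_total(scores[k]), reverse=True)
print(ranked[0])   # → System B
```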

  11. Computer vision camera with embedded FPGA processing

    Science.gov (United States)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted to real-time computer vision tasks where low level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium size one equivalent to 25,000 logic gates. The device is connected to two high speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (like VHDL), simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
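
    A software sketch of the multi-scale Laplacian-of-Gaussian edge detection that the paper maps onto the FPGA; the kernel size, sigma values and test image are illustrative choices, and the straightforward convolution loop below stands in for the dedicated hardware architecture.

```python
import numpy as np

# Laplacian-of-Gaussian (LoG) kernel: strong response near intensity edges,
# zero response in flat regions (the kernel is forced to zero mean).
def log_kernel(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()

def convolve2d(img, k):
    """Plain 2-D cross-correlation with edge padding (software stand-in)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k)
    return out

# A vertical step edge: the LoG response changes sign across the edge,
# so edges are located at the zero-crossings of the response.
img = np.zeros((16, 16)); img[:, 8:] = 1.0
resp = {s: convolve2d(img, log_kernel(7, s)) for s in (1.0, 2.0)}  # multi-scale
print(resp[1.0][8, 7] > 0, resp[1.0][8, 8] < 0)   # → True True
```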

  12. Designing Camera Networks by Convex Quadratic Programming

    KAUST Repository

    Ghanem, Bernard

    2015-05-04

    In this paper, we study the problem of automatic camera placement for computer graphics and computer vision applications. We extend the problem formulations of previous work by proposing a novel way to incorporate visibility constraints and camera-to-camera relationships. For example, the placement solution can be encouraged to have cameras that image the same important locations from different viewing directions, which can enable reconstruction and surveillance tasks to perform better. We show that the general camera placement problem can be formulated mathematically as a convex binary quadratic program (BQP) under linear constraints. Moreover, we propose an optimization strategy with a favorable trade-off between speed and solution quality. Our solution is almost as fast as a greedy treatment of the problem, but the quality is significantly higher, so much so that it is comparable to exact solutions that take orders of magnitude more computation time. Because it is computationally attractive, our method also allows users to explore the space of solutions for variations in input parameters. To evaluate its effectiveness, we show a range of 3D results on real-world floorplans (garage, hotel, mall, and airport).
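
    For contrast with the paper's exact BQP formulation, the greedy treatment it is compared against can be sketched as a set-cover-style heuristic; the candidate camera poses and their visibility sets below are invented for illustration.

```python
# Greedy stand-in for the camera-placement problem the paper solves as a BQP:
# repeatedly pick the candidate pose covering the most not-yet-covered cells.
# Candidate poses and visibility sets are hypothetical, not from the paper.
visibility = {                  # pose -> set of floorplan cells it images
    "door_cam":   {0, 1, 2, 3},
    "hall_cam":   {3, 4, 5},
    "atrium_cam": {5, 6, 7, 8},
    "corner_cam": {1, 2},
}

def greedy_place(visibility, k):
    chosen, covered = [], set()
    for _ in range(k):
        best = max(visibility, key=lambda p: len(visibility[p] - covered))
        if not visibility[best] - covered:
            break               # nothing new left to cover
        chosen.append(best)
        covered |= visibility[best]
    return chosen, covered

chosen, covered = greedy_place(visibility, k=2)
print(chosen)   # → ['door_cam', 'atrium_cam'] (covers 8 of the 9 cells)
```

    The greedy pick is fast but myopic; the paper's point is that its BQP relaxation recovers near-exact placements at nearly greedy cost.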

  13. Hidden cameras everything you need to know about covert recording, undercover cameras and secret filming

    CERN Document Server

    Plomin, Joe

    2016-01-01

    Providing authoritative information on the practicalities of using hidden cameras to expose abuse or wrongdoing, this book is vital reading for anyone who may use or encounter secret filming. It gives specific advice on using phones or covert cameras and unravels the complex legal and ethical issues that need to be considered.

  14. Mobile phone camera benchmarking: combination of camera speed and image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-01-01

    When a mobile phone camera is tested and benchmarked, the significance of quality metrics is widely acknowledged. Methods also exist to evaluate camera speed; for example, ISO 15781 defines several measurements of camera system delays. However, the speed or rapidity metrics of the mobile phone's camera system have not been combined with the quality metrics, even though camera speed has become an increasingly important performance feature. This work has several tasks. Firstly, the most important image quality metrics are collected from standards and papers. Secondly, the speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are also identified. Thirdly, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made through the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The result of this work gives detailed benchmarking results of mobile phone camera systems on the market. The paper also proposes a set of combined benchmarking metrics that includes both quality and speed parameters.
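
    One plausible way to combine quality and speed metrics into a single benchmark score is a weighted sum of min-max-normalized metrics; the metric names, reference values and weights below are hypothetical and are not the paper's proposal.

```python
# Illustrative combination of image-quality and speed metrics into one
# benchmark score; metric names, values and weights are hypothetical.
def normalize(value, worst, best):
    """Map a raw metric onto 0..1 (also works when best < worst, e.g. delays)."""
    return (value - worst) / (best - worst)

metrics = {
    # metric: (raw value, worst reference, best reference, weight)
    "resolution_lw_ph": (1800, 1000, 2400, 0.4),   # higher is better
    "shooting_delay_s": (0.6, 2.0, 0.1, 0.3),      # lower is better
    "shot_to_shot_s":   (1.1, 3.0, 0.5, 0.3),      # lower is better
}

score = sum(w * normalize(v, lo, hi) for v, lo, hi, w in metrics.values())
print(round(score, 3))   # → 0.678
```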

  15. The eye of the camera: effects of security cameras on pro-social behavior

    NARCIS (Netherlands)

    van Rompay, T.J.L.; Vonk, D.J.; Fransen, M.L.

    2009-01-01

    This study addresses the effects of security cameras on prosocial behavior. Results from previous studies indicate that the presence of others can trigger helping behavior, arising from the need for approval of others. Extending these findings, the authors propose that security cameras can likewise

  16. The development of large-aperture test system of infrared camera and visible CCD camera

    Science.gov (United States)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Dual-band imaging systems combining an infrared camera and a visible CCD camera are widely used in many types of equipment. If such a system is tested using a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed. The large-aperture test system for infrared and visible CCD cameras shares a common large-aperture reflective collimator, target wheel, frame grabber and computer, which reduces both the cost and the time spent on installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position as the environmental temperature changes, improving both the image quality of the wide-field collimator and the test accuracy. Its performance matches that of comparable foreign systems at a much lower cost, and it is expected to find a good market.

  17. VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras

    Science.gov (United States)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-08-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions observing in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC) and Institut d'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual-channel analog readout and an internally mounted cold block. At the flight CCD temperature of -20C, the CLASP cameras exceeded the low-noise performance requirements for UV, EUV and soft X-ray science cameras at MSFC.

  18. Acceptance/Operational Test Report for Tank 241-AN-104 camera and camera purge control system

    International Nuclear Information System (INIS)

    Castleberry, J.L.

    1995-11-01

    This Acceptance/Operational Test Procedure (ATP/OTP) documents the satisfactory operation of the camera purge panel, purge control panel, color camera system and associated control components destined for installation. Final acceptance of the complete system will be performed in the field. The purge panel and purge control panel will be tested to verify the safety interlock, which shuts down the camera and pan-and-tilt unit inside the tank vapor space upon loss of purge pressure, and to confirm that the correct purge volume exchanges are performed as required by NFPA 496. The procedure is separated into seven sections. This Acceptance/Operational Test Report documents the successful acceptance and operability testing of the 241-AN-104 camera system and camera purge control system

  19. Multi-view collimators for scintillation cameras

    International Nuclear Information System (INIS)

    Hatton, J.; Grenier, R.P.

    1982-01-01

    This patent specification describes a collimator for obtaining multiple images of a portion of a body with a scintillation camera. The collimator comprises a body of radiation-impervious material defining two or more groups of channels, each group comprising a plurality of parallel channels whose axes intersect the portion of the body being viewed on one side of the collimator and the input surface of the camera on the other side, producing a single view of the body; a number of different such views of the body are provided by the respective groups of channels. Each channel axis lies in a plane approximately perpendicular to the plane of the camera's input surface, and all such planes are approximately parallel to each other. (author)

  20. Collimated trans-axial tomographic scintillation camera

    International Nuclear Information System (INIS)

    1980-01-01

    The objects of this invention are, first, to reduce the time required to obtain statistically significant data in trans-axial tomographic radioisotope scanning using a scintillation camera; secondly, to provide a scintillation camera system that increases the rate of acceptance of radioactive events contributing positional information from a known radiation source without sacrificing spatial resolution; and thirdly, to reduce the scanning time without loss of image clarity. The system described comprises a scintillation camera detector, means for moving it in orbit about a cranial-caudal axis relative to a patient, and a collimator having septa defining apertures such that gamma rays perpendicular to the axis are admitted with high spatial resolution, and those parallel to the axis with low resolution. The septa may be made of strips of lead. Detailed descriptions are given. (U.K.)

  1. Gate Simulation of a Gamma Camera

    International Nuclear Information System (INIS)

    Abidi, Sana; Mlaouhi, Zohra

    2008-01-01

    Medical imaging is a very important diagnostic tool because it allows exploration of the internal human body. Nuclear imaging is a technique used in nuclear medicine to determine the distribution of a radiotracer in the body by detecting the radiation it emits with a detection device. Two methods are commonly used: Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET). In this work we are interested in the modelling of a gamma camera. The simulation is based on Monte-Carlo methods, in particular the GATE simulator (Geant4 Application for Tomographic Emission). We have simulated a clinical gamma camera called GAEDE (GKS-1) and then validated these simulations against experiments. The purpose of this work is to monitor the performance of the gamma camera, optimize the detector performance and improve image quality. (Author)

  2. Ultra-fast framing camera tube

    Science.gov (United States)

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  3. Camera systems for crash and hyge testing

    Science.gov (United States)

    Schreppers, Frederik

    1995-05-01

    Since the beginning of the use of high speed cameras for crash and hyge-testing, substantial changes have taken place: both the high speed cameras and the electronic control equipment are more sophisticated nowadays. A short historical retrospective shows that for high speed cameras the improvements are mainly concentrated in design details, whereas the electronic control equipment has taken full advantage of the rapid progress in electronic and computer technology over the last decades. Nowadays many companies and institutes involved in crash and hyge-testing wish to perform this testing, as far as possible, as an automatic computer-controlled routine in order to maintain and improve security and quality. Several solutions realized in practice show how their requirements could be met.

  4. Mechanical Design of the LSST Camera

    Energy Technology Data Exchange (ETDEWEB)

    Nordby, Martin; Bowden, Gordon; Foss, Mike; Guiffre, Gary; /SLAC; Ku, John; /Unlisted; Schindler, Rafe; /SLAC

    2008-06-13

    The LSST camera is a tightly packaged, hermetically-sealed system that is cantilevered into the main beam of the LSST telescope. It is comprised of three refractive lenses, on-board storage for five large filters, a high-precision shutter, and a cryostat that houses the 3.2 giga-pixel CCD focal plane along with its support electronics. The physically large optics and focal plane demand large structural elements to support them, but the overall size of the camera and its components must be minimized to reduce impact on the image stability. Also, focal plane and optics motions must be minimized to reduce systematic errors in image reconstruction. Design and analysis for the camera body and cryostat will be detailed.

  5. Temperature measurement with industrial color camera devices

    Science.gov (United States)

    Schmidradler, Dieter J.; Berndorfer, Thomas; van Dyck, Walter; Pretschuh, Juergen

    1999-05-01

    This paper discusses color-camera-based temperature measurement. Usually, visual imaging and infrared image sensing are treated as two separate disciplines. We show that a well-selected color camera device can be a cheaper, more robust and more sophisticated solution for optical temperature measurement in several cases. Only implementation fragments and important restrictions on the sensing element are discussed here. Our aim is to draw the reader's attention to the use of visual image sensors for measuring thermal radiation and temperature, and to give reasons for the need for improved infrared camera technologies. With our industrial partner AVL List, we successfully used the proposed sensor to measure flame temperatures inside the combustion chamber of diesel engines, which finally led to the presented insights.
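
    A classical technique in this spirit is two-colour (ratio) pyrometry, sketched below under a grey-body assumption using Wien's approximation of blackbody radiation; the channel wavelengths are nominal red/green values, and the camera-specific calibration the paper relies on is not modelled.

```python
import math

# Two-colour (ratio) pyrometry sketch: estimate temperature from the
# intensity ratio of two sensor channels via Wien's approximation.
# Grey-body assumption; wavelengths are nominal, not from the paper.
C2 = 1.4388e-2          # second radiation constant, m*K

def wien_intensity(lam, T):
    """Spectral intensity up to a constant factor (Wien approximation)."""
    return lam**-5 * math.exp(-C2 / (lam * T))

def ratio_temperature(i1, i2, lam1, lam2):
    """Invert the two-channel intensity ratio i1/i2 for temperature."""
    return C2 * (1 / lam1 - 1 / lam2) / (
        5 * math.log(lam2 / lam1) - math.log(i1 / i2))

lam_r, lam_g = 620e-9, 540e-9     # nominal red/green channel wavelengths, m
T_true = 2000.0                   # flame-like temperature, K
r = wien_intensity(lam_r, T_true)
g = wien_intensity(lam_g, T_true)
print(round(ratio_temperature(r, g, lam_r, lam_g)))   # → 2000
```

    Because emissivity cancels in the ratio for a grey body, this estimate needs no absolute radiometric calibration, which is one reason ratio methods suit ordinary colour sensors.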

  6. Scintillation camera with second order resolution

    International Nuclear Information System (INIS)

    1975-01-01

    A scintillation camera is described for use in radioisotope imaging to determine the concentration of radionuclides in a two-dimensional area, in which means are provided for second-order positional resolution. The phototubes, which normally provide only a single order of resolution, are modified to provide second-order positional resolution of radiation within an object positioned for viewing by the scintillation camera: multiple anodes receive signals from the photocathode in such a manner that each anode is particularly responsive to photoemissions from a limited portion of the photocathode. Resolution of radioactive events appearing at the output of the scintillation camera is thereby improved

  7. PEOPLE REIDENTIFICATION IN A DISTRIBUTED CAMERA NETWORK

    Directory of Open Access Journals (Sweden)

    Icaro Oliveira de Oliveira

    2010-06-01

    This paper presents an approach to the object reidentification problem in a distributed camera network system. The reidentification or reacquisition problem consists essentially of matching images acquired from different cameras. The work applies to environments monitored by cameras and is important to modern security systems, in which identifying a target's presence in the environment expands the capacity of security agents to act in real time and provides important parameters, such as localization, for each target. We used a target's interest points and color as features for reidentification. Satisfactory results were obtained in real experiments on public video datasets and on synthetic images with noise.
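
    The colour-feature part of such a matching process can be sketched with normalized colour histograms compared by the Bhattacharyya coefficient; the pixel clusters below are synthetic, and the interest-point features the paper also uses are not modelled.

```python
import numpy as np

# Minimal sketch of colour-based matching across cameras: compare normalized
# 3-D colour histograms with the Bhattacharyya coefficient (1.0 = identical).
rng = np.random.default_rng(1)

def color_hist(pixels, bins=8):
    h, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=[(0, 256)] * 3)
    h = h.ravel()
    return h / h.sum()

def bhattacharyya(h1, h2):
    return float(np.sum(np.sqrt(h1 * h2)))

# Synthetic targets: the same reddish person seen by two cameras, plus a
# bluish distractor; values are invented for illustration.
target   = rng.normal(loc=(200, 40, 40), scale=10, size=(500, 3))
same_id  = rng.normal(loc=(200, 40, 40), scale=10, size=(500, 3))
other_id = rng.normal(loc=(40, 40, 200), scale=10, size=(500, 3))

h_t, h_s, h_o = (color_hist(p.clip(0, 255)) for p in (target, same_id, other_id))
print(bhattacharyya(h_t, h_s) > bhattacharyya(h_t, h_o))   # → True
```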

  8. Video Sharing System Based on Wi-Fi Camera

    OpenAIRE

    Qidi Lin; Hewei Yu; Jinbin Huang; Weile Liang

    2015-01-01

    This paper introduces a video sharing platform based on Wi-Fi, which consists of a camera, a mobile phone and a PC server. The platform receives the wireless signal from the camera and shows the live video captured by the camera on the mobile phone. In addition, it is able to send commands to the camera and rotate the camera's holder. The platform can be applied to interactive teaching, monitoring of dangerous areas, and so on. Testing results show that the platform can share ...

  9. Optimum color filters for CCD digital cameras

    Science.gov (United States)

    Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl

    1993-12-01

    As part of the ESPRIT II project No. 2103 (MASCOT) a high performance prototype color CCD still video camera was developed. Intended for professional usage such as in the graphic arts, the camera provides a maximum resolution of 3k X 3k full color pixels. A high colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization which minimized the perceivable color errors as measured in the 1976 CIELUV uniform color space for a set of about 200 carefully selected test colors. The color filters were designed to allow perfect colorimetric reproduction in principle and at the same time with imperceptible color noise and with special attention to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in the redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) seems to be feasible, implying that it is possible with such an optimized color camera to achieve such a high colorimetric performance that the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.
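
    The matrixing step can be illustrated with an ordinary least-squares fit of a 3x3 colour-correction matrix; note that the paper instead minimizes perceptual CIELUV errors over about 200 carefully selected measured colours, and the matrix and data below are invented.

```python
import numpy as np

# Sketch of colour matrixing: fit a 3x3 correction matrix by least squares
# over a set of test colours (synthetic here; the paper minimizes perceptual
# CIELUV error over ~200 measured test colours instead).
rng = np.random.default_rng(0)

M_true = np.array([[1.10, -0.05,  0.02],   # hypothetical sensor-to-target map
                   [0.03,  0.95, -0.01],
                   [-0.02, 0.04,  1.05]])

sensor_rgb = rng.random((200, 3))          # camera responses for test colours
target_xyz = sensor_rgb @ M_true.T         # reference tristimulus values

# Solve min ||sensor_rgb @ M.T - target_xyz||^2 column by column.
M_fit, *_ = np.linalg.lstsq(sensor_rgb, target_xyz, rcond=None)
M_fit = M_fit.T
print(np.allclose(M_fit, M_true))          # → True (noiseless synthetic data)
```

    With noiseless synthetic data the fit is exact; the paper's non-linear optimization differs precisely because minimizing perceptual error in CIELUV is not a linear least-squares problem.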

  10. Phase camera experiment for Advanced Virgo

    Energy Technology Data Exchange (ETDEWEB)

    Agatsuma, Kazuhiro, E-mail: agatsuma@nikhef.nl [National Institute for Subatomic Physics, Amsterdam (Netherlands); Beuzekom, Martin van; Schaaf, Laura van der [National Institute for Subatomic Physics, Amsterdam (Netherlands); Brand, Jo van den [National Institute for Subatomic Physics, Amsterdam (Netherlands); VU University, Amsterdam (Netherlands)

    2016-07-11

    We report on a study of the phase camera, which is a frequency selective wave-front sensor of a laser beam. This sensor is utilized for monitoring sidebands produced by phase modulations in a gravitational wave (GW) detector. Regarding the operation of the GW detectors, the laser modulation/demodulation method is used to measure mirror displacements and used for the position controls. This plays a significant role because the quality of controls affect the noise level of the GW detector. The phase camera is able to monitor each sideband separately, which has a great benefit for the manipulation of the delicate controls. Also, overcoming mirror aberrations will be an essential part of Advanced Virgo (AdV), which is a GW detector close to Pisa. Especially low-frequency sidebands can be affected greatly by aberrations in one of the interferometer cavities. The phase cameras allow tracking such changes because the state of the sidebands gives information on mirror aberrations. A prototype of the phase camera has been developed and is currently tested. The performance checks are almost completed and the installation of the optics at the AdV site has started. After the installation and commissioning, the phase camera will be combined to a thermal compensation system that consists of CO{sub 2} lasers and compensation plates. In this paper, we focus on the prototype and show some limitations from the scanner performance. - Highlights: • The phase camera is being developed for a gravitational wave detector. • A scanner performance limits the operation speed and layout design of the system. • An operation range was found by measuring the frequency response of the scanner.

  11. Phase camera experiment for Advanced Virgo

    International Nuclear Information System (INIS)

    Agatsuma, Kazuhiro; Beuzekom, Martin van; Schaaf, Laura van der; Brand, Jo van den

    2016-01-01

    We report on a study of the phase camera, which is a frequency selective wave-front sensor of a laser beam. This sensor is utilized for monitoring sidebands produced by phase modulations in a gravitational wave (GW) detector. Regarding the operation of the GW detectors, the laser modulation/demodulation method is used to measure mirror displacements and used for the position controls. This plays a significant role because the quality of controls affect the noise level of the GW detector. The phase camera is able to monitor each sideband separately, which has a great benefit for the manipulation of the delicate controls. Also, overcoming mirror aberrations will be an essential part of Advanced Virgo (AdV), which is a GW detector close to Pisa. Especially low-frequency sidebands can be affected greatly by aberrations in one of the interferometer cavities. The phase cameras allow tracking such changes because the state of the sidebands gives information on mirror aberrations. A prototype of the phase camera has been developed and is currently tested. The performance checks are almost completed and the installation of the optics at the AdV site has started. After the installation and commissioning, the phase camera will be combined to a thermal compensation system that consists of CO 2 lasers and compensation plates. In this paper, we focus on the prototype and show some limitations from the scanner performance. - Highlights: • The phase camera is being developed for a gravitational wave detector. • A scanner performance limits the operation speed and layout design of the system. • An operation range was found by measuring the frequency response of the scanner.

  12. Small Orbital Stereo Tracking Camera Technology Development

    Science.gov (United States)

    Gagliano, L.; Bryan, T.; MacLeod, T.

    On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness necessary to safeguard orbital assets and crew, as small debris poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of the Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be quantified in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population need to be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data and orbital access without a dedicated spacecraft or restricted operations on board a host vehicle as a secondary payload. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate orbital debris (OD) monitoring in proximity to the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various sized objects versus ground RADAR tracking and small OD models. The cameras are based on flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras, and using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  13. Attack of the S. Mutans!: a stereoscopic-3D multiplayer direct-manipulation behavior-modification serious game for improving oral health in pre-teens

    Science.gov (United States)

    Hollander, Ari; Rose, Howard; Kollin, Joel; Moss, William

    2011-03-01

    Attack! of the S. Mutans is a multi-player game designed to harness the immersion and appeal possible with wide-field-of-view stereoscopic 3D to combat the tooth decay epidemic. Tooth decay is one of the leading causes of school absences and costs more than $100B annually in the U.S. In 2008 the authors received a grant from the National Institutes of Health to build a science museum exhibit that included a suite of serious games involving the behaviors and bacteria that cause cavities. The centerpiece is an adventure game in which five simultaneous players use modified Wii controllers to battle biofilms and bacteria while immersed in environments generated within an 11-foot stereoscopic WUXGA display. The authors describe the system and interface used in this prototype application and some of the ways they attempted to use the power of immersion and the appeal of stereoscopic 3D to change health attitudes and self-care habits.

  14. Surgical approaches to complex vascular lesions: the use of virtual reality and stereoscopic analysis as a tool for resident and student education.

    Science.gov (United States)

    Agarwal, Nitin; Schmitt, Paul J; Sukul, Vishad; Prestigiacomo, Charles J

    2012-08-01

    Virtual reality training for complex tasks has been shown to be of benefit in fields involving highly technical and demanding skill sets. The use of a stereoscopic three-dimensional (3D) virtual reality environment to teach a patient-specific analysis of the microsurgical treatment modalities of a complex basilar aneurysm is presented. Three different surgical approaches were evaluated in a virtual environment and then compared to elucidate the best surgical approach. These approaches were assessed with regard to the line-of-sight, skull base anatomy and visualisation of the relevant anatomy at the level of the basilar artery and surrounding structures. Overall, the stereoscopic 3D virtual reality environment with fusion of multimodality imaging affords an excellent teaching tool for residents and medical students to learn surgical approaches to vascular lesions. Future studies will assess the educational benefits of this modality and develop a series of metrics for student assessments.

  15. Performance assessment of gamma cameras. Part 1

    International Nuclear Information System (INIS)

    Elliot, A.T.; Short, M.D.; Potter, D.C.; Barnes, K.J.

    1980-11-01

    The Dept. of Health and Social Security and the Scottish Home and Health Dept. have sponsored a programme of measurements of the important performance characteristics of 15 leading types of gamma cameras providing a routine radionuclide imaging service in hospitals throughout the UK. Measurements have been made of intrinsic resolution, system resolution, non-uniformity, spatial distortion, count rate performance, sensitivity, energy resolution and shield leakage. The main aim of this performance assessment was to provide sound information to the NHS to ease the task of those responsible for the purchase of gamma cameras. (U.K.)

  16. Analysis of Brown camera distortion model

    Science.gov (United States)

    Nowakowski, Artur; Skarbek, Władysław

    2013-10-01

    Contemporary image acquisition devices introduce optical distortion into images. This results in pixel displacement and therefore needs to be compensated for in many computer vision applications. The distortion is usually modeled by the Brown distortion model, whose parameters can be estimated as part of the camera calibration task. In this paper we describe the original model and its dependencies, and analyze the orthogonality with respect to radius of its decentering distortion component. We also report experiments with the camera calibration algorithm included in the OpenCV library; in particular, the stability of distortion parameter estimation is evaluated.
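    As an illustration of the model discussed above, a minimal NumPy sketch of the Brown radial-plus-decentering mapping is given below; the coefficient values are invented for the example. OpenCV's `cv2.calibrateCamera` estimates the same coefficients, in the order (k1, k2, p1, p2, k3).

```python
import numpy as np

def brown_distort(xy, k=(0.0, 0.0, 0.0), p=(0.0, 0.0)):
    """Apply the Brown distortion model to normalized image points.

    xy : (N, 2) array of undistorted normalized coordinates
    k  : radial coefficients (k1, k2, k3)
    p  : decentering (tangential) coefficients (p1, p2)
    """
    x, y = xy[:, 0], xy[:, 1]
    r2 = x**2 + y**2
    radial = 1 + k[0]*r2 + k[1]*r2**2 + k[2]*r2**3
    xd = x*radial + 2*p[0]*x*y + p[1]*(r2 + 2*x**2)
    yd = y*radial + p[0]*(r2 + 2*y**2) + 2*p[1]*x*y
    return np.column_stack([xd, yd])

# A point on the optical axis is unaffected; off-axis points are displaced.
pts = np.array([[0.0, 0.0], [0.5, 0.5]])
out = brown_distort(pts, k=(0.1, 0.0, 0.0))  # r2 = 0.5, radial factor 1.05
```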

  17. Camera-enabled techniques for organic synthesis

    Directory of Open Access Journals (Sweden)

    Steven V. Ley

    2013-05-01

    A great deal of time is spent within synthetic chemistry laboratories on non-value-adding activities such as sample preparation and work-up operations, and labour-intensive activities such as extended periods of continued data collection. Using digital cameras connected to computer vision algorithms, camera-enabled apparatus can perform some of these processes in an automated fashion, allowing skilled chemists to spend their time more productively. In this review we describe recent advances in this field of chemical synthesis and discuss how they will lead to advanced synthesis laboratories of the future.

  18. Results with the UKIRT infrared camera

    International Nuclear Information System (INIS)

    Mclean, I.S.

    1987-01-01

    Recent advances in focal plane array technology have made an immense impact on infrared astronomy. Results from the commissioning of the first infrared camera on UKIRT (the world's largest IR telescope) are presented. The camera, called IRCAM 1, employs the 62 x 58 InSb DRO array from SBRC in an otherwise general-purpose system which is briefly described. Several imaging modes are possible, including staring, chopping and a high-speed snapshot mode. Results to be presented include the first true high-resolution images at IR wavelengths of the entire Orion nebula.

  19. A holographic color camera for recording artifacts

    International Nuclear Information System (INIS)

    Jith, Abhay

    2013-01-01

    The advent of 3D televisions has created a new wave of public interest in images with depth. Though these technologies create moving pictures with apparent depth, they lack the visual appeal and a number of other positive aspects of color holographic images. This new wave of interest in 3D will help fuel the popularity of holograms. In view of this, a low-cost, handy color holography camera has been designed for recording color holograms of artifacts. It is believed that such cameras will help to record medium-format color holograms outside conventional holography laboratories and to popularize color holography. The paper discusses the design and the results obtained.

  20. Imacon 600 ultrafast streak camera evaluation

    International Nuclear Information System (INIS)

    Owen, T.C.; Coleman, L.W.

    1975-01-01

    The Imacon 600 has a number of designed-in disadvantages for use as an ultrafast diagnostic instrument. The unit is physically large (approximately 5' long) and uses an external power supply rack for the image intensifier. Water cooling is required for the intensifier; it is quiet but not conducive to portability, and there is no interlock on the cooling water. The camera does have several switch-selectable sweep speeds, which is desirable if one is working with both slow and fast events. The camera can also be run in a framing mode. (MOW)

  1. Nonmedical applications of a positron camera

    International Nuclear Information System (INIS)

    Hawkesworth, M.R.; Parker, D.J.; Fowles, P.; Crilly, J.F.; Jefferies, N.L.; Jonkers, G.

    1991-01-01

    The positron camera in the School on Physics and Space Research, University of Birmingham, is based on position-sensitive multiwire γ-ray detectors developed at the Rutherford Appleton Laboratory. The current characteristics of the camera are discussed with particular reference to its suitability for flow mapping in industrial subjects. The techniques developed for studying the dynamics of processes with time scales ranging from milliseconds to days are described, and examples of recent results from a variety of industrial applications are presented. (orig.)

  2. CBF tomographic measurement with the scintillation camera

    International Nuclear Information System (INIS)

    Kayayan, R.; Philippon, B.; Pehlivanian, E.

    1989-01-01

    Single photon emission tomography (SPECT) allows calculation of regional cerebral blood flow (CBF) in multiple cross-sections of the human brain. The method of Kanno and Lassen is utilized, and a study of reproducibility in terms of the number and period of integrations is performed by computer simulation and by experimental study with a gamma camera. Finally, the possibility of calculating regional cerebral blood flow with a double-headed rotating gamma camera by inert gas inhalation, such as Xenon-133, is discussed [fr

  3. Setup accuracy of stereoscopic X-ray positioning with automated correction for rotational errors in patients treated with conformal arc radiotherapy for prostate cancer

    International Nuclear Information System (INIS)

    Soete, Guy; Verellen, Dirk; Tournel, Koen; Storme, Guy

    2006-01-01

    We evaluated setup accuracy of NovalisBody stereoscopic X-ray positioning with automated correction for rotational errors with the Robotics Tilt Module in patients treated with conformal arc radiotherapy for prostate cancer. The correction of rotational errors was shown to reduce random and systematic errors in all directions. (NovalisBody TM and Robotics Tilt Module TM are products of BrainLAB A.G., Heimstetten, Germany)

  4. A comparison of cup-to-disc ratio estimates by fundus biomicroscopy and stereoscopic optic disc photography in the Tema Eye Survey.

    Science.gov (United States)

    Mwanza, J C; Grover, D S; Budenz, D L; Herndon, L W; Nolan, W; Whiteside-de Vos, J; Hay-Smith, G; Bandi, J R; Bhansali, K A; Forbes, L A; Feuer, W J; Barton, K

    2017-08-01

    Purpose: To determine if there are systematic differences in cup-to-disc ratio (CDR) grading using fundus biomicroscopy compared to stereoscopic disc photograph reading. Methods: The vertical cup-to-disc ratio (VCDR) and horizontal cup-to-disc ratio (HCDR) of 2200 eyes (testing set) were graded by glaucoma subspecialists through fundus biomicroscopy and by a reading center using stereoscopic disc photos. For validation, the glaucoma experts also estimated VCDR and HCDR using stereoscopic disc photos in a subset of 505 eyes that they had assessed biomicroscopically. Agreement between grading methods was assessed with Bland-Altman plots. Results: In both sets, photo reading tended to grade small CDRs marginally larger, and large CDRs marginally smaller, than fundus biomicroscopy. The mean differences in VCDR and HCDR were 0.006±0.18 and 0.05±0.18 (testing set), and -0.053±0.23 and -0.028±0.21 (validation set), respectively. The limits of agreement were ~0.4, twice the cutoff for a clinically significant CDR difference between methods. CDR estimates differed between methods by 0.2 or more in 33.8-48.7% of eyes. Conclusions: The differences in CDR estimates between fundus biomicroscopy and stereoscopic optic disc photo reading showed wide variation, and reached the clinical significance threshold in a large proportion of patients, suggesting poor agreement. Glaucoma should therefore be monitored by comparing baseline and subsequent CDR estimates obtained with the same method, rather than by comparing photographs to fundus biomicroscopy.
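    The Bland-Altman statistics used in the study above (mean difference, or bias, and 95% limits of agreement) reduce to a few lines; the CDR values below are invented for illustration, not data from the study.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two sets of paired gradings."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)               # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical VCDR estimates: photo reading vs. fundus biomicroscopy
photos   = [0.3, 0.5, 0.7, 0.4, 0.6]
biomicro = [0.35, 0.45, 0.65, 0.5, 0.6]
bias, (lo, hi) = bland_altman(photos, biomicro)
```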

  5. Compact Optical Technique for Streak Camera Calibration

    International Nuclear Information System (INIS)

    Bell, P; Griffith, R; Hagans, K; Lerche, R; Allen, C; Davies, T; Janson, F; Justin, R; Marshall, B; Sweningsen, O

    2004-01-01

    The National Ignition Facility (NIF) is under construction at the Lawrence Livermore National Laboratory (LLNL) for the U.S. Department of Energy Stockpile Stewardship Program. Optical streak cameras are an integral part of the experimental diagnostics instrumentation. To accurately reduce data from the streak cameras a temporal calibration is required. This article describes a technique for generating trains of precisely timed short-duration optical pulses (optical comb generators) that are suitable for temporal calibrations. These optical comb generators (Figure 1) are used with the LLNL optical streak cameras. They are small, portable light sources that produce a series of temporally short, uniformly spaced optical pulses. Comb generators have been produced with 0.1, 0.5, 1, 3, 6, and 10-GHz pulse trains of 780-nm wavelength light with individual pulse durations of ∼25-ps FWHM. Signal output is via a fiber-optic connector, and the signal is transported from comb generator to streak camera through multi-mode, graded-index optical fibers. At the NIF, ultra-fast streak cameras are used by the Laser Fusion Program experimentalists to record fast transient optical signals. Their temporal resolution is unmatched by any other transient recorder, and their ability to spatially discriminate an image along the input slit allows them to function as a one-dimensional image recorder, time-resolved spectrometer, or multichannel transient recorder. Depending on the choice of photocathode, they can be made sensitive to photon energies from 1.1 eV to 30 keV and beyond. Comb generators perform two important functions for LLNL streak-camera users. First, comb generators are used as a precision time-mark generator for calibrating streak camera sweep rates. Accuracy is achieved by averaging many streak camera images of comb generator signals. Time-base calibrations with portable comb generators are easily done in both the calibration laboratory and in situ. Second, comb signals are applied

  6. Relationship between Stereoscopic Vision, Visual Perception, and Microstructure Changes of Corpus Callosum and Occipital White Matter in the 4-Year-Old Very Low Birth Weight Children

    Directory of Open Access Journals (Sweden)

    Przemko Kwinta

    2015-01-01

    Aim. To assess the relationship between stereoscopic vision, visual perception, and microstructure of the corpus callosum (CC) and occipital white matter, 61 children born with a mean birth weight of 1024 g (SD 270 g) were subjected to detailed ophthalmologic evaluation, the Developmental Test of Visual Perception (DTVP-3), and diffusion tensor imaging (DTI) at the age of 4. Results. Abnormal stereoscopic vision was detected in 16 children. Children with abnormal stereoscopic vision had smaller CC (CC length: 53±6 mm versus 61±4 mm; p<0.01; estimated CC area: 314±106 mm2 versus 446±79 mm2; p<0.01 and lower fractional anisotropy (FA) values in CC (FA value of rostrum/genu: 0.7±0.09 versus 0.79±0.07; p<0.01; FA value of CC body: 0.74±0.13 versus 0.82±0.09; p=0.03. We found a significant correlation between DTVP-3 scores, CC size, and FA values in the rostrum and body. This correlation was unrelated to retinopathy of prematurity. Conclusions. Visual perceptive dysfunction in ex-preterm children without major sequelae of prematurity depends on more subtle changes in the brain microstructure, including the CC. The role of interhemispheric connections in visual perception might be more complex than previously anticipated.

  7. Preliminary evaluation of a prototype stereoscopic a-Si:H-based X-ray imaging system for full-field digital mammography

    International Nuclear Information System (INIS)

    Darambara, D.G.; Speller, R.D.; Horrocks, J.A.; Godber, S.; Wilson, R.; Hanby, A.

    2001-01-01

    In a pre-clinical study, we have been investigating the potential of a-Si:H active matrix, flat panel imagers for X-ray full-field digital mammography through the development of an advanced 3D X-ray imaging system, and have measured a number of their important imaging characteristics. To enhance the information embodied in the digital images produced by the a-Si array, stereoscopic images, created by viewing the object under examination from two angles and recombining the images, were obtained. This method provided a full 3D X-ray image of the test object as well as left and right perspective 2D images, all at the same time. Within this scope, images of fresh, small human breast tissue specimens, normal and diseased, were obtained at ±2 deg., processed and stereoscopically displayed for a pre-clinical evaluation by radiologists. It was demonstrated that the stereoscopic presentation of the images provides important additional information and has potential benefits over the more traditional 2D data.

  8. Evaluation of mobile phone camera benchmarking using objective camera speed and image quality metrics

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2014-11-01

    When a mobile phone camera is tested and benchmarked, the significance of image quality metrics is widely acknowledged, and methods also exist to evaluate camera speed. However, the speed or rapidity metrics of the mobile phone's camera system have not been used together with the quality metrics, even though camera speed has become an increasingly important camera performance feature. There are several tasks in this work. First, the most important image quality and speed-related metrics of a mobile phone's camera system are collected from standards and papers, and novel speed metrics are identified. Second, combinations of the quality and speed metrics are validated using mobile phones on the market. The measurements are made against the application programming interfaces of different operating systems. Finally, the results are evaluated and conclusions are drawn. The paper defines a solution for combining different image quality and speed metrics into a single benchmarking score. A proposal for the combined benchmarking metric is evaluated using measurements of 25 mobile phone cameras on the market. The paper is a continuation of previous benchmarking work, expanded with visual noise measurement and updates for the latest mobile phone versions.
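    The abstract does not give the exact combination rule; one common approach, shown here purely as an assumed sketch, is a weighted sum of metrics normalized to [0, 1] (with "lower is better" metrics such as shot-to-shot latency inverted before normalization). The metric names and weights below are invented for the example.

```python
def benchmark_score(metrics, weights):
    """Combine normalized camera metrics into a single benchmarking score.

    metrics : dict of metric name -> value normalized to [0, 1], higher is better
    weights : dict of metric name -> weight; weights must sum to 1
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[k] * metrics[k] for k in metrics)

# Hypothetical normalized scores for one phone
phone = {"sharpness": 0.8, "visual_noise": 0.6, "shot_to_shot_speed": 0.9}
w = {"sharpness": 0.5, "visual_noise": 0.3, "shot_to_shot_speed": 0.2}
score = benchmark_score(phone, w)  # 0.4 + 0.18 + 0.18 = 0.76
```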

  9. The LLL compact 10-ps streak camera

    International Nuclear Information System (INIS)

    Thomas, S.W.; Houghton, J.W.; Tripp, G.R.; Coleman, L.W.

    1975-01-01

    The 10-ps streak camera has been redesigned to simplify its operation, reduce manufacturing costs, and improve its appearance. The electronics have been simplified, a film indexer added, and a contacted slit has been evaluated. Data support a 10-ps resolution. (author)

  10. Toward standardising gamma camera quality control procedures

    International Nuclear Information System (INIS)

    Alkhorayef, M.A.; Alnaaimi, M.A.; Alduaij, M.A.; Mohamed, M.O.; Ibahim, S.Y.; Alkandari, F.A.; Bradley, D.A.

    2015-01-01

    Attaining high standards of efficiency and reliability in the practice of nuclear medicine requires appropriate quality control (QC) programs. For instance, the regular evaluation and comparison of extrinsic and intrinsic flood-field uniformity enables the quick correction of many gamma camera problems. Whereas QC tests for uniformity are usually performed by exposing the gamma camera crystal to a uniform flux of gamma radiation from a source of known activity, such protocols can vary significantly. Thus, there is a need for optimization and standardization, in part to allow direct comparison between gamma cameras from different vendors. In the present study, intrinsic uniformity was examined as a function of source distance, source activity, source volume and number of counts. The extrinsic uniformity and spatial resolution were also examined. Proper standard QC procedures need to be implemented because of the continual development of nuclear medicine imaging technology and the rapid expansion and increasing complexity of hybrid imaging system data. The present work seeks to promote a set of standard testing procedures to contribute to the delivery of safe and effective nuclear medicine services. - Highlights: • Optimal parameters for quality control of the gamma camera are proposed. • For extrinsic and intrinsic uniformity a minimum of 15,000 counts is recommended. • For intrinsic flood uniformity the activity should not exceed 100 µCi (3.7 MBq). • For intrinsic uniformity the source to detector distance should be at least 60 cm. • The bar phantom measurement must be performed with at least 15 million counts.
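    As a concrete illustration of one flood-field QC figure discussed above, the sketch below computes a simplified integral uniformity; the full NEMA procedure additionally applies a smoothing filter and restricts the analysis to the useful field of view.

```python
import numpy as np

def integral_uniformity(flood):
    """Simplified NEMA-style integral uniformity of a flood image, in percent.

    IU = 100 * (max - min) / (max + min) over the pixel counts.
    """
    mx, mn = float(flood.max()), float(flood.min())
    return 100.0 * (mx - mn) / (mx + mn)

# Synthetic flood image: counts between 90 and 110 per pixel
flood = np.full((64, 64), 100)
flood[0, 0], flood[-1, -1] = 110, 90
iu = integral_uniformity(flood)  # → 10.0
```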

  11. Increased Automation in Stereo Camera Calibration Techniques

    Directory of Open Access Journals (Sweden)

    Brandi House

    2006-08-01

    Robotic vision has become a very popular field in recent years due to the numerous promising applications it may enhance. However, errors within the cameras and in their perception of their environment can cause applications in robotics to fail. To help correct these internal and external imperfections, stereo camera calibrations are performed. Many accurate methods of camera calibration are currently available; however, most of them are time-consuming and labor-intensive. This research seeks to automate the most labor-intensive aspects of a popular calibration technique developed by Jean-Yves Bouguet, whose process requires manual selection of the extreme corners of a checkerboard pattern. The modified process uses LEDs embedded in the checkerboard pattern as active fiducials. Images of the checkerboard are captured with the LEDs on and off in rapid succession; the difference of the two images automatically highlights the locations of the four extreme corners, and these corner locations take the place of the manual selections. With this modification to the calibration routine, upwards of eighty mouse clicks are eliminated per stereo calibration. Preliminary test results indicate that accuracy is not substantially affected by the modified procedure. Improved automation of camera calibration procedures may finally overcome the barriers to the use of calibration in practice.
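    The on/off difference step can be sketched as follows for a single LED fiducial; the threshold value and the synthetic frames are illustrative, not taken from the paper.

```python
import numpy as np

def led_centroid(img_on, img_off, thresh=50):
    """Centroid of pixels that brighten when an LED fiducial is switched on.

    img_on, img_off : 2-D grayscale frames captured in rapid succession.
    The difference image isolates the LED, so its centroid can replace
    a manual corner click in the calibration routine.
    """
    diff = img_on.astype(int) - img_off.astype(int)
    rows, cols = np.nonzero(diff > thresh)
    return rows.mean(), cols.mean()

# Synthetic frames: a 3x3 bright patch centred at (10, 20) when the LED is on
off = np.zeros((64, 64), np.uint8)
on = off.copy()
on[9:12, 19:22] = 200
r, c = led_centroid(on, off)  # → (10.0, 20.0)
```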

  12. Thermoplastic film camera for holographic recording

    International Nuclear Information System (INIS)

    Liegeois, C.; Meyrueis, P.

    1982-01-01

    The design of a thermoplastic-film recording camera and its performance for holography of extended objects are reported. A special corona geometry, accurate control of development heat by constant-current heating, and high-resolution measurement of the development temperature make easy recording of reproducible, large-aperture holograms possible. The experimental results give the transfer characteristics, the diffraction efficiency characteristics and the spatial frequency response. (orig.)

  13. A novel super-resolution camera model

    Science.gov (United States)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

    Aiming to realize super-resolution (SR) reconstruction of single images and video, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this, a driving device such as a piezoelectric ceramic actuator is placed in the camera. By controlling the driving device, a set of continuous low-resolution (LR) images can be obtained and stored instantaneously, reflecting both the randomness of the displacements and the real-time performance of the storage. The low-resolution image sequences carry different redundant information and some particular prior information, which makes it possible to restore a super-resolution image effectively. A sampling analysis is used to derive the reconstruction principle of super resolution and the possible improvement in resolution in theory. A learning-based super-resolution algorithm is used to reconstruct single images, and a variational Bayesian algorithm is simulated to reconstruct the low-resolution images with random displacements; it models the unknown high-resolution image, motion parameters and unknown model parameters in one hierarchical Bayesian framework. Utilizing a sub-pixel registration method, a super-resolution image of the scene can be reconstructed. The results of reconstruction from 16 images show that this camera model can double the image resolution, obtaining images with higher resolution at currently available hardware levels.
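    The paper's variational Bayesian reconstruction estimates the displacements jointly with the image; as a much simpler illustration of why sub-pixel-shifted LR frames carry super-resolution information, the sketch below performs naive shift-and-add with shifts assumed to be known.

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Naive shift-and-add super-resolution reconstruction.

    frames : list of low-resolution 2-D arrays
    shifts : per-frame (dy, dx) displacement in HIGH-resolution pixels
    factor : magnification; with factor=2 and shifts covering {0,1} x {0,1},
             every high-resolution pixel receives exactly one sample.
    """
    h, w = frames[0].shape
    hr = np.zeros((h * factor, w * factor))
    hits = np.zeros_like(hr)
    for f, (dy, dx) in zip(frames, shifts):
        hr[dy::factor, dx::factor] += f
        hits[dy::factor, dx::factor] += 1
    return hr / np.maximum(hits, 1)  # average where samples overlap

# Four LR frames sampled from a known HR scene at sub-pixel offsets
truth = np.arange(16.0).reshape(4, 4)
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [truth[dy::2, dx::2] for dy, dx in shifts]
rec = shift_and_add(frames, shifts)  # recovers `truth` exactly
```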

  14. Scintillation camera for high activity sources

    International Nuclear Information System (INIS)

    Arseneau, R.E.

    1976-01-01

    A scintillation camera is provided with electrical components which expand the intrinsic maximum rate of acceptance for processing of pulses emanating from detected radioactive events. Buffer storage is provided to accommodate temporary increases in the level of radioactivity. An early provisional determination of acceptability of pulses allows many unacceptable pulses to be discarded at an early stage

  15. Novel computer-based endoscopic camera

    Science.gov (United States)

    Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia

    1995-05-01

    We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization by reducing overexposed glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000™ digital video processor chip and the patented Adaptive Sensitivity™ scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to an external host via network. The patient data included with every image describes essential information on the patient and procedure. The operator can assign custom data descriptors, and can search for a stored image by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can be shown over the whole area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.

  16. FPS camera sync and reset chassis

    International Nuclear Information System (INIS)

    Yates, G.J.

    1980-06-01

    The sync and reset chassis provides all the circuitry required to synchronize an event to be studied, a remote free-running focus projection and scanning (FPS) data-acquisition TV camera, and a video signal recording system. The functions, design, and operation of this chassis are described in detail

  17. Lessons Learned from Crime Caught on Camera

    DEFF Research Database (Denmark)

    Lindegaard, Marie Rosenkrantz; Bernasco, Wim

    2018-01-01

    Objectives: The widespread use of camera surveillance in public places offers criminologists the opportunity to systematically and unobtrusively observe crime, their main subject matter. The purpose of this essay is to inform the reader of current developments in research on crimes caught on came...

  18. Terrain mapping camera for Chandrayaan-1

    Indian Academy of Sciences (India)

    The camera will have four gain settings to cover the varying illumination conditions of the Moon. Additionally, a provision of imaging with reduced resolution, for improving Signal-to-Noise Ratio (SNR) in polar regions, which have poor illumination conditions throughout, has been made. SNR of better than 100 is expected in ...

  19. Fog camera to visualize ionizing charged particles

    International Nuclear Information System (INIS)

    Trujillo A, L.; Rodriguez R, N. I.; Vega C, H. R.

    2014-10-01

    Human beings cannot perceive the different types of ionizing radiation, natural or artificial, present in nature, so appropriate detection systems have been developed, each sensitive to a certain radiation type and energy range. The objective of this work was to build a fog camera (cloud chamber) to visualize the traces, and to identify the trajectories, produced by high-energy charged particles, coming mainly from cosmic rays. Cosmic rays originate partly in solar radiation generated by solar eruptions, in which protons compose most of the radiation, and partly in galactic radiation, which is composed mainly of charged particles and gamma rays coming from outside the solar system. These radiations have energies millions of times higher than those detected at the earth's surface, and become more important as the height above sea level increases. In their interactions these particles produce secondary particles that are detectable with this type of camera. The camera operates by means of an atmosphere saturated with alcohol vapor. When a charged particle crosses the cold region of the atmosphere, the medium is ionized and the ions act as condensation nuclei for the alcohol vapor, leaving a visible trace of the particle's trajectory. The camera built was very stable, allowing continuous detection and the observation of diverse events. (Author)

  20. The Legal Implications of Surveillance Cameras

    Science.gov (United States)

    Steketee, Amy M.

    2012-01-01

    The nature of school security has changed dramatically over the last decade. Schools employ various measures, from metal detectors to identification badges to drug testing, to promote the safety and security of staff and students. One of the increasingly prevalent measures is the use of security cameras. In fact, the U.S. Department of Education…

  1. Divergence-ratio axi-vision camera (Divcam): A distance mapping camera

    International Nuclear Information System (INIS)

    Iizuka, Keigo

    2006-01-01

    A novel distance-mapping camera, the divergence-ratio axi-vision camera (Divcam), is proposed. The decay of the illuminating light with distance, due to the divergence of the light, is used as a means of mapping the distance. Resolutions of 10 mm over a range of meters and 0.5 mm over a range of decimeters were achieved. The special features of this camera are its high-resolution real-time operation, simplicity, compactness, light weight, portability, and low fabrication cost. The feasibility of various potential applications is also included.
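    The divergence-ratio principle can be illustrated with an idealized inverse-square model: two co-axial point sources at different distances illuminate the target in turn, and the intensity ratio yields the distance. The geometry, source separation and calibration of the actual Divcam are more involved than this sketch assumes.

```python
import math

def distance_from_ratio(i_near, i_far, s):
    """Estimate target distance from the divergence (inverse-square) ratio.

    Two co-axial point sources, separated by s along the optical axis,
    illuminate the target in turn. By the inverse-square law,
        i_near / i_far = ((d + s) / d)**2,
    so  d = s / (sqrt(i_near / i_far) - 1).
    Assumes ideal point sources and a Lambertian target.
    """
    ratio = math.sqrt(i_near / i_far)
    return s / (ratio - 1)

# A target at 2.0 m with sources 0.5 m apart:
# i_near / i_far = (2.5 / 2.0)**2 = 1.5625, so d is recovered as 2.0
d = distance_from_ratio(1.5625, 1.0, 0.5)
```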

  2. Integrating Gigabit ethernet cameras into EPICS at Diamond light source

    International Nuclear Information System (INIS)

    Cobb, T.

    2012-01-01

    At Diamond Light Source a range of cameras are used to provide images for diagnostic purposes in both the accelerator and photon beamlines. The accelerator and existing beamlines use Point Grey Flea and Flea2 Firewire cameras. We have selected Gigabit Ethernet cameras supporting GigE Vision for our new photon beamlines. GigE Vision is an interface standard for high-speed Ethernet cameras which encourages interoperability between manufacturers. This paper describes the challenges encountered while integrating GigE Vision cameras from a range of vendors into EPICS. GigE Vision cameras appear to be more reliable than the Firewire cameras, and the simple cabling makes it much easier to move the cameras to different positions. Upcoming power-over-Ethernet versions of the cameras will reduce the number of cables still further.

  3. New nuclear medicine gamma camera systems

    International Nuclear Information System (INIS)

    Villacorta, Edmundo V.

    1997-01-01

    The acquisition of the Open E.CAM and DIACAM gamma cameras by Makati Medical Center is expected to enhance the capabilities of its nuclear medicine facilities. When used as an aid to diagnosis, nuclear medicine entails the introduction of a minute amount of radioactive material into the patient; thus, no reaction or side-effect is expected. When it reaches the particular target organ, depending on the radiopharmaceutical, a lesion will appear as a decreased (cold) or increased (hot) area in the radioactive distribution as recorded by the gamma cameras. Gamma camera images in slices, or SPECT (Single Photon Emission Computed Tomography), increase the sensitivity and accuracy of detecting smaller and deeply seated lesions, which might otherwise not be detected in regular single planar images. Due to the 'open' design of the equipment, claustrophobic patients will no longer feel enclosed during the procedure. These new gamma cameras yield improved resolution and superb image quality, and their higher photon sensitivity shortens image acquisition time. The E.CAM, the latest-generation gamma camera, features a variable-angle dual-head system, the only one available in the Philippines, and is an excellent choice for Myocardial Perfusion Imaging (MPI). From the usual 45 minutes, the acquisition time for gated SPECT imaging of the heart has been remarkably reduced to 12 minutes. 'Gated' refers to snap-shots of the heart in selected phases of its contraction and relaxation as triggered by ECG. The DIACAM is installed in a room with access outside the main entrance of the department, intended specially for bed-borne patients. Both systems are equipped with a network of high-performance Macintosh ICON acquisition and processing computers. Added to the hardware is the ICON processing software, which allows total simultaneous acquisition and processing at the same operator's terminal. Video film and color printers are also provided. Together

  4. X-ray imaging using digital cameras

    Science.gov (United States)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.
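    The MTF reported above is derived from measured images; the sketch below shows only the generic line-spread-function to MTF step, using an illustrative Gaussian LSF and an assumed sample pitch rather than the paper's data.

```python
import numpy as np

def mtf_from_lsf(lsf, pitch_mm):
    """MTF as the normalized Fourier magnitude of a line spread function.

    lsf      : 1-D line spread function sampled across a slit or edge
    pitch_mm : sample spacing in mm; returned frequencies are in lp/mm
    """
    m = np.abs(np.fft.rfft(lsf))
    m /= m[0]                                  # normalize to unity at DC
    freqs = np.fft.rfftfreq(len(lsf), d=pitch_mm)
    return freqs, m

# A Gaussian LSF (illustrative width) gives a smooth MTF roll-off
x = np.arange(-32, 32)
lsf = np.exp(-(x / 4.0) ** 2)
freqs, mtf = mtf_from_lsf(lsf, pitch_mm=0.05)
```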

  5. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotation axis of the device. This offset causes problems when stitching together frames taken from the individual cameras; there are, however, ways to overcome it, and applying immersive cameras in photogrammetry offers new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on a Ladybug®3 and a GPS device is discussed. The number of panoramas is far more than needed for photogrammetric purposes, as the baseline between spherical panoramas is around 1 metre. More than 92,000 panoramas were recorded in the Polish region of Czarny Dunajec, and measurements from the panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being drafted to limit the number of illegal advertising structures in the Polish landscape, and immersive video, recorded in a short period of time, is a candidate for economical and flexible off-site measurement. The second approach is the generation of 3D video-based reconstructions of heritage sites from immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft PhotoScan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and offers promising features for mobile mapping systems.

  6. A novel fully integrated handheld gamma camera

    International Nuclear Information System (INIS)

    Massari, R.; Ucci, A.; Campisi, C.; Scopinaro, F.; Soluri, A.

    2016-01-01

    In this paper, we present an innovative, fully integrated handheld gamma camera, designed to combine in a single device the gamma-ray detector, the display and the embedded computing system. The low power consumption allows the prototype to be battery operated. To be useful in radioguided surgery, an intraoperative gamma camera must be very easy to handle, since it must be moved to find a suitable view. Consequently, we have developed the first prototype of a fully integrated, compact and lightweight gamma camera for fast imaging of radiopharmaceuticals. The device can operate without cables across the sterile field, so it may be easily used in the operating theater for radioguided surgery. The proposed prototype consists of a Silicon Photomultiplier (SiPM) array coupled with a proprietary scintillation structure based on CsI(Tl) crystals. To read the SiPM output signals, we have developed very low power readout electronics and a dedicated analog-to-digital conversion system. One of the most critical aspects we faced in designing the prototype was the low power consumption, which is mandatory for a battery operated device. We have applied this detection device to the lymphoscintigraphy technique (sentinel lymph node mapping), comparing the results obtained with those of a commercial gamma camera (Philips SKYLight). The results confirm a rapid response of the device and an adequate spatial resolution for use in scintigraphic imaging. This work confirms the feasibility of a small gamma camera with an integrated display. This device is designed for radioguided surgery and small organ imaging, but it could easily be integrated into surgical navigation systems.
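    Position readout in detectors of this kind is typically some variant of Anger logic: the scintillation event position is estimated as the signal-weighted centroid of the pixel outputs. A hedged sketch of that idea — the 4×4 array, 3 mm pitch, and Gaussian light spread are illustrative assumptions, not the prototype's actual geometry:

    ```python
    import numpy as np

    def anger_centroid(signals, positions):
        """Event position as the signal-weighted centroid of SiPM pixel
        outputs (classic Anger logic); also returns the energy proxy.

        signals   : (N,) pixel amplitudes for one event
        positions : (N, 2) pixel centre coordinates in mm
        """
        signals = np.asarray(signals, dtype=float)
        positions = np.asarray(positions, dtype=float)
        total = signals.sum()                       # proportional to energy
        xy = (signals[:, None] * positions).sum(axis=0) / total
        return xy, total

    # Illustrative 4x4 array on a 3 mm pitch; a flash at (1.5, -0.6) mm
    pitch = 3.0
    grid = np.array([(ix, iy) for ix in range(4) for iy in range(4)], float)
    pos = (grid - 1.5) * pitch
    true_xy = np.array([1.5, -0.6])
    dist2 = ((pos - true_xy) ** 2).sum(axis=1)
    sig = np.exp(-dist2 / (2 * 2.0 ** 2))           # Gaussian light spread
    est, energy = anger_centroid(sig, pos)
    ```

    The centroid lands close to the true flash position; on a real finite array the estimate is biased toward the centre near the edges, which is one reason vendors use proprietary positioning schemes on top of the plain centroid.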

  7. A novel fully integrated handheld gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Massari, R.; Ucci, A.; Campisi, C. [Biostructure and Bioimaging Institute (IBB), National Research Council of Italy (CNR), Rome (Italy); Scopinaro, F. [University of Rome “La Sapienza”, S. Andrea Hospital, Rome (Italy); Soluri, A., E-mail: alessandro.soluri@ibb.cnr.it [Biostructure and Bioimaging Institute (IBB), National Research Council of Italy (CNR), Rome (Italy)

    2016-10-01

    In this paper, we present an innovative, fully integrated handheld gamma camera, designed to combine in a single device the gamma-ray detector, the display and the embedded computing system. The low power consumption allows the prototype to be battery operated. To be useful in radioguided surgery, an intraoperative gamma camera must be very easy to handle, since it must be moved to find a suitable view. Consequently, we have developed the first prototype of a fully integrated, compact and lightweight gamma camera for fast imaging of radiopharmaceuticals. The device can operate without cables across the sterile field, so it may be easily used in the operating theater for radioguided surgery. The proposed prototype consists of a Silicon Photomultiplier (SiPM) array coupled with a proprietary scintillation structure based on CsI(Tl) crystals. To read the SiPM output signals, we have developed very low power readout electronics and a dedicated analog-to-digital conversion system. One of the most critical aspects we faced in designing the prototype was the low power consumption, which is mandatory for a battery operated device. We have applied this detection device to the lymphoscintigraphy technique (sentinel lymph node mapping), comparing the results obtained with those of a commercial gamma camera (Philips SKYLight). The results confirm a rapid response of the device and an adequate spatial resolution for use in scintigraphic imaging. This work confirms the feasibility of a small gamma camera with an integrated display. This device is designed for radioguided surgery and small organ imaging, but it could easily be integrated into surgical navigation systems.

  8. Stereoscopic displays for virtual reality in the car manufacturing industry: application to design review and ergonomic studies

    Science.gov (United States)

    Moreau, Guillaume; Fuchs, Philippe

    2002-05-01

    In the car manufacturing industry the trend is to drastically reduce the time-to-market by increasing the use of the Digital Mock-up instead of physical prototypes. Design review and ergonomic studies are specific tasks because they involve qualitative or even subjective judgements. In this paper, we present IMAVE (IMmersion Adapted to a VEhicle), designed for immersive styling review, gap visualization and simple ergonomic studies. We show that stereoscopic displays are necessary and must fulfill several constraints due to the proximity and size of the car dashboard. The duration of the work sessions forces us to eliminate all vertical parallax, and 1:1 scale is obviously required for a valid immersion. Two demonstrators were realized, allowing us to draw on a large set of testers (over 100). More than 80% of the testers saw an immediate use for the IMAVE system. We discuss the good and bad marks awarded to the system. Future work includes the ability to use several rear-projected stereo screens for door and central-console visualization, but without the parallax presently visible in some CAVE-like environments.

  9. Three-dimensional temporally resolved measurements of turbulence-flame interactions using orthogonal-plane cinema-stereoscopic PIV

    Energy Technology Data Exchange (ETDEWEB)

    Steinberg, Adam Michael; Driscoll, James F. [University of Michigan, Department of Aerospace Engineering, Ann Arbor, MI (United States); Ceccio, Steven L. [University of Michigan, Department of Mechanical Engineering, Ann Arbor, MI (United States)

    2009-09-15

    A new orthogonal-plane cinema-stereoscopic particle image velocimetry (OPCS-PIV) diagnostic has been used to measure the dynamics of three-dimensional turbulence-flame interactions. The diagnostic employed two orthogonal PIV planes, with one aligned perpendicular and one aligned parallel to the streamwise flow direction. In the plane normal to the flow, temporally resolved slices of the nine-component velocity gradient tensor were determined using Taylor's hypothesis. Volumetric reconstruction of the 3D turbulence was performed using these slices. The PIV plane parallel to the streamwise flow direction was then used to measure the evolution of the turbulence; the path and strength of 3D turbulent structures as they interacted with the flame were determined from their image in this second plane. Structures of both vorticity and strain-rate magnitude were extracted from the flow. The geometry of these structures agreed well with predictions from direct numerical simulations. The interaction of turbulent structures with the flame also was observed. In three dimensions, these interactions had complex geometries that could not be reflected in either planar measurements or simple flame-vortex configurations. (orig.)
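    Taylor's hypothesis, used above to recover the streamwise component of the velocity gradient tensor from temporally resolved slices, replaces the streamwise derivative with a temporal one: d/dx ≈ -(1/U) d/dt for turbulence that is frozen while convecting at velocity U. A minimal sketch on a synthetic convecting sine wave (the wave, convection speed, and sampling rate are assumptions for the example, not the paper's data):

    ```python
    import numpy as np

    def streamwise_gradient(field_t, dt, u_conv):
        """Estimate the streamwise derivative of a quantity measured in a
        cross-stream plane via Taylor's hypothesis: d/dx = -(1/U) d/dt.

        field_t : (T, Ny, Nz) time series of the quantity in the plane
        dt      : time between slices [s]
        u_conv  : convection velocity [m/s]
        """
        dfdt = np.gradient(np.asarray(field_t, float), dt, axis=0)
        return -dfdt / u_conv

    # A frozen wave u(x - U t) sampled at x = 0: the temporal derivative
    # should recover the spatial one (up to finite-difference error).
    U, k, dt = 5.0, 2 * np.pi / 0.01, 1e-5
    t = np.arange(64) * dt
    u = np.sin(k * (0.0 - U * t))[:, None, None] * np.ones((64, 4, 4))
    dudx = streamwise_gradient(u, dt, U)
    ```

    For the frozen wave the exact streamwise derivative at x = 0 is k·cos(kUt), and the Taylor estimate matches it to second order in the time step; in the experiment this is what allows a single temporally resolved cross-stream plane to yield all nine velocity-gradient components.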

  10. Stereoscopic-3D display design: a new paradigm with Intel Adaptive Stable Image Technology [IA-SIT

    Science.gov (United States)

    Jain, Sunil

    2012-03-01

    Stereoscopic-3D (S3D) proliferation on personal computers (PC) is mired by several technical and business challenges: a) viewing discomfort due to cross-talk amongst stereo images; b) high system cost; and c) restricted content availability. Users expect S3D visual quality to be better than, or at least equal to, what they are used to enjoying in 2D in terms of resolution, pixel density, color, and interactivity. Intel Adaptive Stable Image Technology (IA-SIT) is a foundational technology, successfully developed to resolve S3D system design challenges and deliver high quality 3D visualization at PC price points. Optimizations in the display driver, panel timing firmware, backlight hardware, eyewear optical stack, and sync mechanism combined help accomplish this goal. Agnostic to refresh rate, IA-SIT will scale with the shrinking of display transistors and improvements in liquid crystal and LED materials. Industry could benefit greatly from the following calls to action: 1) Adopt the 'IA-SIT S3D Mode' in panel specs (via VESA) to help panel makers monetize S3D; 2) Adopt the 'IA-SIT Eyewear Universal Optical Stack' and algorithm (via CEA) to help PC peripheral makers develop stylish glasses; 3) Adopt the 'IA-SIT Real Time Profile' for sub-100 µs latency control (via BT SIG) to extend BT into S3D; and 4) Adopt the 'IA-SIT Architecture' for monitors and TVs to monetize via PC attach.

  11. Extended two-photon microscopy in live samples with Bessel beams: steadier focus, faster volume scans, and simpler stereoscopic imaging.

    Science.gov (United States)

    Thériault, Gabrielle; Cottet, Martin; Castonguay, Annie; McCarthy, Nathalie; De Koninck, Yves

    2014-01-01

    Two-photon microscopy has revolutionized functional cellular imaging in tissue, but although the highly confined depth of field (DOF) of standard set-ups yields great optical sectioning, it also limits imaging speed in volume samples and ease of use. For this reason, we recently presented a simple and retrofittable modification to the two-photon laser-scanning microscope which extends the DOF through the use of an axicon (conical lens). Here we demonstrate three significant benefits of this technique using biological samples commonly employed in the field of neuroscience. First, we use a sample of neurons grown in culture and move it along the z-axis, showing that a more stable focus is achieved without compromise on transverse resolution. Second, we monitor 3D population dynamics in an acute slice of live mouse cortex, demonstrating that faster volumetric scans can be conducted. Third, we acquire a stereoscopic image of neurons and their dendrites in a fixed sample of mouse cortex, using only two scans instead of the complete stack and calculations required by standard systems. Taken together, these advantages, combined with the ease of integration into pre-existing systems, make the extended depth-of-field imaging based on Bessel beams a strong asset for the field of microscopy and life sciences in general.

  12. 3D pressure imaging of an aircraft propeller blade-tip flow by phase-locked stereoscopic PIV

    Energy Technology Data Exchange (ETDEWEB)

    Ragni, D.; Oudheusden, B.W. van; Scarano, F. [Delft University of Technology, Faculty of Aerospace Engineering, Delft (Netherlands)

    2012-02-15

    The flow field at the tip region of a scaled DHC Beaver aircraft propeller, running at transonic speed, has been investigated by means of a multi-plane stereoscopic particle image velocimetry setup. Velocity fields, phase-locked with the blade rotational motion, are acquired across several planes perpendicular to the blade axis and merged to form a 3D measurement volume. Transonic conditions have been reached at the tip region, with a revolution frequency of 19,800 rpm and a relative free-stream Mach number of 0.73 at the tip. The pressure field and the surface pressure distribution are inferred from the 3D velocity data through integration of the momentum Navier-Stokes equation in differential form, allowing for the simultaneous flow visualization and the aerodynamic loads computation, with respect to a reference frame moving with the blade. The momentum and pressure data are further integrated by means of a contour-approach to yield the aerodynamic sectional force components as well as the blade torsional moment. A steady Reynolds averaged Navier-Stokes numerical simulation of the entire propeller model has been used for comparison to the measurement data. (orig.)
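    The pressure evaluation above rests on integrating the momentum equation from the measured velocity field. Its simplest steady, inviscid, one-dimensional reduction is dp/dx = -rho·u·du/dx, integrated spatially from a reference pressure; a toy sketch of that reduction (the velocity profile and density are assumptions for the example, and the full method in the paper is three-dimensional and in a blade-fixed rotating frame):

    ```python
    import numpy as np

    def pressure_from_velocity_1d(u, dx, rho, p_ref=0.0):
        """Integrate the steady, inviscid 1-D momentum equation
        dp/dx = -rho * u * du/dx along a line of PIV data."""
        u = np.asarray(u, dtype=float)
        dpdx = -rho * u * np.gradient(u, dx)
        # cumulative trapezoidal integration from the reference point
        p = p_ref + np.concatenate(
            ([0.0], np.cumsum(0.5 * (dpdx[1:] + dpdx[:-1]) * dx)))
        return p

    # Sanity check against Bernoulli: p + 0.5*rho*u^2 should stay constant
    x = np.linspace(0.0, 1.0, 201)
    u = 10.0 + 2.0 * np.sin(2 * np.pi * x)      # smooth velocity variation
    p = pressure_from_velocity_1d(u, x[1] - x[0], rho=1.2)
    ```

    Because dp/dx = -d(0.5·rho·u²)/dx here, the recovered pressure satisfies Bernoulli's relation up to finite-difference error, which is a convenient self-check before attempting the full 3-D integration.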

  13. 3D pressure imaging of an aircraft propeller blade-tip flow by phase-locked stereoscopic PIV

    Science.gov (United States)

    Ragni, D.; van Oudheusden, B. W.; Scarano, F.

    2012-02-01

    The flow field at the tip region of a scaled DHC Beaver aircraft propeller, running at transonic speed, has been investigated by means of a multi-plane stereoscopic particle image velocimetry setup. Velocity fields, phase-locked with the blade rotational motion, are acquired across several planes perpendicular to the blade axis and merged to form a 3D measurement volume. Transonic conditions have been reached at the tip region, with a revolution frequency of 19,800 rpm and a relative free-stream Mach number of 0.73 at the tip. The pressure field and the surface pressure distribution are inferred from the 3D velocity data through integration of the momentum Navier-Stokes equation in differential form, allowing for the simultaneous flow visualization and the aerodynamic loads computation, with respect to a reference frame moving with the blade. The momentum and pressure data are further integrated by means of a contour-approach to yield the aerodynamic sectional force components as well as the blade torsional moment. A steady Reynolds averaged Navier-Stokes numerical simulation of the entire propeller model has been used for comparison to the measurement data.

  14. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    Science.gov (United States)

    Champey, Patrick R.; Kobayashi, Ken; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512x512 detector, dual channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20 C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and less than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultra-violet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to run several tests intended to verify the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.
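    The transfer method behind such a QE measurement can be sketched as: the calibrated photodiode converts its photocurrent into a photon flux, the CCD signal (in DN) is converted into electrons with the camera gain, and QE is their ratio. A hedged sketch — the diode responsivity, flux level, exposure, and signal values below are illustrative assumptions, not CLASP calibration data (only the 2.0 e-/DN gain and 13 µm CCD57-10 pixel size follow the abstract and the detector datasheet):

    ```python
    ELECTRON_CHARGE = 1.602e-19  # C (unused here, kept for reference)

    def ccd_quantum_efficiency(mean_dn, gain_e_per_dn, exposure_s,
                               pixel_area_cm2, diode_current_a,
                               diode_responsivity_a_per_w,
                               wavelength_nm, diode_area_cm2):
        """CCD QE at one wavelength: electrons out per photon in."""
        photon_energy_j = 6.626e-34 * 3.0e8 / (wavelength_nm * 1e-9)
        # photodiode: current -> optical power -> photon flux per cm^2
        power_w = diode_current_a / diode_responsivity_a_per_w
        flux_ph_per_cm2_s = power_w / photon_energy_j / diode_area_cm2
        # CCD: DN -> electrons per pixel, then per incident photon
        electrons = mean_dn * gain_e_per_dn
        photons = flux_ph_per_cm2_s * pixel_area_cm2 * exposure_s
        return electrons / photons

    # Illustrative numbers at Lyman-alpha (121.6 nm); responsivity is assumed
    qe = ccd_quantum_efficiency(
        mean_dn=1000.0, gain_e_per_dn=2.0, exposure_s=1.0,
        pixel_area_cm2=(13e-4) ** 2,           # 13 um pixels
        diode_current_a=1e-9, diode_responsivity_a_per_w=0.05,
        wavelength_nm=121.6, diode_area_cm2=1.0)
    ```

    In practice the diode and CCD see the same monochromator beam in turn, and dark frames are subtracted from both measurements before applying the ratio.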

  15. World's fastest and most sensitive astronomical camera

    Science.gov (United States)

    2009-06-01

    The next generation of instruments for ground-based telescopes took a leap forward with the development of a new ultra-fast camera that can take 1500 finely exposed images per second even when observing extremely faint objects. The first 240x240 pixel images with the world's fastest high precision faint light camera were obtained through a collaborative effort between ESO and three French laboratories from the French Centre National de la Recherche Scientifique/Institut National des Sciences de l'Univers (CNRS/INSU). Cameras such as this are key components of the next generation of adaptive optics instruments of Europe's ground-based astronomy flagship facility, the ESO Very Large Telescope (VLT). (Accompanying material: ESO PR Photo 22a/09, the CCD220 detector; ESO PR Photo 22b/09, the OCam camera; ESO PR Video 22a/09, OCam images.) "The performance of this breakthrough camera is without an equivalent anywhere in the world. The camera will enable great leaps forward in many areas of the study of the Universe," says Norbert Hubin, head of the Adaptive Optics department at ESO. OCam will be part of the second-generation VLT instrument SPHERE. To be installed in 2011, SPHERE will take images of giant exoplanets orbiting nearby stars. A fast camera such as this is needed as an essential component for the modern adaptive optics instruments used on the largest ground-based telescopes. Telescopes on the ground suffer from the blurring effect induced by atmospheric turbulence. This turbulence causes the stars to twinkle in a way that delights poets but frustrates astronomers, since it blurs the finest details of the images. Adaptive optics techniques overcome this major drawback, so that ground-based telescopes can produce images that are as sharp as if taken from space. Adaptive optics is based on real-time corrections computed from images obtained by a special camera working at very high speeds. Nowadays, this means many hundreds of times each second. The new generation instruments require these

  16. Principle of some gamma cameras (efficiencies, limitations, development)

    International Nuclear Information System (INIS)

    Allemand, R.; Bourdel, J.; Gariod, R.; Laval, M.; Levy, G.; Thomas, G.

    1975-01-01

    The quality of scintigraphic images is shown to depend on the efficiency of both the input collimator and the detector. Methods are described by which the quality of these images may be improved by adaptations to either the collimator (Fresnel zone camera, Compton effect camera) or the detector (Anger camera, image amplification camera). The Anger camera and image amplification camera are at present the two main instruments whereby acceptable spatial and energy resolutions may be obtained. A theoretical comparative study of their efficiencies is carried out, independently of their technological differences, after which the instruments designed or under study at the LETI are presented: these include the image amplification camera and the electron amplifier tube camera using semiconductor-target CdTe and HgI2 detectors [fr]

  17. Notes on the IMACON 500 streak camera system

    International Nuclear Information System (INIS)

    Clendenin, J.E.

    1985-01-01

    The notes provided are intended to supplement the instruction manual for the IMACON 500 streak camera system. The notes cover the streak analyzer, instructions for timing the streak camera, and calibration

  18. Dynamic gamma camera scintigraphy in primary hypoovarism

    International Nuclear Information System (INIS)

    Peshev, N.; Mladenov, B.; Topalov, I.; Tsanev, Ts.

    1988-01-01

    Twenty-seven patients with primary hypoovarism and 10 controls were examined. After intravenous injection of 111 to 175 MBq of 99mTc-pertechnetate, dynamic gamma camera scintigraphy was carried out for 15 minutes. In the patients with primary amenorrhea, either no functioning ovarian tissue was visualized, or the ovaries were diminished in size and showed strongly reduced, non-homogeneous accumulation of the radionuclide with unclear and uneven delineation. In the patients with primary infertility, the gamma camera investigation gave information not only about the presence of ovarian parenchyma but also about the extent of the inflammatory process. In the patients examined after surgical intervention, the dynamic radioisotope investigation gave information about the volume and site of the surgical intervention, as well as about the condition of the residual parenchyma.

  19. Advanced EVA Suit Camera System Development Project

    Science.gov (United States)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA Suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera that is currently being tested to be able to make improvements on the design. Because there is a lot of benefit to saving space, power, and weight on the EVA suit, my job was to use Altium Designer to start designing a much smaller and simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and checking signal connections to ensure that this architecture could be used for both the Z2 suit and potentially other future projects. The Orion spacecraft is a specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the options of the camera placement may be tested along with other future suit testing. There are multiple teams that work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a big part of my project was initiating contact with other branches and setting up multiple meetings to learn more about the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary and these comparisons will be used as further progress is made for the overall suit design. This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was

  20. Declarative camera control for automatic cinematography

    Energy Technology Data Exchange (ETDEWEB)

    Christianson, D.B.; Anderson, S.E.; Li-wei He [Univ. of Washington, Seattle, WA (United States)] [and others]

    1996-12-31

    Animations generated by interactive 3D computer graphics applications are typically portrayed either from a particular character's point of view or from a small set of strategically placed viewpoints. By ignoring camera placement, such applications fail to realize important storytelling capabilities that have been explored by cinematographers for many years. In this paper, we describe several of the principles of cinematography and show how they can be formalized into a declarative language, called the Declarative Camera Control Language (DCCL). We describe the application of DCCL within the context of a simple interactive video game and argue that DCCL represents cinematic knowledge at the same level of abstraction as expert directors by encoding 16 idioms from a film textbook. These idioms produce compelling animations, as demonstrated on the accompanying videotape.

  1. Cervical SPECT Camera for Parathyroid Imaging

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2012-08-31

    Primary hyperparathyroidism, characterized by one or more enlarged parathyroid glands, has become one of the most common endocrine diseases in the world, affecting about 1 per 1000 in the United States. The standard treatment is a highly invasive exploratory neck surgery called parathyroidectomy. The surgery has a notable mortality rate because of the close proximity to vital structures. The move to minimally invasive parathyroidectomy is hampered by the lack of high resolution pre-surgical imaging techniques that can accurately localize the parathyroid with respect to surrounding structures. We propose to develop a dedicated cervical scintigraphic imaging device with ultra-high resolution (~1 mm) and high sensitivity (10× that of a conventional camera). It will be based on a multiple pinhole-camera SPECT system comprising a novel solid state CZT detector that offers the required performance. The overall system will be configured to fit around the neck and comfortably image a patient.
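    The resolution/sensitivity trade of a pinhole collimator follows two textbook relations, which can be sketched directly; the aperture size, focal length, and imaging distance below are illustrative assumptions, not the proposed system's design values:

    ```python
    import math

    def pinhole_resolution_mm(d_eff_mm, focal_len_mm, source_dist_mm):
        """Geometric resolution at the source plane of a pinhole collimator:
        R_g = d_eff * (l + b) / l, with l the pinhole-to-detector (focal)
        distance and b the source-to-pinhole distance."""
        return d_eff_mm * (focal_len_mm + source_dist_mm) / focal_len_mm

    def pinhole_sensitivity(d_eff_mm, source_dist_mm, theta_deg=0.0):
        """Geometric efficiency of a pinhole aperture,
        g ~= d_eff^2 * cos^3(theta) / (16 b^2), as a fraction of emitted
        photons (theta is the off-axis angle)."""
        theta = math.radians(theta_deg)
        return (d_eff_mm ** 2) * math.cos(theta) ** 3 / (16.0 * source_dist_mm ** 2)

    # A 1 mm aperture, 100 mm focal length, gland at 50 mm:
    r = pinhole_resolution_mm(1.0, 100.0, 50.0)   # geometric resolution, mm
    g = pinhole_sensitivity(1.0, 50.0)            # fraction of emitted photons
    ```

    These relations show why multiple pinholes are attractive: each aperture alone has very low efficiency, so sensitivity is recovered by summing many pinhole views while each keeps sub-collimator resolution through magnification.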

  2. Methods of correcting Anger camera deadtime losses

    International Nuclear Information System (INIS)

    Sorenson, J.A.

    1976-01-01

    Three different methods of correcting for Anger camera deadtime loss were investigated. These included analytic methods (mathematical modeling), the marker-source method, and a new method based on counting 'pileup' events appearing in a pulse-height analyzer window positioned above the photopeak of interest. The studies were done with 99mTc on a Searle Radiographics camera with a measured deadtime of about 6 μs. Analytic methods were found to be unreliable because of unpredictable changes in deadtime with changes in radiation scattering conditions. Both the marker-source method and the pileup-counting method were found to be accurate to within a few percent for true counting rates of up to about 200 kcps, with the pileup-counting method giving better results. This finding applied to sources at depths ranging up to 10 cm of pressed wood. The relative merits of the two methods are discussed.
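    The analytic (mathematical-modeling) approach mentioned above usually means inverting one of the two standard deadtime models. A minimal sketch of both inversions — the 6 µs deadtime follows the abstract, but the 50 kcps observed rate is an illustrative assumption (the paralyzable model is only invertible on its low-rate branch, where m·τ < 1/e):

    ```python
    import math

    def true_rate_nonparalyzable(observed_cps, tau_s):
        """Non-paralyzable deadtime model: n = m / (1 - m*tau)."""
        return observed_cps / (1.0 - observed_cps * tau_s)

    def true_rate_paralyzable(observed_cps, tau_s, iters=50):
        """Paralyzable model m = n*exp(-n*tau), inverted by the fixed-point
        iteration n <- m*exp(n*tau) (converges on the low-rate branch)."""
        n = observed_cps
        for _ in range(iters):
            n = observed_cps * math.exp(n * tau_s)
        return n

    # A 6 us camera observing 50 kcps:
    tau = 6e-6
    m = 50_000.0
    n_np = true_rate_nonparalyzable(m, tau)   # ~71.4 kcps true rate
    n_p = true_rate_paralyzable(m, tau)       # ~81.6 kcps true rate
    ```

    The two models diverge quickly at high rates, which is consistent with the abstract's finding that a fixed analytic model becomes unreliable when the effective deadtime shifts with scattering conditions.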

  3. Multiple-camera tracking: UK government requirements

    Science.gov (United States)

    Hosmer, Paul

    2007-10-01

    The Imagery Library for Intelligent Detection Systems (i-LIDS) is the UK government's new standard for Video Based Detection Systems (VBDS). The standard was launched in November 2006 and evaluations against it began in July 2007. With the first four i-LIDS scenarios completed, the Home Office Scientific Development Branch (HOSDB) is looking toward the future of intelligent vision in the security surveillance market by adding a fifth scenario to the standard. The fifth i-LIDS scenario will concentrate on the development, testing and evaluation of systems for the tracking of people across multiple cameras. HOSDB and the Centre for the Protection of National Infrastructure (CPNI) identified a requirement to track targets across a network of CCTV cameras using both live and post-event imagery. The Detection and Vision Systems group at HOSDB was asked to determine the current state of the market and develop an in-depth Operational Requirement (OR) based on government end-user requirements. Using this OR, the i-LIDS team will develop a full i-LIDS scenario to aid the machine vision community in its development of multi-camera tracking systems. By defining a requirement for multi-camera tracking and building this into the i-LIDS standard, the UK government will provide a widely available tool that developers can use to help them turn theory and conceptual demonstrators into front-line applications. This paper will briefly describe the i-LIDS project and then detail the work conducted in building the new tracking aspect of the standard.

  4. Gamma camera scatter suppression unit WAM

    International Nuclear Information System (INIS)

    Kishi, Haruo; Shibahara, Noriyuki; Hirose, Yoshiharu; Shimonishi, Yoshihiro; Oumura, Masahiro; Ikeda, Hozumi; Hamada, Kunio; Ochi, Hironobu; Itagane, Hiroshi.

    1990-01-01

    In gamma camera imaging, scattered radiation is one of the major factors degrading image contrast. Simply suppressing scatter makes the signal-to-noise ratio larger, but it introduces statistical error, because the amount of radionuclide that can be injected into the human body is limited. EWA is a new method that suppresses scattered radiation and improves image contrast. In this article, WAM, the commercialized implementation of the EWA method by Siemens Gammasonics Inc., is presented. (author)
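    The EWA/WAM algorithm itself is proprietary and not described in the abstract, but the general idea behind energy-window-based scatter suppression can be illustrated with the classic dual-energy-window (Jaszczak) subtraction, shown here purely as a generic stand-in; the scatter fraction k = 0.5 is the textbook value for 99mTc and is system dependent:

    ```python
    import numpy as np

    def dual_window_scatter_correct(photopeak, scatter_window, k=0.5):
        """Generic dual-energy-window scatter subtraction: the counts in a
        lower 'scatter' window, scaled by k, estimate the scatter recorded
        in the photopeak window and are subtracted from it."""
        corrected = (np.asarray(photopeak, float)
                     - k * np.asarray(scatter_window, float))
        return np.clip(corrected, 0.0, None)    # no negative counts

    # Synthetic check: build a photopeak image whose scatter component is
    # exactly k times the scatter-window image, then recover the primary.
    primary = np.array([[100.0, 50.0], [20.0, 0.0]])
    sw = np.array([[40.0, 20.0], [10.0, 4.0]])
    photo = primary + 0.5 * sw
    corrected = dual_window_scatter_correct(photo, sw)
    ```

    The subtraction restores contrast but amplifies noise, which is exactly the trade-off the abstract alludes to when it notes the limit on injectable activity.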

  5. Quality assessment of gamma camera systems

    International Nuclear Information System (INIS)

    Kindler, M.

    1985-01-01

    There are methods and equipment in nuclear medical diagnostics that allow selective visualisation of the functioning of organs or organ systems, using radioactive substances for labelling and demonstration of metabolic processes. Following a previous contribution on fundamentals and system components of a gamma camera system, the article in hand deals with the quality characteristics of such a system and with practical quality control and its significance for clinical applications. [de]

  6. Combining local and global optimisation for virtual camera control

    OpenAIRE

    Burelli, Paolo; Yannakakis, Georgios N.; 2010 IEEE Symposium on Computational Intelligence and Games

    2010-01-01

    Controlling a virtual camera in 3D computer games is a complex task. The camera is required to react to dynamically changing environments and produce high quality visual results and smooth animations. This paper proposes an approach that combines local and global search to solve the virtual camera control problem. The automatic camera control problem is described and it is decomposed into sub-problems; then a hierarchical architecture that solves each sub-problem using the most appropriate op...

  7. Can Camera Traps Monitor Komodo Dragons a Large Ectothermic Predator?

    OpenAIRE

    Ariefiandy, Achmad; Purwandana, Deni; Seno, Aganto; Ciofi, Claudio; Jessop, Tim S.

    2013-01-01

    Camera trapping has greatly enhanced population monitoring of often cryptic and low abundance apex carnivores. Effectiveness of passive infrared camera trapping, and ultimately population monitoring, relies on temperature mediated differences between the animal and its ambient environment to ensure good camera detection. In ectothermic predators such as large varanid lizards, this criterion is presumed less certain. Here we evaluated the effectiveness of camera trapping to potentially monitor...

  8. Poster: A Software-Defined Multi-Camera Network

    OpenAIRE

    Chen, Po-Yen; Chen, Chien; Selvaraj, Parthiban; Claesen, Luc

    2016-01-01

    The widespread popularity of OpenFlow leads to a significant increase in the number of applications developed in Software-Defined Networking (SDN). In this work, we propose the architecture of a Software-Defined Multi-Camera Network consisting of small, flexible, economic, and programmable cameras which combine the functions of the processor, switch, and camera. A Software-Defined Multi-Camera Network can effectively reduce the overall network bandwidth and reduce a large amount of the Capex a...

  9. Theory and applications of smart cameras

    CERN Document Server

    2016-01-01

This book presents an overview of smart camera systems, considering practical applications while also reviewing fundamental aspects of the underlying technology. It introduces, in a tutorial style, the principles of sensing and signal processing, and also describes topics such as wireless connection to the Internet of Things (IoT), which is expected to be the biggest market for smart cameras. It is an excellent guide to the fundamentals of smart camera technology, and the chapters complement each other well, as the authors have worked as a team under the auspices of GFP (Global Frontier Project), the largest-scale funded research in Korea. This is the third of three books based on the Integrated Smart Sensors research project, which describe the development of innovative devices, circuits, and system-level enabling technologies. The aim of the project was to develop common platforms on which various devices and sensors can be loaded, and to create systems offering significant improvements in information processi...

  10. The AOTF-Based NO2 Camera

    Science.gov (United States)

    Dekemper, E.; Fussen, D.; Vanhellemont, F.; Vanhamel, J.; Pieroux, D.; Berkenbosch, S.

    2017-12-01

In an urban environment, nitrogen dioxide is emitted by a multitude of static and moving point sources (cars, industry, power plants, heating systems,…). Air quality models generally rely on a limited number of monitoring stations which neither capture the whole pattern nor allow for full validation. So far, there has been a lack of instruments capable of measuring NO2 fields with the necessary spatio-temporal resolution above major point sources (power plants) or more extended ones (cities). We have developed a new type of passive remote sensing instrument aimed at the measurement of 2-D distributions of NO2 slant column densities (SCDs) with high spatial (meters) and temporal (minutes) resolution. The measurement principle has some similarities with the popular filter-based SO2 camera (used in volcanic and industrial sulfur emission monitoring) in that it relies on spectral images taken at wavelengths where the molecule's absorption cross section differs. But contrary to the SO2 camera, the spectral selection is performed by an acousto-optical tunable filter (AOTF) capable of resolving the target molecule's spectral features. A first prototype was successfully tested on the plume of a coal-fired power plant in Romania, revealing the dynamics of NO2 formation in the early plume. A lighter version of the NO2 camera is now being tested on other targets, such as oil refineries and urban air masses.
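The two-wavelength retrieval behind such an instrument follows the Beer-Lambert law of differential absorption. A rough numerical sketch of that principle (all intensities and cross-section values below are hypothetical, not the instrument's calibration):

```python
import math

def slant_column_density(i_on, i_off, sigma_on, sigma_off):
    """Retrieve a slant column density (molec/cm^2) from intensities at
    a strongly absorbing ('on') and weakly absorbing ('off') wavelength,
    via Beer-Lambert: I_on/I_off = exp(-(sigma_on - sigma_off) * SCD).
    """
    return math.log(i_off / i_on) / (sigma_on - sigma_off)

# Illustrative numbers only: cross sections in cm^2/molec
scd = slant_column_density(i_on=0.8, i_off=1.0,
                           sigma_on=5e-19, sigma_off=1e-19)
```

In a camera, this calculation would be applied per pixel to the pair of spectral images selected by the tunable filter.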

  11. Improvement of passive THz camera images

    Science.gov (United States)

    Kowalski, Marcin; Piszczek, Marek; Palka, Norbert; Szustakowski, Mieczyslaw

    2012-10-01

Terahertz technology is one of the emerging technologies that has the potential to change our lives. There are many attractive applications in fields like security, astronomy, biology and medicine. Until recent years, terahertz (THz) waves were an undiscovered, or more importantly, an unexploited region of the electromagnetic spectrum. The reason for this was the difficulty of generating and detecting THz waves. Recent advances in hardware technology have started to open up the field to new applications such as THz imaging. THz waves can penetrate various materials. However, automated processing of THz images can be challenging. The THz frequency band is especially well suited to clothes penetration because this radiation has no harmful ionizing effects and is thus safe for human beings. Strong technological development in this band has produced a few interesting devices. Even though the development of THz cameras is an emerging topic, commercially available passive cameras still offer images of poor quality, mainly because of their low resolution and low detector sensitivity. THz image processing is therefore a challenging and urgent topic. Digital THz image processing is a promising and cost-effective way to serve demanding security and defense applications. In the article we demonstrate the results of image quality enhancement and image fusion of images captured by a commercially available passive THz camera by means of various combined methods. Our research is focused on the detection of dangerous objects - guns, knives and bombs - hidden under some popular types of clothing.

  12. Remote hardware-reconfigurable robotic camera

    Science.gov (United States)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.
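Low-level vision operations of the kind mapped onto such an FPGA pipeline are typically window-based convolutions. A software sketch of one such operation, a plain Sobel edge-detection pass (image values hypothetical; the actual hardware architecture is not reproduced here):

```python
def sobel_magnitude(img):
    """L1 gradient magnitude from 3x3 Sobel kernels - the kind of
    window operation an FPGA vision pipeline streams over pixels."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)  # cheap L1 magnitude
    return out

# A vertical step edge produces strong responses along the boundary
img = [[0, 0, 9, 9]] * 4
edges = sobel_magnitude(img)
```

In hardware, the same per-pixel arithmetic would run in a streaming fashion with line buffers rather than nested loops.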

  13. Linearity correction device for a scintillation camera

    Energy Technology Data Exchange (ETDEWEB)

    Lange, Kai

    1978-06-16

This invention concerns scintillation cameras, also called gamma cameras. It particularly covers improvements in the resolution and uniformity of these cameras. Briefly, in the linearity correction device of the invention, the voltage signals of different amplitudes produced by the preamplifiers of all the photomultiplier tubes are summed, and the resulting signal is used to generate bias voltages that represent predetermined percentages of the sum signal. In one design, pairs of transistors are blocked when the output signal of the corresponding preamplifier is below a certain point on its gain curve. When the summed energy of a given scintillation exceeds the level corresponding to a first percentage of the total signal, the first transistor of each pair on each line is unblocked, thereby modifying the gain and the slope of the curve. When the total energy of an event exceeds the next preset level, the second transistor is unblocked to alter the shape again, so that the curve shows two break points. If need be, the device can be designed to provide more break points at increasingly higher energy levels. Once the signals have been processed as described above, they may be used for calculating the coordinates of the scintillation by one of the conventional methods.
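The break-point scheme amounts to a piecewise-linear transfer curve whose slope changes each time the summed signal crosses a preset level. A toy numerical sketch (thresholds and gains below are illustrative, not the patent's values):

```python
def corrected_signal(v_sum, breaks=(2.0, 4.0), gains=(1.0, 0.8, 0.6)):
    """Piecewise-linear response: each time the summed PMT signal
    crosses a break point, a transistor pair unblocks and the slope
    (gain) of the transfer curve changes. Values illustrative."""
    out = 0.0
    prev = 0.0
    for brk, g in zip(breaks, gains):        # segments below each break
        out += g * max(0.0, min(v_sum, brk) - prev)
        prev = brk
    out += gains[-1] * max(0.0, v_sum - breaks[-1])  # above last break
    return out
```

Below the first break the response is linear with unit gain; each threshold crossed afterwards bends the curve, e.g. `corrected_signal(5.0)` accumulates three segments at three different slopes.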

  14. Automatic multi-camera calibration for deployable positioning systems

    Science.gov (United States)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
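The 5-point method estimates the essential matrix E = [t]×R between an intrinsically calibrated camera pair. A minimal sketch of the epipolar constraint that underlies it, using a hypothetical relative pose rather than the paper's estimator:

```python
import math

def skew(t):
    """3x3 skew-symmetric matrix: matvec(skew(t), x) == cross(t, x)."""
    return [[0.0, -t[2], t[1]],
            [t[2], 0.0, -t[0]],
            [-t[1], t[0], 0.0]]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical relative pose: small yaw rotation R, unit baseline t.
yaw = 0.1
R = [[math.cos(yaw), 0.0, math.sin(yaw)],
     [0.0, 1.0, 0.0],
     [-math.sin(yaw), 0.0, math.cos(yaw)]]
t = [1.0, 0.0, 0.0]
E = matmul(skew(t), R)          # essential matrix E = [t]x R

# Any 3D point seen by both cameras satisfies the epipolar
# constraint x2^T E x1 = 0 in normalized image coordinates.
X = [0.3, -0.2, 4.0]                              # point in camera-1 frame
X2 = [a + b for a, b in zip(matvec(R, X), t)]     # same point in camera 2
x1 = [c / X[2] for c in X]
x2 = [c / X2[2] for c in X2]
residual = dot(x2, matvec(E, x1))
```

The 5-point solver works in the opposite direction: it finds the E that drives this residual to zero for five point correspondences, after which R and t can be recovered by decomposition.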

  15. Camera Network Coverage Improving by Particle Swarm Optimization

    NARCIS (Netherlands)

    Xu, Y.C.; Lei, B.; Hendriks, E.A.

    2011-01-01

    This paper studies how to improve the field of view (FOV) coverage of a camera network. We focus on a special but practical scenario where the cameras are randomly scattered in a wide area and each camera may adjust its orientation but cannot move in any direction. We propose a particle swarm
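The truncated abstract points to a particle-swarm optimizer over camera orientations. A minimal single-parameter sketch of the idea (one fixed camera adjusting only its pan angle over hypothetical target points; the paper's full multi-camera formulation is not reproduced):

```python
import math, random

random.seed(0)

# Fixed camera at the origin with a 60-degree field of view; the only
# free parameter is its pan angle. Targets are hypothetical points.
TARGETS = [(1, 0.2), (1, -0.1), (0.5, 0.6), (-1, 0.3)]
HALF_FOV = math.radians(30)

def coverage(pan):
    """Number of targets inside the camera's angular field of view."""
    n = 0
    for x, y in TARGETS:
        diff = (math.atan2(y, x) - pan + math.pi) % (2 * math.pi) - math.pi
        n += abs(diff) <= HALF_FOV
    return n

def pso(n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm maximizing coverage over the pan angle."""
    pos = [random.uniform(-math.pi, math.pi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]
    pbest_f = [coverage(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            f = coverage(pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i], f
                if f > gbest_f:
                    gbest, gbest_f = pos[i], f
    return gbest, gbest_f

best_pan, best_cov = pso()
```

The real problem optimizes one orientation per camera jointly, so each particle would be a vector of angles rather than a scalar, but the update equations are unchanged.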

  16. A generic model for camera based intelligent road crowd control ...

    African Journals Online (AJOL)

This research proposes a model for intelligent traffic flow control implementing camera-based surveillance and a feedback system. A series of cameras is set a minimum of three signals ahead of the target junction. A complete software system is developed to help integrate the multiple cameras on the road as feedback to ...

  17. MISR L1B3 Radiometric Camera-by-camera Cloud Mask Product subset for the RICO region V004

    Data.gov (United States)

    National Aeronautics and Space Administration — This file contains the Radiometric camera-by-camera Cloud Mask dataset over the RICO region. It is used to determine whether a scene is classified as clear or...

  18. Modeling and simulation of gamma camera

    International Nuclear Information System (INIS)

    Singh, B.; Kataria, S.K.; Samuel, A.M.

    2002-08-01

Simulation techniques play a vital role in the design of sophisticated instruments and also in the training of operating and maintenance staff. Gamma camera systems have been used for functional imaging in nuclear medicine. Functional images are derived from external counting of a gamma-emitting radioactive tracer that, after introduction into the body, mimics the behavior of a native biochemical compound. The position-sensitive detector yields the coordinates of the gamma ray's interaction with the detector, which are used to estimate the point of gamma ray emission within the tracer distribution space. This advanced imaging device is thus dependent on the performance of algorithms for coordinate computation, estimation of the point of emission, image generation and display of the image data. Contemporary systems also have protocols for quality control and clinical evaluation of imaging studies. Simulation of this processing leads to an understanding of basic camera design problems. This report describes a PC-based package for the design and simulation of a gamma camera, along with options for simulating data acquisition and quality control of imaging studies. Image display and data processing, the other options implemented in SIMCAM, will be described in separate reports (under preparation). Gamma camera modeling and simulation in SIMCAM has preset configurations of the design parameters for various sizes of crystal detector, with the option to pack the PMTs on a hexagonal or square lattice. Different algorithms for coordinate computation and spatial distortion removal are allowed, in addition to simulation of the energy correction circuit. The user can simulate different static, dynamic, MUGA and SPECT studies. The acquired/simulated data is processed for quality control and clinical evaluation of the imaging studies. Results show that the program can be used to assess these performances, and also that variations in performance parameters can be assessed due to the induced
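The coordinate computation such simulators model is classically Anger logic: the scintillation position is estimated as the energy-weighted centroid of the photomultiplier-tube signals. A minimal sketch (PMT layout and signal amplitudes below are hypothetical):

```python
def anger_position(pmt_xy, signals):
    """Classic Anger-logic estimate of the scintillation position:
    the energy-weighted centroid of the PMT signals.
    pmt_xy  -- list of (x, y) PMT centre coordinates
    signals -- list of PMT signal amplitudes
    """
    z = sum(signals)                                   # total energy signal Z
    x = sum(px * s for (px, _), s in zip(pmt_xy, signals)) / z
    y = sum(py * s for (_, py), s in zip(pmt_xy, signals)) / z
    return x, y, z

# Three PMTs on a line; the event lies nearest the middle tube.
x, y, z = anger_position([(-1, 0), (0, 0), (1, 0)], [0.2, 0.6, 0.2])
```

The Z sum doubles as the energy signal used for windowing, and spatial distortion removal is then applied on top of this raw (x, y) estimate.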

  19. Stereoscopy in diagnostic radiology and procedure planning: does stereoscopic assessment of volume-rendered CT angiograms lead to more accurate characterisation of cerebral aneurysms compared with traditional monoscopic viewing?

    International Nuclear Information System (INIS)

    Stewart, Nikolas; Lock, Gregory; Coucher, John; Hopcraft, Anthony

    2014-01-01

Stereoscopic vision is a critical part of the human visual system, conveying more information than two-dimensional, monoscopic observation alone. This study aimed to quantify the contribution of stereoscopy in assessment of radiographic data, using widely available three-dimensional (3D)-capable display monitors by assessing whether stereoscopic viewing improved the characterisation of cerebral aneurysms. Nine radiology registrars were shown 40 different volume-rendered (VR) models of cerebral computed tomography angiograms (CTAs), each in both monoscopic and stereoscopic format and then asked to record aneurysm characteristics on short multiple-choice answer sheets. The monitor used was a current model commercially available 3D television. Responses were marked against a gold standard of assessments made by a consultant radiologist, using the original CT planar images on a diagnostic radiology computer workstation. The participants' results were fairly homogeneous, with most showing no difference in diagnosis using stereoscopic VR models. One participant performed better on the monoscopic VR models. On average, monoscopic VRs achieved a slightly better diagnosis by 2.0%. Stereoscopy has a long history, but it has only recently become technically feasible for stored cross-sectional data to be adequately reformatted and displayed in this format. Scant literature exists to quantify the technology's possible contribution to medical imaging - this study attempts to build on this limited knowledge base and promote discussion within the field. Stereoscopic viewing of images should be further investigated and may well eventually find a permanent place in procedural and diagnostic medical imaging.

  20. Traveling wave deflector design for femtosecond streak camera

    International Nuclear Information System (INIS)

    Pei, Chengquan; Wu, Shengli; Luo, Duan; Wen, Wenlong; Xu, Junkai; Tian, Jinshou; Zhang, Minrui; Chen, Pin; Chen, Jianzhong; Liu, Rong

    2017-01-01

In this paper, a traveling wave deflector (TWD) with a slow-wave property induced by a microstrip transmission line is proposed for femtosecond streak cameras. The bandwidth and dispersion properties were simulated. In addition, the dynamic temporal resolution of the femtosecond camera was simulated by CST software. The results showed that with the proposed TWD a femtosecond streak camera can achieve a dynamic temporal resolution of less than 600 fs. Experiments were done to test the femtosecond streak camera, and an 800 fs dynamic temporal resolution was obtained. Guidance is provided for optimizing a femtosecond streak camera to obtain higher temporal resolution.

  1. Traveling wave deflector design for femtosecond streak camera

    Energy Technology Data Exchange (ETDEWEB)

Pei, Chengquan; Wu, Shengli [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Luo, Duan [Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Wen, Wenlong [Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); Xu, Junkai [Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Tian, Jinshou, E-mail: tianjs@opt.ac.cn [Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, Shanxi 030006 (China); Zhang, Minrui; Chen, Pin [Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Chen, Jianzhong [Key Laboratory for Physical Electronics and Devices of the Ministry of Education, Xi'an Jiaotong University, Xi'an 710049 (China); Liu, Rong [Xi'an Technological University, Xi'an 710021 (China)

    2017-05-21

In this paper, a traveling wave deflector (TWD) with a slow-wave property induced by a microstrip transmission line is proposed for femtosecond streak cameras. The bandwidth and dispersion properties were simulated. In addition, the dynamic temporal resolution of the femtosecond camera was simulated by CST software. The results showed that with the proposed TWD a femtosecond streak camera can achieve a dynamic temporal resolution of less than 600 fs. Experiments were done to test the femtosecond streak camera, and an 800 fs dynamic temporal resolution was obtained. Guidance is provided for optimizing a femtosecond streak camera to obtain higher temporal resolution.

  2. Using a laser scanning camera for reactor inspection

    International Nuclear Information System (INIS)

    Armour, I.A.; Adrain, R.S.; Klewe, R.C.

    1984-01-01

Inspection of nuclear reactors is normally carried out using TV or film cameras. There are, however, several areas where these cameras show considerable shortcomings. To overcome these difficulties, laser scanning cameras have been developed. This type of camera can be used for general visual inspection as well as for the provision of high-resolution video images with high-ratio on- and off-axis zoom capability. In this paper, we outline the construction and operation of a laser scanning camera, give examples of how it has been used in various power stations, and indicate potential future developments. (author)

  3. Augmented reality to the rescue of the minimally invasive surgeon. The usefulness of the interposition of stereoscopic images in the Da Vinci™ robotic console.

    Science.gov (United States)

    Volonté, Francesco; Buchs, Nicolas C; Pugin, François; Spaltenstein, Joël; Schiltz, Boris; Jung, Minoa; Hagen, Monika; Ratib, Osman; Morel, Philippe

    2013-09-01

    Computerized management of medical information and 3D imaging has become the norm in everyday medical practice. Surgeons exploit these emerging technologies and bring information previously confined to the radiology rooms into the operating theatre. The paper reports the authors' experience with integrated stereoscopic 3D-rendered images in the da Vinci surgeon console. Volume-rendered images were obtained from a standard computed tomography dataset using the OsiriX DICOM workstation. A custom OsiriX plugin was created that permitted the 3D-rendered images to be displayed in the da Vinci surgeon console and to appear stereoscopic. These rendered images were displayed in the robotic console using the TilePro multi-input display. The upper part of the screen shows the real endoscopic surgical field and the bottom shows the stereoscopic 3D-rendered images. These are controlled by a 3D joystick installed on the console, and are updated in real time. Five patients underwent a robotic augmented reality-enhanced procedure. The surgeon was able to switch between the classical endoscopic view and a combined virtual view during the procedure. Subjectively, the addition of the rendered images was considered to be an undeniable help during the dissection phase. With the rapid evolution of robotics, computer-aided surgery is receiving increasing interest. This paper details the authors' experience with 3D-rendered images projected inside the surgical console. The use of this intra-operative mixed reality technology is considered very useful by the surgeon. It has been shown that the usefulness of this technique is a step toward computer-aided surgery that will progress very quickly over the next few years. Copyright © 2012 John Wiley & Sons, Ltd.

  4. ESTABLISHING A STEREOSCOPIC TECHNIQUE FOR DETERMINING THE KINEMATIC PROPERTIES OF SOLAR WIND TRANSIENTS BASED ON A GENERALIZED SELF-SIMILARLY EXPANDING CIRCULAR GEOMETRY

    International Nuclear Information System (INIS)

    Davies, J. A.; Perry, C. H.; Harrison, R. A.; Trines, R. M. G. M.; Lugaz, N.; Möstl, C.; Liu, Y. D.; Steed, K.

    2013-01-01

The twin-spacecraft STEREO mission has enabled simultaneous white-light imaging of the solar corona and inner heliosphere from multiple vantage points. This has led to the development of numerous stereoscopic techniques to investigate the three-dimensional structure and kinematics of solar wind transients such as coronal mass ejections (CMEs). Two such methods—triangulation and the tangent to a sphere—can be used to determine time profiles of the propagation direction and radial distance (and thereby radial speed) of a solar wind transient as it travels through the inner heliosphere, based on its time-elongation profile viewed by two observers. These techniques are founded on the assumption that the transient can be characterized as a point source (fixed φ, FP, approximation) or a circle attached to Sun-center (harmonic mean, HM, approximation), respectively. These geometries constitute extreme descriptions of solar wind transients, in terms of their cross-sectional extent. Here, we present the stereoscopic expressions necessary to derive propagation direction and radial distance/speed profiles of such transients based on the more generalized self-similar expansion (SSE) geometry, for which the FP and HM geometries form the limiting cases; our implementation of these equations is termed the stereoscopic SSE method. We apply the technique to two Earth-directed CMEs from different phases of the STEREO mission, the well-studied event of 2008 December and a more recent event from 2012 March. The latter CME was fast, with an initial speed exceeding 2000 km s⁻¹, and highly geoeffective, in stark contrast to the slow and ineffectual 2008 December CME.
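In the FP limit, one standard form of the geometry relates the transient's radial distance r to the observed elongation ε via the Sun-observer-transient triangle: r = d sin ε / sin(ε + φ), with d the observer's heliocentric distance and φ the propagation angle. A self-consistency check with synthetic numbers (all values hypothetical):

```python
import math

def fp_radial_distance(d_obs, elongation, phi):
    """Radial distance of a point-like (fixed-phi) transient:
    r = d * sin(eps) / sin(eps + phi), where eps is the elongation
    seen by the observer and phi the propagation angle from the
    observer-Sun line. Units follow d_obs."""
    return d_obs * math.sin(elongation) / math.sin(elongation + phi)

# Synthetic check: place a transient at r = 0.5 AU, phi = 30 deg, as
# seen by an observer at 1 AU; invert the triangle for the elongation,
# then recover r from the FP expression.
d, phi, r_true = 1.0, math.radians(30), 0.5
eps = math.atan2(r_true * math.sin(phi), d - r_true * math.cos(phi))
r = fp_radial_distance(d, eps, phi)
```

The stereoscopic variants solve for φ from two simultaneous elongation measurements instead of assuming it, and the SSE geometry generalizes the same triangle to a self-similarly expanding circle.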

  5. Diffuse nitrogen loss simulation and impact assessment of stereoscopic agriculture pattern by integrated water system model and consideration of multiple existence forms

    Science.gov (United States)

    Zhang, Yongyong; Gao, Yang; Yu, Qiang

    2017-09-01

Agricultural nitrogen loss has become an increasingly important source of water quality deterioration and eutrophication, and even threatens water safety for humanity. Nitrogen dynamics are still too complicated to be well captured at the watershed scale, owing to nitrogen's multiple existence forms and instability and to the disturbance of agricultural management practices. Stereoscopic agriculture is a novel agricultural planting pattern for the efficient use of local natural resources (e.g., water, land, sunshine, heat and fertilizer). It is widely promoted as a high-yield system and can obtain considerable economic benefits, particularly in China. However, its implications for environmental quality are not clear. In our study, the Qianyanzhou station, famous for its stereoscopic agriculture pattern in Southern China, and an experimental watershed there were selected as the study area. Regional characteristics of runoff and nitrogen losses were simulated by an integrated water system model (HEQM) with multi-objective calibration, and multiple agricultural practices were assessed to find an effective approach to reducing diffuse nitrogen losses. Results showed that daily variations of runoff and nitrogen forms were well reproduced throughout the watershed, i.e., satisfactory performance for ammonium and nitrate nitrogen (NH4-N and NO3-N) loads, good performance for runoff and organic nitrogen (ON) load, and very good performance for total nitrogen (TN) load. The average loss coefficient was 62.74 kg/ha for NH4-N, 0.98 kg/ha for NO3-N, 0.0004 kg/ha for ON and 63.80 kg/ha for TN. The dominant form of nitrogen loss was NH4-N, due to the applied fertilizers, and the most affected zones aggregated in the middle and downstream regions covered by paddy and orange orchard. To control diffuse nitrogen losses, the most effective practice for the Qianyanzhou stereoscopic agriculture pattern was to reduce the farmland planting scale in the valley by afforestation, particularly for orchard in the

  6. Effects of Intraluminal Thrombus on Patient-Specific Abdominal Aortic Aneurysm Hemodynamics via Stereoscopic Particle Image Velocimetry and Computational Fluid Dynamics Modeling

    Science.gov (United States)

    Chen, Chia-Yuan; Antón, Raúl; Hung, Ming-yang; Menon, Prahlad; Finol, Ender A.; Pekkan, Kerem

    2014-01-01

The pathology of the human abdominal aortic aneurysm (AAA) and its relationship to the later complication of intraluminal thrombus (ILT) formation remains unclear. The hemodynamics in the diseased abdominal aorta are hypothesized to be a key contributor to the formation and growth of ILT. The objective of this investigation is to establish a reliable 3D flow visualization method with corresponding validation tests with high confidence in order to provide insight into the basic hemodynamic features for a better understanding of hemodynamics in AAA pathology and seek potential treatment for AAA diseases. A stereoscopic particle image velocimetry (PIV) experiment was conducted using transparent patient-specific experimental AAA models (with and without ILT) at three axial planes. Results show that before ILT formation, a 3D vortex was generated in the AAA phantom. This geometry-related vortex was not observed after the formation of ILT, indicating its possible role in the subsequent appearance of ILT in this patient. It may indicate that a longer residence time of recirculated blood flow in the aortic lumen due to this vortex caused sufficient shear-induced platelet activation to develop ILT and maintain uniform flow conditions. Additionally, two computational fluid dynamics (CFD) modeling codes (Fluent and an in-house cardiovascular CFD code) were compared with the two-dimensional, three-component velocity stereoscopic PIV data. Results showed that correlation coefficients of the out-of-plane velocity data between PIV and both CFD methods are greater than 0.85, demonstrating good quantitative agreement. The stereoscopic PIV study can be utilized as test case templates for ongoing efforts in cardiovascular CFD solver development. Likewise, it is envisaged that the patient-specific data may provide a benchmark for further studying hemodynamics of actual AAA, ILT, and their convolution effects under physiological conditions for clinical applications. PMID:24316984
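The PIV-versus-CFD comparison reduces to a Pearson correlation coefficient between matched velocity samples. A self-contained sketch of that step (the sample values below are hypothetical, not the study's data):

```python
def pearson(a, b):
    """Pearson correlation coefficient between two equally sampled
    velocity-component fields, flattened to 1-D lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# A CFD field that tracks the PIV field closely should score near 1.
piv = [0.0, 0.5, 1.0, 0.8, 0.2]
cfd = [0.1, 0.45, 0.95, 0.85, 0.15]
r = pearson(piv, cfd)
```

In practice the two fields would first be interpolated onto a common grid at each measurement plane before computing r.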

  7. Long wavelength infrared camera (LWIRC): a 10 micron camera for the Keck Telescope

    Energy Technology Data Exchange (ETDEWEB)

    Wishnow, E.H.; Danchi, W.C.; Tuthill, P.; Wurtz, R.; Jernigan, J.G.; Arens, J.F.

    1998-05-01

The Long Wavelength Infrared Camera (LWIRC) is a facility instrument for the Keck Observatory designed to operate at the f/25 forward Cassegrain focus of the Keck I telescope. The camera operates over the wavelength band 7-13 µm using ZnSe transmissive optics. A set of filters, a circular variable filter (CVF), and a mid-infrared polarizer are available, as are three plate scales: 0.05″, 0.10″, 0.21″ per pixel. The camera focal plane array and optics are cooled using liquid helium. The system has been refurbished with a 128 x 128 pixel Si:As detector array. The electronics readout system used to clock the array is compatible with both the hardware and software of the other Keck infrared instruments NIRC and LWS. A new pre-amplifier/A-D converter has been designed and constructed which decreases greatly the system susceptibility to noise.
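From the quoted plate scales and the 128 x 128 array size, the linear field of view at each scale follows directly (a trivial arithmetic check, not from the source):

```python
def field_of_view_arcsec(pixels, plate_scale):
    """Linear field of view of a square array, in arcseconds,
    given the per-pixel plate scale in arcsec/pixel."""
    return pixels * plate_scale

# 128-pixel array at each of the three LWIRC plate scales
fovs = [field_of_view_arcsec(128, s) for s in (0.05, 0.10, 0.21)]
```

That is roughly 6.4″, 12.8″ and 26.9″ on a side for the three scales.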

  8. Preliminary field evaluation of solid state cameras for security applications

    International Nuclear Information System (INIS)

    1987-01-01

    Recent developments in solid state imager technology have resulted in a series of compact, lightweight, all-solid-state closed circuit television (CCTV) cameras. Although it is widely known that the various solid state cameras have less light sensitivity and lower resolution than their vacuum tube counterparts, the potential for having a much longer Mean Time Between Failure (MTBF) for the all-solid-state cameras is generating considerable interest within the security community. Questions have been raised as to whether the newest and best of the solid state cameras are a viable alternative to the high maintenance vacuum tube cameras in exterior security applications. To help answer these questions, a series of tests were performed by Sandia National Laboratories at various test sites and under several lighting conditions. In general, all-solid-state cameras need to be improved in four areas before they can be used as wholesale replacements for tube cameras in exterior security applications: resolution, sensitivity, contrast, and smear. However, with careful design some of the higher performance cameras can be used for perimeter security systems, and all of the cameras have applications where they are uniquely qualified. Many of the cameras are well suited for interior assessment and surveillance uses, and several of the cameras are well designed as robotics and machine vision devices

  9. Initial laboratory evaluation of color video cameras: Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  10. Presence capture cameras - a new challenge to the image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2016-04-01

Commercial presence capture cameras are coming to market, and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary a lot. Presence capture cameras still face the same quality issues as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system that can record 3D audio-visual reality as it is has to have several camera modules, several microphones, and especially technology that can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range form the basis of virtual reality stream quality. However, the cooperation of several cameras adds a new dimension to these quality factors. New quality features can also be validated: for example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how can the stitching be validated? The work describes the quality factors that remain valid in presence capture cameras and assesses their importance. Moreover, the new challenges of presence capture cameras are investigated from an image and video quality point of view. The work considers how well current measurement methods can be applied to presence capture cameras.

  11. Qualification Tests of Micro-camera Modules for Space Applications

    Science.gov (United States)

    Kimura, Shinichi; Miyasaka, Akira

    Visual capability is very important for space-based activities, for which small, low-cost space cameras are desired. Although cameras for terrestrial applications are continually being improved, little progress has been made on cameras used in space, which must be extremely robust to withstand harsh environments. This study focuses on commercial off-the-shelf (COTS) CMOS digital cameras because they are very small and are based on an established mass-market technology. Radiation and ultrahigh-vacuum tests were conducted on a small COTS camera that weighs less than 100 mg (including optics). This paper presents the results of the qualification tests for COTS cameras and for a small, low-cost COTS-based space camera.

  12. The making of analog module for gamma camera interface

    International Nuclear Information System (INIS)

    Yulinarsari, Leli; Rl, Tjutju; Susila, Atang; Sukandar

    2003-01-01

    An analog module for a gamma camera interface has been developed. To computerize a 37-PMT planar gamma camera, interface hardware and software linking the camera to a PC were developed. With this interface, the gamma camera image information (originally an analog signal) is converted to a digital signal, so that data acquisition, image quality enhancement, data analysis, and database processing can be carried out with the help of computers. The gamma camera produces three main signals: X, Y, and Z. This analog module digitizes the analog X and Y signals, which convey position information from the gamma camera crystal. Analog-to-digital conversion is performed by two 12-bit ADCs with a conversion time of 800 ns each; the conversion of each X and Y coordinate is synchronized by a suitable strobe on signal Z, which gates data acceptance.
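    The acquisition scheme described above — two 12-bit ADCs sampling the X and Y position signals only when the Z strobe asserts a valid event — can be sketched in software. This is a minimal behavioral model, not the module's actual firmware; the 0–5 V reference range is an assumption for illustration.

    ```python
    def adc12(voltage, v_ref=5.0):
        """Quantize an analog voltage in [0, v_ref] to a 12-bit code (0..4095),
        as each of the module's two 12-bit ADCs would."""
        code = int(voltage / v_ref * 4095 + 0.5)  # round to nearest code
        return max(0, min(4095, code))            # clamp to the 12-bit range

    def acquire(samples):
        """Accept an (x, y, z_strobe) sample only when the Z strobe is
        asserted, mimicking strobe-synchronized X/Y conversion."""
        return [(adc12(x), adc12(y)) for x, y, z in samples if z]

    # Two valid events and one rejected (strobe low) sample:
    events = [(2.5, 1.25, True), (3.0, 0.5, False), (0.0, 5.0, True)]
    print(acquire(events))  # -> [(2048, 1024), (0, 4095)]
    ```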

  13. Use of cameras for monitoring visibility impairment

    Science.gov (United States)

    Malm, William; Cismoski, Scott; Prenni, Anthony; Peters, Melanie

    2018-02-01

    Webcams and automated, color photography cameras have been routinely operated in many U.S. national parks and other federal lands as far back as 1988, with a general goal of meeting interpretive needs within the public lands system and communicating effects of haze on scenic vistas to the general public, policy makers, and scientists. Additionally, it would be desirable to extract quantifiable information from these images to document how visibility conditions change over time and space and to further reflect the effects of haze on a scene, in the form of atmospheric extinction, independent of changing lighting conditions due to time of day, year, or cloud cover. Many studies have demonstrated a link between image indexes and visual range or extinction in urban settings where visibility is significantly degraded and where scenes tend to be gray and devoid of color. In relatively clean, clear atmospheric conditions, clouds and lighting conditions can sometimes affect the image radiance field as much or more than the effects of haze. In addition, over the course of many years, cameras have been replaced many times as technology improved or older systems wore out, and therefore camera image pixel density has changed dramatically. It is shown that gradient operators are very sensitive to image resolution while contrast indexes are not. Furthermore, temporal averaging and time of day restrictions allow for developing quantitative relationships between atmospheric extinction and contrast-type indexes even when image resolution has varied over time. Temporal averaging effectively removes the variability of visibility indexes associated with changing cloud cover and weather conditions, and changes in lighting conditions resulting from sun angle effects are best compensated for by restricting averaging to only certain times of the day.
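    The contrast-index-plus-temporal-averaging approach described above can be illustrated with a small sketch. The paper's specific index is not reproduced here; this uses a generic RMS contrast (standard deviation of radiance over its mean) as a stand-in, and the synthetic "clear" and "hazy" scenes are assumptions for demonstration.

    ```python
    import numpy as np

    def rms_contrast(img):
        """Generic contrast index: standard deviation of pixel radiance
        normalized by the mean (dimensionless; insensitive to resolution)."""
        img = img.astype(float)
        return float(img.std() / img.mean())

    def temporal_average(images):
        """Average the per-image contrast index over a time window, which
        suppresses frame-to-frame variability from clouds and weather."""
        return float(np.mean([rms_contrast(im) for im in images]))

    rng = np.random.default_rng(0)
    clear = [100 + 30 * rng.random((64, 64)) for _ in range(10)]  # wide radiance spread
    hazy  = [100 + 5 * rng.random((64, 64)) for _ in range(10)]   # haze flattens radiance
    print(temporal_average(clear) > temporal_average(hazy))  # -> True
    ```

    A contrast index like this depends on the radiance distribution rather than pixel-to-pixel gradients, which is consistent with the finding that contrast indexes remain stable as camera resolution changes while gradient operators do not.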

  14. OCAMS: The OSIRIS-REx Camera Suite

    Science.gov (United States)

    Rizk, B.; Drouet d'Aubigny, C.; Golish, D.; Fellows, C.; Merrill, C.; Smith, P.; Walker, M. S.; Hendershot, J. E.; Hancock, J.; Bailey, S. H.; DellaGiustina, D. N.; Lauretta, D. S.; Tanner, R.; Williams, M.; Harshman, K.; Fitzgibbon, M.; Verts, W.; Chen, J.; Connors, T.; Hamara, D.; Dowd, A.; Lowman, A.; Dubin, M.; Burt, R.; Whiteley, M.; Watson, M.; McMahon, T.; Ward, M.; Booher, D.; Read, M.; Williams, B.; Hunten, M.; Little, E.; Saltzman, T.; Alfred, D.; O'Dougherty, S.; Walthall, M.; Kenagy, K.; Peterson, S.; Crowther, B.; Perry, M. L.; See, C.; Selznick, S.; Sauve, C.; Beiser, M.; Black, W.; Pfisterer, R. N.; Lancaster, A.; Oliver, S.; Oquest, C.; Crowley, D.; Morgan, C.; Castle, C.; Dominguez, R.; Sullivan, M.

    2018-02-01

    The OSIRIS-REx Camera Suite (OCAMS) will acquire images essential to collecting a sample from the surface of Bennu. During proximity operations, these images will document the presence of satellites and plumes, record spin state, enable an accurate model of the asteroid's shape, and identify any surface hazards. They will confirm the presence of sampleable regolith on the surface, observe the sampling event itself, and image the sample head in order to verify its readiness to be stowed. They will document Bennu's history as an example of early solar system material, as a microgravity body with a planetesimal size-scale, and as a carbonaceous object. OCAMS is fitted with three cameras. The MapCam will record color images of Bennu as a point source on approach to the asteroid in order to connect Bennu's ground-based point-source observational record to later higher-resolution surface spectral imaging. The SamCam will document the sample site before, during, and after it is disturbed by the sample mechanism. The PolyCam, using its focus mechanism, will observe the sample site at sub-centimeter resolutions, revealing surface texture and morphology. While their imaging requirements divide naturally between the three cameras, they preserve a strong degree of functional overlap. OCAMS and the other spacecraft instruments will allow the OSIRIS-REx mission to collect a sample from a microgravity body on the same visit during which it was first optically acquired from long range, a useful capability as humanity reaches out to explore near-Earth, Main-Belt and Jupiter Trojan asteroids.

  15. Thermal imaging cameras characteristics and performance

    CERN Document Server

    Williams, Thomas

    2009-01-01

    The ability to see through smoke and mist and the ability to use the variances in temperature to differentiate between targets and their backgrounds are invaluable in military applications and have become major motivators for the further development of thermal imagers. As the potential of thermal imaging is more clearly understood and the cost decreases, the number of industrial and civil applications being exploited is growing quickly. In order to evaluate the suitability of particular thermal imaging cameras for particular applications, it is important to have the means to specify and measure their performance.

  16. Compact optical technique for streak camera calibration

    International Nuclear Information System (INIS)

    Bell, Perry; Griffith, Roger; Hagans, Karla; Lerche, Richard; Allen, Curt; Davies, Terence; Janson, Frans; Justin, Ronald; Marshall, Bruce; Sweningsen, Oliver

    2004-01-01

    To produce accurate data from optical streak cameras requires accurate temporal calibration sources. We have reproduced an older technology for generating optical timing marks that had been lost due to component availability. Many improvements have been made which allow the modern units to service a much larger need. Optical calibrators are now available that produce optical pulse trains of 780 nm wavelength light at frequencies ranging from 0.1 to 10 GHz, with individual pulse widths of approximately 25 ps full width half maximum. Future plans include the development of single units that produce multiple frequencies to cover a wide temporal range, and that are fully controllable via an RS232 interface.

  17. Compact optical technique for streak camera calibration

    Science.gov (United States)

    Bell, Perry; Griffith, Roger; Hagans, Karla; Lerche, Richard; Allen, Curt; Davies, Terence; Janson, Frans; Justin, Ronald; Marshall, Bruce; Sweningsen, Oliver

    2004-10-01

    To produce accurate data from optical streak cameras requires accurate temporal calibration sources. We have reproduced an older technology for generating optical timing marks that had been lost due to component availability. Many improvements have been made which allow the modern units to service a much larger need. Optical calibrators are now available that produce optical pulse trains of 780 nm wavelength light at frequencies ranging from 0.1 to 10 GHz, with individual pulse widths of approximately 25 ps full width half maximum. Future plans include the development of single units that produce multiple frequencies to cover a wide temporal range, and that are fully controllable via an RS232 interface.
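    The calibration frequencies quoted above map directly to timing-mark spacings on the streak record: the spacing is simply the pulse-train period. A quick sketch of that arithmetic:

    ```python
    def mark_spacing_ps(freq_ghz):
        """Temporal spacing between optical timing marks, in picoseconds,
        for a calibration pulse train at the given frequency in GHz."""
        return 1000.0 / freq_ghz  # a 1 GHz period is 1 ns = 1000 ps

    for f in (0.1, 1.0, 10.0):
        print(f, "GHz ->", mark_spacing_ps(f), "ps")
    # 0.1 GHz -> 10000.0 ps; 10.0 GHz -> 100.0 ps
    ```

    At the 10 GHz upper end the 100 ps mark spacing is only a few times the ~25 ps pulse width, which is what lets these calibrators time-stamp the fastest streak sweeps.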

  18. Exact optics - III. Schwarzschild's spectrograph camera revised

    Science.gov (United States)

    Willstrop, R. V.

    2004-03-01

    Karl Schwarzschild identified a system of two mirrors, each defined by conic sections, free of third-order spherical aberration, coma and astigmatism, and with a flat focal surface. He considered it impractical, because the field was too restricted. This system was rediscovered as a quadratic approximation to one of Lynden-Bell's `exact optics' designs which have wider fields. Thus the `exact optics' version has a moderate but useful field, with excellent definition, suitable for a spectrograph camera. The mirrors are strongly aspheric in both the Schwarzschild design and the exact optics version.

  19. Collimator trans-axial tomographic scintillation camera

    International Nuclear Information System (INIS)

    Jaszczak, R.J.

    1977-01-01

    A collimator is provided for a scintillation camera system in which a detector precesses in an orbit about a patient. The collimator is designed to have high resolution and lower sensitivity with respect to radiation traveling in paths lying wholly within planes perpendicular to the cranial-caudal axis of the patient. The collimator has high sensitivity and lower resolution for radiation traveling in other planes. Variations in resolution and sensitivity are achieved by altering the length, spacing, or thickness of the septa of the collimator.
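    The resolution/sensitivity trade controlled by the septa can be quantified with the standard textbook approximations for a parallel-hole collimator; these generic formulas (not taken from this patent) show how lengthening the septa sharpens resolution at the cost of geometric efficiency. The hole diameter, septal thickness, and source distance below are illustrative values.

    ```python
    def resolution_mm(d, L_eff, b):
        """Geometric resolution (mm) of a parallel-hole collimator:
        hole diameter d, effective hole length L_eff, source distance b.
        Smaller value = sharper image; grows with source distance."""
        return d * (L_eff + b) / L_eff

    def efficiency(d, L_eff, t, K=0.26):
        """Approximate geometric efficiency (fraction of emitted photons
        passed); K ~ 0.26 for hexagonal holes, t is septal thickness."""
        return (K * d / L_eff) ** 2 * (d / (d + t)) ** 2

    # Doubling the septa length halves neither quantity symmetrically:
    # resolution improves modestly while efficiency drops fourfold.
    for L in (25.0, 50.0):  # short vs long septa, mm
        print(L, resolution_mm(2.0, L, 100.0), efficiency(2.0, L, 0.2))
    ```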

  20. Picosecond x-ray streak cameras

    Science.gov (United States)

    Averin, V. I.; Bryukhnevich, Gennadii I.; Kolesov, G. V.; Lebedev, Vitaly B.; Miller, V. A.; Saulevich, S. V.; Shulika, A. N.

    1991-04-01

    The first multistage image converter with an X-ray photocathode (UMI-93SR) was designed at VNIIOFI in 1974 [1]. Experiments carried out at IOFAN showed that X-ray electron-optical cameras using this tube provided temporal resolution as fine as 12 picoseconds [2]. Later work led to the creation of separate streak and intensifying tubes. Thus, the PV-003R tube was built on the basis of the UMI-93SR design and fibre-optically coupled to a PMU-2V image intensifier incorporating a microchannel plate.