WorldWideScience

Sample records for solid-state video cameras

  1. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  2. Initial laboratory evaluation of color video cameras: Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Terry, P.L.

    1993-07-01

    Sandia National Laboratories has considerable experience with monochrome video cameras used in alarm assessment video systems. Most of these systems, used for perimeter protection, were designed to classify rather than to identify intruders. The monochrome cameras were selected over color cameras because they have greater sensitivity and resolution. There is a growing interest in the identification function of security video systems for both access control and insider protection. Because color camera technology is rapidly changing and because color information is useful for identification purposes, Sandia National Laboratories has established an on-going program to evaluate the newest color solid-state cameras. Phase One of the Sandia program resulted in the SAND91-2579/1 report titled: Initial Laboratory Evaluation of Color Video Cameras. The report briefly discusses imager chips, color cameras, and monitors, describes the camera selection, details traditional test parameters and procedures, and gives the results reached by evaluating 12 cameras. Here, in Phase Two of the report, we tested 6 additional cameras using traditional methods. In addition, all 18 cameras were tested by newly developed methods. This Phase 2 report details those newly developed test parameters and procedures, and evaluates the results.

  3. Preliminary field evaluation of solid state cameras for security applications

    International Nuclear Information System (INIS)

    1987-01-01

    Recent developments in solid state imager technology have resulted in a series of compact, lightweight, all-solid-state closed circuit television (CCTV) cameras. Although it is widely known that the various solid state cameras have less light sensitivity and lower resolution than their vacuum tube counterparts, the potential for having a much longer Mean Time Between Failure (MTBF) for the all-solid-state cameras is generating considerable interest within the security community. Questions have been raised as to whether the newest and best of the solid state cameras are a viable alternative to the high maintenance vacuum tube cameras in exterior security applications. To help answer these questions, a series of tests were performed by Sandia National Laboratories at various test sites and under several lighting conditions. In general, all-solid-state cameras need to be improved in four areas before they can be used as wholesale replacements for tube cameras in exterior security applications: resolution, sensitivity, contrast, and smear. However, with careful design some of the higher performance cameras can be used for perimeter security systems, and all of the cameras have applications where they are uniquely qualified. Many of the cameras are well suited for interior assessment and surveillance uses, and several of the cameras are well designed as robotics and machine vision devices

  4. Preliminary field evaluation of solid state cameras for security applications

    International Nuclear Information System (INIS)

    Murray, D.W.

    1987-01-01

    Recent developments in solid state imager technology have resulted in a series of compact, lightweight, all-solid-state closed circuit television (CCTV) cameras. Although it is widely known that the various solid state cameras have less light sensitivity and lower resolution than their vacuum tube counterparts, the potential for having a much longer Mean Time Between Failure (MTBF) for the all-solid-state cameras is generating considerable interest within the security community. Questions have been raised as to whether the newest and best of the solid state cameras are a viable alternative to the high maintenance vacuum tube cameras in exterior security applications. To help answer these questions, a series of tests were performed by Sandia National Laboratories at various test sites and under several lighting conditions. The results of these tests as well as a description of the test equipment, test sites, and procedures are presented in this report

  5. State of the art in video system performance

    Science.gov (United States)

    Lewis, Michael J.

    1990-01-01

    The closed circuit television (CCTV) system onboard the Space Shuttle has the following capabilities: camera, video signal switching and routing unit (VSU), and Space Shuttle video tape recorder. However, this system is inadequate for use with many experiments that require video imaging. In order to assess the state of the art in video technology and data storage systems, a survey was conducted of High Resolution, High Frame Rate Video Technology (HHVT) products. The performance of state-of-the-art solid state cameras and image sensors, video recording systems, data transmission devices, and data storage systems is compared graphically against users' requirements.

  6. Camera network video summarization

    Science.gov (United States)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l21-norm minimization. The objective function is two-fold. The first is to capture the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second is to use a capped l21-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets demonstrate the efficacy of our method over state-of-the-art methods.
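
    Loosely illustrating the capped l21-norm term described above: an iteratively reweighted ridge loop that ranks frames by the row norms of a self-expressive coding matrix. The joint embedding of the paper is omitted, and names and parameters (X, lam, cap) are illustrative rather than taken from the source.

        import numpy as np

        def capped_l21_representatives(X, lam=1.0, cap=1.0, iters=30):
            """Rank frames as representatives by (approximately) solving
            min_Z ||X - X Z||_F^2 + lam * sum_i min(||Z_i||_2, cap)
            with an iteratively reweighted ridge loop (simplified sketch)."""
            n = X.shape[1]
            G = X.T @ X                                   # Gram matrix of frame features
            Z = np.linalg.solve(G + 1e-3 * np.eye(n), G)  # ridge start
            for _ in range(iters):
                row_norms = np.linalg.norm(Z, axis=1) + 1e-9
                # capped l2,1 penalty: rows already above the cap stop being shrunk
                w = np.where(row_norms < cap, 1.0 / (2.0 * row_norms), 0.0)
                Z = np.linalg.solve(G + lam * np.diag(w) + 1e-6 * np.eye(n), G)
            scores = np.linalg.norm(Z, axis=1)            # row energy = "representativeness"
            return np.argsort(scores)[::-1]               # frame indices, best first

        # usage: columns of X are per-frame features pooled over a camera network;
        # the top-k indices form the multi-view summary
        X = np.random.rand(128, 200)
        summary = capped_l21_representatives(X)[:10]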

  7. Initial clinical experience with dedicated ultra fast solid state cardiac gamma camera

    International Nuclear Information System (INIS)

    Aland, Nusrat; Lele, V.

    2010-01-01

    Full text: To analyze the imaging and diagnostic performance of a new dedicated ultra fast solid state detector gamma camera and compare it with a standard dual detector gamma camera in myocardial perfusion imaging. Material and Methods: In total, 900 patients underwent myocardial perfusion imaging between 1st February 2010 and 29th August 2010 using either a stress/rest or a rest/stress protocol. There was no age or gender bias (there were 630 males and 270 females). 5 and 15 mCi of 99m Tc-Tetrofosmin/MIBI were injected for the 1st and 2nd part of the study respectively. The waiting period after injection was 20 min for regular stress, 40 min for pharmacological stress, and 40 min after the rest injection. Acquisition was performed on the solid state detector gamma camera for a duration of 5 min and 3 min for the 1st and 2nd part respectively. Interpretation of myocardial perfusion was done and the QGS/QPS protocol was used for EF analysis. Out of these, 20 random patients underwent back to back myocardial perfusion SPECT imaging on a standard dual detector gamma camera on the same day. There was no age or gender bias (there were 9 males, 11 females). Acquisition time was 20 min for each part of the study. Interpretation was done using Autocard and EF analysis with 4DM SPECT. Images obtained were then compared with those of the solid state detector gamma camera. Result: Good quality and high count myocardial perfusion images were obtained with a lower amount of tracer activity on the solid state detector gamma camera. Obese patients also showed good quality images with less tracer activity. Compared to the conventional dual detector gamma camera, images were brighter and showed better contrast with the solid state gamma camera. Right ventricular imaging was better seen. Analysis of diastolic dysfunction was possible with 16 frame gated studies with the solid state gamma camera. Shorter acquisition time with a comfortable position reduced the possibility of patient motion. All cardiac views were obtained with no movement of the

  8. Solid-state framing camera with multiple time frames

    Energy Technology Data Exchange (ETDEWEB)

    Baker, K. L.; Stewart, R. E.; Steele, P. T.; Vernon, S. P.; Hsing, W. W.; Remington, B. A. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)

    2013-10-07

    A high speed solid-state framing camera has been developed which can operate over a wide range of photon energies. This camera measures the two-dimensional spatial profile of the flux incident on a cadmium selenide semiconductor at multiple times. This multi-frame camera has been tested at 3.1 eV and 4.5 keV. The framing camera currently records two frames with a temporal separation between the frames of 5 ps but this separation can be varied between hundreds of femtoseconds up to nanoseconds and the number of frames can be increased by angularly multiplexing the probe beam onto the cadmium selenide semiconductor.

  9. Environmental performance evaluation of an advanced-design solid-state television camera

    Science.gov (United States)

    1979-01-01

    The development of an advanced-design black-and-white solid-state television camera which can survive exposure to space environmental conditions was undertaken. A 380 x 488 element buried-channel CCD is utilized as the image sensor to ensure compatibility with 525-line transmission and display equipment. Specific camera design approaches selected for study and analysis included: (1) component and circuit sensitivity to temperature; (2) circuit board thermal and mechanical design; and (3) CCD temperature control. Preferred approaches were determined and integrated into the final design for two deliverable solid-state TV cameras. One of these cameras was subjected to environmental tests to determine stress limits for exposure to vibration, shock, acceleration, and temperature-vacuum conditions. These tests indicate performance at the design goal limits can be achieved for most of the specified conditions.

  10. Video camera use at nuclear power plants

    International Nuclear Information System (INIS)

    Estabrook, M.L.; Langan, M.O.; Owen, D.E.

    1990-08-01

    A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations, and to determine the equipment used and the benefits realized. Basic closed circuit television (CCTV) camera systems are described and video camera operation principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposable cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use, mainly reduced radiation exposure and increased productivity, are discussed and quantified. 15 refs., 6 figs.

  11. Soft x-ray imaging by a commercial solid-state television camera

    International Nuclear Information System (INIS)

    Matsushima, I.; Koyama, K.; Tanimoto, M.; Yano, M.

    1987-01-01

    A commercial, solid-state television camera has been used to record images of soft x radiation (0.8--12 keV). The performance of the camera is theoretically analyzed and experimentally evaluated in comparison with an x-ray photographic film (Kodak direct exposure film). In application, the camera has been used to provide image patterns of x rays from laser-produced plasmas. It is demonstrated that the camera has several advantages over x-ray photographic film

  12. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R and D activities on high-speed video cameras, which have been carried out at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras by questionnaires and hearings, and (2) on the current availability of cameras of this sort by searching journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996. The sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000, and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is going on and, hopefully, it will be fabricated in the near future. Epoch-making cameras in the history of development of high-speed video cameras by other researchers are also briefly reviewed.

  13. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    The resolution of cameras has been drastically improved in response to the current demand for high-quality digital images. For example, a digital still camera has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution is incompatible with the high frame rate of ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it comes from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs for capturing different spectral information. Our approach is to use sensors with different spatio-temporal resolutions in a single camera cabinet for capturing higher resolution and frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  14. Achieving sub-millimetre precision with a solid-state full-field heterodyning range imaging camera

    Science.gov (United States)

    Dorrington, A. A.; Cree, M. J.; Payne, A. D.; Conroy, R. M.; Carnegie, D. A.

    2007-09-01

    We have developed a full-field solid-state range imaging system capable of capturing range and intensity data simultaneously for every pixel in a scene with sub-millimetre range precision. The system is based on indirect time-of-flight measurements, heterodyning intensity-modulated illumination with a gain-modulated, intensified digital video camera. Sub-millimetre precision to beyond 5 m and 2 mm precision out to 12 m have been achieved. In this paper, we describe the new sub-millimetre class range imaging system in detail, and review the important aspects that have been instrumental in achieving high precision ranging. We also present the results of performance characterization experiments and a method of resolving the range ambiguity problem associated with homodyne and heterodyne ranging systems.
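
    For orientation, the indirect time-of-flight principle mentioned above reduces to converting a measured modulation phase shift into distance; the sketch below uses a nominal modulation frequency (not the system's actual value) and shows where the range ambiguity interval comes from.

        import numpy as np

        C = 299_792_458.0          # speed of light, m/s
        f_mod = 30e6               # assumed modulation frequency (illustrative, 30 MHz)

        def phase_to_range(phase_rad):
            """Indirect time-of-flight: light travels out and back, so
            range = c * phase / (4 * pi * f_mod)."""
            return C * phase_rad / (4.0 * np.pi * f_mod)

        ambiguity_interval = C / (2.0 * f_mod)    # ranges repeat every c / (2 f_mod) metres
        print(phase_to_range(np.pi / 2))          # ~1.25 m for a quarter-cycle phase shift
        print(ambiguity_interval)                 # ~5 m before the measured phase wraps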

  15. Video Chat with Multiple Cameras

    OpenAIRE

    MacCormick, John

    2012-01-01

    The dominant paradigm for video chat employs a single camera at each end of the conversation, but some conversations can be greatly enhanced by using multiple cameras at one or both ends. This paper provides the first rigorous investigation of multi-camera video chat, concentrating especially on the ability of users to switch between views at either end of the conversation. A user study of 23 individuals analyzes the advantages and disadvantages of permitting a user to switch between views at...

  16. Experience with dedicated ultra fast solid state cardiac gamma camera: technologist perspective

    International Nuclear Information System (INIS)

    Parab, Anil; Gaikar, Anil; Patil, Kashinath; Lele, V.

    2010-01-01

    Full text: To describe the technologist perspective of working with an ultra fast solid state gamma camera and a comparison with a conventional dual head gamma camera. Material and Methods: 900 myocardial perfusion scans were carried out on a dedicated solid state detector cardiac camera between 1st February 2010 and 29th August 2010. 27 studies were done back to back on a conventional dual head gamma camera. In 2 cases dual isotope imaging was done (Thallium + 99m Tc-tetrofosmin). A rest-stress protocol was used in 600 patients and a stress-rest protocol in 300. A 1:3 ratio of injected activity was maintained for both protocols (5 mCi for the 1st study and 15 mCi for the second study). For the rest-stress protocol, 5 mCi of 99m Tc-Tetrofosmin was injected at rest; 40 minutes later, a 5 min image was acquired on the solid state detector. The patient was then stressed, and 15 mCi of 99m Tc-Tetrofosmin was injected at peak stress. Images were acquired 20 minutes later for 3 minutes (total duration of study 90-100 min). For the stress-rest protocol, 5 mCi of 99m Tc-Tetrofosmin was injected at peak stress and the 5 mCi images were acquired 20 minutes later. The rest injection of 15 mCi was given 1 hour post stress injection, and rest images were acquired 40 minutes after the rest injection (total duration of study 110-120 min). Results: We observed that even with a lower activity and an acquisition time of 5 min per cardiac scan, the system showed a high sensitivity, with count rates of 2.2-4.7 kcps (10 times more counts than a standard gamma camera). The system gives better energy resolution (< 7%) and better image contrast. Dual isotope imaging is possible. Spatial resolution is 4.3-4.9 mm. Excellent quality images were obtained using low activities (5 mCi/15 mCi) with 1/3rd the acquisition time compared to a conventional dual head gamma camera. Even in obese patients, 7 mCi/21 mCi activity yielded excellent images at 1/3rd the acquisition time. Quick acquisition resulted in greater patient comfort and no motion artifact, also due to non rotation of

  17. Photogrammetric Applications of Immersive Video Cameras

    Science.gov (United States)

    Kwiatek, K.; Tokarczyk, R.

    2014-05-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras; however, there are ways to overcome it, and applying immersive cameras in photogrammetry provides new potential. The paper presents two applications of immersive video in photogrammetry. First, the creation of a low-cost mobile mapping system based on Ladybug®3 and a GPS device is discussed. The number of panoramas is much too high for photogrammetric purposes, as the base line between spherical panoramas is around 1 metre. More than 92 000 panoramas were recorded in the Polish region of Czarny Dunajec, and the measurements from panoramas enable the user to measure outdoor advertising structures and billboards. A new law is being created in order to limit the number of illegal advertising structures in the Polish landscape, and immersive video recorded in a short period of time is a candidate for economical and flexible off-site measurements. The second approach is the generation of 3D video-based reconstructions of heritage sites based on immersive video (structure from immersive video). A mobile camera mounted on a tripod dolly was used to record the interior scene, and the immersive video, separated into thousands of still panoramas, was converted into 3D objects using Agisoft Photoscan Professional. The findings from these experiments demonstrate that immersive photogrammetry is a flexible and prompt method of 3D modelling and provides promising features for mobile mapping systems.

  18. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras, however there are ways to ov...

  19. Camera Control and Geo-Registration for Video Sensor Networks

    Science.gov (United States)

    Davis, James W.

    With the use of large video networks, there is a need to coordinate and interpret the video imagery for decision support systems with the goal of reducing the cognitive and perceptual overload of human operators. We present computer vision strategies that enable efficient control and management of cameras to effectively monitor wide-coverage areas, and examine the framework within an actual multi-camera outdoor urban video surveillance network. First, we construct a robust and precise camera control model for commercial pan-tilt-zoom (PTZ) video cameras. In addition to providing a complete functional control mapping for PTZ repositioning, the model can be used to generate wide-view spherical panoramic viewspaces for the cameras. Using the individual camera control models, we next individually map the spherical panoramic viewspace of each camera to a large aerial orthophotograph of the scene. The result provides a unified geo-referenced map representation to permit automatic (and manual) video control and exploitation of cameras in a coordinated manner. The combined framework provides new capabilities for video sensor networks that are of significance and benefit to the broad surveillance/security community.

  20. Gamma camera system with composite solid state detector

    International Nuclear Information System (INIS)

    Gerber, M.S.; Miller, D.W.

    1977-01-01

    A composite solid-state detector is described for utilization within gamma cameras. The detector is formed of an array of detector crystals, the opposed surfaces of each of which incorporate an impedance-derived configuration for determining one coordinate of the location of discrete photons impinging upon the detector. A combined read-out for all detectors within the composite array is achieved through a row and column interconnection of the impedance configurations. Utilizing the read-outs for respective sides of the discrete crystals, the resultant time-constant characteristic for the composite detector crystal array remains essentially that of individual crystal detectors.

  1. Video Sharing System Based on Wi-Fi Camera

    OpenAIRE

    Qidi Lin; Hewei Yu; Jinbin Huang; Weile Liang

    2015-01-01

    This paper introduces a video sharing platform based on Wi-Fi, which consists of a camera, a mobile phone and a PC server. This platform can receive a wireless signal from the camera and show on the mobile phone the live video captured by the camera. In addition, it is able to send commands to the camera and control the camera's holder to rotate. The platform can be applied to interactive teaching, monitoring of dangerous areas, and so on. Testing results show that the platform can share ...

  2. Remote control video cameras on a suborbital rocket

    International Nuclear Information System (INIS)

    Wessling, Francis C.

    1997-01-01

    Three video cameras on board a sub-orbital rocket were controlled in real time from the ground during a fifteen minute flight from White Sands Missile Range in New Mexico. Telemetry communications with the rocket allowed the control of the cameras. The pan, tilt, zoom, focus, and iris of two of the camera lenses, the power and record functions of the three cameras, and the selection of the analog video signal sent to the ground were controlled by separate microprocessors. A microprocessor was used to record data from three miniature accelerometers, temperature sensors and a differential pressure sensor. In addition to the selected video signal sent to the ground and recorded there, the video signals from the three cameras were also recorded on board the rocket. These recorders were mounted inside the pressurized segment of the rocket payload. The lenses, lens control mechanisms, and the three small television cameras were located in a portion of the rocket payload that was exposed to the vacuum of space. The accelerometers were also exposed to the vacuum of space.

  3. Photometric Calibration of Consumer Video Cameras

    Science.gov (United States)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  4. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming the effects of occlusions that could result in an object being in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from the images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time, in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, which is applicable, for example, in wireless sensor networks for surveillance or navigation.

  5. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to the use of cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a large presence of 'digital cameras' aimed more at the home market. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low-level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs which offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application would be such effects as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  6. Improving Photometric Calibration of Meteor Video Camera Systems

    Science.gov (United States)

    Ehlert, Steven; Kingery, Aaron; Suggs, Robert

    2017-01-01

    We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
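
    The zero-point step described above amounts to a robust average of (reference magnitude + 2.5 log10 of measured flux) over the calibration stars, after the linearity correction has been applied. A minimal numpy sketch under that standard photometric relation follows; the catalog values and fluxes are placeholders, not MEO data.

        import numpy as np

        def fit_zero_point(instr_flux, ref_mag):
            """Instrumental magnitude m_inst = -2.5 log10(flux); the zero point ZP
            satisfies ref_mag = m_inst + ZP for each reference star, so ZP is the
            (outlier-resistant) median of ref_mag + 2.5 log10(flux)."""
            instr_flux = np.asarray(instr_flux, dtype=float)
            ref_mag = np.asarray(ref_mag, dtype=float)
            zp_per_star = ref_mag + 2.5 * np.log10(instr_flux)
            return np.median(zp_per_star), np.std(zp_per_star)

        # placeholder measurements: background-subtracted star fluxes (counts) and
        # synthetic catalog magnitudes in the camera bandpass
        zp, scatter = fit_zero_point([12000.0, 5400.0, 870.0], [8.1, 9.0, 11.0])
        meteor_mag = -2.5 * np.log10(2300.0) + zp   # calibrated meteor magnitude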

  7. A video authentication technique

    International Nuclear Information System (INIS)

    Johnson, C.S.

    1987-01-01

    Unattended video surveillance systems are particularly vulnerable to the substitution of false video images into the cable that connects the camera to the video recorder. New technology has made it practical to insert a solid state video memory into the video cable, freeze a video image from the camera, and hold this image as long as desired. Various techniques, such as line supervision and sync detection, have been used to detect video cable tampering. The video authentication technique described in this paper uses the actual video image from the camera as the basis for detecting any image substitution made during the transmission of the video image to the recorder. The technique, designed for unattended video systems, can be used for any video transmission system where a two-way digital data link can be established. The technique uses similar microprocessor circuitry at the video camera and at the video recorder to select sample points in the video image for comparison. The gray scale value of these points is compared at the recorder controller and if the values agree within limits, the image is authenticated. If a significantly different image was substituted, the comparison would fail at a number of points and the video image would not be authenticated. The video authentication system can run as a stand-alone system or at the request of another system
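
    A toy sketch of the comparison logic described above: both ends sample the same pseudo-randomly chosen pixels (derived here from a shared key and the frame number), and the recorder end accepts the frame only if most gray values agree within a tolerance. All names and thresholds are illustrative, not the actual Sandia circuitry.

        import numpy as np

        def sample_points(shape, key, frame_no, n_points=64):
            """Camera and recorder derive identical sample coordinates
            from a shared key and the frame number."""
            rng = np.random.default_rng(hash((key, frame_no)) & 0xFFFFFFFF)
            rows = rng.integers(0, shape[0], n_points)
            cols = rng.integers(0, shape[1], n_points)
            return rows, cols

        def authenticate(camera_frame, recorded_frame, key, frame_no,
                         tolerance=8, min_agreement=0.9):
            """Frame is authentic if enough sampled gray values agree within limits."""
            rows, cols = sample_points(camera_frame.shape, key, frame_no)
            diff = np.abs(camera_frame[rows, cols].astype(int) -
                          recorded_frame[rows, cols].astype(int))
            return np.mean(diff <= tolerance) >= min_agreement

        # usage with a substituted (frozen or false) image: authentication should fail
        live = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
        fake = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
        print(authenticate(live, live, key="shared-secret", frame_no=42))   # True
        print(authenticate(live, fake, key="shared-secret", frame_no=42))   # almost surely False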

  8. Video segmentation and camera motion characterization using compressed data

    Science.gov (United States)

    Milanese, Ruggero; Deguillaume, Frederic; Jacot-Descombes, Alain

    1997-10-01

    We address the problem of automatically extracting visual indexes from videos, in order to provide sophisticated access methods to the contents of a video server. We focus on two tasks, namely the decomposition of a video clip into uniform segments, and the characterization of each shot by camera motion parameters. For the first task we use a Bayesian classification approach to detecting scene cuts by analyzing motion vectors. For the second task a least-squares fitting procedure determines the pan/tilt/zoom camera parameters. In order to guarantee the highest processing speed, all techniques process and analyze MPEG-1 motion vectors directly, without the need for video decompression. Experimental results are reported for a database of news video clips.
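
    A compact sketch of the second task: fitting a simple pan/tilt/zoom global-motion model to block motion vectors by least squares. The three-parameter model below (dx = pan + zoom*x, dy = tilt + zoom*y about the image centre) is one common simplification, not necessarily the exact parameterization used by the authors, and the data are synthetic.

        import numpy as np

        def fit_pan_tilt_zoom(xs, ys, dxs, dys):
            """Least-squares fit of dx = pan + zoom*x, dy = tilt + zoom*y,
            with (x, y) measured from the image centre."""
            n = len(xs)
            A = np.zeros((2 * n, 3))                 # columns: pan, tilt, zoom
            b = np.concatenate([dxs, dys])
            A[:n, 0] = 1.0
            A[:n, 2] = xs
            A[n:, 1] = 1.0
            A[n:, 2] = ys
            params, *_ = np.linalg.lstsq(A, b, rcond=None)
            return params                            # (pan, tilt, zoom)

        # synthetic MPEG-style motion vectors for a 2-pixel pan plus a slight zoom
        xs = np.array([-100.0, 0.0, 100.0, 50.0])
        ys = np.array([-80.0, 40.0, 0.0, 60.0])
        dxs = 2.0 + 0.01 * xs
        dys = 0.0 + 0.01 * ys
        print(fit_pan_tilt_zoom(xs, ys, dxs, dys))   # approximately [2.0, 0.0, 0.01]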

  9. Modulated electron-multiplied fluorescence lifetime imaging microscope: all-solid-state camera for fluorescence lifetime imaging.

    Science.gov (United States)

    Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted

    2012-12-01

    We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge coupled device design in a frequency-domain FLIM system. The first stage of evaluation for the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to do lifetime measurement using our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and a better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.

  10. Video astronomy on the go using video cameras with small telescopes

    CERN Document Server

    Ashley, Joseph

    2017-01-01

    Author Joseph Ashley explains video astronomy's many benefits in this comprehensive reference guide for amateurs. Video astronomy offers a wonderful way to see objects in far greater detail than is possible through an eyepiece, and the ability to use the modern, entry-level video camera to image deep space objects is a wonderful development for urban astronomers in particular, as it helps sidestep the issue of light pollution. The author addresses both the positive attributes of these cameras for deep space imaging as well as the limitations, such as amp glow. The equipment needed for imaging as well as how it is configured is identified with hook-up diagrams and photographs. Imaging techniques are discussed together with image processing (stacking and image enhancement). Video astronomy has evolved to offer great results and great ease of use, and both novices and more experienced amateurs can use this book to find the set-up that works best for them. Flexible and portable, they open up a whole new way...

  11. Video Surveillance using a Multi-Camera Tracking and Fusion System

    OpenAIRE

    Zhang , Zhong; Scanlon , Andrew; Yin , Weihong; Yu , Li; Venetianer , Péter L.

    2008-01-01

    Usage of intelligent video surveillance (IVS) systems is spreading rapidly. These systems are being utilized in a wide range of applications. In most cases, even in multi-camera installations, the video is processed independently in each feed. This paper describes a system that fuses tracking information from multiple cameras, thus vastly expanding its capabilities. The fusion relies on all cameras being calibrated to a site map, while the individual sensors remain lar...

  12. Acceptance/operational test procedure 241-AN-107 Video Camera System

    International Nuclear Information System (INIS)

    Pedersen, L.T.

    1994-01-01

    This procedure will document the satisfactory operation of the 241-AN-107 Video Camera System. The camera assembly, including camera mast, pan-and-tilt unit, camera, and lights, will be installed in Tank 241-AN-107 to monitor activities during the Caustic Addition Project. The camera focus, zoom, and iris remote controls will be functionally tested. The resolution and color rendition of the camera will be verified using standard reference charts. The pan-and-tilt unit will be tested for required ranges of motion, and the camera lights will be functionally tested. The master control station equipment, including the monitor, VCRs, printer, character generator, and video micrometer will be set up and performance tested in accordance with original equipment manufacturer's specifications. The accuracy of the video micrometer to measure objects in the range of 0.25 inches to 67 inches will be verified. The gas drying distribution system will be tested to ensure that a drying gas can be flowed over the camera and lens in the event that condensation forms on these components. This test will be performed by attaching the gas input connector, located in the upper junction box, to a pressurized gas supply and verifying that the check valve, located in the camera housing, opens to exhaust the compressed gas. The 241-AN-107 camera system will also be tested to assure acceptable resolution of the camera imaging components utilizing the camera system lights

  13. C.C.D. readout of a picosecond streak camera with an intensified C.C.D

    International Nuclear Information System (INIS)

    Lemonier, M.; Richard, J.C.; Cavailler, C.; Mens, A.; Raze, G.

    1984-08-01

    This paper deals with a digital streak camera readout device. The device consists of a low light level television camera, made of a solid state C.C.D. array coupled to an image intensifier, associated with a video digitizer coupled to a micro-computer system. The streak camera images are picked up as a video signal, digitized and stored. This system allows the fast recording and the automatic processing of the data provided by the streak tube.

  14. Image-scanning measurement using video dissection cameras

    International Nuclear Information System (INIS)

    Carson, J.S.

    1978-01-01

    A high speed dimensional measuring system capable of scanning a thin film network and determining whether there are conductor widths, resistor widths, or spaces not typical of the design for this product is described. The eye of the system is a conventional TV camera, although such devices as image dissector cameras or solid-state scanners may be used more often in the future. The analog signal from the TV camera is digitized for processing by the computer and is presented to the TV monitor to assist the operator in monitoring the system's operation. Movable stages are required when the field of view of the scanner is less than the size of the object. A minicomputer controls the movement of the stage, and communicates with the digitizer to select picture points that are to be processed. Communications with the system are maintained through a teletype or CRT terminal.

  15. Advancement of thyroid surgery video recording: A comparison between two full HD head mounted video cameras.

    Science.gov (United States)

    Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio

    2017-05-01

    The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department. This is meant to be an attempt to record the real point of view of the magnified vision of the surgeon, so as to make the viewer aware of the difference from naked eye vision. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupe magnification and microsurgical technique: ten were recorded with the GoPro® 4 Session action cam (commercially available) and ten with our new prototype of head mounted video camera. Settings were selected before surgery for both cameras. The recording time is about 1 to 2 h for the GoPro® and 3 to 5 h for our prototype. The average time needed to fit the camera on the surgeon's head and set up the functionality is about 5 min for the GoPro® and 7-8 min for the prototype, mostly due to the HDMI wiring cable. Videos recorded with the prototype require no further editing, which is mandatory for videos recorded with the GoPro® to highlight the surgical details. The present study showed that our prototype video camera, compared with the GoPro® 4 Session, guarantees the best results in terms of surgical video recording quality, provides the viewer the exact perspective of the microsurgeon, and accurately shows his magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might be a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  16. Video content analysis on body-worn cameras for retrospective investigation

    Science.gov (United States)

    Bouma, Henri; Baan, Jan; ter Haar, Frank B.; Eendebak, Pieter T.; den Hollander, Richard J. M.; Burghouts, Gertjan J.; Wijn, Remco; van den Broek, Sebastiaan P.; van Rest, Jeroen H. C.

    2015-10-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications and effects, such as the reduction of violence between police and citizens. However, the increased use of bodycams also creates potential challenges. For example: how can end-users extract information from the abundance of video, how can the information be presented, and how can an officer retrieve information efficiently? Nevertheless, such video gives the opportunity to stimulate the professionals' memory, and support complete and accurate reporting. In this paper, we show how video content analysis (VCA) can address these challenges and seize these opportunities. To this end, we focus on methods for creating a complete summary of the video, which allows quick retrieval of relevant fragments. The content analysis for summarization consists of several components, such as stabilization, scene selection, motion estimation, localization, pedestrian tracking and action recognition in the video from a bodycam. The different components and visual representations of summaries are presented for retrospective investigation.

  17. Thermal imagers: from ancient analog video output to state-of-the-art video streaming

    Science.gov (United States)

    Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry

    2013-06-01

    The video output of thermal imagers stayed constant over almost two decades. When the famous Common Modules were employed, a thermal image was at first presented to the observer in the eyepiece only. In the early 1990s TV cameras were attached and the standard output was CCIR. In the civil camera market, output standards changed to digital formats a decade ago, with digital video streaming nowadays being state-of-the-art. The reasons why the output technique in the thermal world stayed unchanged over such a long time are: the very conservative view of the military community, long planning and turn-around times of programs, and a slower growth of the pixel count of TIs in comparison to consumer cameras. With megapixel detectors the CCIR output format is no longer sufficient. The paper discusses the state-of-the-art compression and streaming solutions for TIs.

  18. Engineering task plan for flammable gas atmosphere mobile color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1995-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and testing of the mobile video camera systems. The color video camera systems will be used to observe and record the activities within the vapor space of a tank on a limited exposure basis. The units will be fully mobile and designed for operation in the single-shell flammable gas producing tanks. The objective of this task is to provide two mobile camera systems for use in flammable gas producing single-shell tanks (SSTs) for the Flammable Gas Tank Safety Program. The camera systems will provide observation, video recording, and monitoring of the activities that occur in the vapor space of the applicable tanks. The camera systems will be designed to be totally mobile, capable of deployment up to 6.1 meters into a 4 inch (minimum) riser

  19. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    Directory of Open Access Journals (Sweden)

    Semi Jeon

    2017-02-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems.
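
    To make the path-smoothing step concrete, here is a small sketch of smoothing a 1-D camera path by minimizing data fidelity plus a temporal total-variation penalty, as the abstract describes. It uses a generic smoothed objective and scipy's optimizer rather than the authors' solver, and the weight lam is an assumed parameter.

        import numpy as np
        from scipy.optimize import minimize

        def smooth_path(raw_path, lam=50.0, eps=1e-3):
            """Minimize ||p - raw||^2 + lam * sum sqrt((p[t+1]-p[t])^2 + eps),
            i.e. a smoothed total-variation penalty, to get a stable camera path."""
            raw = np.asarray(raw_path, dtype=float)

            def objective(p):
                d = np.diff(p)
                return np.sum((p - raw) ** 2) + lam * np.sum(np.sqrt(d ** 2 + eps))

            return minimize(objective, raw, method="L-BFGS-B").x

        # shaky horizontal camera trajectory: slow drift plus frame-to-frame jitter
        t = np.arange(120)
        raw = 0.5 * t + np.random.normal(scale=3.0, size=t.size)
        stable = smooth_path(raw)
        correction = stable - raw   # per-frame warp offset applied when rendering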

  20. Optimization of video capturing and tone mapping in video camera systems

    NARCIS (Netherlands)

    Cvetkovic, S.D.

    2011-01-01

    Image enhancement techniques are widely employed in many areas of professional and consumer imaging, machine vision and computational imaging. Image enhancement techniques used in surveillance video cameras are complex systems involving controllable lenses, sensors and advanced signal processing. In

  1. CameraCast: flexible access to remote video sensors

    Science.gov (United States)

    Kong, Jiantao; Ganev, Ivan; Schwan, Karsten; Widener, Patrick

    2007-01-01

    New applications like remote surveillance and online environmental or traffic monitoring are making it increasingly important to provide flexible and protected access to remote video sensor devices. Current systems use application-level codes like web-based solutions to provide such access. This requires adherence to user-level APIs provided by such services, access to remote video information through given application-specific service and server topologies, and that the data being captured and distributed is manipulated by third party service codes. CameraCast is a simple, easily used system-level solution to remote video access. It provides a logical device API so that an application can identically operate on local vs. remote video sensor devices, using its own service and server topologies. In addition, the application can take advantage of API enhancements to protect remote video information, using a capability-based model for differential data protection that offers fine grain control over the information made available to specific codes or machines, thereby limiting their ability to violate privacy or security constraints. Experimental evaluations of CameraCast show that the performance of accessing remote video information approximates that of accesses to local devices, given sufficient networking resources. High performance is also attained when protection restrictions are enforced, due to an efficient kernel-level realization of differential data protection.
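
    The "logical device API" idea above can be pictured as a thin abstraction under which local and remote sensors expose the same call path, with a capability token gating what a remote client may receive. The sketch below is an illustrative Python interface, not the actual CameraCast API, and the device path and token handling are placeholders.

        from abc import ABC, abstractmethod

        class VideoSource(ABC):
            """Logical video device: application code stays identical whether
            the sensor is local or remote (illustrative interface only)."""
            @abstractmethod
            def read_frame(self) -> bytes: ...

        class LocalCamera(VideoSource):
            def __init__(self, device="/dev/video0"):
                self.device = device                      # placeholder capture device
            def read_frame(self) -> bytes:
                with open(self.device, "rb") as dev:
                    return dev.read(640 * 480)            # raw grab, no protection needed

        class RemoteCamera(VideoSource):
            def __init__(self, host: str, capability: str):
                # the capability token models per-client differential data protection:
                # it limits which regions/resolutions this client may receive
                self.host, self.capability = host, capability
            def read_frame(self) -> bytes:
                raise NotImplementedError("network fetch gated by the capability token")

        def snapshot(source: VideoSource) -> bytes:
            return source.read_frame()                    # one call path for both cases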

  2. A model for measurement of noise in CCD digital-video cameras

    International Nuclear Information System (INIS)

    Irie, K; Woodhead, I M; McKinnon, A E; Unsworth, K

    2008-01-01

    This study presents a comprehensive measurement of CCD digital-video camera noise. Knowledge of noise detail within images or video streams allows for the development of more sophisticated algorithms for separating true image content from the noise generated in an image sensor. The robustness and performance of an image-processing algorithm is fundamentally limited by sensor noise. The individual noise sources present in CCD sensors are well understood, but there has been little literature on the development of a complete noise model for CCD digital-video cameras, incorporating the effects of quantization and demosaicing
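
    One standard way to measure the kind of sensor noise discussed above is the photon-transfer (mean-variance) method: from pairs of flat-field frames at several exposure levels, the slope of variance versus mean gives the conversion gain and the intercept the read-noise floor. The sketch below follows that generic procedure; it is not the specific model developed in the paper, which also covers quantization and demosaicing.

        import numpy as np

        def photon_transfer(frame_pairs):
            """frame_pairs: list of (frame_a, frame_b) flat fields at different
            exposure levels. Differencing each pair cancels fixed-pattern noise.
            Returns (gain in DN per electron, read-noise variance in DN^2)."""
            means, variances = [], []
            for a, b in frame_pairs:
                a = a.astype(float); b = b.astype(float)
                means.append(0.5 * (a.mean() + b.mean()))
                variances.append(np.var(a - b) / 2.0)   # /2: differencing doubles variance
            slope, intercept = np.polyfit(means, variances, 1)
            return slope, intercept                      # variance = slope*mean + intercept

        # simulated sensor: Poisson shot noise plus Gaussian read noise (std 2 DN)
        rng = np.random.default_rng(0)
        pairs = []
        for level in [50, 200, 400, 700]:
            a = rng.poisson(level, (256, 256)) + rng.normal(0, 2.0, (256, 256))
            b = rng.poisson(level, (256, 256)) + rng.normal(0, 2.0, (256, 256))
            pairs.append((a, b))
        gain, read_var = photon_transfer(pairs)   # gain ~ 1 DN/e-, read_var ~ 4 DN^2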

  3. Real-time pedestrian detection with the videos of car camera

    Directory of Open Access Journals (Sweden)

    Yunling Zhang

    2015-12-01

    Pedestrians in the vehicle path are in danger of being hit, thus causing severe injury to pedestrians and vehicle occupants. Therefore, real-time pedestrian detection with the video of a vehicle-mounted camera is of great significance to vehicle–pedestrian collision warning and the traffic safety of self-driving cars. In this article, a real-time scheme was proposed based on integral channel features and a graphics processing unit. The proposed method does not need to resize the input image. Moreover, the computationally expensive convolution of the detectors and the input image was converted into the dot product of two larger matrices, which can be computed effectively using a graphics processing unit. The experiments showed that the proposed method could be employed to detect pedestrians in the video of a car camera at 20+ frames per second with acceptable error rates. Thus, it can be applied in real-time detection tasks with the videos of car cameras.
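
    The "convolution as a dot product of two larger matrices" trick mentioned above is commonly known as im2col: detector windows are unrolled into rows so that all detector responses come from one matrix product, which maps well onto a GPU. A small CPU illustration with numpy follows; the sizes are illustrative, not the authors' detector.

        import numpy as np

        def im2col(image, k):
            """Unroll every k x k window of a 2-D image into one row of a matrix."""
            H, W = image.shape
            rows = []
            for y in range(H - k + 1):
                for x in range(W - k + 1):
                    rows.append(image[y:y + k, x:x + k].ravel())
            return np.array(rows)                    # (num_windows, k*k)

        # channel-feature map and a k x k detector template (illustrative sizes)
        k = 8
        feature_map = np.random.rand(64, 48)
        detector = np.random.rand(k, k)

        patches = im2col(feature_map, k)              # one big matrix ...
        scores = patches @ detector.ravel()           # ... one dot product per window
        score_map = scores.reshape(64 - k + 1, 48 - k + 1)
        # identical to sliding-window correlation, but expressed as a single
        # matrix-vector product that a GPU can evaluate efficiently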

  4. Advances in top-down and bottom-up approaches to video-based camera tracking

    OpenAIRE

    Marimón Sanjuán, David

    2007-01-01

    Video-based camera tracking consists in trailing the three dimensional pose followed by a mobile camera using video as sole input. In order to estimate the pose of a camera with respect to a real scene, one or more three dimensional references are needed. Examples of such references are landmarks with known geometric shape, or objects for which a model is generated beforehand. By comparing what is seen by a camera with what is geometrically known from reality, it is possible to recover the po...

  5. Advances in top-down and bottom-up approaches to video-based camera tracking

    OpenAIRE

    Marimón Sanjuán, David; Ebrahimi, Touradj

    2008-01-01

    Video-based camera tracking consists in trailing the three dimensional pose followed by a mobile camera using video as sole input. In order to estimate the pose of a camera with respect to a real scene, one or more three dimensional references are needed. Examples of such references are landmarks with known geometric shape, or objects for which a model is generated beforehand. By comparing what is seen by a camera with what is geometrically known from reality, it is possible to recover the po...

  6. Surgical video recording with a modified GoPro Hero 4 camera.

    Science.gov (United States)

    Lin, Lily Koo

    2016-01-01

    Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. The stock lens mount and lens were removed from a GoPro Hero 4 camera, which was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination.

  7. Semantic Information Extraction of Lanes Based on Onboard Camera Videos

    Science.gov (United States)

    Tang, L.; Deng, T.; Ren, C.

    2018-04-01

    In the field of autonomous driving, semantic information of lanes is very important. This paper proposes a method of automatic detection of lanes and extraction of semantic information from onboard camera videos. The proposed method firstly detects the edges of lanes by the grayscale gradient direction, and improves the Probabilistic Hough transform to fit them; then, it uses the vanishing point principle to calculate the lane geometrical position, and uses lane characteristics to extract lane semantic information by the classification of decision trees. In the experiment, 216 road video images captured by a camera mounted onboard a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
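
    As a rough illustration of the detection pipeline described above, the sketch below applies OpenCV's probabilistic Hough transform to an edge map and estimates a vanishing point as the least-squares intersection of the fitted segments. Canny edge detection and the specific thresholds are assumptions standing in for the paper's grayscale-gradient edge step, and the decision-tree semantic classification is not shown.

```python
import cv2
import numpy as np

def detect_lane_segments(frame_bgr):
    """Detect candidate lane segments in one onboard-camera frame.
    Canny is used here as a stand-in for the paper's gradient-based edge step."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Probabilistic Hough transform fits straight segments to the edge map.
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=40, minLineLength=30, maxLineGap=10)
    return [] if segments is None else [s[0] for s in segments]

def vanishing_point(segments):
    """Rough vanishing-point estimate: least-squares intersection of the segments.
    Assumes at least two non-parallel segments were found."""
    A, b = [], []
    for x1, y1, x2, y2 in segments:
        dx, dy = x2 - x1, y2 - y1
        norm = np.hypot(dx, dy)
        if norm < 1e-6:
            continue
        # A point (x, y) on the line satisfies -dy*x + dx*y = -dy*x1 + dx*y1.
        A.append([-dy / norm, dx / norm])
        b.append((-dy * x1 + dx * y1) / norm)
    vp, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return vp  # (x, y) in image coordinates
```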

  8. ALGORITHM OF PLACEMENT OF VIDEO SURVEILLANCE CAMERAS AND ITS SOFTWARE IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Loktev Alexey Alexeevich

    2012-10-01

    Full Text Available Comprehensive distributed safety, control, and monitoring systems applied by companies and organizations of different ownership structure play a substantial role in the present-day society. Video surveillance elements that ensure image processing and decision making in automated or automatic modes are the essential components of new systems. This paper covers the modeling of video surveillance systems installed in buildings, and the algorithm, or pattern, of video camera placement with due account for nearly all characteristics of buildings, detection and recognition facilities, and cameras themselves. This algorithm will be subsequently implemented as a user application. The project contemplates a comprehensive approach to the automatic placement of cameras that take account of their mutual positioning and compatibility of tasks. The project objective is to develop the principal elements of the algorithm of recognition of a moving object to be detected by several cameras. The image obtained by different cameras will be processed. Parameters of motion are to be identified to develop a table of possible options of routes. The implementation of the recognition algorithm represents an independent research project to be covered by a different article. This project consists in the assessment of the degree of complexity of an algorithm of camera placement designated for identification of cases of inaccurate algorithm implementation, as well as in the formulation of supplementary requirements and input data by means of intercrossing sectors covered by neighbouring cameras. The project also contemplates identification of potential problems in the course of development of a physical security and monitoring system at the stage of the project design development and testing. The camera placement algorithm has been implemented as a software application that has already been pilot tested on buildings and inside premises that have irregular dimensions. The
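
    The paper's own placement algorithm accounts for building geometry, detection and recognition requirements, and the overlap between neighbouring cameras; those details are not reproduced here. As a much simpler illustration of the underlying coverage problem, the sketch below uses a greedy set-cover heuristic over a precomputed visibility map (the function names and the toy data are assumptions, not the article's algorithm).

```python
def greedy_camera_placement(candidate_sites, required_cells, coverage):
    """Pick camera sites until all cells that must be monitored are covered.

    candidate_sites: iterable of site identifiers
    required_cells:  set of floor-plan cells that must be observed
    coverage:        dict mapping site -> set of cells visible from that site
                     (in practice this would come from the building model and camera FOV)
    """
    candidate_sites = list(candidate_sites)
    uncovered = set(required_cells)
    chosen = []
    while uncovered:
        # Greedy choice: the site that covers the most still-uncovered cells.
        best = max(candidate_sites, key=lambda s: len(coverage[s] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:
            break  # remaining cells cannot be covered by any candidate site
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered

# Toy example with three candidate sites and six cells to cover.
coverage = {"hall-1": {1, 2, 3}, "hall-2": {3, 4}, "stairs": {4, 5, 6}}
placed, missed = greedy_camera_placement(coverage.keys(), {1, 2, 3, 4, 5, 6}, coverage)
```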

  9. In-camera video-stream processing for bandwidth reduction in web inspection

    Science.gov (United States)

    Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.

    1996-02-01

    Automated machine vision systems are now widely used for industrial inspection tasks where video-stream data are captured by the camera and then sent to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data stream bandwidth reduction algorithms; the output of the camera only contains information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a Xilinx FPGA. The FPGA is directly connected to the video data-stream and outputs data to a low bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to provide a FIFO interface for buffering defect burst data and to allow off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes a prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.
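
    The FPGA algorithms themselves are not listed in the abstract, but the bandwidth-reduction idea (the camera emits only the data that needs further processing) can be illustrated in software. The sketch below is a hypothetical per-line filter that forwards only pixels deviating from the expected web intensity; the thresholds and line-scan width are assumptions.

```python
import numpy as np

def reduce_line(pixels, dark_thr=40, bright_thr=215):
    """Keep only pixels that deviate from the expected web intensity.

    Returns (column_index, value) pairs; on a defect-free line the output is
    empty, so almost nothing has to leave the camera.
    """
    pixels = np.asarray(pixels, dtype=np.uint8)
    defect_cols = np.flatnonzero((pixels < dark_thr) | (pixels > bright_thr))
    return list(zip(defect_cols.tolist(), pixels[defect_cols].tolist()))

# One simulated line scan with a small dark defect around columns 100-102.
line = np.full(2048, 128, dtype=np.uint8)
line[100:103] = 10
events = reduce_line(line)   # -> [(100, 10), (101, 10), (102, 10)]
```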

  10. Field Test Data for Detecting Vibrations of a Building Using High-Speed Video Cameras

    Science.gov (United States)

    2017-10-01

    ARL-TR-8185, October 2017. US Army Research Laboratory technical report by Caitlin P Conn and Geoffrey H Goldman presenting field test data for detecting vibrations of a building using high-speed video cameras; reporting period June 2016 – October 2017.

  11. Surgical video recording with a modified GoPro Hero 4 camera

    Directory of Open Access Journals (Sweden)

    Lin LK

    2016-01-01

    Full Text Available Lily Koo Lin Department of Ophthalmology and Vision Science, University of California, Davis Eye Center, Sacramento, CA, USA Background: Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. Method: The stock lens mount and lens were removed from a GoPro Hero 4 camera, which was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Results: Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. Conclusion: The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination. Keywords: teaching, oculoplastic, strabismus

  12. An evaluation of video cameras for collecting observational data on sanctuary-housed chimpanzees (Pan troglodytes).

    Science.gov (United States)

    Hansen, Bethany K; Fultz, Amy L; Hopper, Lydia M; Ross, Stephen R

    2018-05-01

    Video cameras are increasingly being used to monitor captive animals in zoo, laboratory, and agricultural settings. This technology may also be useful in sanctuaries with large and/or complex enclosures. However, the cost of camera equipment and a lack of formal evaluations regarding the use of cameras in sanctuary settings make it challenging for facilities to decide whether and how to implement this technology. To address this, we evaluated the feasibility of using a video camera system to monitor chimpanzees at Chimp Haven. We viewed a group of resident chimpanzees in a large forested enclosure and compared observations collected in person and with remote video cameras. We found that via camera, the observer viewed fewer chimpanzees in some outdoor locations (GLMM post hoc test: est. = 1.4503, SE = 0.1457, Z = 9.951, p sanctuaries to facilitate animal care and observational research. © 2018 Wiley Periodicals, Inc.

  13. Candid camera : video surveillance system can help protect assets

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, L.

    2009-11-15

    By combining closed-circuit cameras with sophisticated video analytics to create video sensors for use in remote areas, Calgary-based IntelliView Technologies Inc.'s explosion-proof video surveillance system can help the oil and gas sector monitor its assets. This article discussed the benefits, features, and applications of IntelliView's technology. Some of the benefits include a reduced need for on-site security and operating personnel and its patented analytics product known as the SmrtDVR, where the camera's images are stored. The technology can be used in temperatures as cold as minus 50 degrees Celsius and as high as 50 degrees Celsius. The product was commercialized in 2006 when it was used by Nexen Inc. It was concluded that false alarms set off by natural occurrences such as rain, snow, glare and shadows were a huge problem with analytics in the past, but that problem has been solved by IntelliView, which has its own source code, and re-programmed code. 1 fig.

  14. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  15. video115_0403 -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  16. GoPro Hero Cameras for Creation of a Three-Dimensional, Educational, Neurointerventional Video.

    Science.gov (United States)

    Park, Min S; Brock, Andrea; Mortimer, Vance; Taussky, Philipp; Couldwell, William T; Quigley, Edward

    2017-10-01

    Neurointerventional education relies on an apprenticeship model, with the trainee observing and participating in procedures with the guidance of a mentor. While educational videos are becoming prevalent in surgical cases, there is a dearth of comparable educational material for trainees in neurointerventional programs. We sought to create a high-quality, three-dimensional video of a routine diagnostic cerebral angiogram for use as an educational tool. A diagnostic cerebral angiogram was recorded using two GoPro HERO 3+ cameras with the Dual HERO System to capture the proceduralist's hands during the case. This video was edited with recordings from the video monitors to create a real-time three-dimensional video of both the actions of the neurointerventionalist and the resulting wire/catheter movements. The final edited video, in either two or three dimensions, can serve as another instructional tool for the training of residents and/or fellows. Additional videos can be created in a similar fashion of more complicated neurointerventional cases. The GoPro HERO 3+ camera and Dual HERO System can be used to create educational videos of neurointerventional procedures.

  17. ATR/OTR-SY Tank Camera Purge System and in Tank Color Video Imaging System

    International Nuclear Information System (INIS)

    Werry, S.M.

    1995-01-01

    This procedure will document the satisfactory operation of the 101-SY tank Camera Purge System (CPS) and the 101-SY in-tank Color Camera Video Imaging System (CCVIS). Included in the CPS is the nitrogen purging system safety interlock, which shuts down all the color video imaging system electronics within the 101-SY tank vapor space during loss of nitrogen purge pressure.

  18. Development and setting of a time-lapse video camera system for the Antarctic lake observation

    Directory of Open Access Journals (Sweden)

    Sakae Kudoh

    2010-11-01

    Full Text Available A submersible video camera system, designed to record images of the growth of aquatic vegetation in Antarctic lakes for one year, was manufactured. The system consisted of a video camera, a programmable controller unit, a lens-cleaning wiper with a submersible motor, LED lights, and a lithium ion battery unit. The change of video camera (to a High Vision System) and modification of the lens-cleaning wiper allowed higher sensitivity and clearer recorded images compared to the previous submersible video without increasing the power consumption. This system was set on the lake floor in Lake Naga Ike (a tentative name) in Skarvsnes on the Soya Coast during the summer activity of the 51st Japanese Antarctic Research Expedition. Interval recording of underwater images for one year has been started by our diving operation.

  19. Non-mydriatic, wide field, fundus video camera

    Science.gov (United States)

    Hoeher, Bernhard; Voigtmann, Peter; Michelson, Georg; Schmauss, Bernhard

    2014-02-01

    We describe a method we call "stripe field imaging" that is capable of capturing wide field color fundus videos and images of the human eye at pupil sizes of 2mm. This means that it can be used with a non-dilated pupil even with bright ambient light. We realized a mobile demonstrator to prove the method and we could acquire color fundus videos of subjects successfully. We designed the demonstrator as a low-cost device consisting of mass market components to show that there is no major additional technical outlay to realize the improvements we propose. The technical core idea of our method is breaking the rotational symmetry in the optical design that is given in many conventional fundus cameras. By this measure we could extend the possible field of view (FOV) at a pupil size of 2mm from a circular field with 20° in diameter to a square field with 68° by 18° in size. We acquired a fundus video while the subject was slightly touching and releasing the lid. The resulting video showed changes at vessels in the region of the papilla and a change of the paleness of the papilla.

  20. Teacher training for using digital video camera in primary education

    Directory of Open Access Journals (Sweden)

    Pablo García Sempere

    2011-12-01

    Full Text Available This paper shows the partial results of research carried out in primary schools, which evaluates the ability of teachers to use a digital video camera. The study took place in the province of Granada, Spain. Our purpose was to know the level of knowledge, interest, difficulties and training needs so as to improve the teaching practice. The work has been done from a descriptive and eclectic approach. Quantitative (questionnaire) and qualitative (focus group) techniques have been used in this research. The information obtained shows that most of the teachers lack knowledge in the use of the video camera and digital editing. On the other hand, the majority agree on the need to include initial and permanent training on this subject. Finally, the most important conclusions are presented.

  1. video114_0402c -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  2. video114_0402b -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  3. Engineering task plan for Tanks 241-AN-103, 104, 105 color video camera systems

    International Nuclear Information System (INIS)

    Kohlman, E.H.

    1994-01-01

    This Engineering Task Plan (ETP) describes the design, fabrication, assembly, and installation of the video camera systems into the vapor space within tanks 241-AN-103, 104, and 105. The single-camera, remotely operated color video systems will be used to observe and record the activities within the vapor space. Activities may include but are not limited to core sampling, auger activities, crust layer examination, and monitoring of equipment installation/removal. The objective of this task is to provide a single camera system in each of the tanks for the Flammable Gas Tank Safety Program.

  4. Euratom experience with video surveillance - Single camera and other non-multiplexed

    International Nuclear Information System (INIS)

    Otto, P.; Cozier, T.; Jargeac, B.; Castets, J.P.; Wagner, H.G.; Chare, P.; Roewer, V.

    1991-01-01

    The Euratom Safeguards Directorate (ESD) has been using a number of single camera video systems (Ministar, MIVS, DCS) and non-multiplexed multi-camera systems (Digiquad) for routine safeguards surveillance applications during the last four years. This paper describes aspects of system design and considerations relevant for installation. It reports on system reliability and performance and presents suggestions on future improvements

  5. Automatic measurement for solid state track detectors

    International Nuclear Information System (INIS)

    Ogura, Koichi

    1982-01-01

    Since tracks in solid state track detectors are measured with a microscope, observers are forced to do hard work that consumes time and labour. This can result in poor statistical accuracy or personal error. Therefore, much research has been done with the aim of simplifying and automating track measurement. There are two categories of automated measurement: simple counting of the number of tracks, and measurement that also requires geometrical elements such as the size of tracks or their coordinates in addition to the number of tracks. The former is called automatic counting and the latter automatic analysis. In automatic counting, the number of tracks is generally evaluated by estimating the total number of tracks over the total detector area or in a field of view of a microscope; this is suitable when the track density is high. Methods that count tracks one by one include spark counting and the scanning microdensitometer. Automatic analysis includes video image analysis, in which the high quality images obtained with a high resolution video camera are processed with a micro-computer, and the tracks are automatically recognized and measured by feature extraction. This method is described in detail. Among the many kinds of automatic measurement reported so far, the most frequently used are 'spark counting' and 'video image analysis'. (Wakatsuki, Y.)

  6. Automatically assessing properties of dynamic cameras for camera selection and rapid deployment of video content analysis tasks in large-scale ad-hoc networks

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; van Rest, Jeroen H. C.; ten Hove, Johan-Martijn; ter Haar, Frank B.; Burghouts, Gertjan J.

    2017-10-01

    Video analytics is essential for managing large quantities of raw data that are produced by video surveillance systems (VSS) for the prevention, repression and investigation of crime and terrorism. Analytics is highly sensitive to changes in the scene, and for changes in the optical chain so a VSS with analytics needs careful configuration and prompt maintenance to avoid false alarms. However, there is a trend from static VSS consisting of fixed CCTV cameras towards more dynamic VSS deployments over public/private multi-organization networks, consisting of a wider variety of visual sensors, including pan-tilt-zoom (PTZ) cameras, body-worn cameras and cameras on moving platforms. This trend will lead to more dynamic scenes and more frequent changes in the optical chain, creating structural problems for analytics. If these problems are not adequately addressed, analytics will not be able to continue to meet end users' developing needs. In this paper, we present a three-part solution for managing the performance of complex analytics deployments. The first part is a register containing meta data describing relevant properties of the optical chain, such as intrinsic and extrinsic calibration, and parameters of the scene such as lighting conditions or measures for scene complexity (e.g. number of people). A second part frequently assesses these parameters in the deployed VSS, stores changes in the register, and signals relevant changes in the setup to the VSS administrator. A third part uses the information in the register to dynamically configure analytics tasks based on VSS operator input. In order to support the feasibility of this solution, we give an overview of related state-of-the-art technologies for autocalibration (self-calibration), scene recognition and lighting estimation in relation to person detection. The presented solution allows for rapid and robust deployment of Video Content Analysis (VCA) tasks in large scale ad-hoc networks.
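
    As a loose illustration of the first part of the solution, the register of optical-chain and scene metadata, the sketch below models one register entry and a change notification; all field names and types are assumptions, since the paper does not specify a schema.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class OpticalChainRecord:
    """One register entry for a camera (field names are assumed, not the paper's)."""
    camera_id: str
    intrinsics: Optional[list] = None        # e.g. 3x3 K matrix as nested lists
    extrinsics: Optional[list] = None        # e.g. 4x4 pose matrix
    lighting_lux: Optional[float] = None     # current scene illumination estimate
    scene_complexity: Optional[int] = None   # e.g. current person count

@dataclass
class Register:
    records: Dict[str, OpticalChainRecord] = field(default_factory=dict)

    def update(self, record: OpticalChainRecord, notify=print):
        """Store new parameters and signal a relevant change to the administrator."""
        old = self.records.get(record.camera_id)
        if old is not None and old != record:
            notify(f"optical chain changed for {record.camera_id}")
        self.records[record.camera_id] = record
```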

  7. A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network

    Science.gov (United States)

    Li, Yiming; Bhanu, Bir

    Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.

  8. Advances in pediatric gastroenterology: introducing video camera capsule endoscopy.

    Science.gov (United States)

    Siaw, Emmanuel O

    2006-04-01

    The video camera capsule endoscope is a gastrointestinal endoscope approved by the U.S. Food and Drug Administration in 2001 for use in diagnosing gastrointestinal disorders in adults. In 2003, the agency approved the device for use in children ages 10 and older, and the endoscope is currently in use at Arkansas Children's Hospital. A capsule camera, lens, battery, transmitter and antenna together record images of the small intestine as the endoscope makes its way through the bowel. The instrument is used with minimal risk to the patient while offering a high degree of accuracy in diagnosing small intestine disorders.

  9. Identifying sports videos using replay, text, and camera motion features

    Science.gov (United States)

    Kobla, Vikrant; DeMenthon, Daniel; Doermann, David S.

    1999-12-01

    Automated classification of digital video is emerging as an important piece of the puzzle in the design of content management systems for digital libraries. The ability to classify videos into various classes such as sports, news, movies, or documentaries, increases the efficiency of indexing, browsing, and retrieval of video in large databases. In this paper, we discuss the extraction of features that enable identification of sports videos directly from the compressed domain of MPEG video. These features include detecting the presence of action replays, determining the amount of scene text in video, and calculating various statistics on camera and/or object motion. The features are derived from the macroblock, motion, and bit-rate information that is readily accessible from MPEG video with very minimal decoding, leading to substantial gains in processing speeds. Full decoding of selective frames is required only for text analysis. A decision tree classifier built using these features is able to identify sports clips with an accuracy of about 93 percent.
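
    A decision tree over such clip-level features is straightforward to build with standard tooling. The sketch below is a toy illustration using scikit-learn; the three features (replay fraction, scene-text amount, camera-motion statistic) mirror the categories named above, but the numeric values and the tiny training set are invented for illustration and are not the paper's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each clip is summarised by three illustrative features: fraction of frames
# flagged as replay, average scene-text area, and a camera-motion statistic
# (e.g. mean pan magnitude). Values below are made up.
X_train = np.array([
    [0.12, 0.08, 3.5],   # sports
    [0.09, 0.10, 4.1],   # sports
    [0.00, 0.02, 0.4],   # news
    [0.01, 0.01, 0.9],   # documentary
])
y_train = np.array([1, 1, 0, 0])   # 1 = sports, 0 = non-sports

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(clf.predict([[0.10, 0.07, 3.0]]))   # expected output: [1] (sports)
```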

  10. A passive terahertz video camera based on lumped element kinetic inductance detectors

    Energy Technology Data Exchange (ETDEWEB)

    Rowe, Sam, E-mail: sam.rowe@astro.cf.ac.uk; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Ade, Peter A. R.; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; House, Julian; Moseley, Paul; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian [Astronomy Instrumentation Group, School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); Wood, Ken [QMC Instruments Ltd., School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); Grainger, William [Rutherford Appleton Laboratory, STFC, Swindon SN2 1SZ (United Kingdom); Mauskopf, Philip [Astronomy Instrumentation Group, School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); School of Earth Science and Space Exploration, Arizona State University, Tempe, Arizona 85281 (United States); Spencer, Locke [Department of Physics and Astronomy, University of Lethbridge, Lethbridge, Alberta T1K 3M4 (Canada)

    2016-03-15

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  11. A passive terahertz video camera based on lumped element kinetic inductance detectors

    International Nuclear Information System (INIS)

    Rowe, Sam; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Ade, Peter A. R.; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; House, Julian; Moseley, Paul; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian; Wood, Ken; Grainger, William; Mauskopf, Philip; Spencer, Locke

    2016-01-01

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  12. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
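
    One ingredient of such pedestrian-based autocalibration is relating apparent person height to image position so that pixels can be converted to metres. The sketch below fits that relation with a simple least-squares line under an assumed average pedestrian height; it is a simplified stand-in for the paper's full estimation of tilt angle, focal length and camera height, and the sample detections are invented.

```python
import numpy as np

AVG_PERSON_HEIGHT_M = 1.7   # assumption: average pedestrian height in metres

def fit_scale_model(foot_rows, pixel_heights):
    """Fit apparent pedestrian height (pixels) as a linear function of foot row.

    For a pinhole camera looking down at a flat ground plane, the apparent
    height of a standing person shrinks roughly linearly with the image row of
    the feet, so a least-squares line already gives a usable scale model.
    """
    a, b = np.polyfit(foot_rows, pixel_heights, deg=1)
    return a, b

def pixels_per_meter(foot_row, a, b):
    """Approximate pixels-to-metres scale for objects standing at this row."""
    return (a * foot_row + b) / AVG_PERSON_HEIGHT_M

# Synthetic detections: (foot row, bounding-box height in pixels).
foot_rows = np.array([400, 350, 300, 250, 200])
heights_px = np.array([160, 140, 118, 101, 80])
a, b = fit_scale_model(foot_rows, heights_px)
print(pixels_per_meter(300, a, b))   # scale near image row 300
```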

  13. Localization of cask and plug remote handling system in ITER using multiple video cameras

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, João, E-mail: jftferreira@ipfn.ist.utl.pt [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Vale, Alberto [Instituto de Plasmas e Fusão Nuclear - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Ribeiro, Isabel [Laboratório de Robótica e Sistemas em Engenharia e Ciência - Laboratório Associado, Instituto Superior Técnico, Universidade Técnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2013-10-15

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a mock-up scenario with a scale 1:25 of the divertor level of Tokamak building.

  14. Localization of cask and plug remote handling system in ITER using multiple video cameras

    International Nuclear Information System (INIS)

    Ferreira, João; Vale, Alberto; Ribeiro, Isabel

    2013-01-01

    Highlights: ► Localization of cask and plug remote handling system with video cameras and markers. ► Video cameras already installed on the building for remote operators. ► Fiducial markers glued or painted on cask and plug remote handling system. ► Augmented reality contents on the video streaming as an aid for remote operators. ► Integration with other localization systems for enhanced robustness and precision. -- Abstract: The cask and plug remote handling system (CPRHS) provides the means for the remote transfer of in-vessel components and remote handling equipment between the Hot Cell building and the Tokamak building in ITER. Different CPRHS typologies will be autonomously guided following predefined trajectories. Therefore, the localization of any CPRHS in operation must be continuously known in real time to provide the feedback for the control system and also for the human supervision. This paper proposes a localization system that uses the video streaming captured by the multiple cameras already installed in the ITER scenario to estimate with precision the position and the orientation of any CPRHS. In addition, an augmented reality system can be implemented using the same video streaming and the libraries for the localization system. The proposed localization system was tested in a mock-up scenario with a scale 1:25 of the divertor level of Tokamak building

  15. Real-time construction and visualisation of drift-free video mosaics from unconstrained camera motion

    Directory of Open Access Journals (Sweden)

    Mateusz Brzeszcz

    2015-08-01

    Full Text Available This work proposes a novel approach for real-time video mosaicking facilitating drift-free mosaic construction and visualisation, with integrated frame blending and redundancy management, that is shown to be flexible to a range of varying mosaic scenarios. The approach supports unconstrained camera motion with in-sequence loop closing, variation in camera focal distance (zoom) and recovery from video sequence breaks. Real-time performance, over extended duration sequences, is realised via novel aspects of frame management within the mosaic representation, thus avoiding the high data redundancy associated with temporally dense, spatially overlapping video frame inputs. This managed set of image frames is visualised in real time using a dynamic mosaic representation of overlapping textured graphics primitives in place of the traditional globally constructed, and hence frequently reconstructed, mosaic image. Within this formulation, subsequent optimisation occurring during online construction can thus efficiently adjust relative frame positions via simple primitive position transforms. Effective visualisation is similarly facilitated by online inter-frame blending to overcome the illumination and colour variance associated with modern camera hardware. The evaluation illustrates overall robustness in video mosaic construction under a diverse range of conditions including indoor and outdoor environments, varying illumination and presence of in-scene motion on varying computational platforms.
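
    The sketch below shows the basic building block such a mosaicking pipeline rests on: estimating the homography from a new frame to the previous one with matched features and chaining it into the mosaic frame. Note that this naive chaining accumulates drift; the drift-free behaviour described above comes from loop closing and per-primitive position optimisation, which are not included here. The feature type and thresholds are assumptions.

```python
import cv2
import numpy as np

def frame_to_mosaic_homography(prev_frame, new_frame, H_prev_to_mosaic):
    """Chain a new frame into the mosaic.

    Estimates the homography from the new frame to the previous one using ORB
    features and RANSAC, then composes it with the previous frame's mosaic
    transform. Assumes both frames contain enough matchable texture.
    """
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(prev_frame, None)
    k2, d2 = orb.detectAndCompute(new_frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d2, d1)
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H_new_to_prev, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H_prev_to_mosaic @ H_new_to_prev

# The first frame defines the mosaic coordinate frame, so it starts at identity:
# H = np.eye(3); for each new frame, H = frame_to_mosaic_homography(prev, new, H).
```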

  16. Camera Networks The Acquisition and Analysis of Videos over Wide Areas

    CERN Document Server

    Roy-Chowdhury, Amit K

    2012-01-01

    As networks of video cameras are installed in many applications like security and surveillance, environmental monitoring, disaster response, and assisted living facilities, among others, image understanding in camera networks is becoming an important area of research and technology development. There are many challenges that need to be addressed in the process. Some of them are listed below: - Traditional computer vision challenges in tracking and recognition, robustness to pose, illumination, occlusion, clutter, recognition of objects, and activities; - Aggregating local information for wide

  17. Low-complexity camera digital signal imaging for video document projection system

    Science.gov (United States)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
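
    The abstract does not spell out which white-balance algorithm the camera DSP uses, so the sketch below illustrates that stage with the common low-complexity gray-world method: each channel is scaled so that its mean matches the overall mean intensity of the frame.

```python
import numpy as np

def gray_world_white_balance(rgb):
    """Gray-world automatic white balance for an 8-bit RGB frame."""
    rgb = rgb.astype(np.float32)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / np.maximum(channel_means, 1e-6)
    return np.clip(rgb * gains, 0, 255).astype(np.uint8)

# Example: a frame with a strong blue cast becomes roughly neutral gray.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[...] = (90, 100, 160)
balanced = gray_world_white_balance(frame)
```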

  18. Development of an image converter of radical design. [employing solid state electronics towards the production of an advanced engineering model camera system

    Science.gov (United States)

    Irwin, E. L.; Farnsworth, D. L.

    1972-01-01

    A long term investigation of thin film sensors, monolithic photo-field effect transistors, and epitaxially diffused phototransistors and photodiodes, aimed at producing an acceptable all-solid-state, electronically scanned imaging system, led to the production of an advanced engineering model camera which employs a 200,000 element phototransistor array (organized in a matrix of 400 rows by 500 columns) to secure resolution comparable to commercial television. The full investigation is described for the period July 1962 through July 1972, and covers the following broad topics in detail: (1) sensor monoliths; (2) fabrication technology; (3) functional theory; (4) system methodology; and (5) deployment profile. A summary of the work and conclusions are given, along with extensive schematic diagrams of the final solid state imaging system product.

  19. Evaluation of list-mode ordered subset expectation maximization image reconstruction for pixelated solid-state compton gamma camera with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Chmeissani, M.

    2014-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 106). For Compton camera, especially with a large number of readout channels, image reconstruction presents a big challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources and with a distance between the field of view and the first detector plane equal to 100 mm which corresponds to a realistic nuclear medicine environment.

  20. Development of a solid-state multi-sensor array camera for real time imaging of magnetic fields

    International Nuclear Information System (INIS)

    Benitez, D; Gaydecki, P; Quek, S; Torres, V

    2007-01-01

    The development of a real-time magnetic field imaging camera based on solid-state sensors is described. The final laboratory comprises a 2D array of 33 x 33 solid state, tri-axial magneto-inductive sensors, and is located within a large current-carrying coil. This may be excited to produce either a steady or time-varying magnetic field. Outputs from several rows of sensors are routed to a sub-master controller and all sub-masters route to a master-controller responsible for data coordination and signal pre-processing. The data are finally streamed to a host computer via a USB interface and the image generated and displayed at a rate of several frames per second. Accurate image generation is predicated on a knowledge of the sensor response, magnetic field perturbations and the nature of the target respecting permeability and conductivity. To this end, the development of the instrumentation has been complemented by extensive numerical modelling of field distribution patterns using boundary element methods. Although it was originally intended for deployment in the nondestructive evaluation (NDE) of reinforced concrete, it was soon realised during the course of the work that the magnetic field imaging system had many potential applications, for example, in medicine, security screening, quality assurance (such as the food industry), other areas of nondestructive evaluation (NDE), designs associated with magnetic fields, teaching and research

  1. Development of a solid-state multi-sensor array camera for real time imaging of magnetic fields

    Science.gov (United States)

    Benitez, D.; Gaydecki, P.; Quek, S.; Torres, V.

    2007-07-01

    The development of a real-time magnetic field imaging camera based on solid-state sensors is described. The final laboratory comprises a 2D array of 33 x 33 solid state, tri-axial magneto-inductive sensors, and is located within a large current-carrying coil. This may be excited to produce either a steady or time-varying magnetic field. Outputs from several rows of sensors are routed to a sub-master controller and all sub-masters route to a master-controller responsible for data coordination and signal pre-processing. The data are finally streamed to a host computer via a USB interface and the image generated and displayed at a rate of several frames per second. Accurate image generation is predicated on a knowledge of the sensor response, magnetic field perturbations and the nature of the target respecting permeability and conductivity. To this end, the development of the instrumentation has been complemented by extensive numerical modelling of field distribution patterns using boundary element methods. Although it was originally intended for deployment in the nondestructive evaluation (NDE) of reinforced concrete, it was soon realised during the course of the work that the magnetic field imaging system had many potential applications, for example, in medicine, security screening, quality assurance (such as the food industry), other areas of nondestructive evaluation (NDE), designs associated with magnetic fields, teaching and research.

  2. A high precision video-electronic measuring system for use with solid state track detectors

    International Nuclear Information System (INIS)

    Schott, J.U.; Schopper, E.; Staudte, R.

    1976-01-01

    A video-electronic image analyzing system, Quantimet 720, has been modified to meet the requirements of the measurement of tracks of nuclear particles in solid state track detectors, with resulting improvement of precision, speed, and the elimination of subjective influences. A microscope equipped with an automatic XY stage projects the image onto the cathode of a vidicon-amplifier. Within the TV-picture generated, characterized by the coordinate XY in the specimen, we determine coordinates xy of events by setting cross lines on the screen which correspond to a digital accuracy of 0.1 μm at the position of the object. Automatic movement in the Z-direction can be performed by a stepping motor and measured electronically, or continuously by applying an electric voltage to a piezostrictive support of the objective. (orig.) [de]

  3. Video monitoring system for enriched uranium casting furnaces

    International Nuclear Information System (INIS)

    Turner, P.C.

    1978-03-01

    A closed-circuit television (CCTV) system was developed to upgrade the remote-viewing capability on two oralloy (highly enriched uranium) casting furnaces in the Y-12 Plant. A silicon vidicon CCTV camera with a remotely controlled lens and infrared filtering was provided to yield a good-quality video presentation of the furnace crucible as the oralloy material is heated from 25 to 1300 °C. Existing tube-type CCTV monochrome monitors were replaced with solid-state monitors to increase the system reliability

  4. Optimization of radiation sensors for a passive terahertz video camera for security applications

    NARCIS (Netherlands)

    Zieger, G.J.M.

    2014-01-01

    A passive terahertz video camera allows for fast security screenings from distances of several meters. It avoids irradiation or the impressions of nakedness, which oftentimes cause embarrassment and trepidation of the concerned persons. This work describes the optimization of highly sensitive

  5. Falling-incident detection and throughput enhancement in a multi-camera video-surveillance system.

    Science.gov (United States)

    Shieh, Wann-Yun; Huang, Ju-Chin

    2012-09-01

    For many elderly people, unpredictable falling incidents may occur at the corner of stairs or in a long corridor due to body frailty. If rescue of a fallen elder, who may be fainting, is delayed, more serious injury may result. Traditional security or video surveillance systems need caregivers to monitor a centralized screen continuously, or need an elder to wear sensors to detect falling incidents, which wastes considerable human power or causes inconvenience for elders. In this paper, we propose an automatic falling-detection algorithm and implement this algorithm in a multi-camera video surveillance system. The algorithm uses each camera to fetch the images from the regions required to be monitored. It then uses a falling-pattern recognition algorithm to determine if a falling incident has occurred. If so, the system sends short messages to those who need to be notified. The algorithm has been implemented on a DSP-based hardware acceleration board as a proof of functionality. Simulation results show that the accuracy of falling detection reaches at least 90% and the throughput of a four-camera surveillance system can be improved by about 2.1 times. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
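
    The falling-pattern recognition algorithm itself is not detailed in the abstract. A common baseline, sketched below, flags a fall when the tracked person's bounding box stays wider than it is tall for a sustained number of frames; the ratio threshold and frame count are assumptions, not the paper's parameters.

```python
import numpy as np

def is_fall(bbox_history, ratio_thr=1.3, hold_frames=15):
    """Flag a fall when the person's box stays wider than tall for a while.

    bbox_history: list of (width, height) of the tracked person's bounding box,
    most recent frame last.
    """
    if len(bbox_history) < hold_frames:
        return False
    recent = np.array(bbox_history[-hold_frames:], dtype=float)
    aspect = recent[:, 0] / np.maximum(recent[:, 1], 1e-6)   # width / height
    return bool(np.all(aspect > ratio_thr))

# A person walking (tall boxes) and then lying on the floor (wide boxes).
history = [(40, 120)] * 30 + [(130, 50)] * 20
print(is_fall(history))   # True
```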

  6. Neutron imaging system based on a video camera

    International Nuclear Information System (INIS)

    Dinca, M.

    2004-01-01

    Non-destructive testing with cold, thermal, epithermal or fast neutrons is nowadays more and more useful because the world-wide level of industrial development requires considerably higher standards of quality of manufactured products and reliability of technological processes, especially where any deviation from standards could result in large-scale catastrophic consequences or human losses. Thanks to their properties, namely being easily obtained and giving very good discrimination between the materials they penetrate, thermal neutrons are the most widely used probe. The methods involved in this technique have advanced from neutron radiography based on converter screens and radiological films to neutron radioscopy based on video cameras, that is, from static images to dynamic images. Many neutron radioscopy systems have been used in the past with various levels of success. The quality of an image depends on the quality of the neutron beam and the type of the neutron imaging system. Real-time investigations involve tube-type cameras, CCD cameras and, recently, CID cameras that capture the image from an appropriate scintillator via a mirror. The analog signal of the camera is then converted into a digital signal by the signal processing technology included in the camera. The image acquisition card or frame grabber of a PC converts the digital signal into an image. The image is formatted and processed by image analysis software. The scanning position of the object is controlled by the computer, which commands the electric motors that move the object table horizontally and vertically and rotate it. Based on this system, static image acquisitions, real-time non-destructive investigations of dynamic processes and, finally, tomographic investigations of small objects can be done in a short time. A system based on a CID camera is presented. Fundamental differences between CCD and CID cameras lie in their pixel readout structure and technique. CIDs

  7. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room

    Directory of Open Access Journals (Sweden)

    Akshay Gopinathan Nair

    2015-01-01

    Full Text Available Objective: To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. Methods: A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. Results: The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. Conclusions: A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  8. Surgeon point-of-view recording: Using a high-definition head-mounted video camera in the operating room.

    Science.gov (United States)

    Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Della Rocca, David; Della Rocca, Robert C; Andron, Aleza; Jain, Vandana

    2015-10-01

    To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.

  9. Utilization of a video camera in the study of the goshawk (Accipiter gentilis) diet

    Directory of Open Access Journals (Sweden)

    Martin Tomešek

    2011-01-01

    Full Text Available In 2009, research was carried out into the food spectrum of the goshawk (Accipiter gentilis) by means of automatic digital video cameras with a recording device in the area of the Chřiby Upland. The monitoring took place at two localities in the vicinity of the village of Buchlovice at the southeastern edge of the Chřiby Upland, in the period from the hatching of the chicks to their fledging from the nest. The clear advantage of using camera systems in the study of the food spectrum is the possibility of exactly determining the prey brought to the nest in the majority of cases. Technology that was as economical and effective as possible, prepared according to the given conditions, was used. The use of automatic digital video cameras with a recording device yielded a large amount of valuable data that clarify the food spectrum of the species. The main output of the whole project is the determination of the food spectrum of the goshawk (Accipiter gentilis) at two localities, which showed the following composition: 89 % birds, 9.5 % mammals and 1.5 % other animals or unidentifiable components of food. Birds of the genus Turdus were the most frequent prey in both cases of monitoring. As for mammals, Sciurus vulgaris was most frequent.

  10. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    Energy Technology Data Exchange (ETDEWEB)

    Strehlow, J.P.

    1994-08-24

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1).

  11. Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)

    International Nuclear Information System (INIS)

    Strehlow, J.P.

    1994-01-01

    A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20 inch port of the Multiport Flange riser which is to be installed on riser 5B of the 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The detail of supporting engineering calculations is documented in URS/Blume Calculation No. 66481-01-CA-03 (1)

  12. Underwater video enhancement using multi-camera super-resolution

    Science.gov (United States)

    Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.

    2017-12-01

    Image spatial resolution is critical in several fields such as medicine, communications or satellite, and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that permits to enhance the quality of underwater video sequences without significantly increasing computation. In order to compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
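
    Of the two quality metrics used, PSNR is simple enough to compute directly, as sketched below; SSIM is more involved and is available, for example, as structural_similarity in scikit-image. The random frames are placeholders for real reference and processed underwater frames.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-sized frames."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Placeholder frames: a reference image and a noisy version of it.
ref = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
noisy = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(round(psnr(ref, noisy), 2))
```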

  13. Time-Domain Fluorescence Lifetime Imaging Techniques Suitable for Solid-State Imaging Sensor Arrays

    Directory of Open Access Journals (Sweden)

    Robert K. Henderson

    2012-05-01

    Full Text Available We have successfully demonstrated video-rate CMOS single-photon avalanche diode (SPAD)-based cameras for fluorescence lifetime imaging microscopy (FLIM) by applying innovative FLIM algorithms. We also review and compare several time-domain techniques and solid-state FLIM systems, and adapt the proposed algorithms for massive CMOS SPAD-based arrays and hardware implementations. The theoretical error equations are derived and their performances are demonstrated on the data obtained from 0.13 μm CMOS SPAD arrays and the multiple-decay data obtained from scanning PMT systems. In vivo two photon fluorescence lifetime imaging data of FITC-albumin labeled vasculature of a P22 rat carcinosarcoma (BD9 rat window chamber) are used to test how different algorithms perform on bi-decay data. The proposed techniques are capable of producing lifetime images with enough contrast.
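
    One of the classic time-domain techniques in this family is two-gate rapid lifetime determination (RLD), which recovers a mono-exponential lifetime from two gated intensity images. The sketch below implements that textbook estimator per pixel; the gate counts and gate separation are toy values, and it is not the specific algorithm proposed in the paper.

```python
import numpy as np

def rld_lifetime(counts_gate1, counts_gate2, gate_separation_ns):
    """Two-gate rapid lifetime determination for a mono-exponential decay.

    For two equal-width time gates separated by gate_separation_ns,
    tau = dt / ln(D1 / D2), evaluated independently for each pixel.
    """
    d1 = np.asarray(counts_gate1, dtype=np.float64)
    d2 = np.asarray(counts_gate2, dtype=np.float64)
    with np.errstate(divide="ignore", invalid="ignore"):
        tau = gate_separation_ns / np.log(d1 / d2)
    # Only pixels with valid, decaying counts get a lifetime; others become NaN.
    return np.where((d1 > 0) & (d2 > 0) & (d1 > d2), tau, np.nan)

# Toy 2x2 image: gate counts consistent with tau = 2 ns and a 1 ns gate separation.
g1 = np.array([[1000., 800.], [600., 400.]])
g2 = g1 * np.exp(-1.0 / 2.0)
print(rld_lifetime(g1, g2, gate_separation_ns=1.0))   # ~2 ns everywhere
```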

  14. Acceptance/operational test procedure 101-AW tank camera purge system and 101-AW video camera system

    International Nuclear Information System (INIS)

    Castleberry, J.L.

    1994-01-01

    This procedure will document the satisfactory operation of the 101-AW Tank Camera Purge System (CPS) and the 101-AW Video Camera System. The safety interlock which shuts down all the electronics inside the 101-AW vapor space, during loss of purge pressure, will be in place and tested to ensure reliable performance. This procedure is separated into four sections. Section 6.1 is performed in the 306 building prior to delivery to the 200 East Tank Farms and involves leak checking all fittings on the 101-AW Purge Panel for leakage using a Snoop solution and resolving the leakage. Section 7.1 verifies that PR-1, the regulator which maintains a positive pressure within the volume (cameras and pneumatic lines), is properly set. In addition the green light (PRESSURIZED) (located on the Purge Control Panel) is verified to turn on above 10 in. w.g. and after the time delay (TDR) has timed out. Section 7.2 verifies that the purge cycle functions properly, the red light (PURGE ON) comes on, and that the correct flowrate is obtained to meet the requirements of the National Fire Protection Association. Section 7.3 verifies that the pan and tilt, camera, associated controls and components operate correctly. This section also verifies that the safety interlock system operates correctly during loss of purge pressure. During the loss of purge operation the illumination of the amber light (PURGE FAILED) will be verified

  15. Object Occlusion Detection Using Automatic Camera Calibration for a Wide-Area Video Surveillance System

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor that degrades the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems.

  16. Simultaneous dual-radionuclide myocardial perfusion imaging with a solid-state dedicated cardiac camera

    International Nuclear Information System (INIS)

    Ben-Haim, Simona; Kacperski, Krzysztof; Hain, Sharon; Van Gramberg, Dean; Hutton, Brian F.; Erlandsson, Kjell; Waddington, Wendy A.; Ell, Peter J.; Sharir, Tali; Roth, Nathaniel; Berman, Daniel S.

    2010-01-01

    We compared simultaneous dual-radionuclide (DR) stress and rest myocardial perfusion imaging (MPI) with a novel solid-state cardiac camera and a conventional SPECT camera with separate stress and rest acquisitions. Of 27 consecutive patients recruited, 24 (64.5±11.8 years of age, 16 men) were injected with 74 MBq of 201 Tl (rest) and 250 MBq 99m Tc-MIBI (stress). Conventional MPI acquisition times for stress and rest are 21 min and 16 min, respectively. Rest 201 Tl for 6 min and simultaneous DR 15-min list mode gated scans were performed on a D-SPECT cardiac scanner. In 11 patients DR D-SPECT was performed first and in 13 patients conventional stress 99m Tc-MIBI SPECT imaging was performed followed by DR D-SPECT. The DR D-SPECT data were processed using a spill-over and scatter correction method. DR D-SPECT images were compared with rest 201 Tl D-SPECT and with conventional SPECT images by visual analysis employing the 17-segment model and a five-point scale (0 normal, 4 absent) to calculate the summed stress and rest scores. Image quality was assessed on a four-point scale (1 poor, 4 very good) and gut activity was assessed on a four-point scale (0 none, 3 high). Conventional MPI studies were abnormal at stress in 17 patients and at rest in 9 patients. In the 17 abnormal stress studies DR D-SPECT MPI showed 113 abnormal segments and conventional MPI showed 93 abnormal segments. In the nine abnormal rest studies DR D-SPECT showed 45 abnormal segments and conventional MPI showed 48 abnormal segments. The summed stress and rest scores on conventional SPECT and DR D-SPECT were highly correlated (r=0.9790 and 0.9694, respectively). The summed scores of rest 201 Tl D-SPECT and DR-DSPECT were also highly correlated (r=0.9968, p 201 Tl D-SPECT acquisition. (orig.)

  17. VideoWeb Dataset for Multi-camera Activities and Non-verbal Communication

    Science.gov (United States)

    Denina, Giovanni; Bhanu, Bir; Nguyen, Hoang Thanh; Ding, Chong; Kamal, Ahmed; Ravishankar, Chinya; Roy-Chowdhury, Amit; Ivers, Allen; Varda, Brenda

    Human-activity recognition is one of the most challenging problems in computer vision. Researchers from around the world have tried to solve this problem and have come a long way in recognizing simple motions and atomic activities. As the computer vision community heads toward fully recognizing human activities, a challenging and labeled dataset is needed. To respond to that need, we collected a dataset of realistic scenarios in a multi-camera network environment (VideoWeb) involving multiple persons performing dozens of different repetitive and non-repetitive activities. This chapter describes the details of the dataset. We believe that this VideoWeb Activities dataset is unique and it is one of the most challenging datasets available today. The dataset is publicly available online at http://vwdata.ee.ucr.edu/ along with the data annotation.

  18. vid116_0501n -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  19. vid116_0501s -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  20. vid116_0501c -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  1. vid116_0501d -- Point coverage of sediment observations from video collected during 2005 R/V Tatoosh camera sled survey

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  2. Solid state optical microscope

    Science.gov (United States)

    Young, Ian T.

    1983-01-01

    A solid state optical microscope wherein wide-field and high-resolution images of an object are produced at a rapid rate by utilizing conventional optics with a charge-coupled photodiode array. A galvanometer scanning mirror, for scanning in one of two orthogonal directions is provided, while the charge-coupled photodiode array scans in the other orthogonal direction. Illumination light from the object is incident upon the photodiodes, creating packets of electrons (signals) which are representative of the illuminated object. The signals are then processed, stored in a memory, and finally displayed as a video signal.

  3. Collaborative real-time scheduling of multiple PTZ cameras for multiple object tracking in video surveillance

    Science.gov (United States)

    Liu, Yu-Che; Huang, Chung-Lin

    2013-03-01

    This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. Three main concerns of the algorithm are (1) the imagery of the human object's face for biometric purposes, (2) the optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on the expected capture conditions such as the camera-subject distance, pan and tilt angles of capture, face visibility and others. Such an objective function serves to effectively balance the number of captures per subject and quality of captures. In the experiments, we demonstrate the performance of the system which operates in real-time under real world conditions on three PTZ cameras.
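
    A hedged sketch of the kind of capture-quality objective described (camera-subject distance, pan and tilt angles, face visibility) is given below; the particular terms, weights and names are illustrative assumptions, not the authors' formulation.

        import math

        def capture_score(distance_m, pan_deg, tilt_deg, face_visible,
                          preferred_distance=4.0, w_dist=0.5, w_angle=0.3, w_face=0.2):
            """Score one (PTZ camera, subject) pairing; higher means a better close-up capture."""
            dist_term = math.exp(-abs(distance_m - preferred_distance) / preferred_distance)
            angle_term = max(0.0, math.cos(math.radians(pan_deg)) * math.cos(math.radians(tilt_deg)))
            face_term = 1.0 if face_visible else 0.0
            return w_dist * dist_term + w_angle * angle_term + w_face * face_term

        # Greedy assignment of a subject to the best-scoring camera (hand-off cost ignored).
        candidates = {"cam1": capture_score(3.5, 10, 5, True),
                      "cam2": capture_score(8.0, 40, 15, False)}
        best = max(candidates, key=candidates.get)
        print("assign person_A to", best, "with score", round(candidates[best], 3))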

  4. Design and Optimization of the VideoWeb Wireless Camera Network

    Directory of Open Access Journals (Sweden)

    Nguyen HoangThanh

    2010-01-01

    Full Text Available Sensor networks have been a very active area of research in recent years. However, most of the sensors used in the development of these networks have been local and nonimaging sensors such as acoustics, seismic, vibration, temperature, humidity. The emerging development of video sensor networks poses its own set of unique challenges, including high-bandwidth and low latency requirements for real-time processing and control. This paper presents a systematic approach by detailing the design, implementation, and evaluation of a large-scale wireless camera network, suitable for a variety of practical real-time applications. We take into consideration issues related to hardware, software, control, architecture, network connectivity, performance evaluation, and data-processing strategies for the network. We also perform multiobjective optimization on settings such as video resolution and compression quality to provide insight into the performance trade-offs when configuring such a network and present lessons learned in the building and daily usage of the network.

  5. Visual fatigue modeling for stereoscopic video shot based on camera motion

    Science.gov (United States)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and comfortable zone are analyzed. According to the human visual system (HVS), viewers only need to converge their eyes on specific objects when the camera and background are static; relative motion must be considered for other camera conditions, which determine different factor coefficients and weights. Compared with traditional visual fatigue prediction models, a novel visual fatigue prediction model is presented: the visual fatigue degree is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor reflects the characteristics of the scene, and a total visual fatigue score is produced by the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, the approach exhibits reliable performance in terms of correlation with subjective test results.
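
    To make the regression step concrete, the following minimal sketch fits a visual-fatigue score as a linear combination of factor scores with ordinary least squares; the factor names and numbers are invented for illustration and do not come from the paper.

        import numpy as np

        # One row per stereoscopic shot; columns are illustrative factor scores for
        # spatial structure, motion scale and comfortable-zone violation.
        X = np.array([[0.2, 0.1, 0.0],
                      [0.5, 0.4, 0.2],
                      [0.8, 0.7, 0.6],
                      [0.3, 0.9, 0.4],
                      [0.6, 0.2, 0.8]])
        y = np.array([1.2, 2.0, 3.6, 2.8, 3.1])   # subjective fatigue scores from viewers

        # Least-squares fit of y ~ b0 + b1*x1 + b2*x2 + b3*x3.
        A = np.hstack([np.ones((X.shape[0], 1)), X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        print("intercept and factor weights:", np.round(coef, 3))
        print("predicted fatigue for the first shot:", round(float(A[0] @ coef), 2))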

  6. Flip Video for Dummies

    CERN Document Server

    Hutsko, Joe

    2010-01-01

    The full-color guide to shooting great video with the Flip Video camera. The inexpensive Flip Video camera is currently one of the hottest must-have gadgets. It's portable and connects easily to any computer to transfer video you shoot onto your PC or Mac. Although the Flip Video camera comes with a quick-start guide, it lacks a how-to manual, and this full-color book fills that void! Packed with full-color screen shots throughout, Flip Video For Dummies shows you how to shoot the best possible footage in a variety of situations. You'll learn how to transfer video to your computer and then edit...

  7. A multiframe soft x-ray camera with fast video capture for the LSX field reversed configuration (FRC) experiment

    International Nuclear Information System (INIS)

    Crawford, E.A.

    1992-01-01

    Soft x-ray pinhole imaging has proven to be an exceptionally useful diagnostic for qualitative observation of impurity radiation from field reversed configuration plasmas. We used a four frame device, similar in design to those discussed in an earlier paper [E. A. Crawford, D. P. Taggart, and A. D. Bailey III, Rev. Sci. Instrum. 61, 2795 (1990)] as a routine diagnostic during the last six months of the Large s Experiment (LSX) program. Our camera is an improvement over earlier implementations in several significant aspects. It was designed and used from the onset of the LSX experiments with a video frame capture system so that an instant visual record of the shot was available to the machine operator as well as facilitating quantitative interpretation of intensity information recorded in the images. The camera was installed in the end region of the LSX on axis approximately 5.5 m from the plasma midplane. Experience with bolometers on LSX showed serious problems with ''particle dumps'' at the axial location at various times during the plasma discharge. Therefore, the initial implementation of the camera included an effective magnetic sweeper assembly. Overall performance of the camera, video capture system, and sweeper is discussed

  8. Virtual georeferencing : how an 11-eyed video camera helps the pipeline industry

    Energy Technology Data Exchange (ETDEWEB)

    Ball, C.

    2006-08-15

    Designed by Immersive Media Corporation (IMC) the new Telemmersion System is a lightweight camera system capable of generating synchronized high-resolution video streams that represent a full motion spherical world. With 11 cameras configured in a sphere, the system is portable and can be easily mounted on ground and air-borne vehicles for use in surveillance; integration; commanded control of interactive intelligent databases; scenario modelling; and specialized training. The system was recently used to georeference immersive video of existing and proposed pipeline routes. Footage from the system was used to supplement traditional data collection methods for environmental impact assessment; route selection and engineering; regulatory compliance; and public consultation. Traditionally, the medium used to visualize pipeline routes is a map overlaid with aerial photography. Pipeline routes are typically flown throughout the planning stages in order to give environmentalists, engineers and other personnel the opportunity to visually inspect the terrain, identify issues and make decisions. The technology has significantly reduced the costs during the planning stages of pipeline routes, as the remote footage can be stored on DVD, allowing various stakeholders and contractors to view the terrain and zoom in on particular land features. During one recent 3-day trip, 500 km of proposed pipeline was recorded using the technology. Typically, for various regulatory and environmental requirements, a half a dozen trips would have been required. 2 figs.

  9. Design and evaluation of controls for drift, video gain, and color balance in spaceborne facsimile cameras

    Science.gov (United States)

    Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.

    1973-01-01

    The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.

  10. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2016-03-01

    Full Text Available Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.
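
    The block-matching step can be illustrated with a minimal exhaustive-search sketch in Python (sum-of-absolute-differences criterion, synthetic frames); the CS recovery pipeline around it is not reproduced here.

        import numpy as np

        def block_match(ref, cur, block=8, search=4):
            """For each block of `cur`, find the offset into `ref` (within +/- `search`
            pixels) that minimises the sum of absolute differences (SAD)."""
            h, w = cur.shape
            vectors = np.zeros((h // block, w // block, 2), dtype=int)
            for by in range(0, h - block + 1, block):
                for bx in range(0, w - block + 1, block):
                    target = cur[by:by + block, bx:bx + block].astype(np.int32)
                    best_sad, best_off = None, (0, 0)
                    for dy in range(-search, search + 1):
                        for dx in range(-search, search + 1):
                            y, x = by + dy, bx + dx
                            if y < 0 or x < 0 or y + block > h or x + block > w:
                                continue
                            cand = ref[y:y + block, x:x + block].astype(np.int32)
                            sad = int(np.abs(target - cand).sum())
                            if best_sad is None or sad < best_sad:
                                best_sad, best_off = sad, (dy, dx)
                    vectors[by // block, bx // block] = best_off
            return vectors

        ref = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
        cur = np.roll(ref, shift=(2, 1), axis=(0, 1))   # global shift: 2 px down, 1 px right
        print(block_match(ref, cur)[1, 1])              # interior blocks report (-2, -1)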

  11. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.; Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce superior images than conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given and the problems and limitations introduced by noise are discussed in full. (U.K.)

  12. A solid state lightning propagation speed sensor

    Science.gov (United States)

    Mach, Douglas M.; Rust, W. David

    1989-01-01

    A device to measure the propagation speeds of cloud-to-ground lightning has been developed. The lightning propagation speed (LPS) device consists of eight solid state silicon photodetectors mounted behind precision horizontal slits in the focal plane of a 50-mm lens on a 35-mm camera. Although the LPS device produces results similar to those obtained from a streaking camera, the LPS device has the advantages of smaller size, lower cost, mobile use, and easier data collection and analysis. The maximum accuracy for the LPS is 0.2 microsec, compared with about 0.8 microsecs for the streaking camera. It is found that the return stroke propagation speed for triggered lightning is different than that for natural lightning if measurements are taken over channel segments less than 500 m. It is suggested that there are no significant differences between the propagation speeds of positive and negative flashes. Also, differences between natural and triggered dart leaders are discussed.

  13. Nyquist Sampling Theorem: Understanding the Illusion of a Spinning Wheel Captured with a Video Camera

    Science.gov (United States)

    Levesque, Luc

    2014-01-01

    Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…
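
    The aliasing effect described can be reproduced numerically: a wheel spinning at f rotations per second, filmed at fs frames per second, appears to rotate at the frequency folded into the band [-fs/2, fs/2). A small sketch with assumed numbers:

        def apparent_rotation(f_true, fs):
            """Rotation rate perceived in the sampled video (Nyquist folding)."""
            return ((f_true + fs / 2) % fs) - fs / 2   # fold into [-fs/2, fs/2)

        for f_true in (5, 25, 29, 31, 55):             # wheel rotation rates in Hz
            print(f"{f_true:>3} Hz wheel filmed at 30 fps appears to spin at "
                  f"{apparent_rotation(f_true, 30):+.1f} Hz")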

  14. Cryogenic solid Schmidt camera as a base for future wide-field IR systems

    Science.gov (United States)

    Yudin, Alexey N.

    2011-11-01

    Work is focused on study of capability of solid Schmidt camera to serve as a wide-field infrared lens for aircraft system with whole sphere coverage, working in 8-14 um spectral range, coupled with spherical focal array of megapixel class. Designs of 16 mm f/0.2 lens with 60 and 90 degrees sensor diagonal are presented, their image quality is compared with conventional solid design. Achromatic design with significantly improved performance, containing enclosed soft correcting lens behind protective front lens is proposed. One of the main goals of the work is to estimate benefits from curved detector arrays in 8-14 um spectral range wide-field systems. Coupling of photodetector with solid Schmidt camera by means of frustrated total internal reflection is considered, with corresponding tolerance analysis. The whole lens, except front element, is considered to be cryogenic, with solid Schmidt unit to be flown by hydrogen for improvement of bulk transmission.

  15. Simultaneous dual-radionuclide myocardial perfusion imaging with a solid-state dedicated cardiac camera.

    Science.gov (United States)

    Ben-Haim, Simona; Kacperski, Krzysztof; Hain, Sharon; Van Gramberg, Dean; Hutton, Brian F; Erlandsson, Kjell; Sharir, Tali; Roth, Nathaniel; Waddington, Wendy A; Berman, Daniel S; Ell, Peter J

    2010-08-01

    We compared simultaneous dual-radionuclide (DR) stress and rest myocardial perfusion imaging (MPI) with a novel solid-state cardiac camera and a conventional SPECT camera with separate stress and rest acquisitions. Of 27 consecutive patients recruited, 24 (64.5+/-11.8 years of age, 16 men) were injected with 74 MBq of (201)Tl (rest) and 250 MBq (99m)Tc-MIBI (stress). Conventional MPI acquisition times for stress and rest are 21 min and 16 min, respectively. Rest (201)Tl for 6 min and simultaneous DR 15-min list mode gated scans were performed on a D-SPECT cardiac scanner. In 11 patients DR D-SPECT was performed first and in 13 patients conventional stress (99m)Tc-MIBI SPECT imaging was performed followed by DR D-SPECT. The DR D-SPECT data were processed using a spill-over and scatter correction method. DR D-SPECT images were compared with rest (201)Tl D-SPECT and with conventional SPECT images by visual analysis employing the 17-segment model and a five-point scale (0 normal, 4 absent) to calculate the summed stress and rest scores. Image quality was assessed on a four-point scale (1 poor, 4 very good) and gut activity was assessed on a four-point scale (0 none, 3 high). Conventional MPI studies were abnormal at stress in 17 patients and at rest in 9 patients. In the 17 abnormal stress studies DR D-SPECT MPI showed 113 abnormal segments and conventional MPI showed 93 abnormal segments. In the nine abnormal rest studies DR D-SPECT showed 45 abnormal segments and conventional MPI showed 48 abnormal segments. The summed stress and rest scores on conventional SPECT and DR D-SPECT were highly correlated (r=0.9790 and 0.9694, respectively). The summed scores of rest (201)Tl D-SPECT and DR-DSPECT were also highly correlated (r=0.9968, pstress perfusion defects were significantly larger on stress DR D-SPECT images, and five of these patients were imaged earlier by D-SPECT than by conventional SPECT. Fast and high-quality simultaneous DR MPI is feasible with D-SPECT in a

  16. First results of the multi-purpose real-time processing video camera system on the Wendelstein 7-X stellarator and implications for future devices

    Science.gov (United States)

    Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team

    2018-01-01

    A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator considering multiple application needs and limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and capabilities were explored in the real environment. Data prove that the camera can be used for taking long exposure (10-100 ms) overview images of the plasma while sub-ms monitoring and even multi-camera correlated edge plasma turbulence measurements of smaller areas can be done in parallel. These latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and similar setups on future long-pulse fusion experiments such as ITER are discussed.

  17. Algorithms for the automatic identification of MARFEs and UFOs in JET database of visible camera videos

    International Nuclear Information System (INIS)

    Murari, A.; Camplani, M.; Cannas, B.; Usai, P.; Mazon, D.; Delaunay, F.

    2010-01-01

    MARFE instabilities and UFOs leave clear signatures in JET fast visible camera videos. Given the potential harmful consequences of these events, particularly as triggers of disruptions, it would be important to have the means of detecting them automatically. In this paper, the results of various algorithms to identify automatically the MARFEs and UFOs in JET visible videos are reported. The objective is to retrieve the videos, which have captured these events, exploring the whole JET database of images, as a preliminary step to the development of real-time identifiers in the future. For the detection of MARFEs, a complete identifier has been finalized, using morphological operators and Hu moments. The final algorithm manages to identify the videos with MARFEs with a success rate exceeding 80%. Due to the lack of a complete statistics of examples, the UFO identifier is less developed, but a preliminary code can detect UFOs quite reliably. (authors)
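
    A hedged sketch of the kind of shape-feature extraction mentioned (morphological cleaning followed by Hu moments) is shown below using OpenCV on a synthetic frame; it is not the JET identifier, and the threshold and kernel size are assumptions.

        import cv2
        import numpy as np

        def frame_shape_features(gray_frame, threshold=200):
            """Binarise a visible-camera frame, clean it with a morphological opening,
            and return log-scaled Hu moment invariants of the remaining bright region."""
            _, mask = cv2.threshold(gray_frame, threshold, 255, cv2.THRESH_BINARY)
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop small bright specks
            hu = cv2.HuMoments(cv2.moments(mask)).flatten()
            return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)      # common log scaling

        # Synthetic frame with a bright band near the bottom edge of the image.
        frame = np.zeros((240, 320), dtype=np.uint8)
        cv2.ellipse(frame, (160, 200), (120, 15), 0, 0, 360, 255, -1)
        print(frame_shape_features(frame))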

  18. Frequency identification of vibration signals using video camera image data.

    Science.gov (United States)

    Jeng, Yih-Nen; Wu, Chia-Hung

    2012-10-16

    This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC) can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.
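
    The core measurement (summing gray levels over a small region in every frame and reading the dominant frequency from the spectrum of that signal) can be sketched as follows with synthetic frames; camera-specific handling of aliased, non-physical modes is not included.

        import numpy as np

        def dominant_frequency(frames, fps, roi):
            """Sum gray levels inside roi = (y0, y1, x0, x1) for every frame and return
            the strongest non-DC frequency (Hz) of the resulting 1-D signal."""
            y0, y1, x0, x1 = roi
            signal = np.array([f[y0:y1, x0:x1].sum() for f in frames], dtype=np.float64)
            signal -= signal.mean()                       # remove the DC component
            spectrum = np.abs(np.fft.rfft(signal))
            freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
            return freqs[np.argmax(spectrum)]

        # Synthetic sequence: a 3 Hz brightness oscillation sampled at 60 frames/s.
        fps, seconds = 60, 4
        frames = [np.full((64, 64), 128 + 50 * np.sin(2 * np.pi * 3 * t / fps))
                  for t in range(fps * seconds)]
        print(dominant_frequency(frames, fps, (16, 48, 16, 48)), "Hz")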

  19. Frequency Identification of Vibration Signals Using Video Camera Image Data

    Directory of Open Access Journals (Sweden)

    Chia-Hung Wu

    2012-10-01

    Full Text Available This study showed that an image data acquisition system connecting a high-speed camera or webcam to a notebook or personal computer (PC can precisely capture most dominant modes of vibration signal, but may involve the non-physical modes induced by the insufficient frame rates. Using a simple model, frequencies of these modes are properly predicted and excluded. Two experimental designs, which involve using an LED light source and a vibration exciter, are proposed to demonstrate the performance. First, the original gray-level resolution of a video camera from, for instance, 0 to 256 levels, was enhanced by summing gray-level data of all pixels in a small region around the point of interest. The image signal was further enhanced by attaching a white paper sheet marked with a black line on the surface of the vibration system in operation to increase the gray-level resolution. Experimental results showed that the Prosilica CV640C CMOS high-speed camera has the critical frequency of inducing the false mode at 60 Hz, whereas that of the webcam is 7.8 Hz. Several factors were proven to have the effect of partially suppressing the non-physical modes, but they cannot eliminate them completely. Two examples, the prominent vibration modes of which are less than the associated critical frequencies, are examined to demonstrate the performances of the proposed systems. In general, the experimental data show that the non-contact type image data acquisition systems are potential tools for collecting the low-frequency vibration signal of a system.

  20. Calibration grooming and alignment for LDUA High Resolution Stereoscopic Video Camera System (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The High Resolution Stereoscopic Video Camera System (HRSVS) was designed by the Savannah River Technology Center (SRTC) to provide routine and troubleshooting views of tank interiors during characterization and remediation phases of underground storage tank (UST) processing. The HRSVS is a dual color camera system designed to provide stereo viewing of the interior of the tanks including the tank wall in a Class 1, Division 1, flammable atmosphere. The HRSVS was designed with a modular philosophy for easy maintenance and configuration modifications. During operation of the system with the LDUA, the control of the camera system will be performed by the LDUA supervisory data acquisition system (SDAS). Video and control status will be displayed on monitors within the LDUA control center. All control functions are accessible from the front panel of the control box located within the Operations Control Trailer (OCT). The LDUA will provide all positioning functions within the waste tank for the end effector. Various electronic measurement instruments will be used to perform CG and A activities. The instruments may include a digital volt meter, oscilloscope, signal generator, and other electronic repair equipment. None of these instruments will need to be calibrated beyond what comes from the manufacturer. During CG and A a temperature indicating device will be used to measure the temperature of the outside of the HRSVS from initial startup until the temperature has stabilized. This device will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing. This sensor will not need to be in calibration during CG and A but will have to have a current calibration sticker from the Standards Laboratory during any acceptance testing.

  1. Taking it all in : special camera films in 3-D

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, L.

    2006-07-15

    Details of Telemmersion, a 360-degree digital camera system designed by Immersive Media, were presented. The camera has been employed extensively in the United States for homeland security and intelligence-gathering purposes. In Canada, the cameras are now being used by the oil and gas industry. The camera has 11 lenses pointing in all directions and generates high resolution movies that can be analyzed frame-by-frame from every angle. Global positioning satellite data can be gathered during filming so that operators can pinpoint any location. The 11 video streams use more than 100 million pixels per second. After filming, the system displays synchronized, high-resolution video streams, capturing a full motion spherical world complete with directional sound. It can be viewed on a computer monitor, video screen, or head-mounted display. Pembina Pipeline Corporation recently used the Telemmersion system to plot a proposed pipeline route between Alberta's Athabasca region and Edmonton. It was estimated that more than $50,000 was saved by using the camera. The resulting video has been viewed by Pembina's engineering, environmental and geotechnical groups who were able to accurately note the route's river crossings. The cameras were also used to estimate timber salvage. Footage was then given to the operations group, to help staff familiarize themselves with the terrain, the proposed route's right-of-way, and the number of water crossings and access points. Oil and gas operators have also used the equipment on a recently acquired block of land to select well sites. 4 figs.

  2. A semantic autonomous video surveillance system for dense camera networks in Smart Cities.

    Science.gov (United States)

    Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio

    2012-01-01

    This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network.

  3. A Semantic Autonomous Video Surveillance System for Dense Camera Networks in Smart Cities

    Directory of Open Access Journals (Sweden)

    Antonio Sánchez-Esguevillas

    2012-08-01

    Full Text Available This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network.

  4. Light-reflection random-target method for measurement of the modulation transfer function of a digital video-camera

    Science.gov (United States)

    Pospisil, J.; Jakubik, P.; Machala, L.

    2005-11-01

    This article reports the suggestion, realization and verification of the newly developed measuring means of the noiseless and locally shift-invariant modulation transfer function (MTF) of a digital video camera in a usual incoherent visible region of optical intensity, especially of its combined imaging, detection, sampling and digitizing steps which are influenced by the additive and spatially discrete photodetector, aliasing and quantization noises. Such means relates to the still camera automatic working regime and static two-dimensional spatially continuous light-reflection random target of white-noise property. The introduced theoretical reason for such a random-target method is also performed under exploitation of the proposed simulation model of the linear optical intensity response and possibility to express the resultant MTF by a normalized and smoothed rate of the ascertainable output and input power spectral densities. The random-target and resultant image-data were obtained and processed by means of a processing and evaluational PC with computation programs developed on the basis of MATLAB 6.5E The present examples of results and other obtained results of the performed measurements demonstrate the sufficient repeatability and acceptability of the described method for comparative evaluations of the performance of digital video cameras under various conditions.
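
    The central relation, in which the MTF is expressed as a normalised, smoothed ratio of the output and input power spectral densities of a white-noise random target, can be illustrated in one dimension with the following sketch; the blur kernel, noise level and smoothing window are assumptions, not the measured camera.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 4096
        target = rng.standard_normal(n)                     # 1-D stand-in for the white-noise target

        # Simulated acquisition: a small blur (moving average) plus detector noise.
        kernel = np.ones(5) / 5.0
        image = np.convolve(target, kernel, mode="same") + 0.05 * rng.standard_normal(n)

        psd_in = np.abs(np.fft.rfft(target)) ** 2
        psd_out = np.abs(np.fft.rfft(image)) ** 2
        mtf = np.sqrt(psd_out / psd_in)
        mtf = np.convolve(mtf, np.ones(33) / 33, mode="same")   # smooth the noisy ratio
        mtf /= mtf[0]                                           # normalise to unity at DC

        freqs = np.fft.rfftfreq(n)                              # spatial frequency in cycles per sample
        print("estimated MTF at 0.1 cycles/sample:", round(float(np.interp(0.1, freqs, mtf)), 3))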

  5. CRED Fish Observations from Stereo Video Cameras on a SeaBED AUV collected around Tutuila, American Samoa in 2012

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Black and white imagery were collected using a stereo pair of underwater video cameras mounted on a SeaBED autonomous underwater vehicle (AUV) and deployed around...

  6. Light-reflection random-target method for measurement of the modulation transfer function of a digital video-camera

    Czech Academy of Sciences Publication Activity Database

    Pospíšil, Jaroslav; Jakubík, P.; Machala, L.

    2005-01-01

    Roč. 116, - (2005), s. 573-585 ISSN 0030-4026 Institutional research plan: CEZ:AV0Z10100522 Keywords : random-target measuring method * light-reflection white - noise target * digital video camera * modulation transfer function * power spectral density Subject RIV: BH - Optics, Masers, Lasers Impact factor: 0.395, year: 2005

  7. Video Comparator

    International Nuclear Information System (INIS)

    Rose, R.P.

    1978-01-01

    The Video Comparator is a comparative gage that uses electronic images from two sources, a standard and an unknown. Two matched video cameras are used to obtain the electronic images. The video signals are mixed and displayed on a single video receiver (CRT). The video system is manufactured by ITP of Chatsworth, CA and is a Tele-Microscope II, Model 148. One of the cameras is mounted on a toolmaker's microscope stand and produces a 250X image of a cast. The other camera is mounted on a stand and produces an image of a 250X template. The two video images are mixed in a control box provided by ITP and displayed on a CRT. The template or the cast can be moved to align the desired features. Vertical reference lines are provided on the CRT, and a feature on the cast can be aligned with a line on the CRT screen. The stage containing the casts can be moved using a Boeckleler micrometer equipped with a digital readout, and a second feature aligned with the reference line and the distance moved obtained from the digital display

  8. Hierarchical video surveillance architecture: a chassis for video big data analytics and exploration

    Science.gov (United States)

    Ajiboye, Sola O.; Birch, Philip; Chatwin, Christopher; Young, Rupert

    2015-03-01

    There is increasing reliance on video surveillance systems for systematic derivation, analysis and interpretation of the data needed for predicting, planning, evaluating and implementing public safety. This is evident from the massive number of surveillance cameras deployed across public locations. For example, in July 2013, the British Security Industry Association (BSIA) reported that over 4 million CCTV cameras had been installed in Britain alone. The BSIA also reveal that only 1.5% of these are state owned. In this paper, we propose a framework that allows access to data from privately owned cameras, with the aim of increasing the efficiency and accuracy of public safety planning, security activities, and decision support systems that are based on video integrated surveillance systems. The accuracy of results obtained from government-owned public safety infrastructure would improve greatly if privately owned surveillance systems `expose' relevant video-generated metadata events, such as triggered alerts and also permit query of a metadata repository. Subsequently, a police officer, for example, with an appropriate level of system permission can query unified video systems across a large geographical area such as a city or a country to predict the location of an interesting entity, such as a pedestrian or a vehicle. This becomes possible with our proposed novel hierarchical architecture, the Fused Video Surveillance Architecture (FVSA). At the high level, FVSA comprises of a hardware framework that is supported by a multi-layer abstraction software interface. It presents video surveillance systems as an adapted computational grid of intelligent services, which is integration-enabled to communicate with other compatible systems in the Internet of Things (IoT).

  9. State of art in radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Choi; Young Soo; Kim, Seong Ho; Cho, Jae Wan; Kim, Chang Hoi; Seo, Young Chil

    2002-02-01

    Work in radiation environments such as nuclear power plants, RI facilities, nuclear fuel fabrication facilities, and medical centers must take radiation exposure into account, and such jobs can be carried out by remote observation and operation. However, cameras built for general industry degrade under radiation, so radiation-tolerant cameras are needed for radiation environments. Applications of radiation-tolerant camera systems include the nuclear industry, radioactive medicine, aerospace, and so on. In the nuclear industry especially, there is continuous demand for inspection of nuclear boilers, exchange of pellets, and inspection of nuclear waste. Countries with developed nuclear industries have made efforts to develop radiation-tolerant cameras and now offer many kinds that can tolerate total doses of 10^6 to 10^8 rad. In this report, we survey the state of the art in radiation-tolerant cameras and analyze the underlying technology. We hope this report raises interest in developing radiation-tolerant cameras and upgrades the level of domestic technology.

  10. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many

  11. Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras

    Science.gov (United States)

    Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.

    2017-02-01

    Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock falls to the ground. The badminton robot moves quickly to the predicted landing position and hits the shuttlecock back into the opponent's side of the court. In a badminton game there is a large audience, and spectators moving behind the flying shuttlecock act as background noise that makes it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by stereo imaging with two high-speed cameras.
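
    The stereo measurement step can be illustrated with OpenCV's triangulation routine: given the shuttlecock's pixel coordinates in two synchronised cameras with known projection matrices, its 3D position follows. The matrices and coordinates below are made-up values, not the robot's calibration.

        import cv2
        import numpy as np

        # Assumed 3x4 projection matrices P = K [R | t] for the two high-speed cameras.
        K = np.array([[1200.0, 0.0, 640.0], [0.0, 1200.0, 360.0], [0.0, 0.0, 1.0]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera 1 at the origin
        P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])  # camera 2, 0.5 m baseline

        # Project a known 3D point into both views to obtain consistent pixel coordinates.
        point = np.array([[0.3], [0.2], [5.0], [1.0]])                     # homogeneous world point
        uv1 = (P1 @ point).ravel()
        uv1 = uv1[:2] / uv1[2]
        uv2 = (P2 @ point).ravel()
        uv2 = uv2[:2] / uv2[2]

        X = cv2.triangulatePoints(P1, P2, uv1.reshape(2, 1), uv2.reshape(2, 1))
        print("triangulated shuttlecock position (m):", np.round((X[:3] / X[3]).ravel(), 3))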

  12. A Method for Estimating Surveillance Video Georeferences

    Directory of Open Access Journals (Sweden)

    Aleksandar Milosavljević

    2017-07-01

    Full Text Available The integration of a surveillance camera video with a three-dimensional (3D) geographic information system (GIS) requires the georeferencing of that video. Since a video consists of separate frames, each frame must be georeferenced. To georeference a video frame, we rely on the information about the camera view at the moment that the frame was captured. A camera view in 3D space is completely determined by the camera position, orientation, and field-of-view. Since the accurate measuring of these parameters can be extremely difficult, in this paper we propose a method for their estimation based on matching video frame coordinates of certain point features with their 3D geographic locations. To obtain these coordinates, we rely on high-resolution orthophotos and digital elevation models (DEM) of the area of interest. Once an adequate number of points are matched, Levenberg–Marquardt iterative optimization is applied to find the most suitable video frame georeference, i.e., position and orientation of the camera.
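
    A hedged sketch of the pose-recovery step follows: given a handful of matched 2D frame coordinates and 3D world points, OpenCV's iterative solvePnP (which uses Levenberg-Marquardt internally) recovers the camera position and orientation. The intrinsics and the ground-truth pose are invented so the example is self-checking; this is a stand-in for, not a copy of, the authors' procedure.

        import cv2
        import numpy as np

        # Invented 3D point features (e.g., read off an orthophoto/DEM) and camera intrinsics.
        object_points = np.array([[0, 0, 0], [50, 0, 0], [50, 40, 0], [0, 40, 0],
                                  [25, 20, 10], [10, 35, 5]], dtype=np.float64)
        K = np.array([[1000.0, 0.0, 640.0],
                      [0.0, 1000.0, 360.0],
                      [0.0, 0.0, 1.0]])
        dist = np.zeros(5)                                 # assume negligible lens distortion

        # Generate consistent pixel coordinates from an assumed ground-truth pose.
        true_rvec = np.array([[0.1], [-0.2], [0.05]])
        true_tvec = np.array([[-20.0], [10.0], [80.0]])
        image_points, _ = cv2.projectPoints(object_points, true_rvec, true_tvec, K, dist)

        # Recover the pose from the 2D-3D correspondences (iterative = Levenberg-Marquardt).
        ok, rvec, tvec = cv2.solvePnP(object_points, image_points.reshape(-1, 2), K, dist,
                                      flags=cv2.SOLVEPNP_ITERATIVE)
        R, _ = cv2.Rodrigues(rvec)
        camera_position = (-R.T @ tvec).ravel()            # camera centre in world coordinates
        print("converged:", ok, "estimated camera position:", np.round(camera_position, 2))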

  13. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Full Text Available Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.

  14. Video auto stitching in multicamera surveillance system

    Science.gov (United States)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaics in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem, and not all cameras need to be calibrated, only a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then the camera pose is estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.
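
    The registration chain described (keypoint matching, homography estimation, warping prior to blending) can be sketched with OpenCV as below. ORB features are used here in place of SURF (which needs the non-free opencv-contrib build), and the file names are placeholders.

        import cv2
        import numpy as np

        img1 = cv2.imread("view_left.png", cv2.IMREAD_GRAYSCALE)    # placeholder file names
        img2 = cv2.imread("view_right.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

        src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

        # Warp the first view into the second view's frame; the overlap can then be blended.
        h, w = img2.shape
        warped = cv2.warpPerspective(img1, H, (2 * w, h))
        print("inlier matches:", int(inliers.sum()), "\nhomography:\n", np.round(H, 3))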

  15. Solid state physics

    CERN Document Server

    Burns, Gerald

    2013-01-01

    Solid State Physics, International Edition covers the fundamentals and the advanced concepts of solid state physics. The book is comprised of 18 chapters that tackle a specific aspect of solid state physics. Chapters 1 to 3 discuss the symmetry aspects of crystalline solids, while Chapter 4 covers the application of X-rays in solid state science. Chapter 5 deals with the anisotropic character of crystals. Chapters 6 to 8 talk about the five common types of bonding in solids, while Chapters 9 and 10 cover the free electron theory and band theory. Chapters 11 and 12 discuss the effects of moveme

  16. System design description for the LDUA high resolution stereoscopic video camera system (HRSVS)

    International Nuclear Information System (INIS)

    Pardini, A.F.

    1998-01-01

    The High Resolution Stereoscopic Video Camera System (HRSVS), system 6230, was designed to be used as an end effector on the LDUA to perform surveillance and inspection activities within a waste tank. It is attached to the LDUA by means of a Tool Interface Plate (TIP) which provides a feed-through for all electrical and pneumatic utilities needed by the end effector to operate. Designed to perform up-close weld and corrosion inspection roles in UST operations, the HRSVS will support and supplement the Light Duty Utility Arm (LDUA) and provide the crucial inspection tasks needed to ascertain waste tank condition.

  17. Deep video deblurring

    KAUST Repository

    Su, Shuochen

    2016-11-25

    Motion blur from camera shake is a major problem in videos captured by hand-held devices. Unlike single-image deblurring, video-based approaches can take advantage of the abundant information that exists across neighboring frames. As a result the best performing methods rely on aligning nearby frames. However, aligning images is a computationally expensive and fragile procedure, and methods that aggregate information must therefore be able to identify which regions have been accurately aligned and which have not, a task which requires high level scene understanding. In this work, we introduce a deep learning solution to video deblurring, where a CNN is trained end-to-end to learn how to accumulate information across frames. To train this network, we collected a dataset of real videos recorded with a high framerate camera, which we use to generate synthetic motion blur for supervision. We show that the features learned from this dataset extend to deblurring motion blur that arises due to camera shake in a wide range of videos, and compare the quality of results to a number of other baselines.

  18. Video Liveness for Citizen Journalism: Attacks and Defenses

    OpenAIRE

    Rahman, Mahmudur; Azimpourkivi, Mozhgan; Topkara, Umut; Carbunar, Bogdan

    2017-01-01

    The impact of citizen journalism raises important video integrity and credibility issues. In this article, we introduce Vamos, the first user transparent video "liveness" verification solution based on video motion, that accommodates the full range of camera movements, and supports videos of arbitrary length. Vamos uses the agreement between video motion and camera movement to corroborate the video authenticity. Vamos can be integrated into any mobile video capture application without requiri...

  19. Solid-state radar switchboard

    Science.gov (United States)

    Thiebaud, P.; Cross, D. C.

    1980-07-01

    A new solid-state radar switchboard equipped with 16 input ports which will output data to 16 displays is presented. Each of the ports will handle a single two-dimensional radar input, or three ports will accommodate a three-dimensional radar input. A video switch card of the switchboard is used to switch all signals, with the exception of the IFF-mode-control lines. Each card accepts inputs from up to 16 sources and can pass a signal with bandwidth greater than 20 MHz to the display assigned to that card. The synchro amplifier of current systems has been eliminated and in the new design each PPI receives radar data via a single coaxial cable. This significant reduction in cabling is achieved by adding a serial-to-parallel interface and a digital-to-synchro converter located at the PPI.

  20. Control system for gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.

    1977-01-01

    An improved gamma camera arrangement is described which utilizes a solid state detector formed of high-purity germanium. The central arrangement of the camera carries out a trapezoidal filtering operation over antisymmetrically summed spatial signals through gated integration procedures using idealized integrating intervals. By simultaneously carrying out peak energy evaluation of the input signals, a desirable control over pulse pile-up phenomena is achieved. Additionally, through the use of the time derivative of incoming pulse or signal energy information to initially enable the control system, a low-level information evaluation is provided, serving to enhance the signal processing efficiency of the camera.

  1. NV-CMOS HD camera for day/night imaging

    Science.gov (United States)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near infrared (NIR) bands (peak QE camera, which operates from a single 5V supply. The NVCMOS HD camera provides a substantial reduction in size, weight, and power (SWaP) , ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, fixed mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition the camera with the NV-CMOS HD imager is suitable for high performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  2. Video microblogging

    DEFF Research Database (Denmark)

    Bornoe, Nis; Barkhuus, Louise

    2010-01-01

    Microblogging is a recently popular phenomenon and with the increasing trend for video cameras to be built into mobile phones, a new type of microblogging has entered the arena of electronic communication: video microblogging. In this study we examine video microblogging, which is the broadcasting...... of short videos. A series of semi-structured interviews offers an understanding of why and how video microblogging is used and what the users post and broadcast....

  3. Calibration of action cameras for photogrammetric purposes.

    Science.gov (United States)

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution.
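
    In the same spirit, a minimal chessboard calibration with standard OpenCV functions might look as follows; the board size, frame directory and the plain pinhole-plus-distortion model are assumptions, not the authors' software.

        import glob
        import cv2
        import numpy as np

        pattern = (9, 6)                                  # inner corners of the printed chessboard
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

        obj_points, img_points, size = [], [], None
        for path in glob.glob("calib_frames/*.png"):      # frames grabbed in still or video mode
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
                corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
                obj_points.append(objp)
                img_points.append(corners)
                size = gray.shape[::-1]

        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
        print("RMS reprojection error (px):", rms)

        # Undistort one frame with the recovered model before photogrammetric use.
        frame = cv2.imread("calib_frames/frame_000.png")
        cv2.imwrite("frame_000_undistorted.png", cv2.undistort(frame, K, dist))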

  4. Calibration of Action Cameras for Photogrammetric Purposes

    Directory of Open Access Journals (Sweden)

    Caterina Balletti

    2014-09-01

    Full Text Available The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to use the sensor of action cameras, we must apply a careful and reliable self-calibration prior to any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and the wide-angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capture modes of a novel action camera, the GoPro Hero 3, which can provide still images of up to 12 Mp and video of up to 8 Mp resolution.

  5. Hydrogen peroxide plasma sterilization of a waterproof, high-definition video camera case for intraoperative imaging in veterinary surgery.

    Science.gov (United States)

    Adin, Christopher A; Royal, Kenneth D; Moore, Brandon; Jacob, Megan

    2018-06-13

    To evaluate the safety and usability of a wearable, waterproof high-definition camera/case for acquisition of surgical images by sterile personnel. An in vitro study to test the efficacy of biodecontamination of camera cases. Usability for intraoperative image acquisition was assessed in clinical procedures. Two waterproof GoPro Hero4 Silver camera cases were inoculated by immersion in media containing Staphylococcus pseudointermedius or Escherichia coli at ≥5.50E+07 colony forming units/mL. Cases were biodecontaminated by manual washing and hydrogen peroxide plasma sterilization. Cultures were obtained by swab and by immersion in enrichment broth before and after each contamination/decontamination cycle (n = 4). The cameras were then applied by a surgeon in clinical procedures by using either a headband or handheld mode and were assessed for usability according to 5 user characteristics. Cultures of all poststerilization swabs were negative. One of 8 cultures was positive in enrichment broth, consistent with a low level of contamination in 1 sample. Usability of the camera was considered poor in headband mode, with limited battery life, inability to control camera functions, and lack of zoom function affecting image quality. Handheld operation of the camera by the primary surgeon improved usability, allowing close-up still and video intraoperative image acquisition. Vaporized hydrogen peroxide sterilization of this camera case was considered effective for biodecontamination. Handheld operation improved usability for intraoperative image acquisition. Vaporized hydrogen peroxide sterilization and thorough manual washing of a waterproof camera may provide cost effective intraoperative image acquisition for documentation purposes. © 2018 The American College of Veterinary Surgeons.

  6. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    International Nuclear Information System (INIS)

    Anderson, Robert J.

    2014-01-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  7. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  8. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Robotic and Security Systems Dept.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn’t lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  9. Innovative Solution to Video Enhancement

    Science.gov (United States)

    2001-01-01

    Through a licensing agreement, Intergraph Government Solutions adapted a technology originally developed at NASA's Marshall Space Flight Center for enhanced video imaging by developing its Video Analyst(TM) System. Marshall's scientists developed the Video Image Stabilization and Registration (VISAR) technology to help FBI agents analyze video footage of the deadly 1996 Olympic Summer Games bombing in Atlanta, Georgia. VISAR technology enhanced nighttime videotapes made with hand-held camcorders, revealing important details about the explosion. Intergraph's Video Analyst System is a simple, effective, and affordable tool for video enhancement and analysis. The benefits associated with the Video Analyst System include support of full-resolution digital video, frame-by-frame analysis, and the ability to store analog video in digital format. Up to 12 hours of digital video can be stored and maintained for reliable footage analysis. The system also includes state-of-the-art features such as stabilization, image enhancement, and convolution to help improve the visibility of subjects in the video without altering underlying footage. Adaptable to many uses, Intergraph's Video Analyst System meets the stringent demands of the law enforcement industry in the areas of surveillance, crime scene footage, sting operations, and dash-mounted video cameras.

  10. Guerrilla Video: A New Protocol for Producing Classroom Video

    Science.gov (United States)

    Fadde, Peter; Rich, Peter

    2010-01-01

    Contemporary changes in pedagogy point to the need for a higher level of video production value in most classroom video, replacing the default video protocol of an unattended camera in the back of the classroom. The rich and complex environment of today's classroom can be captured more fully using the higher level, but still easily manageable,…

  11. The multi-camera optical surveillance system (MOS)

    International Nuclear Information System (INIS)

    Otto, P.; Wagner, H.; Richter, B.; Gaertner, K.J.; Laszlo, G.; Neumann, G.

    1991-01-01

    The transition from film camera to video surveillance systems, in particular the implementation of high capacity multi-camera video systems, results in a large increase in the amount of recorded scenes. Consequently, there is a substantial increase in the manpower requirements for review. Moreover, modern microprocessor controlled equipment facilitates the collection of additional data associated with each scene. Both the scene and the annotated information have to be evaluated by the inspector. The design of video surveillance systems for safeguards necessarily has to account for both appropriate recording and reviewing techniques. An aspect of principal importance is that the video information is stored on tape. Under the German Support Programme to the Agency a technical concept has been developed which aims at optimizing the capabilities of a multi-camera optical surveillance (MOS) system including the reviewing technique. This concept is presented in the following paper including a discussion of reviewing and reliability

  12. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    International Nuclear Information System (INIS)

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-01-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination
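
    As a rough illustration of the frame-to-frame motion estimation described above (not the authors' implementation), the sketch below tracks feature points between consecutive endoscopic frames and recovers relative rotation and translation from the essential matrix with OpenCV; the intrinsic matrix K is assumed to be known from a prior calibration.

      import cv2
      import numpy as np

      def relative_pose(prev_gray, curr_gray, K):
          """Estimate camera rotation R and (up-to-scale) translation t
          between two consecutive grayscale frames, given intrinsics K."""
          pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                             qualityLevel=0.01, minDistance=7)
          pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                         pts_prev, None)
          good = status.ravel() == 1
          p0, p1 = pts_prev[good], pts_curr[good]

          # Two-view epipolar geometry: essential matrix with RANSAC,
          # then decomposition into rotation and translation.
          E, mask = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC,
                                         prob=0.999, threshold=1.0)
          _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=mask)
          return R, t

    Chaining the per-frame poses gives a 3D camera path, and the tracked points can then be triangulated (for example with cv2.triangulatePoints) to recover the surrounding structure, in the spirit of the approach described above.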

  13. Study of recognizing multiple persons' complicated hand gestures from the video sequence acquired by a moving camera

    Science.gov (United States)

    Dan, Luo; Ohya, Jun

    2010-02-01

    Recognizing hand gestures from the video sequence acquired by a dynamic camera could be a useful interface between humans and mobile robots. We develop a state-based approach to extract and recognize hand gestures from moving-camera images. We improved the Human-Following Local Coordinate (HFLC) System, a very simple and stable method for extracting hand motion trajectories, which is obtained from the located human face, body parts, and the hand blob changing factor. A Condensation algorithm and a PCA-based algorithm were applied to recognize the extracted hand trajectories. In previous research, the Condensation-algorithm-based method was applied only to one person's hand gestures. In this paper, we propose a principal component analysis (PCA) based approach to improve the recognition accuracy. For further improvement, temporal changes in the observed hand area changing factor are utilized as new image features, stored in the database after being analyzed by PCA. Every hand gesture trajectory in the database is classified into one-hand gesture categories, two-hand gesture categories, or temporal hand blob changes. We demonstrate the effectiveness of the proposed method by conducting experiments on 45 kinds of Japanese and American Sign Language gestures obtained from 5 people. Our experimental results show that better recognition performance is obtained by the PCA-based approach than by the Condensation-algorithm-based method.
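
    A minimal sketch of the PCA-based trajectory classification idea (illustrative only, with synthetic placeholder data rather than the authors' gesture database): each trajectory is flattened into a fixed-length feature vector, projected by PCA, and classified by nearest neighbour in the reduced space.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neighbors import KNeighborsClassifier

      # Placeholder data: 45 flattened hand trajectories with 5 gesture labels.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(45, 120))          # (n_samples, n_trajectory_features)
      y = rng.integers(0, 5, size=45)         # gesture category labels

      pca = PCA(n_components=10).fit(X)
      clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X), y)

      def classify(trajectory_features):
          """Classify one flattened trajectory feature vector."""
          return clf.predict(pca.transform(trajectory_features.reshape(1, -1)))[0]

      print(classify(X[0]))                   # predicts the label of a known sample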

  14. An imaging system for a gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce better images than the conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given, and the problems and limitations introduced by noise are discussed in full. (U.K.)

  15. optical, electrical and solid state properties of nano crystalline zinc ...

    African Journals Online (AJOL)

    Vincent

    electrical conductivity decreases as the energy increases while the optical conductivity increases gradually ... reflection coatings on window glass, video screen, camera ... are used for photo-thermal-devices. .... Transmission measurements were performed at normal ... The absorption coefficient (α) was determined from the.

  16. Patterned Video Sensors For Low Vision

    Science.gov (United States)

    Juday, Richard D.

    1996-01-01

    Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns to compensate partly for some visual defects are proposed. The cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units, would restore some visual function in humans whose visual fields are reduced by defects like retinitis pigmentosa.

  17. Solid State Division

    International Nuclear Information System (INIS)

    Green, P.H.; Watson, D.M.

    1989-08-01

    This report contains brief discussions on work done in the Solid State Division of Oak Ridge National Laboratory. The topics covered are: Theoretical Solid State Physics; Neutron scattering; Physical properties of materials; The synthesis and characterization of materials; Ion beam and laser processing; and Structure of solids and surfaces

  18. Solid State Division

    Energy Technology Data Exchange (ETDEWEB)

    Green, P.H.; Watson, D.M. (eds.)

    1989-08-01

    This report contains brief discussions on work done in the Solid State Division of Oak Ridge National Laboratory. The topics covered are: Theoretical Solid State Physics; Neutron scattering; Physical properties of materials; The synthesis and characterization of materials; Ion beam and laser processing; and Structure of solids and surfaces. (LSP)

  19. Automatic video segmentation employing object/camera modeling techniques

    NARCIS (Netherlands)

    Farin, D.S.

    2005-01-01

    Practically established video compression and storage techniques still process video sequences as rectangular images without further semantic structure. However, humans watching a video sequence immediately recognize acting objects as semantic units. This semantic object separation is currently not

  20. Hardware accelerator design for tracking in smart camera

    Science.gov (United States)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2011-10-01

    Smart cameras are important components in video analysis. For video analysis, a smart camera needs to detect interesting moving objects, track such objects from frame to frame, and analyze the object tracks in real time. Therefore, real-time tracking is prominent in smart cameras. A software implementation of the tracking algorithm on a general-purpose processor (such as a PowerPC) achieves a low frame rate, far from real-time requirements. This paper presents a SIMD-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250x200-resolution grayscale video.

  1. A low-cost, high-resolution, video-rate imaging optical radar

    Energy Technology Data Exchange (ETDEWEB)

    Sackos, J.T.; Nellums, R.O.; Lebien, S.M.; Diegert, C.F. [Sandia National Labs., Albuquerque, NM (United States); Grantham, J.W.; Monson, T. [Air Force Research Lab., Eglin AFB, FL (United States)

    1998-04-01

    Sandia National Laboratories has developed a unique type of portable low-cost range imaging optical radar (laser radar or LADAR). This innovative sensor is comprised of an active floodlight scene illuminator and an image intensified CCD camera receiver. It is a solid-state device (no moving parts) that offers significant size, performance, reliability, and simplicity advantages over other types of 3-D imaging sensors. This unique flash LADAR is based on low cost, commercially available hardware, and is well suited for many government and commercial uses. This paper presents an update of Sandia's development of the Scannerless Range Imager technology and applications, and discusses the progress that has been made in evolving the sensor into a compact, low-cost, high-resolution, video-rate Laser Dynamic Range Imager.

  2. Beyond rain: Advances in measurements of solid or mixed phase precipitation using a 2D-Video-Distrometer

    Science.gov (United States)

    Schwinzerl, Martin; Schönhuber, Michael; Lammer, Günter

    2015-04-01

    The requirement to estimate for each individual hydrometeor precipitation parameters such as shape, equi-voluminous diameter, fall velocity, height to width ratio, and canting angle gave rise to the development of the family of 2D-Video-Distrometer (2DVD) measurement devices. The measurement principle of the 2DVD is based upon the ability to acquire a side and front view of each particle by virtue of two orthogonally arranged high-speed line-scan cameras. The cameras are displaced vertically from each other by a precisely determined distance in the ballpark of 6 mm, thus allowing the estimation of the vertical fall velocity in-situ on a per-particle basis. The geometrical and velocity information, sampled over a measurement surface of approx. 100 x 100 mm in this way, is then used to derive observables like rain rate and the accumulated equivalent amount of precipitation with a high degree of statistical relevance. One of the biggest assets of this measurement principle is the ability to perform measurements without relying on any externally provided model or phenomenological relationship between observables like particle shape and velocity. For liquid precipitation in the form of natural rain, this allows one to verify, for example, whether established relationships - such as the tabulated values for diameter vs. vertical velocity provided by Gunn & Kinzer - can be reproduced in sampled datasets. For mixed-phase and solid precipitation, different types of hydrometeors - for example, different snowflake families, hail, and graupel - yield very diverse results with respect to expected fall velocity, oblateness, or general shape for a given diameter class, depending on parameters such as the water content and therefore, in turn, the density of the particle. The ability of the 2DVD to capture these parameters directly, without reliance on externally provided relationships, has contributed to the attractiveness of this measurement device for in

  3. Advanced real-time manipulation of video streams

    CERN Document Server

    Herling, Jan

    2014-01-01

    Diminished Reality is a fascinating new technology that removes real-world content from live video streams. This sensational live video manipulation actually removes real objects and generates a coherent video stream in real time. Viewers cannot detect the modified content. Existing approaches are restricted to moving objects and static or almost static cameras and do not allow real-time manipulation of video content. Jan Herling presents a new and innovative approach for real-time object removal with arbitrary camera movements.

  4. An integrated port camera and display system for laparoscopy.

    Science.gov (United States)

    Terry, Benjamin S; Ruppert, Austin D; Steinhaus, Kristen R; Schoen, Jonathan A; Rentschler, Mark E

    2010-05-01

    In this paper, we built and tested the port camera, a novel, inexpensive, portable, and battery-powered laparoscopic tool that integrates the components of a vision system with a cannula port. This new device 1) minimizes the invasiveness of laparoscopic surgery by combining a camera port and tool port; 2) reduces the cost of laparoscopic vision systems by integrating an inexpensive CMOS sensor and LED light source; and 3) enhances laparoscopic surgical procedures by mechanically coupling the camera, tool port, and liquid crystal display (LCD) screen to provide an on-patient visual display. The port camera video system was compared to two laparoscopic video systems: a standard resolution unit from Karl Storz (model 22220130) and a high definition unit from Stryker (model 1188HD). Brightness, contrast, hue, colorfulness, and sharpness were compared. The port camera video is superior to the Storz scope and approximately equivalent to the Stryker scope. An ex vivo study was conducted to measure the operative performance of the port camera. The results suggest that simulated tissue identification and biopsy acquisition with the port camera is as efficient as with a traditional laparoscopic system. The port camera was successfully used by a laparoscopic surgeon for exploratory surgery and liver biopsy during a porcine surgery, demonstrating initial surgical feasibility.

  5. Automatic mashup generation of multiple-camera videos

    NARCIS (Netherlands)

    Shrestha, P.

    2009-01-01

    The amount of user generated video content is growing enormously with the increase in availability and affordability of technologies for video capturing (e.g. camcorders, mobile-phones), storing (e.g. magnetic and optical devices, online storage services), and sharing (e.g. broadband internet,

  6. Theoretical solid state physics

    CERN Document Server

    Haug, Albert

    2013-01-01

    Theoretical Solid State Physics, Volume 1 focuses on the study of solid state physics. The volume first takes a look at the basic concepts and structures of solid state physics, including potential energies of solids, concept and classification of solids, and crystal structure. The book then explains single-electron approximation wherein the methods for calculating energy bands; electron in the field of crystal atoms; laws of motion of the electrons in solids; and electron statistics are discussed. The text describes general forms of solutions and relationships, including collective electron i

  7. Gamma ray camera

    International Nuclear Information System (INIS)

    Wang, S.-H.; Robbins, C.D.

    1979-01-01

    An Anger gamma ray camera is improved by the substitution of a gamma ray sensitive, proximity type image intensifier tube for the scintillator screen in the Anger camera. The image intensifier tube has a negatively charged flat scintillator screen, a flat photocathode layer, and a grounded, flat output phosphor display screen, all of which have the same dimension to maintain unit image magnification; all components are contained within a grounded metallic tube, with a metallic, inwardly curved input window between the scintillator screen and a collimator. The display screen can be viewed by an array of photomultipliers or solid state detectors. There are two photocathodes and two phosphor screens to give a two stage intensification, the two stages being optically coupled by a light guide. (author)

  8. Retail video analytics: an overview and survey

    Science.gov (United States)

    Connell, Jonathan; Fan, Quanfu; Gabbur, Prasad; Haas, Norman; Pankanti, Sharath; Trinh, Hoang

    2013-03-01

    Today retail video analytics has gone beyond the traditional domain of security and loss prevention by providing retailers insightful business intelligence such as store traffic statistics and queue data. Such information allows for enhanced customer experience, optimized store performance, reduced operational costs, and ultimately higher profitability. This paper gives an overview of various camera-based applications in retail as well as the state-of-the-art computer vision techniques behind them. It also presents some of the promising technical directions for exploration in retail video analytics.

  9. Video Surveillance in Mental Health Facilities: Is it Ethical?

    Science.gov (United States)

    Stolovy, Tali; Melamed, Yuval; Afek, Arnon

    2015-05-01

    Video surveillance is a tool for managing safety and security within public spaces. In mental health facilities, the major benefit of video surveillance is that it enables 24 hour monitoring of patients, which has the potential to reduce violent and aggressive behavior. The major disadvantage is that such observation is by nature intrusive. It diminishes privacy, a factor of huge importance for psychiatric inpatients. Thus, an ongoing debate has developed following the increasing use of cameras in this setting. This article presents the experience of a medium-large academic state hospital that uses video surveillance, and explores the various ethical and administrative aspects of video surveillance in mental health facilities.

  10. Solid state radiation dosimetry

    International Nuclear Information System (INIS)

    Moran, P.R.

    1976-01-01

    Important recent developments provide accurate, sensitive, and reliable radiation measurements by using solid state radiation dosimetry methods. A review of the basic phenomena, devices, practical limitations, and categories of solid state methods is presented. The primary focus is upon the general physics underlying radiation measurements with solid state devices

  11. Luminescence and the solid state

    CERN Document Server

    Ropp, Richard C

    2013-01-01

    Since the discovery of the transistor in 1948, the study of the solid state has been burgeoning. Recently, cold fusion and the ceramic superconductor have given cause for excitement. There are two approaches possible to this area of science, namely, that of solid state physics and solid state chemistry, although both overlap extensively. The former is more concerned with electronic states in solids (including electromagnetics) whereas the latter is more concerned with interactions of atoms in solids. The area of solid state physics is well documented, however, there are very few texts which de

  12. Estimating Residual Solids Volume In Underground Storage Tanks

    International Nuclear Information System (INIS)

    Clark, Jason L.; Worthy, S. Jason; Martin, Bruce A.; Tihey, John R.

    2014-01-01

    The Savannah River Site liquid waste system consists of multiple facilities to safely receive, store, treat, and permanently dispose of legacy radioactive waste. The large underground storage tanks and associated equipment, known as the 'tank farms', include a complex interconnected transfer system of underground transfer pipelines and ancillary equipment to direct the flow of waste. The waste in the tanks is present in three forms: supernatant, sludge, and salt. The supernatant is a multi-component aqueous mixture, while sludge is a gel-like substance which consists of insoluble solids and entrapped supernatant. The waste from these tanks is retrieved and treated as sludge or salt. The high-level (radioactive) fraction of the waste is vitrified into a glass waste form, while the low-level waste is immobilized in a cementitious grout waste form called saltstone. Once the waste is retrieved and processed, the tanks are closed via removing the bulk of the waste, chemical cleaning, heel removal, stabilizing remaining residuals with tailored grout formulations, and severing/sealing external penetrations. The comprehensive liquid waste disposition system, currently managed by Savannah River Remediation, consists of (1) safe storage and retrieval of the waste as it is prepared for permanent disposition; (2) definition of the waste processing techniques utilized to separate the high-level waste fraction from the low-level waste fraction; (3) disposition of LLW in saltstone; (4) disposition of the HLW in glass; and (5) closure state of the facilities, including tanks. This paper focuses on determining the effectiveness of waste removal campaigns through monitoring the volume of residual solids in the waste tanks. Volume estimates of the residual solids are performed by creating a map of the residual solids on the waste tank bottom using video and still digital images. The map is then used to calculate the volume of solids remaining in the waste tank. The ability to

  13. Detecting fire in video stream using statistical analysis

    Directory of Open Access Journals (Sweden)

    Koplík Karel

    2017-01-01

    Full Text Available Real-time fire detection in video streams is one of the most interesting problems in computer vision. In most cases it would be desirable to have the fire detection algorithm implemented in ordinary industrial cameras, or to be able to replace standard industrial cameras with ones implementing the fire detection algorithm. In this paper, we present a new algorithm for detecting fire in video. The algorithm is based on tracking suspicious regions in time and statistically analyzing their trajectories. False alarms are minimized by combining multiple detection criteria: pixel brightness, trajectories of suspicious regions for evaluating the characteristic fire flickering, and persistence of the alarm state across a sequence of frames. The resulting implementation is fast and can therefore run on a wide range of affordable hardware.
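
    A much-simplified sketch of this kind of multi-criteria logic - brightness/colour masking, temporal variation of the candidate region as a stand-in for flicker, and persistence of the alarm state across frames - is shown below; all thresholds are assumptions, and this is not the paper's algorithm.

      import collections
      import cv2
      import numpy as np

      area_history = collections.deque(maxlen=30)   # recent candidate-region sizes
      alarm_frames = 0

      def process_frame(frame_bgr, persistence=10):
          """Return True when a fire alarm should be raised for this frame."""
          global alarm_frames
          hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
          # Crude fire-like colour/brightness mask (threshold values assumed).
          mask = cv2.inRange(hsv, (0, 80, 200), (35, 255, 255))
          area = int(mask.sum() // 255)
          area_history.append(area)

          # Temporal variation of the region size approximates fire flicker.
          flicker = np.std(area_history) > 0.1 * (np.mean(area_history) + 1)
          candidate = area > 200 and flicker
          alarm_frames = alarm_frames + 1 if candidate else 0
          return alarm_frames >= persistence    # alarm only if the state persists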

  14. Two-Stage Classification Approach for Human Detection in Camera Video in Bulk Ports

    Directory of Open Access Journals (Sweden)

    Mi Chao

    2015-09-01

    Full Text Available With the development of automation in ports, video surveillance systems with automated human detection have begun to be applied in open-air handling operation areas for safety and security. The accuracy of traditional human detection based on the video camera is not high enough to meet the requirements of operation surveillance. One of the key reasons is that Histograms of Oriented Gradients (HOG) features of the human body differ greatly between front- and back-standing (F&B) and side-standing (Side) bodies. Therefore, when HOG features extracted directly from samples of different postures are used, training a single classifier yields only a few useful features that contribute to classification, which is insufficient to support effective classification. This paper proposes a two-stage classification method to improve the accuracy of human detection. In the first, preprocessing stage, images are divided into possible F&B human bodies and non-F&B regions; the latter are then passed to the second-stage classification, which distinguishes side-standing humans from non-humans. The experimental results in Tianjin port show that the two-stage classifier can noticeably improve the classification accuracy of human detection.
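
    The two-stage idea can be sketched as a small HOG + linear-SVM cascade. This is illustrative only; the training data below are random placeholders rather than port imagery, and the window size and HOG parameters are assumptions.

      import numpy as np
      from skimage.feature import hog
      from sklearn.svm import LinearSVC

      def hog_features(patch):
          """HOG descriptor for a grayscale patch resized to the detection window."""
          return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2))

      n_feat = hog_features(np.zeros((128, 64))).size
      rng = np.random.default_rng(1)
      # Placeholder training sets: stage 1 = F&B human vs. everything else,
      # stage 2 = side-standing human vs. non-human.
      stage1 = LinearSVC().fit(rng.normal(size=(40, n_feat)), rng.integers(0, 2, 40))
      stage2 = LinearSVC().fit(rng.normal(size=(40, n_feat)), rng.integers(0, 2, 40))

      def is_human(patch):
          f = hog_features(patch).reshape(1, -1)
          if stage1.predict(f)[0] == 1:        # likely front/back-standing body
              return True
          return stage2.predict(f)[0] == 1     # otherwise test for side-standing body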

  15. Toward 1-mm depth precision with a solid state full-field range imaging system

    Science.gov (United States)

    Dorrington, Adrian A.; Carnegie, Dale A.; Cree, Michael J.

    2006-02-01

    Previously, we demonstrated a novel heterodyne-based solid-state full-field range-finding imaging system. This system comprises modulated LED illumination, a modulated image intensifier, and a digital video camera. A 10 MHz drive is provided, with a 1 Hz difference between the LEDs and the image intensifier. A sequence of images of the resulting beating intensifier output is captured and processed to determine phase, and hence distance to the object, for each pixel. In a previous publication, we detailed results showing a one-sigma precision of 15 mm to 30 mm (depending on signal strength). Furthermore, we identified the limitations of the system and potential improvements that were expected to result in a range precision on the order of 1 mm. These primarily include increasing the operating frequency and improving optical coupling and sensitivity. In this paper, we report on the implementation of these improvements and the new system characteristics. We also comment on the factors that are important for high-precision image ranging and present configuration strategies for best performance. Ranging with sub-millimeter precision is demonstrated by imaging a planar surface and calculating the deviations from a planar fit. The results are also illustrated graphically by imaging a garden gnome.
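
    The per-pixel processing can be illustrated with a small numeric sketch: the phase of the low-frequency beat signal is recovered from the frame stack (here via the corresponding DFT bin, which is one way to do it) and converted to distance using the modulation frequency. The numbers below are assumptions for illustration, not the published system parameters.

      import numpy as np

      C = 299_792_458.0        # speed of light (m/s)
      F_MOD = 10e6             # illumination/intensifier modulation frequency (Hz)
      BEAT_CYCLES = 1          # whole beat cycles covered by the frame stack

      def range_image(frames):
          """frames: (n_frames, H, W) intensity stack of the beating output."""
          n = frames.shape[0]
          t = np.arange(n)
          basis = np.exp(-2j * np.pi * BEAT_CYCLES * t / n)    # DFT bin of the beat
          phasor = np.tensordot(basis, frames, axes=(0, 0))    # per-pixel complex amplitude
          phase = np.mod(np.angle(phasor), 2 * np.pi)
          return phase * C / (4 * np.pi * F_MOD)               # phase -> distance

      # Synthetic check: 100 frames of a flat target 2.0 m away.
      t = np.arange(100)[:, None, None]
      true_phase = 4 * np.pi * F_MOD * 2.0 / C
      frames = 100 + 50 * np.cos(2 * np.pi * t / 100 + true_phase) * np.ones((1, 4, 4))
      print(range_image(frames).mean())                        # approximately 2.0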

  16. Distributed embedded smart cameras architectures, design and applications

    CERN Document Server

    Velipasalar, Senem

    2014-01-01

    This publication addresses distributed embedded smart cameras - cameras that perform onboard analysis and collaborate with other cameras. This book provides the material required to better understand the architectural design challenges of embedded smart camera systems, the hardware/software ecosystem, the design approach for, and applications of distributed smart cameras together with the state-of-the-art algorithms. The authors concentrate on the architecture, hardware/software design, and realization of smart camera networks from applications to architectures, in particular in the embedded and mobile domains. • Examines energy issues related to wireless communication, such as decreasing energy consumption to increase battery life • Discusses processing large volumes of video data in an embedded environment in real time • Covers design of realistic applications of distributed and embedded smart...

  17. Computing camera heading: A study

    Science.gov (United States)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even in internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard. This is because rotations and translations can have similar effects on the images, and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computation trouble spots beforehand, and to design reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.

  18. Citizen Camera-Witnessing: A Case Study of the Umbrella Movement

    Directory of Open Access Journals (Sweden)

    Wai Han Lo

    2016-08-01

    Full Text Available Citizen camera-witnessing is a new concept describing the use of mobile camera phones to engage in civic expression. I argue that the meaning of this concept should not be limited to painful testimony; instead, it is a mode of civic camera-mediated mass self-testimony to brutality. The use of mobile phone recordings in Hong Kong's Umbrella Movement is examined to understand how mobile cameras are employed as personal witnessing devices to provide recordings that indict unjust events and engage others in the civic movement. This study examined the Facebook posts and YouTube videos of the Umbrella Movement between September 22, 2014 and December 22, 2014. The results suggest that the camera phone not only contributes to witnessing the brutal repression of the state, but also witnesses the beauty of the movement and provides a testimony that allows rituals to develop and semi-codes to be transformed.

  19. Forming a three-dimensional porous organic network via solid-state explosion of organic single crystals.

    Science.gov (United States)

    Bae, Seo-Yoon; Kim, Dongwook; Shin, Dongbin; Mahmood, Javeed; Jeon, In-Yup; Jung, Sun-Min; Shin, Sun-Hee; Kim, Seok-Jin; Park, Noejung; Lah, Myoung Soo; Baek, Jong-Beom

    2017-11-17

    Solid-state reaction of organic molecules holds a considerable advantage over liquid-phase processes in the manufacturing industry. However, the research progress in exploring this benefit is largely staggering, which leaves few liquid-phase systems to work with. Here, we show a synthetic protocol for the formation of a three-dimensional porous organic network via solid-state explosion of organic single crystals. The explosive reaction is realized by the Bergman reaction (cycloaromatization) of three enediyne groups on 2,3,6,7,14,15-hexaethynyl-9,10-dihydro-9,10-[1,2]benzenoanthracene. The origin of the explosion is systematically studied using single-crystal X-ray diffraction and differential scanning calorimetry, along with high-speed camera and density functional theory calculations. The results suggest that the solid-state explosion is triggered by an abrupt change in lattice energy induced by release of primer molecules in the 2,3,6,7,14,15-hexaethynyl-9,10-dihydro-9,10-[1,2]benzenoanthracene crystal lattice.

  20. Head-camera video recordings of trauma core competency procedures can evaluate surgical resident's technical performance as well as colocated evaluators.

    Science.gov (United States)

    Mackenzie, Colin F; Pasley, Jason; Garofalo, Evan; Shackelford, Stacy; Chen, Hegang; Longinaker, Nyaradzo; Granite, Guinevere; Pugh, Kristy; Hagegeorge, George; Tisherman, Samuel A

    2017-07-01

    Unbiased evaluation of trauma core competency procedures is necessary to determine if residency and predeployment training courses are useful. We tested whether a previously validated individual procedure score (IPS) for individual procedure vascular exposure and fasciotomy (FAS) performance skills could discriminate training status by comparing IPS of evaluators colocated with surgeons to blind video evaluations. Performance of axillary artery (AA), brachial artery (BA), and femoral artery (FA) vascular exposures and lower extremity FAS on fresh cadavers by 40 PGY-2 to PGY-6 residents was video-recorded from head-mounted cameras. Two colocated trained evaluators assessed IPS before and after training. One surgeon in each pretraining tertile of IPS for each procedure was randomly identified for blind video review. The same 12 surgeons were video-recorded repeating the procedures less than 4 weeks after training. Five evaluators independently reviewed all 96 randomly arranged deidentified videos. Inter-rater reliability/consistency, intraclass correlation coefficients were compared by colocated versus video review of IPS, and errors. Study methodology and bias were judged by Medical Education Research Study Quality Instrument and the Quality Assessment of Diagnostic Accuracy Studies criteria. There were no differences (p ≥ 0.5) in IPS for AA, FA, FAS, whether evaluators were colocated or reviewed video recordings. Evaluator consistency was 0.29 (BA) - 0.77 (FA). Video and colocated evaluators were in total agreement (p = 1.0) for error recognition. Intraclass correlation coefficient was 0.73 to 0.92, dependent on procedure. Correlations video versus colocated evaluations were 0.5 to 0.9. Except for BA, blinded video evaluators discriminated (p competency. Prognostic study, level II.

  1. Geographic Video 3d Data Model And Retrieval

    Science.gov (United States)

    Han, Z.; Cui, C.; Kong, Y.; Wu, H.

    2014-04-01

    Geographic video includes both spatial and temporal geographic features acquired through ground-based or non-ground-based cameras. With the popularity of video capture devices such as smartphones, the volume of user-generated geographic video clips has grown significantly, and this growth is quickly accelerating. Such a massive and increasing volume poses a major challenge to efficient video management and query. Most of today's video management and query techniques are based on signal-level content extraction; they are not able to fully utilize the geographic information of the videos. This paper introduces a geographic video 3D data model based on spatial information. The main idea of the model is to utilize the location, trajectory, and azimuth information acquired by sensors such as GPS receivers and 3D electronic compasses in conjunction with the video content. The raw spatial information is synthesized into points, lines, polygons, and solids according to camcorder parameters such as focal length and angle of view. For video segments and video frames, we defined three categories of geometry objects using the geometry model of the OGC Simple Features Specification for SQL. Video can then be queried by computing the spatial relations between query objects and these geometry objects, such as VFLocation, VSTrajectory, VSFOView, and VFFovCone. We designed the query methods in detail using the Structured Query Language (SQL). The experiments indicate that the model is a multi-purpose, integrated, loosely coupled, flexible, and extensible data model for the management of geographic stereo video.
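
    As an informal illustration of the spatial-relation queries described above (not the paper's SQL implementation), the sketch below builds a simple 2D field-of-view polygon from camera location, azimuth, and angle of view, and tests whether a query location falls inside it; the shapely library stands in here for the OGC/SQL geometry engine, and all numbers are assumptions.

      import math
      from shapely.geometry import Point, Polygon

      def fov_polygon(cam_xy, azimuth_deg, angle_of_view_deg, depth):
          """Triangular 2D approximation of a camera's field of view."""
          cx, cy = cam_xy
          half = math.radians(angle_of_view_deg / 2.0)
          az = math.radians(azimuth_deg)
          left = (cx + depth * math.sin(az - half), cy + depth * math.cos(az - half))
          right = (cx + depth * math.sin(az + half), cy + depth * math.cos(az + half))
          return Polygon([cam_xy, left, right])

      # Does a frame shot at (0, 0), facing north-east, see the point (30, 30)?
      fov = fov_polygon((0.0, 0.0), azimuth_deg=45.0, angle_of_view_deg=60.0, depth=100.0)
      print(fov.contains(Point(30.0, 30.0)))    # True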

  2. Distributed Real-Time Embedded Video Processing

    National Research Council Canada - National Science Library

    Lv, Tiehan

    2004-01-01

    .... A deployable multi-camera video system must perform distributed computation, including computation near the camera as well as remote computations, in order to meet performance and power requirements...

  3. Solid-state laser engineering

    CERN Document Server

    Koechner, Walter

    1999-01-01

    Solid-State Laser Engineering, written from an industrial perspective, discusses in detail the characteristics, design, construction, and performance of solid-state lasers. Emphasis is placed on engineering and practical considerations; phenomenological aspects using models are preferred to abstract mathematical derivations. This new edition has extensively been updated to account for recent developments in the areas of diode-laser pumping, laser materials, and nonlinear crystals. Walter Koechner received a doctorate in Electrical Engineering from the University of Technology in Vienna, Austria, in 1965. He has published numerous papers in the fields of solid-state physics, optics, and lasers. Dr. Koechner is founder and president of Fibertek, Inc., a research firm specializing in the design, development, and production of advanced solid-state lasers, optical radars, and remote-sensing systems.

  4. A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.

    Science.gov (United States)

    Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C

    2017-02-07

    The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro™ HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.

  5. Understanding solid state physics

    CERN Document Server

    Holgate, Sharon Ann

    2009-01-01

    Where Sharon Ann Holgate has succeeded in this book is in packing it with examples of the application of solid state physics to technology. … All the basic elements of solid state physics are covered … . The range of materials is good, including as it does polymers and glasses as well as crystalline solids. In general, the style makes for easy reading. … Overall this book succeeds in showing the relevance of solid state physics to the modern world … . -Contemporary Physics, Vol. 52, No. 2, 2011. I was indeed amused and inspired by the wonderful images throughout the book, carefully selected by th

  6. High resolution RGB color line scan camera

    Science.gov (United States)

    Lynch, Theodore E.; Huettig, Fred

    1998-04-01

    This paper describes a color line scan camera family which is available with 6000, 8000, or 10000 pixels per color channel, utilizes off-the-shelf lenses, interfaces with currently available frame grabbers, includes on-board pixel-by-pixel offset correction, and is configurable and controllable via an RS232 serial port for computer-controlled or stand-alone operation. The line scan camera is based on an available 8000-element monochrome line scan camera designed by AOA for OEM use. The new color version includes improvements such as better packaging and additional user features which make the camera easier to use. The heart of the camera is a tri-linear CCD sensor with on-chip color balancing for maximum accuracy and pinned photodiodes for low-lag response. Each color channel is digitized to 12 bits, and all three channels are multiplexed together so that the resulting camera output video is either a 12- or 8-bit data stream at a rate of up to 24 Megapixels/sec. Conversion from 12 to 8 bits, or user-defined gamma, is accomplished by on-board user-defined video look-up tables. The camera has two user-selectable operating modes: a low-speed, high-sensitivity mode or a high-speed, reduced-sensitivity mode. The intended uses of the camera include industrial inspection, digital archiving, document scanning, and graphic arts applications.
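
    The 12-to-8-bit conversion via a user-defined look-up table can be sketched in a few lines; the gamma value here is an assumption, and the table is merely illustrative of what such an on-board LUT computes.

      import numpy as np

      def gamma_lut_12_to_8(gamma=2.2):
          """Map 12-bit input codes (0..4095) to gamma-corrected 8-bit output."""
          codes = np.arange(4096) / 4095.0
          return np.round(255.0 * codes ** (1.0 / gamma)).astype(np.uint8)

      lut = gamma_lut_12_to_8()
      samples_12bit = np.array([0, 1024, 2048, 4095])
      print(lut[samples_12bit])                 # corresponding 8-bit output codes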

  7. Multimodal Semantics Extraction from User-Generated Videos

    Directory of Open Access Journals (Sweden)

    Francesco Cricri

    2012-01-01

    Full Text Available User-generated video content has grown tremendously fast, to the point of outpacing professional content creation. In this work we develop methods that analyze contextual information of multiple user-generated videos in order to obtain semantic information about public happenings (e.g., sport and live music events) being recorded in these videos. One of the key contributions of this work is a joint utilization of different data modalities, including data captured by auxiliary sensors during the video recording performed by each user. In particular, we analyze GPS data, magnetometer data, accelerometer data, and video- and audio-content data. We use these data modalities to infer information about the event being recorded, in terms of layout (e.g., stadium), genre, indoor versus outdoor scene, and the main area of interest of the event. Furthermore, we propose a method that automatically identifies the optimal set of cameras to be used in a multicamera video production. Finally, we detect the camera users who fall within the field of view of other cameras recording at the same public happening. We show that the proposed multimodal analysis methods perform well on various recordings obtained in real sport events and live music performances.

  8. Solid-state laser engineering

    CERN Document Server

    Koechner, Walter

    1996-01-01

    Solid-State Laser Engineering, written from an industrial perspective, discusses in detail the characteristics, design, construction, and performance of solid-state lasers. Emphasis is placed on engineering and practical considerations; phenomenological aspects using models are preferred to abstract mathematical derivations. This new edition has extensively been updated to account for recent developments in the areas of diode-laser pumping, mode locking, ultrashort-pulse generation etc. Walter Koechner received a doctorate in Electrical Engineering from the University of Technology in Vienna, Austria, in 1965. He has published numerous papers in the fields of solid-state physics, optics, and lasers. Dr. Koechner is founder and president of Fibertek, Inc., a research firm specializing in the design, development, and production of advanced solid-state lasers, optical radars, and remote-sensing systems.

  9. Video content analysis on body-worn cameras for retrospective investigation

    NARCIS (Netherlands)

    Bouma, H.; Baan, J.; Haar, F.B. ter; Eendebak, P.T.; Hollander, R.J.M. den; Burghouts, G.J.; Wijn, R.; Broek, S.P. van den; Rest, J.H.C. van

    2015-01-01

    In the security domain, cameras are important to assess critical situations. Apart from fixed surveillance cameras we observe an increasing number of sensors on mobile platforms, such as drones, vehicles and persons. Mobile cameras allow rapid and local deployment, enabling many novel applications

  10. Key Issues in Modeling of Complex 3D Structures from Video Sequences

    Directory of Open Access Journals (Sweden)

    Shengyong Chen

    2012-01-01

    Full Text Available Construction of three-dimensional structures from video sequences has wide applications for intelligent video analysis. This paper summarizes the key issues of the theory and surveys the recent advances in the state of the art. Reconstruction of a scene object from video sequences often takes the basic principle of structure from motion with an uncalibrated camera. This paper lists the typical strategies and summarizes the typical solutions or algorithms for modeling of complex three-dimensional structures. Open difficult problems are also suggested for further study.

  11. Direct measurement of erythrocyte deformability in diabetes mellitus with a transparent microchannel capillary model and high-speed video camera system.

    Science.gov (United States)

    Tsukada, K; Sekizuka, E; Oshio, C; Minamitani, H

    2001-05-01

    To measure erythrocyte deformability in vitro, we made transparent microchannels on a crystal substrate as a capillary model. We observed axisymmetrically deformed erythrocytes and defined a deformation index directly from individual flowing erythrocytes. By appropriate choice of channel width and erythrocyte velocity, we could observe erythrocytes deforming to a parachute-like shape similar to that occurring in capillaries. The flowing erythrocytes magnified 200-fold through microscopy were recorded with an image-intensified high-speed video camera system. The sensitivity of deformability measurement was confirmed by comparing the deformation index in healthy controls with erythrocytes whose membranes were hardened by glutaraldehyde. We confirmed that the crystal microchannel system is a valuable tool for erythrocyte deformability measurement. Microangiopathy is a characteristic complication of diabetes mellitus. A decrease in erythrocyte deformability may be part of the cause of this complication. In order to identify the difference in erythrocyte deformability between control and diabetic erythrocytes, we measured erythrocyte deformability using transparent crystal microchannels and a high-speed video camera system. The deformability of diabetic erythrocytes was indeed measurably lower than that of erythrocytes in healthy controls. This result suggests that impaired deformability in diabetic erythrocytes can cause altered viscosity and increase the shear stress on the microvessel wall. Copyright 2001 Academic Press.
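
    One common way to quantify such a deformation index from a segmented cell is the elongation measure (L - W) / (L + W) of the deformed cell outline; the sketch below computes it with OpenCV, though the exact index definition used in the study may differ.

      import cv2
      import numpy as np

      def deformation_index(cell_mask):
          """cell_mask: uint8 binary image with cell pixels = 255, background = 0."""
          contours, _ = cv2.findContours(cell_mask, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          cell = max(contours, key=cv2.contourArea)
          (_, _), (w, h), _ = cv2.minAreaRect(cell)       # oriented bounding box
          length, width = max(w, h), min(w, h)
          return (length - width) / (length + width)

      # Synthetic check: an elongated ellipse should give a clearly positive index.
      mask = np.zeros((200, 200), np.uint8)
      cv2.ellipse(mask, (100, 100), (60, 20), 0, 0, 360, 255, -1)
      print(deformation_index(mask))                      # roughly 0.5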

  12. Solid state chemistry an introduction

    CERN Document Server

    Smart, Lesley E

    2012-01-01

    ""Smart and Moore are engaging writers, providing clear explanations for concepts in solid-state chemistry from the atomic/molecular perspective. The fourth edition is a welcome addition to my bookshelves. … What I like most about Solid State Chemistry is that it gives simple clear descriptions for a large number of interesting materials and correspondingly clear explanations of their applications. Solid State Chemistry could be used for a solid state textbook at the third or fourth year undergraduate level, especially for chemistry programs. It is also a useful resource for beginning graduate

  13. Video-rate or high-precision: a flexible range imaging camera

    Science.gov (United States)

    Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John P.; Jongenelen, Adrian P. P.

    2008-02-01

    A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously, we demonstrated a heterodyne range imaging system operating in a relatively high-resolution (512-by-512 pixels) and high-precision (0.4 mm best case) configuration, but with a slow measurement rate (one every 10 s). Although this high precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and length of acquisition are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach and the use of more than four samples per beat cycle provide better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.
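
    A small sketch of the phase-to-range step such a camera performs: each pixel's beat signal is sampled N times per beat cycle, the fundamental is extracted (a one-bin DFT), and the phase is scaled to distance. The array shapes and the modulation frequency are assumptions for illustration, not parameters from the paper.

      # Sketch: recover per-pixel phase from N samples of the beat signal, then convert to range.
      import numpy as np

      C = 299_792_458.0  # speed of light, m/s

      def phase_to_range(frames, f_mod):
          """frames: (N, H, W) intensity samples over one beat cycle; f_mod: modulation frequency in Hz."""
          n = frames.shape[0]
          k = np.arange(n).reshape(n, 1, 1)
          # Correlate against the fundamental of the beat cycle (one-bin DFT).
          re = np.sum(frames * np.cos(2 * np.pi * k / n), axis=0)
          im = np.sum(frames * np.sin(2 * np.pi * k / n), axis=0)
          phase = np.mod(np.arctan2(im, re), 2 * np.pi)   # radians, 0..2*pi
          amplitude = 2.0 * np.hypot(re, im) / n          # signal strength per pixel
          distance = C * phase / (4 * np.pi * f_mod)      # phase encodes the round trip
          return distance, amplitude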

  14. Feature Quantization and Pooling for Videos

    Science.gov (United States)

    2014-05-01

    less vertical motion. The exceptions are videos from the classes of biking (mainly due to the camera tracking fast bikers), jumping on a trampoline (featuring people on trampolines), swinging (usually recorded in profile view), and walking

  15. The solid state maser

    CERN Document Server

    Orton, J W; Walling, J C; Ter Haar, D

    1970-01-01

    The Solid State Maser presents readings related to solid state maser amplifier from the first tentative theoretical proposals that appeared in the early 1950s to the successful realization of practical devices and their application to satellite communications and radio astronomy almost exactly 10 years later. The book discusses a historical account of the early developments (including that of the ammonia maser) of solid state maser; the properties of paramagnetic ions in crystals; the development of practical low noise amplifiers; and the characteristics of maser devices designed for communica

  16. Solid state chemistry and its applications

    CERN Document Server

    West, Anthony R

    2013-01-01

    Solid State Chemistry and its Applications, 2nd Edition: Student Edition is an extensive update and sequel to the bestselling textbook Basic Solid State Chemistry, the classic text for undergraduate teaching in solid state chemistry worldwide. Solid state chemistry lies at the heart of many significant scientific advances from recent decades, including the discovery of high-temperature superconductors, new forms of carbon and countless other developments in the synthesis, characterisation and applications of inorganic materials. Looking forward, solid state chemistry will be crucial for the

  17. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.

    Science.gov (United States)

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H

    2015-02-01

    Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  18. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction.

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-03-20

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.
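
    The two-stream idea (visible-light plus thermal input, CNN feature extraction, then fusion) can be sketched as below. The ResNet-18 backbone and the simple score averaging are assumptions for illustration; the authors' actual network architecture and fusion rule are not reproduced here.

      # Sketch of a two-stream (visible + thermal) CNN gender classifier.
      import torch
      import torch.nn as nn
      from torchvision import models

      class TwoStreamGender(nn.Module):
          def __init__(self):
              super().__init__()
              def backbone():
                  m = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
                  m.fc = nn.Linear(m.fc.in_features, 2)   # male / female logits
                  return m
              self.visible = backbone()
              self.thermal = backbone()

          def forward(self, vis_img, thr_img):
              # Score-level fusion: average the logits of the two streams.
              return (self.visible(vis_img) + self.thermal(thr_img)) / 2

      # Dummy usage with 224x224 inputs (thermal frame replicated to 3 channels).
      model = TwoStreamGender().eval()
      vis, thr = torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224)
      with torch.no_grad():
          probs = torch.softmax(model(vis, thr), dim=1)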

  19. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Science.gov (United States)

    Nguyen, Dat Tien; Kim, Ki Wan; Hong, Hyung Gil; Koo, Ja Hyung; Kim, Min Cheol; Park, Kang Ryoung

    2017-01-01

    Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images. PMID:28335510

  20. VIDEO ETNOGRAFI: PENGALAMAN PENELITIAN SOSIAL DENGAN VIDEO KAMERA DI SULAWESI SELATAN

    Directory of Open Access Journals (Sweden)

    Moh Yasir Alimi

    2013-04-01

    Full Text Available The aim of this research is to explore how a video camera can help focus the researcher's mind, sharpen ethnographic instincts, improve relations with the community, and improve the research report. The research was conducted in South Sulawesi over one year using a Sony DCR-TRV27 handycam, equipped with a flip-out screen and a viewfinder usable in daylight. The sophistication of the camera technology met the richness of social life in South Sulawesi, as best reflected in wedding ceremonies, and the video camera systematically occupies an important place in the local system of knowledge/power. This article therefore does not only discuss the use of the camera in ethnographic practice, but also the social world that has enabled the camera to have such a central place within it.

  1. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Full Text Available Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were (1) trigger speed, (2) passive infrared vs. microwave sensor, (3) white vs. infrared flash, and (4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  2. Laser scanning camera inspects hazardous area

    International Nuclear Information System (INIS)

    Fryatt, A.; Miprode, C.

    1985-01-01

    Main operational characteristics of a new laser scanning camera are presented. The camera is intended primarily for low-level, high-resolution viewing inside nuclear reactors. It uses a He-Ne laser beam raster; by detecting the reflected light by means of a photomultiplier, the subject under observation can be reconstructed in an electronic video store and reviewed on a conventional monitor screen

  3. What Counts as Educational Video?: Working toward Best Practice Alignment between Video Production Approaches and Outcomes

    Science.gov (United States)

    Winslett, Greg

    2014-01-01

    The twenty years since the first digital video camera was made commercially available has seen significant increases in the use of low-cost, amateur video productions for teaching and learning. In the same period, production and consumption of professionally produced video has also increased, as have the distribution platforms to access it.…

  4. Interaction Control Protocols for Distributed Multi-user Multi-camera Environments

    Directory of Open Access Journals (Sweden)

    Gareth W Daniel

    2003-10-01

    Full Text Available Video-centred communication (e.g., video conferencing, multimedia online learning, traffic monitoring, and surveillance is becoming a customary activity in our lives. The management of interactions in such an environment is a complicated HCI issue. In this paper, we present our study on a collection of interaction control protocols for distributed multiuser multi-camera environments. These protocols facilitate different approaches to managing a user's entitlement for controlling a particular camera. We describe a web-based system that allows multiple users to manipulate multiple cameras in varying remote locations. The system was developed using the Java framework, and all protocols discussed have been incorporated into the system. Experiments were designed and conducted to evaluate the effectiveness of these protocols, and to enable the identification of various human factors in a distributed multi-user and multi-camera environment. This work provides an insight into the complexity associated with the interaction management in video-centred communication. It can also serve as a conceptual and experimental framework for further research in this area.

  5. VAP/VAT: video analytics platform and test bed for testing and deploying video analytics

    Science.gov (United States)

    Gorodnichy, Dmitry O.; Dubrofsky, Elan

    2010-04-01

    Deploying Video Analytics in operational environments is extremely challenging. This paper presents a methodological approach developed by the Video Surveillance and Biometrics Section (VSB) of the Science and Engineering Directorate (S&E) of the Canada Border Services Agency (CBSA) to resolve these problems. A three-phase approach to enable VA deployment within an operational agency is presented and the Video Analytics Platform and Testbed (VAP/VAT) developed by the VSB section is introduced. In addition to allowing the integration of third-party and in-house built VA codes into an existing video surveillance infrastructure, VAP/VAT also allows the agency to conduct an unbiased performance evaluation of the cameras and VA software available on the market. VAP/VAT consists of two components: EventCapture, which serves to automatically detect a "Visual Event", and EventBrowser, which serves to display and peruse the "Visual Details" captured at the "Visual Event". To deal with open-architecture as well as closed-architecture cameras, two video-feed capture mechanisms have been developed within the EventCapture component: IPCamCapture and ScreenCapture.

  6. Deep-Sky Video Astronomy

    CERN Document Server

    Massey, Steve

    2009-01-01

    A guide to using modern integrating video cameras for deep-sky viewing and imaging with the kinds of modest telescopes available commercially to amateur astronomers. It includes an introduction and a brief history of the technology and camera types. It examines the pros and cons of this unrefrigerated yet highly efficient technology

  7. A lateral chromatic aberration correction system for ultrahigh-definition color video camera

    Science.gov (United States)

    Yamashita, Takayuki; Shimamoto, Hiroshi; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

    We have developed a color camera for an 8k x 4k-pixel ultrahigh-definition video system, called Super Hi-Vision, with a 5x zoom lens and a signal-processing system incorporating a function for real-time lateral chromatic aberration correction. The chromatic aberration of the lens degrades color image resolution, so in order to develop a compact zoom lens consistent with ultrahigh-resolution characteristics, we incorporated a real-time correction function in the signal-processing system. The signal-processing system has eight memory tables to store the correction data at eight focal-length points on the blue and red channels. When the focal-length data is input from the lens control units, the relevant correction data are interpolated from two of the eight correction data tables. The system then performs geometrical conversion on both channels using this correction data. This paper shows that the correction function successfully reduces the lateral chromatic aberration to an amount small enough to ensure the desired image resolution over the entire range of the lens in real time.
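
    The table-lookup-and-interpolate idea described above can be sketched as follows: per-channel correction values stored at a few focal lengths are interpolated to the current focal length and applied as a geometric remap of the red and blue channels. The table contents and the simple radial-scaling model are illustrative assumptions, not the broadcast system's measured correction data.

      # Sketch: interpolate per-channel correction tables by focal length and remap each channel.
      import numpy as np
      import cv2

      # Hypothetical tables: radial magnification correction per channel at sample focal lengths (mm).
      FOCAL_POINTS = np.array([10.0, 20.0, 40.0, 80.0])
      RED_SCALE    = np.array([1.0005, 1.0003, 1.0001, 1.0000])
      BLUE_SCALE   = np.array([0.9993, 0.9996, 0.9998, 1.0000])

      def correct_lateral_ca(bgr, focal_mm):
          h, w = bgr.shape[:2]
          cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
          xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                               np.arange(h, dtype=np.float32))
          out = bgr.copy()
          for idx, table in ((2, RED_SCALE), (0, BLUE_SCALE)):    # BGR channel order
              s = np.interp(focal_mm, FOCAL_POINTS, table)        # interpolate between tables
              map_x = (cx + (xs - cx) / s).astype(np.float32)     # inverse radial scaling
              map_y = (cy + (ys - cy) / s).astype(np.float32)
              out[:, :, idx] = cv2.remap(bgr[:, :, idx], map_x, map_y, cv2.INTER_LINEAR)
          return out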

  8. Preliminary condensation pool experiments with steam using DN80 and DN100 blowdown pipes

    Energy Technology Data Exchange (ETDEWEB)

    Laine, J.; Puustinen, M. [Lappeenranta University of Technology (Finland)

    2004-03-01

    The report summarizes the results of the preliminary steam blowdown experiments. Altogether eight experiment series, each consisting of several steam blows, were carried out in autumn 2003 with a scaled-down condensation pool test rig designed and constructed at Lappeenranta University of Technology. The main purpose of the experiments was to evaluate the capabilities of the test rig and the needs for measurement and visualization devices. The experiments showed that a high-speed video camera is essential for visual observation due to the rapid condensation of steam bubbles. Furthermore, the maximum measurement frequency of the current combination of instrumentation and data acquisition system is inadequate for the actual steam tests in 2004. (au)

  9. Explosive Breakup of a Water Droplet with a Nontransparent Solid Inclusion Heated in a High-Temperature Gaseous Medium

    Directory of Open Access Journals (Sweden)

    Dmitrienko Margarita A.

    2015-01-01

    Full Text Available This paper investigates the evaporation of a water droplet with a comparably sized solid nontransparent inclusion in a high-temperature (500–800 K) gas medium. Water evaporates from the free surface of the inclusion. During this process, intensive vapor formation occurs on the inner interface “water droplet – solid inclusion” with the subsequent explosive decay of the droplet. Experiments have been conducted using high-speed (up to 10^5 fps) video cameras “Phantom” and the software “Phantom Camera Control”. The conditions of the explosive vapor formation of the heterogeneous water droplet were found. The typical phase change mechanisms of the heterogeneous water droplet under the conditions of intensive heat exchange were determined.

  10. Linear array of photodiodes to track a human speaker for video recording

    International Nuclear Information System (INIS)

    DeTone, D; Neal, H; Lougheed, R

    2012-01-01

    Communication and collaboration using stored digital media has garnered more interest by many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow for viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz at a 50% duty cycle to provide noise-filtering capability. The benefit to using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
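
    A sketch of the tracking loop described in this record: readouts of the linear photodiode array are correlated against the 70 Hz flash frequency to reject steady infrared background, the necklace position is taken as the amplitude-weighted centroid, and a proportional pan command is derived. The lock-in style filter and the pan gain are illustrative assumptions, not the authors' exact processing.

      # Sketch: locate the 70 Hz LED necklace on a linear photodiode array and derive a pan command.
      import numpy as np

      F_SAMPLE = 4000.0   # photodiode array readout rate, Hz
      F_LED    = 70.0     # LED necklace flash rate, Hz

      def locate_necklace(frames):
          """frames: (N, P) array of N consecutive array readouts for P photodiodes."""
          n = frames.shape[0]
          t = np.arange(n) / F_SAMPLE
          ref = np.exp(2j * np.pi * F_LED * t)              # lock-in reference at 70 Hz
          amp = np.abs(ref @ frames) / n                    # modulation amplitude per diode
          if amp.max() < 5 * np.median(amp):                # crude detection threshold
              return None
          pixels = np.arange(frames.shape[1])
          return float(np.sum(pixels * amp) / np.sum(amp))  # centroid, in diode units

      def pan_command(centroid, num_diodes, gain=0.5):
          """Map the centroid offset from the array centre to a pan rate for the PTZ camera."""
          if centroid is None:
              return 0.0
          return gain * (centroid - (num_diodes - 1) / 2.0) / num_diodes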

  11. Linear array of photodiodes to track a human speaker for video recording

    Science.gov (United States)

    DeTone, D.; Neal, H.; Lougheed, R.

    2012-12-01

    Communication and collaboration using stored digital media has garnered more interest by many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow for viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz at a 50% duty cycle to provide noise-filtering capability. The benefit to using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.

  12. Standardized access, display, and retrieval of medical video

    Science.gov (United States)

    Bellaire, Gunter; Steines, Daniel; Graschew, Georgi; Thiel, Andreas; Bernarding, Johannes; Tolxdorff, Thomas; Schlag, Peter M.

    1999-05-01

    The system presented here enhances documentation and data-secured, second-opinion facilities by integrating video sequences into DICOM 3.0. We present an implementation of a medical video server extended by a DICOM interface. Security mechanisms conforming with DICOM are integrated to enable secure internet access. Digital video documents of diagnostic and therapeutic procedures should be examined regarding the clip length and size necessary for second opinion and manageable with today's hardware. Image sources relevant for this paper include the 3D laparoscope, 3D surgical microscope, 3D open surgery camera, synthetic video, and monoscopic endoscopes. The global DICOM video concept and three special workplaces for distinct applications are described. Additionally, an approach is presented to analyze the motion of the endoscopic camera for future automatic video cutting. Digital stereoscopic video sequences (DSVS) are especially in demand for surgery. Therefore DSVS are also integrated into the DICOM video concept. Results are presented describing the suitability of stereoscopic display techniques for the operating room.

  13. Radiation effects on video imagers

    International Nuclear Information System (INIS)

    Yates, G.J.; Bujnosek, J.J.; Jaramillo, S.A.; Walton, R.B.; Martinez, T.M.; Black, J.P.

    1985-01-01

    Radiation sensitivity of several photoconductive, photoemissive, and solid state silicon-based video imagers was measured by analyzing stored photocharge induced by irradiation with continuous and pulsed sources of high energy photons and neutrons. Transient effects as functions of absorbed dose, dose rate, fluences, and ionizing particle energy are presented

  14. Einstein and solid-state physics

    International Nuclear Information System (INIS)

    Aut, I.

    1982-01-01

    The connection between the development of solid-state physics and the work and activity of Albert Einstein is traced, and Einstein's tremendous contribution to solid-state physics is highlighted. The rigorous establishment of particle-wave dualism, the conclusion that the Planck radiation law applies not only to black-body radiation, and the discovery of the indistinguishability of particles are three findings of principal significance for solid-state physics as well

  15. Super Normal Vector for Human Activity Recognition with Depth Cameras.

    Science.gov (United States)

    Yang, Xiaodong; Tian, YingLi

    2017-05-01

    The advent of cost-effective and easy-to-operate depth cameras has facilitated a variety of visual recognition tasks including human activity recognition. This paper presents a novel framework for recognizing human activities from video sequences captured by depth cameras. We extend the surface normal to the polynormal by assembling local neighboring hypersurface normals from a depth sequence to jointly characterize local motion and shape information. We then propose a general scheme of the super normal vector (SNV) to aggregate the low-level polynormals into a discriminative representation, which can be viewed as a simplified version of the Fisher kernel representation. In order to globally capture the spatial layout and temporal order, an adaptive spatio-temporal pyramid is introduced to subdivide a depth video into a set of space-time cells. In extensive experiments, the proposed approach achieves superior performance to the state-of-the-art methods on four public benchmark datasets, i.e., MSRAction3D, MSRDailyActivity3D, MSRGesture3D, and MSRActionPairs3D.
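
    As a small illustration of the low-level quantity this representation builds on, the sketch below estimates per-pixel surface normals from a depth frame by finite differences; the paper's extension to polynormals over space-time neighbourhoods and the SNV aggregation are not shown here.

      # Sketch: per-pixel surface normals from a depth map via finite differences.
      import numpy as np

      def depth_to_normals(depth):
          """depth: (H, W) array in consistent units; returns (H, W, 3) unit normals."""
          dz_dy, dz_dx = np.gradient(depth.astype(np.float64))
          normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth, dtype=np.float64)])
          norm = np.linalg.norm(normals, axis=2, keepdims=True)
          return normals / np.maximum(norm, 1e-12)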

  16. Mobile Video in Everyday Social Interactions

    Science.gov (United States)

    Reponen, Erika; Lehikoinen, Jaakko; Impiö, Jussi

    Video recording has become a spontaneous everyday activity for many people, thanks to the video capabilities of modern mobile phones. Internet connectivity of mobile phones enables fluent sharing of captured material even in real time, which makes video an up-and-coming everyday interaction medium. In this article we discuss the effect of the video camera on the social environment in everyday life situations, mainly based on a study where four groups of people used digital video cameras in their normal settings. We also reflect on another study of ours, relating to real-time mobile video communication, and discuss future views. The aim of our research is to understand the possibilities in the domain of mobile video. Live and delayed sharing seem to have their own special characteristics, live video being used as a virtual window between places whereas delayed video usage has more scope for good-quality content. While this novel way of interacting via mobile video enables new social patterns, it also raises new concerns for privacy and trust between participating persons in all roles, largely due to the widely spreading possibilities of videos. Video in a social situation affects cameramen (who record), targets (who are recorded), passers-by (who are unintentionally in the situation), and the audience (who follow the videos or recording situations), but also the other way around: the participants affect the video through their varying and evolving personal and communicational motivations for recording.

  17. Physics and Video Analysis

    Science.gov (United States)

    Allain, Rhett

    2016-05-01

    We currently live in a world filled with videos. There are videos on YouTube, feature movies and even videos recorded with our own cameras and smartphones. These videos present an excellent opportunity to not only explore physical concepts, but also inspire others to investigate physics ideas. With video analysis, we can explore the fantasy world in science-fiction films. We can also look at online videos to determine if they are genuine or fake. Video analysis can be used in the introductory physics lab and it can even be used to explore the make-believe physics embedded in video games. This book covers the basic ideas behind video analysis along with the fundamental physics principles used in video analysis. The book also includes several examples of the unique situations in which video analysis can be used.

  18. Solid-state circuits

    CERN Document Server

    Pridham, G J

    2013-01-01

    Solid-State Circuits provides an introduction to the theory and practice underlying solid-state circuits, laying particular emphasis on field effect transistors and integrated circuits. Topics range from construction and characteristics of semiconductor devices to rectification and power supplies, low-frequency amplifiers, sine- and square-wave oscillators, and high-frequency effects and circuits. Black-box equivalent circuits of bipolar transistors, physical equivalent circuits of bipolar transistors, and equivalent circuits of field effect transistors are also covered. This volume is divided

  19. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1982-01-01

    The invention provides a composite solid state detector for use in deriving a display, by spatial coordinate information, of the distribution of radiation emanating from a source within a region of interest, comprising several solid state detector components, each having a given surface arranged for exposure to impinging radiation and exhibiting discrete interactions therewith at given spatially definable locations. The surface of each component and the surface disposed opposite and substantially parallel thereto are associated with impedance means configured to provide for each opposed surface outputs for signals relating the given location of the interactions with one spatial coordinate parameter of one select directional sense. The detector components are arranged to provide groupings of adjacently disposed surfaces mutually linearly oriented to exhibit a common directional sense of the spatial coordinate parameter. Means interconnect at least two of the outputs associated with each of the surfaces within a given grouping for collecting the signals deriving therefrom. The invention also provides a camera system for imaging the distribution of a source of gamma radiation situated within a region of interest

  20. Feasibility Study of Utilization of Action Camera, GoPro Hero 4, Google Glass, and Panasonic HX-A100 in Spine Surgery.

    Science.gov (United States)

    Lee, Chang Kyu; Kim, Youngjun; Lee, Nam; Kim, Byeongwoo; Kim, Doyoung; Yi, Seong

    2017-02-15

    A study of the feasibility of commercially available action cameras for recording video of spine surgery. Recent innovations in wearable action cameras with high-definition video recording enable surgeons to record operations easily and at low cost. The purpose of this study is to compare the feasibility, safety, and efficacy of commercially available action cameras in recording video of spine surgery. There are early reports of medical professionals using Google Glass throughout the hospital, the Panasonic HX-A100 action camera, and the GoPro; this study is the first report for spine surgery. Three commercially available cameras were tested: the GoPro Hero 4 Silver, Google Glass, and the Panasonic HX-A100 action camera. A typical spine surgery was selected for video recording: posterior lumbar laminectomy and fusion. The three cameras were used by one surgeon and video was recorded throughout the operation. The comparison was made from the perspectives of human factors, specifications, and video quality. The most convenient and lightweight device to wear and hold throughout the long operation time was Google Glass. Regarding image quality, all devices except Google Glass supported HD format, and the GoPro uniquely offers 2.7K or 4K resolution; video resolution was best with the GoPro. Regarding field of view, the GoPro can adjust its point of interest and field of view according to the surgery, and the narrow FOV option was best for recording video clips to share. Google Glass has potential through the use of application programs. Connectivity such as Wi-Fi and Bluetooth enables video streaming for an audience, but only Google Glass has a built-in two-way communication feature. Action cameras have the potential to improve patient safety, operator comfort, and procedure efficiency in the field of spinal surgery, and to broadcast a surgery as the devices and associated programs develop in the future. N/A.

  1. Gender Recognition from Human-Body Images Using Visible-Light and Thermal Camera Videos Based on a Convolutional Neural Network for Image Feature Extraction

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2017-03-01

    Full Text Available Extracting powerful image features plays an important role in computer vision systems. Many methods have previously been proposed to extract image features for various computer vision applications, such as the scale-invariant feature transform (SIFT), speed-up robust feature (SURF), local binary patterns (LBP), histogram of oriented gradients (HOG), and weighted HOG. Recently, the convolutional neural network (CNN) method for image feature extraction and classification in computer vision has been used in various applications. In this research, we propose a new gender recognition method for recognizing males and females in observation scenes of surveillance systems based on feature extraction from visible-light and thermal camera videos through CNN. Experimental results confirm the superiority of our proposed method over state-of-the-art recognition methods for the gender recognition problem using human body images.

  2. User interface using a 3D model for video surveillance

    Science.gov (United States)

    Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru

    1998-02-01

    These days, fewer people, who must carry out their tasks quickly and precisely, are required in industrial surveillance and monitoring applications such as plant control or building security. Utilizing multimedia technology is a good approach to meet this need, and we previously developed Media Controller, which is designed for these applications and provides realtime recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to have such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function, which makes surveillance video and the 3D model work together, using Media Controller with Java and Virtual Reality Modeling Language employed for multi-purpose and intranet use of the 3D model.

  3. Range-Measuring Video Sensors

    Science.gov (United States)

    Howard, Richard T.; Briscoe, Jeri M.; Corder, Eric L.; Broderick, David

    2006-01-01

    Optoelectronic sensors of a proposed type would perform the functions of both electronic cameras and triangulation-type laser range finders. That is to say, these sensors would both (1) generate ordinary video or snapshot digital images and (2) measure the distances to selected spots in the images. These sensors would be well suited to use on robots that are required to measure distances to targets in their work spaces. In addition, these sensors could be used for all the purposes for which electronic cameras have been used heretofore. The simplest sensor of this type, illustrated schematically in the upper part of the figure, would include a laser, an electronic camera (either video or snapshot), a frame-grabber/image-capturing circuit, an image-data-storage memory circuit, and an image-data processor. There would be no moving parts. The laser would be positioned at a lateral distance d to one side of the camera and would be aimed parallel to the optical axis of the camera. When the range of a target in the field of view of the camera was required, the laser would be turned on and an image of the target would be stored and preprocessed to locate the angle (a) between the optical axis and the line of sight to the centroid of the laser spot.
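
    The triangulation step this description leads up to reduces to R = d / tan(a): because the laser is offset laterally by d and aimed parallel to the optical axis, the laser spot's offset from the image centre gives the angle a and hence the range. A minimal sketch, with illustrative focal-length and pixel-pitch parameters:

      # Sketch: range from the image position of the laser spot, R = d / tan(a).
      import math

      def range_from_spot(spot_px, centre_px, focal_mm, pixel_pitch_mm, baseline_d_m):
          """spot_px: image column of the laser-spot centroid; centre_px: principal point column."""
          offset_mm = abs(spot_px - centre_px) * pixel_pitch_mm
          tan_a = offset_mm / focal_mm      # tan of the angle between axis and line of sight
          if tan_a == 0.0:
              return float("inf")           # spot on axis: target effectively at infinity
          return baseline_d_m / tan_a       # range in metres

      # Example: 25 mm lens, 0.01 mm pixels, 0.1 m baseline, spot 40 px off-centre
      # gives R = 0.1 / (40 * 0.01 / 25) = 6.25 m.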

  4. Using a laser scanning camera for reactor inspection

    International Nuclear Information System (INIS)

    Armour, I.A.; Adrain, R.S.; Klewe, R.C.

    1984-01-01

    Inspection of nuclear reactors is normally carried out using TV or film cameras. There are, however, several areas where these cameras show considerable shortcomings. To overcome these difficulties, laser scanning cameras have been developed. This type of camera can be used for general visual inspection as well as the provision of high resolution video images with high ratio on and off-axis zoom capability. In this paper, we outline the construction and operation of a laser scanning camera and give examples of how it has been used in various power stations, and indicate future potential developments. (author)

  5. Noise aliasing in interline-video-based fluoroscopy systems

    International Nuclear Information System (INIS)

    Lai, H.; Cunningham, I.A.

    2002-01-01

    Video-based imaging systems for continuous (nonpulsed) x-ray fluoroscopy use a variety of video formats. Conventional video-camera systems may operate in either interlaced or progressive-scan modes, and CCD systems may operate in interline- or frame-transfer modes. A theoretical model of the image noise power spectrum corresponding to these formats is described. It is shown that with respect to frame-transfer or progressive-readout modes, interline or interlaced cameras operating in a frame-integration mode will result in a spectral shift of 25% of the total image noise power from low spatial frequencies to high. In a field-integration mode, noise power is doubled with most of the increase occurring at high spatial frequencies. The differences are due primarily to the effect of noise aliasing. In interline or interlaced formats, alternate lines are obtained with each video field resulting in a vertical sampling frequency for noise that is one half of the physical sampling frequency. The extent of noise aliasing is modified by differences in the statistical correlations between video fields in the different modes. The theoretical model is validated with experiments using an x-ray image intensifier and CCD-camera system. It is shown that different video modes affect the shape of the noise-power spectrum and therefore the detective quantum efficiency. While the effect on observer performance is not addressed, it is concluded that in order to minimize image noise at the critical mid-to-high spatial frequencies for a specified x-ray exposure, fluoroscopic systems should use only frame-transfer (CCD camera) or progressive-scan (conventional video) formats

  6. Cervical SPECT Camera for Parathyroid Imaging

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2012-08-31

    Primary hyperparathyroidism characterized by one or more enlarged parathyroid glands has become one of the most common endocrine diseases in the world affecting about 1 per 1000 in the United States. Standard treatment is highly invasive exploratory neck surgery called Parathyroidectomy. The surgery has a notable mortality rate because of the close proximity to vital structures. The move to minimally invasive parathyroidectomy is hampered by the lack of high resolution pre-surgical imaging techniques that can accurately localize the parathyroid with respect to surrounding structures. We propose to develop a dedicated ultra-high resolution (~ 1 mm) and high sensitivity (10x conventional camera) cervical scintigraphic imaging device. It will be based on a multiple pinhole-camera SPECT system comprising a novel solid state CZT detector that offers the required performance. The overall system will be configured to fit around the neck and comfortably image a patient.

  7. New materials for solid state electrochemistry

    International Nuclear Information System (INIS)

    Ferloni, P.; Consiglio Nazionale delle Ricerche, Pavia; Magistris, A.; Consiglio Nazionale delle Ricerche, Pavia

    1994-01-01

    Solid state electrochemistry is an interdisciplinary area that is nowadays undergoing fast development. It is related on the one hand to chemistry, and on the other hand to crystallography, solid state physics and materials science. In this paper, the structural and electrical properties of some families of new materials of interest for solid state electrochemistry are reviewed. Attention is focused essentially on ceramic and crystalline materials, glasses and polymers displaying high ionic conductivity and potentially suitable for various applications in solid state electrochemical devices. (orig.)

  8. Theoretical solid state physics

    International Nuclear Information System (INIS)

    Anon.

    1977-01-01

    Research activities at ORNL in theoretical solid state physics are described. Topics covered include: surface studies; particle-solid interactions; electronic and magnetic properties; and lattice dynamics

  9. Solid-State Physics Introduction to the Theory

    CERN Document Server

    Patterson, James

    2010-01-01

    Learning Solid State Physics involves a certain degree of maturity, since it involves tying together diverse concepts from many areas of physics. The objective is to understand, in a basic way, how solid materials behave. To do this one needs both a good physical and mathematical background. One definition of Solid State Physics is that it is the study of the physical (e.g. the electrical, dielectric, magnetic, elastic, and thermal) properties of solids in terms of basic physical laws. In one sense, Solid State Physics is more like chemistry than some other branches of physics because it focuses on common properties of large classes of materials. It is typical that Solid State Physics emphasizes how physical properties link to electronic structure. We have retained the term Solid State Physics. Modern solid state physics came of age in the late thirties and forties and is now part of condensed matter physics, which includes liquids, soft materials, and non-crystalline solids. This solid state/condensed matter physics book begins...

  10. A Taxonomy of Asynchronous Instructional Video Styles

    Science.gov (United States)

    Chorianopoulos, Konstantinos

    2018-01-01

    Many educational organizations are employing instructional videos in their pedagogy, but there is a limited understanding of the possible video formats. In practice, the presentation format of instructional videos ranges from direct recording of classroom teaching with a stationary camera, or screencasts with voiceover, to highly elaborate video…

  11. Remote Video Auditing in the Surgical Setting.

    Science.gov (United States)

    Pedersen, Anne; Getty Ritter, Elizabeth; Beaton, Megan; Gibbons, David

    2017-02-01

    Remote video auditing, a method first adopted by the food preparation industry, was later introduced to the health care industry as a novel approach to improving hand hygiene practices. This strategy yielded tremendous and sustained improvement, causing leaders to consider the potential effects of such technology on the complex surgical environment. This article outlines the implementation of remote video auditing and the first year of activity, outcomes, and measurable successes in a busy surgery department in the eastern United States. A team of anesthesia care providers, surgeons, and OR personnel used low-resolution cameras, large-screen displays, and cell phone alerts to make significant progress in three domains: application of the Universal Protocol for preventing wrong site, wrong procedure, wrong person surgery; efficiency metrics; and cleaning compliance. The use of cameras with real-time auditing and results-sharing created an environment of continuous learning, compliance, and synergy, which has resulted in a safer, cleaner, and more efficient OR. Copyright © 2017 AORN, Inc. Published by Elsevier Inc. All rights reserved.

  12. Quantum Computing in Solid State Systems

    CERN Document Server

    Ruggiero, B; Granata, C

    2006-01-01

    The aim of Quantum Computation in Solid State Systems is to report on recent theoretical and experimental results on the macroscopic quantum coherence of mesoscopic systems, as well as on solid state realization of qubits and quantum gates. Particular attention has been given to coherence effects in Josephson devices. Other solid state systems, including quantum dots, optical, ion, and spin devices which exhibit macroscopic quantum coherence are also discussed. Quantum Computation in Solid State Systems discusses experimental implementation of quantum computing and information processing devices, and in particular observations of quantum behavior in several solid state systems. On the theoretical side, the complementary expertise of the contributors provides models of the various structures in connection with the problem of minimizing decoherence.

  13. Solid state theory

    CERN Document Server

    Harrison, Walter A

    2011-01-01

    ""A well-written text . . . should find a wide readership, especially among graduate students."" - Dr. J. I. Pankove, RCA.The field of solid state theory, including crystallography, semi-conductor physics, and various applications in chemistry and electrical engineering, is highly relevant to many areas of modern science and industry. Professor Harrison's well-known text offers an excellent one-year graduate course in this active and important area of research. While presenting a broad overview of the fundamental concepts and methods of solid state physics, including the basic quantum theory o

  14. Solid-State Nanopore

    Directory of Open Access Journals (Sweden)

    Zhishan Yuan

    2018-02-01

    Full Text Available Solid-state nanopores have captured the attention of many researchers due to their nanoscale characteristics. Different fabrication methods have now been reported, which can be summarized into two broad categories: “top-down” etching technology and “bottom-up” shrinkage technology. The ion track etching method, mask etching method, chemical solution etching method, and high-energy particle etching and shrinkage method are described in this report. We also discuss applications of solid-state nanopore fabrication technology in DNA sequencing, protein detection, and energy conversion.

  15. Active spectral imaging nondestructive evaluation (SINDE) camera

    Energy Technology Data Exchange (ETDEWEB)

    Simova, E.; Rochefort, P.A., E-mail: eli.simova@cnl.ca [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada)

    2016-06-15

    A proof-of-concept video camera for active spectral imaging nondestructive evaluation has been demonstrated. An active multispectral imaging technique has been implemented in the visible and near infrared by using light emitting diodes with wavelengths spanning from 400 to 970 nm. This shows how the camera can be used in nondestructive evaluation to inspect surfaces and spectrally identify materials and corrosion. (author)

  16. Development of a camera casing suited for cryogenic and vacuum applications

    Science.gov (United States)

    Delaquis, S. C.; Gornea, R.; Janos, S.; Lüthi, M.; von Rohr, Ch Rudolf; Schenk, M.; Vuilleumier, J.-L.

    2013-12-01

    We report on the design, construction, and operation of a PID temperature-controlled and vacuum-tight camera casing. The camera casing contains a commercial digital camera and a lighting system. The design of the camera casing and its components are discussed in detail. Pictures taken by this cryo-camera while immersed in argon vapour and liquid nitrogen are presented. The cryo-camera can provide a live view inside cryogenic set-ups and allows video to be recorded.

  17. Instantaneous phase-shifting Fizeau interferometry with high-speed pixelated phase-mask camera

    Science.gov (United States)

    Yatagai, Toyohiko; Jackin, Boaz Jessie; Ono, Akira; Kiyohara, Kosuke; Noguchi, Masato; Yoshii, Minoru; Kiyohara, Motosuke; Niwa, Hayato; Ikuo, Kazuyuki; Onuma, Takashi

    2015-08-01

    A Fizeau interferometer with instantaneous phase-shifting ability using a Wollaston prism is designed. To measure the dynamic phase change of objects, a high-speed video camera with a shutter speed of 10^-5 s is used with a pixelated phase-mask of 1024 × 1024 elements. The light source is a laser of wavelength 532 nm, which is split into orthogonal polarization states by passing through a Wollaston prism. By adjusting the tilt of the reference surface, it is possible to make the reference and object beams, which are in orthogonal polarization states, coincide and interfere. The pixelated phase-mask camera then calculates the phase changes and hence the optical path length difference. Vibration of speakers and turbulence of air flow were successfully measured at 7,000 frames/s.
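
    The single-shot phase calculation performed by a pixelated phase-mask camera can be sketched as a four-bucket computation over each super-pixel, whose micropolarisers yield interferograms shifted by 0, 90, 180, and 270 degrees. The 2x2 layout assumed below is illustrative; the actual sensor layout may differ.

      # Sketch: four-bucket phase extraction from a pixelated phase-mask frame.
      import numpy as np

      def phase_from_pixelated_mask(raw, wavelength_nm=532.0):
          """raw: (H, W) frame with H and W even; returns the phase map and the OPD in nm."""
          i0   = raw[0::2, 0::2].astype(np.float64)   # 0 deg bucket
          i90  = raw[0::2, 1::2].astype(np.float64)   # 90 deg bucket
          i180 = raw[1::2, 1::2].astype(np.float64)   # 180 deg bucket
          i270 = raw[1::2, 0::2].astype(np.float64)   # 270 deg bucket
          phase = np.arctan2(i270 - i90, i0 - i180)   # standard four-bucket formula
          opd = phase / (2 * np.pi) * wavelength_nm   # optical path difference, nm
          # In a Fizeau cavity the surface height change is opd / 2 (double pass).
          return phase, opd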

  18. Solid State Physics Introduction to the Theory

    CERN Document Server

    Patterson, James D

    2007-01-01

    Learning Solid State Physics involves a certain degree of maturity, since it involves tying together diverse concepts from many areas of physics. The objective is to understand, in a basic way, how solid materials behave. To do this one needs both a good physical and mathematical background. One definition of Solid State Physics is it is the study of the physical (e.g. the electrical, dielectric, magnetic, elastic, and thermal) properties of solids in terms of basic physical laws. In one sense, Solid State Physics is more like chemistry than some other branches of physics because it focuses on common properties of large classes of materials. It is typical that Solid State Physics emphasizes how physics properties link to electronic structure. We have retained the term Solid State Physics, even though Condensed Matter Physics is more commonly used. Condensed Matter Physics includes liquids and non-crystalline solids such as glass, which we shall not discuss in detail. Modern Solid State Physics came of age in ...

  19. Endoscopic Camera Control by Head Movements for Thoracic Surgery

    NARCIS (Netherlands)

    Reilink, Rob; de Bruin, Gart; Franken, M.C.J.; Mariani, Massimo A.; Misra, Sarthak; Stramigioli, Stefano

    2010-01-01

    In current video-assisted thoracic surgery, the endoscopic camera is operated by an assistant of the surgeon, which has several disadvantages. This paper describes a system which enables the surgeon to control the endoscopic camera without the help of an assistant. The system is controlled using

  20. Physical principles and current status of emerging non-volatile solid state memories

    Science.gov (United States)

    Wang, L.; Yang, C.-H.; Wen, J.

    2015-07-01

    Today the influence of non-volatile solid-state memories on people's lives has become more prominent because of their non-volatility, low data latency, and high robustness. As a pioneering technology representative of non-volatile solid-state memories, flash memory has recently seen widespread application in many areas ranging from electronic appliances, such as cell phones and digital cameras, to external storage devices such as universal serial bus (USB) memory. Moreover, owing to its large storage capacity, it is expected that in the near future flash memory will replace hard-disk drives as a dominant technology in the mass storage market, especially because of recently emerging solid-state drives. However, the rapid growth of global digital data has led to the need for flash memories to have larger storage capacity, thus requiring a further downscaling of the cell size. Such miniaturization is expected to be extremely difficult because of the well-known scaling limit of flash memories. It is therefore necessary either to explore innovative technologies that can extend the areal density of flash memories beyond the scaling limits, or to vigorously develop alternative non-volatile solid-state memories including ferroelectric random-access memory, magnetoresistive random-access memory, phase-change random-access memory, and resistive random-access memory. In this paper, we review the physical principles of flash memories and the technical challenges that affect our ability to enhance their storage capacity. We then present a detailed discussion of novel technologies that can extend the storage density of flash memories beyond the commonly accepted limits. In each case, we subsequently discuss the physical principles of these new types of non-volatile solid-state memories as well as their respective merits and weaknesses when utilized for data storage applications. Finally, we predict the future prospects for the aforementioned solid-state memories for

  1. Surgeon-Manipulated Live Surgery Video Recording Apparatuses: Personal Experience and Review of Literature.

    Science.gov (United States)

    Kapi, Emin

    2017-06-01

    Visual recording of surgical procedures is a method that is used quite frequently in plastic surgery practice. While presentations containing photographs are quite common in education seminars and congresses, presentations containing video find more favour. For this reason, the presentation of surgical procedures in the form of real-time video display has increased, especially recently. Appropriate technical equipment for video recording is not available in most hospitals, so there is a need to set up external apparatus in the operating room. These apparatuses include head-mounted video cameras, chest-mounted cameras, and tripod-mountable cameras. The head-mounted video camera is an apparatus that is capable of capturing high-resolution and detailed close-up footage. The tripod-mountable camera enables video capture from a fixed point. Certain user-specific modifications can be made to overcome some of the restrictions of these devices; among such modifications, custom-made applications are one of the most effective solutions. This article attempts to present the features of, and experience with, a combination of a head- or chest-mounted action camera, a custom-made portable tripod apparatus with versatile features, and an underwater camera. The apparatuses described are easy to assemble, quickly installed, and inexpensive, do not require specific technical knowledge, and can be operated by the surgeon personally in all procedures. The author believes that in the near future video recording apparatuses will be integrated more into the operating room, become standard practice, and become easier for the surgeon to operate personally. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.

  2. Science on TeacherTube: A Mixed Methods Analysis of Teacher Produced Video

    Science.gov (United States)

    Chmiel, Margaret (Marjee)

    Increased bandwidth, inexpensive video cameras, and easy-to-use video editing software have made social media sites featuring user-generated video (UGV) an increasingly popular vehicle for online communication. As such, UGV has come to play a role in education, both formal and informal, but there has been little research on this topic in the scholarly literature. In this mixed-methods study, content and discourse analyses are used to describe the most successful UGV in the science channel of an education-focused site called TeacherTube. The analysis finds that state achievement tests, and their focus on vocabulary and recall-level knowledge, drive much of the content found on TeacherTube.

  3. Intelligent keyframe extraction for video printing

    Science.gov (United States)

    Zhang, Tong

    2004-10-01

    Nowadays most digital cameras have the functionality of taking short video clips, with the length of video ranging from several seconds to a couple of minutes. The purpose of this research is to develop an algorithm which extracts an optimal set of keyframes from each short video clip so that the user could obtain proper video frames to print out. In current video printing systems, keyframes are normally obtained by evenly sampling the video clip over time. Such an approach, however, may not reflect highlights or regions of interest in the video. Keyframes derived in this way may also be improper for video printing in terms of either content or image quality. In this paper, we present an intelligent keyframe extraction approach to derive an improved keyframe set by performing semantic analysis of the video content. For a video clip, a number of video and audio features are analyzed to first generate a candidate keyframe set. These features include accumulative color histogram and color layout differences, camera motion estimation, moving object tracking, face detection and audio event detection. Then, the candidate keyframes are clustered and evaluated to obtain a final keyframe set. The objective is to automatically generate a limited number of keyframes to show different views of the scene; to show different people and their actions in the scene; and to tell the story in the video shot. Moreover, frame extraction for video printing, which is a rather subjective problem, is considered in this work for the first time, and a semi-automatic approach is proposed.
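
    As a rough illustration of the candidate-selection step described above, the sketch below (not the authors' implementation) scores consecutive frames by colour-histogram change with OpenCV and keeps the frames where the content changes most; the function and parameter names are assumptions.

```python
import cv2


def candidate_keyframes(video_path, hist_bins=32, top_k=8):
    """Pick candidate keyframes where the colour-histogram difference between
    consecutive frames is largest -- a crude stand-in for the semantic
    analysis (motion, faces, audio events) described in the abstract."""
    cap = cv2.VideoCapture(video_path)
    prev_hist, frames, scores = None, [], []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [hist_bins, hist_bins],
                            [0, 180, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Chi-square distance between consecutive histograms
            scores.append((idx, cv2.compareHist(prev_hist, hist,
                                                cv2.HISTCMP_CHISQR)))
        prev_hist = hist
        frames.append(frame)
        idx += 1
    cap.release()
    # Keep the frames with the largest content change, in temporal order
    best = sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]
    return [frames[i] for i, _ in sorted(best)]
```

    The candidates would then be clustered and filtered for content and image quality, as the paper describes, before printing.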

  4. Solid-state devices and applications

    CERN Document Server

    Lewis, Rhys

    1971-01-01

    Solid-State Devices and Applications is an introduction to the solid-state theory and its devices and applications. The book also presents a summary of all major solid-state devices available, their theory, manufacture, and main applications. The text is divided into three sections. The first part deals with the semiconductor theory and discusses the fundamentals of semiconductors; the kinds of diodes and techniques in their manufacture; the types and modes of operation of bipolar transistors; and the basic principles of unipolar transistors and their difference with bipolar transistors. The s

  5. Solid-state polymeric dye lasers

    CERN Document Server

    Singh, S; Sridhar, G; Muthuswamy, V; Raja, K

    2003-01-01

    This paper presents a review of organic solid-state polymer materials, which have become established as new laser media. The photostability of these materials is discussed. Different types of solid-state lasers built around these materials are also reviewed.

  6. Video-based measurements for wireless capsule endoscope tracking

    International Nuclear Information System (INIS)

    Spyrou, Evaggelos; Iakovidis, Dimitris K

    2014-01-01

    The wireless capsule endoscope is a swallowable medical device equipped with a miniature camera enabling the visual examination of the gastrointestinal (GI) tract. It wirelessly transmits thousands of images to an external video recording system, while its location and orientation are being tracked approximately by external sensor arrays. In this paper we investigate a video-based approach to tracking the capsule endoscope without requiring any external equipment. The proposed method involves extraction of speeded up robust features from video frames, registration of consecutive frames based on the random sample consensus algorithm, and estimation of the displacement and rotation of interest points within these frames. The results obtained by the application of this method on wireless capsule endoscopy videos indicate its effectiveness and improved performance over the state of the art. The findings of this research pave the way for a cost-effective localization and travel distance measurement of capsule endoscopes in the GI tract, which could contribute in the planning of more accurate surgical interventions. (paper)

  7. Video-based measurements for wireless capsule endoscope tracking

    Science.gov (United States)

    Spyrou, Evaggelos; Iakovidis, Dimitris K.

    2014-01-01

    The wireless capsule endoscope is a swallowable medical device equipped with a miniature camera enabling the visual examination of the gastrointestinal (GI) tract. It wirelessly transmits thousands of images to an external video recording system, while its location and orientation are being tracked approximately by external sensor arrays. In this paper we investigate a video-based approach to tracking the capsule endoscope without requiring any external equipment. The proposed method involves extraction of speeded up robust features from video frames, registration of consecutive frames based on the random sample consensus algorithm, and estimation of the displacement and rotation of interest points within these frames. The results obtained by the application of this method on wireless capsule endoscopy videos indicate its effectiveness and improved performance over the state of the art. The findings of this research pave the way for a cost-effective localization and travel distance measurement of capsule endoscopes in the GI tract, which could contribute in the planning of more accurate surgical interventions.
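
    As an illustration of the frame-registration step described in these records, the sketch below uses OpenCV ORB features in place of the speeded-up robust features reported in the paper (SURF requires a non-free OpenCV build), together with a RANSAC-fitted similarity transform, to estimate displacement and rotation between consecutive frames; the names and parameters are assumptions, not the authors' code.

```python
import cv2
import numpy as np


def frame_motion(prev_gray, curr_gray):
    """Estimate in-plane displacement (pixels) and rotation (degrees) between
    two consecutive endoscopy frames via feature matching and RANSAC."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    # Robust similarity transform (rotation + translation + scale)
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    dx, dy = M[0, 2], M[1, 2]
    rotation = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
    return dx, dy, rotation
```

    Accumulating these per-frame displacements over the video gives an approximate travel distance along the GI tract.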

  8. Authentication Approaches for Standoff Video Surveillance

    International Nuclear Information System (INIS)

    Baldwin, G.; Sweatt, W.; Thomas, M.

    2015-01-01

    Video surveillance for international nuclear safeguards applications requires authentication, which confirms to an inspector reviewing the surveillance images that both the source and the integrity of those images can be trusted. To date, all such authentication approaches originate at the camera. Camera authentication would not suffice for a ''standoff video'' application, where the surveillance camera views an image piped to it from a distant objective lens. Standoff video might be desired in situations where it does not make sense to expose sensitive and costly camera electronics to contamination, radiation, water immersion, or other adverse environments typical of hot cells, reprocessing facilities, and within spent fuel pools, for example. In this paper, we offer optical architectures that introduce a standoff distance of several metres between the scene and camera. Several schemes enable one to authenticate not only that the extended optical path is secure, but also that the scene is being viewed live. They employ optical components with remotely-operated spectral, temporal, directional, and intensity properties that are under the control of the inspector. If permitted by the facility operator, illuminators, reflectors and polarizers placed in the scene offer further possibilities. Any tampering that would insert an alternative image source for the camera, although undetectable with conventional cryptographic authentication of digital camera data, is easily exposed using the approaches we describe. Sandia National Laboratories is a multi-programme laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. Support to Sandia National Laboratories provided by the NNSA Next Generation Safeguards Initiative is gratefully acknowledged. SAND2014-3196 A. (author)

  9. Organic solid-state lasers

    CERN Document Server

    Forget, Sébastien

    2013-01-01

    Organic lasers are broadly tunable coherent sources, potentially compact, convenient and manufactured at low cost. Having appeared in the mid-1960s as solid-state alternatives to liquid dye lasers, they gained a new dimension after the demonstration of organic semiconductor lasers in the 1990s. More recently, new perspectives appeared at the nanoscale, with organic polariton and surface plasmon lasers. After a brief review of laser physics, a first chapter explains what makes organic solid-state lasers specific. The laser architectures used in organic lasers are then reviewed, with a state-of-the-art survey of device performance with regard to output power, threshold, lifetime, beam quality, etc. A survey of recent trends in the field is given, highlighting the latest developments with a special focus on the challenges remaining for achieving direct electrical pumping of organic semiconductor lasers. A last chapter covers the applications of organic solid-state lasers.

  10. High Dynamic Range Video

    CERN Document Server

    Myszkowski, Karol

    2008-01-01

    This book presents a complete pipeline for HDR image and video processing from acquisition, through compression and quality evaluation, to display. At the HDR image and video acquisition stage, specialized HDR sensors or multi-exposure techniques suitable for traditional cameras are discussed. Then, we present a practical solution for pixel values calibration in terms of photometric or radiometric quantities, which are required in some technically oriented applications. Also, we cover the problem of efficient image and video compression and encoding either for storage or transmission purposes, in

  11. Ultrahigh-speed, high-sensitivity color camera with 300,000-pixel single CCD

    Science.gov (United States)

    Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Ohtake, H.; Kurita, T.; Tanioka, K.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Etoh, T. G.

    2007-01-01

    We have developed an ultrahigh-speed, high-sensitivity portable color camera with a new 300,000-pixel single CCD. The 300,000-pixel CCD, which has four times the number of pixels of our initial model, was developed by seamlessly joining two 150,000-pixel CCDs. A green-red-green-blue (GRGB) Bayer filter is used to realize a color camera with the single-chip CCD. The camera is capable of ultrahigh-speed video recording at up to 1,000,000 frames/sec and is small enough to be handheld. We also developed a technology for dividing the CCD output signal to enable parallel, high-speed readout and recording in external memory; this makes possible long, continuous shots up to 1,000 frames/second. In an experiment, video footage was captured at an athletics meet. Because of the high-speed shooting, even detailed movements of athletes' muscles were captured. This camera can capture clear slow-motion videos, enabling previously impossible live footage to be imaged for various TV broadcasting programs.

  12. Comparison of myocardial perfusion imaging between the new high-speed gamma camera and the standard anger camera

    International Nuclear Information System (INIS)

    Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi

    2013-01-01

    Cadmium-zinc-telluride (CZT) solid-state detectors have been recently introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of the CZT high-speed gamma camera (Discovery NM 530c) with that of the standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest 99mTc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera with a 15-min scan time each for stress and for rest. All scans were immediately repeated on a CZT camera with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong within narrow Bland-Altman limits of agreement. Using list mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans, and in 100% of the ≥4-min scans. For CZT scans at rest, similarly, image quality was rated as good or excellent in 94% of the 1-min scans, and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, which is equivalent to standard myocardial single-photon emission computed tomography, despite a short scan time of less than half of the standard time. (author)

  13. Solid-State NMR Study of New Copolymers as Solid Polymer Electrolytes

    Directory of Open Access Journals (Sweden)

    Jean-Christophe Daigle

    2018-01-01

    We report the analysis of comb-like polymers by solid-state NMR. The polymers were previously evaluated as solid polymer electrolytes (SPE) for lithium-polymer-metal batteries that have suitable ionic conductivity at 60 °C. We propose to develop a correlation between 13C solid-state NMR measurements and phase segregation. 13C solid-state NMR is a well-suited tool for differentiating polymer phases with fast or slow motions. 7Li was used to monitor the motion of lithium ions in the polymer, and activation energies were calculated.

  14. Final Report for LDRD Project 02-FS-009 Gigapixel Surveillance Camera

    Energy Technology Data Exchange (ETDEWEB)

    Marrs, R E; Bennett, C L

    2010-04-20

    The threats of terrorism and proliferation of weapons of mass destruction add urgency to the development of new techniques for surveillance and intelligence collection. For example, the United States faces a serious and growing threat from adversaries who locate key facilities underground, hide them within other facilities, or otherwise conceal their location and function. Reconnaissance photographs are one of the most important tools for uncovering the capabilities of adversaries. However, current imaging technology provides only infrequent static images of a large area, or occasional video of a small area. We are attempting to add a new dimension to reconnaissance by introducing a capability for large area video surveillance. This capability would enable tracking of all vehicle movements within a very large area. The goal of our project is the development of a gigapixel video surveillance camera for high altitude aircraft or balloon platforms. From very high altitude platforms (20-40 km altitude) it would be possible to track every moving vehicle within an area of roughly 100 km x 100 km, about the size of the San Francisco Bay region, with a gigapixel camera. Reliable tracking of vehicles requires a ground sampling distance (GSD) of 0.5 to 1 m and a framing rate of approximately two frames per second (fps). For a 100 km x 100 km area the corresponding pixel count is 10 gigapixels for a 1-m GSD and 40 gigapixels for a 0.5-m GSD. This is an order of magnitude beyond the 1 gigapixel camera envisioned in our LDRD proposal. We have determined that an instrument of this capacity is feasible.
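
    The pixel counts quoted above follow directly from the surveyed area and the ground sampling distance; the short calculation below reproduces them.

```python
# Pixels needed to cover a 100 km x 100 km area at a given ground
# sampling distance (GSD); reproduces the 10- and 40-gigapixel figures.
side_m = 100_000  # 100 km in metres
for gsd_m in (1.0, 0.5):
    pixels = (side_m / gsd_m) ** 2
    print(f"GSD {gsd_m} m -> {pixels / 1e9:.0f} gigapixels")
# GSD 1.0 m -> 10 gigapixels
# GSD 0.5 m -> 40 gigapixels
```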

  15. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks is getting essential for video surveillance. The tasks of tracking human over camera networks are not only inherently challenging due to changing human appearance, but also have enormous potentials for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for the human tracking over camera networks are addressed, including human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed based on two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed based on the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on the analyses of the current progress made toward human tracking techniques over camera networks.

  16. A Compton camera for spectroscopic imaging from 100 keV to 1 MeV

    International Nuclear Information System (INIS)

    Earnhart, J.R.D.

    1998-01-01

    A review of spectroscopic imaging issues, applications, and technology is presented. Compton cameras based on solid-state semiconductor detectors stand out as the best system for the nondestructive assay of special nuclear materials. A camera for this application has been designed based on an efficient specific-purpose Monte Carlo code developed for this project. Preliminary experiments have been performed which demonstrate the validity of the Compton camera concept and the accuracy of the code. Based on these results, a portable prototype system is in development. Proposed future work is addressed

  17. Solid Lithium Ion Conductors (SLIC) for Lithium Solid State Batteries

    Data.gov (United States)

    National Aeronautics and Space Administration — To identify the most lithium-ion conducting solid electrolytes for lithium solid state batteries from the emerging types of solid electrolytes, based on a...

  18. Technical assessment of Navitar Zoom 6000 optic and Sony HDC-X310 camera for MEMS presentations and training.

    Energy Technology Data Exchange (ETDEWEB)

    Diegert, Carl F.

    2006-02-01

    This report evaluates a newly-available, high-definition, video camera coupled with a zoom optical system for microscopic imaging of micro-electro-mechanical systems. We did this work to support configuration of three document-camera-like stations as part of an installation in a new Microsystems building at Sandia National Laboratories. The video display walls to be installed as part of these three presentation and training stations are of extraordinary resolution and quality. The new availability of a reasonably-priced, cinema-quality, high-definition video camera offers the prospect of filling these displays with full-motion imaging of Sandia's microscopic products at a quality substantially beyond the quality of typical video microscopes. Simple and robust operation of the microscope stations will allow the extraordinary-quality imaging to contribute to Sandia's day-to-day research and training operations. This report illustrates the disappointing image quality from a camera/lens system comprised of a Sony HDC-X310 high-definition video camera coupled to a Navitar Zoom 6000 lens. We determined that this Sony camera is capable of substantially more image quality than the Navitar optic can deliver. We identified an optical doubler lens from Navitar as the component of their optical system that accounts for a substantial part of the image quality problem. While work continues to incrementally improve performance of the Navitar system, we are also evaluating optical systems from other vendors to couple to this Sony camera.

  19. Video quality of 3G videophones for telephone cardiopulmonary resuscitation.

    Science.gov (United States)

    Tränkler, Uwe; Hagen, Oddvar; Horsch, Alexander

    2008-01-01

    We simulated a cardiopulmonary resuscitation (CPR) scene with a manikin and used two 3G videophones on the caller's side to transmit video to a laptop PC. Five observers (two doctors with experience in emergency medicine and three paramedics) evaluated the video. They judged whether the manikin was breathing and whether they would give advice for CPR; they also graded the confidence of their decision-making. Breathing was only visible from certain orientations of the videophones, at distances below 150 cm with good illumination and a still background. Since the phones produced a degradation in colours and shadows, detection of breathing mainly depended on moving contours. Low camera positioning produced better results than having the camera high up. Darkness, shaking of the camera and a moving background made detection of breathing almost impossible. The video from the two 3G videophones that were tested was of sufficient quality for telephone CPR provided that camera orientation, distance, illumination and background were carefully chosen. Thus it seems possible to use 3G videophones for emergency calls involving CPR. However, further studies on the required video quality in different scenarios are necessary.

  20. Solid state magnetism

    CERN Document Server

    Crangle, John

    1991-01-01

    Solid state magnetism is important and attempts to understand magnetic properties have led to an increasingly deep insight into the fundamental make up of solids. Both experimental and theoretical research into magnetism continue to be very active, yet there is still much ground to cover before there can be a full understanding. There is a strong interplay between the developments of materials science and of magnetism. Hundreds of new materials have been discovered, often with previously unobserved and puzzling magnetic properties. A large and growing technology exists that is based on the magnetic properties of materials. Very many devices used in everyday life involve magnetism and new applications are being invented all the time. Understanding the fundamental background to the applications is vital to using and developing them. The aim of this book is to provide a simple, up-to-date introduction to the study of solid state magnetism, both intrinsic and technical. It is designed to meet the needs a...

  1. Application of robust face recognition in video surveillance systems

    Science.gov (United States)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that utilizes face recognition as a search indexing feature. As the use of video cameras has greatly increased in recent years, face recognition is a natural fit for searching for targeted individuals within the vast amount of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance video cameras record objects without fixed postures, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA) and reconstructs the human faces from the available information. Experimental results show that the system is highly efficient in processing real-life videos and is very robust to various kinds of face occlusion. Hence it can relieve human reviewers from sitting in front of the monitors and greatly enhances efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its value by helping to solve real cases.
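
    The occlusion-handling idea can be sketched with ordinary PCA (scikit-learn) standing in for the fuzzy PCA used in the paper: an eigenface basis is learned from unoccluded faces, and the occluded pixels are iteratively filled from the reconstruction while the visible pixels are held fixed. All names and parameters below are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA


def reconstruct_occluded(face, mask, gallery, n_components=50, n_iter=10):
    """Reconstruct the occluded pixels of a vectorised face image.

    face:    1-D array of pixel values (occluded entries arbitrary)
    mask:    boolean array, True where the pixel is visible
    gallery: 2-D array, one unoccluded training face per row
    """
    pca = PCA(n_components=n_components).fit(gallery)
    # Initialise missing pixels with the training mean, then iterate.
    x = np.where(mask, face, pca.mean_)
    for _ in range(n_iter):
        coeffs = pca.transform(x[None, :])        # project onto eigenfaces
        recon = pca.inverse_transform(coeffs)[0]  # back-project
        x = np.where(mask, face, recon)           # keep visible pixels fixed
    return x
```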

  2. Ground Validation Drop Camera Transect Points - St. Thomas/ St. John USVI - 2011

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This shapefile denotes the location of underwater video that was collected by NOAA scientists using a SeaViewer drop camera system. Video were collected between...

  3. Development of camera technology for monitoring nests. Chapter 15

    Science.gov (United States)

    W. Andrew Cox; M. Shane Pruett; Thomas J. Benson; Scott J. Chiavacci; Frank R., III Thompson

    2012-01-01

    Photo and video technology has become increasingly useful in the study of avian nesting ecology. However, researchers interested in using camera systems are often faced with insufficient information on the types and relative advantages of available technologies. We reviewed the literature for studies of nests that used cameras and summarized them based on study...

  4. Homography-based multiple-camera person-tracking

    Science.gov (United States)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of
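
    Once corresponding foot-point pairs have been collected from two overlapping views, the plane-induced homography and the transfer of a target's foot location can be sketched with OpenCV as below; the feet-feature detection and field-of-view-line logic of the paper are omitted, and the function names are assumptions.

```python
import cv2
import numpy as np


def fit_ground_homography(pts_cam_a, pts_cam_b):
    """Estimate the ground-plane homography from corresponding foot points
    'dropped' by targets seen in both cameras (>= 4 non-degenerate pairs)."""
    src = np.float32(pts_cam_a).reshape(-1, 1, 2)
    dst = np.float32(pts_cam_b).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H


def map_foot_point(H, foot_xy):
    """Transfer a foot location from camera A's image plane to camera B's."""
    p = np.float32([[foot_xy]])
    return cv2.perspectiveTransform(p, H)[0, 0]
```

    A track in camera A and a track in camera B can then share a label when the mapped foot point falls close to the other track's foot location.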

  5. Digital video recording and archiving in ophthalmic surgery

    Directory of Open Access Journals (Sweden)

    Raju Biju

    2006-01-01

    Currently most ophthalmic operating rooms are equipped with an analog video recording system [an analog Charge Coupled Device camera for video grabbing and a Video Cassette Recorder for recording]. We discuss the various advantages of a digital video capture device, its archiving capabilities, and our experience during the transition from analog to digital video recording and archiving. The basic terminology and concepts related to analog and digital video, along with the choice of hardware, software and formats for archiving, are discussed.

  6. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest for 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century.The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  7. A preliminary study to estimate contact rates between free-roaming domestic dogs using novel miniature cameras.

    Directory of Open Access Journals (Sweden)

    Courtenay B Bombara

    Information on contacts between individuals within a population is crucial to inform disease control strategies, via parameterisation of disease spread models. In this study we investigated the use of dog-borne video cameras, in conjunction with global positioning system (GPS) loggers, to both characterise dog-to-dog contacts and to estimate contact rates. We customized miniaturised video cameras, enclosed within 3D-printed plastic cases, and attached these to nylon dog collars. Using two 3400 mAh NCR lithium-ion batteries, cameras could record a maximum of 22 hr of continuous video footage. Together with a GPS logger, collars were attached to six free-roaming domestic dogs (FRDDs) in two remote Indigenous communities in northern Australia. We recorded a total of 97 hr of video footage, ranging from 4.5 to 22 hr (mean 19.1 hr) per dog, and observed a wide range of social behaviours. The majority (69%) of all observed interactions between community dogs involved direct physical contact. Direct contact behaviours included sniffing, licking, mouthing and play fighting. No contacts appeared to be aggressive; however, multiple teeth-baring incidents were observed during play fights. We identified a total of 153 contacts (equating to 8 to 147 contacts per dog per 24 hr) from the videos of the five dogs with camera data that could be analysed. These contacts were attributed to 42 unique dogs (range 1 to 19 per video) which could be identified (based on colour patterns and markings). Most dog activity was observed in urban environments (houses and roads), but contacts were more common in bushland and beach environments. A variety of foraging behaviours were observed, including scavenging through rubbish and rolling on dead animal carcasses. Identified food consumed included chicken, raw bones, animal carcasses, rubbish, grass and cheese. For characterising contacts between FRDDs, several benefits of analysing videos compared to GPS fixes alone were identified in

  8. High-powered, solid-state rf systems

    International Nuclear Information System (INIS)

    Reid, D.W.

    1987-01-01

    Over the past two years, the requirement to supply megawatts of rf power for space-based applications at uhf and L-band frequencies has caused dramatic increases in silicon solid-state power capabilities in the frequency range from 10 to 3000 MHz. Radar and communications requirements have caused similar increases in gallium arsenide solid-state power capabilities in the frequency range from 3000 to 10,000 MHz. This paper reviews the present state of the art for solid-state rf amplifiers for frequencies from 10 to 10,000 MHz. Information regarding power levels, size, weight, and cost will be given. Technical specifications regarding phase and amplitude stability, efficiency, and system architecture will be discussed. Solid-state rf amplifier susceptibility to radiation damage will also be examined

  9. Intelligent video surveillance systems and technology

    CERN Document Server

    Ma, Yunqian

    2009-01-01

    From the streets of London to subway stations in New York City, hundreds of thousands of surveillance cameras ubiquitously collect hundreds of thousands of videos, often running 24/7. How can such vast volumes of video data be stored, analyzed, indexed, and searched? How can advanced video analysis and systems autonomously recognize people and detect targeted activities in real time? Collating and presenting the latest information, Intelligent Video Surveillance: Systems and Technology explores these issues, from fundamental principles to algorithmic design and system implementation. An Integrated

  10. Solid-state lithium battery

    Science.gov (United States)

    Ihlefeld, Jon; Clem, Paul G; Edney, Cynthia; Ingersoll, David; Nagasubramanian, Ganesan; Fenton, Kyle Ross

    2014-11-04

    The present invention is directed to a higher power, thin film lithium-ion electrolyte on a metallic substrate, enabling mass-produced solid-state lithium batteries. High-temperature thermodynamic equilibrium processing enables co-firing of oxides and base metals, providing a means to integrate the crystalline, lithium-stable, fast lithium-ion conductor lanthanum lithium tantalate (La(1/3-x)Li(3x)TaO3) directly with a thin metal foil current collector appropriate for a lithium-free solid-state battery.

  11. Iterative Multiview Side Information for Enhanced Reconstruction in Distributed Video Coding

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Distributed video coding (DVC) is a new paradigm for video compression based on the information theoretical results of Slepian and Wolf (SW) and Wyner and Ziv (WZ). DVC entails low-complexity encoders as well as separate encoding of correlated video sources. This is particularly attractive for multiview camera systems in video surveillance and camera sensor network applications, where low complexity is required at the encoder. In addition, the separate encoding of the sources implies no communication between the cameras in a practical scenario. This is an advantage since communication is time and power consuming and requires complex networking. In this work, different intercamera estimation techniques for side information (SI) generation are explored and compared in terms of estimation quality, complexity, and rate-distortion (RD) performance. Further, a technique called iterative multiview side information (IMSI) is introduced, where the final SI is used in an iterative reconstruction process. The simulation results show that IMSI significantly improves the RD performance for video with significant motion and activity. Furthermore, DVC outperforms AVC/H.264 Intra for video with average and low motion but it is still inferior to the Inter No Motion and Inter Motion modes.

  12. Evaluation of Origin Ensemble algorithm for image reconstruction for pixelated solid-state detectors with large number of channels

    Science.gov (United States)

    Kolstein, M.; De Lorenzo, G.; Mikhaylova, E.; Chmeissani, M.; Ariño, G.; Calderón, Y.; Ozsahin, I.; Uzun, D.

    2013-04-01

    The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For PET scanners, conventional algorithms like Filtered Back-Projection (FBP) and Ordered Subset Expectation Maximization (OSEM) are straightforward to use and give good results. However, FBP presents difficulties for detectors with limited angular coverage like PEM and Compton gamma cameras, whereas OSEM has an impractically large time and memory consumption for a Compton gamma camera with a large number of channels. In this article, the Origin Ensemble (OE) algorithm is evaluated as an alternative algorithm for image reconstruction. Monte Carlo simulations of the PET design are used to compare the performance of OE, FBP and OSEM in terms of the bias, variance and average mean squared error (MSE) image quality metrics. For the PEM and Compton camera designs, results obtained with OE are presented.
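
    The bias, variance, and MSE metrics used in the comparison can be computed voxel-wise from a stack of reconstructed replicates and a known phantom, for example as in the hedged sketch below (illustrative only, not the authors' evaluation code).

```python
import numpy as np


def image_quality_metrics(recons, truth):
    """Voxel-wise bias, variance and mean squared error.

    recons: array of shape (n_replicates, ...) of reconstructed images
    truth:  ground-truth phantom of shape (...)
    """
    mean_img = recons.mean(axis=0)
    bias = mean_img - truth
    variance = recons.var(axis=0)
    mse = ((recons - truth) ** 2).mean(axis=0)  # equals bias**2 + variance
    return bias, variance, mse
```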

  13. Video Analytics for Business Intelligence

    CERN Document Server

    Porikli, Fatih; Xiang, Tao; Gong, Shaogang

    2012-01-01

    Closed Circuit TeleVision (CCTV) cameras have been increasingly deployed pervasively in public spaces including retail centres and shopping malls. Intelligent video analytics aims to automatically analyze the content of massive amounts of public space video data and has been one of the most active areas of computer vision research in the last two decades. The current focus of video analytics research has been largely on detecting alarm events and abnormal behaviours for public safety and security applications. However, increasingly CCTV installations have also been exploited for gathering and analyzing business intelligence information, in order to enhance marketing and operational efficiency. For example, in retail environments, surveillance cameras can be utilised to collect statistical information about shopping behaviour and preference for marketing (e.g., how many people entered a shop; how many females/males or which age groups of people showed interest in a particular product; how long did they stay in the sho...

  14. Video Conference System that Keeps Mutual Eye Contact Among Participants

    Directory of Open Access Journals (Sweden)

    Masahiko Yahagi

    2011-10-01

    A novel video conference system is developed. Supposing that three people A, B, and C attend the video conference, the proposed system enables eye contact between every pair. Furthermore, when B and C chat, A feels as if B and C were facing each other (eye contact appears to be maintained between B and C). In the case of a triangle video conference, each video system is composed of a half mirror, two video cameras, and two monitors. Each participant watches the other participants' images, which are reflected by the half mirror. Cameras are set behind the half mirror. Since each participant's image (face) and the camera position are adjusted to lie in the same direction, eye contact is maintained and conversation becomes very natural compared with conventional video conference systems, where participants' eyes do not point toward the other participant. When three participants sit at the vertices of an equilateral triangle, eye contact can be maintained even in the situation mentioned above (eye contact between B and C from the viewpoint of A). Eye contact can be maintained not only for 2 or 3 participants but also for any number of participants, as long as they sit at the vertices of a regular polygon.

  15. New system for linear accelerator radiosurgery with a gantry-mounted video camera

    International Nuclear Information System (INIS)

    Kunieda, Etsuo; Kitamura, Masayuki; Kawaguchi, Osamu; Ohira, Takayuki; Ogawa, Kouichi; Ando, Yutaka; Nakamura, Kayoko; Kubo, Atsushi

    1998-01-01

    Purpose: We developed a positioning method that does not depend on the positioning mechanism originally annexed to the linac and investigated the positioning errors of the system. Methods and Materials: A small video camera was placed at a location optically identical to the linac x-ray source. A target pointer comprising a convex lens and a bull's eye was attached to the arc of the Leksell stereotactic system so that the lens would form a virtual image of the bull's eye (virtual target) at the position of the center of the arc. The linac gantry and target pointer were placed at the side and top to adjust the arc center to the isocenter by referring to the virtual target. Coincidence of the target and the isocenter could be confirmed in any combination of couch and gantry rotation. In order to evaluate the accuracy of the positioning, a tungsten ball was attached to the stereotactic frame as a simulated target, which was repeatedly localized and repositioned to estimate the magnitude of the error. The center of the circular field defined by the collimator was marked on the film. Results: The differences between the marked centers of the circular field and the centers of the shadow of the simulated target were less than 0.3 mm

  16. Fluidized Bed Reactor as Solid State Fermenter

    Directory of Open Access Journals (Sweden)

    Krishnaiah, K.

    2005-01-01

    Various reactors, such as tray, packed bed, and rotating drum reactors, can be used for solid-state fermentation. In this paper the possibility of using a fluidized bed reactor as a solid-state fermenter is considered. The design parameters that affect its performance are identified and discussed. This information, in general, can be used in the design and development of an efficient fluidized bed solid-state fermenter. However, the objective here is to develop a fluidized bed solid-state fermenter for the conversion of palm kernel cake into enriched animal and poultry feed.

  17. Solid state physics for metallurgists

    CERN Document Server

    Weiss, Richard J

    2013-01-01

    Metal Physics and Physical Metallurgy, Volume 6: Solid State Physics for Metallurgists provides an introduction to the basic understanding of the properties that make materials useful to mankind. This book discusses the electronic structure of matter, which is the domain of solid state physics.Organized into 12 chapters, this volume begins with an overview of the electronic structure of free atoms and the electronic structure of solids. This text then examines the basis of the Bloch theorem, which is the exact periodicity of the potential. Other chapters consider the fundamental assumption in

  18. Streaming Audio and Video: New Challenges and Opportunities for Museums.

    Science.gov (United States)

    Spadaccini, Jim

    Streaming audio and video present new challenges and opportunities for museums. Streaming media is easier to author and deliver to Internet audiences than ever before; digital video editing is commonplace now that the tools--computers, digital video cameras, and hard drives--are so affordable; the cost of serving video files across the Internet…

  19. Integrating IPix immersive video surveillance with unattended and remote monitoring (UNARM) systems

    International Nuclear Information System (INIS)

    Michel, K.D.; Klosterbuer, S.F.; Langner, D.C.

    2004-01-01

    Commercially available IPix cameras and software are being researched as a means by which an inspector can be virtually immersed into a nuclear facility. A single IPix camera can provide 360 by 180 degree views with full pan-tilt-zoom capability, and with no moving parts on the camera mount. Immersive video technology can be merged into the current Unattended and Remote Monitoring (UNARM) system, thereby providing an integrated system of monitoring capabilities that tie together radiation, video, isotopic analysis, Global Positioning System (GPS), etc. The integration of the immersive video capability with other monitoring methods already in place provides a significantly enhanced situational awareness to the International Atomic Energy Agency (IAEA) inspectors.

  20. Video-based Mobile Mapping System Using Smartphones

    Science.gov (United States)

    Al-Hamad, A.; Moussa, A.; El-Sheimy, N.

    2014-11-01

    The last two decades have witnessed a huge growth in the demand for geo-spatial data. This demand has encouraged researchers around the world to develop new algorithms and design new mapping systems in order to obtain reliable sources for geo-spatial data. Mobile Mapping Systems (MMS) are one of the main sources of mapping and Geographic Information Systems (GIS) data. MMS integrate various remote sensing sensors, such as cameras and LiDAR, along with navigation sensors to provide the 3D coordinates of points of interest from a moving platform (e.g., cars, airplanes, etc.). Although MMS can provide an accurate mapping solution for different GIS applications, the cost of these systems is not affordable for many users, and only large-scale companies and institutions can benefit from them. The main objective of this paper is to propose a new low-cost MMS with reasonable accuracy using the sensors available in smartphones and their video cameras. Using the smartphone video camera, instead of capturing individual images, makes the system easier to use for non-professional users, since the system automatically extracts the highly overlapping frames out of the video without user intervention. Results of the proposed system are presented, demonstrating the effect of the number of images used on the mapping solution. In addition, the accuracy of the mapping results obtained from capturing a video is compared to that obtained from using separately captured images instead of video.
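
    Frame extraction from the smartphone video can be sketched with OpenCV as below; a fixed frame stride stands in for the automatic overlap-based selection described in the paper, and the names are assumptions.

```python
import cv2


def extract_frames(video_path, stride=10, out_pattern="frame_{:05d}.jpg"):
    """Save every `stride`-th frame of a smartphone video so that consecutive
    saved frames remain highly overlapping for photogrammetric processing."""
    cap = cv2.VideoCapture(video_path)
    saved, idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:
            cv2.imwrite(out_pattern.format(saved), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```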

  1. Solid-State Powered X-band Accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Othman, Mohamed A.K. [SLAC National Accelerator Lab., Menlo Park, CA (United States); Nann, Emilio A. [SLAC National Accelerator Lab., Menlo Park, CA (United States); Dolgashev, Valery A. [SLAC National Accelerator Lab., Menlo Park, CA (United States); Tantawi, Sami [SLAC National Accelerator Lab., Menlo Park, CA (United States); Neilson, Jeff [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2017-03-06

    In this report we disseminate the hot test results of an X-band 100-W solid-state amplifier chain for linear accelerator (linac) applications. Solid-state power amplifiers have become increasingly attractive solutions for achieving high power in radar and maritime applications. Here the performance of solid-state amplifiers when driving an RF cavity is investigated. Commercially available, matched and fully packaged GaN-on-SiC HEMTs are utilized, comprising a wideband driver stage and two power stages. The amplifier chain has a high power-added efficiency and is able to supply up to ~1.2 MV/m field gradient at 9.2 GHz in a simple test cavity, with a peak power exceeding 100 W. These findings set forth the enabling technology for solid-state powered linacs.

  2. The Galileo Solid-State Imaging experiment

    Science.gov (United States)

    Belton, M.J.S.; Klaasen, K.P.; Clary, M.C.; Anderson, J.L.; Anger, C.D.; Carr, M.H.; Chapman, C.R.; Davies, M.E.; Greeley, R.; Anderson, D.; Bolef, L.K.; Townsend, T.E.; Greenberg, R.; Head, J. W.; Neukum, G.; Pilcher, C.B.; Veverka, J.; Gierasch, P.J.; Fanale, F.P.; Ingersoll, A.P.; Masursky, H.; Morrison, D.; Pollack, James B.

    1992-01-01

    The Solid State Imaging (SSI) experiment on the Galileo Orbiter spacecraft utilizes a high-resolution (1500 mm focal length) television camera with an 800 × 800 pixel virtual-phase, charge-coupled detector. It is designed to return images of Jupiter and its satellites that are characterized by a combination of sensitivity levels, spatial resolution, geometric fidelity, and spectral range unmatched by imaging data obtained previously. The spectral range extends from approximately 375 to 1100 nm, and only in the near-ultraviolet region (below about 350 nm) is the spectral coverage reduced from previous missions. The camera is approximately 100 times more sensitive than those used in the Voyager mission and, because of the nature of the satellite encounters, will produce images with approximately 100 times the ground resolution (i.e., about 50 m lp^-1) on the Galilean satellites. We describe aspects of the detector, including its sensitivity to energetic particle radiation, and how the requirements for a large full-well capacity and long-term stability in operating voltages led to the choice of the virtual-phase chip. The F/8.5 camera system can reach point sources of V(mag) of about 11 with S/N of about 10, and extended sources with surface brightness as low as 20 kR in its highest gain state and longest exposure mode. We describe the performance of the system as determined by ground calibration and the improvements that have been made to the telescope (the same basic catadioptric design that was used in the Mariner 10 and Voyager high-resolution cameras) to reduce the scattered light reaching the detector. The images are linearly digitized 8 bits deep and, after flat-fielding, are cosmetically clean. Information 'preserving' and 'non-preserving' on-board data compression capabilities are outlined. A special "summation" mode, designed for use deep in the Jovian radiation belts, near Io, is also described. The detector is 'preflashed' before each exposure to ensure the photometric linearity

  3. Video Conferencing for a Virtual Seminar Room

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Fosgerau, A.; Hansen, Peter Søren K.

    2002-01-01

    A PC-based video conferencing system for a virtual seminar room is presented. The platform is enhanced with DSPs for audio and video coding and processing. A microphone array is used to facilitate audio based speaker tracking, which is used for adaptive beam-forming and automatic camera...

  4. A Reaction-Diffusion-Based Coding Rate Control Mechanism for Camera Sensor Networks

    Directory of Open Access Journals (Sweden)

    Naoki Wakamiya

    2010-08-01

    A wireless camera sensor network is useful for surveillance and monitoring for its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication and a network is easily overflown with a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism where each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., reaction-diffusion model, inspired by the similarity of biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.

  5. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    Science.gov (United States)

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring for its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication and a network is easily overflown with a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism where each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., reaction-diffusion model, inspired by the similarity of biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal.
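
    A loose one-dimensional illustration of the idea is given below: a FitzHugh-Nagumo-style activator-inhibitor update runs over a line of camera nodes, with the activator standing in for each node's coding rate and an external stimulus marking nodes that currently observe a target. The equations and constants are assumptions for illustration, not the model or parameters from the paper.

```python
import numpy as np


def reaction_diffusion_step(u, v, stimulus, du=0.1, dv=0.05, dt=0.1):
    """One explicit Euler step of an activator (u) / inhibitor (v) system on a
    1-D ring of camera nodes; u is read as each node's video coding rate."""
    lap_u = np.roll(u, 1) + np.roll(u, -1) - 2 * u  # discrete Laplacian
    lap_v = np.roll(v, 1) + np.roll(v, -1) - 2 * v
    u_new = u + dt * (du * lap_u + u - u ** 3 - v + stimulus)
    v_new = v + dt * (dv * lap_v + 0.5 * (u - v))
    return u_new, v_new


# Example: 50 nodes, a target currently seen by nodes 20-24
u, v = np.zeros(50), np.zeros(50)
stim = np.zeros(50)
stim[20:25] = 1.0
for _ in range(200):
    u, v = reaction_diffusion_step(u, v, stim)
coding_rate = np.clip(u, 0, None)  # highest near the target, decaying with distance
```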

  6. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+

    Directory of Open Access Journals (Sweden)

    Steven Nicholas Graves, MA

    2015-02-01

    Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  7. High-Speed Video Analysis in a Conceptual Physics Class

    Science.gov (United States)

    Desbien, Dwain M.

    2011-09-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software. Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to be easily captured at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper will discuss using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular, this will involve the use of model rockets to determine the acceleration during the boost period right at launch and compare it to a simple model of the expected acceleration.
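
    Given positions digitized frame by frame from the high-frame-rate video, the boost-phase acceleration can be estimated with finite differences, for example as in this short sketch (an assumed workflow, not the article's own analysis script).

```python
import numpy as np


def acceleration_from_positions(y_m, fps):
    """Velocity and acceleration from per-frame vertical positions.

    y_m: 1-D array of positions in metres, one entry per video frame
    fps: frame rate of the recording, e.g. 240 or 1000 frames per second
    """
    dt = 1.0 / fps
    v = np.gradient(y_m, dt)  # velocity, m/s (central differences)
    a = np.gradient(v, dt)    # acceleration, m/s^2
    return v, a
```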

  8. Jellyfish support high energy intake of leatherback sea turtles (Dermochelys coriacea): video evidence from animal-borne cameras.

    Directory of Open Access Journals (Sweden)

    Susan G Heaslip

    The endangered leatherback turtle is a large, highly migratory marine predator that inexplicably relies upon a diet of low-energy gelatinous zooplankton. The location of these prey may be predictable at large oceanographic scales, given that leatherback turtles perform long-distance migrations (1000s of km) from nesting beaches to high-latitude foraging grounds. However, little is known about the profitability of this migration and foraging strategy. We used GPS location data and video from animal-borne cameras to examine how prey characteristics (i.e., prey size, prey type, prey encounter rate) correlate with the daytime foraging behavior of leatherbacks (n = 19) in shelf waters off Cape Breton Island, NS, Canada, during August and September. Video was recorded continuously, averaged 1:53 h per turtle (range 0:08-3:38 h), and documented a total of 601 prey captures. Lion's mane jellyfish (Cyanea capillata) was the dominant prey (83-100%), but moon jellyfish (Aurelia aurita) were also consumed. Turtles approached and attacked most jellyfish within the camera's field of view and appeared to consume prey completely. There was no significant relationship between encounter rate and dive duration (p = 0.74, linear mixed-effects models). Handling time increased with prey size regardless of prey species (p = 0.0001). Estimates of energy intake averaged 66,018 kJ per day but were as high as 167,797 kJ per day, corresponding to turtles consuming an average of 330 kg wet mass per day (up to 840 kg per day), or approximately 261 (up to 664) jellyfish per day. Assuming our turtles averaged 455 kg body mass, they consumed an average of 73% of their body mass per day, equating to an average energy intake of 3-7 times their daily metabolic requirements, depending on the estimates used. This study provides evidence that feeding tactics used by leatherbacks in Atlantic Canadian waters are highly profitable, and our results are consistent with estimates of mass gain prior to

  9. MOSS spectroscopic camera for imaging time resolved plasma species temperature and flow speed

    International Nuclear Information System (INIS)

    Michael, Clive; Howard, John

    2000-01-01

    A MOSS (Modulated Optical Solid-State) spectroscopic camera has been devised to monitor the spatial and temporal variations of the temperatures and flow speeds of plasma ion species, with the Doppler broadening measurement made on specified spectroscopic lines. As opposed to a single-channel MOSS spectrometer, the camera images light from the plasma onto an array of light detectors, enabling 2D imaging of plasma ion temperatures and flow speeds. In addition, compared to a conventional grating spectrometer, the MOSS camera shows excellent light-collecting performance, which leads to an improvement in signal-to-noise ratio and in time resolution. The present paper first describes the basic elements of MOSS spectroscopy and then the MOSS camera, with an emphasis on the optical system for 2D imaging. (author)

  10. MOSS spectroscopic camera for imaging time resolved plasma species temperature and flow speed

    Energy Technology Data Exchange (ETDEWEB)

    Michael, Clive; Howard, John [Australian National Univ., Plasma Research Laboratory, Canberra (Australia)

    2000-03-01

    A MOSS (Modulated Optical Solid-State) spectroscopic camera has been devised to monitor the spatial and temporal variations of the temperatures and flow speeds of plasma ion species, with the Doppler broadening measurement made on specified spectroscopic lines. As opposed to a single-channel MOSS spectrometer, the camera images light from the plasma onto an array of light detectors, enabling 2D imaging of plasma ion temperatures and flow speeds. In addition, compared to a conventional grating spectrometer, the MOSS camera shows excellent light-collecting performance, which leads to an improvement in signal-to-noise ratio and in time resolution. The present paper first describes the basic elements of MOSS spectroscopy and then the MOSS camera, with an emphasis on the optical system for 2D imaging. (author)
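
    In Doppler-broadening measurements of this kind, the ion temperature follows from the measured line width; for a Gaussian profile the standard relation can be evaluated as in the short sketch below (textbook physics, not the instrument's calibration code).

```python
import numpy as np

# Gaussian Doppler broadening:  FWHM = lambda0 * sqrt(8 ln2 * kT / (m c^2)),
# so the ion temperature follows from the measured full width at half maximum.
K_B = 1.380649e-23  # Boltzmann constant, J/K
C = 2.99792458e8    # speed of light, m/s


def ion_temperature(lambda0_m, fwhm_m, ion_mass_kg):
    """Ion temperature in kelvin from the Doppler FWHM of an emission line."""
    return ion_mass_kg * C ** 2 / (8 * np.log(2) * K_B) * (fwhm_m / lambda0_m) ** 2
```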

  11. Observations of sea-ice conditions in the Antarctic coastal region using ship-board video cameras

    Directory of Open Access Journals (Sweden)

    Haruhito Shimoda

    1997-03-01

    Full Text Available During the 30th, 31st, and 32nd Japanese Antarctic Research Expeditions (JARE-30,JARE-31,and JARE-32, sea-ice conditions were recorded by video camera on board the SHIRASE. Then, the sea-ice images were used to estimate compactness and thickness quantitatively. Analyzed legs are those toward Breid Bay and from Breid Bay to Syowa Station during JARE-30 and JARE-31,and those toward the Prince Olav Coast, from the Prince Olav Coast to Breid Bay, and from Breid Bay to Syowa Station during JARE-32. The results show yearly variations of ice compactness and thickness, latitudinal variations of thickness, and differences in thickness histograms between JARE-30 and JARE-32 in Lutzow-Holm Bay. Albedo values were measured simultaneously by a shortwave radiometer. These values are proportional to those of ice compactness. Finally, we examined the relationship between ice compactness and vertical gradient of air temperature above sea ice.

  12. Texton-based super-resolution for achieving high spatiotemporal resolution in hybrid camera system

    Science.gov (United States)

    Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2010-05-01

    Many super-resolution methods have been proposed to enhance the spatial resolution of images by using iteration and multiple input images. In a previous paper, we proposed an example-based super-resolution method that enhances an image through pixel-based texton substitution to reduce the computational cost. In that method, however, we only considered the enhancement of a texture image. In this study, we modified this texton substitution method for a hybrid camera to reduce the required bandwidth of a high-resolution video camera. We applied our algorithm to pairs of high- and low-spatiotemporal-resolution videos, which were synthesized to simulate a hybrid camera. The results showed that fine detail of the low-resolution video could be reproduced better than with bicubic interpolation, and that the required bandwidth of the video camera could be reduced to about 1/5. It was also shown that the peak signal-to-noise ratios (PSNRs) of the images improved by about 6 dB in a trained frame and by 1.0-1.5 dB in a test frame, compared with images processed by bicubic interpolation, and the average PSNRs were higher than those obtained by the well-known Freeman patch-based super-resolution method. Compared with the Freeman patch-based method, the computational time of our method was reduced to almost 1/10.
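
    The PSNR figures quoted above follow the standard definition; a minimal helper (a generic formulation, not code from the paper) for comparing a processed frame against its high-resolution ground truth might look as follows.

        import numpy as np

        def psnr(reference, test, peak=255.0):
            """Peak signal-to-noise ratio between two same-sized images, in dB."""
            mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)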

  13. Camera Motion and Surrounding Scene Appearance as Context for Action Recognition

    KAUST Repository

    Heilbron, Fabian Caba; Thabet, Ali Kassem; Niebles, Juan Carlos; Ghanem, Bernard

    2015-01-01

    This paper describes a framework for recognizing human actions in videos by incorporating a new set of visual cues that represent the context of the action. We develop a weak foreground-background segmentation approach in order to robustly extract not only foreground features that are focused on the actors, but also global camera motion and contextual scene information. Using dense point trajectories, our approach separates and describes the foreground motion from the background, represents the appearance of the extracted static background, and encodes the global camera motion that interestingly is shown to be discriminative for certain action classes. Our experiments on four challenging benchmarks (HMDB51, Hollywood2, Olympic Sports, and UCF50) show that our contextual features enable a significant performance improvement over state-of-the-art algorithms.

  14. Camera Motion and Surrounding Scene Appearance as Context for Action Recognition

    KAUST Repository

    Heilbron, Fabian Caba

    2015-04-17

    This paper describes a framework for recognizing human actions in videos by incorporating a new set of visual cues that represent the context of the action. We develop a weak foreground-background segmentation approach in order to robustly extract not only foreground features that are focused on the actors, but also global camera motion and contextual scene information. Using dense point trajectories, our approach separates and describes the foreground motion from the background, represents the appearance of the extracted static background, and encodes the global camera motion that interestingly is shown to be discriminative for certain action classes. Our experiments on four challenging benchmarks (HMDB51, Hollywood2, Olympic Sports, and UCF50) show that our contextual features enable a significant performance improvement over state-of-the-art algorithms.

  15. Automatic multi-camera calibration for deployable positioning systems

    Science.gov (United States)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multicamera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
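
    As a rough illustration of the extrinsic calibration step described above, the sketch below estimates the relative pose of one intrinsically calibrated camera pair from matched image points with the 5-point method inside a RANSAC loop (OpenCV implementation; the correspondences pts1/pts2 and the intrinsic matrix K are assumed to come from feature matching on synchronized frames).

        import cv2
        import numpy as np

        def relative_pose(pts1, pts2, K):
            """Rotation R and translation direction t from camera 1 to camera 2.

            pts1, pts2 : Nx2 arrays of matched pixel coordinates.
            K          : 3x3 intrinsic matrix (assumed identical for both cameras).
            """
            # Essential matrix via the 5-point algorithm inside RANSAC
            E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                              prob=0.999, threshold=1.0)
            # Decompose E; the fourfold ambiguity is resolved by a cheirality check
            _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
            return R, t   # t is recovered only up to scale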

  16. Face Recognition and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Swapnil Vitthal Tathe

    2017-07-01

    Full Text Available Advances in computer vision technology and the availability of video capture devices such as surveillance cameras have evoked new video processing applications. Research in video face recognition is mostly biased towards law enforcement applications. Applications include human recognition based on face and iris, human-computer interaction, behavior analysis, video surveillance, etc. This paper presents a face tracking framework capable of face detection using Haar features, recognition using Gabor feature extraction, matching using a correlation score, and tracking using a Kalman filter. The method has a good recognition rate for real-life videos and robust performance to changes due to illumination, environmental factors, scale, pose, and orientation.
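
    A minimal sketch of the detection and tracking stages named above is given below: per-frame Haar-cascade face detection with a Kalman filter smoothing the face centre between detections (OpenCV; the input file name and noise settings are assumptions, and the Gabor-feature recognition and correlation matching stages are omitted).

        import cv2
        import numpy as np

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        kf = cv2.KalmanFilter(4, 2)   # state: [x, y, vx, vy], measurement: [x, y]
        kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                        [0, 1, 0, 1],
                                        [0, 0, 1, 0],
                                        [0, 0, 0, 1]], np.float32)
        kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
        kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)

        cap = cv2.VideoCapture("input.avi")   # hypothetical input video
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            predicted_centre = kf.predict()[:2]   # estimate used when detection fails
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) > 0:
                x, y, w, h = faces[0]
                measurement = np.array([[x + w / 2], [y + h / 2]], np.float32)
                kf.correct(measurement)           # fuse detection with the motion model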

  17. Multiple-camera tracking: UK government requirements

    Science.gov (United States)

    Hosmer, Paul

    2007-10-01

    The Imagery Library for Intelligent Detection Systems (i-LIDS) is the UK government's new standard for Video Based Detection Systems (VBDS). The standard was launched in November 2006 and evaluations against it began in July 2007. With the first four i-LIDS scenarios completed, the Home Office Scientific Development Branch (HOSDB) are looking toward the future of intelligent vision in the security surveillance market by adding a fifth scenario to the standard. The fifth i-LIDS scenario will concentrate on the development, testing and evaluation of systems for the tracking of people across multiple cameras. HOSDB and the Centre for the Protection of National Infrastructure (CPNI) identified a requirement to track targets across a network of CCTV cameras using both live and post event imagery. The Detection and Vision Systems group at HOSDB were asked to determine the current state of the market and develop an in-depth Operational Requirement (OR) based on government end user requirements. Using this OR the i-LIDS team will develop a full i-LIDS scenario to aid the machine vision community in its development of multi-camera tracking systems. By defining a requirement for multi-camera tracking and building this into the i-LIDS standard the UK government will provide a widely available tool that developers can use to help them turn theory and conceptual demonstrators into front line application. This paper will briefly describe the i-LIDS project and then detail the work conducted in building the new tracking aspect of the standard.

  18. Script Design for Information Film and Video.

    Science.gov (United States)

    Shelton, S. M. (Marty); And Others

    1993-01-01

    Shows how the empathy created in the audience by each of the five genres of film/video is a function of the five elements of film design: camera angle, close up, composition, continuity, and cutting. Discusses film/video script designing. Illustrates these concepts with a sample script and story board. (SR)

  19. Re-identification of persons in multi-camera surveillance under varying viewpoints and illumination

    Science.gov (United States)

    Bouma, Henri; Borsboom, Sander; den Hollander, Richard J. M.; Landsmeer, Sander H.; Worring, Marcel

    2012-06-01

    The capability to track individuals in CCTV cameras is important for surveillance and forensics alike. However, it is laborious to do over multiple cameras. Therefore, an automated system is desirable. In the literature several methods have been proposed, but their robustness against varying viewpoints and illumination is limited. Hence performance in realistic settings is also limited. In this paper, we present a novel method for the automatic re-identification of persons in video from surveillance cameras in a realistic setting. The method is computationally efficient, robust to a wide variety of viewpoints and illumination, simple to implement, and requires no training. We compare the performance of our method to several state-of-the-art methods on a publicly available dataset that contains the variety of viewpoints and illumination needed to allow benchmarking. The results indicate that our method shows good performance and enables a human operator to track persons five times faster.

  20. Measuring frequency of one-dimensional vibration with video camera using electronic rolling shutter

    Science.gov (United States)

    Zhao, Yipeng; Liu, Jinyue; Guo, Shijie; Li, Tiejun

    2018-04-01

    Cameras offer a unique capability of collecting high-density spatial data from a distant scene of interest. They can be employed as remote monitoring or inspection sensors to measure vibrating objects because of their commonplace availability, simplicity, and potentially low cost. A drawback of vibration measurement with a camera is the massive amount of data that must be processed. To reduce the data collected from the camera, a camera using an electronic rolling shutter (ERS) is applied to measure the frequency of one-dimensional vibration whose frequency is much higher than the camera's frame rate. Every row in the image captured by the ERS camera records the vibration displacement at a different time. The displacements that form the vibration can be extracted by local analysis with sliding windows. This methodology is demonstrated on vibrating structures, a cantilever beam and an air compressor, to validate the proposed algorithm. Suggestions for applications of this methodology and challenges in real-world implementation are given at the end.
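
    The core idea can be illustrated with synthetic data: under a rolling shutter each image row is exposed at a slightly different time, so the row-wise displacement of a vibrating edge samples the vibration far faster than the frame rate, and the dominant frequency can be read from an FFT of that row-indexed signal. The camera parameters and the 210 Hz test vibration below are assumptions, and the authors' sliding-window analysis is not reproduced.

        import numpy as np

        frame_rate = 30.0                     # frames per second (assumed)
        rows = 1080                           # image rows per frame (assumed)
        row_dt = 1.0 / (frame_rate * rows)    # time between successive row exposures

        # Synthetic per-row displacement of a 210 Hz vibration over one frame
        t = np.arange(rows) * row_dt
        displacement = np.sin(2 * np.pi * 210.0 * t)

        # Effective sampling rate is frame_rate * rows, far above the frame rate
        spectrum = np.abs(np.fft.rfft(displacement - displacement.mean()))
        freqs = np.fft.rfftfreq(rows, d=row_dt)
        print("estimated vibration frequency: %.1f Hz" % freqs[spectrum.argmax()])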

  1. Solid-state lighting-a benevolent technology

    International Nuclear Information System (INIS)

    Schubert, E Fred; Kim, Jong Kyu; Luo Hong; Xi, J-Q

    2006-01-01

    Solid-state light sources are in the process of profoundly changing the way humans generate light for general lighting applications. Solid-state light sources possess two highly desirable features, which set them apart from most other light sources: (i) they have the potential to create light with essentially unit power efficiency and (ii) the properties of light, such as spectral composition and temporal modulation, can be controlled to a degree that is not possible with conventional light sources such as incandescent and fluorescent lamps. The implications are enormous and, as a consequence, many positive developments are to be expected including a reduction in global energy consumption, reduction of global-warming-gas and pollutant emissions and a multitude of new functionalities benefiting numerous applications. This review will assess the impact of solid-state lighting technology on energy consumption, the environment and on emerging application fields that make use of the controllability afforded by solid-state sources. The review will also discuss technical areas that fuel continued progress in solid-state lighting. Specifically, we will review the use of novel phosphor distributions in white light-emitting diodes (LEDs) and show the strong influence of phosphor distribution on efficiency. We will also review the use of reflectors in LEDs with emphasis on 'perfect' reflectors, i.e. reflectors with highly reflective omni-directional characteristics. Finally, we will discuss a new class of thin-film materials with an unprecedented low refractive index. Such low-n materials may strongly contribute to the continuous progress in solid-state lighting

  2. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1977-01-01

    A gamma camera system having control components operating in conjunction with a solid state detector is described. The detector is formed of a plurality of discrete components which are associated in geometrical or coordinate arrangement defining a detector matrix to derive coordinate signal outputs. These outputs are selectively filtered and summed to form coordinate channel signals and corresponding energy channel signals. A control feature of the invention regulates the noted summing and filtering performance to derive data acceptance signals which are addressed to further treating components. The latter components include coordinate and energy channel multiplexers as well as energy-responsive selective networks. A sequential control is provided for regulating the signal processing functions of the system to derive an overall imaging cycle

  3. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    Directory of Open Access Journals (Sweden)

    Sergio Orts-Escolano

    2014-04-01

    Full Text Available In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphics processor units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and analysis and monitoring of movement. These features allow the construction of a robust representation of the environment and the interpretation of the behavior of mobile agents in the scene. It is also necessary to integrate the vision module into a global system that operates in a complex environment by receiving images from multiple acquisition devices at video frequency. Offering relevant information to higher-level systems and monitoring and making decisions in real time, it must fulfil a set of requirements such as time constraints, high availability, robustness, high processing speed, and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.

  4. Two dimensional solid state NMR

    International Nuclear Information System (INIS)

    Kentgens, A.P.M.

    1987-01-01

    This thesis illustrates, by discussing some existing and newly developed 2D solid state experiments, that two-dimensional NMR of solids is a useful and important extension of NMR techniques. Chapter 1 gives an overview of spin interactions and averaging techniques important in solid state NMR. As 2D NMR is already an established technique in solutions, only the basics of two dimensional NMR are presented in chapter 2, with an emphasis on the aspects important for solid spectra. The following chapters discuss the theoretical background and applications of specific 2D solid state experiments. An application of 2D-J resolved NMR, analogous to J-resolved spectroscopy in solutions, to natural rubber is given in chapter 3. In chapter 4 the anisotropic chemical shift is mapped out against the heteronuclear dipolar interaction to obtain information about the orientation of the shielding tensor in poly-(oxymethylene). Chapter 5 concentrates on the study of super-slow molecular motions in polymers using a variant of the 2D exchange experiment developed by us. Finally chapter 6 discusses a new experiment, 2D nutation NMR, which makes it possible to study the quadrupole interaction of half-integer spins. 230 refs.; 48 figs.; 8 tabs

  5. FPS camera sync and reset chassis

    International Nuclear Information System (INIS)

    Yates, G.J.

    1980-06-01

    The sync and reset chassis provides all the circuitry required to synchronize an event to be studied, a remote free-running focus projection and scanning (FPS) data-acquisition TV camera, and a video signal recording system. The functions, design, and operation of this chassis are described in detail

  6. Continuous Learning of a Multilayered Network Topology in a Video Camera Network

    Directory of Open Access Journals (Sweden)

    Zou Xiaotao

    2009-01-01

    Full Text Available A multilayered camera network architecture with nodes as entry/exit points, cameras, and clusters of cameras at different layers is proposed. Unlike existing methods that used discrete events or appearance information to infer the network topology at a single level, this paper integrates face recognition that provides robustness to appearance changes and better models the time-varying traffic patterns in the network. The statistical dependence between the nodes, indicating the connectivity and traffic patterns of the camera network, is represented by a weighted directed graph and transition times that may have multimodal distributions. The traffic patterns and the network topology may be changing in the dynamic environment. We propose a Monte Carlo Expectation-Maximization algorithm-based continuous learning mechanism to capture the latent dynamically changing characteristics of the network topology. In the experiments, a nine-camera network with twenty-five nodes (at the lowest level) is analyzed both in simulation and in real-life experiments and compared with previous approaches.

  7. Continuous Learning of a Multilayered Network Topology in a Video Camera Network

    Directory of Open Access Journals (Sweden)

    Xiaotao Zou

    2009-01-01

    Full Text Available A multilayered camera network architecture with nodes as entry/exit points, cameras, and clusters of cameras at different layers is proposed. Unlike existing methods that used discrete events or appearance information to infer the network topology at a single level, this paper integrates face recognition that provides robustness to appearance changes and better models the time-varying traffic patterns in the network. The statistical dependence between the nodes, indicating the connectivity and traffic patterns of the camera network, is represented by a weighted directed graph and transition times that may have multimodal distributions. The traffic patterns and the network topology may be changing in the dynamic environment. We propose a Monte Carlo Expectation-Maximization algorithm-based continuous learning mechanism to capture the latent dynamically changing characteristics of the network topology. In the experiments, a nine-camera network with twenty-five nodes (at the lowest level) is analyzed both in simulation and in real-life experiments and compared with previous approaches.

  8. Progress in passive submillimeter-wave video imaging

    Science.gov (United States)

    Heinz, Erik; May, Torsten; Born, Detlef; Zieger, Gabriel; Peiselt, Katja; Zakosarenko, Vyacheslav; Krause, Torsten; Krüger, André; Schulz, Marco; Bauer, Frank; Meyer, Hans-Georg

    2014-06-01

    Since 2007 we have been developing passive submillimeter-wave video cameras for personal security screening. In contrast to established portal-based millimeter-wave scanning techniques, these are suitable for stand-off or stealth operation. The cameras operate in the 350 GHz band and use arrays of superconducting transition-edge sensors (TES), reflector optics, and opto-mechanical scanners. Whereas the basic principle of these devices remains unchanged, there has been a continuous development of the technical details, such as the detector array, the scanning scheme, and the readout, as well as system integration and performance. The latest prototype of this camera development features a linear array of 128 detectors and a linear scanner capable of a 25 Hz frame rate. Using different types of reflector optics, a field of view of 1 × 2 m² and a spatial resolution of 1-2 cm are provided at object distances of about 5-25 m. We present the concept of this camera and give details on system design and performance. Demonstration videos show its capability for hidden threat detection and illustrate possible application scenarios.

  9. Development of Automated Tracking System with Active Cameras for Figure Skating

    Science.gov (United States)

    Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi

    This paper presents a system based on the control of PTZ cameras for automated real-time tracking of individual figure skaters moving on an ice rink. In the video images of figure skating, irregular trajectories, various postures, rapid movements, and various costume colors are included. Therefore, it is difficult to determine some features useful for image tracking. On the other hand, an ice rink has a limited area and uniform high intensity, and skating is always performed on ice. In the proposed system, an ice rink region is first extracted from a video image by the region growing method, and then, a skater region is extracted using the rink shape information. In the camera control process, each camera is automatically panned and/or tilted so that the skater region is as close to the center of the image as possible; further, the camera is zoomed to maintain the skater image at an appropriate scale. The results of experiments performed for 10 training scenes show that the skater extraction rate is approximately 98%. Thus, it was concluded that tracking with camera control was successful for almost all the cases considered in the study.
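
    The camera-control step described above can be pictured as a simple proportional controller; the sketch below is only an illustration (not the authors' controller), with gains, target size, and the send_ptz transport all assumed: it turns the extracted skater bounding box into pan, tilt, and zoom commands that re-centre and re-scale the skater in the image.

        def ptz_command(box, frame_w, frame_h, target_fill=0.3,
                        pan_gain=0.05, tilt_gain=0.05, zoom_gain=0.5):
            """Proportional pan/tilt/zoom adjustments for a tracked region."""
            x, y, w, h = box                         # skater bounding box in pixels
            err_x = (x + w / 2) - frame_w / 2        # horizontal offset from image centre
            err_y = (y + h / 2) - frame_h / 2        # vertical offset from image centre
            fill = (w * h) / float(frame_w * frame_h)
            pan = pan_gain * err_x                   # pan right if the skater drifts right
            tilt = -tilt_gain * err_y                # image y grows downwards
            zoom = zoom_gain * (target_fill - fill)  # zoom in while the skater looks small
            return pan, tilt, zoom

        # pan, tilt, zoom = ptz_command(skater_box, 1920, 1080); send_ptz(pan, tilt, zoom)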

  10. Video Toroid Cavity Imager

    Energy Technology Data Exchange (ETDEWEB)

    Gerald, Rex E. II; Sanchez, Jairo; Rathke, Jerome W.

    2004-08-10

    A video toroid cavity imager for in situ measurement of electrochemical properties of an electrolytic material sample includes a cylindrical toroid cavity resonator containing the sample and employs NMR and video imaging for providing high-resolution spectral and visual information of molecular characteristics of the sample on a real-time basis. A large magnetic field is applied to the sample under controlled temperature and pressure conditions to simultaneously provide NMR spectroscopy and video imaging capabilities for investigating electrochemical transformations of materials or the evolution of long-range molecular aggregation during cooling of hydrocarbon melts. The video toroid cavity imager includes a miniature commercial video camera with an adjustable lens, a modified compression coin cell imager with a flat circular principal detector element, and a sample mounted on a transparent circular glass disk, and provides NMR information as well as a video image of a sample, such as a polymer film, with micrometer resolution.

  11. Macroscopic modelling of solid-state fermentation

    NARCIS (Netherlands)

    Hoogschagen, M.J.

    2007-01-01

    Solid-state fermentation is different from the more well known process of liquid fermentation because no free flowing water is present. The technique is primarily used in Asia. Well-known products are the foods tempe, soy sauce and saké. In industrial solid-state fermentation, the substrate usually

  12. On the development of new SPMN diurnal video systems for daylight fireball monitoring

    Science.gov (United States)

    Madiedo, J. M.; Trigo-Rodríguez, J. M.; Castro-Tirado, A. J.

    2008-09-01

    Daylight fireball video monitoring. High-sensitivity video devices are commonly used to study the activity of meteor streams during the night, providing useful data for the determination of, for instance, radiant, orbital, and photometric parameters ([1] to [7]). With this aim, during 2006 three automated video stations supported by Universidad de Huelva were set up in Andalusia within the framework of the SPanish Meteor Network (SPMN). These are equipped with 8-9 high-sensitivity wide-field video cameras that achieve a meteor limiting magnitude of about +3. These stations have increased the coverage of the low-scan all-sky CCD systems operated by the SPMN and, in addition, achieve a time accuracy of about 0.01 s for determining the appearance of meteor and fireball events. Despite these nocturnal monitoring efforts, we realised the need to set up stations for daylight fireball detection. This effort was also motivated by the two recent meteorite-dropping events of Villalbeto de la Peña [8,9] and Puerto Lápice [10]. Although the Villalbeto de la Peña event was casually videotaped and photographed, no direct pictures or videos were obtained for the Puerto Lápice event. Consequently, in order to record daylight fireball events continuously, we set up new automated systems based on CCD video cameras. However, the development of these video stations raises several issues with respect to nocturnal systems that must be properly solved to achieve optimal operation. The first of these video stations, also supported by the University of Huelva, was set up in Sevilla (Andalusia) during May 2007. Of course, fireball association is unequivocal only when two or more stations record the fireball and the geocentric radiant is consequently determined accurately. With this aim, a second diurnal video station is being set up in Andalusia in the facilities of Centro Internacional de Estudios y

  13. REAL TIME SPEED ESTIMATION FROM MONOCULAR VIDEO

    Directory of Open Access Journals (Sweden)

    M. S. Temiz

    2012-07-01

    Full Text Available This paper presents detailed studies performed to develop a real-time system for surveillance of traffic flow, using monocular video cameras to estimate vehicle speeds for safer travel. We assume that the studied road segment is planar and straight, that the camera is tilted downward from a bridge, and that the length of one line segment in the image is known. In order to estimate the speed of a moving vehicle from a video camera, the video images are first rectified to eliminate perspective effects, and then the region of interest (ROI) is determined for tracking the vehicles. Velocity vectors of a sufficient number of reference points are identified on the image of the vehicle in each video frame. For this purpose, a sufficient number of points on the vehicle is selected, and these points must be accurately tracked over at least two successive video frames. In the second step, the velocity vectors of those points are computed from the displacement vectors of the tracked points and the elapsed time. The computed velocity vectors are defined in the video image coordinate system, and the displacement vectors are measured in pixel units. The magnitudes of the computed vectors in the image space are then transformed to the object space to find their absolute values. The accuracy of the estimated speed is approximately ±1-2 km/h. To solve the real-time speed estimation problem, the authors have written a software system in the C++ programming language, which has been used for all of the computations and test applications.
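
    A compact sketch of the measurement chain described above is given below (the homography file, frame rate, and pixel coordinates are assumptions, and the published system was written in C++, so this is only an illustration): image points on the road plane are mapped to metric road coordinates through a homography estimated from known reference geometry, and speed follows from displacement over elapsed time.

        import cv2
        import numpy as np

        H = np.load("road_homography.npy")    # 3x3 image-to-road-plane homography (assumed)

        def speed_kmh(p_prev, p_curr, dt):
            """Speed from two tracked image points (pixels) observed dt seconds apart."""
            pts = np.float32([[p_prev], [p_curr]])      # shape (2, 1, 2)
            road = cv2.perspectiveTransform(pts, H)     # metres on the road plane
            dist_m = np.linalg.norm(road[1, 0] - road[0, 0])
            return 3.6 * dist_m / dt                    # m/s converted to km/h

        # e.g. 25 fps video, a point tracked over 5 frames:
        # v = speed_kmh((412, 280), (430, 333), dt=5 / 25.0)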

  14. Modeling 3D Unknown object by Range Finder and Video Camera ...

    African Journals Online (AJOL)

    real world); proprioceptive and exteroceptive sensors allowing the recreation of the 3D geometric database of an environment (virtual world). The virtual world is projected onto a video display terminal (VDT). Computer-generated and video ...

  15. Video digitizer (real time-frame grabber) with region of interest suitable for quantitative data analysis used on the infrared and H alpha cameras installed on the DIII-D experiment

    International Nuclear Information System (INIS)

    Ferguson, S.W.; Kevan, D.K.; Hill, D.N.; Allen, S.L.

    1987-01-01

    This paper describes a CAMAC based video digitizer with region of interest (ROI) capability that was designed for use with the infrared and H alpha cameras installed by Lawrence Livermore Laboratory on the DIII-D experiment at G.A. Technologies in San Diego, California. The video digitizer uses a custom built CAMAC video synchronizer module to clock data into a CAMAC transient recorder on a line-by-line basis starting at the beginning of a field. The number of fields that are recorded is limited only by the available transient recorder memory. In order to conserve memory, the CAMAC video synchronizer module provides for the alternative selection of a specific region of interest in each successive field to be recorded. Memory conservation can be optimized by specifying lines in the field, start time, stop time, and the number of data samples per line. This video frame grabber has proved versatile for capturing video in such diverse applications as recording video fields from a video tape recorder played in slow motion or recording video fields in real time during a DIII-D shot. In other cases, one or more lines of video are recorded per frame to give a cross sectional slice of the plasma. Since all the data in the digitizer memory is synchronized to video fields and lines, the data can be read directly into the control computer in the proper matrix format to facilitate rapid processing, display, and permanent storage

  16. Video redaction: a survey and comparison of enabling technologies

    Science.gov (United States)

    Sah, Shagan; Shringi, Ameya; Ptucha, Raymond; Burry, Aaron; Loce, Robert

    2017-09-01

    With the prevalence of video recordings from smart phones, dash cams, body cams, and conventional surveillance cameras, privacy protection has become a major concern, especially in light of legislation such as the Freedom of Information Act. Video redaction is used to obfuscate sensitive and personally identifiable information. Today's typical workflow involves simple detection, tracking, and manual intervention. Automated methods rely on accurate detection mechanisms being paired with robust tracking methods across the video sequence to ensure the redaction of all sensitive information while minimizing spurious obfuscations. Recent studies have explored the use of convolutional neural networks and recurrent neural networks for object detection and tracking. The present paper reviews the redaction problem and compares a few state-of-the-art detection, tracking, and obfuscation methods as they relate to redaction. The comparison introduces an evaluation metric that is specific to video redaction performance. The metric can be evaluated in a manner that allows balancing the penalty for false negatives and false positives according to the needs of a particular application, thereby assisting in the selection of component methods and their associated hyperparameters such that the redacted video has fewer frames that require manual review.

  17. PEOPLE REIDENTIFICATION IN A DISTRIBUTED CAMERA NETWORK

    Directory of Open Access Journals (Sweden)

    Icaro Oliveira de Oliveira

    2010-06-01

    Full Text Available This paper presents an approach to the object reidentification problem in a distributed camera network system. The reidentification or reacquisition problem consists essentially of matching images acquired from different cameras. The work applies to an environment monitored by cameras. This application is important for modern security systems, in which identifying the presence of targets in the environment expands the capacity for action of security agents in real time and provides important parameters, such as localization, for each target. We used the targets' interest points and color as features for reidentification. Satisfactory results were obtained from real experiments on public video datasets and on synthetic images with noise.

  18. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror

    Directory of Open Access Journals (Sweden)

    Michiaki Inoue

    2017-10-01

    Full Text Available This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time.

  19. HDR video synthesis for vision systems in dynamic scenes

    Science.gov (United States)

    Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried

    2016-09-01

    High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose an HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to solve camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
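
    The fusion step described above amounts to a weighted average of aligned, exposure-normalized frames; a minimal sketch is given below (the triangle weighting and exposure times are assumptions, and the alignment and moving-object masking discussed in the paper are omitted).

        import numpy as np

        def merge_radiance(frames, exposures):
            """frames: aligned float images scaled to [0, 1]; exposures: seconds."""
            acc = np.zeros_like(frames[0], dtype=np.float64)
            wsum = np.zeros_like(acc)
            for img, t in zip(frames, exposures):
                w = 1.0 - np.abs(2.0 * img - 1.0)   # favour well-exposed pixels
                acc += w * (img / t)                # per-frame radiance estimate
                wsum += w
            return acc / np.maximum(wsum, 1e-6)     # weighted-average radiance map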

  20. Solid State Photovoltaic Research Branch

    Energy Technology Data Exchange (ETDEWEB)

    1990-09-01

    This report summarizes the progress of the Solid State Photovoltaic Research Branch of the Solar Energy Research Institute (SERI) from October 1, 1988, through September 30, 1989. Six technical sections of the report cover these main areas of SERI's in-house research: Semiconductor Crystal Growth, Amorphous Silicon Research, Polycrystalline Thin Films, III-V High-Efficiency Photovoltaic Cells, Solid-State Theory, and Laser Raman and Luminescence Spectroscopy. Sections have been indexed separately for inclusion on the data base.

  1. Video-assisted functional assessment of index pollicisation in congenital anomalies.

    Science.gov (United States)

    Mas, Virginie; Ilharreborde, Brice; Mallet, Cindy; Mazda, Keyvan; Simon, Anne-Laure; Jehanno, Pascal

    2016-08-01

    Functional results of index pollicisation are usually assessed by the clinical score of Percival. This score is based on elementary hand movements and does not reflect the function of the neo thumb in daily life activities. The aim of this study was to develop a new video-assisted scoring system based on daily life activities to assess index pollicisation functional outcomes. Twenty-two consecutive children, operated between 1998 and 2012, were examined at a mean of 77 months after surgery. The mean age at surgery was 34 months. Post-operative results were evaluated by a new video-assisted 14-point scoring system consisting of seven basic tasks that are frequently used in daily activities. The series of tasks was performed both at the request of the examiner and in real-life conditions with the use of a hidden camera. Each video recording was examined by three different examiners. Each examiner rated the video recordings three times, with an interval of one week between examinations. Inter- and intra-observer agreements were calculated. Inter- and intra-observer agreements were excellent both on request (κ = 0.87 [0.84-0.97] for inter-observer agreement and 0.92 [0.82-0.98] for intra-observer agreement) and on hidden camera (κ = 0.83 [0.78-0.91] for inter-observer agreement and 0.89 [0.83-0.96] for intra-observer agreement). The results were significantly better on request than on hidden camera (p = 0.045). The correlation between the video-assisted scoring system and the Percival score was poor. The video-assisted scoring system is a reliable tool to assess index pollicisation functional outcomes. The scoring system on hidden camera is more representative of the neo thumb's use in complex daily life movements. Level IV.
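
    The κ values quoted above are pairwise agreement statistics. As a hedged illustration (the ratings below are invented, and a full analysis of three examiners rating three times would typically use a multi-rater statistic or an intraclass correlation), Cohen's kappa between two raters' scores can be computed as follows.

        from sklearn.metrics import cohen_kappa_score

        rater_a = [12, 14, 9, 11, 13, 7, 14, 10]   # hypothetical 14-point scores
        rater_b = [12, 13, 9, 11, 14, 7, 14, 10]

        # weights="quadratic" treats the 14-point scale as ordinal rather than nominal
        print(cohen_kappa_score(rater_a, rater_b, weights="quadratic"))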

  2. Video systems for alarm assessment

    International Nuclear Information System (INIS)

    Greenwoll, D.A.; Matter, J.C.; Ebel, P.E.

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs

  3. Video systems for alarm assessment

    Energy Technology Data Exchange (ETDEWEB)

    Greenwoll, D.A.; Matter, J.C. (Sandia National Labs., Albuquerque, NM (United States)); Ebel, P.E. (BE, Inc., Barnwell, SC (United States))

    1991-09-01

    The purpose of this NUREG is to present technical information that should be useful to NRC licensees in designing closed-circuit television systems for video alarm assessment. There is a section on each of the major components in a video system: camera, lens, lighting, transmission, synchronization, switcher, monitor, and recorder. Each section includes information on component selection, procurement, installation, test, and maintenance. Considerations for system integration of the components are contained in each section. System emphasis is focused on perimeter intrusion detection and assessment systems. A glossary of video terms is included. 13 figs., 9 tabs.

  4. The Use of Camera Traps in Wildlife

    Directory of Open Access Journals (Sweden)

    Yasin Uçarlı

    2013-11-01

    Full Text Available Camera traps are increasingly used in the abundance and density estimation of wildlife species. Camera traps are a very good alternative to direct observation, particularly in steep terrain, in densely vegetated areas, or for nocturnal species. The main reason for using camera traps is to eliminate the economic, personnel, and time costs of continuous observation at several different points at the same time. Camera traps are motion- and heat-sensitive and can take photos or video, depending on the model. Crossing points and feeding or mating areas of the focal species are prioritized as camera-trap locations. Population size can be estimated from the images combined with capture-recapture methods, and population density is then the population size divided by the effective sampling area. Mating and breeding season, habitat choice, group structure, and survival rates of the focal species can also be derived from the images. Camera traps are thus an economical way to obtain the necessary data on particularly elusive species for planning and conservation efforts.
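
    The abundance-and-density logic described above can be illustrated with the simplest capture-recapture estimator: a Lincoln-Petersen estimate from two camera-trap sessions of individually identifiable animals, divided by the effective sampling area. All counts and the area below are invented for the example.

        def lincoln_petersen(n1, n2, m2):
            """n1: individuals photographed in session 1, n2: in session 2,
            m2: individuals photographed in both sessions ("recaptures")."""
            return n1 * n2 / float(m2)

        population = lincoln_petersen(n1=18, n2=22, m2=9)   # about 44 individuals
        density = population / 120.0                        # per km^2, assumed sampling area
        print(round(population), "individuals,", round(density, 2), "per km^2")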

  5. Video Field Studies with your Cell Phone

    DEFF Research Database (Denmark)

    Buur, Jacob; Fraser, Euan

    2010-01-01

    Pod? Or with the GoPRO sports camera? Our approach has a strong focus on how to use video in design, rather than on the technical side. The goal is to engage design teams in meaningful discussions based on user empathy, rather than to produce beautiful videos. Basically it is a search for a minimalist way...

  6. Virtual displays for 360-degree video

    Science.gov (United States)

    Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.

    2012-03-01

    In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360-degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.

  7. Plant iodine-131 uptake in relation to root concentration as measured in minirhizotron by video camera:

    International Nuclear Information System (INIS)

    Moss, K.J.

    1990-09-01

    Glass viewing tubes (minirhizotrons) were placed in the soil beneath native perennial bunchgrass (Agropyron spicatum). The tubes provided access for observing and quantifying plant roots with a miniature video camera and soil moisture estimates by neutron hydroprobe. The radiotracer I-131 was delivered to the root zone at three depths with differing root concentrations. The plant was subsequently sampled and analyzed for I-131. Plant uptake was greater when I-131 was applied at soil depths with higher root concentrations. When I-131 was applied at soil depths with lower root concentrations, plant uptake was less. However, the relationship between root concentration and plant uptake was not a direct one. When I-131 was delivered to deeper soil depths with low root concentrations, the quantity of roots there appeared to be less effective in uptake than the same quantity of roots at shallow soil depths with high root concentration. 29 refs., 6 figs., 11 tabs

  8. Solid-state NMR basic principles and practice

    CERN Document Server

    Apperley, David C; Hodgkinson, Paul

    2014-01-01

    Nuclear Magnetic Resonance (NMR) has proved to be a uniquely powerful and versatile tool for analyzing and characterizing chemicals and materials of all kinds. This book focuses on the latest developments and applications for "solid-state" NMR, which has found new uses from archaeology to crystallography to biomaterials and pharmaceutical science research. The book will provide materials engineers, analytical chemists, and physicists, in and out of the lab, a survey of the techniques and the essential tools of solid-state NMR, together with a practical guide on applications. In this concise introduction to the growing field of solid-state nuclear magnetic resonance spectroscopy, the reader will find: * Basic NMR concepts for solids, including guidance on the spin-1/2 nuclei concept * Coverage of the quantum mechanics aspects of solid state NMR and an introduction to the concept of quadrupolar nuclei * An understanding of relaxation, exchange and quantitation in NMR * An analysis and interpretation of NMR data, with e...

  9. Development of plenoptic infrared camera using low dimensional material based photodetectors

    Science.gov (United States)

    Chen, Liangliang

    Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns and have been widely used for military and civilian applications. Conventional IR cameras based on bulk semiconductor materials suffer from low frame rate, low resolution, temperature dependence, and high cost, while carbon nanotube (CNT)-based, low-dimensional-material nanotechnology has made much progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems to resolve the sensitivity, speed, and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing, not only for a fundamental understanding of the processes induced by the CNT photoresponse, but also for the development of a novel infrared-sensitive material with unique optical and electrical features. In the proposed research, a sandwich-structured sensor was fabricated between two polymer layers. The polyimide substrate isolated the sensor from background noise, and the parylene top packaging blocked humid environmental factors. At the same time, the fabrication process was optimized by dielectrophoresis with real-time electrical detection and by multiple annealing steps to improve fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized with digital microscopy and a precise linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to make a nano-sensor IR camera feasible. To explore more of the infrared light field, we employ compressive sensing algorithms for light field sampling, 3-D cameras, and compressive video sensing. The redundancy of the whole light field, including angular images for the light field, binocular images for the 3-D camera, and temporal information of video streams, is extracted and

  10. From different angles : Exploring and applying the design potential of video

    NARCIS (Netherlands)

    Pasman, G.J.

    2012-01-01

    Recent developments in both hardware and software have brought video within the scope of design students as a new visual design tool. Being more and more equipped with cameras, for example in their smartphones, and video editing programs on their computers, they are increasingly using video to record

  11. Opinion rating of comparison photographs of television pictures from CCD cameras under irradiation

    International Nuclear Information System (INIS)

    Reading, V.M.; Dumbreck, A.A.

    1991-01-01

    As part of the development of a general method of testing the effects of gamma radiation on CCD television cameras, this is a report of an experimental study on the optimisation of still photographic representation of video pictures recorded before and during camera irradiation. (author)

  12. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    Full Text Available There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.
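
    A rough sketch of the per-camera tracking stage is given below using OpenCV's CamShift, which follows a colour histogram of the supervised object from frame to frame. The agent-based handover between smart cameras is not shown, and the video source and initial object window are assumptions (in the real system they would come from the handover message).

        import cv2
        import numpy as np

        cap = cv2.VideoCapture("camera0.avi")        # assumed video source
        ok, frame = cap.read()
        x, y, w, h = 300, 200, 80, 160               # assumed initial object window

        hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
        cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

        track_window = (x, y, w, h)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
            # CamShift adapts window size and orientation to the object each frame
            rot_rect, track_window = cv2.CamShift(backproj, track_window, criteria)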

  13. Automatic Human Facial Expression Recognition Based on Integrated Classifier From Monocular Video with Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    Full Text Available An automatic recognition framework for human facial expressions from monocular video with an uncalibrated camera is proposed. The expression characteristics are first acquired from a kind of deformable template, similar to a facial muscle distribution. After regularization, the time sequences of the feature changes in space-time over a complete expression are arranged line by line in a matrix. Next, the matrix dimensionality is reduced by the manifold-learning method of neighborhood-preserving embedding. Finally, the refined matrix containing the expression trait information is recognized by a classifier that integrates the hidden conditional random field (HCRF) and support vector machine (SVM). In an experiment using the Cohn–Kanade database, the proposed method showed a comparatively higher recognition rate than the individual HCRF or SVM methods in direct recognition from two-dimensional human face traits. Moreover, the proposed method was shown to be more robust than the typical Kotsia method because the former retains more of the structural characteristics of the data to be classified in space-time.

  14. Photoemission from solids: the transition from solid-state to atomic physics

    International Nuclear Information System (INIS)

    Shirley, D.A.

    1980-08-01

    As the photon energy is increased, photoemission from solids undergoes a slow transition from solid-state to atomic behavior. However, throughout the energy range hν = 10 to 1000 eV or higher both types of phenomena are present. Thus angle-resolved photoemission can only be understood quantitatively if each experimenter recognizes the presence of band-structure, photoelectron diffraction, and photoelectron asymmetry effects. The quest for this understanding will build some interesting bridges between solid-state and atomic physics and should also yield important new insights about the phenomena associated with photoemission

  15. Lithium-ion transport in inorganic solid state electrolyte

    International Nuclear Information System (INIS)

    Gao Jian; Li Hong; Zhao Yu-Sheng; Shi Si-Qi

    2016-01-01

    An overview of ion transport in lithium-ion inorganic solid state electrolytes is presented, aimed at exploring and designing better electrolyte materials. Ionic conductivity is one of the most important indices of the performance of inorganic solid state electrolytes. The general definition of solid state electrolytes is presented in terms of their role in a working cell (to convey ions while isolating electrons), and the history of solid electrolyte development is briefly summarized. Ways of using the available theoretical models and experimental methods to characterize lithium-ion transport in solid state electrolytes are systematically introduced. Then the various factors that affect ionic conductivity are itemized, including mainly structural disorder, composite materials and interface effects between a solid electrolyte and an electrode. Finally, strategies for future material systems, for synthesis and characterization methods, and for theory and calculation are proposed, aiming to help accelerate the design and development of new solid electrolytes. (topical review)

  16. Advances in Solid State Physics

    CERN Document Server

    Haug, Rolf

    2008-01-01

    The present volume 47 of the Advances in Solid State Physics contains the written version of a large number of the invited talks of the 2007 Spring Meeting of the Arbeitskreis Festkörperphysik which was held in Regensburg, Germany, from March 26 to 30, 2007 in conjunction with the 71st Annual Meeting of the Deutsche Physikalische Gesellschaft. It gives an overview of the present status of solid state physics where low-dimensional systems such as quantum dots and quantum wires are dominating. The importance of magnetic materials is reflected by the large number of contributions in the part dealing with ferromagnetic films and particles. One of the most exciting achievements of the last couple of years is the successful application of electrical contacts to and the investigation of single layers of graphene. This exciting physics is covered in Part IV of this book. Terahertz physics is another rapidly moving field which is presented here by five contributions. Achievements in solid state physics are only rarely...

  17. Multi-view video segmentation and tracking for video surveillance

    Science.gov (United States)

    Mohammadi, Gelareh; Dufaux, Frederic; Minh, Thien Ha; Ebrahimi, Touradj

    2009-05-01

    Tracking moving objects is a critical step for smart video surveillance systems. Despite the increase in complexity, multiple-camera systems exhibit the undoubted advantages of covering wide areas and handling occlusions by exploiting the different viewpoints. The technical problems in multiple-camera systems are several: installation, calibration, object matching, switching, data fusion, and occlusion handling. In this paper, we address the issue of tracking moving objects in an environment covered by multiple uncalibrated cameras with overlapping fields of view, typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources. Basically, the proposed technique consists of the following steps. We first perform a single-view tracking algorithm on each camera view, and then apply a consistent object labeling algorithm on all views. In the next step, we verify the objects in each view separately for inconsistencies. Corresponding objects are extracted through a homography transform from one view to the other and vice versa. Having found the corresponding objects of different views, we partition each object into homogeneous regions. In the last step, we apply the homography transform to find the region map of the first view in the second view and vice versa. For each region (in the main frame and mapped frame) a set of descriptors is extracted to find the best match between two views based on region descriptor similarity. This method is able to deal with multiple objects. Track management issues such as occlusion, appearance and disappearance of objects are resolved using information from all views. This method is capable of tracking rigid and deformable objects, and this versatility makes it suitable for different application scenarios.

  18. Atomic layer deposition of lithium phosphates as solid-state electrolytes for all-solid-state microbatteries

    International Nuclear Information System (INIS)

    Wang, Biqiong; Liu, Jian; Sun, Qian; Li, Ruying; Sun, Xueliang; Sham, Tsun-Kong

    2014-01-01

    Atomic layer deposition (ALD) has been shown as a powerful technique to build three-dimensional (3D) all-solid-state microbattery, because of its unique advantages in fabricating uniform and pinhole-free thin films in 3D structures. The development of solid-state electrolyte by ALD is a crucial step to achieve the fabrication of 3D all-solid-state microbattery by ALD. In this work, lithium phosphate solid-state electrolytes were grown by ALD at four different temperatures (250, 275, 300, and 325 °C) using two precursors (lithium tert-butoxide and trimethylphosphate). A linear dependence of film thickness on ALD cycle number was observed and uniform growth was achieved at all four temperatures. The growth rate was 0.57, 0.66, 0.69, and 0.72 Å/cycle at deposition temperatures of 250, 275, 300, and 325 °C, respectively. Furthermore, x-ray photoelectron spectroscopy confirmed the compositions and chemical structures of lithium phosphates deposited by ALD. Moreover, the lithium phosphate thin films deposited at 300 °C presented the highest ionic conductivity of 1.73 × 10⁻⁸ S cm⁻¹ at 323 K with ∼0.51 eV activation energy based on the electrochemical impedance spectroscopy. The ionic conductivity was calculated to be 3.3 × 10⁻⁸ S cm⁻¹ at 26 °C (299 K). (paper)

  19. Real-time vehicle matching for multi-camera tunnel surveillance

    Science.gov (United States)

    Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried

    2011-03-01

    Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras which observe dozens of vehicles each, computational efficiency is essential for real-time performance. In this paper, we propose a low-complexity, yet highly accurate method for vehicle matching using vehicle signatures composed of Radon-transform-like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm by the camera software itself and transmitted to the central server or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
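
    The projection-profile signature idea lends itself to a very small amount of code. The sketch below is a hedged illustration, not the authors' algorithm: it sums pixel intensities along rows and columns, resamples the two profiles to a fixed length, and compares signatures by Euclidean distance.

```python
import numpy as np

def vehicle_signature(gray_image, profile_length=64):
    """Build a compact signature from horizontal and vertical projection profiles."""
    img = np.asarray(gray_image, dtype=np.float64)
    row_profile = img.sum(axis=1)  # one value per image row (scan-line sums)
    col_profile = img.sum(axis=0)  # one value per image column

    def resample_and_normalize(profile):
        x_new = np.linspace(0, len(profile) - 1, profile_length)
        resampled = np.interp(x_new, np.arange(len(profile)), profile)
        return (resampled - resampled.mean()) / (resampled.std() + 1e-9)

    return np.concatenate([resample_and_normalize(row_profile),
                           resample_and_normalize(col_profile)])

def signature_distance(sig_a, sig_b):
    """Euclidean distance between two signatures; smaller means more likely the same vehicle."""
    return float(np.linalg.norm(sig_a - sig_b))

# Usage with two random stand-in "vehicle images" (real input would be cropped camera frames).
rng = np.random.default_rng(0)
img_a = rng.integers(0, 255, size=(120, 200))
img_b = rng.integers(0, 255, size=(100, 180))
print(signature_distance(vehicle_signature(img_a), vehicle_signature(img_b)))
```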

  20. Solid-state membrane module

    Science.gov (United States)

    Gordon, John Howard [Salt Lake City, UT; Taylor, Dale M [Murray, UT

    2011-06-07

    Solid-state membrane modules comprising at least one membrane unit, where the membrane unit has a dense mixed conducting oxide layer, and at least one conduit or manifold wherein the conduit or manifold comprises a dense layer and at least one of a porous layer and a slotted layer contiguous with the dense layer. The solid-state membrane modules may be used to carry out a variety of processes including the separating of any ionizable component from a feedstream wherein such ionizable component is capable of being transported through a dense mixed conducting oxide layer of the membrane units making up the membrane modules. For ease of construction, the membrane units may be planar.

  1. The Twist Tensor Nuclear Norm for Video Completion.

    Science.gov (United States)

    Hu, Wenrui; Tao, Dacheng; Zhang, Wensheng; Xie, Yuan; Yang, Yehui

    2017-12-01

    In this paper, we propose a new low-rank tensor model based on the circulant algebra, namely, the twist tensor nuclear norm (t-TNN). The twist tensor denotes a three-way tensor representation used to laterally store 2-D data slices in order. On one hand, t-TNN convexly relaxes the tensor multirank of the twist tensor in the Fourier domain, which allows an efficient computation using the fast Fourier transform. On the other hand, t-TNN is equal to the nuclear norm of the block circulant matricization of the twist tensor in the original domain, which extends the traditional matrix nuclear norm in a block circulant way. We test the t-TNN model on a video completion application that aims to fill in missing values, and the experimental results validate its effectiveness, especially when dealing with video recorded by a nonstationary panning camera. The block circulant matricization of the twist tensor can be transformed into a circulant block representation with nuclear norm invariance. This representation, after transformation, exploits the horizontal translation relationship between the frames in a video, and endows the t-TNN model with a more powerful ability to reconstruct panning videos than the existing state-of-the-art low-rank models.
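
    The Fourier-domain computation mentioned above can be sketched in a few lines. The following snippet computes a tensor nuclear norm by taking the FFT along the third mode and summing the matrix nuclear norms of the resulting frontal slices; the normalization and the orientation of the twist tensor are assumptions for illustration, and the paper's exact definition may differ.

```python
import numpy as np

def twist_tensor_nuclear_norm(video, normalize=True):
    """Tensor nuclear norm of a 3-way array computed via the Fourier domain."""
    tensor = np.asarray(video, dtype=np.float64)
    n3 = tensor.shape[2]
    tensor_f = np.fft.fft(tensor, axis=2)                      # FFT along the third mode
    norm = sum(np.linalg.norm(tensor_f[:, :, k], ord="nuc")    # nuclear norm per frontal slice
               for k in range(n3))
    return norm / n3 if normalize else norm

# Usage on a small synthetic "video" tensor (height x width x frames).
rng = np.random.default_rng(0)
toy_video = rng.standard_normal((32, 24, 10))
print(twist_tensor_nuclear_norm(toy_video))
```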

  2. A highly sensitive underwater video system for use in turbid aquaculture ponds.

    Science.gov (United States)

    Hung, Chin-Chang; Tsao, Shih-Chieh; Huang, Kuo-Hao; Jang, Jia-Pu; Chang, Hsu-Kuang; Dobbs, Fred C

    2016-08-24

    The turbid, low-light waters characteristic of aquaculture ponds have made it difficult or impossible for previous video cameras to provide clear imagery of the ponds' benthic habitat. We developed a highly sensitive, underwater video system (UVS) for this particular application and tested it in shrimp ponds having turbidities typical of those in southern Taiwan. The system's high-quality video stream and images, together with its camera capacity (up to nine cameras), permit in situ observations of shrimp feeding behavior, shrimp size and internal anatomy, and organic matter residues on pond sediments. The UVS can operate continuously and be focused remotely, a convenience to shrimp farmers. The observations possible with the UVS provide aquaculturists with information critical to provision of feed with minimal waste; determining whether the accumulation of organic-matter residues dictates exchange of pond water; and management decisions concerning shrimp health.

  3. Architecture and Protocol of a Semantic System Designed for Video Tagging with Sensor Data in Mobile Devices

    Science.gov (United States)

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make easy and efficient video searches. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is not any other application developed as the one presented in this paper. PMID:22438753
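
    The tagging step itself can be illustrated with a tiny sketch: pair each frame index with a timestamp and the current sensor readings, and serialize the result for the server. The field names and JSON format below are illustrative assumptions, not the schema used by the authors.

```python
import json
import time

def tag_frame(frame_index, sensor_readings):
    """Attach sensor metadata to a single video frame and serialize it for upload."""
    return json.dumps({
        "frame": frame_index,
        "timestamp": time.time(),
        "sensors": sensor_readings,
    })

# Usage with made-up GPS and temperature readings.
tag = tag_frame(42, {"lat": 28.13, "lon": -15.43, "temperature_c": 24.5})
print(tag)
```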

  4. Architecture and Protocol of a Semantic System Designed for Video Tagging with Sensor Data in Mobile Devices

    Directory of Open Access Journals (Sweden)

    Alvaro Suarez

    2012-02-01

    Full Text Available Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make easy and efficient video searches. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is not any other application developed as the one presented in this paper.

  5. Architecture and protocol of a semantic system designed for video tagging with sensor data in mobile devices.

    Science.gov (United States)

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the video frames of the video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make easy and efficient video searches. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is not any other application developed as the one presented in this paper.

  6. MAVIS: Mobile Acquisition and VISualization - a professional tool for video recording on a mobile platform

    OpenAIRE

    Watten, Phil; Gilardi, Marco; Holroyd, Patrick; Newbury, Paul

    2015-01-01

    Professional video recording is a complex process which often requires expensive cameras and large amounts of ancillary equipment. With the advancement of mobile technologies, cameras on mobile devices have improved to the point where the quality of their output is sometimes comparable to that obtained from a professional video camera and are often used in professional productions. However, tools that allow professional users to access the information they need to control the technical ...

  7. OPTIMAL CAMERA NETWORK DESIGN FOR 3D MODELING OF CULTURAL HERITAGE

    Directory of Open Access Journals (Sweden)

    B. S. Alsadik

    2012-07-01

    Full Text Available Digital cultural heritage documentation in 3D is the subject of research and practical applications nowadays. Image-based modeling is a technique to create 3D models, which starts with the basic task of designing the camera network. This task is – however – quite crucial in practical applications because it needs thorough planning and a certain level of expertise and experience. Bearing in mind today's computational (mobile) power, we think that the optimal camera network should be designed in the field, thereby making the preprocessing and planning dispensable. The optimal camera network is designed when certain accuracy demands are fulfilled with a reasonable effort, namely keeping the number of camera shots at a minimum. In this study, we report on the development of an automatic method to design the optimum camera network for a given object of interest, focusing currently on buildings and statues. Starting from a rough point cloud derived from a video stream of object images, the initial configuration of the camera network, assuming a high-resolution state-of-the-art non-metric camera, is designed. To improve the image coverage and accuracy, we use a mathematical penalty method of optimization with constraints. From the experimental test, we found that, after optimization, the maximum coverage is attained alongside a significant improvement in positional accuracy. Currently, we are working on a guiding system to ensure that the operator actually takes the desired images. Further steps will include a reliable and detailed modeling of the object applying sophisticated dense matching techniques.
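
    The penalty-method idea can be illustrated with a deliberately simplified toy problem: choosing viewpoint angles on a circle so that coverage is even while a minimum angular separation is encouraged through a penalty term. The formulation below is an assumption for illustration only and is far simpler than the actual 3D camera-network design problem.

```python
import numpy as np
from scipy.optimize import minimize

def objective_with_penalty(angles, min_separation_rad=0.35, penalty_weight=100.0):
    """Toy penalty-method objective for placing cameras on a circle around an object."""
    angles = np.sort(np.mod(angles, 2 * np.pi))
    gaps = np.diff(np.concatenate([angles, [angles[0] + 2 * np.pi]]))
    coverage_cost = np.var(gaps)                              # prefer evenly spread viewpoints
    violation = np.clip(min_separation_rad - gaps, 0.0, None).sum()
    return coverage_cost + penalty_weight * violation ** 2    # constraint added as a penalty

rng = np.random.default_rng(1)
initial_angles = rng.uniform(0, 2 * np.pi, size=8)            # 8 candidate camera positions
result = minimize(objective_with_penalty, initial_angles, method="Nelder-Mead")
print("Optimized viewpoint angles (rad):", np.sort(np.mod(result.x, 2 * np.pi)))
```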

  8. The Use of Smart Glasses for Surgical Video Streaming.

    Science.gov (United States)

    Hiranaka, Takafumi; Nakanishi, Yuta; Fujishiro, Takaaki; Hida, Yuichi; Tsubosaka, Masanori; Shibata, Yosaku; Okimura, Kenjiro; Uemoto, Harunobu

    2017-04-01

    Observation of surgical procedures performed by experts is extremely important for acquisition and improvement of surgical skills. Smart glasses are small computers, which comprise a head-mounted monitor and video camera, and can be connected to the internet. They can be used for remote observation of surgeries by video streaming. Although Google Glass is the most commonly used smart glasses for medical purposes, it is still unavailable commercially and has some limitations. This article reports the use of a different type of smart glasses, InfoLinker, for surgical video streaming. InfoLinker has been commercially available in Japan for industrial purposes for more than 2 years. It is connected to a video server via wireless internet directly, and streaming video can be seen anywhere an internet connection is available. We have attempted live video streaming of knee arthroplasty operations that were viewed at several different locations, including foreign countries, on a common web browser. Although the quality of video images depended on the resolution and dynamic range of the video camera, speed of internet connection, and the wearer's attention to minimize image shaking, video streaming could be easily performed throughout the procedure. The wearer could confirm the quality of the video as the video was being shot by the head-mounted display. The time and cost for observation of surgical procedures can be reduced by InfoLinker, and further improvement of hardware as well as the wearer's video shooting technique is expected. We believe that this can be used in other medical settings.

  9. Video Golf

    Science.gov (United States)

    1995-01-01

    George Nauck of ENCORE!!! invented and markets the Advanced Range Performance (ARPM) Video Golf System for measuring the result of a golf swing. After Nauck requested their assistance, Marshall Space Flight Center scientists suggested video and image processing/computing technology, and provided leads on commercial companies that dealt with the pertinent technologies. Nauck contracted with Applied Research Inc. to develop a prototype. The system employs an elevated camera, which sits behind the tee and follows the flight of the ball down range, catching the point of impact and subsequent roll. Instant replay of the video on a PC monitor at the tee allows measurement of the carry and roll. The unit measures distance and deviation from the target line, as well as distance from the target when one is selected. The information serves as an immediate basis for making adjustments or as a record of skill level progress for golfers.

  10. Oriented solid-state NMR spectroscopy

    DEFF Research Database (Denmark)

    Bertelsen, Kresten

    This thesis is concerned with driving forward oriented solid-state NMR spectroscopy as a viable technique for studying peptides in membrane bilayers. I will show that structural heterogeneity is an intrinsic part of the peptide/lipid system and that NMR can be used to characterize static... and dynamic structural features of the peptides and their local surroundings. In fact, one needs to take into account the dynamical features of the system in order to correctly predict the structure from oriented solid-state NMR spectra. ...

  11. Limits on surveillance: frictions, fragilities and failures in the operation of camera surveillance.

    NARCIS (Netherlands)

    Dubbeld, L.

    2004-01-01

    Public video surveillance tends to be discussed in either utopian or dystopian terms: proponents maintain that camera surveillance is the perfect tool in the fight against crime, while critics argue that the use of security cameras is central to the development of a panoptic, Orwellian surveillance

  12. Solid State Lighting Reliability Components to Systems

    CERN Document Server

    Fan, XJ

    2013-01-01

    Solid State Lighting Reliability: Components to Systems begins with an explanation of the major benefits of solid state lighting (SSL) when compared to conventional lighting systems including but not limited to long useful lifetimes of 50,000 (or more) hours and high efficacy. When designing effective devices that take advantage of SSL capabilities the reliability of internal components (optics, drive electronics, controls, thermal design) take on critical importance. As such a detailed discussion of reliability from performance at the device level to sub components is included as well as the integrated systems of SSL modules, lamps and luminaires including various failure modes, reliability testing and reliability performance. This book also: Covers the essential reliability theories and practices for current and future development of Solid State Lighting components and systems Provides a systematic overview for not only the state-of-the-art, but also future roadmap and perspectives of Solid State Lighting r...

  13. Summarization of Surveillance Video Sequences Using Face Quality Assessment

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.; Rahmati, Mohammad

    2011-01-01

    Constantly operating surveillance cameras in public places, such as airports and banks, produce huge amounts of video data. Faces in such videos can be extracted in real time. However, most of these detected faces are either redundant or useless. Redundant information adds computational costs to facial...

  14. Solid state multinuclear NMR. A versatile tool for studying the reactivity of solid systems

    Energy Technology Data Exchange (ETDEWEB)

    MacKenzie, Kenneth J.D. [MacDiarmid Institute for Advanced Materials and Nanotechnology, Victoria University of Wellington, P.O. Box 600, Wellington (New Zealand)

    2004-08-31

    Traditionally, X-ray powder diffraction has been a favoured method for studying chemical reactions in the solid state, but the increasing importance of energy-efficient synthesis methods for solids (e.g. sol-gel synthesis, mechanochemical synthesis) has led to the need for an analytical method not dependent on long-range structural periodicity. Multinuclear solid state nuclear magnetic resonance (NMR) represents a technique which is equally applicable to amorphous or crystalline solids, and is now used in increasing numbers of solid state studies. This paper briefly outlines the principles and practical details of this powerful technique and gives examples of its use in solid-state chemistry, particularly in very recent studies of mechanochemical synthesis of advanced sialon ceramics. The temperature at which these technically important silicon aluminium oxynitride compounds are formed can be significantly lowered by high-energy grinding of their components to produce X-ray amorphous precursors. Solid-state NMR has been used to provide detailed information which could not have been obtained by any other means about the chemical environment of the Si and Al atoms in these amorphous precursors, and the various atomic movements undergone as they crystallise to the final product.

  15. Logistical and safeguards aspects related to the introduction and implementation of video surveillance equipment by EURATOM

    International Nuclear Information System (INIS)

    Chare, P.J.; Wagner, H.G.; Otto, P.; Schenkel, R.

    1987-01-01

    With the growing availability of reliable video equipment for surveillance applications in safeguards and the disappearance of the Super 8 mm cameras, there will be a period of transition from film camera to video surveillance, a process which started two years ago. This gradual transition, as the film cameras come to the end of their useful lives, will afford the safeguards authorities the opportunity to examine in detail the logistical and procedural changes necessary. This paper examines, on the basis of existing video equipment in use or under development, the differences and problems to be encountered in the approach to surveillance. These problems include tamper resistance of signal transmission, on site and headquarters review, preventative maintenance, reliability, repair, and overall performance evaluation. In addition the advantages and flexibility offered by the introduction of video, such as on site review and increased memory capacity, are also discussed. The paper also considers the overall costs and manpower required by EURATOM for the implementation of the video systems as compared to the existing twin Minolta film camera system to ensure the most efficient use of manpower and equipment

  16. Solid state ionics: a Japan perspective

    Science.gov (United States)

    Yamamoto, Osamu

    2017-12-01

    The 70-year history of scientific endeavor of solid state ionics research in Japan is reviewed to show the contribution of Japanese scientists to the basic science of solid state ionics and its applications. The term 'solid state ionics' was defined by Takehiko Takahashi of Nagoya University, Japan: it refers to ions in solids, especially solids that exhibit high ionic conductivity at a fairly low temperature below their melting points. During the last few decades of exploration, many ion conducting solids have been discovered in Japan such as the copper-ion conductor Rb4Cu16I7Cl13, proton conductor SrCe1-xYxO3, oxide-ion conductor La0.9Sr0.9Ga0.9Mg0.1O3, and lithium-ion conductor Li10GeP2S12. Rb4Cu16I7Cl13 has a conductivity of 0.33 S cm-1 at 25 °C, which is the highest of all room temperature ion conductive solid electrolytes reported to date, and Li10GeP2S12 has a conductivity of 0.012 S cm-1 at 25 °C, which is the highest among lithium-ion conductors reported to date. Research on high-temperature proton conducting ceramics began in Japan. The history, the discovery of novel ionic conductors and the story behind them are summarized along with basic science and technology.

  17. Object Detection and Tracking-Based Camera Calibration for Normalized Human Height Estimation

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-01-01

    Full Text Available This paper presents a normalized human height estimation algorithm using an uncalibrated camera. To estimate the normalized human height, the proposed algorithm detects a moving object and performs tracking-based automatic camera calibration. The proposed method consists of three steps: (i) moving human detection and tracking, (ii) automatic camera calibration, and (iii) human height estimation and error correction. The proposed method automatically calibrates the camera by detecting moving humans and estimates the human height using error correction. The proposed method can be applied to object-based video surveillance systems and digital forensics.

  18. Low Cost Wireless Network Camera Sensors for Traffic Monitoring

    Science.gov (United States)

    2012-07-01

    Many freeways and arterials in major cities in Texas are presently equipped with video detection cameras to collect data and help in traffic/incident management. In this study, carefully controlled experiments determined the throughput and output...

  19. Camera Based Navigation System with Augmented Reality

    Directory of Open Access Journals (Sweden)

    M. Marcu

    2012-06-01

    Full Text Available Nowadays smart mobile devices have enough processing power, memory, storage, and always-connected wireless communication bandwidth to make them suitable for any type of application. Augmented reality (AR) proposes a new type of application that tries to enhance the real world by superimposing or combining virtual objects or computer-generated information with it. In this paper we present a camera-based navigation system with augmented reality integration. The proposed system aims at the following: the user points the camera of the smartphone towards a point of interest, like a building or any other place, and the application searches for relevant information about that specific place and superimposes the data over the video feed on the display. When the user moves the camera away, changing its orientation, the data changes as well, in real time, with the proper information about the place that is now in the camera view.

  20. Do enhanced states exist? Boosting cognitive capacities through an action video-game.

    Science.gov (United States)

    Kozhevnikov, Maria; Li, Yahui; Wong, Sabrina; Obana, Takashi; Amihai, Ido

    2018-04-01

    This research reports the existence of enhanced cognitive states in which dramatic temporary improvements in temporal and spatial aspects of attention were exhibited by participants who played (but not by those who merely observed) action video-games meeting certain criteria. Specifically, Experiments 1 and 2 demonstrate that the attentional improvements were exhibited only by participants whose skills matched the difficulty level of the video game. Experiment 2 showed that arousal (as reflected by the reduction in parasympathetic activity and increase in sympathetic activity) is a critical physiological condition for enhanced cognitive states and corresponding attentional enhancements. Experiment 3 showed that the cognitive enhancements were transient, and were no longer observed after 30 min of rest following video-gaming. Moreover, the results suggest that the enhancements were specific to tasks requiring visual-spatial focused attention, but not distribution of spatial attention as has been reported to improve significantly and durably as a result of long-term video-game playing. Overall, the results suggest that the observed enhancements cannot be simply due to the activity of video-gaming per se, but might rather represent an enhanced cognitive state resulting from specific conditions (heightened arousal in combination with active engagement and optimal challenge), resonant with what has been described in previous phenomenological literature as "flow" (Csikszentmihalyi, 1975) or "peak experiences" (Maslov, 1962). The findings provide empirical evidence for the existence of the enhanced cognitive states and suggest possibilities for consciously accessing latent resources of our brain to temporarily boost our cognitive capacities upon demand. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Intelligent Model for Video Surveillance Security System

    Directory of Open Access Journals (Sweden)

    J. Vidhya

    2013-12-01

    Full Text Available A video surveillance system senses and tracks threatening events in the real-time environment. It protects against security threats with the help of visual devices that gather video information, such as CCTV and IP (Internet Protocol) cameras. Video surveillance has become key to addressing problems in public security. Surveillance systems are mostly deployed on IP-based networks, so all the security threats that exist in IP-based applications may also threaten video surveillance applications. As a result, cybercrime, illegal video access, mishandling of videos, and so on may increase. Hence, in this paper an intelligent model is proposed to provide security for the video surveillance system, ensuring safety and providing secured access to video.

  2. Introduction to solid state electronics

    CERN Document Server

    Wang, FFY

    1989-01-01

    This textbook is specifically tailored for undergraduate engineering courses offered in the junior year, providing a thorough understanding of solid state electronics without relying on the prerequisites of quantum mechanics. In contrast to most solid state electronics texts currently available, with their generalized treatments of the same topics, this is the first text to focus exclusively and in meaningful detail on introductory material. The original text has already been in use for 10 years. In this new edition, additional problems have been added at the end of most chapters. These proble

  3. Solid state physics an introduction

    CERN Document Server

    Hofmann, Philip

    2015-01-01

    A must-have textbook for any undergraduate studying solid state physics. This successful brief course in solid state physics is now in its second edition. The clear and concise introduction not only describes all the basic phenomena and concepts, but also such advanced issues as magnetism and superconductivity. Each section starts with a gentle introduction, covering basic principles, progressing to a more advanced level in order to present a comprehensive overview of the subject. The book provides qualitative discussions that help undergraduates understand concepts even if they can't foll

  4. Ultrasonic methods in solid state physics

    CERN Document Server

    Truell, John; Elbaum, Charles

    1969-01-01

    Ultrasonic Methods in Solid State Physics is devoted to studies of energy loss and velocity of ultrasonic waves which have a bearing on present-day problems in solid-state physics. The discussion is particularly concerned with the type of investigation that can be carried out in the megacycle range of frequencies from a few megacycles to kilomegacycles; it deals almost entirely with short-duration pulse methods rather than with standing-wave methods. The book opens with a chapter on a classical treatment of wave propagation in solids. This is followed by separate chapters on methods and techni

  5. Head-coupled remote stereoscopic camera system for telepresence applications

    Science.gov (United States)

    Bolas, Mark T.; Fisher, Scott S.

    1990-09-01

    The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.

  6. Structure-From-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    Science.gov (United States)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often no marked reference points are available in environments large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, taken also with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images of both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos of two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between

  7. Acquisition, compression and rendering of depth and texture for multi-view video

    NARCIS (Netherlands)

    Morvan, Y.

    2009-01-01

    Three-dimensional (3D) video and imaging technologies are an emerging trend in the development of digital video systems, as we presently witness the appearance of 3D displays, coding systems, and 3D camera setups. Three-dimensional multi-view video is typically obtained from a set of synchronized

  8. High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project

    Science.gov (United States)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2015-04-01

    Standard cameras capture only a fraction of the information that is visible to the human visual system. This is especially true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live video color stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears as one of the most appropriate and cheapest solutions to enhance the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple captures, HDR processing, data display and transfer of an HDR color video for a full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) Multiple Exposure Control (MEC) dedicated to the smart image capture with alternating three exposure times that are dynamically evaluated from frame to frame, (2) Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams, corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
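
    The multi-exposure merging step can be sketched in software. The snippet below assumes a linear sensor response and a simple triangle weighting; the camera described above performs a hardware variant of Debevec-style merging, which this illustration does not reproduce.

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Merge multiple LDR exposures of the same scene into one HDR radiance estimate."""
    frames = [np.asarray(f, dtype=np.float64) / 255.0 for f in frames]
    numerator = np.zeros_like(frames[0])
    denominator = np.zeros_like(frames[0])
    for frame, t in zip(frames, exposure_times):
        weight = 1.0 - np.abs(frame - 0.5) * 2.0      # triangle weight, peaked at mid-gray
        numerator += weight * frame / t               # divide by exposure time -> radiance
        denominator += weight
    return numerator / np.maximum(denominator, 1e-6)

# Usage with three synthetic exposures (short, medium, long) of a gradient scene.
scene = np.tile(np.linspace(0.0, 1.0, 256), (64, 1))
exposures = [np.clip(scene * t * 255.0, 0, 255) for t in (0.25, 1.0, 4.0)]
hdr = merge_exposures(exposures, exposure_times=(0.25, 1.0, 4.0))
print("Radiance map range:", hdr.min(), hdr.max())
```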

  9. Using Photogrammetry to Estimate Tank Waste Volumes from Video

    Energy Technology Data Exchange (ETDEWEB)

    Field, Jim G. [Washington River Protection Solutions, LLC, Richland, WA (United States)

    2013-03-27

    Washington River Protection Solutions (WRPS) contracted with HiLine Engineering & Fabrication, Inc. to assess the accuracy of photogrammetry tools as compared to video Camera/CAD Modeling System (CCMS) estimates. This test report documents the results of using photogrammetry to estimate the volume of waste in tank 241-C-I04 from post-retrieval videos and results using photogrammetry to estimate the volume of waste piles in the CCMS test video.

  10. Using Photogrammetry to Estimate Tank Waste Volumes from Video

    International Nuclear Information System (INIS)

    Field, Jim G.

    2013-01-01

    Washington River Protection Solutions (WRPS) contracted with HiLine Engineering and Fabrication, Inc. to assess the accuracy of photogrammetry tools as compared to video Camera/CAD Modeling System (CCMS) estimates. This test report documents the results of using photogrammetry to estimate the volume of waste in tank 241-C-I04 from post-retrieval videos and results using photogrammetry to estimate the volume of waste piles in the CCMS test video

  11. Silicon solid state devices and radiation detection

    CERN Document Server

    Leroy, Claude

    2012-01-01

    This book addresses the fundamental principles of interaction between radiation and matter, the principles of working and the operation of particle detectors based on silicon solid state devices. It covers a broad scope with respect to the fields of application of radiation detectors based on silicon solid state devices, from low to high energy physics experiments, including in outer space and in the medical environment. This book covers state-of-the-art detection techniques in the use of radiation detectors based on silicon solid state devices and their readout electronics, including the latest developments on pixelated silicon radiation detectors and their applications.

  12. First results on video meteors from Crete, Greece

    Science.gov (United States)

    Maravelias, G.

    2012-01-01

    This work presents the first systematic video meteor observations from a forthcoming permanent station in Crete, Greece, operating as the first official node within the International Meteor Organization's Video Network. It consists of a Watec 902 H2 Ultimate camera equipped with a Panasonic WV-LA1208 (focal length 12 mm, f/0.8) lens running MetRec. The system operated for 42 nights during 2011 (August 19-December 30, 2011), recording 1905 meteors. It is significantly more performant than a previous system used by the author during the Perseids 2010 (DMK camera 21AF04.AS by The Imaging Source, CCTV lens of focal length 2.8 mm, UFO Capture v2.22), which operated for 17 nights (August 4-22, 2010), recording 32 meteors. Differences - according to the author's experience - between the two software packages (MetRec, UFO Capture) are discussed, along with a small guide to video meteor hardware.

  13. Non-Cooperative Facial Recognition Video Dataset Collection Plan

    Energy Technology Data Exchange (ETDEWEB)

    Kimura, Marcia L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Erikson, Rebecca L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lombardo, Nicholas J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-08-31

    The Pacific Northwest National Laboratory (PNNL) will produce a non-cooperative (i.e., not posing for the camera) facial recognition video data set for research purposes to evaluate and enhance facial recognition systems technology. The aggregate data set consists of 1) videos capturing PNNL role players and public volunteers in three key operational settings, 2) photographs of the role players for enrolling in an evaluation database, and 3) ground truth data that documents when the role player is within various camera fields of view. PNNL will deliver the aggregate data set to DHS, which may then choose to make it available to other government agencies interested in evaluating and enhancing facial recognition systems. The three operational settings that will be the focus of the video collection effort include: 1) unidirectional crowd flow, 2) bi-directional crowd flow, and 3) linear and/or serpentine queues.

  14. Automatic video surveillance of outdoor scenes using track before detect

    DEFF Research Database (Denmark)

    Hansen, Morten; Sørensen, Helge Bjarup Dissing; Birkemark, Christian M.

    2005-01-01

    This paper concerns automatic video surveillance of outdoor scenes using a single camera. The first step in automatic interpretation of the video stream is activity detection based on background subtraction. Usually, this process will generate a large number of false alarms in outdoor scenes due...

  15. Human recognition in a video network

    Science.gov (United States)

    Bhanu, Bir

    2009-10-01

    Video networks are an emerging interdisciplinary field with significant and exciting scientific and technological challenges. They hold great promise for solving many real-world problems and enabling a broad range of applications, including smart homes, video surveillance, environment and traffic monitoring, elderly care, intelligent environments, and entertainment in public and private spaces. This paper provides an overview of the design of a wireless video network as an experimental environment, camera selection, hand-off and control, and anomaly detection. It addresses challenging questions for individual identification using gait and face at a distance and presents new techniques and their comparison for robust identification.

  16. DistancePPG: Robust non-contact vital signs monitoring using a camera

    Science.gov (United States)

    Kumar, Mayank; Veeraraghavan, Ashok; Sabharwal, Ashutosh

    2015-01-01

    Vital signs such as pulse rate and breathing rate are currently measured using contact probes. But non-contact methods for measuring vital signs are desirable both in hospital settings (e.g. in the NICU) and for ubiquitous in-situ health tracking (e.g. on mobile phones and computers with webcams). Recently, camera-based non-contact vital sign monitoring has been shown to be feasible. However, camera-based vital sign monitoring is challenging for people with darker skin tone, under low lighting conditions, and/or during movement of an individual in front of the camera. In this paper, we propose distancePPG, a new camera-based vital sign estimation algorithm which addresses these challenges. DistancePPG proposes a new method of combining skin-color change signals from different tracked regions of the face using a weighted average, where the weights depend on the blood perfusion and incident light intensity in the region, to improve the signal-to-noise ratio (SNR) of the camera-based estimate. One of our key contributions is a new automatic method for determining the weights based only on the video recording of the subject. The gains in SNR of camera-based PPG estimated using distancePPG translate into a reduction of the error in vital sign estimation, and thus expand the scope of camera-based vital sign monitoring to potentially challenging scenarios. Further, a dataset will be released, comprising synchronized video recordings of the face and pulse oximeter-based ground truth recordings from the earlobe for people with different skin tones, under different lighting conditions and for various motion scenarios. PMID:26137365
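
    The core weighting idea can be sketched as a weighted average of per-region signals. In the snippet below the weights are supplied by hand for illustration; distancePPG derives them automatically from blood perfusion and incident light estimates, which this sketch omits.

```python
import numpy as np

def combine_region_signals(region_signals, weights):
    """Combine per-region color-change signals into one pulse signal by weighted averaging."""
    region_signals = np.asarray(region_signals, dtype=np.float64)   # shape: (regions, samples)
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()
    return weights @ region_signals

# Usage with three synthetic regions: a clean pulse, a noisy pulse, and mostly noise.
t = np.linspace(0, 10, 300)                       # 10 s at 30 samples/s
pulse = np.sin(2 * np.pi * 1.2 * t)               # ~72 beats per minute
rng = np.random.default_rng(0)
regions = [pulse + 0.1 * rng.standard_normal(t.size),
           pulse + 0.5 * rng.standard_normal(t.size),
           0.2 * pulse + 1.0 * rng.standard_normal(t.size)]
combined = combine_region_signals(regions, weights=[0.6, 0.3, 0.1])
print("Combined signal std:", combined.std())
```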

  17. Solid state laser technology - A NASA perspective

    Science.gov (United States)

    Allario, F.

    1985-01-01

    NASA's program for developing solid-state laser technology and applying it to the Space Shuttle and Space Platform is discussed. Solid-state lasers are required to fulfill the Earth Observation System's requirements. The role of the Office of Aeronautics and Space Technology in developing a NASA tunable solid-state laser program is described. The major goals of the program involve developing a solid-state pump laser in the green, using AlGaAs array technology, pumping a Nd:YAG/SLAB crystal or glass, and fabricating a lidar system, with either a CO2 laser at 10.6 microns or a Nd:YAG laser at 1.06 microns, to measure tropospheric winds to an accuracy of ±1 m/s and a vertical resolution of 1 km. The procedures to be followed in order to realize this technology plan include: (1) material development and characterization, (2) laser development, and (3) implementation of the lasers.

  18. Distant Measurement of Plethysmographic Signal in Various Lighting Conditions Using Configurable Frame-Rate Camera

    Directory of Open Access Journals (Sweden)

    Przybyło Jaromir

    2016-12-01

    Full Text Available Videoplethysmography is currently recognized as a promising noninvasive heart rate measurement method, advantageous for ubiquitous monitoring of humans in natural living conditions. Although the method is considered for application in several areas including telemedicine, sports and assisted living, its dependence on lighting conditions and camera performance is still not investigated enough. In this paper we report on research into various image acquisition aspects including the lighting spectrum, frame rate and compression. In the experimental part, we recorded five video sequences in various lighting conditions (fluorescent artificial light, dim daylight, infrared light, incandescent light bulb) using a programmable frame rate camera and a pulse oximeter as the reference. For video sequence-based heart rate measurement we implemented a pulse detection algorithm based on the power spectral density, estimated using Welch's technique. The results showed that lighting conditions and selected video camera settings, including compression and the sampling frequency, influence the heart rate detection accuracy. The average heart rate error also varies from 0.35 beats per minute (bpm) for fluorescent light to 6.6 bpm for dim daylight.
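
    The pulse-detection step based on Welch's power spectral density can be sketched with SciPy. The heart-rate band and synthetic test signal below are assumptions for illustration; the full pipeline in the paper also extracts the plethysmographic signal from video, which this sketch skips.

```python
import numpy as np
from scipy.signal import welch

def heart_rate_from_signal(signal, fs, band=(0.7, 3.0)):
    """Estimate heart rate (bpm) from a plethysmographic signal via Welch's PSD."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 256))
    mask = (freqs >= band[0]) & (freqs <= band[1])   # plausible heart-rate band, 42-180 bpm
    peak_freq = freqs[mask][np.argmax(psd[mask])]
    return 60.0 * peak_freq                          # convert Hz to beats per minute

# Usage with a synthetic 75 bpm signal sampled at 30 frames per second.
fs = 30.0
t = np.arange(0, 20, 1.0 / fs)
signal = np.sin(2 * np.pi * 1.25 * t) + 0.3 * np.random.default_rng(0).standard_normal(t.size)
print(f"Estimated heart rate: {heart_rate_from_signal(signal, fs):.1f} bpm")
```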

  19. Solid state track detectors

    International Nuclear Information System (INIS)

    Reuther, H.

    1976-11-01

    This paper gives a survey of the present state of the development and application of solid state track detectors. The fundamentals of the physical and chemical processes of track formation and development are explained, the different detector materials and their registration characteristics are mentioned, and the possibilities of experimental practice and the most varied applications are discussed. (author)

  20. Development and application of remote video monitoring system for combine harvester based on embedded Linux

    Science.gov (United States)

    Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui

    2017-01-01

    Combine harvesters usually work in sparsely populated areas with a harsh environment. In order to achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux is developed. The system uses a USB camera for capturing working-state video data of the main parts of the combine harvester, including the granary, threshing drum, cab and cutting table. The JPEG image compression standard is used to compress the video data, which is then transferred to a remote monitoring center over the network for long-range monitoring and management. The paper first describes the necessity of the system design, then briefly introduces the hardware and software realization, and then details the configuration and compilation of the embedded Linux operating system and the compiling and porting of the video server program. Finally, the equipment was installed and commissioned on a combine harvester and the system was tested; the test results are presented. In testing, the remote video monitoring system achieved 30 fps at a resolution of 800 × 600, and the response delay over the public network was about 40 ms.
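
    The capture-compress-transmit loop can be sketched with OpenCV in place of the authors' embedded video-server code. The length-prefixed TCP framing and the address in the commented usage line are illustrative assumptions, not the system's actual protocol.

```python
import socket
import struct
import cv2

def stream_camera(server_host, server_port, device_index=0, jpeg_quality=80):
    """Capture frames from a USB camera, JPEG-compress them, and send them over TCP."""
    capture = cv2.VideoCapture(device_index)
    sock = socket.create_connection((server_host, server_port))
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            ok, jpeg = cv2.imencode(".jpg", frame,
                                    [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
            if not ok:
                continue
            payload = jpeg.tobytes()
            sock.sendall(struct.pack("!I", len(payload)) + payload)  # 4-byte length prefix
    finally:
        capture.release()
        sock.close()

# Usage (hypothetical monitoring-center address):
# stream_camera("192.168.1.100", 9000)
```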

  1. Monolithic solid-state lasers for spaceflight

    Science.gov (United States)

    Krainak, Michael A.; Yu, Anthony W.; Stephen, Mark A.; Merritt, Scott; Glebov, Leonid; Glebova, Larissa; Ryasnyanskiy, Aleksandr; Smirnov, Vadim; Mu, Xiaodong; Meissner, Stephanie; Meissner, Helmuth

    2015-02-01

    A new solution for building high power, solid state lasers for space flight is to fabricate the whole laser resonator in a single (monolithic) structure or alternatively to build a contiguous diffusion bonded or welded structure. Monolithic lasers provide numerous advantages for space flight solid-state lasers by minimizing misalignment concerns. The closed cavity is immune to contamination. The number of components is minimized thus increasing reliability. Bragg mirrors serve as the high reflector and output coupler thus minimizing optical coatings and coating damage. The Bragg mirrors also provide spectral and spatial mode selection for high fidelity. The monolithic structure allows short cavities resulting in short pulses. Passive saturable absorber Q-switches provide a soft aperture for spatial mode filtering and improved pointing stability. We will review our recent commercial and in-house developments toward fully monolithic solid-state lasers.

  2. Measuring the Angular Velocity of a Propeller with Video Camera Using Electronic Rolling Shutter

    Directory of Open Access Journals (Sweden)

    Yipeng Zhao

    2018-01-01

    Full Text Available Noncontact measurement of rotational motion has advantages over the traditional method, which measures rotational motion by means of devices installed on the object, such as a rotary encoder. Cameras can be employed as remote monitoring or inspecting sensors to measure the angular velocity of a propeller because of their commonplace availability, simplicity, and potentially low cost. A drawback of measurement with cameras is the need to process the massive data generated by the cameras. In order to reduce the data collected from the camera, a camera using an ERS (electronic rolling shutter) is applied to measure angular velocities that are higher than the frame rate of the camera. The rolling shutter can induce geometric distortion in the image when the propeller rotates while an image is being captured. In order to reveal the relationship between the angular velocity and the image distortion, a rotation model has been established. The proposed method was applied to measure the angular velocities of a two-blade propeller and a multiblade propeller. The experimental results showed that this method could detect angular velocities that were higher than the camera's frame rate, and the accuracy was acceptable.
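
    The underlying geometry can be sketched with a few lines of arithmetic: with a rolling shutter, successive rows are exposed at successive times, so the angle a blade sweeps between two rows divided by the readout time between them gives the angular velocity. The function below is a simplified illustration of that idea, not the paper's full rotation model, and it ignores aliasing over whole revolutions.

```python
import math

def angular_velocity_from_rolling_shutter(blade_angle_top_rad, blade_angle_bottom_rad,
                                          rows_between, line_readout_time_s):
    """Estimate angular velocity (rad/s) from the apparent twist of a blade in one image."""
    delta_angle = blade_angle_bottom_rad - blade_angle_top_rad
    delta_time = rows_between * line_readout_time_s   # time elapsed between the two rows
    return delta_angle / delta_time

# Usage: blade appears rotated by 0.2 rad across 400 rows read out at 20 microseconds/row.
omega = angular_velocity_from_rolling_shutter(0.0, 0.2, rows_between=400,
                                              line_readout_time_s=20e-6)
print(f"Estimated angular velocity: {omega:.1f} rad/s ({omega * 60 / (2 * math.pi):.0f} rpm)")
```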

  3. Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.

    Science.gov (United States)

    Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

    2015-01-01

    The purpose of this study was to establish intra-rater, intra-session, and inter-rater, reliability of sagittal plane hip, knee, and ankle angles with and without reflective markers using the GAITRite walkway and single video camera between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants age 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable, the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). Reliability of a single camera system, especially at the knee joint shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), participants (e.g. loose fitting clothing) and camera systems (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.

  4. Heterogeneity image patch index and its application to consumer video summarization.

    Science.gov (United States)

    Dang, Chinh T; Radha, Hayder

    2014-06-01

    Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min–max based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
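
    An entropy-of-patches score in the spirit of the HIP index can be sketched as follows; the exact definition used in the paper may differ. The value is computed per frame, and plotting it over time yields the curve from which key-frame candidates are selected.

```python
import numpy as np

def patch_entropy_index(gray_frame, patch_size=16, bins=32):
    """Mean gray-level histogram entropy over the patches of one frame."""
    img = np.asarray(gray_frame, dtype=np.float64)
    h, w = img.shape
    entropies = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patch = img[y:y + patch_size, x:x + patch_size]
            hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
            p = hist[hist > 0] / hist.sum()              # normalized bin probabilities
            entropies.append(-(p * np.log2(p)).sum())    # Shannon entropy of the patch
    return float(np.mean(entropies))

# Usage on synthetic frames: a textured frame scores higher than a flat one.
rng = np.random.default_rng(0)
textured = rng.integers(0, 256, size=(128, 128))
flat = np.full((128, 128), 128)
print(patch_entropy_index(textured), patch_entropy_index(flat))
```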

  5. Reliability of video-based identification of footstrike pattern and video time frame at initial contact in recreational runners

    DEFF Research Database (Denmark)

    Damsted, Camma; Larsen, L H; Nielsen, R.O.

    2015-01-01

    and video time frame at initial contact during treadmill running using two-dimensional (2D) video recordings. METHODS: Thirty-one recreational runners were recorded twice, 1 week apart, with a high-speed video camera. Two blinded raters evaluated each video twice with an interval of at least 14 days....... RESULTS: Kappa values for within-day identification of footstrike pattern revealed intra-rater agreement of 0.83-0.88 and inter-rater agreement of 0.50-0.63. Corresponding figures for between-day identification of footstrike pattern were 0.63-0.69 and 0.41-0.53, respectively. Identification of video time...... in 36% of the identifications (kappa=0.41). The 95% limits of agreement for identification of video time frame at initial contact may, at times, allow for different identification of footstrike pattern. Clinicians should, therefore, be encouraged to continue using clinical 2D video setups for intra...

  6. Solid state physics

    CERN Document Server

    Grosso, Giuseppe

    2013-01-01

    Solid State Physics is a textbook for students of physics, material science, chemistry, and engineering. It is the state-of-the-art presentation of the theoretical foundations and application of the quantum structure of matter and materials. This second edition provides timely coverage of the most important scientific breakthroughs of the last decade (especially in low-dimensional systems and quantum transport). It helps build readers' understanding of the newest advances in condensed matter physics with rigorous yet clear mathematics. Examples are an integral part of the text, carefully de

  7. SOLID-STATE STORAGE DEVICE FLASH TRANSLATION LAYER

    DEFF Research Database (Denmark)

    2017-01-01

    Embodiments of the present invention include a method for storing a data page d on a solid-state storage device, wherein the solid-state storage device is configured to maintain a mapping table in a Log-Structured Merge (LSM) tree having a C0 component which is a random access memory (RAM) device...
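
    The LSM-style organization of the mapping table can be illustrated with a toy sketch: new logical-to-physical mappings go into an in-memory C0 component and are merged into a larger component when C0 fills. The class below is an illustrative assumption about how such a table might be structured, not the patented design.

```python
class LsmMappingTable:
    """Toy flash translation layer mapping table organized like an LSM tree."""

    def __init__(self, c0_capacity=4):
        self.c0 = {}              # in-RAM component: most recent logical-to-physical mappings
        self.c1 = {}              # stands in for the larger, flash-resident component
        self.c0_capacity = c0_capacity

    def map_page(self, logical_page, physical_page):
        self.c0[logical_page] = physical_page
        if len(self.c0) >= self.c0_capacity:
            self._merge_c0_into_c1()

    def lookup(self, logical_page):
        # C0 is checked first because it holds the newest mappings.
        if logical_page in self.c0:
            return self.c0[logical_page]
        return self.c1.get(logical_page)

    def _merge_c0_into_c1(self):
        self.c1.update(self.c0)   # newer entries overwrite older ones
        self.c0.clear()

# Usage: write a few pages and look one up.
ftl = LsmMappingTable(c0_capacity=3)
for lpn, ppn in [(0, 100), (1, 101), (2, 102), (0, 205)]:
    ftl.map_page(lpn, ppn)
print(ftl.lookup(0))   # 205: the most recent physical location of logical page 0
```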

  8. A practical implementation of free viewpoint video system for soccer games

    Science.gov (United States)

    Suenaga, Ryo; Suzuki, Kazuyoshi; Tezuka, Tomoyuki; Panahpour Tehrani, Mehrdad; Takahashi, Keita; Fujii, Toshiaki

    2015-03-01

    In this paper, we present a free viewpoint video generation system with billboard representation for soccer games. Free viewpoint video generation is a technology that enables users to watch 3-D objects from their desired viewpoints. Practical implementation of free viewpoint video for sports events is in high demand. However, a commercially acceptable system has not yet been developed. The main obstacles are insufficient user-end quality of the synthesized images and highly complex procedures that sometimes require manual operations. In this work, we aim to develop a commercially acceptable free viewpoint video system with a billboard representation. A supposed scenario is that soccer games during the day can be broadcast in 3-D, even in the evening of the same day. Our work is still ongoing. However, we have already developed several techniques to support our goal. First, we captured an actual soccer game at an official stadium where we used 20 full-HD professional cameras. Second, we have implemented several tools for free viewpoint video generation, as follows. In order to facilitate free viewpoint video generation, all cameras should be calibrated. We calibrated all cameras using checkerboard images and feature points on the field (cross points of the soccer field lines). We extract each player region from captured images manually. The background region is estimated by observing chrominance changes of each pixel in the temporal domain (automatically). Additionally, we have developed a user interface for visualizing free viewpoint video generation using a graphics library (OpenGL), which is suitable not only for commercialized TV sets but also for devices such as smartphones. However, a practical system has not yet been completed and our study is still ongoing.

  9. Methods and Algorithms for Detecting Objects in Video Files

    Directory of Open Access Journals (Sweden)

    Nguyen The Cuong

    2018-01-01

    Full Text Available Video files are files that store motion pictures and sounds as in real life. In today's world, the need for automated processing of information in video files is increasing. Automated processing of information has a wide range of applications, including office/home surveillance cameras, traffic control, sports applications, remote object detection, and others. In particular, detection and tracking of object movement in video files play an important role. This article describes methods of detecting objects in video files. Today, this problem in the field of computer vision is being studied worldwide.

  10. FPGA-Based Real-Time Motion Detection for Automated Video Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Sanjay Singh

    2016-03-01

    Full Text Available Design of automated video surveillance systems is one of the exigent missions in the computer vision community because of their ability to automatically select frames of interest in incoming video streams based on motion detection. This research paper focuses on the real-time hardware implementation of a motion detection algorithm for such vision based automated surveillance systems. A dedicated VLSI architecture has been proposed and designed for a clustering-based motion detection scheme. The working prototype of a complete standalone automated video surveillance system, including input camera interface, designed motion detection VLSI architecture, and output display interface, with real-time relevant motion detection capabilities, has been implemented on the Xilinx ML510 (Virtex-5 FX130T) FPGA platform. The prototyped system robustly detects the relevant motion in real-time in live PAL (720 × 576) resolution video streams coming directly from the camera.
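
    The record above describes a dedicated VLSI implementation; the hardware design itself cannot be reproduced here. As a rough software-level illustration of the same general idea (flagging frames that contain relevant motion), the following sketch uses OpenCV background subtraction; the camera index and the 1% relevance threshold are assumptions, not the paper's clustering scheme.

      import cv2

      cap = cv2.VideoCapture(0)            # assumed live camera, e.g. a PAL capture device
      backsub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          mask = backsub.apply(frame)                       # foreground mask
          mask = cv2.medianBlur(mask, 5)                    # suppress isolated noise pixels
          motion_pixels = cv2.countNonZero(mask)
          if motion_pixels > 0.01 * mask.size:              # assumed relevance threshold: 1% of frame
              print("relevant motion detected:", motion_pixels, "pixels")
      cap.release()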

  11. Utilizing ISS Camera Systems for Scientific Analysis of Lightning Characteristics and comparison with ISS-LIS and GLM

    Science.gov (United States)

    Schultz, C. J.; Lang, T. J.; Leake, S.; Runco, M.; Blakeslee, R. J.

    2017-12-01

    Video and still frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration. Camera images from the crew cameras and high definition video from the Chiba University Meteor Camera were combined with lightning data from the National Lightning Detection Network (NLDN), the ISS-Lightning Imaging Sensor (ISS-LIS), the Geostationary Lightning Mapper (GLM) and lightning mapping arrays. These cameras provide significant spatial resolution advantages (roughly 10 times or better) over ISS-LIS and GLM, but with lower temporal resolution. Therefore, they can serve as a complementary analysis tool for studying lightning and thunderstorm processes from space. Lightning sensor data, Visible Infrared Imaging Radiometer Suite (VIIRS) derived city light maps, and other geographic databases were combined with the ISS attitude and position data to reverse geolocate each image or frame. An open-source Python toolkit has been developed to assist with this effort. Next, the locations and sizes of all flashes in each frame or image were computed and compared with flash characteristics from all available lightning datasets. This allowed for characterization of cloud features that are below the 4-km and 8-km resolution of ISS-LIS and GLM and which may reduce the light that reaches the ISS-LIS or GLM sensor. In the case of video, consecutive frames were overlaid to determine the rate of change of the light escaping cloud top. Characterization of the rate of change in geometry, more generally the radius, of light escaping cloud top was integrated with the NLDN, ISS-LIS and
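
    The abstract mentions an open-source Python toolkit and frame-to-frame measurement of the light escaping cloud top. A minimal, hypothetical sketch of that last step is given below; the brightness threshold and the circular-flash approximation are assumptions for illustration only, not the project's actual code.

      import numpy as np

      def flash_radius(frame, threshold=200):
          """Approximate radius (in pixels) of the illuminated region in one video frame.

          frame: 2-D array of gray levels; threshold is an assumed brightness cutoff.
          """
          lit = frame >= threshold
          area = lit.sum()
          # Treat the lit region as roughly circular: area = pi * r**2.
          return np.sqrt(area / np.pi)

      def radius_rate(frames, dt):
          """Frame-to-frame rate of change of the flash radius (pixels per second)."""
          radii = np.array([flash_radius(f) for f in frames])
          return np.diff(radii) / dt

      # Example with synthetic frames (stand-ins for georeferenced video frames):
      frames = [np.random.randint(0, 255, (512, 512), dtype=np.uint8) for _ in range(5)]
      print(radius_rate(frames, dt=1 / 30))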

  12. Optimisation of solid state fermentation

    African Journals Online (AJOL)

    eobe

    from banana peels via solid state fermentation using Aspergillus niger. ermentation ... [7,8], apple pomace [9], banana peels [4], date palm. [10], carob ... powder, jams, juice, bar, biscuits, wine etc results in ... Yeast extract was taken as nitrogen.

  13. Replicas Strategy and Cache Optimization of Video Surveillance Systems Based on Cloud Storage

    Directory of Open Access Journals (Sweden)

    Rongheng Li

    2018-04-01

    Full Text Available With the rapid development of video surveillance technology, especially the popularity of cloud-based video surveillance applications, video data begins to grow explosively. However, in the cloud-based video surveillance system, replicas occupy a large amount of storage space. Also, the slow response to video playback constrains the performance of the system. In this paper, considering the characteristics of video data comprehensively, we propose a dynamic redundant replicas mechanism based on security levels that can dynamically adjust the number of replicas. Based on the location correlation between cameras, this paper also proposes a data cache strategy to improve the response speed of data reading. Experiments illustrate that: (1) our dynamic redundant replicas mechanism can save storage space while ensuring data security; (2) the cache mechanism can predict the playback behaviors of the users in advance and improve the response speed of data reading according to the location and time correlation of the front-end cameras; and (3) in terms of cloud-based video surveillance, our proposed approaches significantly outperform existing methods.
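
    As an editorial illustration of the two ideas summarized above (replica counts tied to security level, and caching driven by camera location correlation), the following toy sketch is offered; the level-to-replica mapping, the hot-access bonus and the neighbour prefetch rule are assumptions, not the paper's actual policies.

      from collections import OrderedDict

      # Assumed mapping from a video segment's security level to its replica count.
      REPLICAS_BY_LEVEL = {"low": 1, "medium": 2, "high": 3}

      def replica_count(security_level, access_rate, hot_threshold=100):
          """More replicas for sensitive or frequently accessed segments (toy policy)."""
          n = REPLICAS_BY_LEVEL.get(security_level, 1)
          return n + 1 if access_rate > hot_threshold else n

      class LocationAwareCache:
          """LRU cache that, on a request, also prefetches segments from nearby cameras."""

          def __init__(self, capacity, neighbours):
              self.capacity = capacity
              self.neighbours = neighbours          # camera_id -> list of nearby camera_ids
              self.store = OrderedDict()            # (camera_id, t) -> segment

          def get(self, camera_id, t, fetch):
              key = (camera_id, t)
              if key not in self.store:
                  self.store[key] = fetch(camera_id, t)
              self.store.move_to_end(key)
              for nb in self.neighbours.get(camera_id, []):    # prefetch correlated cameras
                  self.store.setdefault((nb, t), fetch(nb, t))
              while len(self.store) > self.capacity:
                  self.store.popitem(last=False)
              return self.store[key]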

  14. Distributed Sensing and Processing for Multi-Camera Networks

    Science.gov (United States)

    Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.

    Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.

  15. Credibility and Authenticity of Digitally Signed Videos in Traffic

    Directory of Open Access Journals (Sweden)

    Ivan Grgurević

    2008-11-01

    Full Text Available The paper presents the possibilities of ensuring the credibility and authenticity of surveillance camera video by digital signing, using the public key infrastructure as part of an interoperable traffic and information system in future intelligent transport systems. The surveillance camera video is a sequence of individual frames, and a unique digital print, i.e. hash value, is calculated for each of these. By encrypting the hash values of the frames using the private encryption key of the surveillance centre, digital signatures are created and stored in the database. The surveillance centre can issue a copy of the video to all interested subjects for scientific and research work and investigation. Regardless of the scope, each subsequent manipulation of the video copy contents will certainly change the hash values of the frames. The procedure of determining the authenticity and credibility of videos is thus reduced to comparing the hash values of the frames stored in the database of the surveillance centre with the values obtained from the interested subjects such as traffic experts and investigators, surveillance-security services etc.
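
    The per-frame hashing and public-key signing procedure described above can be illustrated with a short sketch using SHA-256 and RSA from the third-party cryptography package; generating the key pair in code (rather than using a PKI-issued key) and the PKCS#1 v1.5 padding are assumptions made only for this example.

      import hashlib
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import rsa, padding

      # Surveillance-centre key pair (in practice loaded from the PKI, not generated ad hoc).
      private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      public_key = private_key.public_key()

      def sign_frames(frames):
          """Return (hash, signature) per frame; frames are raw byte strings."""
          records = []
          for data in frames:
              digest = hashlib.sha256(data).digest()
              sig = private_key.sign(digest, padding.PKCS1v15(), hashes.SHA256())
              records.append((digest, sig))
          return records

      def verify_copy(frames, records):
          """Check a video copy against the stored per-frame hashes and signatures."""
          for data, (digest, sig) in zip(frames, records):
              if hashlib.sha256(data).digest() != digest:
                  return False                      # frame content was manipulated
              public_key.verify(sig, digest, padding.PKCS1v15(), hashes.SHA256())
          return True

      frames = [b"frame-0-bytes", b"frame-1-bytes"]
      records = sign_frames(frames)
      print(verify_copy(frames, records))           # True for an unmodified copy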

  16. Innovative Video Diagnostic Equipment for Material Science

    Science.gov (United States)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced wavelet algorithms, storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras, acquiring high resolution video images up to 4 Mpixels @ 60 fps or high frame rate video images up to about 1000 fps @ 512×512 pixels.

  17. Solid state electrolytes for all-solid-state 3D lithium-ion batteries

    NARCIS (Netherlands)

    Kokal, I.

    2012-01-01

    The focus of this Ph.D. thesis is to understand the lithium ion motion and to enhance the Li-ionic conductivities in commonly known solid state lithium ion conductors by changing the structural properties and preparation methods. In addition, the feasibility for practical utilization of several

  18. The persuasive aims of Metal Gear Solid: A discourse theoretical approach to the study of argumentation in video games

    DEFF Research Database (Denmark)

    Stamenkovic, Dusan; Jacevic, Milan; Wildfeuer, Janina

    2017-01-01

    in a representative video game combine so as to convince the player to act, play, and perhaps think accordingly. The multimodal approach employed in the paper combines the notion of discourse aims and the rhetorical and argumentative structure of Metal Gear Solid (Kojima, 1998), and analyses different narrative...... strategies and verbal cues, as well as the overall interface, control, and gameplay. The results suggest that verbal and textual cues combine with audio-visual elements and highly specific gameplay strategies in order to deter the player from killing enemies. This might indicate that video games are likely...

  19. Markerless registration for image guided surgery. Preoperative image, intraoperative video image, and patient

    International Nuclear Information System (INIS)

    Kihara, Tomohiko; Tanaka, Yuko

    1998-01-01

    Real-time and volumetric acquisition of X-ray CT, MR, and SPECT is the latest trend of the medical imaging devices. A clinical challenge is to use this multi-modality volumetric information on the patient in a complementary way throughout the diagnostic and surgical processes. Intraoperative image and patient integration intends to establish a common reference frame by image in the diagnostic and surgical processes. This provides a quantitative measure during surgery, for which we have relied mostly on doctors' skills and experience. The intraoperative image and patient integration involves various technologies; however, we think one of the most important elements is the development of markerless registration, which should be efficient and applicable to the preoperative multi-modality data sets, the intraoperative image, and the patient. We developed a registration system which integrates preoperative multi-modality images, intraoperative video image, and patient. It consists of a real-time registration of the video camera for intraoperative use, a markerless surface sampling matching of patient and image, our previous work on markerless multi-modality image registration of X-ray CT, MR, and SPECT, and an image synthesis on the video image. We think these techniques can be used in many applications which involve video-camera-like devices such as video cameras, microscopes, and image intensifiers. (author)

  20. Distributed video data fusion and mining

    Science.gov (United States)

    Chang, Edward Y.; Wang, Yuan-Fang; Rodoplu, Volkan

    2004-09-01

    This paper presents an event sensing paradigm for intelligent event-analysis in a wireless, ad hoc, multi-camera, video surveillance system. In particular, we present statistical methods that we have developed to support three aspects of event sensing: 1) energy-efficient, resource-conserving, and robust sensor data fusion and analysis, 2) intelligent event modeling and recognition, and 3) rapid deployment, dynamic configuration, and continuous operation of the camera networks. We outline our preliminary results, and discuss future directions that research might take.

  1. Technology survey on video face tracking

    Science.gov (United States)

    Zhang, Tong; Gomes, Herman Martins

    2014-03-01

    With the pervasiveness of monitoring cameras installed in public areas, schools, hospitals, work places and homes, video analytics technologies for interpreting these video contents are becoming increasingly relevant to people's lives. Among such technologies, human face detection and tracking (and face identification in many cases) are particularly useful in various application scenarios. While plenty of research has been conducted on face tracking and many promising approaches have been proposed, there are still significant challenges in recognizing and tracking people in videos with uncontrolled capturing conditions, largely due to pose and illumination variations, as well as occlusions and cluttered background. It is especially complex to track and identify multiple people simultaneously in real time due to the large amount of computation involved. In this paper, we present a survey on literature and software that are published or developed during recent years on the face tracking topic. The survey covers the following topics: 1) mainstream and state-of-the-art face tracking methods, including features used to model the targets and metrics used for tracking; 2) face identification and face clustering from face sequences; and 3) software packages or demonstrations that are available for algorithm development or trial. A number of publicly available databases for face tracking are also introduced.

  2. MPEG-2 Compressed-Domain Algorithms for Video Analysis

    Directory of Open Access Journals (Sweden)

    Hesseler Wolfgang

    2006-01-01

    Full Text Available This paper presents new algorithms for extracting metadata from video sequences in the MPEG-2 compressed domain. Three algorithms for efficient low-level metadata extraction in preprocessing stages are described. The first algorithm detects camera motion using the motion vector field of an MPEG-2 video. The second method extends the idea of motion detection to a limited region of interest, yielding an efficient algorithm to track objects inside video sequences. The third algorithm performs a cut detection using macroblock types and motion vectors.
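
    A minimal illustration of the first algorithm's core idea (inferring global camera motion from the MPEG-2 motion vector field) might look like the sketch below; the median-based estimate and the pan/tilt thresholds are assumptions, and parsing of the MPEG-2 stream itself is not shown.

      import numpy as np

      def global_motion(mv_field):
          """Estimate global (camera) motion from an MPEG-2 motion vector field.

          mv_field: array of shape (H, W, 2) with per-macroblock (dx, dy) vectors.
          The median is robust to independently moving foreground objects.
          """
          vectors = mv_field.reshape(-1, 2)
          return np.median(vectors, axis=0)

      def classify_camera_motion(dx, dy, threshold=1.0):
          """Very coarse labelling of the dominant camera motion (assumed thresholds)."""
          if abs(dx) < threshold and abs(dy) < threshold:
              return "static"
          return "pan" if abs(dx) >= abs(dy) else "tilt"

      mv = np.random.randn(36, 45, 2)       # stand-in for a parsed motion vector field
      dx, dy = global_motion(mv)
      print(classify_camera_motion(dx, dy))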

  3. An assessment of the effectiveness of high definition cameras as remote monitoring tools for dolphin ecology studies.

    Directory of Open Access Journals (Sweden)

    Estênio Guimarães Paiva

    Full Text Available Research involving marine mammals often requires costly field programs. This paper assessed whether the benefits of using cameras outweigh the implications of having personnel performing marine mammal detection in the field. The efficacy of video and still cameras to detect Indo-Pacific bottlenose dolphins (Tursiops aduncus) in the Fremantle Harbour (Western Australia) was evaluated, with consideration of how environmental conditions affect detectability. The cameras were set on a tower in the Fremantle Port channel and videos were perused at 1.75 times the normal speed. Images from the cameras were used to estimate position of dolphins at the water's surface. Dolphin detections ranged from 5.6 m to 463.3 m for the video camera, and from 10.8 m to 347.8 m for the still camera. Detection range was shown to be satisfactory when compared to distances at which dolphins would be detected by field observers. The relative effect of environmental conditions on detectability was considered by fitting a Generalised Estimation Equations (GEE) model with Beaufort, level of glare and their interactions as predictors and a temporal auto-correlation structure. The best fit model indicated level of glare had an effect, with more intense periods of glare corresponding to lower occurrences of observed dolphins. However, this effect was not large (-0.264) and the parameter estimate was associated with a large standard error (0.113). The limited field of view was the main constraint, in that the cameras could only be applied to detections of observed animals rather than counts of individuals. However, the use of cameras was effective for long term monitoring of occurrence of dolphins, outweighing the costs and reducing the health and safety risks to field personnel. This study showed that cameras could be effectively implemented onshore for research such as studying changes in habitat use in response to development and construction activities.

  4. Measurement and protocol for evaluating video and still stabilization systems

    Science.gov (United States)

    Cormier, Etienne; Cao, Frédéric; Guichard, Frédéric; Viard, Clément

    2013-01-01

    This article presents a system and a protocol to characterize image stabilization systems both for still images and videos. It uses a six axes platform, three being used for camera rotation and three for camera positioning. The platform is programmable and can reproduce complex motions that have been typically recorded by a gyroscope mounted on different types of cameras in different use cases. The measurement uses a single chart for still image and videos, the texture dead leaves chart. Although the proposed implementation of the protocol uses a motion platform, the measurement itself does not rely on any specific hardware. For still images, a modulation transfer function is measured in different directions and is weighted by a contrast sensitivity function (simulating the human visual system accuracy) to obtain an acutance. The sharpness improvement due to the image stabilization system is a good measurement of performance as recommended by a CIPA standard draft. For video, four markers on the chart are detected with sub-pixel accuracy to determine a homographic deformation between the current frame and a reference position. This model describes well the apparent global motion as translations, but also rotations along the optical axis and distortion due to the electronic rolling shutter equipping most CMOS sensors. The protocol is applied to all types of cameras such as DSC, DSLR and smartphones.
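
    The video part of the protocol estimates a homographic deformation from four detected markers. A hypothetical sketch of that step with OpenCV follows; the marker coordinates are made up and sub-pixel detection is assumed to have happened upstream.

      import cv2
      import numpy as np

      def frame_homography(ref_markers, cur_markers):
          """Homography mapping the reference marker positions to the current frame.

          ref_markers, cur_markers: 4x2 float arrays of sub-pixel marker coordinates.
          """
          H, _ = cv2.findHomography(np.asarray(ref_markers, np.float32),
                                    np.asarray(cur_markers, np.float32))
          return H

      def apparent_translation(H):
          """Translation part of the homography (rough global shift in pixels)."""
          return H[0, 2], H[1, 2]

      ref = [[100, 100], [500, 100], [500, 400], [100, 400]]
      cur = [[103, 98], [503, 99], [502, 402], [102, 401]]    # hypothetical detections
      H = frame_homography(ref, cur)
      print(apparent_translation(H))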

  5. Passivation-free solid state battery

    Science.gov (United States)

    Abraham, Kuzhikalail M.; Peramunage, Dharmasena

    1998-01-01

    This invention pertains to passivation-free solid-state rechargeable batteries composed of Li4Ti5O12 anode, a solid polymer electrolyte and a high voltage cathode. The solid polymer electrolyte comprises a polymer host, such as polyacrylonitrile, poly(vinyl chloride), poly(vinyl sulfone), and poly(vinylidene fluoride), plasticized by a solution of a Li salt in an organic solvent. The high voltage cathode includes LiMn2O4, LiCoO2, LiNiO2 and LiV2O5 and their derivatives.

  6. The Oxford solid state basics

    CERN Document Server

    Simon, Steven H

    2013-01-01

    The study of solids is one of the richest, most exciting, and most successful branches of physics. While the subject of solid state physics is often viewed as dry and tedious, this new book presents the topic instead as an exciting exposition of fundamental principles and great intellectual breakthroughs. Beginning with a discussion of how the study of heat capacity of solids ushered in the quantum revolution, the author presents the key ideas of the field while emphasizing the deep underlying concepts. The book begins with a discussion of the Einstein/Debye model of specific heat, and the Drude

  7. Cellphones in Classrooms Land Teachers on Online Video Sites

    Science.gov (United States)

    Honawar, Vaishali

    2007-01-01

    Videos of teachers that students taped in secrecy are all over online sites like YouTube and MySpace. Angry teachers, enthusiastic teachers, teachers clowning around, singing, and even dancing are captured, usually with camera phones, for the whole world to see. Some students go so far as to create elaborately edited videos, shot over several…

  8. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    Energy Technology Data Exchange (ETDEWEB)

    WERRY, S.M.

    2000-03-23

    This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151.

  9. 241-AZ-101 Waste Tank Color Video Camera System Shop Acceptance Test Report

    International Nuclear Information System (INIS)

    WERRY, S.M.

    2000-01-01

    This report includes shop acceptance test results. The test was performed prior to installation at tank AZ-101. Both the camera system and camera purge system were originally sought and procured as a part of initial waste retrieval project W-151

  10. vid113_0401r -- Video groundtruthing collected from RV Tatoosh during August 2005.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A custom built camera sled outfitted with video equipment (and other devices) was deployed from the NOAA research vessel Tatoosh during August 2005. Video data from...

  11. Towards real-time remote processing of laparoscopic video

    Science.gov (United States)

    Ronaghi, Zahra; Duffy, Edward B.; Kwartowitz, David M.

    2015-03-01

    Laparoscopic surgery is a minimally invasive surgical technique where surgeons insert a small video camera into the patient's body to visualize internal organs and small tools to perform surgical procedures. However, the benefit of small incisions has a drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivering of therapy. Image-guided surgery (IGS) uses images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). The video streams generate approximately 360 megabytes of data per second, demonstrating a trend towards increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second (fps) rate, it is required that each 11.9 MB video frame be processed by a server and returned within 1/30th of a second. The ability to acquire, process and visualize data in real-time is essential for performance of complex tasks as well as minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will lead to real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We aim to develop a medical video processing system using an OpenFlow software defined network that is capable of connecting to multiple remote medical facilities and HPC servers.
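
    For reference, the bandwidth and latency figures quoted above fit together as in the short calculation below (the numbers are taken directly from the abstract; nothing new is assumed).

      # Back-of-the-envelope budget for remote processing of the laparoscopic video stream.
      frame_mb = 11.9          # megabytes per video frame
      fps = 30                 # frames per second
      data_rate = frame_mb * fps
      budget_ms = 1000.0 / fps # time available per frame for transfer + processing

      print(f"stream data rate: {data_rate:.0f} MB/s")   # ~357 MB/s, i.e. roughly 360 MB/s
      print(f"per-frame round-trip budget: {budget_ms:.1f} ms")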

  12. Assessment of Machine Learning Algorithms for Automatic Benthic Cover Monitoring and Mapping Using Towed Underwater Video Camera and High-Resolution Satellite Images

    Directory of Open Access Journals (Sweden)

    Hassan Mohamed

    2018-05-01

    Full Text Available Benthic habitat monitoring is essential for many applications involving biodiversity, marine resource management, and the estimation of variations over temporal and spatial scales. Nevertheless, both automatic and semi-automatic analytical methods for deriving ecologically significant information from towed camera images are still limited. This study proposes a methodology that enables a high-resolution towed camera with a Global Navigation Satellite System (GNSS) to adaptively monitor and map benthic habitats. First, the towed camera finishes a pre-programmed initial survey to collect benthic habitat videos, which can then be converted to geo-located benthic habitat images. Second, an expert labels a number of benthic habitat images to classify habitats manually. Third, attributes for categorizing these images are extracted automatically using the Bag of Features (BOF) algorithm. Fourth, benthic cover categories are detected automatically using Weighted Majority Voting (WMV) ensembles for Support Vector Machines (SVM), K-Nearest Neighbor (K-NN), and Bagging (BAG) classifiers. Fifth, WMV-trained ensembles can be used for categorizing more benthic cover images automatically. Finally, correctly categorized geo-located images can provide ground truth samples for benthic cover mapping using high-resolution satellite imagery. The proposed methodology was tested over Shiraho, Ishigaki Island, Japan, a heterogeneous coastal area. The WMV ensemble exhibited 89% overall accuracy for categorizing corals, sediments, seagrass, and algae species. Furthermore, the same WMV ensemble produced a benthic cover map using a Quickbird satellite image with 92.7% overall accuracy.
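
    The weighted-majority-voting step over SVM, K-NN and Bagging classifiers can be illustrated with scikit-learn's soft-voting ensemble, as sketched below; the synthetic features, the specific hyperparameters and the voting weights are assumptions, since the paper derives its own weights from the Bag-of-Features descriptors.

      from sklearn.datasets import make_classification
      from sklearn.ensemble import BaggingClassifier, VotingClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC

      # Stand-in features: in the study these would be Bag-of-Features descriptors
      # extracted from geo-located benthic habitat images (corals, sediments, etc.).
      X, y = make_classification(n_samples=400, n_features=50, n_classes=4,
                                 n_informative=10, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      ensemble = VotingClassifier(
          estimators=[("svm", SVC(probability=True)),
                      ("knn", KNeighborsClassifier(n_neighbors=5)),
                      ("bag", BaggingClassifier(n_estimators=20, random_state=0))],
          voting="soft",
          weights=[2, 1, 1],     # assumed weights; the paper computes its own weighting
      )
      ensemble.fit(X_tr, y_tr)
      print("overall accuracy:", ensemble.score(X_te, y_te))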

  13. Solid-state ring laser gyroscope

    Science.gov (United States)

    Schwartz, S.

    The ring laser gyroscope is a rotation sensor used in most kinds of inertial navigation units. It usually consists of a ring cavity filled with a mixture of helium and neon, together with high-voltage pumping electrodes. The use of a gaseous gain medium, while resulting naturally in a stable bidirectional regime enabling rotation sensing, is however the main industrially limiting factor for the ring laser gyroscopes in terms of cost, reliability and lifetime. We study in this book the possibility of substituting for the gaseous gain medium a solid-state medium (diode-pumped Nd-YAG). For this, a theoretical and experimental overview of the lasing regimes of the solid-state ring laser is reported. We show that the bidirectional emission can be obtained thanks to a feedback loop acting on the states of polarization and inducing differential losses proportional to the difference of intensity between the counterpropagating modes. This leads to the achievement of a solid-state ring laser gyroscope, whose frequency response is modified by mode coupling effects. Several configurations, either mechanically or optically based, are then successively studied, with a view to improving the quality of this frequency response. In particular, vibration of the gain crystal along the longitudinal axis appears to be a very promising technique for reaching high inertial performances with a solid-state ring laser gyroscope. Solid-state ring laser gyroscope: the ring laser gyroscope is a rotation sensor used in most inertial navigation units. In its usual form, it consists of a ring laser cavity filled with a mixture of helium and neon pumped by high-voltage electrodes. The use of a gaseous gain medium, while naturally guaranteeing the stable bidirectional operation required for rotation measurement, is nevertheless the main industrial limitation of current ring laser gyroscopes in terms of cost, reliability

  14. Playing with the Camera - Creating with Each Other

    DEFF Research Database (Denmark)

    Vestergaard, Vitus

    2015-01-01

    Many contemporary museums try to involve users as active participants in a range of new ways. One way to engage young people with visual culture is through exhibits where users produce their own videos. Since museum experiences are largely social in nature and based on the group as a social unit, it is imperative to investigate how museum users in a group create videos and engage with each other and the exhibits. Based on research on young users creating videos in the Media Mixer, this article explores what happens during the creative process in front of a camera. Drawing upon theories of museology, media, learning and creativity, the article discusses how to operationalize and make sense of seemingly chaotic or banal production processes in a museum.

  15. Vehicular camera pedestrian detection research

    Science.gov (United States)

    Liu, Jiahui

    2018-03-01

    With the rapid development of science and technology, highway traffic and transportation have become much more convenient. At the same time, however, traffic safety accidents occur more and more frequently in China, so protecting the safety of people's lives and property while facilitating travel has become a top priority. Real-time, accurate information on pedestrians and the driving environment is obtained through a vehicular camera, which is used to detect and track moving targets ahead of the vehicle. This topic is popular in the domains of intelligent vehicle safety, autonomous navigation and traffic system research. Based on pedestrian video obtained by the vehicular camera, this paper studies pedestrian detection and tracking and the associated algorithms.

  16. Teasing Apart Complex Motions using VideoPoint

    Science.gov (United States)

    Fischer, Mark

    2002-10-01

    Using video analysis software such as VideoPoint, it is possible to explore the physics of any phenomenon that can be captured on videotape. The good news is that complex motions can be filmed and analyzed. The bad news is that the motions can become very complex very quickly. An example of such a complicated motion, the 2-dimensional motion of an object as filmed by a camera that is moving and rotating in the same plane will be discussed. Methods for extracting the desired object motion will be given as well as suggestions for shooting more easily analyzable video clips.

  17. Achievement of solid-state plasma fusion ('Cold-Fusion')

    International Nuclear Information System (INIS)

    Arata, Yoshiaki; Zhang, Yue-Chang

    1995-01-01

    Using a 'QMS' (Quadrupole Mass Spectrometer), the authors detected a significantly large amount (10²⁰-10²¹ cm⁻³) of helium (₂⁴He), which was concluded to have been produced by a deuterium nuclear reaction within a host solid. These results were found to be fully repeatable and supported the authors' proposition that solid state plasma fusion ('Cold Fusion') can be generated in energetic deuterium Strongly Coupled Plasma ('SC-plasma'). This fusion reaction is thought to be sustained by localized 'Latticequake' in a solid-state media with the deuterium density equivalent to that of the host solid. While exploring this basic proposition, the characteristic differences when compared with ultra high temperature-state plasma fusion ('Hot Fusion') are clarified. In general, the most essential reaction product in both types of the deuterium plasma fusion is considered to be helium, irrespective of the 'well-known and/or unknown reactions', which is stored within the solid-state medium in abundance as a 'Residual Product', but which generally can not enter into nor be released from the host solid at room temperature. Even measuring instruments with relatively poor sensitivity should be able to easily detect such residual helium. An absence of residual helium means that no nuclear fusion reaction has occurred, whereas its presence provides crucial evidence that nuclear fusion has, in fact, occurred in the solid. (author)

  18. Image Alignment for Multiple Camera High Dynamic Range Microscopy.

    Science.gov (United States)

    Eastwood, Brian S; Childs, Elisabeth C

    2012-01-09

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
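
    A hypothetical sketch of the recommended approach (matching feature descriptors extracted from radiant-power images, then warping one view onto the other) is given below using ORB features in OpenCV; the feature type, match count and RANSAC threshold are assumptions rather than the authors' exact choices.

      import cv2
      import numpy as np

      def align(ref_radiance, moving_radiance):
          """Estimate the homography aligning two radiant-power images.

          Inputs are float arrays (already converted from pixel values using the
          cameras' response curves); they are rescaled to 8 bit only for detection.
          """
          to8 = lambda im: cv2.normalize(im, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
          orb = cv2.ORB_create(2000)
          k1, d1 = orb.detectAndCompute(to8(ref_radiance), None)
          k2, d2 = orb.detectAndCompute(to8(moving_radiance), None)

          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

          src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(dst, src, cv2.RANSAC, 3.0)
          return H   # warp the moving image with cv2.warpPerspective before HDR fusion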

  19. Solid state nuclear track detection principles, methods and applications

    CERN Document Server

    Durrani, S A; ter Haar, D

    1987-01-01

    Solid State Nuclear Track Detection: Principles, Methods and Applications is the second book written by the authors after Nuclear Tracks in Solids: Principles and Applications. The book is meant as an introduction to the subject of solid state nuclear track detection. The text covers the interactions of charged particles with matter; the nature of the charged-particle track; the methodology and geometry of track etching; thermal fading of latent damage trails on tracks; the use of dielectric track recorders in particle identification; radiation dosimetry; and solid state nuclear track detecti

  20. An introduction to solid state diffusion

    CERN Document Server

    Borg, Richard J

    2012-01-01

    The energetics and mechanisms of diffusion control the kinetics of such diverse phenomena as the fabrication of semiconductors and superconductors, the tempering of steel, geological metamorphism, the precipitation hardening of nonferrous alloys and corrosion of metals and alloys. This work explains the fundamentals of diffusion in the solid state at a level suitable for upper-level undergraduate and beginning graduate students in materials science, metallurgy, mineralogy, and solid state physics and chemistry. A knowledge of physical chemistry such as is generally provided by a one-year under

  1. Content Area Vocabulary Videos in Multiple Contexts: A Pedagogical Tool

    Science.gov (United States)

    Webb, C. Lorraine; Kapavik, Robin Robinson

    2015-01-01

    The authors challenged pre-service teachers to digitally define a social studies or mathematical vocabulary term in multiple contexts using a digital video camera. The researchers sought to answer the following questions: 1. How will creating a video for instruction affect pre-service teachers' attitudes about teaching with technology, if at all?…

  2. The design of red-blue 3D video fusion system based on DM642

    Science.gov (United States)

    Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao

    2016-10-01

    Aiming at the uncertainty of traditional 3D video capture, including the camera focal lengths and the distance and angle parameters between the two cameras, a red-blue 3D video fusion system based on the DM642 hardware processing platform is designed with parallel optical axes. In view of the brightness reduction of traditional 3D video, a brightness enhancement algorithm based on human visual characteristics is proposed, together with a luminance component processing method based on the YCbCr color space. The BIOS real-time operating system is used to improve the real-time performance. The video processing circuit built around the DM642 enhances the brightness of the images, converts the video signals from YCbCr to RGB, extracts the R component from one camera and, synchronously, the G and B components from the other, and finally outputs the fused 3D images. Real-time adjustments such as translation and scaling of the two color components are realized through serial communication between the VC software and the BIOS. By adding the red and blue components, the system reduces the loss of the chrominance components and keeps the picture color saturation at more than 95% of the original. An optimized enhancement algorithm that reduces the amount of data fused during video processing is used to shorten the fusion time and improve the viewing experience. Experimental results show that the system can capture images at near distance, output red-blue 3D video and present a pleasing experience to audiences wearing red-blue glasses.
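
    The channel-extraction step described above (R from one camera, G and B from the other) can be sketched in a few lines of NumPy, as an editorial illustration only; the gain factor stands in for the paper's brightness-enhancement algorithm and is an assumption.

      import numpy as np

      def red_blue_fusion(left_rgb, right_rgb, gain=1.2):
          """Fuse two synchronized RGB frames into a red-blue (anaglyph) 3-D frame.

          The red channel is taken from the left camera and the green/blue channels
          from the right camera; `gain` is an assumed brightness-enhancement factor.
          """
          out = np.empty_like(left_rgb)
          out[..., 0] = left_rgb[..., 0]          # R from the left view
          out[..., 1:] = right_rgb[..., 1:]       # G, B from the right view
          return np.clip(out.astype(np.float32) * gain, 0, 255).astype(np.uint8)

      left = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
      right = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
      anaglyph = red_blue_fusion(left, right)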

  3. Longitudinal effects of violent video games on aggression in Japan and the United States.

    Science.gov (United States)

    Anderson, Craig A; Sakamoto, Akira; Gentile, Douglas A; Ihori, Nobuko; Shibuya, Akiko; Yukawa, Shintaro; Naito, Mayumi; Kobayashi, Kumiko

    2008-11-01

    Youth worldwide play violent video games many hours per week. Previous research suggests that such exposure can increase physical aggression. We tested whether high exposure to violent video games increases physical aggression over time in both high- (United States) and low- (Japan) violence cultures. We hypothesized that the amount of exposure to violent video games early in a school year would predict changes in physical aggressiveness assessed later in the school year, even after statistically controlling for gender and previous physical aggressiveness. In 3 independent samples, participants' video game habits and physically aggressive behavior tendencies were assessed at 2 points in time, separated by 3 to 6 months. One sample consisted of 181 Japanese junior high students ranging in age from 12 to 15 years. A second Japanese sample consisted of 1050 students ranging in age from 13 to 18 years. The third sample consisted of 364 United States 3rd-, 4th-, and 5th-graders ranging in age from 9 to 12 years. RESULTS: Habitual violent video game play early in the school year predicted later aggression, even after controlling for gender and previous aggressiveness in each sample. Those who played a lot of violent video games became relatively more physically aggressive. Multisample structural equation modeling revealed that this longitudinal effect was of a similar magnitude in the United States and Japan for similar-aged youth and was smaller (but still significant) in the sample that included older youth. These longitudinal results confirm earlier experimental and cross-sectional studies that had suggested that playing violent video games is a significant risk factor for later physically aggressive behavior and that this violent video game effect on youth generalizes across very different cultures. As a whole, the research strongly suggests reducing the exposure of youth to this risk factor.

  4. 3D-Printing Electrolytes for Solid-State Batteries.

    Science.gov (United States)

    McOwen, Dennis W; Xu, Shaomao; Gong, Yunhui; Wen, Yang; Godbey, Griffin L; Gritton, Jack E; Hamann, Tanner R; Dai, Jiaqi; Hitz, Gregory T; Hu, Liangbing; Wachsman, Eric D

    2018-05-01

    Solid-state batteries have many enticing advantages in terms of safety and stability, but the solid electrolytes upon which these batteries are based typically lead to high cell resistance. Both components of the resistance (interfacial, due to poor contact with electrolytes, and bulk, due to a thick electrolyte) are a result of the rudimentary manufacturing capabilities that exist for solid-state electrolytes. In general, solid electrolytes are studied as flat pellets with planar interfaces, which minimizes interfacial contact area. Here, multiple ink formulations are developed that enable 3D printing of unique solid electrolyte microstructures with varying properties. These inks are used to 3D-print a variety of patterns, which are then sintered to reveal thin, nonplanar, intricate architectures composed only of Li7La3Zr2O12 solid electrolyte. Using these 3D-printing ink formulations to further study and optimize electrolyte structure could lead to solid-state batteries with dramatically lower full cell resistance and higher energy and power density. In addition, the reported ink compositions could be used as a model recipe for other solid electrolyte or ceramic inks, perhaps enabling 3D printing in related fields. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Analysis of Camera Arrays Applicable to the Internet of Things.

    Science.gov (United States)

    Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing

    2016-03-22

    The Internet of Things is built based on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on the camera modeling and analysis, which is very important for stereo display and helps with viewing. We model two kinds of cameras, a parallel and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are used in various applications and analyzed in the research work, there are few discussions on the comparison of them. Therefore, we make a detailed analysis about their performance over different shooting distances. From our analysis, we find that the threshold of shooting distance for converged cameras is 7 m. In addition, we design a camera array in our work that can be used as a parallel camera array, as well as a converged camera array and take some images and videos with it to identify the threshold.

  6. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yasaman Samei

    2008-08-01

    Full Text Available Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, and also the availability of CMOS cameras, microphones and small-scale array sensors, which may ubiquitously capture multimedia content from the field, have fostered the development of low-cost, limited-resources Wireless Video-based Sensor Networks (WVSN). With regard to the constraints of video-based sensor nodes and wireless sensor networks, a supporting video stream is not easy to implement with the present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN called Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture influences three layers of the communication protocol stack and considers wireless video sensor nodes constraints like limited processing and energy resources while video quality is preserved in the receiver side. Application, transport, and network layers are the layers in which the compression protocol, transport protocol, and routing protocol are proposed respectively; also a dropping scheme is presented in the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.

  7. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks.

    Science.gov (United States)

    Aghdasi, Hadi S; Abbaspour, Maghsoud; Moghadam, Mohsen Ebrahimi; Samei, Yasaman

    2008-08-04

    Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, and also the availability of CMOS cameras, microphones and small-scale array sensors, which may ubiquitously capture multimedia content from the field, have fostered the development of low-cost, limited-resources Wireless Video-based Sensor Networks (WVSN). With regard to the constraints of video-based sensor nodes and wireless sensor networks, a supporting video stream is not easy to implement with the present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN called Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture influences three layers of the communication protocol stack and considers wireless video sensor nodes constraints like limited processing and energy resources while video quality is preserved in the receiver side. Application, transport, and network layers are the layers in which the compression protocol, transport protocol, and routing protocol are proposed respectively; also a dropping scheme is presented in the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.

  8. Accurate estimation of camera shot noise in the real-time

    Science.gov (United States)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

    Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and various other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and video cameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components: temporal noise includes the random component, while spatial noise includes the pattern component. Temporal noise can be further divided into signal-dependent shot noise and signal-independent dark temporal noise. For measurement of camera noise characteristics, the most widely used methods are standards (for example, EMVA Standard 1288), which allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measurement of temporal noise of photo- and video cameras, based on the automatic segmentation of nonuniform targets (ASNT). Only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated shot and dark temporal noises of cameras in real time using the modified ASNT method. Estimation was performed for the following cameras: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12 bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12 bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10 bit ADC) and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8 bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time for registering and processing the frames used for temporal noise estimation was measured: using a standard computer, frames were registered and processed in a fraction of a second to several seconds only. Also the
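
    The two-frame estimation idea can be illustrated as below: for a static scene, the variance of the difference of two identically exposed frames is twice the temporal noise variance. The intensity binning here is a rough stand-in for the ASNT segmentation of a nonuniform target, and the bin count and pixel-count cutoff are assumptions.

      import numpy as np

      def temporal_noise_from_two_frames(f1, f2, n_bins=32):
          """Estimate temporal noise versus signal level from two frames of a static scene.

          sigma^2 = var(f1 - f2) / 2 within each intensity bin of the mean frame.
          """
          f1 = f1.astype(np.float64)
          f2 = f2.astype(np.float64)
          mean = 0.5 * (f1 + f2)
          diff = f1 - f2
          edges = np.linspace(mean.min(), mean.max(), n_bins + 1)
          signal, variance = [], []
          for lo, hi in zip(edges[:-1], edges[1:]):
              sel = (mean >= lo) & (mean < hi)
              if sel.sum() > 100:                      # skip sparsely populated bins
                  signal.append(mean[sel].mean())
                  variance.append(diff[sel].var() / 2.0)
          return np.array(signal), np.array(variance)  # shot noise: variance ~ gain * signal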

  9. A zwitterionic gel electrolyte for efficient solid-state supercapacitors

    Science.gov (United States)

    Peng, Xu; Liu, Huili; Yin, Qin; Wu, Junchi; Chen, Pengzuo; Zhang, Guangzhao; Liu, Guangming; Wu, Changzheng; Xie, Yi

    2016-01-01

    Gel electrolytes have attracted increasing attention for solid-state supercapacitors. An ideal gel electrolyte usually requires a combination of the advantages of high ion migration rate, reasonable mechanical strength and robust water retention ability at the solid state for ensuring excellent work durability. Here we report a zwitterionic gel electrolyte that successfully brings the synergic advantages of robust water retention ability and ion migration channels, manifesting in superior electrochemical performance. When applying the zwitterionic gel electrolyte, our graphene-based solid-state supercapacitor reaches a volume capacitance of 300.8 F cm−3 at 0.8 A cm−3 with a rate capability of only 14.9% capacitance loss as the current density increases from 0.8 to 20 A cm−3, representing the best value among the previously reported graphene-based solid-state supercapacitors, to the best of our knowledge. We anticipate that the zwitterionic gel electrolyte may be developed as a gel electrolyte in solid-state supercapacitors. PMID:27225484

  10. A television/still camera with common optical system for reactor inspection

    International Nuclear Information System (INIS)

    Hughes, G.; McBane, P.

    1976-01-01

    One of the problems of reactor inspection is to obtain permanent high quality records. Video recordings provide a record of poor quality but known content. Still cameras can be used but the frame content is not predictable. Efforts have been made to combine T.V. viewing to align a still camera but a simple combination does not provide the same frame size. The necessity to preset the still camera controls severely restricts the flexibility of operation. A camera has, therefore, been designed which allows a search operation using the T.V. system. When an anomaly is found the still camera controls can be remotely set, an exact record obtained and the search operation continued without removal from the reactor. An application of this camera in the environment of the blanket gas region above the sodium region in PFR at 150 °C is described

  11. Solid State Theory An Introduction

    CERN Document Server

    Rössler, Ulrich

    2009-01-01

    Solid-State Theory - An Introduction is a textbook for graduate students of physics and material sciences. It stands in the tradition of older textbooks on this subject but takes up new developments in theoretical concepts and materials which are connected with such path breaking discoveries as the Quantum-Hall Effects, the high-Tc superconductors, and the low-dimensional systems realized in solids. Thus besides providing the fundamental concepts to describe the physics of electrons and ions of which the solid consists, including their interactions and the interaction with light, the book casts a bridge to the experimental facts and opens the view into current research fields.

  12. Self-healing liquid/solid state battery

    Science.gov (United States)

    Burke, Paul J.; Chung, Brice H.V.; Phadke, Satyajit R.; Ning, Xiaohui; Sadoway, Donald R.

    2018-02-27

    A battery system that exchanges energy with an external device is provided. The battery system includes a positive electrode having a first metal or alloy, a negative electrode having a second metal or alloy, and an electrolyte including a salt of the second metal or alloy. The positive electrode, the negative electrode, and the electrolyte are in a liquid phase at an operating temperature during at least one portion of operation. The positive electrode is entirely in a liquid phase in one charged state and includes a solid phase in another charged state. The solid phase of the positive electrode includes a solid intermetallic formed by the first and the second metals or alloys. Methods of storing electrical energy from an external circuit using such a battery system are also provided.

  13. Harwell's atomic, molecular and solid state computer programs

    International Nuclear Information System (INIS)

    Harker, A.H.

    1976-02-01

    This document is intended to introduce the computational facilities available in the fields of atomic, molecular and solid state theory on the IBM370/165 at Harwell. The programs have all been implemented and thoroughly tested by the Theory of Solid State Materials Group. (author)

  14. A video Hartmann wavefront diagnostic that incorporates a monolithic microlens array

    International Nuclear Information System (INIS)

    Toeppen, J.S.; Bliss, E.S.; Long, T.W.; Salmon, J.T.

    1991-07-01

    We have developed a video Hartmann wavefront sensor that incorporates a monolithic array of microlenses as the focusing elements. The sensor uses a monolithic array of photofabricated lenslets. Combined with a video processor, this system reveals local gradients of the wavefront at a video frame rate of 30 Hz. Higher bandwidth is easily attainable with a camera and video processor that have faster frame rates. When used with a temporal filter, the reconstructed wavefront error is less than 1/10th wave
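
    A minimal sketch of the underlying computation (spot centroids per lenslet, converted to local wavefront gradients) is given below; the window size and lenslet focal length are assumed system parameters, and the video-rate pipeline and temporal filter described above are not represented.

      import numpy as np

      def spot_gradients(frame, grid, window=16, focal_length_px=2000.0):
          """Local wavefront gradients from one video Hartmann frame.

          frame: 2-D intensity image; grid: list of nominal integer (x, y) spot centres,
          one per lenslet; window and focal_length_px are assumed system parameters.
          The gradient is the spot-centroid displacement divided by the focal length.
          """
          gradients = []
          half = window // 2
          for (x0, y0) in grid:
              sub = frame[y0 - half:y0 + half, x0 - half:x0 + half].astype(np.float64)
              total = sub.sum()
              ys, xs = np.mgrid[0:sub.shape[0], 0:sub.shape[1]]
              cx = (xs * sub).sum() / total - (half - 0.5)   # x displacement from window centre
              cy = (ys * sub).sum() / total - (half - 0.5)   # y displacement from window centre
              gradients.append((cx / focal_length_px, cy / focal_length_px))
          return np.array(gradients)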

  15. Real-time multi-camera video acquisition and processing platform for ADAS

    Science.gov (United States)

    Saponara, Sergio

    2016-04-01

    The paper presents the design of a real-time and low-cost embedded system for image acquisition and processing in Advanced Driver Assisted Systems (ADAS). The system adopts a multi-camera architecture to provide a panoramic view of the objects surrounding the vehicle. Fish-eye lenses are used to achieve a large Field of View (FOV). Since they introduce radial distortion of the images projected on the sensors, a real-time algorithm for their correction is also implemented in a pre-processor. An FPGA-based hardware implementation, re-using IP macrocells for several ADAS algorithms, allows for real-time processing of input streams from VGA automotive CMOS cameras.
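
    The radial-distortion correction stage can be illustrated in software with OpenCV's fisheye model, as sketched below; the intrinsic matrix and distortion coefficients are placeholders that would normally come from an offline calibration of each lens, and the FPGA pre-processor itself is not represented.

      import cv2
      import numpy as np

      # Assumed intrinsics and fish-eye distortion coefficients for one VGA camera.
      K = np.array([[300.0, 0.0, 320.0],
                    [0.0, 300.0, 240.0],
                    [0.0, 0.0, 1.0]])
      D = np.array([-0.3, 0.1, 0.0, 0.0]).reshape(4, 1)

      # Precompute the remap tables once; they are reused for every incoming frame.
      map1, map2 = cv2.fisheye.initUndistortRectifyMap(
          K, D, np.eye(3), K, (640, 480), cv2.CV_16SC2)

      def correct(frame):
          """Remove radial distortion from one camera stream (per-frame remap)."""
          return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)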

  16. Cross-relaxation solid state lasers

    International Nuclear Information System (INIS)

    Antipenko, B.M.

    1989-01-01

    Cross-relaxation functional diagrams provide a high quantum efficiency for pumping bands of solid state laser media and a low waste heat. A large number of the cross-relaxation mechanisms for the decay of rare earth excited states in crystals have been investigated. These investigations have been a starting-point for the development of cross-relaxation solid state lasers. For example, cross-relaxation interactions have been used for the development of laser action in LiYF4:Gd-Tb. These interactions are important elements of the functional diagrams of the 2 μm Ho-doped media sensitized with Er and Tm and the 3 μm Er-doped media. Recently, new efficient 2 μm laser media with cross-relaxation pumping diagrams have been developed. Physical aspects of these media are the subject of this paper. A new concept of the Er-doped medium, sensitized with Yb, is illustrated

  17. Energy-saving approaches to solid state street lighting

    Science.gov (United States)

    Vitta, Pranciškus; Stanikūnas, Rytis; Tuzikas, Arūnas; Reklaitis, Ignas; Stonkus, Andrius; Petrulis, Andrius; Vaitkevičius, Henrikas; Žukauskas, Artūras

    2011-10-01

    We consider the energy-saving potential of solid-state street lighting due to improved visual performance, weather-sensitive luminance control and tracking of pedestrians and vehicles. A psychophysical experiment on the measurement of reaction time with a decision making task was performed under mesopic levels of illumination provided by a high-pressure sodium (HPS) lamp and different solid-state light sources, such as daylight and warm-white phosphor-converted light-emitting diodes (LEDs) and red-green-blue LED clusters. The results of the experiment imply that photopic luminances of the road surface provided by solid-state light sources with an optimized spectral power distribution might be up to twice as low as those provided by the HPS lamp. Dynamical correction of road luminance against road surface conditions typical of the Lithuanian climate was estimated to save about 20% of energy in comparison with constant-level illumination. The estimated energy savings due to the tracking of pedestrians and vehicles amount to at least 25%, with the cumulative effect of intelligent control of at least 40%. A solid-state street lighting system with intelligent control was demonstrated using a 300 m long test ground consisting of 10 solid-state street luminaires, a meteorological station and a microwave motion sensor network operated via power line communication.

  18. Event detection intelligent camera development

    International Nuclear Information System (INIS)

    Szappanos, A.; Kocsis, G.; Molnar, A.; Sarkozi, J.; Zoletnik, S.

    2008-01-01

A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed for the video diagnostics of the W-7X stellarator, which consists of 10 distinct and standalone measurement channels, each holding a camera. Different operation modes will be implemented for both continuous and triggered readout. Hardware-level trigger signals will be generated from real-time image processing algorithms optimized for digital signal processor (DSP) and field programmable gate array (FPGA) architectures. At full resolution a camera sends 12-bit sampled 1280 x 1024 pixel frames at 444 fps, which amounts to about 1.43 terabytes over half an hour. Analysing such a huge amount of data is time consuming and computationally complex; we plan to overcome this problem with EDICAM's preprocessing concepts. The EDICAM camera system integrates all the advantages of CMOS sensor chip technology and fast network connections. EDICAM is built up from three modules with two interfaces: a sensor module (SM) with reduced hardware and functional elements, giving it a small and compact size and robust operation in a harmful environment; an image processing and control unit (IPCU) module that handles all user-predefined events and runs image processing algorithms to generate trigger signals; and a 10 Gigabit Ethernet compatible image readout card that functions as the network interface for the PC. In this contribution all the concepts of EDICAM and the functions of the distinct modules are described
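
A quick back-of-the-envelope check, sketched below in Python under the assumption that the 12-bit samples are stored without padding, reproduces the quoted data volume (it comes out to about 1.43 tebibytes per half hour).

```python
# Back-of-the-envelope check of the EDICAM data volume quoted above,
# assuming 12-bit samples stored without padding.
pixels_per_frame = 1280 * 1024
bytes_per_second = pixels_per_frame * 12 / 8 * 444       # ~0.87 GB/s
total_bytes = bytes_per_second * 30 * 60                  # half an hour of streaming
print(f"{bytes_per_second / 1e9:.2f} GB/s, {total_bytes / 2**40:.2f} TiB per half hour")
# -> 0.87 GB/s and about 1.43 TiB, consistent with the figure in the abstract
```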

  19. Solid-state characterization of the HIV protease inhibitor

    CERN Document Server

    Kim, Y A

    2002-01-01

LB71350, (3S, 4R)-Epoxy-(5S)-[[N-(1-methylethoxy) carbonyl]-3-(methylsulfonyl)-L-valinyl]amino]-N-[2-methyl-(1R)-[(phenyl)carbonyl]propyl-6-phenylhexanamide, is a novel HIV protease inhibitor. Its equilibrium solubility at room temperature was less than 40 μg/mL. It was speculated that the low aqueous solubility might be due to a high crystalline lattice energy resulting from intermolecular hydrogen bonds. The present study was carried out to learn the solid-state characteristics of LB71350 using analytical methods such as NMR, FT-IR and XRD. ¹³C solid-state NMR, solution NMR, and FT-IR spectra of the various solid forms of LB71350 were used to identify the conformation and structure of the solid forms. The chemical shifts of the ¹³C solid-state NMR spectra suggest that the crystalline form might have three intermolecular hydrogen bonds between monomers.

  20. Integrated Interface Strategy toward Room Temperature Solid-State Lithium Batteries.

    Science.gov (United States)

    Ju, Jiangwei; Wang, Yantao; Chen, Bingbing; Ma, Jun; Dong, Shanmu; Chai, Jingchao; Qu, Hongtao; Cui, Longfei; Wu, Xiuxiu; Cui, Guanglei

    2018-04-25

Solid-state lithium batteries have drawn wide attention to address the safety issues of power batteries. However, the development of solid-state lithium batteries is substantially limited by the poor electrochemical performance originating from the rigid interface between solid electrodes and solid-state electrolytes. In this work, a composite of poly(vinyl carbonate) and Li10SnP2S12 solid-state electrolyte is fabricated successfully via in situ polymerization to improve the rigid interface issues. The composite electrolyte presents a considerable room temperature conductivity of 0.2 mS cm⁻¹, an electrochemical window exceeding 4.5 V, and a Li⁺ transport number of 0.6. It is demonstrated that the solid-state lithium metal battery LiFe0.2Mn0.8PO4 (LFMP)/composite electrolyte/Li can deliver a high capacity of 130 mA h g⁻¹ with considerable capacity retention of 88% and Coulombic efficiency exceeding 99% after 140 cycles at a rate of 0.5 C at room temperature. The superior electrochemical performance can be ascribed to the good compatibility of the composite electrolyte with Li metal and the integrated compatible interface between the solid electrodes and the composite electrolyte engineered by in situ polymerization, which leads to a significant interfacial impedance decrease from 1292 to 213 Ω cm² in solid-state Li-Li symmetrical cells. This work provides a vital reference for improving the interface compatibility of room temperature solid-state lithium batteries.

  1. Solid-state laser engineering

    CERN Document Server

    Koechner, Walter

    1992-01-01

    This book is written from an industrial perspective and provides a detailed discussion of solid-state lasers, their characteristics, design and construction. Emphasis is placed on engineering and practical considerations. The book is aimed mainly at the practicing scientist or engineer who is interested in the design or use of solid-state lasers, but the comprehensive treatment of the subject will make the work useful also to students of laser physics who seek to supplement their theoretical knowledge with engineering information. In order to present the subject as clearly as possible, phenomenological descriptions using models have been used rather than abstract mathematical descriptions. This results in a simplified presentation. The descriptions are enhanced by the inclusion of numerical and technical data, tables and graphs. This new edition has been updated and revised to take account of important new developments, concepts, and technologies that have emerged since the publication of the first and second...

  2. Advanced methods for image registration applied to JET videos

    Energy Technology Data Exchange (ETDEWEB)

    Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Murari, Andrea [Consorzio RFX, Associazione EURATOM-ENEA per la Fusione, Padova (Italy); Gelfusa, Michela [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); Tiseanu, Ion; Zoita, Vasile [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Arnoux, Gilles [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon (United Kingdom)

    2015-10-15

Graphical abstract: - Highlights: • Development of an image registration method for the JET IR and fast visible cameras. • Method based on SIFT descriptors and the coherent point drift point-set registration technique. • Method able to deal with extremely noisy and very low luminosity images. • Computation time compatible with inter-shot analysis. - Abstract: The last years have witnessed a significant increase in the use of digital cameras on JET. They are routinely applied for imaging in the IR and visible spectral regions. One of the main technical difficulties in interpreting the data of camera based diagnostics is the presence of movements of the field of view. Small movements occur due to machine shaking during normal pulses while large ones may arise during disruptions. Some cameras show a correlation of image movement with change of magnetic field strength. To derive unaltered information from the videos and to allow correct interpretation, an image registration method based on highly distinctive scale-invariant feature transform (SIFT) descriptors and on the coherent point drift (CPD) point-set registration technique has been developed. The algorithm incorporates a complex procedure for rejecting outliers. The method has been applied for vibration correction to videos collected by the JET wide angle infrared camera and for the correction of spurious rotations in the case of the JET fast visible camera (which is equipped with an image intensifier). The method has proved able to deal with the images provided by this camera, which are frequently characterized by low contrast and a high level of blurring and noise.
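
The registration pipeline described above (SIFT keypoints, point-set alignment, outlier rejection) can be sketched in a few lines of Python with OpenCV. The sketch below substitutes a Lowe ratio test and RANSAC for the paper's coherent point drift and custom outlier-rejection stages, since CPD is not part of OpenCV; it is therefore only an approximation of the published method.

```python
# Minimal SIFT-based frame registration sketch (not the authors' exact
# pipeline): match SIFT keypoints between a reference frame and the
# current frame, reject outliers with a ratio test plus RANSAC, and warp
# the frame back onto the reference to cancel vibration or rotation.
import cv2
import numpy as np

def register_frame(reference, frame):
    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(reference, None)
    kp_f, des_f = sift.detectAndCompute(frame, None)

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_f, des_r, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test

    src = np.float32([kp_f[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # A similarity transform (rotation + translation + scale) is enough for
    # vibration and spurious-rotation correction.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = reference.shape[:2]
    return cv2.warpAffine(frame, M, (w, h))
```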

  3. Solid state nuclear magnetic resonance of fossil fuels

    International Nuclear Information System (INIS)

    Axelson, D.E.

    1985-01-01

    This book contains the following chapters: Principles of solid state NMR; Relaxation processes: Introduction to pulse sequences; Quantitative analysis; Removal of artifacts from CPMAS FT experiments; Line broadening mechanisms; Resolution enhancement of solid state NMR spectra; and /sup 13/C CPMAS NMR of fossil fuels--general applications

  4. Real-time unmanned aircraft systems surveillance video mosaicking using GPU

    Science.gov (United States)

    Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.

    2010-04-01

Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, and monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect the features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. All this processing takes a great deal of resources from the CPU, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft, coming from infrared (IR) and electro-optical (EO) cameras. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture rate of 30 frames per second is feasible.
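
The per-frame steps named in the abstract (SIFT features, homography estimation, warping, blending) can be illustrated with a CPU-only Python/OpenCV sketch; the paper's actual contribution, the GPU implementation, is not reproduced here, and the blend below is deliberately naive.

```python
# CPU-only sketch of the mosaicking steps listed above: SIFT features,
# frame-to-frame homography, warping, and a crude paste-style blend.
import cv2
import numpy as np

def pairwise_homography(ref_img, new_img):
    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(ref_img, None)
    kp_n, des_n = sift.detectAndCompute(new_img, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_n, des_r, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]
    src = np.float32([kp_n[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

def paste_into_mosaic(mosaic, new_img, H):
    # Warp the incoming frame into mosaic coordinates and overwrite the
    # non-empty pixels; feathered blending would give nicer seams.
    warped = cv2.warpPerspective(new_img, H, (mosaic.shape[1], mosaic.shape[0]))
    mask = warped.sum(axis=2) > 0
    mosaic[mask] = warped[mask]
    return mosaic
```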

  5. Principal axis-based correspondence between multiple cameras for people tracking.

    Science.gov (United States)

    Hu, Weiming; Hu, Min; Zhou, Xue; Tan, Tieniu; Lou, Jianguang; Maybank, Steve

    2006-04-01

Visual surveillance using multiple cameras has attracted increasing interest in recent years. Correspondence between multiple cameras is one of the most important and basic problems that visual surveillance with multiple cameras raises. In this paper, we propose a simple and robust method, based on principal axes of people, to match people across multiple cameras. The correspondence likelihood reflecting the similarity of pairs of principal axes of people is constructed according to the relationship between "ground-points" of people detected in each camera view and the intersections of principal axes detected in different camera views and transformed to the same view. Our method has the following desirable properties: 1) Camera calibration is not needed. 2) Accurate motion detection and segmentation are less critical due to the robustness of the principal axis-based feature to noise. 3) Based on the fused data derived from correspondence results, positions of people in each camera view can be accurately located even when the people are partially occluded in all views. The experimental results on several real video sequences from outdoor environments have demonstrated the effectiveness, efficiency, and robustness of our method.
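
As a rough illustration of the per-camera feature used above, the following Python sketch extracts the principal axis of a foreground blob from its pixel coordinates via PCA; the cross-camera step (ground points and the view-to-view transformation) is not reproduced, and the function name is ours, not the authors'.

```python
# Illustrative extraction of a person's principal axis from a binary
# foreground mask: PCA of the blob's pixel coordinates gives the axis
# direction; the centroid anchors it in the image.
import numpy as np

def principal_axis(mask):
    """mask: HxW binary image, nonzero pixels belong to one detected person."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, np.argmax(eigvals)]   # unit vector of largest variance
    return centroid, direction
```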

  6. Using the OOI Cabled Array HD Camera to Explore Geophysical and Oceanographic Problems at Axial Seamount

    Science.gov (United States)

    Crone, T. J.; Knuth, F.; Marburg, A.

    2016-12-01

    A broad array of Earth science problems can be investigated using high-definition video imagery from the seafloor, ranging from those that are geological and geophysical in nature, to those that are biological and water-column related. A high-definition video camera was installed as part of the Ocean Observatory Initiative's core instrument suite on the Cabled Array, a real-time fiber optic data and power system that stretches from the Oregon Coast to Axial Seamount on the Juan de Fuca Ridge. This camera runs a 14-minute pan-tilt-zoom routine 8 times per day, focusing on locations of scientific interest on and near the Mushroom vent in the ASHES hydrothermal field inside the Axial caldera. The system produces 13 GB of lossless HD video every 3 hours, and at the time of this writing it has generated 2100 recordings totaling 28.5 TB since it began streaming data into the OOI archive in August of 2015. Because of the large size of this dataset, downloading the entirety of the video for long timescale investigations is not practical. We are developing a set of user-side tools for downloading single frames and frame ranges from the OOI HD camera raw data archive to aid users interested in using these data for their research. We use these tools to download about one year's worth of partial frame sets to investigate several questions regarding the hydrothermal system at ASHES, including the variability of bacterial "floc" in the water-column, and changes in high temperature fluid fluxes using optical flow techniques. We show that while these user-side tools can facilitate rudimentary scientific investigations using the HD camera data, a server-side computing environment that allows users to explore this dataset without downloading any raw video will be required for more advanced investigations to flourish.
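
As an illustration of the kind of frame-based analysis mentioned above, the following Python/OpenCV sketch computes dense optical flow between two frames pulled from the archive; the file names are placeholders, and Farneback's method is just one common choice, not necessarily the technique used by the authors.

```python
# Illustrative dense optical-flow computation between two archived frames
# (file names are placeholders, not real OOI archive paths).
import cv2
import numpy as np

prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
speed = np.linalg.norm(flow, axis=2)   # apparent motion in pixels per frame
print("mean apparent motion:", speed.mean())
```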

  7. Measurement of the Dynamic Displacements of Railway Bridges Using Video Technology

    Directory of Open Access Journals (Sweden)

    Ribeiro Diogo

    2015-01-01

Full Text Available This article describes the development of a non-contact dynamic displacement measurement system for railway bridges based on video technology. The system, consisting of a high speed video camera, an optical lens, lighting lamps and a precision target, can perform measurements with high precision for distances from the camera to the target of up to 25 m, with acquisition frame rates ranging from 64 fps to 500 fps, and can be combined with other measurement systems, which facilitates its integration in structural health monitoring systems. The system’s performance was evaluated based on two tests, one in the laboratory and the other in the field. The laboratory test evaluated the performance of the system in measuring the displacement of a steel beam, subjected to a point load applied dynamically, for distances from the camera to the target between 3 m and 15 m. The field test evaluated the system’s performance in the dynamic measurement of the displacement of a point on the deck of a railway bridge, induced by passing trains at speeds between 160 km/h and 180 km/h, for distances from the camera to the target of up to 25 m. The results of both tests show very good agreement between the displacement measurements obtained with the video system and with an LVDT.
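
A generic version of the underlying tracking step, not the authors' implementation, can be sketched in Python/OpenCV: locate the precision target in each frame by normalized cross-correlation and convert the pixel coordinates to millimetres with a scale factor obtained from the known target dimensions.

```python
# Generic target-tracking sketch for video-based displacement measurement.
# mm_per_pixel would come from the known physical size of the target.
import cv2
import numpy as np

def target_position_mm(frame_gray, template_gray, mm_per_pixel):
    res = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)          # best-match corner (x, y) in pixels
    return np.array(max_loc, dtype=float) * mm_per_pixel

# Displacement time series relative to the first frame:
# positions = [target_position_mm(f, template, mm_per_pixel) for f in frames]
# displacement = np.array(positions) - positions[0]
```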

  8. Advances in Solid State Physics

    CERN Document Server

    Haug, Rolf

    2007-01-01

The present volume 46 of Advances in Solid State Physics contains the written versions of selected invited lectures from the spring meeting of the Arbeitskreis Festkörperphysik of the Deutsche Physikalische Gesellschaft, which was held from 27 to 31 March 2006 in Dresden, Germany. Many topical talks given at the numerous symposia are included. Most of these were organized collaboratively by several of the divisions of the Arbeitskreis. The topics range from zero-dimensional physics in quantum dots, molecules and nanoparticles, through one-dimensional physics in nanowires and 1D systems, to more applied subjects such as optoelectronics and materials science in thin films. The contributions span the whole width of solid-state physics from truly basic science to applications.

  9. Creating personalized memories from social events: Community-based support for multi-camera recordings of school concerts

    OpenAIRE

    Guimaraes R.L.; Cesar P.; Bulterman D.C.A.; Zsombori V.; Kegel I.

    2011-01-01

The wide availability of relatively high-quality cameras makes it easy for many users to capture video fragments of social events such as concerts, sports events or community gatherings. The wide availability of simple sharing tools makes it nearly as easy to upload individual fragments to on-line video sites. Current work on video mashups focuses on the creation of a video summary based on the characteristics of individual media fragments, but it fails to address the interpersona...

  10. Laser solid sampling for a solid-state-detector ICP emission spectrometer

    International Nuclear Information System (INIS)

    Noelte, J.; Moenke-Blankenburg, L.; Schumann, T.

    1994-01-01

Solid sampling with laser vaporization has been coupled to an ICP emission spectrometer with an Echelle optical system and a solid-state detector for the analysis of steel and soil samples. Pulsation of the vaporized material flow was compensated by real-time background correction and internal standardization, resulting in good accuracy and precision. (orig.)

  11. An extrapolation scheme for solid-state NMR chemical shift calculations

    Science.gov (United States)

    Nakajima, Takahito

    2017-06-01

Conventional quantum chemical and solid-state physics approaches face several problems in accurately calculating solid-state nuclear magnetic resonance (NMR) properties. We propose a reliable computational scheme for solid-state NMR chemical shifts using an extrapolation scheme that retains the advantages of these approaches but reduces their disadvantages. Our scheme can satisfactorily yield solid-state NMR magnetic shielding constants. The estimated values show only a small dependence on the low-level density functional theory calculation used in the extrapolation scheme. Thus, our approach is efficient because the rough calculation can be performed in the extrapolation scheme.

  12. Phosphate phosphors for solid-state lighting

    Energy Technology Data Exchange (ETDEWEB)

    Shinde, Kartik N. [N.S. Science and Arts College, Bhadrawati (India). Dept. of Physics; Swart, H.C. [University of the Orange Free State, Bloemfontein (South Africa). Dept. of Physics; Dhoble, S.J. [R.T.M. Nagpur Univ. (India). Dept. of Physics; Park, Kyeongsoon [Sejong Univ., Seoul (Korea, Republic of). Faculty of Nanotechnology and Advanced Materials Engineering

    2012-07-01

Essential information for students and researchers working towards new and more efficient solid-state lighting. Comprehensive survey based on the authors' long experience. Useful both for teaching and reference. The idea for this book arose out of the realization that, although excellent surveys and a phosphor handbook are available, there is no single source covering the area of phosphate-based phosphors, especially for the lamp industry. Moreover, as this field gets only limited attention in most general books on luminescence, there is a clear need for a book in which attention is specifically directed toward this rapidly growing field of solid-state lighting and its many applications. This book is aimed at providing a sound introduction to the synthesis and optical characterization of phosphate phosphors for undergraduate and graduate students as well as teachers and researchers. The book provides guidance through the multidisciplinary field of solid-state lighting, especially phosphate phosphors, for beginners, scientists and engineers from universities, research organizations, and especially industry. In order to make it useful for a wide audience, both fundamentals and applications are discussed together.

  13. A software framework for analysing solid-state MAS NMR data

    International Nuclear Information System (INIS)

    Stevens, Tim J.; Fogh, Rasmus H.; Boucher, Wayne; Higman, Victoria A.; Eisenmenger, Frank; Bardiaux, Benjamin; Rossum, Barth-Jan van; Oschkinat, Hartmut; Laue, Ernest D.

    2011-01-01

Solid-state magic-angle-spinning (MAS) NMR of proteins has undergone many rapid methodological developments in recent years, enabling detailed studies of protein structure, function and dynamics. Software development, however, has not kept pace with these advances and data analysis is mostly performed using tools developed for solution NMR which do not directly address solid-state specific issues. Here we present additions to the CcpNmr Analysis software package which enable easier identification of spinning side bands, straightforward analysis of double quantum spectra, automatic consideration of non-uniform labelling schemes, as well as extension of other existing features to the needs of solid-state MAS data. To underpin this, we have updated and extended the CCPN data model and experiment descriptions to include transfer types and nomenclature appropriate for solid-state NMR experiments, as well as a set of experiment prototypes covering the experiments commonly employed by solid-state MAS protein NMR spectroscopists. This work not only improves solid-state MAS NMR data analysis but provides a platform for anyone who uses the CCPN data model for programming, data transfer, or data archival involving solid-state MAS NMR data.

  14. Bring your own camera to the trap: An inexpensive, versatile, and portable triggering system tested on wild hummingbirds.

    Science.gov (United States)

    Rico-Guevara, Alejandro; Mickley, James

    2017-07-01

    The study of animals in the wild offers opportunities to collect relevant information on their natural behavior and abilities to perform ecologically relevant tasks. However, it also poses challenges such as accounting for observer effects, human sensory limitations, and the time intensiveness of this type of research. To meet these challenges, field biologists have deployed camera traps to remotely record animal behavior in the wild. Despite their ubiquity in research, many commercial camera traps have limitations, and the species and behavior of interest may present unique challenges. For example, no camera traps support high-speed video recording. We present a new and inexpensive camera trap system that increases versatility by separating the camera from the triggering mechanism. Our system design can pair with virtually any camera and allows for independent positioning of a variety of sensors, all while being low-cost, lightweight, weatherproof, and energy efficient. By using our specialized trigger and customized sensor configurations, many limitations of commercial camera traps can be overcome. We use this system to study hummingbird feeding behavior using high-speed video cameras to capture fast movements and multiple sensors placed away from the camera to detect small body sizes. While designed for hummingbirds, our application can be extended to any system where specialized camera or sensor features are required, or commercial camera traps are cost-prohibitive, allowing camera trap use in more research avenues and by more researchers.
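
To make the "separate trigger from camera" idea concrete, here is a purely illustrative Python sketch assuming hardware the paper does not prescribe: a Raspberry Pi running the gpiozero library, a PIR motion sensor on GPIO 4, and a relay on GPIO 17 wired to a camera's remote-shutter input. Pin numbers, wiring and components are assumptions for the example only.

```python
# Hypothetical camera-trap trigger loop (Raspberry Pi + gpiozero assumed):
# a PIR sensor fires a relay that closes the camera's remote-shutter contact.
from gpiozero import MotionSensor, DigitalOutputDevice
from signal import pause

pir = MotionSensor(4)                     # PIR sensor on GPIO 4 (assumed wiring)
shutter = DigitalOutputDevice(17)         # relay on GPIO 17 (assumed wiring)

def fire_camera():
    # Close the shutter contact for 0.2 s, once per detection.
    shutter.blink(on_time=0.2, n=1)

pir.when_motion = fire_camera             # arm the trap
pause()                                   # sleep forever, waiting for events
```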

  15. Review on solid electrolytes for all-solid-state lithium-ion batteries

    Science.gov (United States)

    Zheng, Feng; Kotobuki, Masashi; Song, Shufeng; Lai, Man On; Lu, Li

    2018-06-01

All-solid-state (ASS) lithium-ion batteries have attracted great attention due to their high safety and increased energy density. One of the key components in the ASS battery (ASSB) is the solid electrolyte, which determines the performance of the ASSB. Many types of solid electrolytes have been investigated in great detail in the past years, including NASICON-type, garnet-type, perovskite-type, LISICON-type, LiPON-type, Li3N-type, sulfide-type, argyrodite-type, anti-perovskite-type and many more. This paper aims to provide a comprehensive review of some typical types of key solid electrolytes and some ASSBs, and of gaps that should be resolved.

  16. Heterogeneous CPU-GPU moving targets detection for UAV video

    Science.gov (United States)

    Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan

    2017-07-01

Moving target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras carried by UAVs. Moving targets occupy only a small fraction of the pixels in the HD video taken by a UAV, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of the detection algorithm prevents it from running in real time at higher frame resolutions. Hence, to solve the problem of moving target detection in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. In order to achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in the HD video taken by the UAV, with an average processing time of 52.16 ms per frame, which is fast enough for the application.
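
The two stages named above, background registration followed by frame differencing, can be sketched on the CPU in Python/OpenCV; ORB features and a RANSAC affine fit are used here for brevity and are not necessarily the components chosen by the authors, whose contribution is the CPU-GPU partitioning.

```python
# CPU-only sketch: register the previous frame to the current one to cancel
# UAV-induced background motion, then difference the aligned frames to
# expose small moving targets.
import cv2
import numpy as np

def moving_target_mask(prev_gray, curr_gray, diff_thresh=25):
    orb = cv2.ORB_create(1000)
    kp_p, des_p = orb.detectAndCompute(prev_gray, None)
    kp_c, des_c = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_p, des_c)

    src = np.float32([kp_p[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_c[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)

    registered = cv2.warpAffine(prev_gray, M, curr_gray.shape[::-1])
    diff = cv2.absdiff(curr_gray, registered)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask   # nonzero pixels are moving-target candidates
```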

  17. INTEGRATION OF VIDEO IMAGES AND CAD WIREFRAMES FOR 3D OBJECT LOCALIZATION

    Directory of Open Access Journals (Sweden)

    R. A. Persad

    2012-07-01

Full Text Available The tracking of moving objects from single images has received widespread attention in photogrammetric computer vision and is considered to be at a state of maturity. This paper presents a model-driven solution for localizing moving objects detected from monocular, rotating and zooming video images in a 3D reference frame. To realize such a system, the recovery of 2D to 3D projection parameters is essential. Automatic estimation of these parameters is critical, particularly for pan-tilt-zoom (PTZ) surveillance cameras where parameters change spontaneously upon camera motion. In this work, an algorithm for automated parameter retrieval is proposed. This is achieved by matching linear features between incoming images from video sequences and simple geometric 3D CAD wireframe models of man-made structures. The feature matching schema uses a hypothesis-verify optimization framework referred to as LR-RANSAC. This novel method improves the computational efficiency of the matching process in comparison to the standard RANSAC robust estimator. To demonstrate the applicability and performance of the method, experiments have been performed on indoor and outdoor image sequences under varying conditions with lighting changes and occlusions. Reliability of the matching algorithm has been analyzed by comparing the automatically determined camera parameters with ground truth (GT). Dependability of the retrieved parameters for 3D localization has also been assessed by comparing the difference between 3D positions of moving image objects estimated using the LR-RANSAC-derived parameters and those computed using GT parameters.

  18. Laser Imaging Video Camera Sees Through Fire, Fog, Smoke

    Science.gov (United States)

    2015-01-01

    Under a series of SBIR contracts with Langley Research Center, inventor Richard Billmers refined a prototype for a laser imaging camera capable of seeing through fire, fog, smoke, and other obscurants. Now, Canton, Ohio-based Laser Imaging through Obscurants (LITO) Technologies Inc. is demonstrating the technology as a perimeter security system at Glenn Research Center and planning its future use in aviation, shipping, emergency response, and other fields.

  19. Precise X-ray and video overlay for augmented reality fluoroscopy.

    Science.gov (United States)

    Chen, Xin; Wang, Lejing; Fallavollita, Pascal; Navab, Nassir

    2013-01-01

The camera-augmented mobile C-arm (CamC) augments any mobile C-arm by a video camera and mirror construction and provides a co-registration of X-ray with video images. The accurate overlay between these images is crucial to high-quality surgical outcomes. In this work, we propose a practical solution that improves the overlay accuracy for any C-arm orientation by: (i) improving the existing CamC calibration, (ii) removing distortion effects, and (iii) accounting for the mechanical sagging of the C-arm gantry due to gravity. A planar phantom is constructed and placed at different distances to the image intensifier in order to obtain the optimal homography that co-registers X-ray and video with a minimum error. To alleviate distortion, both X-ray calibration based on an equidistant grid model and Zhang's camera calibration method are implemented for distortion correction. Lastly, the virtual detector plane (VDP) method is adapted and integrated to reduce errors due to the mechanical sagging of the C-arm gantry. The overlay errors are 0.38±0.06 mm when not correcting for distortion, 0.27±0.06 mm when applying Zhang's camera calibration, and 0.27±0.05 mm when applying X-ray calibration. Finally, when taking into account all angular and orbital rotations of the C-arm, as well as correcting for distortion, the overlay errors are 0.53±0.24 mm using VDP and 1.67±1.25 mm excluding VDP. The augmented reality fluoroscope achieves an accurate video and X-ray overlay when applying the optimal homography calculated from distortion correction using X-ray calibration together with the VDP.
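
The basic co-registration step, estimating a homography from corresponding marker positions detected in the X-ray and video images of a planar phantom and warping one image onto the other, can be sketched in Python/OpenCV; the distortion correction and virtual-detector-plane sag compensation described above are not reproduced here.

```python
# Sketch of the homography-based X-ray/video co-registration step only.
# pts_xray and pts_video are Nx2 arrays of corresponding marker positions
# detected in the two images of the planar phantom.
import cv2
import numpy as np

def overlay_xray_on_video(video_img, xray_img, pts_video, pts_xray, alpha=0.5):
    H, _ = cv2.findHomography(np.float32(pts_xray), np.float32(pts_video), cv2.RANSAC)
    warped = cv2.warpPerspective(xray_img, H, (video_img.shape[1], video_img.shape[0]))
    # Simple alpha blend of the warped X-ray onto the video frame.
    return cv2.addWeighted(video_img, 1.0 - alpha, warped, alpha, 0.0)
```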

  20. Design and Characterisation of Solid Electrolytes for All-Solid-State Lithium Batteries

    DEFF Research Database (Denmark)

    Sveinbjörnsson, Dadi Þorsteinn

    The development of all-solid-state lithium batteries, in which the currently used liquid electrolytes are substituted for solid electrolyte materials, could lead to safer batteries offering higher energy densities and longer cycle lifetimes. Designing suitable solid electrolytes with sufficient...... chemical and electrochemical stability, high lithium ion conduction and negligible electronic conduction remains a challenge. The highly lithium ion conducting LiBH4-LiI solid solution is a promising solid electrolyte material. Solid solutions with a LiI content of 6.25%-50% were synthesised by planetary......-rich microstructures during ball milling is found to significantly influence the conductivity of the samples. The long-range diffusion of lithium ions was measured using quasi-elastic neutron scattering. The solid solutions are found to exhibit two-dimensional conduction in the hexagonal plane of the crystal structure...

  1. High power diode pumped solid state lasers

    International Nuclear Information System (INIS)

    Solarz, R.; Albrecht, G.; Beach, R.; Comaskey, B.

    1992-01-01

Although operational for over twenty years, diode pumped solid state lasers have, for most of their existence, been limited to individual diodes pumping a tiny volume of active medium in an end pumped configuration. More recent years have witnessed the appearance of diode bars, packing around 100 diodes in a 1 cm bar, which have enabled end and side pumped small solid state lasers at the few Watt level of output. This paper describes the subsequent development, in which proper cooling and stacking of bars enables the fabrication of multi-kW average power diode pump arrays with irradiances of 1 kW/cm² peak and 250 W/cm² average pump power. Since typical conversion efficiencies from the diode light to the pumped laser output light are of order 30% or more, kW average power diode pumped solid state lasers are now possible

  2. Solid state physics principles and modern applications

    CERN Document Server

    Quinn, John J

    2018-01-01

    This book provides the basis for a two-semester graduate course on solid-state physics. The first half presents all the knowledge necessary for a one-semester survey of solid-state physics, but in greater depth than most introductory solid state physics courses. The second half includes most of the important research over the past half-century, covering both the fundamental principles and most recent advances. This new edition includes the latest developments in the treatment of strongly interacting two-dimensional electrons and discusses the generalization from small to larger systems. The book provides explanations in a class-tested tutorial style, and each chapter includes problems reviewing key concepts and calculations. The updated exercises and solutions enable students to become familiar with contemporary research activities, such as the electronic properties of massless fermions in graphene and topological insulators.

  3. A new video studio for CERN

    CERN Multimedia

    Anaïs Vernede

    2011-01-01

    On Monday, 14 February 2011 CERN's new video studio was inaugurated with a recording of "Spotlight on CERN", featuring an interview with the DG, Rolf Heuer.   CERN's new video studio. Almost all international organisations have a studio for their audiovisual communications, and now it's CERN’s turn to acquire such a facility. “In the past, we've made videos using the Globe audiovisual facilities and sometimes using the small photographic studio, which is equipped with simple temporary sets that aren’t really suitable for video,” explains Jacques Fichet, head of CERN‘s audiovisual service. Once the decision had been taken to create the new 100 square-metre video studio, the work took only five months to complete. The studio, located in Building 510, is equipped with a cyclorama (a continuous smooth white wall used as a background) measuring 3 m in height and 16 m in length, as well as a teleprompter, a rail-mounted camera dolly fo...

  4. Solid-state resistor for pulsed power machines

    Science.gov (United States)

    Stoltzfus, Brian; Savage, Mark E.; Hutsel, Brian Thomas; Fowler, William E.; MacRunnels, Keven Alan; Justus, David; Stygar, William A.

    2016-12-06

    A flexible solid-state resistor comprises a string of ceramic resistors that can be used to charge the capacitors of a linear transformer driver (LTD) used in a pulsed power machine. The solid-state resistor is able to absorb the energy of a switch prefire, thereby limiting LTD cavity damage, yet has a sufficiently low RC charge time to allow the capacitor to be recharged without disrupting the operation of the pulsed power machine.

  5. Shot Boundary Detection in Soccer Video using Twin-comparison Algorithm and Dominant Color Region

    Directory of Open Access Journals (Sweden)

    Matko Šarić

    2008-06-01

Full Text Available The first step in generic video processing is temporal segmentation, i.e. shot boundary detection. Camera shot transitions can be either abrupt (e.g. cuts) or gradual (e.g. fades, dissolves, wipes). Sports video is one of the most challenging domains for robust shot boundary detection. We proposed a shot boundary detection algorithm for soccer video based on the twin-comparison method and the absolute difference between frames in their ratios of dominant colored pixels to total number of pixels. With this approach the detection of gradual transitions is improved by decreasing the number of false positives caused by some camera operations. We also compared the performance of our algorithm with that of the standard twin-comparison method.
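
The twin-comparison idea can be sketched in a few lines of Python, operating on a per-frame dissimilarity signal d[i] (for example a histogram difference, to which the paper adds the difference in dominant-colour pixel ratios). A high threshold flags cuts; a low threshold opens a candidate gradual transition whose accumulated difference must eventually exceed the high threshold. The thresholds below are illustrative, not the paper's values.

```python
# Twin-comparison sketch over a per-frame dissimilarity signal d (list of
# floats, one value per consecutive frame pair). Thresholds are illustrative.
def twin_comparison(d, t_high=0.4, t_low=0.1):
    cuts, graduals = [], []
    i = 0
    while i < len(d):
        if d[i] >= t_high:
            cuts.append(i)                        # abrupt transition (cut)
        elif d[i] >= t_low:
            start, acc = i, 0.0
            while i < len(d) and d[i] >= t_low:   # accumulate over the candidate span
                acc += d[i]
                i += 1
            if acc >= t_high:
                graduals.append((start, i))       # gradual transition (fade/dissolve/wipe)
            continue                              # i already points past the span
        i += 1
    return cuts, graduals
```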

  6. Solid-state dependent dissolution and oral bioavailability of piroxicam in rats.

    Science.gov (United States)

    Lust, Andres; Laidmäe, Ivo; Palo, Mirja; Meos, Andres; Aaltonen, Jaakko; Veski, Peep; Heinämäki, Jyrki; Kogermann, Karin

    2013-01-23

The aim of this study was to gain understanding about the effects of different solid-state forms of the poorly water-soluble drug piroxicam on drug dissolution and oral bioavailability in rats. Three different solid-state forms of piroxicam were studied: anhydrate I (AH), monohydrate (MH), and amorphous form in solid dispersion (SD). In addition, the effect of a new polymeric excipient Soluplus® (polyvinyl caprolactam-polyvinyl acetate-polyethylene glycol graft copolymer) on oral bioavailability of piroxicam was investigated. Significant differences in the dissolution and oral bioavailability were found between the solid-state forms of piroxicam. Amorphous piroxicam in SD showed the fastest dissolution in vitro and a solid-state transformation to MH in the dissolution medium. Despite the presence of solid-state transformation, SD exhibited the highest rate and extent of oral absorption in rats. Oral bioavailability of the other two solid-state forms decreased in the order AH and MH. The use of Soluplus® was found to enhance the dissolution and oral bioavailability of piroxicam in rats. The present study shows the importance of solid-state form selection for oral bioavailability of a poorly water-soluble drug. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Use of video taping during simulator training

    International Nuclear Information System (INIS)

    Helton, M.; Young, P.

    1987-01-01

The use of a video camera for training is not a new idea; video is used throughout the country for training in such areas as computers, car repair and music, and even in such non-technical areas as fishing. Reviewing a taped simulator training session will aid the student in his job performance regardless of the position he holds in his organization. If the student is to be examined on simulator performance, video will aid in this training in many different ways

  8. Automatic Traffic Data Collection under Varying Lighting and Temperature Conditions in Multimodal Environments: Thermal versus Visible Spectrum Video-Based Systems

    Directory of Open Access Journals (Sweden)

    Ting Fu

    2017-01-01

Full Text Available Vision-based monitoring systems using visible spectrum (regular) video cameras can complement or substitute conventional sensors and provide rich positional and classification data. Although new camera technologies, including thermal video sensors, may improve the performance of digital video-based sensors, their performance under various conditions has rarely been evaluated at multimodal facilities. The purpose of this research is to integrate existing computer vision methods for automated data collection and evaluate the detection, classification, and speed measurement performance of thermal video sensors under varying lighting and temperature conditions. Thermal and regular video data were collected simultaneously under different conditions across multiple sites. Although the regular video sensor narrowly outperformed the thermal sensor during daytime, the performance of the thermal sensor is significantly better for low visibility and shadow conditions, particularly for pedestrians and cyclists. Retraining the algorithm on thermal data yielded an improvement in the global accuracy of 48%. Thermal speed measurements were consistently more accurate than those from the regular video at daytime and nighttime. Thermal video is insensitive to lighting interference and pavement temperature, solves issues associated with visible light cameras for traffic data collection, and offers other benefits such as privacy, insensitivity to glare, storage space, and lower processing requirements.

  9. Development of diode-pumped medical solid-state lasers

    International Nuclear Information System (INIS)

    Kim, Cheol Jung; Kim, Min Suk

    2000-09-01

Two thirds of the human body consists of water, and the absorption of laser light by water is an important factor in medical laser treatment. Er medical lasers have been used in dermatology, ophthalmology and dental treatments because their emission has the highest absorption by water. However, the 2.9 μm Er laser cannot be transmitted through an optical fiber. On the other hand, the Tm laser can be transmitted through a fiber and also has very high absorption by water. Therefore, Tm lasers are used in ophthalmology and heart treatment, where fiber delivery is very important. Until now, mainly lamp-pumped solid-state lasers have been used in medical treatments, but they are being replaced with diode-pumped solid-state lasers, which are more compact and much easier to maintain. Following this trend, end-pumped Er and side-pumped Tm lasers have been developed, and an output power of 1 W was obtained for Er and Tm respectively.

  10. Development of diode-pumped medical solid-state lasers

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Cheol Jung; Kim, Min Suk

    2000-09-01

Two thirds of the human body consists of water, and the absorption of laser light by water is an important factor in medical laser treatment. Er medical lasers have been used in dermatology, ophthalmology and dental treatments because their emission has the highest absorption by water. However, the 2.9 μm Er laser cannot be transmitted through an optical fiber. On the other hand, the Tm laser can be transmitted through a fiber and also has very high absorption by water. Therefore, Tm lasers are used in ophthalmology and heart treatment, where fiber delivery is very important. Until now, mainly lamp-pumped solid-state lasers have been used in medical treatments, but they are being replaced with diode-pumped solid-state lasers, which are more compact and much easier to maintain. Following this trend, end-pumped Er and side-pumped Tm lasers have been developed, and an output power of 1 W was obtained for Er and Tm respectively.

  11. All solid-state SBS phase conjugate mirror

    Science.gov (United States)

    Dane, C.B.; Hackel, L.A.

    1999-03-09

A stimulated Brillouin scattering (SBS) phase conjugate laser mirror uses a solid-state nonlinear gain medium instead of the conventional liquid or high pressure gas medium. The concept has been effectively demonstrated using common optical-grade fused silica. An energy threshold of 2.5 mJ and a slope efficiency of over 90% were achieved, resulting in an overall energy reflectivity of >80% for 15 ns, 1 μm laser pulses. The use of solid-state materials is enabled by a multi-pass resonant architecture which suppresses transient fluctuations that would otherwise result in damage to the SBS medium. This all-solid-state phase conjugator is safer, more reliable, and more easily manufactured than prior art designs. It allows nonlinear wavefront correction to be implemented in industrial and defense laser systems whose operating environments would preclude the introduction of potentially hazardous liquids or high pressure gases. 8 figs.

  12. An unsupervised method for summarizing egocentric sport videos

    Science.gov (United States)

    Habibi Aghdam, Hamed; Jahani Heravi, Elnaz; Puig, Domenec

    2015-12-01

People are getting more interested in recording their sport activities using head-worn or hand-held cameras. This type of video, called egocentric sport video, has different motion and appearance patterns compared with life-logging videos. While a life-logging video can be defined in terms of well-defined human-object interactions, it is not trivial to describe egocentric sport videos using well-defined activities. For this reason, summarizing egocentric sport videos based on human-object interaction might fail to produce meaningful results. In this paper, we propose an unsupervised method for summarizing egocentric videos by identifying the key-frames of the video. Our method utilizes both appearance and motion information and it automatically finds the number of the key-frames. Our blind user study on the new dataset collected from YouTube shows that in 93.5% of cases, the users choose the proposed method as their first video summary choice. In addition, our method is within the top 2 choices of the users in 99% of studies.
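
A generic flavour of unsupervised key-frame selection, not the authors' algorithm, can be sketched by clustering per-frame appearance descriptors and keeping the frame nearest each cluster centre; the paper additionally exploits motion information and chooses the number of key-frames automatically, whereas the sketch below fixes k for simplicity.

```python
# Generic key-frame selection sketch: colour-histogram descriptors,
# k-means clustering, one representative frame per cluster.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def keyframes(frames, k=10):
    feats = []
    for f in frames:                      # frames: list of BGR images
        hist = cv2.calcHist([f], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
        cv2.normalize(hist, hist)
        feats.append(hist.flatten())
    feats = np.array(feats)

    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(feats)
    picks = []
    for c in range(k):
        idx = np.where(km.labels_ == c)[0]
        dist = np.linalg.norm(feats[idx] - km.cluster_centers_[c], axis=1)
        picks.append(int(idx[np.argmin(dist)]))
    return sorted(picks)                  # indices of selected key-frames
```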

  13. Declarative camera control for automatic cinematography

    Energy Technology Data Exchange (ETDEWEB)

    Christianson, D.B.; Anderson, S.E.; Li-wei He [Univ. of Washington, Seattle, WA (United States)] [and others

    1996-12-31

Animations generated by interactive 3D computer graphics applications are typically portrayed either from a particular character's point of view or from a small set of strategically-placed viewpoints. By ignoring camera placement, such applications fail to realize important storytelling capabilities that have been explored by cinematographers for many years. In this paper, we describe several of the principles of cinematography and show how they can be formalized into a declarative language, called the Declarative Camera Control Language (DCCL). We describe the application of DCCL within the context of a simple interactive video game and argue that DCCL represents cinematic knowledge at the same level of abstraction as expert directors by encoding 16 idioms from a film textbook. These idioms produce compelling animations, as demonstrated on the accompanying videotape.

  14. Solid state fermentation studies of citric acid production

    African Journals Online (AJOL)

    SERVER

    2008-03-04

    Mar 4, 2008 ... solid waste management, biomass energy conservation, production of high value products and little risk ... The carrier, sugarcane bagasse for solid state fermentation was procured from National Sugar Institute ... constant weight and designated as dry solid residue (DSR). The filtrate (consisting of biomass, ...

  15. Nanocrystalline spinel ferrites by solid state reaction route

    Indian Academy of Sciences (India)

    Wintec

    Nanocrystalline spinel ferrites by solid state reaction route. T K KUNDU* and S MISHRA. Department of Physics, Visva-Bharati, Santiniketan 731 235, India. Abstract. Nanostructured NiFe2O4, MnFe2O4 and (NiZn)Fe2O4 were synthesized by aliovalent ion doping using conventional solid-state reaction route. With the ...

  16. Neutral-beam performance analysis using a CCD camera

    International Nuclear Information System (INIS)

    Hill, D.N.; Allen, S.L.; Pincosy, P.A.

    1986-01-01

We have developed an optical diagnostic system suitable for characterizing the performance of energetic neutral beams. An absolutely calibrated CCD video camera is used to view the neutral beam as it passes through a relatively high pressure (10⁻⁵ Torr) region outside the neutralizer: collisional excitation of the fast deuterium atoms produces Hα emission (λ = 6561 Å) that is proportional to the local atomic current density, independent of the species mix of accelerated ions over the energy range 5 to 20 keV. Digital processing of the video signal provides profile and aiming information for beam optimization. 6 refs., 3 figs

  17. Research on IGBT solid state switch

    CERN Document Server

    Gan Kong Yin; Wang Xiao Feng; Wang Lang Ping; Wang Song Yan; Chu, P K; Wu Hong Chen

    2002-01-01

Experiments on an IGBT solid-state switch for an induction accelerator were carried out with two series-connected 1.2 kV, 75 A IGBTs (GA75TS120U). Static and dynamic balancing modules were built with metal oxide varistors, capacitors and diodes in order to suppress the over-voltage during IGBT turn-on and turn-off. Experimental results show that the IGBT solid-state switch works very stably under different conditions. It can output waveforms with a peak voltage of 1.8 kV, a rise time of 300 ns and a fall time of 1.64 μs into the loads. Simulation data obtained with OrCAD are in accord with the experimental results except for the rise time

  18. Research on IGBT solid state switch

    International Nuclear Information System (INIS)

    Gan Kongyin; Tang Baoyin; Wang Xiaofeng; Wang Langping; Wang Songyan; Wu Hongchen

    2002-01-01

Experiments on an IGBT solid-state switch for an induction accelerator were carried out with two series-connected 1.2 kV, 75 A IGBTs (GA75TS120U). Static and dynamic balancing modules were built with metal oxide varistors, capacitors and diodes in order to suppress the over-voltage during IGBT turn-on and turn-off. Experimental results show that the IGBT solid-state switch works very stably under different conditions. It can output waveforms with a peak voltage of 1.8 kV, a rise time of 300 ns and a fall time of 1.64 μs into the loads. Simulation data obtained with OrCAD are in accord with the experimental results except for the rise time

  19. Handbook of Applied Solid State Spectroscopy

    CERN Document Server

    Vij, D. R

    2006-01-01

    Solid-State spectroscopy is a burgeoning field with applications in many branches of science, including physics, chemistry, biosciences, surface science, and materials science. Handbook of Applied Solid-State Spectroscopy brings together in one volume information about various spectroscopic techniques that is currently scattered in the literature of these disciplines. This concise yet comprehensive volume covers theory and applications of a broad range of spectroscopies, including NMR, NQR, EPR/ESR, ENDOR, scanning tunneling, acoustic resonance, FTIR, auger electron emission, x-ray photoelectron emission, luminescence, and optical polarization, and more. Emphasis is placed on fundamentals and current methods and procedures, together with the latest applications and developments in the field.

  20. Physical Acoustics in the Solid State

    CERN Document Server

    Lüthi, B

    2006-01-01

    Suitable for researchers and graduate students in physics and material science, "Physical Acoustics in the Solid State" reviews the modern aspects in the field, including many experimental results, especially those involving ultrasonics. Practically all fields of solid-state physics are covered: metals, semiconductors, magnetism, superconductivity, different kinds of phase transitions, low-dimensional systems, and the quantum Hall effect. After a review of the relevant experimental techniques and an introduction to the theory of elasticity, emphasizing the symmetry aspects, applications in the various fields of condensed matter physics are presented. Also treated are Brillouin-scattering results and results from thermodynamic investigations, such as thermal expansion and specific heat.

  1. Physical Acoustics in the Solid State

    CERN Document Server

    Lüthi, Bruno

    2007-01-01

    Suitable for researchers and graduate students in physics and material science, "Physical Acoustics in the Solid State" reviews the modern aspects in the field, including many experimental results, especially those involving ultrasonics. Practically all fields of solid-state physics are covered: metals, semiconductors, magnetism, superconductivity, different kinds of phase transitions, low-dimensional systems, and the quantum Hall effect. After a review of the relevant experimental techniques and an introduction to the theory of elasticity, emphasizing the symmetry aspects, applications in the various fields of condensed matter physics are presented. Also treated are Brillouin-scattering results and results from thermodynamic investigations, such as thermal expansion and specific heat.

  2. HDR 192Ir source speed measurements using a high speed video camera

    International Nuclear Information System (INIS)

    Fonseca, Gabriel P.; Viana, Rodrigo S. S.; Yoriyaz, Hélio; Podesta, Mark; Rubo, Rodrigo A.; Sales, Camila P. de; Reniers, Brigitte; Verhaegen, Frank

    2015-01-01

Purpose: The dose delivered with an HDR ¹⁹²Ir afterloader can be separated into a dwell component, and a transit component resulting from the source movement. The transit component is directly dependent on the source speed profile and it is the goal of this study to measure accurate source speed profiles. Methods: A high speed video camera was used to record the movement of a ¹⁹²Ir source (Nucletron, an Elekta company, Stockholm, Sweden) for interdwell distances of 0.25–5 cm with dwell times of 0.1, 1, and 2 s. Transit dose distributions were calculated using a Monte Carlo code simulating the source movement. Results: The source stops at each dwell position, oscillating around the desired position for a duration up to (0.026 ± 0.005) s. The source speed profile shows variations between 0 and 81 cm/s with an average speed of ∼33 cm/s for most of the interdwell distances. The source stops for up to (0.005 ± 0.001) s at nonprogrammed positions in between two programmed dwell positions. The dwell time correction applied by the manufacturer compensates for the transit dose between the dwell positions, leading to a maximum overdose of 41 mGy for the considered cases and assuming an air-kerma strength of 48 000 U. The transit dose component is not uniformly distributed, leading to over- and underdoses, which are within 1.4% for commonly prescribed doses (3–10 Gy). Conclusions: The source maintains its speed even for the short interdwell distances. Dose variations due to the transit dose component are much lower than the prescribed treatment doses for brachytherapy, although the transit dose component should be evaluated individually for clinical cases
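
A minimal sketch of how frame-by-frame target positions from such a high-speed recording translate into a speed profile is given below; the frame rate and positions are invented illustrative numbers, not data from the paper.

```python
# Turning per-frame source positions (already extracted from the video and
# converted to centimetres) into a finite-difference speed profile.
# The numbers below are illustrative, not measured values from the study.
import numpy as np

fps = 1000.0                                             # assumed camera frame rate
position_cm = np.array([0.00, 0.03, 0.09, 0.17, 0.25])   # source position per frame
speed_cm_s = np.diff(position_cm) * fps                  # speed between frames
print(speed_cm_s)                                        # [30. 60. 80. 80.] cm/s
```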

  3. Uses of solid state analogies in elementary particle theory

    International Nuclear Information System (INIS)

    Anderson, P.W.

    1976-01-01

    The solid state background of some of the modern ideas of field theory is reviewed, and additional examples of model situations in solid state or many-body theory which may have relevance to fundamental theories of elementary particles are adduced

  4. Transire, a Program for Generating Solid-State Interface Structures

    Science.gov (United States)

    2017-09-14

ARL-TR-8134, September 2017, US Army Research Laboratory. Transire, a Program for Generating Solid-State Interface Structures, by Caleb M Carlin and Berend C Rinderspacher, Weapons and Materials Research Directorate, ARL.

  5. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    Science.gov (United States)

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro® 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  6. Piloting the feasibility of head-mounted video technology to augment student feedback during simulated clinical decision-making: An observational design pilot study.

    Science.gov (United States)

    Forbes, Helen; Bucknall, Tracey K; Hutchinson, Alison M

    2016-04-01

    Clinical decision-making is a complex activity that is critical to patient safety. Simulation, augmented by feedback, affords learners the opportunity to learn critical clinical decision-making skills. More detailed feedback following simulation exercises has the potential to further enhance student learning, particularly in relation to developing improved clinical decision-making skills. To investigate the feasibility of head-mounted video camera recordings, to augment feedback, following acute patient deterioration simulations. Pilot study using an observational design. Ten final-year nursing students participated in three simulation exercises, each focussed on detection and management of patient deterioration. Two observers collected behavioural data using an adapted version of Gaba's Clinical Simulation Tool, to provide verbal feedback to each participant, following each simulation exercise. Participants wore a head-mounted video camera during the second simulation exercise only. Video recordings were replayed to participants to augment feedback, following the second simulation exercise. Data were collected on: participant performance (observed and perceived); participant perceptions of feedback methods; and head-mounted video camera recording feasibility and capability for detailed audio-visual feedback. Management of patient deterioration improved for six participants (60%). Increased perceptions of confidence (70%) and competence (80%), were reported by the majority of participants. Few participants (20%) agreed that the video recording specifically enhanced their learning. The visual field of the head-mounted video camera was not always synchronised with the participant's field of vision, thus affecting the usefulness of some recordings. The usefulness of the video recordings, to enhance verbal feedback to participants on detection and management of simulated patient deterioration, was inconclusive. Modification of the video camera glasses, to improve

  7. A proposal of decontamination robot using 3D hand-eye-dual-cameras solid recognition and accuracy validation

    International Nuclear Information System (INIS)

    Minami, Mamoru; Nishimura, Kenta; Sunami, Yusuke; Yanou, Akira; Yu, Cui; Yamashita, Manabu; Ishiyama, Shintaro

    2015-01-01

    A new robotic system that uses three-dimensional measurement with solid object recognition, 3D-MoS (Three Dimensional Move on Sensing), based on visual servoing technology was designed, and an on-board hand-eye dual-camera robot system was developed to reduce the risk of radiation exposure during decontamination work with a filter press machine that solidifies and reduces the volume of contaminated soil. The features of 3D-MoS are: (1) the two hand-eye cameras image the target object near the intersection of the lenses' centerlines; (2) observing at this intersection allows both cameras to see the target almost at the center of both images; and (3) this reduces the effect of lens aberration and improves the accuracy of three-dimensional position detection. In this study, an accuracy validation test was performed for interdigitation of the robot's hand with the filter cloth rod of the filter press, a task that is crucial if the robot is to remove the contaminated cloth from the filter press machine automatically and keep workers from being exposed to radiation. The following results were obtained: (1) the 3D-MoS-controlled robot could recognize the rod at arbitrary positions within the designated space, and all insertion tests were carried out successfully; and (2) the results also demonstrated that the proposed control keeps the interdigitation clearance between the rod and the robot hand within 1.875 mm with a standard deviation of 0.6 mm or less. (author)
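
    The record does not give the triangulation scheme behind 3D-MoS; purely as an illustrative sketch of how two cameras whose optical axes intersect near the target can recover a 3D position, the following Python snippet triangulates the midpoint of the shortest segment between the two viewing rays (the camera centres, ray directions, and toy geometry are assumptions, not values from the paper).

    ```python
    import numpy as np

    def triangulate_midpoint(p1, d1, p2, d2):
        """Midpoint of the shortest segment between two camera rays.

        p1, p2: camera centres (3-vectors); d1, d2: ray directions toward the
        target, e.g. derived from pixel coordinates and the (assumed known)
        camera intrinsics and extrinsics.
        """
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        b = p2 - p1
        c = d1 @ d2
        denom = 1.0 - c ** 2                 # ~0 only if the rays are parallel
        t1 = (b @ d1 - (b @ d2) * c) / denom
        t2 = ((b @ d1) * c - b @ d2) / denom
        return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

    # Toy geometry: two cameras 0.4 m apart, both aimed at a rod about 1 m away.
    p_left, p_right = np.array([-0.2, 0.0, 0.0]), np.array([0.2, 0.0, 0.0])
    target = np.array([0.0, 0.05, 1.0])
    print(triangulate_midpoint(p_left, target - p_left, p_right, target - p_right))
    ```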

  8. IFE Power Plant design principles. Drivers. Solid state laser drivers

    International Nuclear Information System (INIS)

    Nakai, S.; Andre, M.; Krupke, W.F.; Mak, A.A.; Soures, J.M.; Yamanaka, M.

    1995-01-01

    The present status of solid state laser drivers for an inertial confinement thermonuclear fusion power plant is discussed. In particular, the feasibility of laser diode pumped solid state laser drivers from both the technical and economic points of view is briefly reviewed. Conceptual design studies showed that they can, in principle, satisfy the design requirements. However, development of new solid state materials with long fluorescence lifetimes and good thermal characteristics is a key issue for laser diode pumped solid state lasers. With the advent of laser diode pumping many materials which were abandoned in the past can presently be reconsidered as viable candidates. It is also concluded that it is important to examine the technical requirements for solid state lasers in relation to target performance criteria. The progress of laser diode pumped lasers in industrial applications should also be closely watched to provide additional information on the economic feasibility of this type of driver. 15 refs, 9 figs, 2 tabs

  9. Potential of solid state fermentation for production of ergot alkaloids

    OpenAIRE

    Trejo Hernandez, M.R.; Raimbault, Maurice; Roussos, Sevastianos; Lonsane, B.K.

    1992-01-01

    Production of total ergot alkaloids by Claviceps fusiformis in solid state fermentation was 3.9 times higher than in submerged fermentation. Production was equal in the case of Claviceps purpurea, but the spectrum of alkaloids was more advantageous with solid state fermentation. The data establish the potential of solid state fermentation, not explored earlier, for the production of ergot alkaloids. (Author's abstract)

  10. Presence capture cameras - a new challenge to the image quality

    Science.gov (United States)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to market, and a new era of visual entertainment is taking shape. Since true presence capture is still a very new technology, the technical solutions have only just passed the prototyping phase and vary considerably. Presence capture cameras still face the same quality issues as previous generations of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system that can record 3D audio-visual reality as it is has to have several camera modules, several microphones and, in particular, technology that can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features remain valid for presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range form the basis of virtual reality stream quality. However, the cooperation of several cameras adds a new dimension to these quality factors, and new quality features also have to be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how can the stitching be validated? The work describes the quality factors that remain valid for presence capture cameras and assesses their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view. The work also considers how well current measurement methods can be applied to presence capture cameras.
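
    The abstract asks how stitching between camera streams should be validated; as one hedged, simplistic illustration (not the author's proposed measurement method), a seam metric can be computed as the mean absolute difference between two adjacent camera images over their shared overlap:

    ```python
    import numpy as np

    def seam_error(img_a, img_b, overlap_px):
        """Mean absolute difference over the overlap shared by two adjacent
        camera images (right edge of img_a vs. left edge of img_b).

        Both images are assumed to be already warped to a common projection;
        overlap_px, the width of the shared strip, would in practice come
        from the stitcher's geometry.
        """
        a = img_a[:, -overlap_px:, :].astype(np.float64)
        b = img_b[:, :overlap_px, :].astype(np.float64)
        return float(np.mean(np.abs(a - b)))

    # Toy usage: identical overlap strips give a seam error of 0.
    left = np.random.rand(720, 1280, 3)
    right = np.concatenate([left[:, -64:, :], np.random.rand(720, 1216, 3)], axis=1)
    print(seam_error(left, right, 64))
    ```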

  11. Development of the plastic solid-dye cell for tunable solid-state dye lasers and study on its optical properties

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Do Kyeong; Lee, Jong Min; Cha, Byung Heon; Jung, E. C.; Kim, Hyun Su; Lim, Gwon

    2001-01-01

    We have fabricated solid-state dyes with PMMA and sol-gel materials and developed a single-longitudinal-mode solid-state dye laser with a linewidth of less than 500 MHz. We constructed a self-seeded laser and observed an increase in output power due to the self-seeding effect. We investigated the operating characteristics of the dual-wavelength laser oscillator and of a DFDL with solid-state dyes, and we constructed a 3-color solid-state dye laser oscillator and amplifier system and observed 3-color operation. We also improved the laser oscillator with a disk-type solid-state dye cell that can be translated and rotated by two stepping motors. Under computer control, the illuminated area of the dye cell is changed continuously, which allows long operating times and use of almost the entire region of the solid-state dye cell.
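
    The record only states that the illuminated area of the disk-type dye cell is moved under computer control; as a hypothetical sketch of such a scan schedule (the actual motor control used in the work is not described), the pump spot could be stepped around successive rings of the usable annulus:

    ```python
    import numpy as np

    def disk_scan_positions(r_inner, r_outer, spot):
        """Yield (angle_deg, radius_mm) pairs that sweep a pump spot over the
        usable annulus of a disk-type dye cell, ring by ring.

        r_inner, r_outer: annulus radii in mm; spot: pump spot diameter in mm.
        Purely illustrative values and scheduling, not the paper's schedule.
        """
        for r in np.arange(r_inner, r_outer, spot):
            n_steps = max(1, int(np.ceil(2 * np.pi * r / spot)))
            for k in range(n_steps):
                yield 360.0 * k / n_steps, float(r)

    positions = list(disk_scan_positions(10.0, 25.0, 1.0))
    print(len(positions), positions[0], positions[-1])
    ```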

  12. An electronic pan/tilt/magnify and rotate camera system

    International Nuclear Information System (INIS)

    Zimmermann, S.; Martin, H.L.

    1992-01-01

    A new camera system has been developed for omnidirectional image-viewing applications that provides pan, tilt, magnify, and rotational orientation within a hemispherical field of view (FOV) without any moving parts. The imaging device is based on the fact that the image from a fish-eye lens, which produces a circular image of an entire hemispherical FOV, can be mathematically corrected using high-speed electronic circuitry. More specifically, an incoming fish-eye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment. As a result, this device can accomplish the functions of pan, tilt, rotation, and magnification throughout a hemispherical FOV without the need for any mechanical devices. Multiple images, each with different image magnifications and pan-tilt-rotate parameters, can be obtained from a single camera
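
    The record describes the correction only at a high level; as a minimal sketch of the underlying mapping, assuming an equidistant fisheye model (r = f·theta) and nearest-neighbour sampling (neither of which is specified in the abstract), each pixel of the desired pan/tilt/zoom view can be traced back to a coordinate in the circular fisheye image:

    ```python
    import numpy as np

    def dewarp_fisheye(fish, pan, tilt, fov_deg=60.0, out_size=(480, 640)):
        """Synthesize a perspective (pan/tilt/zoom) view from a fisheye image.

        fish: HxWx3 array whose circular fisheye image fills the frame. An
        equidistant projection and nearest-neighbour sampling are assumed; a
        real device would use the measured lens mapping and interpolation.
        pan/tilt are in radians; fov_deg sets the magnification.
        """
        h_out, w_out = out_size
        h, w = fish.shape[:2]
        cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
        f_fish = min(cx, cy) / (np.pi / 2.0)      # image radius spans 90 degrees
        f_out = (w_out / 2.0) / np.tan(np.radians(fov_deg) / 2.0)

        # Rays of the virtual perspective camera, rotated to the pan/tilt direction.
        xs, ys = np.meshgrid(np.arange(w_out) - w_out / 2.0,
                             np.arange(h_out) - h_out / 2.0)
        rays = np.stack([xs, ys, np.full_like(xs, f_out)], axis=-1)
        rays /= np.linalg.norm(rays, axis=-1, keepdims=True)
        rot_tilt = np.array([[1, 0, 0],
                             [0, np.cos(tilt), -np.sin(tilt)],
                             [0, np.sin(tilt), np.cos(tilt)]])
        rot_pan = np.array([[np.cos(pan), 0, np.sin(pan)],
                            [0, 1, 0],
                            [-np.sin(pan), 0, np.cos(pan)]])
        rays = rays @ (rot_pan @ rot_tilt).T

        # Equidistant fisheye: distance from the image centre is proportional
        # to the angle from the optical axis.
        theta = np.arccos(np.clip(rays[..., 2], -1.0, 1.0))
        phi = np.arctan2(rays[..., 1], rays[..., 0])
        u = np.clip(cx + f_fish * theta * np.cos(phi), 0, w - 1).astype(int)
        v = np.clip(cy + f_fish * theta * np.sin(phi), 0, h - 1).astype(int)
        return fish[v, u]

    # Example: a 30-degree pan, 10-degree tilt view from a synthetic fisheye frame.
    frame = np.random.rand(960, 960, 3)
    view = dewarp_fisheye(frame, pan=np.radians(30), tilt=np.radians(10))
    print(view.shape)
    ```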

  13. Improved embedded non-linear processing of video for camera surveillance

    NARCIS (Netherlands)

    Cvetkovic, S.D.; With, de P.H.N.

    2009-01-01

    For real-time imaging in surveillance applications, image fidelity is of primary importance to ensure customer confidence. Fidelity is obtained, among other things, via dynamic range expansion and video signal enhancement. The dynamic range of the signal needs adaptation, because the sensor signal
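
    The record is truncated before the processing details; purely as a generic illustration of dynamic range adaptation (not the authors' pipeline), a global logarithmic tone curve can compress a high-bit-depth linear sensor frame into the 8-bit display range:

    ```python
    import numpy as np

    def log_tone_map(sensor, max_val=4095):
        """Compress a linear high-bit-depth sensor frame (e.g. 12-bit) to
        8 bits with a global logarithmic curve; real surveillance pipelines
        use locally adaptive non-linear processing, which this sketch does
        not attempt."""
        x = sensor.astype(np.float64) / max_val
        y = np.log1p(255.0 * x) / np.log1p(255.0)    # 0..1, boosts shadows
        return np.clip(np.round(y * 255.0), 0, 255).astype(np.uint8)

    frame = np.random.randint(0, 4096, size=(480, 640), dtype=np.uint16)
    print(log_tone_map(frame).dtype, log_tone_map(frame).max())
    ```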

  14. Radiation sensitive solid state devices

    International Nuclear Information System (INIS)

    Shannon, J.M.; Ralph, J.E.

    1975-01-01

    A solid state radiation sensitive device is described employing JFETs as the sensitive elements. Two terminal construction is achieved by using a common conductor to capacitively couple to the JFET gate and to one of the source and drain connections. (auth)

  15. Video repairing under variable illumination using cyclic motions.

    Science.gov (United States)

    Jia, Jiaya; Tai, Yu-Wing; Wu, Tai-Pang; Tang, Chi-Keung

    2006-05-01

    This paper presents a complete system capable of synthesizing a large number of pixels that are missing due to occlusion or damage in an uncalibrated input video. These missing pixels may correspond to the static background or cyclic motions of the captured scene. Our system employs user-assisted video layer segmentation, while the main processing in video repair is fully automatic. The input video is first decomposed into color and illumination videos. The necessary temporal consistency is maintained by tensor voting in the spatio-temporal domain. Missing colors and illumination of the background are synthesized by applying image repairing. Finally, the occluded motions are inferred by spatio-temporal alignment of collected samples at multiple scales. We tested our system on several difficult examples with variable illumination, where the capturing camera can be stationary or in motion.
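
    As a rough sketch of the kind of color/illumination separation the abstract mentions (the paper's actual decomposition and its tensor-voting machinery are considerably more involved), a frame can be split into a smooth illumination estimate and a chromatic residual so that their product reconstructs the frame:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def split_color_illumination(frame, sigma=15.0, eps=1e-6):
        """Split an RGB frame (HxWx3, float in [0,1]) into a smooth
        illumination image and a color image so that frame ~= color * illum.
        A heavy Gaussian blur of the luminance stands in for the illumination
        estimate; this is only an illustration, not the paper's method."""
        lum = frame @ np.array([0.299, 0.587, 0.114])   # luminance
        illum = gaussian_filter(lum, sigma)              # smooth illumination
        color = frame / (illum[..., None] + eps)         # chromatic part
        return color, illum

    frame = np.random.rand(240, 320, 3)
    color, illum = split_color_illumination(frame)
    print(np.allclose(color * illum[..., None], frame, atol=1e-4))
    ```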

  16. Risk analysis of a video-surveillance system

    NARCIS (Netherlands)

    Rothkrantz, L.; Lefter, I.

    2011-01-01

    The paper describes a surveillance system of cameras installed on lampposts in a military area. The surveillance system has been designed to detect unwanted visitors or suspicious behaviors. The area is composed of streets and building blocks and is surrounded by gates and water. The video recordings are

  17. Distributed Video Coding for Multiview and Video-plus-depth Coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo

    The interest in Distributed Video Coding (DVC) systems has grown considerably in the academic world in recent years. With DVC the correlation between frames is exploited at the decoder (joint decoding). The encoder codes the frame independently, performing relatively simple operations. Therefore......, with DVC the complexity is shifted from encoder to decoder, making the coding architecture a viable solution for encoders with limited resources. DVC may empower new applications which can benefit from this reversed coding architecture. Multiview Distributed Video Coding (M-DVC) is the application...... of the to-be-decoded frame. Another key element is the Residual estimation, indicating the reliability of the SI, which is used to calculate the parameters of the correlation noise model between SI and original frame. In this thesis new methods for Inter-camera SI generation are analyzed in the Stereo...
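
    The record is heavily truncated, but the residual estimation it mentions is commonly handled in the DVC literature with a zero-mean Laplacian correlation-noise model whose parameter follows from the residual variance; a minimal sketch under that assumption (not necessarily the thesis' estimator):

    ```python
    import numpy as np

    def laplacian_alpha(residual):
        """Laplacian parameter for a zero-mean correlation-noise model:
        Var = 2 / alpha**2, so alpha = sqrt(2 / Var). 'residual' holds the
        differences between the side information and the reference frame."""
        var = np.mean(np.square(residual.astype(np.float64)))
        return np.sqrt(2.0 / max(var, 1e-12))

    residual = np.random.laplace(0.0, 4.0, size=100000)  # scale b=4 -> alpha=0.25
    print(round(laplacian_alpha(residual), 3))
    ```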

  18. Atomistic Simulation of Interfaces in Materials of Solid State Ionics

    Science.gov (United States)

    Ivanov-Schitz, A. K.; Mazo, G. N.

    2018-01-01

    The possibilities of describing correctly interfaces of different types in solids within a computer experiment using molecular statics simulation, molecular dynamics simulation, and quantum chemical calculations are discussed. Heterophase boundaries of various types, including grain boundaries and solid electrolyte‒solid electrolyte and ionic conductor‒electrode material interfaces, are considered. Specific microstructural features and mechanisms of the ion transport in real heterophase structures (cationic conductor‒metal anode and anionic conductor‒cathode) existing in solid state ionics devices (such as solid-state batteries and fuel cells) are discussed.

  19. Observation of the dynamic movement of fragmentations by high-speed camera and high-speed video

    Science.gov (United States)

    Suk, Chul-Gi; Ogata, Yuji; Wada, Yuji; Katsuyama, Kunihisa

    1995-05-01

    Blasting experiments using mortar concrete blocks and model concrete columns were carried out in order to obtain technical information on fragmentation caused by blasting demolition. The dimensions of the mortar concrete blocks were 1,000 X 1,000 X 1,000 mm. Six kinds of experimental blastings were carried out using the mortar concrete blocks. In these experiments, precision detonators and No. 6 electric detonators with 10 cm detonating fuse were used, and the control of fragmentation was discussed. The experiments made clear that the flying distance of fragments can be controlled using a precise blasting system. Reinforced concrete model columns representative of typical apartment houses in Japan were used in the experiments. The dimensions of each concrete test column were 800 X 800 X 2400 mm, buried 400 mm in the ground. The specified design strength of the concrete was 210 kgf/cm2. These columns were demolished by blasting with internal charges of dynamite. The fragmentation was observed by two kinds of high-speed camera, at 500 and 2000 FPS, and a high-speed video camera at 400 FPS. As one of the results, the velocity of fragments from a blast with 330 g of explosive and a minimum resisting length of 0.32 m was measured at about 40 m/s.
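
    The 40 m/s figure is obtained by tracking fragments across frames; as a back-of-the-envelope version of that calculation (assuming positions have already been converted from pixels to metres using the image scale), velocity is simply per-frame displacement times frame rate:

    ```python
    def fragment_velocity(pos_m, fps):
        """Average speed (m/s) from a fragment's positions in consecutive frames.
        pos_m: positions in metres (already converted from pixels using the
        image scale); fps: camera frame rate."""
        steps = [abs(b - a) for a, b in zip(pos_m, pos_m[1:])]
        return sum(steps) / len(steps) * fps

    # At 2000 FPS, a fragment moving 0.02 m per frame travels about 40 m/s.
    print(fragment_velocity([0.00, 0.02, 0.04, 0.06], 2000))
    ```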

  20. Online Tracking of Outdoor Lighting Variations for Augmented Reality with Moving Cameras

    OpenAIRE

    Liu , Yanli; Granier , Xavier

    2012-01-01

    In augmented reality, one of the key tasks in achieving convincing visual consistency between virtual objects and video scenes is maintaining coherent illumination along the whole sequence. As outdoor illumination is largely dependent on the weather, the lighting condition may change from frame to frame. In this paper, we propose a fully image-based approach for online tracking of outdoor illumination variations from videos captured with moving cameras. Our key ide...