WorldWideScience

Sample records for lens 250mm camera

  1. Intraocular camera for retinal prostheses: Refractive and diffractive lens systems

    Science.gov (United States)

    Hauer, Michelle Christine

    The focus of this thesis is on the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearances. The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements and relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity. Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.

  2. Photographic zoom fisheye lens design for DSLR cameras

    Science.gov (United States)

    Yan, Yufeng; Sasian, Jose

    2017-09-01

    Photographic fisheye lenses with fixed focal length for cameras with different sensor formats have been well developed for decades. However, photographic fisheye lenses with variable focal length are rare on the market due in part to the greater design difficulty. This paper presents a large aperture zoom fisheye lens for DSLR cameras that produces both circular and diagonal fisheye imaging for 35-mm sensors and diagonal fisheye imaging for APS-C sensors. The history and optical characteristics of fisheye lenses are briefly reviewed. Then, a 9.2- to 16.1-mm F/2.8 to F/3.5 zoom fisheye lens design is presented, including the design approach and aberration control. Image quality and tolerance performance analysis for this lens are also presented.

  3. Mechanically assisted liquid lens zoom system for mobile phone cameras

    Science.gov (United States)

    Wippermann, F. C.; Schreiber, P.; Bräuer, A.; Berge, B.

    2006-08-01

    Camera systems with a small form factor are an integral part of today's mobile phones, which recently feature autofocus functionality. Ready-to-market solutions without moving parts have been developed using electrowetting technology. Besides virtually no deterioration, easy control electronics, and simple and therefore cost-effective fabrication, this type of liquid lens enables extremely fast settling times compared to mechanical approaches. As a next evolutionary step, mobile phone cameras will be equipped with zoom functionality. We present first-order considerations for the optical design of a miniaturized zoom system based on liquid lenses and compare it to its mechanical counterpart. We propose a design of a zoom lens with a zoom factor of 2.5 considering state-of-the-art commercially available liquid lens products. The lens possesses autofocus capability and is based on liquid lenses and one additional mechanical actuator. The combination of liquid lenses and a single mechanical actuator enables extremely short settling times of about 20 ms for the autofocus and a simplified mechanical system design, leading to lower production cost and longer lifetime. The camera system has a mechanical outline of 24 mm in length and 8 mm in diameter. The lens with f/# 3.5 provides market-relevant optical performance and is designed for an image circle of 6.25 mm (1/2.8" format sensor).

  4. Portraiture lens concept in a mobile phone camera

    Science.gov (United States)

    Sheil, Conor J.; Goncharov, Alexander V.

    2017-11-01

    A small form-factor lens was designed for the purpose of portraiture photography, the size of which allows use within smartphone casing. The current general requirement of mobile cameras having good all-round performance results in a typical, familiar, many-element design. Such designs have little room for improvement, in terms of the available degrees of freedom and highly-demanding target metrics such as low f-number and wide field of view. However, the specific application of the current portraiture lens relaxed the requirement of an all-round high-performing lens, allowing improvement of certain aspects at the expense of others. With a main emphasis on reducing depth of field (DoF), the current design takes advantage of the simple geometrical relationship between DoF and pupil diameter. The system has a large aperture, while a reasonable f-number gives a relatively large focal length, requiring a catadioptric lens design with double ray path; hence, field of view is reduced. Compared to typical mobile lenses, the large diameter reduces depth of field by a factor of four.
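
    The stated fourfold reduction in depth of field follows from the usual thin-lens approximation, in which total DoF scales inversely with entrance-pupil diameter at a fixed focal length and subject distance. A minimal sketch of that relationship (illustrative numbers, not the paper's prescription):

        # Hedged sketch: standard thin-lens depth-of-field approximation, not the
        # authors' exact model. All numbers below are illustrative.

        def depth_of_field(f_mm, aperture_mm, subject_mm, coc_mm=0.003):
            """Approximate total DoF (mm) for subject distances well below hyperfocal:
            DoF ~= 2 * N * c * s^2 / f^2, with working f-number N = f / D."""
            n = f_mm / aperture_mm
            return 2.0 * n * coc_mm * subject_mm ** 2 / f_mm ** 2

        # Same focal length and subject distance, 4x larger entrance pupil:
        shallow = depth_of_field(f_mm=10.0, aperture_mm=8.0, subject_mm=1000.0)
        typical = depth_of_field(f_mm=10.0, aperture_mm=2.0, subject_mm=1000.0)
        print(typical / shallow)  # ~4.0: DoF is inversely proportional to pupil diameter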

  5. Applying image quality in cell phone cameras: lens distortion

    Science.gov (United States)

    Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje

    2009-01-01

    This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image is JPEG compressed and the cell-phone camera is set to 'auto' mode. Because the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberration (LCA). The goal of this paper is to present the framework of this pilot project, from the definition of the individual attributes up to their quantification in JNDs of quality, a requirement of the multivariate formalism; both objective and subjective evaluations were therefore used. A major distinction of the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior, so a radial mapping/model cannot be used in this case.
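
    A multivariate formalism of this kind combines the per-attribute quality losses, each expressed in JNDs, into a single overall loss. The sketch below uses a Minkowski-style combination purely as an illustration; whether this matches the exact CPIQ formalism, and the attribute values shown, are assumptions.

        # Hedged sketch of a Minkowski-style combination of per-attribute quality
        # losses expressed in JNDs; the exact CPIQ multivariate formalism may differ.

        def combined_jnd_loss(attribute_jnds, power=2.0):
            """Combine individual attribute degradations (in JNDs) into one overall loss."""
            return sum(j ** power for j in attribute_jnds) ** (1.0 / power)

        lgd_jnd, lca_jnd = 1.5, 2.0  # hypothetical degradations for LGD and LCA
        print(combined_jnd_loss([lgd_jnd, lca_jnd]))  # overall quality loss in JNDs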

  6. CCD Camera Lens Interface for Real-Time Theodolite Alignment

    Science.gov (United States)

    Wake, Shane; Scott, V. Stanley, III

    2012-01-01

    Theodolites are a common instrument in the testing, alignment, and building of various systems ranging from a single optical component to an entire instrument. They provide a precise way to measure horizontal and vertical angles. They can be used to align multiple objects in a desired way at specific angles. They can also be used to reference a specific location or orientation of an object that has moved. Some systems may require a small margin of error in position of components. A theodolite can assist with accurately measuring and/or minimizing that error. The technology is an adapter for a CCD camera with lens to attach to a Leica Wild T3000 Theodolite eyepiece that enables viewing on a connected monitor, and thus can be utilized with multiple theodolites simultaneously. This technology removes a substantial part of human error by relying on the CCD camera and monitors. It also allows image recording of the alignment, and therefore provides a quantitative means to measure such error.

  7. Addressing challenges of modulation transfer function measurement with fisheye lens cameras

    Science.gov (United States)

    Deegan, Brian M.; Denny, Patrick E.; Zlokolica, Vladimir; Dever, Barry; Russell, Laura

    2015-03-01

    Modulation transfer function (MTF) is a well-defined and accepted method of measuring image sharpness. The slanted-edge test, as defined in ISO 12233, is a standard method of calculating MTF and is widely used for lens alignment and autofocus algorithm verification. However, there are a number of challenges which should be considered when measuring MTF in cameras with fisheye lenses. Due to trade-offs related to Petzval curvature, planarity of the optical plane is difficult to achieve in fisheye lenses. It is therefore critical to have the ability to accurately measure sharpness throughout the entire image, particularly for lens alignment. One challenge for fisheye lenses is that, because of the radial distortion, the slanted edges will have different angles depending on the location within the image and on the distortion profile of the lens. Previous work in the literature indicates that MTF measurements are robust for edge angles between 2 and 10 degrees; outside of this range, MTF measurements become unreliable. Also, the slanted edge itself will be curved by the lens distortion, causing further measurement problems. This study summarises the difficulties in the use of MTF for sharpness measurement in fisheye lens cameras, and proposes mitigations and alternative methods.
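
    For reference, the core of the slanted-edge calculation projects the pixels of an edge region onto the edge normal, bins them into an oversampled edge spread function (ESF), differentiates to obtain the line spread function (LSF), and takes its Fourier transform. The following is a deliberately simplified sketch: it assumes the edge-normal angle is already known and omits the edge fitting, derivative-filter correction, and frequency scaling of a full ISO 12233 implementation.

        # Simplified slanted-edge MTF estimate (sketch only, not ISO 12233 compliant).
        import numpy as np

        def slanted_edge_mtf(roi, normal_angle_deg, oversample=4):
            h, w = roi.shape
            ys, xs = np.mgrid[0:h, 0:w]
            # Project each pixel position onto the edge normal
            t = xs * np.cos(np.radians(normal_angle_deg)) + ys * np.sin(np.radians(normal_angle_deg))
            # Bin intensities into an oversampled edge spread function
            bins = np.round((t - t.min()) * oversample).astype(int).ravel()
            counts = np.maximum(np.bincount(bins), 1)
            esf = np.bincount(bins, weights=roi.ravel()) / counts
            lsf = np.diff(esf)                    # line spread function
            lsf = lsf * np.hanning(lsf.size)      # window to suppress noise at the tails
            mtf = np.abs(np.fft.rfft(lsf))
            return mtf / mtf[0]                   # normalized so MTF(0) = 1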

  8. INVESTIGATION OF PARALLAX ISSUES FOR MULTI-LENS MULTISPECTRAL CAMERA BAND CO-REGISTRATION

    Directory of Open Access Journals (Sweden)

    J. P. Jhan

    2017-08-01

    Full Text Available Multi-lens multispectral cameras (MSCs), such as the Micasense RedEdge and Parrot Sequoia, record multispectral information through separate lenses. Their light weight and small size make them well suited for mounting on an Unmanned Aerial System (UAS) to collect high-spatial-resolution images for vegetation investigation. However, because the multi-sensor geometry of the multi-lens structure induces significant band misregistration effects in the original images, band co-registration is necessary in order to obtain accurate spectral information. A robust and adaptive band-to-band image transform (RABBIT) is proposed to perform band co-registration for multi-lens MSCs. The first step is to obtain the camera rig information from camera system calibration and to use the calibrated results for image transformation and lens distortion correction. Since calibration uncertainty leads to different amounts of systematic error, the last step optimizes the results to acquire better co-registration accuracy. Because parallax causes significant band misregistration effects when images are taken closer to the targets, four datasets acquired with the RedEdge and Sequoia, including aerial and close-range imagery, were used to evaluate the performance of RABBIT. The results for the aerial images show that RABBIT can achieve sub-pixel accuracy, which is suitable for the band co-registration of any multi-lens MSC. The close-range images show the same performance when the band co-registration is focused on a specific target for 3D modelling, or when the target is equidistant from the camera.
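
    At its core, band co-registration of this kind resamples each band into the geometry of a reference band with a projective-type transform. A minimal sketch using a plain homography is given below; the 3x3 matrix is hypothetical, and RABBIT's modified projective transform plus its robust, adaptive refinement go beyond this.

        # Hedged sketch: warp one spectral band into the reference band's geometry
        # with a plain projective (homography) transform. The matrix is hypothetical,
        # e.g. the kind of result a rig calibration might provide.
        import cv2
        import numpy as np

        H = np.array([[1.0002,  0.0010,  3.2],
                      [-0.0008, 0.9998, -2.7],
                      [0.0,     0.0,     1.0]])

        def coregister_band(band_img, H, reference_shape):
            """Resample one band so it overlays the reference band pixel-for-pixel."""
            h, w = reference_shape
            return cv2.warpPerspective(band_img, H, (w, h), flags=cv2.INTER_LINEAR)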

  9. Low-cost mobile phone microscopy with a reversed mobile phone camera lens.

    Science.gov (United States)

    Switz, Neil A; D'Ambrosio, Michael V; Fletcher, Daniel A

    2014-01-01

    The increasing capabilities and ubiquity of mobile phones and their associated digital cameras offer the possibility of extending low-cost, portable diagnostic microscopy to underserved and low-resource areas. However, mobile phone microscopes created by adding magnifying optics to the phone's camera module have been unable to make use of the full image sensor due to the specialized design of the embedded camera lens, exacerbating the tradeoff between resolution and field of view inherent to optical systems. This tradeoff is acutely felt for diagnostic applications, where the speed and cost of image-based diagnosis is related to the area of the sample that can be viewed at sufficient resolution. Here we present a simple and low-cost approach to mobile phone microscopy that uses a reversed mobile phone camera lens added to an intact mobile phone to enable high quality imaging over a significantly larger field of view than standard microscopy. We demonstrate use of the reversed lens mobile phone microscope to identify red and white blood cells in blood smears and soil-transmitted helminth eggs in stool samples.

  10. Low-cost mobile phone microscopy with a reversed mobile phone camera lens.

    Directory of Open Access Journals (Sweden)

    Neil A Switz

    Full Text Available The increasing capabilities and ubiquity of mobile phones and their associated digital cameras offer the possibility of extending low-cost, portable diagnostic microscopy to underserved and low-resource areas. However, mobile phone microscopes created by adding magnifying optics to the phone's camera module have been unable to make use of the full image sensor due to the specialized design of the embedded camera lens, exacerbating the tradeoff between resolution and field of view inherent to optical systems. This tradeoff is acutely felt for diagnostic applications, where the speed and cost of image-based diagnosis is related to the area of the sample that can be viewed at sufficient resolution. Here we present a simple and low-cost approach to mobile phone microscopy that uses a reversed mobile phone camera lens added to an intact mobile phone to enable high quality imaging over a significantly larger field of view than standard microscopy. We demonstrate use of the reversed lens mobile phone microscope to identify red and white blood cells in blood smears and soil-transmitted helminth eggs in stool samples.

  11. Decision Support System to Choose Digital Single Lens Camera with Simple Additive Weighting Method

    Directory of Open Access Journals (Sweden)

    Tri Pina Putri

    2016-11-01

    Full Text Available One of the technologies that continues to evolve today is the Digital Single Lens Reflex (DSLR) camera. The number of products makes it difficult for users to choose the appropriate camera based on their criteria. Users may rely on several aids to help them choose, such as magazines, the internet, and other media. This paper discusses a web-based decision support system for choosing cameras using the SAW (Simple Additive Weighting) method, in order to make the decision process more effective and efficient. This system is expected to give recommendations about the camera which is appropriate to the user's needs and criteria based on the cost, the resolution, the features, the ISO, and the sensor. The system was implemented using PHP and MySQL. Based on the results of a questionnaire distributed to 20 respondents, 60% of respondents agree that this decision support system can help users to choose the appropriate DSLR camera in accordance with the user's needs, 60% of respondents agree that this decision support system is more effective for choosing a DSLR camera, and 75% of respondents agree that this system is more efficient. In addition, 60.55% of respondents agree that this system has met the 5 Es Usability Framework.
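
    Simple Additive Weighting ranks each alternative by a weighted sum of normalized criterion scores: benefit criteria are normalized against the column maximum and cost criteria against the column minimum. A minimal sketch with hypothetical cameras, weights, and scores (not values from the paper):

        # Hedged sketch of Simple Additive Weighting (SAW) for DSLR ranking.
        criteria = {  # True = benefit criterion (higher is better), False = cost criterion
            "cost": False, "resolution": True, "feature": True, "iso": True, "sensor": True,
        }
        weights = {"cost": 0.30, "resolution": 0.25, "feature": 0.15, "iso": 0.15, "sensor": 0.15}
        cameras = {  # hypothetical alternatives and scores
            "Camera A": {"cost": 700, "resolution": 24, "feature": 7, "iso": 25600, "sensor": 8},
            "Camera B": {"cost": 450, "resolution": 18, "feature": 6, "iso": 12800, "sensor": 7},
        }

        def saw_rank(alternatives, criteria, weights):
            scores = {}
            for name, attrs in alternatives.items():
                total = 0.0
                for c, is_benefit in criteria.items():
                    column = [a[c] for a in alternatives.values()]
                    r = attrs[c] / max(column) if is_benefit else min(column) / attrs[c]
                    total += weights[c] * r
                scores[name] = total
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        print(saw_rank(cameras, criteria, weights))  # best-scoring camera first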

  12. New long-zoom lens for 4K super 35mm digital cameras

    Science.gov (United States)

    Thorpe, Laurence J.; Usui, Fumiaki; Kamata, Ryuhei

    2015-05-01

    The world of television production is beginning to adopt 4K Super 35 mm (S35) image capture for a widening range of program genres that seek both the unique imaging properties of that large image format and the protection of their program assets in a world anticipating future 4K services. Documentary and natural history production in particular are transitioning to this form of production. The nature of their shooting demands long zoom lenses. In their traditional world of 2/3-inch digital HDTV cameras they have a broad choice in portable lenses - with zoom ranges as high as 40:1. In the world of Super 35mm the longest zoom lens is limited to 12:1 offering a telephoto of 400mm. Canon was requested to consider a significantly longer focal range lens while severely curtailing its size and weight. Extensive computer simulation explored countless combinations of optical and optomechanical systems in a quest to ensure that all operational requests and full 4K performance could be met. The final lens design is anticipated to have applications beyond entertainment production, including a variety of security systems.

  13. Multiocular image sensor with on-chip beam-splitter and inner meta-micro-lens for single-main-lens stereo camera.

    Science.gov (United States)

    Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa

    2016-08-08

    We developed multiocular 1/3-inch, 2.75-μm-pixel-size, 2.1M-pixel image sensors by co-design of both an on-chip beam-splitter and a 100-nm-width, 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field: the on-chip beam-splitter divides rays horizontally according to incident angle, and the inner meta-micro-lens collects the divided rays into pixels with small optical loss. Cross-talk between adjacent light field images is as low as 6% for a fabricated binocular image sensor and 7% for a quad-ocular image sensor. By selecting two images from the one-dimensional light field images, a selectable baseline for stereo vision is realized, allowing close objects to be viewed with a single main lens. In addition, by adding multiple light field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision to reduce 3D fatigue for viewers.
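
    The baseline tuning described in the last sentences amounts to a weighted addition of co-registered sub-aperture (light field) images, which moves the effective viewpoint toward the weighted centroid of their pupil positions. A rough sketch under that interpretation (arrays and weights are illustrative):

        # Hedged sketch: blend co-registered sub-aperture views to synthesize an
        # intermediate viewpoint, i.e. a tunable stereo baseline. Illustrative only.
        import numpy as np

        def synthesize_view(subaperture_images, weights):
            """Weighted sum of co-registered sub-aperture images (weights are normalized)."""
            w = np.asarray(weights, dtype=float)
            w /= w.sum()
            return np.tensordot(w, np.stack(subaperture_images), axes=1)

        # For a quad-ocular sensor with pupil positions at x = -0.75, -0.25, +0.25, +0.75
        # (arbitrary units), weights [0, 0.5, 0.5, 0] give an effective viewpoint at x = 0,
        # i.e. a shorter baseline than selecting the two outermost views.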

  14. High performance gel imaging with a commercial single lens reflex camera

    Science.gov (United States)

    Slobodan, J.; Corbett, R.; Wye, N.; Schein, J. E.; Marra, M. A.; Coope, R. J. N.

    2011-03-01

    A high performance gel imaging system was constructed using a digital single lens reflex camera with epi-illumination to image 19 × 23 cm agarose gels with up to 10,000 DNA bands each. It was found to give equivalent performance to a laser scanner in this high throughput DNA fingerprinting application using the fluorophore SYBR Green®. The specificity and sensitivity of the imager and scanner were within 1% using the same band identification software. Low and high cost color filters were also compared and it was found that with care, good results could be obtained with inexpensive dyed acrylic filters in combination with more costly dielectric interference filters, but that very poor combinations were also possible. Methods for determining resolution, dynamic range, and optical efficiency for imagers are also proposed to facilitate comparison between systems.

  15. Contourlet domain multiband deblurring based on color correlation for fluid lens cameras.

    Science.gov (United States)

    Tzeng, Jack; Liu, Chun-Chen; Nguyen, Truong Q

    2010-10-01

    Because of its novel fluid optics, the fluidic lens camera system presents unique image processing challenges. Developed for surgical applications, the fluid lens offers advantages such as no moving parts while zooming and better miniaturization than traditional glass optics. Despite these abilities, the nonuniform reaction of the liquid lens to different wavelengths produces sharp color planes alongside blurred ones, causing severe axial color aberrations. In order to deblur color images without estimating a point spread function, a contourlet filter bank system is proposed. This multiband deblurring method uses information from the sharp color planes to improve the blurred ones. Compared to traditional Lucy-Richardson and Wiener deconvolution algorithms, a previous wavelet-based method produced significantly improved sharpness and reduced ghosting artifacts. The proposed contourlet-based system uses directional filtering to adjust to the contours of the image, and produces an image with a similar level of sharpness to the previous wavelet-based method but fewer ghosting artifacts. Conditions under which this algorithm reduces the mean squared error are analyzed. While improving the blue color plane using information from the green color plane is the primary focus of this paper, these methods could be adjusted to improve the red color plane. Many multiband systems, such as global mapping, infrared imaging, and computer-assisted surgery, are natural extensions of this work. This information-sharing algorithm benefits any image set with high edge correlation, and can improve deblurring, noise reduction, and resolution enhancement.

  16. Robust and adaptive band-to-band image transform of UAS miniature multi-lens multispectral camera

    Science.gov (United States)

    Jhan, Jyun-Ping; Rau, Jiann-Yeou; Haala, Norbert

    2018-03-01

    Utilizing miniature multispectral (MS) or hyperspectral (HS) cameras by mounting them on an Unmanned Aerial System (UAS) has the benefits of convenience and flexibility to collect remote sensing imagery for precision agriculture, vegetation monitoring, and environment investigation applications. Most miniature MS cameras adopt a multi-lens structure to record discrete MS bands of visible and invisible information. The differences in lens distortion, mounting positions, and viewing angles among lenses mean that the acquired original MS images have significant band misregistration errors. We have developed a Robust and Adaptive Band-to-Band Image Transform (RABBIT) method for dealing with the band co-registration of various types of miniature multi-lens multispectral cameras (Mini-MSCs) to obtain band co-registered MS imagery for remote sensing applications. The RABBIT utilizes modified projective transformation (MPT) to transfer the multiple image geometry of a multi-lens imaging system to one sensor geometry, and combines this with a robust and adaptive correction (RAC) procedure to correct several systematic errors and to obtain sub-pixel accuracy. This study applies three state-of-the-art Mini-MSCs to evaluate the RABBIT method's performance, specifically the Tetracam Miniature Multiple Camera Array (MiniMCA), Micasense RedEdge, and Parrot Sequoia. Six MS datasets acquired at different target distances, dates, and locations are also used to prove its reliability and applicability. The results prove that RABBIT is feasible for different types of Mini-MSCs, with accurate, robust, and rapid image processing.

  17. Lens densitometry after corneal cross-linking in patients with keratoconus using a Scheimpflug camera

    Directory of Open Access Journals (Sweden)

    Alireza Baradaran-Rafii

    2015-01-01

    Conclusion: In this short term study with six months' follow-up, we observed no significant impact on lens density following exposure of the crystalline lens to ultraviolet A and riboflavin free radicals in the CXL procedure.

  18. Generalized free-space diffuse photon transport model based on the influence analysis of a camera lens diaphragm.

    Science.gov (United States)

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie

    2010-10-10

    The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in the existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with an analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is also adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance theorem based model demonstrates the improved performance and potential of the proposed model for simulating the photon transport process in free space.

  19. Development and Optical Testing of the Camera, Hand Lens, and Microscope Probe with Scannable Laser Spectroscopy (CHAMP-SLS)

    Science.gov (United States)

    Mungas, Greg S.; Gursel, Yekta; Sepulveda, Cesar A.; Anderson, Mark; La Baw, Clayton; Johnson, Kenneth R.; Deans, Matthew; Beegle, Luther; Boynton, John

    2008-01-01

    Conducting high resolution field microscopy with coupled laser spectroscopy that can be used to selectively analyze the surface chemistry of individual pixels in a scene is an enabling capability for next-generation robotic and manned spaceflight missions as well as civil and military applications. In the laboratory, we use a range of imaging and surface preparation tools that provide us with in-focus images, context imaging for identifying features that we want to investigate at high magnification, and surface-optical coupling that allows us to apply optical spectroscopic techniques for analyzing surface chemistry, particularly at high magnifications. The camera, hand lens, and microscope probe with scannable laser spectroscopy (CHAMP-SLS) is an imaging/spectroscopy instrument capable of imaging continuously from infinity down to high-resolution microscopy (approximately 1 micron/pixel in the final camera format); the closer CHAMP-SLS is placed to a feature, the higher the resulting magnification. At hand-lens to microscopic magnifications, the imaged scene can be selectively interrogated with point spectroscopic techniques such as Raman spectroscopy, microscopic laser-induced breakdown spectroscopy (micro-LIBS), laser ablation mass spectrometry, fluorescence spectroscopy, and/or reflectance spectroscopy. This paper summarizes the optical design, development, and testing of the CHAMP-SLS optics.

  20. Accurate and cost-effective MTF measurement system for lens modules of digital cameras

    Science.gov (United States)

    Chang, Gao-Wei; Liao, Chia-Cheng; Yeh, Zong-Mu

    2007-01-01

    For many years, the widening use of digital imaging products, e.g., digital cameras, has attracted much attention in the consumer electronics market. However, it is important to measure and enhance the imaging performance of digital cameras compared to that of conventional cameras (with photographic film). For example, the effect of diffraction arising from the miniaturization of the optical modules tends to decrease the image resolution. As a figure of merit, the modulation transfer function (MTF) has been broadly employed to estimate image quality. The objective of this paper is therefore to design and implement an accurate and cost-effective MTF measurement system for digital cameras. Once the MTF of the sensor array is provided, that of the optical module can then be obtained. In this approach, a spatial light modulator (SLM) is employed to modulate the spatial frequency of light emitted from the light source. The modulated light passing through the camera under test is consecutively detected by the sensors. The corresponding images formed by the camera are acquired by a computer and then processed by an algorithm for computing the MTF. Finally, through an investigation of the measurement accuracy of various methods, such as the bar-target and spread-function methods, our approach appears to give quite satisfactory results.
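
    With a pattern of known spatial frequency presented by the SLM, the MTF at that frequency is commonly taken as the ratio of the modulation depth measured in the captured image to that of the target, with M = (Imax - Imin) / (Imax + Imin). A minimal sketch of that calculation, as an illustration of the general principle rather than the paper's specific algorithm:

        # Contrast-based MTF estimate at one spatial frequency (illustrative sketch).
        import numpy as np

        def modulation(profile):
            """Modulation depth of an intensity profile across the pattern."""
            imax, imin = float(np.max(profile)), float(np.min(profile))
            return (imax - imin) / (imax + imin)

        def mtf_at_frequency(captured_profile, target_modulation=1.0):
            return modulation(captured_profile) / target_modulation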

  1. Feasibility study of a lens-coupled charge-coupled device gamma camera

    International Nuclear Information System (INIS)

    Lee, Hakjae; Jung, Youngjun; Kim, Jungmin; Bae, Seungbin; Lee, Kisung; Kang, Jungwon

    2011-01-01

    A charge-coupled device (CCD) is generally used in a digital camera as a light-collecting device, much like a photomultiplier tube (PMT). Because of their low sensitivity and very high dark current, CCDs have not been widely used for gamma imaging systems. However, a recent CCD technological breakthrough has improved CCD sensitivity, and the use of a Peltier cooling system can significantly minimize the dark current. In this study, we investigated the feasibility of a prototype CCD gamma camera consisting of a CsI scintillator, optical lenses, and a CCD module. Although electron-multiplying (EM) CCDs offer higher performance, in this study we built a cost-effective system consisting of low-cost components compared to EMCCDs. Our prototype detector consists of a CsI scintillator, two optical lenses, and a conventional Peltier-cooled CCD. The performance of this detector was evaluated by measuring the sensitivity, resolution, and modulation transfer function (MTF). The sensitivity of the prototype detector showed excellent linearity. With a 1 mm-diameter pinhole collimator, the full width at half-maximum (FWHM) of a 1.1 mm Tc-99m line source image was 2.85 mm. These results show that the developed prototype camera is feasible for small animal gamma imaging.

  2. Optimization design of periscope type 3X zoom lens design for a five megapixel cellphone camera

    Science.gov (United States)

    Sun, Wen-Shing; Tien, Chuen-Lin; Pan, Jui-Wen; Chao, Yu-Hao; Chu, Pu-Yi

    2016-11-01

    This paper presents a periscope-type 3X zoom lens design for a five-megapixel cellphone camera. The optical configuration places a right-angle prism in front of the zoom lenses to fold the optical path by 90°, resulting in a zoom lens length of 6 mm, so the zoom lens can be embedded in a mobile phone 6 mm thick. The zoom lens has three groups with six elements. The half field of view varies from 30° to 10.89°, the effective focal length is adjusted from 3.142 mm to 9.426 mm, and the F-number changes from 2.8 to 5.13.
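
    The quoted specifications are mutually consistent under the usual rectilinear relation between focal length, half field of view, and image height: the focal-length range gives the 3X ratio, and f*tan(half FOV) yields the same image half-diagonal at both ends of the zoom. A quick check (assuming that rectilinear relation):

        import math

        f_wide, f_tele = 3.142, 9.426        # effective focal lengths, mm
        hfov_wide, hfov_tele = 30.0, 10.89   # half fields of view, degrees

        print(f_tele / f_wide)                               # ~3.0, the 3X zoom ratio
        print(f_wide * math.tan(math.radians(hfov_wide)))    # ~1.81 mm image half-diagonal
        print(f_tele * math.tan(math.radians(hfov_tele)))    # ~1.81 mm, same sensor as expected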

  3. Scheimpflug camera combined with placido-disk corneal topography and optical biometry for intraocular lens power calculation.

    Science.gov (United States)

    Kirgiz, Ahmet; Atalay, Kurşat; Kaldirim, Havva; Cabuk, Kubra Serefoglu; Akdemir, Mehmet Orcun; Taskapili, Muhittin

    2017-08-01

    The purpose of this study was to compare the keratometry (K) values obtained by the Scheimpflug camera combined with placido-disk corneal topography (Sirius) and optical biometry (Lenstar) for intraocular lens (IOL) power calculation before cataract surgery, and to evaluate the accuracy of postoperative refraction. Fifty eyes of 40 patients were scheduled to have phacoemulsification with implantation of a posterior chamber intraocular lens. The IOL power was calculated using the SRK/T formula with Lenstar K and K readings from Sirius. Simulated K (SimK) and K at the 3-, 5-, and 7-mm zones from Sirius were compared with Lenstar K readings. The accuracy of these parameters was determined by calculating the mean absolute error (MAE). The mean Lenstar K value was 44.05 diopters (D) ±1.93 (SD), and SimK and K at the 3-, 5-, and 7-mm zones were 43.85 ± 1.91, 43.88 ± 1.9, 43.84 ± 1.9, and 43.66 ± 1.85 D, respectively. There was no statistically significant difference between the K readings (P = 0.901). When Lenstar was used for the corneal power measurements, the MAE was 0.42 ± 0.33 D; when SimK from Sirius was used, it was 0.37 ± 0.32 D, and the lowest MAE (0.36 ± 0.32 D) was achieved with the 5-mm K measurement, but the difference was not statistically significant (P = 0.892). Of all the K readings from Sirius and Lenstar, the Sirius 5-mm zone K readings were the best at predicting a more precise IOL power. Corneal power measurements with the Scheimpflug camera combined with placido-disk corneal topography can be safely used for IOL power calculation.
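
    The MAE quoted here is the mean absolute difference between the refraction predicted with each keratometry source and the refraction actually achieved after surgery. A minimal sketch of the statistic with hypothetical spherical-equivalent values (not study data):

        # Mean absolute error of predicted vs. achieved refraction (hypothetical values).
        import numpy as np

        predicted_se = np.array([-0.25, 0.50, 0.00, -0.75])   # predicted postoperative SE (D)
        achieved_se = np.array([0.25, 0.25, -0.50, -0.25])    # measured postoperative SE (D)

        mae = float(np.mean(np.abs(predicted_se - achieved_se)))
        print(round(mae, 2))  # 0.44 D here, of the same order as the reported 0.36-0.42 D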

  4. Commercialization of radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation tolerant camera which tolerates a total dose of 10⁶ - 10⁸ rad was developed. In order to develop the radiation tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was performed. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, and pan/tilt controllers) were designed on the basis of remote control. Two types of radiation tolerant camera were fabricated, for use in underwater or normal environments. (author)

  5. Commercialization of radiation tolerant camera

    International Nuclear Information System (INIS)

    Lee, Yong Bum; Choi, Young Soo; Kim, Sun Ku; Lee, Jong Min; Cha, Bung Hun; Lee, Nam Ho; Byun, Eiy Gyo; Yoo, Seun Wook; Choi, Bum Ki; Yoon, Sung Up; Kim, Hyun Gun; Sin, Jeong Hun; So, Suk Il

    1999-12-01

    In this project, a radiation tolerant camera which tolerates a total dose of 10⁶ - 10⁸ rad was developed. In order to develop the radiation tolerant camera, the radiation effects on camera components were examined and evaluated, and the camera configuration was studied. Based on the evaluation results, the components were selected and the design was performed. A vidicon tube was selected as the image sensor, and non-browning optics and a camera driving circuit were applied. The controllers needed for the CCTV camera system (lens, light, and pan/tilt controllers) were designed on the basis of remote control. Two types of radiation tolerant camera were fabricated, for use in underwater or normal environments. (author)

  6. Diffusion of Co and W in diamond tool induced by 10.6 µm CO2 laser radiation

    CSIR Research Space (South Africa)

    Masina, Bathusile N

    2011-05-01

    Full Text Available Experimental setup: CO2 laser, ZnSe lens (f = 250 mm), HPHT diamond sample, infrared camera...

  7. Design and development of a zoom lens objective for the fast breeder test reactor periscope

    International Nuclear Information System (INIS)

    Das, N.C.; Udupa, D.V.; Shukla, R.P.

    2003-10-01

    A three-lens, optically compensated zoom objective for the 5-meter-long periscope in the Fast Breeder Test Reactor (FBTR) has been designed, fabricated, and tested. The zoom lens, fabricated using radiation-resistant glasses, has a zoom ratio of 2.5 with a focal length range of 100 mm to 250 mm. The zoom objective has been designed for viewing objects at distances in the range of 1.5 m to 3 m from the objective lens. It is found that the zoom objective can resolve objects with a linear resolution of 0.2 mm inside the reactor when viewed with an eyepiece of focal length 50 mm. (author)

  8. Lens decenter and tilt measurement by interferogram

    Science.gov (United States)

    Hung, Min-Wei; Wu, Wen-Hong; Huang, Kuo-Cheng

    2009-11-01

    In recent years, the vigorous development of the electro-optic industry, particularly digital cameras and cellular phone cameras, has placed an ever larger demand on optical devices. Among optical lenses, the aspherical lens is the key component because it can provide better imaging quality than a spherical lens. For manufacturing reasons, an aspherical lens is prone to decenter or tilt between the optical axes of its two surfaces. Measuring decenter and tilt error specifically would help to screen out deficient lenses, but most present measuring methods cannot provide this function. This paper proposes a new method to specifically measure the decenter and tilt of a lens by observing the interferogram of each surface. The corresponding measuring instrument, which contains an interferometer and motion stages, is introduced as well.

  9. Estimating rice yield related traits and quantitative trait loci analysis under different nitrogen treatments using a simple tower-based field phenotyping system with modified single-lens reflex cameras

    Science.gov (United States)

    Naito, Hiroki; Ogawa, Satoshi; Valencia, Milton Orlando; Mohri, Hiroki; Urano, Yutaka; Hosoi, Fumiki; Shimizu, Yo; Chavez, Alba Lucia; Ishitani, Manabu; Selvaraj, Michael Gomez; Omasa, Kenji

    2017-03-01

    Application of field-based high-throughput phenotyping (FB-HTP) methods for monitoring plant performance in real field conditions has a high potential to accelerate the breeding process. In this paper, we discuss the use of a simple tower-based remote sensing platform using modified single-lens reflex cameras for phenotyping yield traits in rice under different nitrogen (N) treatments over three years. This tower-based phenotyping platform has the advantages of simplicity, ease, and stability in terms of introduction, maintenance, and continual operation under field conditions. Of the six phenological stages of rice analyzed, the flowering stage was the most useful for the estimation of yield performance under field conditions. We found a high correlation between several vegetation indices (simple ratio (SR), normalized difference vegetation index (NDVI), transformed vegetation index (TVI), corrected transformed vegetation index (CTVI), soil-adjusted vegetation index (SAVI) and modified soil-adjusted vegetation index (MSAVI)) and multiple yield traits (panicle number, grain weight and shoot biomass) across three trials. Among all of the indices studied, SR exhibited the best performance in the estimation of grain weight (R2 = 0.80). Under our tower-based field phenotyping system (TBFPS), we identified quantitative trait loci (QTL) for yield related traits using a mapping population of chromosome segment substitution lines (CSSLs) and a single nucleotide polymorphism data set. Our findings suggest the TBFPS can be useful for the estimation of yield performance during early crop development. This can be a major opportunity for rice breeders who desire high throughput phenotypic selection for yield performance traits.
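
    Two of the indices listed have simple, standard definitions: SR = NIR / Red and NDVI = (NIR - Red) / (NIR + Red). A short sketch of how they would be computed from calibrated band values (the per-plot reflectances below are hypothetical):

        # Standard vegetation index definitions; band values are illustrative.
        import numpy as np

        def simple_ratio(nir, red):
            return nir / red

        def ndvi(nir, red):
            return (nir - red) / (nir + red)

        nir = np.array([0.42, 0.55, 0.61])   # hypothetical per-plot NIR reflectance
        red = np.array([0.08, 0.06, 0.05])   # hypothetical per-plot red reflectance
        print(simple_ratio(nir, red))        # SR, the index best correlated with grain weight
        print(ndvi(nir, red))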

  10. Polarizing aperture stereoscopic cinema camera

    Science.gov (United States)

    Lipton, Lenny

    2012-07-01

    The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor, the size of the standard 35 mm frame, with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies will be unaltered.

  11. Objective lens

    Science.gov (United States)

    Olczak, Eugene G. (Inventor)

    2011-01-01

    An objective lens and a method for using same. The objective lens has a first end, a second end, and a plurality of optical elements. The optical elements are positioned between the first end and the second end and are at least substantially symmetric about a plane centered between the first end and the second end.

  12. Single Camera Calibration in 3D Vision

    Directory of Open Access Journals (Sweden)

    Caius SULIMAN

    2009-12-01

    Full Text Available Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when the parameters of the camera are known (i.e., principal distance, lens distortion, focal length, etc.). In this paper we deal with a single-camera calibration method, and with the help of this method we try to find the intrinsic and extrinsic camera parameters. The method was implemented with success in the programming and simulation environment Matlab.
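
    The paper's implementation is in Matlab; as an illustration of the same workflow, the sketch below runs a planar-checkerboard calibration with OpenCV in Python. The file pattern and board size are hypothetical, and this is an equivalent procedure rather than the authors' code.

        # Hedged sketch of single-camera calibration from checkerboard images.
        import glob
        import cv2
        import numpy as np

        pattern = (9, 6)  # inner-corner count of the (hypothetical) checkerboard
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

        obj_pts, img_pts = [], []
        for fname in glob.glob("calib_*.png"):  # hypothetical image set
            gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)

        # Returns the RMS reprojection error, the intrinsic matrix (focal length,
        # principal point), lens distortion coefficients, and per-view extrinsics.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_pts, img_pts, gray.shape[::-1], None, None)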

  13. EVALUATION OF THE QUALITY OF ACTION CAMERAS WITH WIDE-ANGLE LENSES IN UAV PHOTOGRAMMETRY

    OpenAIRE

    Hastedt, H.; Ekkel, T.; Luhmann, T.

    2016-01-01

    The application of light-weight cameras in UAV photogrammetry is required due to restrictions in payload. In general, consumer cameras with normal lens type are applied to a UAV system. The availability of action cameras, like the GoPro Hero4 Black, including a wide-angle lens (fish-eye lens) offers new perspectives in UAV projects. With these investigations, different calibration procedures for fish-eye lenses are evaluated in order to quantify their accuracy potential in UAV photogrammetry....

  14. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, Ul; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is replaceably mounted in the ray inlet opening of the camera, while the others are placed on separate supports. Supports are swingably mounted upon a column one above the other

  15. Gamma camera

    International Nuclear Information System (INIS)

    Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    The design of a collimation system for a gamma camera for use in nuclear medicine is described. When used with a 2-dimensional position-sensitive radiation detector, the novel system can produce better images than conventional cameras. The optimal thickness and positions of the collimators are derived mathematically. (U.K.)

  16. Picosecond camera

    International Nuclear Information System (INIS)

    Decroisette, Michel

    A Kerr cell activated by infrared pulses of a mode-locked Nd:glass laser acts as an ultra-fast periodic shutter with an opening time of a few ps. Associated with an S.T.L. camera, it gives rise to a picosecond camera allowing us to study very fast effects [fr

  17. Characterization of lens based photoacoustic imaging system

    Directory of Open Access Journals (Sweden)

    Kalloor Joseph Francis

    2017-12-01

    Full Text Available Some of the challenges in translating photoacoustic (PA) imaging to clinical applications include the limited view of the target tissue, low signal-to-noise ratio, and the high cost of developing real-time systems. Acoustic lens based PA imaging systems, also known as PA cameras, are a potential alternative to conventional imaging systems in these scenarios. The 3D focusing action of the lens enables real-time C-scan imaging with a 2D transducer array. In this paper, we model the underlying physics of a PA camera in the mathematical framework of an imaging system and derive a closed-form expression for the point spread function (PSF). Experimental verification follows, including details on how to design and fabricate the lens inexpensively. The system PSF is evaluated over the 3D volume that can be imaged by this PA camera. Its utility is demonstrated by imaging a phantom and an ex vivo human prostate tissue sample.

  18. Characterization of lens based photoacoustic imaging system.

    Science.gov (United States)

    Francis, Kalloor Joseph; Chinni, Bhargava; Channappayya, Sumohana S; Pachamuthu, Rajalakshmi; Dogra, Vikram S; Rao, Navalgund

    2017-12-01

    Some of the challenges in translating photoacoustic (PA) imaging to clinical applications include the limited view of the target tissue, low signal-to-noise ratio, and the high cost of developing real-time systems. Acoustic lens based PA imaging systems, also known as PA cameras, are a potential alternative to conventional imaging systems in these scenarios. The 3D focusing action of the lens enables real-time C-scan imaging with a 2D transducer array. In this paper, we model the underlying physics of a PA camera in the mathematical framework of an imaging system and derive a closed-form expression for the point spread function (PSF). Experimental verification follows, including details on how to design and fabricate the lens inexpensively. The system PSF is evaluated over the 3D volume that can be imaged by this PA camera. Its utility is demonstrated by imaging a phantom and an ex vivo human prostate tissue sample.

  19. Gamma camera

    International Nuclear Information System (INIS)

    Tschunt, E.; Platz, W.; Baer, U.; Heinz, L.

    1978-01-01

    A gamma camera has a plurality of exchangeable collimators, one of which is mounted in the ray inlet opening of the camera, while the others are placed on separate supports. The supports are swingably mounted upon a column one above the other through about 90° to a collimator exchange position. Each of the separate supports is swingable to a vertically aligned position, with limiting of the swinging movement and positioning of the support at the desired exchange position. The collimators are carried on the supports by means of a series of vertically disposed coil springs. Projections on the camera are movable from above into grooves of the collimator at the exchange position, whereupon the collimator is turned so that it is securely prevented from falling out of the camera head

  20. TESS Lens-Bezel Assembly Modal Testing

    Science.gov (United States)

    Dilworth, Brandon J.; Karlicek, Alexandra

    2017-01-01

    The Transiting Exoplanet Survey Satellite (TESS) program, led by the Kavli Institute for Astrophysics and Space Research at the Massachusetts Institute of Technology (MIT), will be the first-ever spaceborne all-sky transit survey. MIT Lincoln Laboratory is responsible for the cameras, including the lens assemblies, detector assemblies, lens hoods, and camera mounts. TESS is scheduled to be launched in August of 2017 with the primary goal to detect small planets with bright host stars in the solar neighborhood, so that detailed characterizations of the planets and their atmospheres can be performed. The TESS payload consists of four identical cameras and a data handling unit. Each camera consists of a lens assembly with seven optical elements and a detector assembly with four charge-coupled devices (CCDs) including their associated electronics. The optical prescription requires that several of the lenses are in close proximity to a neighboring element. A finite element model (FEM) was developed to estimate the relative deflections between each lens-bezel assembly under launch loads and to verify that there are adequate clearances preventing the lenses from making contact. Modal tests using non-contact response measurements were conducted to experimentally estimate the modal parameters of the lens-bezel assembly and were used to validate the initial FEM assumptions. Key words: non-contact measurements, modal analysis, model validation

  1. Lens Model

    DEFF Research Database (Denmark)

    Nash, Ulrik William

    2014-01-01

    Firms consist of people who make decisions to achieve goals. How do these people develop the expectations which underpin the choices they make? The lens model provides one answer to this question. It was developed by cognitive psychologist Egon Brunswik (1952) to illustrate his theory of probabil...

  2. Coaxial fundus camera for ophthalmology

    Science.gov (United States)

    de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.

    2015-09-01

    A fundus camera for ophthalmology is a high-definition device which needs to provide low-light illumination of the human retina, high resolution at the retina, and a reflection-free image. Those constraints make its optical design very sophisticated, but the most difficult to comply with are the reflection-free illumination and the final alignment, due to the high number of non-coaxial optical components in the system. Reflections of the illumination, both in the objective and at the cornea, mask image quality, and a poor alignment makes the sophisticated optical design useless. In this work we developed a totally axial optical system for a non-mydriatic fundus camera. The illumination is performed by a LED ring, coaxial with the optical system and composed of IR and visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator, and CCD lens are coaxial, making the final alignment easy to perform. The CCD + capture lens module is a CCTV camera with built-in autofocus and zoom, added to a 175 mm focal length doublet corrected for infinity, making the system easy to operate and very compact.

  3. Liquid lens: advances in adaptive optics

    Science.gov (United States)

    Casey, Shawn Patrick

    2010-12-01

    'Liquid lens' technologies promise significant advancements in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications are used to exemplify the versatile nature of this technology. Utilization of liquid lens elements allows the cost effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.

  4. Algorithm design of liquid lens inspection system

    Science.gov (United States)

    Hsieh, Lu-Lin; Wang, Chun-Chieh

    2008-08-01

    In the mobile lens domain, glass lenses are often applied where high resolution is required, but a glass zoom lens must be collocated with movable machinery and a voice-coil motor, which imposes space limits on miniaturized designs. With the development of high-level molding component technology, the liquid lens has become a focus of mobile phone and digital camera companies. A liquid lens set combined with solid optical lenses and a driving circuit can replace the original components; as a result, the volume requirement is decreased to merely 50% of the original design. Besides, with its high focus-adjustment speed, low energy requirement, high durability, and low-cost manufacturing process, the liquid lens shows advantages in a competitive market. In the past, the authors only needed to inspect scrape defects caused by external force on glass lenses. For the liquid lens, the state of four different structural layers must be inspected because of its different design and structure. In this paper, the authors apply machine vision and digital image processing technology to perform inspections of a particular layer according to the needs of users. According to our experimental results, the proposed algorithm can automatically delete the out-of-focus background, extract the region of interest, and find and analyze defects efficiently in the particular layer. In the future, the authors will combine the algorithm with autofocus technology to implement inside inspection based on product inspection demands.
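
    The steps the abstract describes, removing the out-of-focus background, extracting the region of interest, and flagging defects in a given layer, can be sketched roughly as below. The operations and thresholds are illustrative assumptions, not the authors' algorithm.

        # Hedged sketch of a background-removal / ROI / defect-detection pipeline.
        import cv2

        def inspect_layer(gray):
            # 1) Suppress the out-of-focus background with Otsu thresholding
            _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            # 2) Take the largest connected region as the region of interest
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            roi = gray[y:y + h, x:x + w]
            # 3) Flag defects as strong local deviations from a median-smoothed reference
            residual = cv2.absdiff(roi, cv2.medianBlur(roi, 21))
            _, defects = cv2.threshold(residual, 25, 255, cv2.THRESH_BINARY)
            return roi, defects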

  5. A Motionless Camera

    Science.gov (United States)

    1994-01-01

    Omniview, a motionless, noiseless, exceptionally versatile camera was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.

  6. Comparative evaluation of consumer grade cameras and mobile phone cameras for close range photogrammetry

    Science.gov (United States)

    Chikatsu, Hirofumi; Takahashi, Yoji

    2009-08-01

    The authors have been concentrating on developing convenient 3D measurement methods using consumer-grade digital cameras, and concluded that consumer-grade digital cameras can be expected to become a useful photogrammetric device for various close range application fields. Meanwhile, mobile phone cameras with 10 megapixels have appeared on the market in Japan. In these circumstances, we face the epoch-making question of whether mobile phone cameras are able to take the place of consumer-grade digital cameras in close range photogrammetric applications. In order to evaluate the potential of mobile phone cameras in close range photogrammetry, this paper presents a comparative evaluation between mobile phone cameras and consumer-grade digital cameras with respect to lens distortion, reliability, stability, and robustness. Calibration tests for 16 mobile phone cameras and 50 consumer-grade digital cameras were conducted indoors using a test target. Furthermore, the practicability of mobile phone cameras for close range photogrammetry was evaluated outdoors. This paper shows that mobile phone cameras have the ability to take the place of consumer-grade digital cameras and to develop the market in digital photogrammetric fields.

  7. Distribution and Parameter's Calculations of Television Cameras Inside a Nuclear Facility

    International Nuclear Information System (INIS)

    El-kafas, A.A.

    2009-01-01

    In this work, the distribution of television cameras and the calculation of their parameters inside and outside a nuclear facility are presented. The exterior and interior camera systems are each described and explained. The work shows the overall closed-circuit television system. Fixed and moving cameras with various lens formats and different angles of view are used. The calculation of the width of the image sensitive area and of the lens focal length for the cameras is introduced. The work shows the camera locations and distributions inside and outside the nuclear facility. The technical specifications and parameters for camera selection are tabulated.
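
    The calculations referred to, the scene width covered by a given sensor and lens at a given distance, and conversely the focal length needed to frame a given area, follow from the basic pinhole relations. A small worked sketch with illustrative numbers (not values from the paper):

        # Pinhole-camera coverage relations; all numbers are illustrative.
        def field_width(sensor_width_mm, focal_length_mm, distance_m):
            return sensor_width_mm / focal_length_mm * distance_m

        def required_focal_length(sensor_width_mm, scene_width_m, distance_m):
            return sensor_width_mm * distance_m / scene_width_m

        print(field_width(6.4, 8.0, 20.0))            # 1/2" sensor, 8 mm lens: ~16 m wide at 20 m
        print(required_focal_length(6.4, 4.0, 20.0))  # ~32 mm lens to frame a 4 m wide area at 20 m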

  8. Projection-type X-ray microscope based on a spherical compound refractive X-ray lens

    OpenAIRE

    Dudchik, Yu. I.; Gary, C. K.; Park, H.; Pantell, R. H.; Piestrup, M. A.

    2007-01-01

    A new projection-type X-ray microscope with a compound refractive lens as the optical element is presented. The microscope consists of an X-ray source that is 1-2 mm in diameter, a compound X-ray lens, and an X-ray camera, placed in line so as to satisfy the lens formula. The lens forms an image of the X-ray source at the camera's sensitive plate. An object is placed between the X-ray source and the lens, as close as possible to the source, and the camera records a shadow image of the object. Spatial resol...
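
    The in-line geometry "to satisfy the lens formula" is the ordinary thin-lens relation 1/f = 1/p + 1/q, with the source imaged at magnification q/p. A small worked example with hypothetical distances (the 250 mm focal length is illustrative, not a value from the paper):

        # Thin-lens imaging of the X-ray source by the compound refractive lens.
        def image_distance(f_mm, object_distance_mm):
            # 1/f = 1/p + 1/q  ->  q = p * f / (p - f)
            return object_distance_mm * f_mm / (object_distance_mm - f_mm)

        f, p = 250.0, 300.0          # hypothetical focal length and source-to-lens distance (mm)
        q = image_distance(f, p)
        print(q, q / p)              # image distance 1500 mm, lens magnification 5x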

  9. Scintillating camera

    International Nuclear Information System (INIS)

    Vlasbloem, H.

    1976-01-01

    The invention relates to a scintillating camera and in particular to an apparatus for determining the position coordinates of a light pulse emitting point on the anode of an image intensifier tube which forms part of a scintillating camera, comprising at least three photomultipliers which are positioned to receive light emitted by the anode screen on their photocathodes, circuit means for processing the output voltages of the photomultipliers to derive voltages that are representative of the position coordinates; a pulse-height discriminator circuit adapted to be fed with the sum voltage of the output voltages of the photomultipliers for gating the output of the processing circuit when the amplitude of the sum voltage of the output voltages of the photomultipliers lies in a predetermined amplitude range, and means for compensating the distortion introduced in the image on the anode screen

  10. Gamma camera

    International Nuclear Information System (INIS)

    Reiss, K.H.; Kotschak, O.; Conrad, B.

    1976-01-01

    A gamma camera with a simplified setup as compared with the state of engineering is described, permitting, apart from good localization, also energy discrimination. Behind the usual vacuum image amplifier, a multiwire proportional chamber filled with bromotrifluoromethane is connected in series. Localization of the signals is achieved by a delay line, and energy determination by means of a pulse-height discriminator. With the aid of drawings and circuit diagrams, the setup and mode of operation are explained. (ORU) [de

  11. Wavefront analysis for plenoptic camera imaging

    International Nuclear Information System (INIS)

    Luan Yin-Sen; Xu Bing; Yang Ping; Tang Guo-Mao

    2017-01-01

    The plenoptic camera is a single-lens stereo camera which can retrieve the direction of light rays while detecting their intensity distribution. In this paper, to reveal more about plenoptic camera imaging, we present a wavefront analysis of plenoptic camera imaging from the viewpoint of physical optics rather than the ray-tracing model of geometric optics. Specifically, the wavefront imaging model of a plenoptic camera is analyzed and simulated by scalar diffraction theory, and the depth estimation is redescribed on the basis of physical optics. We simulate a set of raw plenoptic images of an object scene, thereby validating the analysis and derivations, and the difference between the imaging analysis methods based on geometric optics and physical optics is also shown in the simulations. (paper)

  12. Camera processing with chromatic aberration.

    Science.gov (United States)

    Korneliussen, Jan Tore; Hirakawa, Keigo

    2014-10-01

    Since the refractive index of the materials commonly used for lenses depends on the wavelength of light, practical camera optics fail to converge light to a single point on the image plane. Known as chromatic aberration, this phenomenon distorts image details by introducing magnification error, defocus blur, and color fringes. Though achromatic and apochromatic lens designs reduce chromatic aberration to a degree, they are complex and expensive, and they do not offer a perfect correction. In this paper, we propose a new postcapture processing scheme designed to overcome these problems computationally. Specifically, the proposed solution comprises a chromatic aberration-tolerant demosaicking algorithm and a post-demosaicking chromatic aberration correction. Experiments with simulated and real sensor data verify that the chromatic aberration is effectively corrected.

  13. Gamma camera

    International Nuclear Information System (INIS)

    Berninger, W.H.

    1975-01-01

    The light pulse output of a scintillator, on which incident collimated gamma rays impinge, is detected by an array of photoelectric tubes each having a convexly curved photocathode disposed in close proximity to the scintillator. Electronic circuitry connected to outputs of the phototubes develops the scintillation event position coordinate electrical signals with good linearity and with substantial independence of the spacing between the scintillator and photocathodes so that the phototubes can be positioned as close to the scintillator as is possible to obtain less distortion in the field of view and improved spatial resolution as compared to conventional planar photocathode gamma cameras

  14. Radioisotope camera

    International Nuclear Information System (INIS)

    Tausch, L.M.; Kump, R.J.

    1978-01-01

    The electronic circuit corrects distortions caused by the distance between the individual photomultiplier tubes of the multiple radioisotope camera on the one hand and between the tube configuration and the scintillator plate on the other. For this purpose, the transmission characteristics of the nonlinear circuits are altered as a function of the energy of the incident radiation. By this means, the threshold values between lower and higher amplification are adjusted to the energy level of each scintillation. The correcting circuit may be used for any number of isotopes to be measured. (DG) [de

  15. Investigating the Suitability of Mirrorless Cameras in Terrestrial Photogrammetric Applications

    Science.gov (United States)

    Incekara, A. H.; Seker, D. Z.; Delen, A.; Acar, A.

    2017-11-01

    Digital single-lens reflex (DSLR) cameras, commonly referred to as mirrored cameras, are preferred for terrestrial photogrammetric applications such as documentation of cultural heritage, archaeological excavations and industrial measurements. Recently, digital cameras called mirrorless systems, which can be used with different lens combinations, have become available for similar applications. The main difference between these two camera types is the presence of the mirror mechanism, which means that the incoming beam passing through the lens reaches the sensor in a different way. In this study, two different digital cameras, one with a mirror (Nikon D700) and the other without a mirror (Sony a6000), were used for a close-range photogrammetric application on a rock surface at the Istanbul Technical University (ITU) Ayazaga Campus. The accuracy of the 3D models created from photographs taken with both cameras was compared using the differences between field and model coordinates obtained after the alignment of the photographs. In addition, cross sections were created on the 3D models for both data sources, and the maximum area difference between them is quite small because they are almost overlapping. The mirrored camera was more consistent with itself with respect to the change of model coordinates for models created from photographs taken at different times with almost the same ground sample distance. As a result, it has been determined that mirrorless cameras, and the point clouds produced using photographs obtained from these cameras, can be used for terrestrial photogrammetric studies.

  16. Converging or Diverging Lens?

    Science.gov (United States)

    Branca, Mario

    2013-01-01

    Why does a lens magnify? Why does it shrink objects? Why does this happen? The activities that we propose here are useful in helping us to understand how lenses work, and they show that the same lens can have different magnification capabilities. A converging lens can also act as a diverging lens. (Contains 4 figures.)

  17. Optical camera system for radiation field

    International Nuclear Information System (INIS)

    Maki, Koichi; Senoo, Makoto; Takahashi, Fuminobu; Shibata, Keiichiro; Honda, Takuro.

    1995-01-01

    An infrared camera comprises a transmitting filter used exclusively for infrared rays at a specific wavelength, such as far infrared rays, and a lens used exclusively for infrared rays. An infrared-emitter-incorporated photoelectric image converter, comprising an infrared-ray emitting device, a focusing lens and a semiconductor image pick-up plate, is disposed at a place of low gamma-ray dose rate. Infrared rays emitted from an objective member pass through the lens system of the camera, and real images are formed by way of the filter. They are transferred by image fibers, introduced to the photoelectric image converter and focused on the image pick-up plate by the image-forming lens. Further, they are converted into electric signals, introduced to a display and monitored. With such a constitution, an optical material used exclusively for infrared rays, for example ZnSe, can be used for the lens system and the optical transmission system. Accordingly, the camera can be used in a radiation field of high gamma-ray dose rate around the periphery of the reactor container. (I.N.)

  18. X-ray imaging using digital cameras

    Science.gov (United States)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.
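
    The abstract above reports an MTF value but not the procedure used to obtain it; as a hedged, generic illustration of how a figure such as "0.2 at around 2 lp/mm" can be derived from a measured line spread function, the sketch below uses a synthetic Gaussian spread (all numbers are made up):

```python
import numpy as np

# Synthetic line spread function: Gaussian blur sampled on a fine grid (hypothetical values).
pixel_pitch_mm = 0.05                      # 50 micron sampling
x = (np.arange(512) - 256) * pixel_pitch_mm
sigma_mm = 0.15                            # stand-in for light spread in the storage phosphor
lsf = np.exp(-0.5 * (x / sigma_mm) ** 2)

# MTF = normalised magnitude of the Fourier transform of the line spread function.
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freq_lp_per_mm = np.fft.rfftfreq(x.size, d=pixel_pitch_mm)

# Frequency at which the MTF first drops below 0.2.
f_02 = freq_lp_per_mm[np.argmax(mtf < 0.2)]
print(f"MTF falls to 0.2 at about {f_02:.2f} lp/mm")
```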

  19. Gamma camera

    International Nuclear Information System (INIS)

    Conrad, B.; Heinzelmann, K.G.

    1975-01-01

    A gamma camera is described which obviates the distortion of locating signals generally caused by the varied light-conductive capacities of the light conductors, in that the flow of light through each light conductor may be varied by means of a shutter. The flow of light through the individual light conductors, or groups of light conductors, can thus be balanced on the basis of their light-conductive capacities or properties, so as to preclude a distortion of the locating signals caused by their varied light-conductive properties. Each light conductor has associated with it two shutters that are independently adjustable relative to each other, of which one forms a closure member and the other an adjusting shutter. In this embodiment of the invention it is thus possible to block all of the light conductors leading to a photoelectric transducer, with the exception of those light conductors which are to be balanced. The balancing of the individual light conductors may then be carried out on the basis of the output signals of the photoelectric transducer. (auth)

  20. Scintillation camera

    International Nuclear Information System (INIS)

    Zioni, J.; Klein, Y.; Inbar, D.

    1975-01-01

    The scintillation camera is intended to produce pictures of the density distribution of radiation fields created by the injection or administration of radioactive medicaments into the body of the patient. It contains a scintillation crystal, several photomultipliers and computing circuits that obtain, at the photomultiplier outputs, an analytical function dependent on the position of the scintillations in the crystal. The scintillation crystal is flat and spatially corresponds to the production site of the radiation. The photomultipliers form a pattern whose basic unit consists of at least three photomultipliers. They are assigned to at least two crossing groups of parallel series, and a reference axis in the crystal plane, running perpendicular to the series, belongs to each series group. The computing circuits are each assigned to a reference axis. Each series of a series group assigned to one of the reference axes has, in the computing circuit, an adder to produce a scintillation-dependent series signal. Furthermore, the projection of the scintillation onto this reference axis is calculated, using a series signal that originates from a series chosen from the two neighbouring photomultiplier series of this group between which the scintillation appeared; these are termed the basic series. The photomultipliers can be arranged hexagonally or rectangularly. (GG/LH) [de

  1. Single lens to lens duplication: The missing link

    OpenAIRE

    Bhatt, Rupal; Jethani, Jitendra; Saluja, Praveen; Bharti, Vinay

    2008-01-01

    Congenital anomalies of the lens cover a wide range, from lens coloboma to primary aphakia and doubling of the lens. There have been few case reports of a double lens; the suggested etiology is metaplastic change in the surface ectoderm that leads to the formation of two lens vesicles and hence results in a double lens. We report a case with a bilobed lens, which raises the possibility of explaining the etiology of the double lens.

  2. CCD camera system for use with a streamer chamber

    International Nuclear Information System (INIS)

    Angius, S.A.; Au, R.; Crawley, G.C.; Djalali, C.; Fox, R.; Maier, M.; Ogilvie, C.A.; Molen, A. van der; Westfall, G.D.; Tickle, R.S.

    1988-01-01

    A system based on three charge-coupled-device (CCD) cameras is described here. It has been used to acquire images from a streamer chamber and consists of three identical subsystems, one for each camera. Each subsystem contains an optical lens, CCD camera head, camera controller, an interface between the CCD and a microprocessor, and a link to a minicomputer for data recording and on-line analysis. Image analysis techniques have been developed to enhance the quality of the particle tracks. Some steps have been made to automatically identify tracks and reconstruct the event. (orig.)

  3. Projection model for flame chemiluminescence tomography based on lens imaging

    Science.gov (United States)

    Wan, Minggang; Zhuang, Jihui

    2018-04-01

    For flame chemiluminescence tomography (FCT) based on lens imaging, the projection model is essential because it formulates the mathematical relation between the flame projections captured by cameras and the chemiluminescence field; through this relation, the field is reconstructed. This work proposes the blurry-spot (BS) model, which rests on more general assumptions and has higher accuracy than the widely applied line-of-sight model. By combining the geometrical camera model and the thin-lens equation, the BS model takes into account the perspective effect of the camera lens; by combining a ray-tracing technique and Monte Carlo simulation, it also considers the inhomogeneous distribution of captured radiance on the image plane. The performance of these two models in FCT was numerically compared, and the results showed that using the BS model can lead to better reconstruction quality over wider application ranges.
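
    As a rough sketch of what combining a geometrical camera model with the thin-lens equation can look like in practice (this is a generic pinhole-plus-defocus illustration, not the authors' BS model, and all parameters are hypothetical):

```python
import numpy as np

def project_pinhole(point_cam, f_mm, pixel_mm, c_xy):
    """Perspective projection of a point in camera coordinates (mm) to pixel coordinates."""
    x, y, z = point_cam
    u = f_mm * x / z / pixel_mm + c_xy[0]
    v = f_mm * y / z / pixel_mm + c_xy[1]
    return np.array([u, v])

def blur_diameter_mm(f_mm, f_number, focus_dist_mm, object_dist_mm):
    """Geometric blur-circle diameter from the thin-lens equation for a defocused object."""
    aperture = f_mm / f_number
    img_focus = 1.0 / (1.0 / f_mm - 1.0 / focus_dist_mm)   # sensor position for the focused distance
    img_obj = 1.0 / (1.0 / f_mm - 1.0 / object_dist_mm)    # where this object would come to focus
    return aperture * abs(img_obj - img_focus) / img_obj

# Hypothetical camera: 50 mm f/2 lens focused at 1 m, flame element located 1.2 m away.
print(project_pinhole((10.0, -5.0, 1200.0), 50.0, 0.005, (640, 480)))
print(f"blur diameter ~ {blur_diameter_mm(50.0, 2.0, 1000.0, 1200.0):.3f} mm on the sensor")
```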

  4. Contact Lens Care

    Science.gov (United States)

    Consumer information for women from the U.S. Food and Drug Administration on contact lens care, with links to further tips on contact lenses and decorative contact lenses (www.fda.gov/medwatch).

  5. The Light Field Attachment: Turning a DSLR into a Light Field Camera Using a Low Budget Camera Ring

    KAUST Repository

    Wang, Yuwang

    2016-11-16

    We propose a concept for a lens attachment that turns a standard DSLR camera and lens into a light field camera. The attachment consists of 8 low-resolution, low-quality side cameras arranged around the central high-quality SLR lens. Unlike most existing light field camera architectures, this design provides a high-quality 2D image mode, while simultaneously enabling a new high-quality light field mode with a large camera baseline but little added weight, cost, or bulk compared with the base DSLR camera. From an algorithmic point of view, the high-quality light field mode is made possible by a new light field super-resolution method that first improves the spatial resolution and image quality of the side cameras and then interpolates additional views as needed. At the heart of this process is a super-resolution method that we call iterative Patch- And Depth-based Synthesis (iPADS), which combines patch-based and depth-based synthesis in a novel fashion. Experimental results obtained for both real captured data and synthetic data confirm that our method achieves substantial improvements in super-resolution for side-view images as well as the high-quality and view-coherent rendering of dense and high-resolution light fields.

  6. Curiosity's Mars Hand Lens Imager (MAHLI): Initial Observations and Activities

    Science.gov (United States)

    Edgett, K. S.; Yingst, R. A.; Minitti, M. E.; Robinson, M. L.; Kennedy, M. R.; Lipkaman, L. J.; Jensen, E. H.; Anderson, R. C.; Bean, K. M.; Beegle, L. W.; et al.

    2013-01-01

    MAHLI (Mars Hand Lens Imager) is a 2-megapixel focusable macro lens color camera on the turret on Curiosity's robotic arm. The investigation centers on stratigraphy, grain-scale texture, structure, mineralogy, and morphology of geologic materials at Curiosity's Gale robotic field site. MAHLI acquires focused images at working distances of 2.1 cm to infinity; for reference, at 2.1 cm the scale is 14 microns/pixel; at 6.9 cm it is 31 microns/pixel, like the Spirit and Opportunity Microscopic Imager (MI) cameras.

  7. Pentacam Scheimpflug quantitative imaging of the crystalline lens and intraocular lens.

    Science.gov (United States)

    Rosales, Patricia; Marcos, Susana

    2009-05-01

    To implement geometrical and optical distortion correction methods for anterior segment Scheimpflug images obtained with a commercially available system (Pentacam, Oculus Optikgeräte GmbH). Ray tracing algorithms were implemented to obtain corrected ocular surface geometry from the original images captured by the Pentacam's CCD camera. As details of the optical layout were not fully provided by the manufacturer, an iterative procedure (based on imaging of calibrated spheres) was developed to estimate the camera lens specifications. The correction procedure was tested on Scheimpflug images of a physical water cell model eye (with polymethylmethacrylate cornea and a commercial IOL of known dimensions) and of a normal human eye previously measured with a corrected optical and geometrical distortion Scheimpflug camera (Topcon SL-45 [Topcon Medical Systems Inc] from the Vrije University, Amsterdam, Holland). Uncorrected Scheimpflug images show flatter surfaces and thinner lenses than in reality. The application of geometrical and optical distortion correction algorithms improves the accuracy of the estimated anterior lens radii of curvature by 30% to 40% and of the estimated posterior lens radii by 50% to 100%. The average error in the retrieved radii was 0.37 and 0.46 mm for the anterior and posterior lens radii of curvature, respectively, and 0.048 mm for lens thickness. The Pentacam Scheimpflug system can be used to obtain quantitative information on the geometry of the crystalline lens, provided that geometrical and optical distortion correction algorithms are applied, within the accuracy of state-of-the-art phakometry and biometry. The techniques could improve with exact knowledge of the technical specifications of the instrument, improved edge detection algorithms, consideration of aspheric and non-rotationally symmetrical surfaces, and introduction of a crystalline gradient index.

  8. The MVACS Robotic Arm Camera

    Science.gov (United States)

    Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.

    2001-08-01

    The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.

  9. Aberration design of zoom lens systems using thick lens modules.

    Science.gov (United States)

    Zhang, Jinkai; Chen, Xiaobo; Xi, Juntong; Wu, Zhuoqi

    2014-12-20

    A systematic approach for the aberration design of a zoom lens system using a thick lens module is presented. Each component is treated as a thick lens module at the beginning of the design. A thick lens module refers to a thick lens component with a real lens structure, like lens materials, lens curvatures, lens thicknesses, and lens interval distances. All nine third-order aberrations of a thick lens component are considered during the design. The relationship of component aberrations in different zoom positions can be approximated from the aberration shift. After minimizing the aberrations of the zoom lens system, the nine third-order aberrations of every lens component can be determined. Then the thick lens structure of every lens component can be determined after optimization according to their first-order properties and third-order aberration targets. After a third optimization for minimum practical third-order aberrations of a zoom lens system, the aberration design using the thick lens module is complete, which provides a practical zoom lens system with thick lens structures. A double-sided telecentric zoom lens system is designed using the thick lens module in this paper, which shows that this method is practical for zoom lens design.

  10. The Director's Lens: An Intelligent Assistant for Virtual Cinematography

    OpenAIRE

    Lino , Christophe; Christie , Marc; Ranon , Roberto; Bares , William

    2011-01-01

    International audience; We present the Director's Lens, an intelligent interactive assistant for crafting virtual cinematography using a motion-tracked hand-held device that can be aimed like a real camera. The system employs an intelligent cinematography engine that can compute, at the request of the filmmaker, a set of suitable camera placements for starting a shot. These suggestions represent semantically and cinematically distinct choices for visualizing the current narrative. In computi...

  11. Surgical video recording with a modified GoPro Hero 4 camera

    Directory of Open Access Journals (Sweden)

    Lin LK

    2016-01-01

    Full Text Available Lily Koo Lin Department of Ophthalmology and Vision Science, University of California, Davis Eye Center, Sacramento, CA, USA Background: Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. Method: The stock lens mount and lens were removed from a GoPro Hero 4 camera, which was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Results: Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. Conclusion: The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination. Keywords: teaching, oculoplastic, strabismus

  12. Surgical video recording with a modified GoPro Hero 4 camera.

    Science.gov (United States)

    Lin, Lily Koo

    2016-01-01

    Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined whether a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. The stock lens mount and lens were removed from a GoPro Hero 4 camera, which was refitted with a Peau Productions SuperMount and 25 mm lens. The modified GoPro Hero 4 camera was then fixed to an overhead surgical light. Camera settings were set to 1080p video resolution. The 25 mm lens allowed for nine times the magnification of the GoPro stock lens. There was no noticeable video distortion. The entire cost was less than 600 USD. The adapted GoPro Hero 4 with a 25 mm lens allows for high-definition, cost-effective, portable video capture of oculoplastic and strabismus surgery. The 25 mm lens allows for detailed videography that can enhance surgical teaching and self-examination.

  13. Review of Calibration Methods for Scheimpflug Camera

    Directory of Open Access Journals (Sweden)

    Cong Sun

    2018-01-01

    Full Text Available The Scheimpflug camera offers a wide range of applications in the fields of typical close-range photogrammetry, particle image velocimetry, and digital image correlation, owing to the fact that the depth of field of a Scheimpflug camera can be greatly extended according to the Scheimpflug condition. Yet conventional calibration methods are not applicable in this case because the assumptions used by classical calibration methodologies are no longer valid for cameras satisfying the Scheimpflug condition. Therefore, various methods have been investigated to solve the problem over the last few years. However, no comprehensive review exists that provides an insight into recent calibration methods for Scheimpflug cameras. This paper presents a survey of recent calibration methods for Scheimpflug cameras with perspective lenses, including the general nonparametric imaging model, and analyzes in detail the advantages and drawbacks of the mainstream calibration models with respect to each other. Real-data experiments including calibrations, reconstructions, and measurements are performed to assess the performance of the models. The results reveal that the accuracies of the RMM, PLVM, PCIM, and GNIM are basically equal, while the accuracy of GNIM is slightly lower compared with the other three parametric models. Moreover, the experimental results reveal that the parameters of the tangential distortion are likely coupled with the tilt angle of the sensor in Scheimpflug calibration models. The work of this paper lays the foundation for further research on Scheimpflug cameras.

  14. PERFORMANCE EVALUATION OF THERMOGRAPHIC CAMERAS FOR PHOTOGRAMMETRIC MEASUREMENTS

    Directory of Open Access Journals (Sweden)

    N. Yastikli

    2013-05-01

    Full Text Available The aim of this research is the performance evaluation of thermographic cameras for possible use in photogrammetric documentation and in analyses of deformation caused by moisture and insulation problems of historical and cultural heritage. To perform the geometric calibration of the thermographic camera, a 3D test object was designed with 77 control points distributed at different depths. For the performance evaluation, a Flir A320 thermographic camera with 320 × 240 pixels and a lens with 18 mm focal length was used. A Nikon D3X SLR digital camera with 6048 × 4032 pixels and a lens with 20 mm focal length was used as reference for comparison. The pixel size was 25 μm for the Flir A320 thermographic camera and 6 μm for the Nikon D3X SLR digital camera. The digital images of the 3D test object were recorded with the Flir A320 thermographic camera and the Nikon D3X SLR digital camera, and the image coordinates of the control points in the images were measured. The geometric calibration parameters, including the focal length, position of the principal point, and radial and tangential distortions, were determined with additional parameters introduced in bundle block adjustments. The measurement of image coordinates and the bundle block adjustments with additional parameters were performed using the PHIDIAS digital photogrammetric system. The bundle block adjustment was repeated with the determined calibration parameters for both the Flir A320 thermographic camera and the Nikon D3X SLR digital camera. The obtained standard deviations of the measured image coordinates were 9.6 μm and 10.5 μm for the Flir A320 thermographic camera and 8.3 μm and 7.7 μm for the Nikon D3X SLR digital camera. The standard deviation of the measured image points in the Flir A320 thermographic camera images is thus at almost the same accuracy level as that of the digital camera, despite a pixel size four times larger. From the results obtained in this research, the interior geometry of the thermographic cameras and the lens distortion was modelled efficiently.

  15. Performance Evaluation of Thermographic Cameras for Photogrammetric Measurements

    Science.gov (United States)

    Yastikli, N.; Guler, E.

    2013-05-01

    The aim of this research is the performance evaluation of thermographic cameras for possible use in photogrammetric documentation and in analyses of deformation caused by moisture and insulation problems of historical and cultural heritage. To perform the geometric calibration of the thermographic camera, a 3D test object was designed with 77 control points distributed at different depths. For the performance evaluation, a Flir A320 thermographic camera with 320 × 240 pixels and a lens with 18 mm focal length was used. A Nikon D3X SLR digital camera with 6048 × 4032 pixels and a lens with 20 mm focal length was used as reference for comparison. The pixel size was 25 μm for the Flir A320 thermographic camera and 6 μm for the Nikon D3X SLR digital camera. The digital images of the 3D test object were recorded with the Flir A320 thermographic camera and the Nikon D3X SLR digital camera, and the image coordinates of the control points in the images were measured. The geometric calibration parameters, including the focal length, position of the principal point, and radial and tangential distortions, were determined with additional parameters introduced in bundle block adjustments. The measurement of image coordinates and the bundle block adjustments with additional parameters were performed using the PHIDIAS digital photogrammetric system. The bundle block adjustment was repeated with the determined calibration parameters for both the Flir A320 thermographic camera and the Nikon D3X SLR digital camera. The obtained standard deviations of the measured image coordinates were 9.6 μm and 10.5 μm for the Flir A320 thermographic camera and 8.3 μm and 7.7 μm for the Nikon D3X SLR digital camera. The standard deviation of the measured image points in the Flir A320 thermographic camera images is thus at almost the same accuracy level as that of the digital camera, despite a pixel size four times larger. From the results obtained in this research, the interior geometry of the thermographic cameras and the lens distortion was modelled efficiently.

  16. Changes in monkey crystalline lens spherical aberration during simulated accommodation in a lens stretcher.

    Science.gov (United States)

    Maceo Heilman, Bianca; Manns, Fabrice; de Castro, Alberto; Durkee, Heather; Arrieta, Esdras; Marcos, Susana; Parel, Jean-Marie

    2015-02-10

    The purpose of this study was to quantify accommodation-induced changes in the spherical aberration of cynomolgus monkey lenses. Twenty-four lenses from 20 cynomolgus monkeys (Macaca fascicularis; 4.4-16.0 years of age; postmortem time 13.5 ± 13.0 hours) were mounted in a lens stretcher. Lens spherical aberration was measured in the unstretched (accommodated) and stretched (relaxed) states with a laser ray tracing system that delivered 51 equally spaced parallel rays along 1 meridian of the lens over the central 6-mm optical zone. A camera mounted below the lens was used to measure the ray height at multiple positions along the optical axis. For each entrance ray, the change in ray height with axial position was fitted with a third-order polynomial. The effective paraxial focal length and Zernike spherical aberration coefficients corresponding to a 6-mm pupil diameter were extracted from the fitted values. The unstretched lens power decreased with age from 59.3 ± 4.0 diopters (D) for young lenses to 45.7 ± 3.1 D for older lenses. The unstretched lens shifted toward less negative spherical aberration with age, from -6.3 ± 0.7 μm for young lenses to -5.0 ± 0.5 μm for older lenses. The power and spherical aberration of lenses in the stretched state were independent of age, with values of 33.5 ± 3.4 D and -2.6 ± 0.5 μm, respectively. Spherical aberration is negative in cynomolgus monkey lenses and becomes more negative with accommodation. These results are in good agreement with the predicted values using computational ray tracing in a lens model with a reconstructed gradient refractive index. The spherical aberration of the unstretched lens becomes less negative with age. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.
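
    Only as an illustration of the ray-by-ray fitting step described above (the data below are invented, and the Zernike decomposition is omitted), the axial focus of a single traced ray can be estimated like this:

```python
import numpy as np

# Hypothetical laser-ray-tracing data for one entrance ray: ray height (mm) measured by the
# camera at several axial positions z (mm) behind the lens.
z = np.array([20.0, 25.0, 30.0, 35.0, 40.0])
ray_height = np.array([1.50, 0.95, 0.42, -0.10, -0.63])

# Fit height(z) with a third-order polynomial, as in the study, then find where the fitted
# ray crosses the axis (height = 0): that axial position approximates this ray's focus.
p = np.poly1d(np.polyfit(z, ray_height, 3))
z_fine = np.linspace(z.min(), z.max(), 2001)
focus_z = z_fine[np.argmin(np.abs(p(z_fine)))]
print(f"estimated focus for this ray: z ~ {focus_z:.1f} mm")
```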

  17. INVESTIGATING THE SUITABILITY OF MIRRORLESS CAMERAS IN TERRESTRIAL PHOTOGRAMMETRIC APPLICATIONS

    Directory of Open Access Journals (Sweden)

    A. H. Incekara

    2017-11-01

    Full Text Available Digital single-lens reflex (DSLR) cameras, commonly referred to as mirrored cameras, are preferred for terrestrial photogrammetric applications such as documentation of cultural heritage, archaeological excavations and industrial measurements. Recently, digital cameras called mirrorless systems, which can be used with different lens combinations, have become available for similar applications. The main difference between these two camera types is the presence of the mirror mechanism, which means that the incoming beam passing through the lens reaches the sensor in a different way. In this study, two different digital cameras, one with a mirror (Nikon D700) and the other without a mirror (Sony a6000), were used for a close-range photogrammetric application on a rock surface at the Istanbul Technical University (ITU) Ayazaga Campus. The accuracy of the 3D models created from photographs taken with both cameras was compared using the differences between field and model coordinates obtained after the alignment of the photographs. In addition, cross sections were created on the 3D models for both data sources, and the maximum area difference between them is quite small because they are almost overlapping. The mirrored camera was more consistent with itself with respect to the change of model coordinates for models created from photographs taken at different times with almost the same ground sample distance. As a result, it has been determined that mirrorless cameras, and the point clouds produced using photographs obtained from these cameras, can be used for terrestrial photogrammetric studies.

  18. Refractive neutron lens

    International Nuclear Information System (INIS)

    Petrov, P.V.; Kolchevsky, N.N.

    2013-01-01

    A model of the refractive neutron lens is proposed. A system of N lenses acts as one thin lens with a complex refractive index n*. The maximum number N_max of individual lenses for a 'thick' neutron lens is calculated. Refractive neutron lens properties (resolution, focal depth) are calculated as functions of the resolution factor F_0 = ρbc/μ and the depth-of-field factor dF_0 = λF_0 = λρbc/μ. It is shown that the micrometre-scale resolution of refractive neutron optics is still far from the wavelength, which leaves open possibilities for progress in refractive neutron optics. (authors)

  19. Making Ceramic Cameras

    Science.gov (United States)

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  20. BENCHMARKING THE OPTICAL RESOLVING POWER OF UAV BASED CAMERA SYSTEMS

    Directory of Open Access Journals (Sweden)

    H. Meißner

    2017-08-01

    Full Text Available UAV-based imaging and 3D object point generation is an established technology. Some UAV users try to address (very high) accuracy applications, i.e. inspection or monitoring scenarios. In order to guarantee such a level of detail and accuracy, high-resolving imaging systems are mandatory. Furthermore, image quality considerably impacts photogrammetric processing, as the tie point transfer, mandatory for forming the block geometry, fully relies on the radiometric quality of the images. Thus, empirical testing of radiometric camera performance is an important issue, in addition to the standard (geometric) calibration, which normally is covered primarily. Within this paper the resolving power of ten different camera/lens installations has been investigated. The selected systems represent different camera classes, like DSLRs, system cameras, larger-format cameras and proprietary systems. As the systems have been tested under well-controlled laboratory conditions and objective quality measures have been derived, individual performance can be compared directly, thus representing a first benchmark of the radiometric performance of UAV cameras. The results have shown that not only does the selection of an appropriate lens and camera body have an impact; the image pre-processing, i.e. the use of a specific debayering method, also significantly influences the final resolving power.

  1. Adapting Virtual Camera Behaviour

    DEFF Research Database (Denmark)

    Burelli, Paolo

    2013-01-01

    In a three-dimensional virtual environment aspects such as narrative and interaction completely depend on the camera since the camera defines the player’s point of view. Most research works in automatic camera control aim to take the control of this aspect from the player to automatically gen- er...

  2. Curiosity's Mars Hand Lens Imager (MAHLI) Investigation

    Science.gov (United States)

    Edgett, Kenneth S.; Yingst, R. Aileen; Ravine, Michael A.; Caplinger, Michael A.; Maki, Justin N.; Ghaemi, F. Tony; Schaffner, Jacob A.; Bell, James F.; Edwards, Laurence J.; Herkenhoff, Kenneth E.; Heydari, Ezat; Kah, Linda C.; Lemmon, Mark T.; Minitti, Michelle E.; Olson, Timothy S.; Parker, Timothy J.; Rowland, Scott K.; Schieber, Juergen; Sullivan, Robert J.; Sumner, Dawn Y.; Thomas, Peter C.; Jensen, Elsa H.; Simmonds, John J.; Sengstacken, Aaron J.; Wilson, Reg G.; Goetz, Walter

    2012-01-01

    The Mars Science Laboratory (MSL) Mars Hand Lens Imager (MAHLI) investigation will use a 2-megapixel color camera with a focusable macro lens aboard the rover, Curiosity, to investigate the stratigraphy and grain-scale texture, structure, mineralogy, and morphology of geologic materials in northwestern Gale crater. Of particular interest is the stratigraphic record of a ~5 km thick layered rock sequence exposed on the slopes of Aeolis Mons (also known as Mount Sharp). The instrument consists of three parts, a camera head mounted on the turret at the end of a robotic arm, an electronics and data storage assembly located inside the rover body, and a calibration target mounted on the robotic arm shoulder azimuth actuator housing. MAHLI can acquire in-focus images at working distances from ~2.1 cm to infinity. At the minimum working distance, image pixel scale is ~14 μm per pixel and very coarse silt grains can be resolved. At the working distance of the Mars Exploration Rover Microscopic Imager cameras aboard Spirit and Opportunity, MAHLI's resolution is comparable at ~30 μm per pixel. Onboard capabilities include autofocus, auto-exposure, sub-framing, video imaging, Bayer pattern color interpolation, lossy and lossless compression, focus merging of up to 8 focus stack images, white light and longwave ultraviolet (365 nm) illumination of nearby subjects, and 8 gigabytes of non-volatile memory data storage.

  3. Calibration of action cameras for photogrammetric purposes.

    Science.gov (United States)

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras, we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and the wide-angle lens that is used to obtain the maximum possible resolution of the images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution.
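
    The abstract names the OpenCV library but not the exact routines; a typical OpenCV checkerboard self-calibration followed by undistortion looks roughly like the sketch below (folder name, board geometry and square size are hypothetical, and a strongly distorting action-camera lens may instead require OpenCV's fisheye model):

```python
import glob
import cv2
import numpy as np

board = (9, 6)                 # inner corners of a hypothetical checkerboard
square = 25.0                  # square size in mm (hypothetical)

# 3D object points of the board corners in the board's own plane.
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("gopro_frames/*.jpg"):           # hypothetical image folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate camera matrix and distortion coefficients, then undistort one frame
# (assumes the board was detected in at least one image).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points,
                                                 gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
cv2.imwrite("undistorted.jpg", cv2.undistort(cv2.imread(path), K, dist))
```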

  4. Calibration of Action Cameras for Photogrammetric Purposes

    Directory of Open Access Journals (Sweden)

    Caterina Balletti

    2014-09-01

    Full Text Available The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras, we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and the wide-angle lens that is used to obtain the maximum possible resolution of the images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video up to 8 Mp resolution.

  5. Multiview Trajectory Mapping Using Homography with Lens Distortion Correction

    Directory of Open Access Journals (Sweden)

    Andrea Cavallaro

    2008-11-01

    Full Text Available We present a trajectory mapping algorithm for a distributed camera setting that is based on statistical homography estimation accounting for the distortion introduced by camera lenses. Unlike traditional approaches based on the direct linear transformation (DLT) algorithm and singular value decomposition (SVD), the planar homography estimation is derived from renormalization. In addition to this, the algorithm explicitly introduces a correction parameter to account for the nonlinear radial lens distortion, thus improving the accuracy of the transformation. We demonstrate the proposed algorithm by generating mosaics of the observed scenes and by registering the spatial locations of moving objects (trajectories) from multiple cameras on the mosaics. Moreover, we objectively compare the transformed trajectories with those obtained by SVD and least mean square (LMS) methods on standard datasets and demonstrate the advantages of the renormalization and the lens distortion correction.

  6. Multiview Trajectory Mapping Using Homography with Lens Distortion Correction

    Directory of Open Access Journals (Sweden)

    Kayumbi Gabin

    2008-01-01

    Full Text Available We present a trajectory mapping algorithm for a distributed camera setting that is based on statistical homography estimation accounting for the distortion introduced by camera lenses. Unlike traditional approaches based on the direct linear transformation (DLT) algorithm and singular value decomposition (SVD), the planar homography estimation is derived from renormalization. In addition to this, the algorithm explicitly introduces a correction parameter to account for the nonlinear radial lens distortion, thus improving the accuracy of the transformation. We demonstrate the proposed algorithm by generating mosaics of the observed scenes and by registering the spatial locations of moving objects (trajectories) from multiple cameras on the mosaics. Moreover, we objectively compare the transformed trajectories with those obtained by SVD and least mean square (LMS) methods on standard datasets and demonstrate the advantages of the renormalization and the lens distortion correction.
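
    As a minimal sketch of the overall idea of mapping trajectories to a common plane after undoing radial lens distortion (this uses OpenCV's stock homography estimator rather than the renormalization method proposed in the paper, and every numeric value is a placeholder):

```python
import cv2
import numpy as np

# Hypothetical intrinsics and a single radial distortion coefficient for one camera.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.25, 0.0, 0.0, 0.0])      # k1 only; k2, p1, p2 left at zero

# Corresponding points on the ground plane seen in this camera and in the mosaic (placeholders).
pts_cam = np.array([[100, 200], [1180, 220], [1150, 650], [130, 700]], np.float32)
pts_mosaic = np.array([[0, 0], [500, 0], [500, 300], [0, 300]], np.float32)

# 1) Undo the nonlinear radial distortion, 2) estimate the planar homography,
# 3) map an observed trajectory into mosaic coordinates.
pts_cam_u = cv2.undistortPoints(pts_cam.reshape(-1, 1, 2), K, dist, P=K).reshape(-1, 2)
H, _ = cv2.findHomography(pts_cam_u, pts_mosaic)

trajectory = np.array([[[400.0, 420.0]], [[420.0, 415.0]], [[445.0, 411.0]]], np.float32)
traj_u = cv2.undistortPoints(trajectory, K, dist, P=K)
traj_mosaic = cv2.perspectiveTransform(traj_u, H)
print(traj_mosaic.reshape(-1, 2))
```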

  7. Cryogenic solid Schmidt camera as a base for future wide-field IR systems

    Science.gov (United States)

    Yudin, Alexey N.

    2011-11-01

    This work focuses on studying the capability of a solid Schmidt camera to serve as a wide-field infrared lens for an aircraft system with whole-sphere coverage, working in the 8-14 μm spectral range and coupled with a spherical focal array of megapixel class. Designs of a 16 mm f/0.2 lens with 60 and 90 degree sensor diagonals are presented, and their image quality is compared with a conventional solid design. An achromatic design with significantly improved performance, containing an enclosed soft correcting lens behind the protective front lens, is proposed. One of the main goals of the work is to estimate the benefits from curved detector arrays in wide-field systems for the 8-14 μm spectral range. Coupling of the photodetector to the solid Schmidt camera by means of frustrated total internal reflection is considered, with the corresponding tolerance analysis. The whole lens, except the front element, is considered to be cryogenic, with the solid Schmidt unit cooled by flowing hydrogen to improve bulk transmission.

  8. An all-silicone zoom lens in an optical imaging system

    International Nuclear Information System (INIS)

    Zhao Cun-Hua

    2013-01-01

    An all-silicone zoom lens is fabricated. A tunable metal ring is fitted around the side edge of the lens, and a nylon rope linked to a motor is tied around the notch in the metal ring. As the motor operates, the rope tightens or releases to change the focal length of the lens. A calculation method is developed to obtain the focal length and the zoom ratio. Testing was then carried out, and the measured values agree well with the calculated ones. Finally, the imaging performance of the all-silicone lens is demonstrated. The all-silicone lens has potential uses in cellphone cameras, notebook cameras, micro monitor lenses, etc. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)

  9. An all-silicone zoom lens in an optical imaging system

    Science.gov (United States)

    Zhao, Cun-Hua

    2013-09-01

    An all-silicone zoom lens is fabricated. A tunable metal ring is fitted around the side edge of the lens, and a nylon rope linked to a motor is tied around the notch in the metal ring. As the motor operates, the rope tightens or releases to change the focal length of the lens. A calculation method is developed to obtain the focal length and the zoom ratio. Testing was then carried out, and the measured values agree well with the calculated ones. Finally, the imaging performance of the all-silicone lens is demonstrated. The all-silicone lens has potential uses in cellphone cameras, notebook cameras, micro monitor lenses, etc.
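
    The paper's own calculation method is not reproduced in the abstract above; the basic relation any such estimate rests on is the lensmaker's equation, sketched below with hypothetical surface radii and an assumed silicone refractive index of about 1.41:

```python
def thin_lens_focal_length(n: float, r1_mm: float, r2_mm: float) -> float:
    """Lensmaker's equation for a thin lens in air: 1/f = (n - 1) * (1/R1 - 1/R2).
    Radii follow the usual sign convention (positive if the centre of curvature
    lies on the image side of the surface)."""
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

n_silicone = 1.41                 # assumed refractive index of the silicone elastomer
f_relaxed = thin_lens_focal_length(n_silicone, 25.0, -25.0)    # gentle curvature (hypothetical)
f_squeezed = thin_lens_focal_length(n_silicone, 15.0, -15.0)   # ring tightened, steeper surfaces

print(f"f relaxed ~ {f_relaxed:.1f} mm, f squeezed ~ {f_squeezed:.1f} mm, "
      f"zoom ratio ~ {f_relaxed / f_squeezed:.2f}")
```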

  10. Magnifying lens for 800 MeV proton radiography

    International Nuclear Information System (INIS)

    Merrill, F. E.; Campos, E.; Espinoza, C.; Hogan, G.; Hollander, B.; Lopez, J.; Mariam, F. G.; Morley, D.; Morris, C. L.; Murray, M.; Saunders, A.; Schwartz, C.; Thompson, T. N.

    2011-01-01

    This article describes the design and performance of a magnifying magnetic-lens system designed, built, and commissioned at the Los Alamos National Laboratory (LANL) for 800 MeV flash proton radiography. The technique of flash proton radiography has been developed at LANL to study material properties under dynamic loading conditions through the analysis of time sequences of proton radiographs. The requirements of this growing experimental program have resulted in the need for improvements in spatial radiographic resolution. To meet these needs, a new magnetic lens system, consisting of four permanent magnet quadrupoles, has been developed. This new lens system was designed to reduce the second order chromatic aberrations, the dominant source of image blur in 800 MeV proton radiography, as well as magnifying the image to reduce the blur contribution from the detector and camera systems. The recently commissioned lens system performed as designed, providing nearly a factor of three improvement in radiographic resolution.

  11. Capsular 'pits' in the human lens.

    OpenAIRE

    Harris, M. L.; Brown, N. A.; Shun-Shin, G. A.; Smith, G. T.

    1992-01-01

    The lens capsule is an atypical basement membrane surrounding the lens epithelial cells and lens fibres which make up the remainder of the human lens. A seemingly unreported morphological change visible in the lens capsule with the biomicroscope is described.

  12. [Cinematography of ocular fundus with a jointed optical system and tv or cine-camera (author's transl)].

    Science.gov (United States)

    Kampik, A; Rapp, J

    1979-02-01

    A method of cinematography of the ocular fundus is introduced which, by connecting a camera with an indirect ophthalmoscope, allows recording of the monocular picture of the fundus as produced by the ophthalmic lens.

  13. Intraocular lens fabrication

    Energy Technology Data Exchange (ETDEWEB)

    Salazar, Mike A. (Albuquerque, NM); Foreman, Larry R. (Los Alamos, NM)

    1997-01-01

    This invention describes a method for fabricating an intraocular lens made from clear Teflon™, Mylar™, or other thermoplastic material having a thickness of about 0.025 millimeters. These plastic materials are thermoformable and biocompatible with the human eye. The two shaped lenses are bonded together with a variety of procedures, which may include thermosetting and solvent-based adhesives, laser and impulse welding, and ultrasonic bonding. The fill tube, which is used to inject a refractive filling material, is formed with the lens so as not to damage the lens shape. A hypodermic tube may be included inside the fill tube.

  14. Intraocular lens fabrication

    Energy Technology Data Exchange (ETDEWEB)

    Salazar, M.A.; Foreman, L.R.

    1997-07-08

    This invention describes a method for fabricating an intraocular lens made from clear Teflon™, Mylar™, or other thermoplastic material having a thickness of about 0.025 millimeters. These plastic materials are thermoformable and biocompatible with the human eye. The two shaped lenses are bonded together with a variety of procedures, which may include thermosetting and solvent-based adhesives, laser and impulse welding, and ultrasonic bonding. The fill tube, which is used to inject a refractive filling material, is formed with the lens so as not to damage the lens shape. A hypodermic tube may be included inside the fill tube. 13 figs.

  15. INFLUENCE OF MECHANICAL ERRORS IN A ZOOM CAMERA

    Directory of Open Access Journals (Sweden)

    Alfredo Gardel

    2011-05-01

    Full Text Available As is well known, varying the focus and zoom of a camera lens system changes the alignment of the lens components, resulting in a displacement of the image centre and field of view. Thus, knowledge of how the image centre shifts may be important for some aspects of camera calibration. As shown in other papers, the pinhole model is not adequate for zoom lenses. To ensure a calibration model for these lenses, the calibration parameters must be adjusted. The geometrical modelling of a zoom lens is realized from its lens specifications. The influence on the calibration parameters is calculated by introducing mechanical errors into the mobile lenses. Figures are given describing the errors obtained in the principal point coordinates and also in their standard deviation. A comparison is then made with the errors that come from the incorrect detection of the calibration points. It is concluded that mechanical errors of actual zoom lenses can be neglected in the calibration process because detection errors have more influence on the camera parameters.

  16. Recent Developments In High Speed Lens Design At The NPRL

    Science.gov (United States)

    Mcdowell, M. W.; Klee, H. W.

    1987-09-01

    Although the lens provides the link between the high speed camera and the outside world, there has over the years been little evidence of co-operation between the optical design and high speed photography communities. It is still only too common for a manufacturer to develop a camera of improved performance and resolution and then to combine this with a standard camera lens. These lenses were often designed for a completely different recording medium and, more often than not, their use results in avoidable degradation of the overall system performance. There is a tendency to assume that a specialized lens would be too expensive and that pushing the aperture automatically implies more complex optical systems. In the present paper some recent South African developments in the design of large aperture lenses are described. The application of a new design principle, based on the work earlier this century of Bernhard Schmidt, shows that ultra-fast lenses need not be overly complex and a basic four-element lens configuration can be adapted to a wide variety of applications.

  17. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer

    Directory of Open Access Journals (Sweden)

    Bailey Y. Shen

    2017-01-01

    Full Text Available Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133mm×91mm×45mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient.
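
    The article does not include its capture code; the kind of Raspberry Pi still capture such a prototype relies on can be sketched with the legacy picamera library as follows (resolution and exposure settings are assumptions, and newer Raspberry Pi OS releases use the picamera2/libcamera stack instead):

```python
from time import sleep
from picamera import PiCamera

camera = PiCamera(resolution=(2592, 1944))   # hypothetical full-resolution still mode
camera.exposure_mode = "night"               # favour low-light fundus viewing (assumption)
camera.awb_mode = "auto"

camera.start_preview()                        # live view for aligning the condensing lens
sleep(5)                                      # give the auto-exposure time to settle
camera.capture("fundus_image.jpg")            # grab the still once the retina is in view
camera.stop_preview()
camera.close()
```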

  18. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... One Use Facts About Colored Contacts and Halloween Safety Colored Contact Lens Facts Over-the-Counter Costume ... Costume Contact Lenses Can Ruin Vision Eye Makeup Safety In fact, it is illegal to sell colored ...

  19. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... One Use Facts About Colored Contacts and Halloween Safety Colored Contact Lens Facts Over-the-Counter Costume ... new application of artificial intelligence shows whether a patient’s eyes point to high blood pressure or risk ...

  20. The lens and cataracts.

    Science.gov (United States)

    Matthews, Andrew G

    2004-08-01

    It is conservatively estimated that some form of lens opacity is present in 5% to 7% of horses with otherwise clinically normal eyes. These opacities can range from small epicapsular remnants of the fetal vasculature to dense and extensive cataract. A cataract is defined technically as any opacity or alteration in the optical homogeneity of the lens involving one or more of the following: anterior epithelium, capsule, cortex, or nucleus. In the horse, cataracts rarely involve the entire lens structure (ie, complete cataracts) and are more usually localized to one anatomic landmark or sector of the lens. Complete cataracts are invariably associated with overt and significant visual disability. Focal or incomplete cataracts alone, however, seldom cause any apparent visual dysfunction in affected horses.

  1. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... One Use Facts About Colored Contacts and Halloween Safety Colored Contact Lens Facts Over-the-Counter Costume ... use of colored contact lenses , from the U.S. Food and Drug Administration (FDA). Are the colored lenses ...

  2. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... sell contacts without a prescription are breaking the law, and may be fined $11,000 per violation. " ... wear any kind of contact lens. In Butler's case, the lenses caused an infection and left her ...

  3. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... contacto de color Sep. 26, 2013 It started as an impulsive buy from a souvenir shop, but ... require the same level of care or consideration as a standard contact lens because they can be ...

  4. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... With Proper Contact Lens Care Apr 23, 2018 Solar Eclipse Inflicts Damage in the Shape of the ... edging closer, thanks to a wave of new technologies aiming to fix failing eye parts with human- ...

  5. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... glow-in-the-dark lizard lenses, costume contacts can certainly add a spooky, eye-popping touch. But ... consideration as a standard contact lens because they can be purchased over-the-counter or on the ...

  6. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... valid prescription that includes the brand name, lens measurements, and expiration date. Purchase the colored contact lenses ... with human-made versions. U.S. News Highlights the Value of Ophthalmologists APR 20, 2018 By Dan T. ...

  7. Vortex gas lens

    Science.gov (United States)

    Bogdanoff, David W.; Berschauer, Andrew; Parker, Timothy W.; Vickers, Jesse E.

    1989-01-01

    A vortex gas lens concept is presented. Such a lens has a potential power density capability of 10⁹ to 10¹⁰ W/cm². An experimental prototype was constructed, and the divergence half angle of the exiting beam was measured as a function of the lens operating parameters. Reasonably good agreement is found between the experimental results and theoretical calculations. The expanded beam was observed to be steady, and no strong, potentially beam-degrading jets were found to issue from the ends of the lens. Estimates of random beam deflection angles to be expected due to boundary layer noise are presented; these angles are very small.

  8. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... had not been properly fitted by an eye care professional, the lenses stuck to my eye like ... lenses do not require the same level of care or consideration as a standard contact lens because ...

  9. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... not require the same level of care or consideration as a standard contact lens because they can ... sell contacts without a prescription are breaking the law, and may be fined $11,000 per violation. " ...

  10. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... not require the same level of care or consideration as a standard contact lens because they can ... Us About the Academy Jobs at the Academy Financial Relationships with Industry Medical Disclaimer Privacy Policy Terms ...

  11. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... be purchased over-the-counter or on the Internet," says Thomas Steinemann, MD, professor of ophthalmology at ... ask for a prescription. There is no such thing as a "one size fits all" contact lens. ...

  12. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... prescription. Follow the contact lens care directions for cleaning, disinfecting, and wearing the lenses. Never share contact ... with Industry Medical Disclaimer Privacy Policy Terms of Service For Advertisers For Media Ophthalmology Job Center © American ...

  13. Radiation camera exposure control

    International Nuclear Information System (INIS)

    Martone, R.J.; Yarsawich, M.; Wolczek, W.

    1976-01-01

    A system and method for governing the exposure of an image generated by a radiation camera to an image sensing camera is disclosed. The exposure is terminated in response to the accumulation of a predetermined quantity of radiation, defining a radiation density, occurring in a predetermined area. An index is produced which represents the value of that quantity of radiation whose accumulation causes the exposure termination. The value of the predetermined radiation quantity represented by the index is sensed so that the radiation camera image intensity can be calibrated to compensate for changes in exposure amounts due to desired variations in radiation density of the exposure, to maintain the detectability of the image by the image sensing camera notwithstanding such variations. Provision is also made for calibrating the image intensity in accordance with the sensitivity of the image sensing camera, and for locating the index for maintaining its detectability and causing the proper centering of the radiation camera image

  14. GRACE star camera noise

    Science.gov (United States)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.

  15. Solid state video cameras

    CERN Document Server

    Cristol, Y

    2013-01-01

    Solid State Video Cameras reviews the state of the art in the field of solid-state television cameras as compiled from patent literature. Organized into 10 chapters, the book begins with the basic array types of solid-state imagers and appropriate read-out circuits and methods. Documents relating to improvement of picture quality, such as spurious signal suppression, uniformity correction, or resolution enhancement, are also cited. The last part considers solid-state color cameras.

  16. CALIBRATION PROCEDURES ON OBLIQUE CAMERA SETUPS

    Directory of Open Access Journals (Sweden)

    G. Kemper

    2016-06-01

    Full Text Available Besides the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires a precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as sensors co-registered to the nadir one; all camera images enter the AT process as individually pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir camera has 80 MPix and a 50 mm lens while the oblique cameras capture 50 MPix images using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount, which creates floating antenna-IMU lever arms that had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed on a special calibration flight with 351 shots from all 5 cameras and the registered GPS/IMU data. This specific mission was flown at two different altitudes with additional cross lines at each flying height. The five images from each exposure position have no overlap, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfactory number for the camera calibration. In a first

  17. Active liquid-crystal deflector and lens with Fresnel structure

    Science.gov (United States)

    Shibuya, Giichi; Yamano, Shohei; Yoshida, Hiroyuki; Ozaki, Masanori

    2017-02-01

    A new type of tunable Fresnel deflector and lens composed of liquid crystal was developed. A combined structure of multiple interdigitated electrodes and a high-resistivity (HR) layer implements a saw-tooth distribution of electric potential using only the planar surfaces of the transparent substrates. Following the numerical calculation and design, experimental devices were manufactured with the liquid crystal (LC) material sealed between flat glass plates of 0.7 mm thickness with rubbed alignment layers set in an anti-parallel configuration. The fabricated beam deflector, with no moving parts, shows a maximum tilt angle of +/-1.3 deg, which is suitable for the optical image stabilizer (OIS) of a micro camera. We also discuss and verify the lens characteristics in order to extend the device to more advanced applications. Transparent interdigitated electrodes were concentrically aligned on the lens aperture with insulator gaps under their boundary area. The diameter of the lens aperture was 30 mm and the total number of Fresnel zones was 100. Phase retardation of the beam wavefront emerging from the LC lens device can be evaluated from polarizing microscope images with a monochromatic filter. The radial positions of each observed fringe are plotted and fitted with a 2nd degree polynomial approximation. The number of observed fringes is over 600 over the whole lens aperture area, and the correlation coefficients of all approximations are over 0.993, indicating a nearly ideal optical wavefront. The maximum lens powers obtained from the approximations are about +/-4 m-1, covering both convex and concave lens characteristics; this makes practical use in tunable-lens eyeglasses more promising.

  18. Calibration of Low Cost RGB and NIR Uav Cameras

    Science.gov (United States)

    Fryskowska, A.; Kedzierski, M.; Grochala, A.; Braula, A.

    2016-06-01

    Non-metric digital cameras are being widely used for photogrammetric studies. The increase in resolution and quality of images obtained by non-metric cameras allows them to be used in low-cost UAV and terrestrial photogrammetry. Imagery acquired with non-metric cameras can be used in 3D modeling of objects or landscapes, reconstruction of historical sites, generation of digital elevation models (DEM) and orthophotos, or in the assessment of accidents. Non-metric digital cameras are characterized by unstable and unknown interior orientation parameters. Therefore, the use of these devices requires prior calibration. The calibration research was conducted using a non-metric camera, different calibration tests and various software. The first part of the paper contains a brief theoretical introduction including basic definitions, such as the construction of non-metric cameras and a description of the different optical distortions. The second part of the paper describes the camera calibration process, with details of the calibration methods and models that have been used. The Sony NEX 5 camera calibration was done using the following software: Image Master Calib, the Matlab Camera Calibrator application, and Agisoft Lens. For the study, 2D test fields were used. As part of the research, a comparative analysis of the results has been carried out.
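
    The abstract mentions a description of the different optical distortions but does not give a parameterisation. Purely as an illustration (an assumption here, not necessarily the exact model used in the paper), the widely used Brown-Conrady radial plus tangential model that calibration tools of this kind typically estimate can be written as:

    ```latex
    % Brown-Conrady distortion model (illustrative); k_1..k_3 are radial and
    % p_1, p_2 tangential (decentering) coefficients estimated during calibration.
    \begin{align*}
      r^2 &= x^2 + y^2,\\
      x_d &= x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2 x^2),\\
      y_d &= y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2 y^2) + 2 p_2 x y.
    \end{align*}
    ```

    Here (x, y) are ideal normalised image coordinates and (x_d, y_d) are the distorted coordinates actually observed in the image.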

  19. Cameras in mobile phones

    Science.gov (United States)

    Nummela, Ville; Viinikanoja, Jarkko; Alakarhu, Juha

    2006-04-01

    One of the fastest growing consumer markets today is camera phones. During the past few years the total volume has been growing fast, and today millions of mobile phones with cameras are sold. At the same time the resolution and functionality of the cameras have been growing from CIF towards DSC level. From a camera point of view the mobile world is an extremely challenging field. Cameras should have good image quality but in a small size. They also need to be reliable and their construction should be suitable for mass manufacturing. All components of the imaging chain should be well optimized in this environment. Image quality and usability are the most important parameters to the user. The current trend of adding more megapixels to cameras while using smaller pixels affects both. On the other hand, reliability and miniaturization are key drivers for product development, as is cost. In an optimized solution all parameters are in balance, but the process of finding the right trade-offs is not an easy task. In this paper, trade-offs related to optics and their effects on image quality and usability of cameras are discussed. Key development areas from the mobile phone camera point of view are also listed.

  20. A digital gigapixel large-format tile-scan camera.

    Science.gov (United States)

    Ben-Ezra, M

    2011-01-01

    Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications for cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.
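
    The abstract does not state the focal-stack merging algorithm. As a rough, generic illustration only (not the paper's method), a sharpness-based merge picks, for each pixel, the frame whose local Laplacian response is strongest; the function and variable names below are invented for the sketch:

    ```python
    # Generic focus-stacking sketch: choose, per pixel, the frame with the highest
    # local sharpness (smoothed squared Laplacian). Assumes the frames are already
    # registered and of identical size; illustrative, not the paper's algorithm.
    import cv2
    import numpy as np

    def focus_stack(frames):
        """frames: list of aligned greyscale images focused at different depths."""
        sharpness = []
        for img in frames:
            lap = cv2.Laplacian(img.astype(np.float32), cv2.CV_32F, ksize=3)
            sharpness.append(cv2.GaussianBlur(lap * lap, (9, 9), 0))
        sharpness = np.stack(sharpness)          # shape (n_frames, H, W)
        best = np.argmax(sharpness, axis=0)      # index of sharpest frame per pixel
        stack = np.stack(frames)
        rows, cols = np.indices(best.shape)
        return stack[best, rows, cols]           # extended-depth-of-field composite
    ```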

  1. Thermal Cameras and Applications

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.

    2014-01-01

    Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped significantly, opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.

  2. Camera Trajectory from Wide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to the structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self localization, and object recognition. There are essential issues for a reliable camera trajectory estimation, for instance, choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes the image feature matching very difficult (or impossible) and the camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens converter, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on converter with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, the image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the

  3. Typical effects of laser dazzling CCD camera

    Science.gov (United States)

    Zhang, Zhen; Zhang, Jianmin; Shao, Bibo; Cheng, Deyan; Ye, Xisheng; Feng, Guobin

    2015-05-01

    In this article, an overview of laser dazzling effects on buried-channel CCD cameras is given. CCDs can be sorted into staring and scanning types; the former includes frame transfer and interline transfer types, while the latter includes linear and time delay integration types. All CCDs must perform four primary tasks in generating an image, which are called charge generation, charge collection, charge transfer and charge measurement. In a camera, lenses are needed to deliver the optical signal to the CCD sensor, and techniques for suppressing stray light are used in them; electronic circuits are needed to process the CCD output signal, making use of many electronic techniques. The dazzling effects are the combined result of light distribution distortion and charge distribution distortion, which derive from the lens and the sensor, respectively. Strictly speaking, the lens does not distort the light distribution: in general, lenses are so well designed and fabricated that their stray light can be neglected, but a laser is intense enough to make its stray light obvious. In CCD image sensors, a laser can induce very large electron generation. Charge transfer inefficiency and charge blooming then distort the charge distribution. Commonly, the largest signal output from the CCD sensor is limited by the capacity of the CCD collection well and cannot exceed the dynamic range within which the subsequent electronic circuits maintain normal operation, so the signal is not further distorted in the post-processing circuits. However, some techniques in the circuits can make the dazzling effects appear as different phenomena in the final image.

  4. Holographic interferometry using a digital photo-camera

    International Nuclear Information System (INIS)

    Sekanina, H.; Hledik, S.

    2001-01-01

    The possibilities of running digital holographic interferometry using commonly available compact digital zoom photo-cameras are studied. The recently developed holographic setup, suitable especially for digital photo-cameras equipped with a non-detachable objective lens, is used. The method described enables a simple and straightforward way of both recording and reconstructing digital holographic interferograms. The feasibility of the new method is verified by digital reconstruction of the acquired interferograms, using a numerical code based on the fast Fourier transform. Experimental results obtained are presented and discussed. (authors)
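
    The abstract mentions an FFT-based numerical reconstruction code without giving details. Purely as a hedged sketch of the general idea (a generic off-axis Fourier-filtering reconstruction, not the authors' code; the carrier position and filter radius are assumed to be known), the reconstruction can look like this:

    ```python
    # Generic off-axis hologram reconstruction by Fourier filtering (illustrative only).
    import numpy as np

    def reconstruct_phase(hologram, carrier_rc, radius):
        """hologram: 2-D intensity array; carrier_rc: (row, col) of the +1 order in the
        shifted FFT plane (assumed known); radius: band-pass radius in pixels."""
        H = np.fft.fftshift(np.fft.fft2(hologram))
        rows, cols = hologram.shape
        r, c = np.ogrid[:rows, :cols]
        mask = (r - carrier_rc[0]) ** 2 + (c - carrier_rc[1]) ** 2 <= radius ** 2
        sideband = np.zeros_like(H)
        sideband[mask] = H[mask]                      # keep only the +1 order
        sideband = np.roll(sideband,
                           (rows // 2 - carrier_rc[0], cols // 2 - carrier_rc[1]),
                           axis=(0, 1))               # re-centre the carrier
        field = np.fft.ifft2(np.fft.ifftshift(sideband))
        return np.angle(field)                        # wrapped phase of the object wave

    # Interferometric fringes: phase difference between two object states, e.g.
    # dphi = np.angle(np.exp(1j * (reconstruct_phase(h2, rc, R) - reconstruct_phase(h1, rc, R))))
    ```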

  5. Advances in pediatric gastroenterology: introducing video camera capsule endoscopy.

    Science.gov (United States)

    Siaw, Emmanuel O

    2006-04-01

    The video camera capsule endoscope is a gastrointestinal endoscope approved by the U.S. Food and Drug Administration in 2001 for use in diagnosing gastrointestinal disorders in adults. In 2003, the agency approved the device for use in children ages 10 and older, and the endoscope is currently in use at Arkansas Children's Hospital. A capsule camera, lens, battery, transmitter and antenna together record images of the small intestine as the endoscope makes its way through the bowel. The instrument is used with minimal risk to the patient while offering a high degree of accuracy in diagnosing small intestine disorders.

  6. A Wireless Camera Node with Passive Self-righting Mechanism for Capturing Surrounding View

    OpenAIRE

    Kawabata, Kuniaki; Sato, Hideo; Suzuki, Tsuyoshi; Tobe, Yoshito

    2010-01-01

    In this report, we have proposed a sensor node and related wireless network for information gathering in disaster areas. We have described a “camera node” prototype developed on this basis, containing a camera with a fisheye lens, a passive self-righting mechanism to maintain the camera orientation, and the system's capability for construction of an ad hoc wireless network, together with a GPS adaptor and an embedded computer timer to identify its position and imaging time. The camera node...

  7. Camera calibration method of binocular stereo vision based on OpenCV

    Science.gov (United States)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process and obtain higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of camera lens radial distortion and decentering distortion. Then, the camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy corner extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results can meet the requirements of robot binocular stereo vision.
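
    As a rough sketch of this kind of OpenCV-based workflow (not the authors' program; the file names, board geometry and square size are illustrative assumptions), chessboard corners are detected and refined in each view, each camera is calibrated individually, and the stereo extrinsics are then estimated:

    ```python
    # Minimal chessboard-based stereo calibration sketch using OpenCV (cv2).
    import glob
    import cv2
    import numpy as np

    board_size = (9, 6)        # inner corners per row/column (assumed)
    square_size = 25.0         # chessboard square edge in mm (assumed)

    # 3D corner coordinates in the board's own frame
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size

    obj_points, left_points, right_points = [], [], []
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)

    for lf, rf in zip(sorted(glob.glob("left_*.png")), sorted(glob.glob("right_*.png"))):
        left = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
        right = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
        ok_l, c_l = cv2.findChessboardCorners(left, board_size)
        ok_r, c_r = cv2.findChessboardCorners(right, board_size)
        if ok_l and ok_r:
            # refine detected corners to sub-pixel accuracy
            c_l = cv2.cornerSubPix(left, c_l, (11, 11), (-1, -1), criteria)
            c_r = cv2.cornerSubPix(right, c_r, (11, 11), (-1, -1), criteria)
            obj_points.append(objp)
            left_points.append(c_l)
            right_points.append(c_r)

    image_size = left.shape[::-1]      # assumes at least one image pair was found
    # intrinsics + radial/tangential distortion per camera, then the stereo pair
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_points, left_points, image_size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_points, right_points, image_size, None, None)
    rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        obj_points, left_points, right_points, K1, d1, K2, d2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    print("stereo RMS reprojection error:", rms)
    ```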

  8. Symmetric lens with extended depth of focus

    OpenAIRE

    Cho, Sung Nae

    2008-01-01

    The lens surface profile is derived from the instantaneous focal length versus lens radius data. A lens design based on such data has many useful applications in software-assisted image focusing technology.

  9. A catoptric lens

    International Nuclear Information System (INIS)

    Rambauske, W.R.

    1973-01-01

    The invention relates to a catoptric lens for combining energies transmitted by several sources such as lasers; said lens comprising mirrors, the reflective surfaces of which have their focuses spaced from a common axis of symmetry. By means of these reflecting surfaces, which are generated by the nutation of portions of quadratic conics about the axis of symmetry, it is possible to focus the energy emitted by several lasers at the focus of the exit-mirror reflecting surface. This can be applied to thermonuclear fusion [fr]

  10. CCD camera eases the control of a soda recovery boiler; CCD-kamera helpottaa soodakattilan valvontaa

    Energy Technology Data Exchange (ETDEWEB)

    Kinnunen, L.

    2001-07-01

    Fortum Technology has developed a CCD firebox camera, based on semiconductor technology, that endures the harsh conditions of a soda recovery boiler longer than traditional cameras. The firebox camera is air-cooled, and the same air is blown over the main lens so that it remains clean despite the alkaline liquor splashing around in the boiler. The image of the boiler is transferred through the main lens, an image transfer lens and a special filter, mounted inside the camera tube, into the CCD camera. The first CCD camera system has been in use since 1999 in the Sunila pulp mill in Kotka, owned by Myllykoski Oy and Enso Oyj. The mill has two medium-sized soda recovery boilers. The amount of black liquor formed daily is about 2000 tons of dry solids (DS), which is more than enough for the heat generation; electric power generation sometimes even exceeds demand, so the surplus power can be sold. Black liquor is sprayed into the soda recovery boiler at high pressure. The liquor forms droplets in the boiler, where the temperature is over 1000 deg C. A hot pile forms at the bottom of the boiler after burning, and the size and shape of this pile affect the efficiency and emissions of the boiler. The camera has operated well.

  11. Advanced CCD camera developments

    Energy Technology Data Exchange (ETDEWEB)

    Condor, A. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Two charge coupled device (CCD) camera systems are introduced and discussed, describing briefly the hardware involved and the data obtained in their various applications. The Advanced Development Group of the Defense Sciences Engineering Division has been actively designing, manufacturing, and fielding state-of-the-art CCD camera systems for over a decade. These systems were originally developed for the nuclear test program to record data from underground nuclear tests. Today, new and interesting applications for these systems have surfaced, and development is continuing in the area of advanced CCD camera systems, including a new CCD camera that will allow experimenters to replace film for x-ray imaging at the JANUS, USP, and NOVA laser facilities.

  12. A Tribute to Len Barton

    Science.gov (United States)

    Tomlinson, Sally

    2010-01-01

    This article constitutes a short personal tribute to Len Barton in honour of his work and our collegial relationship going back over 30 years. It covers how Len saw his intellectual project of providing critical sociological and political perspectives on special education, disability and inclusion, and his own radical political perspectives. Len's…

  13. Luneburg lens in silicon photonics.

    Science.gov (United States)

    Di Falco, Andrea; Kehr, Susanne C; Leonhardt, Ulf

    2011-03-14

    The Luneburg lens is an aberration-free lens that focuses light from all directions equally well. We fabricated and tested a Luneburg lens in silicon photonics. Such fully-integrated lenses may become the building blocks of compact Fourier optics on chips. Furthermore, our fabrication technique is sufficiently versatile for making perfect imaging devices on silicon platforms.

  14. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.; Schlosser, P.A.; Steidley, J.W.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce better images than conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given, and the problems and limitations introduced by noise are discussed in full. (U.K.)

  15. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... prescription. There is no such thing as a "one size fits all" contact lens. Lenses that are not properly fitted may scratch the eye or cause blood vessels to grow into the cornea. Even if you have perfect vision, you need to get an eye exam and a prescription ...

  16. Contact Lens Risks

    Science.gov (United States)

    ... There is a risk of eye infection from bacteria in swimming pool water, hot tubs, lakes and the ocean Replace your contact lens storage case every 3 months or as directed by your eye care professional. Other Risks of Contact Lenses Other risks of contact lenses include pink eye ( ...

  17. MISSING: BUBBLE CHAMBER LENS

    CERN Multimedia

    2001-01-01

    Would the person who borrowed the large bubble chamber lens from the Microcosm workshops on the ISR please return it. This is a much used piece from our object archives. If anybody has any information about the whereabouts of this object, please contact Emma.Sanders@cern.ch Thank you

  18. The Lens of Chemistry

    Science.gov (United States)

    Thalos, Mariam

    2013-01-01

    Chemistry possesses a distinctive theoretical lens--a distinctive set of theoretical concerns regarding the dynamics and transformations of a perplexing variety of organic and nonorganic substances--to which it must be faithful. Even if it is true that chemical facts bear a special (reductive) relationship to physical facts, nonetheless it will…

  19. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... wear any kind of contact lens. In Butler's case, the lenses caused an infection and left her with a corneal ... A recent article from U.S. News and World Report explains what ophthalmologists are and how they can ...

  20. Quadrupole magnetic lens

    International Nuclear Information System (INIS)

    Piskunov, V.A.

    1981-01-01

    The following connection of the electromagnet windings is suggested to simplify the design of a quadrupole magnetic lens intended for use in radio-technical and electron-optical devices. In a lens containing four electromagnets, the windings are connected to each other in a bridge scheme, with two variable resistors switched into its diagonals; the moving contacts of the variable resistors are connected to a direct-current source. Shifting the moving contact of one variable resistor redistributes the current between the left and right windings, while shifting the moving contact of the other redistributes the current between the upper and lower coils of the electromagnets. In this way, smooth and independent electron-optical alignment of the lens along two mutually perpendicular directions is achieved. Using this lens design in an oscillograph permits a printed assembly to be used for the alignment plate and reduces the number of connections by decreasing the number of resistors.

  1. Colored Contact Lens Dangers

    Medline Plus

    Full Text Available ... can be purchased over-the-counter or on the Internet," says Thomas Steinemann, MD, professor of ophthalmology at ... ask for a prescription. There is no such thing as a "one size fits all" contact lens. Lenses that are not properly fitted may scratch the eye or cause blood vessels to grow into ...

  2. Neutron cameras for ITER

    International Nuclear Information System (INIS)

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-01-01

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from ¹⁶N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with ¹⁶N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins

  3. Measuring Light Pollution with Fisheye Lens Imagery from A Moving Boat, A Proof of Concept

    OpenAIRE

    Jechow, Andreas; Kolláth, Zoltán; Lerner, Amit; Hänel, Andreas; Shashar, Nadav; Hölker, Franz; Kyba, Christopher C. M.

    2017-01-01

    Near all-sky imaging photometry was performed from a boat on the Gulf of Aqaba to measure the night sky brightness in a coastal environment. The boat was not anchored, and therefore drifted and rocked. The camera was mounted on a tripod without any inertia/motion stabilization. A commercial digital single lens reflex (DSLR) camera and fisheye lens were used with ISO setting of 6400, with the exposure time varied between 0.5 s and 5 s. We find that despite movement of the vessel the measuremen...

  4. Effects of lens distortion calibration patterns on the accuracy of monocular 3D measurements

    CSIR Research Space (South Africa)

    De Villiers, J

    2011-11-01

    Full Text Available choice (e.g. the open computer vision (OpenCV) library [4], Caltech Camera Calibration Toolbox [5]) as the intersections can be found extremely accurately by finding the saddle point of the intensity profile about the intersection as described... to capture and process data in order to calibrate it. A. Equipment specification A 1600-by-1200 Prosilica GE1660 Gigabit Ethernet machine vision camera was mated with a Schneider Cinegon 4.8mm/f1.4 lens for use in this work. This lens has an 82...

  5. Acceptance/operational test procedure 241-AN-107 Video Camera System

    International Nuclear Information System (INIS)

    Pedersen, L.T.

    1994-01-01

    This procedure will document the satisfactory operation of the 241-AN-107 Video Camera System. The camera assembly, including camera mast, pan-and-tilt unit, camera, and lights, will be installed in Tank 241-AN-107 to monitor activities during the Caustic Addition Project. The camera focus, zoom, and iris remote controls will be functionally tested. The resolution and color rendition of the camera will be verified using standard reference charts. The pan-and-tilt unit will be tested for required ranges of motion, and the camera lights will be functionally tested. The master control station equipment, including the monitor, VCRs, printer, character generator, and video micrometer will be set up and performance tested in accordance with original equipment manufacturer's specifications. The accuracy of the video micrometer to measure objects in the range of 0.25 inches to 67 inches will be verified. The gas drying distribution system will be tested to ensure that a drying gas can be flowed over the camera and lens in the event that condensation forms on these components. This test will be performed by attaching the gas input connector, located in the upper junction box, to a pressurized gas supply and verifying that the check valve, located in the camera housing, opens to exhaust the compressed gas. The 241-AN-107 camera system will also be tested to assure acceptable resolution of the camera imaging components utilizing the camera system lights

  6. HST image of Gravitational Lens G2237 + 305 or 'Einstein Cross'

    Science.gov (United States)

    1990-01-01

    European Space Agency (ESA) Faint Object Camera (FOC) science image was taken from the Hubble Space Telescope (HST) of Gravitational Lens G2237 + 305 or 'Einstein Cross'. The gravitational lens G2237 + 305 or 'Einstein Cross' shows four images of a very distant quasar which has been multiple-imaged by a relatively nearby galaxy acting as a gravitational lens. The angular separation between the upper and lower images is 1.6 arc seconds. Photo was released from Goddard Space Flight Center (GSFC) 09-12-90.

  7. Touch And Go Camera System (TAGCAMS) for the OSIRIS-REx Asteroid Sample Return Mission

    Science.gov (United States)

    Bos, B. J.; Ravine, M. A.; Caplinger, M.; Schaffner, J. A.; Ladewig, J. V.; Olds, R. D.; Norman, C. D.; Huish, D.; Hughes, M.; Anderson, S. K.; Lorenz, D. A.; May, A.; Jackman, C. D.; Nelson, D.; Moreau, M.; Kubitschek, D.; Getzandanner, K.; Gordon, K. E.; Eberhardt, A.; Lauretta, D. S.

    2018-02-01

    NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch And Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. The cameras were designed and constructed by Malin Space Science Systems (MSSS) based on requirements developed by Lockheed Martin and NASA. All three of the cameras are mounted to the spacecraft nadir deck and provide images in the visible part of the spectrum, 400-700 nm. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. Their boresights are aligned in the nadir direction with small angular offsets for operational convenience. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Its boresight is pointed at the OSIRIS-REx sample return capsule located on the spacecraft deck. All three cameras have at their heart a 2592 × 1944 pixel complementary metal oxide semiconductor (CMOS) detector array that provides up to 12-bit pixel depth. All cameras also share the same lens design and a camera field of view of roughly 44° × 32° with a pixel scale of 0.28 mrad/pixel. The StowCam lens is focused to image features on the spacecraft deck, while both NavCam lens focus positions are optimized for imaging at infinity. A brief description of the TAGCAMS instrument and how it is used to support critical OSIRIS-REx operations is provided.

  8. A Smart Assistant for Shooting Virtual Cinematography with Motion-Tracked Cameras

    OpenAIRE

    Lino , Christophe; Christie , Marc; Ranon , Roberto; Bares , William

    2011-01-01

    International audience; This demonstration shows how an automated assistant encoded with knowledge of cinematography practice can offer suggested viewpoints to a filmmaker operating a hand-held motion-tracked virtual camera device. Our system, called Director's Lens, uses an intelligent cinematography engine to compute, at the request of the filmmaker, a set of suitable camera placements for starting a shot that represent semantically and cinematically distinct choices for visualizing the cur...

  9. Selective-imaging camera

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

    How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants contributing to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at firmware level. The design is consistent with physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for Human Visual System (HVS). We sense within the spectra the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Sources Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down shifting Planck spectra at each pixel and time.
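
    The abstract describes separating the entropy sources with blind source separation at firmware level but gives no algorithmic detail. As a loose, generic illustration of BSS on co-registered spectral band data (an independent component analysis example, not the paper's firmware method; shapes and band counts are assumptions):

    ```python
    # Generic blind source separation on multispectral pixel data with FastICA
    # (scikit-learn). Illustrative only; not the selective-imaging firmware algorithm.
    import numpy as np
    from sklearn.decomposition import FastICA

    def separate_sources(cube, n_sources=3):
        """cube: (H, W, n_bands) array of co-registered band images, n_sources <= n_bands."""
        H, W, n_bands = cube.shape
        mixed = cube.reshape(-1, n_bands)        # one observation (mixture) per pixel
        ica = FastICA(n_components=n_sources, random_state=0)
        sources = ica.fit_transform(mixed)       # statistically independent components
        return sources.reshape(H, W, n_sources)  # e.g. scene vs. obscurant contributions
    ```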

  10. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    A positron emission tomography camera having a plurality of detector rings positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom. Each detector ring or offset ring includes a plurality of photomultiplier tubes and a plurality of scintillation crystals are positioned relative to the photomultiplier tubes whereby each tube is responsive to more than one crystal. Each alternate crystal in the ring is offset by one-half or less of the thickness of the crystal such that the staggered crystals are seen by more than one photomultiplier tube. This sharing of crystals and photomultiplier tubes allows identification of the staggered crystal and the use of smaller detectors shared by larger photomultiplier tubes thereby requiring less photomultiplier tubes, creating more scanning slices, providing better data sampling, and reducing the cost of the camera. The offset detector ring geometry reduces the costs of the positron camera and improves its performance

  11. Bifocal liquid lens zoom objective for mobile phone applications

    Science.gov (United States)

    Wippermann, F. C.; Schreiber, P.; Bräuer, A.; Craen, P.

    2007-02-01

    Miniaturized camera systems are an integral part of today's mobile phones, which recently possess auto focus functionality. Commercially available solutions without moving parts have been developed using the electrowetting technology. Here, the contact angle of a drop of a conductive or polar liquid placed on an insulating substrate can be influenced by an electric field. Besides the compensation of the axial image shift due to different object distances, mobile phones with zoom functionality are desired as a next evolutionary step. In classical mechanically compensated zoom lenses two independently driven actuators combined with precision guides are needed, leading to a delicate, space-consuming and expensive opto-mechanical setup. Liquid lens technology based on the electrowetting effect gives the opportunity to build adaptive lenses without moving parts, thus simplifying the mechanical setup. However, with the recent commercially available liquid lens products a completely motionless and continuously adaptive zoom system with market relevant optical performance is not feasible. This is due to the limited change in optical power the liquid lenses can provide and the dispersion of the used materials. As an intermediate step towards a continuously adjustable and motionless zoom lens we propose a bifocal system sufficient for toggling between two effective focal lengths without any moving parts. The system has its mechanical counterpart in a bifocal zoom lens where only one lens group has to be moved. In a liquid lens bifocal zoom, two groups of adaptable liquid lenses are required for adjusting the effective focal length and keeping the image location constant. In order to overcome the difficulties in achromatizing the lens we propose a sequential image acquisition algorithm. Here, the full color image is obtained from a sequence of monochrome images (red, green, blue), leading to a simplified optical setup.
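
    The sequential acquisition idea itself is simple: three monochrome exposures taken under red, green and blue illumination or filter states are merged into one colour frame. A minimal sketch (alignment, exposure matching and white balance omitted; names are illustrative):

    ```python
    # Compose an RGB frame from three sequentially captured monochrome exposures.
    import numpy as np

    def compose_rgb(frame_r, frame_g, frame_b):
        """Each input: 2-D monochrome exposure captured with the lens set per channel."""
        return np.dstack([frame_r, frame_g, frame_b])   # (H, W, 3) colour image
    ```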

  12. Modular scintillation camera

    International Nuclear Information System (INIS)

    Barrett, H. H.

    1985-01-01

    Improved optical coupling modules to be used in coded-aperture-type radiographic imaging systems. In a first system, a rotating slit coded-aperture is employed between the radioactive object and the module. The module consists of one pair of side-by-side photomultipliers receiving light rays from a scintillation crystal exposed to the object via the coded-aperture. The light rays are guided to the photomultipliers by a mask having a central transverse transparent window, or by a cylindrical lens, the mask or lens being mounted in a light-conveying quartz block assembly providing internal reflections at opposite faces of the assembly. This generates output signals from the photomultipliers which can be utilized to compute one-dimensional coordinate values for restoring the image of the radioactive object on a display screen. In another form of optical coupling module, usable with other types of coded-apertures, four square photomultipliers form a substantially square block and receive light rays from scintillations from a scintillation crystal exposed to the radioactive object via the coded-aperture. The light rays are guided to the photomultipliers by a square mask or a centrally transparent square lens configuration mounted in a light-conveying assembly formed by internally reflecting quartz blocks, the optical rays being directed to the respective photomultipliers so as to generate resultant output signals which can be utilized to compute image coordinate values for two-dimensional representation of the radioactive object being examined

  13. NV-CMOS HD camera for day/night imaging

    Science.gov (United States)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through the use of backside illumination, providing high quantum efficiency (QE) across the visible and near infrared (NIR) bands, and the camera operates from a single 5V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  14. A Correlation of Thin Lens Approximation to Thick Lens Design by Using Coddington Factors in Lens Design and Manufacturing

    OpenAIRE

    FARSAKOĞLU, Ö. Faruk

    2014-01-01

    The effect of Coddington factors on aberration functions has been analysed using thin lens approximation. Minimizing spherical aberrations of singlet lenses using Coddington factors in lens design depending on lens manufacturing is discussed. Notation of lens test plate pairs used in lens manufacturing is also presented in terms of Coddington shape factors.

  15. Remote control video cameras on a suborbital rocket

    International Nuclear Information System (INIS)

    Wessling, Francis C.

    1997-01-01

    Three video cameras were controlled in real time from the ground aboard a sub-orbital rocket during a fifteen minute flight from White Sands Missile Range in New Mexico. Telemetry communications with the rocket allowed the control of the cameras. The pan, tilt, zoom, focus, and iris of two of the camera lenses, the power and record functions of the three cameras, and also the analog video signal that would be sent to the ground were controlled by separate microprocessors. A microprocessor was used to record data from three miniature accelerometers, temperature sensors and a differential pressure sensor. In addition to the selected video signal sent to the ground and recorded there, the video signals from the three cameras were also recorded on board the rocket. These recorders were mounted inside the pressurized segment of the rocket payload. The lenses, lens control mechanisms, and the three small television cameras were located in a portion of the rocket payload that was exposed to the vacuum of space. The accelerometers were also exposed to the vacuum of space

  16. Camera System MTF: combining optic with detector

    Science.gov (United States)

    Andersen, Torben B.; Granger, Zachary A.

    2017-08-01

    MTF is one of the most common metrics used to quantify the resolving power of an optical component. Extensive literature is dedicated to describing methods to calculate the Modulation Transfer Function (MTF) for stand-alone optical components such as a camera lens or telescope, and some literature addresses approaches to determine an MTF for the combination of an optic with a detector. The formulations pertaining to a combined electro-optical system MTF are mostly based on theory and on the assumption that the detector MTF is described only by the pixel pitch, which does not account for wavelength dependencies. When working with real hardware, detectors are often characterized by testing MTF at discrete wavelengths. This paper presents a method to simplify the calculation of a polychromatic system MTF when it is permissible to consider the detector MTF to be independent of wavelength.
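
    As a first-order reference point, the combined system MTF is often approximated as the product of the lens MTF and a pixel-aperture detector MTF; the sinc model below is exactly the pixel-pitch-only simplification the paper argues is insufficient when the detector MTF depends on wavelength. The values and names are illustrative:

    ```python
    # Textbook first-order system MTF: lens MTF multiplied by an ideal square-pixel
    # aperture MTF. Illustrative only; the paper's method handles measured,
    # wavelength-dependent detector MTF.
    import numpy as np

    def detector_mtf(freq_cyc_per_mm, pixel_pitch_mm):
        # np.sinc(x) = sin(pi*x)/(pi*x), so this is |sin(pi f p)/(pi f p)|
        return np.abs(np.sinc(freq_cyc_per_mm * pixel_pitch_mm))

    def system_mtf(freq_cyc_per_mm, lens_mtf, pixel_pitch_mm):
        """lens_mtf: optics MTF sampled at the same spatial frequencies."""
        return lens_mtf * detector_mtf(freq_cyc_per_mm, pixel_pitch_mm)

    freqs = np.linspace(0.0, 100.0, 201)          # spatial frequency, cycles/mm
    lens = np.exp(-freqs / 60.0)                  # placeholder lens MTF curve
    print(system_mtf(freqs, lens, pixel_pitch_mm=0.005))   # assumed 5 um pixel pitch
    ```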

  17. SCC500: next-generation infrared imaging camera core products with highly flexible architecture for unique camera designs

    Science.gov (United States)

    Rumbaugh, Roy N.; Grealish, Kevin; Kacir, Tom; Arsenault, Barry; Murphy, Robert H.; Miller, Scott

    2003-09-01

    A new 4th generation MicroIR architecture is introduced as the latest in the highly successful Standard Camera Core (SCC) series by BAE SYSTEMS to offer an infrared imaging engine with greatly reduced size, weight, power, and cost. The advanced SCC500 architecture provides great flexibility in configuration to include multiple resolutions, an industry standard Real Time Operating System (RTOS) for customer specific software application plug-ins, and a highly modular construction for unique physical and interface options. These microbolometer based camera cores offer outstanding and reliable performance over an extended operating temperature range to meet the demanding requirements of real-world environments. A highly integrated lens and shutter is included in the new SCC500 product enabling easy, drop-in camera designs for quick time-to-market product introductions.

  18. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations, i.e., automatically controlling the virtual...

  19. The world's fastest camera

    CERN Multimedia

    Piquepaille, Roland

    2006-01-01

    This image processor is not your typical digital camera. It took 20 people 6 years and $6 million to build the "Regional Calorimeter Trigger" (RCT), which will be a component of the Compact Muon Solenoid (CMS) experiment, one of the detectors on the Large Hadron Collider (LHC) in Geneva, Switzerland (1 page)

  20. Camera network video summarization

    Science.gov (United States)

    Panda, Rameswar; Roy-Chowdhury, Amit K.

    2017-05-01

    Networks of vision sensors are deployed in many settings, ranging from security needs to disaster response to environmental monitoring. Many of these setups have hundreds of cameras and tens of thousands of hours of video. The difficulty of analyzing such a massive volume of video data is apparent whenever there is an incident that requires foraging through vast video archives to identify events of interest. As a result, video summarization, which automatically extracts a brief yet informative summary of these videos, has attracted intense attention in recent years. Much progress has been made in developing a variety of ways to summarize a single video in the form of a key sequence or video skim. However, generating a summary from a set of videos captured in a multi-camera network still remains a novel and largely under-addressed problem. In this paper, with the aim of summarizing videos in a camera network, we introduce a novel representative selection approach via joint embedding and capped l21-norm minimization. The objective function is two-fold. The first is to capture the structural relationships of data points in a camera network via an embedding, which helps in characterizing the outliers and also in extracting a diverse set of representatives. The second is to use a capped l21-norm to model the sparsity and to suppress the influence of data outliers in representative selection. We propose to jointly optimize both of the objectives, such that the embedding can not only characterize the structure, but also indicate the requirements of sparse representative selection. Extensive experiments on standard multi-camera datasets well demonstrate the efficacy of our method over state-of-the-art methods.
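
    The abstract does not give the objective function. As a heavily hedged sketch of what a capped l21-norm self-representation objective for representative selection can look like in general (an assumed generic form, not the paper's exact formulation):

    ```latex
    % Generic capped l_{2,1} representative-selection objective (illustrative):
    % X holds the frame features, z^i is the i-th row of the selection matrix Z.
    \min_{Z}\; \lVert X - X Z \rVert_F^2
      \;+\; \lambda \sum_{i=1}^{n} \min\!\bigl(\lVert z^{i} \rVert_2,\, \theta\bigr)
    ```

    The capped term min(||z^i||_2, theta) promotes row sparsity, so that only a few frames have nonzero rows and are kept as representatives, while limiting how much any single outlier frame can contribute, which matches the stated aim of suppressing data outliers.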

  1. I Am Not a Camera: On Visual Politics and Method. A Response to Roy Germano

    NARCIS (Netherlands)

    Yanow, D.

    2014-01-01

    No observational method is "point and shoot." Even bracketing interpretive methodologies and their attendant philosophies, a researcher-including an experimentalist-always frames observation in terms of the topic of interest. I cannot ever be "just a camera lens," not as researcher and not as

  2. A Simple Model of the Accommodating Lens of the Human Eye

    Science.gov (United States)

    Oommen, Vinay; Kanthakumar, Praghalathan

    2014-01-01

    The human eye is often discussed as optically equivalent to a photographic camera. The iris is compared with the shutter, the pupil to the aperture, and the retina to the film, and both have lens systems to focus rays of light. Although many similarities exist, a major difference between the two systems is the mechanism involved in focusing an…

  3. Estimating tiger abundance from camera trap data: Field surveys and analytical issues

    Science.gov (United States)

    Karanth, K. Ullas; Nichols, James D.; O'Connell, Allan F.; Nichols, James D.; Karanth, K. Ullas

    2011-01-01

    Automated photography of tigers Panthera tigris for purely illustrative purposes was pioneered by British forester Fred Champion (1927, 1933) in India in the early part of the Twentieth Century. However, it was McDougal (1977) in Nepal who first used camera traps, equipped with single-lens reflex cameras activated by pressure pads, to identify individual tigers and study their social and predatory behaviors. These attempts involved a small number of expensive, cumbersome camera traps, and were not, in any formal sense, directed at “sampling” tiger populations.

  4. LC-lens array with light field algorithm for 3D biomedical applications

    Science.gov (United States)

    Huang, Yi-Pai; Hsieh, Po-Yuan; Hassanfiroozi, Amir; Martinez, Manuel; Javidi, Bahram; Chu, Chao-Yu; Hsuan, Yun; Chu, Wen-Chun

    2016-03-01

    In this paper, a liquid crystal lens (LC-lens) array was utilized in 3D biomedical applications including a 3D endoscope and a light field microscope. Compared with a conventional plastic lens array, which is usually placed in a 3D endoscope or light field microscope system to record image disparity, our LC-lens array has the additional flexibility of electrically changing its focal length. By using the LC-lens array, the working distance and image quality of the 3D endoscope and microscope could be enhanced. Furthermore, 2D/3D switching can be achieved by turning the electrical power on the LC-lens array off or on. In the 3D endoscope case, a hexagonal micro LC-lens array with 350 μm lens diameter was placed at the front end of a 1 mm diameter endoscope. With an electric field applied to the LC-lens array, the 3D specimen is recorded as if by seven micro-cameras with different disparities, and a 3D reconstruction of the specimen can be computed from those micro images. If, on the other hand, the electric field on the LC-lens array is turned off, a conventional high-resolution 2D endoscope image is recorded. In the light field microscope case, the LC-lens array was placed in front of the CMOS sensor. The main purpose of the LC-lens array is to extend the refocusing distance of the light field microscope, which is usually very narrow in a focused light field microscope system, by montaging many light field images sequentially focused on different depths. By adjusting the focal length of the LC-lens array from 2.4 mm to 2.9 mm, the refocusing distance was extended from 1 mm to 11.3 mm. Moreover, an LC wedge can be used to electrically shift the optical axis and increase the resolution of the light field.

  5. DotLens smartphone microscopy for biological and biomedical applications (Conference Presentation)

    Science.gov (United States)

    Sung, Yu-Lung; Zhao, Fusheng; Shih, Wei-Chuan

    2017-02-01

    Recent advances in inkjet-printed optics have created a new class of lens fabrication technique. Lenses with tunable geometry, magnification, and focal length can be fabricated by dispensing controlled amounts of liquid polymer onto a heated surface. This fabrication technique is highly cost-effective and can achieve an optically smooth surface finish. A single such lens, dubbed DotLens, weighs less than 50 mg and occupies a volume of less than 50 μL. A DotLens can be attached onto any smartphone camera akin to a contact lens and enables the smartphone to obtain image resolution as fine as 1 µm. The surface curvature modifies the optical path of light to the image sensor and enables the camera to focus as close as 2 mm. This enables microscopic imaging on a smartphone without any additional attachments and has shown great potential in mobile point-of-care diagnostic systems, particularly for histology of tissue sections and cytology of blood cells. DotLens smartphone microscopy represents an innovative approach fundamentally different from other smartphone microscopes. In this paper, we describe the application and performance of DotLens smartphone microscopy in biological and biomedical research. In particular, we show recent results from images collected from pathology tissue slides with cancer features. In addition, we show performance in cytological analysis of blood smears. This tool has empowered Citizen Science investigators to collect microscopic images from various interesting objects.

  6. Characteristics of the thick, compound refractive lens

    International Nuclear Information System (INIS)

    Pantell, Richard H.; Feinstein, Joseph; Beguiristain, H. Raul; Piestrup, Melvin A.; Gary, Charles K.; Cremer, Jay T.

    2003-01-01

    A compound refractive lens (CRL), consisting of a series of N closely spaced lens elements each of which contributes a small fraction of the total focusing, can be used to focus x rays or neutrons. The thickness of a CRL can be comparable to its focal length, whereupon a thick-lens analysis must be performed. In contrast with the conventional optical lens, where the ray inside the lens follows a straight line, the ray inside the CRL is continually changing direction because of the multiple refracting surfaces. Thus the matrix representation for the thick CRL is quite different from that for the thick optical lens. Principal planes can be defined such that the thick-lens matrix can be converted to that of a thin lens. For a thick lens the focal length is greater than for a thin lens with the same lens curvature, but this lengthening effect is less for the CRL than for the conventional optical lens
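
    As a rough illustration of the thick-lens behaviour described above, the sketch below compares the thin-lens focal length of a CRL with the effective focal length obtained by multiplying the paraxial ray-transfer matrices of N discrete refracting elements separated by a spacing d. The parameters (number of lenses, curvature radius, refractive decrement, spacing) are arbitrary illustrative values, not those of the paper.

        import numpy as np

        # Illustrative compound refractive lens (CRL) parameters (assumed values)
        N = 100          # number of individual biconcave lens elements
        R = 200e-6       # radius of curvature of each surface [m]
        delta = 2.7e-6   # refractive decrement (1 - n) of the lens material (assumed)
        d = 1e-3         # spacing between successive lens elements [m]

        f_single = R / (2.0 * delta)        # focal length of one biconcave element
        f_thin = f_single / N               # thin-lens estimate: f = R / (2 N delta)

        # Paraxial ray-transfer matrices: one lens element, then a drift to the next
        M_lens = np.array([[1.0, 0.0], [-1.0 / f_single, 1.0]])
        M_drift = np.array([[1.0, d], [0.0, 1.0]])

        M = np.eye(2)
        for _ in range(N):
            M = M_drift @ M_lens @ M        # build up the full stack, element by element
        # For a system matrix [[A, B], [C, D]], the effective focal length is -1/C,
        # measured from the principal plane rather than from the lens exit face.
        f_thick = -1.0 / M[1, 0]

        print(f"thin-lens focal length : {f_thin*100:.2f} cm")
        print(f"thick-lens focal length: {f_thick*100:.2f} cm (longer, as noted above)")
        print(f"total lens length      : {N*d*100:.1f} cm")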

  7. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    Science.gov (United States)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ in VS 2010. Experimental results show that the system realizes acquisition and display from both cameras.
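
    The record mentions coordinate transformation with sub-pixel interpolation for the retina-like pixel layout but gives no details. As a hedged illustration, the sketch below remaps a log-polar image (a common idealization of a retina-like, space-variant sampling pattern; the actual sensor layout in the paper may differ) onto a Cartesian grid using bilinear interpolation. Array shapes and the mapping are assumptions for demonstration only.

        import numpy as np

        def logpolar_to_cartesian(lp, out_size, r_max):
            """Resample a log-polar image lp[ring, sector] onto a Cartesian grid.

            lp       : 2D array, rows = rings (log-radius), cols = angular sectors
            out_size : side length of the square Cartesian output image
            r_max    : radius (in output pixels) covered by the outermost ring
            Bilinear (sub-pixel) interpolation is used in (ring, sector) space.
            """
            n_rings, n_sectors = lp.shape
            cy = cx = (out_size - 1) / 2.0
            y, x = np.mgrid[0:out_size, 0:out_size]
            dx, dy = x - cx, y - cy
            r = np.clip(np.hypot(dx, dy), 1.0, r_max)
            theta = np.mod(np.arctan2(dy, dx), 2 * np.pi)

            # continuous (ring, sector) coordinates for every Cartesian pixel
            ring = (np.log(r) / np.log(r_max)) * (n_rings - 1)
            sect = theta / (2 * np.pi) * n_sectors

            # bilinear interpolation with angular wrap-around
            r0 = np.clip(np.floor(ring).astype(int), 0, n_rings - 2)
            s0 = np.floor(sect).astype(int) % n_sectors
            fr, fs = ring - r0, sect - np.floor(sect)
            s1 = (s0 + 1) % n_sectors
            out = ((1 - fr) * (1 - fs) * lp[r0, s0] + (1 - fr) * fs * lp[r0, s1]
                   + fr * (1 - fs) * lp[r0 + 1, s0] + fr * fs * lp[r0 + 1, s1])
            return out

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            lp = rng.random((64, 128))                 # toy "retina-like" frame
            cart = logpolar_to_cartesian(lp, out_size=256, r_max=127.0)
            print(cart.shape)                          # (256, 256)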

  8. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1986-01-01

    A positron emission tomography camera having a plurality of detector rings positioned side-by-side or offset by one-half of the detector cross section around a patient area to detect radiation therefrom. Each ring contains a plurality of scintillation detectors which are positioned around an inner circumference with a septum ring extending inwardly from the inner circumference along each outer edge of each ring. An additional septum ring is positioned in the middle of each ring of detectors and parallel to the other septa rings, whereby the inward extent of all the septa rings may be reduced by one-half and the number of detectors required in each ring is reduced. The additional septa reduce the cost of the positron camera and improve its performance

  9. Gamma ray camera

    International Nuclear Information System (INIS)

    Wang, S.-H.; Robbins, C.D.

    1979-01-01

    An Anger gamma ray camera is improved by the substitution of a gamma ray sensitive, proximity type image intensifier tube for the scintillator screen in the Anger camera. The image intensifier tube has a negatively charged flat scintillator screen, a flat photocathode layer, and a grounded, flat output phosphor display screen, all of which have the same dimension to maintain unit image magnification; all components are contained within a grounded metallic tube, with a metallic, inwardly curved input window between the scintillator screen and a collimator. The display screen can be viewed by an array of photomultipliers or solid state detectors. There are two photocathodes and two phosphor screens to give a two stage intensification, the two stages being optically coupled by a light guide. (author)

  10. NSTX Tangential Divertor Camera

    International Nuclear Information System (INIS)

    Roquemore, A.L.; Ted Biewer; Johnson, D.; Zweben, S.J.; Nobuhiro Nishino; Soukhanovskii, V.A.

    2004-01-01

    Strong magnetic field shear around the divertor x-point is numerically predicted to lead to strong spatial asymmetries in turbulence-driven particle fluxes. To visualize the turbulence and the associated impurity line emission near the lower x-point region, a new tangential observation port has recently been installed on NSTX. A reentrant sapphire window with a moveable in-vessel mirror images the divertor region from the center stack out to R ~ 80 cm and views the x-point for most plasma configurations. A coherent fiber optic bundle transmits the image through a remotely selected filter to a fast camera, for example a 40,500 frames/s Photron CCD camera. A gas puffer located in the lower inboard divertor will localize the turbulence in the region near the x-point. The edge fluid and turbulence codes UEDGE and BOUT will be used to interpret impurity and deuterium emission fluctuation measurements in the divertor

  11. Scanning gamma camera

    International Nuclear Information System (INIS)

    Engdahl, L.W.; Batter, J.F. Jr.; Stout, K.J.

    1977-01-01

    A scanning system for a gamma camera providing for the overlapping of adjacent scan paths is described. A collimator mask having tapered edges provides for a graduated reduction in intensity of radiation received by a detector thereof, the reduction in intensity being graduated in a direction normal to the scanning path to provide a blending of images of adjacent scan paths. 31 claims, 15 figures

  12. Gamma camera display system

    International Nuclear Information System (INIS)

    Stout, K.J.

    1976-01-01

    A gamma camera having an array of photomultipliers coupled via pulse shaping circuitry and a resistor weighting circuit to a display for forming an image of a radioactive subject is described. A linearizing circuit is coupled to the weighting circuit, the linearizing circuit including a nonlinear feedback circuit with diode coupling to the weighting circuit for linearizing the correspondence between points of the display and points of the subject. 4 Claims, 5 Drawing Figures

  13. Comparison of polarimetric cameras

    Science.gov (United States)

    2017-03-01

    Abstract not available in this record; the indexed report-documentation page lists the subject terms polarimetric camera, remote sensing, and space systems (93 pages). The excerpted text indicates that measurement data were collected at Hermann Hall, Monterey, CA, and on 01 December 2016 at 1226 PST on the rooftop of the Marriott Hotel in Monterey.

  14. Photography in Dermatologic Surgery: Selection of an Appropriate Camera Type for a Particular Clinical Application.

    Science.gov (United States)

    Chen, Brian R; Poon, Emily; Alam, Murad

    2017-08-01

    Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point and shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, were evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point and shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.

  15. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Yu Lu

    2016-04-01

    Full Text Available A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the influence of the focusing effect of the uniform light from an integrating sphere. The linearity range of the radiometric response, the non-linearity response characteristics, the sensitivity, and the dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that the panoramas reflect the objective luminance more faithfully; this overcomes the limitation of stitching approaches that produce realistic-looking seams only through smoothing. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens; the dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second.
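
    A minimal sketch of the kind of photometric correction the record describes: linearizing each sensor's response and removing vignetting with a flat-field before blending. The response model (dark frame plus a look-up table built from integrating-sphere measurements) and all array names are assumptions, not the calibration procedure actually used by the authors.

        import numpy as np

        def photometric_correct(raw, dark, flat, inverse_response):
            """Apply a simple photometric correction to one camera module's frame.

            raw              : raw image, digital numbers (H, W)
            dark             : dark frame measured with the lens capped (H, W)
            flat             : normalized flat-field / vignetting image (H, W), ~1.0 at centre
            inverse_response : 1D look-up table mapping digital number -> relative
                               scene luminance (assumed to exist; values are illustrative)
            Returns an estimate of relative scene luminance for every pixel.
            """
            dn = np.clip(raw.astype(np.int64) - dark.astype(np.int64), 0, None)
            linear = inverse_response[np.clip(dn, 0, len(inverse_response) - 1)]
            return linear / flat             # divide out lens/sensor vignetting

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            H, W = 480, 640
            dark = np.full((H, W), 16, dtype=np.uint16)
            # synthetic cos^4-style vignetting pattern as a stand-in for a measured flat
            y, x = np.mgrid[0:H, 0:W]
            r = np.hypot(x - W / 2, y - H / 2) / np.hypot(W / 2, H / 2)
            flat = np.cos(np.clip(r, 0, 1) * 0.9) ** 4
            inverse_response = (np.arange(4096) / 4095.0) ** 1.1   # mildly non-linear LUT
            raw = (dark + rng.integers(0, 2000, size=(H, W))).astype(np.uint16)
            lum = photometric_correct(raw, dark, flat, inverse_response)
            print(lum.shape, float(lum.max()))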

  16. Consumer electronic optics: how small can a lens be: the case of panomorph lenses

    Science.gov (United States)

    Thibault, Simon; Parent, Jocelyn; Zhang, Hu; Du, Xiaojun; Roulet, Patrice

    2014-09-01

    In 2014, miniature camera modules are applied to a variety of applications such as webcams, mobile phones, automotive systems, endoscopes, tablets, portable computers and many other products. Mobile phone cameras are probably among the most challenging applications due to the need for smaller and smaller total track length (TTL) and for optimized embedded image processing algorithms. As the technology develops, higher resolution, higher image quality and new capabilities are required to fulfil market needs. Consequently, the lens system becomes more complex and requires more optical elements and/or new optical elements. What is the limit? How small can an injection-molded lens be? We discuss these questions by comparing two wide-angle lenses for the consumer electronics market. The first lens is a 6.56 mm (TTL) panoramic (180° FOV) lens built in 2012. The second is a more recent (2014) panoramic lens (180° FOV) with a TTL of 3.80 mm for mobile phone cameras. Both optics are panomorph lenses used with megapixel sensors. Between 2012 and 2014, developments in design and plastic injection molding allowed a reduction of the TTL by more than 40%. This TTL reduction has been achieved by pushing the lens design to the extreme (edge/central air and material thicknesses as well as lens shape). It was also made possible by better control of the injection molding process and material (low birefringence, haze and thermal stability). These aspects will be presented and discussed. We do not know whether new materials or processes will appear in the next few years, but innovative people and industries will still be needed to push the limits again.

  17. Optomechanical System Development of the AWARE Gigapixel Scale Camera

    Science.gov (United States)

    Son, Hui S.

    Electronic focal plane arrays (FPA) such as CMOS and CCD sensors have dramatically improved to the point that digital cameras have essentially phased out film (except in very niche applications such as hobby photography and cinema). However, the traditional method of mating a single lens assembly to a single detector plane, as required for film cameras, is still the dominant design used in cameras today. The use of electronic sensors and their ability to capture digital signals that can be processed and manipulated post acquisition offers much more freedom of design at system levels and opens up many interesting possibilities for the next generation of computational imaging systems. The AWARE gigapixel scale camera is one such computational imaging system. By utilizing a multiscale optical design, in which a large aperture objective lens is mated with an array of smaller, well corrected relay lenses, we are able to build an optically simple system that is capable of capturing gigapixel scale images via post acquisition stitching of the individual pictures from the array. Properly shaping the array of digital cameras allows us to form an effectively continuous focal surface using off the shelf (OTS) flat sensor technology. This dissertation details developments and physical implementations of the AWARE system architecture. It illustrates the optomechanical design principles and system integration strategies we have developed through the course of the project by summarizing the results of the two design phases for AWARE: AWARE-2 and AWARE-10. These systems represent significant advancements in the pursuit of scalable, commercially viable snapshot gigapixel imaging systems and should serve as a foundation for future development of such systems.

  18. Development and setting of a time-lapse video camera system for the Antarctic lake observation

    Directory of Open Access Journals (Sweden)

    Sakae Kudoh

    2010-11-01

    Full Text Available A submersible video camera system, which aimed to record images of the growth of aquatic vegetation in Antarctic lakes for one year, was manufactured. The system consisted of a video camera, a programmable controller unit, a lens-cleaning wiper with a submersible motor, LED lights, and a lithium-ion battery unit. A change of video camera (to a High Vision system) and modification of the lens-cleaning wiper allowed higher sensitivity and clearer recorded images compared to the previous submersible video system, without increasing the power consumption. The system was set on the lake floor of Lake Naga Ike (a tentative name) in Skarvsnes on the Soya Coast during the summer activity of the 51st Japanese Antarctic Research Expedition. Interval recording of underwater visual images for one year has been started by our diving operation.

  19. Optical Design of the Camera for Transiting Exoplanet Survey Satellite (TESS)

    Science.gov (United States)

    Chrisp, Michael; Clark, Kristin; Primeau, Brian; Dalpiaz, Michael; Lennon, Joseph

    2015-01-01

    The optical design of the wide field of view refractive camera, 34 degrees diagonal field, for the TESS payload is described. This fast f/1.4 cryogenic camera, operating at -75 C, has no vignetting for maximum light gathering within the size and weight constraints. Four of these cameras capture full frames of star images for photometric searches of planet crossings. The optical design evolution, from the initial Petzval design, took advantage of Forbes aspheres to develop a hybrid design form. This maximized the correction from the two aspherics resulting in a reduction of average spot size by sixty percent in the final design. An external long wavelength pass filter was replaced by an internal filter coating on a lens to save weight, and has been fabricated to meet the specifications. The stray light requirements were met by an extended lens hood baffle design, giving the necessary off-axis attenuation.

  20. Measuring Light Pollution with Fisheye Lens Imagery from A Moving Boat – A Proof of Concept

    Directory of Open Access Journals (Sweden)

    Andreas Jechow

    2017-06-01

    Full Text Available Near all-sky imaging photometry was performed from a boat on the Gulf of Aqaba to measure the night sky brightness in a coastal environment. The boat was not anchored, and therefore drifted and rocked. The camera was mounted on a tripod without any inertia/motion stabilization. A commercial digital single lens reflex (DSLR camera and fisheye lens were used with ISO setting of 6400, with the exposure time varied between 0.5 s and 5 s. We find that despite movement of the vessel the measurements produce quantitatively comparable results apart from saturation effects. We discuss the potential and limitations of this method for mapping light pollution in marine and freshwater systems. This work represents the proof of concept that all-sky photometry with a commercial DSLR camera is a viable tool to determine light pollution in an ecological context from a moving boat.

  1. Fabry-Perot interferometry using an image-intensified rotating-mirror streak camera

    International Nuclear Information System (INIS)

    Seitz, W.L.; Stacy, H.L.

    1983-01-01

    A Fabry-Perot velocity interferometer system is described that uses a modified rotating-mirror streak camera to record the dynamic fringe positions. A Los Alamos Model 72B rotating-mirror streak camera, equipped with a beryllium mirror, was modified to include a high-aperture (f/2.5) relay lens and a 40-mm image-intensifier tube, such that the image normally formed at the film plane of the streak camera is projected onto the intensifier tube. Fringe records for thin (0.13 mm) flyers driven by a small bridgewire detonator obtained with Model C1155-01 Hamamatsu and Model 790 Imacon electronic streak cameras are compared with those obtained with the image-intensified rotating-mirror streak camera (I²RMC). Resolution comparisons indicate that the I²RMC gives better time resolution than either the Hamamatsu or the Imacon for total writing times of a few microseconds or longer

  2. Radiation-resistant camera tube

    International Nuclear Information System (INIS)

    Kuwahata, Takao; Manabe, Sohei; Makishima, Yasuhiro

    1982-01-01

    Toshiba began long ago to manufacture black-and-white radiation-resistant camera tubes employing non-browning faceplate glass for ITV cameras used in nuclear power plants. Now, in response to increasing demand in the nuclear power field, the company is developing radiation-resistant single color-camera tubes incorporating a color-stripe filter for color ITV cameras used in radiation environments. Presented here are the results of experiments on the characteristics of materials for single color-camera tubes and the prospects for commercialization of the tubes. (author)

  3. Evaluation of the geometric stability and the accuracy potential of digital cameras — Comparing mechanical stabilisation versus parameterisation

    Science.gov (United States)

    Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia

    Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accord to a German guideline for evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes which was considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens where the focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introduction of an image variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). Extending the parameter model with Fi

  4. Camera Movement in Narrative Cinema

    DEFF Research Database (Denmark)

    Nielsen, Jakob Isak

    2007-01-01

    section unearths what characterizes the literature on camera movement. The second section of the dissertation delineates the history of camera movement itself within narrative cinema. Several organizational principles subtending the on-screen effect of camera movement are revealed in section two...... but they are not organized into a coherent framework. This is the task that section three meets in proposing a functional taxonomy for camera movement in narrative cinema. Two presumptions subtend the taxonomy: That camera movement actively contributes to the way in which we understand the sound and images on the screen......, commentative or valuative manner. 4) Focalization: associating the movement of the camera with the viewpoints of characters or entities in the story world. 5) Reflexive: inviting spectators to engage with the artifice of camera movement. 6) Abstract: visualizing abstract ideas and concepts. In order...

  5. Crystalline lens radioprotectors

    International Nuclear Information System (INIS)

    Belkacemi, Y.; Pasquier, D.; Castelain, B.; Lartigau, E.; Warnet, J.M.

    2003-01-01

    For more than half a century, numerous compounds have been tested in different models against radiation-induced cataract. In this report, we review the radioprotectors that have already been tested for protection of the non-human crystalline lens. We focus on the most important published studies in this topic and on the mechanisms of cytoprotection reported in vitro and in vivo in animals. The mechanisms most frequently invoked for the cytoprotective effect are: free radical scavenging, limitation of lipid peroxidation, modulation of cycle progression, increase of the intracellular reduced glutathione pool, reduction of DNA strand breaks and limitation of apoptotic cell death. Amifostine (or Ethyol) and anethole dithiolethione (or Sulfarlem), already used clinically as chemo- and radio-protectants, could be further tested for ocular radioprotection, particularly against radiation-induced cataract. (author)

  6. Wedged multilayer Laue lens

    International Nuclear Information System (INIS)

    Conley, Ray; Liu Chian; Qian Jun; Kewish, Cameron M.; Macrander, Albert T.; Yan Hanfei; Maser, Joerg; Kang, Hyon Chol; Stephenson, G. Brian

    2008-01-01

    A multilayer Laue lens (MLL) is an x-ray focusing optic fabricated from a multilayer structure consisting of thousands of layers of two different materials produced by thin-film deposition. The sequence of layer thicknesses is controlled to satisfy the Fresnel zone plate law and the multilayer is sectioned to form the optic. An improved MLL geometry can be created by growing each layer with an in-plane thickness gradient to form a wedge, so that every interface makes the correct angle with the incident beam for symmetric Bragg diffraction. The ultimate hard x-ray focusing performance of a wedged MLL has been predicted to be significantly better than that of a nonwedged MLL, giving subnanometer resolution with high efficiency. Here, we describe a method to deposit the multilayer structure needed for an ideal wedged MLL and report our initial deposition results to produce these structures
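
    The layer sequence of an MLL follows the Fresnel zone plate law mentioned above. As a hedged illustration (not the authors' deposition recipe), the sketch below computes the zone radii r_n = sqrt(n*lambda*f + (n*lambda/2)^2) and the resulting layer thicknesses for an assumed wavelength and focal length; the outermost zone width sets the achievable diffraction-limited resolution.

        import numpy as np

        # Illustrative parameters (assumptions, not the structure reported above)
        E_keV = 19.5                       # photon energy
        lam = 1.2398e-9 / E_keV            # wavelength [m] from lambda = 1.2398 nm*keV / E
        f = 2.6e-3                         # focal length [m]
        N_zones = 2000                     # number of zones deposited on one side

        n = np.arange(1, N_zones + 1)
        r = np.sqrt(n * lam * f + (n * lam / 2.0) ** 2)         # Fresnel zone plate law
        layer_thickness = np.diff(np.concatenate(([0.0], r)))   # thickness of each deposited layer

        print(f"wavelength              : {lam*1e9:.4f} nm")
        print(f"lens aperture (one side): {r[-1]*1e6:.2f} um")
        print(f"outermost zone width    : {layer_thickness[-1]*1e9:.2f} nm")
        # The diffraction-limited focus is roughly the outermost zone width,
        # which is why MLLs with few-nm outer zones aim at (sub)nanometre focusing.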

  7. Using of a microcapillary refractive X-ray lens for focusing and imaging

    Energy Technology Data Exchange (ETDEWEB)

    Dudchik, Yu.I. [Institute of Applied Physics Problems, Kurchatova 7, 220064, Minsk (Belarus)], E-mail: dudchik@bsu.by; Komarov, F.F. [Institute of Applied Physics Problems, Kurchatova 7, 220064, Minsk (Belarus); Piestrup, M.A. [Adelphi Technology, 981-B Industrial Rd, San Carlos, 94070, California (United States)], E-mail: melpie@adelphitech.com; Gary, C.K.; Park, H.; Cremer, J.T. [Adelphi Technology, 981-B Industrial Rd, San Carlos, 94070, California (United States)

    2007-07-15

    The microcapillary lens, formed by air bubbles in a hollow-core glass capillary filled with epoxy, is a novel design of a compound refractive lens for X-rays. The epoxy enclosed between two air bubbles has the form of a biconcave lens and acts as a positive lens for X-rays. Each individual lens is spherical, with a radius of curvature equal to the inner radius of the capillary. Up to 500 individual biconcave lenses can be formed in a single capillary, with diameters from 50 to 500 μm. Due to the small radii of curvature that can be achieved, microcapillary lenses typically have shorter focal lengths than those made by compression or injection molding. For example, microcapillary lenses with a focal length of about 5 cm for 8 keV X-rays and a 50-micron aperture are readily available. We have produced a set of lenses in a 200-micron inner-diameter glass capillary with 100-350 individual microlenses and measured their parameters at the Stanford Synchrotron Radiation Laboratory and at the Advanced Photon Source. Our investigations have also shown that the lenses are suitable for imaging applications with an X-ray tube as a source of X-rays. A simple X-ray microscope is discussed. The microscope consists of a copper-anode X-ray tube, the X-ray lens and a CCD camera. The object, lens and CCD camera were placed in line at distances satisfying the lens formula. It is shown that the field of view of the microscope is about 1 mm and the resolution is equal to 3-5 μm.
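
    For readers unfamiliar with refractive X-ray optics, the short calculation below shows how the quantities in the record fit together: the thin-lens focal length of a stack of N biconcave lenses of surface radius R is f ≈ R/(2Nδ), and placing object and detector at conjugate distances obeying the lens formula 1/a + 1/b = 1/f gives the image magnification. The refractive decrement δ, lens count and object distance used here are assumed round numbers, not the exact values of the paper.

        # Thin-lens estimate for a microcapillary compound refractive lens (CRL)
        # and an in-line imaging geometry satisfying the lens formula.
        delta = 3.5e-6      # assumed refractive decrement of epoxy at 8 keV (illustrative)
        R = 100e-6          # radius of curvature = capillary inner radius [m] (200 um diameter)
        N = 280             # assumed number of biconcave microlenses

        f = R / (2 * N * delta)             # CRL focal length
        print(f"focal length ~ {f*100:.1f} cm")   # a few cm, in line with the values quoted above

        # Imaging: choose an object distance a and solve the lens formula for b
        a = 0.06                            # object-to-lens distance [m] (assumed)
        b = 1.0 / (1.0 / f - 1.0 / a)       # lens-to-CCD distance
        M = b / a                           # transverse magnification
        print(f"image distance ~ {b*100:.1f} cm, magnification ~ {M:.1f}x")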

  8. Passive auto-focus for digital still cameras and camera phones: Filter-switching and low-light techniques

    Science.gov (United States)

    Gamadia, Mark Noel

    In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add new features and improve existing ones in their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which may lead to missed photographic opportunities, rendering valuable moments and events with undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state of the art in the digital band-pass filter based passive AF method. This method is widely used to realize AF in the camera industry, where a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time-consuming experimentation and tuning in order to arrive at a set of parameters which balance AF performance in terms of speed and accuracy, ultimately causing a delay in product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter-Switching AF, providing an automatic approach to achieve superior AF performance in both good and low lighting conditions, based on the following performance measures (metrics): speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun). Performance results using three different prototype cameras
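
    To make the band-pass-filter passive AF idea concrete, here is a minimal, hedged sketch of contrast-based autofocus: a sharpness score is computed from a high-frequency band of each frame, and a simple coarse-to-fine hill-climb moves the focus position toward the score maximum. The sharpness metric, step sizes and the simulated capture_frame function are illustrative assumptions, not the optimized Filter-Switching parameters of the dissertation.

        import numpy as np

        def sharpness(img):
            """Band-pass-style focus measure: energy of horizontal/vertical differences."""
            gx = np.diff(img.astype(float), axis=1)
            gy = np.diff(img.astype(float), axis=0)
            return float((gx ** 2).sum() + (gy ** 2).sum())

        def autofocus(capture_frame, n_positions=256, coarse_step=16, fine_step=2):
            """Coarse-to-fine hill-climb over discrete focus-motor positions.

            capture_frame(pos) -> 2D image captured with the focus motor at `pos`
            (hypothetical interface assumed for this sketch).
            """
            coarse = range(0, n_positions, coarse_step)
            scores = {p: sharpness(capture_frame(p)) for p in coarse}
            best = max(scores, key=scores.get)
            lo, hi = max(0, best - coarse_step), min(n_positions - 1, best + coarse_step)
            for p in range(lo, hi + 1, fine_step):          # fine search around the coarse peak
                if p not in scores:
                    scores[p] = sharpness(capture_frame(p))
            return max(scores, key=scores.get)

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            scene = rng.random((64, 64))                    # fixed toy texture
            true_focus = 137
            def capture_frame(pos):
                # toy lens model: defocus blurs the scene with a box kernel
                width = 1 + abs(pos - true_focus) // 4
                k = np.ones(width) / width
                blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, scene)
                return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
            print("in-focus position found near", autofocus(capture_frame), "(true:", true_focus, ")")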

  9. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    Science.gov (United States)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
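
    The intra-camera step above recovers tilt, focal length and camera height so that pixel coordinates can be converted to metric ground positions. A hedged sketch of that conversion for a simple pinhole model is shown below: given an assumed tilt angle, focal length (in pixels) and camera height, the image row of a person's foot point is back-projected onto the ground plane. The geometry and variable names are illustrative; the paper's actual estimation procedure is more involved.

        import numpy as np

        def foot_row_to_ground_distance(v, v0, f_px, tilt_rad, cam_height):
            """Back-project the image row of a foot point onto the ground plane.

            v          : image row of the detected foot point [pixels, down is +]
            v0         : principal point row [pixels]
            f_px       : focal length expressed in pixels
            tilt_rad   : camera tilt below the horizontal [radians]
            cam_height : camera height above the ground plane [metres]
            Returns the horizontal distance from the camera to the foot point.
            Assumes a pinhole camera with zero roll looking over a flat ground plane.
            """
            # angle of the ray through row v, measured below the horizontal
            ray_angle = tilt_rad + np.arctan((v - v0) / f_px)
            if ray_angle <= 0:
                raise ValueError("ray does not intersect the ground plane")
            return cam_height / np.tan(ray_angle)

        if __name__ == "__main__":
            # assumed calibration values of the kind the auto-calibration estimates
            f_px, v0 = 1000.0, 540.0                 # focal length [px], principal row [px]
            tilt, height = np.deg2rad(12.0), 4.0     # tilt [rad], camera height [m]
            for v in (600, 800, 1000):
                d = foot_row_to_ground_distance(v, v0, f_px, tilt, height)
                print(f"foot at row {v}: ~{d:.1f} m from the camera")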

  10. Fixed-focus camera objective for small remote sensing satellites

    Science.gov (United States)

    Topaz, Jeremy M.; Braun, Ofer; Freiman, Dov

    1993-09-01

    An athermalized objective has been designed for a compact, lightweight push-broom camera which is under development at El-Op Ltd. for use in small remote-sensing satellites. The high performance objective has a fixed focus setting, but maintains focus passively over the full range of temperatures encountered in small satellites. The lens is an F/5.0, 320 mm focal length Tessar type, operating over the range 0.5 - 0.9 micrometers . It has a 16 degree(s) field of view and accommodates various state-of-the-art silicon detector arrays. The design and performance of the objective is described in this paper.

  11. Approaching direct optimization of as-built lens performance

    Science.gov (United States)

    McGuire, James P.; Kuper, Thomas G.

    2012-10-01

    We describe a method approaching direct optimization of the rms wavefront error of a lens including tolerances. By including the effect of tolerances in the error function, the designer can choose to improve the as-built performance with a fixed set of tolerances and/or reduce the cost of production lenses with looser tolerances. The method relies on the speed of differential tolerance analysis and has recently become practical due to the combination of continuing increases in computer hardware speed and multiple core processing We illustrate the method's use on a Cooke triplet, a double Gauss, and two plastic mobile phone camera lenses.
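
    One simple way to fold tolerances into the merit function, in the spirit of the approach described above, is to combine the nominal RMS wavefront error with a root-sum-square of differential tolerance sensitivities. The sketch below shows that combination for a toy parameter set; the sensitivity values, tolerance set and merit model are stand-ins, since the paper's actual error function is built inside an optical design code.

        import numpy as np

        def as_built_merit(nominal_rms_wfe, sensitivities, tolerances):
            """Estimate an as-built RMS wavefront error for optimization.

            nominal_rms_wfe : RMS wavefront error of the nominal design [waves]
            sensitivities   : d(RMS WFE)/d(tolerance_i) from differential analysis [waves/unit]
            tolerances      : assumed magnitude of each perturbation [unit]
            The tolerance terms are combined by root-sum-square, which treats the
            perturbations as independent; compensators and higher-order effects
            are ignored in this sketch.
            """
            s = np.asarray(sensitivities, dtype=float)
            t = np.asarray(tolerances, dtype=float)
            return float(np.sqrt(nominal_rms_wfe ** 2 + np.sum((s * t) ** 2)))

        if __name__ == "__main__":
            # illustrative numbers: a design with 0.05 waves nominal RMS WFE and
            # four toleranced parameters (e.g. element decenters and an airspace)
            nominal = 0.05
            sens = [0.8, 0.5, 1.2, 0.3]        # waves per unit perturbation (assumed)
            tol = [0.02, 0.02, 0.01, 0.05]     # assumed tolerance magnitudes
            print(f"nominal RMS WFE  : {nominal:.3f} waves")
            print(f"as-built estimate: {as_built_merit(nominal, sens, tol):.3f} waves")
            # a designer can now minimize this quantity (or trade it against looser,
            # cheaper tolerances) instead of the nominal error alone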

  12. Compliance among soft contact lens wearers.

    Science.gov (United States)

    Kuzman, Tomislav; Kutija, Marija Barisić; Masnec, Sanja; Jandroković, Sonja; Mrazovac, Danijela; Jurisić, Darija; Skegro, Ivan; Kalauz, Miro; Kordić, Rajko

    2014-12-01

    Contact lens compliance is proven to be crucial for preventing lens wear-related complications because of the interdependence of the steps in the lens care regime and their influence on microbial contamination of the lens system. Awareness of patients' lens handling compliance, as well as correct recognition of non-compliant behaviours, is the basis for creating more targeted strategies for patient education. The aim of this study was to investigate compliance among soft contact lens (SCL) wearers in different aspects of lens care handling and wearing habits. In our research, 50 asymptomatic lens wearers filled out a questionnaire containing demographic data, lens type, hygiene and wearing habits, lens and lens care system replacement schedule, and self-evaluation of contact lens handling hygiene. We established criteria of compliance according to available manufacturers' recommendations, prior literature and our clinical experience. Only 2 (4%) of patients were fully compliant SCL wearers. The most common non-compliant behaviours were insufficient lens solution soaking time (62%), followed by failure to exchange the lens case solution daily and showering while wearing lenses. 44% of patients reported storing lenses in saline solution. The mean lens storage case replacement interval was 3.6 months, with up to 78% of patients replacing the lens case at least once in 3 months. The average grade in self-evaluating the level of compliance was very good (4 +/- 0.78) (from 1, poor level of hygiene, to 5, great level of hygiene). Lens wearers who reported excessive daily lens wear and more than 10 years of lens wearing experience were also found to be less compliant with other lens care procedures (t = -2.99, df = 47, p ...). Despite the low overall compliance rate, self-grading was relatively high. These results therefore indicate the need for patient education and encouragement of better lens wearing habits and of all lens maintenance steps at each patient visit.

  13. Contact Lens-Related Eye Infections

    Science.gov (United States)

    Patient-education web page from an Eye Health A-Z resource on contact lens-related eye infections, including the section "Six Steps to Avoid Contact Lens Infections"; also available in Spanish ("Infecciones relacionadas ...").

  14. 21 CFR 886.1375 - Bagolini lens.

    Science.gov (United States)

    2010-04-01

    21 CFR 886.1375 (Food and Drugs, Ophthalmic Devices, Diagnostic Devices), Bagolini lens. (a) Identification. A Bagolini lens is a device that consists of a plane lens containing almost imperceptible striations that do not obscure...

  15. Straylight Measurements in Contact Lens Wear

    NARCIS (Netherlands)

    van der Meulen, Ivanka J. E.; Engelbrecht, Leonore A.; van Vliet, Johannes M. J.; Lapid-Gortzak, Ruth; Nieuwendaal, Carla P.; Mourits, Maarten P.; Schlingemann, Reinier O.; van den Berg, Thomas J. T. P.

    2010-01-01

    Purpose: (1) To quantify the effect of contact lens wear on straylight in rigid and soft contact lens wearers and (2) to relate findings to morphological changes and subjective complaints. Methods: Straylight was measured using the Oculus C-Quant during contact lens wear and after contact lens

  16. Immunohistochemical studies of lens crystallins in the dysgenetic lens (dyl) mutant mice

    NARCIS (Netherlands)

    Brahma, S.K.; Sanyal, S.

    1984-01-01

    The lens in the dyl mutant mice shows a persistent lens-ectodermal connection as well as degeneration and extrusion of lens materials after the initial differentiation of the fibres. Immunohistochemical investigation of the ontogeny of the lens crystallins in this developing mutant lens has been

  17. EVALUATION OF THE QUALITY OF ACTION CAMERAS WITH WIDE-ANGLE LENSES IN UAV PHOTOGRAMMETRY

    Directory of Open Access Journals (Sweden)

    H. Hastedt

    2016-06-01

    Full Text Available The application of light-weight cameras in UAV photogrammetry is required due to restrictions in payload. In general, consumer cameras with normal lens type are applied to a UAV system. The availability of action cameras, like the GoPro Hero4 Black, including a wide-angle lens (fish-eye lens) offers new perspectives in UAV projects. With these investigations, different calibration procedures for fish-eye lenses are evaluated in order to quantify their accuracy potential in UAV photogrammetry. Herewith the GoPro Hero4 is evaluated using different acquisition modes. It is investigated to which extent the standard calibration approaches in OpenCV or Agisoft PhotoScan/Lens can be applied to the evaluation processes in UAV photogrammetry. Therefore different calibration setups and processing procedures are assessed and discussed. Additionally a pre-correction of the initial distortion by GoPro Studio and its application to the photogrammetric purposes will be evaluated. An experimental setup with a set of control points and a prospective flight scenario is chosen to evaluate the processing results using Agisoft PhotoScan. Herewith it is analysed to which extent a pre-calibration and pre-correction of a GoPro Hero4 will reinforce the reliability and accuracy of a flight scenario.

  18. Evaluation of the Quality of Action Cameras with Wide-Angle Lenses in Uav Photogrammetry

    Science.gov (United States)

    Hastedt, H.; Ekkel, T.; Luhmann, T.

    2016-06-01

    The application of light-weight cameras in UAV photogrammetry is required due to restrictions in payload. In general, consumer cameras with normal lens type are applied to a UAV system. The availability of action cameras, like the GoPro Hero4 Black, including a wide-angle lens (fish-eye lens) offers new perspectives in UAV projects. With these investigations, different calibration procedures for fish-eye lenses are evaluated in order to quantify their accuracy potential in UAV photogrammetry. Herewith the GoPro Hero4 is evaluated using different acquisition modes. It is investigated to which extent the standard calibration approaches in OpenCV or Agisoft PhotoScan/Lens can be applied to the evaluation processes in UAV photogrammetry. Therefore different calibration setups and processing procedures are assessed and discussed. Additionally a pre-correction of the initial distortion by GoPro Studio and its application to the photogrammetric purposes will be evaluated. An experimental setup with a set of control points and a prospective flight scenario is chosen to evaluate the processing results using Agisoft PhotoScan. Herewith it is analysed to which extent a pre-calibration and pre-correction of a GoPro Hero4 will reinforce the reliability and accuracy of a flight scenario.
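
    For readers who want to try the OpenCV calibration route mentioned in the record, the sketch below shows a minimal chessboard-based calibration using OpenCV's fisheye model. File paths, board geometry and flags are assumptions; the study's own calibration setups and the comparison against Agisoft PhotoScan/Lens are not reproduced here.

        import glob
        import cv2
        import numpy as np

        # Assumed inputs: chessboard images taken with the action camera
        IMAGES = sorted(glob.glob("gopro_calib/*.jpg"))   # hypothetical path
        BOARD = (9, 6)          # inner corners per row/column (assumed board)
        SQUARE = 0.025          # square size in metres (assumed)

        # one set of 3D board points, reused for every view (z = 0 plane)
        objp = np.zeros((1, BOARD[0] * BOARD[1], 3), np.float32)
        objp[0, :, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

        obj_points, img_points, img_size = [], [], None
        for path in IMAGES:
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, BOARD)
            if not found:
                continue
            corners = cv2.cornerSubPix(
                gray, corners, (5, 5), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-6))
            obj_points.append(objp)
            img_points.append(corners)
            img_size = gray.shape[::-1]

        K = np.zeros((3, 3))
        D = np.zeros((4, 1))            # the fisheye model uses 4 distortion coefficients
        flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW
        rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
            obj_points, img_points, img_size, K, D, flags=flags)
        print("RMS reprojection error [px]:", rms)
        print("camera matrix K:\n", K)
        print("fisheye distortion D:", D.ravel())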

  19. Dual-camera design for coded aperture snapshot spectral imaging.

    Science.gov (United States)

    Wang, Lizhi; Xiong, Zhiwei; Gao, Dahua; Shi, Guangming; Wu, Feng

    2015-02-01

    Coded aperture snapshot spectral imaging (CASSI) provides an efficient mechanism for recovering 3D spectral data from a single 2D measurement. However, since the reconstruction problem is severely underdetermined, the quality of recovered spectral data is usually limited. In this paper we propose a novel dual-camera design to improve the performance of CASSI while maintaining its snapshot advantage. Specifically, a beam splitter is placed in front of the objective lens of CASSI, which allows the same scene to be simultaneously captured by a grayscale camera. This uncoded grayscale measurement, in conjunction with the coded CASSI measurement, greatly eases the reconstruction problem and yields high-quality 3D spectral data. Both simulation and experimental results demonstrate the effectiveness of the proposed method.
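
    A hedged sketch of the measurement model behind the dual-camera design: the coded CASSI arm masks each spectral band and shifts it by a band-dependent amount before summing over wavelength, while the added grayscale arm simply integrates the unmasked cube over wavelength. Cube sizes, mask statistics and the unit one-pixel-per-band shift are assumptions for illustration; the reconstruction algorithm itself is not shown.

        import numpy as np

        def cassi_forward(cube, mask):
            """Single-disperser CASSI measurement: mask, shear by band index, sum over bands."""
            H, W, L = cube.shape
            y = np.zeros((H, W + L - 1))
            for l in range(L):
                y[:, l:l + W] += mask * cube[:, :, l]   # band l lands shifted by l pixels
            return y

        def gray_forward(cube):
            """Uncoded grayscale measurement: panchromatic sum over the spectral bands."""
            return cube.sum(axis=2)

        if __name__ == "__main__":
            rng = np.random.default_rng(4)
            H, W, L = 64, 64, 16                      # toy spectral cube
            cube = rng.random((H, W, L))
            mask = (rng.random((H, W)) > 0.5).astype(float)   # random binary coded aperture
            y_cassi = cassi_forward(cube, mask)       # (64, 79): coded, sheared, summed
            y_gray = gray_forward(cube)               # (64, 64): extra constraint from the 2nd camera
            print(y_cassi.shape, y_gray.shape)
            # A reconstruction would estimate `cube` from (y_cassi, y_gray), e.g. by
            # regularized least squares; the extra grayscale term makes the inverse
            # problem far better conditioned than CASSI alone.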

  20. FPscope: a field-portable high-resolution microscope using a cellphone lens.

    Science.gov (United States)

    Dong, Siyuan; Guo, Kaikai; Nanda, Pariksheet; Shiradkar, Radhika; Zheng, Guoan

    2014-10-01

    The large consumer market has made cellphone lens modules available at low cost and in high quality. In a conventional cellphone camera, the lens module demagnifies the scene onto the image plane of the camera, where the image sensor is located. In this work, we report a 3D-printed high-resolution Fourier ptychographic microscope, termed FPscope, which uses a cellphone lens in a reverse manner. In our platform, we replace the image sensor with the sample specimen and use the cellphone lens to project the magnified image onto the detector. To supersede the diffraction limit of the lens module, we use an LED array to illuminate the sample from different incident angles and synthesize the acquired images using the Fourier ptychographic algorithm. As a demonstration, we use the reported platform to acquire high-resolution images of a resolution target and biological specimens, with a maximum synthetic numerical aperture (NA) of 0.5. We also show that the depth of focus of the reported platform is about 0.1 mm, orders of magnitude longer than that of a conventional microscope objective with a similar NA. The reported platform may enable healthcare access in low-resource settings. It can also be used to demonstrate the concept of computational optics for educational purposes.
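
    Since the record invokes "the Fourier ptychographic algorithm" without detail, here is a heavily simplified, self-contained sketch of the idea: each LED illumination angle selects a different sub-region of the object spectrum through the lens pupil, and the recovery alternates between enforcing the measured low-resolution amplitudes and stitching the corrected sub-spectra back into a larger synthetic aperture. Grid sizes, pupil radius and LED geometry are arbitrary toy values, and refinements used in practice (pupil recovery, intensity normalization) are omitted.

        import numpy as np

        # Minimal Fourier ptychography simulation and recovery (assumed parameters).
        rng = np.random.default_rng(5)
        n, m = 128, 64                      # high-res object size, low-res camera size
        obj = np.exp(1j * 2 * np.pi * rng.random((n, n))) * (0.5 + 0.5 * rng.random((n, n)))
        O_true = np.fft.fftshift(np.fft.fft2(obj))

        # circular pupil of the lens in the low-res (m x m) sub-spectrum
        fy, fx = np.mgrid[-m // 2:m // 2, -m // 2:m // 2]
        pupil = (fx ** 2 + fy ** 2 <= (0.4 * m / 2) ** 2)

        # LED positions -> spectrum-centre offsets (in high-res Fourier pixels)
        offsets = [(dy, dx) for dy in range(-24, 25, 12) for dx in range(-24, 25, 12)]
        c = n // 2

        def low_res_field(O, dy, dx):
            """Crop the sub-spectrum selected by one LED, apply the pupil, go to real space."""
            patch = O[c + dy - m // 2:c + dy + m // 2, c + dx - m // 2:c + dx + m // 2]
            return np.fft.ifft2(np.fft.ifftshift(patch * pupil))

        # simulate the intensity images that the camera would record
        measurements = [np.abs(low_res_field(O_true, dy, dx)) ** 2 for dy, dx in offsets]

        # recovery: start from a flat spectrum and repeatedly enforce the measured amplitudes
        O = np.ones((n, n), dtype=complex)
        for _ in range(20):
            for (dy, dx), I in zip(offsets, measurements):
                psi = low_res_field(O, dy, dx)
                psi = np.sqrt(I) * np.exp(1j * np.angle(psi))      # amplitude constraint
                patch_new = np.fft.fftshift(np.fft.fft2(psi))
                ys = slice(c + dy - m // 2, c + dy + m // 2)
                xs = slice(c + dx - m // 2, c + dx + m // 2)
                O[ys, xs][pupil] = patch_new[pupil]                # update inside the pupil support

        recovered = np.fft.ifft2(np.fft.ifftshift(O))
        print("recovery finished; object estimate shape:", recovered.shape)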

  1. Contact Lens Related Corneal Ulcer

    OpenAIRE

    Loh, KY; Agarwal, P

    2010-01-01

    A corneal ulcer caused by infection is one of the major causes of blindness worldwide. One of the recent health concerns is the increasing incidence of corneal ulcers associated with contact lens user especially if the users fail to follow specific instruction in using their contact lenses. Risk factors associated with increased risk of contact lens related corneal ulcers are: overnight wear, long duration of continuous wear, lower socio-economic classes, smoking, dry eye and poor hygiene. Th...

  2. Crystalline lens and refractive development.

    Science.gov (United States)

    Iribarren, Rafael

    2015-07-01

    Individual refractive errors usually change along lifespan. Most children are hyperopic in early life. This hyperopia is usually lost during growth years, leading to emmetropia in adults, but myopia also develops in children during school years or during early adult life. Those subjects who remain emmetropic are prone to have hyperopic shifts in middle life. And even later, at older ages, myopic shifts are developed with nuclear cataract. The eye grows from 15 mm in premature newborns to approximately 24 mm in early adult years, but, in most cases, refractions are maintained stable in a clustered distribution. This growth in axial length would represent a refractive change of more than 40 diopters, which is compensated by changes in corneal and lens powers. The process which maintains the balance between the ocular components of refraction during growth is still under study. As the lens power cannot be measured in vivo, but can only be calculated based on the other ocular components, there have not been many studies of lens power in humans. Yet, recent studies have confirmed that the lens loses power during growth in children, and that hyperopic and myopic shifts in adulthood may be also produced by changes in the lens. These studies in children and adults give a picture of the changing power of the lens along lifespan. Other recent studies about the growth of the lens and the complexity of its internal structure give clues about how these changes in lens power are produced along life. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Single lens laser beam shaper

    Science.gov (United States)

    Liu, Chuyu [Newport News, VA]; Zhang, Shukui [Yorktown, VA]

    2011-10-04

    A single-lens, bullet-shaped laser beam shaper capable of redistributing an arbitrary beam profile into any desired output profile, comprising a unitary lens comprising: a) a convex front input surface defining a focal point and a flat output portion at the focal point; and b) a cylindrical core portion having a flat input surface coincident with the flat output portion of the first input portion at the focal point and a convex rear output surface remote from the convex front input surface.

  4. Video Chat with Multiple Cameras

    OpenAIRE

    MacCormick, John

    2012-01-01

    The dominant paradigm for video chat employs a single camera at each end of the conversation, but some conversations can be greatly enhanced by using multiple cameras at one or both ends. This paper provides the first rigorous investigation of multi-camera video chat, concentrating especially on the ability of users to switch between views at either end of the conversation. A user study of 23 individuals analyzes the advantages and disadvantages of permitting a user to switch between views at...

  5. Transmission electron microscope CCD camera

    Science.gov (United States)

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  6. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1977-01-01

    A gamma camera system having control components operating in conjunction with a solid state detector is described. The detector is formed of a plurality of discrete components which are associated in geometrical or coordinate arrangement defining a detector matrix to derive coordinate signal outputs. These outputs are selectively filtered and summed to form coordinate channel signals and corresponding energy channel signals. A control feature of the invention regulates the noted summing and filtering performance to derive data acceptance signals which are addressed to further treating components. The latter components include coordinate and energy channel multiplexers as well as energy-responsive selective networks. A sequential control is provided for regulating the signal processing functions of the system to derive an overall imaging cycle

  7. Positron emission tomography camera

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    A positron emission tomography camera having a plurality of detector planes positioned side-by-side around a patient area to detect radiation. Each plane includes a plurality of photomultiplier tubes, and at least two rows of scintillation crystals on each photomultiplier tube extend across to adjacent photomultiplier tubes for detecting radiation from the patient area. Each row of crystals on each photomultiplier tube is offset from the other rows of crystals, and the area of each crystal on each tube in each row is different from the area of the crystals on the tube in other rows, for detecting which crystal is actuated and allowing the detector to detect more inter-plane slices. The crystals are offset by an amount equal to the length of the crystal divided by the number of rows. The rows of crystals on opposite sides of the patient may be rotated 90 degrees relative to each other

  8. The Circular Camera Movement

    DEFF Research Database (Denmark)

    Hansen, Lennard Højbjerg

    2014-01-01

    It has been an accepted precept in film theory that specific stylistic features do not express specific content. Nevertheless, it is possible to find many examples in the history of film in which stylistic features do express specific content: for instance, the circular camera movement is used...... repeatedly to convey the feeling of a man and a woman falling in love. This raises the question of why producers and directors choose certain stylistic features to narrate certain categories of content. Through the analysis of several short film and TV clips, this article explores whether...... or not there are perceptual aspects related to specific stylistic features that enable them to be used for delimited narrational purposes. The article further attempts to reopen this particular stylistic debate by exploring the embodied aspects of visual perception in relation to specific stylistic features...

  9. A course in lens design

    CERN Document Server

    Velzel, Chris

    2014-01-01

    A Course in Lens Design is an instruction in the design of image-forming optical systems. It teaches how a satisfactory design can be obtained in a straightforward way. Theory is limited to a minimum, and used to support the practical design work. The book introduces geometrical optics, optical instruments and aberrations. It gives a description of the process of lens design and of the strategies used in this process. Half of its content is devoted to the design of sixteen types of lenses, described in detail from beginning to end. This book is different from most other books on lens design because it stresses the importance of the initial phases of the design process: (paraxial) lay-out and (thin-lens) pre-design. The argument for this change of accent is that in these phases much information can be obtained about the properties of the lens to be designed. This information can be used in later phases of the design. This makes A Course in Lens Design a useful self-study book, and a suitable basis for an intro...

  10. Automatic locking radioisotope camera lock

    International Nuclear Information System (INIS)

    Rosauer, P.J.

    1978-01-01

    The lock of the present invention secures the isotope source in a stored shielded condition in the camera until a positive effort has been made to open the lock and take the source outside of the camera and prevents disconnection of the source pigtail unless the source is locked in a shielded condition in the camera. It also gives a visual indication of the locked or possible exposed condition of the isotope source and prevents the source pigtail from being completely pushed out of the camera, even when the lock is released. (author)

  11. Multispectral calibration to enhance the metrology performance of C-mount camera systems

    Directory of Open Access Journals (Sweden)

    S. Robson

    2014-06-01

    Full Text Available Low cost monochrome camera systems based on CMOS sensors and C-mount lenses have been successfully applied to a wide variety of metrology tasks. For high accuracy work such cameras are typically equipped with ring lights to image retro-reflective targets as high contrast image features. Whilst algorithms for target image measurement and lens modelling are highly advanced, including separate RGB channel lens distortion correction, target image circularity compensation and a wide variety of detection and centroiding approaches, less effort has been directed towards optimising physical target image quality by considering optical performance in narrow wavelength bands. This paper describes an initial investigation to assess the effect of wavelength on camera calibration parameters for two different camera bodies and the same ‘C-mount’ wide angle lens. Results demonstrate the expected strong influence on principal distance, radial and tangential distortion, and also highlight possible trends in principal point, orthogonality and affinity parameters which are close to the parameter estimation noise level from the strong convergent self-calibrating image networks.

  12. Design of a Day/Night Star Camera System

    Science.gov (United States)

    Alexander, Cheryl; Swift, Wesley; Ghosh, Kajal; Ramsey, Brian

    1999-01-01

    This paper describes the design of a camera system capable of acquiring stars during both the day and night cycles of a high altitude balloon flight (35-42 km). The camera system will be filtered to operate in the R band (590-810 nm). Simulations have been run using the MODTRAN atmospheric code to determine the worst-case sky brightness at 35 km. With a daytime sky brightness of 2×10⁻⁵ W/cm²/sr/μm in the R band, the sensitivity of the camera system will allow acquisition of at least 1-2 stars/sq degree at star magnitude limits of 8.25-9.00. The system will have an F2.8, 64.3 mm diameter lens and a 1340 × 1037 CCD array digitized to 12 bits. The CCD array is comprised of 6.8 × 6.8 micron pixels with a well depth of 45,000 electrons and a quantum efficiency of 0.525 at 700 nm. The camera's field of view will be 6.33 sq degree and provide attitude knowledge to 8 arcsec or better. A test flight of the system is scheduled for fall 1999.
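
    As a consistency check on the numbers quoted in this record, the short calculation below derives the plate scale and field of view from the stated aperture, f-number, pixel pitch and array format, assuming a simple pinhole relation (focal length ≈ f-number × aperture diameter). The agreement with the quoted 6.33 square-degree field suggests the interpretation is about right, but the focal length itself is an inference, not a value given in the abstract.

        import math

        # Values quoted in the record
        aperture_mm = 64.3        # lens diameter
        f_number = 2.8
        pixel_um = 6.8            # pixel pitch
        nx, ny = 1340, 1037       # CCD format

        # Inferred focal length (assumption: f-number = focal length / aperture)
        focal_mm = f_number * aperture_mm                  # ~180 mm

        # Plate scale in arcseconds per pixel: 206265 * pixel / focal length
        plate_scale = 206265.0 * (pixel_um * 1e-3) / focal_mm
        fov_x_deg = nx * plate_scale / 3600.0
        fov_y_deg = ny * plate_scale / 3600.0

        print(f"focal length  ~ {focal_mm:.0f} mm")
        print(f"plate scale   ~ {plate_scale:.1f} arcsec/pixel")
        print(f"field of view ~ {fov_x_deg:.2f} x {fov_y_deg:.2f} deg "
              f"= {fov_x_deg*fov_y_deg:.2f} sq deg (record quotes 6.33 sq deg)")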

  13. X-ray imaging with compound refractive lens and microfocus X-ray tube

    OpenAIRE

    Pina, Ladislav; Dudchik, Yury; Jelinek, Vaclav; Sveda, Libor; Marsik, Jiri; Horvath, Martin; Petr, Ondrej

    2008-01-01

    Compound refractive lenses (CRLs), consisting of a large number of in-line concave microlenses made of a low-Z material, were studied. Lenses with focal lengths of 109 mm and 41 mm for 8-keV X-rays, a microfocus X-ray tube and an X-ray CCD camera were used in the experiments. The obtained images show the intensity distribution of the magnified focal spot of the microfocus X-ray source. Within the experiments, one lens was also used as the objective lens of an X-ray microscope, where the copper-anode X-ray microfocus tube served as a...

  14. Extending the depth of field in a fixed focus lens using axial colour

    Science.gov (United States)

    Fitzgerald, Niamh; Dainty, Christopher; Goncharov, Alexander V.

    2017-11-01

    We propose a method of extending the depth of field (EDOF) of conventional lenses for a low-cost iris recognition front-facing smartphone camera. Longitudinal chromatic aberration (LCA) can be induced in the lens by means of dual wavelength illumination. The EDOF region is then constructed from the sum of the adjacent depths of field from each wavelength illumination. The lens parameters can be found analytically with paraxial raytracing. The extended depth of field is dependent on the glass chosen and the position of the near object point.
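    A minimal paraxial sketch of the construction (thin lens, fixed sensor position; every numerical value, including the wavelengths and the assumed chromatic focal shift, is an illustration rather than the authors' design): each illumination wavelength sees a slightly different focal length, each wavelength therefore has its own depth of field around its own in-focus distance, and the union of the two adjacent in-focus ranges forms the EDOF.

```python
import numpy as np

# Illustrative values only: a small fixed-focus lens with axial colour
# between two assumed NIR illumination wavelengths.
f_nm = {850: 4.000e-3, 780: 3.985e-3}   # focal length [m] per wavelength (assumed LCA)
fnum = 2.4                               # f-number (assumed)
A = f_nm[850] / fnum                     # aperture diameter [m]
c_max = 6e-6                             # acceptable blur-circle diameter [m]
v = 4.05e-3                              # fixed lens-to-sensor distance [m]

def blur(s, f):
    """Geometric blur-circle diameter on the sensor for an object at distance s
    when the sensor sits at fixed distance v behind a thin lens of focal length f."""
    v_s = 1.0 / (1.0 / f - 1.0 / s)      # ideal image distance for that object
    return A * abs(v - v_s) / v_s

s = np.linspace(0.05, 1.0, 20000)        # object distances 5 cm ... 1 m
for lam, f in f_nm.items():
    sharp = s[blur(s, f) <= c_max]       # object distances rendered acceptably sharp
    print(f"{lam} nm: sharp from {sharp.min()*100:.1f} cm to {sharp.max()*100:.1f} cm")
# The two printed ranges abut/overlap, so their union is the extended depth of field.
```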

  15. The "All Sky Camera Network"

    Science.gov (United States)

    Caldwell, Andy

    2005-01-01

    In 2001, the "All Sky Camera Network" came to life as an outreach program to connect the Denver Museum of Nature and Science (DMNS) exhibit "Space Odyssey" with Colorado schools. The network is comprised of cameras placed strategically at schools throughout Colorado to capture fireballs--rare events that produce meteorites.…

  16. The Eye of the Camera

    NARCIS (Netherlands)

    van Rompay, Thomas Johannes Lucas; Vonk, Dorette J.; Fransen, M.L.

    2009-01-01

    This study addresses the effects of security cameras on prosocial behavior. Results from previous studies indicate that the presence of others can trigger helping behavior, arising from the need for approval of others. Extending these findings, the authors propose that security cameras can likewise

  17. An electrically tunable plenoptic camera using a liquid crystal microlens array

    International Nuclear Information System (INIS)

    Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Ji, An; Xie, Changsheng

    2015-01-01

    Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited so as to restrict their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and then, its focusing performance is experimentally presented. The fabricated LCMLA is directly integrated with an image sensor to construct a prototyped LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently through electrically tuning the LCMLA used, which is equivalent to the extension of the DOF

  18. Development and Performance of Bechtel Nevada's Nine-Frame Camera System

    International Nuclear Information System (INIS)

    S. A. Baker; M. J. Griffith; J. L. Tybo

    2002-01-01

    Bechtel Nevada, Los Alamos Operations, has developed a high-speed, nine-frame camera system that records a sequence from a changing or dynamic scene. The system incorporates an electrostatic image tube with custom gating and deflection electrodes. The framing tube is shuttered with high-speed gating electronics, yielding frame rates of up to 5MHz. Dynamic scenes are lens-coupled to the camera, which contains a single photocathode gated on and off to control each exposure time. Deflection plates and drive electronics move the frames to different locations on the framing tube output. A single charge-coupled device (CCD) camera then records the phosphor image of all nine frames. This paper discusses setup techniques to optimize system performance. It examines two alternate philosophies for system configuration and respective performance results. We also present performance metrics for system evaluation, experimental results, and applications to four-frame cameras

  19. UCXp camera imaging principle and key technologies of data post-processing

    Science.gov (United States)

    Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Liu, Jianmin; Wu, Liang; Yu, Xiaoping; Zhao, Haitao

    2014-03-01

    The large format digital aerial camera product UCXp was introduced into the Chinese market in 2008; its image consists of 17310 columns and 11310 rows with a pixel size of 6 µm. The UCXp camera has many advantages compared with cameras of the same generation, with multiple lenses exposed almost at the same time and no oblique lens. The camera has a complex imaging process, whose principle will be detailed in this paper. On the other hand, the UCXp image post-processing method, including data pre-processing and orthophoto production, will be emphasized in this article. Based on the data of new Beichuan County, this paper will describe the data processing and effects.

  20. UCXp camera imaging principle and key technologies of data post-processing

    International Nuclear Information System (INIS)

    Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Liu, Jianmin; Wu, Liang; Yu, Xiaoping; Zhao, Haitao

    2014-01-01

    The large format digital aerial camera product UCXp was introduced into the Chinese market in 2008; its image consists of 17310 columns and 11310 rows with a pixel size of 6 µm. The UCXp camera has many advantages compared with cameras of the same generation, with multiple lenses exposed almost at the same time and no oblique lens. The camera has a complex imaging process, whose principle will be detailed in this paper. On the other hand, the UCXp image post-processing method, including data pre-processing and orthophoto production, will be emphasized in this article. Based on the data of new Beichuan County, this paper will describe the data processing and effects

  1. Gamma camera system

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1982-01-01

    The invention provides a composite solid state detector for use in deriving a display, by spatial coordinate information, of the distribution of radiation emanating from a source within a region of interest, comprising several solid state detector components, each having a given surface arranged for exposure to impinging radiation and exhibiting discrete interactions therewith at given spatially definable locations. The surface of each component and the surface disposed opposite and substantially parallel thereto are associated with impedance means configured to provide for each opposed surface outputs for signals relating the given location of the interactions to one spatial coordinate parameter of one select directional sense. The detector components are arranged to provide groupings of adjacently disposed surfaces mutually linearly oriented to exhibit a common directional sense of the spatial coordinate parameter. Means interconnect at least two of the outputs associated with each of the surfaces within a given grouping for collecting the signals deriving therefrom. The invention also provides a camera system for imaging the distribution of a source of gamma radiation situated within a region of interest

  2. High-speed two-frame gated camera for parameters measurement of Dragon-Ⅰ LIA

    International Nuclear Information System (INIS)

    Jiang Xiaoguo; Wang Yuan; Zhang Kaizhi; Shi Jinshui; Deng Jianjun; Li Jin

    2012-01-01

    A time-resolved measurement system that can work at very high speed is necessary for electron beam parameter diagnosis on the Dragon-Ⅰ linear induction accelerator (LIA). A two-frame gated camera system has been developed and put into operation. The camera system adopts the optical principle of splitting the imaging light beam into two parts in the imaging space of a lens with a long focal length. It includes a lens-coupled gated image intensifier, a CCD camera, and a high-speed shutter trigger device based on a large-scale field programmable gate array. The minimum exposure time for each image is about 3 ns, and the interval time between two images can be adjusted with a step of about 0.5 ns. The exposure time and the interval time can be independently adjusted and can reach about 1 s. The camera system features good linearity, good response uniformity, equivalent background illumination (EBI) as low as about 5 electrons per pixel per second, a large adjustment range of sensitivity, and excellent flexibility and adaptability in applications. The camera system can capture two frame images at one time with an image size of 1024 x 1024. It meets the requirements of measurement for the Dragon-Ⅰ LIA. (authors)

  3. Tinting of intraocular lens implants

    Energy Technology Data Exchange (ETDEWEB)

    Zigman, S.

    1982-06-01

    Intraocular lens (IOL) implants of polymethyl methacrylate (PMMA) lack an important yellow pigment useful as a filter in the visual process and in the protection of the retina from short-wavelength radiant energy. The ability to produce a yellow pigment in the PMMA used in IOL implants by exposure to near-ultraviolet (UV) light was tested. It was found that the highly cross-linked material in Copeland lens blanks was tinted slightly because of this exposure. The absorptive properties of lens blanks treated with near-UV light in this way approached that of the absorptive properties of human lenses. This finding shows that it is possible to alter IOL implants simply so as to induce a pale-yellow pigment in them to improve the visual process and to protect the retinas of IOL users.

  4. Tinting of intraocular lens implants

    International Nuclear Information System (INIS)

    Zigman, S.

    1982-01-01

    Intraocular lens (IOL) implants of polymethyl methacrylate (PMMA) lack an important yellow pigment useful as a filter in the visual process and in the protection of the retina from short-wavelength radiant energy. The ability to produce a yellow pigment in the PMMA used in IOL implants by exposure to near-ultraviolet (UV) light was tested. It was found that the highly cross-linked material in Copeland lens blanks was tinted slightly because of this exposure. The absorptive properties of lens blanks treated with near-UV light in this way approached that of the absorptive properties of human lenses. This finding shows that it is possible to alter IOL implants simply so as to induce a pale-yellow pigment in them to improve the visual process and to protect the retinas of IOL users

  5. Automated Fresnel lens tester system

    Energy Technology Data Exchange (ETDEWEB)

    Phipps, G.S.

    1981-07-01

    An automated data collection system controlled by a desktop computer has been developed for testing Fresnel concentrators (lenses) intended for solar energy applications. The system maps the two-dimensional irradiance pattern (image) formed in a plane parallel to the lens while the lens and detector assembly track the sun. A point detector silicon diode (0.5-mm-dia active area) measures the irradiance at each point of an operator-defined rectilinear grid of data positions. Comparison with a second detector measuring solar insolation levels results in solar concentration ratios over the image plane. Summation of image plane energies allows calculation of lens efficiencies for various solar cell sizes. Various graphical plots of concentration ratio data help to visualize energy distribution patterns.
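    The data reduction described above reduces to a point-wise ratio against the insolation reference and a windowed sum over the image-plane grid; a minimal sketch with synthetic data (grid spacing, lens area and irradiance profile are all made up for illustration):

```python
import numpy as np

# Synthetic irradiance map [W/m^2] on a rectilinear grid of data positions,
# standing in for the point-detector scan; all values are invented.
dx = dy = 0.5e-3                                   # grid spacing [m] (hypothetical)
x = np.arange(-25e-3, 25e-3, dx)
y = np.arange(-25e-3, 25e-3, dy)
X, Y = np.meshgrid(x, y)
insolation = 1000.0                                # reference detector reading [W/m^2]
image = 8e4 * np.exp(-(X**2 + Y**2) / (2 * (4e-3)**2))   # fake focal-spot profile

concentration = image / insolation                 # local solar concentration ratio

def lens_efficiency(cell_half_width, lens_area):
    """Fraction of the power incident on the lens that lands inside a square
    'solar cell' of the given half width centred on the image."""
    mask = (np.abs(X) <= cell_half_width) & (np.abs(Y) <= cell_half_width)
    collected = image[mask].sum() * dx * dy        # W collected by the cell
    incident = insolation * lens_area              # W incident on the lens
    return collected / incident

print(concentration.max())                               # peak concentration ratio
print(lens_efficiency(cell_half_width=5e-3, lens_area=0.01))   # 10 cm x 10 cm lens (assumed)
```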

  6. Development of underwater camera using high-definition camera

    International Nuclear Information System (INIS)

    Tsuji, Kenji; Watanabe, Masato; Takashima, Masanobu; Kawamura, Shingo; Tanaka, Hiroyuki

    2012-01-01

    In order to reduce the time for core verification or visual inspection of BWR fuels, an underwater camera using a high-definition camera has been developed. As a result of this development, the underwater camera has two lights, dimensions of 370 x 400 x 328 mm, and a weight of 20.5 kg. Using the camera, six or so spent-fuel IDs can be identified at a time from a distance of 1 or 1.5 m, and a 0.3 mmφ pin-hole can be recognized at a distance of 1.5 m with 20 times zoom-up. Noise caused by radiation of less than 15 Gy/h does not affect the images. (author)

  7. Multi-spectral camera development

    CSIR Research Space (South Africa)

    Holloway, M

    2012-10-01

    [Abstract extracted from presentation slides; only fragments are recoverable.] Test and evaluation (Bertus Theron): data sheets for COTS optical systems provide limited performance data, so Optronic Sensor Systems (OSS) used its Optronics Test and Evaluation Laboratory (OTEL) facility to benchmark the components and verify conformance to the manufacturer's data sheet. One measured parameter of the lens, namely the cut-off frequency, was compared to the manufacturer's data sheet (hardware evaluation of the COTS lens). The remaining slides cover the hardware design of the sensor unit and key design considerations.

  8. Fabrication of multi-focal microlens array on curved surface for wide-angle camera module

    Science.gov (United States)

    Pan, Jun-Gu; Su, Guo-Dung J.

    2017-08-01

    In this paper, we present a wide-angle and compact camera module that consists of a microlens array with different focal lengths on a curved surface. The design integrates the principles of an insect's compound eye and the human eye. It contains a curved hexagonal microlens array and a spherical lens. Normal mobile phone cameras usually need no fewer than four lenses, whereas our proposed system uses only one lens. Furthermore, the thickness of our proposed system is only 2.08 mm and the diagonal full field of view is about 100 degrees. To make the critical microlens array, we used inkjet printing to control the surface shape of each microlens to achieve different focal lengths, and a replication method to form the curved hexagonal microlens array.

  9. Lens system for SIMS analysis

    International Nuclear Information System (INIS)

    Martinez, G.; Sancho, M.; Garcia-Galan, J.C.

    1987-01-01

    A powerful version of the charge-density method is applied to the study of a combined objective and emission lens, suitable for highly localized analysis of a flat sample surface. This lens can extract secondary ions of equal or opposite polarity to that of the primary particles. A computer simulation of the ion trajectories for both modes is made. The behaviour for different values of the geometric parameters and polarizations is analyzed and useful data for design such as primary beam demagnification and secondary image position are given. (author) 4 refs

  10. Control system for gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.

    1977-01-01

    An improved gamma camera arrangement is described which utilizes a solid state detector formed of high-purity germanium. The central arrangement of the camera operates to carry out a trapezoidal filtering operation over antisymmetrically summed spatial signals through gated integration procedures utilizing idealized integrating intervals. By simultaneously carrying out peak energy evaluation of the input signals, a desirable control over pulse pile-up phenomena is achieved. Additionally, through the use of the time derivative of incoming pulse or signal energy information to initially enable the control system, a low-level information evaluation is provided, serving to enhance the signal processing efficiency of the camera

  11. Surgical video recording with a modified GoPro Hero 4 camera

    OpenAIRE

    Lin LK

    2016-01-01

    Lily Koo Lin Department of Ophthalmology and Vision Science, University of California, Davis Eye Center, Sacramento, CA, USA Background: Surgical videography can provide analytical self-examination for the surgeon, teaching opportunities for trainees, and allow for surgical case presentations. This study examined if a modified GoPro Hero 4 camera with a 25 mm lens could prove to be a cost-effective method of surgical videography with enough detail for oculoplastic and strabismus surgery. Me...

  12. Refractive lens exchange with a multifocal diffractive aspheric intraocular lens

    Directory of Open Access Journals (Sweden)

    Teresa Ferrer-Blasco

    2012-06-01

    PURPOSE: To evaluate the safety, efficacy and predictability of refractive lens exchange with multifocal diffractive aspheric intraocular lens implantation. METHODS: Sixty eyes of 30 patients underwent bilateral implantation of the AcrySof® ReSTOR® SN6AD3 intraocular lens with +4.00 D near addition. Patients were divided into myopic and hyperopic groups. Monocular best corrected visual acuity at distance and near and monocular uncorrected visual acuity at distance and near were measured before and 6 months postoperatively. RESULTS: After surgery, uncorrected visual acuity was 0.08 ± 0.15 and 0.11 ± 0.14 logMAR for the myopic and hyperopic groups, respectively (50% and 46.67% of patients had an uncorrected visual acuity of 20/20 or better in the myopic and hyperopic groups, respectively). The safety and efficacy indexes were 1.05 and 0.88 for the myopic group and 1.01 and 0.86 for the hyperopic group at distance vision. Within the myopic group, 20 eyes remained unchanged after the surgery, and 3 gained >2 lines of best corrected visual acuity. In the hyperopic group, 2 eyes lost 2 lines of best corrected visual acuity, 21 did not change, and 3 eyes gained 2 lines. At near vision, the safety and efficacy indexes were 1.23 and 1.17 for the myopic group and 1.16 and 1.13 for the hyperopic group. Best corrected near visual acuity improved after surgery in both groups (from 0.10 logMAR to 0.01 logMAR in the myopic group, and from 0.10 logMAR to 0.04 logMAR in the hyperopic group). CONCLUSIONS: The ReSTOR® SN6AD3 intraocular lens in refractive lens exchange demonstrated good safety, efficacy, and predictability in correcting high ametropia and presbyopia.

  13. First results of micro-neutron tomography by use of a focussing neutron lens

    CERN Document Server

    Masschaele, B; Cauwels, P; Dierick, M; Jolie, J; Mondelaers, W

    2001-01-01

    Since the appearance of high flux neutron beams, scientists have experimented with neutron radiography. This high beam flux, combined with modern neutron-to-visible-light converters, makes it possible to perform fast neutron micro-tomography. The first results of cold neutron tomography with a neutron lens are presented in this article. Samples are rotated in the beam and the projections are recorded with a neutron camera. The 3D reconstruction is performed with cone-beam reconstruction software.

  14. Analyzer for gamma cameras diagnostic

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Osorio Deliz, J. F.; Diaz Garcia, A.

    2013-01-01

    This research work was carried out to develop an analyzer for gamma camera diagnostics. It is composed of an electronic system with hardware and software capabilities, and it operates on the four head position signals acquired from a gamma camera detector. The result is the spectrum of the energy delivered by nuclear radiation coming from the camera detector head. The system includes analog processing of the position signals from the camera, digitization, subsequent processing of the energy signal in a multichannel analyzer, transfer of the data to a computer via a standard USB port, and processing of the data on a personal computer to obtain the final histogram. The circuits consist of an analog processing board and a universal kit with a microcontroller and a programmable gate array. (Author)

  15. Stretchable Binary Fresnel Lens for Focus Tuning

    NARCIS (Netherlands)

    Li, X.; Wei, L.; Poelma, R.H.; Vollebregt, S.; Wei, J.; Urbach, Paul; Sarro, P.M.; Zhang, G.Q.

    2016-01-01

    This paper presents a tuneable binary amplitude Fresnel lens produced by wafer-level microfabrication. The Fresnel lens is fabricated by encapsulating lithographically defined vertically aligned carbon nanotube (CNT) bundles inside a polydimethyl-siloxane (PDMS) layer. The composite lens material

  16. 21 CFR 886.1400 - Maddox lens.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Maddox lens. 886.1400 Section 886.1400 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1400 Maddox lens. (a) Identification. A Maddox lens is a device...

  17. Crystalline lens power and refractive error.

    Science.gov (United States)

    Iribarren, Rafael; Morgan, Ian G; Nangia, Vinay; Jonas, Jost B

    2012-02-01

    To study the relationships between the refractive power of the crystalline lens, overall refractive error of the eye, and degree of nuclear cataract. All phakic participants of the population-based Central India Eye and Medical Study with an age of 50+ years were included. Calculation of the refractive lens power was based on distance noncycloplegic refractive error, corneal refractive power, anterior chamber depth, lens thickness, and axial length according to Bennett's formula. The study included 1885 subjects. Mean refractive lens power was 25.5 ± 3.0 D (range, 13.9-36.6). After adjustment for age and sex, the standardized correlation coefficients (β) of the association with the ocular refractive error were highest for crystalline lens power (β = -0.41; P < 0.001) and nuclear lens opacity grade (β = -0.42; P < 0.001). Refractive error was further associated with lower crystalline lens power (β = -0.95), lower corneal refractive power (β = -0.76), higher lens thickness (β = 0.30), deeper anterior chamber (β = 0.28), and less marked nuclear lens opacity (β = -0.05). Lens thickness was significantly lower in eyes with greater nuclear opacity. Variations in refractive error in adults aged 50+ years were mostly influenced by variations in axial length and in crystalline lens refractive power, followed by variations in corneal refractive power, and, to a minor degree, by variations in lens thickness and anterior chamber depth.
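    The lens-power calculation referred to here is essentially a paraxial vergence propagation through the eye. The sketch below implements a simplified thin-lens version of that idea, not Bennett's formula with its published thick-lens constants, and the example biometry values are hypothetical.

```python
def crystalline_lens_power(sph_eq, K, acd, lt, al, n=1.336):
    """Simplified paraxial estimate of crystalline lens power [D].

    sph_eq : spherical equivalent refraction at the corneal plane [D]
    K      : corneal refractive power [D]
    acd    : anterior chamber depth [m]
    lt     : lens thickness [m]
    al     : axial length [m]
    n      : refractive index of aqueous/vitreous (assumed uniform)

    The crystalline lens is treated as a thin lens placed in the middle of
    its thickness; Bennett's formula instead uses published constants for
    the positions of the lens principal planes.
    """
    d_lens = acd + lt / 2.0            # cornea-to-lens-plane distance [m]
    d_ret = al - d_lens                # lens-plane-to-retina distance [m]
    v1 = sph_eq + K                    # vergence just after the cornea [D]
    v2 = n / (n / v1 - d_lens)         # vergence arriving at the lens plane [D]
    v3 = n / d_ret                     # vergence needed to focus on the retina [D]
    return v3 - v2

# Hypothetical near-emmetropic biometry, for illustration only:
print(crystalline_lens_power(sph_eq=0.0, K=43.0, acd=3.0e-3, lt=4.0e-3, al=23.5e-3))
```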

  18. 21 CFR 886.3600 - Intraocular lens.

    Science.gov (United States)

    2010-04-01

    ... DEVICES OPHTHALMIC DEVICES Prosthetic Devices § 886.3600 Intraocular lens. (a) Identification. An intraocular lens is a device made of materials such as glass or plastic intended to be implanted to replace the natural lens of an eye. (b) Classification. Class III. (c) Date PMA or notice of completion of a...

  19. Astronomy and the camera obscura

    Science.gov (United States)

    Feist, M.

    2000-02-01

    The camera obscura (from Latin meaning darkened chamber) is a simple optical device with a long history. In the form considered here, it can be traced back to 1550. It had its heyday during the Victorian era when it was to be found at the seaside as a tourist attraction or sideshow. It was also used as an artist's drawing aid and, in 1620, the famous astronomer-mathematician, Johannes Kepler used a small tent camera obscura to trace the scenery.

  20. The future of consumer cameras

    Science.gov (United States)

    Battiato, Sebastiano; Moltisanti, Marco

    2015-03-01

    In the last two decades multimedia devices, and in particular imaging devices (camcorders, tablets, mobile phones, etc.), have spread dramatically. Moreover, the increase in their computational performance, combined with higher storage capability, allows them to process large amounts of data. In this paper an overview of the current trends of the consumer camera market and technology will be given, providing also some details about the recent past (from the digital still camera up to today) and forthcoming key issues.

  1. Variable high-resolution color CCD camera system with online capability for professional photo studio application

    Science.gov (United States)

    Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

    1998-04-01

    Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bit per channel, with an exposure time that can also be varied from 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approx. 2 seconds for an image of 2048 by 2048 pixels (12 MByte), 8 seconds for an image of 4096 by 4096 pixels (48 MByte) and 40 seconds for an image of 6144 by 6144 pixels (108 MByte). The eyelike can be used in various configurations. When used as a camera body, most commercial lenses can be connected to the camera via existing lens adaptors. On the other hand, the eyelike can be used as a back on most commercial 4 by 5 inch view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.

  2. The Ultrawideband Leaky Lens Antenna

    NARCIS (Netherlands)

    Bruni, S.; Neto, A.; Marliani, F.

    2007-01-01

    A novel directive and nondispersive antenna is presented: the ultrawideband (UWB) leaky lens. It is based on the broad band Cherenkov radiation occurring at a slot printed between different infinite homogeneous dielectrics. The first part of the paper presents the antenna concept and the UWB design.

  3. ECTOPIC LENS EXTRACTION IN CHILDREN

    Directory of Open Access Journals (Sweden)

    Vladimir Pfeifer

    2002-12-01

    Background. Ectopia lentis continues to be a therapeutic challenge for ophthalmologists. It can occur as an isolated condition, after ocular trauma, in association with other ocular disorders, as part of a systemic mesodermal disease or as a complication of general metabolic disorders. Minimal subluxation of the lens may cause no visual symptoms, but in more advanced cases serious optical disturbances arise, the most important being amblyopia. Surgical treatment options include iris manipulation, lens discission, aspiration, intracapsular or extracapsular extraction, and pars plana lensectomy. The choice of surgical technique remains controversial, in part because of the historically poor visual results and high rate of perioperative complications, including vitreous loss and retinal detachment. Methods. We describe a surgical technique based on the use of the Cionni endocapsular tension ring, dry irrigation-aspiration of lens material, centration of the capsular bag and foldable intraocular lens implantation into the bag. With the described surgical technique 8 patients were operated on: 4 boys and 4 girls, 11 eyes in total. Results. The final BCVA after the follow-up period improved in 9 eyes and remained the same as before the operation in one eye. Statistical comparison of preoperative and postoperative visual acuities showed significant improvement; on the other hand, there was no correlation between preoperative and postoperative visual acuity. Conclusions. This surgical procedure is an alternative approach for solving these challenging cases of ectopia lentis, with good postoperative visual rehabilitation.

  4. Science, conservation, and camera traps

    Science.gov (United States)

    Nichols, James D.; Karanth, K. Ullas; O'Connel, Allan F.; O'Connell, Allan F.; Nichols, James D.; Karanth, K. Ullas

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  5. Computing camera heading: A study

    Science.gov (United States)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even in internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard. This is because rotations and translations can have similar effects on the images, and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computation trouble spots beforehand, and to design reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
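    The central observation above, that the visual angle between the projection rays of two image points is unchanged by a pure camera rotation, is easy to verify numerically; the sketch below uses synthetic points and an assumed intrinsic matrix, none of it taken from the thesis itself.

```python
import numpy as np

def visual_angle(p, q, K):
    """Angle [deg] between the projection rays of two pixel locations p, q
    for a camera with intrinsic matrix K (3x3)."""
    Kinv = np.linalg.inv(K)
    rp = Kinv @ np.array([*p, 1.0])
    rq = Kinv @ np.array([*q, 1.0])
    cosang = rp @ rq / (np.linalg.norm(rp) * np.linalg.norm(rq))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def project(X, K):
    """Pinhole projection of a 3-D point in camera coordinates to pixels."""
    x = K @ X
    return x[:2] / x[2]

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed intrinsics

# Two scene points in camera coordinates, and the same points after a pure rotation:
P = np.array([[0.3, -0.1, 2.0], [-0.4, 0.2, 3.0]])
theta = np.radians(10)
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
P_rot = P @ R.T

a0 = visual_angle(project(P[0], K), project(P[1], K), K)
a1 = visual_angle(project(P_rot[0], K), project(P_rot[1], K), K)
print(a0, a1)   # identical up to rounding: rotation leaves the visual angle unchanged
```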

  6. On the accuracy potential of focused plenoptic camera range determination in long distance operation

    Science.gov (United States)

    Sardemann, Hannes; Maas, Hans-Gerd

    2016-04-01

    Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology and computer hardware have boosted the technique and led to several commercially available ready-to-use cameras. Beyond their popular option of a posteriori image focusing or total-focus image generation, their basic ability to generate 3D information from single-camera imagery represents a very beneficial option for certain applications. The paper will first present some fundamentals on the design and history of plenoptic cameras and will describe depth determination from plenoptic camera image data. It will then present an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close-range applications, we will focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors in the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single-point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for the application fields of real-time robotics like autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.
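    The rapid growth of depth error with range reported here is what simple triangulation geometry predicts: for an effective baseline b across the aperture, focal length f and disparity uncertainty σ_d, the depth uncertainty grows roughly as σ_z ≈ z²·σ_d/(f·b). A minimal illustration with assumed parameter values, not the specifications of the camera that was actually tested:

```python
# Depth uncertainty of a triangulating (plenoptic-like) camera as a function of
# range, sigma_z ~ z^2 * sigma_d / (f * b). All parameter values are assumptions
# chosen only to reproduce the reported order of magnitude.
f = 0.035            # effective focal length [m]
b = 0.02             # effective baseline across the aperture [m]
sigma_d = 0.25e-6    # disparity uncertainty [m] (a fraction of a pixel)

for z in (10, 30, 50, 100):                     # range [m]
    sigma_z = z**2 * sigma_d / (f * b)
    print(f"z = {z:4d} m  ->  sigma_z ~ {sigma_z:5.2f} m ({100 * sigma_z / z:.1f} %)")
```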

  7. Improved approach to characterizing and presenting streak camera performance

    International Nuclear Information System (INIS)

    Wiedwald, J.D.; Jones, B.A.

    1985-01-01

    The performance of a streak camera recording system is strongly linked to the technique used to amplify, detect and quantify the streaked image. At the Lawrence Livermore National Laboratory (LLNL) streak camera images have been recorded both on film and by fiber-optically coupling to charge-coupled devices (CCD's). During the development of a new process for recording these images (lens coupling the image onto a cooled CCD) the definitions of important performance characteristics such as resolution and dynamic range were re-examined. As a result of this development, these performance characteristics are now presented to the streak camera user in a more useful format than in the past. This paper describes how these techniques are used within the Laser Fusion Program at LLNL. The system resolution is presented as a modulation transfer function, including the seldom reported effects that flare and light scattering have at low spatial frequencies. Data are presented such that a user can adjust image intensifier gain and pixel averaging to optimize the useful dynamic range in any particular application

  8. Lens stem cells may reside outside the lens capsule: an hypothesis

    Directory of Open Access Journals (Sweden)

    Meyer Rita A

    2007-06-01

    Full Text Available Abstract In this paper, we consider the ocular lens in the context of contemporary developments in biological ideas. We attempt to reconcile lens biology with stem cell concepts and a dearth of lens tumors. Historically, the lens has been viewed as a closed system, in which cells at the periphery of the lens epithelium differentiate into fiber cells. Theoretical considerations led us to question whether the intracapsular lens is indeed self-contained. Since stem cells generate tumors and the lens does not naturally develop tumors, we reasoned that lens stem cells may not be present within the capsule. We hypothesize that lens stem cells reside outside the lens capsule, in the nearby ciliary body. Our ideas challenge the existing lens biology paradigm. We begin our discussion with lens background information, in order to describe our lens stem cell hypothesis in the context of published data. Then we present the ciliary body as a possible source for lens stem cells, and conclude by comparing the ocular lens with the corneal epithelium.

  9. The influence of crystalline lens accommodation on post-saccadic oscillations in pupil-based eye trackers.

    Science.gov (United States)

    Nyström, Marcus; Andersson, Richard; Magnusson, Måns; Pansell, Tony; Hooge, Ignace

    2015-02-01

    It is well known that the crystalline lens (henceforth lens) can oscillate (or 'wobble') relative to the eyeball at the end of saccades. Recent research has proposed that such wobbling of the lens is a source of post-saccadic oscillations (PSOs) seen in data recorded by eye trackers that estimate gaze direction from the location of the pupil. Since the size of the lens wobbles increases with accommodative effort, one would predict a similar increase of PSO-amplitude in data recorded with a pupil based eye tracker. In four experiments, we investigated the role of lens accommodation on PSOs in a video-based eye tracker. In Experiment 1, we replicated previous results showing that PSO-amplitudes increase at near viewing distances (large vergence angles), when the lens is highly accommodated. In Experiment 2a, we manipulated the accommodative state of the lens pharmacologically using eye drops at a fixed viewing distance and found, in contrast to Experiment 1, no significant difference in PSO-amplitude related to the accommodative state of the lens. Finally, in Experiment 2b, the effect of vergence angle was investigated by comparing PSO-amplitudes at near and far while maintaining a fixed lens accommodation. Despite the pharmacologically fixed degree of accommodation, PSO-amplitudes were systematically larger in the near condition. In summary, PSOs cannot exhaustively be explained by lens wobbles. Possible confounds related to pupil size and eye-camera angle are investigated in Experiments 3 and 4, and alternative mechanisms behind PSOs are probed in the discussion. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Modern lens antennas for communications engineering

    CERN Document Server

    Thornton, John

    2012-01-01

    The aim of this book is to present the modern design principles and analysis of lens antennas. It gives graduates and RF/Microwave professionals the design insights needed to make full use of lens antennas. Why do we want to write a book on lens antennas? Because this topic has not been thoroughly publicized and its importance is underestimated. As antennas play a key role in communication systems, recent developments in wireless communications would indeed benefit from the characteristics of lens antennas: low profile, low cost etc. The major advantages of lens antennas are na

  11. Contact lens surface by electron beam

    International Nuclear Information System (INIS)

    Shin, Jung Hyuck; Lee, Suk Ju; Hwang, Kwang Ha; Jeon Jin

    2011-01-01

    Contact lens materials need good biocompatibility, a high refractive index, high optical transparency, high water content, etc. Surface treatment methods using plasma and radiation can modify the physical and/or chemical properties of the contact lens surface. Radiation technologies such as electron beam (Eb) irradiation can be applied to polymerization reactions and can enhance the functionality of the polymer. The purpose of this study is to modify the contact lens surface by using Eb irradiation technology. Electron beams were applied to the surfaces of contact lenses synthesized by thermal polymerization and of commercial contact lenses in order to modify their physical and chemical properties. FT-IR, XPS and UV-vis spectrophotometry, together with water content and oxygen transmissibility measurements, were used to characterize the surface state and the physicochemical and optical properties of the contact lenses treated with Eb. The water content and oxygen transmissibility of the contact lenses treated with Eb increased due to an increase in hydrophilic groups such as O-C=O and OH on the contact lens surface, which could be produced by reactions between carbon and oxygen during the Eb irradiation. All of the lenses showed a high optical transmittance above 90%. In the case of the B/Es, TES and Ti contact lenses, the optical transmittance decreased by about 5% with increasing Eb dose in the UV-B wavelength region

  12. Plasma Lens for Muon and Neutrino Beams

    Science.gov (United States)

    Kahn, Stephen; Korenev, Sergey; Bishai, Mary; Diwan, Milind; Gallardo, Juan; Hershcovitch, Ady; Johnson, Brant

    2008-04-01

    The plasma lens is examined as an alternative to focusing horns and solenoids for use in a neutrino or muon beam facility. The plasma lens concept is based on a combined high-current lens/target configuration. The current is fed at electrodes located upstream and downstream from the target where pion capturing is needed. The current flows primarily in the plasma, which has a lower resistivity than the target. A second plasma lens section, with an additional current feed, follows the target to provide shaping and stability of the plasma. The geometry of the plasma is shaped to provide optimal pion capture. Simulations of this plasma lens system have shown 25% higher neutrino production than the horn system. A plasma lens has additional advantages: a larger axial current than horns, minimal neutrino contamination during antineutrino running, and negligible pion absorption or scattering. Results from particle simulations using a plasma lens will be presented.

  13. Sub-Camera Calibration of a Penta-Camera

    Science.gov (United States)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras consisting of a nadir and four inclined cameras are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively, dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated at the block centres, while the inclined images outside the block centre are satisfactorily but not very strongly connected. This leads to very high values of the Student test (T-test) for the finally used additional parameters or, in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions also for the inclined cameras, with a size exceeding 5 μm, even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding

  14. The fly's eye camera system

    Science.gov (United States)

    Mészáros, L.; Pál, A.; Csépány, G.; Jaskó, A.; Vida, K.; Oláh, K.; Mezö, G.

    2014-12-01

    We introduce the Fly's Eye Camera System, an all-sky monitoring device intended to perform time-domain astronomy. This camera system design will provide complementary data sets for other synoptic sky surveys such as LSST or Pan-STARRS. The effective field of view is obtained by 19 cameras arranged in a spherical mosaic form. These individual cameras of the device stand on a hexapod mount that is fully capable of achieving sidereal tracking for the subsequent exposures. This platform has many advantages. First of all, it requires only one type of moving component and does not include unique parts. Hence this design not only eliminates problems implied by unique elements, but the redundancy of the hexapod allows smooth operations even if one or two of the legs are stuck. In addition, it can calibrate itself by means of the observed stars, independently of both the geographical location (including the northern and southern hemispheres) and the polar alignment of the full mount. All mechanical elements and electronics are designed at our institute, Konkoly Observatory. Currently, our instrument is in its testing phase with an operating hexapod and a reduced number of cameras.

  15. Event detection intelligent camera development

    International Nuclear Information System (INIS)

    Szappanos, A.; Kocsis, G.; Molnar, A.; Sarkozi, J.; Zoletnik, S.

    2008-01-01

    A new camera system, the 'event detection intelligent camera' (EDICAM), is being developed for the video diagnostics of the W-7X stellarator, which consists of 10 distinct and standalone measurement channels, each holding a camera. Different operation modes will be implemented for continuous and for triggered readout as well. Hardware-level trigger signals will be generated from real-time image processing algorithms optimized for digital signal processor (DSP) and field programmable gate array (FPGA) architectures. At full resolution a camera sends 12-bit sampled 1280 x 1024 pixel frames at 444 fps, which means 1.43 Terabyte over half an hour. Analysing such a huge amount of data is time consuming and has a high computational complexity. We plan to overcome this problem by EDICAM's preprocessing concepts. The EDICAM camera system integrates all the advantages of CMOS sensor chip technology and fast network connections. EDICAM is built up from three different modules with two interfaces: a sensor module (SM) with reduced hardware and functional elements to reach a small and compact size as well as robust operation in a harmful environment; an image processing and control unit (IPCU) module that handles all the user-predefined events and runs image processing algorithms to generate trigger signals; and, finally, a 10 Gigabit Ethernet compatible image readout card that functions as the network interface for the PC. In this contribution all the concepts of EDICAM and the functions of the distinct modules are described
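    The quoted data volume follows directly from the frame geometry and frame rate; a quick arithmetic check (interpreting the stated terabyte figure as the binary unit):

```python
bits_per_pixel = 12
frame_pixels = 1280 * 1024
fps = 444
seconds = 30 * 60                                    # half an hour

total_bytes = frame_pixels * bits_per_pixel / 8 * fps * seconds
print(total_bytes / 2**40)                           # ~1.43 TiB, matching the text
```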

  16. A lateral chromatic aberration correction system for ultrahigh-definition color video camera

    Science.gov (United States)

    Yamashita, Takayuki; Shimamoto, Hiroshi; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji

    2006-02-01

    We have developed a color camera for an 8k x 4k-pixel ultrahigh-definition video system, which is called Super Hi-Vision, with a 5x zoom lens and a signal-processing system incorporating a function for real-time lateral chromatic aberration correction. The chromatic aberration of the lens degrades color image resolution, so in order to develop a compact zoom lens consistent with ultrahigh-resolution characteristics, we incorporated a real-time correction function in the signal-processing system. The signal-processing system has eight memory tables to store the correction data at eight focal-length points for the blue and red channels. When the focal-length data are input from the lens control unit, the relevant correction data are interpolated from two of the eight correction data tables. The system then performs a geometrical conversion on both channels using this correction data. This paper describes how the correction function successfully reduces the lateral chromatic aberration to an amount small enough to ensure that the desired image resolution is achieved over the entire range of the lens in real time.
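    Per colour channel, the correction described above amounts to interpolating between the two stored tables that bracket the current focal length and then geometrically resampling that channel. The sketch below assumes a simple radial-scaling model with made-up table values, since the actual table format is not given in the abstract.

```python
import numpy as np

# Hypothetical correction tables: radial scale factor of the red and blue
# channels relative to green, stored at eight focal lengths [mm].
focal_points = np.array([9.0, 12.0, 16.0, 22.0, 30.0, 38.0, 45.0, 50.0])
scale_red  = np.array([1.0008, 1.0006, 1.0004, 1.0002, 1.0000, 0.9999, 0.9998, 0.9997])
scale_blue = np.array([0.9991, 0.9993, 0.9995, 0.9997, 0.9999, 1.0000, 1.0001, 1.0002])

def channel_scale(focal_length, table):
    """Linearly interpolate the correction between the two bracketing entries."""
    return np.interp(focal_length, focal_points, table)

def correct_channel(img, scale, centre):
    """Resample one colour channel with a pure radial scaling about 'centre',
    using nearest-neighbour lookup (bilinear would be used in practice)."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    src_x = centre[0] + (xx - centre[0]) / scale
    src_y = centre[1] + (yy - centre[1]) / scale
    src_x = np.clip(np.round(src_x), 0, w - 1).astype(int)
    src_y = np.clip(np.round(src_y), 0, h - 1).astype(int)
    return img[src_y, src_x]

rng = np.random.default_rng(0)
red = rng.random((480, 640))                          # stand-in for the red channel
corrected = correct_channel(red, channel_scale(25.0, scale_red), centre=(320, 240))
```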

  17. Fabrication of MTF measurement system for a mobile phone lens using multi-square objects

    Science.gov (United States)

    Hong, Sung Mok; Jo, Jae Heung; Lee, Hoi Youn; Yang, Ho Soon; Lee, Yun Woo; Lee, In Won

    2007-12-01

    The mobile phone market is growing rapidly, and performance estimation of the camera module is required. Accordingly, we fabricated an MTF measurement system for mobile phone lenses, which have an extremely small diameter and a large f-number. A 20× objective lens for high-resolution MTF measurement and a CCD detector with a pixel size of 7.4 µm are adopted in the system. The CCD is also translated by a linear motor to reduce measurement errors, and the lens under test is placed at the most suitable imaging point by a precise auto-focusing motor. The measuring equipment we developed for off-axis MTF measurement of a mobile phone lens uses multi-square objects: one unit is arranged on-axis and a total of 12 units off-axis (4 units each at the 0.3, 0.5 and 0.7 fields). When the measurement is started, the linear motors of the signal detection part move from the on-axis to the off-axis positions, and the signals detected from each square object are used for the MTF measurement. The system driver and the MTF measurement use an application program that we developed. This software can measure the on-axis and off-axis positions sequentially, and it optimizes the motor travel to shorten the measurement time.
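    The underlying measurement chain, recovering the MTF from the imaged edges of the square targets, can be sketched as the usual edge-spread to line-spread to Fourier-transform sequence. The code below is a generic one-dimensional version that assumes a vertical edge aligned with the pixel grid and uses synthetic data; it is not the authors' multi-square algorithm.

```python
import numpy as np

def mtf_from_edge(edge_profile, pixel_pitch_um):
    """Estimate the MTF from a 1-D edge-spread function (ESF) sampled at the
    pixel pitch: differentiate to get the line-spread function (LSF), window,
    Fourier transform and normalise. Returns (frequencies in cycles/mm, MTF)."""
    esf = np.asarray(edge_profile, dtype=float)
    lsf = np.gradient(esf)
    lsf *= np.hanning(lsf.size)                    # suppress noise at the ends
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_um / 1000.0)
    return freqs, mtf

# Synthetic, slightly blurred edge as a stand-in for a profile extracted
# across one side of a square target (7.4 um pixels as in the paper).
x = np.arange(-32, 32)
esf = 0.5 * (1 + np.tanh(x / 2.0))
freqs, mtf = mtf_from_edge(esf, pixel_pitch_um=7.4)
print(freqs[:5], mtf[:5])
```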

  18. Video camera use at nuclear power plants

    International Nuclear Information System (INIS)

    Estabrook, M.L.; Langan, M.O.; Owen, D.E.

    1990-08-01

    A survey of US nuclear power plants was conducted to evaluate video camera use in plant operations, and determine equipment used and the benefits realized. Basic closed circuit television camera (CCTV) systems are described and video camera operation principles are reviewed. Plant approaches for implementing video camera use are discussed, as are equipment selection issues such as setting task objectives, radiation effects on cameras, and the use of disposal cameras. Specific plant applications are presented and the video equipment used is described. The benefits of video camera use --- mainly reduced radiation exposure and increased productivity --- are discussed and quantified. 15 refs., 6 figs

  19. Technical assessment of Navitar Zoom 6000 optic and Sony HDC-X310 camera for MEMS presentations and training.

    Energy Technology Data Exchange (ETDEWEB)

    Diegert, Carl F.

    2006-02-01

    This report evaluates a newly-available, high-definition, video camera coupled with a zoom optical system for microscopic imaging of micro-electro-mechanical systems. We did this work to support configuration of three document-camera-like stations as part of an installation in a new Microsystems building at Sandia National Laboratories. The video display walls to be installed as part of these three presentation and training stations are of extraordinary resolution and quality. The new availability of a reasonably-priced, cinema-quality, high-definition video camera offers the prospect of filling these displays with full-motion imaging of Sandia's microscopic products at a quality substantially beyond the quality of typical video microscopes. Simple and robust operation of the microscope stations will allow the extraordinary-quality imaging to contribute to Sandia's day-to-day research and training operations. This report illustrates the disappointing image quality from a camera/lens system comprised of a Sony HDC-X310 high-definition video camera coupled to a Navitar Zoom 6000 lens. We determined that this Sony camera is capable of substantially more image quality than the Navitar optic can deliver. We identified an optical doubler lens from Navitar as the component of their optical system that accounts for a substantial part of the image quality problem. While work continues to incrementally improve performance of the Navitar system, we are also evaluating optical systems from other vendors to couple to this Sony camera.

  20. The GISMO-2 Bolometer Camera

    Science.gov (United States)

    Staguhn, Johannes G.; Benford, Dominic J.; Fixsen, Dale J.; Hilton, Gene; Irwin, Kent D.; Jhabvala, Christine A.; Kovacs, Attila; Leclercq, Samuel; Maher, Stephen F.; Miller, Timothy M.; hide

    2012-01-01

    We present the concept for the GISMO-2 bolometer camera, which we build for background-limited operation at the IRAM 30 m telescope on Pico Veleta, Spain. GISMO-2 will operate simultaneously in the 1 mm and 2 mm atmospheric windows. The 1 mm channel uses a 32 x 40 TES-based Backshort Under Grid (BUG) bolometer array; the 2 mm channel operates with a 16 x 16 BUG array. The camera utilizes almost the entire field of view provided by the telescope. The optical design of GISMO-2 was strongly influenced by our experience with the GISMO 2 mm bolometer camera, which is successfully operating at the 30 m telescope. GISMO is accessible to the astronomical community through the regular IRAM call for proposals.

  1. Dark Energy Camera for Blanco

    Energy Technology Data Exchange (ETDEWEB)

    Binder, Gary A.; /Caltech /SLAC

    2010-08-25

    In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.

  2. Perceptual Color Characterization of Cameras

    Directory of Open Access Journals (Sweden)

    Javier Vazquez-Corral

    2014-12-01

    Color camera characterization, mapping outputs from the camera sensors to an independent color space such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatially based case, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error and 13% for the CID error measure.
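    The least-squares baseline that the perceptual method is compared against can be written in a few lines; the perceptual variants then replace this closed-form fit with a search over (for example) spherically sampled matrices that minimizes ΔE, S-CIELAB or CID instead. The sketch below shows only the least-squares baseline, on synthetic data.

```python
import numpy as np

# Synthetic training data: rows are colour samples, columns are channels.
rng = np.random.default_rng(1)
camera_rgb = rng.random((24, 3))                 # e.g. a colour-checker capture
M_true = np.array([[0.90, 0.10, 0.00],
                   [0.05, 0.80, 0.15],
                   [0.00, 0.20, 0.80]])
xyz = camera_rgb @ M_true.T + 0.01 * rng.standard_normal((24, 3))   # noisy "XYZ"

# Least-squares colour characterization: find M such that xyz ~= camera_rgb @ M.T
M_ls, *_ = np.linalg.lstsq(camera_rgb, xyz, rcond=None)
M_ls = M_ls.T
print(np.round(M_ls, 3))   # close to M_true; a perceptual fit would start from here
```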

  3. Multiple-image oscilloscope camera

    International Nuclear Information System (INIS)

    Yasillo, N.J.

    1978-01-01

    An optical device for automatically placing a plurality of images at selected locations on one film comprises a stepping motor coupled to a rotating mirror and lens. A mechanical connection from the mirror controls an electronic logic system to allow rotation of the mirror to place a focused image at the desired preselected location. The device is of especial utility when used to place four images on a single film to record oscilloscope views obtained in gamma radiography

  4. Lens Coupled Quantum Cascade Laser

    Science.gov (United States)

    Hu, Qing (Inventor); Lee, Alan Wei Min (Inventor)

    2013-01-01

    Terahertz quantum cascade (QC) devices are disclosed that can operate, e.g., in a range of about 1 THz to about 10 THz. In some embodiments, QC lasers are disclosed in which an optical element (e.g., a lens) is coupled to an output facet of the laser's active region to enhance coupling of the lasing radiation from the active region to an external environment. In other embodiments, terahertz amplifier and tunable terahertz QC lasers are disclosed.

  5. Looking beyond the perfect lens

    International Nuclear Information System (INIS)

    Wee, W H; Pendry, J B

    2010-01-01

    The holy grail of imaging is the ability to see through anything. From the conservation of energy, we can easily see that to see through a lossy material would require lenses with gain. The aim of this paper therefore is to propose a simple scheme by which we can construct a general perfect lens with gain, one that can restore both the phases and amplitudes of near and far fields.

  6. EDICAM (Event Detection Intelligent Camera)

    Energy Technology Data Exchange (ETDEWEB)

    Zoletnik, S. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Szabolics, T., E-mail: szabolics.tamas@wigner.mta.hu [Wigner RCP RMI, EURATOM Association, Budapest (Hungary); Kocsis, G.; Szepesi, T.; Dunai, D. [Wigner RCP RMI, EURATOM Association, Budapest (Hungary)

    2013-10-15

    Highlights: ► We present EDICAM's hardware modules. ► We present EDICAM's main design concepts. ► This paper will describe EDICAM firmware architecture. ► Operation principles description. ► Further developments. -- Abstract: A new type of fast framing camera has been developed for fusion applications by the Wigner Research Centre for Physics during the last few years. A new concept was designed for intelligent event driven imaging which is capable of focusing image readout to Regions of Interests (ROIs) where and when predefined events occur. At present these events mean intensity changes and external triggers but in the future more sophisticated methods might also be defined. The camera provides 444 Hz frame rate at full resolution of 1280 × 1024 pixels, but monitoring of smaller ROIs can be done in the 1–116 kHz range even during exposure of the full image. Keeping space limitations and the harsh environment in mind the camera is divided into a small Sensor Module and a processing card interconnected by a fast 10 Gbit optical link. This camera hardware has been used for passive monitoring of the plasma in different devices for example at ASDEX Upgrade and COMPASS with the first version of its firmware. The new firmware and software package is now available and ready for testing the new event processing features. This paper will present the operation principle and features of the Event Detection Intelligent Camera (EDICAM). The device is intended to be the central element in the 10-camera monitoring system of the Wendelstein 7-X stellarator.

  7. Trend of digital camera and interchangeable zoom lenses with high ratio based on patent application over the past 10 years

    Science.gov (United States)

    Sensui, Takayuki

    2012-10-01

    Although digitalization has tripled the consumer-class camera market, extreme reductions in the prices of fixed-lens cameras have reduced profitability. As a result, a number of manufacturers have entered the market of the system DSC, i.e. the digital still camera with interchangeable lenses, where large profit margins are possible, and many high-ratio zoom lenses with image stabilization functions have been released. Quiet actuators are another indispensable component. A design with little degradation in performance due to all types of errors is preferred for a good balance in terms of size, lens performance, and the ratio of acceptable to sub-standard products. Decentering sensitivity of the moving groups, such as that caused by tilting, is especially important. In addition, image stabilization mechanisms actively shift lens groups. Development of high-ratio zoom lenses with a vibration reduction mechanism is therefore confronted by the challenge of reduced performance due to decentering, making control of the decentering sensitivity between lens groups essential. While there are a number of ways to align lenses (axial alignment), shock resistance and the ability to stand up to environmental conditions must also be considered. Naturally, it is very difficult, if not impossible, to make lenses smaller and achieve a low decentering sensitivity at the same time. A 4-group zoom construction is beneficial in making lenses smaller, but its decentering sensitivity is greater. A 5-group zoom configuration makes smaller lenses more difficult, but it enables lower decentering sensitivities. At Nikon, the most advantageous construction is selected for each lens based on its specifications. The AF-S DX NIKKOR 18-200mm f/3.5-5.6G ED VR II and AF-S NIKKOR 28-300mm f/3.5-5.6G ED VR are excellent examples of this.

  8. The Lens Staring You in the Face

    Science.gov (United States)

    vachon, R. W.

    2012-12-01

    When you are embedded in the minutiae of a science, you and your collaborators don't need to be convinced that there are benefits to investigating your precise science. You are intrinsically invested. In fact, we spend so much time around each other that we create our own language. The many acronyms that we come to take for granted and topic-relevant words, like "fractionate", are not common parlance. This is the case with any specialization, but when it becomes second nature to communicate your work with these norms, transmission breaks down, people tune out, and some audiences become frustrated. Media can cushion this separation, but what do you do when you are the one in front of the camera, and the clarity and impact of a concept rest on your shoulders? Just as writing a peer-reviewed paper is an acquired skill, so is communicating to the ones who pay many of our salaries: taxpayers. Over the past three years I have worked intimately with publishers, networks and university outreach programs to refine my approach to communicating scientific knowledge to particular audiences. The road has positioned me as interviewee, motivational speaker for science, technology, engineering and math (STEM) in middle and high schools, and educational video series host. The media have their own standards. Personality and enthusiasm may be just as important as the journal in which your findings were published. This presentation will emphasize the audience-adjustable tenets that have stood the test of time to result in effective video communication. Additionally, preparation for and execution of different roles in front of the lens, including communicating concepts on video, will be discussed.

  9. New light field camera based on physical based rendering tracing

    Science.gov (United States)

    Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung

    2014-03-01

    Even though light field technology was first invented more than 50 years ago, it did not gain popularity due to the limitation imposed by the computation technology. With the rapid advancement of computer technology over the last decade, the limitation has been uplifted and the light field technology quickly returns to the spotlight of the research stage. In this paper, PBRT (Physical Based Rendering Tracing) was introduced to overcome the limitation of using traditional optical simulation approach to study the light field camera technology. More specifically, traditional optical simulation approach can only present light energy distribution but typically lack the capability to present the pictures in realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the light field data calculation potential in creating realistic photos. Furthermore, we integrated the optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes. In other words, our approach provided us with a way to establish a link of virtual scene with the real measurement results. Several images developed based on the above-mentioned approaches were analyzed and discussed to verify the pros and cons of the newly developed PBRT based light field camera technology. It will be shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensors. Detailed operational constraint, performance metrics, computation resources needed, etc. associated with this newly developed light field camera technique were presented in detail.

  10. Life through a Lens: Risk, Surveillance and Subjectivity

    Directory of Open Access Journals (Sweden)

    Gavin Smith

    2016-03-01

    Full Text Available Drawing on findings from a two-year empirical study examining the culture of closed-circuit television (CCTV) operation in the UK, this paper analyses how CCTV camera operators subjectively experience the visual media that they work to produce. It seeks to excavate some of the social meanings that these vicarious risk flâneurs ascribe to the telemediated events that they indirectly encounter, and how these ‘narratives of the street' come to inscribe themselves on the subjectivities of the camera operators in a disciplinary manner. In so doing, the paper reveals the work of watching to be an ambiguous social practice, an activity that far exceeds its formal framing as a dispassionate and standardised procedure. As such, I contend that CCTV camera operators engage in two distinct modes of work – ‘surface' and ‘deep' – as they watch the screens and codify the spectacles that are mediated through the camera lens. The ‘surface' work they enact is officially acknowledged and concerns their focusing attention on the screens to identify harmful behaviours, to capture evidence and to share information with other collaborators in the security network. This mode of work is principally performed for professional imperatives and economic returns. In contrast, the ‘deep' work rituals they execute are informal in scope and therapeutic in purpose. Such individualised practices are an unseen and unrecognised work relation that mitigates the negative effects of CCTV viewing. They are operationalised through diverse behavioural repertoires which function to insulate the self from its exposure to mediated traumas, and from the contradiction of mobilising ‘(in)action at a distance'. Overall, the paper accentuates the messy realities that hinge on the practice of urban surveillance, showing these realities to be mediated by the vagaries of subjective experience and social relations.

  11. Radiometric calibration of wide-field camera system with an application in astronomy

    Science.gov (United States)

    Vítek, Stanislav; Nasyrova, Maria; Stehlíková, Veronika

    2017-09-01

    The camera response function (CRF) is widely used to describe the relationship between scene radiance and image brightness. The most common application of the CRF is High Dynamic Range (HDR) reconstruction of the radiance maps of imaged scenes from a set of frames with different exposures. The main goal of this work is to provide an overview of CRF estimation algorithms and compare their outputs with results obtained under laboratory conditions. These algorithms, typically designed for multimedia content, are unfortunately quite useless with astronomical image data, mostly due to the nature of such data (blur, noise, and long exposures). Therefore, we propose an optimization of selected methods for use in an astronomical imaging application. Results are experimentally verified on a wide-field camera system using a Digital Single Lens Reflex (DSLR) camera.
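
    For illustration of how an estimated CRF is typically applied, the sketch below merges differently exposed frames into a log-radiance map using a Debevec-style weighted average. The inverse response array, the exposure times, and the hat-shaped weighting are generic assumptions for illustration, not the calibration actually performed for the wide-field system described above.

    ```python
    import numpy as np

    def merge_hdr(frames, exposure_times, inv_crf, z_min=0, z_max=255):
        """Reconstruct a log-radiance map from differently exposed frames.

        frames         : list of 2-D uint8 arrays of the same scene
        exposure_times : exposure times in seconds, one per frame
        inv_crf        : 256-element array g(z) = log exposure for pixel value z
                         (an estimated inverse camera response function)
        """
        # Hat-shaped weighting: trust mid-range pixel values the most.
        z = np.arange(256)
        weight = np.minimum(z - z_min, z_max - z).astype(float)

        num = np.zeros(frames[0].shape, dtype=float)
        den = np.zeros(frames[0].shape, dtype=float)
        for img, t in zip(frames, exposure_times):
            w = weight[img]
            # g(Z) - ln(t) estimates the log scene radiance for each pixel.
            num += w * (inv_crf[img] - np.log(t))
            den += w
        return num / np.maximum(den, 1e-6)   # log-radiance map
    ```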

  12. Registration of an on-axis see-through head-mounted display and camera system

    Science.gov (United States)

    Luo, Gang; Rensing, Noa M.; Weststrate, Evan; Peli, Eli

    2005-02-01

    An optical see-through head-mounted display (HMD) system integrating a miniature camera that is aligned with the user's pupil is developed and tested. Such an HMD system has a potential value in many augmented reality applications, in which registration of the virtual display to the real scene is one of the critical aspects. The camera alignment to the user's pupil results in a simple yet accurate calibration and a low registration error across a wide range of depth. In reality, a small camera-eye misalignment may still occur in such a system due to the inevitable variations of HMD wearing position with respect to the eye. The effects of such errors are measured. Calculation further shows that the registration error as a function of viewing distance behaves nearly the same for different virtual image distances, except for a shift. The impact of prismatic effect of the display lens on registration is also discussed.

  13. CONTACT LENS RELATED CORNEAL ULCER

    Directory of Open Access Journals (Sweden)

    AGARWAL P

    2010-01-01

    Full Text Available A corneal ulcer caused by infection is one of the major causes of blindness worldwide. One of the recent health concerns is the increasing incidence of corneal ulcers associated with contact lens use, especially when users fail to follow specific instructions in using their contact lenses. Risk factors associated with an increased risk of contact lens related corneal ulcers are: overnight wear, long duration of continuous wear, lower socio-economic classes, smoking, dry eye and poor hygiene. The presenting symptoms of contact lens related corneal ulcers include eye discomfort, foreign body sensation and lacrimation. More serious symptoms are redness (especially circum-corneal injection), severe pain, photophobia, eye discharge and blurring of vision. The diagnosis is established by a thorough slit lamp microscopic examination with fluorescein staining and corneal scraping for Gram stain and culture of the infective organism. Delay in diagnosis and treatment can cause permanent blindness; therefore an early referral to an ophthalmologist and prompt commencement of antimicrobial therapy can prevent visual loss.

  14. Glycation precedes lens crystallin aggregation

    International Nuclear Information System (INIS)

    Swamy, M.S.; Perry, R.E.; Abraham, E.C.

    1987-01-01

    Non-enzymatic glycosylation (glycation) seems to have the potential to alter the structure of crystallins and make them susceptible to thiol oxidation leading to disulfide-linked high molecular weight (HMW) aggregate formation. The authors used streptozotocin diabetic rats during precataract and cataract stages and long-term cell-free glycation of bovine lens crystallins to study the relationship between glycation and lens crystallin aggregation. HMW aggregates and other protein components of the water-soluble (WS) and urea-soluble (US) fractions were separated by molecular sieve high performance liquid chromatography. Glycation was estimated by both [3H]NaBH4 reduction and phenylboronate agarose affinity chromatography. Levels of total glycated protein (GP) in the US fractions were about 2-fold higher than in the WS fractions, and there was a linear increase in GP in both WS and US fractions. This increase was paralleled by a corresponding increase in HMW aggregates. Total GP extracted by the affinity method from the US fraction showed a predominance of HMW aggregates and vice versa. Cell-free glycation studies with bovine crystallins confirmed the results of the animal studies. Increasing glycation caused a corresponding increase in protein insolubilization, and the insoluble fraction thus formed also contained more glycated protein. It appears that lens protein glycation, HMW aggregate formation, and protein insolubilization are interrelated.

  15. The Sydney University PAPA camera

    Science.gov (United States)

    Lawson, Peter R.

    1994-04-01

    The Precision Analog Photon Address (PAPA) camera is a photon-counting array detector that uses optical encoding to locate photon events on the output of a microchannel plate image intensifier. The Sydney University camera is a 256x256 pixel detector which can operate at speeds greater than 1 million photons per second and produce individual photon coordinates with a deadtime of only 300 ns. It uses a new Gray coded mask-plate which permits a simplified optical alignment and successfully guards against vignetting artifacts.
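
    The Gray-coded mask works because adjacent photon addresses differ in exactly one bit, so an event that straddles two mask channels decodes to a neighbouring pixel rather than to a distant one. A minimal sketch of binary-reflected Gray encoding and decoding (the general coding scheme, not the PAPA mask-plate or its electronics):

    ```python
    def to_gray(n: int) -> int:
        """Binary-reflected Gray code of n: adjacent integers differ by one bit."""
        return n ^ (n >> 1)

    def from_gray(g: int) -> int:
        """Invert the Gray code by cumulatively XOR-ing the shifted value."""
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    # Adjacent addresses differ by a single bit, and decoding is exact,
    # e.g. for one 256-pixel axis of the detector:
    assert all(bin(to_gray(i) ^ to_gray(i + 1)).count("1") == 1 for i in range(255))
    assert all(from_gray(to_gray(i)) == i for i in range(256))
    ```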

  16. Streak cameras and their applications

    International Nuclear Information System (INIS)

    Bernet, J.M.; Imhoff, C.

    1987-01-01

    Over the last several years, development of various measurement techniques in the nanosecond and pico-second range has led to increased reliance on streak cameras. This paper will present the main electronic and optoelectronic performances of the Thomson-CSF TSN 506 cameras and their associated devices used to build an automatic image acquisition and processing system (NORMA). A brief survey of the diversity and the spread of the use of high speed electronic cinematography will be illustrated by a few typical applications [fr

  17. Development of an Algorithm for Heart Rate Measurement Using a Mobile Phone Camera

    Directory of Open Access Journals (Sweden)

    D. A. Laure

    2014-01-01

    Full Text Available Nowadays there exist many different ways to measure a person's heart rate. One of them uses a mobile phone's built-in camera. This method is easy to use and does not require any additional skills or special devices for heart rate measurement. It requires only a mobile cellphone with a built-in camera and a flash. The main idea of the method is to detect changes in finger skin color that occur due to blood pulsation. The measurement process is simple: the user covers the camera lens with a finger and the application on the mobile phone starts catching and analyzing frames from the camera. Heart rate can be calculated by analyzing the average red component values of frames taken by the mobile cellphone camera that contain images of an area of the skin. In this paper the authors review the existing algorithms for heart rate measurement with the help of a mobile phone camera and propose their own algorithm, which is more efficient than the reviewed algorithms.
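
    As a rough illustration of the frame-analysis step described above, the sketch below averages the red channel over each frame, removes the mean, and reads the heart rate off the dominant spectral peak within a physiological band. The frame rate, band limits, and array layout are illustrative assumptions, not the authors' algorithm.

    ```python
    import numpy as np

    def heart_rate_bpm(frames, fps=30.0, lo_bpm=40.0, hi_bpm=200.0):
        """Estimate heart rate from video of a flash-lit fingertip on the lens.

        frames : array of shape (n_frames, height, width, 3), RGB
        fps    : camera frame rate in frames per second
        """
        # 1. Mean red-channel value per frame (blood pulsation modulates it).
        red = frames[..., 0].reshape(len(frames), -1).mean(axis=1)
        red = red - red.mean()                       # remove the DC component

        # 2. Dominant frequency inside the physiological band via the FFT.
        spectrum = np.abs(np.fft.rfft(red))
        freqs = np.fft.rfftfreq(len(red), d=1.0 / fps)           # Hz
        band = (freqs >= lo_bpm / 60.0) & (freqs <= hi_bpm / 60.0)
        peak_hz = freqs[band][np.argmax(spectrum[band])]
        return 60.0 * peak_hz                        # beats per minute
    ```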

  18. The Effect of the Crystalline Lens on Central Vault After Implantable Collamer Lens Implantation.

    Science.gov (United States)

    Qi, Meng-Ying; Chen, Qian; Zeng, Qing-Yan

    2017-08-01

    To identify associations between crystalline lens-related factors and central vault after Implantable Collamer Lens (ICL) (Staar Surgical, Monrovia, CA) implantation. This retrospective clinical study included 320 eyes from 186 patients who underwent ICL implantation surgery. At 1 year after surgery, the central vault was measured using anterior segment optical coherence tomography. Preoperative anterior chamber depth, lens thickness, lens position (lens position = anterior chamber depth + 1/2 lens thickness), and vault were analyzed to investigate the effects of lens-related factors on postoperative vault. The mean vault was 513 ± 215 µm at 1 year after surgery. Vault was positively correlated with preoperative anterior chamber depth (r = 0.495) and lens position (r = 0.371), and negatively correlated with lens thickness (r = -0.262). Eyes with low vaults had a shallower anterior chamber and a more forward lens position than eyes in the other two vault groups (which had vaults ≥ 250 µm), and eyes with a lens position of less than 5.1 mm had greatly reduced vaults. The crystalline lens could therefore have an important influence on postoperative vault: eyes with a shallower anterior chamber and a forward lens position will have lower vaults. [J Refract Surg. 2017;33(8):519-523.]. Copyright 2017, SLACK Incorporated.

  19. THE BOSS EMISSION-LINE LENS SURVEY (BELLS). I. A LARGE SPECTROSCOPICALLY SELECTED SAMPLE OF LENS GALAXIES AT REDSHIFT ~0.5

    Energy Technology Data Exchange (ETDEWEB)

    Brownstein, Joel R.; Bolton, Adam S.; Pandey, Parul [Department of Physics and Astronomy, University of Utah, Salt Lake City, UT 84112 (United States); Schlegel, David J. [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Eisenstein, Daniel J. [Harvard College Observatory, 60 Garden Street, MS 20, Cambridge, MA 02138 (United States); Kochanek, Christopher S. [Department of Astronomy and Center for Cosmology and Astroparticle Physics, Ohio State University, Columbus, OH 43210 (United States); Connolly, Natalia [Department of Physics, Hamilton College, Clinton, NY 13323 (United States); Maraston, Claudia [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom); Seitz, Stella [University Observatory Munich, Scheinstrasse 1, 81679 Muenchen (Germany); Wake, David A. [Department of Astronomy, Yale University, New Haven, CT 06520 (United States); Wood-Vasey, W. Michael [Pittsburgh Center for Particle Physics, Astrophysics, and Cosmology (PITT-PACC), Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA 15260 (United States); Brinkmann, Jon [Apache Point Observatory, P.O. Box 59, Sunspot, NM 88349 (United States); Schneider, Donald P. [Department of Astronomy and Astrophysics and Institute for Gravitation and the Cosmos, Pennsylvania State University, University Park, PA 16802 (United States); Weaver, Benjamin A. [Center for Cosmology and Particle Physics, New York University, New York, NY 10003 (United States)

    2012-01-01

    We present a catalog of 25 definite and 11 probable strong galaxy-galaxy gravitational lens systems with lens redshifts 0.4 ≲ z ≲ 0.7, discovered spectroscopically by the presence of higher-redshift emission lines within the Baryon Oscillation Spectroscopic Survey (BOSS) of luminous galaxies, and confirmed with high-resolution Hubble Space Telescope (HST) images of 44 candidates. Our survey extends the methodology of the Sloan Lens Advanced Camera for Surveys survey (SLACS) to higher redshift. We describe the details of the BOSS spectroscopic candidate detections, our HST ACS image processing and analysis methods, and our strong gravitational lens modeling procedure. We report BOSS spectroscopic parameters and ACS photometric parameters for all candidates, and mass-distribution parameters for the best-fit singular isothermal ellipsoid models of definite lenses. Our sample to date was selected using only the first six months of BOSS survey-quality spectroscopic data. The full five-year BOSS database should produce a sample of several hundred strong galaxy-galaxy lenses and in combination with SLACS lenses at lower redshift, strongly constrain the redshift evolution of the structure of elliptical, bulge-dominated galaxies as a function of luminosity, stellar mass, and rest-frame color, thereby providing a powerful test for competing theories of galaxy formation and evolution.

  20. The Mars Science Laboratory (MSL) Mast cameras and Descent imager: Investigation and instrument descriptions

    Science.gov (United States)

    Malin, Michal C.; Ravine, Michael A.; Caplinger, Michael A.; Tony Ghaemi, F.; Schaffner, Jacob A.; Maki, Justin N.; Bell, James F.; Cameron, James F.; Dietrich, William E.; Edgett, Kenneth S.; Edwards, Laurence J.; Garvin, James B.; Hallet, Bernard; Herkenhoff, Kenneth E.; Heydari, Ezat; Kah, Linda C.; Lemmon, Mark T.; Minitti, Michelle E.; Olson, Timothy S.; Parker, Timothy J.; Rowland, Scott K.; Schieber, Juergen; Sletten, Ron; Sullivan, Robert J.; Sumner, Dawn Y.; Aileen Yingst, R.; Duston, Brian M.; McNair, Sean; Jensen, Elsa H.

    2017-08-01

    The Mars Science Laboratory Mast camera and Descent Imager investigations were designed, built, and operated by Malin Space Science Systems of San Diego, CA. They share common electronics and focal plane designs but have different optics. There are two Mastcams of dissimilar focal length. The Mastcam-34 has an f/8, 34 mm focal length lens, and the M-100 an f/10, 100 mm focal length lens. The M-34 field of view is about 20° × 15° with an instantaneous field of view (IFOV) of 218 μrad; the M-100 field of view (FOV) is 6.8° × 5.1° with an IFOV of 74 μrad. The M-34 can focus from 0.5 m to infinity, and the M-100 from 1.6 m to infinity. All three cameras can acquire color images through a Bayer color filter array, and the Mastcams can also acquire images through seven science filters. Images are ≤1600 pixels wide by 1200 pixels tall. The Mastcams, mounted on the 2 m tall Remote Sensing Mast, have a 360° azimuth and 180° elevation field of regard. Mars Descent Imager is fixed-mounted to the bottom left front side of the rover at 66 cm above the surface. Its fixed focus lens is in focus from 2 m to infinity, but out of focus at 66 cm. The f/3 lens has a FOV of 70° by 52° across and along the direction of motion, with an IFOV of 0.76 mrad. All cameras can acquire video at 4 frames/second for full frames or 720p HD at 6 fps. Images can be processed using lossy Joint Photographic Experts Group and predictive lossless compression.
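
    The quoted IFOV and FOV values follow from first-order geometry, IFOV ≈ p/f and FOV ≈ 2·arctan(N·p/2f), where p is the pixel pitch and N the number of pixels across. The quick check below assumes a 7.4 µm pitch (not stated in this abstract) and approximately reproduces the published figures.

    ```python
    import math

    def ifov_urad(pixel_pitch_m, focal_length_m):
        return 1e6 * pixel_pitch_m / focal_length_m

    def fov_deg(n_pixels, pixel_pitch_m, focal_length_m):
        return math.degrees(2 * math.atan(n_pixels * pixel_pitch_m /
                                          (2 * focal_length_m)))

    pitch = 7.4e-6  # assumed pixel pitch, metres
    for name, f in [("Mastcam-34", 0.034), ("Mastcam-100", 0.100)]:
        print(name,
              f"IFOV ~ {ifov_urad(pitch, f):.0f} urad,",
              f"FOV ~ {fov_deg(1600, pitch, f):.1f} x {fov_deg(1200, pitch, f):.1f} deg")
    # Mastcam-34  IFOV ~ 218 urad, FOV ~ 19.8 x 14.9 deg
    # Mastcam-100 IFOV ~ 74 urad,  FOV ~ 6.8 x 5.1 deg
    ```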

  1. Multiple-aperture optical design for micro-level cameras using 3D-printing method

    Science.gov (United States)

    Peng, Wei-Jei; Hsu, Wei-Yao; Cheng, Yuan-Chieh; Lin, Wen-Lung; Yu, Zong-Ru; Chou, Hsiao-Yu; Chen, Fong-Zhi; Fu, Chien-Chung; Wu, Chong-Syuan; Huang, Chao-Tsung

    2018-02-01

    The design of an ultra-miniaturized camera, 3D-printed directly onto a complementary metal-oxide semiconductor (CMOS) imaging sensor, is presented in this paper. The 3D-printed micro-optics are manufactured using femtosecond two-photon direct laser writing, and the figure error, which can reach submicron accuracy, is suitable for the optical system. Because the size of the micro-level camera is only several hundred micrometers, the resolution is greatly reduced and is limited by the Nyquist frequency of the pixel pitch. To improve this resolution, a single lens can be replaced by multiple-aperture lenses with dissimilar fields of view (FOV); stitching sub-images with different FOVs then yields high resolution within the central region of the image. The reason is that the angular resolution of a lens with a smaller FOV is higher than that of a lens with a larger FOV, so after stitching the angular resolution of the central area can be several times that of the outer area. For the same image circle, the image quality of the central area of the multi-lens system is significantly superior to that of a single lens. The foveated image obtained by stitching FOVs breaks the resolution limitation of the ultra-miniaturized imaging system, enabling applications such as biomedical endoscopy, optical sensing, and machine vision. In this study, the ultra-miniaturized camera with multi-aperture optics is designed and simulated for optimum optical performance.
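
    The resolution argument can be made concrete with a small calculation: for a fixed pixel count across the image circle, the angular sampling of each aperture is roughly its FOV divided by that pixel count, so the narrow-FOV channel samples the central region more finely by about the ratio of the two FOVs. The numbers below are illustrative assumptions, not the design values of the camera described above.

    ```python
    # Angular sampling (degrees per pixel) of two apertures sharing one sensor area.
    n_pixels = 400            # assumed pixels across one sub-image
    fov_wide_deg = 60.0       # assumed wide-angle channel
    fov_narrow_deg = 20.0     # assumed narrow-angle channel

    sample_wide = fov_wide_deg / n_pixels      # 0.15 deg/pixel
    sample_narrow = fov_narrow_deg / n_pixels  # 0.05 deg/pixel

    # After stitching, the central region of the foveated image is sampled about
    # fov_wide / fov_narrow = 3x more finely than with the single wide lens.
    print(f"central-region sampling gain ~ {sample_wide / sample_narrow:.1f}x")
    ```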

  2. High-speed holographic camera

    International Nuclear Information System (INIS)

    Novaro, Marc

    The high-speed holographic camera is a diagnostic instrument using holography as an information-storage medium. It allows us to take 10 holograms of an object, with exposure times of 1.5 ns, separated in time by 1 or 2 ns. In order to obtain these results easily, no moving parts are used in the set-up [fr]

  3. The Camera Comes to Court.

    Science.gov (United States)

    Floren, Leola

    After the Lindbergh kidnapping trial in 1935, the American Bar Association sought to eliminate electronic equipment from courtroom proceedings. Eventually, all but two states adopted regulations applying that ban to some extent, and a 1965 Supreme Court decision encouraged the banning of television cameras at trials as well. Currently, some states…

  4. Gamma camera with reflectivity mask

    International Nuclear Information System (INIS)

    Stout, K.J.

    1980-01-01

    In accordance with the present invention there is provided a radiographic camera comprising: a scintillator; a plurality of photodetectors positioned to face said scintillator; a plurality of masked regions formed upon a face of said scintillator opposite said photodetectors and positioned coaxially with respective ones of said photodetectors for decreasing the amount of internal reflection of optical photons generated within said scintillator. (auth)

  5. Observations of the Perseids 2012 using SPOSH cameras

    Science.gov (United States)

    Margonis, A.; Flohrer, J.; Christou, A.; Elgner, S.; Oberst, J.

    2012-09-01

    The Perseids are one of the most prominent annual meteor showers occurring every summer when the stream of dust particles, originating from Halley-type comet 109P/Swift-Tuttle, intersects the orbital path of the Earth. The dense core of this stream passes Earth's orbit on the 12th of August producing the maximum number of meteors. The Technical University of Berlin (TUB) and the German Aerospace Center (DLR) organize observing campaigns every summer monitoring the Perseids activity. The observations are carried out using the Smart Panoramic Optical Sensor Head (SPOSH) camera system [0]. The SPOSH camera has been developed by DLR and Jena-Optronik GmbH under an ESA/ESTEC contract and it is designed to image faint, short-lived phenomena on dark planetary hemispheres. The camera features a highly sensitive backilluminated 1024x1024 CCD chip and a high dynamic range of 14 bits. The custom-made fish-eye lens offers a 120°x120° field-of-view (168° over the diagonal). Figure 1: A meteor captured by the SPOSH cameras simultaneously during the last 2011 observing campaign in Greece. The horizon including surrounding mountains can be seen in the image corners as a result of the large FOV of the camera. The observations will be made on the Greek Peloponnese peninsula monitoring the post-peak activity of the Perseids during a one-week period around the August New Moon (14th to 21st). Two SPOSH cameras will be deployed in two remote sites in high altitudes for the triangulation of meteor trajectories captured at both stations simultaneously. The observations during this time interval will give us the possibility to study the poorly-observed postmaximum branch of the Perseid stream and compare the results with datasets from previous campaigns which covered different periods of this long-lived meteor shower. The acquired data will be processed using dedicated software for meteor data reduction developed at TUB and DLR. Assuming a successful campaign, statistics, trajectories

  6. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has been drastically improved in response to the demand for high-quality digital images. For example, a digital still camera has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution is incompatible with the high frame rate of ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from a physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  7. Primary intraocular lens implantation for penetrating lens trauma in Africa.

    Science.gov (United States)

    Bowman, R J; Yorston, D; Wood, M; Gilbert, C; Foster, A

    1998-09-01

    This study aimed to audit the surgical strategy of primary posterior chamber intraocular lens implantation for cases of recent penetrating trauma involving the lens in an African population. Retrospective, noncomparative case series. Seventy-two cases are reported, including all patients who underwent primary intraocular lens implantation for traumatic cataract extraction performed within 1 month of injury between 1988 and 1996. Demographic characteristics and follow-up attendance rates are analyzed. Surgical technique and the occurrence of intraoperative and postoperative complications are reported. Visual outcomes are reported with detailed analysis for cases of poor visual outcome. Mean age was 14.3 years (standard deviation = 11.1); 57 (79%) were male and 15 (21%) were female (chi-square = 23.66). The capsule had been breached by the trauma in 27 (38%) cases, and 15 of these required anterior vitrectomy. Capsular fixation of the implant was achieved in 49% of patients, the remainder having sulcus fixation. Intraoperative rupture of the posterior capsule occurred in four cases. The only common postoperative complication was acute fibrinous anterior uveitis, which occurred in 29 (40%) patients, and 32% of patients followed up for at least 6 months required secondary posterior capsulotomy. This was more common in younger patients (chi-square = 4.2, P < 0.05). Corrected postoperative visual acuities were available for 51 patients, of whom 71% achieved 20/60 or better visual acuity. Patients 6 years of age or younger were less likely to achieve 20/60 (chi-square = 6.61, P = 0.01). This surgical strategy has proved successful, producing good visual results and causing no sight-threatening complications. Primary posterior capsulotomy may be appropriate for younger patients.

  8. DISSECTING THE GRAVITATIONAL LENS B1608+656. I. LENS POTENTIAL RECONSTRUCTION

    NARCIS (Netherlands)

    Suyu, S. H.; Marshall, P. J.; Blandford, R. D.; Fassnacht, C. D.; Koopmans, L. V. E.; McKean, J. P.; Treu, T.

    2009-01-01

    Strong gravitational lensing is a powerful technique for probing galaxy mass distributions and for measuring cosmological parameters. Lens systems with extended source-intensity distributions are particularly useful for this purpose since they provide additional constraints on the lens potential (

  9. Role of Aquaporin 0 in lens biomechanics

    International Nuclear Information System (INIS)

    Sindhu Kumari, S.; Gupta, Neha; Shiels, Alan; FitzGerald, Paul G.; Menon, Anil G.; Mathias, Richard T.; Varadaraj, Kulandaiappan

    2015-01-01

    Maintenance of proper biomechanics of the eye lens is important for its structural integrity and for the process of accommodation to focus near and far objects. Several studies have shown that specialized cytoskeletal systems such as the beaded filament (BF) and spectrin-actin networks contribute to mammalian lens biomechanics; mutations or deletion in these proteins alters lens biomechanics. Aquaporin 0 (AQP0), which constitutes ∼45% of the total membrane proteins of lens fiber cells, has been shown to function as a water channel and a structural cell-to-cell adhesion (CTCA) protein. Our recent ex vivo study on AQP0 knockout (AQP0 KO) mouse lenses showed the CTCA function of AQP0 could be crucial for establishing the refractive index gradient. However, biomechanical studies on the role of AQP0 are lacking. The present investigation used wild type (WT), AQP5 KO (AQP5 −/− ), AQP0 KO (heterozygous KO: AQP0 +/− ; homozygous KO: AQP0 −/− ; all in C57BL/6J) and WT-FVB/N mouse lenses to learn more about the role of fiber cell AQPs in lens biomechanics. Electron microscopic images exhibited decreases in lens fiber cell compaction and increases in extracellular space due to deletion of even one allele of AQP0. Biomechanical assay revealed that loss of one or both alleles of AQP0 caused a significant reduction in the compressive load-bearing capacity of the lenses compared to WT lenses. Conversely, loss of AQP5 did not alter the lens load-bearing ability. Compressive load-bearing at the suture area of AQP0 +/− lenses showed easy separation while WT lens suture remained intact. These data from KO mouse lenses in conjunction with previous studies on lens-specific BF proteins (CP49 and filensin) suggest that AQP0 and BF proteins could act co-operatively in establishing normal lens biomechanics. We hypothesize that AQP0, with its prolific expression at the fiber cell membrane, could provide anchorage for cytoskeletal structures like BFs and together they help to

  10. Freeform lens design for LED collimating illumination.

    Science.gov (United States)

    Chen, Jin-Jia; Wang, Te-Yuan; Huang, Kuang-Lung; Liu, Te-Shu; Tsai, Ming-Da; Lin, Chin-Tang

    2012-05-07

    We present a simple freeform lens design method for an application to LED collimating illumination. The method is derived from a basic geometric-optics analysis and construction approach. By using this method, a highly collimating lens with LED chip size of 1.0 mm × 1.0 mm and optical simulation efficiency of 86.5% under a view angle of ± 5 deg is constructed. To verify the practical performance of the lens, a prototype of the collimator lens is also made, and an optical efficiency of 90.3% with a beam angle of 4.75 deg is measured.

  11. The Mars Hand Lens Imager (MAHLI) aboard the Mars rover, Curiosity

    Science.gov (United States)

    Edgett, K. S.; Ravine, M. A.; Caplinger, M. A.; Ghaemi, F. T.; Schaffner, J. A.; Malin, M. C.; Baker, J. M.; Dibiase, D. R.; Laramee, J.; Maki, J. N.; Willson, R. G.; Bell, J. F., III; Cameron, J. F.; Dietrich, W. E.; Edwards, L. J.; Hallet, B.; Herkenhoff, K. E.; Heydari, E.; Kah, L. C.; Lemmon, M. T.; Minitti, M. E.; Olson, T. S.; Parker, T. J.; Rowland, S. K.; Schieber, J.; Sullivan, R. J.; Sumner, D. Y.; Thomas, P. C.; Yingst, R. A.

    2009-08-01

    The Mars Science Laboratory (MSL) rover, Curiosity, is expected to land on Mars in 2012. The Mars Hand Lens Imager (MAHLI) will be used to document martian rocks and regolith with a 2-megapixel RGB color CCD camera with a focusable macro lens mounted on an instrument-bearing turret on the end of Curiosity's robotic arm. The flight MAHLI can focus on targets at working distances of 20.4 mm to infinity. At 20.4 mm, images have a pixel scale of 13.9 μm/pixel. The pixel scale at 66 mm working distance is about the same (31 μm/pixel) as that of the Mars Exploration Rover (MER) Microscopic Imager (MI). MAHLI camera head placement is dependent on the capabilities of the MSL robotic arm, the design for which presently has a placement uncertainty of ~20 mm in 3 dimensions; hence, acquisition of images at the minimum working distance may be challenging. The MAHLI consists of 3 parts: a camera head, a Digital Electronics Assembly (DEA), and a calibration target. The camera head and DEA are connected by a JPL-provided cable which transmits data, commands, and power. JPL is also providing a contact sensor. The camera head will be mounted on the rover's robotic arm turret, the DEA will be inside the rover body, and the calibration target will be mounted on the robotic arm azimuth motor housing. Camera Head. MAHLI uses a Kodak KAI-2020CM interline transfer CCD (1600 x 1200 active 7.4 μm square pixels with RGB filtered microlenses arranged in a Bayer pattern). The optics consist of a group of 6 fixed lens elements, a movable group of 3 elements, and a fixed sapphire window front element. Undesired near-infrared radiation is blocked using a coating deposited on the inside surface of the sapphire window. The lens is protected by a dust cover with a Lexan window through which imaging can be accomplished if necessary, and targets can be illuminated by sunlight or two banks of two white light LEDs. Two 365 nm UV LEDs are included to search for fluorescent materials at night. DEA

  12. Changes in lens stiffness due to capsular opacification in accommodative lens refilling

    NARCIS (Netherlands)

    Nibourg, Lisanne M.; Sharma, Prashant K.; van Kooten, Theo G.; Koopmans, Steven A.

    Accommodation may be restored to presbyopic lenses by refilling the lens capsular bag with a soft polymer. After this accommodative lens refilling, prevention of capsular opacification is a requirement, since capsular opacification leads to decreased clarity of the refilled lens. It has been

  13. Exchange of tears under a contact lens is driven by distortions of the contact lens.

    Science.gov (United States)

    Maki, Kara L; Ross, David S

    2014-12-01

    We studied the flow of the post-lens tear film under a soft contact lens to understand how the design parameters of contact lenses can affect ocular health. When a soft contact lens is inserted, the blinking eyelid causes the lens to stretch in order to conform to the shape of the eye. The deformed contact lens acts to assume its un-deformed shape and thus generates a suction pressure in the post-lens tear film. In consequence, the post-lens tear fluid moves; it responds to the suction pressure. The suction pressure may draw in fresh fluid from the edge of the lens, or it may eject fluid there, as the lens reassumes its un-deformed shape. In this article, we develop a mathematical model of the flow of the post-lens tear fluid in response to the mechanical suction pressure of a deformed contact lens. We predict the amount of exchange of fluid exchange under a contact lens and we explore the influence of the eye's shape on the rate of exchange of fluid. © The Author 2014. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.

  14. Evaluate depth of field limits of fixed focus lens arrangements in thermal infrared

    Science.gov (United States)

    Schuster, Norbert

    2016-05-01

    More and more modern thermal imaging systems use uncooled detectors. High-volume applications work with detectors that have a reduced pixel count (typically between 200x150 and 640x480). This reduces the usefulness of modern image treatment procedures such as wave front coding. On the other hand, uncooled detectors demand lenses with fast f-numbers, near f/1.0, which reduces the expected Depth of Field (DoF). What are the limits on resolution if the target changes distance to the camera system? The desire to implement lens arrangements without a focusing mechanism demands a deeper quantification of the DoF problem. A new approach avoids the classic "accepted image blur circle" and quantifies the expected DoF by the Through Focus MTF of the lens. This function is defined for a certain spatial frequency that provides a straightforward relation to the pixel pitch of the imaging device. A certain minimum MTF level is necessary so that the complete thermal imaging system can realize its basic functions, such as recognition or detection of specified targets. Very often, this technical tradeoff is approved with a certain lens. But what is the impact of changing the lens for one with a different focal length? Narrow-field lenses, which give more details of targets at longer distances, tighten the DoF problem. A first orientation is given by the hyperfocal distance. It depends in a square relation on the focal length and in a linear relation on the through-focus MTF of the lens. The analysis of these relations shows the contradictory requirements between higher thermal and spatial resolution, faster f-number and desired DoF. Furthermore, the hyperfocal distance defines the DoF borders. The relation between them follows the first-order imaging formulas. A calculation methodology will be presented to transfer DoF results from an approved lens and camera combination to another lens in combination with the initial camera. Necessary input for this prediction is the accepted DoF of
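
    For orientation, the classical first-order relations mentioned above are the hyperfocal distance H ≈ f²/(N·c) and DoF borders at roughly s·H/(H ± s) for a focus distance s. The sketch below evaluates them for an illustrative uncooled LWIR case (25 mm focal length, f/1.0, blur spot taken as one 17 µm pixel); these numbers are assumptions for illustration, and the paper's own method replaces the accepted blur circle c with a through-focus MTF criterion.

    ```python
    def hyperfocal_m(f_mm, f_number, blur_um):
        """Classical hyperfocal distance H ~ f^2 / (N * c) + f, in metres."""
        f = f_mm * 1e-3
        c = blur_um * 1e-6
        return f * f / (f_number * c) + f

    def dof_limits_m(focus_m, f_mm, f_number, blur_um):
        """Approximate near/far limits of acceptable sharpness."""
        H = hyperfocal_m(f_mm, f_number, blur_um)
        near = focus_m * H / (H + focus_m)
        far = focus_m * H / (H - focus_m) if focus_m < H else float("inf")
        return near, far

    # Illustrative LWIR lens: 25 mm focal length, f/1.0, one 17 um pixel as blur spot.
    print(hyperfocal_m(25, 1.0, 17))       # hyperfocal distance ~ 37 m
    print(dof_limits_m(10, 25, 1.0, 17))   # focus at 10 m -> sharp from ~7.9 m to ~13.7 m
    ```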

  15. Lens design and local minima

    International Nuclear Information System (INIS)

    Brixner, B.

    1981-01-01

    The widespread belief that local minima exist in the least squares lens-design error function is not confirmed by the Los Alamos Scientific Laboratory (LASL) optimization program. LASL finds the optimum-mimimum region, which is characterized by small parameter gradients of similar size, small performance improvement per iteration, and many designs that give similar performance. Local minima and unique prescriptions have not been found in many-parameter problems. The reason for these absences is that image errors caused by a change in one parameter can be compensated by changes in the remaining parameters. False local minima have been found, and four cases are discussed

  16. Architectural Design Document for Camera Models

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Architecture of camera simulator models and data interface for the Maneuvering of Inspection/Servicing Vehicle (MIV) study.

  17. Selecting a digital camera for telemedicine.

    Science.gov (United States)

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  18. 21 CFR 886.1120 - Opthalmic camera.

    Science.gov (United States)

    2010-04-01

    ... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1120 Opthalmic camera. (a) Identification. An ophthalmic camera is an AC-powered device intended to take photographs of the eye and the surrounding area...

  19. An electronic pan/tilt/magnify and rotate camera system

    International Nuclear Information System (INIS)

    Zimmermann, S.; Martin, H.L.

    1992-01-01

    A new camera system has been developed for omnidirectional image-viewing applications that provides pan, tilt, magnify, and rotational orientation within a hemispherical field of view (FOV) without any moving parts. The imaging device is based on the fact that the image from a fish-eye lens, which produces a circular image of an entire hemispherical FOV, can be mathematically corrected using high-speed electronic circuitry. More specifically, an incoming fish-eye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment. As a result, this device can accomplish the functions of pan, tilt, rotation, and magnification throughout a hemispherical FOV without the need for any mechanical devices. Multiple images, each with different image magnifications and pan-tilt-rotate parameters, can be obtained from a single camera
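
    The correction described above amounts to resampling the fish-eye image along the rays of a virtual perspective camera aimed in the chosen pan/tilt direction. A minimal sketch under an assumed equidistant fish-eye model (image radius r = f·θ); the actual transform implemented in the device's circuitry is not specified in the abstract.

    ```python
    import numpy as np

    def dewarp_map(out_w, out_h, out_fov_deg, pan_deg, tilt_deg,
                   fish_cx, fish_cy, fish_f_px):
        """Source coordinates in an equidistant fish-eye image (r = f * theta)
        for each pixel of a virtual perspective view with the given pan/tilt
        and field of view.  Returns (map_x, map_y) arrays."""
        # Rays of the virtual perspective camera (z is the optical axis).
        f_out = (out_w / 2) / np.tan(np.radians(out_fov_deg) / 2)
        u, v = np.meshgrid(np.arange(out_w) - out_w / 2,
                           np.arange(out_h) - out_h / 2)
        rays = np.stack([u, v, np.full_like(u, f_out, dtype=float)], axis=-1)
        rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

        # Rotate the rays by pan (about y) and then tilt (about x).
        p, t = np.radians(pan_deg), np.radians(tilt_deg)
        r_pan = np.array([[np.cos(p), 0, np.sin(p)],
                          [0, 1, 0],
                          [-np.sin(p), 0, np.cos(p)]])
        r_tilt = np.array([[1, 0, 0],
                           [0, np.cos(t), -np.sin(t)],
                           [0, np.sin(t), np.cos(t)]])
        rays = rays @ (r_tilt @ r_pan).T

        # Equidistant fish-eye projection: radius proportional to off-axis angle.
        theta = np.arccos(np.clip(rays[..., 2], -1.0, 1.0))
        phi = np.arctan2(rays[..., 1], rays[..., 0])
        r = fish_f_px * theta
        return fish_cx + r * np.cos(phi), fish_cy + r * np.sin(phi)

    # The two maps can be fed to an interpolation routine (e.g. cv2.remap) to
    # produce the corrected pan/tilt/magnify view without any moving parts.
    ```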

  20. Improved depth estimation with the light field camera

    Science.gov (United States)

    Wang, Huachun; Sang, Xinzhu; Chen, Duo; Guo, Nan; Wang, Peng; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    Light-field cameras are used in consumer and industrial applications. An array of micro-lenses captures enough information that one can refocus images after acquisition, as well as shift one's viewpoint within the sub-apertures of the main lens, effectively obtaining multiple views. Thus, depth estimation from both defocus and correspondence is now available in a single capture. Lytro, Inc. also provides a depth estimation from a single-shot capture with its light field cameras, such as the Lytro Illum. This Lytro depth estimate, which contains much correct depth information, can be used for higher-quality estimation. In this paper, we present a novel, simple and principled algorithm that computes dense depth estimation by combining defocus, correspondence and Lytro depth estimations. We analyze 2D epipolar images (EPIs) to get defocus and correspondence depth maps. Defocus depth is obtained by computing the spatial gradient after angular integration, and correspondence depth by computing the angular variance from EPIs. Lytro depth can be extracted from the Lytro Illum with software. We then show how to combine the three cues into a high-quality depth map. Our method for depth estimation is suitable for computer vision applications such as matting, full control of depth-of-field, and surface reconstruction, as well as light field displays.
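
    A compact sketch of the two per-pixel cue responses described above, computed from one horizontal EPI slice L(u, x) after shearing it for a candidate disparity: the defocus response is the spatial gradient of the angularly integrated EPI, and the correspondence response is the variance across the angular dimension (low variance indicating a good match). The array layout and the shear-by-interpolation step are assumptions for illustration, not the paper's implementation.

    ```python
    import numpy as np

    def epi_cues(epi, disparity):
        """Defocus and correspondence responses for one candidate disparity.

        epi       : 2-D array L(u, x) -- angular index u, spatial index x
        disparity : candidate slope (pixels of x-shift per angular step)
        """
        n_u, n_x = epi.shape
        u = np.arange(n_u) - (n_u - 1) / 2.0
        x = np.arange(n_x)

        # Shear the EPI so that, at the correct disparity, each column becomes
        # constant along the angular direction (linear interpolation per row).
        sheared = np.stack([np.interp(x + du * disparity, x, epi[i])
                            for i, du in enumerate(u)])

        # Defocus cue: high spatial contrast after integrating over the angle.
        refocused = sheared.mean(axis=0)
        defocus = np.abs(np.gradient(refocused))

        # Correspondence cue: low variance across the angle means a good match.
        correspondence = sheared.var(axis=0)
        return defocus, correspondence   # per-pixel responses along x
    ```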

  1. Cameras and settings for optimal image capture from UAVs

    Science.gov (United States)

    Smith, Mike; O'Connor, James; James, Mike R.

    2017-04-01

    Aerial image capture has become very common within the geosciences due to the increasing affordability of low-payload platforms aimed at consumer markets. Their application to surveying has led to many studies being undertaken using UAV imagery captured from consumer-grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion, which can lead to experiments being difficult to accurately reproduce. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well-exposed and suitable imagery are derived. This then leads to a discussion of how to optimise the platform, camera, lens and imaging settings relevant to image quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs/error estimates, allow reproducibility of the results and make these comparable with future studies. We recommend providing open-access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.

  2. Performance of Very Small Robotic Fish Equipped with CMOS Camera

    Directory of Open Access Journals (Sweden)

    Yang Zhao

    2015-10-01

    Full Text Available Underwater robots are often used to investigate marine animals. Ideally, such robots should be in the shape of fish so that they can easily go unnoticed by aquatic animals. In addition, lacking a screw propeller, a robotic fish would be less likely to become entangled in algae and other plants. However, although such robots have been developed, their swimming speed is significantly lower than that of real fish. Since to carry out a survey of actual fish a robotic fish would be required to follow them, it is necessary to improve the performance of the propulsion system. In the present study, a small robotic fish (SAPPA was manufactured and its propulsive performance was evaluated. SAPPA was developed to swim in bodies of freshwater such as rivers, and was equipped with a small CMOS camera with a wide-angle lens in order to photograph live fish. The maximum swimming speed of the robot was determined to be 111 mm/s, and its turning radius was 125 mm. Its power consumption was as low as 1.82 W. During trials, SAPPA succeeded in recognizing a goldfish and capturing an image of it using its CMOS camera.

  3. Improved positron emission tomography camera

    International Nuclear Information System (INIS)

    Mullani, N.A.

    1986-01-01

    An improved positron emission tomography camera has a plurality of rings of detectors positioned side-by-side, or offset by one-half of the detector cross section, around a patient area to detect radiation therefrom, and a plurality of scintillation crystals positioned relative to the photomultiplier tubes such that each tube is responsive to more than one crystal. Each alternate crystal in the ring may be offset by one-half or less of the thickness of the crystal such that the staggered crystals are seen by more than one photomultiplier tube. This sharing of crystals and photomultiplier tubes allows identification of the staggered crystal and the use of smaller detectors shared by larger photomultiplier tubes, thereby requiring fewer photomultiplier tubes, creating more scanning slices, providing better data sampling, and reducing the cost of the camera. (author)

  4. Vehicular camera pedestrian detection research

    Science.gov (United States)

    Liu, Jiahui

    2018-03-01

    With the rapid development of science and technology, highway traffic and transportation have become much more convenient. At the same time, however, traffic safety accidents occur more and more frequently in China. In order to deal with this increasingly heavy burden on traffic safety, protecting the safety of people and their property and facilitating travel has become a top priority. Real-time, accurate information about pedestrians and the driving environment can be obtained through a vehicular camera, which is used to detect and track preceding moving targets. This approach is popular in the domains of intelligent vehicle safety, autonomous navigation and traffic system research. Based on pedestrian video obtained by a vehicular camera, this paper studies pedestrian detection and trajectory tracking and their algorithms.

  5. The family photography in the lens of students’ camera [Rodzina w obiektywie studenckiego aparatu]

    Directory of Open Access Journals (Sweden)

    Andrzej ŁADYŻYŃSKI

    2017-11-01

    Full Text Available The paper constitutes an attempt to interpret family photography as a vital source of pedagogical information. The photographs, on the subject of 'The family', were taken by students of the Institute of Pedagogy of the University of Wroclaw. Analysis of the photographs gives detailed information about the people being photographed, provides knowledge about the prevailing thematic motifs, the contextual circumstances and the emotions accompanying the process of taking them, and indicates their social aspect: social class, sex, and age. It allows the observer to interpret the biographical experiences of the photographers. The endeavour to interpret the photographs presented by the students shows an interesting way to understand the concept of the family and a variety of aspects of its functioning.

  6. Using Motion Pictures to Teach Management: Refocusing the Camera Lens through the Infusion Approach to Diversity

    Science.gov (United States)

    Bumpus, Minnette A.

    2005-01-01

    Motion pictures and television shows can provide mediums to facilitate the learning of management and organizational behavior theories and concepts. Although the motion pictures and television shows cited in the literature cover a broad range of cinematic categories, racial inclusion is limited. The objectives of this article are to document the…

  7. Primary anterior chamber intraocular lens for the treatment of severe crystalline lens subluxation.

    Science.gov (United States)

    Hoffman, Richard S; Fine, I Howard; Packer, Mark

    2009-10-01

    Subluxated cataractous and clear lenses are commonly treated by limbal or pars plana lensectomy followed by primary or secondary intraocular lens (IOL) implantation. Adjunctive capsular prosthetic devices have facilitated lens removal and IOL centration in these challenging cases but have also added complexity and potential complications to the procedure. Although crystalline lens extraction may be required to clear the visual axis in mild to moderate lens subluxations, we propose insertion of a primary anterior chamber IOL without lens extraction in severe subluxations when the eye is optically aphakic or can be made functionally aphakic following neodymium:YAG laser zonulysis. Two cases demonstrating this approach are presented.

  8. Graphic design of pinhole cameras

    Science.gov (United States)

    Edwards, H. B.; Chu, W. P.

    1979-01-01

    The paper describes a graphic technique for the analysis and optimization of pinhole size and focal length. The technique is based on the use of the transfer function of optical elements described by Scott (1959) to construct the transfer function of a circular pinhole camera. This transfer function is the response of a component or system to a pattern of lines having a sinusoidally varying radiance at varying spatial frequencies. Some specific examples of graphic design are presented.
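
    The same trade-off can be explored numerically: a common approximation takes the incoherent transfer function of a pinhole camera as the product of the geometric blur of a uniform disc the size of the pinhole and the diffraction-limited MTF of a circular aperture at the pinhole-to-film distance. The sketch below uses that approximation with illustrative numbers; it is not the graphical construction from Scott (1959) used in the paper.

    ```python
    import numpy as np
    from scipy.special import j1

    def pinhole_mtf(freq_cpmm, d_mm, dist_mm, wavelength_mm=550e-6):
        """Approximate MTF of a pinhole camera at spatial frequency freq (cycles/mm):
        geometric blur of a disc of diameter d times the diffraction-limited MTF
        of a circular aperture at pinhole-to-film distance dist."""
        nu = np.asarray(freq_cpmm, dtype=float)
        arg = np.pi * d_mm * nu
        geometric = np.where(arg > 1e-9, 2 * j1(arg) / np.maximum(arg, 1e-9), 1.0)
        s = np.clip(nu * wavelength_mm * dist_mm / d_mm, 0.0, 1.0)   # nu / cutoff
        diffraction = (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s ** 2))
        return np.abs(geometric) * diffraction

    # A common rule of thumb balancing the two blurs: d ~ sqrt(2.44 * lambda * f).
    f_mm, lam_mm = 100.0, 550e-6
    d_opt = np.sqrt(2.44 * lam_mm * f_mm)
    print(f"rule-of-thumb pinhole diameter ~ {d_opt:.2f} mm")        # ~0.37 mm
    print("MTF at 1, 2, 4 cycles/mm:", pinhole_mtf([1, 2, 4], d_opt, f_mm))
    ```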

  9. Ocular Surface Temperature During Scleral Lens Wearing in Patients With Keratoconus.

    Science.gov (United States)

    Carracedo, Gonzalo; Wang, Zicheng; Serramito-Blanco, Maria; Martin-Gil, Alba; Carballo-Alvarez, Jesús; Pintor, Jesús

    2017-11-01

    To evaluate the ocular surface temperature using an infrared thermography camera before and after scleral lens wear in patients with keratoconus, and to correlate these results with tear production and stability. A pilot, experimental, short-term study was performed. Twenty-six patients with keratoconus (36.95±8.95 years) participated voluntarily in the study. The sample was divided into two groups: patients with intrastromal corneal ring segments (KC-ICRS group) and patients without ICRS (KC group). The Schirmer test, tear breakup time (TBUT), and ocular surface temperature in the conjunctiva, limbus, and cornea were evaluated before and after wearing a scleral lens. The patients wore the scleral lenses from 6 to 9 hours, with an average of 7.59±0.73 hours. No significant changes in the Schirmer test and TBUT were found for either group. No temperature differences were found between the KC-ICRS and KC groups for any of the zones evaluated. There was a slight but statistically significant increase in the inferior corneal, temporal limbal, and nasal conjunctival temperature for the KC-ICRS group, and a decrease in the temporal limbal temperature for the KC group, after wearing the scleral lens. The limbal temperature was statistically higher than that of the central cornea for both groups, before and after scleral lens wear, whereas no such difference was found for the peripheral cornea. No statistically significant differences in the central corneal temperature were found between the groups after scleral lens wear (P>0.05). Scleral contact lenses seem not to modify the ocular surface temperature despite the tear film stagnation under the lens.

  10. Preliminary analysis on faint luminous lightning events recorded by multiple high speed cameras

    Science.gov (United States)

    Alves, J.; Saraiva, A. V.; Pinto, O.; Campos, L. Z.; Antunes, L.; Luz, E. S.; Medeiros, C.; Buzato, T. S.

    2013-12-01

    The objective of this work is the study of some faint luminous events produced by lightning flashes that were recorded simultaneously by multiple high-speed cameras during the previous RAMMER (Automated Multi-camera Network for Monitoring and Study of Lightning) campaigns. The RAMMER network is composed of three fixed cameras and one mobile color camera separated by, on average, distances of 13 kilometers. They were located in the Paraiba Valley (in the cities of São José dos Campos and Caçapava), SP, Brazil, arranged in a quadrilateral shape, centered on the São José dos Campos region. This configuration allowed RAMMER to see a thunderstorm from different angles, registering the same lightning flashes simultaneously with multiple cameras. Each RAMMER sensor is composed of a triggering system and a Phantom high-speed camera version 9.1, which is set to operate at a frame rate of 2,500 frames per second with a Nikkor lens (model AF-S DX 18-55 mm 1:3.5-5.6 G in the stationary sensors, and model AF-S ED 24 mm 1:1.4 in the mobile sensor). All videos were GPS (Global Positioning System) time stamped. For this work we used a data set collected on four RAMMER manual operation days in the campaigns of 2012 and 2013. On Feb. 18th the data set comprises 15 flashes recorded by two cameras and 4 flashes recorded by three cameras. On Feb. 19th a total of 5 flashes was registered by two cameras and 1 flash registered by three cameras. On Feb. 22nd we obtained 4 flashes registered by two cameras. Finally, on March 6th two cameras recorded 2 flashes. The analysis in this study proposes an evaluation methodology for faint luminous lightning events, such as continuing currents. Problems in the temporal measurement of the continuing current can generate some imprecision during the optical analysis; therefore, this work aims to evaluate the effect of distance on this parameter with this preliminary data set. In the cases that include the color camera we analyzed the RGB

  11. Fabricating customized hydrogel contact lens

    Science.gov (United States)

    Childs, Andre; Li, Hao; Lewittes, Daniella M.; Dong, Biqin; Liu, Wenzhong; Shu, Xiao; Sun, Cheng; Zhang, Hao F.

    2016-10-01

    Contact lenses are increasingly used in laboratories for in vivo animal retinal imaging and pre-clinical studies. The lens shapes often need modification to optimally fit the corneas of individual test subjects. However, the choices among commercially available contact lenses are rather limited. Here, we report a flexible method to fabricate customized hydrogel contact lenses. We showed that the fabricated hydrogel is highly transparent, with refractive indices ranging from 1.42 to 1.45 in the spectral range from 400 nm to 800 nm. The Young’s modulus (1.47 MPa) and hydrophobicity (with a sessile drop contact angle of 40.5°) have also been characterized experimentally. Retinal imaging using optical coherence tomography in rats wearing our customized contact lenses is of quality comparable to the control case without a contact lens. Our method could significantly reduce the cost and lead time of fabricating soft contact lenses with customized shapes and benefit laboratory-use contact lenses in pre-clinical studies.

  12. Protection of the eye lens

    International Nuclear Information System (INIS)

    2015-01-01

    The limit of radiation exposure for the eye lens is going to decrease dramatically from 150 to 20 mSv with the transposition into French law of a directive based on ICRP (International Commission on Radiological Protection) recommendations. Health studies have shown that radiologists are 3.8 times more likely to develop eye lens opacities than the rest of the population. The wearing of protective glasses is recommended, and in order to better monitor the radiation dose, new dosimeters have been designed; they can be worn on the glasses frame or stuck directly on the skin near the eyes. A study has shown that veterinary surgeons, who customarily stay near animals to keep them quiet during radiological exams, are prone to receive high doses, as are physicians who use hypnosis to reduce their patients' anxiety during radiological exams. Radiation exposure of radiologists can be mitigated through the use of protective shields and equipment and the optimization of the dose delivered to the patient. (A.C.)

  13. Color corrected Fresnel lens for solar concentration

    International Nuclear Information System (INIS)

    Kritchman, E.M.

    1979-01-01

    A new linear convex Fresnel lens with its grooved side down is described. The design philosophy is similar to that of the highly concentrating two-focal Fresnel lens, but includes a correction for chromatic aberration. A solar concentration ratio as high as 80 is achieved. For wide acceptance angles the concentration approaches the theoretical maximum. (author)

  14. Analysis of a Thin Optical Lens Model

    Science.gov (United States)

    Ivchenko, Vladimir V.

    2011-01-01

    In this article a thin optical lens model is considered. It is shown that the limits of its applicability are determined not only by the ratio between the thickness of the lens and the moduli of the radii of curvature, but above all by its geometric type. We have derived analytical criteria for the applicability of the model for different types…

  15. Mathematical Lens: How Much Can You Bench?

    Science.gov (United States)

    Bolognese, Chris A.

    2013-01-01

    "How Much Can You Bench?" appears in the "Mathematical Lens" section of "Mathematics Teacher." "Mathematical Lens" uses photographs as a springboard for mathematical inquiry and appears in every issue of "Mathematics Teacher." This month the mathematics behind the photograph includes finding areas…

  16. Plasma Lens for Muon and Neutrino Beams

    International Nuclear Information System (INIS)

    Kahn, S.A.; Korenev, S.; Bishai, M.; Diwan, M.; Gallardo, J.C.; Hershcovitch, A.; Johnson, B.M.

    2008-01-01

    The plasma lens is examined as an alternative to focusing horns and solenoids for use in a neutrino or muon beam facility. The plasma lens concept is based on a combined high-energy lens/target configuration. The current is fed at electrodes located upstream and downstream from the target where pion capture is needed. The current flows primarily in the plasma, which has a lower resistivity than the target. A second plasma lens section, with an additional current feed, follows the target to provide shaping of the plasma for optimum focusing. The plasma lens is immersed in an additional solenoidal magnetic field to facilitate plasma stability. The geometry of the plasma is shaped to provide optimal pion capture. Simulations of this plasma lens system have shown 25% higher neutrino production than the horn system. Plasma lenses have the additional advantages of negligible pion absorption and scattering by the lens material and reduced neutrino contamination during anti-neutrino running. Results of particle simulations using the plasma lens will be presented

  17. 16 CFR 501.1 - Camera film.

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk still...

  18. An Open Standard for Camera Trap Data

    NARCIS (Netherlands)

    Forrester, Tavis; O'Brien, Tim; Fegraus, Eric; Jansen, P.A.; Palmer, Jonathan; Kays, Roland; Ahumada, Jorge; Stern, Beth; McShea, William

    2016-01-01

    Camera traps that capture photos of animals are a valuable tool for monitoring biodiversity. The use of camera traps is rapidly increasing and there is an urgent need for standardization to facilitate data management, reporting and data sharing. Here we offer the Camera Trap Metadata Standard as an

  19. A camera specification for tendering purposes

    International Nuclear Information System (INIS)

    Lunt, M.J.; Davies, M.D.; Kenyon, N.G.

    1985-01-01

    A standardized document is described which is suitable for sending to companies which are being invited to tender for the supply of a gamma camera. The document refers to various features of the camera, the performance specification of the camera, maintenance details, price quotations for various options and delivery, installation and warranty details. (U.K.)

  20. SAAO's new robotic telescope and WiNCam (Wide-field Nasmyth Camera)

    Science.gov (United States)

    Worters, Hannah L.; O'Connor, James E.; Carter, David B.; Loubser, Egan; Fourie, Pieter A.; Sickafoose, Amanda; Swanevelder, Pieter

    2016-08-01

    The South African Astronomical Observatory (SAAO) is designing and manufacturing a wide-field camera for use on two of its telescopes. The initial concept was of a Prime focus camera for the 74" telescope, an equatorial design made by Grubb Parsons, where it would employ a 61mmx61mm detector to cover a 23 arcmin diameter field of view. However, while in the design phase, SAAO embarked on the process of acquiring a bespoke 1-metre robotic alt-az telescope with a 43 arcmin field of view, which needs a homegrown instrument suite. The Prime focus camera design was thus adapted for use on either telescope, increasing the detector size to 92mmx92mm. Since the camera will be mounted on the Nasmyth port of the new telescope, it was dubbed WiNCam (Wide-field Nasmyth Camera). This paper describes both WiNCam and the new telescope. Producing an instrument that can be swapped between two very different telescopes poses some unique challenges. At the Nasmyth port of the alt-az telescope there is ample circumferential space, while on the 74 inch the available envelope is constrained by the optical footprint of the secondary, if further obscuration is to be avoided. This forces the design into a cylindrical volume of 600mm diameter x 250mm height. The back focal distance is tightly constrained on the new telescope, shoehorning the shutter, filter unit, guider mechanism, a 10mm thick window and a tip/tilt mechanism for the detector into 100mm depth. The iris shutter and filter wheel planned for prime focus could no longer be accommodated. Instead, a compact shutter with a thickness of less than 20mm has been designed in-house, using a sliding curtain mechanism to cover an aperture of 125mmx125mm, while the filter wheel has been replaced with 2 peripheral filter cartridges (6 filters each) and a gripper to move a filter into the beam. We intend using through-vacuum wall PCB technology across the cryostat vacuum interface, instead of traditional hermetic connector-based wiring. This

  1. Observations of the Perseids 2013 using SPOSH cameras

    Science.gov (United States)

    Margonis, A.; Elgner, S.; Christou, A.; Oberst, J.; Flohrer, J.

    2013-09-01

    Earth is constantly bombarded by debris, most of which disintegrates in the upper atmosphere. The collision of a dust particle having a mass of approximately 1 g or larger with the Earth's atmosphere results in a visible streak of light in the night sky, called a meteor. Comets produce new meteoroids each time they come close to the Sun due to sublimation processes. These fresh particles move around the Sun in orbits similar to that of their parent comet, forming meteoroid streams. For this reason, the intersection of Earth's orbital path with those of different comets gives rise to a number of meteor showers throughout the year. The Perseids are one of the most prominent annual meteor showers, occurring every summer and having their origin in the Halley-type comet 109P/Swift-Tuttle. The dense core of this stream passes Earth's orbit on the 12th of August, when more than 100 meteors per hour can be seen by a single observer under ideal conditions. The Technical University of Berlin (TUB) and the German Aerospace Center (DLR), together with the Armagh Observatory, organize meteor campaigns every summer observing the activity of the Perseid meteor shower. The observations are carried out using the Smart Panoramic Optical Sensor Head (SPOSH) camera system [2], which has been developed by DLR and Jena-Optronik GmbH under an ESA/ESTEC contract. The camera was designed to image faint, short-lived phenomena on dark planetary hemispheres. The camera is equipped with a highly sensitive back-illuminated CCD chip with a pixel resolution of 1024x1024. The custom-made fish-eye lens offers a 120°x120° field-of-view (168° over the diagonal), making the monitoring of nearly the whole night sky possible (Fig. 1). This year the observations will take place between the 3rd and 10th of August to cover the meteor activity of the Perseids just before their maximum. The SPOSH cameras will be deployed at two remote sites located at high altitude on the Greek Peloponnese peninsula. The baseline of ∼50km

  2. [Crystalline lens photodisruption using femtosecond laser: experimental study].

    Science.gov (United States)

    Chatoux, O; Touboul, D; Buestel, C; Balcou, P; Colin, J

    2010-09-01

    The aim of this study was to analyze the interactions during femtosecond (fs) laser photodisruption in ex vivo porcine crystalline lenses and to study the parameters for laser interaction optimization. An experimental femtosecond laser was used. The laser characteristics were: wavelength, 1030 nm; pulse duration, 400 fs; and numerical aperture, 0.13. Specific software was created to customize and monitor any type of photoablation pattern for treatment purposes. Porcine crystalline lenses were placed in an open-sky holder filled with physiological liquid (BSS) and covered by a glass plate. A digital camera was associated with metrological software in order to magnify and quantify the results. Transmission electron microscopy (TEM) was performed on some samples to identify the microscopic plasma interactions with the lens. The optimization of parameters was investigated in terms of the optical breakdown threshold, the sizing of interactions, and the best pattern for alignments. More than 150 crystalline lenses of freshly enucleated pigs were treated. The optical breakdown threshold (OBT) was defined as the minimal energy level per pulse necessary to observe a physical interaction. In our study, the OBT varied according to the following parameters: the crystalline lens itself, varying from 4.2 to 7.6 μJ (mean, 5.1 μJ), and the depth of the laser focus, varying by up to 1 μJ and increasing with depth in the tissue. Analyzing the distance between impacts, we observed that the closer the impacts were, the less power was needed to create a clear, well-drawn defect pattern (lines), i.e., with a 4-μJ optimized OBT when the impacts were placed every 2 μm in the x,y directions and 60 μm in the z direction. Coalescent bubbles created by plasma formation always disappeared in less than 24 h. The nonthermal effect of the plasma and the innocuousness to surrounding tissues were proven by the TEM results. Crystalline lens photodisruption by the femtosecond laser seems an innovative

  3. Calibration method for projector-camera-based telecentric fringe projection profilometry system.

    Science.gov (United States)

    Liu, Haibo; Lin, Huijing; Yao, Linshen

    2017-12-11

    By combining a fringe projection setup with a telecentric lens, a fringe pattern can be projected and imaged within a small area, making it possible to measure the three-dimensional (3D) surfaces of micro-components. This paper focuses on the flexible calibration of a fringe projection profilometry (FPP) system using a telecentric lens. An analytical telecentric projector-camera calibration model is introduced, in which the rig structure parameters remain invariant for all views, and the 3D calibration target can be located on the projector image plane with sub-pixel precision. Based on the presented calibration model, a two-step calibration procedure is proposed. First, the initial parameters, e.g., the projector-camera rig, the projector intrinsic matrix, and the coordinates of the control points of a 3D calibration target, are estimated using the affine camera factorization calibration method. Second, a bundle adjustment algorithm over the various simultaneous views is applied to refine the calibrated parameters, especially the rig structure parameters and the coordinates of the control points of the 3D target. Because the control points are determined during the calibration, there is no need for an accurate 3D reference target, which is costly and extremely difficult to fabricate, particularly for the tiny objects used to calibrate a telecentric FPP system. Real experiments were performed to validate the performance of the proposed calibration method. The test results showed that the proposed approach is very accurate and reliable.
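
    A telecentric (affine) projection model of the kind such calibrations rest on can be written in the generic form below; the paper's own model may differ in its distortion and projector terms:

        \begin{pmatrix} u \\ v \end{pmatrix} = m \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} + \begin{pmatrix} t_u \\ t_v \end{pmatrix},

    where m is the effective magnification in pixels per unit length, r_ij are the first two rows of the world-to-camera rotation, and (t_u, t_v) is the image-plane offset. Unlike a pinhole model there is no division by depth, which is why the rig structure parameters can be refined independently of object distance.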

  4. LensEnt2: Maximum-entropy weak lens reconstruction

    Science.gov (United States)

    Marshall, P. J.; Hobson, M. P.; Gull, S. F.; Bridle, S. L.

    2013-08-01

    LensEnt2 is a maximum entropy reconstructor of weak lensing mass maps. The method takes each galaxy shape as an independent estimator of the reduced shear field and incorporates an intrinsic smoothness, determined by Bayesian methods, into the reconstruction. The uncertainties from both the intrinsic distribution of galaxy shapes and galaxy shape estimation are carried through to the final mass reconstruction, and the mass within arbitrarily shaped apertures is calculated with corresponding uncertainties. The input is a galaxy ellipticity catalog with each measured galaxy shape treated as a noisy tracer of the reduced shear field, which is inferred on a fine pixel grid assuming positivity, and smoothness on scales of w arcsec where w is an input parameter. The ICF width w can be chosen by computing the evidence for it.
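
    As a rough illustration of the input data such a reconstructor works with (not of LensEnt2's Bayesian maximum-entropy machinery itself), the sketch below grids a catalog of galaxy ellipticities into a noisy map of the reduced shear by smoothing on a scale w; all names and default values are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def naive_shear_map(x, y, e1, e2, pixel_arcsec=10.0, w_arcsec=60.0, size=64):
            """Bin galaxy ellipticities (noisy tracers of the reduced shear) onto a
            pixel grid and smooth on scale w; a crude stand-in for a maximum-entropy
            reconstruction."""
            bins = np.linspace(0, size * pixel_arcsec, size + 1)
            counts, _, _ = np.histogram2d(x, y, bins=[bins, bins])
            g1_sum, _, _ = np.histogram2d(x, y, bins=[bins, bins], weights=e1)
            g2_sum, _, _ = np.histogram2d(x, y, bins=[bins, bins], weights=e2)
            sigma_pix = w_arcsec / pixel_arcsec
            counts_s = gaussian_filter(counts, sigma_pix)
            g1 = gaussian_filter(g1_sum, sigma_pix) / np.maximum(counts_s, 1e-9)
            g2 = gaussian_filter(g2_sum, sigma_pix) / np.maximum(counts_s, 1e-9)
            return g1, g2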

  5. Relative camera localisation in non-overlapping camera networks using multiple trajectories

    NARCIS (Netherlands)

    John, V.; Englebienne, G.; Kröse, B.J.A.

    2012-01-01

    In this article we present an automatic camera calibration algorithm using multiple trajectories in a multiple camera network with non-overlapping field-of-views (FOV). Visible trajectories within a camera FOV are assumed to be measured with respect to the camera local co-ordinate system.

  6. Solutions on a high-speed wide-angle zoom lens with aspheric surfaces

    Science.gov (United States)

    Yamanashi, Takanori

    2012-10-01

    Recent development in CMOS and digital camera technology has accelerated the business and market share of digital cinematography. In terms of optical design, this technology has increased the need to carefully consider pixel pitch and characteristics of the imager. When the field angle at the wide end, zoom ratio, and F-number are specified, choosing an appropriate zoom lens type is crucial. In addition, appropriate power distributions and lens configurations are required. At points near the wide end of a zoom lens, it is known that an aspheric surface is an effective means to correct off-axis aberrations. On the other hand, optical designers have to focus on manufacturability of aspheric surfaces and perform required analysis with respect to the surface shape. Centration errors aside, it is also important to know the sensitivity to aspheric shape errors and their effect on image quality. In this paper, wide angle cine zoom lens design examples are introduced and their main characteristics are described. Moreover, technical challenges are pointed out and solutions are proposed.

  7. Stereo Pinhole Camera: Assembly and experimental activities

    Directory of Open Access Journals (Sweden)

    Gilmário Barbosa Santos

    2015-05-01

    This work describes the assembly of a stereo pinhole camera for capturing stereo pairs of images and proposes experimental activities with it. A pinhole camera can be as sophisticated as you want, or so simple that it can be handcrafted from practically nothing but recyclable materials. This paper describes the practical use of the pinhole camera throughout history and at present. Aspects of the optics and geometry involved in building the stereo pinhole camera are presented with illustrations. Furthermore, experiments are proposed that use the images obtained by the camera for 3D visualization through a pair of anaglyph glasses, and the estimation of relative depth by triangulation is discussed.
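
    The depth-by-triangulation estimate mentioned above follows the usual stereo relation: for two pinholes separated by a baseline B, with the image plane a distance f behind the pinholes, a scene point that projects with a disparity d between the two images lies at a depth of approximately

        Z \approx \frac{f\,B}{d},

    so nearer points produce larger disparities. (The symbols B, f, d and Z are generic; the abstract does not give the dimensions of the assembled camera.)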

  8. Lens oscillations in the human eye. Implications for post-saccadic suppression of vision.

    Directory of Open Access Journals (Sweden)

    Juan Tabernero

    The eye changes gaze continuously from one visual stimulus to another. Using a high-speed camera to record eye and lens movements, we demonstrate how the crystalline lens sustains an inertial, decaying oscillatory movement immediately after every change of gaze. This behavior fits precisely the motion of a classical damped harmonic oscillator. The time course of the oscillations ranges from 50 to 60 msec, with an oscillation frequency of around 20 Hz. This has dramatic implications for the image quality at the retina during the very short period (∼50 msec) that follows the movement. However, it is well known that our vision is nearly suppressed during those periods (post-saccadic suppression). Both phenomena follow similar time courses and therefore might be synchronized to avoid the visual impairment.
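
    The damped harmonic oscillator behaviour reported above can be written, in one common parametrization (the abstract does not give the authors' exact fitting form), as

        x(t) = x_\infty + A\, e^{-t/\tau} \cos(2\pi f t + \phi),

    with an oscillation frequency f of roughly 20 Hz and a decay time \tau short enough that the oscillation dies out within about 50-60 msec.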

  9. Lessons learned: wrong intraocular lens.

    Science.gov (United States)

    Schein, Oliver D; Banta, James T; Chen, Teresa C; Pritzker, Scott; Schachat, Andrew P

    2012-10-01

    To report cases involving the placement of the wrong intraocular lens (IOL) at the time of cataract surgery where human error occurred. Retrospective small case series, convenience sample. Seven surgical cases. Institutional review of errors committed and subsequent improvements to clinical protocols. Lessons learned and changes in procedures adapted. The pathways to a wrong IOL are many but largely reflect some combination of poor surgical team communication, transcription error, lack of preoperative clarity in surgical planning or failure to match the patient, and IOL calculation sheet with 2 unique identifiers. Safety in surgery involving IOLs is enhanced both by strict procedures, such as an IOL-specific "time-out," and the fostering of a surgical team culture in which all members are encouraged to voice questions and concerns. Copyright © 2012 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  10. Radiation studies in Lens culinaris

    International Nuclear Information System (INIS)

    Sinha, S.S.N.

    1977-01-01

    Estimation of chromosomal aberrations in flowers of Lens culinaris, according to their sequence of development in the plants, at 4, 8 and 12 kR in the M1 generation showed that the later-formed flowers had smaller percentages of cells with aberrations than those developed earlier. It is suggested that this may be the result of competition between more damaged and less damaged cells during the development of the shoot. There is consequently a decrease of sterility in successive flowers. The numbers of karyotypes taking part in the formation of the lower and uppermost flowers were estimated cytologically at 4, 8 and 12 kR. It was found that more karyotypes were involved in the formation of the lower flowers than in the upper ones. It appeared that at lower doses larger numbers of karyotypes were taking part in the formation of the chimaera than at higher doses. (auth.)

  11. Collimator changer for scintillation camera

    International Nuclear Information System (INIS)

    Jupa, E.C.; Meeder, R.L.; Richter, E.K.

    1976-01-01

    A collimator changing assembly mounted on the support structure of a scintillation camera is described. A vertical support column is positioned proximate the detector support column, with a plurality of support arms mounted thereon in a rotatable, cantilevered manner at separate vertical positions. Each support arm is adapted to carry one of the plurality of collimators, which are interchangeably mountable on the underside of the detector, and to transport the collimator between a store position remote from the detector and a change position underneath said detector

  12. Robot Tracer with Visual Camera

    Science.gov (United States)

    Jabbar Lubis, Abdul; Dwi Lestari, Yuyun; Dafitri, Haida; Azanuddin

    2017-12-01

    A robot is a versatile tool that can take over human work functions. The robot is a device that can be reprogrammed according to user needs. A wireless network for remote monitoring can be used to build a robot whose movement can be monitored against a blueprint and whose chosen path can be tracked. This information is sent over the wireless network. For vision, the robot uses a high-resolution camera, making it easier for the operator to control the robot and see the surrounding environment.

  13. Precision lens assembly with alignment turning system

    Science.gov (United States)

    Ho, Cheng-Fang; Huang, Chien-Yao; Lin, Yi-Hao; Kuo, Hui-Jean; Kuo, Ching-Hsiang; Hsu, Wei-Yao; Chen, Fong-Zhi

    2017-10-01

    Poker chip assembly with high-precision lens barrels is widely applied to ultra-high-performance optical systems. ITRC applies poker chip assembly technology to high numerical aperture objective lenses and lithography projection lenses because of its highly efficient assembly process. In order to achieve high-precision lens cells for poker chip assembly, an alignment turning system (ATS) has been developed. The ATS includes measurement, alignment, and turning modules. The measurement module is equipped with a non-contact displacement sensor (NCDS) and an autocollimator (ACM). The NCDS and ACM are used to measure the centration errors of the top and bottom surfaces of a lens, respectively; the required adjustments in displacement and tilt with respect to the rotational axis of the turning machine can then be determined for the alignment module. After the measurement, alignment, and turning processes on the ATS, the centration error of a lens cell 200 mm in diameter can be controlled to within 10 arcsec. Furthermore, a poker chip assembly lens cell with three sub-cells is demonstrated; each sub-cell is measured and then aligned and turned. The lens assembly test was performed five times by each of three technicians; the average transmission centration error of the assembled lens is 12.45 arcsec. The results show that the ATS can achieve high assembly efficiency for precision optical systems.

  14. Bioinspired adaptive gradient refractive index distribution lens

    Science.gov (United States)

    Yin, Kezhen; Lai, Chuan-Yar; Wang, Jia; Ji, Shanzuo; Aldridge, James; Feng, Jingxing; Olah, Andrew; Baer, Eric; Ponting, Michael

    2018-02-01

    Inspired by the soft, deformable human eye lens, a synthetic polymer gradient refractive index distribution (GRIN) lens with adaptive geometry and focal power has been demonstrated via multilayer coextrusion and thermoforming of nanolayered elastomeric polymer films. A set of 30 polymer nanolayered films composed of two thermoplastic polyurethanes with a refractive index difference of 0.05 was coextruded via a forced-assembly technique. The set of 30 nanolayered polymer films exhibited transmission near 90%, with each film varying in refractive index by 0.0017. An adaptive GRIN lens was fabricated from a laminated stack of the variable refractive index films with a 0.05 spherical GRIN. This lens was subsequently deformed by mechanical ring compression. The variation in the optical properties of the deformable GRIN lens was determined, including a 20% variation in focal length and reduced spherical aberration. These properties were measured and compared to results simulated by Placido-cone topography and ANSYS methods. The demonstration of a solid-state, dynamic-focal-length GRIN lens with improved aberration correction is discussed relative to its potential future use in implantable devices.

  15. The central corneal light reflex ratio from photographs derived from a digital camera in young adults.

    Science.gov (United States)

    Duangsang, Suampa; Tengtrisorn, Supaporn

    2012-05-01

    To determine the normal range of the Central Corneal Light Reflex Ratio (CCLRR) from photographs of young adults. A digital camera equipped with a telephoto lens, with a flash attachment placed directly above the lens, was used to obtain corneal light reflex photographs of 104 subjects, first with the subject fixating on the lens of the camera at a distance of 43 centimeters, and then while looking past the camera to a wall at a distance of 5.4 meters. Digital images were displayed using Adobe Photoshop at a magnification of 1200%. The CCLRR was the ratio of the sum of the distances between the inner margin of the cornea and the central corneal light reflex of each eye to the sum of the horizontal corneal diameters of each eye. Measurements were made by three technicians on all subjects and repeated on a 16% (n=17) subsample. Mean ratios (standard deviation, SD) from near/distance measurements were 0.468 (0.012)/0.452 (0.019). Limits of the normal range, with 95% certainty, were 0.448 and 0.488 for near measurements and 0.419 and 0.484 for distance measurements. Lower and upper indeterminate zones were 0.440-0.447 and 0.489-0.497 for near measurements and 0.406-0.418 and 0.485-0.497 for distance measurements. More extreme values can be considered abnormal. The reproducibility and repeatability of the test were good. This method is easy to perform and has potential for use in strabismus screening by paramedical personnel.
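
    A minimal sketch of the ratio defined above, assuming the four distances have already been measured from the photograph in a common unit (function and variable names are illustrative, not from the paper):

        def cclrr(reflex_to_inner_margin_od, reflex_to_inner_margin_os,
                  corneal_diameter_od, corneal_diameter_os):
            """Central Corneal Light Reflex Ratio: sum of the distances from the
            inner corneal margin to the corneal light reflex of each eye, divided
            by the sum of the horizontal corneal diameters of each eye."""
            return ((reflex_to_inner_margin_od + reflex_to_inner_margin_os)
                    / (corneal_diameter_od + corneal_diameter_os))

        # Example against the reported near-fixation normal range (0.448-0.488);
        # the measurement values here are made up for illustration.
        ratio = cclrr(5.4, 5.5, 11.6, 11.7)
        print(ratio, 0.448 <= ratio <= 0.488)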

  16. Design of a hyperbolic microwave metallic lens

    International Nuclear Information System (INIS)

    Uckan, T.

    1979-12-01

    Due to problems caused by multiple reflections off the cavity walls of the EBT fusion research device, the use of a horn becomes important for the directivity of waves in the millimetric range. An ordinary dielectric lens cannot be used because of plasma-wall interactions. Microwave metallic lenses, designed to focus the energy into a plane wave, can improve the directivity considerably. Implementing a 70-GHz standard-gain horn with a delay-type hyperbolic lens, which consists of a solid metallic disk with a number of small, equal-size holes, has indicated a gain of 15 dB over the no-lens case

  17. Accelerating convergence in automatic lens design

    International Nuclear Information System (INIS)

    Brixner, B.

    1981-01-01

    Among the various factors that slow lens optimization (insufficient performance targets, the absence of a unique solution, false local minima, a poorly scaled change vector, failure to find the optimum damping number, and failure to equalize the parameter gradients), the importance of parameter gradient equalization has been insufficiently recognized. Gradients can be approximately equalized by scaling the lens to a suitable size while it is being optimized. For best results, the size of the damping number should also be optimized during each iteration. If these two procedures are followed, scaling the change vector is usually not crucial. To illustrate the importance of parameter equalization, a lens optimization is analyzed
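
    Optimizing the damping number at each iteration, as described above, is commonly realized as a damped least-squares step; the sketch below is a generic illustration of that idea (not the author's code), scanning a set of trial damping values and keeping the one that most reduces the merit function. The callables residuals and jacobian are assumed to be supplied by the lens-design program.

        import numpy as np

        def damped_ls_step(residuals, jacobian, params,
                           dampings=np.logspace(-6, 2, 25)):
            """One damped least-squares iteration: for each trial damping value p,
            solve (J^T J + p I) dx = -J^T r and keep the damping that yields the
            lowest merit function (sum of squared residuals)."""
            r = residuals(params)
            J = jacobian(params)
            JTJ, JTr = J.T @ J, J.T @ r
            best_merit, best_params = np.sum(r**2), params
            for p in dampings:
                dx = np.linalg.solve(JTJ + p * np.eye(JTJ.shape[0]), -JTr)
                trial = params + dx
                merit = np.sum(residuals(trial)**2)
                if merit < best_merit:
                    best_merit, best_params = merit, trial
            return best_params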

  18. Transferability of glass lens molding

    Science.gov (United States)

    Katsuki, Masahide

    2006-02-01

    Spherical lenses have been used for a long time, but it is well known that they inherently suffer from spherical aberration, coma, and other aberrations, so aspheric lenses have recently attracted attention. Plastic lenses are molded easily with injection machines, are relatively low cost, and are suitable for mass production. On the other hand, glass lenses have several excellent features such as a high refractive index, heat resistance, and so on. Many aspheric glass lenses have come to be used in the latest digital cameras and mobile phone camera modules. It is very difficult to produce aspheric glass lenses by the conventional process of curve generating and polishing. To solve this problem, the glass molding machine was developed and is spreading through the market. A high-precision mold is necessary to mold glass lenses with a glass molding machine. The mold core is ground or turned by a high-precision NC aspheric generator. To obtain higher transferability from the mold core, the functioning of the molding machine and the molding conditions are very important. However, because of the high molding temperature, thermal expansion and contraction of the mold and glass material are factors that are hard to avoid. In this session, I introduce the following items. [1] The technology of glass molding and the molding machine is introduced. [2] The transferability of glass molding is analyzed with data from molded glass lenses. [3] Compensation of the molding shape error is discussed with examples.

  19. Using DSLR cameras in digital holography

    Science.gov (United States)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the two-dimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique; the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easily accessible alternative that is worthwhile to explore. DSLR cameras are a widespread, commercially available option that, in comparison with traditional scientific cameras, offer a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information is different from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the object-replication problem reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented, and a theoretical explanation of the replication problem based on Fourier theory is also given. Experimental results of a DH implementation using a DSLR camera show the replication problem.

  20. Human tracking over camera networks: a review

    Science.gov (United States)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. The task of tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances in human tracking over camera networks. Two important functional modules for human tracking over camera networks are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed from two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed from the aspects of human re-identification, camera-link model-based tracking, and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on an analysis of the current progress made in human tracking techniques over camera networks.

  1. Image compensation for camera and lighting variability

    Science.gov (United States)

    Daley, Wayne D.; Britton, Douglas F.

    1996-12-01

    With the current trend of integrating machine vision systems into industrial manufacturing and inspection applications comes the issue of camera and illumination stabilization. Unless each application is built around a particular camera and a highly controlled lighting environment, the interchangeability of cameras and fluctuations in lighting become a problem, as each camera usually has a different response. An empirical approach is proposed in which color tile data are acquired using the camera of interest, and a mapping to some predetermined reference image is developed using neural networks. A similar analytical approach, based on a rough analysis of the imaging systems, is also considered for deriving a mapping between cameras. Once a mapping has been determined, all data from one camera are mapped to correspond to the images of the other prior to performing any processing on the data. Instead of writing separate image processing algorithms for the particular image data being received, the image data are adjusted based on each particular camera and lighting situation. All that is required when swapping cameras is the new mapping for the camera being inserted. The image processing algorithms can remain the same, as the input data have been adjusted appropriately. The results of utilizing this technique are presented for an inspection application.
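
    A much simpler stand-in for the neural-network mapping described above is a linear least-squares colour correction fitted from the same colour-tile data; the sketch below (names and data are illustrative assumptions) maps RGB readings from one camera into the reference camera's space. As in the abstract, the fitted mapping would be applied to every frame before the unchanged inspection algorithms run.

        import numpy as np

        def fit_camera_mapping(rgb_measured, rgb_reference):
            """Fit a 3x4 affine transform (3x3 matrix plus offset) that maps the
            colour-tile readings of one camera onto the reference camera,
            by least squares. Both inputs are N x 3 arrays."""
            X = np.hstack([rgb_measured, np.ones((len(rgb_measured), 1))])  # N x 4
            M, *_ = np.linalg.lstsq(X, rgb_reference, rcond=None)           # 4 x 3
            return M

        def apply_mapping(M, rgb):
            """Apply the fitted mapping to an N x 3 array of pixel values."""
            rgb = np.atleast_2d(rgb)
            return np.hstack([rgb, np.ones((len(rgb), 1))]) @ M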

  2. Optimising camera traps for monitoring small mammals.

    Directory of Open Access Journals (Sweden)

    Alistair S Glen

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so that capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were (1) trigger speed, (2) passive infrared vs. microwave sensor, (3) white vs. infrared flash, and (4) still photographs vs. video. We also tested a new approach to standardising each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2-2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps.

  3. 21 CFR 886.1395 - Diagnostic Hruby fundus lens.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Diagnostic Hruby fundus lens. 886.1395 Section 886...) MEDICAL DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1395 Diagnostic Hruby fundus lens. (a) Identification. A diagnostic Hruby fundus lens is a device that is a 55 diopter lens intended for use in the...

  4. 21 CFR 886.5844 - Prescription spectacle lens.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Prescription spectacle lens. 886.5844 Section 886...) MEDICAL DEVICES OPHTHALMIC DEVICES Therapeutic Devices § 886.5844 Prescription spectacle lens. (a) Identification. A prescription spectacle lens is a glass or plastic device that is a lens intended to be worn by...

  5. 21 CFR 886.1390 - Flexible diagnostic Fresnel lens.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Flexible diagnostic Fresnel lens. 886.1390 Section... (CONTINUED) MEDICAL DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1390 Flexible diagnostic Fresnel lens. (a) Identification. A flexible diagnostic Fresnel lens is a device that is a very thin lens which has...

  6. Canine and feline fundus photography and videography using a nonpatented 3D printed lens adapter for a smartphone.

    Science.gov (United States)

    Espinheira Gomes, Filipe; Ledbetter, Eric

    2018-05-11

    To describe an indirect funduscopy imaging technique for dogs and cats using low-cost and widely available equipment: a smartphone, a three-dimensional (3D) printed indirect lens adapter, and a 40 diopter (D) indirect ophthalmoscopy lens. Fundus videography was performed in dogs and cats using a 40D indirect ophthalmoscopy lens and a smartphone fitted with a 3D printed indirect lens adapter. All animals were pharmacologically dilated with topical tropicamide 1% solution. Eyelid opening and video recording were performed using the standard binocular indirect ophthalmoscopy technique. All videos were uploaded to a computer, and still images were selected and acquired for archiving purposes. Fundic images were manipulated to represent the true anatomy of the fundus. It was possible to promptly obtain good quality images of normal and diseased retinas using the nonpatented 3D printed lens adapter for a smartphone. Fundic imaging using a smartphone can be performed with minimal investment. This simple imaging modality can be used by veterinary ophthalmologists and general practitioners to acquire, archive, and share images of the retina. The quality of the images obtained will likely improve with developments in smartphone camera software and hardware. © 2018 American College of Veterinary Ophthalmologists.

  7. Function and Evolutionary Origin of Unicellular Camera-Type Eye Structure

    KAUST Repository

    Hayakawa, Shiho; Takaku, Yasuharu; Hwang, Jung Shan; Horiguchi, Takeo; Suga, Hiroshi; Gehring, Walter; Ikeo, Kazuho; Gojobori, Takashi

    2015-01-01

    The ocelloid is an extraordinary eyespot organelle found only in the dinoflagellate family Warnowiaceae. It contains retina- and lens-like structures called the retinal body and the hyalosome. The ocelloid has been an evolutionary enigma because of its remarkable resemblance to the multicellular camera-type eye. To determine whether the ocelloid is functionally photoreceptive, we investigated the warnowiid dinoflagellate Erythropsidinium. Here, we show that the morphology of the retinal body changed depending on the illumination conditions and that the hyalosome manifests a refractile nature. Identifying a rhodopsin gene fragment in Erythropsidinium ESTs that is expressed in the retinal body by in situ hybridization, we also show that ocelloids are indeed light-sensitive photoreceptors. The rhodopsin gene identified is most closely related to bacterial rhodopsins. Taken together, we suggest that the ocelloid is an intracellular camera-type eye, which might be of endosymbiotic origin. © 2015 Hayakawa et al.

  8. Sky light polarization detection with linear polarizer triplet in light field camera inspired by insect vision.

    Science.gov (United States)

    Zhang, Wenjing; Cao, Yu; Zhang, Xuanzhe; Liu, Zejin

    2015-10-20

    Stable information from the sky light polarization pattern can be used for navigation, with various advantages such as better anti-interference performance, no cumulative error effect, and so on. However, existing methods of sky light polarization measurement either have poor real-time performance or require a complex system. Inspired by the navigational capability of Cataglyphis ants with their compound eyes, we introduce a new approach to acquire the all-sky image under different polarization directions with one camera and without a rotating polarizer, so as to detect the polarization pattern across the full sky in a single snapshot. Our system is based on a handheld light field camera with a wide-angle lens and a triplet linear polarizer placed over its aperture stop. Experimental results agree with the theoretical predictions. Not only real-time detection but also the simple, low-cost architecture demonstrates the superiority of the approach proposed in this paper.
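
    A linear-polarizer-triplet measurement can be reduced with the standard relation I(θ) = ½(S0 + S1 cos 2θ + S2 sin 2θ) for light behind an ideal linear polarizer at orientation θ. The sketch below assumes the three polarizers sit at 0°, 60° and 120° (the actual orientations used in the paper are not stated in the abstract) and recovers the degree and angle of linear polarization per pixel.

        import numpy as np

        def stokes_from_triplet(I0, I60, I120, angles_deg=(0.0, 60.0, 120.0)):
            """Recover the linear Stokes parameters (S0, S1, S2) per pixel from three
            images taken behind linear polarizers at known orientations, then derive
            the degree (DoLP) and angle (AoP) of linear polarization."""
            shape = np.shape(I0)
            th = np.deg2rad(np.asarray(angles_deg))
            A = 0.5 * np.stack([np.ones_like(th), np.cos(2 * th), np.sin(2 * th)], axis=1)
            I = np.stack([np.ravel(I0), np.ravel(I60), np.ravel(I120)])   # 3 x Npix
            S0, S1, S2 = np.linalg.solve(A, I)
            dolp = np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-12)
            aop = 0.5 * np.arctan2(S2, S1)
            return dolp.reshape(shape), aop.reshape(shape)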

  9. Measuring high-resolution sky luminance distributions with a CCD camera.

    Science.gov (United States)

    Tohsing, Korntip; Schrempf, Michael; Riechelmann, Stefan; Schilke, Holger; Seckmeyer, Gunther

    2013-03-10

    We describe how sky luminance can be derived from a newly developed hemispherical sky imager (HSI) system. The system contains a commercial compact charge-coupled device (CCD) camera equipped with a fish-eye lens. The projection of the camera system has been found to be nearly equidistant. The luminance from the high dynamic range images has been calculated and then validated against luminance data measured by a CCD array spectroradiometer. The deviation between the two datasets is less than 10% for cloudless and completely overcast skies, and no more than 20% for all sky conditions. The global illuminance derived from the HSI pictures deviates by less than 5% and 20% under cloudless and cloudy skies, respectively, for solar zenith angles less than 80°. This system is therefore capable of measuring sky luminance with a high spatial resolution of more than a million pixels and a temporal resolution of 20 s.
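
    Because the projection was found to be nearly equidistant, each pixel maps to a sky direction through the relation "radius proportional to zenith angle". A sketch of that mapping follows; the parameter names are illustrative and the instrument's calibration values are not given in the abstract.

        import numpy as np

        def pixel_to_sky(u, v, cx, cy, r_horizon_px):
            """Map pixel coordinates of an equidistant fisheye image to zenith and
            azimuth angles, assuming the zenith projects to (cx, cy) and the horizon
            (zenith angle 90 deg) to a circle of radius r_horizon_px."""
            dx, dy = u - cx, v - cy
            r = np.hypot(dx, dy)
            zenith = (r / r_horizon_px) * (np.pi / 2)   # equidistant: angle proportional to radius
            azimuth = np.arctan2(dy, dx)
            return zenith, azimuth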

  10. Function and Evolutionary Origin of Unicellular Camera-Type Eye Structure

    KAUST Repository

    Hayakawa, Shiho

    2015-03-03

    The ocelloid is an extraordinary eyespot organelle found only in the dinoflagellate family Warnowiaceae. It contains retina- and lens-like structures called the retinal body and the hyalosome. The ocelloid has been an evolutionary enigma because of its remarkable resemblance to the multicellular camera-type eye. To determine whether the ocelloid is functionally photoreceptive, we investigated the warnowiid dinoflagellate Erythropsidinium. Here, we show that the morphology of the retinal body changed depending on the illumination conditions and that the hyalosome manifests a refractile nature. Identifying a rhodopsin gene fragment in Erythropsidinium ESTs that is expressed in the retinal body by in situ hybridization, we also show that ocelloids are indeed light-sensitive photoreceptors. The rhodopsin gene identified is most closely related to bacterial rhodopsins. Taken together, we suggest that the ocelloid is an intracellular camera-type eye, which might be of endosymbiotic origin. © 2015 Hayakawa et al.

  11. Augmented reality glass-free three-dimensional display with the stereo camera

    Science.gov (United States)

    Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

    An improved method for augmented reality (AR) glass-free three-dimensional (3D) display based on a stereo camera, used for presenting parallax content from different angles with a lenticular lens array, is proposed. Compared with the previous implementation of AR techniques based on a two-dimensional (2D) panel display with only one viewpoint, the proposed method can realize glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers can get rich 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved method based on a stereo camera can realize AR glass-free 3D display, and both the virtual objects and the real scene show realistic and pronounced stereo performance.

  12. Measurement of Crystalline Lens Volume During Accommodation in a Lens Stretcher.

    Science.gov (United States)

    Marussich, Lauren; Manns, Fabrice; Nankivil, Derek; Maceo Heilman, Bianca; Yao, Yue; Arrieta-Quintero, Esdras; Ho, Arthur; Augusteyn, Robert; Parel, Jean-Marie

    2015-07-01

    To determine if the lens volume changes during accommodation. The study used data acquired on 36 cynomolgus monkey lenses that were stretched in a stepwise fashion to simulate disaccommodation. At each step, stretching force and dioptric power were measured and a cross-sectional image of the lens was acquired using an optical coherence tomography system. Images were corrected for refractive distortions and lens volume was calculated assuming rotational symmetry. The average change in lens volume was calculated and the relation between volume change and power change, and between volume change and stretching force, were quantified. Linear regressions of volume-power and volume-force plots were calculated. The mean (± SD) volume in the unstretched (accommodated) state was 97 ± 8 mm3. On average, there was a small but statistically significant (P = 0.002) increase in measured lens volume with stretching. The mean change in lens volume was +0.8 ± 1.3 mm3. The mean volume-power and volume-load slopes were -0.018 ± 0.058 mm3/D and +0.16 ± 0.40 mm3/g. Lens volume remains effectively constant during accommodation, with changes that are less than 1% on average. This result supports a hypothesis that the change in lens shape with accommodation is accompanied by a redistribution of tissue within the capsular bag without significant compression of the lens contents or fluid exchange through the capsule.
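
    The volume computation described above (rotational symmetry assumed) amounts to integrating the squared semi-diameter of the distortion-corrected cross-section along the optical axis. A minimal sketch follows; the profile used here is a made-up ellipsoid, not a real monkey-lens contour.

        import numpy as np

        def lens_volume(z_mm, radius_mm):
            """Volume of a rotationally symmetric lens from its cross-sectional
            profile: V = pi * integral of r(z)^2 dz (trapezoidal rule)."""
            return np.pi * np.trapz(np.asarray(radius_mm)**2, np.asarray(z_mm))

        # Toy check: an ellipsoid with 9 mm equatorial diameter and 4 mm thickness
        # has V = (4/3)*pi*4.5*4.5*2.0, about 169.6 mm^3.
        z = np.linspace(-2.0, 2.0, 2001)
        r = 4.5 * np.sqrt(np.clip(1 - (z / 2.0)**2, 0, None))
        print(lens_volume(z, r))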

  13. AutoLens: Automated Modeling of a Strong Lens's Light, Mass and Source

    Science.gov (United States)

    Nightingale, J. W.; Dye, S.; Massey, Richard J.

    2018-05-01

    This work presents AutoLens, the first entirely automated modeling suite for the analysis of galaxy-scale strong gravitational lenses. AutoLens simultaneously models the lens galaxy's light and mass whilst reconstructing the extended source galaxy on an adaptive pixel-grid. The method's approach to source-plane discretization is amorphous, adapting its clustering and regularization to the intrinsic properties of the lensed source. The lens's light is fitted using a superposition of Sersic functions, allowing AutoLens to cleanly deblend its light from the source. Single-component mass models representing the lens's total mass density profile are demonstrated, which in conjunction with light modeling can detect central images using a centrally cored profile. Decomposed mass modeling is also shown, which can fully decouple a lens's light and dark matter and determine whether the two components are geometrically aligned. The complexity of the light and mass models is chosen automatically via Bayesian model comparison. These steps form AutoLens's automated analysis pipeline, such that all results in this work are generated without any user intervention. This is rigorously tested on a large suite of simulated images, assessing its performance on a broad range of lens profiles, source morphologies, and lensing geometries. The method's performance is excellent, with accurate light, mass, and source profiles inferred for data sets representative of both existing Hubble imaging and future Euclid wide-field observations.
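
    For reference, the Sersic function commonly used for galaxy light profiles (AutoLens's exact parametrization, e.g. with ellipticity and centering parameters, is not spelled out in the abstract) is

        I(r) = I_e \exp\!\left\{-b_n\left[\left(r/R_e\right)^{1/n} - 1\right]\right\},

    where R_e is the effective (half-light) radius, I_e the intensity at R_e, n the Sersic index, and b_n a constant fixed so that half the total light falls within R_e.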

  14. Placement of a crystalline lens and intraocular lens: Retinal image quality.

    Science.gov (United States)

    Siedlecki, Damian; Nowak, Jerzy; Zajac, Marek

    2006-01-01

    The influence of both crystalline lens and intraocular lens (IOL) misalignment on retinal image quality was investigated. The optical model of the eye used in the investigation was the Liou-Brennan model, which is commonly considered one of the most anatomically accurate. The original crystalline lens of this model was replaced with an IOL made of rigid polymethylmethacrylate, in accordance with recommended procedures. The modifications made to both the crystalline lens and the IOL were longitudinal, transverse, and angular displacements.

  15. Development of Fresnel lens for improvement of rear visibility; Shikai kojo Fresnel lens no kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    Iwamoto, K; Sanada, C; Tsukino, M [Nissan Motor Co. Ltd., Tokyo (Japan)

    1997-10-01

    Fresnel lenses have been widely used to increase drivers' field of view around vehicles. However, internal reflection in these lenses has been an obstacle to producing clear images. This internal glow is generated by incident light from an unexpected direction reflecting off the non-lens surface or radiating from the non-lens surface of the Fresnel lens. The cause of the internal glow has been clarified, and it is reduced by combining a louver film with the lens. The newly developed technology removes the obstacles to producing clear images by reducing internal glow. 7 figs.

  16. Disinfection capacity of PuriLens contact lens cleaning unit against Acanthamoeba.

    Science.gov (United States)

    Hwang, Thomas S; Hyon, Joon Young; Song, Jae Kyung; Reviglio, Victor E; Spahr, Harry T; O'Brien, Terrence P

    2004-01-01

    The PuriLens contact lens system is indicated for the cleaning and disinfection of soft (hydrophilic) contact lenses by means of subsonic agitation to remove lens deposits and microorganisms, and ultraviolet irradiation of the storage solution for disinfection. The capacity of the PuriLens system to disinfect storage solutions contaminated with known concentrations of Staphylococcus aureus, Pseudomonas aeruginosa, and Acanthamoeba species was evaluated. An in vitro assessment of the antibacterial and antiparasitic efficacy of the PuriLens system was performed. Separate batches of the storage solution for the cleansing system were contaminated with stock strains of S. aureus and P. aeruginosa. The microbiologic content of the solution was compared before and after the cycle. The PuriLens system effectively eradicated S. aureus and P. aeruginosa organisms after a 15-minute cycle. However, viable Acanthamoeba cysts were recovered from the solution after the 15-minute cycle. The PuriLens system is highly efficient in protecting against contamination with common bacterial ocular pathogens. Acanthamoeba cysts, however, can survive in the solution or contact lens bath undergoing integrated subsonic debridement and indirect ultraviolet light disinfection. Use of chemical disinfecting solutions that contain agents such as chlorhexidine or other cationic antiseptics may be advisable in conjunction with use of the PuriLens device, especially in high-risk settings.

  17. Night Vision Goggles Objectives Lens Focusing Methodology

    National Research Council Canada - National Science Library

    Pinkus, Alan; Task, H. L

    2000-01-01

    ...: interpupillary distance, tilt, eye relief, height, eyepiece and objective lens focus. Currently, aircrew use a Hoffman 20/20 test unit to pre-focus their NVG objective lenses at optical infinity before boarding their aircraft...

  18. Characteristics of soft X-ray lens

    International Nuclear Information System (INIS)

    Qin Yi

    2007-12-01

    A soft X-ray lens was devised with waveguide X-ray optics based on total external reflection (TER). The lens consists of a stack of 1,387 TER waveguides with an inner diameter of 0.45 mm and an outer diameter of 0.60 mm. With the help of plasma sources of soft X-ray radiation, a high density of pure soft X-ray radiation (without plasma expansion fragments) over a broad-band spectral range can be obtained at the focus of the lens. When a laser-plasma source is considered, a radiation density of 1.3 x 10^5 W/cm^2 is obtained, the transmission coefficient is 18.6%, the ratio of the density at the focus with and without the lens is 1000, and the radiation capture angle is 28.9 degrees. A density of 0.5 TW/cm^2 can be obtained when the Qiang-Guang I facility is considered. (authors)

  19. Gravitational lens effect and pregalactic halo objects

    International Nuclear Information System (INIS)

    Bontz, R.J.

    1979-01-01

    The changes in flux, position, and size of a distant extended source (galaxy, etc.) that result from the gravitational lens action of a massive opaque object are discussed. The flux increase is described by a single function of two parameters. One of these parameters characterizes the strength of the gravitational lens, the other describes the alignment of the source and the lens object. This function also describes the relative intensity of the images formed by the lens. (A similar formalism is discussed by Bourassa et al. for a point source.) The formalism is applied to the problem of the galactic halo. It appears that a massive (10^12 M_sun) spherical halo surrounding the visible part of the galaxy is consistent with the observable properties of extragalactic sources
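
    For the point-source case mentioned above (the Bourassa et al. formalism), the familiar point-mass lens result expresses the flux amplification through the source-lens alignment \beta and the lens strength via the Einstein radius \theta_E; it is quoted here for comparison only and is not necessarily the exact two-parameter function derived for extended sources in this paper:

        A(u) = \frac{u^2 + 2}{u\sqrt{u^2 + 4}}, \qquad u = \frac{\beta}{\theta_E}.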

  20. Discovery of two new gravitation lens systems

    International Nuclear Information System (INIS)

    Guertler, J.

    1988-01-01

    The discovery of new quasar and radio galaxy double images produced by the gravitational lens effect is reported. The light-deflecting galaxies acting as gravitational lenses could be made visible by means of image processing procedures

  1. A Plasma Lens for Magnetron Sputtering

    International Nuclear Information System (INIS)

    Anders, Andre; Brown, Jeff

    2010-01-01

    A plasma lens, consisting of a solenoid and potential-defining ring electrodes, has been placed between a magnetron and substrates to be coated. Photography reveals qualitative information on excitation, ionization, and the transport of plasma to the substrate.

  2. The discovery of a gravitational lens

    International Nuclear Information System (INIS)

    Chaffee, F.H. Jr.

    1981-01-01

    A recently discovered pair of quasars turns out to be not a pair at all but two images of a single quasar formed by a gravitational lens: an elliptical galaxy halfway between the quasar and our own galaxy. (orig.) [de

  3. EDEL: ENEA dosemeter for eye lens

    International Nuclear Information System (INIS)

    Ferrari, Paolo; Mariotti, Francesca; Campani, Lorenzo

    2016-01-01

    Since the publication of the International Commission on Radiological Protection statement on tissue reactions in 2011, eye lens radiation protection has played an important role in the dosimetry of exposed personnel. For this reason, the Individual Monitoring Service of the Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA) decided to study a prototype to fulfil specific requests (e.g. for surveys in interventional departments and for intercomparisons). On the basis of this preliminary investigation, a new eye lens dosemeter was developed. The new dosemeter, named EDEL (ENEA Dosemeter for Eye Lens), was characterised in terms of Hp(3), the operational quantity related to eye lens monitoring. The investigation was performed experimentally and optimised using the Monte Carlo MCNP6 code. The new prototype was designed to fulfil two main requirements: the reliability of the dosimetric data and the portability of the dosemeter itself. The new dosemeter will soon be supplied to the collaborating hospitals for workplace test measurements. (authors)

  4. Role of Aquaporin 0 in lens biomechanics

    Energy Technology Data Exchange (ETDEWEB)

    Sindhu Kumari, S.; Gupta, Neha [Physiology and Biophysics, Stony Brook University, Stony Brook, NY (United States); Shiels, Alan [Washington University School of Medicine, St. Louis, MO (United States); FitzGerald, Paul G. [Cell Biology and Human Anatomy, School of Medicine, University of California, Davis, CA (United States); Menon, Anil G. [University of Cincinnati College of Medicine, Cincinnati, OH (United States); Mathias, Richard T. [Physiology and Biophysics, Stony Brook University, Stony Brook, NY (United States); SUNY Eye Institute, NY (United States); Varadaraj, Kulandaiappan, E-mail: kulandaiappan.varadaraj@stonybrook.edu [Physiology and Biophysics, Stony Brook University, Stony Brook, NY (United States); SUNY Eye Institute, NY (United States)

    2015-07-10

    Maintenance of proper biomechanics of the eye lens is important for its structural integrity and for the process of accommodation to focus near and far objects. Several studies have shown that specialized cytoskeletal systems such as the beaded filament (BF) and spectrin-actin networks contribute to mammalian lens biomechanics; mutations or deletion in these proteins alters lens biomechanics. Aquaporin 0 (AQP0), which constitutes ∼45% of the total membrane proteins of lens fiber cells, has been shown to function as a water channel and a structural cell-to-cell adhesion (CTCA) protein. Our recent ex vivo study on AQP0 knockout (AQP0 KO) mouse lenses showed the CTCA function of AQP0 could be crucial for establishing the refractive index gradient. However, biomechanical studies on the role of AQP0 are lacking. The present investigation used wild type (WT), AQP5 KO (AQP5⁻/⁻), AQP0 KO (heterozygous KO: AQP0⁺/⁻; homozygous KO: AQP0⁻/⁻; all in C57BL/6J) and WT-FVB/N mouse lenses to learn more about the role of fiber cell AQPs in lens biomechanics. Electron microscopic images exhibited decreases in lens fiber cell compaction and increases in extracellular space due to deletion of even one allele of AQP0. Biomechanical assay revealed that loss of one or both alleles of AQP0 caused a significant reduction in the compressive load-bearing capacity of the lenses compared to WT lenses. Conversely, loss of AQP5 did not alter the lens load-bearing ability. Compressive load-bearing at the suture area of AQP0⁺/⁻ lenses showed easy separation while WT lens suture remained intact. These data from KO mouse lenses in conjunction with previous studies on lens-specific BF proteins (CP49 and filensin) suggest that AQP0 and BF proteins could act co-operatively in establishing normal lens biomechanics. We hypothesize that AQP0, with its prolific expression at the fiber cell membrane, could provide anchorage for cytoskeletal structures like BFs and

  5. Role of Aquaporin 0 in lens biomechanics.

    Science.gov (United States)

    Sindhu Kumari, S; Gupta, Neha; Shiels, Alan; FitzGerald, Paul G; Menon, Anil G; Mathias, Richard T; Varadaraj, Kulandaiappan

    2015-07-10

    Maintenance of proper biomechanics of the eye lens is important for its structural integrity and for the process of accommodation to focus near and far objects. Several studies have shown that specialized cytoskeletal systems such as the beaded filament (BF) and spectrin-actin networks contribute to mammalian lens biomechanics; mutations or deletion in these proteins alters lens biomechanics. Aquaporin 0 (AQP0), which constitutes ∼45% of the total membrane proteins of lens fiber cells, has been shown to function as a water channel and a structural cell-to-cell adhesion (CTCA) protein. Our recent ex vivo study on AQP0 knockout (AQP0 KO) mouse lenses showed the CTCA function of AQP0 could be crucial for establishing the refractive index gradient. However, biomechanical studies on the role of AQP0 are lacking. The present investigation used wild type (WT), AQP5 KO (AQP5(-/-)), AQP0 KO (heterozygous KO: AQP0(+/-); homozygous KO: AQP0(-/-); all in C57BL/6J) and WT-FVB/N mouse lenses to learn more about the role of fiber cell AQPs in lens biomechanics. Electron microscopic images exhibited decreases in lens fiber cell compaction and increases in extracellular space due to deletion of even one allele of AQP0. Biomechanical assay revealed that loss of one or both alleles of AQP0 caused a significant reduction in the compressive load-bearing capacity of the lenses compared to WT lenses. Conversely, loss of AQP5 did not alter the lens load-bearing ability. Compressive load-bearing at the suture area of AQP0(+/-) lenses showed easy separation while WT lens suture remained intact. These data from KO mouse lenses in conjunction with previous studies on lens-specific BF proteins (CP49 and filensin) suggest that AQP0 and BF proteins could act co-operatively in establishing normal lens biomechanics. We hypothesize that AQP0, with its prolific expression at the fiber cell membrane, could provide anchorage for cytoskeletal structures like BFs and together they help to confer

  6. Evolution and the Calcite Eye Lens

    OpenAIRE

    Williams, Vernon L.

    2013-01-01

    Calcite is a uniaxial, birefringent crystal which, in its optically transparent form, has been used for animal eye lenses, the trilobite being one such animal. Because of the calcite birefringence there is a difficulty in using calcite as a lens: when the propagation direction of incoming light is not exactly along the c-axis, the images blur. In this paper, calcite blurring is evaluated, and the non-blurring crystallin eye lens is compared with a calcite one.

  7. Lens Design Using Group Indices of Refraction

    Science.gov (United States)

    Vaughan, A. H.

    1995-01-01

    An approach to lens design is described in which the ratio of the group velocity to the speed of light (the group index) in glass is used, in conjunction with the more familiar phase index of refraction, to control certain chromatic properties of a system of thin lenses in contact. The first-order design of thin-lens systems is illustrated by examples incorporating the methods described.

  8. Photogrammetric Applications of Immersive Video Cameras

    OpenAIRE

    Kwiatek, K.; Tokarczyk, R.

    2014-01-01

    The paper investigates immersive videography and its application in close-range photogrammetry. Immersive video involves the capture of a live-action scene that presents a 360° field of view. It is recorded simultaneously by multiple cameras or microlenses, where the principal point of each camera is offset from the rotating axis of the device. This issue causes problems when stitching together individual frames of video separated from particular cameras, however there are ways to ov...

  9. Movement-based Interaction in Camera Spaces

    DEFF Research Database (Denmark)

    Eriksson, Eva; Riisgaard Hansen, Thomas; Lykke-Olesen, Andreas

    2006-01-01

    In this paper we present three concepts that address movement-based interaction using camera tracking. Based on our work with several movement-based projects we present four selected applications, and use these applications to leverage our discussion and to describe our three main concepts: space, relations, and feedback. We see these as central for describing and analysing movement-based systems using camera tracking, and we show how these three concepts can be used to analyse other camera tracking applications.

  10. Performance analysis for gait in camera networks

    OpenAIRE

    Michela Goffredo; Imed Bouchrika; John Carter; Mark Nixon

    2008-01-01

    This paper deploys gait analysis for subject identification in multi-camera surveillance scenarios. We present a new method for viewpoint independent markerless gait analysis that does not require camera calibration and works with a wide range of directions of walking. These properties make the proposed method particularly suitable for gait identification in real surveillance scenarios where people and their behaviour need to be tracked across a set of cameras. Tests on 300 synthetic and real...

  11. Explosive Transient Camera (ETC) Program

    Science.gov (United States)

    Ricker, George

    1991-01-01

    Since the inception of the ETC program, a wide range of new technologies was developed to support this astronomical instrument. The prototype unit was installed at ETC Site 1. The first partially automated observations were made and some major renovations were later added to the ETC hardware. The ETC was outfitted with new thermoelectrically-cooled CCD cameras and a sophisticated vacuum manifold, which, together, made the ETC a much more reliable unit than the prototype. The ETC instrumentation and building were placed under full computer control, allowing the ETC to operate as an automated, autonomous instrument with virtually no human intervention necessary. The first fully-automated operation of the ETC was performed, during which the ETC monitored the error region of the repeating soft gamma-ray burster SGR 1806-21.

  12. Approximations to camera sensor noise

    Science.gov (United States)

    Jin, Xiaodan; Hirakawa, Keigo

    2013-02-01

    Noise is present in all image sensor data. The Poisson distribution models the stochastic nature of the photon arrival process, while it is common to approximate readout/thermal noise by additive white Gaussian noise (AWGN). Other sources of signal-dependent noise, such as Fano and quantization noise, also contribute to the overall noise profile. The question remains, however, of how best to model the combined sensor noise. Although additive Gaussian noise with signal-dependent noise variance (SD-AWGN) and Poisson corruption are two widely used models to approximate the actual sensor noise distribution, the justification given for these models is based on limited evidence. The goal of this paper is to provide a more comprehensive characterization of random noise. We conclude by presenting concrete evidence that the Poisson model is a better approximation to real camera noise than SD-AWGN. We suggest further modifications to the Poisson model that may improve the noise model.
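    As a rough illustration of the two models compared in this record, the sketch below (plain NumPy; the gain-free electron counts and the read-noise value are assumptions chosen only for demonstration) simulates Poisson shot noise plus Gaussian read noise alongside its SD-AWGN approximation for the same mean signal:

        import numpy as np

        rng = np.random.default_rng(0)
        signal = np.linspace(10.0, 4000.0, 1000)   # mean photo-electron counts (assumed range)
        read_sigma = 5.0                           # read noise in electrons (assumed value)

        # Model 1: Poisson shot noise plus additive Gaussian read noise
        poisson_model = rng.poisson(signal) + rng.normal(0.0, read_sigma, signal.shape)

        # Model 2: SD-AWGN -- Gaussian noise whose variance tracks the signal level
        sd_awgn_model = signal + rng.normal(0.0, np.sqrt(signal + read_sigma**2))

        print("Poisson+read residual std:", np.std(poisson_model - signal))
        print("SD-AWGN residual std:     ", np.std(sd_awgn_model - signal))

    Both models reproduce the signal-dependent variance; the paper's point is that their higher-order statistics differ, with the Poisson form matching measured sensor data more closely.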

  13. New camera-based microswitch technology to monitor small head and mouth responses of children with multiple disabilities.

    Science.gov (United States)

    Lancioni, Giulio E; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N; O'Reilly, Mark F; Green, Vanessa A; Furniss, Fred

    2014-06-01

    Assessing a new camera-based microswitch technology, which did not require the use of color marks on the participants' face. Two children with extensive multiple disabilities participated. The responses selected for them consisted of small, lateral head movements and mouth closing or opening. The intervention was carried out according to a multiple probe design across responses. The technology involved a computer with a CPU using a 2-GHz clock, a USB video camera with a 16-mm lens, a USB cable connecting the camera and the computer, and a special software program written in ISO C++ language. The new technology was satisfactorily used with both children. Large increases in their responding were observed during the intervention periods (i.e. when the responses were followed by preferred stimulation). The new technology may be an important resource for persons with multiple disabilities and minimal motor behavior.

  14. Orbiting objective lens telescope system and method

    International Nuclear Information System (INIS)

    Crooks, J.W. Jr.

    1984-01-01

    A large objective lens is placed in a highly eccentric orbit about the earth. The orbit and orientation of the lens are carefully chosen so that it focuses light or other radiation from a preselected astronomical object into an image which slowly moves across the surface of the earth. A row of optical sensing units is located on the surface of the earth so that the image focused by the orbiting objective lens will travel substantially perpendicularly across the row during an observation. Output data generated from the sensing units may be multiplexed and fed to a real time processor which produces display signals. Each of the sensing units provides one scan line of the image being observed. The display signals are fed to a suitable display device which produces a picture of the preselected astronomical object. The objective lens may comprise a large flexible Fresnel zone plate or a flexible convex lens carried by a bicycle wheel-type supporting structure. The lens and supporting structure may be unfolded from compact cargo configurations and rotated after being placed into orbit

  15. Photon nanojet lens: design, fabrication and characterization

    International Nuclear Information System (INIS)

    Xu, Chen; Zhang, Sichao; Shao, Jinhai; Lu, Bing-Rui; Chen, Yifang; Mehfuz, Reyad; Drakeley, Stacey; Huang, Fumin

    2016-01-01

    In this paper, a novel nanolens with super resolution, based on the photon nanojet effect through dielectric nanostructures in visible wavelengths, is proposed. The nanolens is made from plastic SU-8, consisting of parallel semi-cylinders in an array. This paper focuses on the lens designed by numerical simulation with the finite-difference time domain method and nanofabrication of the lens by grayscale electron beam lithography combined with a casting/bonding/lift-off transfer process. Monte Carlo simulation for injected charge distribution and development modeling was applied to define the resultant 3D profile in PMMA as the template for the lens shape. After the casting/bonding/lift-off process, the fabricated nanolens in SU-8 has the desired lens shape, very close to that of PMMA, indicating that the pattern transfer process developed in this work can be reliably applied not only for the fabrication of the lens but also for other 3D nanopatterns in general. The light distribution through the lens near its surface was initially characterized by a scanning near-field optical microscope, showing a well defined focusing image of designed grating lines. Such focusing function supports the great prospects of developing a novel nanolithography based on the photon nanojet effect. (paper)

  16. Contact lens rehabilitation following repaired corneal perforations

    Science.gov (United States)

    Titiyal, Jeewan S; Sinha, Rajesh; Sharma, Namrata; Sreenivas, V; Vajpayee, Rasik B

    2006-01-01

    Background: Visual outcome following repair of post-traumatic corneal perforation may not be optimal due to the presence of irregular keratometric astigmatism. We performed a study to evaluate and compare rigid gas-permeable contact lenses and spectacles in visual rehabilitation following perforating corneal injuries. Method: Eyes that had undergone repair for corneal perforating injuries, with or without lens aspiration, were fitted with rigid gas-permeable contact lenses. The fitting pattern and the improvement in visual acuity with the contact lens over spectacle correction were noted. Results: Forty eyes of 40 patients that had undergone surgical repair of post-traumatic corneal perforations were fitted with rigid gas-permeable contact lenses for visual rehabilitation. Twenty-four eyes (60%) required aphakic contact lenses. A best corrected visual acuity (BCVA) of ≥ 6/18 on the Snellen acuity chart was seen in 10 (25%) eyes with spectacle correction and 37 (92.5%) eyes with the contact lens (p < 0.001). The best-corrected visual acuity with spectacles was 0.20 ± 0.13, while that with the contact lens was 0.58 ± 0.26. All patients showed an improvement of ≥ 2 lines on the Snellen acuity chart with the contact lens over spectacles. Conclusion: Rigid gas-permeable contact lenses are a better means of rehabilitation in eyes that have an irregular cornea due to scars caused by perforating corneal injuries. PMID:16536877

  17. Decision about buying a gamma camera

    Energy Technology Data Exchange (ETDEWEB)

    Ganatra, R D

    1993-12-31

    A large part of the referral to a nuclear medicine department is usually for imaging studies. Sooner or later, the nuclear medicine specialist will be called upon to make a decision about when and what type of gamma camera to buy. There is no longer an option of choosing between a rectilinear scanner and a gamma camera as the former is virtually out of the market. The decision that one has to make is when to invest in a gamma camera, and then on what basis to select the gamma camera.

  18. Object tracking using multiple camera video streams

    Science.gov (United States)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming effects of occlusions that could result in an object in partial or full view in one camera, when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time in each and the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  19. Scintillation camera for high activity sources

    International Nuclear Information System (INIS)

    Arseneau, R.E.

    1978-01-01

    The invention described relates to a scintillation camera used for clinical medical diagnosis. Advanced recognition of many unacceptable pulses allows the scintillation camera to discard such pulses at an early stage in processing. This frees the camera to process a greater number of pulses of interest within a given period of time. Temporary buffer storage allows the camera to accommodate pulses received at a rate in excess of its maximum rated capability due to statistical fluctuations in the level of radioactivity of the radiation source measured. (U.K.)

  20. Decision about buying a gamma camera

    International Nuclear Information System (INIS)

    Ganatra, R.D.

    1992-01-01

    A large part of the referral to a nuclear medicine department is usually for imaging studies. Sooner or later, the nuclear medicine specialist will be called upon to make a decision about when and what type of gamma camera to buy. There is no longer an option of choosing between a rectilinear scanner and a gamma camera as the former is virtually out of the market. The decision that one has to make is when to invest in a gamma camera, and then on what basis to select the gamma camera

  1. Streak camera recording of interferometer fringes

    International Nuclear Information System (INIS)

    Parker, N.L.; Chau, H.H.

    1977-01-01

    The use of an electronic high-speed camera in the streaking mode to record interference fringe motion from a velocity interferometer is discussed. Advantages of this method over the photomultiplier tube-oscilloscope approach are delineated. Performance testing and data for the electronic streak camera are discussed. The velocity profile of a mylar flyer accelerated by an electrically exploded bridge, and the jump-off velocity of metal targets struck by these mylar flyers are measured in the camera tests. Advantages of the streak camera include portability, low cost, ease of operation and maintenance, simplified interferometer optics, and rapid data analysis

  2. Conservation through the economics lens.

    Science.gov (United States)

    Farley, Joshua

    2010-01-01

    Although conservation is an inherently transdisciplinary issue, there is much to be gained from examining the problem through an economics lens. Three benefits of such an approach are laid out in this paper. First, many of the drivers of environmental degradation are economic in origin, and the better we understand them, the better we can conserve ecosystems by reducing degradation. Second, economics offers us a when-to-stop rule, which is equivalent to a when-to-conserve rule. All economic production is based on the transformation of raw materials provided by nature. As the economic system grows in physical size, it necessarily displaces and degrades ecosystems. The marginal benefits of economic growth are diminishing, and the marginal costs of ecological degradation are increasing. Conceptually, we should stop economic growth and focus on conservation when the two are equal. Third, economics can help us understand how to efficiently and justly allocate resources toward conservation, and this paper lays out some basic principles for doing so. Unfortunately, the field of economics is dominated by neoclassical economics, which builds an analytical framework based on questionable assumptions and takes an excessively disciplinary and formalistic approach. Conservation is a complex problem, and analysis from individual disciplinary lenses can make important contributions to conservation only when the resulting insights are synthesized into a coherent vision of the whole. Fortunately, there are a number of emerging transdisciplines, such as ecological economics and environmental management, that are dedicated to this task.

  3. Peripheral Defocus of the Monkey Crystalline Lens With Accommodation in a Lens Stretcher

    Science.gov (United States)

    Maceo Heilman, Bianca; Manns, Fabrice; Ruggeri, Marco; Ho, Arthur; Gonzalez, Alex; Rowaan, Cor; Bernal, Andres; Arrieta, Esdras; Parel, Jean-Marie

    2018-01-01

    Purpose: To characterize the peripheral defocus of the monkey crystalline lens and its changes with accommodation. Methods: Experiments were performed on 15 lenses from 11 cynomolgus monkey eyes (age: 3.8–12.4 years, postmortem time: 33.5 ± 15.3 hours). The tissue was mounted in a motorized lens stretcher to allow for measurements of the lens in the accommodated (unstretched) and unaccommodated (stretched) states. A custom-built combined laser ray tracing and optical coherence tomography system was used to measure the paraxial on-axis and off-axis lens power for delivery angles ranging from −20° to +20° (in air). For each delivery angle, peripheral defocus was quantified as the difference between paraxial off-axis and on-axis power. The peripheral defocus of the lens was compared in the unstretched and stretched states. Results: On average, the paraxial on-axis lens power was 52.0 ± 3.4 D in the unstretched state and 32.5 ± 5.1 D in the stretched state. In both states, the lens power increased with increasing delivery angle. From 0° to +20°, the relative peripheral lens power increased by 10.7 ± 1.4 D in the unstretched state and 7.5 ± 1.6 D in the stretched state. The change in field curvature with accommodation was statistically significant; the accommodated (unstretched) lens has the greater field curvature, or relative peripheral power. Conclusions: The cynomolgus monkey lens has significant accommodation-dependent curvature of field, which suggests that the lens makes a significant contribution to the peripheral optical performance of the eye that also varies with the state of accommodation.
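    In the notation of this record, the quantity reported for each delivery angle θ is simply the relative peripheral power,

        \Delta P(\theta) = P_{\text{off-axis}}(\theta) - P_{\text{on-axis}}

    so the values quoted for +20° correspond to ΔP(20°) ≈ 10.7 D in the unstretched (accommodated) state and ≈ 7.5 D in the stretched state.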

  4. Development of the geoCamera, a System for Mapping Ice from a Ship

    Science.gov (United States)

    Arsenault, R.; Clemente-Colon, P.

    2012-12-01

    The geoCamera produces maps of the ice surrounding an ice-capable ship by combining images from one or more digital cameras with the ship's position and attitude data. Maps are produced along the ship's path, with the achievable width and resolution depending on camera mounting height as well as camera resolution and lens parameters. Our system has produced maps up to 2000 m wide at 1 m resolution. Once installed and calibrated, the system is designed to operate automatically, producing maps in near real-time and making them available to on-board users via existing information systems. The resulting small-scale maps complement existing satellite-based products as well as on-board observations. Development versions were temporarily deployed in Antarctica on the RV Nathaniel B. Palmer in 2010 and in the Arctic on the USCGC Healy in 2011. A permanent system was deployed during the summer of 2012 on the USCGC Healy. To make the system attractive to other ships of opportunity, design goals include using existing ship systems when practical, using low-cost commercial-off-the-shelf components where additional hardware is necessary, automating the process to virtually eliminate additional workload for the ship's technicians, and making the software components modular and flexible enough to allow seamless integration with a ship's particular IT system.

  5. Motionless active depth from defocus system using smart optics for camera autofocus applications

    Science.gov (United States)

    Amin, M. Junaid; Riza, Nabeel A.

    2016-04-01

    This paper describes a motionless active Depth from Defocus (DFD) system design suited for long working range camera autofocus applications. The design consists of an active illumination module that projects a scene illuminating coherent conditioned optical radiation pattern which maintains its sharpness over multiple axial distances allowing an increased DFD working distance range. The imager module of the system responsible for the actual DFD operation deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration is conducted in the laboratory which compares the effectiveness of the coherent conditioned radiation module versus a conventional incoherent active light source, and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited for camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example, in the tiny camera housings in smartphones and tablets. Applications for the proposed system include autofocus in modern day digital cameras.
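    The record does not state the defocus model used, but the classical thin-lens relation that underlies most depth-from-defocus schemes (given here only as background, with D the aperture diameter, f the focal length, s the lens-to-sensor distance and u the object distance) is

        d(u) = D\, s \left| \frac{1}{f} - \frac{1}{u} - \frac{1}{s} \right|

    where d is the blur-circle diameter on the sensor. Measuring the blur at two known lens settings gives two such equations from which u can be recovered, the kind of computation an ECVFL-based imager can perform with no mechanical motion.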

  6. Polarization sensitive camera for the in vitro diagnostic and monitoring of dental erosion

    Science.gov (United States)

    Bossen, Anke; Rakhmatullina, Ekaterina; Lussi, Adrian; Meier, Christoph

    Due to the frequent consumption of acidic food and beverages, the prevalence of dental erosion is increasing worldwide. In the initial erosion stage, the hard dental tissue is softened by acidic demineralization. As erosion progresses, gradual tissue wear occurs, resulting in thinning of the enamel. Complete loss of the enamel tissue can be observed in severe clinical cases. It is therefore essential to provide a diagnostic tool for accurate detection and monitoring of dental erosion already at early stages. In this manuscript, we present the development of a polarization-sensitive imaging camera for the visualization and quantification of dental erosion. The system consists of two CMOS cameras mounted on two sides of a polarizing beamsplitter. A horizontally linearly polarized light source is positioned orthogonal to the camera to ensure incidence and detection angles of 45°. The specular reflection from the enamel surface is collected with an objective lens mounted on the beamsplitter and divided into horizontal (H) and vertical (V) components on the two associated cameras. Images of non-eroded and eroded enamel surfaces at different degrees of erosion were recorded and assessed with diagnostic software. The software was designed to generate and display two types of images: the distribution of the reflection intensity (V) and the polarization ratio (H-V)/(H+V) over the analyzed tissue area. The measurement and visualization of these two optical parameters, i.e. the specular reflection intensity and the polarization ratio, allowed detection and quantification of enamel erosion at early stages in vitro.
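    A minimal sketch of the two quantities the diagnostic software is said to display (the reflection intensity V and the polarization ratio), assuming the two registered camera frames are already available as NumPy arrays H and V:

        import numpy as np

        def polarization_maps(H: np.ndarray, V: np.ndarray, eps: float = 1e-6):
            """Return the vertical-intensity map and the polarization ratio (H-V)/(H+V)."""
            H = H.astype(np.float64)
            V = V.astype(np.float64)
            ratio = (H - V) / (H + V + eps)   # eps avoids division by zero in dark pixels
            return V, ratio

        # Synthetic frames standing in for the two registered CMOS images
        H = np.random.rand(480, 640)
        V = np.random.rand(480, 640)
        intensity_map, ratio_map = polarization_maps(H, V)

    Eroded (demineralized) enamel scatters more and reflects less specularly, which is what shifts both maps relative to sound enamel.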

  7. Sharing of secondary electrons by in-lens and out-lens detector in low-voltage scanning electron microscope equipped with immersion lens.

    Science.gov (United States)

    Kumagai, Kazuhiro; Sekiguchi, Takashi

    2009-03-01

    To understand secondary electron (SE) image formation with in-lens and out-lens detectors in low-voltage scanning electron microscopy (LV-SEM), we evaluated the SE signals of an in-lens and an out-lens detector in LV-SEM. From the energy distribution spectra of SEs at various boosting voltages of the immersion lens system, we found that the electrostatic field of the immersion lens mainly collects electrons with energies lower than 40 eV, acting as a low-pass filter. This effect is also observed as a contrast change in LV-SEM images taken by the in-lens and out-lens detectors.

  8. Nuclear Radiation Degradation Study on HD Camera Based on CMOS Image Sensor at Different Dose Rates

    Directory of Open Access Journals (Sweden)

    Congzheng Wang

    2018-02-01

    In this work, we irradiated a high-definition (HD) industrial camera based on a commercial-off-the-shelf (COTS) CMOS image sensor (CIS) with Cobalt-60 gamma-rays. All components of the camera under test were fabricated without radiation hardening, except for the lens. The irradiation experiments on the HD camera under biased conditions were carried out at 1.0, 10.0, 20.0, 50.0 and 100.0 Gy/h. During the experiment, we found that the tested camera showed a remarkable degradation after irradiation, and that the degradation differed with dose rate. With increasing dose rate, the same target images become brighter. At a given dose rate, the radiation effect in bright areas is lower than that in dark areas; across dose rates, the higher the dose rate, the worse the radiation effect in both bright and dark areas, and the larger the standard deviations of the bright and dark areas become. Furthermore, progressive degradation analysis of the captured images demonstrates that the attenuation of signal-to-noise ratio (SNR) versus radiation time is not pronounced at a fixed dose rate, while the degradation becomes more severe with increasing dose rate. Additionally, the rate of decrease of SNR at 20.0, 50.0 and 100.0 Gy/h is far greater than that at 1.0 and 10.0 Gy/h. Even so, we confirm that the HD industrial camera was still working at 10.0 Gy/h during the 8 h of measurements, with a moderate decrease of the SNR (5 dB). The work is valuable and can provide guidance for camera users in radiation fields.

  9. Construct and face validity of a virtual reality-based camera navigation curriculum.

    Science.gov (United States)

    Shetty, Shohan; Panait, Lucian; Baranoski, Jacob; Dudrick, Stanley J; Bell, Robert L; Roberts, Kurt E; Duffy, Andrew J

    2012-10-01

    Camera handling and navigation are essential skills in laparoscopic surgery. Surgeons rely on camera operators, usually the least experienced members of the team, for visualization of the operative field. Essential skills for camera operators include maintaining orientation, an effective horizon, appropriate zoom control, and a clean lens. Virtual reality (VR) simulation may be a useful adjunct for developing camera skills in a novice population. No standardized VR-based camera navigation curriculum is currently available. We developed and implemented a novel curriculum on the LapSim VR simulator platform for our residents and students. We hypothesized that our curriculum would demonstrate construct and face validity in our trainee population, distinguishing levels of laparoscopic experience as part of a realistic training curriculum. Overall, 41 participants with various levels of laparoscopic training completed the curriculum. Participants included medical students, surgical residents (Postgraduate Years 1-5), fellows, and attendings. We stratified subjects into three groups (novice, intermediate, and advanced) based on previous laparoscopic experience. We assessed face validity with a questionnaire. The proficiency-based curriculum consists of three modules: camera navigation, coordination, and target visualization using 0° and 30° laparoscopes. Metrics include time, target misses, drift, path length, and tissue contact. We analyzed data using analysis of variance and Student's t-test. We noted significant differences in the repetitions required to complete the curriculum: 41.8 for novices, 21.2 for intermediates, and 11.7 for the advanced group. The curriculum has since been incorporated for medical students during their surgery rotations. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Nuclear Radiation Degradation Study on HD Camera Based on CMOS Image Sensor at Different Dose Rates.

    Science.gov (United States)

    Wang, Congzheng; Hu, Song; Gao, Chunming; Feng, Chang

    2018-02-08

    In this work, we irradiated a high-definition (HD) industrial camera based on a commercial-off-the-shelf (COTS) CMOS image sensor (CIS) with Cobalt-60 gamma-rays. All components of the camera under test were fabricated without radiation hardening, except for the lens. The irradiation experiments on the HD camera under biased conditions were carried out at 1.0, 10.0, 20.0, 50.0 and 100.0 Gy/h. During the experiment, we found that the tested camera showed a remarkable degradation after irradiation, and that the degradation differed with dose rate. With increasing dose rate, the same target images become brighter. At a given dose rate, the radiation effect in bright areas is lower than that in dark areas; across dose rates, the higher the dose rate, the worse the radiation effect in both bright and dark areas, and the larger the standard deviations of the bright and dark areas become. Furthermore, progressive degradation analysis of the captured images demonstrates that the attenuation of signal-to-noise ratio (SNR) versus radiation time is not pronounced at a fixed dose rate, while the degradation becomes more severe with increasing dose rate. Additionally, the rate of decrease of SNR at 20.0, 50.0 and 100.0 Gy/h is far greater than that at 1.0 and 10.0 Gy/h. Even so, we confirm that the HD industrial camera was still working at 10.0 Gy/h during the 8 h of measurements, with a moderate decrease of the SNR (5 dB). The work is valuable and can provide guidance for camera users in radiation fields.
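    For readers reproducing this kind of measurement, one common way to express the SNR figure quoted above is 20·log10(mean/std) over a nominally uniform image region; the region coordinates and the synthetic frame below are assumptions for illustration only, since the authors' exact region definitions are not given here:

        import numpy as np

        def region_snr_db(frame: np.ndarray, region: tuple) -> float:
            """SNR of a nominally uniform image region, in dB: 20*log10(mean/std)."""
            y0, y1, x0, x1 = region
            patch = frame[y0:y1, x0:x1].astype(np.float64)
            return 20.0 * np.log10(patch.mean() / patch.std())

        # Synthetic stand-in for a captured frame containing a bright uniform patch
        frame = np.random.normal(180.0, 5.0, (480, 640)).clip(0, 255)
        print("bright-region SNR [dB]:", round(region_snr_db(frame, (50, 150, 50, 150)), 1))

    Tracking this number frame by frame during irradiation gives the SNR-versus-time curves the record describes.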

  11. Post-lens tear turbidity and visual quality after scleral lens wear.

    Science.gov (United States)

    Carracedo, Gonzalo; Serramito-Blanco, Maria; Martin-Gil, Alba; Wang, Zicheng; Rodriguez-Pomar, Candela; Pintor, Jesús

    2017-11-01

    The aim was to evaluate the turbidity and thickness of the post-lens tear layer and its effect on visual quality in patients with keratoconus, after the beginning of lens wear and before lens removal at the end of eight hours. Twenty-six patients with keratoconus (aged 36.95 ± 8.95 years) participated voluntarily in the study. The sample was divided into two groups: patients with intrastromal corneal ring segments (ICRS group) and patients without ICRS (KC group). Distance visual acuity (VA), contrast sensitivity, pachymetry, post-lens tear layer height and post-lens tear layer turbidity (percentage area occupied and number of particles per mm²) were evaluated with optical coherence tomography before and after wearing a scleral lens. A significant increase of turbidity was found in all groups assessed. Turbidity parameters correlated with distance VA, but no correlation between turbidity and post-lens tear layer thickness at the beginning was found (p > 0.05). A strong correlation in all groups between the post-lens tear layer thickness at the beginning and the difference in tear layer thickness between the two measures was also found. © 2017 Optometry Australia.
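    The two turbidity metrics named in this record (percentage area occupied by particles and particles per mm²) can be sketched from a thresholded OCT region of interest; the threshold and pixel area below are illustrative assumptions, not values from the study:

        import numpy as np
        from scipy import ndimage

        def turbidity_metrics(roi: np.ndarray, pixel_area_mm2: float, threshold: float):
            """Percentage area occupied by bright particles and particle count per mm^2."""
            mask = roi > threshold
            labeled, n_particles = ndimage.label(mask)        # connected bright blobs
            area_pct = 100.0 * mask.mean()
            particles_per_mm2 = n_particles / (roi.size * pixel_area_mm2)
            return area_pct, particles_per_mm2

        roi = np.random.rand(200, 400)                         # stand-in for a tear-layer ROI
        print(turbidity_metrics(roi, pixel_area_mm2=1e-4, threshold=0.95))
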

  12. Effect of infusion bottle height on lens power after lens refilling with and without a plug

    NARCIS (Netherlands)

    Koopmans, SA; Terwee, T; Haitjema, HJ; Kooijman, AC; Barkhof, J

    2003-01-01

    Purpose: To evaluate the influence of intraoperative infusion bottle height on the power of refilled pig lenses. Setting: Research Laboratory, Pharmacia Intraocular Lens Manufacturing Plant, Groningen, The Netherlands. Methods: This study comprised 2 groups of pig eyes. In 1 group, the lens was

  13. Disassembly of the lens fiber cell nucleus to create a clear lens: the p27 descent

    Science.gov (United States)

    The eye lens is unique among tissues: it is transparent, does not form tumors, and the majority of its cells degrade their organelles, including their cell nuclei. A mystery for over a century, there has been considerable recent progress in elucidating mechanisms of lens fiber cell denucleation (LFC...

  14. Status of MUSIC, the MUltiwavelength Sub/millimeter Inductance Camera

    Science.gov (United States)

    Golwala, Sunil R.; Bockstiegel, Clint; Brugger, Spencer; Czakon, Nicole G.; Day, Peter K.; Downes, Thomas P.; Duan, Ran; Gao, Jiansong; Gill, Amandeep K.; Glenn, Jason; Hollister, Matthew I.; LeDuc, Henry G.; Maloney, Philip R.; Mazin, Benjamin A.; McHugh, Sean G.; Miller, David; Noroozian, Omid; Nguyen, Hien T.; Sayers, Jack; Schlaerth, James A.; Siegel, Seth; Vayonakis, Anastasios K.; Wilson, Philip R.; Zmuidzinas, Jonas

    2012-09-01

    We present the status of MUSIC, the MUltiwavelength Sub/millimeter Inductance Camera, a new instrument for the Caltech Submillimeter Observatory. MUSIC is designed to have a 14′, diffraction-limited field of view instrumented with 2304 detectors in 576 spatial pixels and four spectral bands at 0.87, 1.04, 1.33, and 1.98 mm. MUSIC will be used to study dusty star-forming galaxies, galaxy clusters via the Sunyaev-Zeldovich effect, and star formation in our own and nearby galaxies. MUSIC uses broadband superconducting phased-array slot-dipole antennas to form beams, lumped-element on-chip bandpass filters to define spectral bands, and microwave kinetic inductance detectors to sense incoming light. The focal plane is fabricated in 8 tiles consisting of 72 spatial pixels each. It is coupled to the telescope via an ambient-temperature ellipsoidal mirror and a cold reimaging lens. A cold Lyot stop sits at the image of the primary mirror formed by the ellipsoidal mirror. Dielectric and metal-mesh filters are used to block thermal infrared and out-of-band radiation. The instrument uses a pulse tube cooler and a ³He/³He/⁴He closed-cycle cooler to cool the focal plane to below 250 mK. A multilayer shield attenuates Earth's magnetic field. Each focal plane tile is read out by a single pair of coaxes and a HEMT amplifier. The readout system consists of 16 copies of custom-designed ADC/DAC and IF boards coupled to the CASPER ROACH platform. We focus on recent updates to the instrument design and results from the commissioning of the full camera in 2012.

  15. Low-cost far infrared bolometer camera for automotive use

    Science.gov (United States)

    Vieider, Christian; Wissmar, Stanley; Ericsson, Per; Halldin, Urban; Niklaus, Frank; Stemme, Göran; Källhammer, Jan-Erik; Pettersson, Håkan; Eriksson, Dick; Jakobsen, Henrik; Kvisterøy, Terje; Franks, John; VanNylen, Jan; Vercammen, Hans; VanHulsel, Annick

    2007-04-01

    A new low-cost long-wavelength infrared bolometer camera system is under development. It is designed for use with an automatic vision algorithm system as a sensor to detect vulnerable road users in traffic. Looking 15 m in front of the vehicle it can in case of an unavoidable impact activate a brake assist system or other deployable protection system. To achieve our cost target below €100 for the sensor system we evaluate the required performance and can reduce the sensitivity to 150 mK and pixel resolution to 80 x 30. We address all the main cost drivers as sensor size and production yield along with vacuum packaging, optical components and large volume manufacturing technologies. The detector array is based on a new type of high performance thermistor material. Very thin Si/SiGe single crystal multi-layers are grown epitaxially. Due to the resulting valence barriers a high temperature coefficient of resistance is achieved (3.3%/K). Simultaneously, the high quality crystalline material provides very low 1/f-noise characteristics and uniform material properties. The thermistor material is transferred from the original substrate wafer to the read-out circuit using adhesive wafer bonding and subsequent thinning. Bolometer arrays can then be fabricated using industry standard MEMS process and materials. The inherently good detector performance allows us to reduce the vacuum requirement and we can implement wafer level vacuum packaging technology used in established automotive sensor fabrication. The optical design is reduced to a single lens camera. We develop a low cost molding process using a novel chalcogenide glass (GASIR®3) and integrate anti-reflective and anti-erosion properties using diamond like carbon coating.

  16. Improving Situational Awareness in camera surveillance by combining top-view maps with camera images

    NARCIS (Netherlands)

    Kooi, F.L.; Zeeders, R.

    2009-01-01

    The goal of the experiment described is to improve today's camera surveillance in public spaces. Three designs with the camera images combined on a top-view map were compared to each other and to the current situation in camera surveillance. The goal was to test which design makes spatial

  17. Automatic inference of geometric camera parameters and intercamera topology in uncalibrated disjoint surveillance cameras

    NARCIS (Netherlands)

    Hollander, R.J.M. den; Bouma, H.; Baan, J.; Eendebak, P.T.; Rest, J.H.C. van

    2015-01-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many

  18. Nuclear magnetic resonance studies of lens transparency

    International Nuclear Information System (INIS)

    Beaulieu, C.F.

    1989-01-01

    Transparency of normal lens cytoplasm and loss of transparency in cataract were studied by nuclear magnetic resonance (NMR) methods. Phosphorus (³¹P) NMR spectroscopy was used to measure the ³¹P constituents and pH of calf lens cortical and nuclear homogenates and intact lenses as a function of time after lens enucleation and in opacification produced by calcium. Transparency was measured with laser spectroscopy. Despite complete loss of adenosine triphosphate (ATP) within 18 hrs of enucleation, the homogenates and lenses remained 100% transparent. Addition of calcium to ATP-depleted cortical homogenates produced opacification as well as concentration-dependent changes in inorganic phosphate, sugar phosphates, glycerol phosphorylcholine and pH. ¹H relaxation measurements of lens water at 200 MHz proton Larmor frequency were used to study the temperature-dependent phase separation of lens nuclear homogenates. Preliminary measurements of T₁ and T₂ with non-equilibrium temperature changes showed a change in the slope of the temperature dependence of T₁ and T₂ at the phase separation temperature. Subsequent studies with equilibrium temperature changes showed no effect of phase separation on T₁ or T₂, consistent with the phase separation being a low-energy process. ¹H nuclear magnetic relaxation dispersion (NMRD) studies (measurements of the magnetic field dependence of the water proton 1/T₁ relaxation rates) were performed on (1) calf lens nuclear and cortical homogenates, (2) chicken lens homogenates, (3) native and heat-denatured egg white and (4) pure proteins including bovine γ-II crystallin, bovine serum albumin (BSA) and myoglobin. The NMRD profiles of all samples exhibited decreases in 1/T₁ with increasing magnetic field.

  19. Active spectral imaging nondestructive evaluation (SINDE) camera

    Energy Technology Data Exchange (ETDEWEB)

    Simova, E.; Rochefort, P.A., E-mail: eli.simova@cnl.ca [Canadian Nuclear Laboratories, Chalk River, Ontario (Canada)

    2016-06-15

    A proof-of-concept video camera for active spectral imaging nondestructive evaluation has been demonstrated. An active multispectral imaging technique has been implemented in the visible and near infrared by using light emitting diodes with wavelengths spanning from 400 to 970 nm. This shows how the camera can be used in nondestructive evaluation to inspect surfaces and spectrally identify materials and corrosion. (author)

  20. High resolution RGB color line scan camera

    Science.gov (United States)

    Lynch, Theodore E.; Huettig, Fred

    1998-04-01

    A color line scan camera family, available with either 6000, 8000 or 10000 pixels per color channel, which utilizes off-the-shelf lenses, interfaces with currently available frame grabbers, includes on-board pixel-by-pixel offset correction, and is configurable and controllable via an RS232 serial port for computer-controlled or stand-alone operation, is described in this paper. This line scan camera is based on an available 8000-element monochrome line scan camera designed by AOA for OEM use. The new color version includes improvements such as better packaging and additional user features which make the camera easier to use. The heart of the camera is a tri-linear CCD sensor with on-chip color balancing for maximum accuracy and pinned photodiodes for low-lag response. Each color channel is digitized to 12 bits and all three channels are multiplexed together so that the resulting camera output video is either a 12- or 8-bit data stream at a rate of up to 24 Mpixels/sec. Conversion from 12 to 8 bits, or user-defined gamma, is accomplished by on-board user-defined video look-up tables. The camera has two user-selectable operating modes: a low-speed, high-sensitivity mode and a high-speed, reduced-sensitivity mode. The intended uses of the camera include industrial inspection, digital archiving, document scanning, and graphic arts applications.
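    The 12-to-8-bit conversion via a user-defined gamma look-up table described above can be sketched as follows; the gamma value is an arbitrary example, not a documented camera default:

        import numpy as np

        def build_gamma_lut(gamma: float = 2.2) -> np.ndarray:
            """Map 12-bit input codes (0..4095) to 8-bit output codes (0..255) with a gamma curve."""
            codes = np.arange(4096, dtype=np.float64) / 4095.0
            return np.round(255.0 * codes ** (1.0 / gamma)).astype(np.uint8)

        lut = build_gamma_lut()
        raw12 = np.random.randint(0, 4096, size=(1024, 3), dtype=np.uint16)  # one scan line, RGB
        out8 = lut[raw12]                                                    # per-pixel table lookup

    In the camera itself this lookup runs in hardware on every pixel; the host only needs to upload the 4096-entry table.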

  1. Ultra fast x-ray streak camera

    International Nuclear Information System (INIS)

    Coleman, L.W.; McConaghy, C.F.

    1975-01-01

    A unique ultrafast X-ray-sensitive streak camera, with a time resolution of 50 ps, has been built and operated. A 100 Å thick gold photocathode on a beryllium vacuum window is used in a modified commercial image-converter tube. The X-ray streak camera has been used in experiments to observe time-resolved emission from laser-produced plasmas. (author)

  2. An Open Standard for Camera Trap Data

    Directory of Open Access Journals (Sweden)

    Tavis Forrester

    2016-12-01

    Camera traps that capture photos of animals are a valuable tool for monitoring biodiversity. The use of camera traps is rapidly increasing and there is an urgent need for standardization to facilitate data management, reporting and data sharing. Here we offer the Camera Trap Metadata Standard as an open data standard for storing and sharing camera trap data, developed by experts from a variety of organizations. The standard captures information necessary to share data between projects and offers a foundation for collecting the more detailed data needed for advanced analysis. The data standard captures information about study design, the type of camera used, and the location and species names for all detections in a standardized way. This information is critical for accurately assessing results from individual camera trapping projects and for combining data from multiple studies for meta-analysis. This data standard is an important step in aligning camera trapping surveys with best practices in data-intensive science. Ecology is moving rapidly into the realm of big data, and central data repositories are becoming a critical tool and are emerging for camera trap data. This data standard will help researchers standardize data terms, align past data to new repositories, and provide a framework for utilizing data across repositories and research projects to advance animal ecology and conservation.
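    A minimal record in the spirit of such a standard might look like the sketch below; the field names are illustrative assumptions only and are not the actual terms defined by the Camera Trap Metadata Standard:

        # Hypothetical deployment and detection records, for illustration only
        deployment = {
            "project_id": "example-project-001",   # study identifier
            "camera_model": "ExampleCam X",        # type of camera used
            "latitude": 47.3601,                   # deployment location
            "longitude": 8.5500,
            "start_date": "2016-05-01",
            "end_date": "2016-06-01",
        }

        detection = {
            "deployment_ref": "example-project-001",
            "timestamp": "2016-05-14T03:12:00Z",
            "scientific_name": "Vulpes vulpes",    # species detected
            "count": 1,
        }

        print(deployment, detection)
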

  3. Laser scanning camera inspects hazardous area

    International Nuclear Information System (INIS)

    Fryatt, A.; Miprode, C.

    1985-01-01

    The main operational characteristics of a new laser scanning camera are presented. The camera is intended primarily for low-level, high-resolution viewing inside nuclear reactors. It uses a He-Ne laser beam raster; by detecting the reflected light with a photomultiplier, the subject under observation can be reconstructed in an electronic video store and reviewed on a conventional monitor screen.

  4. Single chip camera active pixel sensor

    Science.gov (United States)

    Shaw, Timothy (Inventor); Pain, Bedabrata (Inventor); Olson, Brita (Inventor); Nixon, Robert H. (Inventor); Fossum, Eric R. (Inventor); Panicacci, Roger A. (Inventor); Mansoorian, Barmak (Inventor)

    2003-01-01

    A totally digital single-chip camera includes communications to operate most of its structure in serial communication mode. The digital single-chip camera includes a D/A converter for converting an input digital word into an analog reference signal. The chip includes all of the necessary circuitry for operating the chip using a single pin.

  5. Securing Embedded Smart Cameras with Trusted Computing

    Directory of Open Access Journals (Sweden)

    Winkler Thomas

    2011-01-01

    Camera systems are used in many applications including video surveillance for crime prevention and investigation, traffic monitoring on highways or building monitoring and automation. With the shift from analog towards digital systems, the capabilities of cameras are constantly increasing. Today's smart camera systems come with considerable computing power, large memory, and wired or wireless communication interfaces. With onboard image processing and analysis capabilities, cameras not only open new possibilities but also raise new challenges. Often overlooked are potential security issues of the camera system. The increasing amount of software running on the cameras turns them into attractive targets for attackers. Therefore, the protection of camera devices and delivered data is of critical importance. In this work we present an embedded camera prototype that uses Trusted Computing to provide security guarantees for streamed videos. With a hardware-based security solution, we ensure integrity, authenticity, and confidentiality of videos. Furthermore, we incorporate image timestamping, detection of platform reboots, and reporting of the system status. This work is not limited to theoretical considerations but also describes the implementation of a prototype system. Extensive evaluation results illustrate the practical feasibility of the approach.

  6. Centering mount for a gamma camera

    International Nuclear Information System (INIS)

    Mirkhodzhaev, A.Kh.; Kuznetsov, N.K.; Ostryj, Yu.E.

    1988-01-01

    A device for centering a γ-camera detector for radionuclide diagnosis is described. It permits the use of available medical couches instead of a table with a transparent top. The device can be used for centering the detector (when it is fixed at the lower end of a γ-camera) on a required area of the patient's body.

  7. Digital airborne camera introduction and technology

    CERN Document Server

    Sandau, Rainer

    2014-01-01

    The last decade has seen great innovation in airborne cameras. This book is the first ever written on the topic and describes all components of a digital airborne camera, ranging from the object to be imaged to the mass memory device.

  8. Adapting virtual camera behaviour through player modelling

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Research in virtual camera control has focused primarily on finding methods to allow designers to place cameras effectively and efficiently in dynamic and unpredictable environments, and to generate complex and dynamic plans for cinematography in virtual environments. In this article, we propose...

  9. Driving with head-slaved camera system

    NARCIS (Netherlands)

    Oving, A.B.; Erp, J.B.F. van

    2001-01-01

    In a field experiment, we tested the effectiveness of a head-slaved camera system for driving an armoured vehicle under armour. This system consists of a helmet-mounted display (HMD), a headtracker, and a motion platform with two cameras. Subjects performed several driving tasks on paved and in

  10. Rosetta Star Tracker and Navigation Camera

    DEFF Research Database (Denmark)

    Thuesen, Gøsta

    1998-01-01

    Proposal in response to the Invitation to Tender (ITT) issued by Matra Marconi Space (MSS) for the procurement of the ROSETTA Star Tracker and Navigation Camera.

  11. Properties of the cathode lens combined with a focusing magnetic/immersion-magnetic lens

    International Nuclear Information System (INIS)

    Konvalina, I.; Muellerova, I.

    2011-01-01

    The cathode lens is an electron optical element in an emission electron microscope accelerating electrons from the sample, which serves as a source for a beam of electrons. Special application consists in using the cathode lens first for retardation of an illuminating electron beam and then for acceleration of reflected as well as secondary electrons, made in the directly imaging low energy electron microscope or in its scanning version discussed here. In order to form a real image, the cathode lens has to be combined with a focusing magnetic lens or a focusing immersion-magnetic lens, as used for objective lenses of some commercial scanning electron microscopes. These two alternatives are compared with regards to their optical properties, in particular with respect to predicted aberration coefficients and the spot size, as well as the optimum angular aperture of the primary beam. The important role of the final aperture size on the image resolution is also presented.

  12. Photosensitized oxidation in the ocular lens: evidence for photosensitizers endogenous to the human lens

    International Nuclear Information System (INIS)

    Zigler, J.S. Jr.; Goosey, J.D.

    1981-01-01

    Numerous investigators have attempted to associate near UV light exposure with various changes which occur to lens crystallins during aging and cataractogenesis. Recently it was shown that in vitro singlet oxygen mediated oxidation of lens crystallins produces effects very similar to those documented for crystallins from old or cataractous lenses and it was suggested that near UV photodynamic effects may play a major role in vivo in aging in the human lens. It has now been shown that certain oxidation products of tryptophan which have been identified in human lens can act as near UV photosensitizers, producing singlet oxygen. The insoluble protein fraction from human cataracts was shown to have the capacity to act as a photosensitizer. An age-related increase in photosensitizing capacity was also demonstrated in the soluble crystallins from human lens. These findings are discussed with respect to development of pigmented nuclear cataracts. (author)

  13. Optical integration of Pancharatnam-Berry phase lens and dynamical phase lens

    International Nuclear Information System (INIS)

    Ke, Yougang; Liu, Yachao; Zhou, Junxiao; Liu, Yuanyuan; Luo, Hailu; Wen, Shuangchun

    2016-01-01

    In optical systems, most elements, such as lenses, prisms, and optical fibers, are made of silica glass. Therefore, integrating Pancharatnam-Berry phase elements into silica glass has potential applications in optical systems. In this paper, we take as an example a lens which integrates a Pancharatnam-Berry phase lens into a conventional plano-convex lens. The spin states and positions of the focal points can be modulated by controlling the polarization state of the incident beam. The proposed lens has a high transmission efficiency, and thereby acts as a simple and powerful tool to manipulate spin photons. Furthermore, the method can be conveniently extended to optical fibers and laser cavities, and may provide a route to the design of spin-photonic devices.

  14. Recording of radiation-induced optical density changes in doped agarose gels with a CCD camera

    International Nuclear Information System (INIS)

    Tarte, B.J.; Jardine, P.A.; Van Doorn, T.

    1996-01-01

    Full text: Spatially resolved dose measurement with iron-doped agarose gels is continuing to be investigated for applications in radiotherapy dosimetry. It has previously been proposed to use optical methods, rather than MRI, for dose measurement with such gels and this has been investigated using a spectrophotometer (Appleby A and Leghrouz A, Med Phys, 18:309-312, 1991). We have previously studied the use of a pencil beam laser for such optical density measurement of gels and are currently investigating charge-coupled device (CCD) camera imaging for the same purpose but with the advantages of higher data acquisition rates and potentially greater spatial resolution. The gels used in these studies were poured, irradiated and optically analysed in Perspex casts providing gel sections 1 cm thick and up to 20 cm x 30 cm in dimension. The gels were also infused with a metal indicator dye (xylenol orange) to render the radiation-induced oxidation of the iron in the gel sensitive to optical radiation, specifically in the green spectral region. Data acquisition with the CCD camera involved illumination of the irradiated gel section with a diffuse white light source, with the light from the plane of the gel section focussed onto the CCD array with a manual zoom lens. The light was also filtered with a green colour glass filter to maximise the contrast between unirradiated and irradiated gels. The CCD camera (EG and G Reticon MC4013) featured a 1024 x 1024 pixel array and was interfaced to a PC via a frame grabber acquisition board with 8 bit resolution. The performance of the gel dosimeter was appraised in mapping of physical and dynamic wedged 6 MV X-ray fields. The results from the CCD camera detection system were compared with both ionisation chamber data and laser-based optical density measurements of the gels. Cross-beam profiles were extracted from each measurement system at a particular depth (e.g. 2.3 cm for the physical wedge field) for direct comparison.
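
    The optical-density step at the heart of this method lends itself to a short illustration. The sketch below is a minimal, hypothetical example of converting matched CCD images of irradiated and unirradiated gel into an optical-density map and pulling out one cross-beam profile; the file names, the green-channel index and the profile row are placeholders, not values from the study.

    ```python
    # Hedged sketch: pixel-wise optical density from CCD images of a dosimeter gel.
    import numpy as np
    import cv2

    def optical_density(irradiated_path, unirradiated_path, channel=1):
        """OD = -log10(I_irr / I_ref), computed on the green channel (index 1 in BGR)."""
        i_irr = cv2.imread(irradiated_path).astype(float)[..., channel]
        i_ref = cv2.imread(unirradiated_path).astype(float)[..., channel]
        eps = 1e-6                      # avoid division by zero in dark pixels
        return -np.log10((i_irr + eps) / (i_ref + eps))

    od = optical_density("gel_irradiated.png", "gel_unirradiated.png")
    profile = od[120, :]                # cross-beam profile along one (hypothetical) image row
    print(profile.shape, profile.max())
    ```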

  15. Modelling Virtual Camera Behaviour Through Player Gaze

    DEFF Research Database (Denmark)

    Picardi, Andrea; Burelli, Paolo; Yannakakis, Georgios N.

    2012-01-01

    In a three-dimensional virtual environment, aspects such as narrative and interaction largely depend on the placement and animation of the virtual camera. Therefore, virtual camera control plays a critical role in player experience and, thereby, in the overall quality of a computer game. Both game industry and game AI research focus on the development of increasingly sophisticated systems to automate the control of the virtual camera, integrating artificial intelligence algorithms within physical simulations. However, in both industry and academia little research has been carried out on the relationship between virtual camera, game-play and player behaviour. We run a game user experiment to shed some light on this relationship and identify relevant differences between camera behaviours through different game sessions, playing behaviours and player gaze patterns. Results show that users can...

  16. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Romps, David [Univ. of California, Berkeley, CA (United States); Oktem, Rusen [Univ. of California, Berkeley, CA (United States)

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized and stereo-calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pairs, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory to cover the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain a 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.
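
    As a concrete illustration of the triangulation step, the sketch below reconstructs a single matched feature from one calibrated camera pair using OpenCV. The projection matrices and pixel coordinates are invented placeholders rather than STEREOCAM calibration parameters.

    ```python
    # Minimal sketch: triangulate one matched point from a calibrated stereo pair.
    import numpy as np
    import cv2

    # 3x4 projection matrices P = K [R | t] for the two cameras (placeholder values).
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])  # arbitrary 0.5-unit baseline

    pts1 = np.array([[320.0], [240.0]])   # matched pixel in camera 1 (2xN array)
    pts2 = np.array([[300.0], [240.0]])   # matched pixel in camera 2

    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4x1 homogeneous point
    X = (X_h[:3] / X_h[3]).ravel()                    # Euclidean 3D coordinates
    print("reconstructed point:", X)
    ```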

  17. Gamma camera performance: technical assessment protocol

    Energy Technology Data Exchange (ETDEWEB)

    Bolster, A.A. [West Glasgow Hospitals NHS Trust, London (United Kingdom). Dept. of Clinical Physics; Waddington, W.A. [University College London Hospitals NHS Trust, London (United Kingdom). Inst. of Nuclear Medicine

    1996-12-31

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author).

  18. A design of a high speed dual spectrometer by single line scan camera

    Science.gov (United States)

    Palawong, Kunakorn; Meemon, Panomsak

    2018-03-01

    A spectrometer that can capture two orthogonal polarization components of a light beam is required for a polarization-sensitive imaging system. Here, we describe the design and implementation of a high speed spectrometer for simultaneous capture of the two orthogonal polarization components, i.e. the vertical and horizontal components, of a light beam. The design consists of a polarization beam splitter, two polarization-maintaining optical fibers, two collimators, a single line-scan camera, a focusing lens, and a reflection blaze grating. The two beam paths were designed to be symmetrically incident on the blaze side and the reverse-blaze side of the reflection grating, respectively. The two diffracted beams were passed through the same focusing lens and focused onto the single line-scan sensor of a CMOS camera. The two orthogonal-polarization spectra were each imaged onto 1000 pixels. With the proposed setup, the amplitude and shape of the two detected spectra can be controlled by rotating the collimators. The technique for optical alignment of the spectrometer will be presented and discussed. The two orthogonal polarization spectra can be simultaneously captured at a speed of 70,000 spectra per second. The high speed dual spectrometer can simultaneously detect two orthogonal polarizations, which is an important component for the development of polarization-sensitive optical coherence tomography. The performance of the spectrometer has been measured and analyzed.

  19. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R and D activities on high-speed video cameras, which have been carried out at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras by questionnaires and hearings, and (2) on the current availability of cameras of this sort by searching journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps based on an ISIS, an In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is under way and, hopefully, it will be fabricated in the near future. Epoch-making cameras in the history of the development of high-speed video cameras by other researchers are also briefly reviewed.

  20. Cloud Computing with Context Cameras

    Science.gov (United States)

    Pickles, A. J.; Rosing, W. E.

    2016-05-01

    We summarize methods and plans to monitor and calibrate photometric observations with our autonomous, robotic network of 2m, 1m and 40cm telescopes. These are sited globally to optimize our ability to observe time-variable sources. Wide field "context" cameras are aligned with our network telescopes and cycle every ~2 minutes through BVr'i'z' filters, spanning our optical range. We measure instantaneous zero-point offsets and transparency (throughput) against calibrators in the 5-12m range from the all-sky Tycho2 catalog, and periodically against primary standards. Similar measurements are made for all our science images, with typical fields of view of ~0.5 degrees. These are matched against Landolt, Stetson and Sloan standards, and against calibrators in the 10-17m range from the all-sky APASS catalog. Such measurements provide pretty good instantaneous flux calibration, often to better than 5%, even in cloudy conditions. Zero-point and transparency measurements can be used to characterize, monitor and inter-compare sites and equipment. When accurate calibrations of Target against Standard fields are required, monitoring measurements can be used to select truly photometric periods when accurate calibrations can be automatically scheduled and performed.
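
    The zero-point monitoring idea reduces to simple arithmetic per image. The sketch below shows one hedged way to compute an instantaneous zero point from catalogue calibrators in a single exposure; the magnitudes, counts and exposure time are made-up numbers, not data from this network.

    ```python
    # Hedged sketch: instantaneous photometric zero point from catalogue stars.
    import numpy as np

    catalog_mag = np.array([9.8, 10.4, 11.1, 11.9])        # known magnitudes of calibrators
    counts = np.array([41000., 24500., 13200., 6400.])     # measured counts in the same exposure
    exptime = 30.0                                          # seconds

    instrumental_mag = -2.5 * np.log10(counts / exptime)
    zero_point = np.median(catalog_mag - instrumental_mag)  # robust per-image zero point
    print(f"zero point = {zero_point:.2f} mag")
    ```

    Tracking this per-image zero point over time is one way to flag non-photometric periods, in the spirit of the monitoring described above.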

  1. [Representation and mathematical analysis of human crystalline lens].

    Science.gov (United States)

    Tălu, Stefan; Giovanzana, Stefano; Tălu, Mihai

    2011-01-01

    The surface of the human crystalline lens can be described and analyzed using mathematical models based on parametric representations, which are used in biomechanical studies and in 3D solid modeling of the lens. The mathematical models used in lens biomechanics allow the behavior of the crystalline lens to be studied under variable and complex dynamic loads. Lens biomechanics also has the potential to improve results in the development of intraocular lenses and in cataract surgery. The paper presents the most representative mathematical models currently used for modeling the human crystalline lens, both optically and biomechanically.

  2. Development of Powerhouse Using Fresnel lens

    Directory of Open Access Journals (Sweden)

    Al-Dohani Nawar Saif

    2018-01-01

    Solar energy is an alternative source of renewable energy. The government of the Sultanate of Oman has taken initiatives to utilize solar energy for domestic and industrial applications. A Fresnel lens is one method of collecting maximum energy by gathering the heat of the sun in concentrated form using solar collectors. Earlier research shows that Fresnel lenses give better results in terms of power output and produce lower heat losses compared to linear-parabolic solar collectors. In this work, a prototype Fresnel-lens powerhouse was developed to generate electricity. The heat focused by the Fresnel lens was used to heat molten salt in a heat exchanger and produce steam. The generated steam was used to drive a steam engine coupled to a generator. In the current work, a maximum power of 30 W was produced. In addition, a comparative study was carried out on solar salts and heat-exchanger materials to understand the performance of the Fresnel powerhouse. Overall, the present study provides valuable information on the use of Fresnel lenses for electricity generation in Oman.

  3. A lazy way to design infrared lens

    Science.gov (United States)

    Qiu, RongSheng; Wu, JianDong; Chen, LongJiang; Yu, Kun; Pang, HaoJun; Hu, BaiZhen

    2017-08-01

    We designed a compact mid-wave infrared (MWIR) lens with a large focal-length ratio (about 1.5:1), used in the 3.7 to 4.8 μm range. The lens consists of a compact front group and a re-imaging group. Thanks to the compact front-group configuration, it is possible to install a filter-wheel mechanism in such a tight space. The total track length of the lens is about 50 mm, which includes a 2 mm thick protective window and a cold shield of 12 mm. The full field of view of the lens is about 3.6°, the F-number is less than 1.6, and the image circle is about 4.6 mm in diameter. The design performance of the lens reaches the diffraction limit and changes little over a temperature range of -40°C to +60°C. This paper proposes a stepwise design method for infrared optical systems guided by a qualitative approach. The method fully utilizes the powerful global optimization ability of optical design software: with a little effort spent writing a code snippet, it frees the optical engineer from tedious calculation of the original structure.

  4. Towards Adaptive Virtual Camera Control In Computer Games

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2011-01-01

    Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user model of the camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, we cluster gaze and camera information to identify camera behaviour profiles and we employ ... camera control in games is discussed.

  5. The first detection of neutral hydrogen in emission in a strong spiral lens

    Science.gov (United States)

    Lipnicky, Andrew; Chakrabarti, Sukanya; Wright, Melvyn C. H.; Blitz, Leo; Heiles, Carl; Cotton, William; Frayer, David; Blandford, Roger; Shu, Yiping; Bolton, Adam S.

    2018-05-01

    We report H I observations of eight spiral galaxies that are strongly lensing background sources. Our targets were selected from the Sloan WFC (Wide Field Camera) Edge-on Late-type Lens Survey (SWELLS) and observed using the Arecibo, Karl G. Jansky Very Large Array, and Green Bank telescopes. We securely detect J1703+2451 at z = 0.063 with a signal-to-noise ratio of 6.7 and W50 = 79 ± 13 km s⁻¹, obtaining the first detection of H I emission in a strong spiral lens. We measure a mass of M_HI = (1.77 ± 0.06, +0.35/−0.75) × 10^9 M_⊙ for this source. We find that this lens is a normal spiral, with observable properties that are fairly typical of spiral galaxies. For three other sources, we did not secure a detection; however, we are able to place strong constraints on the H I masses of those galaxies. The observations for four of our sources were rendered unusable due to strong radio frequency interference.

  6. Reducing the Variance of Intrinsic Camera Calibration Results in the ROS Camera_Calibration Package

    Science.gov (United States)

    Chiou, Geoffrey Nelson

    The intrinsic calibration of a camera is the process in which the internal optical and geometric characteristics of the camera are determined. If accurate intrinsic parameters of a camera are known, the ray in 3D space that every point in the image lies on can be determined. Pairing with another camera allows the position of the points in the image to be calculated by intersection of the rays. Accurate intrinsics also allow the position and orientation of a camera relative to some world coordinate system to be calculated. These two reasons for having accurate intrinsic calibration for a camera are especially important in the field of industrial robotics, where 3D cameras are frequently mounted on the ends of manipulators. In the ROS (Robot Operating System) ecosystem, the camera_calibration package is the default standard for intrinsic camera calibration. Several researchers from the Industrial Robotics & Automation division at Southwest Research Institute have noted that this package results in large variances in the intrinsic parameters of the camera when calibrating across multiple attempts. There are also open issues on this matter in their public repository that have not been addressed by the developers. In this thesis, we confirm that the camera_calibration package does indeed return different results across multiple attempts, test several possible hypotheses as to why, identify the reason, and provide a simple solution to fix the cause of the issue.
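
    One simple way to reproduce the reported spread is to repeat an intrinsic calibration over different image subsets and look at the scatter of the recovered parameters. The sketch below does this with OpenCV's calibrateCamera; it is not the ROS camera_calibration code, and the board geometry, file paths and subset size are assumptions.

    ```python
    # Hedged sketch: run-to-run spread of the focal length from repeated calibrations.
    import glob
    import numpy as np
    import cv2

    board = (9, 6)                                   # inner corners of a hypothetical chessboard
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

    def calibrate(image_paths):
        obj_pts, img_pts, size = [], [], None
        for path in image_paths:
            gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
            found, corners = cv2.findChessboardCorners(gray, board)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)
                size = gray.shape[::-1]
        _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
        return K

    # Repeat the calibration over random image subsets to expose the variance.
    paths = sorted(glob.glob("calib_images/*.png"))
    rng = np.random.default_rng(0)
    focal_lengths = []
    for _ in range(10):
        subset = list(rng.choice(paths, size=min(25, len(paths)), replace=False))
        focal_lengths.append(calibrate(subset)[0, 0])
    print("fx mean/std:", np.mean(focal_lengths), np.std(focal_lengths))
    ```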

  7. Radiation Dose-Rate Extraction from the Camera Image of Quince 2 Robot System using Optical Character Recognition

    International Nuclear Information System (INIS)

    Cho, Jai Wan; Jeong, Kyung Min

    2012-01-01

    In the case of the Japanese Quince 2 robot system, 7 CCD/CMOS cameras were used. Two CCD cameras of the Quince robot are used for forward and backward monitoring of the surroundings during navigation, and two CCD (or CMOS) cameras are used for monitoring the status of the front-end and back-end motion mechanics such as flippers and crawlers. A CCD camera with wide-field-of-view optics is used for monitoring the status of the communication (VDSL) cable reel, and another two CCD cameras are assigned to reading the indicated values of the radiation dosimeter and the instrument. The Quince 2 robot measured radiation on the unit 2 reactor building refueling floor of the Fukushima nuclear power plant. The CCD camera with the wide-field-of-view (fisheye) lens reads the indicator of the dosimeter loaded on the Quince 2 robot, which was sent to investigate the situation on the unit 2 reactor building refueling floor. The camera image with gamma-ray dose-rate information is transmitted to the remote control site via the VDSL communication line, where the radiation situation on the refueling floor can be perceived by monitoring the camera image. To build up a radiation profile of the surveyed refueling floor, the gamma-ray dose-rate information in the image has to be converted into a numerical value. In this paper, we extract the gamma-ray dose-rate values on the unit 2 reactor building refueling floor using an optical character recognition method.
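
    The conversion from camera image to numerical dose rate can be sketched with off-the-shelf OCR. The example below crops a hypothetical display region and reads it with pytesseract; the crop coordinates, file name and unit are illustrative and do not reproduce the authors' own recognition pipeline.

    ```python
    # Hedged sketch: read a numeric dose-rate indication from a camera frame with OCR.
    import cv2
    import pytesseract

    frame = cv2.imread("refueling_floor_frame.png")
    roi = frame[200:260, 340:520]                    # hypothetical region containing the dosimeter display
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Restrict recognition to digits and a decimal point on a single text line.
    text = pytesseract.image_to_string(
        binary, config="--psm 7 -c tessedit_char_whitelist=0123456789.")
    value = text.strip()
    dose_rate = float(value) if value else float("nan")   # unit depends on the instrument display
    print("extracted dose-rate reading:", dose_rate)
    ```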

  8. Radiation Dose-Rate Extraction from the Camera Image of Quince 2 Robot System using Optical Character Recognition

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Wan; Jeong, Kyung Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    In the case of the Japanese Quince 2 robot system, 7 CCD/CMOS cameras were used. Two CCD cameras of the Quince robot are used for forward and backward monitoring of the surroundings during navigation, and two CCD (or CMOS) cameras are used for monitoring the status of the front-end and back-end motion mechanics such as flippers and crawlers. A CCD camera with wide-field-of-view optics is used for monitoring the status of the communication (VDSL) cable reel, and another two CCD cameras are assigned to reading the indicated values of the radiation dosimeter and the instrument. The Quince 2 robot measured radiation on the unit 2 reactor building refueling floor of the Fukushima nuclear power plant. The CCD camera with the wide-field-of-view (fisheye) lens reads the indicator of the dosimeter loaded on the Quince 2 robot, which was sent to investigate the situation on the unit 2 reactor building refueling floor. The camera image with gamma-ray dose-rate information is transmitted to the remote control site via the VDSL communication line, where the radiation situation on the refueling floor can be perceived by monitoring the camera image. To build up a radiation profile of the surveyed refueling floor, the gamma-ray dose-rate information in the image has to be converted into a numerical value. In this paper, we extract the gamma-ray dose-rate values on the unit 2 reactor building refueling floor using an optical character recognition method.

  9. Characterization of a digital camera as an absolute tristimulus colorimeter

    Science.gov (United States)

    Martinez-Verdu, Francisco; Pujol, Jaume; Vilaseca, Meritxell; Capilla, Pascual

    2003-01-01

    An algorithm is proposed for the spectral and colorimetric characterization of digital still cameras (DSCs), which allows them to be used as tele-colorimeters with CIE-XYZ color output in cd/m2. The spectral characterization consists of calculating the color-matching functions from the previously measured spectral sensitivities. The colorimetric characterization consists of transforming the RGB digital data into absolute CIE-XYZ tristimulus values (in cd/m2) under variable and unknown spectroradiometric conditions. Thus, at the first stage, a gray balance is applied to the RGB digital data to convert them into relative colorimetric RGB values. At a second stage, an algorithm for luminance adaptation versus lens aperture is inserted into the basic colorimetric profile. Capturing the ColorChecker chart under different light sources, the color-analysis accuracy of the DSC, both in a raw state and with the corrections from a linear color-correction model, has been evaluated using the Pointer'86 color reproduction index with the unrelated Hunt'91 color appearance model. The results indicate that our digital image capture device, in raw performance, lightens and desaturates the colors.
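
    The second-stage colorimetric transform is essentially a linear fit from gray-balanced RGB to measured XYZ. The sketch below shows a least-squares version of such a fit under that assumption; the training patches are invented numbers, not ColorChecker measurements from the paper.

    ```python
    # Hedged sketch: linear colour-correction matrix from RGB patches to CIE-XYZ (cd/m2).
    import numpy as np

    rgb = np.array([[0.21, 0.12, 0.09],        # grey-balanced camera responses per patch
                    [0.45, 0.44, 0.41],
                    [0.12, 0.20, 0.35],
                    [0.60, 0.33, 0.10]])
    xyz = np.array([[11.0,  9.8,  6.5],        # spectroradiometer reference values, cd/m2
                    [40.2, 42.1, 44.9],
                    [14.6, 16.9, 35.2],
                    [37.5, 30.1,  8.9]])

    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)   # 3x3 colour-correction matrix
    xyz_pred = rgb @ M
    print("fitted matrix:\n", M)
    print("worst absolute error:", np.abs(xyz_pred - xyz).max())
    ```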

  10. Characterization of a smartphone camera's response to ultraviolet A radiation.

    Science.gov (United States)

    Igoe, Damien; Parisi, Alfio; Carter, Brad

    2013-01-01

    As part of a wider study into the use of smartphones as solar ultraviolet radiation monitors, this article characterizes the ultraviolet A (UVA; 320-400 nm) response of a consumer complementary metal oxide semiconductor (CMOS)-based smartphone image sensor in a controlled laboratory environment. The CMOS image sensor in the camera possesses inherent sensitivity to UVA, and despite the attenuation due to the lens and neutral density and wavelength-specific bandpass filters, the measured UVA irradiances, relative to the incident irradiances, range from 0.0065% at 380 nm to 0.0051% at 340 nm. In addition, the sensor demonstrates a predictable response to low-intensity discrete UVA stimuli that can be modelled using the ratio of recorded digital values to the incident UVA irradiance for a given automatic exposure time, resulting in measurement errors that are typically less than 5%. Our results support the idea that smartphones can be used for scientific monitoring of UVA radiation. © 2012 Wiley Periodicals, Inc. Photochemistry and Photobiology © 2012 The American Society of Photobiology.
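
    Because the reported response is modelled as a ratio of recorded digital values to incident irradiance at a fixed automatic exposure time, the calibration reduces to a single division. The sketch below illustrates that relation with invented numbers; it is not the authors' calibration data.

    ```python
    # Hedged sketch: ratio-based calibration of a sensor reading against a known UVA source.
    incident_irradiance = 2.4       # W/m2 of UVA from a laboratory source (made-up value)
    recorded_digital_value = 118.0  # mean pixel value in the measurement region

    calibration_factor = recorded_digital_value / incident_irradiance   # DN per (W/m2)

    # Later, an unknown irradiance is recovered from a new reading at the same exposure time.
    new_reading = 87.0
    estimated_irradiance = new_reading / calibration_factor
    print(f"estimated UVA irradiance: {estimated_irradiance:.2f} W/m2")
    ```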

  11. Wearable Contact Lens Biosensors for Continuous Glucose Monitoring Using Smartphones.

    Science.gov (United States)

    Elsherif, Mohamed; Hassan, Mohammed Umair; Yetisen, Ali K; Butt, Haider

    2018-05-17

    Low-cost, robust, and reusable continuous glucose monitoring systems that can provide quantitative measurements in point-of-care settings are an unmet medical need. Optical glucose sensors require complex and time-consuming fabrication processes, and their readouts are not practical for quantitative analyses. Here, a wearable contact lens optical sensor was created for the continuous quantification of glucose at physiological conditions, simplifying the fabrication process and facilitating smartphone readouts. A photonic microstructure having a periodicity of 1.6 μm was printed on a glucose-selective hydrogel film functionalized with phenylboronic acid. Upon binding with glucose, the microstructure volume swelled, which modulated the periodicity constant. The resulting change in the Bragg diffraction modulated the space between the zero- and first-order spots. A correlation was established between the periodicity constant and glucose concentration within 0-50 mM. The sensitivity of the sensor was 12 nm mM⁻¹, and the saturation response time was less than 30 min. The sensor was integrated with commercial contact lenses and utilized for continuous glucose monitoring using smartphone camera readouts. The reflected power of the first-order diffraction was measured via a smartphone application and correlated to the glucose concentrations. A short response time of 3 s and a saturation time of 4 min was achieved in the continuous monitoring mode. Glucose-sensitive photonic microstructures may have applications in point-of-care continuous monitoring devices and diagnostics at home settings.
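
    Given the quoted linear sensitivity of 12 nm per mM, a period measurement maps directly to a glucose estimate. The sketch below inverts that linear relation; the baseline period and the measured value are illustrative assumptions, not data from the paper.

    ```python
    # Hedged sketch: glucose estimate from a measured shift of the microstructure period.
    SENSITIVITY_NM_PER_MM = 12.0       # reported sensitivity, nm of period change per mM
    BASELINE_PERIOD_NM = 1600.0        # 1.6 um printed periodicity, assumed at 0 mM glucose

    def glucose_mM(measured_period_nm: float) -> float:
        """Invert the linear period-vs-concentration relation within the 0-50 mM range."""
        return (measured_period_nm - BASELINE_PERIOD_NM) / SENSITIVITY_NM_PER_MM

    print(glucose_mM(1678.0))   # e.g. a 78 nm swelling-induced shift -> about 6.5 mM
    ```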

  12. Development, characterization, and modeling of a tunable filter camera

    Science.gov (United States)

    Sartor, Mark Alan

    1999-10-01

    This paper describes the development, characterization, and modeling of a Tunable Filter Camera (TFC). The TFC is a new multispectral instrument with electronically tuned spectral filtering and low-light-level sensitivity. It represents a hybrid between hyperspectral and multispectral imaging spectrometers that incorporates advantages from each, addressing issues such as complexity, cost, lack of sensitivity, and adaptability. These capabilities allow the TFC to be applied to low-altitude video surveillance for real-time spectral and spatial target detection and image exploitation. Described herein are the theory and principles of operation for the TFC, which includes a liquid crystal tunable filter, an intensified CCD, and a custom apochromatic lens. The results of proof-of-concept testing and the characterization of two prototype cameras are included, along with a summary of the design analyses for the development of a multiple-channel system. A significant result of this effort was the creation of a system-level model, which was used to facilitate development and predict performance. It includes models for the liquid crystal tunable filter and intensified CCD. Such modeling was necessary in the design of the system and is useful for evaluation of the system in remote-sensing applications. Also presented are characterization data from component testing, which included quantitative results for linearity, signal-to-noise ratio (SNR), and radiometric response. These data were used to help refine and validate the model. For a pre-defined source, the spatial and spectral response, and the noise of the camera system can now be predicted. The innovation that sets this development apart is the fact that this instrument has been designed for integrated, multi-channel operation for the express purpose of real-time detection/identification in low-light-level conditions. Many of the requirements for the TFC were derived from this mission. In order to provide

  13. Cine-servo lens technology for 4K broadcast and cinematography

    Science.gov (United States)

    Nurishi, Ryuji; Wakazono, Tsuyoshi; Usui, Fumiaki

    2015-09-01

    Central to the rapid evolution of 4K image capture technology in the past few years, deployment of large-format cameras with Super35mm Single Sensors is increasing in TV production for diverse shows such as dramas, documentaries, wildlife, and sports. While large format image capture has been the standard in the cinema world for quite some time, the recent experiences within the broadcast industry have revealed a variety of requirement differences for large format lenses compared to those of the cinema industry. A typical requirement for a broadcast lens is a considerably higher zoom ratio in order to avoid changing lenses in the middle of a live event, which is mostly not the case for traditional cinema productions. Another example is the need for compact size, light weight, and servo operability for a single camera operator shooting in a shoulder-mount ENG style. On the other hand, there are new requirements that are common to both worlds, such as smooth and seamless change in angle of view throughout the long zoom range, which potentially offers new image expression that never existed in the past. This paper will discuss the requirements from the two industries of cinema and broadcast, while at the same time introducing the new technologies and new optical design concepts applied to our latest "CINE-SERVO" lens series which presently consists of two models, CN7x17KAS-S and CN20x50IAS-H. It will further explain how Canon has realized 4K optical performance and fast servo control while simultaneously achieving compact size, light weight and high zoom ratio, by referring to patent-pending technologies such as the optical power layout, lens construction, and glass material combinations.

  14. Terahertz lens made out of natural stone.

    Science.gov (United States)

    Han, Daehoon; Lee, Kanghee; Lim, Jongseok; Hong, Sei Sun; Kim, Young Kie; Ahn, Jaewook

    2013-12-20

    Terahertz (THz) time-domain spectroscopy probes the optical properties of naturally occurring solid aggregates of minerals, or stones, in the THz frequency range. Refractive index and extinction coefficient measurement reveals that most natural stones, including mudstone, sandstone, granite, tuff, gneiss, diorite, slate, marble, and dolomite, are fairly transparent for THz frequency waves. Dolomite in particular exhibits a nearly uniform refractive index of 2.7 over the broad frequency range from 0.1 to 1 THz. The high index of refraction allows flexibility in lens designing with a shorter accessible focal length or a thinner lens with a given focal length. Good agreement between the experiment and calculation for the THz beam profile confirms that dolomite has high homogeneity as a lens material, suggesting the possibility of using natural stones for THz optical elements.

  15. Catadioptric aberration correction in cathode lens microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Tromp, R.M. [IBM T.J. Watson Research Center, PO Box 218, Yorktown Heights, NY 10598 (United States); Kamerlingh Onnes Laboratory, Leiden Institute of Physics, Niels Bohrweg 2, 2333 CA Leiden (Netherlands)

    2015-04-15

    In this paper I briefly review the use of electrostatic electron mirrors to correct the aberrations of the cathode lens objective lens in low energy electron microscope (LEEM) and photo electron emission microscope (PEEM) instruments. These catadioptric systems, combining electrostatic lens elements with a reflecting mirror, offer a compact solution, allowing simultaneous and independent correction of both spherical and chromatic aberrations. A comparison with catadioptric systems in light optics informs our understanding of the working principles behind aberration correction with electron mirrors, and may point the way to further improvements in the latter. With additional developments in detector technology, 1 nm spatial resolution in LEEM appears to be within reach. - Highlights: • The use of electron mirrors for aberration correction in LEEM/PEEM is reviewed. • A comparison is made with similar systems in light optics. • Conditions for 1 nm spatial resolution are discussed.

  16. Freeform micromachining of an infrared Alvarez lens

    Science.gov (United States)

    Smilie, Paul J.; Dutterer, Brian S.; Lineberger, Jennifer L.; Davies, Matthew A.; Suleski, Thomas J.

    2011-02-01

    In 1967, Luis Alvarez introduced a novel concept for a focusing lens whereby two transmitting elements with cubic polynomial surfaces yield a composite lens of variable focal length with small lateral shifts. Computer simulations have demonstrated the behavior of these devices, but fabricating the refractive cubic surfaces of the types needed with adequate precision and depth modulation has proven to be challenging using standard methods, and, to the authors' knowledge, Alvarez lens elements have not been previously machined in infrared materials. Recent developments in freeform diamond machining capability have enabled the fabrication of such devices. In this paper, we discuss the fabrication of freeform refractive Alvarez elements in germanium using diamond micro-milling on a five-axis Moore Nanotech® 350FG Freeform Generator. Machining approaches are discussed, and measurements of surface figure and finish are presented. Initial experimental tests of optical performance are also discussed.

  17. Invited review article: the electrostatic plasma lens.

    Science.gov (United States)

    Goncharov, Alexey

    2013-02-01

    The fundamental principles, experimental results, and potential applications of the electrostatic plasma lens for focusing and manipulating high-current, energetic, heavy ion beams are reviewed. First described almost 50 years ago, this optical beam device provides space charge neutralization of the ion beam within the lens volume, and thus provides an effective and unique tool for focusing high current beams where a high degree of neutralization is essential to prevent beam blow-up. Short and long lenses have been explored, and a lens in which the magnetic field is provided by rare-earth permanent magnets has been demonstrated. Applications include the use of this kind of optical tool for laboratory ion beam manipulation, high dose ion implantation, heavy ion accelerator injection, in heavy ion fusion, and other high technology.

  18. 3D printed helical antenna with lens

    KAUST Repository

    Farooqui, Muhammad Fahad

    2016-12-19

    The gain of an antenna can be enhanced through the integration of a lens, however this technique has traditionally been restricted to planar antennas due to fabrication limitations of standard manufacturing processes. Here, with a unique combination of 3D and 2D inkjet printing of dielectric and metallic inks respectively, we demonstrate a Fresnel lens that has been monolithically integrated to a non-planar antenna (helix) for the first time. Antenna measurements show that the integration of a Fresnel lens enhances the gain of a 2-turn helix by around 4.6 dB giving a peak gain of about 12.9 dBi at 8.8 GHz.

  19. Autonomous Multicamera Tracking on Embedded Smart Cameras

    Directory of Open Access Journals (Sweden)

    Bischof Horst

    2007-01-01

    There is currently a strong trend towards the deployment of advanced computer vision methods on embedded systems. This deployment is very challenging since embedded platforms often provide limited resources such as computing performance, memory, and power. In this paper we present a multicamera tracking method on distributed, embedded smart cameras. Smart cameras combine video sensing, processing, and communication on a single embedded device which is equipped with a multiprocessor computation and communication infrastructure. Our multicamera tracking approach focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which observes the object. Thus, no central coordination is required resulting in an autonomous and scalable tracking approach. We have fully implemented this novel multicamera tracking approach on our embedded smart cameras. Tracking is achieved by the well-known CamShift algorithm; the handover procedure is realized using a mobile agent system available on the smart camera network. Our approach has been successfully evaluated on tracking persons at our campus.
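
    The per-camera tracking step is the well-known CamShift algorithm; the sketch below shows a single-camera CamShift loop with a naive border-crossing test standing in for the handover trigger. The video source, initial window and the migrate_tracker_to_neighbour placeholder are assumptions, and the mobile-agent migration itself is not implemented.

    ```python
    # Hedged sketch: CamShift tracking in one camera with a placeholder handover trigger.
    import cv2
    import numpy as np

    def migrate_tracker_to_neighbour(window):
        print("object leaving the field of view - hand over tracking state:", window)

    cap = cv2.VideoCapture("camera0.avi")                 # hypothetical recording from one smart camera
    ok, frame = cap.read()
    track_window = (300, 200, 80, 120)                    # x, y, w, h of the object of interest
    x, y, w, h = track_window
    roi_hsv = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    roi_hist = cv2.calcHist([roi_hsv], [0], None, [180], [0, 180])
    cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        _, track_window = cv2.CamShift(back_proj, track_window, term_crit)
        x, y, w, h = track_window
        if x <= 0 or x + w >= frame.shape[1]:             # object reaches the image border
            migrate_tracker_to_neighbour(track_window)
            break
    ```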

  20. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-04-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables
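
    The kinematic core of the approach, expressing the tracked point in the camera frame through 4 x 4 homogeneous transforms and converting it to pan and tilt commands with a small deadband, can be sketched as follows. The camera pose, target position and frame conventions are assumed for illustration; only the 2-degree deadband value comes from the abstract.

    ```python
    # Hedged sketch: pan/tilt offsets from homogeneous transforms, with a deadband.
    import numpy as np

    def transform(R, t):
        """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    T_world_cam = transform(np.eye(3), np.array([0.0, 0.0, 1.5]))   # assumed camera pose in the world frame
    p_world = np.array([2.0, 0.7, 1.2, 1.0])                        # manipulator end-effector (homogeneous)

    # Express the target in the camera frame (assumed convention: z forward, y down).
    p_cam = np.linalg.inv(T_world_cam) @ p_world
    x, y, z = p_cam[:3]
    pan = np.degrees(np.arctan2(x, z))
    tilt = np.degrees(np.arctan2(-y, np.hypot(x, z)))

    DEADBAND_DEG = 2.0                     # small offsets are ignored to avoid constant motion
    pan_cmd = pan if abs(pan) > DEADBAND_DEG else 0.0
    tilt_cmd = tilt if abs(tilt) > DEADBAND_DEG else 0.0
    print(f"pan {pan_cmd:.1f} deg, tilt {tilt_cmd:.1f} deg")
    ```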

  1. Automatic camera tracking for remote manipulators

    International Nuclear Information System (INIS)

    Stoughton, R.S.; Martin, H.L.; Bentz, R.R.

    1984-07-01

    The problem of automatic camera tracking of mobile objects is addressed with specific reference to remote manipulators and using either fixed or mobile cameras. The technique uses a kinematic approach employing 4 x 4 coordinate transformation matrices to solve for the needed camera PAN and TILT angles. No vision feedback systems are used, as the required input data are obtained entirely from position sensors from the manipulator and the camera-positioning system. All hardware requirements are generally satisfied by currently available remote manipulator systems with a supervisory computer. The system discussed here implements linear plus on/off (bang-bang) closed-loop control with a ±2° deadband. The deadband area is desirable to avoid operator seasickness caused by continuous camera movement. Programming considerations for camera control, including operator interface options, are discussed. The example problem presented is based on an actual implementation using a PDP 11/34 computer, a TeleOperator Systems SM-229 manipulator, and an Oak Ridge National Laboratory (ORNL) camera-positioning system. 3 references, 6 figures, 2 tables

  2. New camera systems for fuel services

    International Nuclear Information System (INIS)

    Hummel, W.; Beck, H.J.

    2010-01-01

    AREVA NP Fuel Services have many years of experience in visual examination and measurements on fuel assemblies and associated core components using state-of-the-art cameras and measuring technologies. The techniques used allow the surface and dimensional characterization of materials and shapes by visual examination. New, enhanced and sophisticated technologies for fuel services include, for example, two shielded color camera systems for use under water and for close inspection of a fuel assembly. Nowadays the market requirements for detecting and characterizing small defects (smaller than a tenth of a millimetre) or cracks and for analyzing surface appearance on irradiated fuel rod cladding or fuel assembly structural parts have increased. Therefore it is common practice to use movie cameras with higher resolution. The radiation resistance of high-resolution CCD cameras is in general very low, and it is not possible to use them unshielded close to a fuel assembly. By extending the camera with a mirror system and shielding around the sensitive parts, the movie camera can be utilized for fuel assembly inspection. AREVA NP Fuel Services is now equipped with such movie cameras. (orig.)

  3. First results from the TOPSAT camera

    Science.gov (United States)

    Greenway, Paul; Tosh, Ian; Morris, Nigel; Burton, Gary; Cawley, Steve

    2017-11-01

    The TopSat camera is a low cost remote sensing imager capable of producing 2.5 metre resolution panchromatic imagery, funded by the British National Space Centre's Mosaic programme. The instrument was designed and assembled at the Space Science & Technology Department of the CCLRC's Rutherford Appleton Laboratory (RAL) in the UK, and was launched on the 27th October 2005 from Plesetsk Cosmodrome in Northern Russia on a Kosmos-3M. The camera utilises an off-axis three mirror system, which has the advantages of excellent image quality over a wide field of view, combined with a compactness that makes its overall dimensions smaller than its focal length. Keeping the costs to a minimum has been a major design driver in the development of this camera. The camera is part of the TopSat mission, which is a collaboration between four UK organisations; QinetiQ, Surrey Satellite Technology Ltd (SSTL), RAL and Infoterra. Its objective is to demonstrate provision of rapid response high resolution imagery to fixed and mobile ground stations using a low cost minisatellite. The paper "Development of the TopSat Camera" presented by RAL at the 5th ICSO in 2004 described the opto-mechanical design, assembly, alignment and environmental test methods implemented. Now that the spacecraft is in orbit and successfully acquiring images, this paper presents the first results from the camera and makes an initial assessment of the camera's in-orbit performance.

  4. State of art in radiation tolerant camera

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Young Soo; Kim, Seong Ho; Cho, Jae Wan; Kim, Chang Hoi; Seo, Young Chil

    2002-02-01

    Working in radiation environments such as nuclear power plants, RI facilities, nuclear fuel fabrication facilities and medical centers requires consideration of radiation exposure, and such jobs can be carried out by remote observation and operation. However, cameras used in general industry are degraded by radiation, so radiation-tolerant cameras are needed for radiation environments. Radiation-tolerant camera systems are applied in the nuclear industry, radioactive medicine, aerospace, and so on. In the nuclear industry especially, there is continuous demand for the inspection of nuclear boilers, the exchange of pellets and the inspection of nuclear waste. The nuclear developed countries have made efforts to develop radiation-tolerant cameras, and they now have many kinds of radiation-tolerant cameras which can tolerate total doses of 10^6-10^8 rad. In this report, we examine the state of the art in radiation-tolerant cameras and analyze these technologies. With this paper we want to raise interest in developing radiation-tolerant cameras and upgrade the level of domestic technology.

  5. Analytic models of plausible gravitational lens potentials

    International Nuclear Information System (INIS)

    Baltz, Edward A.; Marshall, Phil; Oguri, Masamune

    2009-01-01

    Gravitational lenses on galaxy scales are plausibly modelled as having ellipsoidal symmetry and a universal dark matter density profile, with a Sérsic profile to describe the distribution of baryonic matter. Predicting all lensing effects requires knowledge of the total lens potential: in this work we give analytic forms for that of the above hybrid model. Emphasising that complex lens potentials can be constructed from simpler components in linear combination, we provide a recipe for attaining elliptical symmetry in either projected mass or lens potential. We also provide analytic formulae for the lens potentials of Sérsic profiles for integer and half-integer index. We then present formulae describing the gravitational lensing effects due to smoothly-truncated universal density profiles in cold dark matter model. For our isolated haloes the density profile falls off as radius to the minus fifth or seventh power beyond the tidal radius, functional forms that allow all orders of lens potential derivatives to be calculated analytically, while ensuring a non-divergent total mass. We show how the observables predicted by this profile differ from that of the original infinite-mass NFW profile. Expressions for the gravitational flexion are highlighted. We show how decreasing the tidal radius allows stripped haloes to be modelled, providing a framework for a fuller investigation of dark matter substructure in galaxies and clusters. Finally we remark on the need for finite mass halo profiles when doing cosmological ray-tracing simulations, and the need for readily-calculable higher order derivatives of the lens potential when studying catastrophes in strong lenses

  6. Photonic crystal based polarization insensitive flat lens

    International Nuclear Information System (INIS)

    Turduev, M; Bor, E; Kurt, H

    2017-01-01

    The paper proposes a new design of an inhomogeneous, artificially created photonic crystal lens structure consisting of annular dielectric rods to efficiently focus both transverse electric and transverse magnetic polarizations of light into the same focal point. The locations of the individual cells that contain the annular dielectric rods are determined according to a nonlinear distribution function. The inner and outer radii of the annular photonic dielectric rods are optimized with respect to the polarization-insensitive frequency response of the transmission spectrum of the lens structure. The physical background of the polarization-insensitive focusing mechanism is investigated in both the spatial and frequency domains. Moreover, polarization-independent wavefront transformation/focusing is explored in detail by investigating the dispersion relation of the structure. The corresponding phase index distribution of the lens is attained for the polarization-insensitive normalized frequency range between a/λ = 0.280 and a/λ = 0.300, where a denotes the lattice constant of the designed structure and λ the wavelength of the incident light. We show the wave transformation performance and focal point movement dynamics for both polarizations by specially adjusting the length of the structure. 3D finite-difference time-domain numerical analysis is also performed to verify that the proposed design is able to focus the wave regardless of polarization into approximately the same focal point (the difference between the focal distances of the two polarizations stays below 0.25λ) with an operating bandwidth of 4.30% between 1476 nm and 1541 nm at telecom wavelengths. The main advantages of the proposed lens structure are that it is all-dielectric and compact and has flat front and back surfaces, rendering the design more practical for photonic integration in various applications such as optical switch

  7. Effect of infrared radiation on the lens

    Directory of Open Access Journals (Sweden)

    Aly Eman

    2011-01-01

    Background: Infrared (IR) radiation is becoming more common in industrial manufacturing processes and in many instruments used for diagnostic and therapeutic application to the human eye. Aim: The present study was designed to investigate the effect of IR radiation on the rabbit's crystalline lens and lens membrane. Materials and Methods: Fifteen New Zealand rabbits were used in the present work. The rabbits were classified into three groups, one of which served as control. The other two groups were exposed to IR radiation for 5 or 10 minutes. Animals from these two irradiated groups were subdivided into two subgroups; one was decapitated directly after IR exposure, while the other was decapitated 1 hour post exposure. IR was delivered from a General Electric lamp model 250R 50/10, placed 20 cm from the rabbit and aimed at each eye. The activity of Na+-K+ ATPase was measured in the lens membrane. Soluble lens proteins were extracted and the following measurements were carried out: estimation of total soluble protein, sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and Fourier transform infrared (FTIR) spectroscopy. For comparison between multiple groups, analysis of variance was used with the significance level set at P < 0.001. Results: The results indicated a change in the molecular weight of different lens crystallins accompanied by changes in protein backbone structure. These changes increased for the groups exposed to IR for 10 minutes. Moreover, the activity of Na+-K+ ATPase significantly decreased in all groups. Conclusions: The protein of the eye lens is very sensitive to IR radiation, which is hazardous and may lead to cataract.

  8. ANALYTICAL SOLUTIONS OF SINGULAR ISOTHERMAL QUADRUPOLE LENS

    International Nuclear Information System (INIS)

    Chu Zhe; Lin, W. P.; Yang Xiaofeng

    2013-01-01

    Using an analytical method, we study the singular isothermal quadrupole (SIQ) lens system, which is the simplest lens model that can produce four images. In this case, the radial mass distribution is in accord with the profile of the singular isothermal sphere lens, and the tangential distribution is given by adding a quadrupole on the monopole component. The basic properties of the SIQ lens have been studied in this Letter, including the deflection potential, deflection angle, magnification, critical curve, caustic, pseudo-caustic, and transition locus. Analytical solutions of the image positions and magnifications for the source on axes are derived. We find that naked cusps will appear when the relative intensity k of quadrupole to monopole is larger than 0.6. According to the magnification invariant theory of the SIQ lens, the sum of the signed magnifications of the four images should be equal to unity, as found by Dalal. However, if a source lies in the naked cusp, the summed magnification of the left three images is smaller than the invariant 1. With this simple lens system, we study the situations where a point source infinitely approaches a cusp or a fold. The sum of the magnifications of the cusp image triplet is usually not equal to 0, and it is usually positive for major cusps while negative for minor cusps. Similarly, the sum of magnifications of the fold image pair is usually not equal to 0 either. Nevertheless, the cusp and fold relations are still equal to 0 in that the sum values are divided by infinite absolute magnifications by definition.

  9. Intraocular Lens Calcification; a Clinicopathologic Report

    Directory of Open Access Journals (Sweden)

    Mozhgan Rezaei-Kanavi

    2009-04-01

    PURPOSE: To describe the clinical and pathological features of a case of hydrogel intraocular lens (IOL) calcification. CASE REPORT: A 48-year-old man underwent explantation of a single-piece hydrophilic acrylic intraocular lens in his left eye because of decreased visual acuity and milky white opalescence of the IOL. The opacified lens was exchanged uneventfully with a hydrophobic acrylic IOL. Gross examination of the explanted IOL disclosed opacification of the optic and haptics. Full-thickness sections of the lens optic were stained with hematoxylin and eosin (H&E), von Kossa, and Gram Tworts' stains. Microscopic examination of the sections revealed fine and diffuse basophilic granular deposits of variable size within the lens optic, parallel to the lens curvature but separated from the surface by a moderately clear zone. The deposits were of high calcium content, as evident from dark brown staining with von Kossa. Gram Tworts' staining disclosed no microorganisms. CONCLUSION: This report further contributes to the existing literature on hydrogel IOL calcification.

  10. Comparison of clear lens extraction and collamer lens implantation in high myopia

    Directory of Open Access Journals (Sweden)

    Ahmed M Emarah

    2010-05-01

    Ahmed M Emarah, Mostafa A El-Helw, Hazem M Yassin; Cairo University, Cairo, Egypt. Aim: To compare the outcomes of clear lens extraction and collamer lens implantation in high myopia. Patients and methods: Myopic patients younger than 40 years old with more than 12 diopters of myopia, or who were not fit for laser-assisted in situ keratomileusis, were included. Group 1 comprised patients undergoing clear lens extraction and Group 2 patients received the Visian implantable collamer lens. Outcomes and complications were evaluated. Results: Postoperative best corrected visual acuity was -0.61 ± 0.18 in Group 1 and 0.79 ± 0.16 in Group 2. In Group 1, 71.4% achieved a postoperative uncorrected visual acuity better than the preoperative best corrected visual acuity, while only 51.8% of patients achieved this in Group 2. Intraocular pressure decreased by 12.55% in Group 1 and increased by 15.11% in Group 2. Corneal endothelial cell density decreased by 4.47% in Group 1 and by 5.67% in Group 2. Posterior capsule opacification occurred in Group 1. In Group 2, lens opacification occurred in 11.11%, significant pigment dispersion in 3.7%, and pupillary block glaucoma in 3.7%. Conclusion: Clear lens extraction presents less of a financial load up front and less likelihood of the need for a secondary intervention in the future. Clear lens extraction is a more viable solution in developing countries with limited financial resources. Keywords: clear lens extraction, implantable collamer lens, myopia

  11. Pulse transformer for the AA lithium lens

    CERN Multimedia

    CERN PhotoLab

    1980-01-01

    The antiprotons emanating from the target were initially focused by a magnetic horn. Later on, a Li-lens was used during operation for the SPS collider, until 1992. A Li-rod (130 mm long, 34 mm in diameter) constituted the secondary of a 1:23 pulse-transformer. The half-sine pulse rose to 1000 kA in 900 microsec. The angular acceptance was 95 mrad. In operation after 1992, for LEAR only, a more modest Li-lens was used (155 mm long, diameter 20 mm, 480 kA, risetime 240 microsec, angular acceptance 75 mrad).

  12. Herniation of the anterior lens capsule

    Directory of Open Access Journals (Sweden)

    Pereira Nolette

    2007-01-01

    Herniation of the anterior lens capsule is a rare abnormality in which the capsule bulges forward in the pupillary area. This herniation can be mistaken for an anterior lenticonus, where both the capsule and the cortex bulge forward. The exact pathology behind this finding is still unclear. We report the clinical, ultrasound biomicroscopy (UBM) and histopathological findings of a case of herniation of the anterior lens capsule. UBM helped to differentiate this entity from anterior lenticonus. Light microscopy revealed capsular splitting suggestive of capsular delamination and a collection of fluid (aqueous) in the area of herniation, giving it a characteristic appearance.

  13. Fuzzy logic control for camera tracking system

    Science.gov (United States)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.

  14. A Benchmark for Virtual Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Yannakakis, Georgios N.

    2015-01-01

    Automatically animating and placing the virtual camera in a dynamic environment is a challenging task. The camera is expected to maximise and maintain a set of properties — i.e. visual composition — while smoothly moving through the environment and avoiding obstacles. A large number of different....... For this reason, in this paper, we propose a benchmark for the problem of virtual camera control and we analyse a number of different problems in different virtual environments. Each of these scenarios is described through a set of complexity measures and, as a result of this analysis, a subset of scenarios...

  15. Scintillation camera with second order resolution

    International Nuclear Information System (INIS)

    Muehllehner, G.

    1976-01-01

    A scintillation camera for use in radioisotope imaging to determine the concentration of radionuclides in a two-dimensional area is described in which means is provided for second order positional resolution. The phototubes, which normally provide only a single order of resolution, are modified to provide second order positional resolution of radiation within an object positioned for viewing by the scintillation camera. The phototubes are modified in that multiple anodes are provided to receive signals from the photocathode in a manner such that each anode is particularly responsive to photoemissions from a limited portion of the photocathode. Resolution of radioactive events appearing as an output of this scintillation camera is thereby improved

  16. Immunochemical analyses of soluble lens proteins in some marine fishes

    Digital Repository Service at National Institute of Oceanography (India)

    Menezes, M.R.

    Soluble eye lens proteins of 10 fishes, belonging to the families Clupeidae, Hemirhamphidae, Lactaridae, Scombridae, Stromatidae, Psettodidae, Bothidae and Soleidae were studied by immunoelectrophoresis using the lens antiserum of Sardinella...

  17. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to the use of cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a great presence of 'digital cameras' aimed more at the home markets. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs which offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application would be such effects as 'pixel jitter,' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog to digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  18. Chapter 03: Correct use of a hand lens

    Science.gov (United States)

    Alex Wiedenhoeft

    2011-01-01

    A hand lens is a powerful tool for the identification of wood, but like all tools it must be used correctly to take full advantage of its powers. The hand lens has two main parts, a lens that magnifies the object of interest (generally we use 10X or 14X lenses in wood identification; a 14X lens is recommended for use with this manual) and a housing to hold and protect...

  19. Application of X-ray CCD camera in X-ray spot diagnosis of rod-pinch diode

    International Nuclear Information System (INIS)

    Song Yan; Zhou Ming; Song Guzhou; Ma Jiming; Duan Baojun; Han Changcai; Yao Zhiming

    2015-01-01

    The pinhole imaging technique is widely used in the measurement of the X-ray spot of a rod-pinch diode. An X-ray CCD camera, composed of a film, a fiber-optic taper and a CCD camera, was employed to replace the imaging system based on a scintillator, lens and CCD camera in the diagnosis of the X-ray spot. The resolution of the X-ray CCD camera was studied. The resolution is restricted by the film and is 5 lp/mm in the test with a Pb resolution chart. The frequency is 1.5 lp/mm when the MTF is 0.5 in the test with an edge image. The resolution tests indicate that the X-ray CCD camera can meet the requirement of diagnosing an X-ray spot whose scale is about 1.5 mm when the pinhole imaging magnification is 0.5. Finally, the image of the X-ray spot was obtained and image restoration was applied in the diagnosis of the X-ray spot of the rod-pinch diode. (authors)

  20. A test of lens opacity as an indicator of preclinical Alzheimer Disease.

    Science.gov (United States)

    Bei, Ling; Shui, Ying-Bo; Bai, Fang; Nelson, Suzanne K; Van Stavern, Gregory P; Beebe, David C

    2015-11-01

    Previous studies reported that characteristic lens opacities were present in Alzheimer Disease (AD) patients postmortem. We therefore determined whether cataract grade or lens opacity is related to the risk of Alzheimer dementia in participants who have biomarkers that predict a high risk of developing the disease. AD biomarker status was determined by positron emission tomography-Pittsburgh compound B (PET-PiB) imaging and cerebrospinal fluid (CSF) levels of Aβ42. Cognitively normal participants with a clinical dementia rating of zero (CDR = 0; N = 40) or with slight evidence of dementia (CDR = 0.5; N = 2) were recruited from longitudinal studies of memory and aging at the Washington University Knight Alzheimer's Disease Research Center. The age, sex, race, cataract type and cataract grade of all participants were recorded and an objective measure of lens light scattering was obtained for each eye using a Scheimpflug camera. Twenty-seven participants had no biomarkers of Alzheimer dementia and were CDR = 0. Fifteen participants had biomarkers indicating increased risk of AD, two of which were CDR = 0.5. Participants who were biomarker positive were older than those who were biomarker negative. Biomarker positive participants had more advanced cataracts and increased cortical light scattering, none of which reached statistical significance after adjustment for age. We conclude that cataract grade or lens opacity is unlikely to provide a non-invasive measure of the risk of developing Alzheimer dementia. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    Science.gov (United States)

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently polar shadowed and sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.

  2. Gamma camera performance: technical assessment protocol

    International Nuclear Information System (INIS)

    Bolster, A.A.; Waddington, W.A.

    1996-01-01

    This protocol addresses the performance assessment of single and dual headed gamma cameras. No attempt is made to assess the performance of any associated computing systems. Evaluations are usually performed on a gamma camera commercially available within the United Kingdom and recently installed at a clinical site. In consultation with the manufacturer, GCAT selects the site and liaises with local staff to arrange a mutually convenient time for assessment. The manufacturer is encouraged to have a representative present during the evaluation. Three to four days are typically required for the evaluation team to perform the necessary measurements. When access time is limited, the team will modify the protocol to test the camera as thoroughly as possible. Data are acquired on the camera's computer system and are subsequently transferred to the independent GCAT computer system for analysis. This transfer from site computer to the independent system is effected via a hardware interface and Interfile data transfer. (author)

  3. Camera Based Navigation System with Augmented Reality

    Directory of Open Access Journals (Sweden)

    M. Marcu

    2012-06-01

    Full Text Available Nowadays smart mobile devices have enough processing power, memory, storage and always-connected wireless communication bandwidth, which makes them suitable for any type of application. Augmented reality (AR) proposes a new type of application that tries to enhance the real world by superimposing or combining virtual objects or computer-generated information with it. In this paper we present a camera-based navigation system with augmented reality integration. The proposed system works as follows: the user points the camera of the smartphone towards a point of interest, like a building or any other place, and the application searches for relevant information about that specific place and superimposes the data over the video feed on the display. When the user moves the camera away, changing its orientation, the data changes as well, in real time, with the proper information about the place that is now in the camera view.

  4. Portable mini gamma camera for medical applications

    CERN Document Server

    Porras, E; Benlloch, J M; El-Djalil-Kadi-Hanifi, M; López, S; Pavon, N; Ruiz, J A; Sánchez, F; Sebastiá, A

    2002-01-01

    A small, portable and low-cost gamma camera for medical applications has been developed and clinically tested. This camera, based on a scintillator crystal and a Position Sensitive Photo-Multiplier Tube, has a useful field of view of 4.6 cm diameter and provides 2.2 mm of intrinsic spatial resolution. Its mobility and light weight allow it to reach the patient from any desired direction. This camera images small organs with high efficiency and so addresses the demand for devices for specific clinical applications. In this paper, we present the camera and briefly describe the procedures that have led us to choose its configuration and the image reconstruction method. The clinical tests and diagnostic capability are also presented and discussed.

  5. 21 CFR 886.1405 - Ophthalmic trial lens set.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Ophthalmic trial lens set. 886.1405 Section 886...) MEDICAL DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1405 Ophthalmic trial lens set. (a) Identification. An ophthalmic trial lens set is a device that is a set of lenses of various dioptric powers...

  6. 21 CFR 886.1420 - Ophthalmic lens gauge.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Ophthalmic lens gauge. 886.1420 Section 886.1420...) MEDICAL DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1420 Ophthalmic lens gauge. (a) Identification. An ophthalmic lens gauge is a calibrated device intended to manually measure the curvature of a...

  7. 21 CFR 886.1410 - Ophthalmic trial lens clip.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Ophthalmic trial lens clip. 886.1410 Section 886...) MEDICAL DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1410 Ophthalmic trial lens clip. (a) Identification. An ophthalmic trial lens clip is a device intended to hold prisms, spheres, cylinders, or...

  8. 21 CFR 886.1380 - Diagnostic condensing lens.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Diagnostic condensing lens. 886.1380 Section 886...) MEDICAL DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1380 Diagnostic condensing lens. (a) Identification. A diagnostic condensing lens is a device used in binocular indirect ophthalmoscopy (a procedure...

  9. 21 CFR 886.1415 - Ophthalmic trial lens frame.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Ophthalmic trial lens frame. 886.1415 Section 886...) MEDICAL DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1415 Ophthalmic trial lens frame. (a) Identification. An opthalmic trial lens frame is a mechanical device intended to hold trial lenses for vision...

  10. In vivo study of lens regeneration in Rana cyanophlyctis under ...

    African Journals Online (AJOL)

    SAM

    2014-03-12

    Mar 12, 2014 ... enhanced the percentage lens regeneration not only in young tadpoles but also in froglets. Lens regeneration ability ... Influence of vitamin A and ascorbic acid on lens regeneration in young, mature tadpoles and froglets of the frog Rana cyanophlyctis. Group .... ingested by macrophages. Dorsal iris cells ...

  11. An imaging system for a gamma camera

    International Nuclear Information System (INIS)

    Miller, D.W.; Gerber, M.S.

    1980-01-01

    A detailed description is given of a novel gamma camera which is designed to produce superior images to those of conventional cameras used in nuclear medicine. The detector consists of a solid state detector (e.g. germanium) which is formed to have a plurality of discrete components to enable 2-dimensional position identification. Details of the electronic processing circuits are given and the problems and limitations introduced by noise are discussed in full. (U.K.)

  12. Semiotic Analysis of Canon Camera Advertisements

    OpenAIRE

    INDRAWATI, SUSAN

    2015-01-01

    Keywords: Semiotic Analysis, Canon Camera, Advertisement. Advertisement is a medium for delivering a message to people with the goal of influencing them to use certain products. Semiotics is applied to develop a correlation among the elements used in an advertisement. In this study, the writer chose the semiotic analysis of Canon camera advertisements as the subject to be analyzed, using a semiotic study based on Peirce's theory. The semiotic approach is employed in interpreting the sign, symbol, icon, and index ...

  13. Imaging camera with multiwire proportional chamber

    International Nuclear Information System (INIS)

    Votruba, J.

    1980-01-01

    The camera for imaging radioisotope distributions for use in nuclear medicine or for other applications, claimed in the patent, is provided with two multiwire lattices for the x-coordinate connected to a first coincidence circuit, and with two multiwire lattices for the y-coordinate connected to a second coincidence circuit. This arrangement eliminates the need for a collimator and increases camera sensitivity while reducing production cost. (Ha)

  14. Imaging capabilities of germanium gamma cameras

    International Nuclear Information System (INIS)

    Steidley, J.W.

    1977-01-01

    Quantitative methods of analysis based on the use of a computer simulation were developed and used to investigate the imaging capabilities of germanium gamma cameras. The main advantage of the computer simulation is that the inherent unknowns of clinical imaging procedures are removed from the investigation. The effects of patient scattered radiation were incorporated using a mathematical LSF model which was empirically developed and experimentally verified. Image modifying effects of patient motion, spatial distortions, and count rate capabilities were also included in the model. Spatial domain and frequency domain modeling techniques were developed and used in the simulation as required. The imaging capabilities of gamma cameras were assessed using low contrast lesion source distributions. The results showed that an improvement in energy resolution from 10% to 2% offers significant clinical advantages in terms of improved contrast, increased detectability, and reduced patient dose. The improvements are of greatest significance for small lesions at low contrast. The results of the computer simulation were also used to compare a design of a hypothetical germanium gamma camera with a state-of-the-art scintillation camera. The computer model performed a parametric analysis of the interrelated effects of inherent and technological limitations of gamma camera imaging. In particular, the trade-off between collimator resolution and collimator efficiency for detection of a given low contrast lesion was directly addressed. This trade-off is an inherent limitation of both gamma cameras. The image degrading effects of patient motion, camera spatial distortions, and low count rate were shown to modify the improvements due to better energy resolution. Thus, based on this research, the continued development of germanium cameras to the point of clinical demonstration is recommended
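
    As a toy illustration of the kind of spatial-domain modelling described above (a sketch under assumed numbers, not the thesis model): an ideal low-contrast lesion profile is blurred with a Gaussian line-spread function, and a broader LSF, i.e. more accepted scatter, visibly reduces the retained lesion contrast.

        # Sketch: blur an ideal low-contrast lesion profile with a Gaussian line-spread
        # function (LSF) and compare the retained lesion contrast.  Profile, LSF widths
        # and contrast values are illustrative assumptions, not values from the thesis.
        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def retained_contrast(lsf_sigma_mm, lesion_contrast=0.10, lesion_width_mm=10.0):
            dx = 0.5                                            # 0.5 mm sampling
            x = np.arange(-50.0, 50.0, dx)
            obj = 1.0 + lesion_contrast * (np.abs(x) < lesion_width_mm / 2)
            img = gaussian_filter1d(obj, sigma=lsf_sigma_mm / dx, mode="nearest")
            peak, baseline = img[len(x) // 2], img[0]           # lesion centre vs. background
            return (peak - baseline) / (peak + baseline)

        print(retained_contrast(4.0))    # narrow LSF (good scatter rejection): more contrast kept
        print(retained_contrast(12.0))   # broad LSF (more scatter): contrast is washed out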

  15. Compact Optical Technique for Streak Camera Calibration

    International Nuclear Information System (INIS)

    Curt Allen; Terence Davies; Frans Janson; Ronald Justin; Bruce Marshall; Oliver Sweningsen; Perry Bell; Roger Griffith; Karla Hagans; Richard Lerche

    2004-01-01

    The National Ignition Facility is under construction at the Lawrence Livermore National Laboratory for the U.S. Department of Energy Stockpile Stewardship Program. Optical streak cameras are an integral part of the experimental diagnostics instrumentation. To accurately reduce data from the streak cameras a temporal calibration is required. This article describes a technique for generating trains of precisely timed short-duration optical pulses that are suitable for temporal calibrations

  16. Vitrectorhexis and lens aspiration with posterior chamber intraocular lens implantation in spherophakia.

    Science.gov (United States)

    Al-Haddad, Christiane; Khatib, Lama

    2012-07-01

    We describe a technique that uses the vitrector to perform successful lens aspiration and posterior chamber intraocular lens (IOL) implantation in children with spherophakia and anterior lens subluxation. After an anterior chamber maintainer is placed, the ocutome is introduced through a limbal incision to perform a circular vitrectorhexis to avoid excessive manipulation of the unstable lens followed by gentle cortex aspiration. A foldable IOL is injected into the sulcus (3-piece IOL) or bag (1-piece IOL) if the capsule is sufficiently stable. Through a pars plana incision, the ocutome is then used to perform a posterior capsulotomy to prevent late posterior capsule opacification. In our patient, sulcus IOL placement was more stable than in-the-bag placement. Neither author has a financial or proprietary interest in any material or method mentioned. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  17. 3D printed helical antenna with lens

    KAUST Repository

    Farooqui, Muhammad Fahad; Shamim, Atif

    2016-01-01

    of 3D and 2D inkjet printing of dielectric and metallic inks respectively, we demonstrate a Fresnel lens that has been monolithically integrated to a non-planar antenna (helix) for the first time. Antenna measurements show that the integration of a

  18. Corneal ring infiltration in contact lens wearers

    Directory of Open Access Journals (Sweden)

    Seyed Ali Tabatabaei

    2017-01-01

    Full Text Available We report a case of atypical sterile ring infiltrate during soft silicone hydrogel contact lens wear due to poor lens care. A 29-year-old woman presented with complaints of pain, redness, and morning discharge. She had been wearing soft silicone hydrogel contact lenses; her current symptoms began 1 week before presentation. On examination, best-corrected visual acuity was 20/40 in that eye. Slit-lamp examination revealed a dense, ring-shaped infiltrate involving both the superficial and deep stromal layers with a lucid interval to the limbus, edema of the epithelium, an epithelial defect, and vascularization of the superior limbus. Cornea-specific in vivo laser confocal microscopy (Heidelberg Retina Tomograph 2 Rostock Cornea Module, HRT 2-RCM, Heidelberg Engineering GmbH, Dossenheim, Germany) revealed Langerhans cells and no sign of Acanthamoeba or fungal features. With lid scraping and anti-inflammatory drops, her vision completely recovered. We report an atypical case of a sterile corneal ring infiltrate associated with soft contact lens wear; smear, culture, and confocal microscopy confirmed a sterile inflammatory reaction.

  19. Surgical treatment of hereditary lens subluxations.

    Science.gov (United States)

    Ozdek, Sengul; Sari, Ayca; Bilgihan, Kamil; Akata, Fikret; Hasanreisoglu, Berati

    2002-01-01

    To evaluate the effectiveness and results of a pars plana vitreolensectomy approach with transscleral fixation of an intraocular lens in hereditary lens subluxations. Fifteen eyes of 9 consecutive patients with a mean age of 12.8+/-6.2 years (6-26 years) with hereditary lens subluxation were operated on and the results were evaluated in a prospective study. Surgery was considered if best spectacle corrected visual acuity (BSCVA) was less than 20/70. All eyes underwent a 2-port pars plana vitreolensectomy and transscleral fixation of an intraocular lens (IOL). The mean follow-up period was 12.6+/-7.5 months (6-22 months). There was no major intraoperative complication. Preoperatively, 8 eyes (53.3%) had a BSCVA of counting fingers (CF) and 7 eyes (46.6%) had a BSCVA of 20/200 to 20/70. Postoperatively, 14 eyes (93.3%) had a BSCVA of 20/50 or better. None of the patients had IOL decentration or an intraocular pressure (IOP) increase during the follow-up period. There was a macular hole formation in 1 eye postoperatively. The early results of pars plana vitreolensectomy with IOL implantation using the scleral fixation technique show that it not only promises rapid visual rehabilitation but is also a relatively safe method. More serious complications, however, may occur in the long term.

  20. Lens Ray Diagrams with a Spreadsheet

    Science.gov (United States)

    González, Manuel I.

    2018-01-01

    Physicists create spreadsheets customarily to carry out numerical calculations and to display their results in a meaningful, nice-looking way. Spreadsheets can also be used to display a vivid geometrical model of a physical system. This statement is illustrated with an example taken from geometrical optics: images formed by a thin lens. A careful…
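
    The relation such a spreadsheet tabulates is the thin-lens equation, 1/f = 1/d_o + 1/d_i; a minimal stand-alone version of the same calculation (real-is-positive sign convention assumed) is sketched below.

        # Thin-lens image position and magnification, the relation a ray-diagram
        # spreadsheet tabulates: 1/f = 1/d_o + 1/d_i  (real-is-positive convention).

        def thin_lens_image(f_mm, d_o_mm):
            """Return (image distance, lateral magnification) for object distance d_o."""
            if d_o_mm == f_mm:
                raise ValueError("object at the focal point: image at infinity")
            d_i_mm = 1.0 / (1.0 / f_mm - 1.0 / d_o_mm)
            m = -d_i_mm / d_o_mm        # negative magnification = inverted image
            return d_i_mm, m

        # Example: f = 50 mm lens, object 200 mm away -> real, inverted, reduced image.
        print(thin_lens_image(50.0, 200.0))   # (~66.7 mm, ~-0.33)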

  1. Electronic states in a quantum lens

    International Nuclear Information System (INIS)

    Rodriguez, Arezky H.; Trallero-Giner, C.; Ulloa, S. E.; Marin-Antuna, J.

    2001-01-01

    We present a model to find analytically the electronic states in self-assembled quantum dots with a truncated spherical cap ('lens') geometry. A conformal analytical image is designed to map the quantum dot boundary into a dot with semispherical shape. The Hamiltonian for a carrier confined in the quantum lens is correspondingly mapped into an equivalent operator and its eigenvalues and eigenfunctions for the corresponding Dirichlet problem are analyzed. A modified Rayleigh-Schrödinger perturbation theory is presented to obtain analytical expressions for the energy levels and wave functions as a function of the spherical cap height b and radius a of the circular cross section. Calculations for a hard wall confinement potential are presented, and the effect of decreasing symmetry on the energy values and eigenfunctions of the lens-shape quantum dot is studied. As the degeneracies of a semicircular geometry are broken for b≠a, our perturbation approach allows tracking of the split states. Energy states and electronic wave functions with m=0 present the most pronounced influence on the reduction of the lens height. The method and expressions presented here can be straightforwardly extended to deal with more general Hamiltonians, including strains and valence-band coupling effects in Group III-V and Group II-VI self-assembled quantum dots
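
    For orientation only (textbook expressions, not the modified scheme of the paper): non-degenerate Rayleigh-Schrödinger perturbation theory for a Hamiltonian split as H = H_0 + H' gives, to second order in the energy and first order in the wave function,

        \[
        E_n \approx E_n^{(0)} + \langle \psi_n^{(0)} | H' | \psi_n^{(0)} \rangle
            + \sum_{m \neq n} \frac{\left| \langle \psi_m^{(0)} | H' | \psi_n^{(0)} \rangle \right|^2}{E_n^{(0)} - E_m^{(0)}},
        \qquad
        \psi_n \approx \psi_n^{(0)}
            + \sum_{m \neq n} \frac{\langle \psi_m^{(0)} | H' | \psi_n^{(0)} \rangle}{E_n^{(0)} - E_m^{(0)}}\, \psi_m^{(0)} .
        \]

    The semispherical reference problem is degenerate, so the paper's modified scheme presumably replaces this non-degenerate form where needed, which is what allows the split states to be tracked as b departs from a.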

  2. The Use of Camera Traps in Wildlife

    Directory of Open Access Journals (Sweden)

    Yasin Uçarlı

    2013-11-01

    Full Text Available Camera traps are increasingly used in abundance and density estimates of wildlife species. Camera traps are a very good alternative to direct observation, particularly in steep terrain, densely vegetated areas, or for nocturnal species. The main reason for using camera traps is that they eliminate economic, personnel and time losses while recording continuously and at different points at the same time. Camera traps are motion and heat sensitive and, depending on the model, can take photos or video. Crossover points and feeding or mating areas of the focal species are prioritized as camera trap locations. The population size can be estimated from the images combined with capture-recapture methods, and the population density is then the population size divided by the effective sampling area. Mating and breeding season, habitat choice, group structure and survival rates of the focal species can also be derived from the images. Camera traps are thus a very useful and economical way to obtain the necessary data about particularly elusive species for planning and conservation efforts.

  3. Towards next generation 3D cameras

    Science.gov (United States)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), the presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in the widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing 'all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover the shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed 3D shape at the level required by applications such as robotic inspection and assembly systems.
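
    As a reminder of the principle behind the time-of-flight cameras mentioned above (a generic sketch, not the speaker's designs): depth follows either from the round-trip delay of the emitted light, d = c t / 2, or from the phase shift of an amplitude-modulated signal.

        # Generic time-of-flight depth relations (not specific to the systems above).
        import math

        C = 299_792_458.0  # speed of light, m/s

        def depth_from_round_trip(t_seconds):
            """Depth from a directly timed round trip: d = c * t / 2."""
            return C * t_seconds / 2.0

        def depth_from_phase(phase_rad, mod_freq_hz):
            """Depth from the phase shift of amplitude-modulated light,
            valid within the unambiguous range c / (2 * f_mod)."""
            return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

        print(depth_from_round_trip(10e-9))         # 10 ns round trip -> ~1.5 m
        print(depth_from_phase(math.pi / 2, 20e6))  # quarter cycle at 20 MHz -> ~1.87 m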

  4. Multi-Angle Snowflake Camera Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Stuefer, Martin [Univ. of Alaska, Fairbanks, AK (United States); Bailey, J. [Univ. of Alaska, Fairbanks, AK (United States)

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36º. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.
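
    A sketch of how fall speed follows from the two stacked trigger events described above; the emitter-plane separation used here is an assumed example value, not the instrument's actual geometry.

        # Fall speed from two vertically separated IR trigger events, as in the MASC.
        # The emitter separation below is an assumed example value, not the real spec.

        def fall_speed_m_per_s(t_upper_s, t_lower_s, separation_m=0.032):
            """Speed of a hydrometeor crossing the upper then the lower emitter plane."""
            dt = t_lower_s - t_upper_s
            if dt <= 0:
                raise ValueError("lower trigger must come after the upper trigger")
            return separation_m / dt

        # Example: 32 mm assumed separation crossed in 16 ms -> 2.0 m/s fall speed.
        print(fall_speed_m_per_s(0.000, 0.016))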

  5. 4-mm-diameter three-dimensional imaging endoscope with steerable camera for minimally invasive surgery (3-D-MARVEL).

    Science.gov (United States)

    Bae, Sam Y; Korniski, Ronald J; Shearn, Michael; Manohara, Harish M; Shahinian, Hrayr

    2017-01-01

    High-resolution three-dimensional (3-D) imaging (stereo imaging) by endoscopes in minimally invasive surgery, especially in space-constrained applications such as brain surgery, is one of the most desired capabilities. Such capability exists at larger than 4-mm overall diameters. We report the development of a stereo imaging endoscope of 4-mm maximum diameter, called Multiangle, Rear-Viewing Endoscopic Tool (MARVEL) that uses a single-lens system with complementary multibandpass filter (CMBF) technology to achieve 3-D imaging. In addition, the system is endowed with the capability to pan from side-to-side over an angle of [Formula: see text], which is another unique aspect of MARVEL for such a class of endoscopes. The design and construction of a single-lens, CMBF aperture camera with integrated illumination to generate 3-D images, and the actuation mechanism built into it is summarized.

  6. Curiosity’s robotic arm-mounted Mars Hand Lens Imager (MAHLI): Characterization and calibration status

    Science.gov (United States)

    Edgett, Kenneth S.; Caplinger, Michael A.; Maki, Justin N.; Ravine, Michael A.; Ghaemi, F. Tony; McNair, Sean; Herkenhoff, Kenneth E.; Duston, Brian M.; Wilson, Reg G.; Yingst, R. Aileen; Kennedy, Megan R.; Minitti, Michelle E.; Sengstacken, Aaron J.; Supulver, Kimberley D.; Lipkaman, Leslie J.; Krezoski, Gillian M.; McBride, Marie J.; Jones, Tessa L.; Nixon, Brian E.; Van Beek, Jason K.; Krysak, Daniel J.; Kirk, Randolph L.

    2015-01-01

    MAHLI (Mars Hand Lens Imager) is a 2-megapixel, Bayer pattern color CCD camera with a macro lens mounted on a rotatable turret at the end of the 2-meter-long robotic arm aboard the Mars Science Laboratory rover, Curiosity. The camera includes white and longwave ultraviolet LEDs to illuminate targets at night. Onboard data processing services include focus stack merging and data compression. Here we report on the results and status of MAHLI characterization and calibration, covering the pre-launch period from August 2008 through the early months of the extended surface mission through February 2015. Since landing in Gale crater in August 2012, MAHLI has been used for a wide range of science and engineering applications, including distinction among a variety of mafic, siliciclastic sedimentary rocks; investigation of grain-scale rock, regolith, and eolian sediment textures and structures; imaging of the landscape; inspection and monitoring of rover and science instrument hardware concerns; and supporting geologic sample selection, extraction, analysis, delivery, and documentation. The camera has a dust cover and focus mechanism actuated by a single stepper motor. The transparent cover was coated with a thin film of dust during landing, thus MAHLI is usually operated with the cover open. The camera focuses over a range from a working distance of 2.04 cm to infinity; the highest resolution images are at 13.9 µm per pixel; images acquired from 6.9 cm show features at the same scale as the Mars Exploration Rover Microscopic Imagers at 31 µm/pixel; and 100 µm/pixel is achieved at a working distance of ~26.5 cm. The very highest resolution images returned from Mars permit distinction of high contrast silt grains in the 30–40 µm size range. MAHLI has performed well; the images need no calibration in order to achieve most of the investigation’s science and engineering goals. The positioning and repeatability of robotic arm placement of the MAHLI camera head have

  7. Method for producing an isoplanatic aspheric monofocal intraocular lens, and resulting lens

    OpenAIRE

    Barbero, Sergio; Marcos, Susana; Dorronsoro, Carlos; Montejo, Javier; Salazar Salegui, Pedro

    2010-01-01

    [EN] The invention can be used to obtain isoplanatic aspheric monofocal intraocular lenses in a viewing range of up to 25° (preferably up to 10°). The method comprises the following steps: 1. mathematical definition of an aphakic eye model; 2. mathematical definition of an intraocular lens model; 3. mathematical definition of the implantation of the lens; 4. mathematical definition of the merit function; 5. definition of the contour conditions; 6. definition of a measurement for charact...

  8. Properties of the cathode lens combined with a focusing magnetic/immersion-magnetic lens

    Czech Academy of Sciences Publication Activity Database

    Konvalina, Ivo; Müllerová, Ilona

    2011-01-01

    Vol. 645, No. 1 (2011), pp. 55-59 ISSN 0168-9002 R&D Projects: GA ČR GAP102/10/1410; GA AV ČR IAA100650902; GA MŠk ED0017/01/01 Institutional research plan: CEZ:AV0Z20650511 Keywords: cathode lens * compound objective lens * aberration coefficients * spot size * field calculations Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering Impact factor: 1.207, year: 2011

  9. Preliminary Investigation of an Active PLZT Lens

    Science.gov (United States)

    Lightsey, W. D.; Peters, B. R.; Reardon, P. J.; Wong, J. K.

    2001-01-01

    The design, analysis and preliminary testing of a prototype Adjustable Focus Optical Correction Lens (AFOCL) is described. The AFOCL is an active optical component composed of solid state lead lanthanum-modified zirconate titanate (PLZT) ferroelectric ceramic with patterned indium tin oxide (ITO) transparent surface electrodes that modulate the refractive index of the PLZT to function as an electro-optic lens. The AFOCL was developed to perform optical re-alignment and wavefront correction to enhance the performance of Ultra-Lightweight Structures and Space Observatories (ULSSO). The AFOCL has potential application as an active optical component within a larger optical system. As such, information from a wavefront sensor would be processed to provide input to the AFOCL to drive the sensed wavefront to the desired shape and location. While offering variable and rapid focussing capability (controlled wavefront manipulation) similar to liquid crystal based spatial light modulators (SLM), the AFOCL offers some potential advantages because it is a solid-state, stationary, low-mass, rugged, and thin optical element that can produce wavefront quality comparable to the solid refractive lens it replaces. The AFOCL acts as a positive or negative lens by producing a parabolic phase-shift in the PLZT material through the application of a controlled voltage potential across the ITO electrodes. To demonstrate the technology, a 4 mm diameter lens was fabricated to produce 5-waves of optical power operating at 2.051 micrometer wavelength. Optical metrology was performed on the device to measure focal length, optical quality, and efficiency for a variety of test configurations. The data was analyzed and compared to theoretical data available from computer-based models of the AFOCL.
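
    A rough cross-check of the quoted prototype numbers, using the paraxial relation that N waves of optical power over an aperture of radius r correspond to an edge optical path difference N·λ = r²/(2f); the implied focal length is our own inference, not a figure from the paper.

        # Paraxial estimate: N waves of optical power across a circular aperture of
        # radius r corresponds to an edge OPD of N * wavelength = r**2 / (2 * f).
        # Aperture, wavelength and wave count come from the abstract; the focal
        # length is our inference.

        wavelength_m = 2.051e-6     # operating wavelength
        n_waves = 5.0               # waves of optical power at the aperture edge
        radius_m = 2.0e-3           # 4 mm diameter aperture

        focal_length_m = radius_m**2 / (2.0 * n_waves * wavelength_m)
        print(f"implied focal length ~{focal_length_m:.2f} m")   # ~0.20 m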

  10. Calibration Procedures in Mid Format Camera Setups

    Science.gov (United States)

    Pivnicka, F.; Kemper, G.; Geissler, S.

    2012-07-01

    A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow as well, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform and the specific values of the mid-format camera make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Using aerial images over a well designed test field with 3D structures and/or different flight altitudes enables the determination of calibration values in Bingo software; it will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Besides the mechanical work, especially in mounting the camera beside the IMU, two lever arms have to be measured with mm accuracy. Important are the lever arm from the GPS antenna to the IMU's calibrated centre and the lever arm from the IMU centre to the camera projection centre. In fact, the measurement with a total station is not a difficult task, but the definition of the right centres and the need for using rotation matrices can cause serious accuracy problems. The benefit of small and medium format cameras is that smaller aircraft can also be used. For this reason, a gyro-based stabilized platform is recommended, which means that the IMU must be mounted beside the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problematic point is that the IMU-to-GPS-antenna lever arm is floating. In fact, we have to deal with an additional data stream, the movement values of the stabilizer, to correct the floating lever-arm distances. If the post-processing of the GPS/IMU data, taking the floating lever arms into account, delivers the expected result, the lever arms between IMU and camera can be applied
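
    A minimal sketch of the lever-arm bookkeeping described above, with illustrative angles, offsets and a generic ZYX rotation convention (not the Bingo or IMU vendor conventions): the body-frame lever arm from the IMU centre to the GPS antenna, or to the camera projection centre, is rotated by the current attitude matrix and added to the IMU position.

        # Sketch: apply a body-frame lever arm with the current attitude to get the
        # antenna (or camera) position in the navigation frame.  Angles, lever arm
        # and frame conventions are illustrative assumptions only.
        import numpy as np

        def rotation_matrix(roll, pitch, yaw):
            """Body-to-navigation rotation from roll/pitch/yaw in radians (ZYX order)."""
            cr, sr = np.cos(roll), np.sin(roll)
            cp, sp = np.cos(pitch), np.sin(pitch)
            cy, sy = np.cos(yaw), np.sin(yaw)
            Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
            Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
            Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
            return Rz @ Ry @ Rx

        # IMU position in a local navigation frame (m) and body-frame lever arm (m).
        imu_pos = np.array([1000.0, 2000.0, 500.0])
        lever_arm_body = np.array([0.15, -0.02, 0.85])   # IMU centre -> GPS antenna

        R = rotation_matrix(np.radians(2.0), np.radians(-1.5), np.radians(45.0))
        antenna_pos = imu_pos + R @ lever_arm_body
        print(antenna_pos)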

  12. Gram negative bacteria and contact lens induced acute red eye

    Directory of Open Access Journals (Sweden)

    Sankaridurg Padmaja

    1996-01-01

    Full Text Available Two patients using hydrogel contact lenses on a daily wear schedule slept overnight with the lenses and woke up with a Contact Lens Induced Acute Red Eye (CLARE). The contact lenses, recovered aseptically at the time of the event, grew significant colonies of Pseudomonas aeruginosa and Aeromonas hydrophila in patient A and Pseudomonas aeruginosa and Serratia liquefaciens in patient B. Similar organisms were recovered from the lens case and lens care solutions of patient B. In both patients the condition resolved on discontinuation of lens wear. Patient compliance as a requirement for successful contact lens wear is highlighted by these cases.

  13. Soft x-ray streak cameras

    International Nuclear Information System (INIS)

    Stradling, G.L.

    1988-01-01

    This paper is a discussion of the development and of the current state of the art in picosecond soft x-ray streak camera technology. Accomplishments from a number of institutions are discussed. X-ray streak cameras vary from standard visible streak camera designs in the use of an x-ray transmitting window and an x-ray sensitive photocathode. The spectral sensitivity range of these instruments includes portions of the near UV and extends from the subkilovolt x-ray region to several tens of kilovolts. Attendant challenges encountered in the design and use of x-ray streak cameras include the accommodation of high-voltage and vacuum requirements, as well as manipulation of a photocathode structure which is often fragile. The x-ray transmitting window is generally too fragile to withstand atmospheric pressure, necessitating active vacuum pumping and a vacuum line of sight to the x-ray signal source. Because of the difficulty of manipulating x-ray beams with conventional optics, as is done with visible light, the size of the photocathode sensing area, access to the front of the tube, the ability to insert the streak tube into a vacuum chamber and the capability to trigger the sweep with very short internal delay times are issues uniquely relevant to x-ray streak camera use. The physics of electron imaging may place more stringent limitations on the temporal and spatial resolution obtainable with x-ray photocathodes than with the visible counterpart. Other issues which are common to the entire streak camera community also concern the x-ray streak camera users and manufacturers

  14. The influence of end of day silicone hydrogel daily disposable contact lens fit on ocular comfort, physiology and lens wettability.

    Science.gov (United States)

    Wolffsohn, James; Hall, Lee; Mroczkowska, Stephanie; Hunt, Olivia A; Bilkhu, Paramdeep; Drew, Tom; Sheppard, Amy

    2015-10-01

    To quantify the end-of-day silicone-hydrogel daily disposable contact lens fit and its influence on ocular comfort, physiology and lens wettability. Thirty-nine subjects (22.1±3.5 years) were randomised to wear each of 3 silicone-hydrogel daily-disposable contact lenses (narafilcon A, delefilcon A and filcon II 3), bilaterally, for one week. Lens fit was assessed objectively using a digital video slit-lamp at 8, 12 and 16h after lens insertion. Hyperaemia, non-invasive tear break-up time, tear meniscus height and comfort were also evaluated at these timepoints, while corneal and conjunctival staining were assessed on lens removal. Lens fit assessments were not different between brands (P>0.05), with the exception of the movement at blink where narafilcon A was more mobile. Overall, lag reduced but push-up speed increased from 8 to 12h (P<0.05). Movement-on-blink was unaffected by wear-time (F=0.403, P=0.670). A more mobile lens fit with one brand did not indicate that a person would have a more mobile fit with another brand (r=-0.06 to 0.63). Lens fit was not correlated with comfort, ocular physiology or lens wettability (P>0.01). Among the lenses tested, objective lens fit changed between 8h and 12h of lens wear. The weak correlation in individual lens fit between brands indicates that fit is dependent on more than ocular shape. Consequently, substitution of a different lens brand with similar parameters will not necessarily provide comparable lens fit. Copyright © 2015 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  15. Acid phosphatase and lipid peroxidation in human cataractous lens epithelium

    Directory of Open Access Journals (Sweden)

    Vasavada Abhay

    1993-01-01

    Full Text Available The anterior lens epithelial cells undergo a variety of degenerative and proliferative changes during cataract formation. Acid phosphatase is primarily responsible for tissue regeneration and tissue repair. The lipid hydroperoxides that are obtained by lipid peroxidation of polyunsaturated or unsaturated fatty acids bring about deterioration of biological membranes at the cellular and tissue levels. Acid phosphatase and lipid peroxidation activities were studied in the lens epithelial cells of nuclear cataract, posterior subcapsular cataract, mature cataract, and mixed cataract. Of these, mature cataractous lens epithelium showed the maximum activity of acid phosphatase (516.83 moles of p-nitrophenol released/g lens epithelium) and the maximum levels of lipid peroxidation (86.29 O.D./min/g lens epithelium). In contrast, mixed cataractous lens epithelium showed the minimum activity of acid phosphatase (222.61 moles of p-nitrophenol released/g lens epithelium) and the minimum levels of lipid peroxidation (54.23 O.D./min/g lens epithelium). From our study, we correlated the maximum activity of acid phosphatase in mature cataractous lens epithelium with the increased areas of superimposed cells associated with the formation of mature cataract. Likewise, the maximum levels of lipid peroxidation in mature cataractous lens epithelium were correlated with increased permeability of the plasma membrane. Conversely, the minimum levels of lipid peroxidation in mixed cataractous lens epithelium make us presume that factors other than lipid peroxidation may also account for the formation of the mixed type of cataract.

  16. The partial coherence modulation transfer function in testing lithography lens

    Science.gov (United States)

    Huang, Jiun-Woei

    2018-03-01

    Because lithography demands high performance in projecting the semiconductor mask onto the wafer, the lens has to be almost free of spherical and coma aberration. In situ optical testing for diagnosing lens performance therefore has to be established to verify the performance and to suggest further improvements before the lens is built and integrated with the light source. The modulation transfer function at the critical dimension (CD) is the main performance parameter for evaluating the smallest line width a semiconductor platform can fabricate when producing integrated circuits. Although the modulation transfer function (MTF) is widely used to evaluate optical systems, in lithography the contrast of each line pair is analyzed in one or two dimensions, and when the lens stands alone on the test bench, integrated with a coherent or nearly coherent light source, for small dimensions near the optical diffraction limit the MTF is contributed not only by the lens but also by the illumination of the platform. In this study, the partial coherence modulation transfer function (PCMTF) for testing a lithography lens is proposed, obtained by measuring the MTF of the in situ lithography lens at high spatial frequencies, blended with partially coherent and incoherent illumination. PCMTF can serve as one measurement for evaluating the imperfections of a lithography lens and guiding further improvement of lens performance.
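
    For reference, the modulation underlying any MTF or PCMTF measurement is the line-pair contrast; the sketch below shows the generic definition only, not the PCMTF procedure proposed in the paper.

        # Modulation (contrast) of a measured line-pair intensity profile:
        #     M = (I_max - I_min) / (I_max + I_min)
        # The MTF at a spatial frequency is the image modulation divided by the object
        # modulation at that frequency; with partially coherent illumination the result
        # also depends on the source, which is the point of the PCMTF above.
        import numpy as np

        def modulation(profile):
            i_max, i_min = np.max(profile), np.min(profile)
            return (i_max - i_min) / (i_max + i_min)

        def mtf(image_profile, object_profile):
            return modulation(image_profile) / modulation(object_profile)

        # Example with a synthetic sinusoidal line-pair profile of reduced contrast.
        x = np.linspace(0, 4 * np.pi, 200)
        obj = 1.0 + 1.0 * np.sin(x)          # full-contrast object pattern
        img = 1.0 + 0.35 * np.sin(x)         # imaged pattern with reduced contrast
        print(mtf(img, obj))                 # ~0.35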

  17. Oxygen transport through soft contact lens and cornea: Lens characterization and metabolic modeling

    Science.gov (United States)

    Chhabra, Mahendra

    The human cornea requires oxygen to sustain metabolic processes critical for its normal functioning. Any restriction to corneal oxygen supply from the external environment (e.g., by wearing a low oxygen-permeability contact lens) can lead to hypoxia, which may cause corneal edema (swelling), limbal hyperemia, neovascularization, and corneal acidosis. The need for adequate oxygen to the cornea is a major driving force for research and development of hypertransmissible soft contact lenses (SCLs). Currently, there is no standard technique for measuring oxygen permeability (Dk) of hypertransmissible silicone-hydrogel SCLs. In this work, an electrochemistry-based polarographic apparatus was designed, built, and operated to measure oxygen permeability in hypertransmissible SCLs. Unlike conventional methods where a range of lens thickness is needed for determining oxygen permeabilities of SCLs, this apparatus requires only a single lens thickness. The single-lens permeameter provides a reliable, efficient, and economic tool for measuring oxygen permeabilities of commercial hypertransmissible SCLs. The single-lens permeameter measures not only the product Dk, but, following modification, it measures separately diffusivity, D, and solubility, k, of oxygen in hypertransmissible SCLs. These properties are critical for designing better lens materials that ensure sufficient oxygen supply to the cornea. Metabolism of oxygen in the cornea is influenced by contact-lens-induced hypoxia, diseases such as diabetes, surgery, and drug treatment. Thus, estimation of the in-vivo corneal oxygen consumption rate is essential for gauging adequate oxygen supply to the cornea. Therefore, we have developed an unsteady-state reactive-diffusion model for the cornea-contact-lens system to determine the in-vivo human corneal oxygen-consumption rate. Finally, a metabolic model was developed to determine the relation between contact-lens oxygen transmissibility (Dk/L) and corneal oxygen deficiency. A
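
    A heavily simplified sketch of the reactive-diffusion balance such metabolic models rest on: at steady state, one-dimensional Fickian transport through the lens-cornea stack balances consumption, Dk·d²p/dx² = Q. All parameter values below are illustrative placeholders, not the thesis values or geometry.

        # Toy steady-state oxygen profile across a contact-lens + cornea slab:
        #     Dk * d2p/dx2 = Q   (zero-order consumption in the cornea only).
        # All parameter values are illustrative placeholders, not the thesis values,
        # and the toy ignores flux matching at the material interface.
        import numpy as np

        n = 101
        x = np.linspace(0.0, 1.0, n)       # normalised depth: 0 = lens front, 1 = aqueous side
        dx = x[1] - x[0]
        dk = np.where(x < 0.3, 0.5, 1.0)   # lower transmissibility in the lens region (arbitrary)
        q = np.where(x < 0.3, 0.0, 200.0)  # consumption only in the corneal region (arbitrary)

        # Discretise with Dirichlet boundary tensions and solve the linear system.
        A = np.zeros((n, n))
        b = np.zeros(n)
        A[0, 0] = A[-1, -1] = 1.0
        b[0], b[-1] = 155.0, 24.0          # anterior (open-eye) and aqueous-side tensions
        for i in range(1, n - 1):
            A[i, i - 1] = A[i, i + 1] = dk[i] / dx**2
            A[i, i] = -2.0 * dk[i] / dx**2
            b[i] = q[i]

        p = np.linalg.solve(A, b)
        print(p[n // 3])                   # oxygen tension just inside the cornea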

  18. Characterization of SWIR cameras by MRC measurements

    Science.gov (United States)

    Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.

    2014-05-01

    Cameras for the SWIR wavelength range are becoming more and more important because of the better observation range for day-light operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera or to qualify a camera for a given application, characterization of the camera by means of the Minimum Resolvable Contrast MRC concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera device the achievable observation range can be calculated for every combination of target size, illumination level or weather conditions. MRC measurements in the SWIR wavelength band can be performed widely along the guidelines of the MRC measurements of a visual camera. Typically measurements are performed with a set of resolution targets (e.g. USAF 1951 target) manufactured with different contrast values from 50% down to less than 1%. For a given illumination level the achievable spatial resolution is then measured for each target. The resulting curve shows the minimum contrast that is necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, the irradiation parameters first have to be given in radiometric instead of photometric units, which are limited in their use to the visible range. In order to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source with appropriate emission in the SWIR range (e.g. an incandescent lamp) is necessary, and the irradiance has to be measured in W/m2 instead of Lux = Lumen/m2. Third, the contrast values of the targets have to be recalibrated for the SWIR range because they typically differ from the values determined for the visual range. Measured MRC values of three cameras are compared to the specified performance data of the devices and the results of a multi-band in-house designed Vis-SWIR camera

  19. Using a slit lamp-mounted digital high-speed camera for dynamic observation of phakic lenses during eye movements: a pilot study

    Directory of Open Access Journals (Sweden)

    Leitritz MA

    2014-07-01

    Full Text Available Martin Alexander Leitritz, Focke Ziemssen, Karl Ulrich Bartz-Schmidt, Bogomil Voykov Centre for Ophthalmology, University Eye Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany Purpose: To evaluate a digital high-speed camera combined with digital morphometry software for dynamic measurements of phakic intraocular lens movements to observe kinetic influences, particularly in fast direction changes and at lateral end points. Materials and methods: A high-speed camera taking 300 frames per second observed movements of eight iris-claw intraocular lenses and two angle-supported intraocular lenses. Standardized saccades were performed by the patients to trigger mass inertia with lens position changes. Freeze images with maximum deviation were used for digital software-based morphometry analysis with ImageJ. Results: Two eyes from each of five patients (median age 32 years, range 28–45 years) without findings other than refractive errors were included. The high-speed images showed sufficient usability for further morphometric processing. In the primary eye position, the median decentrations downward and in a lateral direction were -0.32 mm (range -0.69 to 0.024) and 0.175 mm (range -0.37 to 0.45), respectively. Despite the small sample size of asymptomatic patients, we found a considerable amount of lens dislocation. The median distance amplitude during eye movements was 0.158 mm (range 0.02–0.84). There was a slight positive correlation (r=0.39, P<0.001) between the grade of deviation in the primary position and the distance increase triggered by movements. Conclusion: With the use of a slit lamp-mounted high-speed camera system and morphometry software, observation and objective measurements of iris-claw intraocular lens and angle-supported intraocular lens movements seem to be possible. Slight decentration in the primary position might be an indicator of increased lens mobility during kinetic stress during eye movements.

  20. Advanced system for Gamma Cameras modernization

    International Nuclear Information System (INIS)

    Osorio Deliz, J. F.; Diaz Garcia, A.; Arista Romeu, E. J.

    2015-01-01

    Analog and digital gamma cameras are still largely used in developing countries. Many of them rely on old hardware electronics, which in many cases limits their use in actual nuclear medicine diagnostic studies. Consequently, there are several companies worldwide that produce medical equipment for partial or total gamma camera modernization. The present work demonstrates the possibility of substituting almost the entire signal-processing electronics inside a gamma camera detector head with a digitizer PCI card. This card includes four 12-bit analog-to-digital converters running at 50 MHz. It has been installed in a PC and controlled through software developed in LabVIEW. In addition, some changes were made to the hardware inside the detector head, including a redesign of the Orientation Display Block (ODA card). A new electronic design was also added to the Microprocessor Control Block (MPA card), comprising a PIC microcontroller acting as a tuning system for the individual photomultiplier tubes. Images obtained by measuring a 99mTc point radioactive source with the modernized camera head demonstrate its overall performance. The system was developed and tested on an old ORBITER II SIEMENS GAMMASONIC gamma camera at the National Institute of Oncology and Radiobiology (INOR) under the CAMELUD project, supported by the National Program PNOULU and the IAEA. (Author)